Well… it's not a particularly new insight that Facebook isn't the authority to ask when it comes to privacy.
Doesn't make Apple's move any better, though.
Are you kidding me right now?!? Are you serious? Really? Are you serious?

I applaud Apple's effort to curb child pornography. If you are this worried about on-device search, what are you hiding?
Here is the link to the YouTube video so people don't have to hunt for it.
Even considering the development of such a system is the single worst decision made since the founding of Apple. The surveillance, privacy-abuse, and security implications of this system are astounding. I hope every imaginable avenue of communication to Apple is being flooded with requests to scrap this big-brother system and instead focus all developer time on features that actually improve privacy and security and prevent surveillance of any kind, in any country.
For what purpose are they accessing and scanning iCloud photos server-side? iCloud photos are stored encrypted.

They have every right to access and scan photos on their cloud, which they have already been doing. Your first sentence doesn't jibe with what Apple has already been doing.
"Stop giving Apple your money" is only part of the solution. One is not helping society by depriving Apple of funds unless one goes a step further and continues to object to its behaviour after becoming a non-customer. As we have learned over many years, silence is not a solution to reprehensible actions by bad characters.

If you really want to send a message to Apple, one that will be heard loud and clear by those who count, the solution is easy: VOTE WITH YOUR WALLET.
Will you do it?
How about anyone else here - feel free to respond with a YES or NO.
I was thinking about Google and Microsoft when I said, in error, that iCloud scanning is already active. That is not the case. I apologize for the error and have amended my previous post.

For what purpose are they accessing and scanning iCloud photos server-side? iCloud photos are stored encrypted.
While Apple could decrypt them, as they have the key, I'm not aware of any ongoing activities to do so. Do you have sources?
Also, if that were actually the case, then what’s this outcry over CSAM pics for??
That being said, I still stand by my point: if all Apple is concerned about is proactively keeping said filth off their servers, that can be done server-side without the need for on-device scanning. I think they are doing it on-device for the reasons I posited earlier. It now appears privacy means something different to Apple depending on the subject.
But client-side verification trumps server-side scanning as the data never leaves your device.
As always, Apple makes it more difficult for itself (as with face scanning) for privacy reasons. It would be much easier to do it all server-side. Remember when face scanning was first introduced? It happened on-device and there was no sync to other devices. Very annoying. That's because they weren't willing to sync it until it could be done in a privacy-friendly way.
Same here.
Yes, but there are a few ways I can think of to help avoid some of that.

Android does similar things.
Every URL used by an app (including browsers) is analysed by the system, and if it's found to be potentially harmful, it is sent to Google.
Most Android phones also have a system service which will scan part of the file system and delete files found to be malicious.
But Google does most of their scanning on their servers. Almost every Android user outside China uses Google Photos and gets their images scanned. Google also reports through the same system.
Not really. Because if you use iCloud, the data is transferred to Apple anyway, so there's no need to scan on-device, which is - irrespective of how they do it - snooping.

But client-side verification trumps server-side scanning as the data never leaves your device.
If people can get by without storing anything that is highly sensitive or mission-critical in the cloud, then it proves that cloud systems are not needed, and if people get snared by mindless algorithms, they have no one to blame but themselves.

I personally place a high priority on privacy and security. Apple's move doesn't bother me, though, because one of my baseline rules is to never store anything that is highly sensitive or mission-critical in the cloud.
But this just does not line up with the claim that it can track "crops", "pixel change", "color adjustments", "rotations" and "transformations". The way you described it, it would be a true 1:1 hash match. But there is some leeway, so yes, a truly similar photo can result in a similar hash that might get flagged.

The NeuralHash and Microsoft PhotoDNA algorithms were designed to do the opposite of what you are describing. NeuralHash is optimised to not catch pictures with a similar "feel".
Both of these systems are optimised only to catch changes to a specific photo: cropping, colour changes, hue and contrast adjustments, mirroring.
Here's an example:
Picture 1: Someone takes a picture of you where you live. Let's say you're standing in front of a window.
You walk away for a few minutes.
Picture 2: Someone takes a picture of you standing in front of the same window, and you have approximately the same pose as in the last picture.
Somehow picture 1 becomes part of the CSAM database. If NeuralHash is any good, it should flag only picture 1 and not picture 2.
That's why it's so difficult to misuse. You can't just provide similar pictures of a protest and hope for the system to catch people with "pictures of protests".
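To make that concrete, here's a toy sketch of the general perceptual-hashing idea. This is a simple "average hash", NOT Apple's actual NeuralHash or Microsoft's PhotoDNA, and the filenames are made up; the point is only that edits to the same photo barely move the hash, while a different photo lands far away.

```python
# Toy perceptual hash ("average hash") -- an illustration of the general
# technique only, not Apple's NeuralHash or Microsoft's PhotoDNA.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    # Shrink to a size x size grid and drop colour, so re-encoding,
    # colour shifts and small crops barely change the result.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        # One bit per cell: is this cell brighter than the average?
        bits = (bits << 1) | (p > avg)
    return bits

def hamming(h1: int, h2: int) -> int:
    # Number of differing bits; a small distance means "same image".
    return bin(h1 ^ h2).count("1")

# Hypothetical files, for illustration only:
# hamming(average_hash("pic1.jpg"), average_hash("pic1_recropped.jpg"))
#   -> typically a handful of bits (still a match)
# hamming(average_hash("pic1.jpg"), average_hash("pic2_same_window.jpg"))
#   -> typically dozens of bits, so picture 2 is not flagged
```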
Still, you will get collisions, and that's why Apple requires a threshold number of matches before they are notified. With the threshold they have chosen, they estimate the odds of falsely flagging an account at 1 in 1 trillion per year.
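Back-of-the-envelope, you can see how a threshold crushes the per-account false-positive rate. The numbers below are assumptions for illustration (a made-up per-image collision rate and library size; the threshold of 30 is what Apple reportedly chose), but the arithmetic is the point:

```python
# Rough arithmetic on why a match threshold makes accidental flags rare.
# All inputs here are assumptions for illustration, not Apple's numbers.
import math

def log_binom_pmf(n: int, k: int, p: float) -> float:
    # log of C(n,k) * p^k * (1-p)^(n-k), kept in log space so the tiny
    # probabilities don't underflow to zero.
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log1p(-p))

def tail_prob(n: int, t: int, p: float) -> float:
    # P(at least t of n independent photos collide); terms past t + 200
    # are negligible for inputs like these.
    logs = [log_binom_pmf(n, k, p) for k in range(t, min(n, t + 200) + 1)]
    m = max(logs)
    return math.exp(m) * sum(math.exp(l - m) for l in logs)

p = 1e-6      # assumed chance a random photo collides with the database
n = 10_000    # assumed photo-library size
print(tail_prob(n, 1, p))    # flag on one collision: ~1e-2, far too many
print(tail_prob(n, 30, p))   # flag on thirty collisions: astronomically small
```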
For obvious reasons this can't be opt-in.

A fair point. But the sole purpose of this is to incriminate someone. Obviously, if they're guilty, they deserve it. But don't use my device to do it. Or at least give me a choice in the matter.
He also read John Gruber's take on this in one of his latest videos and said it's a reasonable take.

I've watched Louis Rossmann's videos for quite a while, just to see what it's like to repair an Apple product from a technical point of view. I never really took anything he said against Apple personally, as everyone is entitled to their opinion, despite how "vocal" they are. In fact, I thought he went overboard with his views. However, the more I revisit his old videos and listen to his opinions on Apple's behavior over the years, the more I've come to realize I've made a big mistake.
But this just does not line up with the claim that it can track "crops", "pixel change", "color adjustments", "rotations" and "transformations". The way you described it, it would be a true 1:1 hash match. But there is some leeway, so yes, a truly similar photo can result in a similar hash that might get flagged.
Even with the best intentions, where does Tim Cook think this is leading?
Then Apple is clearly overstating how it can be so flexible by tracking "crops", "pixel change", "color adjustments", "rotations" and "transformations". If one image gets flagged and a second image does not, and I am only off by a pixel in my pose, then Apple is falsely stating that it can detect these manipulations to the photo.

You're underestimating the complexity and elegance of those systems.
We can’t give the “layman common sense” treatment to any subject.
Can't remember where I read it, but I think those systems divide the picture into a lot of small squares and extract some parameters from each little square, hence they can catch cropped/edited pics without mistaking them for an unrelated pic.
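That matches the general idea. As a rough sketch of what "a parameter per square" could look like (purely illustrative; not what Apple or Microsoft actually compute):

```python
# Purely illustrative "grid of squares" descriptor, not the real
# NeuralHash/PhotoDNA feature set.
from PIL import Image

def grid_features(path: str, cells: int = 16) -> list[float]:
    # Resizing to cells x cells averages each square of the picture,
    # leaving one brightness parameter per square.
    img = Image.open(path).convert("L").resize((cells, cells))
    return [px / 255.0 for px in img.getdata()]

def grid_distance(a: list[float], b: list[float]) -> float:
    # Mean absolute difference between the two grids; edited copies of
    # the same photo stay close, unrelated photos drift far apart.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
```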
They cite three independent security scholars they showed the system to in their press release.

I would like to know how Apple tested this system (and who was involved) and how they arrived at the statement of accuracy.
Then Apple is clearly overstating how it can be so flexible by tracking "crops", "pixel change", "color adjustments", "rotations" and "transformations". If one image gets flagged and a second image does not, and I am only off by a pixel in my pose, then Apple is falsely stating that it can detect these manipulations to the photo.
And again, I am just trying to understand here. How can this feature track "crops", "pixel change", "color adjustments", "rotations" and "transformations" but at the SAME TIME not flag a VERY VERY similar picture? Can you not see the contradiction here?
Agreed. There is a clear contradiction in Apple's claims, which say "any adjustments to that matched photo will still be a match", yet "oh, but an entirely separate photo that is nearly identical won't get flagged, so don't worry!"

I would like to know how Apple tested this system (and who was involved) and how they arrived at the statement of accuracy.
The contradiction is:

Again, this is all a limitation in your understanding of how this works. No contradiction. Read up on Microsoft PhotoDNA.