> If they already have this capability, then why is this CSAM even needed? You don’t seem to be following your own logic to its conclusion.
“CSAM” is, quite literally, “Child Sexual Abuse Material”.
Nobody needs it. Please keep the language straight - CSAM is not code, it is not an algorithm, it is not anything that Apple or Google has written - it is photos or movies of kids being abused and/or raped. Take care with how you toss the term around.
There is an organization called the National Center for Missing and Exploited Children (NCMEC), which is the only entity in the US legally allowed to possess such images. They hold them for the purpose of compiling a database of hashes of these images, which they then make available. The hash makes it easy to test whether a given random image is in the database (you hash it the same way and look up the resulting number to see if it’s there), but impossible to go the other way - there’s simply no way to reconstruct an image from the hash value.
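To make that asymmetry concrete, here’s a minimal sketch of the lookup in Swift, using an ordinary SHA-256 hash and a made-up hash set purely for illustration (the real databases use specialized image-matching hashes, not plain SHA-256, and none of the names below come from any actual system):

    import CryptoKit
    import Foundation

    // Hypothetical set of known-bad hashes (hex strings). Empty here; in
    // reality the database is compiled and distributed by NCMEC.
    let knownHashes: Set<String> = []

    // Going from image bytes to a hash is trivial...
    func hashHex(of imageData: Data) -> String {
        SHA256.hash(data: imageData)
            .map { String(format: "%02x", $0) }
            .joined()
    }

    // ...and the only question the database can answer is "is this exact
    // hash in the set?" There is no inverse operation that turns a hash
    // back into an image.
    func isKnownImage(_ imageData: Data) -> Bool {
        knownHashes.contains(hashHex(of: imageData))
    }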
The code that Apple wrote takes each of your images immediately before it is uploaded to iCloud, hashes it, and attempts to look the hash up in the database. When it fails to match, the image is uploaded as normal. Because Apple can tell the image is safe before uploading, it can encrypt the image, with a key only you possess, before uploading it. So all your images on iCloud are protected from prying eyes, whether they be some government agency, a rogue Apple employee, a hacker, or whatever - even if they can get the file, it’s meaningless gibberish to them without the key required to decrypt it.
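Roughly, that on-device flow looks like this (reusing isKnownImage from the sketch above; userKey and upload are stand-ins for whatever the real key management and iCloud upload calls are, so treat this as an assumption-laden illustration, not Apple’s actual code):

    import CryptoKit
    import Foundation

    enum UploadError: Error { case matchedKnownImage }

    // Hash -> check -> encrypt -> upload, all happening on the device.
    func processForUpload(_ imageData: Data,
                          userKey: SymmetricKey,
                          upload: (Data) throws -> Void) throws {
        // 1. Look the image's hash up in the on-device database.
        guard !isKnownImage(imageData) else {
            // (Apple's published design attaches a "safety voucher" on a
            // match rather than refusing outright; stopping here just keeps
            // the sketch simple.)
            throw UploadError.matchedKnownImage
        }

        // 2. The image didn't match, so encrypt it with a key only you possess...
        let sealed = try AES.GCM.seal(imageData, using: userKey)

        // 3. ...and upload only the ciphertext. The server never sees plaintext.
        // (combined is non-nil for the default 12-byte nonce.)
        try upload(sealed.combined!)
    }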
The other approach is to upload all your photos unencrypted, and then scan them all on the server for CSAM (this is no doubt already happening in Apple’s case, and is definitely happening on Google’s servers, as demonstrated by the article cited earlier today). Then they’re sitting on the server without any encryption, where that government agency, or rogue employee, or hacker, can do whatever they want with them if they can get into the server.
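For contrast, the server-side model looks roughly like this (every name here is invented; the point is only that the server holds plaintext and can run whatever scanner it’s handed over all of it):

    import Foundation

    // A photo as stored under the server-side-scanning model: plaintext,
    // readable by anyone who can reach the storage.
    struct StoredPhoto {
        let owner: String
        let plaintext: Data
    }

    // Any classifier at all can be dropped in here - a hash lookup today,
    // an arbitrary "find me this face" model tomorrow.
    func scanLibrary(_ photos: [StoredPhoto],
                     scanner: (Data) -> Bool) -> [StoredPhoto] {
        photos.filter { scanner($0.plaintext) }
    }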
Apple literally built a system that is more private for their users, and people screamed bloody murder because the scanning happens on their phone immediately before uploading, rather than on the server immediately after uploading, like everyone else is already doing.
The comment you were replying to was making the point that people getting worried about “what if they put an image of $POLITICIAN into the CSAM hash database!!1!” are entirely missing the point that the government already has a much easier mechanism to abuse - coerce $COMPANY into letting them run any scanner of said government’s choosing (including an AI one that looks for any image vaguely resembling $POLITICIAN, or, say, any image of someone wearing a red hat)… across all those unencrypted images currently sitting on their server.
The database-of-hashes approach will only match specific, known images - not “types” of images, or items within an image - which makes it much harder to misuse for nefarious purposes than what we’ve already got running right now.
You can’t judge these things in a vacuum, you have to look at them in comparison to the alternatives. And a lot of people have been trained to not look further than the sound bite they’re currently being fed.
(Edit: fixed a single-character typo.)