This. Matter of time until the (innocent) non-nude photos of people's children suddenly get blurred and red-flagged. It could even be a clothed child wearing salmon-colored (skin-toned) clothes, which could trick the CSAM algorithm into thinking the child was unclothed.
You are showing a fundamental lack of understanding of the system that was proposed.
There's no "CSAM algorithm" that's looking for nudity. The agency that's been authorized to deal with this, NCMEC - the National Center for Missing & Exploited Children, the only entity in the US legally allowed to have copies of CSAM images/video, for this specific purpose - publishes a list of hashes of known CSAM material, the really awful stuff. Then your phone - right before uploading your pictures to iCloud - computes hashes of your pictures and checks whether any of them match hashes in that CSAM database. (These are perceptual hashes, not cryptographic ones, so a known image still matches after resizing or recompression, while an unrelated photo doesn't.) Only if it gets more than a certain number of hits does it flag anything for further manual examination. It's never going to match your kid wearing a salmon-colored swimsuit or whatever - unless, in the picture, your kid is in the midst of being raped and that picture has already been circulating amongst pedophiles. It's not looking for "kinds" of things, or for bare skin; it's looking for very specific images that have already been found to be circulating in the pedophile "community". And (if I recall correctly), the scanning never happens if the pictures aren't uploaded to iCloud.
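For the curious, here's a deliberately simplified sketch of that flow in Python. Every name here is made up for illustration, and the stand-in cryptographic hash is NOT what Apple uses - the real system uses NeuralHash (a perceptual hash) wrapped in a private-set-intersection protocol, so the phone never even learns which images matched:

    import hashlib

    def image_hash(image_bytes):
        # Stand-in only: a cryptographic hash. Apple's NeuralHash is a
        # *perceptual* hash, so a known image still matches after being
        # resized or recompressed, while unrelated photos don't.
        return hashlib.sha256(image_bytes).hexdigest()

    def should_flag_for_review(photos, known_csam_hashes, threshold=30):
        # Only the count of matches matters; per Apple's published
        # threat-model document, nothing gets escalated to human review
        # until roughly 30 known images have matched.
        matches = sum(1 for p in photos if image_hash(p) in known_csam_hashes)
        return matches >= threshold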
Scanning on your phone right before uploading means your pictures (and everyone else's) can be stored on iCloud encrypted with a key that only you have. The alternative is that your pictures are uploaded unencrypted and scanned in the same way on the iCloud servers. The scanning is going to happen one way or the other; scanning before sending means your pictures on iCloud are safe from random hackers finding some way to download them.
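In code terms, the on-device ordering looks something like this (again a hypothetical sketch, reusing image_hash from above: Fernet is just a stand-in for whatever encryption Apple would actually use, icloud.put is made up, and the real design attaches an encrypted "safety voucher" per image rather than a plain flag):

    from cryptography.fernet import Fernet

    def upload_photo(photo_bytes, user_key, known_hashes, icloud):
        # 1. Scan locally, before anything leaves the device.
        matched = image_hash(photo_bytes) in known_hashes
        # 2. Encrypt with a key only the user holds, so the server
        #    (and anyone who breaks into it) sees only ciphertext.
        ciphertext = Fernet(user_key).encrypt(photo_bytes)
        # 3. Upload the ciphertext plus the match metadata.
        icloud.put(ciphertext, safety_voucher=matched)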
If the scanning is done on Apple's servers, that means there are petabytes of unencrypted ("plaintext") images sitting there that governments could pressure Apple to scan for other types of images, that rogue employees could be paid off to scan for <whatever some well-funded entity thought might be lucrative>, or that hackers could break into and download. If the scanning is done on your device before sending to iCloud, then the pictures can be sent already-encrypted from your phone, and all those other scenarios fall apart - they simply cannot happen.

Now, could the CSAM database be compromised to scan for other photos? Yes, theoretically. But it's a single point of failure to watch carefully, rather than the thousand points of failure we have now. And if you're worrying about that, you should be worrying more about spying code being inserted directly into all the other parts of the OS. You quickly reach a point where you just shouldn't be using a smartphone at all. You either have to allow some level of trust, or you go back to not communicating electronically at all.
Now, Apple also has a completely separate system, which parents can optionally enable, for child accounts only, that does look for nudity and such, using on-device machine learning. But if that system finds something, all it does is present the child with a dialog box that says, "This image may be inappropriate. Do you still want to see it?" - and neither the event nor the child's response is reported to the parents, to Apple, or to any authorities.
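The whole interaction fits in a few lines of (entirely hypothetical) Python, where classifier stands in for the on-device ML model and ask_user for the dialog box:

    def maybe_show_image(image_bytes, is_child_account, classifier, ask_user):
        # Everything here runs on the device; nothing is logged or
        # reported to parents, Apple, or anyone else.
        if is_child_account and classifier(image_bytes) > 0.5:
            if not ask_user("This image may be inappropriate. "
                            "Do you still want to see it?"):
                return None  # the child declined; no record is kept
        return image_bytes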
Apple made a major PR misstep in announcing the CSAM-scanning system and the "warn kids about nudity" system at the same time, and people inevitably conflated the two systems in their heads and multiplied their outrage. Too much of this is largely uninformed people getting hold of crumbs of data and getting outraged, assuming they have a reasonably good understanding of the whole situation and all the other moving pieces, when they very much do not.