Someone posted info about the company behind the CSAM database, saying it has no transparency requirements and is a semi-government entity. They would just need to add the hashes of "problematic material" (e.g., a pic depicting the US president as a clown) to the database and send the updated version to Apple.
https://forums.macrumors.com/thread...t-csam-in-icloud-photos.2400202/post-32423927
No need to request an expansion of the feature, since it's built to work on any kind of pic, and Apple wouldn't know, because they're just hashes; there's no way to identify the pic they're related to.
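To make the "just hashes" point concrete, here's a rough sketch; note that Apple's real system uses a perceptual NeuralHash rather than a cryptographic hash, and the blocklist entry below is a made-up value. The point is just that whoever holds the list sees only opaque digests:

```python
# Illustrative sketch only: Apple's real system uses a perceptual
# "NeuralHash", not SHA-256. This just shows why a bare list of
# hashes tells you nothing about the images behind it.
import hashlib

# Hypothetical blocklist: opaque 32-byte digests supplied by a third
# party. Nothing about these values reveals what image produced them.
blocklist = {
    bytes.fromhex("9f86d081884c7d659a2feaa0c55ad015"
                  "a3bf4f1b2b0b822cd15d6c15b0f00a08"),
}

def matches_blocklist(image_bytes: bytes) -> bool:
    """Hash the image and check membership; the list maintainer never
    needs to see the image, and the matcher never sees the originals."""
    digest = hashlib.sha256(image_bytes).digest()
    return digest in blocklist

print(matches_blocklist(b"some photo bytes"))  # False for this dummy input
```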
And someone else replied to that with info from Apple on how they planned to mitigate that risk, which I'll quote again here:
"The set of image hashes used for matching are from known, existing images of CSAM and only contains entries that were independently submitted by two or more child safety orga- nizations operating in separate sovereign jurisdictions. Apple does not add to the set of known CSAM image hashes, and the system is designed to be auditable. The same set of hashes is stored in the operating system of every iPhone and iPad user, so targeted attacks against only specific individuals are not possible under this design"
The ability to do it on-device is the slippery-slope part.
Even if governments can make demands either way, requesting an expansion of an existing system can be easier than demanding that Apple build something completely new.
Continuing from the same source above:
"We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it."
So this isn't a new thing for Apple, and I'm guessing the reason they went through the trouble of developing this whole system was to find a way to address demands from law enforcement while retaining as much user privacy as possible. They put "we will not accede to any government's request to expand it" in black and white, setting them up for a huge hit to reputation and value if they ever do.
Ok, so let's leave that aside and consider Apple equally untrustworthy. Then what?
Well, there are tons of scanning processes on our devices already that are way more beneficial to oppressive governments than this crazy NeuralHash scheme. This scheme is designed to cast the narrowest net possible, looking only for images that match specific known images. That doesn't sound useful to a government looking to root out subversives.
Your phone already does a ton of AI scanning on your device: text scanning in your email; facial and text recognition in images; classification of images by general content and context categories. It knows your location and builds a database of the places you visit frequently, it has your entire contact list and calendar, and it even analyzes your movements and behaviors, from classifying the type of activity you're engaged in for fitness all the way down to deciding when it should and shouldn't charge the device.
Not to mention that if you don't trust Apple, you have no way of knowing what else they're doing on your device.
So if you can't trust Apple, you're already pretty screwed, and any one of those other methods is way more useful to a bad government than trying to slip hashes of images it considers subversive into a child-protection function, hoping the people it's looking for match at least 30 of them, and only then triggering a manual review and a report to an agency aimed at child safety.
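For what it's worth, the threshold gate alone makes that attack clumsy. Here's a minimal sketch of the gating logic, assuming the 30-match figure mentioned above; the real design uses threshold secret sharing, so the server cryptographically can't decrypt anything until an account crosses the threshold, and this plain counter just shows the shape of the logic:

```python
# Simplified illustration of the ~30-match threshold. Apple's actual
# design uses threshold secret sharing; this counter only shows why
# a handful of planted "subversive" hashes accomplishes nothing.

MATCH_THRESHOLD = 30  # the figure cited in the discussion above

def should_trigger_review(match_count: int) -> bool:
    """Nothing surfaces for human review below the threshold."""
    return match_count >= MATCH_THRESHOLD

# A government that sneaks a few hashes into the database still gets
# nothing unless a single target amasses 30+ matches, and even then
# the output is a human review queue, not a silent report to them.
print(should_trigger_review(5))   # False
print(should_trigger_review(31))  # True
```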
Wouldn't it be easier to just force Apple to scan text and images for faces and phrases like "down with the dictator" and send them an encrypted blip when they get a hit?