
Unregistered 4U

macrumors G4
Jul 22, 2002
10,216
8,203
In December 2021, Apple removed the above update and all references to its CSAM detection plans from its Child Safety page, but an Apple spokesperson informed The Verge that Apple's plans for the feature had not changed. To the best of our knowledge, however, Apple has not publicly commented on the plans since that time.
They were accurate, here. I wasn’t even looking for it and I found the PDF on Apple’s website, so they didn’t remove ALL references. :)
 

steve09090

macrumors 68020
Aug 12, 2008
2,194
4,201
Fixed it for you, because THAT is what will happen. People will be arrested under false accusations. Maybe you will be among them. Me, I'm looking for an alternative to iCloud. Even if I have to build my own version.
😂😂😂😂

No one will be arrested under a false accusation. That is NOT how this works. Name a single time when someone has been arrested under this system…. 🦗🦗🦗 I really need to up my stake in tin foil stocks…
 

Analog Kid

macrumors G3
Mar 4, 2003
9,017
11,788
This. Matter of time until the (innocent) non-nude photos of people's children suddenly get blurred and red-flagged. It could even be a clothed child wearing salmon-colored (skin-colored) clothes, which could trick the CSAM algorithm into thinking the child was unclothed.

And then Child Protection Services and the local police suddenly come knocking at the parents' door.... hmm.... could be a tragic scenario.

Think more like checksumming and less like facial recognition. A checksum is a rote computation on the contents of your file; if anything is different, the checksum changes. Facial recognition is making a subjective judgement about "what looks like what".

The hashing here isn't asking "does this look like a child being abused?", the hashing is asking "is this image on a list of specific images of abuse known to be in circulation?". It is not extrapolating and looking for new images, only images that are already known to be circulating.

Unlike a basic checksum, the image is manipulated before hashing to normalize for things like resizing and color table changes, to prevent simple image manipulations from fooling the hash. These are image manipulations, not machine learning inferences, so they are not "recognition" tasks.

Like a checksum, information is lost in the hashing-- a complex image is reduced to a long number. The upside is that the image can't be recreated from the hash. It also means that there is a remote chance that two images reduce to the same long number-- but that doesn't mean they'll look anything alike to the human eye, or that they contain "similar" content.

Because of the risk of false positives, there's a human check involved before anything is forwarded to law enforcement. The question they are answering is "is this objectively the same image as the CSAM image that produced the hash?" They are not answering "is this subjectively an image of child abuse?", but it doesn't much matter, because false positives are far more likely to be pictures of architecture or food than child abuse.

That also means that there's no reason to think that if you trigger a false positive that your image is "close to being bad"-- it would just be any other random image.
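
If anyone wants to see the distinction in code, here's a rough Python sketch. To be clear, this is a classic "average hash", not Apple's actual NeuralHash -- the function names, sizes, and thresholds are just illustrative -- but it shows the difference between a plain checksum and a "normalize, then hash" comparison:

```python
# Toy sketch only: an ordinary checksum vs. a simple "average hash".
# This is NOT Apple's NeuralHash; it just illustrates the idea above.
# Assumes Pillow is installed (pip install Pillow).
import hashlib
from PIL import Image

def file_checksum(path: str) -> str:
    """Plain cryptographic checksum: change one byte and the digest changes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def average_hash(path: str, size: int = 8) -> int:
    """Normalize the image to a tiny grayscale thumbnail, then encode each
    pixel as one bit: brighter than average or not. Resizing or recompressing
    the photo barely changes these bits."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """How many of the 64 bits differ between two hashes."""
    return bin(a ^ b).count("1")

# Usage: two saves of the same photo usually have different checksums but a
# tiny Hamming distance, while unrelated photos differ in roughly half the bits.
# print(hamming_distance(average_hash("a.jpg"), average_hash("b.jpg")))
```

Note the second function still isn't "recognizing" anything -- it's just arithmetic on pixel values, which is the point being made above.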
 
Last edited:

steve09090

macrumors 68020
Aug 12, 2008
2,194
4,201
The hashing here isn't asking "does this look like a child being abused?", the hashing is asking "is this image on a list of specific images of abuse known to be in circulation?". It is not extrapolating and looking for new images, only images that are already known to be circulating.
I just wanted to bold this, because people clearly do not understand. Or they are just making trouble under their tin foil hats. I’m not sure which. Having been a police officer for nearly 40 years, I can tell you, police do not have the time or resources to be chasing up baby bath photos!
 

jonblatho

macrumors 68030
Jan 20, 2014
2,513
6,214
Oklahoma
Matter of time, yes. And, that time is probably a few thousand years. :) While I know you’re likely not interested in knowing how it works, in case anyone else finding this thread is interested…

There are images of illegal activities that have been captured. Mathematical hashes of these images have been created. All image-hosting companies (including Apple) scan their repositories, not using an actual image or a machine-learning algorithm (which could cause false positives), but specifically using the hashes to see if any images match. The CSAM algorithm is not looking for “salmon-colored (skin-colored)” tones. It mathematically computes the image’s hash and determines whether that hash matches the hash of one of the illegal images.
Point of clarification: It’s not looking for an exact match between two images, though. Instead of comparing two images directly, the algorithm first performs some “fuzzing” which is supposed to thwart trivial ways of working around an implementation that strictly looks for an exact match (like slightly cropping or rotating the image). Assuming they’re using a secure hashing algorithm underneath all this, modifying even a single pixel of the image should result in a completely different hash. That’s why the fuzzing happens, and of course the specifics are (and should be) unknown to the general public. CSAM detection implementations have done this basically from the beginning.

The flipside of that is that the fuzzing could more easily lead to false matches, which is why Apple would require a threshold of a certain number of matched images, along with manual review, before disabling the user’s account and reporting the matter to authorities. (Note: It’s unclear whether the “manual review” part is actually legal.)
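
For the curious, the threshold logic described above boils down to something like the sketch below. To be clear, the hash values, the tolerance, and the threshold number are all made up for illustration -- Apple's real pipeline operates on encrypted vouchers server-side, not plain hash lists like this:

```python
# Minimal sketch of the "threshold before any review" idea described above.
# The database contents, tolerance, and threshold are invented for illustration.

KNOWN_HASHES = {0x3FA9C2E117D480B5, 0x91B22E07ACD5F310}  # hypothetical known-image hashes
MATCH_TOLERANCE = 4    # max differing bits to still count as a "fuzzy" match
REPORT_THRESHOLD = 30  # matches required before a human ever looks at anything

def is_fuzzy_match(image_hash: int) -> bool:
    """True if the hash is within MATCH_TOLERANCE bits of a known hash."""
    return any(bin(image_hash ^ known).count("1") <= MATCH_TOLERANCE
               for known in KNOWN_HASHES)

def evaluate_account(image_hashes: list[int]) -> str:
    """Below the threshold nothing happens; above it, a human checks whether
    the flagged images really are the known images before any report is made."""
    matches = sum(1 for h in image_hashes if is_fuzzy_match(h))
    if matches < REPORT_THRESHOLD:
        return "no action"
    return "queue for manual review"
```

A single accidental near-match does nothing on its own; it takes dozens of independent matches before the review step is even reached.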
 
Last edited:

bsolar

macrumors 68000
Jun 20, 2011
1,535
1,751
The issue with these initiatives is that the underlying capability is bad whereas the alleged use-case presented is good.

The underlying capability is matching content on the user's device with whatever the authority wants to look for, which is definitely not good. The use-case limits it to known child-abuse photographs if iCloud is enabled, which in itself looks pretty good.

The problem is, once the capability is there, it basically lays the groundwork for authorities to push for different use-cases than originally intended, e.g. having it enabled even without iCloud enabled (it technically does not need it), or covering "terrorist propaganda" or whatever a given authority wants to prosecute or, sometimes, persecute (hash matching in itself is technically not limited to child-abuse photos; it can match any content).
 

Solomani

macrumors 601
Sep 25, 2012
4,785
10,477
Slapfish, North Carolina
As long as we're being conspiratorial, why stop there?

It's probably been enabled since the late 70's and all of our false positives triggered by baby bath photos are being kept in sealed indictments until we retire because they're really just after our social security money.
No man. They won't settle for your paltry near-bankrupt Social Security savings. Big Ebil Gubmint is waiting until your $BTC matures to a value of around $400,000 per token. Then they will bust you for the baby bath pics and seize all your crypto assets.

BTC at 400k might take a while tho. But they play The Long Game.... just like Communist China, the Illuminati, and Hunter Biden's Laptop.
 

freedomlinux

macrumors 6502
Jul 27, 2008
254
406
CT, USA
Regarding the possibility of back doors, this is just a matter of trust. Apple already clearly stated that they would not allow this feature to be abused by law enforcement agencies for any other purpose, and I choose to believe that.
Surely you're joking. Once Apple demonstrates that this ability exists, certain governments will provide them with lists of additional images & hashes to detect.

Apple might not know exactly what they are flagging, but the motivation for governments to flag photos of protestors and social undesirables is easy to understand. Apple "follows all local laws" and frankly I don't believe for 1 minute they will refuse to flag those "suggested" images.
 
  • Like
Reactions: BurgDog and EyeTack

Naraxus

macrumors 68020
Oct 13, 2016
2,111
8,562
No idea what you’re talking about. I never said podofile… 🤷🏻‍♂️ Maybe you need to check a dictionary. Clearly you’re confusing children with podiatry. Weird….
Yes that was a mistake on my part. When you said paedophile I read it as podophilia for some asinine reason.
 
  • Haha
Reactions: steve09090

steve09090

macrumors 68020
Aug 12, 2008
2,194
4,201
Well…. Basically, we need to look at the facts as presented. There are no facts to support the theory that hashing specific images identified as being part of a child exploitation system will lead to anything other than what has been stated. Anything beyond that is conjecture, a theory that governments are conspiring to do more. That’s a conspiracy theory. People may not trust their government/s, but people here claiming as a certainty that hashes will be used for other things are basing it purely on paranoia, not facts.

The reality is that we know governments can tap any phone now anyway, if they choose. Such is the ability of government espionage. They don’t need hashing on files, just an IMEI and a phone number.
 
Last edited by a moderator:

mr_jomo

Cancelled
Dec 9, 2018
429
530
There's a BIG difference between how Apple's feature is handling this and how the other providers you mentioned handle it. They all do it on the server. Apple's feature operates entirely ON DEVICE. The hashes never leave the device unless significant thresholds are crossed.

This was clearly explained by Apple, but those against this feature continue to compare it to how the other providers are doing it... on their servers, where privacy invasion can happen much more easily. That has never been Apple's approach with this feature.
Almost correct, but for one thing: the 'safety voucher' leaves the client device at the same time the image is uploaded to iCloud. The threshold algorithm, and Apple's ability to subsequently decrypt the data package in the voucher (including the hash and a 'visual derivative') once the threshold is exceeded, are entirely server-side (from our perspective).

But that actually makes the process better cryptographically, since a malicious actor could potentially exploit an on-device threshold to, for instance, reverse engineer the original image set. Also, leaving the vouchers on the device would open them up to potential tampering.

Having researched this back during the original upheaval, IMHO Apple built an extremely secure approach, disregarding any other implications.

That said, it's all a moot point from all perspectives: the EU will most likely make this a prerequisite for all cloud services (if they didn't already; I lost track late last year), and it's probably a two-to-three-day job for any coder to build a bulletproof, crypto-protected image-sharing app that just uses iCloud to host BLOBs anyway.
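
Roughly, the data flow being described is something like the sketch below. Every name and field here is invented for illustration, and the placeholder functions stand in for real cryptography (the actual design uses private set intersection and threshold secret sharing, which this toy code does not implement):

```python
# Very rough data-flow sketch of the voucher handling described above.
# Names and fields are invented; the real operations are cryptographic.
from dataclasses import dataclass

@dataclass
class SafetyVoucher:
    blinded_hash: bytes       # opaque to the server unless it matches the database
    encrypted_payload: bytes  # hash + "visual derivative", opaque below the threshold

# --- stand-ins so the sketch runs; these are NOT the real crypto operations ---
KNOWN_BLINDED_HASHES: set[bytes] = set()

def matches_known_database(blinded_hash: bytes) -> bool:
    return blinded_hash in KNOWN_BLINDED_HASHES

def decrypt_payload(voucher: SafetyVoucher) -> bytes:
    return voucher.encrypted_payload  # placeholder "decryption"

# --- the flow the post describes ---
def client_upload(image_bytes: bytes, voucher: SafetyVoucher) -> tuple[bytes, SafetyVoucher]:
    """Client side: the voucher leaves the device together with the image;
    nothing is held back on-device waiting for a threshold."""
    return (image_bytes, voucher)

def server_side_check(vouchers: list[SafetyVoucher], threshold: int = 30) -> list[bytes]:
    """Server side: count matches against the known database; only once the
    count exceeds the threshold can the matching payloads be opened for review."""
    matching = [v for v in vouchers if matches_known_database(v.blinded_hash)]
    if len(matching) <= threshold:
        return []  # below the threshold, nothing is readable
    return [decrypt_payload(v) for v in matching]
```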
 
  • Like
Reactions: CarlJ

Zest28

macrumors 68020
Jul 11, 2022
2,242
3,101
So basically you cannot take family pictures with your kids without running the risk of being flagged, even though they are regular family pictures.

No algorithm works 100% of the time, and I'm sure many false positives will show up. And once you get accused despite being innocent, the damage is already done.
 

LV426

macrumors 68000
Jan 22, 2013
1,844
2,277
I think there is potential for this system to go badly where innocent photos are viewed in a different context. For example…

Let’s say, you are a parent and you take completely innocent photos of your child.
Some parents take photos where the child may be nude (doesn’t everyone have the classic embarrassing “baby butt” shot in their childhood photo album?) but nobody is offended because everyone knows there is no malintent behind the image. It’s just a child in a state of undress.
So, you have a photo like this of your kid, or your kid in the bath, or your kid at the pool/beach, etc. And you post it to social media, and nobody thinks anything of it because to anyone with a properly working brain, there is nothing to think about it.

British journalist Julia Somerville was famously detained by the Metropolitan Police for taking a snap of her kid in the bath. Saving an iCloud photo in 2022 is in some ways similar to having a photo developed at Boots in 1995.
 

steve09090

macrumors 68020
Aug 12, 2008
2,194
4,201
You honestly think algorithms don't make mistakes?
No point discussing this with you. You're clearly not interested in facts. But thanks for the question.

British journalist Julia Somerville was famously detained by the Metropolitan Police for taking a baby snap of her kid in the bath. Saving an iCloud photo in 2022 is in some ways similar to having a photo developed at Boots in 1995.
What about Sally Mann? No, she wasn’t, and she has had massive photography exhibitions of her kids nude. There are extreme examples everywhere, and that is more about the law, not this technology or this law. That’s straight decency laws. Basically, your point is irrelevant.
 
  • Like
Reactions: CarlJ

Zest28

macrumors 68020
Jul 11, 2022
2,242
3,101
British journalist Julia Somerville was famously detained by the Metropolitan Police for taking a baby snap of her kid in the bath. Saving an iCloud photo in 2022 is in some ways similar to having a photo developed at Boots in 1995.

This is indeed the bulls-hit that will happen.

The funny thing is, the people who actually do have ill intent are not stupid enough to put their stuff on iCloud.

So all this does is subject regular people to mass surveillance, with the risk of being flagged for something that is innocent.

And I'm sure the US government will later extend these mass surveillance capabilities to other things.
 

steve09090

macrumors 68020
Aug 12, 2008
2,194
4,201
This is indeed the bulls-hit that will happen.

The funny thing is, the people who actually do have ill intent are not stupid enough to put their stuff on iCloud.

So all this does is subject regular people to mass surveillance, with the risk of being flagged for something that is innocent.

And I'm sure the US government will later extend these mass surveillance capabilities to other things.
Like I said, you’re not interested in facts. Show me the facts. Give specific examples of actual events where the government has done this, not just wild conjecture. You’re not basing it on anything factual.
 
  • Like
Reactions: strongy

Unregistered 4U

macrumors G4
Jul 22, 2002
10,216
8,203
Think more like checksumming and less like facial recognition. A checksum is a rote computation on the contents of your file; if anything is different, the checksum changes. Facial recognition is making a subjective judgement about "what looks like what".
Cloudflare has a pretty good breakdown (WITH some visual examples) of the feature they’ve made available to their customers, which I hadn’t come across previously.
Again, only for those interested in the reality of how these things currently work.
 
  • Like
Reactions: CarlJ

Zest28

macrumors 68020
Jul 11, 2022
2,242
3,101
Like I said, you’re not interested in facts. Show me the facts. Give specific examples of actual events where the government has done this, not just wild conjecture. You’re not basing it on anything factual.

Have you heard about this guy called Snowden?

And stop mentioning facts when you don't even know the facts about machine learning, thinking it works perfectly on real-world data.
 
  • Angry
  • Like
Reactions: pdoherty and CarlJ

Unregistered 4U

macrumors G4
Jul 22, 2002
10,216
8,203
Surely you're joking. Once Apple demonstrates that this ability exists, certain governments will provide them with lists of additional images & hashes to detect.
Any government that would want this kind of information already has it, through government ownership of, or cooperation with, the means of transmitting the data. And through a far more direct method than having to wait for a company to scan the data, detect a hash, and then communicate it.
 
  • Like
Reactions: CarlJ

steve09090

macrumors 68020
Aug 12, 2008
2,194
4,201
Have you heard about this guy called Snowden?

And stop mentioning facts, when you don't even know the facts about Machine Learning, thinking it perfectly works on real world data.
I already mentioned the ability of governments. It has nothing to do with this. So…. No facts then?
 