
hans1972

macrumors 68040
Apr 5, 2010
3,340
2,916
Hashing expert here. Here’s where you miss the mark. You are essentially saying their entire argument is moot because they don’t understand hashing.

Hashing isn’t the issue here. It’s the list they compare it to. That’s where the vulnerability for abuse can happen. In one country, could they be comparing hashes to a list of banned memes? This system cannot exist on-device. I’m all for protecting Apple’s servers, but my device should be private.

But why would you store anti-government meme images in your iCloud Photo Library if you live in a country with an authoritarian government?

Also, if you're looking for similar images, which would be much more effective with meme images, there are much more potent solutions, like the one already implemented in iPhoto.
 

hans1972

macrumors 68040
Apr 5, 2010
3,340
2,916
The issue with these initiatives is that the underlying capability is bad whereas the alleged use-case presented is good.

The underlying capability is matching content on the user's device with whatever the authority wants to look for, which is definitely not good. The use-case limits it to known child-abuse photographs if iCloud is enabled, which in itself looks pretty good.

The problem is, once the capability is there, it basically lays the groundwork for authorities to push to implement different use-cases than originally intended, e.g. having it enabled even without iCloud enabled (it technically does not need it), or covering "terrorist propaganda" or whatever a given authority wants to prosecute or, sometimes, persecute (hash matching in itself is technically not limited to child-abuse photos; it can match whatever content).

They don't need the CSAM Detection System to do that. The software needed is already in Photos.

Also, finding "terrorist propaganda" would be almost impossible with the CSAM Detection System unless there is an iconic image they all share and the government has this in their database.

The CSAM Detection System can't find "similar images". It is trained to be exceptionally bad at it.
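
To make the exact-copy point concrete, here is a toy sketch in Python. The average-hash function is something I made up for illustration and is nothing like Apple's actual NeuralHash, but the principle is the same: only (near-)identical copies of images already in the database land close enough to count as a match.

```python
# Toy sketch, NOT Apple's NeuralHash: flagging only (near-)exact copies of
# images whose hashes are already in a fixed database.

def average_hash(pixels):
    """Crude perceptual hash: one bit per pixel, set if the pixel is above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def matches_database(photo, database, max_distance=1):
    h = average_hash(photo)
    return any(hamming(h, known) <= max_distance for known in database)

# Hypothetical database built from *known* images supplied by an authority.
known_image = [[10, 200, 30, 40],
               [50, 60, 220, 80]]
database = {average_hash(known_image)}

# A re-encoded copy of the known image still matches...
copy_of_known = [[11, 199, 31, 41],
                 [49, 61, 221, 79]]
print(matches_database(copy_of_known, database))   # True

# ...but a brand-new photo, even of "the same kind of content", does not.
new_photo = [[90, 10, 140, 200],
             [15, 230, 25, 170]]
print(matches_database(new_photo, database))       # False
```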
 

hans1972

macrumors 68040
Apr 5, 2010
3,340
2,916
So basically you cannot take family pictures with your kids without running the risk of being flagged, despite them being regular family pictures.

No algorithm works 100% and I'm sure many false positives will show up. And once you get accused despite being innocent, the damage is already done.

You only believe that because you don't know how it would work.

The CSAM Detection System was developed and trained to be great at finding an exact copy (or derivative) of a picture in its database and at the same time being exceptionally bad at finding similar images.

The CSAM Detection System can't find nude images at all. And AFAIK Apple tested it on a collection of several hundred thousand porn images to verify its inability to find nude images.
 

hans1972

macrumors 68040
Apr 5, 2010
3,340
2,916
This is indeed the bulls-hit that will happen.

The funny thing is, the people who actually do have ill intent are not stupid enough to put their stuff on iCloud.

So all this does is mass surveillance of regular people, with the risk of being flagged for something that is innocent.

And I'm sure the US government will later extend these mass surveillance capabilities to other things.

The security vouchers will only be revealed to Apple when the system detects 30 images which are in the database. Then Apple would manually look at a representation of each image to determine if it looks like child pornography.

The system can't find nudity at all. You could have 50,000 nude images of your children and the probability of the system marking 30 of them as existing known CSAM would be extremely low.
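
As a rough counting sketch of that threshold (a simplification on my part: in the actual design Apple can't read anything at all below the threshold because of threshold secret sharing, not because of a simple counter):

```python
# Counting sketch of the 30-match threshold (hypothetical simplification).

THRESHOLD = 30

def review_needed(match_flags):
    """match_flags: one bool per uploaded photo, True if its hash was in the database."""
    return sum(match_flags) >= THRESHOLD

# 50,000 personal photos, none of them copies of known database images:
uploads = [False] * 50_000
print(review_needed(uploads))                   # False -> nothing is revealed to Apple

# Only an account holding >= 30 known database images crosses the line:
uploads_with_known_images = [True] * 30 + [False] * 100
print(review_needed(uploads_with_known_images)) # True -> human review of the flagged images
```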
 

hans1972

macrumors 68040
Apr 5, 2010
3,340
2,916
Of course, there are also countries in this world that would like to fill up "their CSAM" hashes with their own (political) targets, to see where and with whom they are photographed.
Such an attempt makes blackmail easy, so their opponents could also be quickly located and more easily destroyed.

Weighing this potential for abuse, which would never be recognizable from the outside, against the fight against this dirty porn is the difficult problem.

Yes, it also makes me angry that the evil of child pornography cannot be eradicated. But unfortunately CSAM detection won't manage it either.

But this means you don't know how the system works.

Let's say a government's enemy #1 is person A.

They have a bunch of pictures of person A which they force Apple to hash with their special program AND include in Apple's database.

A lot of people take photos of person A, both alone and with themselves. Hundreds of thousands of such images of person A are taken.

The CSAM Detection System will not detect these images since they aren't in the database.


So it's an extremely bad solution for governments. Now, the Photos app has all the code needed.
 

hans1972

macrumors 68040
Apr 5, 2010
3,340
2,916
My argument for why this should never be implemented is quite simple. Several countries around the globe are moving closer and closer to fascism, the U.S., Hungary and Russia chief among them. Let’s say this gets implemented and we see the black clouds of fascism once again engulf Europe and the U.S. as well, a scenario not that far away. How long before Apple is forced by governments to flag pictures of demonstrations and known people fighting against the government?

The CSAM Detection System can't be used to flag pictures of demonstrations and known people.

The CSAM Detection System isn't designed to find images of a similar nature and so would be useless for this purpose.

The software you're looking for is called Photos and has been included with iPhone since 2007.
 

hans1972

macrumors 68040
Apr 5, 2010
3,340
2,916
The issue here is that they're using AI to generate the hashes; it isn't simply a scan for identical hashes of photos. You don't need any AI to compare hashes, so what they're doing, and how effective it is, is all speculation.

Bottom line is this system would use AI on your phone to scan your content, somehow. Given how primitive AI seems to be at recognizing objects when using the camera, my speculation would be that it would make lots of mistakes.

And how long before this same technology is used to identify terrorism, and how effective and prone to mistake/bias would that be? Imagine what they would train the AI with to find terrorists--brown skinned, young men?

Just not a good idea to have our tech reporting on us to the government. The government has access to amazing technology itself, let them use their own resources and not use our computers against us.

Apple isn't using AI on the images. Part of the algorithm was developed by machine learning.

Code algorithm -> run the algorithm through machine learning -> new fine-tuned algorithm -> deploy algorithm as part of iOS

The technology can't be used to identify terrorism. It can't find images of similar nature. It can only detect an exact copy of known images (or close derivatives).

The system can't detect dog images, terrorism images, nude images or any particular sort of images.
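
Here is the distinction I'm drawing, as a sketch with made-up function names (neither function is Apple's code): a classifier like the one in Photos generalizes to pictures it has never seen, while the CSAM matcher only answers "is this hash on the list?".

```python
# Sketch of the difference: a content classifier generalizes to *new* images
# of a category, a hash matcher does not.

def photos_style_classifier(image_features):
    """Stand-in for Photos-style recognition: decides what the picture *depicts*.
    A real model would generalize to dogs/nudity/etc. it has never seen before."""
    return "dog" if image_features.get("has_fur") and image_features.get("has_tail") else "other"

def csam_style_matcher(image_hash, database_hashes):
    """Stand-in for the CSAM matcher: only answers 'is this one of these exact
    known images?' It has no notion of categories at all."""
    return image_hash in database_hashes

# A never-before-seen dog photo:
print(photos_style_classifier({"has_fur": True, "has_tail": True}))       # "dog" -> recognized by category
print(csam_style_matcher("hash-of-new-dog-photo", {"hash-A", "hash-B"}))  # False -> not on the list
```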
 

hans1972

macrumors 68040
Apr 5, 2010
3,340
2,916
If no AI is used at all in this portion, then I have been misled or have misunderstood.

I wonder if the misunderstanding comes from the creation of the derivative image. Do you know how that is accomplished?

There is a database of hashes which represent illegal images. The system is designed to detect if you have those images while being exceptionally bad at detecting images which aren't in the database.

So the CSAM Detection System can't recognize child pornography as such, and it will not flag new or unknown child pornography.

Let's say that in some country, taking pictures of a car with a Christmas tree in the background was illegal. All other images of cars and Christmas trees would be legal. The government had, with Apple's help, provided hashes of 10,000 images of cars with Christmas trees.

The CSAM Detection System would be equally good at this without changing a line of code. It would detect if anyone had a copy of one of the 10,000 illegal images, but they could take new pictures of cars with Christmas trees without the system triggering at all.

This is extremely inefficient if the government would like to catch all images of cars with Christmas trees. They would want Apple to create a system which finds any pictures with cars and Christmas trees: old, new, known and unknown.

Apple wouldn't need to create anything new, they could just use the algorithms in the Photos app and iCloud Backup to get the photos back to the government.
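
A sketch of that point (made-up hash strings): the matching code stays identical, only the database swapped in changes, and neither database can flag a freshly taken photo.

```python
# Sketch: the matching code never changes, only the database does, and neither
# database can flag a brand-new photo (its hash simply isn't on the list).

def count_matches(photo_hashes, database):
    return sum(h in database for h in photo_hashes)

csam_database      = {"known-csam-hash-1", "known-csam-hash-2"}
christmas_database = {f"known-car-and-tree-hash-{i}" for i in range(10_000)}

my_new_photos = {"hash-of-my-new-car-and-tree-photo", "hash-of-my-kids"}

# Same function, different list: neither flags photos taken yesterday.
print(count_matches(my_new_photos, csam_database))       # 0
print(count_matches(my_new_photos, christmas_database))  # 0
```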
 

hans1972

macrumors 68040
Apr 5, 2010
3,340
2,916
Except CSAM opens up the possibility to scan for, let’s say, demonstrations in real time. All you’d need to do is make a law that requires this implementation, and then Apple would be forced to use this technology for other purposes than originally intended.

The technology opens up the possibility to scan for just about anything, that’s the issue.

No, the CSAM Detection system can't be used to detect images of demonstrations in real time.

What you're looking for is regular image recognition. That's already implemented in the Photos app and Apple could just make small changes to that app to create this.

The CSAM Detection System would be utterly useless to detect demonstrations, terrorists, people with weapons, people smoking pot, dog images, cat pictures etc.
 

hans1972

macrumors 68040
Apr 5, 2010
3,340
2,916
So your argument is that since it is so easy for government to already invade our privacy there’s no point opposing technology that makes it even easier?

The CSAM Detection System is pretty bad at invading privacy compared to the Photos app combined with iCloud backup.

And yet, there are no big protests against that.

If you're worried, you should be worried about the most effective technologies first.
 

mdatwood

macrumors 6502a
Mar 14, 2010
919
908
East Coast, USA
Uninformed? The agency's name is NCMEC, and their database contains pictures that *might be* CSAM. The vast majority is intimate selfies of teens.
Go read up on the facts.
It sounds like you're implying NCMEC itself is corrupt and/or the database does not just have CSAM.

This database is already used by all the cloud providers to check for CSAM today.
 

dk001

macrumors demi-god
Oct 3, 2014
10,684
15,033
Sage, Lightning, and Mountains
I hope everyone who has CSAM in their iCloud photos gets what's coming to them, which is Bubba in a 5 x 7 cell.

For the ignorant people who don't know how hashing works, take five minutes out of your day and learn about it. Take another five minutes and read a few victim impact statements from children who are/were impacted by distribution of CSAM. If you're not victimizing children, then move along, you have nothing to worry about. Yeah, hashing iCloud albums will create a "back door". Ignorance at its best. More people who don't know how hashing works.

BTW, Facebook, Google, Dropbox and a host of others have been hashing images and videos for years now. And thank god, it's put a lot of pedophiles behind bars. Have you heard about any "back doors" from these providers?
Adding @coolfactor

You really should learn how this Apple tech was designed to work before you commented, and how it differs from what others are doing.
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,117
8,060
This would require them re-engineering how the system works. What's preventing Chinese law from forcing Apple to engineer something today, with the additional requirement that Apple not disclose it? Better yet, use the existing metadata that's already on your phone without any re-engineering required.
It’s always entertaining that the conspiracies only go “so far”. China can only tell Apple to modify what hashes they’re checking, but they can’t tell Apple to do any of the far more useful things they’d want. :)
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,117
8,060
Most of those people didn't know how the specific technology worked. So I would say those of us who studied it know more than most of those people.

Being a security researcher doesn't help you, since what is needed is an understanding of some of the algorithms and the mathematics behind them.
Plus, as a security researcher, if you want to get your name circulated, write something about Apple that, to your peers, will be sort of “Yes, that’s how things work”, but to the general public would seem very alarming.
 

dk001

macrumors demi-god
Oct 3, 2014
10,684
15,033
Sage, Lightning, and Mountains
You're right that understanding hashing isn't important here. But you're missing the mark on what the issue is. Specifically, the hashes have to be agreed upon across competing jurisdictions. "Banned memes" would have to be banned in non-cooperative (politically, etc.) jurisdictions for the hashes to be included in the scan --- so it doesn't matter if, e.g., Russia bans memes against Putin, unless the U.S. also bans those memes. This is the type of safety mechanism the researchers who spoke out against this said would need to be required for this type of system to remain safe against government abuse.

But that comparison is also the issue. There were states asking about the comparison functionality, as that aspect really intrigued them. That could only be for one reason I can fathom: collaborative lists to search for what they deem “necessary”.

Another aspect never really explained was why Apple was interjecting themselves into the personal visual comparison function.
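
For reference, the safeguard described in the quote above boils down to a set intersection (made-up hash names below, just to illustrate): a hash only ships to devices if independent jurisdictions each list it.

```python
# Sketch of the intersection safeguard: only hashes present in lists from
# independent jurisdictions end up in the on-device database.

russia_list = {"putin-meme-hash", "csam-hash-123", "csam-hash-456"}
us_list     = {"csam-hash-123", "csam-hash-456", "csam-hash-789"}

shipped_database = russia_list & us_list
print(shipped_database)   # only hashes both lists contain; the one-country meme hash is dropped
```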
 

steve09090

macrumors 68020
Aug 12, 2008
2,166
4,150
I still fail to see why Apple wanted to go down this path. Nothing has arisen to show they were forced or coerced. Not seeing any monetization from it.
They aren’t forced to contribute to ConnectED, Global Giving, Employee Giving Program ($750M last year), Product Red, or donate to Hurricane, Fire or Flood victims eh? Sure, some of these benefit Apple, but they also do things like laying Ethernet Cables for schools etc etc which aren’t product related. They do a lot more for the community than most.
 

dk001

macrumors demi-god
Oct 3, 2014
10,684
15,033
Sage, Lightning, and Mountains
They aren’t forced to contribute to ConnectED, Global Giving, Employee Giving Program ($750M last year), Product Red, or donate to Hurricane, Fire or Flood victims eh? Sure, some of these benefit Apple, but they also do things like laying Ethernet Cables for schools etc etc which aren’t product related. They do a lot more for the community than most.

And all of it falls under the “deductible” column.
The CSAM tool? Nope.
We should all remember that in the end, there is $$$ attached somewhere that can be leveraged, unless they are forced to do something via regulation/law.
 

dk001

macrumors demi-god
Oct 3, 2014
10,684
15,033
Sage, Lightning, and Mountains
For any who want to go back through the mountains of info (good, bad, and other) on this: in the end it wasn’t the tech (that was pretty well done) but rather how it could be leveraged outside of Apple’s direct control. There were nation states already asking questions and looking into how to leverage this technology, including EU members and the US.

Never mind the legality of it. That was also a big unknown. NCMEC board members were even commenting on the questionable legality.
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,117
8,060
I still fail to see why Apple wanted to go down this path. Nothing has arisen to show they were forced or coerced. Not seeing any monetization from it.
Apple REALLY wanted to be able to tout that photos in iCloud Photos are encrypted. However, government agencies will not allow Apple to be an unwilling protector of those that deal in CP. So, no comprehensive encryption.

UNLESS!

Apple figured that since all any government would want is an assurance that they aren’t hosting CP, they could resolve that thorn by looking for groups of matches on the phone, but ONLY if that phone was uploading images to iCloud. If they found enough matches, they would lock the account and alert authorities. With that assurance, they’d be able to encrypt all the images such that Apple wouldn’t be able to provide anything even with a search warrant.

I think they can still do this… but instead of for “everyone” they’d have to make it opt in. So, maybe the time they’ve taken is to make the feature more granular, at a per account level. There’s still documentation on Apple’s site on the plan, all I’m waiting for is the “when”.
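
Roughly the flow I mean, as a sketch (hypothetical names, not Apple's implementation): vouchers are only produced for photos actually headed to iCloud Photos, and nothing is actionable until the match threshold is crossed.

```python
# Sketch of the described flow (hypothetical names and a simplified voucher).

THRESHOLD = 30

def make_voucher(photo_hash, database):
    # In the real design the voucher is encrypted so Apple can't read it until
    # an account exceeds the threshold; here it's just a plain dict.
    return {"hash": photo_hash, "matched": photo_hash in database}

def upload_photos(photo_hashes, icloud_photos_enabled, database):
    if not icloud_photos_enabled:
        return []                                   # nothing scanned, nothing uploaded
    return [make_voucher(h, database) for h in photo_hashes]

def needs_human_review(vouchers):
    return sum(v["matched"] for v in vouchers) >= THRESHOLD

vouchers = upload_photos(["h1", "h2"], icloud_photos_enabled=True, database={"known-1"})
print(needs_human_review(vouchers))   # False -> account stays untouched
```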
 

dk001

macrumors demi-god
Oct 3, 2014
10,684
15,033
Sage, Lightning, and Mountains
Apple REALLY wanted to be able to tout that photos in iCloud Photos are encrypted. However, government agencies will not allow Apple to be an unwilling protector of those that deal in CP. So, no comprehensive encryption.

UNLESS!

Apple figured that since all any government would want is an assurance that they aren’t hosting CP, they could resolve that thorn by looking for groups of matches on the phone, but ONLY if that phone was uploading images to iCloud. If they found enough matches, they would lock the account and alert authorities. With that assurance, they’d be able to encrypt all the images such that Apple wouldn’t be able to provide anything even with a search warrant.

I think they can still do this… but instead of for “everyone” they’d have to make it opt in. So, maybe the time they’ve taken is to make the feature more granular, at a per account level. There’s still documentation on Apple’s site on the plan, all I’m waiting for is the “when”.

From what I originally found, the FBI (and others) didn’t want Apple to encrypt iCloud without keeping a key for access, and Apple obliged them.
 