
britboyj

macrumors 6502a
Apr 8, 2009
814
1,086
I think the European Commission or the European Parliament is asking for ways to detect this content easily, as well as for ending or limiting E2E encryption. I don’t have the sources at hand, but I’ve read news about it.

The EU of all places will absolutely not be asking Apple to snoop through private files. Have you MET the EU?
 

hagar

macrumors 68020
Jan 19, 2008
2,014
5,080
Imagine if the Chinese government could one day use this system to check who has the Tank Man image in their cloud storage and deport that person to a murder camp the next day without question.

Imagine the Chinese government forcing Apple to check who has the Tank Man image on their phone while this system doesn’t exist yet.

In that case Apple also has to comply. It’s simply ridiculous to think a government would wait for this system instead of just forcing Apple to hijack another one.
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,216
8,203
You want to scan images uploaded to iCloud, fine, I have no expectation of privacy there....but on my device....pound sand.
You may not know this, but the “scanning on device” was to be enabled only when iCloud Photos was enabled. iCloud not enabled, no scan.
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,216
8,203
Of course, there are also countries in this world that like to fill up "their CSAM" hashes with their own (political) targets to see where and with whom they are photographed.
Very likely not, as they are far more likely to use traditional hashes (like MD5, SHA-1, and SHA-256) on the unencrypted cloud image repositories instead of the fuzzy hashes that CSAM detection uses.
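
To illustrate the difference: a traditional cryptographic hash is an exact fingerprint, so changing even one bit of a file produces a completely different digest. A quick Python sketch, purely illustrative and nothing Apple-specific:

```python
# Purely illustrative: cryptographic hashes (MD5, SHA-1, SHA-256) change
# completely after a one-bit edit, which is why they only catch
# byte-for-byte identical copies of an image.
import hashlib

original = b"pretend these are image bytes" * 100
tweaked = bytearray(original)
tweaked[0] ^= 1  # flip a single bit

for name in ("md5", "sha1", "sha256"):
    h1 = hashlib.new(name, original).hexdigest()
    h2 = hashlib.new(name, bytes(tweaked)).hexdigest()
    print(name, "still matches?", h1 == h2)  # False for all three
```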
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,216
8,203
Let’s say this gets implemented and we see the black clouds of fascism once again engulf Europe and the U.S. as well, a scenario not that far away. How long before Apple is forced by governments to flag pictures of demonstrations and of known people fighting against the government?
Apple doesn’t even have to be forced by governments, those governments can just search social media for pics folks have taken, then run traditional hashes against those images to track whomever they’re interested in. I doubt governments are thinking “OK, there was a demonstration, let’s not use the plethora of surveillance tools that we have access to and, instead, wait until we get a communication from Apple. Yes, I know that doesn’t make sense, but conspiracies don’t have to!”
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,216
8,203
The issue here is that they're using AI to generate the hashes; it isn't simply a scan for identical hashes of photos.
Well, then it’s not an issue because they don’t use AI to generate hashes, no machine learning at all. Hashes are generated using a mathematical algorithm.
 

nt5672

macrumors 68040
Jun 30, 2007
3,413
7,268
Midwest USA
I trust that Apple will follow the law. Presumably, none of us support corporations that think they are above the law and can do whatever they want.
After seeing what our American politicians have done recently, surely you can't believe in the rule of law any more. The goal is to make a law against everything, then prosecute only the political opposition (including individuals).

I mean we have pharmaceutical companies, politicians, defense companies, federal bureaucrats, etc. all breaking the law and none of them are ever going to be punished or even held accountable.

In the future, if you speak out about anything that is not politically correct, you will run the risk of being prosecuted. We are not quite there yet, but CSAM scanning puts us a lot closer, simply because it offers an easy way for a law-breaking bureaucrat to do whatever they want, and Apple will have to comply.
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,216
8,203
Tomorrow it’s anti-public-policy memes. And anyone who thinks this can’t happen has probably been in a coma for the past two years.
No, TODAY it’s anti-public-policy memes. Not only can this happen, it’s being done right now using many different means. Anyone who thinks their biting public-policy memes are NOT known about and would ONLY be uncovered using technology like this really just doesn’t understand the surveillance state.
 

MuppetGate

macrumors 6502a
Jan 20, 2012
651
1,086
You're right that understanding hashing isn't important here. But you're missing the mark on what the issue is. Specifically, the hashes have to be agreed upon by competing jurisdictions. "Banned memes" would have to be banned in non-cooperative (politically, etc.) jurisdictions for the hashes to be included in the scan, so it doesn't matter if, e.g., Russia bans memes against Putin, unless the U.S. also bans those memes. This is the type of safety mechanism the researchers who spoke out against this said would need to be required for this type of system to remain safe against government abuse.

So what you’re saying is that if the Chinese tell Apple to use a hashed database they supply by law, Apple will … refuse?
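
For what it’s worth, the safeguard the quoted post describes boils down to a set intersection: a hash ships on devices only if independent organizations in different jurisdictions all list it. A toy sketch with made-up hash values, illustrating the claimed design rather than any verified Apple code:

```python
# Toy sketch of the cross-jurisdiction safeguard described above: a hash is
# included in the shipped database only if it appears in the lists of at
# least two independent organizations. All hash values here are made up.
us_org_list = {"a1b2c3", "d4e5f6", "778899"}      # hypothetical US-based list
non_us_org_list = {"d4e5f6", "778899", "deadbf"}  # hypothetical non-US list

shipped_database = us_org_list & non_us_org_list
print(shipped_database)  # {'d4e5f6', '778899'}; single-source entries are dropped
```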
 

SpectatorHere

macrumors 6502a
Apr 21, 2010
501
109
Well, then it’s not an issue because they don’t use AI to generate hashes, no machine learning at all. Hashes are generated using a mathematical algorithm.
The question is what is hashed, not the hashing itself. I didn't mean to suggest the AI is somehow doing some super-hashing or something. A hash is a one-way function, and nothing new or concerning. My understanding is that Apple is using AI to scan what's on your phone, then creating a hash, then comparing the output with AI-scanned and hashed known-bad images. @jonblatho says I have that wrong, though, and that could be the case.
 

SpectatorHere

macrumors 6502a
Apr 21, 2010
501
109
Nowhere, because it’s not part of the iCloud Photos CSAM detection implementation.

If you heard somewhere that machine learning/computer vision is involved with that, either it’s being confused with the aforementioned iMessage feature or it’s just flat-out wrong.
If no AI is used at all in this portion, then I have been misled or have misunderstood.

I wonder if the misunderstanding comes from the creation of the derivative image. Do you know how that is accomplished?
 

Jim Lahey

macrumors 68030
Apr 8, 2014
2,643
5,422
No, TODAY it’s anti-public-policy memes. Not only can this happen, it’s being done right now using many different means. Anyone who thinks their biting public-policy memes are NOT known about and would ONLY be uncovered using technology like this really just doesn’t understand the surveillance state.

Yes, quite probably. I was just trying to put across the point that the scanning criteria are irrelevant, since the criteria can and will change.
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,216
8,203
but CSAM scanning puts us a lot closer, simply because it offers an easy way for a law-breaking bureaucrat to do whatever they want, and Apple will have to comply.
There are easy ways RIGHT NOW for a law-breaking bureaucrat to do whatever they want WITHOUT Apple. Or are you convinced that the government’s entire surveillance infrastructure today depends solely on Apple’s cooperation?
 

jonblatho

macrumors 68030
Jan 20, 2014
2,513
6,214
Oklahoma
If no AI is used at all in this portion, then I have been misled or have misunderstood.

I wonder if the misunderstanding comes from the creation of the derivative image. Do you know how that is accomplished?
We don’t know the specifics, but basically a series of edits is applied to the known CSAM before hashing, and the same edits are applied to uploaded photos before hashing. This is because if these edits aren’t made, modifying even a single pixel of an image will result in a completely different hash (that is, the new hash won’t be just one character off or anything along those lines). Those edits are intended to fuzz out such common attempts at thwarting the detection system, providing a common baseline between the two images.

But of course, we don’t know exactly which edits are made before hashing because that’d reveal ways of working around the system. Slicing and edge detection probably make the most sense as part of the process, but that’s just a guess.
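
To make that concrete, here’s a toy “fuzzy” hash, a simple average hash, that survives a one-pixel edit. It’s nowhere near Apple’s actual NeuralHash algorithm; it’s only meant to show why perceptual hashes don’t fall apart the way cryptographic ones do:

```python
# Toy average hash, for illustration only (Apple's NeuralHash is far more
# sophisticated): each pixel becomes one bit, set if the pixel is brighter
# than the image's mean, so tiny edits leave the hash unchanged.
def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

image = [[10 * (r + c) for c in range(8)] for r in range(8)]  # synthetic 8x8 gradient
edited = [row[:] for row in image]
edited[0][0] += 3  # nudge a single pixel

print(average_hash(image) == average_hash(edited))  # True: still a match
```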
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,216
8,203
Yes, quite probably. I was just trying to put across the point that the scanning criteria are irrelevant, since the criteria can and will change.
The scanning criteria for this wouldn’t HAVE to change for the government to continue gathering information on folks using ALL of the different ways they do now, beyond this. And all of those ways are more effective than having Apple compare a fuzzy hash and then waiting for Apple to get back to them.
 

fishmoose

macrumors 68000
Jul 1, 2008
1,851
346
Sweden
Apple doesn’t even have to be forced by governments, those governments can just search social media for pics folks have taken, then run traditional hashes against those images to track whomever they’re interested in. I doubt governments are thinking “OK, there was a demonstration, let’s not use the plethora of surveillance tools that we have access to and, instead, wait until we get a communication from Apple. Yes, I know that doesn’t make sense, but conspiracies don’t have to!”

Except CSAM scanning opens up the possibility to scan for, let’s say, demonstrations in real time. All you’d need to do is make a law that requires this implementation, and then Apple would be forced to use this technology for purposes other than the one intended.

The technology opens up the possibility to scan for just about anything; that’s the issue.
 

rme

macrumors 6502
Jul 19, 2008
292
436
The technology opens up the possibility to scan for just about anything; that’s the issue.
Exactly. And that's why "law enforcement" types want this CSAM scanning. It's a fantastic gateway. They couldn't care less about a few pedophiles, but they'd love to be able to scan everyone's phone.
 

CarlJ

macrumors 604
Feb 23, 2004
6,976
12,140
San Diego, CA, USA
This. It's only a matter of time until the (innocent) non-nude photos of people's children suddenly get blurred and red-flagged. It could even be a clothed child wearing salmon-colored (skin-colored) clothes, which could trick the CSAM algorithm into thinking the child was unclothed.
You are showing a fundamental lack of understanding of the system that was proposed. There's no "CSAM algorithm" that's looking for nudity. The agency that's been authorized to deal with this (I don't recall the name; they're the only entity in the US legally allowed to have copies of CSAM images/video, for this specific purpose) publishes a list of hashes of the CSAM material, the really awful stuff. Then your phone, right before uploading your pictures to iCloud, makes hashes of your pictures and looks to see if there are exact matches for any of those hashes in the CSAM database. If it gets more than a certain number of hits, it flags things for further manual examination. It's never going to match your kid wearing a salmon-colored swimsuit or whatever, unless, in the picture, your kid is in the midst of being raped and that picture has already been circulating amongst pedophiles. It's not looking for "kinds" of things, or for bare skin; it's looking for very specific images that have already been found to be circulating in the pedophile "community". And (if I recall correctly), the scanning never happens if the pictures aren't uploaded to iCloud.
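
Stripped of the cryptography (the real design used blinded hashes and private set intersection, so the device itself never learned which photos matched), the flagging logic amounts to something like this sketch, with placeholder hash values:

```python
# Simplified sketch of threshold matching; the real system wrapped this in
# private set intersection so no raw match results were visible below the
# threshold. Hash values here are placeholders.
KNOWN_CSAM_HASHES = {"hash_a", "hash_b", "hash_c"}
REVIEW_THRESHOLD = 30  # Apple's announced figure was roughly 30 matches

def count_matches(photo_hashes):
    """Count how many of an account's photo hashes appear in the known set."""
    return sum(1 for h in photo_hashes if h in KNOWN_CSAM_HASHES)

def flag_for_human_review(photo_hashes):
    """Nothing is surfaced for manual review until the threshold is crossed."""
    return count_matches(photo_hashes) >= REVIEW_THRESHOLD
```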

Scanning on your phone right before uploading means your pictures (and everyone else's) can be stored on iCloud encrypted with a key that only you have. The alternative is that your pictures are uploaded unencrypted, and are scanned in the same way on the iCloud servers. The scanning is going to happen one way or the other. Scanning before sending means your pictures on iCloud are safe from random hackers finding some way to download them.

If the scanning is done on Apple's servers, that means there are petabytes of unencrypted ("plaintext") images sitting there that governments could pressure Apple to scan for other types of images, or rogue employees could be paid off to scan for <whatever some well-funded entity thought might be lucrative>, or for hackers to break in and download. If the scanning is done on your device before sending to iCloud, then the pictures can be sent already-encrypted from your phone, and all those other scenarios fall apart - they simply cannot happen. Now, could the CSAM database be compromised to scan for other photos? Yes, theoretically. But it's a single point of failure to watch carefully, rather than the thousand points of failure we have now. And if you're worrying about that, you should be worrying more about spying code being inserted directly into all the other parts of the OS. You quickly reach a point where you just shouldn't be using a smartphone at all. You either have to allow some level of trust, or you go back to not communicating electronically at all.
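
The order of operations is the whole point: hash on the device first, then encrypt with a key only the user holds, then upload. A hypothetical sketch with illustrative names (not Apple's actual protocol, which used "safety vouchers" rather than anything this simple):

```python
# Hypothetical "hash, then encrypt, then upload" flow; illustrative only.
# Uses the third-party `cryptography` package (pip install cryptography).
import hashlib
from cryptography.fernet import Fernet

def prepare_upload(photo_bytes: bytes, user_key: bytes):
    # Stand-in for a perceptual hash, computed while the photo is still plaintext.
    fingerprint = hashlib.sha256(photo_bytes).hexdigest()
    # Only the user holds this key, so the server stores unreadable ciphertext.
    ciphertext = Fernet(user_key).encrypt(photo_bytes)
    return fingerprint, ciphertext

key = Fernet.generate_key()
fingerprint, blob = prepare_upload(b"raw photo bytes", key)
# The server receives `blob` plus match metadata derived from `fingerprint`;
# the plaintext photo never leaves the device unencrypted.
```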

Now, Apple also has a completely separate system, which parents can optionally enable, for child accounts only, that does look for nudity and such, using machine learning algorithms. But if that system finds something, all it does is present the child with a dialog box that says, "this image may be inappropriate, do you still want to see it?", and neither the event nor the child's response is reported to the parents, or to Apple, or to any authorities.

Apple made a major PR misstep in announcing the CSAM-scanning system and the "warn kids about nudity" system at the same time, and people inevitably conflated the two systems in their heads and multiplied their outrage. Too much of this is largely uninformed people getting hold of crumbs of data and getting outraged, assuming they have a reasonably good understanding of the whole situation and all the other moving pieces, when they very much do not.
 

CarlJ

macrumors 604
Feb 23, 2004
6,976
12,140
San Diego, CA, USA
And THAT is the problem! Everybody thinks they understand how all of this will work, and then we just fall into our old, bad habit of TRUSTING THE GOVERNMENT to have our best interests at heart.
You should absolutely destroy all your smartphones, tablets, home automation, and computers today, and never touch the Internet again, because everything you're deathly afraid will happen probably already has. If the people you're paranoid about really wanted to come after you the way you think, it wouldn't be some upcoming thing; it would have already happened. Your best bet is to ditch all your electronics, sell all your major possessions for cash, cut up your credit cards, get a fake ID, move somewhere far away where no one knows you, and try to keep your head down for the rest of your life. Go completely off-grid. Make sure to duck into the shadows whenever a helicopter comes over - oh, wait, modern drones at 1,000 ft are basically inaudible. Probably best to just stay inside, or maybe under tree cover in a forest.
 