
5H3PH3RD

macrumors 6502
Nov 3, 2011
260
174
Ethosik said:
You are fighting me about the entire process of false positives, saying I am wrong and don't know how it works. Yes, you are battling me about false positives.

Regarding the image.....it was manipulated to show, as an example, that a car and a dog can match. They would NOT demonstrate a known CSAM image....that would be illegal. So obviously it would not be the same kind of example.

Not one statement I have made was incorrect if false positives are even POSSIBLE. The presence of a manual review process proves there is the potential for false positives.

I’m fighting you on your incorrect/manipulated white paper quotes, which has now turned into pushing back on your total lack of understanding of this topic. There is a possibility of false positives, but it’s not what you think it is.
 
Last edited by a moderator:

Ethosik

Contributor
Oct 21, 2009
7,834
6,763

"I don't understand it" yet you fundamentally agree with me about false positives.....okay then.

For the last time, read my post. It was in no way misquoted and the only manipulation I did was to add context to it. If you are REALLY that upset that I changed "the hash" to "the NeuralHash" then I am VERY sorry......There was NO context in my post regarding NeuralHash, so I added context about the hashing in question. But how the hell does that prove I have a "total lack of understanding on this topic"? As I said, I did significant research when this was first proposed. The examples were similar to ones shown before.....the ones I saw were a photo of a flower and a curtain patterned with that flower producing the same hash, which would get flagged.

This is ALL we have to go by for examples BTW. You cannot actually demonstrate CSAM because it is illegal to show those images. So we have cars matching dogs.....flowers matching curtains....etc.

How is what I said wrong? False positives ARE possible.....therefore an image that is NOT CSAM could get flagged. Therefore, the hashing makes a judgment call about what it detects. It is not an exact match. I have said all of this before and you essentially agree with me. So where am I "totally lacking" here?
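As a rough illustration of the kind of matching being argued about here — a generic "average hash" sketched in Python, NOT Apple's actual NeuralHash, with made-up 8x8 pixel grids standing in for images:

```python
# Toy "average hash" (aHash) sketch -- a generic perceptual hash, not NeuralHash.
# It shows why visually similar images can hash the same even though the
# underlying files differ, which is exactly where a false positive comes from.

def average_hash(pixels):
    """pixels: an 8x8 grid of grayscale values (0-255). Returns 64 hash bits."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    # Each bit answers: "is this pixel brighter than the image's average?"
    return [1 if p > avg else 0 for p in flat]

def hamming(a, b):
    """How many hash bits differ between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# A stand-in "known image", a slightly brightened/re-saved copy, and an unrelated image.
original  = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
near_copy = [[min(255, v + 6) for v in row] for row in original]
unrelated = [[(r * c * 37) % 256 for c in range(8)] for r in range(8)]

h_orig, h_copy, h_other = map(average_hash, (original, near_copy, unrelated))
print("edited copy vs original:", hamming(h_orig, h_copy), "bits differ")   # 0 or close to it
print("unrelated vs original:  ", hamming(h_orig, h_other), "bits differ")  # many

# A matcher has to tolerate a few differing bits so crops and re-saves still
# match; that tolerance is the opening for the occasional wrong image to match.
```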

I REALLY don't appreciate it when people who don't even know someone say they "totally lack understanding" yet agree with the very concept that person is describing.....that alone means the person actually DOES know the topic at hand.

Again, you agree false positives are possible. My original example before ALL OF THIS was much closer than a manipulated car image and a dog. It is far more likely for a "mature" looking 16/17 year old and a "young" looking 23/25 year old in the same poses to match. "Visually similar image."
 
Last edited by a moderator:

5H3PH3RD

macrumors 6502
Nov 3, 2011
260
174
"I don't understand it" yet you fundamentally agree with me about false positives.....okay then.

For the last time, read my post. It was in no way misquoted and the only manipulation I did was to add context to it. If you are REALLY that upset I changed "the hash" to "the NeuralHash" then I am VERY sorry......There was NO context in my post regarding NeuralHash so I added context to the Hashing in question. But how the hell does that prove I have a "total lack of understanding on this topic". As I said I did significant research when this was first proposed. The examples were similar to ones shown before.....the ones I saw were for a flower and a curtain of said flower producing the same hash and would get flagged.

This is ALL we have to go by for examples BTW. You cannot actually demonstrate CSAM because it is illegal to show those images. So we have cars matching dogs.....flowers matching curtains....etc.

How is what I said wrong? False positives ARE possible.....therefore an image NOT CSAM could get flagged. Therefore, the hashing takes liberties into judgment what it detects or not. It is not a perfect match. I have said all of this before and you essentially agree with me. So where am I "totally lacking" here?

I REALLY don't appreciate it when people who don't know others say they "totally lack understanding" yet agree with the concept that person is saying.....that alone means that person actually DOES know the topic at hand.

Again, you agree false positives are possible. My original example before ALL OF THIS was way more close than a manipulated car image and a dog. It is far more likely for a "mature" looking 16/17 year old and a "young" 23/25 year old in the same poses to match. "visually similar image"
The images were manipulated to have the same hash; they are not natural pictures.

NO - the example you gave will not be flagged as a false positive.
 
Last edited by a moderator:

Ethosik

Contributor
Oct 21, 2009
7,834
6,763
Then please.....give me an example of a false positive. I and others posted matches and you are saying "nope, those won't get flagged". They match, which causes a flag. So please....give me an example.

Why aren't you targeting Analog Kid too with this? He gave examples similar to mine.
 
Last edited by a moderator:

5H3PH3RD

macrumors 6502
Nov 3, 2011
260
174
You are the one making the statement; you present the examples to defend your incorrect stance.

So far the only examples I have seen presented are images that have been manipulated to provide the same hash.
 

Ethosik

Contributor
Oct 21, 2009
7,834
6,763
I asked you to give me an example. I gave many and you said nope to them all.......So you CLEARLY know the correct ones, so please provide an example.

To your point, you are clearly an expert in the field and I am just someone passionate about this feature who looked up the details. You know way more than I do, obviously, so instead of just saying NO, how about providing correct examples?
 

5H3PH3RD

macrumors 6502
Nov 3, 2011
260
174
What example am I going to give you exactly? I’m telling you it isn’t going to happen - so you want me to show you a blank screen?

You are the one claiming there are all these images out there that hash match but aren’t the same picture. Still waiting for this mind blowing evidence.
 

Ethosik

Contributor
Oct 21, 2009
7,834
6,763
I thought you said false positives are possible.....now it's not going to happen? I am asking for an example of a false positive.

How about that car and dog? A manipulated image is still an image........they produce the same hash match.
 

5H3PH3RD

macrumors 6502
Nov 3, 2011
260
174

Since it’s ok for you to edit posts, here we go. You just answered your own question. Manipulated images can give a false positive. Now explain to me how a genuine picture of your gf naked, who you think could resemble a naked child in a similar pose, is going to give a false positive.
 
Last edited by a moderator:

poseidondev

macrumors regular
Mar 9, 2015
144
351
It's a shame it took Apple so long to acknowledge the potential risks of the technology at the foundation of this initiative, but let's not look a gift horse in the mouth, I suppose.

That said, of equal concern is the faulty implementation of the technology and the false sense of security as represented by Apple.

Multiple security researchers have done a deep dive, but I think Sarah Jamie did a good job at covering most of it.

From the issues with probabilities on false positives, to the underlying protocols, to the issues of hash collisions.

The entire ordeal just seemed ill-advised.

There's a more lay-friendly Twitter thread that covers some of this too.

All in all, I think it's good that Apple recognized the issues at hand and decided to try and help out in different ways.
 
  • Like
Reactions: gusmula

Ethosik

Contributor
Oct 21, 2009
7,834
6,763
I did explain. A mature-looking 16/17 year old can be in the database. We are not just talking about very young children here. So someone could have the same body type....curves, feminine features, you get the idea. Same pose...etc.

A 17 year old is still technically a child.
 

5H3PH3RD

macrumors 6502
Nov 3, 2011
260
174
Cool story - now explain how a genuine photo from your phone will produce a false positive against that image?
 

Ethosik

Contributor
Oct 21, 2009
7,834
6,763
I already explained it. Someone in a similar pose, with a similar body type and features. It's a close image. If the system can detect manipulations and cropping, how far does it go?
 

5H3PH3RD

macrumors 6502
Nov 3, 2011
260
174
So you just outlined that you don’t even know how file hashing works at its most basic level.
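For reference, the most basic level looks like this — an ordinary cryptographic file hash, sketched with made-up stand-in bytes (the perceptual hashing Apple described is a deliberately looser kind of matching):

```python
# An ordinary cryptographic hash (SHA-256 here) matches only on the exact same
# bytes: flip a single bit anywhere in the file and the digest is completely
# different. There is no notion of a "close" or "similar" match at this level.
import hashlib

photo = bytes(range(256)) * 100   # stand-in for an image file's raw bytes
edited = bytearray(photo)
edited[0] ^= 1                    # change one bit of one byte

print(hashlib.sha256(photo).hexdigest())
print(hashlib.sha256(bytes(edited)).hexdigest())
# The two digests look nothing alike. "Similar-looking photo" reasoning only
# enters the picture with perceptual hashes, which are designed to survive
# edits like resizing and re-compression -- a different beast by design.
```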
 

Ethosik

Contributor
Oct 21, 2009
7,834
6,763
5H3PH3RD said:
I can’t - because it is so statistically unlikely
So then why even have a threshold of 30 at all? And manual reviews? Why not send people to jail on the second match? I would say first occurrence but wanted to give some grace here.
 

5H3PH3RD

macrumors 6502
Nov 3, 2011
260
174

Why might it be important to set a threshold, albeit a low one 🤔 and when someone is flagged why might there be a manual review process 🤔
 
Last edited by a moderator:

Ethosik

Contributor
Oct 21, 2009
7,834
6,763
If it is so statistically unlikely, there is no need for this amount of infrastructure. People are wrongfully convicted of crimes all the time, even with more evidence than this would provide. If this was truly a false positive, a lawyer and an investigation would have it resolved in an hour. If it will happen SO RARELY, why go through this amount of effort?

And I wouldn't necessarily consider 30 a low threshold.
 
Last edited by a moderator:

Analog Kid

macrumors G3
Mar 4, 2003
9,019
11,801
Ethosik said:
Yes that is what I am saying. Why wait until 30 flags? If this is as perfect as you are all saying, the FIRST occurrence of this should get flagged to the authorities because it CAN'T make false positives. You are all saying how perfect this system is and that our concerns are "ridiculous", but then why not just set the threshold at 1? The presence of a threshold system and Apple's own words about a very rare false positive chance prove our concerns. That is the entire definition of a false positive.

I’m not sure what you’re leading towards here. It sounds like what you’re saying is that we shouldn’t stop people who have 30 strikes against them because our technique isn’t sensitive enough to detect only one?

No detection technique is perfect. Perfect doesn’t exist. What you have is a trade off between sensitivity and false alarms. You could make the system sensitive enough that it picks up every individual CSAM image everywhere, but the trade off is that you’ll have a ton of false alarms that consume a lot of human resources and undermine trust in the system.

Instead, Apple made it less sensitive to violations and drove their expected false alarm rate to 1 false alarm in 1,000,000,000,000 accounts. And they structured it so that even if the system flags an account, the government isn’t involved until Apple has verified the match. The accounts, their contents, the hashes, the derivative images and the detection count are encrypted to everyone, including Apple, until there are 30 hash matches, which together form the key to unlock the cryptographic chain.
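The "together form the key" part refers to a threshold scheme. Here is a toy Shamir-style sketch of that general idea — not Apple's actual safety-voucher protocol, and with a threshold of 3 instead of 30 just to keep the demo small:

```python
# Toy threshold secret sharing: a key is split into shares such that any
# `threshold` of them reconstruct it, while fewer reveal nothing useful.
# Illustrative only -- not Apple's actual voucher construction.
import random

PRIME = 2**127 - 1  # arithmetic is done in a prime field

def make_shares(secret, threshold, count):
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, count + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

key = random.randrange(PRIME)                      # pretend this unlocks the vouchers
shares = make_shares(key, threshold=3, count=10)   # the real threshold discussed is 30

print(reconstruct(shares[:3]) == key)  # True:  enough matches -> key recovered
print(reconstruct(shares[:2]) == key)  # False: below threshold -> still locked
```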


Ethosik said:
I have. The person I replied to EVEN PROVED what I found. A car and a dog matched. And they aren't even visually similar!

When the 30th hash matches, Apple can unlock the hashes and derivative images. A human can then look, not at the actual images themselves, but at some unexplained derivative of them. They see it’s a car, not a known image of child abuse, and do not mark the account as in violation. The fact that falsely matching images are likely to look so different from the known images makes rejecting false positives easier.

So the point here isn’t to look at the false positive rate of the individual steps, but of the system. I don’t know the false positive rate of each hash, but it’s higher than it would be for 30 hashes. The false positive rate of 30 hashes is said to be 1 in a trillion. Once you bring a human into the final verification, the false positive rate will be much lower than 1 in a trillion; it will likely be, for all intents and purposes, zero. If the system begins to show an unexpected sensitivity, then increase the hash length, raise the match threshold, or better train the human reviewers.
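A back-of-the-envelope sketch of why the threshold does most of the work — the per-image error rate and library size below are made-up assumptions for illustration, not Apple's published figures:

```python
# Why requiring many independent matches crushes the account-level false
# positive rate. The numbers are illustrative assumptions, not Apple's figures.
from math import comb

p = 1e-6        # assumed chance that one innocent photo falsely matches a hash
n = 100_000     # assumed number of photos in one library
threshold = 30  # matches required before anything can even be reviewed

def prob_at_least(k, n, p, extra_terms=50):
    """P(at least k false matches among n photos). Sums the dominant terms of
    the binomial upper tail; later terms are vanishingly small."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, min(n, k + extra_terms) + 1))

print(f"P(>= 1 false match):   {prob_at_least(1, n, p):.3e}")                    # roughly 1 in 10
print(f"P(>= {threshold} false matches): {prob_at_least(threshold, n, p):.3e}")  # around 1e-63
# Even with a generous per-image error rate, demanding 30 independent false
# matches in a single library drives the account-level probability toward zero,
# and the human review after that drives it lower still.
```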


Ethosik said:
And it's probably good that my ignorance is showing because I have NO CLUE what these images look like. My main concern is that I know people who are adults in a consenting relationship and share images. But some bodies are different. And CSAM probably includes some "mature" looking 16/17 year olds posing, and some "younger" looking 22-25 year olds could have matching bodies that would POSSIBLY get flagged.

You don’t need to know what these images look like to understand the concepts; that’s why I referred to a dataset of dogs. I’m actually quite uncomfortable even typing about the kinds of content these images contain.

Anyway, just think of it as looking for a few specific images of dogs that have been circulating on the internet. Not any picture of the species, or even those exact dogs, but specific pictures of those specific dogs. It’s not looking for a kind of image, it’s not looking for particular poses or scenes, it is looking for specific images.

When the neural net is trained, it’s trained for the images they’re looking for and it is trained against images it is not looking for. So the network will get trained against general pornography so that those images don’t trigger the system.
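Very roughly, the pipeline can be pictured as "neural-net descriptor in, short bit string out", with a match meaning agreement on the bits. The following is a generic locality-sensitive-hashing sketch with made-up dimensions, not the real NeuralHash internals:

```python
# Generic sketch: turn a (hypothetical) image embedding into hash bits by taking
# the sign of projections onto fixed random hyperplanes. Near-duplicates of a
# specific image land on the same side of almost every hyperplane; a different
# photo -- even of the same kind of subject -- lands elsewhere.
import random

random.seed(0)
DIM, BITS = 128, 96
hyperplanes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def hash_bits(embedding):
    """Map an embedding vector to a short bit string."""
    return tuple(1 if sum(w * x for w, x in zip(h, embedding)) > 0 else 0
                 for h in hyperplanes)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

db_image  = [random.gauss(0, 1) for _ in range(DIM)]       # a specific known image
edited    = [x + random.gauss(0, 0.01) for x in db_image]  # re-saved / lightly edited copy
unrelated = [random.gauss(0, 1) for _ in range(DIM)]       # same kind of scene, different photo

print("edited copy :", hamming(hash_bits(edited), hash_bits(db_image)), "bits differ")    # near zero
print("unrelated   :", hamming(hash_bits(unrelated), hash_bits(db_image)), "bits differ") # about half
```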

Ethosik said:
Then why even have a manual review process at all if it absolutely cannot flag false positives? If you get one hit you should go to jail....right? If you say no, then there is a chance of false positives, which by definition means the content in question doesn't match the database.

I think you mean false negatives here, but yes that’s the trade off Apple was willing to make. You could get away with up to 29 illegal images on your phone, and that number was public.

One illegal image should be enough to put someone away, but the system isn’t discriminative enough to pull that off without trading off other things we care about. So the idea was to catch the biggest monsters, or more likely find the image caches that people are distributing.
 

Ethosik

Contributor
Oct 21, 2009
7,834
6,763
Thank you for explaining things. I appreciate it.

I don’t think I was referring to false negatives. I was referring to false positives. A false negative would be something illegal that was not flagged.

So if a false positive is so incredibly rare, probably rarer than all false arrests combined, why not set the threshold to one or two and then involve the authorities?

I’m concerned about false positives but if that is truly almost impossible I say anyone having even one illegal image should be arrested.
 

sideshowuniqueuser

macrumors 68030
Mar 20, 2016
2,880
2,888
Good on Apple!

First they came for the socialists, and I did not speak out—
Because I was not a socialist.

Then they came for the trade unionists, and I did not speak out—
Because I was not a trade unionist.

Then they came for the Jews, and I did not speak out—
Because I was not a Jew.

Then they came for me—and there was no one left to speak for me.



With the rise of populist-driven, one-trick-pony political movements, it is truly great to see Apple's stance. Privacy is vital, as is the right to free speech.
Yeah nah, I don't feel like Apple is quite trustworthy on this. I constantly feel censored by Apple. Anytime I try to swear in a txt message on my phone, Apple autocorrects it to "duck" etc. So damn annoying. Piss off Tim Cook, get your virtue signalling bullcrap out of my phone.
 
  • Haha
Reactions: ddhhddhh2

5H3PH3RD

macrumors 6502
Nov 3, 2011
260
174
Yeah nah, I don't feel like Apple is quite trustworthy on this. I constantly feel censored by Apple. Anytime I try to swear in a txt message on my phone, Apple autocorrects it to "duck" etc. So damn annoying. Piss off Tim Cook, get your virtue signalling bullcrap out of my phone.
Literally fixed in iOS 17
 
  • Like
Reactions: sideshowuniqueuser