The unlucky one-in-a-million image that tricks the system by sheer pixel coincidence would be something completely unrelated to children, like a picture of a bucket of sand or a sunset. Human review would toss it in a split second. Fact is, it wouldn't even reach human review, because one match isn't enough; an account has to rack up multiple matches.
On the other hand, altered versions of the original known CSAM would be easily caught by the AI (again, read up on how PhotoDNA works).
There is no contradiction.
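For intuition, here's a minimal sketch of the principle. This is a toy "average hash", not the actual PhotoDNA or NeuralHash algorithms (those are proprietary / learned models), and the images and threshold are made-up stand-ins, but it shows the same idea: edits to one image barely move its hash, while an unrelated image lands far away, and even an unlucky collision wouldn't trigger review on its own because the system waits for multiple matches per account.

```python
import numpy as np

def average_hash(img: np.ndarray, hash_size: int = 8) -> np.ndarray:
    """Downsample to hash_size x hash_size block means, threshold at the
    global mean: a 64-bit fingerprint of the image's coarse structure."""
    h, w = img.shape
    blocks = img.reshape(hash_size, h // hash_size,
                         hash_size, w // hash_size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing hash bits; small means 'probably same source image'."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 256)
known  = np.outer(x, x) + 0.1 * rng.random((256, 256))        # stands in for a database image
edited = np.clip(known * 1.2 + 0.05, 0, 1)                    # brightness/contrast edit of the SAME image
sunset = np.outer(x[::-1], x) + 0.1 * rng.random((256, 256))  # an unrelated photo

h_known = average_hash(known)
print("edited copy   :", hamming(h_known, average_hash(edited)))  # a few bits: match
print("unrelated     :", hamming(h_known, average_hash(sunset)))  # large: no match
```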
I have read it. It makes no sense. Let me put it another way, using @hans1972's example.
Picture 1: Someone takes a picture of you where you live. Let's say you're standing in front of a window.
You walk away for a few minutes.
Picture 2: Someone takes a picture of you standing in front of the same window, in approximately the same pose as in the first picture.
Somehow picture 1 becomes part of the CSAM database. If NeuralHash is any good, it should flag only picture 1 and not picture 2.
So we are saying that in this example, picture 2 will NOT get flagged, even though it is nearly identical... correct?
Okay, so I fire up Photoshop and warp picture 1 until it COMPLETELY MATCHES picture 2. Yet all these reports say that the modified picture 1 (which now matches picture 2) will get flagged, because the hash survives crops, edits, manipulations and so on. But somehow picture 2 itself does not?
Again, that is the contradiction: ANY modification of picture 1 gets flagged, but picture 2 does not? How?
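To make concrete what I'm asking, here's a continuation of the toy snippet above (same made-up `average_hash`/`hamming` helpers and synthetic images, again not the real NeuralHash), with picture 1 and picture 2 modeled as two independent captures of the same composition:

```python
rng2 = np.random.default_rng(1)
scene    = np.outer(x, x)                           # shared composition: same window, same pose
picture1 = scene + 0.1 * rng2.random((256, 256))    # the capture that ended up in the database
picture2 = scene + 0.1 * rng2.random((256, 256))    # independent capture a few minutes later
warped1  = np.roll(picture1, shift=2, axis=1)       # stand-in for my Photoshop warp

h1 = average_hash(picture1)
print("warped picture 1:", hamming(h1, average_hash(warped1)))
print("picture 2       :", hamming(h1, average_hash(picture2)))
# Flagging is nothing but "distance <= some threshold" against hashes of
# specific known images; there is no rule saying "edits of picture 1 match
# but picture 2 doesn't", only distances. Note the limit case: a warp that
# makes picture 1 pixel-identical to picture 2 makes their hashes identical
# too, so at that extreme both match or neither does.
```

Whether a real hash like NeuralHash puts enough fine-grained pixel detail into those 96 bits to separate two near-identical captures is exactly the question.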