In the GitHub thread there are better collision examples with the same hash: two photos with totally different content that both still look like normal photos.
Anyway, I took the time to go through this talk again:
https://www.apple.com/105/media/us/...enix-security-symposium-tpl-us-2021_16x9.m3u8 Actually, it's interesting: they had planned a lot more "safety guards" than I remembered, especially for exactly this kind of attack:
View attachment 2149182
They were actually planning to run a second, different hashing algorithm on their servers (NeuralHash itself would have run on the devices) to double-check any positive matches, making it unlikely for these "fake" collision images to be flagged as CSAM by the overall system. And of course, they also promised to have human reviewers at the end.
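To illustrate why that second, independent server-side hash defeats this attack, here's a toy sketch. It's purely hypothetical: the function names are mine, and I'm using salted cryptographic hashes as stand-ins for the two perceptual hashes (NeuralHash on-device, Apple's undisclosed second algorithm on the server). The point is just that an image crafted to collide with one hash has essentially no chance of also colliding with an independent second one:

```python
import hashlib

# Hypothetical stand-ins for the two independent perceptual hashes:
# device_hash plays the role of NeuralHash (runs on-device),
# server_hash the role of the second, server-side algorithm.
def device_hash(image_bytes: bytes) -> str:
    return hashlib.sha256(b"device:" + image_bytes).hexdigest()[:16]

def server_hash(image_bytes: bytes) -> str:
    return hashlib.sha256(b"server:" + image_bytes).hexdigest()[:16]

def is_flagged(image_bytes: bytes,
               known_device: set, known_server: set) -> bool:
    # A match only counts if BOTH independent hashes agree; a crafted
    # image that collides with the on-device hash alone is rejected.
    return (device_hash(image_bytes) in known_device
            and server_hash(image_bytes) in known_server)

known_image = b"known-bad-image"
known_device = {device_hash(known_image)}
known_server = {server_hash(known_image)}

# Simulate an adversarial image that (by construction) collides with
# the on-device hash only, like the GitHub collision examples:
fake = b"innocent-looking-collision"
known_device.add(device_hash(fake))

print(is_flagged(known_image, known_device, known_server))  # True
print(is_flagged(fake, known_device, known_server))         # False
```

Real perceptual hashes match on similarity rather than exact equality, so this is a simplification, but the layered-check logic is the same.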
Still, I sleep better without any CSAM scanning going on. But at least their implementation was considerably better than what Microsoft, Google, etc. are doing.