With that in mind, the question I had is: what exactly is the manual review process that gets triggered once images are flagged?
Well, all we know is what Apple says, which is:
Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images. Apple then manually reviews each report to confirm there is a match, disables the user’s account, and sends a report to NCMEC. The threshold is set to provide an extremely high level of accuracy that accounts are not incorrectly flagged. This is further mitigated by a manual review process where Apple reviews each report to confirm there is a match. If so, Apple will disable the user’s account and send a report to NCMEC.
So it certainly sounds like they are looking at the pictures, since they'd pretty much have to, but they're only looking at the ones that have specifically been flagged by the algorithm. The technical documents make it clear that they have no access to photos that are not flagged:
...it does so while providing significant privacy benefits over existing techniques since Apple only learns about users’ photos if they have a collection of known CSAM in their iCloud Photos account. Even in these cases, Apple only learns about images that match known CSAM.
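The "only when the threshold is exceeded" part isn't just policy, it's cryptographic: Apple's technical summary attributes that property to threshold secret sharing, meaning the material needed to read the safety vouchers simply doesn't exist on Apple's side until enough matching vouchers accumulate. As a rough illustration of the idea, here's a generic Shamir-style sketch in Python. It's not Apple's actual implementation, and the key, threshold, and share counts are made up; it just shows how a key can be split so that fewer than the threshold number of shares reveal nothing:

```python
import random

# Small prime field for the demo; a real scheme would use a much larger prime.
PRIME = 2**61 - 1

def make_shares(secret, threshold, num_shares):
    """Split `secret` into shares; any `threshold` of them reconstruct it,
    while fewer than `threshold` reveal essentially nothing."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    poly = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, num_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the polynomial's constant
    term (the secret), but only given enough distinct shares."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

# Hypothetical setup: the per-account decryption key is the secret, and each
# flagged photo's safety voucher carries one share of it.
account_key = 123456789
THRESHOLD = 30
shares = make_shares(account_key, THRESHOLD, num_shares=100)

print(reconstruct(shares[:THRESHOLD]) == account_key)      # True: threshold met
print(reconstruct(shares[:THRESHOLD - 1]) == account_key)  # False (with overwhelming probability)
```

The point of the sketch is just the shape of the guarantee: below the threshold, the shares sitting on the server are individually useless; at or above it, the vouchers become readable and the manual review kicks in.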
I guess the logic here is that anybody who has a critical mass of "known CSAM" in their account doesn't deserve as high a level of privacy. It's also worth keeping in mind that, assuming the algorithms work properly, Apple employees aren't looking at "private" images, per se — these would all be images that are already making the rounds online.
The threshold is also supposed to be high enough that false positives should be extremely rare. A photo or two mistakenly flagged as CSAM due to a false collision is understandable, but if an account has 100 or more flagged photos, that definitely warrants further investigation.
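To put some rough numbers on that intuition: Apple hasn't published its per-image false-match rate or the exact threshold, so the figures below are purely hypothetical, but the shape of the math is what matters. If each photo has some tiny, independent chance of a false collision, the odds of an innocent account crossing a meaningful threshold collapse very quickly:

```python
from math import lgamma, log, exp

def prob_at_least(n, t, p, max_terms=200):
    """P(at least t false matches among n photos), treating each photo as an
    independent Bernoulli trial with false-match probability p.
    Works in log space so the huge binomial coefficients don't overflow."""
    def log_pmf(k):
        return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                + k * log(p) + (n - k) * log(1 - p))
    return sum(exp(log_pmf(k)) for k in range(t, min(n, t + max_terms) + 1))

# Purely hypothetical inputs; Apple has not published these numbers.
n = 20_000   # photos in a fairly large iCloud library
p = 1e-6     # assumed chance that any one innocent photo false-matches
for t in (1, 2, 10, 30):
    print(f"P(at least {t:2d} false matches): {prob_at_least(n, t, p):.2e}")
```

With those made-up inputs, a single stray collision happens to roughly 2% of accounts, but ten or more is already down around 10⁻²⁴, and thirty is so improbable it may as well be zero. That asymmetry is exactly what a threshold is designed to exploit.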
The fact that the user has them in their iCloud Photo Library is obviously "private," technically speaking, but that sounds like it's a distinction for lawmakers, lawyers, and judges to sort out. However, despite all of Apple's highfalutin' talk about privacy, it's always reserved the right to monitor anything that's stored in your iCloud account. From Apple's iCloud Terms of Service:
Apple reserves the right at all times to determine whether Content is appropriate and in compliance with this Agreement, and may screen, move, refuse, modify and/or remove Content at any time, without prior notice and in its sole discretion, if such Content is found to be in violation of this Agreement or is otherwise objectionable.
That section actually follows a long list of things you're not allowed to do with iCloud, which includes "upload, download, post, email, transmit, store or otherwise make available any Content that is unlawful, harassing, threatening, harmful, tortious, defamatory, libelous, abusive, violent, obscene, vulgar, invasive of another’s privacy, hateful, racially or ethnically offensive, or otherwise objectionable," along with "plan or engage in any illegal activity."