No, this was not hashing like SHA-256. A cryptographic hash like SHA-256 aims to produce wildly different hash values for two pieces of input data even when they differ only slightly (e.g. one image and a copy with a few pixels altered). The hash values are also non-reversible, i.e. you cannot recreate the original data from the hash.
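For a concrete feel for that avalanche property, here is a minimal Python sketch using the standard library's hashlib; the two byte strings are made-up stand-ins for an image file and a copy with one byte changed:

```python
import hashlib

# Stand-ins for an image file and a copy with a single byte ("pixel") altered.
original = b"\x00" * 1000 + b"\x00"
altered  = b"\x00" * 1000 + b"\x01"

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(altered).hexdigest())
# The two digests share no visible structure: flipping one input bit flips
# roughly half of the 256 output bits, and neither digest reveals anything
# about the input it came from.
```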
What Apple was planning to do was semantic hashing, which is very different: it uses a machine learning classifier trained to analyze the content of the images, so two images showing similar things are supposed to get similar hash values. If the algorithm determines that an image on your phone shows things similar to those in a known CSAM picture, that counts as a match.
I do not think Apple ever provided specifics on how loose this similarity measure would be. They said the system was intended to catch minor alterations of the known CSAM pictures, so perhaps the tolerances were rather tight. On the other hand, if that was the intention, Apple could have chosen to exclude photos the user took with the camera and scan only downloaded pictures, but from what I understand they did plan to scan the user's own photos as well.
(Also, semantic hashes are reversible, which is why Apple planned to encrypt the known CSAM hashes stored on the phones and keep them out of the user's reach.)
It's worth reading the document.
It is a perceptual hash, not a semantic hash. It isn't seeking images with the same meaning (semantics); it's looking for images that look like the same image. It aims to produce a common hash value for variations of a specific image: cropped, rotated, color-shifted, and probably a lot of other transformations.
In particular, this is not an image classifier. It is not inferring CSAM/not-CSAM. It is an image detector, determining whether your image is already in a known database. The downside is that it is not looking for, and will not detect, abusive imagery you may have created yourself; it is only checking whether you have a copy of an image that is already in circulation.
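To make that detector-not-classifier distinction concrete, here is a minimal sketch of a classic perceptual hash (a difference hash, in Python with Pillow). This is not Apple's NeuralHash, which is based on a neural network, but the matching idea is the same: variations of one image land at nearby hash values, and a "match" is a small Hamming distance to an entry in the known database. The function names and the max_distance threshold are illustrative assumptions:

```python
from PIL import Image  # pip install Pillow

def dhash(path, hash_size=8):
    """Difference hash: a simple perceptual hash (not Apple's NeuralHash)."""
    # Shrink to a tiny grayscale image. This discards detail, compression
    # artifacts, and color shifts while keeping the picture's coarse structure.
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    # Each bit records whether a pixel is brighter than its right-hand neighbor.
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits  # a 64-bit integer for the default hash_size

def is_known(h, known_hashes, max_distance=4):
    """Detector, not classifier: a match is a small Hamming distance
    to some hash already in the known database."""
    return any(bin(h ^ k).count("1") <= max_distance for k in known_hashes)
```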
If this algorithm determines that an image on your phone shows things similar to a known CSAM image, that would not count as a match. If it determines that an image on your phone is a manipulated version of a known CSAM image, that would count as a match.
It won't be perfect. It will miss some CSAM, depending on the manipulations. Hashing implies information reduction, so there is always the possibility of a false positive. A false positive does not imply that your image is "similar" to CSAM; it just means it hashed to the same value and is most likely completely innocuous. This is why it requires 30 positive matches to generate an alert: to give people headroom for a handful of false matches (not to give everyone a pass on 29 criminal images).
Apple has said there is a 1 in a trillion chance that an account would be falsely flagged. If there are a billion iCloud accounts, that works out to about a 1 in a thousand chance that someone, somewhere gets referred to Apple for manual verification. I suspect that number is based on a numerical analysis with some assumptions and probably underestimates the risk, but even so, the risk to any individual should be exceedingly low. Your match results are encrypted so that Apple can't tell anything about them, including whether you have 0 or 29 matches, unless you exceed the 30-match threshold.
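Taking both of those figures at face value (they are the only inputs here, and both are assumptions from the posts above), the 1-in-a-thousand estimate checks out:

```python
p_flag   = 1e-12  # Apple's stated chance of falsely flagging a given account
accounts = 1e9    # rough guess at the number of iCloud accounts

# Chance that at least one of the billion accounts is falsely flagged:
p_any = 1 - (1 - p_flag) ** accounts
print(p_any)  # ~0.001, i.e. about 1 in a thousand
```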
Excluding pictures taken with the camera would leave open the possibility of altering the metadata of downloaded images so they look camera-taken and escape the scan, and it would still mean that if you share your picture with someone else, it gets scanned on their phone.
The hashes are not reversible: if you have a hash, you can't recreate the image from it. There is less data in the hash than in the source image, and that missing data can't be guessed. The hashes are, however, spoofable. With effort, it would be possible to construct an image that has the same hash as another, different image. Most likely the constructed image would look like crap, but it would tickle the hashing function in just the right way.
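The "less data than the source" point is just the pigeonhole principle, and it is easy to demonstrate: shrink any hash far enough and brute-forcing a colliding input becomes trivial. A toy sketch, using a truncated SHA-256 as a stand-in for any short hash (crafting a plausible-looking image that collides with a perceptual hash takes far more effort, but the principle is the same):

```python
import hashlib

def tiny_hash(data, bits=16):
    """Truncated SHA-256: a toy stand-in for any short, lossy hash."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") >> (256 - bits)

target = tiny_hash(b"the real image")

# With only 2**16 possible hash values, a colliding input turns up after
# roughly 65,000 guesses on average: the pigeonhole principle in action.
n = 0
while tiny_hash(b"junk %d" % n) != target:
    n += 1
print(f"collision after {n} guesses")
```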
The hashes are probably encrypted to prevent people from creating spoof images and bogging down the system, and to prevent people from being able to pre-test their own image library for matches.