
The deal is that Apple is saying "we are going to use your device to determine what is in your pictures." The circumstances and scope of those determinations can change, but the expectation that Apple will be doing it is now publicly established. Political affiliations, location and timestamps (this doesn't even need modeling!), illegal objects or substances, etc. Tomorrow, the hyper-parameters or the ontology could be expanded to search for anything. If Apple is introducing the ability to recognize photo content on the device (even if right now it is destined for the cloud as a pre-scan), it doesn't really matter that the model is currently tuned only to find child abuse material, as an example. I can already search my images for "green taxi" and it will find it. Knowing the content of images is what Google does in the cloud, and it's great.
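To make the "the query can be anything" point concrete: once a device maintains a label index over your photos, nothing about the mechanism restricts which queries get run against it. This is a toy sketch, not Apple's or Google's actual pipeline; the filenames and labels are hypothetical model outputs I made up for illustration.

```python
# Toy sketch of querying an on-device photo label index.
# The labels below are hypothetical model outputs, not any vendor's real ontology.

photo_labels = {
    "IMG_0001.jpg": {"green", "taxi", "street"},
    "IMG_0002.jpg": {"beach", "sunset"},
    "IMG_0003.jpg": {"protest", "sign", "crowd"},
}

def search(query):
    """Return photos whose label set contains every term in the query."""
    terms = set(query.lower().split())
    return [name for name, labels in photo_labels.items() if terms <= labels]

print(search("green taxi"))  # ['IMG_0001.jpg']
print(search("protest"))     # ['IMG_0003.jpg'] -- nothing in the mechanism stops a new query
```

The mechanism that finds a "green taxi" is the same one that could find any other labeled content; only the query changes.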

The way I would expect it to work is for the system to recognize the content of images. As far as privacy goes, Google will already automatically scan your photos for faces if you have the face grouping feature turned on. I'm not an expert in how CSAM detection works, but if it only acts as a block list against certain known images, it will be very ineffective.
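To illustrate what a "block list against certain images" means in practice: the usual approach is a perceptual hash compared against a set of known hashes. Apple's announced system uses NeuralHash with private set intersection; the sketch below substitutes a much simpler average hash (aHash) over an 8x8 grayscale grid purely to show the matching logic, and all thresholds and values here are assumptions for the toy example.

```python
# Simplified sketch of perceptual-hash blocklist matching (NOT Apple's NeuralHash).
# An average hash sets one bit per pixel: 1 if the pixel is above the image mean.
# Small edits to an image tend to leave most bits unchanged, so matching is done
# by Hamming distance rather than exact equality.

def average_hash(pixels):
    """pixels: 8x8 list of grayscale values (0-255). Returns a 64-bit int."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_blocklist(img_hash, blocklist, max_distance=4):
    """Flag the image if its hash is near any known-bad hash."""
    return any(hamming(img_hash, h) <= max_distance for h in blocklist)

# A "known" image, a slightly altered copy, and an unrelated (inverted) image.
original = [[10 * i + j for j in range(8)] for i in range(8)]
altered = [row[:] for row in original]
altered[0][0] += 30  # small perturbation; the hash survives it
unrelated = [[77 - p for p in row] for row in original]

blocklist = {average_hash(original)}
print(matches_blocklist(average_hash(altered), blocklist))    # True
print(matches_blocklist(average_hash(unrelated), blocklist))  # False
```

This is exactly why a pure blocklist is weak: it only catches near-copies of images already in the list, never new material, which is the ineffectiveness concern above.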

Ok - if we are going to narrow the scope of the debate to the nature of the scanning, that is the core of my argument to begin with.
