The new system is overkill for iCloud, which Apple already scans. The obvious conclusion is Apple will start to scan photos kept on device, even where iCloud is not used.
>> The obvious conclusion is Apple will start to scan photos kept on device, even where iCloud is not used.
Wrong [1]. It's right there in the first line of the document, which you apparently didn't even read:
CSAM Detection enables Apple to accurately identify and report iCloud users who store
known Child Sexual Abuse Material (CSAM) in their iCloud Photos accounts
This doesn't mean I'm supporting their new "feature".
Partly it falls under the supposition of "if they can, they will". It's also suggested when they tout that they are going to start blurring "naked" pictures, of any kind, sent to children under 14. Which means they need some kind of tech to detect "naked" pictures, locally, across encrypted channels, in order to block them.
In theory, this is different tech from CSAM detection, which is supposed to check hashes against a global database, versus determining the "nakedness" score of an arbitrary picture.
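To make that distinction concrete, here's a rough sketch (illustrative Python; the names and threshold are made up, and real systems use perceptual hashes like NeuralHash or PhotoDNA plus an actual ML model rather than the SHA-256 and constant-score stand-ins below). The first check can only ever flag images that are already in a known database; the second scores any arbitrary picture.

    import hashlib

    # Hypothetical set of fingerprints of already-known images. Real systems use
    # perceptual hashes so re-encoded or cropped copies still match; plain
    # SHA-256 is just a stand-in to keep this runnable.
    KNOWN_FINGERPRINTS = {hashlib.sha256(b"some-known-image-bytes").hexdigest()}

    def matches_known_database(image_bytes: bytes) -> bool:
        """CSAM-style check: is this one of a fixed set of known images?"""
        return hashlib.sha256(image_bytes).hexdigest() in KNOWN_FINGERPRINTS

    def nakedness_score(image_bytes: bytes) -> float:
        """Messages-style check: a classifier judging an arbitrary, never-before-seen
        image. Placeholder for an on-device ML model; returns a score in [0, 1]."""
        return 0.0

    def should_blur(image_bytes: bytes, threshold: float = 0.8) -> bool:
        return nakedness_score(image_bytes) > threshold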
But, scan them once, scan them all, on your phone. The details start to matter less and less.
Also, since they're already scanning all of the photos on iCloud, why would they need to put it locally on the phone?
Finally, I know that Apple scans photos on my device, because that's how I get those "memories" of Furry Friends with vignettes of my cats. I don't even use iCloud. (To be clear, I love this feature. Send me movies of my cats once a week.)
> Partly it falls under the supposition of "if they can, they will".
Hm. But there are many millions of things Apple could do, but haven’t, because it would hurt their business model. So how would what you are proposing they will do help their business model?
> Which means they need some kind of tech to detect "naked" pictures, locally, across encrypted channels, in order to block them.
I know you know this is the case, but to make it clear for anyone reading: Apple is not blocking nude pictures in the kids filter. It’s blurring and giving a message. Again I ask: why would using this technology on non-nude stuff benefit Apple?
Are we worried about Apple or are we worried about the government forcing Apple to do things that this technology enables?
>They're already scanning all of the photos on iCloud
I can't find a source for this. Do you happen to have one?
It seems to me that Apple doesn't want to host CSAM on their servers, so they're scanning your device so that if it does get uploaded, they can remove it and then ban you.
They're not scanning all photos on iCloud, as far as I can tell.
Apple doesn't "scan" iCloud. Not sure what you're talking about. Generally everything in iCloud is E2E encrypted, with the exception of iCloud Backups, where Apple holds onto a decryption key and will use it to comply with subpoenas. But nothing is "scanned," and if you don't use iCloud backup, Apple can't see your data.
iCloud Photos aren’t E2E encrypted, but it’s unlikely they’re scanned for CSAM today because Apple generates effectively 0 references to NCMEC annually.
I also believe Apple doesn't really want to scan your photos on their servers. I believe their competitors do, and Apple considers this compromise (scanning on device against hashes) to be its way of complying with CSAM demands while still maintaining its privacy story.
This is how it always starts. Apple went from "no, it's not possible to unlock the shooter's phone" to "yeah, you can give us the fingerprint of any image (maybe documents too) and we'll check which of our users has it".
Then why do the "CSAM" perceptual hashes live on the device and the checks themselves run on the device? Those hashes could be anything. Your phone is turning into a snitch against you, and the targeted content might be CCP Winnie the Pooh memes or content the people in charge do not like.
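To spell out why "those hashes could be anything" is the worry: a matcher like the toy one below (illustrative Python, not Apple's code) has no idea what the digests in its database represent, so swapping the database contents makes the exact same on-device code flag a completely different set of images.

    from typing import Iterable, List, Set

    def flagged(image_fingerprints: Iterable[str], database: Set[str]) -> List[str]:
        """Return whichever fingerprints appear in the supplied database. Nothing
        here knows or cares whether the database describes CSAM, political memes,
        or anything else; it is just an opaque set of digests."""
        return [fp for fp in image_fingerprints if fp in database]

    # Same code, two different (entirely hypothetical) databases:
    database_a = {"a1b2", "c3d4"}
    database_b = {"e5f6", "a1b2"}
    photos_on_device = ["e5f6", "ffff"]
    print(flagged(photos_on_device, database_a))  # []
    print(flagged(photos_on_device, database_b))  # ['e5f6']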
We are not getting this wrong. Apple is taking an egregious step to satisfy the CCP and FBI.
Future US politicians could easily be blackmailed over the non-illegal content on their phones. This puts our democracy in jeopardy.
The only reason this was announced yesterday is because it was leaked on Twitter and to the press. Apple is in damage control mode.
This isn't about protecting children. It's about control.
This boils down to two separate arguments against Apple: 1) what Apple has already implemented, and 2) what Apple might implement in the future. It's fine to be worried about the second one, but it's wrong to conflate the two.
>It's fine to be worried about the second one, but it's wrong to conflate the two.
Agreed, and just to be clear, I'm worried about that too. It just appears that we (myself and the objectors) have different lines. If Apple were to scan devices in the US and prevent them from sharing memes over iMessage, that would cross a line for me and I'd jump ship. But preventing CSAM stuff from getting on their servers seems fine to me.
I think the situation is clear when we think of this development from a threat modelling perspective.
Consider a back-door (subdivided into code-backdoors and data-backdoors) placed either on-device or on-cloud. (4 possibilities)
Scanning for CP is available to Apple on-cloud (in most countries).
Scanning for CP is available to other countries on-cloud (e.g. Chinese users have iCloud run by a Chinese onshore provider).
Scanning for CP is not available to Apple on-device (until now).
This is where the threat model comes in. Intelligence agencies would like a back door (ideally both Code and Data).
This development creates an on-device data-backdoor because scanning for CP is done via a neural network algorithm plus the use of a database of hashes supplied by a third party.
If the intelligence service poisons the hash database, it won't work, because the neural network scans for human flesh and things like that, not other kinds of content. So the attack works for other sexual content but not political memes. It is a scope-limited back door.
For it to be a general back door, the intelligence agency would need the neural network (part of Apple's on-device code) as well as the hash database to be modified. That requires both a new code back door (which Apple has resisted) and a data back door, both on-device.
Currently Apple has resisted:
Code back doors on device
Data back doors on device (until now)
and Apple has allowed:
Data back doors in cloud (in certain countries)
Code back doors in cloud (in certain countries)
In reality, the option to not place your photos in iCloud amounts to "don't allow any data backdoor". That is because iCloud is itself a data backdoor, since it can be scanned (either by Apple or an onshore data provider).
My analysis is that the on-device scanning does not improve Apple's ability to identify CP since it does so on iCloud anyway. But if my analysis is incorrect, I'd be genuinely interested if anyone can correct me on this point.
iCloud Photos aren't currently end-to-end encrypted, but this system provides a clear path to doing that, while staving off accusations that E2E encryption of iCloud will let people host CP there with impunity.
When the device uploads an image it’s also required to upload a cryptographic blob derived from the CSAM database which can then be used by iCloud to identify photos that might match.
As built at the moment, your phone only “snitches” on you when it uploads a photo to iCloud. No uploads, no snitching.
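A heavily simplified sketch of that flow (Python, with a plain hash lookup standing in for the real design's NeuralHash, blinded database, private set intersection, and match threshold, none of which are reproduced here): the voucher is only produced as part of an iCloud Photos upload, which is the "no uploads, no snitching" property.

    import hashlib
    from typing import Dict, Optional, Set

    # Stand-in for the blinded CSAM hash database shipped to the device.
    BLINDED_DB: Set[str] = {hashlib.sha256(b"known-image").hexdigest()}

    def make_safety_voucher(image_bytes: bytes) -> Dict[str, str]:
        # In the announced design the device cannot tell whether an image matched;
        # vouchers are only readable server-side, and only after a threshold
        # number of matches. Here it is just a fingerprint for illustration.
        return {"fingerprint": hashlib.sha256(image_bytes).hexdigest()}

    def upload_photo(image_bytes: bytes, icloud_photos_enabled: bool) -> Optional[Dict[str, str]]:
        if not icloud_photos_enabled:
            return None  # no upload, no voucher, no "snitching"
        voucher = make_safety_voucher(image_bytes)
        # ...the encrypted photo plus voucher would be sent to iCloud here...
        return voucher

    def server_side_match(voucher: Dict[str, str]) -> bool:
        # Server-side lookup; the real scheme uses private set intersection so
        # non-matches reveal nothing to either side.
        return voucher["fingerprint"] in BLINDED_DB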
We know that every other cloud provider scans uploads for CSAM, they just do it server side because their systems aren’t E2E.
This doesn’t change the fact that having such a scanning capability built into iOS is scary, or can be misused. But in its original conception, it’s not unreasonable for Apple to say that your device must provide a cryptographic attestation that data uploaded isn’t CP.
I think Apple is in a very hard place here. They’re almost certainly under significant pressure to prove their systems can’t be abused for storing or distributing CP, and coming out and saying they’ll do nothing to prevent CP is suicide. But equally the alternative is a horrific violation of privacy.
Unfortunately all this just points to a larger societal issue, where CP has been weaponised and authorities are more interested in preventing the distribution of CP than its creation. Presumably because one of those is much easier to solve, and creates better headlines, than the other.
>iCloud photos are encrypted, so scanning has to happen on device.
Is this true? I feel like Apple benefits from the confusion about "Encrypted at rest" + "Encrypted in transit" and "E2E Encrypted". It's my understanding that Apple could scan the photos in iCloud, since they have the decryption keys, but they choose not to, as a compromise.
I'm keying into this because this document: https://support.apple.com/en-us/HT202303 doesn't show Photos as part of the category of data that "Apple doesn't have access to." That's mentioned only in the context of the E2E stuff.
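One way to picture the distinction being drawn here, as a toy sketch using Python's third-party cryptography package (nothing Apple-specific): when the provider holds the decryption key, "encrypted" data is still scannable on the server; end-to-end encryption means only ciphertext the provider cannot open ever reaches them.

    from cryptography.fernet import Fernet

    # Encrypted at rest / in transit, provider holds the key: the server can
    # decrypt, and therefore scan, whenever it chooses to (or is compelled to).
    provider_key = Fernet.generate_key()                  # stored server-side
    stored_blob = Fernet(provider_key).encrypt(b"photo bytes")
    provider_can_read = Fernet(provider_key).decrypt(stored_blob)  # succeeds

    # End-to-end: the key never leaves the user's devices, so the server only
    # ever holds ciphertext it cannot open.
    user_key = Fernet.generate_key()                      # exists only on the device
    e2e_blob = Fernet(user_key).encrypt(b"photo bytes")
    # The server has e2e_blob but not user_key; trying
    # Fernet(provider_key).decrypt(e2e_blob) would raise InvalidToken.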
It's baffling. It seems like nearly everyone losing their shit over this doesn't understand how it works. Most of the commentary I see here and elsewhere is based on a misunderstanding of the implementation that blends the CSAM scanner with the child messaging scanner.
Perhaps I missed it, but does anywhere in this letter mention that both of these features are optional?
CSAM detection depends on using iCloud Photos. Don't rent someone else's computer if you don't want them to decide what you can put on it.
The content filter for iMessage is for kids' accounts only, and can be turned off. Or, even better: skip iMessage for Signal.