"Think of the children" will always work, no matter what the context is, no matter what the stats are, and no matter what we do. That does not mean that we should not care about the children, and it does not mean that we shouldn't care about blocking CSAM. We should care about these issues purely because we care about protecting children. If there are ways for us to reduce the problem without breaking infrastructure or taking away freedoms, we should take those steps. Similarly, we should also think about the children by protecting them from having their sexual/gender identities outed against their wishes, and by guaranteeing they grow up in a society that values privacy and freedom where they don't need to constantly feel like they're being watched.
But while those moral concerns remain, the evergreen effectiveness of "think of the children" also means that compromising on this issue is not a political strategy. It accomplishes nothing: it will not ease any pressure on technologists, and it will change nothing about the political debates that are currently happening. It never has: we've been having the same debates about encryption since encryption was invented, and I would challenge you to point to any advancement or compromise from encryption advocates that has lessened those debates or appeased encryption critics.
Your mistake here is assuming that anything that technologists can build will ever change those people's minds or make them ease up on calls to ban encryption. It won't.
Reducing the real-world occurrences behind irrational fears doesn't make those fears go away. If we reduce shark attacks on a beach by 90%, that won't make people with a phobia less frightened at the beach, because their fear is not based on real risk analysis or statistics or practical tradeoffs. Their fear is real, but it's also irrational. They're scared because they see the deep ocean and because Jaws traumatized them, and you can't fix that irrational fear by validating it.
So in the real world we know that the majority of child abuse comes from people that children already know. We know the risks of outing minors to parents if they're on an LGBTQ+ spectrum. We know the broader privacy risks. We know that abusers (particularly close abusers) often try to hijack systems to monitor and spy on their victims. We would also in general like to see more stats about how serious the problem of CSAM actually is, and we'd like to know whether or not our existing tools are being used effectively so we can balance the potential benefits and risks of each proposal against each other.
If somebody's not willing to engage with those points, then what makes you think that compromising on any other front will change what's going on in their head? You're saying it yourself, these people aren't motivated by statistics about abuse, they're frightened of the idea of abuse. They have an image in their head of predators using encryption, and that image is never going to go away no matter what the real-world stats do and no matter what solutions we propose.
The central fear that encryption critics have is a fear of private communication. How can technologists compromise to address that fear? It doesn't matter what solutions we come up with or what the rate of CSAM drops to, those people are still going to be scared of the idea of privacy itself.
Nobody in any political sphere has ever responded to "think of the children" with "we already thought of them enough." So the idea that compromising now will change anything about how that line is used in the future -- it just seems naive to me. Really, the problem here can't be solved by either technology or policy. It's cultural. As long as people are frightened of the idea of privacy and encryption, the problem will remain.
> Your mistake here is assuming that anything that technologists can build will ever change those people's minds or make them ease up on calls to ban encryption. It won't.
What makes you think I think that? You have misrepresented me here (effectively straw-manning), but I will assume an honest mistake.
You are right that there are people who will always seek to ban or undermine encryption no matter what, and who use ‘think of the children’ as an excuse regardless of the actual threat. ‘Those people’ as you put it, by definition will never have their minds changed by technologists. Indeed there is no point in technologists trying to do that.
However I don’t think that group includes Apple, nor does it include most of Apple’s customers. Apple’s customers do include many people who are worried about sexual predators reaching their children via their phones though. These people are not ideologues or anti-encryption fanatics.
Arguing that concerns about children are overblown or being exploited for nefarious ends may be ‘true’, but it does nothing to provide an alternative that Apple could use, nor does it do anything to assuage the legitimate fears of Apple’s customers.
Perhaps you believe that there is no way to build a more privacy preserving solution than the one Apple has.
I would simply point out, in that case, that the strategy of arguing against ‘think of the children’ has already lost, and commiserate with you.
I’m not convinced that there is no better solution. Betting against technologists to solve problems usually seems like a bad bet, but even if you don’t think it’s likely, it seems irrational not to hedge, because the outcome of solving the problem would have such a high upside.
It’s worth pointing out that public-key cryptography is a solution to a problem that at one time seemed insoluble to many.
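The once-insoluble problem here was agreeing on a shared secret over a channel anyone can read. A toy sketch of a Diffie-Hellman style exchange illustrates how that became possible (illustrative parameters only; the private keys below are tiny made-up numbers, and a real system would use vetted parameters and large random secrets):

```python
# Toy Diffie-Hellman key exchange: two parties derive the same secret
# without ever transmitting it. Everything sent over the wire (p, g, A, B)
# is public; only a and b stay private.

p = 2**255 - 19  # a public prime modulus (this particular prime is well known)
g = 2            # a public generator

a = 977    # Alice's private key (hypothetical; normally large and random)
b = 1234   # Bob's private key (hypothetical)

A = pow(g, a, p)  # Alice computes g^a mod p and sends A publicly
B = pow(g, b, p)  # Bob computes g^b mod p and sends B publicly

# Each side raises the other's public value to its own private exponent.
# (g^b)^a == (g^a)^b mod p, so both arrive at the same secret.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)

assert shared_alice == shared_bob
```

An eavesdropper sees p, g, A, and B, but recovering a or b from them is the discrete logarithm problem, which is believed to be computationally infeasible at real-world sizes.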
> Arguing that concerns about children are overblown or being exploited for nefarious means may be ‘true’, but it does nothing to provide an alternative that Apple could use
- If the stats don't justify their fears
- And I come up with a technological solution that will make the stats even lower
- Their fears will not be reduced
- Because their fears are not based on the stats
----
> Apple’s customers do include many people who are worried about sexual predators reaching their children via their phones though
Are they worried about this because of a rational fear based on real-world data? If so, then I want to talk to them about that data and I want to see what basis their fears have. I'm totally willing to try and come up with solutions that reduce the real world problem as long as we're all considering the benefits and tradeoffs of each approach. We definitely should try to reduce the problem of CSAM even further.
But if they're not basing their fear on data, then I can't help them using technology and I can't have that conversation with them, because their fear isn't based on the real world: it's based on either their cultural upbringing, or their preconceptions about technology, or what media they consume, or their past traumas, or whatever phobias that might be causing that fear.
Their fear is real, but it can not be solved by any technological invention or policy change, including Apple's current system. Because you're telling me that they're scared regardless of what the reality of the situation is, you're telling me they're scared regardless of what the stats are.
That problem can't be solved with technology, it can only be solved with education, or emotional support, or cultural norms. If they're scared right now without knowing anything about how bad the problem actually is, then attacking the problem itself will do nothing to help them -- because that's not the source of their fear.
> Their fear is real, but it can not be solved by any technological invention or policy change, including Apple's current system. Because you're telling me that they're scared regardless of what the reality of the situation is, you're telling me they're scared regardless of what the stats are.
Not really.
I’m agreeing that parents will be afraid for their children regardless of the stats, and are unlikely to believe anyone who claimed they shouldn’t be. The ‘stats’ as you put it won’t change this.
Not because the stats are wrong, but because they are insufficient, and in fact predation will likely continue in a different form even if we can show a particular form to not be very prevalent. The claim to have access to ‘the reality of the situation’ is not going to be accepted.
You won’t be able to solve the problem through education or emotional support because you can’t actually prove that the problem isn’t real.
You actually don’t know the size of the problem yourself, which is why you are not able to address it conclusively here.
What I am saying is that we need to accept that this is the environment, and if we want less invasive technical solutions to problems people think are real, and which you cannot prove are not, then we need to create them.
> What I am saying is that we need to accept that this is the environment, and if we want less invasive technical solutions to problems people think are real, and which you cannot prove are not, then we need to create them.
And what I'm saying is that this is a giant waste of time because if someone has a phobia about their kid getting abducted, that phobia will not go away just because Apple started scanning photos.
You want people to come up with a technical solution, but you don't even know how to define what a "solution" is. How will we measure that solution absent statistics? How will we know if it's working or not? Okay, Apple starts scanning photos. Are we done? Has that solved the problem?
We don't know if that's enough, because people's fears here aren't based on the real world, they're based on Hollywood abduction movies, and those movies are still going to get made after Apple starts scanning photos.
You are completely correct that the stats are insufficient to convince these people. But you're also completely wrong in assuming that there is some kind of escape hatch or technological miracle that anyone can pull off to make those fears go away, because in your own words: "parents will be afraid for their children regardless of the stats."
If Apple's policy reduces abuse by 90%, they'll still be afraid. If it reduces it by 10%, they'll still be afraid. There is no technological solution that will ease their fear, because it's not about the stats.
----
I'm open to being proven wrong that predation is a serious problem that needs drastic intervention. I'm open to evidence that suggests that encryption is a big enough problem that we need to come up with a technological solution. I just want to see some actual evidence. People being scared of things is not evidence, that's not something we can have a productive conversation about.
If we're going to create a "solution", then we need to know what the problem is, what the weak points are, and what metrics we're using to figure out whether or not we're making progress.
If that's not on the table, then also in your words, we need to "accept that this is the environment" and stop trying to pretend that coming up with technical solutions will do anything to reduce calls to weaken encryption or insert back doors.
> But you're also completely wrong in assuming that there is some kind of escape hatch or technological miracle that anyone can pull off to make those fears go away,
I can’t be wrong about that since I’m not claiming that anywhere or assuming it.
> because in your own words: "parents will be afraid for their children regardless of the stats."
Given that I wrote this, why would you claim that I think otherwise?
> There is no technological solution that will ease their fear, because it's not about the stats.
Agreed, except that I go further and claim that the stats are not sufficient, so making this about the stats can’t solve the problem.
> People being scared of things is not evidence,
It’s evidence of fear. Fear is real, but it’s not a measure of severity or probability.
> that's not something we can have a productive conversation about.
I don’t see why we can’t take into account people’s fears.
> If we're going to create a "solution", then we need to know what the problem is, what the weak points are, and what metrics we're using to figure out whether or not we're making progress.
Yes. One of those metrics could be ‘in what ways does this compromise privacy’, and another could be ‘in what ways does this impede child abuse use cases’. I suspect Apple is trying to solve for those metrics.
Perhaps someone else can do better.
> If that's not on the table, then also in your words, we need to "accept that this is the environment"
This part is unclear.
> stop trying to pretend that coming up with technical solutions will do anything to reduce calls to weaken encryption or insert back doors.
It’s unclear why you would say anyone is pretending this, least of all me. I have wholeheartedly agreed with you that these calls are ‘evergreen’.
I want solutions to problems like the child abuse use cases, such that when calls to weaken encryption or insert back doors are made as they always will be, we don’t have to.
> except that I go further and claim that the stats are not sufficient, so making this about the stats can’t solve the problem.
Statistics are a reflection of reality. When you say that the stats don't matter, you are saying that the reality doesn't matter. Just that people are scared.
You need to go another step further than you are currently going, and realize that any technological "solution" will only be affecting the reality, and by extension will only be affecting the stats. And we both agree that the stats can't solve the problem.
It's not that making this about the stats will solve the problem. It won't. But neither will any technological change. You can not solve an irrational fear by making reality safer.
----
Let's say we abandon this fight and roll over and accept Apple moving forward with scanning. Do you honestly believe that even one parent is going to look at that and say, "okay, that's enough, I'm not scared of child predators anymore."? Can you truthfully tell me that you think the political landscape and the hostility towards encryption would change at all?
And if not, how can you float compromise as a political solution? What does a "solution" to an irrational fear even look like? How will we tell that the solution is working?
You say the stats don't matter; then we might as well give concerned parents fake "magic" bracelets and tell them that they make kids impossible to kidnap. Placebo bracelets won't reduce actual child abuse of course, but as you keep reiterating, actual child abuse numbers are not why these people are afraid. Heck, placebo bracelets might reduce parents' fear more than Apple's system, since placebo bracelets would be a constantly visible reminder to the parents that they don't need to be afraid, while all of Apple's scanning happens invisibly behind the scenes where it's easy to forget.
----
> I want solutions to problems like the child abuse use cases, such that when calls to weaken encryption or insert back doors are made as they always will be, we don’t have to.
Out of curiosity, how will you prove to these people that your solutions are sufficient and that they work as substitutes for weakening encryption? How will you prove to these people that your solutions are enough?
Will you use stats? Appeal to logic?
You almost completely understand the entire situation right now, you just haven't connected the dots that all of your technological "solutions" are subject to the same problems as the current debate.
> Statistics are a reflection of reality.

No, they are the output of a process. Whether a process reflects ‘reality’ depends on the process and how people understand it. This is essential to science.
Even when statistics are the result of the best scientific processes available, they are typically narrow and reflect only a small portion of reality.
This is why they are insufficient.
> When you say that the stats don't matter,
I never said they don’t matter. I just said they were insufficient to convince people who are afraid.
> you are saying that the reality doesn't matter.
Since I’m not saying they don’t matter, this is irrelevant.
> It's not that making this about the stats will solve the problem. It won't. But neither will any technological change. You can not solve an irrational fear by making reality safer.
Can you find a place where this contradicts something I’ve said? I haven’t argued to the contrary anywhere. I don’t expect to get the fears to go away.
As to whether they are rational or not, some are, and some aren’t. We don’t know which are which because you don’t have the stats, so we have to accept that there is a mix.
> Will you use stats? Appeal to logic?
Probably a mix of both, maybe some demos, who knows. I won’t expect them to be sufficient to silence the people who are arguing in favor of weakening encryption, nor to make parents feel secure about their children being protected against predation forever.
> You almost completely understand the entire situation right now, you just haven't connected the dots that all of your technological "solutions" are subject to the same problems as the current debate.
Again you misrepresent me. Can you find a place where I argue that technological solutions are not subject to the same problems as the current debate?
I don’t think you can find such a place.
I have fully agreed that you can’t escape the vicissitudes of the current debate. Nonetheless, you can still produce better technological solutions. This isn’t about prevailing over unquantifiable fears and dark forces. It’s about making better technologies in their presence.
Okay, fine. Are you claiming that people who are calling to ban encryption are doing so on a scientific basis?
Come on, be serious here. People call to ban encryption because it scares them, not because they have a model of the world based on real data or real science that they're using to reinforce that belief.
If they did, we could argue with them. But we can't, because they don't.
> Can you find a place where this contradicts something I’ve said?
Yes, see below:
> such that when calls to weaken encryption or insert back doors are made as they always will be, we don’t have to
I'm open to some kind of clarification that makes this comment make sense. How are your "solutions" going to make people less afraid? On what basis are you going to argue with these people that your solution is better than banning encryption?
Pretend that I'm a concerned parent right now. I want to ban encryption. What can you tell me now to convince me that any other solution will be better?
>> Okay, fine. Are you claiming that people who are calling to ban encryption are doing so on a scientific basis?
No. Did I say something to that effect?
> Come on, be serious here. People call to ban encryption because it scares them, not because they have a model of the world based on real data or real science that they're using to reinforce that belief.
You say this as if you are arguing against something I have said. Why?
> If they did, we could argue with them. But we can't, because they don't.
We can still argue with them, just not with science.
> Can you find a place where this contradicts something I’ve said?
> Yes, see below:
You’ll need to explain what the contradiction is. You have said you don’t understand it, but you not understanding doesn’t make it a contradiction.
>> such that when calls to weaken encryption or insert back doors are made as they always will be, we don’t have to
> I'm open to some kind of clarification that makes this comment make sense.
It makes sense to have solutions that don’t weaken privacy. Wouldn’t you agree?
> How are your "solutions" going to make people less afraid?
They won’t.
> On what basis are you going to argue with these people that your solution is better than banning encryption?
Which people? The parents, the nefarious actors, Apple’s customers?
> Pretend that I'm a concerned parent right now. I want to ban encryption. What can you tell me now to convince me that any other solution will be better?
Of course I can’t, because you are going to play the role of an irrational parent who cannot be convinced.
Neither of us disagrees that such people exist. Indeed we both believe that they do.
> Neither of us disagrees that such people exist. Indeed we both believe that they do.
> Why does changing such a person’s mind matter?
Okay, finally! I think I understand why we're disagreeing. Please tell me if I'm misunderstanding your views below.
> You’ll need to explain what the contradiction is.
I kept getting confused because you would agree with me right up to your conclusion, and then suddenly we'd both go in opposite directions. But here's why I think that's happening:
You agree with me that there are irrational actors that will not be convinced by any kind of reason or debate that their fears are irrational. You agree with me that those people will never stop calling to ban encryption, and that they will not be satisfied by any alternative you or I propose. But you also believe there's another category of people who are "semi-rational" about child abuse. They're scared of it, maybe not for any rational reason. But they would be willing to compromise, they would be willing to accept a "solution" that targeted some of their fears, and they might be convinced that an alternative to banning encryption is better.
Where we disagree is that I don't believe those people exist -- or at least if they do exist, I don't believe they are a large enough or engaged enough demographic to have any political clout, and I don't think it's worth trying to court them.
My belief is that by definition, a fear that is not based on any kind of rational basis is an irrational fear. I don't believe there is a separate category of people who are irrationally scared of child predators, but fully willing to listen to alternative solutions instead of banning encryption.
So when you and I both say that we can't convince the irrational people with alternative solutions, my immediate thought is, "okay, so the alternative solutions are useless." But of course you think the alternative solutions are a good idea, because you think those people will listen to your alternatives, and you think they'll sway the encryption debate if they're given an alternative. I don't believe those people exist, so the idea of trying to sway the encryption debate by appealing to them is nonsense to me.
In my mind, anyone who is rational enough to listen to your arguments about why an alternative to breaking encryption is a good idea, is also rational enough to just be taught why banning encryption is bad. So for people who are on the fence or uninformed, but who are not fundamentally irrationally afraid of encryption, I would much rather try gently reaching out to them using education and traditional advocacy techniques.
----
Maybe you're right and I'm wrong, and maybe there is a political group of "semi-rational" people who are
A) scared about child abuse
B) unwilling to be educated about child abuse or to back up their beliefs
C) but willing to consider alternatives to breaking encryption and compromising devices.
If that group does exist, then yeah, I get where you're coming from. BUT personally, I believe the history of encryption/privacy/freedom debates on the Internet backs up my view.
Let's start with SESTA/FOSTA:
First, Backpage did work with the FBI, to the point that the FBI even commented that Backpage was going beyond any legal requirement to try and help identify child traffickers and victims. Second, both sex worker advocates and sex workers themselves openly argued that not only would SESTA/FOSTA be problematic for freedom on the Internet, but that the bills would also make trafficking worse and make their jobs even more dangerous.
Did Backpage's 'compromise' sway anyone? Was there a group of semi-reasonable people who opposed sites like Backpage but were willing to listen to arguments that the bills would actively make sex trafficking worse? No, those people never showed up. The bills passed with broad bipartisan support. Later, several Senators called to reexamine the bills not because alternatives were proposed to them, but because they put in the work to educate themselves about the stats, and realized the bills were harmful.
Okay, now let's look at the San Bernardino case with Apple. Apple gave the FBI access to the suspect's iCloud account, literally everything they asked for except access to decrypt the phone itself. Advocates argued that the phone was unlikely to aid in the investigation, and also suggested using an exploit to get into the phone, rather than requiring Apple to break encryption. Note that in this case the alternative solution worked, the FBI was able to get into the phone using an exploit rather than by compelling Apple to break encryption. The best case scenario.
Did any of that help? Was there a group of semi-reasonable people who were willing to listen to the alternative solution? Did the debate cool because of it? No, it changed nothing about the FBI's demands or about the political debate. What did help was Apple very publicly and forcefully telling the FBI that any demand at all to force them to install any code for any reason would be a violation of the 1st Amendment. So minus another point from compromise as an effective political strategy in encryption debates, and plus one point to obstinacy.
Okay, now let's jump back to an early debate about encryption: the Clipper chip. Was that solved by presenting the government and concerned citizens with an alternative that would better solve the problem? No, it wasn't -- even though there were plenty of people at the time who argued for encryption experts to work with the government instead of against it. Instead, the Clipper chip problem was solved in two ways: encryption experts broke the chip so publicly and thoroughly that it destroyed any credibility the government had in claiming it was secure, and strong encryption techniques were disseminated so widely that the government's demands became impossible, over the objections of people who called for compromise or understanding of the government's position.
----
I do not see any strong evidence for a group of people who can't be educated about encryption/abuse, but who can be convinced to support alternative strategies to reduce child abuse. If that group does exist, it does a very good job of hiding, and a very bad job of intervening during policy debates.
I do think that people exist who are skeptical about encryption but who are not so irrational that they would fall into our category of "impossible to convince." However, I believe they can be educated, and that it is better to try and educate them than it is to reinforce their fears.
Because of that, I see no political value in trying to come up with alternative solutions to assuage people's fears. I think those people should either be educated, or ignored.
It is possible I'm wrong, and maybe you could come up with an alternative solution that reduced CSAM without violating human rights to privacy and communication. If so, I would happily support it; I have no reason to oppose a solution that reduces CSAM if it doesn't have negative effects for the Internet and free culture overall. A solution like that would be great. However, I very much doubt that you can come up with a solution like that, and if you can, I very much doubt that anyone outside of technical communities will be very interested in what you propose. I personally think you would be very disappointed by how few people arguing for weakening encryption right now are actually interested in any of the alternative solutions you can come up with.
And it's my opinion, based on the history of privacy/encryption, that traditional advocacy and education techniques will be more politically effective than what you propose.
> My belief is that by definition, a fear that is not based on any kind of rational basis is an irrational fear. I don't believe there is a separate category of people who are irrationally scared of child predators, but fully willing to listen to alternative solutions instead of banning encryption.
We disagree here, indeed. My view is not that there are ‘semi-rational’ people. My view is that there are hard to quantify risks that it is rational to have some fear about and see as problems to be solved. I think this describes most of us, most of the time.
The idea that there is a clear distinction between ‘rationally’ understanding a complex social problem through science, and being ‘irrational and unconvincable’ seems inaccurate to me. Both of these positions seem equally extreme, and neither qualify as reasonable in my view, nor are they how most people act.
I think there are a lot of people who are reasonably afraid of things they don’t fully understand and which nobody fully understands. These people reasonably want solutions, but don’t expect them to be perfect or to assuage everyone’s fear.
These are the people who can easily be persuaded to sacrifice a little privacy if it means making children safer from horrific crimes.
They are also people who would prefer a solution that didn’t sacrifice so much if it was an option.
My argument is that the best way to make things better is to make better options available. Irrationally paranoid parents, and irrationally paranoid governments exist, but are the minority.
Most people just want reasonable solutions and aren’t going to be persuaded by either extreme. If you make an argument about creeping authoritarianism they’ll say ‘child porn is a real problem, and that risk is distant’.
If you offer them a more privacy preserving solution to choose as well as a less privacy preserving option, they’ll likely choose the more privacy preserving option.
Apple is offering a much more privacy preserving option than just disabling encryption. People will accept it because it seems like a reasonable trade-off in the absence of anything better.
If we think it’s a bad trade-off that is taking us in the direction of worse and worse privacy compromises, we aren’t likely to be able to persuade people to ignore the real trade-offs, but we stand a chance of getting them to accept a better solution to the same problem.
If we don’t offer an alternative solution we aren’t offering them anything at all.
> I see no political value in trying to come up with alternative solutions to assuage people's fears.
Why do you mention this again? Nobody is arguing for a solution designed to assuage people’s fears.
> I do not see any strong evidence for a group of people who can't be educated about encryption/abuse, but who can be convinced to support alternative strategies to reduce child abuse.
Why do you assume education about encryption/abuse is relevant? Even people who deeply understand the issue still have to choose between the options that are available and practical.
> If that group does exist, it does a very good job of hiding,
It’s not a meaningful group definition.
> and a very bad job of intervening during policy debates.
Almost nobody intervenes during policy debates unless they have a strong position. Most people just choose the best solution from what is available and get on with their lives, which are not centered on these issues.
> maybe you could come up with an alternative solution that reduced CSAM without violating human rights to privacy and communication. If so, I would happily support it; I have no reason to oppose a solution that reduces CSAM if it doesn't have negative effects for the Internet and free culture overall. A solution like that would be great.
Indeed. Isn’t that what we really want here? The only reason people are engaged in all this ideological battle is that they assume there isn’t a technical solution.
> However, I very much doubt that you can come up with a solution like that.
You could have just said you are someone who doesn’t believe a technical solution is possible.
> I personally think you would be very disappointed by how few people arguing for weakening encryption right now are actually interested in any of the alternative solutions you can come up with.
Why would you think I would be disappointed? We have already discussed how I don’t expect those people to change their minds.
Fortunately that is irrelevant to whether a solution would help, since it is not aimed at them.
> My view is that there are hard to quantify risks that it is rational to have some fear about and see as problems to be solved.
Heavily agreed. But those are not irrational fears.
They become irrational fears when learning more about the risks, and about the benefits and downsides of different mitigation techniques, doesn't change those fears one way or another.
We all form beliefs based on incomplete information. That's not irrational. It is irrational for someone to refuse to look at or engage with new information. If someone is scared of the potential for encryption to facilitate CSAM because they're working with incomplete information, that's not irrational.
If someone is scared of encryption because they have incomplete information, and they refuse to engage with the issue or to learn more about the benefits of encryption, or the risks of banning it, or what the stats on child predators actually are -- at that point, it's an irrational belief. What makes the belief irrational is that it is no longer being adjusted based on new information.
A rational person is not someone who knows everything. A rational person is someone who is willing to learn about things when given the opportunity.
> Why do you mention this again? Nobody is arguing for a solution designed to assuage people’s fears.
I guess I don't understand what you are arguing for then.
Let's look at your "reasonable people who are reasonably afraid" camp. We'll consider that these people have doubts about encryption, but don't hate it. They are scared of the potential for abusers to run rampant, but are having trouble figuring out what that looks like or what the weak points are in a complicated system. They are confused, but not bad-faith, and they have fears about something that is legitimately horrific. We will say that these people are not irrational, they recognize a real problem and earnestly want to do something about it.
There are 2 things we can do with these types of people:
1) We can educate them about the dangers of banning encryption and encourage them to research more about the problem. We can remain open to other proposals that they have, while making it clear that each proposal's social benefits have to be weighed against their social costs.
or
2) We can offer them some kind of compromise solution that may or may not actually address their problem, but will make them feel like it does, and which will in theory make them less likely to try and ban encryption.
You seem to be suggesting that we try #2? And this apparently isn't designed to assuage their fears? But I'm not sure what it does then. Presumably the reason they'll accept your proposal is because it addresses the fears they have.
My preference is to try #1. I believe that if someone is actually in the camp you describe, if they have reasonable fears but they're looking at a complex social problem, openly talking to those people about the complex downsides of banning encryption is OK. They'll listen. They might come up with other ideas, they might bring up their own alternative solutions. All of that is fine, none of us are against reducing CSAM, we just want people to understand the risks behind the systems being proposed.
But importantly, if someone is genuinely reasonable, if they aren't irrational and they're just trying to grapple with a complex system -- then talking about the downsides should be enough, because those people are reasonable and once they understand the downsides then they'll understand why weakening encryption isn't a feasible plan. From there we can look at alternatives, but the alternatives are not a bargaining chip. Even if there were no alternatives, that wouldn't change anything about the downsides of making software more vulnerable. First, people must understand why a proposed solution won't work, and then we can propose alternatives.
To me, if someone comes to me and says, "I'm not interested in hearing about the downsides of banning encryption, come up with a solution or we'll ban it anyway" -- I don't think that person is reasonable, I don't think they're acting rationally, and certainly I'm not interested in working with that person or coming up with solutions with that person.
> If we don’t offer an alternative solution we aren’t offering them anything at all.
Where I fall on this is that I am totally willing to look for alternative solutions; but encryption, device ownership, privacy, and secure software -- these are not prizes to be won, conditional on me finding a solution.
We can look for a solution together once we've taken those things off the table.
Because if someone comes to me asking to find a good solution, I want to know that they're coming in good faith, that they genuinely are looking for the best solution with the fewest downsides. If they're not, if they're using encryption as some kind of threat, then they're not really acting in good faith about honestly looking at the upsides and downsides. I have a hard time figuring out how I would describe that kind of a person as "reasonable".
> I personally think you would be very disappointed
> Why would you think this? Did I say anything anywhere about convincing people who are arguing for weakening encryption?
Let me be even more blunt. I think that you could come up with a brilliant solution today with zero downsides that reduced CSAM by 90%. And I think you would be praised if you did come up with that solution, and it would be great, and everyone including tech people like me would love you for it. And I also think it would change literally nothing about the current debates we're having. I think we would be in the exact same place, I think all of the people who are vaguely worried about CSAM and encryption (even the good faith people you mention above) would still be just as worried tomorrow. You could come up with the most innovative amazing reduction strategy for CSAM ever conceived, and it would not change any of those people's opinions on encryption.
I'm not just talking about the irrational people. It would not change the opinions of the reasonable people you're describing above. Because why would it? However good your solution is, if encryption is genuinely not worth preserving, then it would always be better to implement your solution and ban encryption. I don't say that derisively; if the benefits of banning encryption really did outweigh the downsides, then it would genuinely be good to get rid of encryption.
The only reason we don't get rid of encryption is because its benefits do heavily outweigh its downsides. Not because this is some kind of side in a debate, but because when you examine the issue rationally and reasonably, it turns out that weakening encryption is a really bad idea.
> My argument is that the best way to make things better is to make better options available.
This is another point where we differ then.
As far as I can tell, any reasonable person who is convinced that encryption is a net negative is always going to be interested in getting rid of encryption unless they understand what the downsides are. Any reasonable person who is on the fence about encryption is going to stay on the fence until they get more information. I don't see how proposing alternative solutions is going to change that.
So I believe that the only way the reasonable people you describe are going to change their minds is if they're properly educated about the downsides of making software vulnerable, if they're properly educated about the upsides of privacy, and if they're properly educated about the importance of device ownership.
And maybe I'm overly optimistic here, but I also do believe that reasonable people are willing to engage in good faith about their proposed solutions and to learn more about the world. I don't think that a reasonable person is going to clam up and get mad and stop engaging just because someone tells them that their idea to backdoor software has negative unintended side effects. I think that education works when offered to reasonable people.
> There are 2 things we can do with these types of people:
> 1) We can educate them about the dangers of banning encryption and encourage them to research more about the problem. We can remain open to other proposals that they have, while making it clear that each proposal's social benefits have to be weighed against their social costs.
> or
> 2) We can offer them some kind of compromise solution that may or may not actually address their problem, but will make them feel like it does, and which will in theory make them less likely to try and ban encryption.
Why are those the only two solutions? That seems like a false dichotomy.
Again why would you think I’m suggesting #2.
Can I ask you straight up, are you trolling?
There is a pattern where you say “I think you are saying X” where X is unrelated to anything I have actually said. I ask “why do you think I think X”, and you don’t answer, but just move on to repeat the process.
I have been assuming there is good faith misunderstanding going on, but the fact that you keep not explaining where the misunderstandings have arisen from when asked is starting to make me question that.
Most of what you’ve written in this reply is frankly incoherent, or at least seems to be based on assumptions about my position that are neither valid nor obvious, making it seem disconnected from our previous discussion.
For example this:
> To me, if someone comes to me and says, "I'm not interested in hearing about the downsides of banning encryption, come up with a solution or we'll ban it anyway" -- I don't think that person is reasonable, I don't think they're acting rationally, and certainly I'm not interested in working with that person or coming up with solutions with that person.
Just seems like a gibberish hypothetical that doesn’t have much to do with what we are talking about.
And this:
> You could come up with the most innovative amazing reduction strategy for CSAM ever conceived, and it would not change any of those people's opinions on encryption.
What does it even mean to ‘reduce CSAM’? Why do we care about changing people’s minds here about encryption?
Let’s take another part:
> Where I fall on this is that I am totally willing to look for alternative solutions; but encryption, device ownership, privacy, and secure software -- these are not prizes to be won, conditional on me finding a solution.
Ok, but those are all in fact fluid concepts whose status is changing as time goes by, and mostly not in the directions it sounds like you would prefer. Nobody is thinking of them as prizes. The status quo is that they are in jeopardy.
> We can look for a solution together once we've taken those things off the table.
Ok, but this just means you aren’t willing to participate with people who don’t agree to a set of terms, which in fact don’t represent anything anyone has so far developed.
That’s a comment about your personal boundaries not about whether a better solution than what Apple is proposing could be built.
That’s fine by me, in fact I’d be happy if a solution did incorporate all of the concepts you require. I agree we need that. I argue for it quite often.
I don’t think such a thing has been built yet, and if it were built, I suspect parents would want some way to confirm it wasn’t a vector of child exploitation before they let their kids use it.
> Why are those the only two solutions? That seems like a false dichotomy.
It's not? It's a real dichotomy. What other solution could there be?
I mean, OK, I guess there are other solutions we could try like ignoring them or attacking them or putting them in prison or some garbage, but to me those kinds of solutions are off the table. So we either figure out some way to satisfy them, or convince them that we're right. That's not a false dichotomy, those are the only 2 options.
I assume you're suggesting #2 because you're sure as heck not suggesting #1, and I can't figure out what else you could be suggesting.
----
> why do you think I think X
Frankly, if this isn't what you think, then I don't understand what you're thinking.
You keep on saying that we need to offer solutions, we can't just criticize Apple's proposal, we have to offer an alternative if we're going to criticize. But why?
- I thought the point was to get rid of people's fears: no, you're saying that's not what you mean.
- I thought the point was to compromise with critics: no, you're saying that's not what you mean.
- I thought the point was to try and get people to stop attacking encryption: no, you're saying that's not what you mean.
- Heck, I thought the point was to reduce CSAM, and you're telling me now that even that's not what you mean either?
> What does it even mean to ‘reduce CSAM’? Why do we care about changing people’s minds here about encryption?
What? We're on the same thread, right? We're commenting under an article about Apple instituting policies to reduce CSAM, ie, to make it so there is less CSAM floating around in the wild. When you talk about a "solution", what problem are you even trying to solve? Because all of us here are talking about CSAM, that's what Apple's system is designed to detect.
I don't understand. How can you possibly not be talking about CSAM right now? That's literally what this entire controversy is about, that's the only reason this thread exists.
----
Honest to God, hand over my heart, I am not trolling you right now. I understand that this is frustrating to you, but my experience throughout this conversation has been:
- You say something
- I try to interpret and build on it
- You tell me that's not what you meant and ask me why I thought that
- Okay, I try to reinterpret and explain
- The cycle repeats
- The only information I can get out of you is that I apparently don't understand you. I'm not getting any clarification. You just tell me that I'm misunderstanding your position and then you move on.
What are you trying to accomplish by proposing "alternative" solutions to Apple's proposal? You seem to think this will help keep people from attacking encryption, but I'm wrong to say that it will help by reducing their fears, or by distracting them, or by teaching them, or by solving the problems that they think they have, or... anything.
You tell me that "if we think it’s a bad trade-off that is taking us in the direction of worse and worse privacy compromises, we aren’t likely to be able to persuade people to ignore the real trade-offs, but we stand a chance of getting them to accept a better solution to the same problem." But then you tell me that "encryption is not a prize" and the goal is not to convince them of anything, which to me completely contradicts the previous sentence.
If encryption isn't a prize, if "nobody is thinking of them as prizes", then why does it sound like you're telling me that preserving encryption is conditional on me coming up with some kind of alternative? If encryption isn't a prize, then great, let's take it off the table.
But then I'm told that taking encryption off the table means that "you aren’t willing to participate with people who don’t agree to a set of terms". So apparently encryption is on the table, and I am coming up with alternative solutions in order to convince people to attack something else? But that's not what you mean either, because you tell me that people will always attack encryption, so I don't even know.
You're jumping back and forth between positions that seem completely contradictory to me. I thought that you had a different view than me about how reasonable privacy-critics actually were, but apparently you also have different views than me about what the problem is that Apple is trying to solve, what privacy-critics even want in the first place, what the end goal of all of this public debate actually is. Maybe you even disagree with me about what privacy and human rights are, since "those are all in fact fluid concepts whose status is changing as time goes by".
So I need you to either lay out your views very plainly without any flowery language or expansion in a way that I can understand, or I need to stop having this conversation because I don't know what else I can say other than that I find your views incomprehensible. If you can't do that, then fine, we can mutually call each others' views gibberish and incoherent, and we can go off and do something more productive with our evenings. But I'll give this exactly one last try:
----
> Most of what you’ve written in this reply is frankly incoherent
Okay, plain language, no elaboration. Maybe this isn't what you're arguing about, maybe it is. I don't care. Here's my position:
A) it is desirable to reduce CSAM without violating privacy.
B) the downsides of violating privacy are greater than the upsides of reducing CSAM.
C) most of the people arguing in favor of violating privacy to stop CSAM are either arguing in bad faith or ignorance.
D) the ones that aren't should be gently educated about the downsides of breaking encryption and violating human rights.
E) the ones that refuse to be educated are never going to change their views.
F) compromising with them is a waste of time, and calls to "work with the critics" instead of educating them are a waste of time.
G) working with critics who refuse to be educated about the downsides of violating privacy will not help accomplish point A (it is desirable to reduce CSAM without violating privacy).
H) thus, we should refuse to engage with people about reducing CSAM unless they take encryption/privacy/human rights off of the table (on this point, you understood my views completely, people who view CSAM as a bigger deal than human rights shouldn't be engaged with)
I) a technical solution that reduces CSAM without violating privacy may or may not be possible. But it doesn't matter. Even if a technical solution without violating privacy is impossible, violating privacy is still off the table, because the downsides of removing people's privacy rights would still be larger than the upsides of removing CSAM.
Can you give me a straightforward, bullet-point list of what statements above you disagree with, if any?
>> 2) We can offer them some kind of compromise solution that may or may not actually address their problem, but will make them feel like it does, and which will in theory make them less likely to try and ban encryption.
>> Why are those the only two solutions? That seems like a false dichotomy.
> It's not? It's a real dichotomy. What other solution could there be?
3) Offer a better technical solution that is less of a compromise than what Apple is offering, or indeed is not a compromise at all.
> I mean, OK, I guess there are other solutions we could try like ignoring them or attacking them or putting them in prison or some garbage, but to me those kinds of solutions are off the table. So we either figure out some way to satisfy them, or convince them that we're right. That's not a false dichotomy, those are the only 2 options.
> I assume you're suggesting #2 because you're sure as heck not suggesting #1, and I can't figure out what else you could be suggesting.
I’m not suggesting #2 because #2 is a straw man.
----
>> why do you think I think X
> Frankly, if this isn't what you think, then I don't understand what you're thinking.
Ok - that seems like a straightforward response. You don’t understand. But I clearly am not saying the things you are attributing to me.
I have now repeatedly asked where I said anything that leads you to think these are my views. It’s rare that you answer. From my point of view that means you aren’t actually responding to what I have written. You read what I write, don’t understand it, then make something up that isn’t what I’ve said (or is even directly contradicted by what I’ve said), and then you tell me that’s what I’m saying.
If this was a one time thing, it would be fine, but at this point it doesn’t seem to matter what I say - you’ll just respond as if I said something else, and you won’t explain why when asked. From here it looks like you are having a discussion with your own imagination, rather than with what I write.
Here’s an example:
>> You keep on saying that we need to offer solutions, we can't just criticize Apple's proposal,
Where do I ‘keep saying that we can’t just criticize Apple’s proposal’? If that is something I have said more than once, you should be able to quote me. If not, then it isn’t actually something I keep saying; it’s only in your imagination that I am saying it.
> we have to offer an alternative if we're going to criticize. But why?
Another example of something you are imagining me to be saying, but that I am not.
> - I thought the point was to get rid of people's fears: no, you're saying that's not what you mean.
I have now said it is not what I mean, multiple times with explanation, and yet you keep saying it is. Why is that?
> - I thought the point was to compromise with critics:
Why do you think that? I have never said it. Again it’s something you are imagining. What is the text that made you imagine it? If we knew that, we could uncover where you haven’t understood.
> no, you're saying that's not what you mean.
Of course, because I didn’t say it.
> - I thought the point was to try and get people to stop attacking encryption:
Again, I have never said this was the point; in fact, I have said we can never do so.
But you have not explained why you thought this was the point.
> no, you're saying that's not what you mean.
> - Heck, I thought the point was to reduce CSAM, and you're telling me now that even that's not what you mean either?
In this case the misunderstanding is mine. I misunderstood ‘reduce CSAM’ as ‘reduce CSAM detection’, i.e. I read it as getting Apple to reduce their efforts.
> What does it even mean to ‘reduce CSAM’?
This is what I do when I don’t understand what someone has written - I ask them. You answered, and we have uncovered where I misunderstood.
If you answered my questions, we might have understood why you haven’t been understanding me.
> Why do we care about changing people’s minds here about encryption?
It seems to me that you have an agenda to change people’s minds about encryption. What isn’t clear is why you attribute that to me.
> What? We're on the same thread, right? We're commenting under an article about Apple instituting policies to reduce CSAM, ie, to make it so there is less CSAM floating around in the wild. When you talk about a "solution", what problem are you even trying to solve? Because all of us here are talking about CSAM, that's what Apple's system is designed to detect.
Agreed - like I say I just misread the phrase.
> I don't understand. How can you possibly not be talking about CSAM right now? That's literally what this entire controversy is about, that's the only reason this thread exists.
Agreed - like I say, I just misread the phrase.
----
> Honest to God, hand over my heart, I am not trolling you right now.
The reason it looks like trolling is that when you say ‘you are saying X’, and X doesn’t appear to be supported by my words, X seems like a straw man. I have assumed this not to be intentional, and I believe you, but by not answering the question ‘why would you think I think that?’ you created ambiguity in your intentions.
> I understand that this is frustrating to you,
It’s not so much ‘frustrating’ as not functional as a discussion. If you misunderstand me and don’t answer questions aimed at getting to the root of the misunderstanding, then you’ll likely just talk past me. I am just trying to evaluate whether an alternative is possible.
> but my experience throughout this conversation has been:
> - You say something
> - I try to interpret and build on it
> - You tell me that's not what you meant and ask me why I thought that
> - Okay, I try to reinterpret and explain
> - The cycle repeats
This seems close to a description of what I am seeing, but not quite. Let’s examine the steps:
1. You say something
2. I try to interpret and build on it
3. You tell me that's not what you meant and ask me why I thought that
4. Okay, I try to reinterpret and explain
5. The cycle repeats
In #3 you say ‘You tell me that's not what you meant and ask me why I thought that’. This isn’t quite true. I don’t ask ‘why you thought that’ in a vague way; I ask ‘what did I say that made you think that’. I admit there may be a few lapses, but most of the time I ask what I said that led to your understanding.
In #4 you said “I try to reinterpret and explain”. What you don’t do is answer the question - what is it I said that led to your understanding?
By not answering this question, we don’t get to the root cause of the misunderstanding.
> - The only information I can get out of you is that I apparently don't understand you.
You don’t.
> I'm not getting any clarification. You just tell me that I'm misunderstanding your position and then you move on.
This is false. I ask what I said that led to the misunderstanding. I do not move on.
> What are you trying to accomplish by proposing "alternative" solutions to Apple's proposal?
> You seem to think this will help keep people from attacking encryption,
What have I said that makes you think that?
[there are a few paragraphs that I can’t respond to because they don’t make sense]
> But then I'm told that taking encryption off the table means that "you aren’t willing to participate with people who don’t agree to a set of terms".
Did I misunderstand you? Did you mean something else by ‘taking encryption off the table’?
> So apparently encryption is on the table, and I am coming up with alternative solutions in order to convince people to attack something else? But that's not what you mean either, because you tell me that people will always attack encryption, so I don't even know.
I thought you agreed that there are some people who will always attack encryption. I didn’t think it was just me ‘telling you that’. Did I misunderstand you - do you think you can get people to stop attacking encryption?
> You're jumping back and forth between positions that seem completely contradictory to me.
That’s possible, but I don’t think so. Can you quote where you think I have contradicted myself?
> I thought that you had a different view than me about how reasonable privacy-critics actually were, but apparently you also have different views than me about what the problem is that Apple is trying to solve, what privacy-critics even want in the first place, what the end goal of all of this public debate actually is. Maybe you even disagree with me about what privacy and human rights are, since "those are all in fact fluid concepts whose status is changing as time goes by".
This seems like sarcasm and bad faith. You are misrepresenting me. For example, I have never mentioned human rights.
Privacy on the other hand, is definitely a fluid concept.
What we consider it to mean has changed over time as both technology and society have developed.
> So I need you to either lay out your views very plainly without any flowery language or expansion in a way that I can understand,
What do you mean by flowery language?
> or I need to stop having this conversation because I don't know what else I can say other than that I find your views incomprehensible.
I know you do.
> If you can't do that, then fine, we can mutually call each others' views gibberish and incoherent,
Your views to the extent that I know them, don’t seem gibberish or incoherent. It’s when you incorporate interpretations of my views that don’t relate to what I have said, that what you write appears incoherent to me.
> and we can go off and do something more productive with our evenings. But I'll give this exactly one last try:
----
>> Most of what you’ve written in this reply is frankly incoherent
> Okay, plain language, no elaboration. Maybe this isn't what you're arguing about, maybe it is. I don't care. Here's my position:
> A) it is desirable to reduce CSAM without violating privacy.
> B) the downsides of violating privacy are greater than the upsides of reducing CSAM.
> C) most of the people arguing in favor of violating privacy to stop CSAM are either arguing in bad faith or ignorance.
> D) the ones that aren't should be gently educated about the downsides of breaking encryption and violating human rights.
> E) the ones that refuse to be educated are never going to change their views.
> F) compromising with them is a waste of time, and calls to "work with the critics" instead of educating them are a waste of time.
> G) working with critics who refuse to be educated about the downsides of violating privacy will not help accomplish point A (it is desirable to reduce CSAM without violating privacy).
> H) thus, we should refuse to engage with people about reducing CSAM unless they take encryption/privacy/human rights off of the table (on this point, you understood my views completely, people who view CSAM as a bigger deal than human rights shouldn't be engaged with)
> I) a technical solution that reduces CSAM without violating privacy may or may not be possible. But it doesn't matter. Even if a technical solution without violating privacy is impossible, violating privacy is still off the table, because the downsides of removing people's privacy rights would still be larger than the upsides of removing CSAM.
> Can you give me a straightforward, bullet-point list of what statements above you disagree with, if any?
Honestly, no. This looks like just a blunt attempt to win some argument of your own with me playing a role that has nothing to do with the conversation so far. You are also asking me to do a lot of work to answer your questions when you have been unwilling to answer mine. That doesn’t seem like good faith.
Remember, you came to this subthread by replying to me. But you have consistently ignored clarifying questions.
Was it your goal all along to simply ignore what I have been saying and find a spot to just make your own case? I am genuinely unsure.
How about we start somewhere simpler? When I ask ‘what did I say that made you think that’, can you explain why you rarely answer?
> When I ask ‘what did I say that made you think that’, can you explain why you rarely answer?
Okay, sure. When you ask me to try and justify why I think you hold your position, I interpret that as a distraction (hopefully a good faith one). I don't want to argue on a meta-level about why I got confused about your comments, I want to know what you believe. I'm frustrated that you keep trying to dig into "why are you confused" instead of just clarifying your position.
My feeling is we could have skipped this entire debate if you had sat down and made an extremely straightforward checklist of your main points, consisting of maybe 5-10 bullet points, each one to two sentences max. This is a thing I've done multiple times now about my beliefs/positions during this discussion. If we get mixed up about what the other person is saying, the best thing to do is not to dive into that, it's to take a step back and try to clarify from the start in extremely clear language.
You looked at the final checklist and said "this looks like just a blunt attempt to win some argument of your own". I looked at it as a charitable invitation to step back, write 10-20 sentences instead of 15 paragraphs, and to just cut through the noise and figure out where we disagree. If your checklist doesn't overlap with mine, fine. It's not bad for us to discover that we're arguing past each other. What's bad is if we spend X paragraphs getting frustrated about meta-arguments that have nothing to do with Apple.
I don't want to debate language or start cross indexing each other's comments, I want to debate ideas.
So when you tell me that I'm wrong about what you believe, I look over your statements and try to reinterpret, and I move on. Very rarely is my instinct to sit down and try to catalog a list of statements to try and prove to you that you do believe what I think, because I take it as a given that if you tell me that I misinterpreted you... I did.
So I accept it and move on.
----
Yes, we could get into a giant debate about "what makes you think I think that". That might go something like:
> You seem to think this will help keep people from attacking encryption,
> What have I said that makes you think that?
And I could reply by linking back to one of your previous comments:
> "Most people just want reasonable solutions and aren’t going to be persuaded by either extreme. If you make an argument about creeping authoritarianism they’ll say ‘child porn is a real problem, and that risk is distant’.
> If you offer them a more privacy preserving solution to choose as well as a less privacy preserving option, they’ll likely choose the more privacy preserving option.
> Apple is offering a much more privacy preserving option than just disabling encryption. People will accept it because it seems like a reasonable trade-off in the absence of anything better."
Which to me sounds quite a bit like: "offer a solution that doesn't target encryption, and then these people won't target encryption because 'most people just want reasonable solutions'".
----
But what's the point of the above conversation? I already know that you don't interpret those 3 paragraphs about a "privacy preserving option" as meaning "a proposal that will stop reasonable people from attacking encryption." Because you told me that's not what you believe.
So how weird and petty would I need to be to start arguing with you, "actually you did mean that, and I have proof!" Is there any value to either of us in trying to trip each other up over "well, technically you said"? I'm not here trying to trap you, I want to understand you.
Honestly, the short answer to why I rarely reply back with quotes about "why I think you said that", is I kind of interpreted "what makes you think I think that" as a vaguely rude attempt to derail the conversation and debate language instead of ideas, and I've been trying to graciously sidestep it and move on.
- I'm happy to debate privacy with someone
- I'm happy to listen to them so I can understand their views better
- I'm not happy to debate whether or not someone believes something. I think that's a giant meaningless waste of time.
I don't think that means you're operating in bad faith, but I can't think of anything I would rather do less than spend all day going back over all of your statements to cross-reference them so I can prove that... what? That I misunderstood your actual position? I believe you, you don't need to prove to me that I misunderstood you! Let's just skip that part and move on to explaining what the actual position really is.
It doesn't matter "why I think you said what you said", it just matters that I understand you. So why get into that meaningless debate instead of just asking you to clarify or trying to reinterpret? I don't care about technicalities and I don't care about "winning" against you, and I interpret "justify why you thought I thought that" as a meaningless distraction that only has value in Internet points, not in getting me any closer to understanding what your views are.
>> When I ask ‘what did I say that made you think that’, can you explain why you rarely answer?
> Okay, sure.
> When you ask me to try and justify why I think you hold your position,
At this point it’s hard to read you as honest because of the frequency with which you misrepresent me. I have to assume you are unaware of this.
I am talking about why you won’t identify which words of mine led to your misunderstandings of my position.
Why would you represent that as asking you to ‘justify’ your interpretation? That isn’t what I’m doing, and more importantly it’s not something I said. I’m just asking you to tell me what you are interpreting so I can see if what I said was ambiguous and if so how.
> I interpret that as a distraction (hopefully a good faith one).
A distraction from what? Are you not seeking to understand? Later in this comment you claim to want to know what my views are. If you ignore clarifying questions as a ‘distraction’, it seems likely that misunderstandings will keep arising.
> I don't want to argue on a meta-level about why I got confused about your comments,
Who is asking you to ‘argue on a meta-level’? I am asking you to simply say what words you are referring to when a misunderstanding becomes apparent.
> I want to know what you believe.
I recommend trying to get to the bottom of misunderstandings then.
> I'm frustrated that you keep trying to dig into "why are you confused"
You misrepresent me again. Can you not see that I haven’t asked ‘why are you confused’?
I have asked what you are referring to when you attribute a view to me that I don’t think is contained in what I wrote.
> instead of just clarifying your position.
I could clarify my position if you were willing to tell me what I said that led to your interpretations of my views.
> I could clarify my position if you were willing to tell me what I said that led to your interpretations of my views.
Okay, holy crud, I'm out. I have zero interest in the evolution of this conversation from a debate about privacy/encryption tradeoffs, into "let's figure out who's arguing in good faith", into "explain to me why you think I'm asking you to explain to me what I'm thinking".
I've made a quick ~$5 donation to Matrix.org (https://imgur.com/a/wVj2neA) in the hope that their current work with on-by-default E2E encryption and P2P community hosting/connections will ideally make this entire conversation irrelevant in the future.
I wish you the best in coming up with your technical solutions.
> "Reducing the real-world occurrences for irrational fears doesn't make those fears go away." "You're saying it yourself, these people aren't motivated by statistics about abuse, they're frightened of the idea of abuse"
We could say the same thing the other way: people up in arms are not frightened by statistics about abuse of surveillance systems, but by the idea of a company or government abusing one. This thread is full of people misrepresenting how it works, claims of slippery slopes straight to tyranny, and a comparison to IBM and the Holocaust, based on no real data and not even the understanding that comes from simply reading the press release. This thread is not full of statistics and data about existing content-filtering and surveillance systems and how often they are actually being abused. For example, Skype has intercepted your comms since Microsoft bought it and routed all traffic through them, and Chrome, Firefox, and Edge all block malware websites (Safe Browsing, SmartScreen) - what are the stats on those systems being abused to block politically inconvenient memes or similar? Nothing Apple could do would in any way reassure these people, because the fears are not based on information. For example your comment:
> "We know the risks of outing minors to parents if they're on an LGBTQ+ spectrum."
Minors will see the prompt "if you do this, your parents will find out" and can choose not to and the parents don't find out. There's an example of the message in the Apple announcement[1]. This comment from you is reacting to a fear of something disconnected from the facts of what's been announced where that fear is guarded against as part of the design.
You could say that the hash database comes from a third party so that it's not Apple acting unilaterally, but that's not taken as reassurance because the government could abuse it. OK, guard against that with Apple reviewing the alerts before doing anything with them; that's not reassuring because Apple reviews are incompetent (where do you hear of groups that are both incompetent and capable of implementing world-scale surveillance systems? conspiracy theories, mostly). People say it scans all photos, and when they learn that it scans only photos about to be uploaded to iCloud their opinion doesn't seem to change, because it's not reasoned based on facts, perhaps? People say it will be used by abusive partners who will set their partner to be a minor to watch their chats; people explain that you can't change an adult AppleID to a minor one just like that, demonstrating the argument was fear-based, not fact-based. People say it is a new ability for Apple to install spyware in future, but it's obviously not - Apple have been able to "install spyware in future" since they introduced auto-installing iOS updates many years ago. People say it's a slippery slope - companies have changed direction, regulations can change, no change in opinion; nobody has any data or facts about how often systems do slide down slippery slopes, or get dragged back up them. People say it could be used by bad actors at Apple to track their exes. From the design, it couldn't. But why facts when there's fearmongering to be done? The open letter itself has multiple inaccurate descriptions of how the thing works by the second paragraph, to present it as maximally scary.
> "We would also in general like to see more stats about how serious the problem of CSAM actually is"
We know[2] that over 12 million reports of child abuse material to NCMEC were related to Facebook Messenger, and NCMEC alone gets over 18 million tips in a year. Does that change your opinion either way? Maybe we could find out more after this system goes live - how many alerts Apple receives and how many they send on. A less panicky "Open Letter to Apple" might encourage them to make that data public - how many times it triggered in a quarter - and ask Apple to commit to removing the system if it's not proving effective, and to state what they intend to do if asked to make it detect more things in future.
> "their fear is not based on real risk analysis or statistics or practical tradeoffs"
Look what would have to happen for this system to ruin your life in the way people here are scaremongering about:
- You would have to sync to iCloud, such that this system scans your photos. That's optional.
- Someone would have to get a malicious hash into the whole system and a photo matching it onto your device. That's nontrivial to say the least.
- Enough of those pictures to trigger the alarm.
- The Apple reviewers would have to not notice the false alarm photo of a distorted normal thing.
- NCMEC and the authorities would have to not dismiss the photo.
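The compounding improbability of that chain can be sketched with a toy model. To be clear, the numbers below (per-photo false-match rate `p`, library size `n`, match threshold `t`) are hypothetical illustration values, not Apple's actual figures, and real perceptual-hash collisions are not necessarily independent; the point is only that requiring many matches before any alarm drives the false-alarm probability down dramatically.

```python
def prob_false_alarm(n: int, p: float, t: int) -> float:
    """Probability that at least `t` of `n` independent photos each
    false-match with probability `p` (a binomial tail).

    Toy model only: `p`, `n`, and `t` are made-up illustration values,
    and real hash collisions may be correlated."""
    # P(X >= t) = 1 - sum_{k < t} P(X = k), computed with the binomial
    # recurrence P(k+1) = P(k) * (n-k)/(k+1) * p/(1-p) so we never
    # build huge binomial coefficients.
    pk = (1.0 - p) ** n      # P(X = 0)
    below_threshold = 0.0
    for k in range(t):
        below_threshold += pk
        pk *= (n - k) / (k + 1) * (p / (1.0 - p))
    return max(0.0, 1.0 - below_threshold)

# Hypothetical numbers: a 10,000-photo library with a one-in-a-million
# false-match rate per photo.
one_match = prob_false_alarm(10_000, 1e-6, 1)        # roughly 1%
thirty_matches = prob_false_alarm(10_000, 1e-6, 30)  # effectively zero
```

Under these assumed numbers, a single-match trigger would false-alarm on about 1% of such libraries, while a 30-match threshold makes a spurious alarm vanishingly unlikely - which is the design argument for thresholds, before the human review layers even enter the picture.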
It's not impossible, but it's in the realms of the XKCD "rubber hose cryptography" comic. Consider Sir Cliff Richard: his house was raided by the police, the media were alerted, his name was dragged through the mud, then the Crown Prosecution Service decided there was nothing to prosecute. The police apologised; he sued them and they settled out of court. The BBC apologised; he sued them and won. The Crown Prosecution Service reviewed their decision and reaffirmed that there was nothing to prosecute. His name is tarnished; forever people will be suspicious that he paid someone off or otherwise pulled strings to get away with something - a name-damaging false alarm, which is exactly what many people in this thread fear. Did anyone need to use a generative adversarial network to create a clashing perceptual hash, uploaded into a global analysis platform, to trigger a false alarm convincing enough to pass two or three human reviews? No, two men decided they'd try to extort money and made a false rape allegation.
People aren't interested in how it works, why it works the way it does, whether it will be an effective crime-fighting tool (and how that's decided), or whether it will realistically become a tyrannical system. People aren't interested in whether Apple's size and influence could provide independent oversight of the PhotoDNA and NCMEC databases, pushing back against any attempts to misuse them to track other political topics. People are jumping straight to "horrible governments will be able to disappear critics" while ignoring that horrible governments already do that, and have many much easier ways of doing it.
> "So in the real world we know that the majority of child abuse comes from people that children already know."
Those 12 million reports of child abuse material related to Facebook Messenger: does it make any difference if they involved people the child knew? If so, what difference do you think that makes? And Apple's system is to block the spread of abuse material, not (directly) to reduce abuse itself - which seems an important distinction that you're glossing over in your position "it won't reduce abuse so it shouldn't be built", when the builders are not claiming it will reduce abuse.
> "Nobody in any political sphere has ever responded to "think of the children" with "we already thought of them enough.""
Are the EFF not in the political sphere? Are the groups quoted in the letter not? Here[3] is a UK government vote from 2014 on communication interception, introduced with "interception, which provides the legal power to acquire the content of a communication, are crucial to fighting crime, protecting children". 31 MPs voted against it. Here[4] is a UK government vote from 2016 on mass retention of UK citizens' internet traffic; many MPs voted against it. It's not the case that "think of the children" leads to universal political agreement on any system, as you're stating. Which could be an example of you taking your position from fear instead of fact.
> "It doesn't matter what solutions we come up with or what the rate of CSAM drops to, those people are still going to be scared of the idea of privacy itself."
The UK government statement linked earlier[2] disagrees when it says "On 8 October 2019, the Council of the EU adopted its conclusions on combating child sexual abuse, stating: “The Council urges the industry to ensure lawful access for law enforcement and other competent authorities to digital evidence, including when encrypted or hosted on IT servers located abroad, without prohibiting or weakening encryption and in full respect of privacy". The people whose views you claim to describe explicitly say the opposite of how you're presenting them. Which, I predict, you're going to dismiss with something that amounts to "I won't change my opinion when presented with this fact", yes?
There are real things to criticise about this system, the chilling effect of surveillance, the chance of slippery slope progression, the nature of proprietary systems, the chance of mistakes and bugs in code or human interception, the blurred line between "things you own" and "things you own which are closely tied to the manufacturer's storage and messaging systems" - but most of the criticisms made in this thread are silly.
> This thread is not full of statistics and data about existing content filtering and surveillance systems and how often they are actually being abused.
It is filled with explanations of why the systems you mention are tangibly different from what Apple is proposing. There is a huge difference between scanning content on-device and scanning content in a cloud. That doesn't mean that scanning content in the cloud can't be dangerous, but it is still tangibly different. There is also a huge difference between a user-inspectable list of malware being blocked in a website and an opaque list of content matches that users cannot inspect or debate. There is also a huge difference between a user-inspectable static list and an AI system with questionable accuracy guarantees. And there is a huge difference between a user-controlled malware list that is blocked locally without informing anyone, and a required content list that sends notifications to other people/governments/companies when it is bypassed.
That being said, if you want to look at stats about how accurate AI filters are for explicit material in the examples you mention, there are a ton of stats online about that, and they're mostly all quite bad.
> nobody has any data or facts about how often systems do slide down slippery slopes, or get dragged back up them
There's a lot to unpack in this one sentence, and it would take more time than I'm willing to give, but are you really implying that government surveillance doesn't count as a real slippery slope because sometimes activists reverse the trend?
> Minors will see the prompt "if you do this, your parents will find out" and can choose not to and the parents don't find out. There's an example of the message in the Apple announcement[1]
You misunderstand the concern. The risk is not the child themselves clicking through to the photo (although it would be easy for them to accidentally do so), it's the risk of that data being leaked from other phones because a friend thoughtlessly clicks through the prompt.
> The open letter itself has multiple inaccurate descriptions of how the thing works by the second paragraph to present it as maximally-scary.
Where? Here's the second paragraph:
> Apple's proposed technology works by continuously monitoring photos saved or shared on the user's iPhone, iPad, or Mac. One system detects if a certain number of objectionable photos is detected in iCloud storage and alerts the authorities. Another notifies a child's parents if iMessage is used to send or receive photos that a machine learning algorithm considers to contain nudity.
The only thing I can think of is the word "continuously" which is not strictly inaccurate but could be misinterpreted as saying that the scanning will be constantly running on the same set of photos, and the subtle implication that this scanning will happen to photos "saved", which might be misinterpreted as implying that this scan will happen to photos that aren't uploaded to iCloud. But given that the second sentence immediately clarifies that this is referring to photos uploaded to iCloud, it seems like a bit of a stretch to me to call this misinformation.
> people are jumping straight to "horrible governments will be able to disappear critics" and ignoring that horrible governments already do that and have many much easier ways of doing that.
Hang on a sec. A little while ago you were calling my fears theoretical, now you're admitting that governments routinely abuse this kind of power. You really think it's unreasonable to be cautious about giving them more of this power?
> Does that change your opinion either way?
It does not change my opinion, but at least it's real data, so more of that in these debates please. I'm not denying or rejecting the numbers that the UK lists.
> A less panicky "Open Letter to Apple" might encourage them to make that data public
Holy crud, I would hope this is the bare minimum. Are we really having a debate over whether or not Apple will make that data public? I thought that was just assumed that they would. If that's up in the air right now, then we've sunk really low in the overall conversation about public accountability and human rights.
> which seems an important distinction that you're glossing over in your position "it won't reduce abuse so it shouldn't be built" when the builders are not claiming it will reduce abuse.
I realize this is branching out in a different direction, but it sure as heck better reduce abuse or it's not worth building. CSAM is disgusting, but the primary reason to target it is to reduce abuse. If reducing CSAM doesn't reduce abuse, it's not worth doing and we should focus our efforts elsewhere.
I know this is something that might sound abhorrent to people, but we are having this debate because we care about children. We have to center the debate on the reduction of the creation of CSAM, the reduction of child abuse, and the reduction of gateways into child abuse. Reducing child abuse is the point. We absolutely should demand evidence that these measures reduce child abuse, because reducing child abuse and reducing the creation of CSAM is a really stinking important thing to do.
Which leads back to your other note:
> does it make any difference if they involved people the child knew? If so, what difference do you think that makes?
Yes, it makes a massive difference, because knowing more about where child abusers are coming from and how they interact with their victims makes it easier to target them and will make our efforts more effective. We should care about this stuff.
> It's not the case that "think of the children" leads to political universal agreement of any system, as you're stating.
I think you misunderstand. Nobody who's willing to bring out "think of the children" as a debate killer has ever dropped the argument because they got a concession. That there are some entities (like the EFF) who are willing to reject the argument as a debate killer and look at it through a risk analysis lens does not mean the unquestioned argument of "one child is too many" is any less toxic to real substantive political debate.
> without prohibiting or weakening encryption and in full respect of privacy
I don't want to bash on the EU too hard here, but it has this habit of just kind of tacking onto the end of its laws "but make sure no unintended bad things happen" and then acting like that solves all of their problems. It doesn't mean anything when they add these clauses. This is the same EU that argued for copyright filters and then put on the end of their laws, "but also this shouldn't harm free expression or fair use."
It means very little to me that the EU says they care about privacy. What real, tangible measures did they include to make sure that in practice encryption would not be weakened?
Look, I can do the same thing. Apple should not implement this system, but they should also reduce CSAM. See, I just proved I care about both, exactly as convincingly as the EU! So you know I'm serious about CSAM now, I said that Apple should reduce it. But I predict that you'll "dismiss with something that amounts to 'I won't change my opinion when presented with this fact', yes?"
> There are real things to criticise about this system, the chilling effect of surveillance, the chance of slippery slope progression, the nature of proprietary systems, the chance of mistakes and bugs in code or human interception, the blurred line between "things you own" and "things you own which are closely tied to the manufacturer's storage and messaging systems"
Wait, hold on. Forget literally everything that we were talking about above. This is, like, 90% of what people are criticizing! What else are people criticizing? These are really big concerns! You got to the end of your comment, then suddenly listed out 6 extremely good reasons to oppose this system, and then finished by saying, "but other than those, what's the problem?"
> "Wait, hold on. Forget literally everything that we were talking about above. This is, like, 90% of what people are criticizing! These are really big concerns!"
My point is your original point - where is the data to support these criticisms, the facts, the statistics? Merely saying "I can imagine some hypothetical future where this could be terrible and misused" should not be enough to conclude that it is, in fact, terrible, and will more likely than not be misused.
We've had years of leaks showing that three-letter agencies and governments simply don't need to misuse things like this. The USA didn't slide down a slope of banning asbestos for health reasons and end up "oops" banning recreational marijuana. The USA didn't slide down a slippery slope into the Transportation Security Administration after 9/11 - it appeared almost overnight - and then didn't slide down a slippery slope into checking for other things; it has stayed much the same ever since.
The fact that one can imagine a bad future is not the same as the bad future being inevitable; the fact that one can imagine a system being put to different uses doesn't mean it either will be, or that those uses will necessarily be worse, or that they will certainly be maximum-bad. It's your comment about "fear based reasoning" turned to this system instead of to encryption.
You ask "are you really implying that government surveillance doesn't count as a real slippery slope because sometimes activists reverse the trend?" and I'm saying the position "because slippery slopes exist, this system will slide down it and that's the same as being at the bottom of it" and then expecting the reader to accept that without any data, facts, evidence, stats, etc. is low quality unconvincing commenting, but is what makes up most of the comments in this thread.
> "Where? Here's the second paragraph:"
The paragraph which implies it happens to all photos (not just iCloud ones) and immediately alerts the authorities, with no review and no appeal process. There are people in this thread saying "I don't need the FBI getting called on me cause my browser cache smelled funny to some Apple PhD's machine-learning decision" about a system which does not look at browser caches, does not call the FBI, has a review process, and does have an appeal process.
> "Holy crud, I would hope this is the bare minimum."
Why would you hope "the bare minimum" the letter could ask for is something the letter is clearly not asking for? Or that the bare minimum from a company known for its secrecy is openness and transparency? It would be nice if it was, yes. I expect it won't be, because we would all have very different legal systems and companies if laws and company policies were created with metrics to track their effectiveness and specified expiry dates and by default only get renewed if they are proving effective.
> "What else are people criticizing?"
My main complaint is that people are asking us to accept criticism such as "Iraq will use this to murder homosexuals" unquestioningly. But still, to quote from people in this thread: "Apple can (and likely will) say they won't do it and then do it anyway" - despite Apple announcing this in public, they're going to lie about it, and you should just believe me without me supporting this position in any way. "This will lead to black mailing of future presidents in the U.S." - and you should believe that because reasons. "Made for China" - and you should agree because China is the boogeyman (maybe it is; if so, justify why the reader should agree). "It's not Apple. It's the government" - because government bad. "Scan phones for porn, then sell it for profit and use it for blackmail. Epstein on steroids" - because QAnon or something, who even knows? "the obvious conclusion is Apple will start to scan photos kept on device, even where iCloud is not used" - because they said 'obviously', you have to agree or you're clueless, I guess. "I never thought I'd see 'privacy' Apple come out and say we're going to [..] scan you imessages, etc." - and they didn't say that; unless the commenter is a minor, which is against the HN guidelines.
It's very largely unreasoned, unjustified, unsupported, panicky, worst-case fearmongering, even where the underlying concerns could be serious if justified.
> "Nobody who's willing to bring out "think of the children" as a debate killer has ever dropped the argument because they got a concession."
That is probably true, but probably self-supporting. Someone who honestly uses "think of the children" likely thinks the children's safety is not being thought of enough, and by self-selection is less likely to immediately turn around and agree the opposite.
> "It means very little to me that the EU says they care about privacy."
Well, the witch is being drowned despite her protests.
> "What real, tangible measures did they include to make sure that in practice encryption would not be weakened?"
Well, they didn't /ban/ it for a start, which they could have done, as exemplified by Saudi Arabia and FaceTime discussed in this thread, and they didn't explicitly weaken it like the USA did with its strong-encryption export regulations of the 1990s. Those should count for something in defense of their stated position?
I'm not going to push too hard on this, but I do want to quickly point out:
> Well they didn't /ban/ it for a start [...] and they didn't explicitly weaken it
Does not match up with:
> urges the industry to ensure lawful access for law enforcement and other competent authorities to digital evidence, including when encrypted
If you're pushing a company to ensure access to encrypted content based on a warrant, you are banning/weakening E2E encryption. It doesn't matter what they say their intention is/was, or whether they call that an outright ban, I don't view that as a credible defense.
----
My feeling is that we have a lot of evidence from the past and present, particularly in the EU, about how filtering/reporting laws evolve over time (the EU's CSAM filters within the ISP industry are a particularly relevant example here; you can find statements online where EU leaders argue that expanding the system to copyright is a good idea specifically because the system already exists and would be inexpensive to expand). I also look at US programs like the TSA and ICE and I do think their scope, authority, and restrictions have expanded quite a bit over the years. I don't agree that those programs came out of nowhere or that they're currently static.
If you don't see future abuse of this system as credible, or if you don't see a danger of this turning into a general reporting requirement for encrypted content, or if you don't think that it's credible that Apple would be willing to adapt this system for other governments -- if you see that stuff as fearmongering, then fine I guess. We're looking at the same data and the same history of government abuses and we're coming to different conclusions, so our disagreement/worldview differences are probably more fundamental than just the data.
To complain about some of the more extreme claims happening online (and under this article) is valid, but I feel you're extrapolating a bit here and taking some uncharitable readings of what people are saying (you criticize the article for "implying" things about the FBI, and the article doesn't even contain the word FBI). Regardless, the basic concerns (the "chilling effect of surveillance, the chance of slippery slope progression, the nature of proprietary systems, the chance of mistakes and bugs in code or human interception, the blurred line between 'things you own' and 'things you own which are closely tied to the manufacturer's storage and messaging systems'") are enough of a problem on their own. We really don't need to debate whether or not Apple will be willing to expand this system for additional filtering in China.
We can get mad about people who believe that Apple is about to start blackmailing politicians, but the existence of those arguments shouldn't be taken as evidence that the system doesn't still have serious issues.
"Think of the children" will always work, no matter what the context is, no matter what the stats are, and no matter what we do. That does not mean that we should not care about the children, and it does not mean that we shouldn't care about blocking CSAM. We should care about these issues purely because we care about protecting children. If there are ways for us to reduce the problem without breaking infrastructure or taking away freedoms, we should take those steps. Similarly, we should also think about the children by protecting them from having their sexual/gender identities outed against their wishes, and by guaranteeing they grow up in a society that values privacy and freedom where they don't need to constantly feel like they're being watched.
But while those moral concerns remain, the evergreen effectiveness of "think of the children" also means that compromising on this issue is not a political strategy. It's nothing, it will not ease up on any pressure on technologists, it will change nothing about the political debates that are currently happening. Because it hasn't: we've been having the same debates about encryption since encryption was invented, and I would challenge you to point at any advancement or compromise from encryption advocates as having lessened those debates or having appeased encryption critics.
Your mistake here is assuming that anything that technologists can build will ever change those people's minds or make them ease up on calls to ban encryption. It won't.
Reducing the real-world occurrences behind irrational fears doesn't make those fears go away. If we reduce shark attacks at a beach by 90%, that won't make people with a phobia any less frightened at the beach, because their fear is not based on real risk analysis or statistics or practical tradeoffs. Their fear is real, but it's also irrational. They're scared because they see the deep ocean and because Jaws traumatized them, and you can't fix that irrational fear by validating it.
So in the real world, we know that the majority of child abuse comes from people the children already know. We know the risks of outing LGBTQ+ minors to their parents. We know the broader privacy risks. We know that abusers (particularly close abusers) often try to hijack monitoring systems to spy on their victims. We would also, in general, like to see more statistics about how serious the problem of CSAM actually is, and we'd like to know whether or not our existing tools are being used effectively, so we can balance the potential benefits and risks of each proposal against each other.
If somebody's not willing to engage with those points, then what makes you think that compromising on any other front will change what's going on in their head? You're saying it yourself: these people aren't motivated by statistics about abuse; they're frightened of the idea of abuse. They have an image in their head of predators using encryption, and that image is never going to go away, no matter what the real-world stats do and no matter what solutions we propose.
The central fear that encryption critics have is a fear of private communication itself. How can technologists compromise to address that fear? It doesn't matter what solutions we come up with or what the rate of CSAM drops to; those people are still going to be scared of the idea of privacy.
Nobody in any political sphere has ever responded to "think of the children" with "we already thought of them enough." So the idea that compromising now will change anything about how that line gets used in the future just seems naive to me. Really, the problem here can't be solved by either technology or policy; it's cultural. As long as people are frightened of the idea of privacy and encryption, the problem will remain.