You don’t like it when they release research, you don’t like it when research leaks, you don’t like it when research is suppressed. Hard for Meta to do anything right on this topic.
I’m just saying that some companies might release more information if the reaction weren’t always adversarial. It’s not just Meta. There’s a constant demand for outrage against big companies.
I don't want to beat a dead horse, since sibling commenters have covered this, but I'd implore you to imagine the spectrum of reactions which Meta _could_ have had when discovering their research indicated they were having a negative impact on people.
Some of the reactions on that spectrum would lead to greater human flourishing and well-being; others would lead to the opposite. Now think about the reaction they actually _did_ have. Where on that spectrum does it fall?
Zooming out, how have they reacted to similar circumstances in the past when their own internal research or data indicated a negative impact on people?
The continued "outrage" exists because they've exhibited a recurring pattern across many such occurrences.
I think if the research hadn't been suppressed, and had instead been released alongside real, substantive changes to improve child safety, it might have been seen as Meta finally deciding to do something about the problem.
It's also worth pointing out that this comes hot on the heels of the leaked internal memo about AI chatbots and children [1], so people might not be inclined to give them the benefit of the doubt at the moment.
We also don't like it when this happens: "their boss ordered the recording of the teen’s claims deleted, along with all written records of his comments."