
They likely had tools to make certain that advertisers who objected didn't show up next to the latest viral meme (so that Procter & Gamble didn't show up next to anyone using the hashtag #tidepodchallenge).
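
A minimal sketch of what that kind of brand-safety adjacency check might look like, assuming a hypothetical per-advertiser hashtag blocklist (every name here is invented for illustration, not Twitter's actual system):

    # Hypothetical brand-safety check: an advertiser registers hashtags it
    # never wants its ads to appear next to, and the ad server skips those
    # placements.
    import re

    EXCLUDED_HASHTAGS = {  # per-advertiser exclusion lists (made up)
        "procter_and_gamble": {"#tidepodchallenge"},
    }

    HASHTAG_RE = re.compile(r"#\w+")

    def placement_allowed(advertiser_id: str, tweet_text: str) -> bool:
        """Return False if the tweet uses any hashtag the advertiser excluded."""
        blocked = {t.lower() for t in EXCLUDED_HASHTAGS.get(advertiser_id, set())}
        tags = {t.lower() for t in HASHTAG_RE.findall(tweet_text)}
        return not (tags & blocked)

    print(placement_allowed("procter_and_gamble",
                            "Don't try the #TidePodChallenge"))  # False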

They also likely had tools for advertisers to contact their account managers and say "this screenshot from one of our testers shows our content next to {objectionable figure} - make sure this doesn't happen again," and have it acted on.

The account managers were likely quite responsive if {Jewish-owned company, verified and advertising} said that customers asking for support were getting antisemitic replies.

The advertisers had someone to contact to make things right - and they were OK with that.

This need not be automatic.

https://twitter.com/CaseyNewton/status/1591608302076858371

> Getting word that a large number of Twitter contractors were just laid off this afternoon with no notice, both in the US and abroad. Functions affected appear to include content moderation, real estate, and marketing, among others

Note the "content moderation" and "marketing" categories of employees.



> This need not be automatic.

So it's not content moderation to enforce the rules of the system, just content moderation to appease advertisers.

That makes it so much clearer why so much content gets reported and stays online, with responses that there's no violation despite it being a clear violation...

I've reported so many tweets over the last couple of years - people threatening others with violence, posting graphic videos of animals being killed, or overly graphic videos from the war in Ukraine - and I just get replies that there is no violation. I assume it's an automated response unless many people report the same content.


It's a mostly human process, and the humans that pay money are the ones most likely to be heard first.

Many social media platforms have experimented with purely automated systems and had difficulty with people who likewise reported everything they didn't like, which resulted in content creators getting banned for non-reasons.

This leads to needing a human to check things - and humans don't scale.
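
As a rough sketch of that division of labor, assuming a hypothetical triage pipeline where automation only acts on clear-cut cases and report volume alone never triggers removal (names and thresholds are invented):

    # Hypothetical report-triage flow: a classifier handles obvious cases,
    # everything ambiguous goes to a human review queue.
    from dataclasses import dataclass

    AUTO_REMOVE_THRESHOLD = 0.95   # invented thresholds
    AUTO_DISMISS_THRESHOLD = 0.05

    @dataclass
    class Report:
        tweet_id: str
        report_count: int
        violation_score: float  # 0.0-1.0, from some classifier

    human_review_queue = []

    def triage(report: Report) -> str:
        # Report count alone never removes content - mass reporting is
        # exactly how creators end up banned for non-reasons.
        if report.violation_score >= AUTO_REMOVE_THRESHOLD:
            return "auto_remove"
        if report.violation_score <= AUTO_DISMISS_THRESHOLD:
            return "auto_dismiss"
        human_review_queue.append(report)  # humans are the bottleneck
        return "needs_human_review"

    print(triage(Report("123", report_count=500, violation_score=0.5)))
    # -> needs_human_review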

Look at the stories about YouTube content moderation - both the humans involved and the false positives when humans aren't involved - for examples of "why we can't have nice things."



