I have actually worked in this area. I like a lot of Yishan's other writing but I find this thread mostly a jumbled mess without much insight. Here are a few assorted points:
>In fact, once again, I challenge you to think about it this way: could you make your content moderation decisions even if you didnʻt understand the language they were being spoken in?
I'm not sure what the big point is here, but there are a few parts to how this works in the real world:
1) Some types of content removal do not need you to understand the language: visual content (images/videos), legal takedowns (DMCA).
2) Big social platforms contract with people around the world in order to get coverage of various popular languages.
3) You can use Google Translate (or other machine translation) to review content in some languages that nobody working in content moderation understands.
But some content that violates the site's policies can easily slip through the cracks if it's in a less widely spoken language. That's just a cost of doing business. The fact that the language is less popular will limit the potential harm, but it's certainly not perfect.
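The three paths above could be sketched as a simple routing function. This is purely illustrative; the queue names, field names, and language list are my own assumptions, not any platform's real system:

```python
# Hypothetical sketch of routing flagged posts for review, following the
# three real-world paths described above. All names are illustrative.

# Languages for which the platform has contracted native-speaker reviewers.
NATIVE_COVERAGE = {"en", "es", "pt", "ar", "hi"}

def route_flagged_post(post: dict) -> str:
    """Decide which review queue a flagged post should go to."""
    # 1) Language-independent removals: visual content and legal
    #    takedowns (DMCA) don't require understanding the text at all.
    if post.get("media_type") in {"image", "video"} or post.get("dmca_notice"):
        return "language_independent_queue"
    # 2) Popular languages: route to contracted reviewers who speak them.
    if post.get("lang") in NATIVE_COVERAGE:
        return "native_reviewer_queue"
    # 3) Everything else: machine-translate the text and review the
    #    translation, accepting that some violations will slip through.
    return "machine_translation_queue"

print(route_flagged_post({"media_type": "video"}))        # → language_independent_queue
print(route_flagged_post({"media_type": "text", "lang": "es"}))   # → native_reviewer_queue
print(route_flagged_post({"media_type": "text", "lang": "haw"}))  # → machine_translation_queue
```

The fall-through ordering mirrors the cost structure: the cheapest, language-independent checks come first, and machine translation is the catch-all for the long tail of languages nobody on staff speaks.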
>Hereʻs the answer everyone knows: there IS no principled reason for banning spam. We ban spam for purely outcome-based reasons:
>
>It affects the quality of experience for users we care about, and users having a good time on the platform makes it successful.
Well, that's the same principle that underlies all content moderation: "allowing this content is more harmful to the platform than banning it". You can go into all the different reasons why it might be harmful but that's the basic idea and it's not unprincipled at all. And not all spam is banned from all platforms--it could just have its distribution killed or even be left totally alone, depending on the specific cost/benefit analysis at play.
You can apply the same reasoning to every other moderation decision or policy.
The main thrust of the thread seems to be that content moderation is broadly intended to ban negative behavior (abusive language and so on) rather than to censor particular political topics. To that I say, yeah, of course.
FWIW I do think that the big platforms have taken a totally wrong turn in the last few years by expanding into trying to fight "disinformation", and that's led to some specific policies that are easily seen as political (e.g. policies about election fraud claims or covid denialism). If we're just talking about staying out of this business then sure, give it a go. High-level blabbering about "muh censorship!!!" without discussion of specific policies is what you get from people like Musk or Sacks, though, and that's best met with an eye roll.