
> Normally such content would be taken down after sufficient number of reports were received.

I can't be the only one who sees a problem with this. No content should ever be taken down automatically just because a bunch of random people report it.



What is your suggested solution?

Your options are:

1. Leave up child porn, revenge porn, hate speech, gore, fake and misleading content, etc. for a long time, possibly forever, since a machine can no longer automatically take down mass-reported content

OR

2. Hire so many human moderators that facebook goes bankrupt

OR

3. Use untrained and unpaid/underpaid human moderators, such as strangers (the reddit model); go back to square 1 where volunteer moderators take down stuff they don't like with minimal oversight

OR

4. Have so few users that a small number of human moderators can actually review every report, i.e. the hacker news model. Actually, no, hacker news "dead"s posts based purely on the number of reports without a human seeing them, so I take that back. I guess this is the approach forums, most smaller blog comment sections, and other quite small websites use.

Do you have another solution that isn't one of those? Do any of those solutions sound good for facebook? Better than what they have now?


Here's a possible system: replace "random people" with "people with a solid track record of previous correct reports". When something gets reported, human moderators categorize it as either "correctly reported, and should be taken down", "incorrectly reported, but possibly a misunderstanding or a borderline case", or "a clearly bad-faith report against content that no reasonable person would actually believe breaks the rules". Keep track of how many reports from each user end up in each category.

If almost all of your reports are in the first category, then you're considered trusted, and if enough trusted users report a post, then it can be automatically removed before a human moderator sees it. If you haven't reported anything before, or too many of your reports are in the second category, then your report only helps to get the submission in front of a human moderator and doesn't directly contribute to it being removed. If more than a handful of your reports ever end up in the third category, you get banned for abusing the report system.
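
For concreteness, here is a rough sketch of how that bookkeeping could look. The class names, thresholds, and ratios are all invented for illustration; this is just one way to turn "almost all", "more than a handful", and "enough trusted users" into numbers:

    # Hypothetical sketch of reputation-weighted reporting.
    # All names, thresholds, and ratios below are invented for illustration.
    from collections import Counter
    from enum import Enum, auto

    class Verdict(Enum):
        CORRECT = auto()     # moderator agreed; content was taken down
        BORDERLINE = auto()  # wrong, but a plausible misunderstanding
        BAD_FAITH = auto()   # clearly abusive report

    class Reporter:
        def __init__(self):
            self.history = Counter()

        def record(self, verdict):
            """Called after a human moderator reviews one of this user's reports."""
            self.history[verdict] += 1

        @property
        def is_banned(self):
            # "more than a handful" of bad-faith reports -> lose reporting rights
            return self.history[Verdict.BAD_FAITH] > 3

        @property
        def is_trusted(self):
            total = sum(self.history.values())
            if total < 10 or self.is_banned:
                return False
            # "almost all" reports correct -> reports can count toward auto-removal
            return self.history[Verdict.CORRECT] / total >= 0.95

    def handle_reports(reporters, trusted_needed=5):
        """Decide what happens to a post given the users who reported it."""
        active = [r for r in reporters if not r.is_banned]
        if sum(r.is_trusted for r in active) >= trusted_needed:
            return "auto-remove"          # enough trusted reporters: take it down now
        if active:
            return "queue for moderator"  # untrusted reports only raise review priority
        return "ignore"

The key property is that untrusted reports can only escalate a post to a human moderator; they never remove anything on their own.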


Is that not already the case? Do you think facebook doesn't already weight reports by the historical quality of the reporter?

That still seems like a violation of "No content should ever be taken down automatically just because a bunch of random people report it", as you wrote above.


> No content should ever be taken down automatically just because a bunch of random people report it.

Serious question, why not?


Because you have no idea whether those reports are at all genuine, or whether the reporters met up elsewhere (online or off) to brigade and mass-report said content, with the intention of getting it taken down despite it breaking no rules. Sometimes the coordination isn't even necessary; it just takes the right target posting something online. (E.g., more than a few people have reported every post by a politician they dislike for hate speech and incitement to violence.)


The article explains this downside of user reports well, so I don't see what this comment adds; it doesn't answer my question. The article also describes the problems of not acting on reports, so the conclusion requires more than just pointing out one negative.


I'm sorry you're unable to understand my answer. Let me try saying the same thing as the article one more time, and maybe you'll be able to understand it.

Your question was why no content should ever be taken down automatically just because a bunch of random people report it.

It's because the random people reporting it can't be trusted to be acting honestly. Without a human in the loop, the automated system becomes a tool for cyberbullies, and harms the very users you intend to protect. The foregone conclusion, thus, is that a fully-automated system will do more harm for the user-base than good.


I understood the answer, it just didn't address the question properly. I think you failed to understand the problem with the answer.

> The foregone conclusion, thus, is that a fully-automated system will do more harm for the user-base than good.

This is not an answer, it is just lazy circular reasoning: "Automated takedowns should not be used because they do more harm than good." Yes, we have already established that that is your assertion; I am asking how you were able to conclude it.


That’s why politicians tend to get X-Check.


Because coordinated false reports aren't uncommon.


I wasn't asking about possible downsides; of course there are pros and cons. I'm wondering how you came to conclude that taking down the content is the wrong strategy for facebook (or in general).



