> Twitter's community notes are quite effective but they are simply very low bandwidth

That's why they are effective. I review Community Notes sometimes, and the right assessment is almost always "no note needed". A lot of attempted CNs are just arguing with the poster's opinion, which belongs in replies. CN is meant for correcting cases where something is objectively false or missing critical context, and it does quite well at that. People are very good at spotting edited videos, mis-dated photos and so on, which is the bread and butter of real fact checking. Not very exciting, but useful. Facebook could do worse than just reimplementing the system. It's certainly far better than letting activist-run NGOs be editors.



If said activists are part of a company that approves of their activities, why isn’t their censorship legitimate? Commenters/posters are free to take their comments and posts somewhere else. Why don’t the “censors” get a say on what goes up on their platform?


The activists aren't a part of the company. Facebook outsources fact checking to third parties.


They still control it, and it’s still their right as a corporation. I’m asking what law, or what part of the US Constitution, says a company has to allow all points of view. That’s a moral call, and I can see people arguing it, but it is not illegal, or immoral from the point of view of the company or of those who say free speech/property rights apply to all.


Is anyone claiming it's illegal or that the constitution demands that?


At least an NGO will likely have a consistent point of view. The CN algorithm, apparently, requires “agreement from contributors who have a history of disagreeing.” Let’s say we have an entirely hypothetical scenario where the two primary political groups arguing over notes are a milquetoast centrist party and a far-right party susceptible to conspiracy theories; accordingly, any notes that are agreed upon will either be extremely obvious (“the sky is blue” but not, perhaps, “the president’s wife is not a man”) or will tilt center-right. That seems far from objective to me. And that’s to say nothing of thumbs-on-the-scale tweaks to the algorithm by the platform owner, which will be undetectable, or changes to the political makeup of the editors.

I don’t think there’s any way to algorithm your way out of non-trivial fact-checking. Tech is not the solution to these kinds of fundamentally social problems.

(I should add that the best-case scenario here is an emergent and stable cabal of intellectually-rigorous editors, perhaps of varying political persuasions, similar to what happened to Wikipedia. But that’s barely different from fact-checking by some NGO.)
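For what it's worth, the "agreement from contributors who have a history of disagreeing" mechanism can be illustrated with a toy model. The published description of Community Notes ranking uses matrix factorization: each rating is modeled as a note intercept plus a rater-factor times note-factor term, so that agreement explainable by shared leaning gets absorbed into the factor term, and only the residual intercept counts as helpfulness. The sketch below is my own synthetic illustration of that idea, not X's actual open-source code; all names, thresholds, and data are made up.

```python
import random

# Toy sketch of bridging-based ranking: model each rating as
#   rating ≈ note_intercept + rater_factor * note_factor
# The intercept captures helpfulness *after* explaining away agreement
# driven by a shared one-dimensional "polarity". Synthetic data only.

random.seed(0)

def fit(ratings, n_raters, n_notes, epochs=2000, lr=0.05, reg=0.1):
    """ratings: list of (rater, note, value) with value in {-1, +1}."""
    ni = [0.0] * n_notes                                       # note intercepts
    nf = [random.uniform(-0.1, 0.1) for _ in range(n_notes)]   # note polarity
    rf = [random.uniform(-0.1, 0.1) for _ in range(n_raters)]  # rater polarity
    for _ in range(epochs):
        for u, n, r in ratings:
            err = r - (ni[n] + rf[u] * nf[n])
            ni[n] += lr * (err - reg * ni[n])
            nf[n] += lr * (err * rf[u] - reg * nf[n])
            rf[u] += lr * (err * nf[n] - reg * rf[u])
    return ni, nf, rf

# Two camps of raters (0-3 vs 4-7) and two notes:
# note 0 is rated helpful by everyone; note 1 splits along camp lines.
ratings = []
for u in range(8):
    ratings.append((u, 0, +1))                   # cross-camp agreement
    ratings.append((u, 1, +1 if u < 4 else -1))  # partisan split

ni, nf, rf = fit(ratings, n_raters=8, n_notes=2)
for n in range(2):
    verdict = "show" if ni[n] > 0.4 else "hold"
    print(f"note {n}: intercept={ni[n]:+.2f} -> {verdict}")
```

The partisan split on note 1 gets soaked up by the polarity factors, leaving its intercept near zero, while note 0's cross-camp support lands in the intercept, so only note 0 clears the (arbitrary, here 0.4) display threshold. This is exactly the GP's point: the mechanism surfaces only what both camps endorse, and anything contested, however true, stays hidden.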



