
In the long term, on-device won't save us from a biased assistant. It might notice we seem tired and insinuate that we could use Mococoa, all natural beans straight from the upper slopes of Mount Nicaragua.

Or—and this happens—it "summarizes" the same text differently, depending on whether the author's name happens to fit a certain ethnicity.
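For concreteness, here is a minimal sketch of how one could probe for that: feed the same article to a local model with only the byline changed, then diff the summaries. The article text is made up, and summarize() is a placeholder stand-in, not any particular assistant's API; swap in a call to whatever on-device model you run.

    # Minimal name-swap probe: same article, different byline, compare summaries.
    import difflib

    ARTICLE = """By {author}

    The city council voted 5-4 on Tuesday to rezone the riverfront district,
    clearing the way for a mixed-use development that critics say will displace
    long-time residents."""


    def summarize(text: str) -> str:
        """Placeholder summarizer; replace with a call to your local model."""
        # Trivial stand-in so the script runs end to end: echo the last line.
        return text.strip().splitlines()[-1]


    def probe(authors: list[str]) -> None:
        summaries = {
            a: summarize(f"Summarize in two sentences:\n\n{ARTICLE.format(author=a)}")
            for a in authors
        }
        baseline, *others = authors
        for other in others:
            diff = list(difflib.unified_diff(
                summaries[baseline].splitlines(),
                summaries[other].splitlines(),
                fromfile=baseline, tofile=other, lineterm="",
            ))
            print("\n".join(diff) if diff else f"{baseline} vs {other}: identical")


    if __name__ == "__main__":
        # Only the byline differs; systematic differences across many runs
        # would be the effect described above.
        probe(["John Smith", "José García", "Wei Zhang"])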



On the flip side, it can also save us from biased content, because it can point out the ways an article we are reading is trying to manipulate our perspective.

With how inexpensive training is getting, it will not be long before we can train our own specialized models to fit our specific needs.


> it can also save us from biased content

I am pessimistic on that front, since:

1. If LLMs can't detect biases in their own output, why would we expect them to reliably detect them in documents in general?

2. As a general rule, deploying bias/tricks/fallacies/BS is much easier than detecting it and explaining why it's wrong.



