I disagree with your premise that 3 years ago “people” knew about hallucinations or that these models shouldn’t be trusted.

I would argue that today most people do not understand that, and actually take LLM output at face value.

Unless maybe you mean people = software engineers who at least dabble in some AI research/learning on the side.
