> What they are is a value judgement we assign to the output of an LLM program. A "hallucination" is just output from an LLM-based workflow that is not fit for purpose.

In other words, hallucinations are to LLMs what weeds are to plants.
