cainxinth on Dec 5, 2024 | on: AI hallucinations: Why LLMs make things up (and ho...
> What they are is a value judgement we assign to the output of an LLM program. A "hallucination" is just output from an LLM-based workflow that is not fit for purpose.
In other words, hallucinations are to LLMs what weeds are to plants.