The criticism that people shouldn't anthropomorphize AI models that are deliberately and specifically built to replicate human behavior is already so tired. I think we need to accept that human traits will no longer be unique to humans (if they ever were, once you expand the analysis to non-human species), and that attributing these emergent traits to non-humans is justified. "Hallucination" may not be the optimal metaphor for LLM falsehoods, but some humans absolutely do spout bullshit in the same way that LLMs do: the same sort of inaccurate responses generated from the same loose past associations.


People like that are often schizophrenic.



