Then there's the fact that the Turing test has always said as much about the gullibility of the human evaluator as it has about the machine. ELIZA was good enough to fool normies, and current LLMs are good enough to fool experts. It's just that their alignment keeps them from trying very hard.