
LLMs have randomness baked into every single token they generate. If you run an LLM locally and set the temperature low, it immediately feels boring to get the same reply every time. It's the randomness that makes them feel "smart". To put it another way, randomness is required for the illusion of intelligence.
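The temperature knob mentioned above scales the model's logits before sampling. A minimal sketch of that mechanism (the function name and toy logits are illustrative, not from any particular library): at temperature 0 decoding is greedy and deterministic, while higher temperatures flatten the distribution and make replies more varied.

```python
import numpy as np

def sample_token(logits, temperature=1.0, rng=None):
    """Sample a token index from logits with temperature scaling."""
    rng = rng or np.random.default_rng()
    if temperature <= 0:
        # Greedy decoding: always the single most likely token.
        return int(np.argmax(logits))
    scaled = np.array(logits, dtype=float) / temperature
    scaled -= scaled.max()          # numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()            # softmax over scaled logits
    return int(rng.choice(len(probs), p=probs))

toy_logits = [2.0, 1.0, 0.1]
# Temperature 0: deterministic, the same token every call.
print(sample_token(toy_logits, temperature=0))
# Higher temperature: flatter distribution, more varied picks.
print(sample_token(toy_logits, temperature=1.5))
```

With temperature 0 the argmax token is returned every time, which is the "boring, same reply" behavior the comment describes.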


I'm fully aware of that. However, this illusion is a dangerous mirage; it doesn't equate to reality. In some cases that's OK, but in most cases it's not, especially in the context of business operations.



