Hacker News

So are people. People are trained on existing data and learn to reproduce known solutions. They also take this to the meta level—a scientist or engineer is trained on methods for approaching new problems which have yielded success in the past. AI does this too. I’m not sure there is actually a distinction here.


Of course there is. Humans can pattern match as a means to save time. LLMs pattern match as their only mode of communication and “thought”.

Humans are also not as susceptible to context poisoning as LLMs are.


Human thought is associative (pattern matching) as well. This is very well established.


Human thought is not a solved problem. It is clear that humans can abandon conventional patterns and try a novel approach instead, which current LLM implementations do not demonstrate.


There is a difference between extrapolating from just a few examples and interpolating between trillions of examples.
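The interpolation/extrapolation distinction can be made concrete with a toy sketch (my own illustration, not from the thread): a model that only interpolates between stored examples is accurate inside its training range but degrades badly outside it.

```python
def f(x):
    return x * x  # the "true" function the training examples come from

# "Training data": dense samples of f on [0, 2]
xs = [i / 100 for i in range(201)]
ys = [f(x) for x in xs]

def interp_model(x):
    """Piecewise-linear interpolation between the two nearest samples.
    Outside the sampled range it can only extend the last segment."""
    if x <= xs[0]:
        i = 0
    elif x >= xs[-1]:
        i = len(xs) - 2
    else:
        i = max(j for j in range(len(xs) - 1) if xs[j] <= x)
    x0, x1 = xs[i], xs[i + 1]
    y0, y1 = ys[i], ys[i + 1]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

inside = abs(interp_model(1.505) - f(1.505))  # query within the training range
outside = abs(interp_model(5.0) - f(5.0))     # query far outside it
print(inside, outside)  # inside error is tiny; outside error is large
```

The in-range error here is on the order of 1e-5, while the out-of-range prediction misses by several units, because the model can only continue the last local trend it has seen.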




