
Does slime mold reason?

Yes, Hinton can be wrong, and he is wrong about many things, like his misunderstanding of Chomsky and language.

But I also think he has spent thousands of hours testing these systems scientifically.

Your last sentence puts a lot of words in people's mouths. But to continue down that line: the distinction between fake reasoning and actual reasoning sounds like the Chinese Room. Is that the argument you are making?

We don't understand our own mental processes well enough, so I try not to anthropomorphize reasoning and cognition.



> Your last sentence puts a lot of words in people's mouths.

Well, it’s the most common sentiment I see both here and (before I gave up) in the AI-centred parts of Reddit.

It’s not quite the Chinese Room, since LLMs can’t even simulate reasoning very well. So there’s no need to debate the distinction between ‘fake reasoning and actual reasoning’ — there may or may not be a difference, but it’s not the point I’m making.

As for Hinton: I’m sure he has. But inventors are often not experts on their own creations/discoveries, and are probably just as prone to FUD and panic in the face of surprising developments as the rest of us. No one predicted that autoregressive transformers would get us this far, least of all the experts whose decades of work led us to this point.



