Hacker News

I'm surprised the companies fascinated with AGI don't devote some resources to neuroscience - it seems really difficult to develop a true artificial intelligence when we don't know much about how our own works.

It's not even clear whether LLMs/Transformers are theoretically capable of AGI; LeCun is famously sceptical of this.

I think we still lack decades of basic research before we can hope to build an AGI.



Admitting you need to do basic research is admitting you're not actually <5 years from total world domination (so give us money now).


We have yet to see a purely theoretical roadblock between LLMs and AGI. The way things are going, I wouldn't be surprised if an existing LLM architecture (whether fully transformer-based or one of the hybrids) could hit AGI with the right scale, training and some scaffolding.

On the other hand, extracting usable insights from neuroscience? Not at all easy. The human brain does not lend itself to instrumentation.

If the average human had 1.5 Neuralink implants in their skull, and raw neural data were cheap and easy to source? You bet someone would try to use it for AI tech. As is? We're in the "bitter lesson" regime: we can't extract usable insights from neuroscience fast enough for them to matter much.


Many of the people in control of the capital are gamblers rather than researchers.


If you want to create artificial human intelligence you need to know how the brain works. If you're creating alien intelligence the brain doesn't matter.


Why should they care as long as selling shares of a company selling access to a chatbot is the most profitable move?



