
I am very confused by this comment. There is no gatekeeping in the ML/AI community. Ideas flow freely (albeit within the confines of several major Discord servers, or so it seems). Whether the author of an idea has formal training in ML and adjacent disciplines or not, whether it's published on arXiv or not, it doesn't matter - it'll be adopted if it works and/or makes it easier for people to run their GPT waifu/baby AGI prototype.

That said, new open foundation models sized 7B and over are still fairly rare. If someone goes to the effort of creating one, and especially if it has some sort of edge over Llama 2 7B, it's not unreasonable to expect an arXiv paper to be released about it.



Isn't token completion incapable of representing AGI? AGIs need the ability to perform internal thought and deliberation, and they probably also need to be bottom-up rather than top-down.


You can't judge whether something is AGI from how it works; that just leads to goalpost-moving. AGI is AGI if it can do certain things, regardless of whether it's token-based, top-down, or anything else.


> Isn't token completion incapable of representing AGI?

Given the absence of a validated model (or even a usable operational definition) of general intelligence, who knows? AGI might as well be an empty marketing buzzword; it isn't something about which falsifiable fact claims can be made.

> AGIs need the ability to perform internal thought and deliberation

Systems built around LLMs can do something like this via reflection, a technique used in constructing agent-based simulations with LLMs as the engine.
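
For the curious, here's a minimal sketch of what a reflection step can look like, roughly in the spirit of the "Generative Agents" paper (Park et al., 2023). Note that complete() is just a placeholder for whatever LLM completion API you have on hand, and the flat memory list is a simplification; none of this is a real library:

    # Minimal reflection loop: the model deliberates over its own recent
    # memories and stores the distilled conclusions back into memory.
    memory: list[str] = []  # chronological log of observations and reflections

    def complete(prompt: str) -> str:
        # Placeholder: plug in your LLM completion API call here.
        raise NotImplementedError

    def observe(event: str) -> None:
        memory.append(event)

    def reflect(recent_n: int = 20) -> None:
        # "Internal thought" rendered as extra tokens: summarize recent
        # memories into higher-level insights and remember those too.
        recent = "\n".join(memory[-recent_n:])
        prompt = (
            "Here are your recent observations:\n"
            f"{recent}\n\n"
            "What high-level conclusions can you draw from them? "
            "List up to three insights, one per line."
        )
        for insight in complete(prompt).splitlines():
            if insight.strip():
                memory.append(f"[reflection] {insight.strip()}")

The point isn't the specific prompt; it's that deliberation can happen as an outer loop around token completion rather than inside the model itself.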


I see what you mean. In this case, I was talking about the LLM being a component of the AGI rather than the whole AGI all by itself.



