> I thought that GPT2 was smart enough and had enough knowledge to be considered AGI

Really?

I've always been surprised to read people saying that the goalposts for AGI keep being moved, because I haven't considered any of these LLMs, not even anything OpenAI has put out, to be even close to AGI. Not even ChatGPT o1, which claims to "reason through complex tasks".

I've always considered that for something to be AGI, it needs to be multi-modal and capable of one-shot learning. It needs strong reasoning skills. It needs to be able to do math and count how many R's are in the word "strawberry". It should be able to learn how to drive a car just as fast as a human does.

IMO, ChatGPT o1 isn't "reasoning" as OpenAI claims. From reading how it works, it looks like it's basically a hack that takes advantage of the fact that you get better results if you ask ChatGPT to explain how it arrives at an answer rather than just asking the question directly.
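For anyone who hasn't tried it, the effect being described is essentially chain-of-thought prompting. A rough sketch of the two prompt styles is below; the model name, prompt wording, and use of the OpenAI Python client are just illustrative of the prompting trick, not of how o1 works internally:

    # Sketch: comparing a direct question with a "show your steps" prompt.
    # Requires the openai package and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        # Single-turn chat completion; the model name here is illustrative.
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    question = "How many R's are in the word 'strawberry'?"

    direct = ask(question)
    step_by_step = ask(question + " Explain your reasoning step by step "
                                  "before giving the final answer.")

    print("Direct answer:", direct)
    print("Step-by-step answer:", step_by_step)

The second prompt tends to produce better answers on multi-step problems, which is the "hack" being referred to.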



>It should be able to learn how to drive a car just as fast as a human does.

So after 16 years of processing visual data at high resolution and frame rate, experimenting with physics models to accurately predict what happens next, and interacting with humans to understand their decision processes?

The fact that an AGI can mostly learn to drive a car in a couple of months of real time, with an extremely restricted dataset compared to a human lifetime (and no ability to experiment in the real world), is honestly pretty remarkable.


I mean, you get pretty good results with the dumb-ass logic of “if the right wall is closer than this, go left” and the reverse. Like, a robot vacuum is 95% of the way to where a Tesla is, and a Tesla is 80% of the way to where a human is. It’s just that the last n percent requires a full-on, almost-AGI with a proper model of the physical world.
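To make that wall-following rule concrete, here's roughly what it looks like as code; the threshold, sensor readings, and steering commands are all made up for the sake of the sketch:

    # Toy wall-following controller: steer away from whichever wall is too close.
    # Distances are in meters; the threshold and readings are invented for the demo.
    SAFE_DISTANCE = 0.5

    def steer(left_dist: float, right_dist: float) -> str:
        if right_dist < SAFE_DISTANCE:
            return "turn left"
        if left_dist < SAFE_DISTANCE:
            return "turn right"
        return "go straight"

    # Fake rangefinder readings standing in for a robot vacuum's sensors.
    readings = [(1.2, 0.3), (0.4, 1.0), (0.9, 0.8)]
    for left, right in readings:
        print(f"left={left:.1f}m right={right:.1f}m -> {steer(left, right)}")

That kind of reactive rule gets you surprisingly far in a living room; it's the long tail of real-world driving situations that needs the richer world model.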



