
I don't think we're at a plateau. There's still a lot GPT-4 can't do.

Given the progress we've seen so far with scaling, I think the next iterations will be a lot better. It might take 10x or even 100x the scale, but with increased investment and better hardware, that's not out of the question.
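For a rough sense of what "10x or 100x scale" means in compute, here's a back-of-the-envelope sketch using the commonly cited C ≈ 6·N·D training-FLOPs approximation and the Chinchilla heuristic of roughly 20 tokens per parameter. The baseline size is a made-up number for illustration, not a figure about any actual model:

  # Back-of-the-envelope training compute, assuming C ~= 6 * N * D FLOPs
  # and roughly 20 training tokens per parameter (Chinchilla heuristic).
  # The 1e12-parameter baseline is an illustrative assumption.

  def train_flops(n_params: float, tokens_per_param: float = 20.0) -> float:
      n_tokens = tokens_per_param * n_params
      return 6.0 * n_params * n_tokens

  base = 1e12  # hypothetical baseline parameter count
  for scale in (1, 10, 100):
      flops = train_flops(scale * base)
      print(f"{scale:>3}x params -> ~{flops:.2e} FLOPs "
            f"({flops / train_flops(base):.0f}x baseline compute)")

The point being: 10x the parameters (with tokens scaled to match) is roughly 100x the training compute, so "increased investment and better hardware" is doing a lot of work in that sentence.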



I thought we'd already seen diminishing returns on benchmarks with the last wave of foundation models.

I doubt we'll see a linear improvement curve with respect to parameter count.
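To make "non-linear" concrete, here's a toy sketch with a Chinchilla-style power-law loss. The coefficients are roughly the ones reported by Hoffmann et al. (2022) and are used purely for illustration, not as a forecast for any particular model:

  # Chinchilla-style power law: L(N, D) = E + A / N^alpha + B / D^beta.
  # Coefficients roughly follow Hoffmann et al. (2022); treat the exact
  # numbers as illustrative.

  def loss(n_params: float, n_tokens: float,
           E: float = 1.69, A: float = 406.4, B: float = 410.7,
           alpha: float = 0.34, beta: float = 0.28) -> float:
      return E + A / n_params**alpha + B / n_tokens**beta

  prev = None
  for n in (1e9, 1e10, 1e11, 1e12):
      cur = loss(n, 20 * n)  # scale tokens with params
      gain = "" if prev is None else f"  (gain from 10x: {prev - cur:.3f})"
      print(f"N = {n:.0e}: loss ~ {cur:.3f}{gain}")
      prev = cur

Each 10x in parameters buys a smaller absolute drop in loss, and benchmark scores are typically an even noisier function of loss, which is consistent with the diminishing returns you're describing.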


And now we have LLMs being trained on their own outputs (which may turn out to be good or bad). I don't see that leading to broad (as in AGI) capability gains in the short term. I'd bet it's a real challenge.
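If by "self feeding" you mean training on model-generated data, the usual worry is a loss of diversity over generations (sometimes called model collapse). A toy stand-in, fitting a Gaussian to its own samples over and over, gives a feel for it; it's an analogy, not a claim about any real LLM pipeline:

  # Toy "self-feeding" loop: each generation is fit only to samples drawn
  # from the previous generation's fitted model. With finite samples the
  # estimated spread drifts and, over many rounds, tends to shrink.
  import numpy as np

  rng = np.random.default_rng(0)
  mu, sigma = 0.0, 1.0  # the original "real data" distribution
  for gen in range(1, 31):
      synthetic = rng.normal(mu, sigma, size=50)     # generate from current model
      mu, sigma = synthetic.mean(), synthetic.std()  # refit on own output
      if gen % 10 == 0:
          print(f"generation {gen:2d}: mean={mu:+.3f}, std={sigma:.3f}")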



