I don't think we're at a plateau. There's still a lot GPT-4 can't do.
Given the progress we've seen so far with scaling, I think the next iterations will be a lot better. It might take 10x or even 100x the scale, but with increased investment and better hardware, that's not out of the question.
And now we have LLMs feeding on their own outputs for training data (which may turn out to be either good or bad). I don't expect that to produce broad, AGI-like gains in the short term; I'd bet it ends up being a real challenge.