A better question is: given a significant corpus of complex functionality, can it implement complex code in a language it knows, but in which it has only seen lower-complexity code?
Can it transfer knowledge across languages with shared underlying domains?
I think, given that it's been trained on everything ever written, we should suppose the answer is no: apparent transfer can't be distinguished from recall of something it has already seen.
It has been possible for the last century to build an NN-like system: it's a trivial optimization problem.
What was lacking was the 1 exabyte of human-produced training data necessary to bypass actual mechanisms of intelligence (procedural-knowledge generation via exploration of one's environment, etc.).
The implication here is that GPT is just brute-force memorizing and can't actually work from first principles to solve new problems that are mere extensions or variations of concepts it should already know from the training data it has seen.
On the other hand, even if that's true, GPT is still extremely useful, because 90%+ of coding and other tasks are just grunt work that it can handle. GPT is fantastic for data processing, interacting with APIs, etc.
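To be concrete about the kind of grunt work meant here, a minimal sketch: pull a JSON list from an API and filter it. The endpoint and the field names (active, id, name) are invented purely for illustration.

    import json
    import urllib.request

    def fetch_active_users(url):
        """Download a JSON list of users and keep only the active ones."""
        with urllib.request.urlopen(url) as resp:
            users = json.load(resp)
        return [u for u in users if u.get("active")]

    if __name__ == "__main__":
        # Hypothetical endpoint, used only as an example.
        for user in fetch_active_users("https://example.com/api/users"):
            print(user["id"], user.get("name", ""))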
No, the implication is that most of us fake it until we make it. And the Peter Principle says we're all always faking something. My comment was just about humanity; ChatGPT isn't worth writing about.
We aren't state machines. We are capable of conscious reasoning in a way that GPT, or any computer, is not.
We can understand our own limitations, know what to research and how, and follow a process to write new code to solve problems we have never encountered.
Our training set trains our learning and problem-solving abilities, not a random forest.
(Looks down at dynamic programming problem involving stdin/stdout and combining two data structures).
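(For a concrete stand-in for that kind of problem, a minimal sketch: a dynamic-programming task that reads two sequences from stdin and combines them. Longest common subsequence is used here purely as an illustrative example, not the actual exercise referenced above.)

    import sys

    def lcs_length(a, b):
        """Length of the longest common subsequence of a and b, via DP."""
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, x in enumerate(a, 1):
            for j, y in enumerate(b, 1):
                dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
        return dp[len(a)][len(b)]

    if __name__ == "__main__":
        # Two whitespace-separated sequences, one per line on stdin.
        first = sys.stdin.readline().split()
        second = sys.stdin.readline().split()
        print(lcs_length(first, second))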