So are people. People are trained on existing data and learn to reproduce known solutions. They also take this to the meta level: a scientist or engineer is trained on methods for approaching new problems that have yielded success in the past. AI does this too, and I'm not sure there is actually a distinction here.
Human thought is not a solved problem, though. It is clear that humans can abandon conventional patterns and try a genuinely novel approach instead, something our current LLM implementations have not demonstrated.