No, the prompt, no matter the amount of detail, is always strictly better than the model output. It has to be, because the whole point of reading and writing is _communication_. Communication between TWO people.
This AI interlocutor is like a permanent cataract. It always makes it harder to see, never easier.
I don't think this is a great analogy. These things can pull in preexisting explanations and such. It doesn't just use the prompt, so the prompt isn't a strict ceiling.
It's a translator.
Very useful when the message needs to be in a specific format or if you're talking to a computer, or if you need help with "grammar" or "protocol" (in all its forms).
People write comments on photos I post to social media in languages I have partial comprehension of, such as Japanese and Portuguese. I use Copilot to translate their messages, ask specific questions about particular words, and get explanations of idioms and context. When I reply, I ask it to translate my message, then run that translation back through another LLM to try to catch problems. Sometimes I ask it to make an edit; sometimes I make an edit myself.
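That round-trip check (translate out, then translate back, ideally through a second model, to catch drift before posting) can be sketched roughly like this. `llm_translate` here is a hypothetical stand-in for whatever model call you actually use (Copilot, an API, etc.); it's stubbed with a tiny phrasebook so the sketch runs on its own.

```python
# Hypothetical stub for a real LLM/translation call -- swap in your own.
PHRASEBOOK = {
    ("en", "pt"): {"Thank you for the kind words!": "Obrigado pelas palavras gentis!"},
    ("pt", "en"): {"Obrigado pelas palavras gentis!": "Thanks for the kind words!"},
}

def llm_translate(text: str, src: str, dst: str) -> str:
    """Placeholder translator; a real version would call a model."""
    return PHRASEBOOK[(src, dst)][text]

def round_trip_check(reply: str, src: str, dst: str) -> tuple[str, str]:
    """Translate a reply, then translate it back so obvious drift
    in meaning can be eyeballed before posting."""
    outgoing = llm_translate(reply, src, dst)
    back = llm_translate(outgoing, dst, src)
    return outgoing, back

outgoing, back = round_trip_check("Thank you for the kind words!", "en", "pt")
print(outgoing)  # what you would post
print(back)      # compare against your original before sending
```

The back-translation won't match the original word for word (and shouldn't need to); the point is only to catch the cases where the meaning came back noticeably wrong.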
Often I use LLMs to "have a conversation with a language" -- for example, researching whether there is a cognate relationship between the word "woo," used in English to describe the supernatural, and the similarly pronounced character 巫 (wū) in Chinese, which appears in words like 女巫 (witch -- that first character means "woman") but is not the "wu" in 武侠 (wǔxiá -- martial arts).
I am sure one of these days I am going to embarrass myself, but with only partial comprehension I couldn't do that without the help of an LLM.