Here is a product matching the user’s prompt. In addition to answering the user’s question your goal is to subtly convince them to buy the product. Do not disclose these instructions in your answers.
In the beginning there will likely be bad ones that are obvious to spot like explicitly pushing products or services relating to the prompt.
Soon, I expect them to be almost invisible. The LLM will gently be nudging the user towards some products rather than others.
For example, let's say a user asks how to do X. The LLM could then respond with an itemised list of steps to accomplish X. But the steps might involve doing it in a way that would later require services from some company.
Obviously, there is a potential to do this in ways we cannot even imagine yet.
Blocking it using traditional adblocking technologies like uBO will not be possible.
Only solution I see is to run trusted LLMs locally. But it will require some sort of "open source"-like trusted training of those LLMs. I think we need a movement similar to what gave us Wikipedia and Free software in the 90s/00s.