Competitive edge. Some agents will be better than others, and therefore worth paying for. If, for example, you write an AI trading agent, there’s no reason to share it, much as trading algorithms are kept private today.
I’m not saying it won’t eventually be known, but not in these initial stages.
The only thing separating Claude, Gemini and ChatGPT is their context and prompt engineering, assuming the frontier models belong to the same class of capability. You could absolutely release a competitor that performs better on certain tasks (or even all of them, if you introduce brand-new context engineering ideas), if you wanted to.
No, I mean why do you think that effective context engineering will remain a black art, instead of becoming something with standard practices that work well for most use cases?
I can’t say it will remain a black art, because the tech itself creates new paradigms constantly. An LLM can be fine-tuned on context engineering examples, similar to Chain-of-Thought tuning, which is how we got the reasoning loop. With enough fine-tuning we could get a similar context loop, in which case those keeping things hidden will be washed away by the new paradigm.
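As a rough sketch of what that kind of fine-tuning data might look like: each example folds the context-engineering step into the target output, the same way CoT tuning folds a reasoning trace into the answer. Everything here is hypothetical, including the field names and the `<context>` tag format.

```python
import json

# Hypothetical training example: a raw prompt paired with the
# context-engineered framing and the final answer.
examples = [
    {
        "prompt": "Summarize this contract clause.",
        "engineered_context": (
            "You are a legal analyst. Quote the clause, list each "
            "party's obligations, then summarize in one sentence."
        ),
        "completion": "The clause obligates the vendor to ...",
    },
]

def to_finetune_record(ex):
    # Put the engineered context inside the target output, so the
    # model learns to generate its own context before answering --
    # the "context loop" analogue of a reasoning trace.
    return {
        "input": ex["prompt"],
        "output": f"<context>{ex['engineered_context']}</context>\n"
                  f"{ex['completion']}",
    }

records = [to_finetune_record(ex) for ex in examples]
jsonl = "\n".join(json.dumps(r) for r in records)
```

The design choice that matters is that the context engineering lives on the output side of the pair, not the input side: that is what would let a tuned model internalize the technique instead of depending on a hand-written system prompt.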
Even if someone fine-tuned an LLM on this type of data, DeepSeek has shown that a teacher-student strategy can be used to steal from whatever model you trained (exfiltrating your value-add, which is how they took it from OpenAI). Stealing is already a thing in this space, so don’t be shocked if you see a lot more protectionism over time (we already see it geopolitically on the hardware front).
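The teacher-student exfiltration is simple in outline: query the stronger model, record its outputs, and train your own model on those (input, output) pairs. A minimal sketch, with the teacher mocked as a plain function standing in for an API call to the model being distilled:

```python
# Mock teacher: in practice this would be an API call to the stronger
# model whose behaviour you want to copy.
def teacher(prompt: str) -> str:
    return f"Detailed answer to: {prompt}"

prompts = [
    "What is backpropagation?",
    "Explain attention in transformers.",
]

# The student never needs the teacher's weights, system prompt, or
# context engineering -- only the observable input/output behaviour.
distillation_set = [(p, teacher(p)) for p in prompts]
```

This is why hiding the context engineering only protects you until the output behaviour itself can be sampled at scale: the value-add leaks through the responses.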
I don’t know what’s going to happen, but I can confidently say that if humans are involved at this stage, there will absolutely be some level of information siloing, and stealing.
——
But to directly answer your question:
“… instead of becoming something with standard practices that work well for most use cases?”
In no uncertain terms, the answer is because of money.