
I can’t say it will remain a black art, because the tech itself constantly creates new paradigms. An LLM can be fine-tuned on context-engineering examples, much like chain-of-thought tuning, and that’s how we get a reasoning loop. With enough fine-tuning we could get a similar context loop, in which case those keeping things hidden will be washed away by the new paradigm.
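To make that concrete, here’s a minimal sketch of what a training example for that kind of “context tuning” might look like, mirroring how chain-of-thought tuning records reasoning traces as supervised data. The field names and contents are entirely my own invention, not an established format:

    # Hypothetical supervised example for "context tuning": the
    # context-engineering steps are recorded as a trace, the same way
    # CoT tuning records reasoning steps. All field names are invented.
    example = {
        "task": "Summarize the attached incident report.",
        "context_trace": [  # the engineering steps themselves
            "retrieve: last 3 related incidents",
            "compress: drop stack traces, keep error codes",
            "order: newest incident first",
        ],
        "final_context": "<assembled prompt>",  # what actually got sent
        "target": "<desired completion>",
    }

Train on enough of these and the model learns to emit the trace itself, which is the hypothetical context loop.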

Even if someone fine-tuned an LLM with this type of data, DeepSeek has shown that a competitor can just use a teacher-student strategy to distill whatever model you trained, exfiltrating your value-add (which is how they allegedly stole from OpenAI). Stealing is already a thing in this space, so don’t be shocked if over time you see a lot more protectionism, something we already see geopolitically on the hardware front.
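For anyone unfamiliar with how cheap that teacher-student step is, here’s a rough sketch in Python. Everything in it is a stand-in: the prompt list, the stubbed teacher call, and gpt2 as the student are illustrative assumptions, not any lab’s actual pipeline:

    # Hypothetical distillation sketch: harvest a teacher model's
    # completions, then fine-tune a student on them (plain SFT).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    PROMPTS = ["Walk me through planning a zero-downtime DB migration."]

    def query_teacher(prompt: str) -> str:
        # Stub: in practice this would be an API call to the teacher
        # model being distilled from.
        return "Step 1: snapshot the database. Step 2: dual-write."

    # 1. Exfiltrate the value-add: collect (prompt, completion) pairs.
    pairs = [(p, query_teacher(p)) for p in PROMPTS]

    # 2. Standard supervised fine-tuning of the student on those pairs.
    tok = AutoTokenizer.from_pretrained("gpt2")  # stand-in student
    student = AutoModelForCausalLM.from_pretrained("gpt2")
    opt = torch.optim.AdamW(student.parameters(), lr=1e-5)
    student.train()
    for prompt, completion in pairs:
        batch = tok(prompt + " " + completion, return_tensors="pt")
        loss = student(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        opt.step()
        opt.zero_grad()

The point is that none of this needs access to the teacher’s weights, only its outputs, which is why it’s so hard to defend against.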

I don’t know what’s going to happen, but I can confidently say that if humans are involved at this stage, there will absolutely be some level of information siloing and stealing.

——

But to directly answer your question:

“… instead of becoming something with standard practices that work well for most use cases?”

In no uncertain terms, the answer is because of money.


