I mean, legacy automakers are just hedging whilst also acquiring information about operations that they can then leverage. They were already caught asleep at the wheel when Tesla came onto the scene.
Envisioning what should exist is always the hardest part.
If LLMs provide substantial material that lets one produce what one envisions faster, that is great. But LLMs will not be doing the envisioning. Most humans are already poor at that, which is why there are very few real 'visionaries' in history.
Envisioning always requires deep thinking. If LLMs eat away at a human's ability to sit and think, envisioning solutions will become harder. So you'll see more stuff produced, but largely more crap.
When I read comments here, I'm beginning to realise many people get lost in the noise and can't seem to figure out what the true bottleneck is in producing great software that people (be it consumers or folks employed at firms) actually purchase and use.
Writing code faster alone doesn't change a great deal. Frankly, it'll just create a larger influx of noise. Focus is already very difficult, and it'll only get harder with the advent of LLMs.
LLMs change the dynamic so that an individual or a small team can replicate the work of large companies, especially if they worked on that large application before.
Yep. Tax the resources that capital needs to produce the stuff. That's just a simple way to think about how tax regimes etc. can evolve.
LLMs have randomness baked into every single token they generate. Try running an LLM locally with the temperature set low: it immediately feels boring to get the same reply every time. It's the randomness that makes them feel "smart". Put another way, randomness is required for the illusion of intelligence.
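A toy sketch of what temperature does to next-token sampling (plain NumPy, not tied to any particular inference stack): dividing the logits by the temperature before the softmax sharpens or flattens the distribution, and temperature near zero collapses to greedy argmax, which is the repetitive "same reply every time" behaviour described above.

```python
import numpy as np

def sample_token(logits, temperature=1.0, rng=None):
    """Sample a token index from raw logits.

    temperature -> 0 collapses to greedy argmax (deterministic);
    higher temperatures flatten the distribution (more randomness).
    """
    if rng is None:
        rng = np.random.default_rng()
    if temperature <= 1e-6:                # treat ~0 as greedy decoding
        return int(np.argmax(logits))
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                 # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

logits = [2.0, 1.0, 0.5]
# Greedy: the same token on every call.
assert all(sample_token(logits, temperature=0.0) == 0 for _ in range(10))
```

At any temperature above zero the same prompt can yield different tokens from call to call, which is why most hosted APIs feel non-deterministic by default.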
I'm fully aware of that. However, this illusion is a dangerous mirage. It doesn't equate to reality. In some cases that's OK, but in most cases it's not, especially in the context of business operations.
Determinism in agents is a complex topic because there are several different layers of abstraction, each of which may introduce its own non-determinism. But yeah, it is going to be difficult to induce determinism in a commercial coding agent, for reasons discussed below.
However, we can start by claiming that non-determinism is not necessarily a bad thing - non-greedy token sampling helps prevent certain degenerate/repetitive states and tends to produce overall higher quality responses [0]. I would also observe that part of the yin-yang of working with the agents is letting go of the idea that one is working with a "compiler" and thinking of it more as a promising but fallible collaborator.
With that out of the way, what leads to non-determinism? The classic explanation is the sampling strategy used to select the next token from the LLM. As mentioned above, there are incentives to use a non-zero temperature for this, which means that most LLM APIs are intentionally non-deterministic by default. And, even at temperature zero LLMs are not 100% deterministic [1]. But it's usually pretty close; I am running a local LLM as we speak with greedy sampling and the result is predictably the same each time.
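One commonly cited mechanism behind the residual non-determinism at temperature zero is that floating-point addition is not associative, so the "same" logit computed with a different reduction order (e.g. a different batch size or kernel schedule) can differ in its last bits. A minimal illustration:

```python
# Floating-point addition is not associative: summing identical terms in a
# different order (as different GPU kernel schedules / batch sizes can do)
# changes the last bits of the result.
terms = [0.1, 0.2, 0.3]   # hypothetical partial sums feeding one logit
left_to_right = (terms[0] + terms[1]) + terms[2]
right_to_left = terms[0] + (terms[1] + terms[2])

print(left_to_right == right_to_left)  # False

# If two logits are nearly tied, a last-bit wobble like this can flip the
# argmax, so even greedy (temperature-0) decoding may occasionally diverge.
```

That tiny difference is invisible most of the time, which matches the observation that greedy local decoding is "usually pretty close" to deterministic.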
Proprietary reasoning models are another layer of abstraction that may not even offer temperature as a knob anymore [2]. I think Claude still offers it, but it doesn't guarantee 100% determinism at temperature 0 either [3].
Finally, an agentic tool loop may encounter different results from run to run via tool calls -- it's pretty hard to force a truly reproducible environment from run to run.
So, yeah, at best you could get something that is "mostly" deterministic if you coded up your own coding agent that focused on using models that support temperature and always forced it to zero, while carefully ensuring that your environment has not changed from run to run. And this would, unfortunately, probably produce worse output than a non-deterministic model.
Appreciate the response. I agree that non-determinism isn't inherently a bad thing. However, LLMs are being pushed as the thing to replace much of what is deterministic in the world - and anyone seen to be thinking otherwise gets punished, e.g. in the stock market.
This world of extremes is annoying for people who have the ability to think more broadly and see a world where deterministic systems and non-deterministic systems can work together, where it makes sense.
Yeah, I think you're right that LLMs are overused. In most cases where a deterministic system is feasible and desirable, it's also much faster and cheaper than an LLM.