If you just have a single app to run, it's silly to reach for K8s.
If you have a beefy server or two that you want to utilize fully, packing in as many apps as possible without clashing dependencies, then you want K8s or Docker or other containers. K8s is what lets you go further.
I thought both should be equally capable of solving problems. It turns out Cursor, with the same model selected, was somehow able to solve tasks that Copilot would get stuck on or loop over.
They have some tricks on managing file access that others don’t.
Cynics on HN easily dismiss AI service wrappers (and many of them are in fact overblown and not worth their own code). But writing a genuinely good harness with lots of context engineering and solid tool integration is in fact not that easy. The biggest issue is that model providers also see what the community likes and often follow up with their own offerings, tailored to their own models, potentially at the training stage. So even if you have the best harness for something today, unless you are also a frontier LLM provider, there's zero guarantee you'll still be relevant in the future. More likely the opposite.
It's not like someone paid $60 billion for a product the way you pay for bananas at the store. They invested a much smaller amount and essentially bought an option to acquire. And even if you don't believe the company's assets are worth the current valuation, an acquisition can still make sense if you believe that valuation will go up further. And if they actually do acquire, it will probably still not be in cash. They'll just be swapping stocks. That is essentially how all startup funding works. There is nothing strange about this. It merely reached new dimensions thanks to AI.
> (...) writing a genuinely good harness with lots of context engineering and solid tool integration is in fact not that easy.
This. They are after the harness engineering experience of the Cursor people; I'd assume they want to absorb all of that into Grok's offerings.
The value of, and room for innovation on, the harness side seem to be underestimated.
Oddly, the harness also affects model training, since even GLM/Z.ai, for example, train (I suspect) their model on the actual Claude Code harness. So the choices made by harness engineers affect the model. Kimi/Moonshot and OpenAI make their own harnesses. Alibaba uses Gemini's.
Attributing value to something just because it's harder makes no sense. Sure, a big moat is important for value, but "difficult to do" is just a one-dimensional angle.
"But writing a genuinely good harness with lots of context engineering and solid tool integration is in fact not that easy."
It is surprisingly easy to do it once someone else has done the work. Increasingly that's the nature of AI-based software engineering: point it at an existing tool and ask it to carefully duplicate features until it has parity. As you pointed out, frontier LLM companies happen to be well positioned to sell the resulting products.
It can use local/OSS models, but it doesn't make that simple to do (it's easiest with Ollama), and it's not clear what else you 'lose' by making that choice.
If you had a really good (big) local model, maybe it's an option, but in my experience the more common smaller (<32B) models run into similar problems: looping, losing context, etc.
It's a nice TUI, but the ecosystem is what makes it good.
Their annualized revenue run rate is on track to surpass $6 billion by the end of 2026, so it's not ridiculous for them to be valued at $60 billion at some point. Also worth noting that if they do get access to SpaceX compute, they could start pretraining their own model. Composer is good, but it's built on top of Kimi 2.5.
I actually now think AI prompt writing in the IDE is complete overkill.
IDEs are made for just a human to interact with code. I think the paradigm of forcing these tools, which weren't built for this, to do this is trying to fit a square peg into a round hole.
Call me old, but don't put AI in my IDE. My IDE was made for a human, not an AI. For the established players it makes sense, sure, since they already have space on our machines. But for the new ones, IMO the terminal or dedicated LLM interfaces are where it's at.
If I'm writing code, sure, suggest the next line. If the machine is writing code, let it, and just supervise properly, with an interface that plays to the strengths of each.
>They have some tricks on managing file access that others don’t.
I thought it was a Windows thing. My Windows work computer is so heavily managed and monitored I assumed that was why Copilot stops being able to get terminal output or find the file I'm looking at. It's the same problem in IntelliJ and VSCode, with different models trying to find things in different ways.
Now that I think of it though, I've only used Copilot at work. At home I use Debian but I've never tried using Copilot. Claude, OpenCode, Gemini, and IntelliJ's AI Chat pointed at local Ollama models never have issues finding files or reading files and terminal output.
They're using the code intelligence from the IDE to run the AI, while Claude Code only does greps.
AI coding is much more than just the model: all the tools that humans use in an IDE are also useful for the AI. Claude Code, on the other hand, just works with grep.
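For illustration, the grep-style search these comments refer to can be sketched in a few lines. This is a hypothetical minimal tool, not Claude Code's actual implementation; the name `grep_tool` and its signature are my own invention.

```python
import re
from pathlib import Path


def grep_tool(root: str, pattern: str, glob: str = "*.py") -> list[str]:
    """Search files under `root` for a regex, returning 'path:lineno: text' hits."""
    rx = re.compile(pattern)
    hits = []
    for path in sorted(Path(root).rglob(glob)):
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binary or unreadable files
        for lineno, line in enumerate(text.splitlines(), 1):
            if rx.search(line):
                hits.append(f"{path}:{lineno}: {line.strip()}")
    return hits
```

The point of the contrast above: a tool like this only matches text, whereas an IDE-integrated harness can additionally lean on the editor's code intelligence (language servers, indexes) for precise symbol and reference lookup.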
Recent events make it quite clear that this time it is going to be different.
It used to be like you described. But over the last year or two it has basically been accumulating across multiple countries.
Swiss people are very upset with what is going on with their military spending in the US. I do believe they will be serious about reconsidering all other purchases from the US.
> Swiss people are very upset with what is going on with their military spending in the US
Can confirm, as a Swiss person I am flabbergasted at how the federal government keeps pushing for the new fighter jets to be F-35s, despite not only the US's current erratic behaviour in general, but also how it has changed the terms of the purchase deal. Blows my mind, honestly.
You cannot get away with "well, no one is going to spend time writing a custom exploit to get us" or "just be faster than the slowest person running away from the bear".
I'm not really sure what point you're making. Is the point that it is harder to secure more things? Is it that security events happen more frequently as your number of employees grows?
If so, I bristle at this way that many developers (not necessarily you, but generally) view security: "It's red or it's green."
Attack surface going up as the number of employees rises is expected, and the goal is to manage the risk in the portfolio, not to ensure perfect compliance, because you won't, ever.
And just as dangerous: 50 employees. Quite frequently these 50-employee companies have responsibilities that they cannot begin to assume on the budgets that they have. Some businesses can really only be operated responsibly above a certain scale.
A law firm with 50 employees who use nothing but Microsoft Word, Outlook, and a SaaS practice management application is really easy to button up tight, though they probably don't have any in-house IT, and the quality of MSPs varies wildly.
A company of 50 software developers is an enormous headache.