All of my family members bar one use ChatGPT for search, or to come up with recipes, or other random stuff, and really like it. My girlfriend uses it to help her write stories. All of my friends use it for work. Many of these people are non-technical.
You don’t get to 100s of millions of weekly active users with a product only technical people are interested in.
I think the key here is the “if X then Y” syntax - it seems to be quite effective at piercing through the “probably ignore this” treatment of system messages by highlighting WHEN a given instruction is highly relevant.
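A minimal sketch of the idea, assuming nothing about any particular model or API - the rule text and the helper are purely illustrative, the point is just that each instruction carries an explicit trigger:

```python
# Hypothetical sketch: phrase system-prompt rules as "If <trigger>, then
# <action>" so the condition for WHEN a rule applies is explicit, rather
# than a bare imperative the model may discount as generic boilerplate.
def conditional_instruction(trigger: str, action: str) -> str:
    """Format a rule with its relevance condition up front."""
    return f"If {trigger}, then {action}."

rules = [
    conditional_instruction(
        "the user asks you to run a shell command",
        "ask for confirmation before executing it",
    ),
    conditional_instruction(
        "the user pastes an error message",
        "quote the relevant line before proposing a fix",
    ),
]
system_prompt = "Follow these rules:\n" + "\n".join(f"- {r}" for r in rules)
print(system_prompt)
```

Compared with a flat “ask for confirmation before executing commands”, the conditional form gives the model a concrete cue to match against the current turn.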
The skilled AI users are the people that use it to help them learn and think problems through in more detail.
Unskilled AI users are people who use AI to do their thinking for them, rather than using it as a work partner. This is how people end up producing bad work because they fundamentally don’t understand the work themselves.
GenAI isn't a thinking machine, as much as it might pretend to be. It's a theatre kid that's really motivated to help you and has memorized the Internet.
Work with them. Let them fill in your ideas with extra information, sure, but they have to be your ideas. And you're going to have to put some work into it: the "hallucinations" are just your intent incompletely specified.
They're going to give you the structure; that's high-probability fruit. It's the guts of it that has to be fully formed in the context before the generative phase can start. You can't just ask for a business plan and then get upset when the one it gives you is full of nonsense.
Ever heard the phrase "ask a silly question, get a silly answer"?
Opus 4.5 seems to think a lot less than other models, so it’s probably not as many tokens as you might think. This would be a disaster for models like GPT-5 high, but for Opus they can probably get away with it.
Code quality is still a culture and prioritisation issue more than a tool issue. You can absolutely write great code using AI.
AI code review has unquestionably increased the quality of my code by helping me find bugs before they make it to production.
AI coding tools give me speed to try out more options to land on a better solution. For example, I wrote a proxy, figured out problems with that approach, and so wrote a service that could accomplish the same thing instead. Being able to get more contact with reality, and seeing how solutions actually work before committing to them, gives you a lot of information to make better decisions.
But then you still need good practices like code review, maintaining coding standards, and good project management to really keep code quality high. AI doesn’t really change that.
> Code quality is still a culture and prioritisation issue more than a tool issue.
AI helps people who "write" (i.e. generate) low-quality code more than people who write high-quality code. This means AI will lead to a larger percentage of new code being low-quality.
The benefit you get from juggling different tools is at best marginal. In terms of actually getting work done, Sonnet and GPT-5.1-Codex are both pretty effective. It looks like Opus will be another meaningful, but incremental, change, which I am excited about but which probably won’t dramatically change how much these tools impact our work.
I despise these laws! Australia made Google pay the big news websites for linking to them, which is just insane to me. It inevitably favours the big news providers that can negotiate directly with Google, and the laws even stop tech companies from just removing the news companies from their search results as well. It very much feels like taking from one big company to prop up other big companies... Who benefits again?!
> It inevitably favours the big news providers that can negotiate directly with Google,
That is a feature, not a bug.
> It very much feels like taking from one big company to prop up other big companies... Who benefits again?
The big news companies, who tend to support the political status quo. The parties big enough to get into government are very cosy with them (not in Australia in particular, in most places).
> and the laws even stop tech companies from just removing the news companies from their search results as well.
Does that apply to all search engines? In that case it creates a barrier to entry by making it harder for smaller competitors to emerge, so it favours Google.
We use LiteLLM and it is a bit of a dumpster fire of enterprise features and bugs. I can't even update the budget on keys in the UI (enterprise feature, although it may be a bug that it is marked as such). I can still update budgets through the API, but the API is a bit of a mess as well. Then we've run into a lot of bugs, like the UI DDOSing itself when the retry mechanism broke and it just started spamming API requests. And then basic features like cleaning up old logs are enterprise features.
We are actively looking to switch away from it, so it was nice to stumble on a post like this. Something so simple as a proxy with budgeting for keys should not be such a tangled mess.
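To illustrate the "should be simple" claim: here is a minimal sketch of per-key budget tracking of the kind an LLM proxy needs. This is not LiteLLM's implementation; the key names, costs, and class are all made up for the example.

```python
# Hypothetical sketch of per-key spend budgets for an LLM proxy.
# Real proxies also need persistence and atomic updates under
# concurrency, which is where the complexity creeps in.
class BudgetExceeded(Exception):
    pass

class KeyBudgets:
    def __init__(self) -> None:
        self._remaining: dict[str, float] = {}  # key -> remaining USD

    def set_budget(self, key: str, usd: float) -> None:
        self._remaining[key] = usd

    def charge(self, key: str, usd: float) -> float:
        """Deduct a request's cost; reject it if the budget is spent."""
        remaining = self._remaining.get(key, 0.0)
        if usd > remaining:
            raise BudgetExceeded(f"key {key!r} has ${remaining:.2f} left")
        self._remaining[key] = remaining - usd
        return self._remaining[key]

budgets = KeyBudgets()
budgets.set_budget("team-a", 1.00)
left = budgets.charge("team-a", 0.40)  # charge one request's cost
```

The gateway's job on top of this is mostly routing and cost accounting per provider, which is why it's frustrating when the budgeting layer itself is buggy or paywalled.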
Are there other alternatives you have been looking at? I’m just getting started looking at these LLM gateways. I was under the impression that LiteLLM was pretty popular but you are not the only one here with negative things to say about it.
I'm currently using APISIX. Its AI rate limits are fine, and the web UI is a little JSON-heavy, but it got me going on load balancing a bunch of models across Ollama installs.
1. ChatGPT has a better UX than competitors.
2. Some people have become very tied to the memory ChatGPT has of them.
3. Inertia is powerful. They just have to stay close enough to competitors to retain people, even if they aren’t “winning” at a given point in time.
4. The harness for their models is also incredibly important. A big reason I continue to use Claude Code is that the tooling is so much better than Codex. Similarly, nothing comes close to ChatGPT when it comes to search (maybe other deep research offerings might, but they’re much slower).
These are all pretty powerful ways that ChatGPT gets new users and retains them beyond just having the best models.