Cursor has been my main AI tool for over a year now.
I've been trying to use Claude Code seriously for over a month, but every time I do, I get the impression that the same task would take me less work in Cursor.
I'm on the enterprise plan, so it can get pricey. This is why I used to stick mostly to auto mode.
Now Composer 2 has taken over as my default model. It's not as intelligent as OpenAI's or Anthropic's flagship models, but I feel its intuition is as good or better, with way better pricing. It can get stuck on more complex tasks though.
Being able to get in the loop, stop and instruct or change models makes all the difference. And that is why I've stayed in the editor mode until now. Let's see if 3.0 changes that.
I was a Cursor loyalist until I was spending around $2k a week on premium models and my team had a discussion about whether we'd rather spend more on Cursor or hire another engineer. We unanimously agreed we'd rather hire another team member. I'm more productive than ever, but I'm burning out.
Anyway, as a result, I switched to Claude Code Max and I am equally prolific while paying 1/10th the price. I get to have my cake and eat it, too. *Note there's a Cursor Ultra, which at a quick glance seems akin to Claude Code Max. Note that both are individual plans; am I right that you come out ahead token-wise choosing those over a team or enterprise plan?
Anyway, you're right that Claude Code is less ergonomic and generally slower. I was losing my mind over Opus in Cursor spinning up subagents; I don't notice that happening nearly as frequently in Claude Code itself. I think it has to do with my relatively basic configuration. CC keeps getting better the more context I feed it, though: stuff like homegrown linters to enforce architecture.
All to say, Cursor's pricing model is problematic and left a bad taste in my mouth. Claude Code seems to need a bunch of hand-holding at first before it's magical. Pick your poison.
> Anyway, you’re right Claude Code is less ergonomic; generally slower.
The secret in my experience is parallelization - Cursor might be faster or have better ergo for a single task, but Claude Code really shines when you have 6 tasks that are fairly independent.
If you treat CC as just another terminal tool and heavily use git worktrees, overall productivity shoots through the roof. I've been using a tool called Ouijit[1] for this (disclosure: the dev is an old colleague of mine), and I genuinely don't think I could go back to using Cursor or any other traditional IDE+agent. I barely even open the code in an editor anymore; I interact primarily through the terminal, with Vim when I need to pull the wires out.
Cursor can do that well too. Their code review feature usually gives a handful of independent pieces of feedback, and I just trigger agents independently for all of those. The integrations with Linear and Slack are also very handy for getting into this workflow. Seems like the 3.0 version is aiming to get better at this use case.
FWIW I'm not saying Cursor isn't capable of this, but that all of the 'Cursor' bits are superfluous. Using tools that bring you closer to the 'bare metal' of the terminal actually gives you more flexibility (I can run Claude Code, Crush, Codex, OpenCode, etc.) and removes an entire layer of abstraction that I believe hinders a dev's ability to really go all in on agentic engineering.
I started using Cursor and it was my daily driver for a year or two, but I haven't once looked back in regret since moving towards a terminal-focused workflow. (Not to mention Cursor's pricing is absolutely abysmal, although it's often comped by employers.)
I've been using Codex since before ChatGPT (the OG version) and CC since launch. For me personally - Claude Code with Opus/Sonnet generally has better taste, more personality in interactions, and is more willing to just do the work. Paired with skills, LSPs, linters, and hooks, it works very well. I think of the two like this:
Claude Code with Opus/Sonnet is the L7 senior engineer still gunning for promotion. Hasn't hit burnout, hasn't been ground down by terrible teams yet. Capable and willing to get their hands dirty.
Codex (the harness) with GPT-5.4 or 5.3-codex is fantastic but terse. Some of the UX frustrates me. I want a visual task list. That said, Codex as a harness is phenomenal. Think of it as the senior EM / CTO-in-waiting who won't write new code without complaining and nitpicking for hours. But they'll thoroughly tear your code apart and produce a plan you can execute yourself or pass to Claude Code.
Both are great, and so is Factory Droid. Also worth checking out Compound Engineering from Every.to if you haven't.
Here is an example of a project I worked on using Codex; it took 10 iterations just to get GitHub Actions right: https://github.com/newbeelearn/whisper.cpp . You can see the commits made by Codex. The project was quite simple: modify whisper to add support for transcribing voice with start/stop keys and copying the transcription to the clipboard when stopped. That's it.
It performed poorly compared to CC, which got it right in one shot.
The workflow that got me into Claude Code was instructing it that whenever I create a new feature or bug fix, it should make a new git worktree, and when I'm done, merge that back to main and delete the worktree. That enables me to open up three-plus different Claude Code instances and work on three different things at the same time. As long as they're not directly overlapping, it works great.
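For anyone unfamiliar with that loop, here is a minimal sketch of it in plain git; the branch name `fix-login` and the commit are made up for illustration, and in practice an agent would do the work inside the worktree:

```shell
#!/bin/sh
set -e
repo=$(mktemp -d)
wt="$repo-wt"
cd "$repo"
git init -q -b main
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "init"

# New feature or bug fix -> new branch in its own worktree
git worktree add -b fix-login "$wt"

# (an agent would work in $wt; simulate a single commit)
cd "$wt"
echo "patch" > fix.txt
git add fix.txt
git commit -q -m "fix login"

# Done: merge back to main, then remove the worktree and branch
cd "$repo"
git merge -q fix-login
git worktree remove "$wt"
git branch -q -d fix-login
```

Because each worktree is its own directory with its own checkout, several agents can run concurrently as long as their changes don't directly overlap.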
I find it interesting that you are on the enterprise plan and aren't willing by default to pay more for more intelligence. Most people I know on the enterprise plan wish there existed a 2x-intelligent model at 2x the price.
My company is going through the exact opposite, so it kind of depends on the company. We are actively encouraging our devs NOT to use Cursor because, by our calculations, it is so much more expensive than the other tools we have; they even considered dropping Cursor altogether at contract renewal due to its higher costs.
most tasks I can do better and faster with composer 2
a fellow engineer reported a bug in some code I had written a few months back.
I used his report as prompt for composer 2, gpt-5.4-high and claude-4.6-opus-max-thinking.
composer found the issue spot on.
gpt found another possible vector a couple of minutes later, but a far less likely one that would eventually self-heal (thus not actually reproducing what we observed in production).
claude had barely started when the other two had finished
also, i don't have a budget per se.
but it is expected that i over-deliver if i'm overspending
I agree, but this is more about stopping when you get hurt. I cook every day and only hurt myself with very sharp knives, because I can't feel when the blade is about to cut my skin.
For trained chefs, a sharper blade means things stay in place as expected, because the weight and motion of the blade do the cutting, not force exerted.
Related: I watch chefs use a mandoline, and there's no freaking way I'd use it the way they do. I just don't have the skills necessary to freehand it. I will use a safety glove and/or a guard.
Agreed that you can have real individual ownership. Not only that, I think that is the only way to be really "productive".
But I think that is beside the point.
Individuals are not fungible, but team members are - or at least can be, depending on how you structure your teams.
And as your org grows, you want predictability on a team level. Skipping a bunch of reasoning steps, this means having somewhat fungible team members, to give you redundancy.
The engineering parallel here is the tradeoff between resilience and efficiency. You can make a system more reliable by adding redundancy. You make a system more efficient by removing redundancy.
I blame Agile and the like for fucking up individual ownership by treating every engineer and every task as interchangeable. You can't build expertise by working on a different type of task every sprint...
Yes. See update 2 FTA for a 2019 study on Go concurrency bugs. Most Go devs that I know consider using higher-level synchronization mechanisms the right way to go (pun intended). sync.WaitGroup and errgroup are two commonly used options.
I wonder how keys are assigned to rows and how the adjudicator is shared.
The author also mentions a leader adjudicator, which means there is probably some sort of coordination to pick a leader.
This raises the question of how a leader is picked, and if leadership changes based on how hot a key is in a given AZ.
This blog series is a great read. Every day Marc drops excellent content and leaves room for questions, which he ends up answering on the following days. Hope more details come next.
> We’ve learned from building and operating large-scale systems for nearly two decades that coordination and locking get in the way of scalability, latency, and reliability for systems of all sizes. In fact, avoiding unnecessary coordination is the fundamental enabler for scaling in distributed systems
I hope there is a follow-up since the points the author only glossed over are important to understanding the architecture and trade-offs. I would like to know about the cross-adjudicator coordination protocol and how the journal works.
From the information available, it seems that DSQL should be pretty fast as long as you keep writes local. Once you add active-active replication and start writing to the same key in different regions the coordination costs should slow the system down significantly (or not - but if that is the case I want to know how they managed to do it).
A symlink can point to anything, including a file that doesn't exist:
[~] 0 $ mkdir tmp/demo
[~] 0 $ cd tmp/demo
[demo] 0 $ ln -s foo bar
[demo] 0 $ ls -l
total 1
lrwxrwxrwx 1 user users 3 Nov 15 12:14 bar -> foo
[demo] 0 $ cat bar
cat: bar: No such file or directory
[demo] 1 $ echo foo > foo
[demo] 0 $ ls -l
total 2
lrwxrwxrwx 1 user users 3 Nov 15 12:14 bar -> foo
-rw-r--r-- 1 user users 4 Nov 15 12:14 foo
[demo] 0 $ cat bar
foo
[demo] 0 $ rm foo
[demo] 0 $ cat bar
cat: bar: No such file or directory
[demo] 1 $ ls -l
total 1
lrwxrwxrwx 1 user users 3 Nov 15 12:14 bar -> foo
[demo] 0 $
What you can't see because this is flat text is that in my terminal the first and last "bar -> foo" are red because ls is warning me that that link points to a file that doesn't exist.
1. This depends on the filesystem. For ext2/3/4 (and many others) there is a reference count maintained in the file's inode. You can usually see this count in the output of "ls -l", between the perms and ownership columns. If something goes wrong and the count isn't decremented properly (due to a system crash while the inode is being updated) or is otherwise corrupted, the space allocated to the object may never be released when it is deleted, because the count will never reach zero. This is one of the checks/fixes fsck.ext* does when run. If the count is somehow too low, the content could be deallocated too early, resulting in corruption (the remaining link(s) ending up pointing to the wrong data when the inode is eventually reused). Again fsck can detect this, but only if it is not too late and things aren't already mislinked or some of the space reallocated.
2. A dangling soft link points to nothing valid. If you try to access it in a way that would normally give you the object it points to there will be a not found error. If a new object of the destination name appears the link will start to work again but give the new content. If relative links are moved around out of step with what they point to this can cause significant confusion. This is not filesystem level corruption that fsck can/will check for.
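The link count in point 1 is easy to observe from any shell; this sketch assumes a filesystem that maintains the count (ext4, XFS, and most Unix filesystems do):

```shell
set -e
d=$(mktemp -d)
cd "$d"
echo data > file              # new inode, link count 1
ln file hard                  # second hard link to the same inode: count 2
ls -l file | awk '{print $2}' # the second column is the link count: prints 2
rm hard                       # unlinking one name drops the count back to 1
ls -l file | awk '{print $2}' # prints 1
```

Note that `rm` only removes a name; the data is freed by the filesystem when the last link goes away and the count hits zero.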
For 1, the inode has a reference count that's incremented when creating a hard link and decremented when deleting one. When the count reaches 0, the inode can be deleted.
I know it's required to store this count so that the filesystem knows when it can actually delete the inode, but isn't this halfway to making the inode aware of the paths pointing to it?
You can point a softlink at any path, even one that doesn't exist. Create a regular file, softlink to it, then delete the regular file: now your softlink is dead.
I grew up finding those glasses the most horrible thing. Even though they were always popular, they became fashionable recently when new sizes were introduced.
I somehow started to find them kind of beautiful when I worked at a company that only had pint-sized American glasses at its office. Now most cups in my house have this design. They are dirt cheap and very easy to replace.