I would add that most of the premium of a modern SWE has always been on understanding problems and systems thinking. LLMs raise the floor and the ceiling, to the point where the vast majority of that premium will now be on systems and relationships
Of course this is true. Just like the need to travel long distances over land will never disappear.
The skills needed to be a useful horseman though have almost nothing to do with the skills needed to be a useful train conductor. Most of the horseman's skills don't really transfer beyond being in the same domain of land travel. The horseman also has the problem that they have invested their life and identity into their skill with horses, which massively biases their perspective. The person with no experience with horses actually has the huge advantage of a beginner's mind when it comes to land travel at the advent of rail.
The ad nauseam software engineer "horseman" arguments on this board, that there will always be a need to travel long distances by land, completely miss the point IMO.
I'm quite convinced that software (and, more broadly, implementing systems and abstractions) has virtually unlimited demand. AI raises the ceiling and broadens software's reach even further, as problems that previously required some level of ingenuity or intelligence can now be automated.
A lot of the recent gains come from RL and also from better inference during the prefill phase, and none of that will be impacted by data poisoning.
But if you want to keep the "base model" on the edge, you need to frequently retrain it on more recent data. Which is where data poisoning becomes interesting.
Model collapse is still a very real issue, but we know how to avoid it. People (non-professionals) who train their own LoRA for image generation (in a TTRPG context at least) still have the issue regularly.
In any case, it will make data curation more expensive.
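To make that concrete, here's a minimal sketch of the kind of filter-and-mix pass people use to keep synthetic or poisoned text from dominating a retraining corpus. The detector, the thresholds, and the fixed share of verified human data are all illustrative assumptions, not anyone's actual pipeline:

```python
import random

def curate(human_docs, scraped_docs, synthetic_score, human_fraction=0.3, max_score=0.8):
    """Toy curation pass (hypothetical names and thresholds):
    - drop scraped docs that a detector scores as likely synthetic or poisoned
    - mix in a guaranteed share of verified human data so the model keeps an
      anchor to real distributions (the usual model-collapse mitigation)
    """
    kept = [doc for doc in scraped_docs if synthetic_score(doc) < max_score]
    n_human = int(len(kept) * human_fraction / (1 - human_fraction))
    mix = kept + random.sample(human_docs, min(n_human, len(human_docs)))
    random.shuffle(mix)
    return mix
```

Every detector call and every verified human document in that mix is added cost, which is exactly where the extra expense comes from.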
I primarily find them useful in augmenting my thinking. Grokking new parts of a codebase, discussing tradeoffs back and forth, self-critiques, catching issues with my plan, etc.
I implemented some of his setup and have been loving it so far.
My current workflow typically runs 3-5 Claude Code instances in parallel:
- Shallow clone, plan mode back and forth until I get the spec down, then hand off to a subagent to write a plan.md
- Ralph Wiggum-style Claude working from plan.md and skills until the PR passes tests and CI/CD; it auto-responds to greptile reviews and prepares the PR for me to review (a stripped-down sketch of this loop is at the end of this comment)
- Back and forth with Claude for any incremental changes or fixes
- Playwright MCP so Claude can see the browser for frontend work
I still always comb through the PRs and double check everything including local testing, which is definitely the bottleneck in my dev cycles, but I'll typically have 2-4 PRs lined up ready for me at any moment.
We have a giant monorepo, hence the shallow clones. Each Claude works on its own feature / bug / ticket though, sometimes in the same part of the codebase but usually in different parts (my ralph loop has them resolve any merge conflicts automatically). I also have one Claude running just for spelunking through K8s, doing research, or answering questions about parts of the codebase I'm unfamiliar with.
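For anyone curious what that Ralph Wiggum loop looks like, here's a stripped-down sketch. The `claude -p` non-interactive invocation, the pytest gate, and the iteration cap are assumptions standing in for my real setup (which also pushes the branch, resolves merge conflicts, and replies to greptile), not a drop-in script:

```python
import subprocess

# Standing prompt the loop feeds Claude Code each pass (illustrative wording).
PROMPT = "Read plan.md, implement the next unfinished step, and fix any failing tests."

def ralph_loop(max_iterations=20):
    for i in range(max_iterations):
        # Hand the prompt to Claude Code non-interactively and let it edit the repo.
        subprocess.run(["claude", "-p", PROMPT], check=False)

        # Gate on the test suite; exit code 0 means the PR is ready for human review.
        tests = subprocess.run(["pytest", "-q"], check=False)
        if tests.returncode == 0:
            print(f"Tests green after {i + 1} iteration(s).")
            return True
    print("Hit the iteration cap without green tests; needs a human.")
    return False

if __name__ == "__main__":
    ralph_loop()
```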
Was the axe or the chainsaw designed in such a way that guarantees it will miss the log and hit your hand a fair amount of the times you use it? If it were, would you still use it? Yes, these hand tools are dangerous, but they were not designed in a way that would probably cut off your hand even 1% of the time. "Accidents happen" and "AI slop" are not even remotely the same.
So then with "AI" we're taking a tool that is known to "hallucinate", and not infrequently. So let's put this thing in charge of whatever-the-fuck we can?
I have no doubt "AI" will someday be embedded inside a "smart chainsaw", because we as humans are far more stupid than we think we are.