This whole post-developer idea is a red herring fueled by investor optics.
The reality is that AI will change how software is built, but it's still just a tool, and it requires the same kind of precise thinking software developers already do. You can remove all need to understand syntax, but the need for translating vague desires from laypeople into a precise specification will remain; there's nothing on the AI horizon that will solve for this, and savvy tech leaders know it. So why are we hearing this narrative from tech leaders who should know better? Two reasons:
First is that the majority of investor gains in the US stock market have been fueled heavily by AI hype. Every public tech CEO is getting questions from analysts about what their AI strategy is, and they have to tell a compelling story. There's very little space or patience for nuance here because no one really knows how it will play out, but in the meantime investors are making decisions based on what is said. It's no surprise that the majority of execs are just jumping on the bandwagon so they have a story to keep their stock price propped up.
Second, and perhaps more importantly: regardless of AI, software teams across the industry are just too big. Headcount at tech companies has ballooned over the last couple of decades due to the web, the smartphone revolution, and ZIRP. In that environment, the FAANGs of the world were hoarding talent just to be ready to capitalize on whatever came next. But the ugly truth is that a lot of the juice has already been squeezed, and the actual business needs don't justify that headcount over the long term. AI is a convenient cover story for RIFs they would have done anyway; it just ties everything up with a nice bow for investors.
When COBOL came out, it was hyped as ending the need for software developers because it looked sort of like normal English. Yet it still required someone who could think like a programmer. The need to think like a developer may be somewhat reduced this time, but I don't see it going away entirely.
Exactly. Plus we kind of want to believe it. It's the "extrapolate to infinity" bias writ large, and it's seductive. Step 1: AI that genuinely does some amazing things. Step 2: handwave. Step 3 (look out!): superintelligence and AI that does it all. "This changes everything," etc. And there are just enough green shoots to go all in on this idea (and I mean cult-level all in).
In practice it plays out much closer to the author's sentiment. A useful tool. Perhaps even paradigm defining.
> First is that the majority of investor gains in the US stock market have been fueled heavily by AI hype. Every public tech CEO is getting questions from analysts about what their AI strategy is, and they have to tell a compelling story. There's very little space or patience for nuance here because no one really knows how it will play out, but in the meantime investors are making decisions based on what is said. It's no surprise that the majority of execs are just jumping on the bandwagon so they have a story to keep their stock price propped up.
This is also the only place where LLMs have had a tangible impact on product offerings with some path to utility, so it has to be sold this way.
The broader public is even aware of this (in my experience, unprovoked, in conversations with non-technical people): a neighbor of mine mentioned "prompting for code" to me the other day while we were discussing AI.
Programmers have been well compensated, and I suspect there's some public dissatisfaction with the perception of "day in the life" types making loads of comp to drink free lattes or whatever; no one will cry for us.
Meanwhile, there are a billion and six "AI startups" "revolutionizing healthcare/insurance/whatever with AI," but nothing the public has seen at any scale that can even be sold as a plausible demo.
Image/music generation and chatbots writing code are basically all of it, and the former often isn't even sold as a profitable path.
You can set a hotkey to disable completions. It's very useful for the "no, my little AI friend, I don't think you quite get it" situations that would otherwise have you spending more brain cycles discarding broken suggestions than actually coding.
> the need for translating vague desires from laypeople into a precise specification will remain; there's nothing on the AI horizon that will solve for this
LLMs are very good at exactly that. What they aren't good at (I'll add a "yet" as an opinion) is larger systems thinking: having the context of multiple teams, multiple systems, infra, business priorities, security, etc.
> the need for translating vague desires from laypeople into a precise specification will remain
What makes you think LLMs will never be able to do that?
Have you tried any of the various Deep Research products? The workflow is that you make a request for a research project; it asks various clarifying questions to narrow down the specific question, acceptable sources, etc.; and only then does it do all the research and collate a report for you.
Sounds like the work of mustering up instructions... like programming...
So how do these LLMs completely remove us from having to do this work of mustering up instructions? It seems to me someone still has to instruct LLMs on what to do, and the only way that reality ceases to exist is if humanity stops wanting computers to do things for it. I don't think that's happening anytime soon.
Maybe fewer programmers will be needed, but then again, the same was said of Fortran and COBOL, and look at where we are today: more programmers than ever...
> Sounds like the work of mustering up instructions... like programming...
Again, try Deep Research. You make a vague request, and it works with you to make it specific enough that it can deliver a product with some confidence that it will meet your requirements. Like a product manager, business analyst, requirements engineer, or whatever they call it these days.