The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans WANT to do, things that give them meaning, and many of these are tied to earning money and producing value for doing just that thing. Software/coding is one of these activities. One can do coding for fun, but doing the same coding where it provides value to others/society and financial upkeep for you and your family is far more meaningful.
For those who have swallowed the AI panacea hook, line and sinker, those who say "it's made me more productive" or "I no longer have to do the boring bits and can focus on the interesting parts of coding": I say follow your own line of reasoning through. It demonstrates that AI is not yet powerful enough to NOT need to empower you, to NOT need to make you more productive. You're only ALLOWED to do the 'interesting' parts presently because the AI is deficient. Ultimately AI aims to remove the need for any human intermediary altogether. Everything in between is just a stop along the way, so those it empowers should stop and think a little about the long-term implications. It may be that your position right now is comfortable financially or socially, but the you of just a few short months from now may be dramatically impacted.
As someone said "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes".
I can well imagine the blood draining from people's faces: the graduate coder who can no longer get on the job ladder; the law secretary whose dream job, a dream dreamt from a young age, is being automated away; the journalist whose value has been substituted by a white text box connected to an AI model.
I don't have any ideas as to what should be done or, more importantly, what can be done. Pandora's box has been opened; Humpty Dumpty has fallen and can't be put back together again. AI feels like it has crossed the Rubicon. We must all collectively wait and see where the dust settles.
Someone smart said that AI should replace tasks, not jobs.
There are infinite analogies for this whole thing, but it mostly distills down to artisans and craftsmen in my mind.
Artisans build one chair to perfection: every joint is meticulously measured and uses traditional handcrafted Japanese joinery, and not a single screw or nail is used unless absolutely necessary. It takes weeks to build one, and each is a unique work of art.
It also costs 2000€ for a chair.
Craftsmen optimise their process for output: instead of selling one 2000€ chair a month, they'd rather sell a hundred at 20€ each. They have templates for cutting every piece and jigs for quickly attaching different components, and they use screws and nails to speed up the process instead of meticulous handcrafted joinery.
It's all about where you get your joy in "software development". Is it solving problems efficiently, or crafting a beautiful, elegant, expressive piece of code?
Neither way is bad, but pre-LLM both people could do the same tasks. I think that's coming to an end in the near future. The difference between craftsmen and artisans is becoming clearer.
There is a place for people who create that beautiful hyper-optimised code, but in many (most) cases just a craftsman with an agentic LLM tool will solve the customer's problem with acceptable performance and quality in a fraction of the time.
In the long run I think it's pretty unhealthy to make one's career a large part of one's identity. What happens during burnout or retirement or being laid off if a huge portion of one's self depends on career work?
Economically it's been a mistake to let wealth get stratified so unequally; we should have kept, and need to reintroduce, high progressive tax rates on income, and potentially implement wealth taxes, to reduce the necessity of guessing at a high-paying career five years in advance. That simply won't be possible to do accurately with the coming automation. But it is possible to grow social safety nets and decrease wealth disparity so that pursuing any marginally productive career is sufficient.
Practically, once automation begins producing more value than 25% or so of human workers, we'll have to transition to a collective ownership model and either pay dividends directly out of widget production, grant futures on the same with subsidized transport, or implement UBI. I tend to prefer a distribution-of-production model because it eliminates a lot of the rent-seeking risk of UBI: your landlord is not going to want twice the number of burgers and couches you get distributed, whereas they'd happily double your rent in dollars.
Once full automation hits (if it ever does; I can see augmented humans still producing up to 50% of GDP indefinitely [so far as anyone can predict anything past human-level intelligence], especially in healthcare/wellness), it's obvious that some kind of direct goods distribution is the only reasonable outcome; markets will still exist on top of this, but participation will basically be optional, for people who want to do that.
If we had done what you say (distributed wealth more evenly between people/corporations), then, more to the point, I don't know if AI would have progressed as it has: companies would have been more selective with their investment money, and previously AI was seen at best as a long-shot bet. Most companies in the "real economy" can't afford to make many of these kinds of bets in general.
The main reason for the transformer architecture, and many other AI advancements, really was that "big tech" has lots of cash it doesn't know what to do with. The US system also seems to punish dividends tax-wise, so companies are incentivized to act like VCs: buy lots of opportunities hoping one makes it big, even if many end up losing.
Transformers grew out of the value-add side (autotranslation), though, not really the ad-business side, iirc. Value-add work still gets done in high-progressive-tax societies if it's valuable to a large fraction of people. Research into luxury goods is slowed by progressive tax rates, but the border between consumer and luxury goods actually rises a bit with redistributed wealth: more people can afford smartphones earlier, almost no one buys superyachts, and so reinvestment into general technology research may actually be higher.
Sure. I just know that in most companies (having seen the numbers on projects at a number of them, across industries), funding projects that give people time to think, ponder, and publish white papers on new techniques is rare and economically hard to justify against other investments.
Put it this way - a project where people have the luxury to scratch their heads for a while and to bet on something that may not actually be possible yet is something most companies can't justify financing. Listening to the story of the transformer's invention, it sounds like one of those projects to me.
They may stand on the shoulders of giants, that is true (at the very least they were trained at those institutions), but putting it together as it was - that was done in a commercial setting with shareholder funds.
In addition, given the disruption LLMs have caused Google in general, I would say that, despite Gemini, it may have been better cost/benefit-wise for Google NOT to invent the transformer architecture at all/yet, or at least not to publish a white paper for the world to see. As a use of shareholder funds, the activity above probably isn't a wise one.
Career being the core of one's identity is so ingrained in society. Think about how schooling is directed towards producing what 'industry' needs. Education for education's sake isn't a thing. Capitalism sees to this and ensures so many avenues are closed to people.
Perhaps this will change but I fear it will be a painful transition to other modes of thinking and forming society.
Another problem is hoarding. Wealth inequality is one thing, but the unadulterated hoarding by the very wealthy means that wealth is unable to circulate as freely as it ought to. This burdens a society.
> Career being the core of one's identity is so ingrained in society
In AMERICAN society. Over there "what do you do?" is in the first 3 questions people ask each other when they meet.
I've known people for 20 years and I don't have the slightest clue what they do for a living; it's never come up. We talk about other things - their profession isn't a part of their personality.
It is a thing, but only for select members of society: off the top of my head, those with benefits programs that fund that opportunity, like 100% disabled veterans, or the wealthy and their families.
For a prototype, maybe - but something production-ready requires almost the same amount of effort as it used to, if you care about good design and code quality.
It really doesn't. I just ditched my WordPress/WooCommerce webshop for a custom one that I made in 3 days with Claude, in C# Blazor. It is better in every single way than my old webshop, and I have control over every aspect of it. It's totally production ready.
The code is as good as, or even better than, what I would have written. I gave Claude the right guidelines and made sure it stayed in line. There are a bunch of Playwright tests ensuring things don't break over time, and proving that things actually work.
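For a sense of what those tests look like, here's a minimal sketch using Playwright's .NET bindings (the Microsoft.Playwright.NUnit package); the URL and selectors below are hypothetical, not from the actual shop:

    // Minimal sketch of one such Playwright regression test.
    // URL and selectors are made up for illustration.
    using System.Threading.Tasks;
    using Microsoft.Playwright;
    using Microsoft.Playwright.NUnit;
    using NUnit.Framework;

    public class CheckoutSmokeTest : PageTest
    {
        [Test]
        public async Task AddingAProductShowsItInTheCart()
        {
            // Open a product page and add the item to the cart.
            await Page.GotoAsync("https://localhost:5001/products/sample");
            await Page.ClickAsync("#add-to-cart");

            // The cart should now contain exactly one line item.
            await Page.GotoAsync("https://localhost:5001/cart");
            await Expect(Page.Locator(".cart-line-item")).ToHaveCountAsync(1);
        }
    }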
I didn't have to mess with any of the HTML/CSS, which is usually what makes me give up on my personal projects. The result is really, really good, and I say that as someone who's been passionate about programming for about 15 years.
3 days for a complete webshop with Stripe integration, shipping labels and tracking automation, SMTP emails, admin dashboard, invoicing, CI/CD, and all the custom features that I used to dream of.
Sure, it's not a crazy innovative project, but it brings me a ton of value and liberates me from those overengineered, "generic", bulky CMSes. I don't have to pay $50 for a stupid plugin (that wouldn't really fit my needs anyway) anymore.
I find that restricting it to very small, clearly separated modules works well. It does sometimes do weird things, but I'm there to correct it with my experience.
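As a hypothetical picture of that separation (all names invented for illustration): each concern lives behind its own narrow interface, so the agent can be pointed at one small surface at a time:

    // Hypothetical sketch of "very small, clearly separated modules":
    // each concern sits behind a narrow interface the agent works on in isolation.
    using System.Threading;
    using System.Threading.Tasks;

    public record Order(int Id, string Email);
    public record ShippingLabel(string TrackingNumber);
    public record Invoice(int Number, decimal Total);

    public interface IShippingLabelService
    {
        Task<ShippingLabel> CreateLabelAsync(Order order, CancellationToken ct = default);
    }

    public interface IInvoiceService
    {
        Task<Invoice> GenerateAsync(Order order, CancellationToken ct = default);
    }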
I just wish I could have competent enough local LLMs and not rely on a company.
The ones approaching competency cost tens of thousands in hardware to run. Even if competitive local models existed would you spend that to run them? (And then have to upgrade every handful of years.)
You can be as specific as you want with an LLM, you can literally tell it to do “clean code” or use a DI framework or whatever and it’ll do it. Is it still work? Yes. But once you start using them you’ll realize how much code you actually write is safely in the realm of boilerplate and the core aspect of software dev is architecture which you don’t have to lose when instructing an agent. Most of the time I already know how I want the code to look, I just farm out the actual work to an agent and then spend a bunch of time reviewing and asking follow up questions.
Here’s a bunch of examples: moving code around, abstracting common functionality into a function and then updating all call sites, moving files around, pattern-matching off an already existing pattern in your code. Sometimes it can be fun and zen, or you’ll notice another optimization along the way … but most of the time it’s boring work an agent can do 10x faster than you.
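To make the second one concrete, here's a hypothetical before/after (names invented) of the "abstract into a function and update all call sites" chore:

    // Hypothetical before/after for "abstract common functionality into a
    // function and then update all call sites".
    using System;

    public static class Guard
    {
        // Before: this null/whitespace check was copy-pasted at every call site.
        // After: one helper, and each call site becomes a single mechanical edit --
        // exactly the kind of sweep an agent grinds through quickly.
        public static string NotBlank(string value, string name) =>
            string.IsNullOrWhiteSpace(value)
                ? throw new ArgumentException($"{name} must not be blank", name)
                : value;
    }

    public class OrderHandler
    {
        public void Submit(string email)
        {
            // One of the updated call sites.
            var validated = Guard.NotBlank(email, nameof(email));
            Console.WriteLine($"Submitting order for {validated}");
        }
    }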
> the core aspect of software dev is architecture which you don’t have to lose when instructing an agent. Most of the time I already know how I want the code to look, I just farm out the actual work to an agent and then spend a bunch of time reviewing and asking follow up questions.
This right here, in your very own comment, is the crux. Unless you're rich or run your own business, your employer (and many other employers) is right now counting down the days until it can think of YOU as the boilerplate, and farm YOU out to an LLM. At the very least, where they currently employ 10 they are salivating about reducing it to 2.
This means painful change for a great many people. Appeals by analogy to historical changes, like motorised vehicles etc., miss the QUALITATIVE change occurring this time.
Many HN users may point to Jevons paradox; I would like to point out that it may very well work right up until the point that it doesn't. After all, a chicken has always seen the farmer as a benevolent provider of food, shelter and safety - until, of course, THAT day when he decides he isn't.
Jevons paradox I doubt applies to software, sadly for SWEs; or at least not in the way they hope it does. That paradox implies that there are software projects on the shelf that have a decent return on investment (ROI) but aren't taken up for lack of resources (money, space, production capacity or otherwise). Unlike with physical goods, in software the only resources usually lacking are money and people, which means the additional software that gets built comes from the stack of lower-value projects.
AI may make low-ROI projects more viable now (e.g. internal tooling in a company, or a business website), but in general the high-ROI projects - the ones that can justify high salaries - would have been done anyway.
My overwhelming experience is that the sort of developers unironically using the phrase "vibe coding" are not interested in, and don't care about, good design and code quality.
If I can keep adding new features without introducing big regressions, that is good design and good code quality. (Of course there will come a time when that is no longer possible and it will need a rewrite - the same as software created by top-paid developers from the best universities.)
As long as we can keep new bugs in LLM-written code to the same level as in hand-written code, I think LLMs writing code is much superior, simply because of the speed with which it allows us to implement features.
We write software to solve (mostly) business efficiency problems. The businesses which will solve those problems faster than their competitors will win.
In light of OpenAI confessing to shareholders that there's no there there (being shocked by, and then adopting, Anthropic's MCP; being shocked by, and then adopting, Anthropic's Skills; opening up a hosted dev platform to milk my awesome LLM business ideas; and now revealing that inline ads à la Google are their best idea so far to, you know, make money…), I was thinking about those LLM project statistics. Something like 5-10% of projects are seeing a nice productivity bump.
A standard distribution says some minority of IT projects are tragically bad… I've worked with dudes who would copy and paste three different JavaScript frameworks onto the same page, as long as it worked…
Air fryers are great household tabletop appliances that help people cook, faster and more easily than ever before, extraordinary dishes their ovens normally wouldn't manage. A true revolution. A proper chef can use one to craft amazing food. They're small and economical, awesome for students.
Chefs just call it "convection cooking" though. It's been around for a minute. Chefs also know when and how to go hot, and can use an actual deep fryer if and when they want.
The frozen food bags here have air-fryer instructions now. The Michelin-star chefs are still focusing on shit you could buy books about 50 years ago…
Coding is merely a means to an end and not the end itself. Capitalism sees to it that a great many things are this way. Unfortunately only the results matter and not much else. I'm personally very sorry things are this way. What I can change I know not.
Not sure it's the gotcha you want it to be. What you said is true by definition. That is, vibe coding is defined as not caring about code. Not to be confused with LLM-assisted coding.
I care about product quality. If "good design" and "code quality" can't be perceived in the product they don't matter.
I have no idea what the code quality is like in any of the software I use, but I can tell you all about how well they work, how easy to use they are, and how fast they run.
Perhaps for the inexperienced or timid. Code quality is "it compiles" and design is "it performs to spec". Does properly formatted code matter when you no longer have to read it?
Formatted? I guess not really, because it’s trivially easy to reformat it. But how it’s structured, the data structures and algorithms it uses, the way it models the problem space, the way it handles failures? That all matters, because ultimately the computer still has to run the code.
It may be more extreme than what you are suggesting here, but there are definitely people out there who think that code quality no longer matters. I find that viewpoint maddening. I was already of the opinion that the average quality of software is appalling, even before we start talking about generated code. Probably 99% of all CPU cycles today are wasted relative to how fast software could be.
Of course there are trade-offs: we can’t and shouldn’t all be shipping only hand-optimised machine code. But the degree to which we waste these incredible resources is slightly nauseating.
Just because something doesn’t have to be better, it doesn’t mean we shouldn’t strive to make it so.
I don't agree. I looked at most of the code the AI wrote in my project, and I have a good idea of how it's architected because I actively planned it. If I have a bug in my orders, I know I have to go to the orders service. From there it's not much harder than reading the code my coworkers write at my day job.
At this point, realistically, do you read assembly or library code anymore?
Years ago it was Programmer -> Code -> Compile -> Runtime
Today the Programmer is divided into two entities:
Intention/Prompt Engineer -> AI -> Code -> Compile -> Runtime.
We have entered the 'sudo make me a sandwich' world, where computers do our bidding via voice and intent. Despite knowing how low-level device drivers work, I do not care how a file is stored, in what format, or on what medium. I just want .open and .write to work as expected on a working instruction set.
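A trivial sketch of what that level of abstraction means in practice (filename made up) - the caller neither knows nor cares about the format, medium, or driver underneath:

    // The ".open and .write" level of abstraction: no knowledge of the
    // format, medium, or driver underneath is needed, or wanted.
    using System.IO;

    class Notes
    {
        static void Main()
        {
            using var writer = File.CreateText("notes.txt"); // "open"
            writer.Write("it just works");                   // "write"
        }
    }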
Those who can dive deep into software and hardware problems will retain their jobs, or find work doing that which AI cannot. The days of requiring an army of six-figure polyglots have passed. As for AI doing production- or kernel-level work, that is only a matter of time.
I'm not sure I'm having more fun, at least not yet, since for me the availability of LLMs takes away some of the pleasure of needing to use only my intellect to get something working. On the other hand, yes, it is nice to be able to have Copilot work away on a thing for my side project while I'm still focused on my day job. The tradeoff is definitely worth it, though I'm undecided on whether I am legitimately enjoying the entire process more than I used to.
You don't have to use LLMs the whole time. For example, I've gotten a lot done with AI and had the time to spend over the holidays on a long-time side project - organically coding the big fun thing:
Replacing Dockerfiles and Compose with CUE and Dagger
I don't do side projects, but the LLM has completely changed the calculus about whether some piece of programming is worth doing at all. I've been enjoying myself automating all sorts of admin/ops stuff that hitherto got done manually because there was never a clear half-day to sit down and write the script. Claude does it while I'm deleting email or making coffee.
For you, maybe. In my experience, the constant need to babysit LLMs to avoid the generation of verbose, unmaintainable slop is exhausting, and I'd rather do everything myself. Even with meticulously detailed instructions, it feels like a slot machine - sometimes you get lucky and the generated code is somewhat usable. Of course, it also depends on the complexity and scope of the project and/or the tasks that you are automating.
It is clearly an emotional question. My comment on here saying I enjoyed programming with an LLM has received a bunch of downvotes, even though I don't think the comment was derogatory towards anyone who feels differently.
People seem to have a visceral reaction towards AI, where it angers them enough that even the idea that people might like it upsets them.