
> are these tools really that hard to use?

Exactly! If people have 'never felt this far behind' and LLMs are that good, ask the LLM to teach you.

Like so many articles on 'prompt engineers', this 'never felt this far behind' take is laughable. Programmers who have learnt how to program (writing algorithms, understanding data structures, reading source code and API docs) are now completely incapable of using a text box to input prompts? Nor can they learn how quickly enough? And it's somehow more difficult than what they have routinely been doing? LOL


Frontier AI isn't trained on frontier AI. I wish HN would collectively stop and actually think before they post.

Many HN users may point to Jevons paradox, but I would point out that it may very well work right up until the point that it doesn't. After all, a chicken has always seen the farmer as a benevolent provider of food, shelter and safety; that is, of course, until THAT day when he decides he isn't.

It is certainly possible that AI is the one great disruptor that we can’t adapt to. History over millennia has me taking the other side of that bet, seeing the disruptions and adaptations from factory farming, internal combustion engines, moving assembly lines, electrification, the transistor, ICs, wired then wireless telecommunications, the internet, personal computing, and countless other major disruptions.

Have we though?

1. Fundamentals do change. Yuval Noah Harari made this point in the book Sapiens: there are core beliefs, and even those shift. In fact, the idea that things do change for the better is relatively new; “the only constant is change” wasn’t really true before the 19th century.

What does “the great disruptor we can’t adapt to” mean exactly? If humans annihilate themselves through climate change, the earth will adapt, the solar system will shrug it off, and the universe won’t even realize it happened.

But like, I am 100% sure humans will adapt to the AI revolution. Maybe we let 7 billion people die off, and the top 1% of those remaining enslave the rest of us to be masseuses and prostitutes while they live like kings with robot servants, but I’m not super comfortable with that definition of “adaptation”.

For most of human history, and in most of the world, “the rest of us” don’t live all that well. Is that adaptation? I think most people include a healthy, large, and growing middle class in their definition of success metrics.


Isn’t this “healthy, large middle class” a reality that is less than 100 years old in the best of cases? (After a smaller initial emergence perhaps 100 years prior to that.) In the 250K years since modern humans emerged, that’s a comparative blink of an eye.

There might be slight local dips along the timeline, but I think most Westerners (and maybe most people, but my lived experience is Western) would not willingly trade places with their same-percentile positioned selves from 100, 200, 500, 1000, 2000, 10K, 50K, or 250K years ago. The fact that few would choose to switch has to be viewed with some positive coefficient in a reasonable success metric.


Yes, my point was: if AI and automation in general are the start of the end of all that (and I do think there are some signs that these technologies could be leading us towards a fundamentally less egalitarian society), then I think many would consider that a devastating impact that we did not adapt to, the way we did the Industrial Revolution, which ultimately led towards more middle-class opportunities.

I concur with your sentiments.

I am puzzled why so many on HN cannot see this. I guess most users on HN are employed? Your employers, let me tell you, are positively salivating at the prospect of firing you. The better LLMs get, the fewer of you will be needed.


Denial, like those factory workers at the first visit from the automation company, each one hoping they are the one selected to stay and oversee the robots.

I have seen projects where headcounts got reduced: translator teams, asset creation teams, devops, phone support teams...

It is all about how to do more with less, now with AI help as well.


Software doesn't have to be good, academically speaking. It just needs to furnish a useful function to be economically viable.

LLMs may not generate the best code, but they need only generate useful code to warrant their use.


> the core aspect of software dev is architecture which you don’t have to lose when instructing an agent. Most of the time I already know how I want the code to look, I just farm out the actual work to an agent and then spend a bunch of time reviewing and asking follow up questions.

This right here, in your very own comment, is the crux. Unless you're rich or run your own business, your employer (and many other employers) is right now counting down the days till they can think of YOU as boilerplate and farm YOU out to an LLM. At the very least, where they currently employ 10 they are salivating about reducing it to 2.

This means painful change for a great many people. Appeals by analogy to historical changes like motorised vehicles etc. miss the QUALITATIVE change occurring this time.

Many HN users may point to Jevons paradox, but I would point out that it may very well work right up until the point that it doesn't. After all, a chicken has always seen the farmer as a benevolent provider of food, shelter and safety; that is, of course, until THAT day when he decides he isn't.


Sadly for SWEs, I doubt Jevons paradox applies to software, or at least not in the way they hope it does. The paradox implies that there are software projects on the shelf with a decent return on investment (ROI) that aren't taken up for lack of resources (money, space, production capacity or otherwise). Unlike with physical goods, the only resources usually lacking now are money and people, which means the only way for more software to be built is for lower-value projects to stack up.

AI may make low-ROI projects more viable now (e.g. internal tooling in a company, or a business website), but in general the high-ROI projects, the ones that can justify high salaries, would have been done anyway.


Hear, hear. I totally agree with both the author's sentiments and your comments.

As someone said "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes".

Sadly, all the AI is owned by companies that want to do all your art and writing so that they can keep you as a slave doing their laundry and dishes. Maybe we'll eventually see powerful LLMs running locally, so that you don't have to beg some cloud service for permission to use them in the ways you want, but at this point most people will be priced out of the hardware they'd need to run them anyway.

However you feel about LLMs or AI right now, there are a lot of people with way more money and power than you have who are primarily interested in further enriching and empowering themselves and that means bad news for you. They're already looking into how to best leverage the technology against you, and the last thing they care about is what you want.


You don't have to use them.

You're wrong in saying so. Many companies are quite literally mandating their use; do a quick search on HN.

That's not how technology works in a society.

Only if you are already wealthy or fine with finding a new job.

If I were still employed, I would also not want my employer to tolerate peers of mine rejecting the use of agents in their work out of personal preference. If colleagues were allowed to produce less work for equal compensation, I would want to be allowed to take compensated time off by getting my own work done in faster ways, but that never flies with salaried positions, and getting work done faster is greeted with more work to do sooner. So it would be demoralizing to work alongside, and be required to collaborate with, folks who are allowed to take the slow and scenic route if it pleases them.

In other words, expect your peers to lobby against your right to deny agent use, as much as your employer.

If what you really want is more autonomy and ownership over your work, rejecting tool modernity won't get you that. It requires organizing. We learned this lesson already from how the Luddite movement and Jacobin reaction played out.


You’re assuming implicitly that the tool use in question always results in greater productivity. That’s not true across the board for coding agents. Let me put this another way: 99% of the time, the bottleneck is not writing code.

Why limit this to AI? There have been lots of programming tools which have not been universally adopted, despite offering productivity gains.

For example, it seems reasonable that using a good programming editor like Emacs or Vi would offer a 2x (or more) productivity boost over using Notepad or Nano. Why hasn't Nano been banned, forbidden from professional use?


Very well put

When I do dishes by hand I think all kinds of interesting thoughts.

Anyway, we've had machines that do our dishes and laundry for a long while now.


We have machines that only do some parts of these tasks.

Yet some people still do them by hand…

As a former artist, I can tell you that you will never have good or sufficient ideas for your art or writing if you don’t do your laundry and dishes.

A good proxy for understanding this reality is that wealthy people who pay others to do all of these things for them have almost uniformly terrible ideas. This is even true for artists themselves. Have you ever noticed how albums tend to get worse the more successful the musicians become?

It’s mundanity and tedium that force your mind to reach out for more creative things, and when you subtract that completely from your life, you’re generally left with self-indulgence instead of hunger.


Well put.

And dishes and laundry can be enjoyable zen moments. One only suffers by perceiving them as chores.

Some people want all yang without any yin.


Coding is merely a means to an end and not the end itself. Capitalism sees to it that a great many things are this way. Unfortunately, only the results matter and not much else. I'm personally very sorry things are this way. What I can change, I know not.

As I've commented already...

The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans would WANT to do. Things that give them meaning, many of which are tied to earning money and producing value for doing just that thing. Software/coding is one of these activities. One can do coding for fun, but doing the same coding where it provides value to others/society and financial upkeep for you and your family is far more meaningful.

If that is what you've been doing, a love for coding, I can well empathise with how the world is changing underneath your feet.


The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans would WANT to do. Things that give them meaning, many of which are tied to earning money and producing value for doing just that thing. Software/coding is one of these activities. One can do coding for fun, but doing the same coding where it provides value to others/society and financial upkeep for you and your family is far more meaningful.

For those who have swallowed the AI panacea hook, line and sinker, those who say it's made them more productive, or that they no longer have to do the boring bits and can focus on the interesting parts of coding: I say follow your own line of reasoning through. It demonstrates that AI is not yet powerful enough to NOT need to empower you, to NOT need to make you more productive. You're only ALLOWED to do the 'interesting' parts presently because the AI is deficient. Ultimately AI aims to remove the need for any human intermediary altogether. Everything in between is just a stop along the way, so for those it empowers: stop and think a little about the long-term implications. It may be that for you, right now, it is a comfortable position financially or socially, but your future you, just a few short months from now, may be dramatically impacted.

As someone said "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes".

I can well imagine the blood draining from people's faces: the graduate coder who can no longer get on the job ladder; the law secretary whose dream job, a dream dreamt from a young age, is being automated away; the journalist whose value has been substituted by a white text box connected to an AI model.

I don't have any ideas as to what should be done or, more importantly, what can be done. Pandora's box has been opened; Humpty Dumpty has fallen and can't be put back together again. AI feels like it has crossed the Rubicon. We must all collectively wait and see where the dust settles.


Someone smart said that AI should replace tasks, not jobs.

There are infinite analogies for this whole thing, but it mostly distills down to artisans and craftsmen in my mind.

Artisans build one chair to perfection: every joint is meticulously measured and uses traditional handcrafted Japanese joinery, and not a single screw or nail is used unless absolutely necessary. It takes weeks to build one, and each is a unique work of art.

It also costs 2000€ for a chair.

Craftsmen optimise their process for output: instead of selling one 2000€ chair a month, they'd rather sell a hundred for 20€ each. They have templates for cutting every piece and jigs for quickly attaching different components, and they use screws and nails to speed up the process instead of meticulous handcrafted joinery.

It's all about where you get your joy in "software development". Is it solving problems efficiently, or crafting a beautiful, elegant, expressive piece of code?

Neither way is bad, but pre-LLM both people could do the same tasks. I think that's coming to an end in the near future. The difference between craftsmen and artisans is becoming clearer.

There is a place for people who create that beautiful hyper-optimised code, but in many (most) cases just a craftsman with an agentic LLM tool will solve the customer's problem with acceptable performance and quality in a fraction of the time.


In the long run I think it's pretty unhealthy to make one's career a large part of one's identity. What happens during burnout or retirement or being laid off if a huge portion of one's self depends on career work?

Economically, it's been a mistake to let wealth get stratified so unequally; we should have, and need to, reintroduce high progressive tax rates on income and potentially implement wealth taxes, to reduce the necessity of guessing at a high-paying career 5 or more years in advance. That simply won't be possible to do accurately with coming automation. But it is possible to grow social safety nets and decrease wealth disparity so that pursuing any marginally productive career is sufficient.

Practically, once automation begins producing more value than 25% or so of human workers, we'll have to transition to a collective ownership model and either pay dividends directly out of widget production, grant futures on the same with subsidized transport, or institute UBI. I tend to prefer a distribution-of-production model because it eliminates a lot of the rent-seeking risk of UBI; your landlord is not going to demand 2X the number of burgers and couches you get distributed, the way they'd happily double your rent in dollars.

Once full automation hits (if it ever does; I can see augmented humans still producing up to 50% of GDP indefinitely [so far as anyone can predict anything past human-level intelligence], especially in healthcare/wellness), it's obvious that some kind of direct goods distribution is the only reasonable outcome; markets will still exist on top of this, but they'll basically be optional participation for people who want that.


If we had done what you say (distributed wealth more evenly between people/corporations), then, more to the point, I don't know if AI would have progressed as it has; companies would have been more selective with their investment money, and previously AI was seen, at best, as a long-shot bet. Most companies in the "real economy" can't afford to make too many of these kinds of bets in general.

The main reason for the transformer architecture, and many other AI advancements, really was that "big tech" has lots of cash it doesn't know what to do with. It seems the US system punishes dividends tax-wise as well, so companies are incentivized to become like VCs: buy lots of opportunities hoping one makes it big, even if many end up losing.


Transformers grew out of the value-add side (autotranslation), though, not really the ad business side, IIRC. Value-add work still gets done in high-progressive-tax societies if it's valuable to a large fraction of people. Research into luxury goods is slowed by progressive tax rates, but the border between consumer and luxury goods actually rises a bit with redistributed wealth; more people can afford smartphones earlier, almost no one buys superyachts, and so reinvestment into general technology research may actually be higher.

And I'm sure none of it was based on any public research from public universities, or private universities that got public grants.

Sure. I just know that in most companies (having seen the numbers on projects in a number of them across industries now), funding projects which give people time to think, ponder, and publish white papers on new techniques is rare and economically not justifiable against other investments.

Put it this way: a project where people have the luxury to scratch their heads for a while and to bet on something that may not actually be possible yet is something most companies can't justify financing. Listening to the story of the transformer's invention, it sounds like one of these projects to me.

They may stand on the shoulders of giants, that is true (at the very least they were trained in these institutions), but putting it together as it was, that was done in a commercial setting with shareholder funds.

In addition, given the disruption LLMs have caused Google in general, I would say that, despite Gemini, it may have been better cost/benefit-wise for Google NOT to invent the transformer architecture at all (or yet), or at least not to publish a white paper for the world to see. As a use of shareholders' funds, the activity above probably isn't a wise one.


I agree with much of what you say.

Career being the core of one's identity is so ingrained in society. Think about how schooling is directed towards producing what 'industry' needs. Education for education's sake isn't a thing. Capitalism sees to this and ensures so many avenues are closed to people.

Perhaps this will change but I fear it will be a painful transition to other modes of thinking and forming society.

Another problem is hoarding. Wealth inequality is one thing, but the unadulterated hoarding by the very wealthy means that wealth is unable to circulate as freely as it ought to. This burdens a society.


> Career being the core of one's identity is so ingrained in society

In AMERICAN society. Over there, "what do you do?" is among the first 3 questions people ask each other when they meet.

I've known people for 20 years and I don't have the slightest clue what they do for a living; it's never come up. We talk about other things; their profession isn't a part of their personality.


    Education for education's sake isn't a thing.
It is, but only for select members of society: off the top of my head, those with benefits programs that let them pursue that opportunity, like 100% disabled veterans, or the wealthy and their families.
