It Would Be Good If the AI Bubble Burst (stephendiehl.com)
56 points by extesy 4 months ago | hide | past | favorite | 32 comments


I wholeheartedly agree--the hype has become toxic, just like in 2001 when the bubble burst. I think the author overestimates what will be salvageable from this. What I witnessed back in 2001-2003 was beyond comprehension. Find someone who worked for Northern Telecom back then to explain their experience of not only losing their job, but also their retirement (the entire 401K was in company stock). The former workers I encountered still lived in denial about the stock's worthlessness even into 2005, but it was finally, inevitably de-listed and the last remnants of those dead 401Ks were swept away.

In RTP, the amount of office space that was built out has never been fully utilized to this very day.

These tech bubbles come and go, but leave enormous craters of destruction in their wake.


> These tech bubbles come and go, but leave enormous craters of destruction in their wake.

They also leave a select few propped up to move on to their next, entitled, adventure [0].

[0] https://web.archive.org/web/20100326134422/http://www.phonep...


> It could simply be a tool, and we could get back to the real, unglamorous, but ultimately more rewarding work of using it to build better things.

I agree completely from a developer point of view. As for it being a bubble, I'm not sure. It seems that it's a couple of companies enticing people to integrate things into their systems so deeply that later they can name their price and dictate their terms, so that all technology falls in line with how they want things to play out.

We can see it already happening with VC investment not touching anything that doesn't have AI integrated.

Less innovation and creativity from smaller startups means less competition, which is great for business.


Feels a little like AWS in that regard, no?


In the sense of hosting and "cloud" when it was the craze at the time, yes. Absolutely. But what you developed and ran on the AWS platform was not dictated by AWS. If it didn't suit, you could ad-hoc it.

Great example though, so many companies grew from the IaC (infrastructure-as-code) movement. Those were startups, though, doing what Amazon probably didn't have the resources to do themselves and also didn't need to.

This AI craze feels like it hits at a bit of a lower level; I feel awful for any junior or new devs entering the market.

Edit for clarity: HashiCorp's Terraform wouldn't be here without making AWS infra easier.


Unlike past bubbles, there are only a handful of publicly traded AI stocks, which have notably performed poorly despite considerable AI hype. This bubble is limited to private companies. Also, the AI market is capable of accommodating many entrants, each getting a high valuation and filling some sort of niche, instead of the old Google vs. Yahoo or Apple App Store vs. Google Play duopolies, or winner-take-all markets like we saw with MySpace vs. Facebook. There is Eraser, Claude, Anthropic, etc. Each fills some sort of purpose and has strengths and weaknesses. In past bubbles, the market was more concentrated among a few names, or the players were interchangeable. So looking at this through the lens of past bubbles may not work.


A bubble assumes there is a bell curve to the system. So far, all the graphs show it going up, either linearly or curving upward. There hasn't been a hint (yet) of slowing down. Until there is some sign, don't assume a bubble.


I wholeheartedly disagree. We have already plateaued for some time, both in terms of LLM usefulness and in terms of efficiency.

You can already see some of the bigger players trying to squeeze more money out of their victims, pardon, users, in order to maintain at least the illusion of a future in which they are profitable. We also see some players, e.g. Apple, give up on the race, and other players, e.g. Microsoft, Google and OpenAI, switch personnel and positions in the race, which should be a good signal that it's a mined-out field and they are scrapping to be the one that gets the few remaining spoils before it all crumbles down. Very bearish on LLMs.


I would challenge you to prove that with data


You could prove it yourself if you actually care whether or not it's true and aren't just positioning yourself in an imaginary debate so you can claim "burden of proof". If what they're saying is true then it will still be true whether any proof is posted by them here or not.


No, actually, the proof is in basically every benchmark, all of which are super well known, so the burden of proof is on the OP.


It's funny how that works in your mind.

A snake oil salesman comes along and says that LLMs are the bee's knees, and you just take it as a given without any critical thought.

But when people actually have proof that LLMs are nothing special and even reduce productivity (by 19% for experienced developers, per https://secondthoughts.ai/p/ai-coding-slowdown), you scream to the heavens for proof.

I am the last person to stop the AI fanbase from outsourcing their critical thinking, and some of their wallets, to a fancy autocomplete; it means more work for the rest of us who actually do the thinking.

LLM vibe-"engineers" will be fleeced for $$$ when Anthropic actually starts wanting to be profitable: https://news.ycombinator.com/item?id=44598254#44602695

Their skills deteriorate, and so does their critical thinking: https://www.mdpi.com/2075-4698/15/1/6

And I just can't stop giggling imagining their workday when Claude is down.

LGTM is not a sustainable career choice, people.


I would challenge you to use LLMs to prove me wrong, then LLMs to prove me right, because they will gladly invent stuff to agree with whatever you ask them to do.

Hard pass on spending time debating with the fanbase of a tool whose claim to fame is the dubious benefit of allowing its users to avoid thinking critically for themselves.


That sounds about right: "ignore all evidence and believe what I think."


Evidence of what?

As soon as somebody actually tried to measure LLM productivity for devs, it became evident that it reduces productivity by about 19% for experienced engineers:

https://hackaday.com/2025/07/11/measuring-the-impact-of-llms...

But you can believe any x-AI/e-acc bro (with 0 years of dev experience but dog-like obedience to their mind-influencers Musk or Sama) who suddenly became a self-proclaimed expert and claims a 319,491% increase in productivity. It's Schrödinger's LLM gains: 1337% on X, -19% in a study.

Those are the same people who back in the day were proclaiming that NFTs, web3 and crypto were the next coming of Jesus. These are the types of people who hang on every statement of Andrej Karpathy and take it as gospel, while failing to realize that's the same guy who told you self-driving cars would be a reality in 2019. I sometimes wonder how they are not ashamed of themselves, but then I realize shame requires critical thinking and context/memory, something that is severely lacking in LLMs, but also seemingly in the LLM fanbase.

Funnily enough, these e/acc bros are the ones who benefit the most from LLMs. That's because if you are used to offloading your critical thinking to accounts on X, offloading it to a fancy autocomplete instead doesn't seem like such a big step down.

But in reality, the cold facts are that if LLMs actually helped with productivity, we'd see a noticeable impact on open source. And what do we have instead? Either crickets or just slop, an insane amount of slop.

https://www.theregister.com/2025/02/16/oss_llm_slop/

https://www.infoq.com/articles/llm-surging-pr-noise/

And AGI is juuust around the corner, trust me.


Not going to happen, this bubble knows what it’s doing.


The AI "bubble", if you want to call it a bubble, currently provides most of the oxygen for transistor improvements. Had it not been for AI, smartphones and PCs alone would not have been able to sustain the 2 - 2.5 year improvement cycle.

And as someone who has seen the PC, Internet and smartphone cycles, I will say the ChatGPT (or AI) adoption cycle is way faster than anything I have seen before.


"It has become less a field of engineering and more of a speculative gold rush, complete with a quasi-religious fervor that is completely disconnected from the reality of what these tools can actually do."


Except it might not be a bubble if companies are using it in production.


> Except it might not be a bubble if companies are using it in production.

"Being a bubble" and "companies are using it in production" are not mutually exclusive.


Companies were using websites in production during the .com bubble too.


People were buying houses in 2006.


>> a legitimate, if incremental, step forward in developer productivity

It's not incremental, it's revolutionary. Nothing that has come before has such power and capability.


So where is the product? Why haven't the vibecoders built a browser or a kernel or anything remotely ambitious? They have had years at this point. With their fabled productivity increase, making a better kernel than Linux in that time should be child's play. So where is it?


Why are you conflating people who use LLMs to work more efficiently with vibe-coding shills? Real engineers only write in assembly, right? Lol. It's giving anxiety.


AI is an assistant not a magician.


So where is the revolution then? How can it be both a revolution and not a magician at the same time?

At the same time, when studies are coming out showing that experienced developers lose 19% of their productivity using AI tools, it makes me question whether it's not a devolution. Especially considering how wildly unprofitable it is to run Claude at a scale where it's at least net neutral for the average dev, where is that revolution you are talking about?

Is it the same kind of revolution as NFTs or blockchain or whatever web3 was? Because I am still waiting for those.


Then why are AI-based contributions in the open source space generally so low quality that they get rejected, while the biggest observable effect of big tech's investments is the addition of AI buttons everywhere that sometimes don't even do anything other than annoy users? That's aside from AI-powered tech support leading to loss of customers and reputation; see Cursor AI.

If it's revolutionary as you say, why are companies laying off people when higher productivity per employee should mean that more employees increase the advantage from AI? Why aren't early adopters running circles around competitors and producing larger, more frequent and/or higher quality updates and products in a measurable way?


>The core of the problem is the mythology. You have chief executives and venture capitalists acting like high priests, delivering sermons from conference stages about the imminent arrival of this ill-defined "Artificial General Intelligence".

A lot of the AGI prediction is the fairly mundane business of extrapolating Moore's-law-like growth in computing and comparing it to the human brain. I don't think calling it mythology from high priests is a very accurate appraisal.
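That extrapolation exercise can be sketched in a few lines. Every number below is an illustrative assumption, not a measurement: ~1e16 FLOP/s is one commonly cited rough estimate for brain-equivalent compute, 1e15 FLOP/s is a hypothetical stand-in for compute available today, and the 2-year doubling time mirrors classic Moore's law.

```python
import math

# Back-of-envelope sketch of the Moore's-law-style extrapolation described
# above. All inputs are assumptions chosen for illustration only.
BRAIN_FLOPS = 1e16      # assumed rough estimate of brain-equivalent compute
CURRENT_FLOPS = 1e15    # assumed compute available today (hypothetical)
DOUBLING_YEARS = 2.0    # assumed doubling period (classic Moore's law)

def years_until_parity(current: float, target: float, doubling_years: float) -> float:
    """Years until `current` reaches `target`, doubling every `doubling_years`."""
    if current >= target:
        return 0.0
    # Number of doublings needed, times the length of one doubling period.
    return math.log2(target / current) * doubling_years

print(years_until_parity(CURRENT_FLOPS, BRAIN_FLOPS, DOUBLING_YEARS))
# ~6.64 years under these entirely assumed inputs
```

The mundane part is exactly this: the whole prediction reduces to a logarithm, and the answer swings wildly with the assumed brain estimate and doubling rate.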


The author misses the forest for the trees. He accurately articulates the current state of the tools he's using but doesn't acknowledge or extrapolate the next derivative, i.e. the rate of improvement of these tools.

That being said, everything is overvalued and a lot of this is ridiculous.


> He's accurately articulating the current state of tools he's using but isn't acknowledging or extrapolating the next derivative

Extrapolation would more reasonably show that they're reaching an asymptote; graph cost vs. improvement on a chart and you'll see that they are not proportional.
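The cost-vs-improvement exercise can be sketched with entirely hypothetical numbers (none of these costs or scores are real figures); the only point being illustrated is that the marginal gain per dollar shrinks as spend grows:

```python
# Hypothetical (cost in $M, benchmark score) pairs per model generation.
# These figures are invented for illustration only.
generations = [(10, 60.0), (100, 72.0), (1000, 78.0), (10000, 81.0)]

# Marginal improvement: benchmark points gained per extra $M spent,
# between consecutive generations.
marginals = [
    (s1 - s0) / (c1 - c0)
    for (c0, s0), (c1, s1) in zip(generations, generations[1:])
]

for m in marginals:
    print(f"{m:.4f} points per extra $M")
# prints 0.1333, then 0.0067, then 0.0003 -- each step buys less per dollar
```

If the real curves look anything like this assumed shape, plotting score against log(cost) gives a straight-ish line, which is exactly the "not proportional" asymptote behavior described above.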


I think you are the one missing the forest for the trees.

- The energy efficiency and cost improvements of LLMs have plateaued as of late. https://arxiv.org/html/2507.11417v1

- The improvements from each subsequent model have also plateaued, with even some noticeable regressions

- The biggest players are so wildly unprofitable that they are already trying to change their plans or squeeze their current fanbase and raise their rates

https://news.ycombinator.com/item?id=44598254

https://www.wheresyoured.at/anthropic-is-bleeding-out/

- And, as it turns out, experienced developers are 19% less productive using LLMs: https://www.theregister.com/2025/07/11/ai_code_tools_slow_do...

> I.e the rate of improvement of these tools.

Their improvements have stopped keeping pace with the rate at which their costs are rising. It's simple mathematics: gains in efficiency don't match the increases in cost, they are extremely unprofitable, and all of that data points to a bubble.

It's one of the most obvious bubbles I have ever seen, propped up only by vibes, X posts and Sama's promise that AGI is just around the corner: just inject a couple trillion more, trust me bro. All that for a fancy autocomplete.



