
I would challenge you to prove that with data


You could prove it yourself if you actually care whether or not it's true, and aren't just positioning yourself in an imaginary debate so you can claim "burden of proof". If what they're saying is true, then it will still be true whether or not they post any proof here.


No, actually, the proof is in basically every benchmark, all of which are super well known, so the burden of proof is on the OP.


It's funny how that works in your mind.

A snake oil salesman comes along and says that LLMs are the bee's knees, and you just take it as a given without any critical thought.

But when people actually have proof that LLMs are nothing special and even reduce productivity (by 19% for experienced developers): https://secondthoughts.ai/p/ai-coding-slowdown you are screaming to the heavens for proof.

I am the last person who would stop the AI fanbase from outsourcing their critical thinking and some of their wallet to a fancy autocomplete; that means more work for the rest of us who actually do the thinking.

LLM vibe-"engineers" will be fleeced of $$$ when Anthropic actually starts wanting to be profitable: https://news.ycombinator.com/item?id=44598254#44602695

Their skills deteriorate, and so does their critical thinking: https://www.mdpi.com/2075-4698/15/1/6

And I just can't stop giggling imagining their workday when Claude is down.

LGTM is not a sustainable career choice, people.


I would challenge you to use LLMs to prove me wrong, then LLMs to prove me right. Because they will gladly make stuff up to agree with whatever you ask of them.

Hard pass on spending time debating the fanbase of a tool whose claim to fame is the dubious benefit of allowing its users to avoid thinking critically for themselves.


That sounds about right: "ignore all evidence and believe what I think."


Evidence of what?

As soon as somebody actually tried to measure LLM productivity for devs, it became evident that it reduces productivity by about 19% for experienced engineers:

https://hackaday.com/2025/07/11/measuring-the-impact-of-llms...

But you can believe any x-AI/e-acc bro (with 0 years of dev experience but dog-like obedience to their mind-influencers Musk or Sama) who suddenly became a self-proclaimed expert and claims a 319491% increase in productivity. It's Schrödinger's LLM gains: 1337% on X, -19% in a study.

Those are the same people who, back in the day, were proclaiming that NFTs, web3, and crypto were the next coming of Jesus. These are the types of people who hang on every statement of Andrej Karpathy and take it as gospel, while failing to realize that's the same guy who told you self-driving cars would be a reality by 2019. I sometimes wonder how they are not ashamed of themselves, but I realized shame requires critical thinking and context/memory, something that is severely lacking in LLMs, but also, seemingly, in the LLM fanbase.

Funnily enough, these e/acc bros are the ones who benefit the most from LLMs. That's because if you are used to offloading your critical thinking to accounts on X, offloading it to a fancy autocomplete instead doesn't seem like such a big step down.

But in reality, the cold facts are that if LLMs actually helped with productivity, we'd see a noticeable impact on open source. And what do we have instead? Either crickets or just slop, an insane amount of slop.

https://www.theregister.com/2025/02/16/oss_llm_slop/

https://www.infoq.com/articles/llm-surging-pr-noise/

And AGI is juuust around the corner, trust me.



