simplify's comments | Hacker News

Does it not sound insane to you that you need to expose your biometrics to a corporation just to make anonymous posts on a forum?

It really is insane that some people don't realize how biometrics are just as bad as every other option.

Cool concept! I play Go, and it's extremely unnerving that all the good shapes you play in Go are essentially the worst shapes you can play in Tiao :D

Haha yeah :D

What is the purpose of this mindset? Should we encourage typical corporate coldness instead?

We should encourage minimal dependency on multibillion-dollar tech companies like Anthropic. They and similar companies are just milking us… but since their toys are so shiny, we don't care.

Sure, but that seems out of scope of the original comment.

Same here, maybe we're grandfathered into a good plan or something.


Since when has raising taxes actually solved any major problem? We have enough taxes, the issue is the corrupt politicians swindling it to themselves and their cronies.


You pay enough. Musk doesn't. Does he even pay any at all?


Power is not the problem, because power exists regardless of who owns it.

We the people actually have a relatively high amount of power in our states and communities. We just don't use it. The real solution is to convince the masses to pay attention, which is harder today than it ever was.


LLMs are an amplifier. The great get greater, and the lazier get lazier.


Considering the seemingly increasing frequency of high-severity bugs at FAANG companies in the last year, I think perhaps "the great get greater" is not actually the case.


That's assuming FAANG engineers are actually great.


They're far more likely to be above average, I would say.


Above average in tolerance for immoral business models, certainly.


I happen to think that's largely a self-delusion which nobody is immune to, no matter how smart you are (or think you are).

I've heard this from a few smart people whom I know really well. They strongly believe this, they also believe that most people are deluding themselves, but not them - they're in the actually-great group, and when I pointed out the sloppiness of their LLM-assisted work they wouldn't have any of it.

I'm specifically talking about experienced programmers who now let LLMs write the majority of their code.


All on my own, I hand-craft pretty good code, and I do it pretty fast. But one person is finite, and the amount of software to write is large.

If you add a second, skilled programmer, just having two people communicating imperfectly drops quality to 90% of the base.

If I add an LLM instead, it drops to maybe 80% of my base quality. But it's still not bad. I'm reading the diffs. There are tests and fancy property tests and even more documentation explaining constraints that Claude would otherwise miss.

So the question is: if I can get 2x the features at 80% of the quality, how does that 80% compare to what the engineering problem requires?
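A back-of-envelope sketch of that question, in Python. The numbers are just the illustrative estimates from this comment, and `required_quality` is a hypothetical threshold, not anything measured:

    # Back-of-envelope model of the tradeoff described above.
    # All numbers are the comment's illustrative estimates, not measurements.
    base_quality = 1.0   # solo, hand-crafted code
    pair_quality = 0.90  # two skilled humans, imperfect communication
    llm_quality = 0.80   # solo + LLM, diffs reviewed, tests in place

    def worth_it(quality: float, speedup: float, required_quality: float) -> bool:
        """The trade makes sense only if the quality floor still clears
        what the engineering problem actually requires."""
        return quality >= required_quality and speedup > 1.0

    # e.g. an internal tool might tolerate 0.75; safety-critical firmware won't
    print(worth_it(llm_quality, speedup=2.0, required_quality=0.75))  # True
    print(worth_it(llm_quality, speedup=2.0, required_quality=0.95))  # False

The point isn't the exact figures; it's that "80% quality" is only meaningful relative to the threshold the problem imposes.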


I was somewhat surprised to find that the differentiator isn't how smart someone is, but their ability to accurately assess when they actually know something.

From my own observations, the types of people I previously observed to be sloppy in their thinking and other work correlate almost perfectly with those who seem most eager to praise LLMs.

It's almost as if the ability to identify bullshit makes you critical of the ultimate bullshit generator.


This is very true. My biggest frustration is people who use LLMs to generate code and then don't use LLMs to refine that code. That is how you end up with slop. I would estimate that as an SDE I spend about 30% of my time reviewing and refining my own code, and I would encourage anyone operating a coding agent to still spend 30% figuring out how to improve the code before shipping.


Youtube's downvote button has served me quite well for this purpose.


Yeah, Mithril got this right over 10 years ago. Still, good to see at least one big player finally catching up. React's state model has always been a pain to work with.


Same here. I tried Codex a few days ago for a very simple task (remove any references to X within a long text string) and it fumbled it pretty hard. Very strange.
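For what it's worth, a deterministic sketch of roughly that task; `X` and `remove_references` are hypothetical stand-ins, since the original string and term aren't given:

    import re

    # Strip whole-word, case-insensitive mentions of a term, then collapse
    # the doubled spaces the deletion leaves behind.
    def remove_references(text: str, term: str) -> str:
        stripped = re.sub(rf"\b{re.escape(term)}\b", "", text, flags=re.IGNORECASE)
        return re.sub(r" {2,}", " ", stripped).strip()

    print(remove_references("X marks the spot, and x does again.", "X"))
    # -> "marks the spot, and does again."

Which is part of why the fumble is surprising: it's the kind of task that has a two-line deterministic answer.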


Yeah, I'm in the same boat. Codex can't do this one task, and constantly forgets what I've told it, yet I'm reading these comments saying it's so great, to the point that I'm wondering if I'm the one taking the crazy pills. Maybe we're being A/B tested and don't know about it?


No, no one who's super-boosting the LLMs ever tells you what they're working on or gives any reasonable specifics about how and why it's beneficial. When someone does, it's a fairly narrow scope, and typically in line with my experience.

They can save you some time by doing fairly complex but routine tasks that you can describe in plain language instead of coding. To get good results you really need a lot of underlying knowledge yourself, and essentially, I think of it as a translator. I can write a program in very good detail using normal language, and then the LLM can convert it to code with reasonable accuracy.

I haven't been able to depend on it to do anything remotely advanced. They all make up API endpoints or methods or fill in data with things that simply don't exist, but that's the nature of the model.


You misread me. I'm one of the people you're complaining about. Claude Code has been great in my experience, and no, I don't have a GitHub repo of generated code for you to tell me is trivial and unadvanced and that a child could do it.

What I was saying was a comparison of my experience with Claude Code vs Codex with GPT-5. CC has been better than Codex in my experience, contrary to GP's comment.


Maybe, just maybe, people are lying on the internet. And maybe those people have a financial interest in doing so.

