mattgreenrocks's comments | Hacker News

IMO, the world simply functions better when we strive for virtue.

How do you handle the dogs ignoring the deacons and going after the polecats though? Seems like the mayor should get involved to me.

Without context I would have thought this post came from a video game forum or a mentally ill person. I'm not dissing you personally.

I haven't tried Gas Town yet. I have a pretty good multi-agent workflow by just using beads directly along with thoughtfully produced prompts.

This framing neatly explains the hubris of the influencer-wannabes on social media who have time to post endlessly about how AI is changing software dev forever while also having never shipped anything themselves.

They want to be seen as competent without the pound of flesh that mastery entails. But AI doesn’t level one’s internal playing field.


Sam Altman’s real job is pushing AI hopium on execs who will believe anything in pursuit of that nirvana.

Which is hilarious, because AI is making it easier and easier to bring a good idea to market with much less external financing than usual.

You can argue about security, reliability, and edge cases, but it's not as if human devs have a perfect record there.

Or even a particularly good one.

What are those execs bringing to the table, beyond entitlement and self-belief?


> What are those execs bringing to the table, beyond entitlement and self-belief?

The status quo, which always requires an order of magnitude more effort to overcome. There's also a substantial portion of the population that needs well-defined power hierarchies to feel psychologically secure.


Alternate take: what agents can spit out becomes table stakes for all software. Making it cohesive, focused on business needs, and stemming complexity are now requirements for all devs.

By the same token (couldn’t resist), I would also argue that we should be seeing the quality of average software products notch up by now, given how long LLMs have been available. I’m not seeing it. I’m not sure it’s a function of model quality, either. I suspect devs who didn’t care much about quality haven’t really changed their tune.


How much new software do we really use? And how much can old software become qualitatively better without just becoming new software for a different time, with a much bigger and younger customer base?

I misunderstood two things for a very long time:

a) Standards are not lower or higher; people are happy that they can do stuff at all, or a little to a lot faster, using software. Standards then grow with the people, as does the software.

b) Of course software is always opinionated, there are always constraints, and devs can't get stuck in a recursive loop of optimization. But what's far more important: they don't have to, because of a).

Quality is, often enough, a matter of how much time you spend nitpicking even though you could absolutely get the job done. Software is part of a pipeline, a supply chain, and someone, somewhere, knows why it should be "this" and not something better, or that other version the devs prepared knowing full well it wouldn't see the light of day.


Honestly, in many ways it feels like quality is decreasing.

I'm also not convinced it's a function of model quality. The model isn't going to do something the prompter doesn't even know to ask for. It does what the programmer asked.

I'll give a basic example. Most people suck at writing bash scripts. Bash is also a commonly cited use case for LLMs. Yet they never write functions unless I explicitly ask. Here, try this command:

  curl -fsSL https://claude.ai/install.sh | less
(You don't need to pipe into less, but it helps for reading.) Can you spot a fatal error in the code that could cause major issues when it's run via curl-pipe-bash? Funny enough, I asked Claude and it asked me this:

  Is this script currently in production? If so, I’d strongly recommend adding the function wrapper before anyone uses it via curl-pipe-bash.                
The errors made here are quite common in curl-pipe-bash scripts. I'm pretty certain Claude would write a program with the same mistakes despite being able to tell you about the problems and their trivial corrections.
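
For reference, the fix Claude is pointing at is the standard one for curl-pipe-bash: wrap all of the script's work in a function and only call it on the very last line, so a download that gets cut off mid-stream leaves bash nothing runnable. A minimal sketch, with a purely illustrative body rather than the actual installer's contents:

  #!/usr/bin/env bash
  set -euo pipefail

  main() {
    # Hypothetical install steps; the real script differs.
    local install_dir="${HOME}/.local/bin"
    mkdir -p "${install_dir}"
    # ... download the release, verify it, move the binary into place ...
    echo "installed to ${install_dir}"
  }

  # Because nothing runs until this call, a truncated download dies on an
  # unexpected EOF (or never reaches this line) instead of executing half a script.
  main "$@"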

The problem with vibe coding is you get code that is close. But close only matters in horseshoes and hand grenades. You get a bunch of unknown unknowns. The classic problem of programming still exists: the computer does what you tell it to do, not what you want it to do. LLMs just might also do things you don't tell them to...


You made the choice to change your development workflow to that. You chose to abdicate thinking to the LLM.

If it’s working for you, then great. But don’t pretend like it is some natural law and must be true everywhere.


Remains to be seen for production settings.

My guess is no. I’ve seen people talk about understanding the output of their vibe coding sessions as “nerdy,” implying they’re above that. Refusing to vet AI output is the kiss of death to velocity.


> Refusing to vet AI output is the kiss of death to velocity.

The usual rejoinder I've seen is that AI can just rewrite your whole system when complexity explodes. But I see at least two problems with that.

The first: AI is impressively good at extracting intent from a ball of mud with tons of accidental complexity, and I think we can expect it to continue improving. But when a system has a lot of inherent complexity and it's poorly specified, the task is much harder.

The second is that small, incremental, reversible changes are the most reliable way to evolve a system, and AI doesn't repeal that principle. The more churn, the more bugs — minor and major.


> The usual rejoinder I've seen is that AI can just rewrite your whole system when complexity explodes.

Live, and even offline, data transformation and migration without issues are still difficult problems to solve, even for humans; they require meticulous planning and execution.

A rewrite has to either discard the previous data, transform it, or keep the data layer intact across versions, which means more and more tangled spaghetti accumulating over rewrites.


This is insane to me, and validates my irrational dislike of next.

Definitely irrational. There are lots of logical reasons to dislike Next (like the fact that they pile new shiny bit on top of new shiny bit without caring about the regular user experience) ... but being mad that it can't run on Vite is silly.

It's like being mad that Rails can't run on Python, or that React can't run on jQuery. Next already has its own build system, so of course it doesn't work with another build system.


Isn’t the next.js build system known for being slow/memory hungry?

Luckily, DX is much better now with Turbopack as the bundler. First they improved the dev server; now, with Turbopack builds, production builds are faster as well. Still not fully stable in my opinion, but they'll get there.

It's also wise to use monorepo orchestration with build caching like Turborepo.

They did well on the turbo stuff, no doubt about it.

The main bottleneck with big projects in my experience is Typescript. Looking forward to the Go rewrite. :)


For those stuck in the past: yes, they have replaced it with a Rust-based toolchain, as is so fashionable nowadays.

100% rational. Nuxt/Astro FTW.

Hope that SSR remains first class as time goes on. I think Astro’s DX is superb overall, and am bullish on server-rendered components in MPAs with a sprinkling of hypermedia libs for better UX.

Some features of my SSR-based side project feel like they had to be hacked on, such as a hook that runs only on app start (hacked in via middleware) or having to manually set cache-control headers for auth’d content.

All in all, really happy with it. And it isn’t next.js.


Very apt for the current moment.

Adults push back on aggressors when necessary. Children cower behind the adults.

