curl | sh is more prevalent on Linux, where you can expect a stable ABI from the kernel and sometimes from GNU libc. No such thing in BSD land: packages are always built against a specific release, and binary compatibility isn't maintained across releases.
They stole axios's npm keys and uploaded malicious artifacts. They did not take over the axios repo. The issue is with packaging and distribution, not with the code.
Because of the way npm works, as soon as a developer key is stolen, a lot of people get pwned. The key is the only barrier.
Compare that with the average distro. You would have to compromise the developer's infrastructure (repo or website), publish a new version without them noticing, and then convince the maintainer that it's OK to merge the new package script into the distro repo. Hard to pull off for high-profile projects.
Ruby gems and CPAN have build scripts that rebuild things on the user's machine (and warn you if they can't find a dependency). But I believe it was one of Python's tools that started the trend of downloading binaries instead of building them. Or was it npm?
IMO, the most sustainable model is the Linux distro / BSD ports / Homebrew one. You don't push new libraries to the public registry; instead, you write a packaging script that gets reviewed for every change.
Another model is Perl's CPAN, where you publish source files only.
Trust me, as someone who has contributed to such a package set, almost nobody is inspecting diffs between upstream versions when updating a package. Only the package definitions themselves are reviewed, but they are typically only version + hash bumps.
Reviewing upstream diffs for every package requires a lot of man-hours, and most packagers are volunteers. I guess LLMs might help catch some obvious cases.
Not really talking about upstream. Most supply-chain attacks I’ve heard about involve stolen secrets and malicious artifact uploads; they’re not about repositories or websites being taken over. And since the packaging scripts live in repos, it’s easy to detect when someone tries to change where upstream points.
I would love to see an AI try to make sense of GTK API.
I may be wrong, but it seems that when people talk about easy glue code, they mean web service APIs, not OS APIs, not graphics or sound APIs, not file-format libraries, …
I used Sonnet 3.5 over a year ago to decipher a notoriously shitty local government API to get data out of meetings, votes, and discussions.
I know it's a piece of shit API done in the worst possible way on purpose (they don't want openness, but had to fulfill a law that mandates "openness") because I had previously tried to do it manually - twice. I ran out of whisky before I got anything done.
Sonnet _3.5_ almost one-shotted it with just the API "documentation" they had and access to Python and curl.
People have also hooked stuff into proprietary APIs on "smart" devices with zero documentation, just by having an agent tirelessly run through thousands of permutations to figure it out.
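The brute-force idea is simple enough to sketch. Everything below is invented for illustration: `fake_endpoint` stands in for a real undocumented device API, and the parameter names are made up.

```python
from itertools import product

def fake_endpoint(params: dict) -> int:
    """Stand-in for an undocumented device API: pretend it only
    answers 200 to one magic combination of parameters."""
    ok = {"mode": "status", "fmt": "json", "v": 2}
    return 200 if params == ok else 404

def discover(candidates: dict):
    """Try every permutation of candidate parameter values until
    the endpoint accepts one; return the winning combination."""
    keys = list(candidates)
    for combo in product(*candidates.values()):
        params = dict(zip(keys, combo))
        if fake_endpoint(params) == 200:
            return params
    return None

found = discover({"mode": ["info", "status"],
                  "fmt": ["xml", "json"],
                  "v": [1, 2]})
```

An agent does the same thing, just with a real HTTP client and a much larger (and smarter) search space.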
Search engines only show a snippet of the content, and that always looks convincing. It's the whole content that is off, and, unfortunately, a few seconds or minutes can pass before you realize it (if you ever do).
Search engines track that. It's what a "long click" means. If you click a result, then return fairly fast and keep searching or clicking other links, they infer low quality (for that query at least).
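The heuristic can be sketched in a few lines. This is a hypothetical illustration of the idea, not how any search engine actually implements it; the thresholds are made up.

```python
def classify_click(dwell_seconds: float) -> str:
    """Roughly label a result click by how long the user stayed
    on the page before returning to the results. Thresholds are
    illustrative, not real ranking parameters."""
    if dwell_seconds < 10:
        return "short"   # quick bounce: inferred low quality for this query
    if dwell_seconds < 60:
        return "medium"
    return "long"        # "long click": the user probably found what they needed
```

Note that a concise page that answers the query in five seconds gets the same "short" label as a useless one, which is exactly the kind of flawed proxy being discussed here.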
Well, and Google's proxy read of "quality" might rest on flawed assumptions. A concise page where you get what you need and leave quickly might read as a "high bounce rate".
Have you ever wondered how people do it without it being tedious for them?
For things that have a visual element, like UI and UX, you can start with sketches (analog or digital), eliminate the bad ideas, and refine the good ones with higher-quality renderings. Then choose one concept and implement it. By that time, the code is trivial. What I found with LLM usage is that people settle on the first one, declare it good enough, and don't explore further (because that is tedious for them).
The other problems mostly fall into three categories (mathematical, logical, or data/information/communication). For the first, you have to find the formula, prove it correct, and translate it faithfully into code. But we rarely have that kind of problem today unless you’re in a research lab or dealing with floating-point issues.
The second type is more common: you enact rules based on axioms originating from the systems you depend on, which leads to the creation of constraints and invariants. Again, I’m not seeing LLMs help there, as they lack the internal consistency this type of activity requires. (Learning Prolog helps in solving that kind of problem.)
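To make "constraints and invariants" concrete, here is a tiny hypothetical example: the domain (a seat-reservation rule) is invented, but it shows how an axiom of an upstream system becomes an invariant enforced in code.

```python
from dataclasses import dataclass

@dataclass
class Reservation:
    """Hypothetical domain object: the upstream booking system's
    axiom is "a venue can never be overbooked", which becomes an
    invariant we enforce on every state change."""
    seats_taken: int
    capacity: int

    def reserve(self, n: int) -> None:
        if n <= 0:
            raise ValueError("must reserve at least one seat")
        # Invariant derived from the upstream axiom: never exceed capacity.
        if self.seats_taken + n > self.capacity:
            raise ValueError("capacity exceeded")
        self.seats_taken += n
```

The hard part isn't writing this code; it's discovering which axioms hold and keeping every code path consistent with them, which is where internal consistency matters.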
The third type is about modeling real-world elements as data structures and designing how they transform over time and interact with each other. To do it well, you need deep domain knowledge of the problem. If an LLM can help you there, it means one of two things: a) your knowledge is lacking and you ought to talk to the people you’re building the system for; or b) the problem is already solved and you’d do well to learn from the solution. (Basically what the DDD books are all about.)
Most problems are combinations of subproblems from those three categories (recursively). But from my (admittedly limited) interactions with pro-LLM users, they don’t want to solve a problem; they want it solved for them. So it’s not about avoiding tedium, it’s sidestepping the whole thing.
I've been doing this for a couple of decades. I don't wonder how people did it before AI; I did it for years and years before any of this existed...
> What I found with LLM usage is that people will settle on the first one, declaring it good enough, and not exploring further (because that is tedious for them).
I don't relate to this at all. It's so much easier (and less tedious) to experiment and iterate now. I see people doing a lot more of it, not less.
AI tools are also excellent aids for all the other types of problems you listed. You're doing theorycraft, and I might even agree with you if I just sat down and theorycrafted how I thought this would work for each type of problem, as you're doing here. (Indeed, you can probably find HN comments I made in 2022 and 2023 saying very similar things!)
But in practice, I find all your theories here about why AI tools are not useful in this or that case to be totally wrong.
My one question for you: what’s your level of editor fluency? Because I would really like to know if there’s a correlation between claiming these kinds of time savings and not using advanced features in your editor.
My time is spent more on editing code than on writing new lines. Because code is so repetitive, I mostly copy-paste, use the completion and snippet engines, and reorganize code. If I need a new module, I just copy the most similar one, strip everything out, and add the new parts. That means I only write 20 lines of that 200-line diff.
Also, my editor (Emacs) is my hub: where I launch builds and tests, where I commit code, where I track todos and jot notes. Everything is accessible with a short sequence of keys. Once you have a setup like this, it’s flow state for every task. Using LLM tools is painful by comparison, like being in a cubicle reading reports when you could be mentally skiing on code.
My 2023 to early 2025 usage of AI was as "slight improvement to my existing editing and autocomplete capabilities". That was great and I loved it. But sometime over the last 12 months it has switched to "mostly using the editor pane to read rather than edit".
Honestly I experience this as a great loss. All these hours over all these years perfecting the vim editing movements! And now I only spend like 10% of my time directly editing things anymore.
I feel like it would be fun (and also sad and nostalgic) to see a time lapse of the relative size of, and time spent focused on, my editor pane, terminal pane, and AI tool pane. It has changed massively, especially in the last year.
I remember Laravel with Socialite [0]. Laravel is what I usually reach for for a web SaaS MVP. You only need a VPS and a managed database to test out the market, and you can scale a lot without increasing expenses much.
Lol, I developed for entrepreneurs who mostly wanted a working proof of concept of their ideas. I guess now you can vibecode them with SaaS for core technical needs.
> However, the best engineers I know are usually among the quickest to open an editor or debugger and use it fluently to try something out
The Pragmatic Programmer book has whole chapters about this. Ultimately, you either solve the problem analogically (whiteboard, deep thinking on a sofa), or you get fast at trying out stuff AND keeping the good bits.