The singularity doesn't mean that we get an AGI Day with a big announcement from The People In Charge that intelligence has been solved once and for all. It simply means that the frequency of "this changes everything" and "rumored model X at lab Y passes benchmark Z at 110% in less-than-zero-shot" style posts increases monotonically forever.
Or it will just come to be seen as a category error. We don't often go around talking about "artificial general strength" or "artificial super-strength", even though it's easy to build a claw arm which can exert more force in any direction than any human can.
Common failure mode in environments that promote being the best over getting things done: if you went from school to college to industry always identifying with being better than everyone else, because that's what it takes to get in the door, sometimes you miss the transition that happens once you're already in the door. A lot of people don't realize they were supposed to put their guard down until it's too late. It's too bad, but this is what the cutthroat labor pipeline rewards.
>In a way this is virtuous, because I think startups are a good thing. But really what motivates us is the completely amoral desire that would motivate any hacker who looked at some complex device and realized that with a tiny tweak he could make it run more efficiently. In this case, the device is the world's economy, which fortunately happens to be open source.
After a while you learn to ignore criticism. I'm not really interested in what people who would never become users anyway have to say. They're simply not the demographic. It's all noise, and when I was younger and more impressionable it caused serious self-doubt. But when I demo something and I see the eyes light up, and then they say "well, what about this?", that's pure gold.
NPM is designed to let you run untrusted code on your machine. It will never work. There is no game to step up. It's like asking an ostrich to start flying.
It’s far from a complete solution, but to mitigate this specific avenue of supply chain compromise, couldn’t GitHub/npm issue single-purpose physical hardware tokens and allow (or, for the most popular projects, even mandate) that maintainers use these hardware tokens as a form of 2FA?
The attacker installed a RAT on the contributor’s machine, so if they had configured TOTP or saved the recovery codes anywhere on that machine, the attacker could defeat 2FA.
Oh, yes, I missed that the TOTP machine was compromised. Would that then imply that it would have been okay if codes came from a separate device, e.g. a TOTP app on a Palm OS device with zero network connectivity? (Or maybe these days the easiest airgapped option is an old Android phone that stays in airplane mode...)
I mean, I guess attestation might have some value, but it feels like moving the goalposts. Under the threat model of a remote attacker who can compromise a normal networked computer, I can't think of an attack that would succeed with a programmable TOTP code generator that would fail if that code generator was not reprogrammable. Can you?
> It would not be an advantage for your front door lock to be infinitely reprogrammable. It’s just a liability.
Er, most door locks are infinitely reprogrammable, because being able to rekey them without having to replace the whole unit is a huge advantage and the liability/disadvantage is minimal (falling under "It rather involved being on the other side of this airtight hatchway" in an unusually almost-literal sense where you have to be inside the house in order to rekey the lock, at which point you could also do anything else).
Sorry, attestation is the goalpost. The community wants certainty that the package was published by a human with authority, and not just by someone who had access to an authority’s private keys. That is what distinguishes attestation from authentication or authorization.
Yes, unfortunately authenticator apps just generate TOTP codes based on a binary key sitting in plain sight without any encryption. Not that it would help if the encrypting/decrypting machine is pwned.
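For the curious: a TOTP code is nothing more than an HMAC over the current 30-second window, keyed by that plaintext secret. A minimal sketch of the RFC 6238 algorithm in Python (illustrative, not any particular authenticator app's code):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time: int = None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP. The entire 'second factor' is `secret`: anyone who
    can read it off disk can compute identical codes."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step                       # current 30-second window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

This is why a RAT on the TOTP machine defeats the factor entirely: the malware just reads the key and runs the same few lines.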
All maintainers need to do is sign their code. This is a solved problem, but the NPM team has been actively rejecting optional signing support for over a decade now. Even so, maintainers could sign their commits anyway, but most are too lazy to spend the few minutes it takes to prevent themselves from being impersonated.
If the solution is 'maintainers just need to do xyz', then it's not a solution, sorry. It's not scalable, and which projects become 'successful', and which maintainers accidentally become critical parts of worldwide codebases, is almost pure chance. You will never be able to get all the maintainers you need to 'just' do xyz, just like you will never be able to get humans to 'just' stop making mistakes. So you had better start looking for a solution that doesn't rely on humans not making mistakes.
It scales just fine for the thousands of maintainers of thousands of packages in every major Linux distribution that powers the internet. You just have to automate enforcement so people do not have a choice.
Are you really saying there is just something fundamental about javascript developers that makes them unable to run the same basic shell commands as Linux distribution maintainers?
No, it really doesn't scale that well. 'Thousands' of packages is laughable compared to the scale of npm. And even at the 'thousands' scale distros are often laughably out of date because they're so slow to update their packages.
You are of course right that a signed package ecosystem would be great, it's just that you're asking people to do this labour for you for free. If you pay some third party to verify and sign packages for you? That's totally fine. Asking maintainers already under tremendous pressure to do yet another labour-intensive security task so you can benefit for free? That's out of balance.
Are they incapable of doing it? Probably not. Does it take real labour and effort to do it? Absolutely.
My 7 teammates and I on stagex actually maintain exactly this zero-trust signing and release process I am suggesting, for several hundred packages and counting. I'm not asking anyone to do hundreds like my team and I do, but if authors could at least do the bare minimum for the code they directly author, that would eliminate the last gaping hole in the supply chain.
With what keys, and how do you propose establishing trust in those keys?
(As we’ve seen from every GPG topology outside of the kinds of small trusted rings used by Linux distros and similar, there’s no obvious, trustworthy, scalable way to do decentralized key distribution.)
If the keys that signed the early commits of a trusted FOSS project suddenly change without being signed by the previous keys, that should merit a higher level of consensus at release time, or waiting periods, etc.
Identity continuity, at a minimum, is of immense defensive value, even though we will not know whether the author is human or trusted by any humans.
That said, any keys that become attached to highly-depended-on projects would earn a lot of trust that they belong to a human by getting a couple of the 5k+ people worldwide with active, well-trusted PGP keys to sign them, via conferences or otherwise, as it has always been done.
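The continuity rule I'm describing could be as simple as: flag any release whose signing-key fingerprint changed without a sign-off from the previous key. A toy sketch in Python (the `Release`/`endorsed_by` shape here is hypothetical, not any registry's actual schema):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Release:
    version: str
    key_fpr: str                       # fingerprint of the key that signed this release
    endorsed_by: Optional[str] = None  # previous key's fingerprint, if it signed off on a rotation

def continuity_ok(history: List[Release]) -> bool:
    """Return False on any key change that the previous key did not endorse."""
    for prev, cur in zip(history, history[1:]):
        if cur.key_fpr != prev.key_fpr and cur.endorsed_by != prev.key_fpr:
            return False  # unexplained rotation: trigger extra consensus or a waiting period
    return True
```

A registry could then gate the failing case behind the waiting periods or higher release-time consensus mentioned earlier, rather than rejecting outright.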
> If the keys that signed the early commits of a trusted FOSS project suddenly change without being signed by the previous keys, that should merit a higher level of consensus at release time, or waiting periods, etc.
Two immediate problems: (1) package distribution has nothing to do with git (you don’t need to use any source control to publish a package on most indices, and that probably isn’t going to change), and (2) this doesn’t easily account for expiry, revocation, or the more basal reality that most people just aren’t good at key management. I think a workable design can’t make these assumptions.
> That said, any keys that become attached to highly-depended-on projects would earn a lot of trust that they belong to a human by getting a couple of the 5k+ people worldwide with active, well-trusted PGP keys to sign them, via conferences or otherwise, as it has always been done.
This doesn’t scale to graphs of hundreds of thousands of maintainers, like PyPI has. I’m also not convinced it’s ever really worked at smaller scales either, except in the less useful “nerd cred” sense.
I think it is accurate. Where are the autonomous AIs who beat their creators to the punch? When we write "Hello, World!" in C and compile it with `gcc`, do we give credit to every contributor to GNU? AI is a tool that, thus far, only humans are capable of using with unique inspiration. Will this change in the future? Certainly. But is it the case now? I think my questions imply some reasonable objections.
But you pay only $200/month for the productivity of what used to cost the monthly salary of 10 software engineers. Doesn't this democratize software construction?
The resources to learn how to construct software are already free. However, learning requires effort, which made learning to build software an opportunity to climb the ladder and build a better life through skill.
This is democratization.
Now the skill needed to build software is starting to approach zero. However, as you say, you can throw money at an AI corporation to get some amount of software built. So the differentiator is capital, which can buy software rather cheaply. The dependency on skill is lessened greatly and software is becoming worthless, so another avenue to escape poverty through skill closes.
You might be right, but that take is commonly seen as short-sighted, since there is zero evidence it's ever happened before. The common example is the loom. The loom didn't democratize cloth making; it put all the weavers out of business. But it did democratize everything that needed cloth, because cloth became cheap. Thousands more jobs were created by making cloth cheap. In a similar way, thousands of different jobs might be created by making software creation cheap, but just like cloth makers, software makers have no purpose when making software is easy.
I think the combinatorial space is just too much. When I did web dev it was mostly transforming HTML/JSON from well-defined type A to well-defined type B. Everything is in text. There's nothing to reason about besides what is in the prompt itself. But constructing and maintaining a mental model of a chip and all of its instructions and all of the empirical data from profiling is just too much for SOTA to handle reliably.