Exactly. I'm arguing that what we should be focused on at this relatively early stage is not the amount of output but the rate of innovation.
It's important to note that we're now arguing about the quality of something that was a "ha, ha, interesting" sidenote by Andrej Karpathy 10 years ago [0], and then became "ha, ha, useful for weekend projects" in a tweet of his a year ago. I'm looking forward to reading what he'll be saying in the next few years.
Because at the beginning of a new technology, its advantages benefit only its direct users (the programmers, in this case).
However, after a while, corporations see the benefit and force their employees into an efficiency battle, until the benefit has shifted mostly away from the employees and towards their bosses.
Only after this efficiency battle will the benefits become observable from a macro perspective.
Why is GPT-3 relevant? I can't recall anyone using GPT-3 directly to generate code. The closest would probably be Tabnine's autocompletion, which I think first used GPT-2, but I can't recall any robust generation of full functions (let alone programs) before late 2022 with the original GitHub Copilot.
To be honest, I don't really read insults either in this e-mail or in the thread you linked. If I'm seeing it right, there's only one comment by the guy in that thread. That comment is direct and uses language that may be considered unprofessional ("crap"/"crappy"), but it's not insulting the users (they are not the ones referred to as crappy). Same for the e-mail.
I don’t think the language is unprofessional; it’s direct and states his opinion.
The one demanding it is the maintainer of KeePassXC. It would've been better to just close the issue, saying that this is a Debian-only problem and that he should install it that way.
Mainly, people have issues with clear, precise, and concise language about intent of action instead of, say, a request for discussion.
Now, this is separate from being open to discussion if someone has good arguments (which aren't "you break something which isn't supported and is only niche-used"), and some claim he isn't open to arguments.
And to be honest, if someone exposes users to an actually relevant security risk (1) because the change adds a bit of defense-in-depth (2), and then implicitly denounces them for "wanting crap", that raises a lot of red flags IMHO.
(1): Copy-pasting passwords is a very bad idea; the problem is phishing attacks with "look-alike" domains. Your password manager won't fill them out, but your copy-paste is prone to falling for it. In addition, there are other smaller issues related to clipboard safety and similar (hence why KC clears the clipboard after a short time).
(2): Removing unneeded functionality which could have vulnerabilities. Except we're talking about code from the same source which, if not enabled/set up, does pretty much nothing. (It might still pull in some dependencies, though.)
You're twisting their words. For the second question, they clearly answer yes.
It depends on the threat model you have in mind. If you are a nation state that is hosting data in a US cloud, and you want to protect yourself from the NSA, I would say this is a realistic attack vector.
I haven't twisted their words; they didn't actually answer the question, so I gave my own commentary. For all intents and purposes, practically speaking, this isn't going to affect anyone*. The nation-state threat is atypical even for customers of confidential computing; I'd guess the biggest pool of users are those using Apple Intelligence (which wouldn't be vulnerable to this attack, since Apple uses soldered memory in its servers and a different TEE).
Happy to revisit this in 20 years and see if this attack is found in the wild and is representative. (I note it has been about 20 years since cold boot / evil maid attacks were published, and we still haven't seen or heard of them being used in the wild, though the world has largely moved on to soldered RAM for portable devices.)
* They went to great lengths to provide a logo, a fancy website and domain, etc. to publicise the issue, so they should at least give the correct impression of severity.
They answer the second question quite clearly in my opinion:
> It requires only brief one-time physical access, which is realistic in cloud environments, considering, for instance:
> * Rogue cloud employees;
> * Datacenter technicians or cleaning personnel;
> * Coercive local law enforcement agencies;
> * Supply chain tampering during shipping or manufacturing of the memory modules.
This reads as "yes". (You may disagree, but _their_ answer is "yes.")
> Consider also "Room 641A" [1]: the NSA has asked big companies to install special hardware on their premises for wiretapping. This work is at least proof that a similar request could be made to intercept confidential compute environments.
This reads as "yes". (You may disagree, but _their_ answer is "yes.")
Ah yes, so I bet all these companies that are or were going to use confidential cloud compute aren't going to now, or will kick up a fuss with their cloud vendor. I'm sure all these cloud companies are going to send vulnerability disclosures to all confidential cloud compute customers, warning that their data could potentially be compromised by this attack.
There is clearly a market for this and it is relevant to those customers. The host has physical access to the hardware and therefore can perform this kind of attack. Whether they have actually done so is irrelevant. I think the point of paying for confidential computing is knowing they cannot. Why do you consider physical access not a realistic attack vector?
> Why do you consider physical access not a realistic attack vector?
First, we should be careful about what I said: I never said physical access is unrealistic, and I certainly didn't say this attack is not viable*. What I am saying is that this is not a concern for anything beyond a negligible fraction of the population. They will never be affected, as we have seen with cold boot and all the other infeasible, fear-mongering attacks. But sure, add it to your vulnerability scanner or whatever when you detect SGX/etc.
But why should this not be a concern for an end user whose data may pass through confidential cloud compute, or for a direct customer? It comes down to a few factors: scale, insider threats and/or collusion, or cloud providers straight-up selling backdoored products.
Let's go in reverse. Selling backdoored products is an instant way to lose goodwill, reputation, and your customer base, with little to no upside even if you succeed in the long term. I don't see Amazon, Oracle, or whoever stooping that low. A company with no or low reputation will not even make a shortlist for CCC (confidential cloud compute).
Next are insider threats. Large cloud providers have physical security locked down pretty tight. Very few people in the organisation know where the actual datacentres are. Cull that list by 50% for those who can gain physical access. Now you need justification for why you need access to the specific physical machine you want to target (does the system have failed hardware or bad RAM?) **. And so on and so forth. Then there is physical monitoring that would capture a recording of you performing the act, and the huge deterrent of losing your cushy job and being sentenced to prison.
Next, collusion: consider a state actor/intelligence community compelling a cloud provider to do this (though it could be anyone, such as an online criminal group or a next-door neighbour). This is too much hassle and headache; they would try to get more straightforward access instead. The UK, for example, after exhausting all other ways of getting access to a target's data, could serve a TCN (Technical Capability Notice) to a cloud provider to deploy these interposers for a target, but they would still need to get root access to the system. The reality is this would be put in the too-hard basket; they would probably find easier and more reliable ways to get the data they seek (which is more specific than random page accesses).
Finally, I think the most important issue here is scale. There are a few things I think about when I think of scale: first is the populace that should generally be worried (which, as I stated earlier, is negligible). There are the customers of CCC. Then there are the end users that actually use CCC. There's also the number of interposers that can be deployed surreptitiously. At the moment, very few services use CCC; the biggest are Apple PCC and WhatsApp Private Processing for AI. Apple is not vulnerable for a few reasons. Meta does use SEV-SNP, and I'm sure they'd find this attack intriguing as a technical curiosity, but it won't change anything they do, as they're likely to have tight physical controls and to separate those from the personnel that have root access to the machines. But outside of these few applications, which are unlikely to be targeted, there's only nascent use of CCC, so there's negligible chance the general public will even be exposed to the possibility of this attack.
I've ignored the supply-chain attack scenario, for reasons that will be clear as you read what follows.
A few glaring issues with this attack:
1. You need root on the system. I have a cursory understanding of the threat model here: the OS/hypervisor is considered hostile to SGX. But if you're trying to get access to data and you already control the OS/hypervisor, why not just subvert the system at that level rather than go through this trouble?
2. You need precise control of memory allocation to alias memory. Again, this goes back to my previous point: why would you go to all this trouble when you have front-door access?
(Note I eventually did read the paper, but my commentary based on the website itself was still a good indicator that this affects virtually no one.)
* The paper talks about the feasibility of the attack when it actually means how viable it is.
** You can't simply reap the rewards of targeting a random machine; you need root access for this to work. Also, the datacentre technicians at these cloud companies usually don't know a priori which customer maps to which physical server.
It's a bit more fundamental, in my opinion. Cryptographic techniques are supported by strong mathematics, while I believe hardware-based techniques will always be vulnerable to a sufficiently advanced hardware-based attack. In theory, there exists an unbreakable version of OpenSSL ("under standard cryptographic assumptions"), but it is not evident that there is even a way to implement the kind of guarantees confidential computing is trying to offer using hardware-based protection only.
A proof of existence does exist. One Xbox variant has now remained unbroken (not jailbroken) for more than 10 years. And not for lack of trying.
Credit/debit cards with chips (EMV) are another proof of existence that hardware-based protection can exist.
> It is not evident that there even is a way to implement the kind of guarantees confidential computing is trying to offer using hardware-based protection only.
Not in the absolute sense, but in the sense of the more than $10M required to break it (atomic force microscopes to extract keys from CPU gates, ...), and that only breaks a single specific device, not the whole class.
As soon as a bad actor has a single key, the entire class is broken, since the bad actor can impersonate that device, creating a whole cloud of them if they want.
You must be joking. When I try to log in on Outlook I get redirected to 'microsoftonline.com' (suspicious), when I log in on Wikipedia it sends me to something called 'wikimedia.org' (typo squatter?). How the hell am I supposed to know whether npmjs.help or rustfoundation.dev are _not_ the official domains of those projects?
You must be joking, are you still not using a password manager at all?
When you create the username+password combo, you either do it yourself and then put the domain in the password manager, or you use whatever the password manager infers at the registration page; that's basically it for most sites. Then 1% of websites insist on using signin.example.com for login and signup.example.com for signup, so you add both domains to your password manager, or just example.com.
Now whenever you log in, you either see a list of accounts (meaning you're on the right domain) or you don't (meaning the domain isn't correct). And before people whine that "autofill doesn't always work": it doesn't matter. The list should (also) show up in the extension popup, so even if autofill doesn't work for that website, you'd still be protected, since the list of accounts is empty for wrong domains.
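A minimal sketch of that domain-keyed lookup, in Python (the vault layout and function names here are hypothetical, not any real password manager's implementation):

    from urllib.parse import urlparse

    # Hypothetical vault: credentials keyed by the exact domain they were saved for.
    vault = {
        "example.com": [("alice", "hunter2")],
        "signin.example.com": [("alice", "hunter2")],
    }

    def accounts_for(url: str):
        """Return saved accounts only for a domain that matches the vault exactly."""
        host = urlparse(url).hostname or ""
        return vault.get(host, [])

    print(accounts_for("https://example.com/login"))  # account shows up: right domain
    print(accounts_for("https://examp1e.com/login"))  # []: the empty list is the red flag

(Real managers match on the registrable domain and handle subdomains more cleverly; the point is just that a look-alike domain produces an empty list.)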
It's really easy, and migrating to a password manager only sucks for the first couple of days; every day after that you'll be happy you finally did it.
When notpushkin said "the spec is still at XSLT 1.0", I think "the spec" is referring to the WHATWG HTML Living Standard spec, which only refers to XSLT 1.0. (It wouldn't make sense to say "the XSLT spec is at XSLT 1.0".)
Do you think an AI could come up with novel answers that a human wouldn't be able to come up with? I think humans could not only come up with answers to these questions; some people would be able to greatly outperform AIs by using knowledge that is not widely known.
These models will also have access to what’s not widely known. Imagine running it on everyone’s private email for instance. At the very least, it can currently scale and augment human evil (just like it does with coding). The future will just make that division even wider.
To be frank, if you die, isn't it much more likely your friends and family will just stop using your homelab setup? They'll switch back from Jellyfin to Netflix, replace the smart light bulbs with regular ones, etc.
To give a concrete example, matrix multiplication is not commutative in general (AB ≠ BA), but e.g. multiplication with the identity matrix is (AI = IA). So AIB = ABI ≠ BAI.
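A quick numerical check of the same point (illustrative only, using numpy):

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[0, 1], [1, 0]])
    I = np.eye(2, dtype=int)  # identity matrix

    print(np.array_equal(A @ B, B @ A))          # False: AB != BA in general
    print(np.array_equal(A @ I, I @ A))          # True: I commutes with everything
    print(np.array_equal(A @ I @ B, A @ B @ I))  # True: AIB == ABI
    print(np.array_equal(A @ I @ B, B @ A @ I))  # False: AIB != BAI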
Or applied to the programming example, the statements: