I like how the FAQ doesn't actually answer the questions (feels like AI slop, but giving the benefit of the doubt), so I will answer on their behalf, without even reading the paper:
Am I impacted by this vulnerability?
For all intents and purposes, no.
Battering RAM needs physical access; is this a realistic attack vector?
For all intents and purposes, no.
You're twisting their words. For the second question, they clearly answer yes.
It depends on the threat model you have in mind. If you are a nation state that is hosting data in a US cloud, and you want to protect yourself from the NSA, I would say this is a realistic attack vector.
I haven't twisted their words; they didn't actually answer the question, so I gave my own commentary. For all intents and purposes, as in practically speaking, this isn't going to affect anyone*. The nation-state threat is atypical even for customers of confidential computing; I'd guess the biggest pool of users is those that use Apple Intelligence (which wouldn't be vulnerable to this attack, since Apple uses soldered memory in its servers and a different TEE).
Happy to revisit this in 20 years and see if this attack is found in the wild and is representative. (I notice it has been about 20 years since cold boot / evil maid was published, and we still haven't seen or heard of it being used in the wild, though the world has largely moved on to soldered RAM for portable devices.)
* They went to great lengths to provide a logo, a fancy website and domain, etc. to publicise the issue, so they should at least give the correct impression of its severity.
They answer the second question quite clearly in my opinion:
It requires only brief one-time physical access, which is realistic in cloud environments, considering, for instance:
* Rogue cloud employees;
* Datacenter technicians or cleaning personnel;
* Coercive local law enforcement agencies;
* Supply chain tampering during shipping or manufacturing of the memory modules.
This reads as "yes". (You may disagree, but _their_ answer is "yes.")
Consider also "Room 641A" [1]: the NSA has asked big companies to install special hardware on their premises for wiretapping. This work is at least proof that a similar request could be made to intercept confidential compute environments.
Ah yes, so I bet all these companies that are or were going to use confidential cloud compute aren't going to now, or are going to kick up a fuss with their cloud vendor. I'm sure all these cloud companies are going to send vulnerability disclosures to all their confidential cloud compute customers, telling them their data could potentially be compromised by this attack.
There is clearly a market for this and it is relevant to those customers. The host has physical access to the hardware and therefore can perform this kind of attack. Whether they have actually done so is irrelevant. I think the point of paying for confidential computing is knowing they cannot. Why do you consider physical access not a realistic attack vector?
Why do you consider physical access not a realistic attack vector?
First, we should be careful about what I said: I never said physical access is unrealistic, and I certainly didn't say this attack is not viable*. What I am saying is that this is not a concern for more than a negligible fraction of the population. They will never be affected, as we have seen with Cold Boot and all the other infeasible, fear-mongering attacks. But sure, add it to your vulnerability scanner or whatever when you detect SGX/etc.
But why should this not be a concern for an end user whose data may pass through cloud compute, or for a direct customer? It comes down to a few factors: scale, insider threats and/or collusion, or cloud providers straight up selling backdoored products.
Let's go in reverse. Selling backdoored products is an instant way to lose goodwill, reputation, and your customer base, with little to no upside even if you succeed in the long term. I don't see Amazon, Oracle, or whoever stooping this low. A company with no or low reputation will not even make the shortlist for CCC (confidential cloud compute).
Next is insider threats. Large cloud providers have physical security locked down pretty tight. Very few people in an organisation know where the actual datacentres are. Cull that list by 50% for those that can gain physical access. Now you need a justification for why you need access to the specific physical machine you want to target (does the system have failed hardware or bad RAM?)**. And so on and so forth. Then there is physical monitoring that would capture a recording of you performing the act, and the huge deterrent of losing your cushy job and being sentenced to prison.
Next, collusion: consider a state actor or intelligence community compelling a cloud provider to do this (though it could be anyone, such as an online criminal group or a next-door neighbour). This is too much hassle and headache; they would try to get more straightforward access instead. The UK, for example, after exhausting all other ways of getting access to a target's data, could serve a TCN (technical capability notice) on a cloud provider to deploy these interposers against a target, but they would still need to get root access to the system. The reality is this would be put in the too-hard basket; they would probably find easier and more reliable ways to get the data they seek (which is more specific than random page accesses).
Finally, I think the most important issue here is scale. There are a few things I think about when I think of scale: first is the population that should generally be worried (which, as I stated earlier, is negligible). Then there are the customers of CCC, and the end users whose data actually goes through CCC. There's also the question of how many interposers can be deployed surreptitiously. At the moment very few services use CCC, the biggest being Apple PCC and WhatsApp Private Processing for AI. Apple is not vulnerable for a few reasons. Meta does use SEV-SNP, and I'm sure they'd find this attack intriguing as a technical curiosity, but it won't change anything they do, as they're likely to have tight physical controls and to keep that separate from the personnel that have root access to the machines. But outside of these few applications, which are unlikely to be targeted, use of CCC is nascent, so there's negligible chance the general public will even be exposed to the possibility of this attack.
I've ignored the supply-chain attack scenario, for reasons that will become clear as you read what follows.
A few glaring issues with this attack:
1. You need root on the system. I have only a cursory understanding of the threat model here, in that the OS/hypervisor is considered hostile to SGX, but if you're trying to get access to data and you already control the OS/hypervisor, why not just subvert the system at that level rather than go through this trouble?
2. You need precise control of memory allocation to alias memory. Again, this goes back to my previous point: why would you go to all this trouble when you have front-door access? (A rough sketch of why that control already implies privileged access follows below.)
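To put a finer point on 2 (this is just my illustration, not anything from the Battering RAM paper): on Linux, even learning where a buffer lands in physical memory, never mind steering a victim's allocations into an aliased range, goes through an interface like /proc/self/pagemap, and modern kernels hide the page frame number from unprivileged readers. A minimal sketch, assuming Linux and Python:

```python
# Minimal sketch: resolve a virtual address to a physical one via /proc/self/pagemap.
# Modern Linux zeroes the page frame number (PFN) for readers without CAP_SYS_ADMIN,
# so unprivileged code can't even see where its own pages sit in physical memory.
import ctypes
import mmap
import os
import struct

PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")

def virt_to_phys(vaddr: int):
    """Return the physical address backing vaddr, or None if it cannot be resolved."""
    with open("/proc/self/pagemap", "rb") as pagemap:
        pagemap.seek((vaddr // PAGE_SIZE) * 8)   # one 64-bit entry per virtual page
        (entry,) = struct.unpack("<Q", pagemap.read(8))
    if not (entry >> 63) & 1:                    # bit 63: page present in RAM
        return None
    pfn = entry & ((1 << 55) - 1)                # bits 0-54: page frame number
    if pfn == 0:                                 # zeroed unless we have CAP_SYS_ADMIN
        return None
    return pfn * PAGE_SIZE + (vaddr % PAGE_SIZE)

# Allocate a page and touch it so it is actually backed by physical memory.
buf = mmap.mmap(-1, PAGE_SIZE)
buf[0] = 1
vaddr = ctypes.addressof(ctypes.c_char.from_buffer(buf))

phys = virt_to_phys(vaddr)
if phys is None:
    print(f"virt 0x{vaddr:x} -> PFN hidden; run as root to see the physical address")
else:
    print(f"virt 0x{vaddr:x} -> phys 0x{phys:x}")
```

Run it unprivileged and you get "PFN hidden"; run it as root and you get the real physical address. Which is the point: the "control where the data sits physically" step already assumes the kind of access that gives you easier routes to the data.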
(Note I eventually did read the paper, but my commentary based on the website itself was still a good indicator that this affects virtually no one.)
* The paper talks about the feasibility of the attack when what they actually mean is how viable it is.
** You can't simply reap the rewards of targeting a random machine; you need root access for this to work. Also, the datacentre technicians at these cloud companies usually don't know a priori which customer maps to which physical server.