Not a stupid question. CDIs are groovy for minting secrets that are bound to the exact firmware that's running, but are a bit less ergonomic out of the box when it comes to keeping long-lived secrets around across a firmware update. Firmware changes --> CDI changes --> anything derived from or sealed to the CDI is gone, by design.
A more ergonomic approach for sealing long-lived data is to use something like a hash chain [0], where the chain starts with the equivalent of a DICE UDS, and the chain's length is (MAX_VERSION - fw.version). The end of that chain is given to firmware, and the firmware can lengthen the chain to derive older firmware's secrets, but cannot shorten it to derive newer firmware's secrets.
This presumes that the firmware is signed, of course, since otherwise there'd be no way to securely associate the firmware with a version number. If the public key is not baked into the HSM, then the hash of the public key should be used to permute the root of the hash chain.
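A minimal sketch of that versioned hash chain, assuming SHA-256 as the chain function. MAX_VERSION, the UDS bytes, and the public-key hash here are all made-up placeholders, not real device values:

```python
import hashlib

MAX_VERSION = 64  # hypothetical cap on firmware version numbers

def chain_secret(root: bytes, version: int) -> bytes:
    """Hash forward (MAX_VERSION - version) times from the root.

    Higher versions hash fewer times, so firmware at version N can
    re-derive any version < N's secret by hashing further, but cannot
    "un-hash" to reach a newer version's secret.
    """
    secret = root
    for _ in range(MAX_VERSION - version):
        secret = hashlib.sha256(secret).digest()
    return secret

uds = b"device-unique-secret"  # stand-in for the DICE UDS
pubkey_hash = hashlib.sha256(b"fw-signing-pubkey").digest()
# Permute the chain root with the hash of the firmware-signing public key:
root = hashlib.sha256(uds + pubkey_hash).digest()

v5 = chain_secret(root, 5)
# Firmware at version 5 recovers version 3's secret by hashing twice more:
v3_from_v5 = hashlib.sha256(hashlib.sha256(v5).digest()).digest()
assert v3_from_v5 == chain_secret(root, 3)
```

The one-wayness of the hash is what enforces the asymmetry: lengthening the chain is a forward hash, shortening it would require inverting one.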
> It is conceivable that contributors, unlike owners and maintainers, could be anonymous, but only if their code has passed multiple reviews by trusted parties. It is also conceivable that we could have “verified” identities, in which a trusted entity knows the real identity, but for privacy reasons the public does not. This would enable decisions about independence as well as prosecution for illegal behavior.
For example, I don't want anyone to know my real name. I'm not up to any mischief (criminal or otherwise), I just want the separation of identities. There isn't a single entity on Earth that I'd feel safe entrusting with this knowledge if I could avoid it.
It sounds like, unless someone is an owner or maintainer of a critical open-source project, the blog post isn't necessarily calling for that person's deanonymization. For projects that are both critical and owned/maintained by anonymous entities, I think it's reasonable for an organization to think twice before taking a dependency on such projects, given the sort of anonymous attacks mentioned in the article.
Disclaimer: opinions are my own, not my employer's (Google)
> I think it's reasonable for an organization to think twice before taking a dependency on such projects, given the sort of anonymous attacks mentioned in the article.
I'd argue that "thinking twice" should be the standard bar for all open source dependencies, not a standard applied only to anonymous or pseudonymous developers.
(Though, to be fair, I doubt Google would ever use any of my code. I know your cryptographers; they don't need me to contribute lol.)
Many of us on the Asylo team share your reservations about DRM. However, the capability to run software in a not-entirely-trustworthy environment leads to many positive possibilities. For instance, you could imagine a world in which customers didn’t have to trust their cloud vendor or worry about their data falling into unauthorized hands. Or you could implement chat applications which can prove to you that your communications really are being encrypted end-to-end.
In our view, trusted computing has applications well beyond DRM.
Asylo is not tied to EPID; the framework aims to abstract away any behavior unique to a specific TEE implementation and provide a common backend interface that developers can code against. The goal is to let developers easily migrate their apps between backends with few or no source-code changes.
Specifically for attestation purposes, Asylo defines the EnclaveAssertionGenerator[1] and EnclaveAssertionVerifier[2] interfaces; these will need technology-specific implementations.
In this initial release we only support a simulated backend, for experimental development. We'll continue looking into specific TEE technologies going forward.
Thanks for the helpful feedback. To answer your question: Asylo is currently x86 specific and provides a simulated enclave backend. We plan on evaluating additional enclave technologies going forward, with the goal of supporting those which gain the most market traction and community support.
Obvious disclaimer: currently working at Google on Asylo.
A couple of practical benefits of Titan are that we can use it in many different environments where traditional secure boot is not available. For example, we're using it both in servers and in our custom networking card.
In addition, traditional secure boot doesn't give us a hardware root of trust, nor does it enable tamper-evident logging.
The log signing prevents undetected tampering after-the-fact; the goal is to make it readily apparent when log messages are altered or deleted, even by parties with root access.
Video: https://youtu.be/F-Y7fhIasjM