“The development of new product lines for use in service of critical infrastructure or national critical functions (NCFs) in a memory-unsafe language (e.g., C or C++) where there are readily available alternative memory-safe languages that could be used is dangerous and significantly elevates risk to national security, national economic security, and national public health and safety.”
Now that's a strong statement.
But it's real. There are so many state actors doing cyberattacks now that everything needs to be much tougher. Otherwise, someday soon much of the world stops working.
I think it is important to recognize that this will only apply to generic, everyday software, which in government is already mostly written in memory-safe languages such as Java. Data infrastructure with state-of-the-art scale, performance, and workload requirements will continue to be written in C++ for the foreseeable future, largely because there is no alternative. Maybe Zig some day?
A pattern I’ve seen (and used myself) is that the heavy core engines are written in C++ for performance and maintainability but skinned with a Rust API wrapper. Rust is closer to a “better Java” than a C++ replacement for extreme systems software.
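Concretely, the shape of that pattern is a C-compatible boundary exported from the C++ engine, with a thin safe wrapper on the Rust side that owns the handle and guarantees cleanup. A minimal sketch, assuming a hypothetical engine_* C ABI (not a real library):

    use std::ffi::{c_char, c_int, c_void, CString, NulError};

    // Hypothetical C ABI exported by the C++ core engine
    // (the engine_* names are made up for illustration).
    extern "C" {
        fn engine_create() -> *mut c_void;
        fn engine_query(handle: *mut c_void, query: *const c_char) -> c_int;
        fn engine_destroy(handle: *mut c_void);
    }

    // Safe wrapper: owns the raw handle, confines all `unsafe` to
    // this one module, and frees the engine exactly once via Drop.
    pub struct Engine {
        handle: *mut c_void,
    }

    impl Engine {
        pub fn new() -> Engine {
            Engine { handle: unsafe { engine_create() } }
        }

        // Callers get a borrow-checked API; the raw pointer never escapes.
        pub fn query(&mut self, q: &str) -> Result<i32, NulError> {
            let q = CString::new(q)?; // rejects interior NUL bytes
            Ok(unsafe { engine_query(self.handle, q.as_ptr()) })
        }
    }

    impl Drop for Engine {
        fn drop(&mut self) {
            unsafe { engine_destroy(self.handle) }
        }
    }

The design point is that the unsafety doesn't disappear; it just gets concentrated in one small, auditable surface instead of being smeared across the whole codebase.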
Calling Rust a "better Java" and not a credible alternative to C++ for "extreme systems software", without further detail, is an opinion that I find hard to take seriously. So I'd like to understand your viewpoint better. Could you elaborate on which aspects of C++ specifically make it so much better suited for these tasks, from a performance and maintainability perspective?
I wonder if this is a real solution. "Memory safety" has sure been pushed hard the last few years, but this feels more like a "we need to do something, this is something, we should do this" kind of response than anything that will really address the issue.
If security-through-virtualization had been the fad lately, would that have been proposed instead?
It is not a real solution. The people delivering memory-safe code today do not think their systems are secure against individual, lone attackers, let alone fully funded state actors. The overwhelming majority of them, and of software developers and software security professionals generally, probably think it is literally impossible to design and develop usable systems secure against such threats, i.e., systems that can actually meet the desired requirements.
"Let us do this thing that literally every practitioner thinks cannot achieve the requirements, and maybe we will accidentally meet the requirements in spite of it" is a bona fide insane strategy. It only makes sense if those are not really "requirements", just nice-to-haves; which, to be fair, is the state of software security incentives today.
If you actually want to be secure against state actors, you need to start from things that work, or at least things that people believe could, in principle, work, and then work down. Historically, there were systems certified under the TCSEC Orange Book that, ostensibly, the DoD of the time (1980s-90s) believed were secure against state actors. A slightly more modern example would be the Common Criteria SKPP, which required NSA evaluation that any certified system met those requirements.
But if you think they overestimated the security of such systems, so there are no actual examples of working solutions, then it still makes no sense to go with things that people know do not work. You still need to at least start from things that people believe could be secure against state actors; otherwise you have already failed before you even started.
> If you actually want to be secure against state actors, you need to start from things that work, or at least things that people believe could, in principle, work, and then work down. Historically, there were systems certified under the TCSEC Orange Book that, ostensibly, the DoD of the time (1980s-90s) believed were secure against state actors. A slightly more modern example would be the Common Criteria SKPP, which required NSA evaluation that any certified system met those requirements.
Right. I was around for that era and worked on some of those systems.
NSA's first approach to operating system certification mirrored the one they used for validating locks and filing cabinets. They had teams try to break in. If they succeeded, the vendor was told of the vulnerabilities and got a second try. If an NSA team could break in on the second try, the product was rejected.
Vendors screamed. There were a few early successes: some very limited operating systems for specific military needs, and something for Prime minicomputers. Nothing mainstream.
The Common Criteria approach allows third-party labs to do the testing, and vendors can try over and over until success is achieved. That is extremely expensive.
There are some current successes. [1][2] These are both real-time embedded operating systems.
ProvenRun, used in military aircraft, has 100% formal proof coverage on its microkernel.
We know how to approach this. You use a brutally simple microkernel such as seL4 and do full proofs of correctness on it. There's a performance penalty for microkernels, maybe 20%, because there's more copying. There's a huge cost to making modifications, so modifications are rare.
The trouble with seL4 is that it's not much more than a hypervisor. People tend to run Linux on top of it, which loses most of the security benefits.
> The trouble with seL4 is that it's not much more than a hypervisor. People tend to run Linux on top of it, which loses most of the security benefits.
Well, yeah, that's a problem.
But the bigger problem is that this works for jets, as long as you don't need updates. It doesn't work for general purpose computers, for office productivity software, for databases (is there an RDBMS with a correctness proof?), etc. It's not that one couldn't build such things, it's that the cost would be absolutely prohibitive.
It's not for everything. But the serious verification techniques should be mandatory in critical infrastructure. Routers, BGP nodes, and firewalls would be a good place to start. Embedded systems that control important things - train control, power distribution, pipelines, water and sewer. Get those nailed down hard.
Well, but those need new features from time to time, and certification would make that nigh impossible. I'd settle for memory-safe languages as the happy middle of the road.
seL4 is a bit more than a hypervisor, but it's definitely very low-level. In terms of a useful seL4-based system, you may want to look at https://trustworthy.systems/projects/LionsOS/ – not yet verified, but will be.
I think certification overestimates security, absolutely. Certification proves nothing.
You can use theorem provers to prove that an implementation matches its specification, but you can't prove that the specification (the business logic) is correct. There's a degree to which you just cannot prevent security vulnerabilities.
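A toy sketch of that gap in Lean 4 (names made up, purely illustrative): the prover guarantees the code matches the spec, and nothing more.

    -- Implementation.
    def doubleIt (n : Nat) : Nat := n + n

    -- Spec and proof: doubleIt conforms to "output = n + n", here
    -- by definition. But if the business rule was actually "triple
    -- it", this theorem is silent; the spec itself is unverified.
    theorem doubleIt_meetsSpec (n : Nat) : doubleIt n = n + n := rfl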
But switching to memory-safe languages will reduce vulnerabilities by 90%. That's not nothing.
While these types of failures are the 50-70% problem, the 30-50% that's left seems like a big problem too, and the black hats will just concentrate more on those once the low-hanging fruit is removed with Rust, C#, Python, whatever.
It is true that black hats are going to focus on the remainder pretty much by definition because there’s no other choice. The rest is a problem and it needs a solution. But the fact that it exists is not a sound argument against choosing a memory-safe language.
Current solutions are not ideal, but they still eliminate a huge portion of defects. Today we can avoid all the memory issues, and we can also focus on the next biggest category of defects. If we keep halving the possible defects, it won't take long before software is near defect-free.
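To put numbers on that: n successive halvings leave 2^-n of the defects, so ten rounds would leave less than 0.1% (1/1024) of what we started with, assuming each round is actually achievable.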
I guess my point was that it won't be a "tidal wave" of solved security issues. Now all the effort that went into finding buffer overflows and use-after-free errors just gets shifted to combing through code for logic errors and missed opportunities to tighten up checks. It's not going to be a 50-70% reduction. Maybe half that? I mean, it would help, but it's not going to fix the problem in a huge way at all.