They will be in for a surprise: using those massive buggy systems is not one bit easier for the hackers than for the actual users. Maybe the many bugs in those huge systems will turn out to be the best protection against enemy takeover... not actually too crazy an idea when I think of biology and the mess that biological systems are, where even errors are vital to the functioning of the whole (e.g. accidentally making a protein that is normally switched off; it sometimes turns out to be useful to have it around when the environment changes, whereas an error-free, efficient system would never have made it).
Actually, yes. It takes a horde of military personnel to operate the hodgepodge of modern military information systems and technologies. There's nothing easy about it: all sorts of incompatible and buggy tools. Our enemies would have a hard time putting the military command & control apparatus to work -- I mean, we have trouble enough with it as it is.
But that doesn't mean the enemy can't gather information and predict our next moves in a conflict. Just as we did with Enigma in WWII, we are subject to the same kind of eavesdropping, where someone knows our next move before we make it.
A company I worked for had a CEO who was rather paranoid about the Chinese stealing the software for our innovative product, and he would often rant about it at our all-hands meetings. To which I could only think, "Well, if they can make sense of it, good luck to them." I really think it would have been easier for them to redo the implementation from scratch!
> expose, alter, disable, destroy, steal or gain unauthorized access to or make unauthorized use of an asset
This is the objective of the adversary in a conflict, with respect to information and systems security. It doesn't matter whether they can control a system; if they can make it less reliable, that's still a win (just not as big a one). If they can get it to feed false or misleading information to their opponent, that's a win too (if you've hacked a radar, can you show an extra blip on the screen, or make contacts occasionally shift position and reduce the operator's confidence?).
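A toy sketch of that last idea (everything here is invented for illustration; it models the effect, not any real radar system): an attacker sitting between sensor and display doesn't need full control, just the ability to occasionally inject a phantom contact or jitter a real one.

    import random

    # Toy model only: a "contact" is (id, x, y); rates and names are made up.
    def tamper(contacts, phantom_rate=0.05, jitter_rate=0.1, jitter_m=200.0):
        """Occasionally inject a phantom contact or nudge a real one.

        The attacker never takes control; rare, small perturbations are
        enough to erode the operator's trust in the picture.
        """
        out = []
        for cid, x, y in contacts:
            if random.random() < jitter_rate:
                x += random.uniform(-jitter_m, jitter_m)
                y += random.uniform(-jitter_m, jitter_m)
            out.append((cid, x, y))
        if random.random() < phantom_rate:
            out.append(("phantom", random.uniform(0, 10_000),
                        random.uniform(0, 10_000)))
        return out

    print(tamper([("track-1", 1200.0, 3400.0), ("track-2", 5600.0, 800.0)]))

The point of the sketch: the attack is invisible most of the time, which is exactly what makes the operator distrust the screen all of the time.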
I have been wondering whether the recent Navy navigation-related crashes are due to enemies tampering with systems, testing live how weak a Windows-based fighting ship is.
From what I understand, the recent Navy collisions were caused by understaffing => sailors having to work overly long days => sailors literally falling asleep at their posts, seeing things that aren't there, or not seeing things that are there.
They seem to work twelve hours on, twelve off, with the requirement that some of their personal time be used to maintain physical fitness, so it does seem like a disaster waiting to happen.
There is zero evidence of enemy tampering in the recent Navy ship collisions / allisions. It was simple incompetence and bad luck. There's really no way to tamper with shipboard surface radars, binoculars, horns, and VHF radios, and as long as that equipment is working correctly, all collisions can be avoided.
I did not specify an exact scenario because that was not the point I was trying to make (it was mostly meant as a joke, anyway). If the hackers already have to fight through so many bugs, disabling the system will be just as hard for them; and the operators already have to live with disabling issues in the uncompromised system. As one comment here put it, they didn't even notice the intrusion because it didn't feel any different from normal operation :-)
What exactly do you mean? The scenario I described in very, very broad terms was from a lecture by Eric S. Lander about a specific bacterial cell. If you have an issue with the general description, I don't understand what it is, since you don't say anything at all apart from a snide comment that doesn't even make sense to me. At the very least I would expect someone who bothers to reply because they disagree to say what exactly they disagree with, and to be specific about it. While I'm a CS person, I have a broad background in the life sciences too.
If the bugs were annoying enough to prevent proper functioning of the system, they'd be fixed. If the current users are able to get some utility out of the system with all its bugs, then you can rest assured the hackers will too.
The joke was not about software specifically, but about the whole system, everything. But even software feels like evolutionary forces are at work: when you work on huge systems developed and added to over years, sometimes decades, often by new people (lots of churn, contractors), the "design" becomes less and less visible and the whole thing becomes a mess. The role of deciding whether a new "gene" (feature/bug fix) works is taken over by the (also messy) huge test suite, and since nobody understands the whole system any more, people code to pass those tests. I made the comparison looking back at a time when I was one of those adding to a huge existing piece of software without understanding it (that had become impossible; the only objective was to have the tests pass, or even just most of them).
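A contrived illustration of "the test suite is the fitness function" (the function and tests are invented for the example): a change survives if the tests pass, whether or not anyone understands why.

    # The only spec anyone still trusts: the test suite.
    def shipping_cost(weight_kg, destination):
        # Nobody remembers why these branches exist; each was added
        # by a different contractor to make one more test pass.
        if destination == "US" and weight_kg <= 2:
            return 4.99
        if destination == "US":
            return 4.99 + (weight_kg - 2) * 1.25
        if destination == "DE" and weight_kg == 7:   # special-cased purely to
            return 13.37                             # satisfy test_de_parcel
        return 25.0                                  # "works" for every test we have

    # Selection pressure: a mutation (code change) survives iff these pass.
    assert shipping_cost(1, "US") == 4.99
    assert shipping_cost(7, "DE") == 13.37

Each branch is a "gene" that was selected for by the tests, not designed; nobody can say whether an 8 kg parcel to DE costing 25.0 is a feature or a bug.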
When I later also took biology classes (and organic chemistry, biochemistry, physiology, etc.), I could not help but see parallels - I say parallels, not a transfer of the models! - to how we develop huge systems, be they software, hardware, or a combination of both. No single human understands even a significant part of them any more. "Deliberate design" is no longer the sole force acting on those systems.
I'd add that most software systems compete on the market, which is as close to a direct evolutionary process as you can get in a modern environment. And that process has a fitness function that's quite misaligned with what a designer wants at any given step.
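To make that misalignment concrete, here's a toy selection loop (all numbers invented): variants are scored by a market-style fitness that sees only features shipped, never design quality, so the "fittest" lineage is the one whose quality erodes fastest.

    import random

    def mutate(variant):
        features, quality = variant
        if random.random() < 0.5:          # rush a feature out the door
            return features + 1.0, quality - 0.1
        else:                              # refactor: invisible to the market
            return features + 0.2, quality + 0.1

    def evolve(generations=100, population=20):
        pop = [(0.0, 1.0)] * population    # each variant: (features, quality)
        for _ in range(generations):
            pop = [mutate(v) for v in pop]
            pop.sort(key=lambda v: v[0], reverse=True)  # market sees features only
            pop = pop[: population // 2] * 2            # winners reproduce
        return max(pop, key=lambda v: v[0])

    features, quality = evolve()
    print(f"fittest variant: {features:.1f} features shipped, "
          f"design quality {quality:.2f}")

Run it and the winning lineage is always the one that rushed: quality never enters the fitness function, so selection can't preserve it.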