
This is a very good question I've been pondering for years, and I've generally come to the same conclusion wrt. the military-industrial complex as a whole - not just software. It seems to me that no one expects any war that would hurt the US any time soon, so it's open season for fleecing the military budget for all it's worth.

I also wonder sometimes if a similar thing isn't happening in enterprise software - that is, actual software doesn't have to work; it only has to serve as an object of trade between companies, and all the problems will disappear in general organizational noise & inertia.



Some years ago I worked for a DoD contractor that builds systems like this, and honestly I think people do care, but they are wildly, woefully ignorant of the risks. They truly do not understand the vulnerabilities. I'm not making excuses for them (especially given the pushback I received when I started calling out the more egregious things), but I think it does help to understand the problem better.


What are some examples of the pushback you received? If it's sensitive, I'd enjoy hearing a made-up scenario along the same lines: a problem pointed out and the deflection given in response.


I could write a long paper on this, and I would have if I thought it would've made a difference... But some highlights:

- Stovepiped organisations: stay in your own lane. But security is cross-cutting.

- Security orgs want to stick to what they know about, not what the threat scope is.

- Security unwilling to own risk, fall back on ass-covering checklists and mandatory processes. This leads to them being an obstacle, a cost rather than a benefit.

- True lack of expertise at stakeholder level. Particularly in the US, the experts are contracted, and never speak out of turn.

- Staying quiet. Americans are extremely conscious of organisational (rather than technical) status and embarrassment, and it isn't career-enhancing to identify naked emperors.

- Good security costs money upfront, and pays back over time. Bad security is free at the beginning, and costs massive amounts to fix, but: (a) fixing is someone else's problem, (b) fixing is new contracts and more work, (c) systems might not be noticeably hacked.

Large companies (e.g. Lockheed Martin) are often very adversarial, and deny fault with lawyers. I've often wondered if places like Japan, with more cooperative cultures, can address this differently.


> Security unwilling to own risk, fall back on ass-covering checklists and mandatory processes. This leads to them being an obstacle, a cost rather than a benefit.

How well this statement retains its correctness across time and space. Pretty much my experience in every company beyond a certain size.


I mostly agree with the points you're making, but...

> Security unwilling to own risk

Security cannot fundamentally own risks created by other parts of the organisation. I'd actually put it the other way around - the organisation is often unwilling to own risks identified by security.


Security needs to accept that it is their job to secure the operations of the org, not to prevent the org from doing things that don't fit into easy use cases.

For example, the chem eng group own the risk of the chemistry being wrong and the plant blowing up. They don't get to say 'let's outsource production to ChemCorp'. Likewise, security needs to secure the ICS, not just ignore it and say 'the SCADA guys do that, it needs SMB1' or when the risks are pointed out say 'you must now change the passwords every 30 days'.

Business units burying their heads in the sand? Well, that can happen too. Pen-tests are great for demonstrating problems, but how many security orgs have the ovaries to do them and force realisation? Did security work with the business unit to mitigate risk, or just want to shut it down?

I'd love to get specific, but my point is that there is a lack of holistic vision across the enterprise. What's needed is incentivising cooperation between stovepipes and a willingness to take risk -- not throwing away the rule book, but writing a new chapter on how to apply it in context.


Japan doesn't appear to be doing much better.

https://securityaffairs.co/wordpress/53856/cyber-warfare-2/d...


There isn't really sufficient information to judge. Persistent attackers would almost certainly achieve eventual compromise on the type of network described. It's really a question of whether, when they find a problem (crashed system, non-compliant senior staff, buggy security protocols, etc.), it gets reported and then acted on.

Japan invented some of the best aspects of safety culture, like being process-driven, checklists, and point-and-call: https://www.atlasobscura.com/articles/pointing-and-calling-j...

It has been remarked before that if security were treated the same as safety-critical systems (like aviation operations, and increasingly, hospitals) then we would have much better security. Tbh, I'm not sure, because of the adversarial nature of attack and defence, but it would be interesting to test.


I hesitate to say, because there's a possibility that even years later many of the vulnerabilities are still there.

However, one that I know eventually got fixed I'll talk about (it makes a great example anyway). When port scanning one of our pieces of equipment, I noticed a strange port number that was accepting packets. I started sending random packets to it and for the most part it ignored them, but occasionally I could get the system to crash and restart.
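
For the curious, here's a minimal sketch of what that kind of blind fuzzing looks like, assuming a UDP service - the address, port, and payload sizes below are placeholders for illustration, not the real system:

    import os
    import socket

    # Purely illustrative target: a placeholder address and the "strange" port
    TARGET = ("192.0.2.10", 9999)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)

    for i in range(10000):
        payload = os.urandom(1 + i % 512)  # random junk of varying length
        sock.sendto(payload, TARGET)
        try:
            sock.recvfrom(4096)            # most packets get no reply at all
        except socket.timeout:
            pass

    # In practice you'd pair this with a liveness probe (ping or a known-good
    # request) to notice when the box has crashed and restarted.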

Turns out, the server had a debug port enabled and active, even in the production build. This allowed you to essentially invoke any C function you wanted, remotely, if you knew the format (which was published in the OS manual)! Very, very bad.

When I reported it, I got a lot of responses like, "Well, this will be on a closed net anyway." and "If they get on this network, there's much bigger problems." Both statements were true for the most part, but it's still a very dangerous attitude to have. Just because a network compromise would be bad doesn't mean you should make it worse by neglecting defense-in-depth. And never assume that somebody isn't going to plug an ethernet cable that shouldn't be there into your equipment (this happens all the time).


It is absolutely happening in enterprise software. I have met sales guys and startup advisors who prided themselves on being able to play that game well. In a certain segment of this industry, you get laughed at if you try to actually come up with a solution.


That’s a sad aspect of humanity :(


>It seems to me that no one expects any war that would hurt the US any time soon, so it's open season for fleecing the military budget for all it's worth.

If that were the whole story, the military budget would be plummeting as our representatives realized that they could also, and far more legitimately, take money away from the military to put toward their pet projects.


Military is the pet project. It's the only form of public spending with broad support even from anti Federal government people.


> Military is the pet project. It's the only form of public spending with broad support even from anti Federal government people.

Law enforcement has about equally broad support, including from anti-federal-government groups (though not always the same ones that back the military, as there are pro-law-and-order anti-interventionist groups that aren't keen on military spending, and pro-military groups that see federal law enforcement as jackbooted authoritarian thugs).


Unfortunately federal law enforcement has lost some of its support among the law and order crowd because of the perception that they are in bed with the political opposition.


For political reasons it can be easier to justify defense spending. Then you just make sure that it's _your_ pet project that gets the spending.


False: they can't cut it back, for economic reasons. The economy is based on it.


Bullshit Jobs!



