Hacker News

For safety system software, most people I know would be very nervous (as in, I'm outta here) about testing software components and then not testing the end result as a whole. Too many possible side effects could come into play, including system-wide things that only reveal themselves when the entire program is complete and loaded/running.

What you describe already occurs to some extent in the process and machinery safety sector, where specialised PLC programming languages are used. There is a type of graphical coding called Function Block, where each block can be a reusable function encapsulated inside a block with connecting pins on the exterior, e.g. a two-out-of-three (2oo3) voting scheme with degraded voting and MOS (maintenance override switch) function available.
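A 2oo3 voter with degraded voting and a maintenance override is small enough to sketch. This is a hypothetical illustration in Python, not vendor function-block code; the function name, the degraded-mode rules, and the fail-safe choice are my assumptions, not any particular vendor's behaviour.

```python
def vote_2oo3(a: bool, b: bool, c: bool,
              faulted=(False, False, False),
              mos: bool = False) -> bool:
    """Trip (return True) when enough healthy channels demand a trip.

    faulted: per-channel fault flags; a faulted channel is excluded and
             the scheme degrades (2oo3 -> 1oo2 -> 1oo1).
    mos:     maintenance override switch; suppresses the trip output.
    """
    if mos:                        # maintenance override: trip suppressed
        return False
    healthy = [v for v, f in zip((a, b, c), faulted) if not f]
    if not healthy:                # all channels faulted: fail safe, trip
        return True
    if len(healthy) == 3:          # normal operation: 2oo3 majority
        return sum(healthy) >= 2
    return any(healthy)            # degraded: 1oo2 or 1oo1
```

The point of encapsulating this in a block is exactly what the parent describes: the voting logic is tested once, then reused with its behaviour assumed known.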

The blocks are tested, or sometimes provided as a type of firmware by the PLC vendor, and then deployed in the overall program with the expectation that behaviour inside the block is known. But before shipping, the entire program is tested at FAT (factory acceptance testing).

Depending on the type of safety system you are building, and the hazards it protects against, there is potentially the expectation from the standards that every possible combination of inputs is tested, along with all foreseeable (and sometimes unexpected) misuse of the machine/process.
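For a small Boolean block, "every possible combination of inputs" is at least mechanically enumerable. A hedged sketch, using a plain 2oo3 majority function as a stand-in for the block under test, checked against an independently written oracle:

```python
from itertools import product

# Illustrative only: drive every input combination of a small Boolean
# block and compare against an independent oracle expression.
def majority_2oo3(a: bool, b: bool, c: bool) -> bool:
    return (a and b) or (a and c) or (b and c)

for combo in product([False, True], repeat=3):   # all 2^3 input vectors
    expected = sum(combo) >= 2                   # independent oracle
    assert majority_2oo3(*combo) == expected
```

With n Boolean inputs this is 2^n cases, and real systems also carry analogue inputs, timing, and internal state, which is where the next point comes in.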

In reality that's not physically achievable in any realistic time for some systems, so you have to make educated guesses about where the big/important problems might hide, fuzz, etc. But the point is you aren't going to test like that until you think your system development is 100% complete and no more changes are expected.
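Where exhaustive enumeration isn't feasible, random sampling against an invariant is the usual compromise. A minimal, hypothetical sketch of that idea (the clamp function is a stand-in, not anything from a real safety system):

```python
import random

# Fuzz sketch: when the input space is too large to enumerate, sample it
# randomly and check an invariant rather than exact expected outputs.
def clamp(x: float, lo: float, hi: float) -> float:
    return max(lo, min(x, hi))

random.seed(0)                         # reproducible test runs
for _ in range(10_000):
    x = random.uniform(-1e12, 1e12)
    lo, hi = sorted(random.uniform(-1e6, 1e6) for _ in range(2))
    assert lo <= clamp(x, lo, hi) <= hi   # invariant holds for every sample
```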

And if you test and need to make any significant changes due to testing outcomes or emergent requirements, then you are potentially doing every single one of those tests again. At the very least a relevant subset, plus some randoms.

Background: I am a registered TÜV FS Eng and design/deliver safety systems.

It's a whole different game: across the multi-year span of a project you might in some cases literally average less than one line of code a day. 95%+ of the work is not writing code, just preparing to, and testing.



To reiterate the parent's endorsement of agile and the point that you seem to be taking issue with: nothing in Agile says you can't run final acceptance tests or integration tests before shipping.

We have done this in quite a few companies where things like functional safety or other requirements had to be met. Agile sadly gets a bad rep (as does DevOps) for the way it is rolled out in its grotesque, perverted style in large orgs ("wagile" and the like, which are nothing but a promise to useless middle/line managers not to fire them, or "dev(sec)ops" condensed into a job title - if that is you, shoot your managers!).

If you increase test automation and get better visibility into risks already during the requirements management phase (e.g. you're probably doing D/FMEA already?), then nothing stops you from kicking those lazy firmware/hardware engineers who are scared of using version control or Jenkins to up their game, and making your org truly "agile". Obviously it's not a technical problem but a people problem (to paraphrase Gerald M. Weinberg), and so every swinging dick will moan about Agile not being right for them or DevOps not solving their issues, while in reality we (as an industry) have been having the same discussion since the advent of eXtreme Programming. I'm so tired of it I want to punch every person who invites an Agile coach simply for not having the guts to say the things everyone already knows; it's infuriating to the point I want to just succumb to hard drugs.


> to reiterate on parents endorsement for agile and the point that you seem to be taking issue with: nothing in Agile says you can't run final acceptance tests or integration tests before shipping.

This is exactly right. I work in a highly regulated space, and we have been working in an Agile framework for a while now. There are two iterations baked into every release cycle (at the end) for final regression testing. That cycle will re-run every test case generated during the program increment, plus additional test cases chosen based on areas of the application that were touched during development.

On top of final validation, we also have an acceptance validation team that runs full integration tests after final validation is complete.


I would very much like to understand how it might be possible to improve on the conventional workflow for the work I am involved in. But I am not quite clear on what agile as you implement it means, in contrast to the V-model and waterfall, and how it provides advantages (I am guessing an accelerated schedule?) to the process.

Can you refer me to any available online case studies etc, or provide me some more detail?

In the sectors I work in we integrate off-the-shelf hardware such as instruments, valves, etc.; we don't manufacture from components as such.


I recommend you learn more about SAFe (Scaled Agile Framework): https://www.scaledagileframework.com/ . All these frameworks have their merits, but I find this one scales up to complex organizations, and smaller businesses can simply drop the parts that are not worth it.




