Not sure how the parent concretely operates, but there's no reason you can't do Agile this way.
Agile iteration is just as much about how you carve up work as about how you decide what to do next. For example, you could break a task up into the cases it handles:
> WidgetX handles foobar in main case
> WidgetX handles foobar when exception case arises (More Foo, than Bar)
> WidgetX works like <expected> when zero WidgetY present
Those could be 3 separate iterations on the same software, fully tested and integrated individually, and accumulated over time. And the feedback loop could come internally, as in "How does it function amongst all the other requirements?" and "How is it contributing to problems achieving that goal?"
For safety system software, most people I know would be very nervous (as in, I'm outta here) about testing software components and then not testing the end result as a whole. There are just too many possible side effects that could come into play, including system-wide issues that only reveal themselves when the entire program is complete and loaded/running.
What you describe already occurs to some extent in the process and machinery safety sector, where specialised PLC programming languages are used. There is a type of graphical coding called Function Block, where each block can be a reusable function encapsulated inside a block with connecting pins on the exterior, e.g. a two-out-of-three voting scheme with degraded voting and MOS function available.
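As a rough illustration of the kind of logic such a block encapsulates, here is a minimal sketch of a two-out-of-three (2oo3) voter with degraded voting. This is not vendor firmware or IEC 61131-3 Function Block code, just an illustrative Python model; the `Channel` type and fail-safe degradation policy are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Channel:
    tripped: bool   # True when this channel's sensor demands a trip
    healthy: bool   # False when diagnostics flag the channel as faulty

def vote(channels: list[Channel]) -> bool:
    """Return True (trip) under 2oo3 voting with degradation.

    Illustrative policy: with all 3 channels healthy, vote 2oo3;
    with one channel faulted, degrade to 1oo2 on the survivors;
    with fewer than 2 healthy channels, fail toward the safe state.
    """
    good = [c for c in channels if c.healthy]
    trips = sum(c.tripped for c in good)
    if len(good) == 3:
        return trips >= 2   # normal 2oo3
    if len(good) == 2:
        return trips >= 1   # degraded 1oo2
    return True             # too few healthy channels: trip (fail-safe)

# Example: one channel faulted, one of the two survivors demands a trip
chs = [Channel(True, True), Channel(False, True), Channel(False, False)]
print(vote(chs))  # True: degraded 1oo2 voting trips
```

The exterior "pins" of the real block would correspond to the channel inputs and the trip output here; the point is that the behavior inside the block is fixed and testable in isolation.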
The blocks are tested, or sometimes provided as a type of firmware by the PLC vendor, and then deployed in the overall program with the expectation that the behavior inside the block is known. But before shipping, the entire program is tested at FAT.
Depending on the type of safety system you are building, and the hazards it protects against, there is potentially the expectation from the standards that every possible combination of inputs is tested, along with all foreseeable (and sometimes unexpected) misuse of the machine/process.
In reality that's not physically achievable in any realistic time for some systems, so you have to make educated guesses about where the big/important problems might hide, fuzz, etc. But the point is you aren't going to test like that until you think your system development is 100% complete and no more changes are expected.
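To make the scale problem concrete: exhaustively exercising every input combination is only feasible while the input space is tiny and discrete. A sketch, using a made-up three-input interlock, shows why it's 2^n cases even before analog signals, timing, and internal state enter the picture:

```python
from itertools import product

def interlock(door_closed: bool, guard_in_place: bool, estop_ok: bool) -> bool:
    # Hypothetical interlock for illustration: machine may run
    # only when every permissive condition is satisfied.
    return door_closed and guard_in_place and estop_ok

# Exhaustive test: all 2**3 = 8 boolean input combinations
for door, guard, estop in product([False, True], repeat=3):
    allowed = interlock(door, guard, estop)
    # the only combination permitted to run is all-True
    assert allowed == (door and guard and estop)

print("all", 2 ** 3, "combinations checked")
```

With 30 boolean inputs that loop is already a billion cases, and real systems add continuous and time-dependent inputs on top; hence the educated guessing and fuzzing.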
And if you test and need to make any significant changes due to testing outcomes or emergent requirements, then you are potentially doing every single one of those tests again. At the very least, a relevant subset plus some random ones.
Background: I am a registered TUV FS Eng and design/deliver safety systems.
It's a whole different game: across the multi-year span of a project you might in some cases literally average less than one line of code a day. 95%+ of the work is not writing code, just preparing to, and testing.
To reiterate the parent's endorsement of Agile and the point you seem to be taking issue with: nothing in Agile says you can't run final acceptance tests or integration tests before shipping.
We have done this at quite a few companies where things like functional safety or other requirements had to be met. Agile sadly gets a bad rep (as does DevOps) for the grotesque, perverted way it is rolled out in large orgs (Wagile etc., which amount to nothing but a promise to useless middle/line managers not to fire them, or "dev(sec)ops" condensed into a job title - if that is you, shoot your managers!).
If you increase test automation and get better visibility into risks already during the requirements management phase (you're probably doing D/FMEA already?), then nothing stops you from kicking those lazy firmware/hardware engineers who are scared of version control or Jenkins to up their game, and making your org truly "agile". Obviously it's not a technical problem but a people problem (to paraphrase Gerald M. Weinberg), and so everyone will moan about Agile not being right for them or DevOps not solving their issues, while in reality we (as an industry) have been having the same discussion since the advent of eXtreme Programming. I'm so tired of it that I want to punch every person who invites an Agile coach simply for not having the guts to say the things everyone already knows. It's infuriating to the point I want to just succumb to hard drugs.
This is exactly right. I work in a highly regulated space, and we have been working in an Agile framework for a while now. There are two iterations baked into every release cycle (at the end) for final regression testing. That cycle will re-run every test case generated during the program increment, plus additional test cases chosen based on areas of the application that were touched during development.
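The selection logic described above ("everything from the increment, plus extras for touched areas") can be sketched in a few lines. All names and test-case IDs here are invented for illustration, not from any real tool:

```python
# Full regression suite generated during the program increment
full_suite = ["TC-001", "TC-002", "TC-003"]

# Additional targeted cases, tagged by the application area they cover
targeted = {
    "billing": ["TC-101", "TC-102"],
    "auth": ["TC-201"],
}

# Areas actually touched during this development cycle
touched = {"auth"}

# Run list: the whole suite, plus targeted cases for every touched area
run_list = list(full_suite)
for area in sorted(touched):
    run_list += targeted.get(area, [])

print(run_list)  # ['TC-001', 'TC-002', 'TC-003', 'TC-201']
```

In practice the "touched areas" set would come from change tracking (e.g. commits mapped to modules), but the shape of the selection is the same.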
On top of final validation, we also have an acceptance validation team that runs full integration tests after final validation is complete.
I would very much like to understand how the conventional workflow for the work I am involved in might be improved. But I am not quite clear what Agile, as you implement it, means in contrast to the v-model and waterfall, and how it provides advantages (accelerated schedule, I am guessing?) to the process.
Can you refer me to any available online case studies etc, or provide me some more detail?
In the sectors I work in, we integrate off-the-shelf hardware such as instruments, valves, etc.; we don't manufacture from components as such.
I recommend you learn more about SAFe (Scaled Agile Framework): https://www.scaledagileframework.com/ . These frameworks all have their merits, but I find this one scales up to complex organizations, and you can simply drop the things that are not worth it for smaller businesses.
Waterfall is a great methodology where warranted. It ensures you're doing things in a principled, predictable, repeatable manner. We see all this lamenting about the lack of reproducibility in science and build systems, and effort to implement it, yet we seem to embrace chaos in certain types of engineering practices.
We largely used waterfall in GEOINT, and I think it was a great match; our processes started to break down and fail when the government began insisting we embrace Agile methodologies to emulate commercial best practices. Software capabilities of ground processing systems are at least somewhat intrinsically coupled to the hardware capabilities of the sensor platforms, and those are known and planned years in advance and effectively immutable once a vehicle is in orbit. The algorithmic capabilities are largely dictated by physics, not by user feedback. When user feedback is critical, i.e. for UI components, by all means, be Agile. But if you're developing something like the control software for a thruster system, and the physical capabilities and limitations of the thruster system are known in advance and not subject to user feedback, use waterfall. You have hard requirements, so don't pretend you don't.
Even with “hard” requirements in advance, things are always subject to change, or unforeseen requirements additions/modifications will be needed.
I don’t see why you can’t maintain the spirit of agile and develop iteratively while increasing fidelity, in order to learn out these things as early as possible.
> I don’t see why you can’t maintain the spirit of agile and develop iteratively
The question is not whether you can't. The question is whether it provides advantages. Agile comes with its own downsides compared to waterfall. Note that I've been working with agile methods for most of my career and I don't want to change that.
If builders built buildings the way programmers write programs, then the first woodpecker that came along would destroy civilization.
~ Gerald Weinberg, Weinberg's Second Law
> If builders built buildings the way programmers write programs, then the first woodpecker that came along would destroy civilization.
If builders built buildings the way programmers write programs, we’d have progressed from wattle-and-daub through wood and reinforced concrete to molecular nanotechnology construction in the first two generations of humans building occupied structures.
The analogy is bad because programs and buildings aren't remotely similar or comparable.
Still, I feel like your analogy is the better one; things are moving very fast. With declarative infra and reproducible builds you're pumping out high-quality, well-tested buildings at record speeds.
Programmers don't build, they design. It's more akin to what building architects do in a CAD program. They go through many iterations and changing specs.
When programmers are designing, it is more likely to be in the early stages, when the program is still small. Once the program gets bigger, the effort often devolves into simply building. They might feel the design is wrong, but by then the inertia is against the design evolving.
What we need is a practical way to keep the design and implementation synchronized and yet decoupled.
You don't have to, but it is very common to fall into that trap.
If you're working within a safety-critical industry and want to do Agile, typically you'll break down high-level requirements into software requirements while you are developing, closing/formalizing the requirements just moments before freezing the code and the technical file / design documentation.
It's a difficult thing to practice Agile in such an industry, because it requires a lot of control over what the team is changing and working on at all times, but it can be done, with great benefits over waterfall as well.
Actually, most functional safety projects use the v-model (or similar; the topography can vary a little according to needs), which is waterfall laid out in a slightly different way to show more clearly how verification and validation close out all the way back to requirements, with high degrees of traceability.
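The closure property the v-model is after can be stated very simply: every requirement traces forward to at least one verification test, and every test traces back to a requirement. A minimal sketch, with invented IDs (real projects use dedicated traceability tooling, not a dict):

```python
# Safety requirements and the verification tests that claim to cover them
requirements = {"SR-1", "SR-2", "SR-3"}
tests = {
    "VT-10": {"SR-1"},          # each test lists the requirements it verifies
    "VT-11": {"SR-2", "SR-3"},
}

# Union of everything the tests claim to cover
covered = set().union(*tests.values())

uncovered = requirements - covered  # requirements with no verifying test
orphans = covered - requirements    # tests tracing to a nonexistent requirement

# Traceability "closes out" only when both sets are empty
assert not uncovered and not orphans
print("traceability closed:", len(requirements), "requirements,",
      len(tests), "tests")
```

The v-model's phase gates are essentially checkpoints where sets like `uncovered` must be provably empty before you proceed.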
I've always wanted to break from that approach toward something a little more nimble, probably by use of tools. But I can't see Agile working in functional safety without some very specific tools to assist, which I have yet to see formulated and developed for anything at scale. Also, there are key milestones where you really need to have everything resolved before you start the next phase, so maybe sprints, dunno.
The thing about doing waterfall/v-model is that, if done correctly, there is little chance you get to the final Pre-Start Safety Review/FSA 3 (or whatever you do before introducing the hazard consequences to humans) only to discover a flaw that kicks you back 6 or 12 months in the design/validation/verification process, while everyone else stands around and waits because they are ready and their bits are good to go, and now you are holding them all up. Not a happy day if that occurs.
FS relies on a high degree of traceability and on testing the software as it will be used (as best as possible), in its entirety.
So I'm not sure how Agile could work in this context, at least past the initial hazard and risk/requirements definition lifecycle phases.
FS is one of those things where the progress you can claim is really only as far as your last lagging item in the engineering sequence of events. The standard expects you to close out certain phases before moving on to subsequent ones. In practice it's a lot messier than that unless extreme discipline is maintained.
(To give an idea of how messy it can get in reality, and how you have to find ways to meet the traceability expectations, sometimes in retrospect: on the last FS project where I was responsible for design, we were 2.5 years in and still waiting for the owner to issue us their safety requirements. We had to run on a guess and progress speculatively. Luckily we were 95%+ correct with our guesses when reconciled against the requirements that finally arrived.)
But normally, racing ahead on some items is a little pointless and likely counterproductive, unless you're just prototyping a proof-of-concept system/architecture or a similar activity. You just end up repeating work, and then you also have extra historical info floating around, with the possibility that something that was almost right but is no longer current gets sucked into play, etc. Doc control and revision control are always critical.
Background: I am a TUV-certified FS Eng, and I have designed/delivered multiple safety systems, mainly to IEC 61511 (process) or IEC 62061 (machinery).
LNG plants, burner management systems, mine winders, conveyors - any process plant or machinery where there is potential for harm to come to humans and there is an electronic programmable device mitigating the risk, e.g. a safety PLC running a Safety Instrumented System.
I am about to do some automotive FS, so that is potentially ISO 26262, but it might actually be more IEC 61508, which is the parent standard for the safety group of standards.
Could you use both to good effect? Waterfall to make a plan, schedule, and budget; then basically disregard all that and execute using Agile, and see how you fare. Of course there would be a reckoning, as you would end up building the system they want rather than what was spec'd out.
You could. You might even say it's difficult to make any project estimate without your plan being waterfall. Planning and execution are deliberately two very different things, and convincing the customer (or the steering committee) of that is key to a good product.
These are all just heuristics that help people manage the fundamentally unmanageable: the unpredictable future. Everyone does a little bit of everything when working. A big company will waterfall year-long strategies with the individual parts Agile'd. Individuals will waterfall their daily tasks while working in an agile sprint.