Not if the public view of the data doesn't change, just the storage. That's sorta the point: you don't have to version if you only change the underlying storage model.
For example let's say for whatever reason we were storing a duration as an int. But instead we decided to migrate it to start time and end time.
Do we need to force that change on everyone downstream and add a new major version API? Or can we just compute the old duration from the new attributes in the transformation?
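To make the duration example concrete, here's a minimal sketch of that kind of transformation layer. The function and field names (`to_public_view`, `start_time`, `end_time`) are hypothetical, not from any particular codebase: the idea is just that storage moves to start/end timestamps while the public contract keeps exposing the old integer `duration`.

```python
from datetime import datetime

# Hypothetical transformation layer: storage now keeps start/end
# timestamps, but the public API still serves the old `duration` field,
# so downstream consumers never see the migration.
def to_public_view(record: dict) -> dict:
    """Map the new storage model onto the unchanged public contract."""
    start = datetime.fromisoformat(record["start_time"])
    end = datetime.fromisoformat(record["end_time"])
    return {
        "id": record["id"],
        # Old consumers keep receiving an integer duration in seconds.
        "duration": int((end - start).total_seconds()),
    }

view = to_public_view({
    "id": 7,
    "start_time": "2024-01-01T10:00:00",
    "end_time": "2024-01-01T10:05:00",
})
# view["duration"] == 300
```

No major version bump is needed because the response shape is byte-for-byte what v1 consumers already expect.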
Even in your example, do we really need to change the frontend in this case? For small projects the extra boilerplate probably doesn't outweigh the benefits, but for large projects it absolutely does.
If the data structure needs to be changed, it needs to be changed at the beginning of the system, typically the front end, and downstream code needs to be fixed to accommodate that.
Otherwise you've created 20 different state machines for each part of the system, stacked on top of each other, each expecting a different data structure and returning a different data structure.
So changing anything after the system is sufficiently complex is an exercise in masochism, and development will slow to a crawl to avoid breaking one of the 20 downstream black boxes.
There should only be a single data structure contract that all teams follow.
There needs to be a SINGLE data structure passed through the system originating at the beginning of the system.
The data structure can be added to by code along the pipeline but never changed.
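The "added to but never changed" rule can be sketched as a small invariant that every stage has to respect. This is an illustration under my own naming (`extend`, `ingest`, `parse`, `enrich` are made up), not the commenter's actual design: each stage returns a new record that only adds fields, and overwriting an existing field is an error.

```python
# A sketch of the append-only contract: each pipeline stage extends the
# record but may never mutate or replace a field that is already there.
def extend(record: dict, **new_fields) -> dict:
    """Return a new record with extra fields; refuse to overwrite."""
    overlap = record.keys() & new_fields.keys()
    if overlap:
        raise ValueError(f"stages may not overwrite fields: {overlap}")
    return {**record, **new_fields}

# Illustrative stages, each adding its own field to the single record.
def ingest(raw: str) -> dict:
    return {"raw": raw}

def parse(record: dict) -> dict:
    return extend(record, value=int(record["raw"]))

def enrich(record: dict) -> dict:
    return extend(record, doubled=record["value"] * 2)

record = enrich(parse(ingest("21")))
# record == {"raw": "21", "value": 21, "doubled": 42}
```

Because fields are only ever added, every stage downstream can still read what every stage upstream produced, which is the whole point of the single contract.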
Am I understanding correctly that you think there should be just one data structure, shared between all teams, which is a superset of all the fields it could ever have, and people just fill in those fields? So at any given point you don't even know which data is present or absent, depending on which parts of the pipeline it has or hasn't passed through?
At this point you may as well just use an object or dictionary. The type doesn't give you any idea of the actual shape of the data.
> So at any given point you don't even know what data is present or not present depending on which point in the pipeline it has passed or not?
At least with a non-mutating data structure backed by the public contract, you can tell whether it's passed through a part of the pipeline, because a required field is still blank or null.
With a mutating data structure that gets changed, say, 20 different times throughout the system, you have no idea whether it's passed through some part of the pipeline or not.
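A small sketch of that null-field check, under assumed names (`Record`, `stages_completed` are illustrative): every field of the superset contract is declared up front, stages fill them in, and a `None` value tells you exactly which stages the record has not yet passed through.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative superset contract: all fields exist from the start,
# and a None value marks a stage the record hasn't been through yet.
@dataclass
class Record:
    raw: str
    parsed: Optional[int] = None
    enriched: Optional[int] = None

def stages_completed(r: Record) -> list:
    """List the pipeline stages that have already filled their field."""
    return [f for f in ("parsed", "enriched") if getattr(r, f) is not None]

r = Record(raw="21")
r.parsed = int(r.raw)  # the parse stage has run
# stages_completed(r) == ["parsed"], so enrich clearly hasn't run yet
```

With a structure that mutates its shape instead, there is no such field to inspect, which is the asymmetry the comment is pointing at.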
Even better if somehow everything can interact via a centralized database where all state is stored, even intermediate state. It's not just storage; it's also state management.