I'm curious, help me understand where the breakdown happens. Abstracting a few layers away, can we assume that you have system input, your system does something, and it has output? If this is correct, can we also say that inputs are generated by systems you don't control, which might not play nice with updates within your system? Similarly, changing outputs might break something downstream.
If all that holds, my question then becomes: why does your system not have preprocessing and postprocessing layers for compatibility? Stuff like that is easier than ever to build, and it would allow your components to grow with the ecosystem.
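To make the question concrete, here is a minimal sketch of the kind of compatibility shim being suggested. All names (`preprocess`, `postprocess`, `core`, the field names) are hypothetical, just illustrating the pattern of translating at the boundaries so the core can evolve:

```python
def preprocess(raw: dict) -> dict:
    """Translate whatever an upstream producer sends into the shape
    the core logic expects. Tolerates a renamed key and type drift."""
    return {
        "id": raw.get("id") or raw.get("ID"),   # upstream renamed the key? absorb it
        "value": float(raw.get("value", 0)),    # coerce strings/ints to float
    }

def core(inp: dict) -> dict:
    """The actual system logic, free to assume a clean internal shape."""
    return {"id": inp["id"], "value": inp["value"] * 2}

def postprocess(result: dict) -> dict:
    """Translate the internal result back into the legacy shape that
    downstream consumers still depend on."""
    return {"ID": result["id"], "val": str(result["value"])}

# Upstream sends the old "ID" key and a stringly-typed value; the shims absorb both.
out = postprocess(core(preprocess({"ID": "abc", "value": "21"})))
# out == {"ID": "abc", "val": "42.0"}
```

The appeal is that only the two thin boundary functions need to change when the ecosystem moves, not the core.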
It’s all about risk. If you have a simple enough system, you might be able to hide it behind an abstraction layer that adequately contains the possible effects of change.
But many interesting, useful real-world systems are difficult to contain within a perfect black box. Abstractions are leaky. An API gateway, for example, cannot hide increased latency in the backend.
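A toy illustration of that leak, using a simulated slow backend (the function names and the 0.2s delay are made up for the demo): no matter how clean the gateway's interface is, the caller still observes the backend's latency through it.

```python
import time

def backend_call() -> str:
    # Simulate a backend that has become slow.
    time.sleep(0.2)
    return "ok"

def gateway_call() -> str:
    # The gateway can rename endpoints, validate payloads, translate
    # formats... but it cannot make backend latency disappear.
    return backend_call()

start = time.monotonic()
gateway_call()
elapsed = time.monotonic() - start
# elapsed is at least ~0.2s: the abstraction leaks a non-functional property.
```

The same applies to error rates, ordering guarantees, and resource limits: a wrapper can translate shapes, but it inherits the behavior underneath.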
People accountable for technology have learned, through years of pain, not to trust changes that claim to be purely technical with no possible impact on business functionality. Hence testing, approval, and cost.