> but the process is pretty much the same - check for breaking changes,
Unless you're relying on buggy behaviour, there should be no breaking changes in an LTS update.
(...of course, there's no guarantee that you're not relying on buggy (or, at least, accidental) behaviour. People relying on `memcpy(3)` working as expected when the ranges overlap, simply because it happened to do so with historic versions of the `libc` implementation they most commonly happened to test with, is one example. But see also obxkcd https://xkcd.com/1172/ and https://www.hyrumslaw.com/ )
It’s impossible to avoid the occasional breaking change in an LTS, especially for software like this. Security fixes are inherently breaking changes, just for users we don’t like.
> Or a security vulnerability has forced a breaking change.
Theoretically, I suppose?
Do you have a historic example in mind?
I've been running Debian "stable" in its various incarnations on servers for over a decade, and I can't remember any time any service on any installation I've run had such an issue. But my memory is pretty bad, so I might have missed one. (Or even a dozen!) But I have `unattended-upgrades` installed on all my live servers right now, and don't lose a wink of sleep over it.
This happens all the time on systems that are running hundreds of thousands of apps across hundreds of customers.
The worst one I know: for a while basically all Cloud Foundry installations were stuck behind a patch release because the routing component upgraded their Go version, and that Go version included an allegedly non-breaking change that caused it to reject requests with certain kinds of malformed headers.
The Spring example app happened to send a header with exactly the problem that triggered the rejection. And the vast majority of Cloud Foundry apps are Spring apps, many of which got started by copying the Spring example app.
So upgrading CF past this patch release required a code change to the apps running on the platform. Which the people running Cloud Foundry generally can’t get — there’s usually a team of like 12 people running them and then 1000s of app devs.
OpenSSL isn't necessarily the best example of LTS, but 1.0.1 made a series of changes to how ephemeral Diffie-Hellman parameter generation was handled; that code path could be hooked in earlier releases, but not in later ones.
For the things I was doing in those hooks, it became clear that I needed to make the changes and get them added upstream, rather than doing it in hooks — but that meant we were running OpenSSL with local patches while waiting for upstream to accept and release my changes. If you're not willing to run a locally patched security-critical dependency, that puts you between a rock and a hard place.
Comparing a single function to an entire ecosystem is crazy. Maintaining an LTS imposes a compatibility and support burden on all downstream vendors as well as on the core team. The core team has done a great job of keeping GAed resources stable across releases. I understand there’s more to it than that, but you should be upgrading your dependencies regularly, as par for the course, not swallowing an elephant every 2 years or whenever a CVE forces your hand. The book Accelerate highlights this quite succinctly.