Just one problem with your argument: we’re not talking about languages that have orderly null pointer exceptions. We’re expressly talking about languages like C, C++ and Rust.
Your web server example is uncompelling, because a panic-based abort is not the only thing that can distress your system. The simplest example is if the library code doesn’t terminate, accidentally triggering an infinite loop. Or (better from some perspectives, worse from others) an infinite loop that allocates until you run out of memory, denying service until maybe an out-of-memory killer kills the process. In such scenarios, your system can easily end up in a state you didn’t write code expecting, where everything is just broken in mysterious ways.
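To make the hang case concrete, here's a minimal Rust sketch (the function name and timeout are made up for illustration): panic recovery can't help when library code simply never returns, so the caller needs an external watchdog, and even that only detects the hang, it doesn't reclaim the stuck thread's resources.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Hypothetical library call that might hang; it stands in for any
// untrusted code path. Here it returns promptly, but imagine it
// could loop (or allocate) forever instead.
fn possibly_hanging_work() -> u32 {
    42
}

fn main() {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // catch_unwind-style recovery is useless here if this never
        // returns; only an external timeout notices the hang.
        let _ = tx.send(possibly_hanging_work());
    });
    match rx.recv_timeout(Duration::from_secs(2)) {
        Ok(v) => println!("result: {v}"),
        Err(_) => println!("worker hung or died; a supervisor must intervene"),
    }
}
```

Note that even when the timeout fires, the hung thread is still alive and still consuming memory; truly cleaning up requires process-level isolation, which is exactly the point.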
No, if you want your web server to return an orderly 500 in as many unforeseen-error situations as possible, the plain truth is that you need the code that produces that response to run on a different computer (approximate definition; with some designs it may not strictly need to be separate hardware), and you’ll need various sorts of supervisors to catch deviant states and (attempt to) restore order.
In short: for such an example, you already want tools that can abort a runaway process, so it’s actually not so abnormal for a library to be able to abort itself.
There’s genuinely a lot to be said for deliberately aborting the entire process early, and handling problems from outside the process. It’s not without its drawbacks, but it is compelling for things like web servers.
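Rust even exposes abort-early as a first-class build setting. Assuming a standard Cargo project, this makes every panic take down the whole process immediately, so recovery is forced to live outside it:

```toml
# Cargo.toml: every panic aborts the process instead of unwinding,
# handing cleanup and restart duties to an external supervisor.
[profile.release]
panic = "abort"
```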
> Your web server example is uncompelling, because a panic-based abort is not the only thing that can distress your system.
You seem to be saying that it wouldn't catch 100% of the problems, so catching only 80% is not that useful.
I see that as uncompelling. 80% helps a lot!
> you need the code that will produce that to run on a different computer
Problem is, we're using C, C++ and Rust because latency and performance matter. Otherwise we'd be using Go or Java.
So to do what you're proposing, we'd have to make an outbound call at every link of a large filter/processing chain, serializing, transferring, and parsing the whole request data at each recoverable step.
What I’m describing about producing 500s from a different machine is standard practice at scale, part of load balancers. And at small scale, it’s still pretty standard practice to do that from at least a different process, part of reverse proxying.
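As a sketch of the small-scale version, a reverse proxy can own the error response, so it keeps working when the backend process has aborted or hung (the port and location name here are assumptions for illustration):

```nginx
# Minimal reverse-proxy sketch: the error response is served by a
# separate process, so it survives a backend crash, hang, or OOM kill.
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8080;
        # If the backend is gone or unresponsive, answer from here.
        error_page 502 504 = /backend_down;
    }
    location = /backend_down {
        internal;
        return 500 "upstream unavailable\n";
    }
}
```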
I would also note that, if you choose to, you can normally catch panics <https://doc.rust-lang.org/std/panic/fn.catch_unwind.html>, and Rust web servers tend to do this.
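For instance (a minimal sketch, not any particular framework's handler):

```rust
use std::panic;

fn main() {
    // catch_unwind stops a panic at a boundary, letting a web server
    // turn one failed request into a 500 instead of a crash. It does
    // NOT work if the binary was built with panic = "abort".
    let failed = panic::catch_unwind(|| {
        panic!("handler blew up");
    });
    assert!(failed.is_err());

    // Non-panicking code passes its result through unchanged.
    let ok = panic::catch_unwind(|| 2 + 2);
    assert_eq!(ok.unwrap(), 4);

    println!("recovered from the panic and kept serving");
}
```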