I can't understand why people would prefer to add "?" to everything instead of just having exceptions, which automate that behavior.
In the bad old days of C there were two kinds of programs: programs without correct error handling, and programs where half the lines of code were unhappy paths that do what exceptions do... with a huge amount of work.
Today people are repeating the same mistakes of the past. Putting a "?" on everything is a lot better than what you had to do in C, but why do that when you can just use a language with exceptions?
It is like somebody showed cavemen fire (exceptions) and they decided it wasn't worth anything and went to go screw around with other things.
I prefer having extra work done writing code (adding "?") than having to do extra work reading code. Exceptions are functionally invisible control flow; it isn't clear to the reader that a function may blow up if the exceptions are unhandled.
In Swift, at least, the possibility that a function can throw must be marked as part of its signature, and a thrown error cannot be ignored: the call requires explicit syntax as well. So there is no way to miss that something could "blow up" when reading the code.
It's a little old at this point, but I find the Swift Error Handling Rationale design doc to be absolutely fascinating. It cites other language’s error handling paradigms (including Rust) if you're curious:
Fascinating, looking forward to reading this later today! I have used a handful of languages over the years, and I don't have any academic perspective on different error handling techniques, but there's no doubt that the way Swift does it feels particularly natural and safe while still getting out of your way. I love all the options for handling errors in a meaningful way.
In Java, functions declare exceptions in their type signatures, so it does all of that automatically. You get a compile error if you don't handle the exception in the function or declare that the function throws it, so it is type safe.
Note that many people now consider that a mistake; they prefer having exceptions be hidden instead of explicit and requiring handling like that.
We are in the distributed systems age. If systems are composable, operations can fail for reasons that are completely unfathomable to the client. It's not reasonable to have a SharkBitTheOpticFiberCableException and 100,000 other ones that handle every reason why an operation failed.
What the client should know is how an error affects what it is doing, it wants answers to questions like
* Is it likely this error will recur if I retry immediately? in 1 minute? in 1 day?
* What is the scope of this error? Does it affect the entire system? Does it affect a particular database record?
* What do I tell the user?
* What do I tell the system administrator?
Actual improvement in this area won't come from information hiding, but it could come from attaching some kind of ontology to exceptions, where exceptions are tagged with information of the above sort. The point is not having names for them and a hierarchy, but rather having arbitrary attributes that help the exception management framework (somewhere high in the call stack!) do the best it can in a bad situation.
While I think this is a good point, a lot of the answers cannot be determined by the generator of the exception. Your SQL library cannot know what the implications of an error are -- is this a minor part of the system for which the error can be logged but mostly ignored, or is it critical? Etc.
People want error handling to vanish so that they can follow the "normal" flow, but in fact error handling is one of the critical things code does.
Actually there is a somewhat standardized set of SQL error codes, and they can be put into a hierarchy like the HTTP status codes.
For instance, you can have a SQL error because the syntax of your SQL is wrong. If you're not doing "dynamic SQL", you know this is a programming error (it doesn't matter what input was supplied to the functions). One common error is "attempted to insert a row with a duplicate key"; frequently you want to catch that SQLException and rethrow all the rest.
The ideal SQL library for Java would expose the hierarchy implicit in SQL errors as a class hierarchy.
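The same idea translated to Rust terms might look like a typed error enum. This is a hypothetical sketch (SqlError, insert_user, and register are made-up names, not any real driver's API):

```rust
use std::fmt;

// Hypothetical sketch: exposing the implicit hierarchy of SQL errors
// as a typed enum, so callers can match only the cases they care about.
#[derive(Debug)]
enum SqlError {
    // Programming error: the SQL text itself is wrong.
    Syntax(String),
    // Attempted to insert a row with a duplicate key.
    DuplicateKey(String),
    // Everything else, carrying the raw error code.
    Other(String),
}

impl fmt::Display for SqlError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            SqlError::Syntax(m) => write!(f, "syntax error: {m}"),
            SqlError::DuplicateKey(k) => write!(f, "duplicate key: {k}"),
            SqlError::Other(c) => write!(f, "sql error, code {c}"),
        }
    }
}

// Imaginary insert that fails with a duplicate key for one input.
fn insert_user(name: &str) -> Result<(), SqlError> {
    if name == "admin" {
        Err(SqlError::DuplicateKey("users_pkey".into()))
    } else {
        Ok(())
    }
}

// Catch the one case we can handle; "rethrow" everything else.
fn register(name: &str) -> Result<&'static str, SqlError> {
    match insert_user(name) {
        Ok(()) => Ok("created"),
        Err(SqlError::DuplicateKey(_)) => Ok("already exists"),
        Err(other) => Err(other),
    }
}
```

The match on `SqlError::DuplicateKey` plays the role of catching the specific SQLException subclass, while the `Err(other) => Err(other)` arm is the rethrow.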
I had a similar thought a while back [0]. Developing a sane ontology of error types and their implications is a hard problem, but I think it could be done. The subset of errors that is the most frustrating and hard to deal with are ones where, as you point out, the client will have no way to estimate how long a failure mode might persist, at which point you resort to exponential backoff (actually probably an s-curve).
The issue is that sometimes the solution to the issue would require the client to get up and get out a shovel, and go dig somewhere or something. When the abstractions break down that hard there isn't really a way for the developer of the code to handle that unless they somehow stuff a full blown AGI into their program, and even then it would be a stretch.
The mistake was not "explicit errors". It was having a mix of error types, some explicit and some implicit, with no convenient way to combine them, plus the interface complications.
Note that most newer languages are choosing explicit errors. This includes at least Go, Rust, Swift, Zig, and Odin.
The second mistake was only flirting with Bertrand Meyer’s work until the Gang of Four showed up and wrecked Java forever.
Meyer + functional core nets you a great deal of code with no exception declarations and an easy path for unit tests. If it hurts to do stuff it might not be the language that sucks, it might be you. Pain is information. Adapt.
There’s the Design By Contract work of course, but I’m still trying to cite what I thought was his best advice which is to separate decisions from execution, which is compatible with but I find to be subtler than the functional core pattern.
Often we mix glue code and IO code with our business logic, and that makes for tough testing situations. Especially in languages that allow parallel tests. If you fetch data in one function and act upon it in another, you have an easy imperative code structure that provides most of the benefits of Dependency Injection. Your stack traces are also half as deep, and aren’t crowded with objects calling themselves four times in a row before delegating.
if (this.shouldDoTheThing()) {
    this.doTheThing();
}
Importantly, with this structure, growth in complexity of the yes/no decision doesn't increase the complexity of the action code tests, and growth in glue code (auth headers, talking to multiple backends, etc.) doesn't increase the complexity of the logic tests.
A big part of scaling an application is finding ways to make complexity additive or logarithmic, rather than multiplicative. But people miss this because they start off with four tests checking it the wrong way, and it takes four tests to do it the right way. But then later it’s 6 vs 8, and then 8 vs 16, and then it’s straight to the moon after that.
> Remember that Eiffel, unlike other programming languages, is not just a programming language. Instead, it is a full life-cycle framework for software development. As a consequence, learning Eiffel implies learning the Eiffel Method and the Eiffel programming Language. Additionally, the Eiffel development environment EiffelStudio is specifically designed to support the method and language. So having an understanding of the method and language helps you to appreciate the capabilities and behavior of EiffelStudio.
I read "Object-Oriented Software Construction" to do so, but it was long enough ago that I googled "The Eiffel Programming Language" because my brain had substituted that title instead; IMHO, it's more accurate.
The above link should have many current resources for you.
Correction: Some people. Java's checked and unchecked exception approach is quite nice if used judiciously. It certainly beats checking for errors after every function call (by default, people mostly ignore error codes), and you even get typed errors, so you can trivially incorporate exception handling in the conceptual design as a first-class design element.
I am frankly not sure how people get confused about "control flow" and exceptions. (In decades of Java programming the only thing that can still cause minor reading/writing nuisance are generic types and type erasure in over elaborate generic code.)
this (plus try-with-resources) is the genius of exceptions. The tragedy of exceptions in Java is that checked exceptions convert the above to
try {
    ... something ...
} catch (ACheckedExceptionThatHasNothingToDoWithThisCode x) {
    throw new SomeOtherCheckedExceptionToPleaseTheCompiler(x);
} finally {
    ... do what has to be done ...
}
with the variations of
throw new AnUncheckedExceptionSoIDontVandalizeMyCodeMore(x);
and
catch (...) {
    // i forgot to rethrow the exception but at least the compiler isn't complaining
}
as well as
// i forgot to add a finally clause because I was writing meaningless catch clauses
As much as I think checked exceptions are a mistake in Java, it is not hard to make up your mind about rethrows and apply them in a checked or unchecked form with little or no thought.
The unhappy path that you get for free with exceptions is correct for code with ordinary control flow. Most of the code has no global view of the application and is in no position to handle errors. On the other hand, for many simple programs the correct behavior is "abort the program, clean up resources, display an error message", which a sane exception system gives you for free (except for the finally, which cleans up the happy path too).
For a complex control flow there is something high up in the call stack that has global responsibility. Imagine a webcrawler which is coordinating multiple threads that call fetchUrl(url). fetchUrl doesn't need to catch exceptions at all, just clean up with finally. What it may need to do is tag exceptions with contextual information that will help the coordinator make decisions. That webcrawler in particular will deal with intermittent failures all the time, and only the coordinator is in a position to decide if it wants to retry and on what schedule.
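A minimal sketch of that tagging idea in Rust, with made-up names (FetchError, fetch_url, crawl) and a simulated failure instead of real I/O:

```rust
// An error tagged with context the coordinator can use to make a
// retry decision, without knowing anything about the leaf function.
#[derive(Debug)]
struct FetchError {
    url: String,
    transient: bool, // is a retry likely to help?
    message: String,
}

fn fetch_url(url: &str) -> Result<String, FetchError> {
    // Simulated low-level failure; real code would do network I/O here.
    if url.contains("flaky") {
        return Err(FetchError {
            url: url.to_string(),
            transient: true,
            message: "connection reset".into(),
        });
    }
    Ok(format!("<html>{url}</html>"))
}

// The coordinator, high in the call stack, owns the retry policy.
// Assumes max_attempts >= 1.
fn crawl(url: &str, max_attempts: u32) -> Result<String, FetchError> {
    let mut last_err = None;
    for _ in 0..max_attempts {
        match fetch_url(url) {
            Ok(body) => return Ok(body),
            Err(e) if e.transient => last_err = Some(e), // worth retrying
            Err(e) => return Err(e),                     // give up now
        }
    }
    Err(last_err.unwrap())
}
```

The leaf function only labels the failure; whether and when to retry is decided in exactly one place.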
Java isn't generic over exceptions. You can't write a method that takes an instance of Foo and says "my method throws whatever Foo.bar() throws" or even "my method throws iff Foo.bar() throws".
And this means that your method either always demands to be wrapped in a try-catch, or you migrate to unchecked exceptions.
Rust makes errors a part of the regular type system, so they automatically benefit from all its features.
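For example, a function can be generic over a callback's error type, which is roughly the "my method throws whatever Foo.bar() throws" that checked exceptions can't express conveniently. A small sketch:

```rust
// Generic over both the value type T and the error type E:
// this function fails iff the callback f fails, whatever E is.
fn map_twice<T, E>(x: T, f: impl Fn(T) -> Result<T, E>) -> Result<T, E> {
    let y = f(x)?; // propagate the callback's own error type
    f(y)
}
```

Because `E` is an ordinary type parameter, the caller's error type flows through the signature with no wrapping and no special exception machinery.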
> You can't write a method that takes an instance of Foo and says "my method throws whatever Foo.bar() throws" or even "my method throws iff Foo.bar() throws".
If you could it'd mean altering the implementation would automatically alter the API, which would be rather unexpected.
That's why the Java approach is to wrap exceptions and propagate causal chains. The underlying errors thrown by the implementation can change, but the advertised exceptions don't, and no information is lost.
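Rust expresses the same wrap-and-preserve-the-cause pattern through std::error::Error::source. A minimal sketch (ConfigError and parse_port are invented for illustration):

```rust
use std::error::Error;
use std::fmt;

// A wrapper error that advertises a stable type to callers while
// keeping the underlying cause reachable, like Java's getCause().
#[derive(Debug)]
struct ConfigError {
    cause: std::num::ParseIntError,
}

impl fmt::Display for ConfigError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "invalid config value")
    }
}

impl Error for ConfigError {
    // The causal chain: callers can walk source() links for details.
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        Some(&self.cause)
    }
}

fn parse_port(s: &str) -> Result<u16, ConfigError> {
    s.parse().map_err(|cause| ConfigError { cause })
}
```

The advertised error type stays `ConfigError` even if the implementation later wraps a different underlying error, and the chain of causes is still there for logging.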
The designers of the Java functional and stream library, for one. None of the functional contracts have throws. So you are forced to have unchecked exceptions for everything, unless you want a truly mind-boggling amount of try-catch everywhere, which will rapidly exceed your normal code by a factor of 2x-3x.
Some people don't like the Lispy signatures so I did start coding up a version with with a fluent interface but didn't quite finish.
Overall I would say the implementation of lambdas and method references in Java 8 was genius, but the stream library was a big mistake. Part of it is that it has this cumbersome API that in principle would let it optimize query execution by looking at the pipeline as a whole, but doesn't really take advantage of that.
In Java, yes. Note that the JVM doesn't enforce checked exceptions. It's a language level thing. So in Kotlin for example, where all exceptions are unchecked, you can use the streams library without needing try/catch.
I get where you are coming from, but imagine if every other "to the human" process description we had was done this way.
I actually think this would be a fun one. How to make scrambled eggs, but where all failure cases are covered. Would be the "Hal fixes a lightbulb" in prose.
That gets to the original promise of computers, doesn't it? That they'd perform repetitive tasks quickly and reliably.
Meanwhile, every time I make scrambled eggs, there is a small but very real chance that my house burns down. And we accept this because to err is human.
Sorta? But a lot can be packed away in "other directions." Most recipes, for example, assume that setup/teardown is intrinsic to the kitchen. As such, to know the procedures to do those things, you would look somewhere else.
That is, you aren't accepting a risk that things will go wrong. You have moved what to do about many exceptions to somewhere else.
Assume all functions can throw and there is no extra work reading. A function that has no possibility of error is so uninteresting that focusing on that is the wrong thing in the context of error handling.
Furthermore, handling errors has little to do with where the error is actually caused. In general, you can only do two things with errors: log and kill the operation, or retry the operation. Neither of these has anything to do with the leaf function 20 frames down in the stack that actually made the network call that failed.
"Assume all things can throw" is what I've seen people do in Java code that adds a million try catch wrappers around everything just in case something may go wrong at some point.
The end result is either completely unreadable or impossible to figure out. "How do I return fallback data for FooBarService.wiggle()" often ends up in digging through (incomplete, outdated) documentation or with code that breaks unexpectedly, sometimes even in production.
Note that Rust has the same issue: any method can panic, and allocation may just fail at some point. There are very few good ways to handle those problems correctly, which is why this "everything may kill your program" approach is often criticised.
Don't get me started on Java and checked exceptions. If you don't have checked exceptions, you should not have a million try/catch blocks. In fact, just the opposite. Since you only care about errors when you can retry (or ignore) you should only have a small number of try/catch blocks. Ideally one or none.
My best example of this is a UI application that I built that had a single try/catch block around the event loop. It just displayed the error message to the user and returned to the event loop. If they tried to save a file to a network and failed, they got the message, and could just hit save again for somewhere else. No other code needed.
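A sketch of that shape with Result instead of exceptions: one handling point around the loop, everything below just returns errors (handle_event and the error text are invented):

```rust
// A fallible handler somewhere below the event loop.
fn handle_event(ev: &str) -> Result<String, String> {
    if ev == "save" {
        Err("could not write file: disk full".into())
    } else {
        Ok(format!("handled {ev}"))
    }
}

// The single "catch block": show the message and return to the loop.
fn event_loop(events: &[&str]) -> Vec<String> {
    let mut log = Vec::new();
    for ev in events {
        match handle_event(ev) {
            Ok(msg) => log.push(msg),
            Err(msg) => log.push(format!("error shown to user: {msg}")),
        }
    }
    log
}
```

Nothing under `handle_event` needs to know about the UI; the one match at the top is the whole error-handling story, just like the single try/catch around the event loop.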
> There are very few ways good ways to handle those problems correctly, which is why this "everything may kill your program" approach is often criticised.
In Erlang, "everything may kill your program" is the typical mode of operation, and there should always be some kind of path to reset your state from known good values.
> A function that has no possibility of error is so uninteresting that focusing on that is the wrong thing.
I disagree, a function that has no possibility of an error is a proper function, and what we need for performance optimized code.
Proper functions by definition are just mappings from a domain to a range. That mapping really shouldn’t be predicated on any other state, so it should never fail if the inputs are valid within the domain.
We need to focus on such functions if we want performance, because we can only achieve top speed by not worrying about checking the function result for correctness. Given a proper function, we should just be able to compute the result and move on to the next function.
Therefore it's of great benefit to us (as authors of performant code) to separate our fallible functions from our infallible ones. Keep the fallible ones outside of hot loops, only infallible ones inside, and that's a recipe for mechanical sympathy of the sort that results in great performance.
If I am in the business of writing robust code, then "assuming all functions can throw" means at the very least forcing every function call to be surrounded by a try/catch block? It almost always makes sense to handle an error locally if you can; for example, if I want to retry the operation (let's say I'm writing a distributed database client), it may make sense for me to retry another node rather than unwinding to the application level that has now lost all context.
>A function that has no possibility of error is so uninteresting that focusing on that is the wrong thing.
I spend a lot of time debugging errors in code that has 0% chance of failing. It tends to involve a lot of matrix math. This isn't something you can say is universally true especially given all the hype around AI now.
> It almost always make sense to handle an error locally if you can
This is highly presumptuous. I have written many programs that did not need to handle errors locally, and so exception handlers were only at the very top level (or, actually, just below the top-level usually - but the point is that there were generally few and I had flexibility to decide where to put them). Perhaps you and I write very different applications. But the fact remains that the "almost always" in your statement doesn't hold.
Alternative line of reasoning: if this were always true, then there would be little point to Rust's ?, as it would be so rarely used.
> If I am in the business of writing robust code; then "assuming all functions can throw" means at the very least forcing every function call to be surrounded by a try/catch block?
No, absolutely not! You only care about errors where you can retry/ignore or log and terminate so you only have try/catch in those areas. So maybe one or two.
What you're describing here are unchecked exceptions, which Rust has in the form of panic. There are other kinds of errors that can be handled closer to the point where they occur.
In theory, I like exceptions. In practice, I hate them. Few languages statically check exception handling - e.g. Java, and even then only partially - leading to stability-ruining edge cases leaking into production in the most unexpected of places, caught only by QA if you're lucky. Exception handling codegen can also be rather atrocious, leading to unavoidable performance degradation when third party middleware throws unavoidable exceptions, even when you do fix the stability bugs. They're also a nasty and recurring source of undefined behavior when they unwind past a C ABI boundary, an issue I've encountered in multiple codebases with multiple exception-throwing languages. In my personal experience, programmers are also rather terrible at writing exception-safe code.
Result and ? force you to think about - or at least acknowledge - the edge cases. For a throwaway script or small scale easily tested program, that might be a drawback. For MLOC+ codebases where link times alone are sufficient to start impeding testing iteration times, it can be a big help for correctness and stability, while still being relatively lightweight compared to other manual error handling.
Finally - Rust has exceptions. They're called panics. They can be configured to abort instead of unwind. This helps set the tone - they're really meant for bugs, and exceptionally exceptional circumstances. They cause all the problems of exceptions, too - unconsidered edge cases, undefined behavior unwinding past C ABIs, the works. Fortunately, it's reasonable in Rust to aim to eliminate all panics but bugs.
It's very important to minimize the burden of handling errors in code with simple control flow. Frequently I see people try very hard to handle errors with monads in languages like Scala at the micro level and they are so burned out by this that they don't put any effort into handling errors properly at the macro level.
If you make the micro level as automatic as you can, it is possible devs will address the macro level. What is necessary at the micro level is not dealing with a crisis that prevents the compiler from building your code, but rather cleaning up the environment consistently in both normal and error conditions, and giving the macro level sufficient context for the error so that it can do the right thing.
I've built crash collection and deduplication systems; I've heard of triage that helps discount crashes generated by hardware failures or overeager overclocking. I've collected telemetry and set up symbol servers and source indexing to streamline bug squishing, and helped build systems which verify game content up-front to discover even non-code bugs before they're shipped to users, and to properly attribute said errors to the content that generated them in an easily navigable and fixable way. I've helped engineer error-tolerant systems that won't require handholding by engineering to recover from bugs. Plenty of focus on the macro.
But all it takes is a single uncaught exception slipping past QA to cause one to consider a recall of physical product, even in this era of ubiquitous internet, for a handheld console game, for something as trivial as a missing or corrupt sound effect. Things at the micro level can be neglected too much, and nothing you do at the macro level can really mitigate that in a sane manner... except using tools that check you're doing things right at the micro level. And I have yet to see exceptions handle that micro level particularly well.
Error handling in Rust is actually a lot worse than you think. In fact it may be the single worst aspect of the language.
Fundamentally it is difficult to impossible to fix bugs without knowing what code caused it. Java-style exceptions give you a backtrace for free, which is a huge head start. With Rust you have to do a lot of manual plumbing with something like error_stack to get similar functionality, out-of-the-box Errs do NOT capture this.
Far more productive to work in an environment that does the right thing "for free" vs having to do it manually.
> Java-style exceptions give you a backtrace for free, which is a huge head start. With Rust you have to do a lot of manual plumbing with something like error_stack to get similar functionality
With crates like anyhow and eyre you also get backtraces "for free" nowadays, without needing to do manual plumbing (all you need to do is toggle on a feature flag).
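Under the hood, that plumbing amounts to capturing a backtrace when the error value is constructed. A std-only sketch of the idea (TracedError and load are made-up names; note std::backtrace::Backtrace only records frames when RUST_BACKTRACE or RUST_LIB_BACKTRACE is set):

```rust
use std::backtrace::Backtrace;

// Sketch: an error that captures a backtrace at construction time,
// the kind of work anyhow/eyre do for you behind the scenes.
#[derive(Debug)]
struct TracedError {
    message: String,
    backtrace: Backtrace,
}

impl TracedError {
    fn new(message: &str) -> Self {
        TracedError {
            message: message.to_string(),
            // Disabled (cheap) unless the backtrace env vars are set.
            backtrace: Backtrace::capture(),
        }
    }
}

fn load(path: &str) -> Result<String, TracedError> {
    // Simulated failure; real code would attempt file I/O here.
    Err(TracedError::new(&format!("cannot open {path}")))
}
```

The capture happens where the Err is created, so by the time the error surfaces at the top of the stack, the trace back to the failure point travels with it.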
This requires `bar` to have a function signature that notes it may error, `E2` must implement `From<E1>`, and the caller of `bar` must use the result or explicitly silence the warning. Meaning if a program creates a Result the error must be handled - you can't silently let errors bubble up through the call stack.
`Result` implements some common combinators like `.ok()` to convert to `Option`, `map`, `map_err`, `or_else`, etc to reflect the common cases of error handling.
And finally, since Result doesn't require non-local control flow like exceptions you know that `drop` will run as the functions return back up the callstack.
And if you want to use Result like exceptions... you can. But you can't hide it from callers, and callers are still free to handle them elegantly.
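Putting those pieces together in a minimal sketch (AppError and double are invented names):

```rust
use std::num::ParseIntError;

// A made-up application error type.
#[derive(Debug, PartialEq)]
enum AppError {
    BadNumber,
}

// The From impl is what lets `?` auto-convert the underlying error.
impl From<ParseIntError> for AppError {
    fn from(_: ParseIntError) -> Self {
        AppError::BadNumber
    }
}

// The Result in the signature makes fallibility visible to callers,
// and the compiler warns if the returned Result is discarded.
fn double(input: &str) -> Result<i64, AppError> {
    // `input.trim().parse()?` roughly desugars to:
    //   match input.trim().parse() {
    //       Ok(v) => v,
    //       Err(e) => return Err(From::from(e)),
    //   }
    let n: i64 = input.trim().parse()?;
    Ok(n * 2)
}
```

The error path is explicit at the call site (the `?`), but the conversion and early return are generated for you, which is the "automation" being discussed.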
How would an exception automate the behavior of "?"?
What ? does in Rust is unwrap the Result: check if it is Err and, if so, return from the function with that error. On top of that it will auto-convert the error type (if the type has the From/Into traits implemented).
So it would do:
try {
    // the code that may fail
} catch (error) {
    // do we just throw the same error?
    // or convert the exception to a custom other exception
}
If I see an API that throws me a low-level exception without context I go mad. Like a file-not-found exception when executing an API call that does multiple file IO operations.
Exceptions are not a form of goto; they are both less powerful (as they are structured) and more powerful (as they are nonlocal). They desugar to continuations, but so does Rust's Result/Option handling and ?. In fact they are pretty much equivalent.
I'm not terribly familiar with either language, but I don't see any particular difference between swift and rust error handling for example, swift will also mark fallible function calls with try, similarly to ? in rust.
For what it's worth, the author of the Swift standard library believes that try is a mistake: as most functions can fail in practice, it just becomes noise. It might be more useful to mark can't-fail regions.
I was actually wondering (in Zig, which has a per-statement "try" that is essentially the same as the ? in Rust) whether it also makes sense for whole blocks, which would look a lot like a traditional try-catch block in languages with exceptions, but would behave exactly the same as the individual trys: if any function in the block returns an error, that same error is passed up to the caller. But I guess that forcing individual trys makes you think harder about handling individual errors than just pushing the responsibility for error handling up the callstack.
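On stable Rust, a block-level try can be approximated today with a closure: apply ? freely inside, and deal with the whole block's Result once. A sketch (parse_pair is an invented example):

```rust
use std::num::ParseIntError;

fn parse_pair(a: &str, b: &str) -> Result<(i32, i32), ParseIntError> {
    // The closure groups several fallible statements into one unit...
    let pair = (|| -> Result<(i32, i32), ParseIntError> {
        let x: i32 = a.parse()?;
        let y: i32 = b.parse()?;
        Ok((x, y))
    })();
    // ...whose error is propagated (or handled) in exactly one place.
    pair
}
```

Any `?` inside the closure returns early from the closure only, so the whole block yields a single Result, much like a block-level try would.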
Sure, but we have now (almost) gone full circle:).
When writing exception-safe code, for me it is more important to know which functions are guaranteed not to fail, as they will be called in the commit path. Currently I just comment which operations are no-throw and otherwise assume that everything else can fail, but it would be nice to have the compiler tell me.
I've been pondering the same thing![0] Essentially, one would get (checked-)exception-like behavior, except that performance would be better, and whether a function can fail or not would still need to be declared explicitly in its return type.
> But I guess that forcing individual trys makes you think harder about handling individual errors than just pushing the responsibility for error handling up the callstack.
It probably does but it can also make code much harder to read if you have to check for errors after every other line. I'm a bit divided on this.
If you consider the case where you call a function that throws an exception without you expecting it -- then the control flow will skip your code, and this is indeed not very structured, like a goto, and in fact less local than a goto.
I think the nonlocal part is the scary part: it becomes very scary to figure out which parts of the code can fail and how, especially when failures can come from an arbitrarily deep call stack.
Maybe checked exceptions could be more useful to explicitly annotate allowed failures, but at the same time we all know how that's going in Java world.
* C# uses exceptions, and when I code it feels exactly the right choice
* Go uses error return values, and when I code it feels exactly the right choice
For some reason both feel very much ideal in use. Maybe it is because in each case the language syntax/ethos fits very well with the choice made, and so is frictionless when developing in the flow (and if used properly of course)? Maybe some other reason. Hey ho.
It's mostly philosophical, are you fine with blowing up with an exception, or would you rather have your functions return known values for the unhappy path? I personally like exceptions in exceptional cases, but much rather having functions with explicit contracts (e.g. "this will return either True or False in all input cases", not "this will return either True, or Exception in all cases when $foo doesn't exist in the database, and woe unto the programmer that forgets to catch this.")
Nim handles that with the {.raises: [].} pragma and the effect system, which is quite a neat approach. It’s like opt-in checked exceptions, but with much nicer ergonomics than Java used to have
Exceptions come with their own weirdness. Usually, if you want to handle an exception, you need to wrap the code that could generate it in a block, which means any variables declared there won't be available in the parent scope. I'd much rather have the ability to just write normal code and deal with the error on the spot, along with some syntactic sugar (such as "?") to return that error to the caller.
> It is like somebody showed cavemen fire (exceptions) and they decided it wasn't worth anything
Oh, it absolutely can not be that the Rust way is more powerful and you didn't understand it yet. No way. It's all those other people that don't understand the old concept that almost all of them know.
Despite what CS and SE classes try to drill into you, null results or failure cases are nearly always better handled right when they happen instead of passing them up with layers of exception handling. Log it, pass null up, and just immediately handle it. Fail early and none of the rest of the function matters.
Even exception types are rarely useful outside of reading the logs, or sometimes in libraries outside of your control.
How do you handle a DB connection time out in your stack? You can log it and retry, eventually the entire call must be terminated though, and the quickest way is through exception propagation.
I mean, ask the C++ community. They've had exceptions forever, but a large chunk of them forbid exceptions in their codebases.
I think there's a pretty good rule of thumb in modern systems-ish language design: If Go and Rust and Zig all do a certain thing, that thing is probably a great idea. These languages have very different priorities, but often they overlap.
Exceptions make it considerably harder to reason about state by reading the program text. As the notion that programmers should have some actual understanding of what they write slowly becomes less unfashionable, language features that make understanding code needlessly harder are losing some of their appeal even though they speed up writing the code.
What really makes code hard to read is having multiple paths to disentangle. There is one little error deep in the call stack but you have to vandalize the 10 functions above it in the call stack to carefully separate the error and non-error paths -- what's the probability that you will end up cleaning up properly in both paths when it isn't done for you with finally? What's the probability that somebody looking at this code is really going to find the subtle error in the error path or an error in the happy path caused and hidden by the complexity of the unhappy path?
I think the first C program I saw was a type-in terminal emulator from Byte magazine around 1985, and I was struck by the awkwardness of the error handling in the C stdlib. I spent a lot of time looking at the code when I realized the author had "spaced it" at one point such that the error handling was wrong, and thought "this sucks", but learned how to write C programs with 3x the LOC because of all the alternate paths I had to put in to handle errors.
When I saw exceptions for the first time I felt strongly liberated because I got for free what I was working for so hard in C so I got to spend more time thinking about algorithms, the needs of the customer, things like that.
Exceptions make it difficult to find failure-points in the code. The ? annotates that at its call site, which improves discoverability by a lot and reduces readability by only a little.
> Exceptions make it difficult to find failure-points in the code
My experience doing Java, Go and Rust has been completely the opposite. Exception stack traces in Java are amazingly wonderful things - they exactly pin-point the failure points in the code. The amount of hunting I need to do to find out where something failed in the call stack in Go/Rust is tedious. You need a module/crate for error tracing or you end up wading against a strong current of despair.
> Exception stack traces in Java are amazingly wonderful things - they exactly pin-point the failure points in the code.
Yes.
Once the exception has happened. At runtime. Which is not when I want to be trying to fix things. I'd much rather handle as much as possible statically, knowing that the code I push has every non-panic code path cleanly handled.
I’ve never had the equivalent experience with exceptions, it’s always “well I’ve wrapped everything I possibly can in as much try-catch and handling as I possibly can, and oh look, some random piece of code has still thrown some random exception we’ve never seen before”.
After a couple of years of coding Rust, I find the error system, including `?`, well thought out. It is explicit and clear that the error is, or maps to, the function's return error type.
The only thing is that Rust, rightfully, also uses `?` as an early-return mechanism on Option, which took away the possibility of using `?` for None coalescing. That was the right choice from a language point of view, but I wish Rust had a None-coalescing syntax.
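A small sketch of both roles (hypothetical `first_char` function): `?` on an Option early-returns None, while the coalescing role the commenter misses is filled by combinators like `unwrap_or` and `or` rather than dedicated syntax.

```rust
// `?` on Option early-returns None from an Option-returning function,
// so it can't double as a None-coalescing operator.
fn first_char(s: &str) -> Option<char> {
    let c = s.chars().next()?; // early-returns None if the string is empty
    Some(c.to_ascii_uppercase())
}

fn main() {
    assert_eq!(first_char("rust"), Some('R'));
    assert_eq!(first_char(""), None);
    // Coalescing without dedicated syntax, via combinators:
    assert_eq!(first_char("").unwrap_or('?'), '?');
    assert_eq!(first_char("").or(Some('x')), Some('x'));
}
```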
1. Exceptions have very high performance costs when thrown (comparable to a longjmp, which is very slow), so if you expect your "exceptional" cases to occur often, it's probably a lot more efficient not to use exceptions.
2. Exceptions break the linear flow of the code when you read it, so now you have to read a lot more code to figure out what the exception paths are and where and how they are handled.
1. Exceptions as commonly implemented in C++ have high overhead on the exception path. But that's just an implementation strategy. There is no reason it wouldn't be possible to generate exactly the same code as for an optional type if desired (in fact this was proposed for C++, cf. Herbceptions).
2. So do returns, but we long ago settled that SESE (single entry, single exit) is undesirable.
Kind of yes, and kind of no. On the happy path, if errors are very very rare, the check is also basically free thanks to branch prediction. They start to cost something when you start to add a higher frequency of errors, which incidentally is where exceptions cost a lot more.
The cost is higher because of all the branches that are scattered everywhere to check return codes. With exceptions there's a check at the place the error is thrown, but that's inevitable. There aren't checks scattered throughout the rest of the code, which would otherwise reduce icache utilization.
Arguing about icache utilization is a little silly here: the code will be laid out for you as though the branches are not taken (or you should force it to be). In that case, the only "waste" of icache is the CMP and JMP, an additional 4-8 bytes per return, and literally 0 cycles on the happy path.
When you do take an error, each RET costs you about 1 cycle (the CPU's return-stack engine already knows the address to return to), plus the 10-15 cycle mispredict for the CMP+JMP. It's counterintuitive that doing "a lot" of things is cheaper than doing fewer things, but it's true.
In comparison, throwing an exception involves taking the one control-flow break to some cold handler code (maybe page faulting), figuring out where to go using an unwind table (slow), restoring the old state from that context (slow), figuring out the type of the thrown object (in many languages, also slow), and then handling accordingly. Each of these steps can easily take 100+ cycles, often more.
The math does not work out in favor of exceptions. Neither do the benchmarks in most cases. You do 1 slow thing to avoid doing 20 things that are trivially fast.
The checks you're talking about are duplicated more or less per statement in some types of code. Every single call site ends up with an `if err != nil` or moral equivalent. It can add up, also consider the extra register pressure. The return values aren't valuable anymore, they're just error signalling.
The compiler doesn't necessarily know what your error types are, it can try to use heuristics to move those blocks around but it's not like an exception where the types are a part of the language and the compiler can know that. We're talking about startup code here, nobody will be annotating their error branches with manual predictor probabilities, so we're limited to what the compiler can do.
Yes, the act of throwing an exception is more work, but it's exceptional, so it doesn't matter. The slowest part is computing the stack trace anyway, and that's of huge value, one you don't get with error codes at all.
There's no register pressure: `TEST EAX, EAX` (or `CMP EAX, 0`) followed by `JNZ error_handler` is the instruction sequence we're talking about. Most error types are enums where 0 = "good" and any nonzero value is an error. This is the inverse of the null-pointer check in C. It consumes no extra registers and a negligible number of code bytes.
There is obviously some level of sparsity of error cases at which error-handling code like this becomes worse than using exceptions. I would claim that level is a lot sparser than you think. Many people use exceptions for things like "file not found," "function failed for whatever reason," (my favorite) "timeout," or "bad input from the user." These cases are often not that exceptional!
They are slow because you need to restore context from an unknown/unpredictable place in the code, you have a table lookup (from a very cold table) to get the next program counter value, and you have to save and restore register values, while the callstack and the calling convention handle all of that complexity for you if you don't break the natural flow of the program.
But couldn't one implement exceptions internally by returning error codes? Yes, this would change the ABI but as long as we're not talking about the interface of a library, i.e. are not leaving the realms of our source code, this should be ok, shouldn't it?
In a sense, try/catch would then just be syntactic sugar that frees you from manually checking for errors after every single function call. Instead, you just handle them in bulk in a catch block, potentially a couple stack frames further upstairs.
EDIT: I just realized my suggestion wouldn't exactly be equivalent to exceptions, in the sense that it wouldn't give stack traces but error return traces, like in Zig:
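In Rust terms, this desugaring already exists: `?` is roughly syntactic sugar for an early return on the error variant. A sketch with a hypothetical `parse_sum` function, showing the manual form and the `?` form side by side:

```rust
use std::num::ParseIntError;

// Approximately what `?` desugars to: each fallible call becomes an
// explicit match that early-returns the error "code" to the caller.
fn parse_sum(a: &str, b: &str) -> Result<i64, ParseIntError> {
    let x: i64 = match a.parse() {
        Ok(v) => v,
        Err(e) => return Err(e), // propagate upward, like a rethrow
    };
    let y: i64 = b.parse()?; // the same thing, written with `?`
    Ok(x + y)
}

fn main() {
    assert_eq!(parse_sum("2", "40"), Ok(42));
    assert!(parse_sum("2", "oops").is_err());
}
```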
> In a sense, try/catch would then just be syntactic sugar that frees you from manually checking for errors after every single function call. Instead, you just handle them in bulk in a catch block, potentially a couple stack frames further upstairs.
They are not the same. Errors force you to explicitly handle unexpected conditions; exceptions don't. And `?` is what keeps error handling from taking up half the LOC.
Read up on how exceptions work in C++ implementation-wise. It's not pretty.
That's the problem, though, right? 99.999% of the time you absolutely should not be "handling" an error: you should merely propagate it so it gets closer to code that has actual intent. Languages that force you to try to "handle" errors--which includes Java, due to its botched concept of checked exceptions--both encourage the wrong behavior in the developer and cause the code to be littered with boilerplate to implement the propagation manually.
Meanwhile, they manage to encode the concept of "can fail" not merely into the type signature of a function but into the syntax used to access it. Like other monadic behaviors, including "requires scoped allocation," this is the kind of thing you tend to need to refactor into a codebase at a later time. Instead, the code should always be typed as if everything can fail and everything can allocate (not just memory, but any resource); languages that get this right--such as C++ and Python--thereby deserve their stickiness.
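The refactoring cost being described here can be seen concretely in Rust (a sketch with hypothetical `lookup`/`layer1`/`layer2` names): making a leaf function fallible forces a signature change, and a `?`, in every caller up the chain.

```rust
// Making the leaf `lookup` fallible forces every caller in the chain
// to change its signature and add `?` to propagate the error.
fn lookup(key: &str) -> Result<u32, String> {
    if key == "answer" { Ok(42) } else { Err(format!("no such key: {key}")) }
}

// Was: fn layer1(key: &str) -> u32; now it must be fallible too.
fn layer1(key: &str) -> Result<u32, String> {
    lookup(key)
}

// And so on, all the way up the call stack.
fn layer2(key: &str) -> Result<u32, String> {
    Ok(layer1(key)? + 1)
}

fn main() {
    assert_eq!(layer2("answer"), Ok(43));
    assert!(layer2("nope").is_err());
}
```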
Exceptions are not "undefined" behaviour, and they don't "corrupt the database". On the contrary, they're very often used to abort database transactions cleanly, even in complex chains of deeply nested function calls.
What people mean by "not handling errors" is that the Visual Basic style of "On Error Resume Next" is a terrible, terrible thing to do. The equivalent in modern languages is a try-catch block in the middle of a stack of function calls 200 deep. That function likely has no idea what the context before it is. Is it being called from a CLI? A kernel module? A web server? Who knows!
Just yesterday I had to deal with legacy code that made this mistake, and now it's going to cause a multi-day problem for several people.
It's an ASP.NET HTTP authentication module that simply swallows exceptions during authentication (e.g.: "Can't decrypt cookie"), doing essentially nothing. When deployed "wrong" (e.g.: encryption key is invalid) it just gets stuck in a redirect loop. The authentication redirects back with a cookie, it is silently ignored, then it redirects to the authentication page which already has a cookie so it redirects back, and so on.
There is nothing in the logs. No exceptions bubble up to the APM or the log analytics systems. The result is HTTP 200 OK as far as the eye can see, but the entire app is broken and we don't even know where or why exactly.
That's not even mentioning the security risks of silently discarding authentication-related errors!
This is what people mean by don't "handle" errors. Middleware or random libraries should never catch exceptions. It's fine if they wrap a large variety of exception types in a better type, but even then it is important to preserve the inner exception for troubleshooting.
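That "wrap but preserve the inner error" pattern can be sketched in Rust with the standard `Error::source` chain (the `AuthError` type and `decrypt_cookie` function are hypothetical): the wrapper adds context for the caller while the original cause stays reachable for logging and APM.

```rust
use std::error::Error;
use std::fmt;

// A wrapper error that adds context but preserves the inner error
// via `source()`, so nothing is silently swallowed.
#[derive(Debug)]
struct AuthError {
    context: String,
    source: Box<dyn Error>,
}

impl fmt::Display for AuthError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "authentication failed: {}", self.context)
    }
}

impl Error for AuthError {
    // The inner cause stays reachable for troubleshooting.
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        Some(self.source.as_ref())
    }
}

fn decrypt_cookie() -> Result<String, AuthError> {
    // Stand-in for a real decryption failure: any inner Error works.
    let inner = "invalid key".parse::<i32>().unwrap_err();
    Err(AuthError {
        context: "can't decrypt cookie".into(),
        source: Box::new(inner),
    })
}

fn main() {
    let err = decrypt_cookie().unwrap_err();
    // Both the context and the whole cause chain are available:
    assert!(err.to_string().contains("can't decrypt cookie"));
    assert!(err.source().is_some());
}
```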
I've had to tell this to every developer I've worked with recently as a cloud engineer. Stop trying to be "nice" by catching exceptions. Exceptions are not nice by definition, and ignoring that reality won't help anyone.