That is exactly why tail call "optimisation" is not, in fact, an optimisation.

Indeed, much of this discussion, including the title and text of the original post, carefully avoids using the word "optimisation". Somehow it got introduced at some point in these comments.

To flip it the other way: consider a normal (non-tail-call) program and choose a local variable in a routine that happens to be called quite a lot. Replace it with a list that you append to every time you call the routine, even though only the last entry is ever examined. The program will leak memory, potentially quite quickly, and eventually crash. Is it fair to say it's just less optimised than before? I would say it's worse than that: it has an actual bug.
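To make the analogy concrete, here's a hedged sketch in Scala (all names invented for illustration): the routine's single working value has been replaced by a list that grows on every call, even though only the newest entry is ever read.

    import scala.collection.mutable.ListBuffer

    // Hypothetical illustration of the analogy above: the routine's one
    // working value is now a list that grows on every call, even though
    // only the most recent entry is ever examined.
    object LeakyRoutine {
      private val history = ListBuffer.empty[Int] // grows without bound

      def step(x: Int): Int = {
        history += x     // appended on every call, never removed
        history.last + 1 // only the last entry is actually used
      }
    }

Call step in a loop and memory grows linearly with the number of calls until the process dies; each retained entry plays the role of a stack frame that unelided tail calls keep alive.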

That's exactly the situation of a program written on the assumption of tail-call elision that is then run in an environment without it.
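As a hedged sketch of that situation (mine, not the original poster's): mutually tail-recursive functions in Scala, where neither scalac nor the JVM eliminates the calls, so code that would run in constant stack space under guaranteed elision simply crashes.

    // Hypothetical sketch: every recursive call here is in tail position,
    // so under guaranteed tail-call elision this pair runs in constant
    // stack space. Neither scalac nor the JVM eliminates mutual tail
    // calls, though, so on the JVM a large input throws
    // StackOverflowError: same source, an actual bug in this environment.
    def isEven(n: Long): Boolean = if (n == 0) true else isOdd(n - 1)
    def isOdd(n: Long): Boolean = if (n == 0) false else isEven(n - 1)

    // isEven(100000000L) // fine with tail calls; crashes without them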



> Replace it with a list that you append to every time you call the routine, even though only the last entry is ever examined. The program will leak memory, potentially quite quickly, and eventually crash.

I had to think on this example a bit, but I believe these programs exist and are quite common. It's an accurate description of any program in which memory is allocated and never explicitly freed. Some languages have runtimes that perform garbage collection, but until the collector runs, such a program is leaking memory, and without collection it would eventually crash.
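A hedged sketch of that observation on the JVM (the GC flag is real; the workload is invented):

    // Hypothetical workload: allocates a fresh buffer on every iteration
    // and never frees anything explicitly. Under a normal JVM collector
    // each unreachable buffer is reclaimed and the loop runs in bounded
    // memory; with collection disabled (running under
    // -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC, the JDK's
    // no-op collector) the same program dies with OutOfMemoryError.
    def churn(iterations: Int): Long = {
      var checksum = 0L
      var i = 0
      while (i < iterations) {
        val chunk = new Array[Byte](1 << 20) // ~1 MiB, never freed by us
        checksum += chunk.length             // only the current chunk matters
        i += 1
      }
      checksum
    }

Whether that is a working program or a crashing one depends entirely on whether reclamation is guaranteed, which is exactly the parent's point about tail calls.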

Which is to say, I think I agree with you, but had to chew over the example a bit. The difference isn't a quantitative one of speed or memory usage, but a qualitative one in the kinds of programs that are acceptable in a language. Thank you for it.


> Is it fair to say it's just less optimised than before? I would say it's worse than that: it has an actual bug.

Yes, a program that depends on tail call optimization, despite having no guarantee of tail call optimization being performed, has a bug.

That doesn't mean that tail call optimization is not an optimization. In languages that allow, but do not guarantee, tail call elimination, it is an optimization performed by the optimizer: when it fires, the program uses less stack and avoids call overhead it would otherwise pay. There is a long history of referring to it as an optimization, for example in the LLVM docs: https://llvm.org/docs/LangRef.html#call-instruction

I don't think it makes sense to redefine the word "optimization" such that it excludes examples where a program could erroneously depend on a certain optimization being performed.


OK, if you prefer to be precise, there are two situations (a code sketch contrasting them follows below):

(1) In languages (or runtimes) where tail calls are not guaranteed to be elided, tail call elision is an optimisation. In that context, you could quite reasonably describe it as tail call optimisation. Programs that are written in these languages and rely on tail call elision are buggy.

(2) In languages (or runtimes) where tail calls are guaranteed to be elided, tail call elision is not an optimisation. In that context, you cannot reasonably describe it as tail call optimisation. Programs that are written in these languages and rely on tail call elision are not buggy (at least, not for that reason!).
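A hedged illustration of the two situations side by side, using Scala, where @tailrec turns elision from a discretionary rewrite into a checked guarantee:

    import scala.annotation.tailrec

    // Situation (1): elision at the compiler's discretion. scalac happens
    // to rewrite direct self tail calls like this one into loops (when the
    // method cannot be overridden), but nothing verifies that the call
    // stays in tail position, so relying on elision here is fragile.
    def sumHint(n: Long, acc: Long): Long =
      if (n == 0) acc else sumHint(n - 1, acc + n)

    // Situation (2): elision as a guaranteed feature. If this call ever
    // stops being a tail call, compilation fails rather than the program
    // silently consuming stack, so depending on elision is not a bug.
    @tailrec
    def sumGuaranteed(n: Long, acc: Long): Long =
      if (n == 0) acc else sumGuaranteed(n - 1, acc + n)

A program using sumGuaranteed is relying on a documented feature, not hoping an optimiser fires.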

This conversation is about making tail call elision a guaranteed feature of WASM. Therefore, in this context, it is not reasonable to describe tail call elision as an optimisation.



