I thought about it when I read Charlie Stross getting one last year [0]. Then thought, what is essential about it? And decided to try something related, even if it was likely to be just another distraction:
This is the simple "just-write" function I wrote in Emacs; it also needs the olivetti package and the wmctrl program.
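A minimal sketch of what such a function can look like (the body here is my own reconstruction, not the original; it assumes the olivetti package and the wmctrl program are installed):

```lisp
;; Hypothetical sketch: open a clean, fullscreen writing environment.
(defun just-write ()
  "Distraction-free writing: fullscreen frame, centered text column."
  (interactive)
  (olivetti-mode 1)                      ; center the text column
  (text-scale-set 1)                     ; slightly larger text
  ;; Ask the window manager to fullscreen the current frame:
  (call-process "wmctrl" nil nil nil
                "-r" ":ACTIVE:" "-b" "add,fullscreen"))
```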
To my surprise, it actually did the trick. I've written more, and more pleasantly, since then. I feel it's better than if I had gotten a Writer Deck or similar. So I thought I'd share, in case it does the trick for anyone else.
(I also use a font and a color scheme that pleases me, but that's minor.)
I think the idea of knots as a basis for everything has come and gone several times. One of those times was in the 90s, which is when I became aware of it thanks to the excellent "Gauge Fields, Knots and Gravity" by John Baez and Javier P. Muniain, part of the "Series on Knots and Everything" [1]. Those are really intriguing ideas.
I found myself in a similar situation and also started de-googling, which is much nicer and more liberating than I feared.
I did the exact same thing with Immich (what great software, by the way!).
And in case it helps:
Instead of always relying on Google Maps, I now mostly use CoMaps (https://www.comaps.app/). Way better than using OpenStreetMap directly. And for my Pixel 7, I switched to LineageOS with gapps (https://lineageos.org/); I'm not missing anything and am very happy with it.
Also, I'm now trying Nextcloud (https://nextcloud.com/), with a setup similar to Immich's. I now believe there is life beyond Google, and it's a better life.
Does Immich read the real file names of photos from iOS Photos metadata? I don't even know whether Apple preserves them and exposes them to other apps.
I used Ente, and I learned that all the files I had added/uploaded to iCloud Photos had lost their real names (names I had painstakingly given them over the years, decades even): when Ente exported those photos back onto my laptop via its desktop app, they came out with long, random, UUID-style names. That was my yikes moment, and I was glad I had still kept my photos outside of iCloud and Ente. And it's not even Ente's fault. Apple does this skullduggery.
I used immich-go to upload photos from Google Photos and it worked great. I'm not sure how well that works with iCloud, but it's at least a supported option (...although with a TODO next to it).
I was really surprised to learn that there aren't right now. It sounds like FUTO, the org behind Immich, is working on something like this but they haven't put out any real details so it's probably far off.
Thanks for clarifying! FWIW, I wasn't there, but that matches exactly what I remember from that time (I did follow all the different renamings in detail).
Just to give a brief answer to those reasonable criticisms:
Mixed-grade objects already exist in complex numbers (they are very useful there, and even more so in geometric algebra).
Differential forms are included in geometric algebra (the exterior/outer products are isomorphic). It turns out that combining that product with the inner product gives you an invertible product (as Clifford found out). That by itself is already a huge advantage.
Finally, Maxwell's equations are sweetly summarized in differential forms, but even more so in geometric algebra: dF = J. Not only is it just one equation instead of two, but in addition the "d" (or "nabla") is directly invertible thanks to the geometric product (which differential forms lack, so they have to use more indirect methods, including the Hodge dual).
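For comparison, a sketch of the two formulations side by side (sign and unit conventions vary by author):

```latex
% Differential forms: two equations for the Faraday 2-form F
% and the current 3-form J (the Hodge star is needed in the second):
dF = 0, \qquad d{\star}F = J

% Spacetime (geometric) algebra: a single equation, where the vector
% derivative \nabla is invertible via its Green's function:
\nabla F = J
```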
By the way, I'm very partial to geometric algebra, but wouldn't say it is an "error" not to use it! Maybe just a big missed opportunity :)
You can do that using differential forms as well - using the co-differential δ, we can write a single equation (δ + d)F = J. However, from the perspective of Yang-Mills theory, that's a rather questionable approach as we're stitching together the Bianchi identity and the Yang-Mills equation for no particular reason...
Cool, I didn't know that. Still, the main point of the geometric algebra version is that it's not a "stitching" exercise, but a natural operation in the algebra -- and even better, an invertible one.
Well, I got sucked down the same Geometric Algebra rabbit hole, plus another "remarkably concise and intuitive way" one too, which is Clojure.
So I wrote a library that can generate any GA and do all kinds of fun operations (it could be used, in particular, as the basis for a physics engine). Just in case any of this is of interest to you too: https://cljdoc.org/d/net.clojars.jordibc/geometric-algebra/
It does preclude it, but Clojure found an arguably elegant solution, using recur [1] instead. As a plus, in addition to achieving the same result as tail-call elimination, it checks that the call is indeed in tail position, and it also works together with loop [2].
For me, it made me not miss tail-call elimination at all.
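A tiny sketch of what that looks like in practice (the function name is mine):

```clojure
;; Sum 1..n with an explicit loop/recur: the self-call is compiled
;; to a jump, so the stack stays constant even for large n (a plain
;; recursive call would overflow), and recur is rejected at compile
;; time if it is not in tail position.
(defn sum-to [n]
  (loop [i 1 acc 0]
    (if (> i n)
      acc
      (recur (inc i) (+ acc i)))))

;; (sum-to 1000000) => 500000500000, with no StackOverflowError
```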
The call to bar is a tail call. How does recur optimize this? Well, it doesn't, since "general TCO" / "full TCO" means that any tail call gets optimized, such that the stack does not grow. Clojure's recur/loop is just loop notation.
Looping constructs in most languages provide a simple conversion of self-recursion (a function calling itself recursively) into a loop: update the loop variables, then do the next iteration.
But the general case of tail calls is not covered by a simple local loop, like what is provided by Clojure.
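For instance (a sketch; the function names are mine), mutually recursive tail calls aren't covered by recur, and need Clojure's trampoline instead:

```clojure
;; recur only handles self-calls, so these mutually recursive
;; tail calls still consume a stack frame each...
(declare my-odd?)
(defn my-even? [n] (if (zero? n) true (my-odd? (dec n))))
(defn my-odd?  [n] (if (zero? n) false (my-even? (dec n))))
;; (my-even? 1000000) => StackOverflowError

;; ...unless you rewrite them to return thunks and drive them with
;; trampoline, Clojure's workaround for non-self tail calls:
(declare t-odd?)
(defn t-even? [n] (if (zero? n) true  #(t-odd? (dec n))))
(defn t-odd?  [n] (if (zero? n) false #(t-even? (dec n))))
;; (trampoline t-even? 1000000) => true, with constant stack use
```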
The issue arises when you program really heavily with closures and function composition. Sadly, you cannot do functional programming, as in "programming with functions", without care on the JVM.
It is, IMO, a missed opportunity that Clojure uses a hard-coded identifier for `recur`ing instead of the named-let `(let sym ((...)) ...)` form, which would let you nest loops.
Aside from that, I agree. Tail-call optimization's benefits are wildly overblown.
The benefits aren't overblown if you are someone who learned Lisp with a functional approach. As in, using higher-order functions etc. You have to be careful whenever you approach a problem that way on the JVM.
What does tail-call optimization have to do with higher-order functions? I thought the former pertains to iterative procedures written with recursive syntax, where the recursive call is the very last thing the function does, so stack size stays O(1). Higher-order functions means passing functions to things like map, filter, etc.
In the context of higher order functions, tail call elimination allows for the avoidance of building up intermediate stack frames and the associated calling costs of functions when doing things like composing functions, particularly when calling large chains of nested function calls. The benefits of TCO for something like mapping a function can also be pretty large because the recursive map can be turned into a while loop as you describe at the beginning of your comment.
Stack-frame elision is a pretty large optimization for function calls on the JVM, and the stack limits are not very amenable to "typical" higher-order-function "functional programming" style.
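The "recursive map turned into a loop" point can be sketched like this (my own toy version, not clojure.core's lazy map):

```clojure
;; A map whose recursive call is expressed with recur: the tail
;; call compiles to a loop iteration, so stack use is constant
;; regardless of the length of the input.
(defn my-map [f coll]
  (loop [cs coll acc []]
    (if (empty? cs)
      acc
      (recur (rest cs) (conj acc (f (first cs)))))))

;; (my-map inc [1 2 3]) => [2 3 4]
```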
> tail call elimination allows for the avoidance of building up intermediate stack frames and the associated calling costs of functions when doing things like composing functions, particularly when calling large chains of nested function calls.
This is more general than what tail-call optimization can handle. It is true, but only in the context of recursive functions, and you don't actually save anything besides not needing to re-allocate the stack frames below your recursion point. Other optimizations, such as inlining, may achieve some of this in the general case. Regardless, you get the same benefits by using `recur` in Clojure; it's just explicit, and it still uses no extra stack space.
The downside is purely stylistic. It's functionally the same as if you did a named `(let recur () ...)` in Scheme.
I'm not certain, but I am pretty sure tail-call optimization includes generic tail-call elimination. This does not rely on the function being recursive. In effect, the compiler converts all tail calls into direct jumps and allows the reuse of stack space, limiting the total stack size for any chain of tail-called functions to a statically determined finite size. This same optimization also allows the omission of instructions that manage stack frames and their associated bookkeeping data. I know the OCaml compiler does this, and I'm almost sure that GHC does as well.
I do not know if the above is included in what Clojure does for tail calls, recursive or not, but on the JVM the elimination of those calls can and does have an impact.
> I’m not certain, but I am pretty sure tail call optimization includes generic tail call elimination.
I believe they're related, but not the same thing.
> I do not know if the above is included in what clojure does for tail calls, recursive or not, but on the JVM the elimination of those calls can and does have an impact.
As far as I know, the JVM doesn't allow a program to arbitrarily modify the stack, so any support would need to be baked into the JVM itself; it might be by now, but I'm not finding any indication that it is. The `loop`/`recur` construct essentially compiles to a loop in the Java sense (to my understanding), so it is as efficient as a recursive call with TCO. The more general tail-call elimination likely isn't possible on the JVM, but you're correct that it would likely result in a speedup.
All of this is sort of beside the point: I don't think there's much in terms of higher-order functions (which is an extremely broad category of things) that you can't do in Clojure just because it lacks TCO. At least no one has been able to give me an example, or even address the point directly. Speed is not really what I'm referring to.
If you program purely and represent "state" as a function
s -> (a,s)
Then the JVM bites you the moment you naively abstract over it. You end up having to manually "compile" things like traversing a list with a stateful update. For example, Scala's pure-FP ecosystem is full of specialized functions for state because of this.
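A sketch of the problem in Clojure (all the names here are mine): the naive bind for such state functions nests one stack frame per action when the chain finally runs, so long chains can blow the JVM stack, and you end up hand-compiling the traversal into a reduce:

```clojure
;; A "state action" is a function state -> [value next-state].
(defn tick [s] [s (inc s)])   ; return the current state, increment it

;; Naive sequencing: every s-bind layer adds a JVM stack frame when
;; the composed action is finally applied to a state, so a very long
;; chain can throw StackOverflowError.
(defn s-bind [m f]
  (fn [s]
    (let [[a s'] (m s)]
      ((f a) s'))))

;; The manual "compilation": traverse with an explicit reduce,
;; threading the state yourself. Constant stack.
(defn run-ticks [n s0]
  (reduce (fn [[_ s] _] (tick s)) [nil s0] (range n)))

;; (run-ticks 5 0) => [4 5]
```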
> (I just wish I understood what he meant in "Effective Programs" by "typos are not important". They are, aren't they? A typo is a runtime error begging to occur during the demo, how is that "not important"? Never mind.)
They are going to create an error that you can see and fix immediately, in the vast majority of cases. Certainly in compiled languages (like Clojure), but less so in interpreted languages like Python, where a misspelled variable in a rarely visited part of the code may not cause an exception until that part of the code is run.
At least, that's my understanding of what he means by typos being less important than other types of mistakes.
By the way, when I saw his comment originally I was surprised too. But when comparing to the other kind of mistakes he talks about I realized that, yes, I'd rather have a typo than any of the other problems! Though of course, I'd prefer to have none :)
The problem is that having a typo-based issue, especially one that would have been trivially caught by a compiler from the 70s, can sometimes prevent you from having a chance to tackle the important problem.
(Now, maybe it's PTSD from that time a typo in a script botched a demo in front of important people, and, well, let's say I avoided a bunch of scaling / domain complexity / temporal coherence / etc. issues by not having to work on the thing any more?)
I really like jq, but I think there is at least one nice alternative to it: jet [1].
It is also a single executable, written in Clojure, and fast. Among other niceties, you don't have to learn any DSL in this case -- at least not if you already know Clojure!
[0] https://www.antipope.org/charlie/blog-static/2024/09/zen-and...