I think it would, for all practical purposes, be impossible to determine an optimal warrior, even at very small core sizes. Not only is the search space huge but the evaluation function can take unbounded time to resolve. We should consider the halting problem embedded inside the optimization target as a clue to the problem's difficulty.
That's the thing: Core War matches last a finite time (after which the match is judged a tie). So you have a finite memory space, finite time, and a finite number of match combinations. And for a predetermined constant N, the bounded halting problem ("does the program halt within N steps?") is in NP.
For the nano hill[1], the constants are: each warrior has a max of five lines of code, core size is 80 instructions, and a match lasts a maximum of 800 cycles.
If N = 1, it's clear that the best you can do is drop a bomb at a fixed location and hope you hit, so matches mostly end in ties. For N = 2, it's probably still not possible to do anything useful. With N = 10, perhaps a quickscan is possible. N = 800 -- who knows?
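Just to give a feel for why search is hopeless here, a back-of-envelope count of syntactically distinct nano warriors (the opcode/modifier/addressing-mode counts below are my rough reading of the '94 Redcode standard, so treat them as assumptions):

```python
# Rough count of syntactically distinct nano-hill warriors.
# Assumed instruction-set sizes (approximate ICWS '94 counts):
OPCODES = 16      # dat, mov, add, sub, ..., nop
MODIFIERS = 7     # .a .b .ab .ba .f .x .i
ADDR_MODES = 8    # "#", "$", "@", "<", ">", "*", "{", "}"
CORESIZE = 80     # nano hill core size
MAX_LINES = 5     # nano hill warrior length limit

# Each line: opcode, modifier, and two (mode, field) operand pairs.
per_instruction = OPCODES * MODIFIERS * (ADDR_MODES * CORESIZE) ** 2
search_space = sum(per_instruction ** n for n in range(1, MAX_LINES + 1))
print(f"{search_space:.2e}")  # on the order of 10^38
```

And every candidate in that space then needs round-robin matches against every other candidate, each up to 800 cycles, to be evaluated.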
I'm hoping to prepare a presentation on how I use Emacs for language learning for Emacs Conf 2026. I'm very much looking forward to this year's conference[0] (happening in less than a month!), and specifically to this talk[1] on language learning with Emacs.
There's kinda not much to it. It's just that you have:
1. a full-fledged programming language
2. no namespacing (nothing is private)
3. no modern GUI concepts to take into account (no CSS `flex-direction`...)
4. no edit-compile-run cycle
5. many decades of extensions people have already written
6. extensions written with the expectation that they may themselves be extended
Then you can probably see how it works, with just your imagination!
Of course there are a number of epiphanies that may be needed... Like how the principle "compose many small Unix programs with text as the universal interface" is just like "compose many functions with return values as the universal interface"; or that Emacs isn't so much an editor as a terminal (with integrated tmux-like functionality) that you decided to turn into an editor; or that an editor waiting for text entry is just stuck in a `while`-loop reading the next input character; what even is a shell, what even is a computer, etc etc.
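That pipe epiphany fits in a few lines. A sketch in Python (names and the toy pipeline are mine, just to make the analogy concrete):

```python
# "cat file | grep x | wc -l" is just function composition
# with one universal interface between the stages.
from functools import reduce

def compose(*fns):
    """Chain functions left to right, like a shell pipeline."""
    return lambda x: reduce(lambda acc, f: f(acc), fns, x)

# Each "program" takes a value and returns a value.
count_matches = compose(
    str.splitlines,                          # cat: text -> lines
    lambda ls: [l for l in ls if "x" in l],  # grep x
    len,                                     # wc -l
)
print(count_matches("ax\nb\ncx\n"))  # -> 2
```

Swap "value" for "text stream" and you have the Unix version of the same idea.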
The site has a little game where you click to dismiss as many cookie-consent popups as possible, in 30 seconds. I suppose if you can't see the cookie consent popups, then at the end of the timer you just have zero points.
In what sense is this 64KB? Clearly there's more than 64KB of code in the repo. And since it's TypeScript, it's not like there's a binary that could be 64KB.
In a discrete Fourier transform, the number of frequency bins you get out equals the number of datapoints you have as input. This is because higher frequencies are impossible to know: any frequency at or above the sampling rate aliases exactly onto one of the lower bins, so the samples can't tell them apart.
But in the continuous Fourier transform, the output you get is a continuous function that's defined on the entire real line.
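Both points are easy to see with a naive DFT in pure Python (the toy signal and names are mine):

```python
import cmath
import math

def dft(samples):
    """Naive O(n^2) discrete Fourier transform: n samples in, n bins out."""
    n = len(samples)
    return [
        sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
            for t, x in enumerate(samples))
        for k in range(n)
    ]

n = 8
wave = lambda freq: [math.cos(2 * math.pi * freq * t / n) for t in range(n)]

# As many frequency bins out as samples in.
assert len(dft(wave(1))) == n

# Aliasing: frequencies f and f + n produce identical samples,
# which is why bins beyond n carry no new information.
assert all(abs(a - b) < 1e-9 for a, b in zip(wave(1), wave(1 + n)))
```

The continuous transform has no sampling rate, hence no aliasing ceiling, hence a spectrum defined on the whole real line.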
Traditional suicide is incredibly stigmatized; ending one's life manually is a huge trauma to place on loved ones. The benefit of MAID is that it's dignified, and won't leave families searching for answers after a death.
The transformer was a major breakthrough in NLP, and it was clear at the time of publishing that it would have a major impact. But I will add that it is common in the Deep Learning field to give papers catchy titles (see, off the top of my head: all the YOLO papers, ViT, DiT, textual inversion). The transformer paper is one in a long line of seminal papers with funny names.
To be fair, it would be surprising if the supply of AI researchers worth hiring (or poaching with Mega Millions comp packages) lasted that long, either.
There has to be a point of diminishing returns somewhere, and a headline-worthy hiring frenzy should be expected to hit it quickly.