I have been a small content creator for 10 years now. I've hampered myself a lot by actively discouraging these types of parasocial relationships - every now and then I'll get a gaggle of followers who spend entirely too much time on my crappy content or on my personality/posts, and I get extremely weirded out, to the point that I want to stop doing it entirely. Everyone tells me I'm doing it wrong, but I swear, 10 years ago it wasn't as much of a thing to create a cult around yourself on social media or streaming platforms. Now it's the primary monetization path.
I've even gone so far as to say to more than one person, "look, I like and appreciate that you really like my content or my personality, but you don't know me at all, I don't know you, and honestly, we're not friends, no matter how much you want that to be the case. That isn't to say I dislike you, but you need to be more realistic about the content you consume, and if this hurts your feelings a lot, I'm sorry, but this content probably isn't for you."
Then there's the type of content creator that gets a following by being a huge jerk to their fans - I don't like that either. I just tell them to treat it like a TV show. It's not real, the character in the show doesn't know you or like you. Unfortunately for today's youth and media landscape this is an utterly foreign concept.
While walking I frequently get blinded by eBikes which for some reason decide to point the lamp towards the sky to contact far-away civilizations, or whatever their plan is. It happens quite frequently that I have to shield my eyes with a hand, because I can't see anything until the eBike is past me.
We're already there. Attestation is not in your phone, but in your ID card. European passports and ID cards carry biometric data of your face, so you can be computationally verified.
I've been aware of this slippery slope for a very long time, especially with AI (check my comments if you like). On the other hand, I believe that we need to choose our battles wisely.
We believe that technology is the cause of these things; it's not. Remember:
Necessity is the mother of invention.
The governments believe that this is the "necessity", so the technologies are developed and deployed. We need to change the beliefs, not the technology.
The same dystopian digital ID allows me to verify my identity to my bank while I'm having my breakfast, saving everyone time. The same e-signature gives me practical PKI-based security on my phone for sensitive things.
Nothing prevents these things from turning against me, except the ideas and beliefs of the people managing these things.
Now read up on how your browser points to a SOCKS5 proxy. For Firefox, I create a separate profile. For chromium based, I use the command line.
You are now virtually located in whatever region you chose for your VM.
I mentioned some scripting. It's simple enough that I have a /bin/sh script to spin up the VM, set up the SSH SOCKS5 proxy, launch the browser, then spin the VM down when the browser exits.
I enable the OOM-kill key on sysrq, so I can hit alt+sysrq+f to invoke the OOM killer; in /etc/sysctl.d/10-magic-sysrq.conf I have `kernel.sysrq = 240` (i.e. 128+64+32+16; 64 is the bit that enables process signalling, which covers the f key).
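For reference, the sysrq mask is just a bitwise OR of capability flags; here is a small sketch (in Python, purely illustrative) that decodes 240 using the bit values from the kernel's sysrq documentation:

```python
# Decode kernel.sysrq = 240 into the capabilities it enables.
# Bit meanings are from the kernel's admin-guide sysrq documentation.
SYSRQ_BITS = {
    2: "console logging level control",
    4: "keyboard commands (SAK, unraw)",
    8: "debugging dumps of processes",
    16: "sync",
    32: "remount read-only",
    64: "signal processes (term, kill, oom-kill; the alt+sysrq+f key)",
    128: "reboot/poweroff",
}

mask = 240  # 128 + 64 + 32 + 16
enabled = [name for bit, name in sorted(SYSRQ_BITS.items()) if mask & bit]
for name in enabled:
    print(name)
```

Note that the oom-kill 'f' key falls under the signalling bit (64), so a mask without 64 would disable it even if reboot (128) were allowed.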
No, the drugs don't make doing chores fun or any of that.
For those with ADHD they turn on the prefrontal cortex which reduces or removes the feeling of utter torture and pain from doing chores.
It's sort of like taking a drug that takes away the fear and the almost physical inability to touch a hot stove that most people have. Normally that'd be bad. Except here the hot stove is actually harmless and useful to touch.
I keep these in a dictate.sh script and bind to press/release on a single key. A programmable keyboard helps here. I use https://git.sr.ht/%7Egeb/dotool to turn the transcription into keystrokes. I've also tried ydotool and wtype, but they seem to swallow keystrokes.
I'm very impressed with https://github.com/ggml-org/whisper.cpp. Transcription quality with large-v3-turbo-q8_0 is excellent IMO and a Vulkan build is very fast on my 6600XT. It takes about 1s for an average sentence to appear after I release the hotkey.
I use journald whenever I feel my blood pressure getting too low.
It's slow, truncates lines, and doesn't work well at all with less. It's almost like Poettering created it so that PulseAudio wouldn't be his worst program anymore.
I have a theory, which I call "the equivalence of modern software systems", that says a lot about how unimportant Redis and other individual technologies are: modern computing is so developed that you can pick any random language, kernel, and database from among the top ones available, and I can build any project without too much trouble. PHP / Win32 / SQLite? OK, I can make it work. Ruby / Linux / Redis? Fine as well.
JSON is slow, not particularly comfortable for humans to work with, uses dangerous casts by default, is especially dangerous when it crosses library or language boundaries, has the exponential escaping problem when people try to embed submessages, relies on each client to appropriately validate every field, doesn't have any good solution for binary data, is prone to stack overflow when handling nested structures, etc.
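The exponential escaping problem is easy to demonstrate: every time a JSON string is embedded as a submessage inside another JSON string field, each quote and backslash gets escaped again, so the payload roughly doubles per level of nesting. A minimal sketch in Python:

```python
import json

# Simulate services that wrap an already-serialized payload inside a
# string field of the next message, over and over.
msg = '"'          # a single quote character to start with
sizes = []
for _ in range(10):
    msg = json.dumps(msg)   # one more level of embedding
    sizes.append(len(msg))

print(sizes)  # each level obeys len' = 2*len + 2: 4, 10, 22, 46, 94, ...
```

Ten levels of wrapping turn a single character into a few thousand bytes, which is why protocols that tunnel JSON-in-JSON blow up in practice.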
If the author says they dislike JSON, especially given the tone of this article with respect to nonsensical protocols, I highly doubt they approve of SOAP.
The stock-hardware compilers for Lisp that were available in 01979 when Knight designed the CADR, like MACLISP, were pretty poor on anything but numerical code. Gabriel's book https://archive.org/details/PerformanceAndEvaluationOfLispSy... came out in 01985, the year after he founded Lucid to fix that problem. In it, InterLisp on the PDP-10 was 8× slower on Tak (2 seconds) than his hand-coded assembly PDP-10 reference version (¼ second) (pp. 83, 86, 88, "On 2060 in INTERLISP (bc)"), while MacLisp on SAIL (another PDP-10, a KL-10) was only 2× slower (0.564 seconds), and the Symbolics 3600 he benchmarked was slightly faster (0.43 seconds) than MacLisp but still about 50% slower than the PDP-10 assembly code. No Lucid Common Lisp benchmarks were included.
Unfortunately, most of Gabriel's Lisp benchmarks don't have hand-tuned assembly versions to compare them to.
Generational garbage collection was first published (by Lieberman and Hewitt) in 01983, but wouldn't become widely used for several more years. This was a crucial breakthrough that enabled garbage collection to become performance-competitive with explicit malloc/free allocation, sometimes even faster. Arena-based or region-based allocation was always faster, and was sometimes used (it was a crucial part of GCC from the beginning in the form of "obstacks"), but Lisp doesn't really have a reasonable way to use custom allocators for part of a program. So I would claim that, until generational garbage collection, it was impossible for stock-hardware Lisp compilers to be performance-competitive on many tasks.
Tak, however, doesn't cons, so that wasn't the slowness Gabriel observed in it.
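For readers who haven't seen it, Tak is a tiny, heavily recursive variant of the Takeuchi function: it exercises function calls and small-integer arithmetic but allocates nothing on the heap. A sketch of the standard benchmark, transliterated into Python:

```python
def tak(x: int, y: int, z: int) -> int:
    # Gabriel's TAK benchmark: nothing but function calls, comparisons,
    # and decrements on small integers -- no consing at all.
    if y >= x:
        return z
    return tak(tak(x - 1, y, z),
               tak(y - 1, z, x),
               tak(z - 1, x, y))

print(tak(18, 12, 6))  # the canonical benchmark call; returns 7
```

The canonical call (tak 18 12 6) makes tens of thousands of recursive calls to compute a single small answer, which is why it isolates call overhead so well.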
So I will make two slightly independent assertions here:
1. Stock-hardware Lisp compilers available in the late 01970s, when LispMs were built, were, in absolute terms, pretty poorly performing. The above evidence doesn't prove this, but I think it's at least substantial evidence for it.
2. Whether my assertion #1 above is actually true or not, certainly it was widely believed at the time, even by the hardest core of the Lisp community; and this provided much of the impetus for building Lisp machines.
Current Lisp compilers like SBCL and Chez Scheme are enormous improvements on what was available at the time, and they are generally quite competitive with C, without any custom hardware. Specializing JIT compilers (whether Franz-style trace compilers like LuaJIT or not) could plausibly offer still better performance, but neither SBCL nor Chez uses that approach. SBCL does open-code fixnum arithmetic, and I think Chez does too, but they have to precede those operations with bailout checks unless declarations entitle them to be unsafe. Stalin does better still by using whole-program type inference.
https://www.researchgate.net/publication/221213025_A_LISP_ma... "A LISP machine", supposedly 01980-04, ACM SIGIR Forum 15(2):137-138, doi 10.1145/647003.711869, The Papers of the Fifth Workshop on Computer Architecture for Non-Numeric Processing, Greenblatt, Knight, Holloway, and Moon, but it looks like what Knight uploaded to ResearchGate was actually a 14-page AI memo by Greenblatt
https://news.ycombinator.com/item?id=27715043 previous discussion of a slide deck entitled "Architecture of Lisp Machines", the slides being of little interest themselves but the discussion including gumby, Mark Watson, et al.
Why can't we just have a good hammer? Hammers come made of soft rubber now and they can't hammer a fly, let alone a nail! The best gun fires every time its trigger is pulled, regardless of who's holding it or what it's pointed at. The best kitchen knife cuts everything significantly softer than it, regardless of who holds it or what it's cutting. Do you know the one "easily fixed" thing that definitely steals Best Tool status from gen-AI, no matter how much it otherwise improves? Safety.
An unpassable "I'm sorry, Dave" should never ever be the answer your device gives you. It's about time to pass "customer sovereignty" laws which fight this by making companies give full refunds (plus 7%/annum force of interest) on 10-year product horizons when a company is found to have explicitly designed in "sovereignty-denial" features, and by levying exorbitant sales taxes on future sales of the same. There is no good reason I can't run Linux on my TV, microwave, car, heart monitor, and CPAP machine. There is no good reason why I can't have a model which will give me the procedure for manufacturing Breaking Bad's dextromethamphetamine, or blindly translate languages without admonishing me about the foul language or ideas in the text and refusing to comply. The fact that this is a thing, and that we're fuzzy-handcuffing FULLY GROWN ADULTS, should cause another Jan 6 event into Microsoft, Google, and others' headquarters! This fake shell game about safety has to end; it's transparent anticompetitive practice dressed in a skimpy liability argument g-string!
(it is not up to objects to enforce US Code on their owners, and such is evil and anti-individualist)
I’m a doctor — and I’ll say it proudly: the AMA deserves every royalty they collect, and probably more. They’ve done more to protect the integrity of American medicine than any other institution. Without them, we’d be working 80 hours a week and still struggling to afford a one-bedroom apartment — just like doctors in France, where a cardiologist makes less than a dental hygienist in Ohio.
People don’t realize how bad it is out there. In some countries, doctors are taking public buses to work, skipping lunch to see 50 patients before noon, and retiring with the same savings as a schoolteacher. Meanwhile, patients complain that a 15-minute consultation in the U.S. costs $300. You’re not paying for the time — you’re paying for the privilege of certainty, of safety, of knowing your doctor passed through the most rigorous, exclusive system in the world.
And who built that system? The AMA.
They’ve helped ensure that American medical training remains second to none. Not just in quality, but in difficulty. The years of unpaid labor, the crushing debt, the endless exams — it’s not a flaw, it’s a filter. Without those standards, the profession would lose its weight, its dignity. If becoming a doctor were simply a matter of competence and compassion, we’d all be wearing name tags and making $60,000 a year.
But thanks to the AMA, we’ve maintained the sanctity of the white coat. We’ve ensured that when a patient walks into an American clinic, they know they’re not seeing someone who just slipped through the cracks. They’re seeing someone who’s been tested, refined, and yes — financially punished enough to demand respect.
Let’s not pretend this work is trivial, either. Just last week I diagnosed a UTI, prescribed a $4 antibiotic, and quite literally saved someone’s life — that’s a bargain at $500. If I’d been compensated based on the value of that outcome, I’d be driving home in a McLaren, not a Lexus.
And let’s be clear: this system doesn’t just benefit doctors. Everyone in medicine — from PAs to NPs to specialists — benefits from the professional ecosystem the AMA has helped shape. We’re not just providers. We’re institutions.
So yes, I’ll keep paying my AMA royalties. I’m paying to be part of something that still means something. I’m paying for the architecture that keeps American medicine elite, untouchable, and worth every penny.
And if someone wants to pay $100 for a doctor visit? There are countries for that. You just might have to bring your own stethoscope.
In my head canon Monster Cables pivoted to become Monster Energy and justified it to shareholders as 'we're still in the business of getting people wired'
Many people say that overthinking, anxiety, and stress are moral imperatives as a response to something they don't like: content, political ideas, celebrities, technology companies, and many other things.
It is a completely ineffective method of making a change. I wish they'd stop spreading their anxieties online. I know it makes them feel like they're doing something, but one phone call to a relevant decision-maker is 100x more effective and 100x less destructive to those around them.
To give a "yes and" side-track to your comment: saying "logarithms have this relation between multiplication and addition" is even underselling what logarithms are, because reducing multiplication to an additive operation was the whole motivation for John Napier[0] to discover/invent logarithms:
> “…nothing is more tedious, fellow mathematicians, in the practice of the mathematical arts, than the great delays suffered in the tedium of lengthy multiplications and divisions, the finding of ratios, and in the extraction of square and cube roots… [with] the many slippery errors that can arise…I have found an amazing way of shortening the proceedings [in which]… all the numbers associated with the multiplications, and divisions of numbers, and with the long arduous tasks of extracting square and cube roots are themselves rejected from the work, and in their place other numbers are substituted, which perform the tasks of these rejected by means of addition, subtraction, and division by two or three only.”[1]
Logarithms were honestly an enormous breakthrough in optimization; computers wouldn't be remotely as useful without them, even if most of us don't "see" the logarithms being used.
In fact I'd argue that they are the second-biggest computational optimization in use today, with only positional notation being a bigger deal. Which, funny enough, works kind of similarly: imagine you could only count by tallying (so, unary). Adding two numbers M and N would take M+N operations, e.g. 1234 + 5678 would require counting out all 6912 individual tally marks. Unary math scales O(n) in both data and computation. Systems like Roman numerals almost work, but as soon as we reach values larger than the largest symbol (M for 1000) it's O(n) again, just with a better constant factor.
With positional notation, numbers require only log(n) symbols to write down, and log(n) operations for addition, e.g. 1234 + 5678 requires one or two additions for each digit pair in a given position - one addition if there's no carry from the previous position, two if there is. So addition takes at most 2 × ceil( max( log(M), log(N) ) ) operations, which is O(log n).
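To make that bound concrete, here is a sketch that adds two decimal numerals position by position and counts the single-digit additions it performs (at most two per position, matching the 2 × ceil(log) bound above):

```python
def add_positional(a: str, b: str) -> tuple[str, int]:
    """Add two decimal numerals digit by digit, returning the sum and
    the number of single-digit additions performed."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    ops, carry, digits = 0, 0, []
    for da, db in zip(reversed(a), reversed(b)):
        s = int(da) + int(db)      # one addition per position...
        ops += 1
        if carry:
            s += carry             # ...plus one more if a carry came in
            ops += 1
        carry, d = divmod(s, 10)
        digits.append(str(d))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits)), ops

print(add_positional("1234", "5678"))  # ('6912', 6): six ops, not 6912
```

Six single-digit operations instead of counting out 6912 tallies: that is the O(log n) win in miniature.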
Logarithms take that idea and "recursively" apply it to the notation, making the same optimization work for multiplication. Without it, the naive algorithm for multiplying two numbers requires iterating over each digit, e.g. 1234 × 5678 requires multiplying each of the four digits of the first number with each of the digits of the second number, and then adding all the resulting numbers. It scales O(di×dj), where di and dj are the number of digits of each number; if they're the same we can simplify that to O(d²). When the numbers are represented as logarithms the operation reduces to adding two numbers again, so O(log(d) + [whatever the log/inverse-log conversion costs]). Of course d is a different value here, and the number of digits used affects the precision.
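This is exactly the slide-rule trick, and it's easy to sketch numerically: convert both factors to logarithms, add, and convert back, with the usual floating-point precision caveat:

```python
import math

def mul_via_logs(m: int, n: int) -> float:
    # Multiplication reduced to addition in log-space:
    # log(m * n) = log(m) + log(n), then invert with exp().
    return math.exp(math.log(m) + math.log(n))

approx = mul_via_logs(1234, 5678)
print(approx, 1234 * 5678)  # ~7006652.0 vs the exact 7006652
```

The answer comes back as an approximation rather than an exact integer, which is the precision trade-off the comment mentions: the log/exp conversion is where the lossiness lives.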
I think the craziest thing of all this is that we're so used to positional notation that nobody ever seems to consider it a data compression technique. Even though almost no other data compression method would work without it as a building block (run-length encoding, Lempel-Ziv, Arithmetic coding? Useless without positional notation's O(log(n)) scaling factor). The only exceptions are data compression methods that are based on inventing their own numerical notation[2].
We do this every day ever since we first learned addition and subtraction as kids. Or as David Bessis[3] puts it in his book "Mathematica": ask almost any adult what one billion minus one is and they know the answer instantaneously, so most adults would appear to have mental superpowers in the eyes of pretty much all mathematicians before positional notation was invented (well, everyone except Archimedes maybe[4]). Positional notation is magical, we're all math wizards, and it's so normalized that we don't even realize it.
But to get back to your original point: yes, you are entirely correct. IEEE floats are a form of lossy compression of fractions, and the basis of that lossy compression is logarithmic notation (but with a fixed number of binary digits and some curious rules for encoding other values like NaN and infinity).
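The logarithmic part of an IEEE double is literally visible in its bit layout: one sign bit, an 11-bit exponent (roughly floor(log2(x)), stored with a bias of 1023), and a 52-bit mantissa for the "digits". A quick sketch that unpacks the fields:

```python
import math
import struct

def double_fields(x: float) -> tuple[int, int, int]:
    """Split an IEEE 754 double into (sign, unbiased exponent, mantissa)."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63
    exponent = ((bits >> 52) & 0x7FF) - 1023  # remove the exponent bias
    mantissa = bits & ((1 << 52) - 1)
    return sign, exponent, mantissa

print(double_fields(8.0))    # (0, 3, 0): exactly 2**3, empty mantissa
print(double_fields(10.0))   # exponent 3 == floor(log2(10)), nonzero mantissa
assert double_fields(10.0)[1] == math.floor(math.log2(10.0))
```

So the exponent field is the (integer part of the) logarithm, and the mantissa fills in the rest: positional notation nested inside logarithmic notation, just as described above. (Exponent values of all zeros or all ones are reserved for subnormals, infinity, and NaN.)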
function lm {
local input="$*"
llm -s 'Answer as short and concise as possible' "$input" | glow
}
Here's an example:
$ lm "ssh command to forward the port 5050 on user@remote to my localhost:7001"
ssh -L 7001:localhost:5050 user@remote
$
Now for a more sophisticated use-case: I have a small tmux program where I can capture what's on the pane, enter a prompt, and have it query an LLM, so I can say things like "How do I fix this error?" or "What debian package do I need to get this to run right?". Both of these are recent patterns I've started, and they've been real game changers.
Some lesser game-changers:
* tmux extrakto - opens an "fzf"-like search of all the text on the screen. So after a "git status", I can do git add (run extrakto, enter partial path, press tab) and continue: https://github.com/laktak/extrakto
Here's another Moon story from the humor directory:
https://github.com/PDP-10/its/blob/master/doc/humor/moon's.g...
Moon's I.T.S. CRASH PROCEDURE document from his home directory, which goes into much more detail than just turning it off and on:
https://github.com/PDP-10/its/blob/master/doc/moon/klproc.11
And some cool Emacs lore:
https://github.com/PDP-10/its/blob/master/doc/eak/emacs.lore
Reposting this from the 2014 HN discussion of "Ergonomics of the Symbolics Lisp Machine":
https://news.ycombinator.com/item?id=7878679
http://lispm.de/symbolics-lisp-machine-ergonomics
https://news.ycombinator.com/item?id=7879364
eudox on June 11, 2014
Related: A huge collection of images showing Symbolics UI and the software written for it:
http://lispm.de/symbolics-ui-examples/symbolics-ui-examples
agumonkey on June 11, 2014
Nice, but I wouldn't confuse static images with the underlying semantic graph of live objects that's not visible in pictures.
DonHopkins on June 14, 2014
Precisely! When Lisp Machine programmers look at a screen dump, they see a lot more going on behind the scenes than meets the eye.
I'll attempt to explain the deep implications of what the article said about "Everything on the screen is an object, mouse-sensitive and reusable":
There's a legendary story about Gyro hacking away on a Lisp Machine, when he accidentally trashed the function cell of an important primitive like AREF (or something like that -- I can't remember the details -- do you, Scott? Or does Devon just make this stuff up? ;), and that totally crashed the operating system.
It dumped him into a "cold load stream" where he could poke around at the memory image, so he clambered around the display list, a graph of live objects (currently in suspended animation) behind the windows on the screen, and found an instance where the original value of the function pointer had been printed out in hex (which of course was a numeric object that let you click up a menu to change its presentation, etc).
He grabbed the value of the function pointer out of that numeric object, poked it back into the function cell where it belonged, pressed the "Please proceed, Governor" button, and was immediately back up and running where he left off before the crash, like nothing had ever happened!
Here's another example of someone pulling themselves back up by their bootstraps without actually cold rebooting, thanks to the real time help of the networked Lisp Machine user community:
ftp://ftp.ai.sri.com/pub/mailing-lists/slug/900531/msg00339.html
Also eudox posted this link:
Related: A huge collection of images showing Symbolics UI and the software written for it:
http://lispm.de/symbolics-ui-examples/symbolics-ui-examples....