I've asked AI to help some, and maybe it's me who hallucinated it, but something that has really stuck with me from reading Philip K. Dick's VALIS trilogy/Radio Free Albemuth is a pair of dual modes: the scorching midday heat of the Palm Tree Garden, a sweltering sun that reads red even through closed eyes, and then at night, a sort of relief, an un-watched-over state. I enjoyed VALIS a lot the first time, but going back and finding these specific sections has a strong lure to it.
At the time it felt cute, a nice flourish. But over the years, the idea has sort of grown on me. I find that during the day, my critical mind is quite active & wants exact, precise things. Expectations can loom large & slow down just letting things pour out of me. Now, this isn't the same in-between sleep/waking state as the article, but at night a lot of my concern goes away, and I can just enjoy things, work on things, uninhibited. Let it flow. Some level of tiredness can help.
I would like to be better about the flip side. I think the morning is another interesting mode, one that a lot of people use well & love: before the world is really awake, seizing the moment. Ursula K. Le Guin wrote about her daily routine, which involved waking promptly & writing, writing, writing. I feel like there's likely a strong similarity. But also, it sure feels good to have a bunch of work under your belt right at the beginning of the day. https://www.openculture.com/2019/01/ursula-k-le-guins-daily-...
I'm super interested in how Carv 2 radically simplified the product (Oct 2024), to just a motion sensor that clips to the boots. https://getcarv.com/blog/introducing-carv-2
The original Carv 1 had a whole footbed that detected where you were applying pressure, with some degree of 2D pressure sensing. But it couldn't really determine your leg orientation, just pressure.
Now there's no pressure sensing, but it can see orientation. Installation is also radically easier. Supposedly Carv 2 is a very loved product, but I'm a bit aghast at seeing such a complex, powerful sensor-networking system replaced by something much, much simpler.
FluxPose just launched their Kickstarter. It's not really designed for sports per se, but it has so many properties that seem like they could be excellent for this kind of application. The trackers are tiny and light. They use radio, so they can go anywhere and won't be obstructed by clothing. You wear the base station, which acts as a stable zero-point of reference for the system; that seems incredibly crucial for skiing, for seeing compression and relative lean. It supports up to 10 trackers, so you could definitely attach some to your skis too. It works at up to 300Hz (although maybe that's with fewer trackers, unclear?). I don't know if raw data is available, but I hope so!
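To make "relative lean" concrete: with a worn base station as the zero-point, each tracker's orientation can be re-expressed in the skier's own frame. A minimal sketch of that math, with made-up quaternion values and no claim about FluxPose's actual data format or API:

    # Hypothetical tracker math, NOT FluxPose's actual API or format:
    # just the standard relative-rotation computation, expressing a
    # boot tracker's orientation in the worn base station's frame, so
    # lean/flex are measured against the skier rather than the world.
    from scipy.spatial.transform import Rotation as R

    base_q = [0.0, 0.0, 0.0, 1.0]       # base station (x, y, z, w)
    tracker_q = [0.0, 0.26, 0.0, 0.97]  # made-up boot tracker reading

    base = R.from_quat(base_q)          # scipy uses scalar-last order
    tracker = R.from_quat(tracker_q)

    relative = base.inv() * tracker     # tracker pose in the base frame
    roll, pitch, yaw = relative.as_euler("xyz", degrees=True)
    print(f"lean (roll): {roll:.1f} deg, flex (pitch): {pitch:.1f} deg")

The worn base station matters because the relative rotation cancels out whatever the whole body is doing, leaving the joint angles a coach would actually care about.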
I don't ski as much as I want, and I don't have the cash to throw at this product, but damn, FluxPose looks like such a huge change. I hope we see more Carv-like products out there, and I hope they collide with the tracker revolution that's getting much more data. https://www.fluxpose.com/
Not super related, but for whatever reason I've had a flare-up of thinking about sci-fi moving cities/neighborhoods again. Kim Stanley Robinson's 2312, with a city on Mercury rolling around the planet's terminator (not super well described tbh, but fun), and Hannu Rajaniemi's The Quantum Thief (book #1 of the Jean le Flambeur series), with a reconfiguring Mars city wandering around on stilts (iirc). We live in an age with so many new malleable systems, but so much of the world about us is fixed and rooted, and these sci-fi realms where not just people but places too move about are an interesting idea. That's what I thought of, seeing the traveling neighborhood title.
I dig this idea a lot. I hope we can expand more on remote work, make great use of new freedoms for such excellent purposes.
The work that XLA & schedulers are doing here is wildly impressive.
This feels drastically harder to work with than Itanium must have been: ~400-bit VLIW, across extremely diverse execution units. The workload is different, it's not general purpose, but it's still awe-inspiring to know not just that they built the chip but that the software folks can actually use such a wildly weird beast.
I wish we saw more industry uptake for XLA. Uptake's not bad, per se: there's a bunch of different hardware it can target! But it's amazing secret sauce, it's open source, and it doesn't feel like there's the industry rally behind it that it deserves. It feels like Nvidia is only barely beginning to catch up, to dig a new moat, with the just-announced Nvidia Tiles. Such huge overlap. Afaik, please correct me if wrong, but XLA isn't at present particularly useful at scheduling across machines, is it? https://github.com/openxla/xla
Thanks for sharing this. I agree w.r.t. XLA. I've been moving to JAX after many years of using torch, and XLA is kind of magic. I think torch.compile has quite a lot of catching up to do.
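For anyone who hasn't tried it, a minimal sketch of that JAX-to-XLA path (the toy MLP and shapes here are mine, purely illustrative): one decorator traces the function and hands it to XLA, which fuses and compiles it for the backend:

    # Minimal sketch of the JAX -> XLA path; the toy MLP and shapes are
    # illustrative. jax.jit traces the Python function once, then XLA
    # fuses and compiles it for the backend (CPU/GPU/TPU).
    import jax
    import jax.numpy as jnp

    @jax.jit
    def mlp(x, w1, w2):
        h = jax.nn.gelu(x @ w1)  # matmul + GELU fused into one kernel
        return h @ w2

    k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
    x = jax.random.normal(k1, (128, 512))
    w1 = jax.random.normal(k2, (512, 2048))
    w2 = jax.random.normal(k3, (2048, 512))

    y = mlp(x, w1, w2)  # first call compiles; later calls reuse the binary
    print(y.shape)      # (128, 512)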
> XLA isn't at present particularly useful at scheduling across machines,
I do think it's a lot simpler than the problem Itanium was trying to solve. Neural nets are just way more regular in nature, even with block sparsity, compared to generic consumer pointer-hopping code. I wouldn't call it "easy", but we've found that writing performant NN kernels for a VLIW architecture chip is in practice a lot more straightforward than other architectures.
JAX/XLA does offer some really nice tools for doing automated sharding of models across devices, but for really large performance-optimized models we often handle the comms stuff manually, similar in spirit to MPI.
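For concreteness, a hedged sketch of those automated-sharding tools using the public jax.sharding API (Mesh / NamedSharding / PartitionSpec); the mesh shape, axis names, and toy computation are illustrative, not anyone's production setup:

    # Hedged sketch of JAX's automated sharding; mesh shape and axis
    # names are illustrative. On one device it still runs (mesh of 1).
    import numpy as np
    import jax
    import jax.numpy as jnp
    from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

    mesh = Mesh(np.array(jax.devices()), axis_names=("data",))

    # Shard the batch dimension across "data"; replicate the weights.
    x = jax.device_put(jnp.ones((1024, 512)),
                       NamedSharding(mesh, P("data", None)))
    w = jax.device_put(jnp.ones((512, 512)),
                       NamedSharding(mesh, P(None, None)))

    @jax.jit
    def forward(x, w):
        # XLA's SPMD partitioner inserts whatever collectives are
        # needed to honor the input shardings.
        return jnp.tanh(x @ w)

    y = forward(x, w)
    print(y.sharding)  # shows the layout across the mesh

When the partitioner's choices aren't good enough, this is where people drop down to the manual, MPI-in-spirit comms handling the parent describes.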
I agree with regards to the actual work being done by the systolic arrays, which are sort of VLIW-ish & have a predictable, plannable workload. Not easy, but there's a very direct path to actually executing these NN kernels. The article does an excellent job setting up how great a win it is that the systolic MXUs can do the work, needing nothing but local registers and local communication across cells, without much control logic.
But if you make it 2900 words into this 9000-word document, to the "Sample VLIW Instructions" and "Simplified TPU Instruction Overlay" diagrams, trying to map the VLIW slots ("They contain slots for 2 scalar, 4 vector, 2 matrix, 1 miscellaneous, and 6 immediate instructions") to useful work seems incredibly challenging, given the vast disparity in functionality and style of the attached units those slots govern, and given the extreme complexity of keeping that MXU constantly fed, on very tight timing, so it stays well utilized.
> Subsystems operate with different latencies: scalar arithmetic might take single digit cycles, vector arithmetic 10s, and matrix multiplies 100s. DMAs, VMEM loads/stores, FIFO buffer fill/drain, etc. all must be coordinated with precise timing.
Whereas Itanium's compilers needed to pack parallel work into a single instruction, there's maybe less need for that here. But that quote feels like the heart-of-the-machine challenge: writing instruction bundles that feed a variety of subsystems all at once, when those subsystems have such drastically different performance profiles / pipeline depths. Truly an awe-some system, IMO.
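A toy model of that tension (mine, not the real TPU ISA; the latencies are placeholders echoing the article's single-digit/10s/100s characterization): the scheduler has to issue the long-latency matrix op early and hide the cheap work behind it:

    # Toy model of VLIW bundle scheduling; NOT the real TPU ISA.
    # Slot counts echo the article (2 scalar, 4 vector, 2 matrix);
    # latencies are placeholders for "single digits / 10s / 100s".
    from dataclasses import dataclass, field

    LATENCY = {"scalar": 2, "vector": 20, "matrix": 200}

    @dataclass
    class Bundle:
        scalar: list = field(default_factory=list)  # up to 2 slots
        vector: list = field(default_factory=list)  # up to 4 slots
        matrix: list = field(default_factory=list)  # up to 2 slots

    def finish_times(schedule):
        """Cycle at which each op's result becomes available."""
        ready = {}
        for cycle, bundle in enumerate(schedule):
            for kind in ("scalar", "vector", "matrix"):
                for op in getattr(bundle, kind):
                    ready[op] = cycle + LATENCY[kind]
        return ready

    # The compiler's job in miniature: issue the 200-cycle matmul as
    # early as possible, then overlap cheap address math and loads
    # behind it so the MXU is never left waiting.
    schedule = [
        Bundle(scalar=["addr0"], vector=["load_a", "load_b"]),
        Bundle(matrix=["mm0"]),                       # done at cycle 201
        Bundle(scalar=["addr1"], vector=["load_c"]),  # hidden behind mm0
    ]
    print(finish_times(schedule))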
Still though, yes: Itanium's software teams did have an incredibly hard challenge finding enough work at compile time to pack into instructions. Maybe it was a harder task. What a marvel modern cores are, having almost a dozen execution units that CPU control can juggle and keep utilized, analyzing incoming instructions on the fly, with deep out-of-order dependency-tracking insight. Trying to figure it all out ahead of time & pack it into the instructions a priori was a wildly hard task.
In Itanium's heyday, the compilers and libraries were pretty good at handling HPC workloads, which is really the closest anyone was running then to modern NN training/inference. The problem with Itanium and its compilers was that people obviously wanted to run workloads that looked nothing like HPC (databases, web servers, etc.), and the architecture and compilers weren't very good at that. There have always been very successful VLIW-style architectures in more specialized domains (graphics, HPC, DSP, now NPUs); it just hasn't worked out well for general-purpose processors.
Opera 10 was getting into some wild stuff. 9 was obviously just winning. But I loved how 10 literally gave you, the user, your own endpoints on the web. The browser is the server (by way of proxy)! Massively inspirational decentralization. https://www.ctrl.blog/entry/opera-unite.html
* They came with mail and chat (IRC) clients, a download manager, a set of browser dev tools, and in the age of limited internet traffic all of that was smaller than a single download of Firefox.
* Their dev tools were the first that allowed remote debugging. You could run Opera on your phone (Symbian, Windows Mobile, early Android) and debug your website from a computer.
* They were the first browser to sync your bookmarks, settings, history, extensions across devices.
* They were the first to add process isolation, albeit initially on Linux only. If an extension crashed your page, it didn't take the whole browser down with it. Others added this later, first Microsoft in IE8 and then Google in Chrome.
Their browser was a brilliant piece of tech and a brilliant product. Too bad that the product couldn't survive under pressure.
I've spent decades being unclear about what the WindowMaker value proposition is.
Is there something deeper here? Because on the surface it primarily looks like some desktop widgets/dock-apps. Which isn't bad; it's more than the irrelevancy of the desktop today! Widgets are great!
But I always felt like there was something more, something weird & implied, with WindowMaker. Maybe just that it was taken as the heir apparent to NeXTSTEP. But did it actually have interesting data systems, could apps talk? Or was it still lots of isolated micro-apps/desktop widgets?
I always assumed it was the heir apparent to NeXTSTEP, too. I feel like there were a lot of missed opportunities back in the day. Imagine all the manpower that went into Gnome and/or KDE going into GNUstep instead, keeping up with Apple APIs + embracing/extending Apple APIs.
Precisely. Gnome, KDE, XFCE, and literally any other Free Software DE implement the Windows kind of desktop organisation, while WindowMaker/GNUstep show what the unexplored future could've been.
I applaud you for having a specific complaint. 'You might not need it', 'it's complex', and 'for some reason it bothers me' are all these vibes-based whinges that are so abundant, but with nothing specific, nothing contestable.
Tried 3/4 of the tools, and none helped me reattach neovim.
Ended up using dtach. It needs to be run ahead of time (e.g. `dtach -A /tmp/nvim.sock nvim` to start or attach, then `dtach -a /tmp/nvim.sock` to reattach later), but it's a very direct, minimal stdin/stdout piping tool that's worked great with everything I've thrown at it.
https://github.com/crigler/dtach
With RAM, you can verify pretty quickly what you're getting.
I really wouldn't want to buy from any new NAND vendor until a bunch of years after they've built a reputation. It's too scary to get a decent bargain SSD that secretly dies really early, that doesn't actually have anywhere near the endurance it claims.