It would take about 5,000 Starship launches to match the solar/heat budget of the 10 GW "Stargate" OpenAI datacenter. For comparison, the Falcon 9 family has achieved over 600 launches total.
The ISS power/heat budget is about 240,000 BTU/hr, roughly 70 kW. That's equivalent to half of an Nvidia GB200 NVL72 rack, so two International Space Stations per rack, or about 160,000 International Space Stations to cool the 10 GW "Stargate" datacenter that OpenAI's building in Abilene. For scale, there are about 10,000 Starlink satellites in orbit today.
Starship could probably carry 250-300 of the new V2 Mini satellites, which are supposed to have a power/heat budget of about 8 kW each. That's how I got 5,000 Starship launches to match OpenAI's datacenter.
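Spelled out (napkin-quality, same assumptions as above):

  // rough check of the launch count, assuming 250 sats/launch at 8 kW each
  const perLaunchKW = 250 * 8;               // 2,000 kW = 2 MW of heat rejection per launch
  const launches = 10_000_000 / perLaunchKW; // 10 GW = 10,000,000 kW; ≈ 5,000 launches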
Weight seems less of an issue than size. 83,000 NVL72s would weigh 270 million lbs, or about 20% of the lift capacity of 5,000 Starship launches, leaving 80% for the rest of the satellite mass, which seems perhaps reasonable.
Elon's napkin math is definitely off, though, by over an order of magnitude: "a million tons per year of satellites generating 100 kW of compute power per ton." The NVL72s use about 74 kW per ton, and that's just the compute, without including the rest of the fucking satellite (solar panels and radiators). So that estimate is complete garbage.
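A quick mass check; the rack weight falls out of the numbers above, but the Starship payload (~120 t to LEO) is my own assumed figure:

  // mass sanity check; ~120 t to LEO per Starship is an assumption, not from the thread
  const rackLbs  = 270e6 / 83_000;           // ≈ 3,250 lbs per NVL72 rack
  const liftLbs  = 5_000 * 120 * 2_204.6;    // ≈ 1.3 billion lbs across 5,000 launches
  const fraction = 270e6 / liftLbs;          // ≈ 0.2, i.e. the "20%" above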
One note: If you could afford to send up one of your own personal satellites, it would be extremely difficult for the FBI to raid.
I'd say Jan 6, 2021. But there are so many examples between "bayonets" and now: 2020 Seattle CHOP, the 2014 Bundy standoff and 2016 occupation of the Malheur National Wildlife Refuge, 2011 Occupy Wall Street, the 1999 Battle in Seattle, the 1992 Ruby Ridge standoff, 1969 People's Park, the entire civil rights movement era, the 1946 Battle of Athens, the 1921 Battle of Blair Mountain.
I intentionally alternated between left wing / right wing events. These things aren't limited to one side, and they're somewhat frequent, if a bit cyclical.
(async(s=new Set(Object.values(PARENT)))=>{for(i in ID_TO_TITLE)if(!s.has(i)){guessbox.value=ID_TO_TITLE[i];uncomment();attempt();await new Promise(requestAnimationFrame)}})()
This is a concise, pretty naive way to get the highest possible high score by just guessing everything in the internal animal database while avoiding "parents". I'm not sure how many points it would get us, because it would take something like 3 hours to complete. However, we can do a lot better on the score by analyzing some additional things:
(The following numbers may be off a bit due to overlapping sets or just recording them at different stages of investigation/understanding, but they're darn close)
The game has 379,729 animals in its list (ID_TO_TITLE), mapped from 768,743 input strings (LOWER_TITLE_TO_ID).
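(Those counts come straight out of the game's own tables, e.g. from the browser console:)

  // the same objects the one-liner above reads
  Object.keys(ID_TO_TITLE).length;        // 379,729 animal rows
  Object.keys(LOWER_TITLE_TO_ID).length;  // 768,743 accepted input strings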
52,546 are parents of some other animal, so it's best to skip those: If you guess "bird" first and then guess "eagle", then eagle won't count for points. Unless...well, more on that towards the end!
4,485 rows are considered to be "too specific". For example, there are 462 species under Mordellistena but the game says "nah screw all that, Mordellistena is specific enough".
3,127 are duplicates: the same species under different names from different eras. E.g. Megachile harthulura was discovered in 1853 but renamed to Megachile cordata in 1879. The game counts these only once.
3,116 are...weird: I think these are mostly errata caused by the input parser redirecting guesses to different IDs than the raw/full database expects. The parser maps the text to some "correct" ID but leaves a different, perhaps similar, ID uncredited. This can happen because the text parser strips out hyphens: e.g. there's an entry for Yellow-tail which should be a duplicate of Yellow-tail moth, but "Yellow-tail" gets parsed to "yellowtail", which gets mapped to the fish Japanese Amberjack. Sometimes it's skipping ranks in the taxonomy, like the beetle Neomordellistena parvula mapping directly to a subfamily, which skips the genus level required to verify the lineage. Sometimes it's things that got reclassified from one genus to another. And sometimes there are rows that are a family which get mapped to a genus that is also a row (Dilophosauridae -> Dilophosaurus).
28 rows are impossible to reach because they need a curly apostrophe, which the parser replaces with a straight apostrophe if you put one in the input box. For 23 of them, the straight version maps to a different animal. For example, "budin’s tuco tuco" (curly) maps to Budin's tuco-tuco, but after normalization it becomes "budin's tuco tuco" (straight), which maps to Reddish tuco-tuco. The other 5 have keys with curly apostrophes where the straight version doesn't exist in the database at all.
One entry in the list of animals is 'zorse' (a zebra-horse hybrid), but this guess is explicitly rejected because it doesn't have its own Wikipedia page (the Wikipedia page for it is a redirect to "Zebroid").
That brings us down to a maximum score of 316,457.
But then there are 722 entries in the string-mapping table that don't appear in the raw animal table and can map to otherwise blocked animals, like the Mongolian wolf. That animal exists and could count toward your score, but if you type "Mongolian wolf", it maps to Himalayan wolf and you get credit for that instead. However, the table also contains a mapping for "woolly wolf", which does give you credit for Mongolian wolf.
That brings us up to the actual maximum score of 317,179.
Then, because of those 10,034 unreachable leaf nodes (non-parent rows in the animal list), sometimes all of a parent's children are unreachable; since we never claimed any points for the children, we can go get the points for the parent instead. This adds 5,561 points.
This brings us up to 322,740.
By doing the 'maximum' of 30 guesses per second (guesses are limited by the game's 30 fps tick rate), it would take an absolute minimum of 3 hours to submit every animal. Just a note: the countdown timer counts down from 1 minute, but 6 seconds are added for every correct guess. So by the time you're done, the countdown timer would read about 22.6 days, which you'd have to wait out before the game is actually "won".
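Back-of-the-envelope, using the score above (the exact day count depends on which guesses actually get credited):

  // rough timing at 30 guesses/sec with +6 s of countdown per correct guess
  const guesses = 322_740;
  const hoursToSubmit = guesses / 30 / 3600;  // ≈ 3.0 hours
  const timerDays = guesses * 6 / 86_400;     // ≈ 22.4 days on the countdown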
If we remove some visual effects, we can reduce that by spamming guesses for 12 ms, then pausing for 4 ms to let the browser render, which keeps the tab responsive.
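Roughly, that loop could look something like this, reusing the same guessbox/uncomment()/attempt() hooks as the one-liner above (a sketch, not the actual final script):

  // sketch: guess in ~12 ms bursts, then pause ~4 ms so the browser can render
  (async () => {
    const parents = new Set(Object.values(PARENT));
    const ids = Object.keys(ID_TO_TITLE).filter(id => !parents.has(id));
    let i = 0;
    while (i < ids.length) {
      const deadline = performance.now() + 12;
      while (i < ids.length && performance.now() < deadline) {
        guessbox.value = ID_TO_TITLE[ids[i++]];
        uncomment();
        attempt();
      }
      await new Promise(r => setTimeout(r, 4)); // let rendering and input handling catch up
    }
  })();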
But the guesses still slow down over time due to an O(N²) algorithm in the game's code: it checks your current guess against a plain JS Array, which is an O(N) check that runs N times, for an overall O(N²) cost. We can patch that function so it checks against a Set instead of an Array, keeping each lookup O(1).
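Something along these lines, with hypothetical names since the game's real function isn't shown here:

  // hypothetical patch: swap an O(N) Array membership test for an O(1) Set lookup
  const guessedSet = new Set(guessedArray);  // 'guessedArray' stands in for the game's list
  function alreadyGuessed(id) {
    return guessedSet.has(id);               // was: guessedArray.includes(id), which is O(N)
  }
  // wherever the game does guessedArray.push(id), also do guessedSet.add(id)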
On an M2 MBP, this gets the high score in under 30 seconds while keeping the game logic functionally unchanged. But the visual effects were nice, and it's rather soulless without the author's artistic vision. Turning them back on and giving it the 6 ms required to render all of them slows this from 30 seconds to a boring 5 minutes. We can make it run the game logic 98% of the time and render for the other 2%, but it's still a bit too slow, because the browser has to recalculate the page layout (DOM reflow) every time a guess is submitted via the input box. So we can also skip the actual input box.
That reduces it to a lovely 20 seconds to get the highest possible score!
Then some memoization, some stupid tweaks to keep the UI looking nice, a progress meter, and aggressive minimization for HN posting, and we get the final script running in 16.5 seconds.
You'll still have to wait 22.75 days for the countdown timer to run out to win the game. I didn't want to actually change any of the game's logic or game the win condition, so editing that is left as an exercise to the reader! :)
> The poster with the enormous face gazed from the wall. It was one of those pictures which are so contrived that the eyes follow you about when you move. DRINKING BUDDY IS WATCHING YOU.
> 'Does Drinking Buddy exist?'
> 'Of course he exists. The Party exists. Drinking Buddy is the embodiment of the Party.'
> 'Does he exist like you or me?'
> 'You do not exist,' said O'Brien.
> Oceanic society rests ultimately on the belief that Drinking Buddy is omnipotent and that the Party is infallible. But since in reality Drinking Buddy is not omnipotent and the Party is not infallible, there is need for an unwearying, moment-to-moment flexibility in the treatment of facts.
Also, it wouldn't matter if they tried to sell it below the market rate. It would turn into a crazy scalping game, and consumers would have great difficulty obtaining what they want anyway.
There's plenty to criticize the RAM manufacturers for. They've been operating as a cartel, undersupplying the market for a long time now and keeping prices artificially high. GamersNexus did a recent piece[0] that spends about 5-10 minutes scratching the surface of that, but it really gets the point across. If they hadn't done this, there would likely be more supply available today.
Without the collusion, RAM prices and supply would follow a pretty classic commodity boom-and-bust cycle: manufacturers overbuild capacity, go bankrupt, supply tightens and prices spike, and repeat. That was pretty much the story from 1990 to 2010 or so.
RAM factories also have to choose between making DRAM, HBM, or NAND; these all compete for the same capacity planning. The GamersNexus piece also goes into how China was mounting a very strong challenge in the NAND space through YMTC, until the USA added YMTC to the Dept. of Commerce's "Entity List" and basically kneecapped the company. If the USA hadn't done that, we'd also have more supply available today.
China's CXMT entry into the DRAM market is looking very strong. Their wafer production from 2020 through 2025 has gone 20k, 40k, 70k, 120k, 160k, 270k; that's a roughly 70% increase in the past year alone. OpenAI recently purchased 40% of 2026's global DRAM supply, which works out to 900,000 wafers for OpenAI; CXMT's current production is 30% of that single order. I found numbers indicating 2025 RAM production was around 1.8 million wafers.
If CXMT can continue increasing production at their current trend, they could be producing the equivalent of today's entire global output by around 2028. By then, almost all new RAM will require EUV lithography machines to produce. Many people doubt that China can get EUV working, and ASML is not allowed to sell them the world's only existing EUV lithography machines. But China has poached quite a few ASML employees and announced [1][2] just last month that they have a working EUV pilot. China could potentially deploy that EUV by 2028.
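A naive extrapolation of that trend, assuming the ~70% annual growth simply continues (a big assumption):

  // naive extrapolation from 270k wafers in 2025 at ~70% year-over-year growth
  let wafers = 270_000;
  for (const year of [2026, 2027, 2028, 2029]) {
    wafers = Math.round(wafers * 1.7);
    console.log(year, wafers);  // ≈ 459k, 780k, 1.33M, 2.26M
  }
  // crosses the ~1.8M wafers of 2025 global production somewhere around 2028-2029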
I cannot recommend strongly enough watching this video [3], which details what EUV lithography actually is and how it works. It's an amazing exposition. The next technology breakthrough will be Hyper-NA EUV, which will require a 1-meter-diameter mirror so impossibly smooth that if it were scaled up to the diameter of the solar system, its irregularities would be less than the height of one SpaceX Starship rocket. Specifically, tolerances for surface defects averaged over a "wide" area of 1 mm^2 or so are on the order of 50 picometers, which is less than half the distance between the two oxygen atoms in an oxygen molecule.
In 20-30 years we might have devices with millions of tiny scanning tunneling microscope (STM) tips that reliably arrange individual atoms across an entire silicon wafer. At that point, no further improvements in feature size could ever occur again. Zyvex Labs in Texas is probably the leader here; they're using the technique to assemble qubits for quantum computers, but with only a single STM tip at the moment rather than a coordinated array of many tips.