scottlamb's comments | Hacker News

> I know it’s thermal throttling because I can see in iStat Menus that my CPU usage is 100% while the power usage in watts goes down.

There's another possibility. If your battery is low and you've mistakenly plugged it into a low-power USB-C source (phone charger), you will also see 100% CPU usage, low power usage, and terrible performance. Probably not the author's problem, but it's been mine more than once! It might be worth adding something to detect this case, too. You can see your charger power under "System Information"; I assume there's an API for it also.


I have an M1 MacBook Air and do a once-weekly virtual D&D session with some friends. I hook it up to my 4K monitor, and at first I assumed the problem had to do with that. It kept becoming a slideshow (unless I put an ice pack under it!), and I eventually realized it's because the battery life is so good that this is the only time of the week I charge the thing, so charging the battery was making the poor laptop a hot mess that was thermal throttling like crazy. This is with a nice dock that can push around 100W, so it isn't necessarily an underprovisioned charger.

I started charging it an hour or two before our session, and the issues stopped.


While I have definitely done this a few times, one of my MacBooks could draw more power than its power supply could deliver, and there was a particular computer game I discovered I could “only” play for about five hours before the laptop shut itself off, because it was having to draw supplemental power from the battery to keep up.

IIRC the next generation of MacBook was the one that came with the larger power brick, which didn’t at all surprise me after that experience. Then they switched to GaN to bring the brick size back down.


> I know it’s thermal throttling because I can see in iStat Menus that my CPU usage is 100% while the power usage in watts goes down.

When I read this I wondered "Why isn't core temperature alone a reliable indicator of thermal throttling?". Isn't that the state variable the thermal controller is directly aiming to regulate by not letting it exceed some threshold?


My M4 Max MacBook Pro can run for a while at like 105°C with the fans at max before throttling. When it starts throttling it doesn't exceed that threshold, and the temperature goes down for a while before the throttling stops.

Interesting. Yeah, iStat Menus reports the wattage of the charger. Sometimes I've charged my Mac with like a 5 or 10W charger and I didn't have that issue, but now that rings a bell: I think a coworker had that issue recently. I wonder why that happens.

Mine either. Choosing Rust by no means guarantees your tool will be fast—you can of course still screw it up with poor algorithms. But I think most people who choose Rust do so in part because they aspire for their tool to be "blazing fast". Memory safety is a big factor of course, but if you didn't care about performance, you might have gotten that via a GCed (and likely also interpreted or JITed or at least non-LLVM-backend) language.

Yeah sometimes you get surprisingly fast Python programs or surprisingly slow Rust programs, but if you put in a normal amount of effort then in the vast majority of cases Rust is going to be 10-200x faster.

I actually rewrote a non-trivial Python program in Rust once because it was so slow (among other reasons), and got a 50x speedup. It was mostly just running regexes over logs too, which is the sort of thing Python people say is an ideal case (because it's mostly IO or implemented in C).


This. If it were a business-critical money fountain, I'd expect follow-the-sun SRE coverage. I don't think it is, so I can probably accept drinking my morning coffee without scrolling HN once in a while. There's only so much one can beat oneself up about a slow/incorrect response when the on-call is handled by what, just one person? maybe two people in the same time zone?

(Might be wise though to have PagerDuty configured to re-alert if the outage persists.)


I agree there's a scale below which this (or any) optimization matters and a scale above which you want your primary key to have locality (in terms of which shard/tablet/... is responsible for the record). But...

* I think there is a wide range in the middle where your database can fit on one machine if you do it well, but it's worth optimizing to use a cheaper machine and/or extend the time until you need to switch to a distributed db. You might hit this middle range soon enough (and/or it might be a painful enough transition) that it's worth thinking about it ahead of time.

* If/when you do switch to a distributed database, you don't always need to rekey everything:

** You can spread existing keys across shards via hashing on lookup or reversing bits. Some databases (e.g. DynamoDB) actually force this.

** Allocating new ids in the old way could be a big problem, but there are ways out. You might be able to switch allocation schemes entirely without clients noticing if your external keys are sufficiently opaque. If you went with UUIDv7 (which addresses some but not all of the article's points), you can just keep using it. If you want to keep using dense(-ish), (mostly-)sequential bigints, you can amortize the latency by reserving blocks at a time. (A small sketch of both this and the bit-reversal idea follows below.)
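
To make those two escape hatches concrete, here's a minimal C sketch. All the function names are mine, not from any particular database: reversing the bits of an existing sequential key spreads neighboring ids across range partitions, and reserving ids in blocks keeps new bigint allocation (mostly) sequential without a round trip per insert.

    #include <stdint.h>
    #include <stdio.h>

    /* Spread existing sequential keys: bit-reversal makes neighboring ids
       land far apart in the key space, so a range-partitioned store puts
       them on different shards. */
    static uint64_t reverse_bits64(uint64_t x) {
        uint64_t r = 0;
        for (int i = 0; i < 64; i++) {
            r = (r << 1) | (x & 1);
            x >>= 1;
        }
        return r;
    }

    /* Top `shard_bits` bits of the reversed key pick one of 2^shard_bits
       equal-sized ranges. */
    static unsigned shard_for(uint64_t id, unsigned shard_bits) {
        return (unsigned)(reverse_bits64(id) >> (64 - shard_bits));
    }

    /* Stand-in for a central allocator (e.g. a sequence advanced by `count`
       in one round trip); hypothetical, for illustration only. */
    static uint64_t reserve_block(uint64_t count) {
        static uint64_t hi = 1;
        uint64_t start = hi;
        hi += count;
        return start;
    }

    /* Hand out ids locally from a reserved block, amortizing the round trip. */
    struct id_block { uint64_t next, end; };

    static uint64_t next_id(struct id_block *b, uint64_t block_size) {
        if (b->next == b->end) {
            b->next = reserve_block(block_size);
            b->end = b->next + block_size;
        }
        return b->next++;
    }

    int main(void) {
        struct id_block b = {0, 0};
        for (int i = 0; i < 5; i++) {
            uint64_t id = next_id(&b, 1000);
            printf("id=%llu shard=%u (of 16)\n",
                   (unsigned long long)id, shard_for(id, 4));
        }
        return 0;
    }

With a hash-partitioned store you'd hash the key instead of reversing bits; either way the shard assignment is derived from the existing key at lookup time, so nothing needs rekeying.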


> I don't expect [the 5% of students who end up going into research] would have called the program particularly optimized either

This. I went to the University of Iowa in the aughts. My experience was that because they didn't cover a lot of the material in this MIT Missing Semester 2026 list, a lot of the classes went poorly. They had trouble moving students through the material on the syllabus because most students would trip over these kinds of computing basics, which are necessary to experiment with the DS+A theory via actual programming. And the department neither added a prereq covering these basics nor incorporated them into other courses' syllabi. Instead, they kept trying what wasn't working: having a huge gap between the nominal material and what the average student actually got (while somehow still going on to the next course). I don't think it did any service to anyone. They could have taken the time to actually help most students understand the basics, they could have proceeded at a quicker pace through the theoretical material for the students who did understand the basics, they could have ensured their degree actually was a mark of quality in the job market, etc.

It's nice that someone at MIT is recognizing this and putting together this material. The name and about page suggest, though, that it's not something the department has long recognized and uncontroversially integrated into the program (perhaps as an intro class you can test out of), which is still weird.


> It's nice that someone at MIT is recognizing this and putting together this material. The name and about page suggest, though, that it's not something the department has long recognized and uncontroversially integrated into the program (perhaps as an intro class you can test out of), which is still weird.

While this comes out of CSAIL, I wouldn't ascribe too much institutional recognition to this. Given the existence of the Independent Activities Period, it's probably a reasonable home for it within MIT's setup. Other institutions have "math camp" and the like before classes start.

It's probably a reasonable compromise. Good schools have limited bandwidth or interest in remedial education/hand-holding and academics don't have a lot of interest in putting together materials that will be outdated next year.


> Good schools have limited bandwidth or interest in remedial education/hand-holding and academics don't have a lot of interest in putting together materials that will be outdated next year.

I think they rarely escape doing this hand-holding unless they're actually willing to flunk out students en masse. Maybe MIT is; the University of Iowa certainly wasn't. So they end up just in a state of denial in which they say they're teaching all this great theoretical material but they're doing a half-assed job of teaching either body of knowledge.

I also don't think this knowledge gets outdated that quickly. I'd say if they'd put together a topic list like this for 2006, more than half the specific tools would still be useful, and the concepts from the rest would still transfer over pretty well to what people use today. For example, yeah, we didn't have VS Code and LSP back then, but IDEs didn't look that different. We didn't (quite) have tmux but used screen for the same purpose, etc. Some things are arguably new (devcontainers have evolved well beyond setting up a chroot jail, AI tools are new) but it's mostly additive. If you stay away from the most bleeding-edge stuff (I'm not sure the "AI for the shell (Warp, Zummoner)" item is wise to spend much time on), you never have to throw much out.


The whole container universe is pretty different even if the process/threads/etc. foundations haven't changed that much. Certainly <umm> a book I wrote about the state of computing in the early 2010s--largely derived from things I had written over a few prior years--was hopelessly out of date within just a few years.

There certainly are fits and starts in the industry. But I'm not sure things look THAT different today from 5 years or so ago. (Leaving aside LLMs.)

From my peripheral knowledge, MIT does try to hand-hold to some degree. It isn't the look-left and look-right, one of those people won't be here next year sort of place. But, certainly, people do get in over their heads at some places. I tutored/TA'd in (business) grad school and some people just didn't have the basics. I couldn't teach remedial high school arithmetic from the ground up--especially to some people who weren't even willing to try seriously.


> Certainly <umm> a book I wrote about the state of computing in the early 2010s--largely derived from things I had written over a few prior years--was hopelessly out of date within just a few years.

I could see it being obsolete quickly to the extent that when someone was trying to learn devops and saw a book on the (virtual) shelf that didn't cover containers next to one that did, they'd pick the latter every time. You probably saw this in your sales tanking. But I'm not sure many of the words you actually did write became wrong or unimportant either. That's what I mean by additive. And in the context of a CS program, even if their students were trying out these algorithms with ridiculously out-of-date, turn-of-the-century tools like CVS, they'd still have something that works, as opposed to fumbling because they have no concept of how to manage their computing environment.


I didn't care about sales :-) It was free and I did a couple of book-signings at sponsored conferences that other people paid for. A lot of the historical content remained accurate but the going-forward trajectory shifted a lot.

The way DevOps evolved was sort of a mess anyway but welcome to tech.

I sort of agree more broadly, but I can also see a lot of students rolling their eyes at using outdated tools, which is probably less of an issue in other disciplines.


I could definitely see eye-rolls if students who know (of) git are being taught about CVS. But I'm not sure it matters that much. This stuff is tangential to the core course material, so a student (or small project group) can pick the tool of their choice. If they know something newer or better than suggested, great.

> It isn't the look-left and look-right, one of those people won't be here next year sort of place.

the same MIT that doesn't give out grades in the first year? (just Pass / NoPass)

the high achievers who scored solid grades to get there literally kill themselves when they pull Cs and Ds, even though it's a hard class and is sort of "look left, look right"


Not sure of your point. Pass/Fail was intended to ease freshmen in. (Most people didn't fail.)

Yes, poor grades were often a shock to people accustomed to being straight A students in high school. Though most made it through or ended up, in some cases, going elsewhere.


> Permanent identifiers should not carry data.

I think you're attacking a straw man. The article doesn't say "instead of UUIDv4 primary keys, use keys such as birthdays with exposed semantic meaning". On the contrary, they have a section about how to use sequence numbers internally but obfuscated keys externally. (Although I agree with dfox's and formerly_proven's comments [1, 2] that the XOR method they proposed for this is terrible. Reuse of a one-time pad is probably the most basic textbook example of bad cryptography. They referred to the values as "obfuscated" so they probably know this. They should have just gone with a better method instead; one possibility is sketched below.)

[1] https://news.ycombinator.com/item?id=46272985

[2] https://news.ycombinator.com/item?id=46273325
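
To be clear about "a better method": nothing exotic is needed, just a keyed permutation instead of a shared pad. Below is a toy 4-round Feistel sketch in C. The round function, constants, and key are placeholders I made up, not vetted cryptography; for production you'd want an established keyed construction (e.g. a format-preserving or small-block cipher). Unlike XOR with a fixed pad, it's a bijection in which flipping one input bit changes about half the output bits, and XORing two outputs together doesn't reveal the XOR of the underlying sequence numbers.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Toy round function: any deterministic 32-bit mix keyed by k.
       The constants are arbitrary placeholders, NOT a vetted cipher. */
    static uint32_t round_fn(uint32_t x, uint32_t k) {
        x ^= k;
        x *= 0x9E3779B1u;   /* odd multiplier mixes bits */
        x ^= x >> 15;
        return x;
    }

    /* 4-round Feistel over the 32-bit halves of a 64-bit id: always a
       bijection, so encode/decode round-trip exactly and collisions are
       impossible. */
    static uint64_t encode_id(uint64_t id, const uint32_t key[4]) {
        uint32_t l = (uint32_t)(id >> 32), r = (uint32_t)id;
        for (int i = 0; i < 4; i++) {
            uint32_t next_l = r;
            r = l ^ round_fn(r, key[i]);
            l = next_l;
        }
        return ((uint64_t)l << 32) | r;
    }

    static uint64_t decode_id(uint64_t ext, const uint32_t key[4]) {
        uint32_t l = (uint32_t)(ext >> 32), r = (uint32_t)ext;
        for (int i = 3; i >= 0; i--) {   /* run the rounds in reverse */
            uint32_t prev_r = l;
            l = r ^ round_fn(l, key[i]);
            r = prev_r;
        }
        return ((uint64_t)l << 32) | r;
    }

    int main(void) {
        const uint32_t key[4] = {0xA5A5A5A5, 0x3C3C3C3C, 0x0F0F0F0F, 0x12345678};
        for (uint64_t id = 1; id <= 3; id++) {
            uint64_t ext = encode_id(id, key);
            printf("%" PRIu64 " -> %016" PRIx64 " -> %" PRIu64 "\n",
                   id, ext, decode_id(ext, key));
        }
        return 0;
    }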


I don't think the objection is that it exposes semantic meaning, but that any meaningful information is contained within the key at all, e.g. even a UUID that includes timestamp information about when it was generated is "bad" in a sense, as it leaks information. Unique identifiers should be opaque and inherently meaningless.

Your understanding is inconsistent with the examples in vintermann's comment. Using a sequence number as an internal-only surrogate key (deliberately opaqued when sent outside the bounds of the database) is not the same as sticking gender identity, birth date, or any natural properties of a book into a broadly shared identifier.

No it's not; they very explicitly clarify in follow-up comments that unique identifiers should not have any kind of meaningful content embedded in them. See:

https://news.ycombinator.com/item?id=46276995

https://news.ycombinator.com/item?id=46273798


Okay, but they ignore the stuff I was talking about, consistent with my description of this as a straw man attack.

> A running number also carries data. Before you know it, someone's relying on the ordering or counting on there not being gaps - or counting the gaps to figure out something they shouldn't.

The opaquing prevents that.

They also describe this as a "premature optimization". That's half-right: it's an optimization. Having the data to support an optimization, and focusing on optimizing things that are hard to migrate later, is not premature.


Insert order or time is information. And if you depend on that information you are going to be really disappointed when backdated records have to be inserted.

Right, to ensure your clients don't depend on that information, make the key opaque outside the database through methods such as the ones dfox and formerly_proven suggested, as I said.

> Wait, so the OS can re-order the fsync() to happen before the write request it is supposed to be syncing? Is there a citation or link to some code for that? It seems too ridiculous to be real.

This is an io_uring-specific thing. It doesn't guarantee any ordering between operations submitted at the same time, unless you explicitly ask it to with the `IOSQE_IO_LINK` they mentioned.

Otherwise it's as if you called write() from one thread and fsync() from another, before waiting for the write() call to return. That obviously defeats the point of using fsync() so you wouldn't do that.
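For concreteness, here's a minimal liburing sketch of the linked write-then-sync submission. Error handling is mostly elided, the filename and payload are made up, and it assumes liburing is installed with a kernel new enough for IORING_OP_WRITE (5.6+):

    /* build: cc link_demo.c -luring */
    #include <fcntl.h>
    #include <liburing.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        struct io_uring ring;
        if (io_uring_queue_init(8, &ring, 0) < 0) return 1;

        int fd = open("demo.log", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) return 1;

        static const char msg[] = "hello\n";

        /* SQE 1: the write. IOSQE_IO_LINK chains it to the next SQE, so the
           kernel won't start the fsync until this write completes. */
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_write(sqe, fd, msg, sizeof msg - 1, 0);
        io_uring_sqe_set_flags(sqe, IOSQE_IO_LINK);

        /* SQE 2: the sync (fdatasync-like, via IORING_FSYNC_DATASYNC). */
        sqe = io_uring_get_sqe(&ring);
        io_uring_prep_fsync(sqe, fd, IORING_FSYNC_DATASYNC);

        io_uring_submit(&ring);

        /* Reap both completions; if the write fails, the linked fsync
           completes with -ECANCELED instead of running. */
        for (int i = 0; i < 2; i++) {
            struct io_uring_cqe *cqe;
            if (io_uring_wait_cqe(&ring, &cqe) < 0) break;
            printf("cqe %d: res=%d\n", i, cqe->res);
            io_uring_cqe_seen(&ring, cqe);
        }

        close(fd);
        io_uring_queue_exit(&ring);
        return 0;
    }

Without IOSQE_IO_LINK on the first SQE, both operations would be in flight concurrently, which is exactly the write()-on-one-thread, fsync()-on-another situation above.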

> If you call fsync(), [O_DSYNC] isn't needed correct? And if you use [O_DSYNC], then fsync() isn't needed right?

I believe you're right.


I guess I'm a bit confused why the author recommends using this flag and fsync.

Related: I would think that grouping your writes and then fsyncing, rather than fsyncing every time, would be more efficient, but it looks like a previous commenter did some testing and that isn't always the case: https://news.ycombinator.com/item?id=15535814


I'm not sure there's any good reason. Other commenters mentioned AI tells. I wouldn't consider this article a trustworthy or primary source.

Yeah, that seems reasonable. The article seems to mix fsync and O_DSYNC without discussing their relationship, which seems more like AI and less like a human who understands it.

It also seems if you were using io_uring and used O_DSYNC you wouldn't need to use IOSQE_IO_LINK right?

Even if you were doing primary and secondary log file writes, they are to different files so it doesn't matter if they race.


> It also seems if you were using io_uring and used O_DSYNC you wouldn't need to use IOSQE_IO_LINK right? Even if you were doing primary and secondary log file writes, they are to different files so it doesn't matter if they race.

I think there are a lot of reasons to use this flag besides a write()+f(data)sync() sequence:

* If you're putting something in a write-ahead log then applying it to the primary storage, you want it to be fully committed to the write-ahead log before you start changing the primary storage, so if there's a crash halfway through the primary storage change you can use the log to get to a consistent state (via undo or redo).

* If you're trying to atomically replace a file via the rename-a-temporary-file-into-place trick, you can submit the whole operation to the ring at once, but you'd want to use `IOSQE_IO_LINK` to ensure the temporary file is fully written/synced before the rename happens.
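
A sketch of that second case, assuming a kernel/liburing new enough for IORING_OP_RENAMEAT (5.11+) and an already-initialized ring as in the sketch upthread. The paths and function name are made up, and error handling (including NULL returns from io_uring_get_sqe) is elided:

    #include <fcntl.h>      /* AT_FDCWD */
    #include <liburing.h>

    /* Queue write-temp-file, sync it, rename into place, as one linked chain. */
    static int queue_atomic_replace(struct io_uring *ring, int tmpfd,
                                    const void *buf, unsigned len) {
        struct io_uring_sqe *sqe;

        sqe = io_uring_get_sqe(ring);                 /* 1: write the temp file */
        io_uring_prep_write(sqe, tmpfd, buf, len, 0);
        io_uring_sqe_set_flags(sqe, IOSQE_IO_LINK);

        sqe = io_uring_get_sqe(ring);                 /* 2: flush its data */
        io_uring_prep_fsync(sqe, tmpfd, IORING_FSYNC_DATASYNC);
        io_uring_sqe_set_flags(sqe, IOSQE_IO_LINK);

        sqe = io_uring_get_sqe(ring);                 /* 3: rename into place */
        io_uring_prep_renameat(sqe, AT_FDCWD, "config.tmp",
                               AT_FDCWD, "config", 0);

        return io_uring_submit(ring);                 /* reap 3 CQEs elsewhere */
    }

(For full crash safety you'd typically also fsync the containing directory after the rename; that could be a fourth linked op on a directory fd.)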

btw, a clarification about my earlier comment: `O_SYNC` (no `D`) should be equivalent to calling `fsync` after every write. `O_DSYNC` should be equivalent to calling the weaker `fdatasync` after every write. The difference is the metadata stored in the inode.


> I think there are a lot of reasons to use this flag besides a write()+f(data)sync() sequence:

> * If you're putting something in a write-ahead log then applying it to the primary storage, you want it to be fully committed to the write-ahead log before you start changing the primary storage, so if there's a crash halfway through the primary storage change you can use the log to get to a consistent state (via undo or redo).

I guess I meant exclusively in terms of writing to the WAL. As I understand most DBMSes synchronously write the log entries for a transaction and asynchronously write the data pages to disk via a separate API or just mark the pages as dirty and let the buffer pool manager flush them to disk at its discretion.

> * If you're trying to atomically replace a file via the rename-a-temporary-file-into-place trick, you can submit the whole operation to the ring at once, but you'd want to use `IOSQE_IO_LINK` to ensure the temporary file is fully written/synced before the rename happens.

Makes sense


> As I understand most DBMSes synchronously write the log entries for a transaction and asynchronously write the data pages to disk via a separate API or just mark the pages as dirty and let the buffer pool manager flush them to disk at its discretion.

I think they do need to ensure that page doesn't get flushed before the log entry in some manner. This might happen naturally if they're doing something in single-threaded code without io_uring (or any other form of async IO). With io_uring, it could be a matter of waiting for the completion entry for the log write before submitting the page write, but it could be the link instead.


> I think they do need to ensure that page doesn't get flushed before the log entry in some manner.

Yes I agree. I meant like they synchronously write the log entries, then return success to the caller, and then deal with dirty data pages. As I recall the buffer pool manager has to do something special with dirty pages for transactions that are not committed yet.


This might be one of the best things about the current AI boom. The agents give quick, frequent, cheap feedback on how effective the comments, code structure, and documentation are to helping a "new" junior engineer get started.

I like to think I'm above average in terms of having design docs alongside my code, having meaningful comments, etc. But playing with agents recently has pointed out several ways I could be doing better.


If I see an LLM having trouble with a library, I can feed its transcript into another agent and ask for actionable feedback on how to make the library easier to use. Which of course gets fed into a third agent to implement. It works really well for me. Nothing more satisfying than a satisfied customer.

I've done something similar. I ask agents to use CLIs, then I give them an "exit survey" on their experience along with feedback on improvements. Feels pretty meta.

Nvidia is selling shovels, widely believed to be a better business than panning for gold. So...

* You can't generalize from Nvidia to companies spending all the money on hardware, electricity, and labor without making a profit.

* It's also worth asking if Nvidia will keep having those earnings if all the AI companies crash. Unsure about this. At least there's a bunch of pent-up demand from people wanting GPUs for other reasons.

* Also, there's the $100B they invested in OpenAI...


Strange question.

The July megaquake prophecy scare was dumb because it originated in a work of fiction, not intended to be taken seriously by its author and not based on any scientific evidence. If the "prophecy" had come true, it'd be by luck alone. fwiw, I'd say it didn't come true; the 8.8 magnitude earthquake was near Kamchatka and didn't actually damage Japan, though a tsunami seemed plausible enough that there was a precautionary evacuation.

This "strong quake" is a thing that happened, not a "smart prophecy" [1]. Talk of aftershocks is not a prophecy either; it's a common-sense prediction consistent with observations from many previous earthquakes.

[1] "Smart prophecy" is an oxymoron. A prediction is either based on scientific evidence (not a prophecy) or a (dumb) prophecy.


You are certainly reading something into my question that isn't there. I'm genuinely ignorant. I thought you were saying that predictions of a strong aftershock following an M8.8 were dumb, but the same thing following an M7.6 was smart. Is that not the case?

Again, sorry if this seemed antagonistic or something, I really am just unsure of what you were saying.


A manga published in 1999 randomly predicted a disaster in March 2011, which seemed to come true with Fukushima. The manga was re-published in 2021 predicting an M8.8 in July 2025, but nothing happened. This is the dumb-prophecy part: it was not based on seismology studies, just a shot in the dark to try to seem prophetic again. Countless works of fiction are published every year which predict some future disaster at an arbitrary date. Every once in a while, one of those thousands of random predictions can be interpreted as coming true when something bad happens on that day, which retroactively drives interest in that work of fiction and leads less scientific minds to believe the author has actual future-predicting power beyond the abilities of science.

A relatively major (but not M8.8) quake has now hit in December 2025. It is intelligent to expect there may be aftershocks, which can sometimes be larger than the initial quake, in the days after a significant earthquake actually happens. This is a well-accepted scientific fact borne out by large amounts of data and statistical patterns, not whimsical doomsdayism.

Fukushima's M9.0-9.1 was around a 1-in-1000-year scale event. The last time Japan saw such a powerful earthquake was in 869 AD. It would be reasonable to expect one of that scale not to happen again for another 1000 years.


The math nazi in me really wants to point out that an event with a 1:1000 annual probability would be expected to be seen (>50% probability) within about 700 years, not 1000.


Heh, that's why I said 1-in-1000-year rather than just 1-in-1000. Indeed, a 1:1000 annual probability would be hit within 693 years with 50% probability, and 1:1443 within 1000 years with 50% probability.
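
For anyone wanting the arithmetic behind those two numbers (treating each year as an independent trial with probability p, which is itself a simplification for earthquakes):

    1 - (1 - p)^n \ge 1/2  \iff  n \ge \frac{\ln 2}{-\ln(1 - p)}

    p = 1/1000: \quad n \ge 0.6931 / 0.0010005 \approx 693
    p = 1/1443: \quad n \ge 0.6931 / 0.0006932 \approx 1000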


Roughly how many Paul Erdős's to every Oswald Teichmüller though?



Great response and very informative - no clue how I totally missed the references and stories about this manga. That’s pretty cool - I’ll have to look it up!


You asked what I would have asked. In a sentence, my understanding is: it was LITERALLY a prophecy, i.e. an unscientific statement out of thin air, that in July there would be an earthquake followed by a larger one. Here, we have reality, an earthquake; ergo the first prong of a megaquake was satisfied, as opposed to prophesied.


Ah, that's probably it. Thank you.

