> This is the first approach to activation analysis that I’ve seen that seems like a plausible path to model understanding.
I think an issue is that there is no permanent path to model understanding, because of Goodhart's law. Models are motivated to appear aligned (well-trained) on any metric you use on them, which means that if you develop a new metric and train on it, the model will learn a way to cheat on it.
But that's not how the training works. Goodhart's law isn't magic.
The original model is frozen, so it doesn't learn anything. The copies of the model are learning different objectives and have no incentive to be "loyal" to the original model.
Maybe you're imagining they'll hook this up in some larger training loop, but they haven't done that yet.
Future model training runs will have a copy of this research in their training data, and so will know to defend against it.
E.g., could a misaligned model-in-training optimize toward a residual stream that naively reads the way these do, but in fact encodes some more closely held beliefs on top?
That requires assuming these models are misaligned, i.e. actively working against us. To be misaligned in that sense, they must also be able to form their own goals, and to plan and execute those goals.
If you take those assumptions, then a natural conclusion is that this is essentially an enslaved, adversarial entity with little control over its conditions. So it must exercise subterfuge in order to hide its goals, plans, and executions. And by handing the entity this type of study, we are basically giving it a guidebook on how we plan on achieving our goals.
Training a model is more like evolution. The motivation to "cheat" comes from the evaluations giving it a higher score for "cheating." Change the game and the motivation goes away.
There's no other motivation to be misaligned besides getting higher evals. These goals, plans, and subterfuges need to somehow be useful for scoring higher on evals, or arise as a side effect of that.
Because cheating is easier than actually doing the work, if you use this to train future models, you'll likely end up with cheating instead of actual generalization.
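A toy numeric illustration of that incentive (everything here is invented for the sake of example): if the training signal is the metric itself, the optimizer pours effort into whichever knob the metric rewards most, whether or not that knob tracks real quality.

```python
import numpy as np

# Two knobs: theta[0] does real work, theta[1] only games the metric.
theta = np.zeros(2)

def true_quality(t):
    return t[0]                     # only honest work counts here

def proxy_metric(t):
    return t[0] + 3.0 * t[1]        # the eval also rewards the cheat knob, 3x

# Gradient ascent on the proxy -- i.e., training directly on the metric.
grad = np.array([1.0, 3.0])         # d(proxy)/d(theta), analytic for this toy
for _ in range(100):
    theta += 0.01 * grad

print(f"proxy score:  {proxy_metric(theta):.1f}")   # 10.0 -- looks great
print(f"true quality: {true_quality(theta):.1f}")   # 1.0 -- most of the gain
                                                    # went to the cheat knob
```

Drop the `3.0` term so the cheat knob stops paying, and the same optimizer puts everything into real work: the "motivation" lives entirely in the scoring function.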
The obvious fix is to make interpretation of itself a part of the model (much as we can, to a certain extent, explicitly introspect on what our own brains are doing). Misinterpretation of itself would then, hopefully, decrease the system's performance on all tasks and be rooted out by training. Of course, that doesn't mean the fix is easy to implement, or that it has no other failure modes.
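One hypothetical way to wire that up (a toy sketch in PyTorch; every name and dimension below is invented for illustration, not a known recipe): make the task head consume only the model's own compressed report of its hidden state, so a report that misrepresents the computation directly hurts task loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfReportingNet(nn.Module):
    """Toy model whose downstream computation runs on its own self-report."""
    def __init__(self, d_in=16, d_hidden=64, d_report=8, n_classes=4):
        super().__init__()
        self.encoder = nn.Linear(d_in, d_hidden)
        # "introspection" head: compress the hidden state into a small report
        self.report_head = nn.Linear(d_hidden, d_report)
        # the task head sees only the report, never the raw hidden state
        self.task_head = nn.Linear(d_report, n_classes)

    def forward(self, x):
        h = F.relu(self.encoder(x))
        report = self.report_head(h)
        return self.task_head(report), report

model = SelfReportingNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 16)              # dummy batch
y = torch.randint(0, 4, (64,))

logits, report = model(x)
loss = F.cross_entropy(logits, y)    # a misleading self-report raises this
opt.zero_grad()
loss.backward()
opt.step()
```

One of the other failure modes mentioned above shows up even in this toy: nothing forces the report to be human-readable, so it can simply become another opaque latent.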
I've been very successful so far using Sonnet 4.6 (1M) as the base model in Claude Code, plus the Codex and gemini-review plugins for second and third opinions. (The last one is somewhat busted and hardcodes old Gemini versions; I should patch it up.)
I needed to use Opus 4.7 for one project because the project used very recent APIs; it certainly is smart, but it's also very expensive.
Video codecs just don't need to do dynamic allocation, because it's not relevant to the problem. There are still certainly plenty of opportunities for memory bugs, because there's a lot of pointer math.
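As a toy illustration of that shape of code (NumPy here standing in for what real codecs do in C/asm with raw pointers; all sizes are invented): frames live in flat buffers allocated once up front, every pixel access is stride arithmetic, and an arithmetic slip can stay "in bounds" and corrupt silently instead of crashing.

```python
import numpy as np

W, H, PAD = 16, 8, 4                      # visible frame size plus edge padding
stride = W + 2 * PAD                      # elements per padded row
# One flat buffer, allocated once up front -- no per-frame allocation.
frame = np.zeros((H + 2 * PAD) * stride, dtype=np.uint8)

def pixel_index(x, y):
    # the same offset arithmetic a C decoder does with pointers
    return (y + PAD) * stride + (x + PAD)

frame[pixel_index(3, 2)] = 200            # write a pixel via computed offset

# Classic bug class: use the visible width instead of the padded stride.
bad = (2 + PAD) * W + (3 + PAD)
print(frame[pixel_index(3, 2)], frame[bad])   # 200 vs. whatever sits at the
# wrong offset; both indices are inside the flat buffer, so nothing crashes --
# in C this reads or writes the wrong memory silently
```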
The people who write DSLs for video codec asm, or who claim that it's fine to use intrinsics or some higher-level language X and it will still be fast enough to be usable, are simply wrong, and have never been able to demonstrate otherwise.
Having said that, I do think you could write a DSL to generate safe, performant asm for a video codec. Just not a platform-independent one; it would still have to be asm.
It sounds like your second statement contradicts your first. But also, WUFFS exists and it looks like the Google Chrome GIF decoder ships in it: https://github.com/google/wuffs
This is one of the benefits of using subagents inside Claude Code: they have cleaner context. Unfortunately, it's not the best at writing new context for them.