The only listed qualification is "You’ve subscribed to a Pro, Max, or Team plan by April 3, 2026 at 9 AM PT", but I am not getting the banner for this credit. I suspect there's an unstated additional qualification: that your account hasn't previously received an extra usage credit.
I don't know if they should be handing out more credits right now considering that my Sonnet requests in Claude Code are routinely delayed by several minutes, presumably due to capacity issues...
I previously received one and was able to activate this one too. I agree it seems like there's more to this; just wanted to add another anecdote to further confuse things. :)
Dunno, maybe there is a bug. I was a subscriber, had extra usage enabled, and had paid for extra usage before, and still didn't get the extra credits. I am on the Max plan, so I was rather looking forward to the extra $100 to burn on /fast mode.
I agree. I happened to see Boris' tweet about it as soon as it posted, and the endpoint for redeeming the credits (the one that fires when you click it on the usage page) was already returning sporadic 400s for everyone.
I wouldn't be surprised if they pulled it back so they could spread out the load.
edit: whoops, meant to leave this as a reply to the (now-sibling) comment from 'flutas.
Agreed, watching national or world news is useless. If you want to know what is likely to happen, instead of what someone wants you to think will happen, we now have prediction markets. Whenever I see a headline I'm curious about, instead of reading the article I now just go to a prediction market and check the probabilities.
Prediction markets miss all the experts, whether academics or lay wonks, who simply don't care to have a financial stake in the outcome. I can't imagine how that could be representative. In any case, the people weighing in are getting their information from somewhere, and it's not thin air. How can you understand an issue without knowing the motivations and vested interests on all sides, the history leading up to it, and so on?
If I have a specific interest in a topic I can do extensive research over many hours and come to my own conclusions. But for the vast majority of news headlines I see, including almost all "national" or "world" news, I don't have the time to do hours of research. In that case, reading one or three news articles is far more likely to give me a biased and ultimately incorrect take than looking at a prediction market, which takes all the available information and condenses it into one number that matters.
Thanks - I wish this could be drilled into by category, i.e. what the stats are for categories of import (filtering out sports, crypto, etc.). My worry is that the average could appear rosier if the share of trivial events is high.
I don't know if it's just getting older or some deeper change in society, but more and more, reading how my peers view the world depresses me. Even beyond the specific issues with prediction markets, there is a whole lot more to understanding our world than merely knowing the rough odds of possible outcomes.
On the other hand it's a boon to those establishing new businesses. And a huge boon to employees. And a boon to the overall economy because it accelerates transfer of know-how out of older and more dysfunctional companies into newer and more nimble ones. This is what made Silicon Valley what it is, starting all the way back with the Traitorous Eight in 1957 and continuing today.
There are so many wannabe "New Silicon Valley" areas that are unwilling to copy the non-compete ban, and they consequently fail to compete with the real Silicon Valley. It's a necessary ingredient in my opinion.
Once you have matched humans on a problem, further progress on that problem is not necessarily meaningful anymore, in terms of quantitative measurement of intelligence. ARC-AGI-3 is designed to compare AIs to humans, not to measure arbitrarily high levels of superhuman intelligence. For that you would want a different benchmark.
On the public set of 25 problems. These are intended for development and testing, not evaluation. There are 110 private problems for actual evaluation purposes, and the ARC-AGI-3 paper says "the public set is materially easier than the private set".
Benchmarks on public tests are too easy to game. The model owners can just incorporate the answers into the dataset. Only the private problems actually matter.
The harness seems extremely benchmark-specific, which gives them a huge advantage over what most models can use. This isn't a qualifying score for that reason.
I agree it's not cheating in that restricted sense. But I'm not really convinced that it can't be cheating in a more general sense. You can try something like 10^10 variations of harnesses and select the one that performs best. If you then look at it, it probably won't look like cheating. But you have biased the estimator by selecting the harness according to its score.
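A quick simulation of that selection effect (all the numbers here are made up): give every harness variant the same true skill, evaluate each on a fixed problem set, and keep the best. The selected score overstates the true pass rate even though no individual run is dishonest.

```python
# Toy demo of selection bias: identical harnesses, noisy evaluation.
import random

random.seed(0)
TRUE_PASS_RATE = 0.30   # every harness variant is really this good
N_PROBLEMS = 110        # hypothetical private eval set size
N_HARNESSES = 1000      # variants tried during development

def eval_once() -> float:
    # Score of one harness on one run: each problem passes independently.
    solved = sum(random.random() < TRUE_PASS_RATE for _ in range(N_PROBLEMS))
    return solved / N_PROBLEMS

scores = [eval_once() for _ in range(N_HARNESSES)]
print(f"true pass rate: {TRUE_PASS_RATE:.2f}")
print(f"mean score:     {sum(scores) / len(scores):.2f}")  # ~0.30, unbiased
print(f"selected score: {max(scores):.2f}")                # noticeably inflated
```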
Once the model has seen the questions and answers in the training stage, the questions are worthless. Only a test using previously unseen questions has merit.
All traffic is monitored, all signal sources are eventually incorporated into the training set in one way or another. The person you're responding to is correct, even a single API call to any AI provider is sufficient to discount future results from the same provider.
OK! So if someone uses an existing, checkpointed, open-source model, then the answer is yes: the results are valid, and it doesn't matter that the tests are public.
You live in a conspiracy world. Those AI providers don't update their models that fast. You can try asking them to solve ARC-AGI-3 without a harness yourself and watch them struggle, same as yesterday.
Where do you see that? I only skimmed the prompts but don't see any aspects of any of the games explained in there. There are a few hints which are legitimate prior knowledge about games in general, though some of it looks too inflexible to me. Prior knowledge ("Core priors") is a critical requirement of the ARC series; read the reports.
The test doesn't prove you have AGI. It proves you don't have AGI. If your AI can't solve these problems that humans can solve, it can't be AGI.
Once the AIs solve this, there will be another ARC-AGI. And so on until we can't find any more problems that can be solved by humans and not AI. And that's when we'll know we have AGI.
An AI X that can solve the tests, contrasted with an AI Y that cannot, all else being equal, means X is closer to AGI than Y. There's no meaningful scale implicit in the tests, either.
Kinda crazy that Yudkowsky and all those rationalists and enthusiasts spent over a decade obsessing over this stuff, and we've had almost 80 years of elite academics pondering on it, and none of them could come up with a meaningful, operational theory of intelligence. The best we can do is "closer to AGI" as a measurement, and even then, it's not 100% certain, because a model might have some cheap tricks implicit to the architecture that don't actually map to a meaningful difference in capabilities.
Will there be a point in that series of ARC-AGI tests where AI can design the next test, or is designing the next test always going to be a problem that can be solved by humans and not AI?
I don't see why AI couldn't design tests. But they can only be validated by humans, as they are intended to be possible and ideally easy for humans to solve.
Yes, but I guess you see what I'm getting at. If designing the next ARC-AGI test is impossible for AI without a human in the loop, then AGI becomes unreachable by definition.
It doesn't prove anything of the sort. ARC-AGI has always been nothing special in that regard, but this one really takes the cake. A "human baseline" that isn't really a baseline, and a scoring scheme so convoluted that a model could beat every game in reasonable time and still score well below 100. Really, what are we doing here?
That Francois had to do all this nonsense should tell you something about where we are right now.
Linux's native semaphores are enough. Linux has managed to be very performant without it. That feature seems way too over-engineered for little gain.
Valve built more games than Epic in the past 10 years. Epic essentially only released Robo Recall and Fortnite + extra content, plus a spinoff of Rocket League which was an acquisition. Valve released a couple of duds (Artifact, Dota Underlords) but also some good games: Half-Life: Alyx, Counter-Strike 2, and Deadlock. They also did "The Lab" and "Aperture Desk Job" which, while not full games, were quite good as demos for their hardware.
I'm sure any studio would trade their entire decade of portfolio to get where Fortnite is. Sony did in fact basically do that, to great failure (despite Helldivers 2 being very well received, it's no Fortnite).
> the key insight is that changes should be flagged as conflicting when they touch each other
Not really. Changes should be flagged as conflicting when they conflict semantically, not when they touch the same lines. A rename of a variable shouldn't conflict with a refactor that touches the same lines, and a change that renames a function should conflict with a change that uses the function's old name in a new place. I don't think I would bother switching to a new VCS that didn't provide some kind of semantic understanding like this.
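As a toy illustration of the rename case (the file contents and the names fetch/fetch_all are hypothetical): a line-based three-way merge would accept this file cleanly because the two branches touched different lines, while even a shallow semantic check catches the stale reference.

```python
# Branch A renamed fetch() to fetch_all(); branch B added a new call
# to the old name on untouched lines, so a textual merge "succeeds".
import ast

merged = """
def fetch_all(url):
    return url

def report():
    return fetch("https://example.com")  # stale call from branch B
"""

tree = ast.parse(merged)
defined = {n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}
called = {n.func.id for n in ast.walk(tree)
          if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)}

# Flag calls to names that no longer exist in the merged result.
unresolved = called - defined
if unresolved:
    print(f"semantic conflict: calls to undefined names: {sorted(unresolved)}")
```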