Hacker News | htrp's comments

vibes maybe?

If effective AI-enhanced SWEs can ship features in a week, the guys who ship one feature a quarter stand out more?


Quality matters as well as speed though: reworking comes at a cost, so you really need to be tracking more than one metric. A lot of problems are caused by optimising for one metric above all else.

If it takes 1 quarter to develop a feature and a developer ships a feature in 1 quarter then that makes sense.

If it takes 2 weeks to ship a feature and a developer ships in 1 then yeah, I'm highly suspicious of that.


Was the CTO advocating a more measured approach to AI adoption?

I have a feeling that I have witnessed it, although I was told the CTO decided to move on to other challenges.

>Higher usage limits

>The following three changes—all effective today—are aimed at improving the experience of using Claude for our most dedicated customers.

>First, we’re doubling Claude Code’s five-hour rate limits for Pro, Max, Team, and seat-based Enterprise plans.

>Second, we’re removing the peak hours limit reduction on Claude Code for Pro and Max accounts.

>Third, we’re raising our API rate limits considerably for Claude Opus models,

Looks like Elon's finally giving up on XAI and just selling the compute


> Looks like Elon's finally giving up on XAI and just selling the compute

I don't think that's certain yet, but I do think that the open-source models like Gemma and Qwen are getting so good so fast that even Anthropic has real risk around the long-term value of their models and tooling.

Basically, if I'm Anthropic or xAI, I try to get revenue whenever and wherever possible and see what sticks. There's no value in playing for monopolistic control when everything is so volatile.


There's always money in the gigawatt datacenter

I don't know if it relates to the same data centers, but this also comes hours after several still-recent Grok models were deprecated at short notice. Grok 4.1 Fast is the cheapest way to do research on X (cheaper than the X API!) and it's gone on May 15: https://docs.x.ai/developers/models - freeing up compute to sell?

Fuck, I loved Grok 4.1; it was a really capable model for the money.

I'd run agents consuming hundreds of millions of tokens for less than a hundred dollars.
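To put that claim in perspective, here's a back-of-the-envelope cost sketch. The per-million-token rates below are purely illustrative assumptions, not xAI's published pricing; plug in the vendor's real numbers to check the math for any model.

```python
# Back-of-the-envelope agent run cost, with HYPOTHETICAL per-million-token
# rates (illustrative only; substitute the vendor's actual pricing).
def run_cost(input_tokens, output_tokens, in_rate_per_m, out_rate_per_m):
    """Total cost in dollars, given $/1M-token input and output rates."""
    return (input_tokens / 1e6) * in_rate_per_m + (output_tokens / 1e6) * out_rate_per_m

# e.g. 300M input + 40M output tokens at an assumed $0.20 / $0.50 per million:
cost = run_cost(300_000_000, 40_000_000, 0.20, 0.50)
print(f"${cost:.2f}")  # prints $80.00
```

At rates in that ballpark, "hundreds of millions of tokens for under a hundred dollars" is plausible, since agent traffic is dominated by cheap input tokens.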


Unlikely, because xAI had a huge amount of overcapacity.

Probably a good idea in all honesty. xAI is a deeply unserious lab

From a technical standpoint, xAI is basically Gemini team B who were given A+ salaries to join the company.

But even then, I suspect their hands were tied in some areas because Elon had some expectations from his AI.


Did Google outbid Elon for team A? Or does the A team just not like Elon?

It's an internal joke, since very few high-profile DeepMind engineers accepted his offer despite some serious cash being thrown at them.

Meta engineers on the other hand, couldn't wait to jump ship. But that only reinforces the B team theory.


LLaMA was pretty good at the time

There's only so much determinism you can create when you try not to filter (read: CENSOR) your LLM.

The details are secret. It very well could be wasted GPU time but Anthropic could have made a killer offering as well.

I'm just speculating, but a particularly killer offering Elon wouldn't be able to refuse would be if Anthropic agreed to give them some training data / technology.


Billions in revenue just before your IPO isn't a bad deal either.

The icing on the cake for Elon is that it strengthens the competition to OpenAI.

Or is that actually his main motivation? Hard to know. Either way, it's a win-win-win for him.


That's certainly one way one could spin this.

I guess losing a ton of money then trying to get some of it back makes you a genius...


Yeah real geniuses go down with the ship and never change what they set out to do

Elon has many, many faults, but "losing" money doesn't appear to be one of them. He's literally the richest person alive!

Giving Musk the benefit of the doubt, here's a thought experiment: It doesn't seem like any of the big labs in the US can keep a lead for more than 3 months. The Chinese models are closing in. Even if xAI comes up with the best model, so what?

On the other hand, power and compute are limited. Ridiculous as orbital compute sounds, land/power on earth is not easily scalable. There are too many limiting factors, chief among which in the US is regulation. But in space, if you make one satellite work, you just get more resources and launch more. This also leads naturally to Tesla's plan for a chip fab.

So if you squint, Musk might not be that crazy.


No I don't ever give up. I would have to be dead or completely incapacitated.

-Elon

https://x.com/XFreeze/status/2012390928221094335


I don't think this is giving up. He's getting inside information on how Claude works, and a huge stream of Claude usage data. This will all inform future grok development, IMO.

Question is, will they buy Cursor?

Or he just got leverage on a competitor

Actual title:

New Compute Partnership with Anthropic.

xAI has signed an agreement with Anthropic to provide access to Colossus 1.


Haslem played 72 minutes over the entire 82-game season. That's like the engineering manager who ships a PR once a year.

And to continue with the analogy, he neither replaces the coach nor the actual team players. He just sits on the bench, paid for his additional role. Exactly the contrary of the Coinbase manager-IC, who is supposed to replace 2 jobs in 1.

6-month earnouts... wait until July

you can own your upstream supply chain while simultaneously being less responsive to user pain points

1 9 of uptime later

This seems like an advertisement for an open source package

>Scale Python across 1,000 CPUs or GPUs in 1 second. Burla is a high-performance parallel processing library with an extremely fast developer experience. Scale batch processing, vector embeddings, inference, or build pipelines with dynamic hardware.

Edit: The author's comment was flagged dead. They work at Burla, which is a managed cloud service for parallelizing Python.
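For anyone unfamiliar with the pattern the quoted pitch describes, here is a toy local stand-in using only the standard library. This is not Burla's actual API; Burla's selling point is fanning the same kind of map out across remote machines, which a local thread pool obviously doesn't do.

```python
# Toy local stand-in for a "parallel map over a batch" workload
# (NOT Burla's API; stdlib-only sketch of the general pattern).
from concurrent.futures import ThreadPoolExecutor

def embed(x):
    # Placeholder for per-item work (embedding, inference, etc.)
    return x * x

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(embed, range(8)))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

A managed service of this kind replaces the local executor with a cluster-backed one, so the per-item function runs on remote CPUs or GPUs instead of local threads.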


Looks like it was hit by some sort of automated ChatGPT detector.

What do Legora, Harvey, or Crosby add here other than the default Westlaw/TR/Lexis integrations?

I'd imagine it's like using Cursor/Claude Code vs. a Jetbrains IDE plugin.
