Hacker News — AlexCoventry's comments

You only need to train a range of small models in order to establish a plausible scaling law, IMO.
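For what it's worth, the mechanics of that are simple enough to sketch: train a handful of small models, fit a power law in log-log space, extrapolate. This is a hypothetical illustration; all the model sizes and losses below are invented.

```python
# Hypothetical sketch of establishing a scaling law from small-model runs.
# All numbers are made up for illustration.
import numpy as np

params = np.array([1e7, 3e7, 1e8, 3e8, 1e9])  # small-model sizes (assumed)
loss = np.array([4.2, 3.6, 3.1, 2.7, 2.4])    # their eval losses (assumed)

# A power law L(N) = a * N**slope is a straight line in log-log space.
slope, intercept = np.polyfit(np.log(params), np.log(loss), 1)
a = np.exp(intercept)

# Extrapolate to a model 10x larger than the biggest one actually trained.
predicted = a * (1e10) ** slope
print(f"fitted exponent {slope:.3f}, predicted loss at 1e10 params: {predicted:.2f}")
```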

I don't think this is giving up. He's getting inside information on how Claude works, and a huge stream of Claude usage data. This will all inform future grok development, IMO.

So now Elon Musk gets to read all of our Claude conversations?? :-(

Yes, if this turns into a mass famine/deindustrialization, Americans are going to own it the way Germans owned the holocaust.

We've already taken 600,000 lives by being complicit in foreign national Elon Musk's genocide in Africa. Most of them children.

It's not genocide to stop handouts to the third world. It's genocide to go around murdering white farmers en masse to take their land, as is happening now in South Africa and previously happened in Zimbabwe.

Yes, I saw it too on Pravda Sozial.

what are you talking about

They are probably talking about claims like this: https://www.doge-impact.org/

The Germans owned the holocaust because they lost WW2 and afterwards became a vassal state of the Allies and later just the US. History is written by the victors.

The Germans "owned" the holocaust because the Nazis (German) started, conducted, and maintained the systematic collection, extermination, and destruction of certain classes of the population under their control.

Who else should have "owned" it?


I assume the point is that what made them acknowledge and repent for what they did is that they lost the war.

Many massacres and genocides are "owner-less" and obscured by history. To give one example: the Trail of Tears is not as front-and-center in US history teaching as the holocaust is in German history teaching.

You'll find similar situations for all colonial powers who didn't get dismantled and forced to accept their wrongs after losing a war. You may even go as far as to say that Germany is the outlier here.


I think people are more concerned about the massive deindustrialization and famines which could result from the Strait of Hormuz being chaotically strangled, not the hit to their pocketbooks at the gas pump.

That's very unfortunate. How did it have access to the production DB in the first place?

I'm thinking twice about running Claude in an easily violated docker sandbox (weak restrictions because I want to use NVIDIA Nsight with it). At this stage, at least, I'd never give it explicit access to anything I cared about it destroying.
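For reference, a "strong restrictions" sandbox would look something like the invocation below. The image name and mount path are placeholders, and it's the GPU passthrough (needed for Nsight) that weakens it in practice.

```shell
# Hypothetical hardened invocation; "my-agent-image" and the mount path are
# placeholders. Read-only root FS and dropped capabilities limit blast radius,
# but --gpus all exposes the host driver to the container, which is why GPU
# profiling work forces the restrictions to be "weak".
docker run --rm -it \
  --read-only --tmpfs /tmp \
  --cap-drop ALL --security-opt no-new-privileges \
  --pids-limit 256 --memory 4g --cpus 2 \
  --gpus all \
  -v "$PWD/workspace:/workspace" \
  my-agent-image
```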

Even if someone gets them to reliably follow instructions, no one's figured out how to secure them against prompt injection, as far as I know.


You need to be more specific. OpenAI's commitment to assist the Trump administration with domestic mass surveillance seems to have been largely memory-holed.


You're right, unfortunately. How naive of me to think that at least the HN audience would care.


Yeah, it's eerie, same with how everyone seems to have forgotten that OpenAI betrayed democracy by committing to work on unsupervised autonomous weapons and domestic mass surveillance.


Honestly, I find comments like yours much more eerie. By all accounts they never agreed to any of that, but you say it with such confidence, like it's a fact.


The Trump administration's handling of Anthropic showed that regardless of what the contract or the law says or means, they will severely penalize any vendor who refuses their demands. And OpenAI stepped right into that relationship immediately after the administration showed that. So either they were signing up for a supply-chain risk designation and whatever other punishments the Trump administration dreams up, or they're complying.

If this sounds crazy to you, though, I'd like to know, and understand why. I miss ChatGPT/Codex.


> regardless of what the contract or the law says

That is not really established. The Anthropic issue was specifically about DoD use and Anthropic's military use restrictions. What the Trump admin did was bad and coercive, but it's not proof that contract terms and law are irrelevant. For instance, why not just use eminent domain if they don't care about contracts and want whatever they want?

> either they were signing up for a supply-chain risk designation and whatever other punishments the Trump administration dreams up, or they're complying

Couldn't OpenAI have negotiated different terms, accepted a narrower scope, or drawn different red lines? Their public DoD terms still exclude things like mass domestic surveillance and autonomous weapons outside human control. Do you not believe that, or believe it doesn't matter at all? Either position undermines the conclusions you're drawing from them.

I also think the whole argument implies something about Anthropic's position that's not as clean in reality. NSA is already using Mythos despite the Pentagon dispute, and Anthropic is still talking to the administration. Trump even said they were "shaping up" recently.

Isn't it also a possibility that one company negotiated poorly and took a position of perceived moral authority that Trump et al threw a hissy fit over and overreacted to? That's happened countless times with this admin, and it's far more likely in my opinion, given Anthropic hasn't cut all ties and continues to try to work out a contract.

I wholeheartedly agree the current administration is dangerous. I just don't think the conclusion "OpenAI must be complying with the same demands Anthropic refused" follows from what we've seen. And I think there are plenty of other far more plausible conclusions to draw from the events.


> For instance, why not just use eminent domain if they don't care about contracts and want whatever they want?

They were threatening Anthropic with the Defense Production Act[1], which almost comes to the same thing as eminent domain, forcing the provision of goods and services instead of forcing relinquishment of property.

> Do you not believe that or believe it doesn't matter at all?

I don't think it matters at all. The Trump administration is full of scofflaw bullies. Their threats against Anthropic are actually relatively tame, compared to their bullying of Minnesota and the horrific human-rights violations they've committed against immigrants, despite multiple court orders trying to rein them in. Anyone doing business with them is either enthusiastically complying, has some kind of hold over them beyond law or contract, or is setting themselves up for harsh punishment.

> I also think the whole argument implies something about Anthropic's position that's not as clean in reality.

Anthropic software is embedded in military and intelligence services, and that takes time to wind down. My understanding is that it will take months.[2] So yeah, it's a messy, time-consuming divorce, but the origin of the conflict is actually very clear cut.

The NSA has two sides, defensive and offensive. Given Anthropic's approach to restricted release of Mythos, I assume they're releasing it to the defensive side. Anthropic has always taken the position that they're willing to help secure the US, they're just not willing to help turn it into a tyranny. Apparently someone has convinced Trump and Hegseth that there's more at stake with Mythos than looking tough on a dissident company.

> Isn't it also a possibility that one company negotiated poorly and took a position of perceived moral authority that Trump et al threw a hissy fit over and over reacted to?

Not really. It's the Trump administration which has negotiated poorly, by capriciously pushing its counterparty around, trying to force it into illegal/immoral/dangerous activity.

> Trump even said they were "shaping up" recently.

He's also repeatedly said he has a workable deal with the Iranians. Do you trust his claims about any of his counterparties?

> And I think there are plenty of other far more plausible conclusions to draw from the events.

I'd be interested in your interpretation.

[1] https://www.axios.com/2026/02/24/anthropic-pentagon-claude-h... https://archive.ph/iQebR

[2] https://federalnewsnetwork.com/defense-news/2026/03/dod-conf...


Is it hitting intermediate milestones with solid pre-written and human-reviewed acceptance tests? If not, sounds like a very risky commitment.


I went to a conference and talked to all the vendors about their products.

