
It’s very simple: xAI needs money to win the AI race, so the best option is to attach to Elon’s moneybank (SpaceX) to get cash without dilution.




> xAI needs money to win the AI race

Off on a tangent here but I'd love for anyone to seriously explain how they believe the "AI race" is economically winnable in any meaningful way.

Like what is the believed inflection point that changes us from the current situation (where all of the state-of-the-art models are roughly equal if you squint, and the open models are only like one release cycle behind) to one where someone achieves a clear advantage that won't be reproduced by everyone else in the "race" virtually immediately.


I _think_ the idea is that the first one to hit self improving AGI will, in a short period of time, pull _so_ far ahead that competition will quickly die out, no longer having any chance to compete economically.

At the same time, it'd give the country controlling it so much economic, political and military power that it becomes impossible to challenge.

I find that all to be a bit of a stretch, but I think that's roughly what people talking about "the AI race" have in mind.


Like any other mega-scaler, they're just playing Money Chicken.

Everyone is spending crazy amounts of money in the hopes that the competition will tap out because they can't afford it anymore.

Then they can cool down on their spending and increase prices to a sustainable level because they have an effective monopoly.


Money Chicken is the best term I've seen for this!

They ultimately want to own everyone's business processes, is my guess. You can only jack up the subscription prices on coding models and chatbots by so much, as everyone has already noted... but if OpenAI runs your "smart" CRM and ERP flows, they can really tighten the screws.

If you have the greatest coding agent under your thumb, eventually you orient it toward eating everything else instead of letting everybody else use your agent to build software & make money. Go forward ten years, it's highly likely GPT, Gemini, maybe Claude - they'll have consumed a very large amount of the software ecosystem. Why should MS Office exist at all as a separate piece of software? The various pieces of Office will be trivial for the GPT (etc) of ten years out to fully recreate & maintain internally for OpenAI. There's no scenario where they don't do what the platforms always do: eat the ecosystem, anything they can. If a platform can consume a thing that touches it, it will.

Office? Dead. Box? Dead. DropBox? Dead. And so on. They'll move on anything that touches users (from productivity software to storage). You're not going to pay $20-$30 for GPT and then pay for DropBox too, OpenAI will just do an Amazon Prime maneuver and stack more onto what you get to try to kill everyone else.

Google of course has a huge lead on this move already with their various prominent apps.


Dropbox is actually a great example of why this isn't likely to happen. Deeper-pocketed competition with tons of cloud storage and the ability to build easy upload workflows (including directly into software with a massive install base) exists and has shown an active interest in competing with them. Dropbox is still doing OK.

Office's moat is much bigger (and its competition is already free). "New vibe coded features every week" isn't an obvious reason for Office users to switch away from the platform their financial models and all their clients rely on to a new upstart software suite.


> Off on a tangent here but I'd love for anyone to seriously explain how they believe the "AI race" is economically winnable in any meaningful way.

Because the first company to have a fully functioning AGI will most likely be the most valuable in the world. So it is worth all the effort to be the first.


> Because the first company to have a fully functioning AGI will most likely be the most valuable in the world.

This may be what they are going for, but there are two effectively religious beliefs with this line of thinking, IMO.

The first is that LLMs lead to AGI.

The second is that, even if the first turned out to be true, they wouldn't all stumble into AGI at around the same time. Given how relatively lockstep all of the models have been for the past couple of years, everyone arriving together seems far more likely to me than any single company having a breakthrough the others don't immediately reproduce.


Remember how he argued for Tesla’s SolarCity acquisition because solar roofs?

Data centers in space are the same kind of justification imo.


Solar roofs are much more practical to be honest.

Putting solar roofs on a building? For a car company?

There's a synergy effect here - Tesla sells you a solar roof and car bundle, the roof comes without a battery (making it cheaper) and the car now gets a free recharge whenever you're home (making it cheaper in the long term).

Of course that didn't work out with this specific acquisition, but overall it's at least a somewhat reasonable idea.


In comparison to datacenters in space, yes. Solar roofs are already a profitable business, just not likely to be high growth. Datacenters in space are unlikely to ever make financial sense, and even if they did, they are very unlikely to show high growth given the ongoing high capital expenses inherent in the model.

I think a better critique of space-based data centres is not that they never become high growth; it's that if they ever do, it implies an economy so radically different from the one we live in that all our current ideas about wealth and nations and ownership and morality and crime & punishment will seem quaint and outdated.

The "put 500 to 1000 TW/year of AI satellites into deep space" for example, that's as far ahead of the entire planet Earth today as the entire planet Earth today is from specifically just Europe right after the fall of Rome. Multiplicatively, not additively.

There's no reason to expect any current business (or nation, or any given asset) to survive that kind of transition intact.


For an electrification company.

It's obviously a pretty weird thing for a car company to do, and is probably just a silly idea in general (it has little obvious benefit over normal solar panels, and is vastly more expensive and messier to install), but in principle it could at least work, for some value of "work". The space datacenter thing is a nonsensical fantasy.

> win the AI race

I keep seeing that term, but if it does not mean "AI arms race" or "AI surveillance race", what does it mean?

Those are the only explanations that I have found, and neither is any race that I would like to see anyone win.


Big tech businesses are convinced that there must be some profitable business model for AI, and are undeterred by the fact that none has yet been found. They want to be the first to get there, raking in that sweet sweet money (even though there's no evidence yet that there is money to be made here). It's industry-wide FOMO, nothing more.

Typically in capitalism, if there is any profit, competition races it towards zero. The alternative is a race to bankrupt all competitors at enormous cost in order to jack up prices and recoup the losses as a monopoly (or duopoly, or some other stable arrangement). I assume the latter is the goal, but that means burning through something like 50%+ of American GDP growth just to be undercut by China.

Imo I would be extremely angry if I owned any SpaceX equity. At least Nvidia might be selling to China in the short term... what's the upside for SpaceX?


> The alternative is a race to bankrupt all competitors at enormous cost in order to jack up prices and recoup the losses as a monopoly

I don't know of an instance of this happening successfully.


Walmart? It's certainly more successful in physical markets

See Amazon

Different markets entirely: I can't walk into Amazon, and I don't order online from Walmart.

You can order online from Walmart:

https://www.walmart.com/

Amazon can ship it to a location near you.


Again, different markets, because I'm not going to do either of those things: if I'm ordering online, Amazon has better selection, and if I want to walk somewhere to pick something up, I'm not going to wait for shipping.

Are you saying that Amazon is a successful monopoly, or that Amazon is even with massive expenses still not a full monopoly?

Walmart competes with Amazon.

Taxi apps, delivery apps, social media apps: all of these require a market that's extremely expensive to build but is also extremely lucrative to exploit and difficult to unseat. You see this same model with big-box stores displacing local stores. The secret to making a lot of money under capitalism is to have a lot of money to begin with.

Taxis are a government created monopoly.

None of the big-box stores have created a monopoly.

Amazon unseated behemoth Walmart with a mere $300,000 startup capital.

Musk founded his empire with $28,000.


> Taxis are a government created monopoly.

Taxi apps: Uber & Lyft. They moved into an area (often illegally), spent a shit-ton of money to displace local legal taxis, and then jacked up prices when the competition ceased to exist. Now I can't hail a taxi anymore if I don't have a phone.

> None of the big-box stores have created a monopoly.

They do in my region. Mom and pop shops are gone.

> Amazon unseated behemoth Walmart with a mere $300,000 startup capital.

We've been over this—they occupy different markets.

> Musk founded his empire with $28,000.

Sure. It would have been far easier to do with more capital.


Uber and Lyft compete with each other. The higher prices resulted from government mandates on pay for the drivers.

Amazon and Walmart do compete with each other. Neither has a monopoly. Nor have I noticed jacked up prices from them.


Amazon

See Walmart

People keep saying this but it's simply untrue. AI inference is profitable. OpenAI and Anthropic have 40-60% gross margins. If they stopped training and building out future capacity, they would already be raking in cash.

They're losing money now because they're making massive bets on future capacity needs. If those bets are wrong, they're going to be in very big trouble when demand levels off lower than expected. But that's not the same as demand being zero.


Those gross profit margins aren't that useful, since training at fixed capacity is continually getting cheaper, so there's a treadmill effect where staying in business requires constantly training new models to not fall behind. If the big companies stop training models, they only have a year before someone else catches up with way less debt and puts them out of business.

Only if training new models leads to better models. If the newly trained models are just a bit cheaper but not better, most users won't switch. Then the entrenched labs can stop training so much and focus on profitable inference.

If they really have 40-60% gross margins, as training costs go down, the newly trained models could offer the same product at half the price.
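A rough back-of-the-envelope of that claim, with made-up numbers (the 50% margin and the entrant's markup below are illustrative assumptions, not reported figures):

price = 1.00                                 # what the incumbent charges per unit of inference
gross_margin = 0.50                          # assumed ~50% gross margin on serving
serving_cost = price * (1 - gross_margin)    # 0.50 per unit to serve
entrant_price = serving_cost * 1.10          # 0.55: a thin markup over the same serving cost
print(entrant_price)                         # an entrant with little training debt undercuts at roughly half price

If serving costs are comparable, the only thing propping up the incumbent's price is the training spend it needs to recoup.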

Well, that's why the labs are building these app-level products like Claude Code/Codex: to lock their users in. Most of the money here is in business subscriptions, I think. How much in savings would be required for businesses to switch to products that aren't better, just cheaper?

I think the real lock-in is in "CLAUDE.md" and similar rulesets, which are heavily AI specific.

Why would they be heavily "AI specific", when we're being told these things are approaching AGI and can just read arbitrary work documents?

> OpenAI and Anthropic have 40-60% gross margins.

Stop this trope please. We (1) don't really know what their margins are and (2) because of the hard tie-in to GPU costs/maintenance we don't know (yet) what the useful life (and therefore associated OPEX) is of GPUs.

> If they stopped training and building out future capacity they would already be raking in cash.

That's like saying "if car companies stopped researching how to make their cars more efficient, safer, more reliable they'd be more profitable"


It will be genuinely interesting to see which happens first: the discovery of such a model, or the bubble bursting.

A significant number of AI companies and investors are hoping to build a machine god. This is batshit insane, but I suppose it might be possible. Which wouldn't make it any more sane.

But when they say, "Win the AI race," they mean, "Build the machine god first." Make of this what you will.


On the edge of my seat waiting to see what hits us first, a massive economic collapse when the hype runs out, or the Torment Nexus.

It really seems like the market has locked in on one of those two things being a guaranteed outcome at this point.

It’s a graft to keep people distracted and allow for positioning as we fall off the end of the fossil energy boom.

It’s a framing device to justify the money, the idea being the first company (to what?) will own the market.

Being too far ahead for competitors to catch up, similar to how Google won browsers, Amazon won distribution, etc.

I’m not certain SpaceX is generating much cash right now?

Starship development is consuming billions. F9 & Starlink are probably profitable?

I’d say this is more about shifting the future burden of xAI onto one of his companies that he knows will be a hit stonk when it goes public, where enthusiasm is unlikely to be dampened by another massive cash drain on the books.


That may be the plan, but this is also a great way for GDPR's maximum fine, based on global revenue, to bite on SpaceX's much higher revenue. And without any real room for argument.


