
How different is this from rental car companies changing over their fleets? I don't know, this is a genuine question. The cars cost 3-4x as much and last about 2x as long, as far as I know, and the secondary market is still alive.




> How different is this from rental car companies changing over their fleets?

New generations of GPUs leapfrog in efficiency (performance per watt) and vehicles don't? Cars don't get exponentially better every 2–3 years, meaning the second-hand market is alive and well. Some of us are quite happy driving older cars (two parked outside our home right now, both well over 100,000km driven).

If you have a datacentre with older hardware, and your competitor has the latest hardware, you face the same physical space constraints, same cooling and power bills as they do? Except they are "doing more" than you are...

Maybe we could call it "revenue per watt"?


The traditional framing would be cost per flop. At some point your total cost per flop over the next 5 years will be lower if you throw out the old hardware and replace it with newer, more efficient models. With traditional servers that's typically after 3-5 years; with GPUs, 2-3 years sounds about right.
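A minimal sketch of that break-even, with completely made-up numbers (the hardware price, power draw, facility cost per kW, and performance ratio below are all assumptions, just to show the cost-per-flop framing):

    # Hypothetical cost-per-flop comparison: keep the old GPU vs. replace it.
    # Every number below is made up purely for illustration.

    HOURS_PER_YEAR = 8760
    POWER_PRICE = 0.10                   # $/kWh, assumed
    FACILITY_COST_PER_KW_YEAR = 3000.0   # $/kW/year for space + cooling, assumed

    def cost_per_pflop_hour(capex, power_kw, pflops, years):
        """Total cost over the horizon divided by total compute delivered."""
        energy = power_kw * HOURS_PER_YEAR * years * POWER_PRICE
        facility = power_kw * FACILITY_COST_PER_KW_YEAR * years
        delivered = pflops * HOURS_PER_YEAR * years
        return (capex + energy + facility) / delivered

    # Old card: already paid for (capex 0), 1 PFLOPS at 1 kW.
    keep = cost_per_pflop_hour(capex=0, power_kw=1.0, pflops=1.0, years=5)

    # New card: $30k up front, 3x the throughput in the same power/space envelope.
    replace = cost_per_pflop_hour(capex=30_000, power_kw=1.0, pflops=3.0, years=5)

    print(f"keep:    ${keep:.2f} per PFLOP-hour")
    print(f"replace: ${replace:.2f} per PFLOP-hour")

With these invented numbers replacement wins (~$0.38 vs ~$0.44 per PFLOP-hour); shrink the efficiency jump or raise the hardware price and keeping the old card wins instead, which is roughly where the 2-3 year crossover comes from.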

The major reason companies now keep their old GPUs around much longer is the supply constraints.


The used market is going to be absolutely flooded with millions of old cards. I imagine shipping will be the biggest cost for them. The supply side will be insane.

Think of the ratio as 100 cards to 1 buyer. Profit for eBay sellers will be on "handling", or inflated shipping costs.

e.g. shipping and handling.


I assume NVIDIA and co. already protect themselves in some way, either by the fact that these cards aren't very useful after resale, or by requiring them to go to the grinder after they expire.

In the late '90s, when CPUs were seeing the kind of advances GPUs are now seeing, there wasn't much of a market for two- or three-year-old CPUs. (According to a graph I had Gemini create, the Pentium had 100 MFLOPS and the Pentium 4 had 3000 MFLOPS.) I bought motherboards that supported upgrading, but never bothered, because what's the point of going from 400 MHz to 450 MHz when the new ones are 600 or 800 MHz?

I don't think nVidia will have any problem there. If anything, hobbyists being able to use 2025 cards would increase their market by discovering new uses.


Cards don't "expire". There are alternate strategies to selling cards, but if they don't sell the cards, then there is no transfer of ownership, and therefore NVIDIA is entering some form of leasing model.

If NVIDIA is leasing, then you can't use those cards as collateral. You also can't write off depreciation. Part of what we're discussing is that terms of credit are being extended too generously, with depreciation in the mix.

They could require some form of contractual arrangement, perhaps volume discounts on cards if buyers agree to destroy them at a fixed time. That's very weird though, and I've never heard of such a thing for datacenter gear.

They may protect themselves on the driver side, but someone could still write open-source drivers.


Don't they use their own socket for enterprise cards? I can't see consumers buying these cards unless they are PCIe at the very least.

Rental car companies aren’t offering rentals at deep discount to try to kickstart a market.

It would be much less of a deal if these companies were profitable and could cover the costs of renewing hardware, like car rental companies can.


I think it's a bit different because a rental car generates direct revenue that covers its cost. These GPU data centers are being used to train models (which themselves quickly become obsolete) and provide inference at a loss. Nothing in the current chain is profitable except selling the GPUs.

> and provide inference at a loss

You say this like it's some sort of established fact. My understanding is the exact opposite and that inference is plenty profitable - the reason the companies are perpetually in the red is that they're always heavily investing in the next, larger generation.

I'm not Anthropic's CFO so I can't really prove who's right one way or the other, but I will note that your version relies on everyone involved being really, really stupid.


The current generation of today was the next generation of yesterday. So, unless the services sold on inference can cover the cost of inference + training AND make money, they are still operating at a loss.

“like it's some sort of established fact” -> “My understanding”?! a.k.a. pure speculation. Some of you AI fans really need to read your posts out loud before posting them.

You misread the literal first snippet you quoted. There's no contradiction in what you replied to.


Or just "everyone" being greedy

> the secondary market is still alive.

This is the crux. Will these data center cards, if a newer model comes out with better efficiency, have a secondary market to sell into?

It could be that second-hand AI hardware going into consumers' hands is how they offload it without huge losses.


The GPUs going into data centers aren't the kind that can just be reused by putting them into a consumer PC and playing some video games; most don't even have video output ports, and they put out FPS similar to cheap integrated GPUs.

And the big ones don't even have typical PCIe sockets, they are useless outside of behemoth rackmount servers requiring massive power and cooling capacity that even well-equipped homelabs would have trouble providing!

Don’t underestimate a homelabber’s intention to cosplay as a sysadmin or ability to set their house on fire ;)

I wonder if people will come up with ways to repurpose those data center cards.


I would presume that some tiered market will arise where the new cards are used for the most expensive compute tasks like training new models, slightly used cards for inference, and older cards for inference on older models, or get applied to other markets that have less compute demand (or spend less $ per flop, like someone else mentioned).

It would be surprising to me that all this capital investment just evaporates when a new data center gets built or refitted with new servers. The old gear works, so sell it and price it accordingly.


Data centre cards don’t have fans and don’t have video out these days.

I don't mean the consumer market for video cards - I mean a consumer buying AI chips to run themselves so they can have it locally.

If I could buy a $10k AI card for less than $5,000, I probably would, if I could use it to run an open model myself.


At that point it isn't a $10k card anymore, it's a $5k card. And possibly not a $5k card for very long in the scenario that the market has been flooded with them.

Ah, well, yes, to a degree that's possible, but at least at the moment you'd still be better off buying a $5k Mac Studio if it's just inference you're doing.

How many "yous" are there in the world? Probably a number that can buy what's inside one Azure DC?

Why would you do that when you can pay someone else to run the model for you on newer more efficient and more profitable hardware? What makes it profitable for you and not for them?

Control and privacy?

You need the hardware to wrap that in, and the power draw is going to be... significant.


