It is highly unlikely consumer GPUs will use HBM any time soon. At least I don't see it happening before 2030 or 2033. HBM is expensive, anywhere between 3-8x the cost of GDDR, with GDDR already being more expensive than LPDDR. And that is without factoring in the current DRAM pricing situation.
That is a value for the entire GPU; what about the memory part itself? Also, consumers don't need 300 GB of it (yet).
But to answer: memory is progressing very slowly. DDR4 to DDR5 was not a meaningful jump, and even PCIe SSDs are slowly catching up to it, which is both funny and sad.
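As a rough back-of-the-envelope illustration (the figures below are my own assumptions for common parts, not from the thread): sequential bandwidth is where SSDs are closing in, while latency is still the real gap.

    # Rough peak-bandwidth comparison with assumed, typical figures (not measurements).
    # DRAM peak = transfer rate (MT/s) * bus width (bytes) * channels.

    def dram_peak_gbs(mt_per_s, bus_bytes=8, channels=2):
        """Theoretical peak DRAM bandwidth in GB/s for a dual-channel desktop setup."""
        return mt_per_s * bus_bytes * channels / 1000

    ddr4 = dram_peak_gbs(3200)   # DDR4-3200, dual channel: ~51 GB/s
    ddr5 = dram_peak_gbs(6400)   # DDR5-6400, dual channel: ~102 GB/s

    # Fast PCIe 5.0 x4 NVMe SSDs advertise roughly 14 GB/s sequential reads.
    ssd = 14.0

    print(f"DDR4-3200 dual channel : {ddr4:5.0f} GB/s")
    print(f"DDR5-6400 dual channel : {ddr5:5.0f} GB/s")
    print(f"PCIe 5.0 x4 SSD (seq.) : {ssd:5.1f} GB/s")
    # Random-access latency is a different story: DRAM ~100 ns vs NVMe in the tens of microseconds.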
As for the use case: I use my memory as a cache for everything. I've maxed out the memory on every system I've used in the last 15-20 years, and I never cared much about the speed of my storage, because after loading everything into RAM, the system and apps feel a lot more responsive. The difference on older systems with HDDs was especially noticeable, but even on SSDs, things have not improved much due to latencies. Of course, any web app that goes out to the network negates the benefit, but it makes a difference with desktop apps.
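For what it's worth, here's a minimal sketch of the "everything in RAM" habit: walk a directory (the path is hypothetical) and read each file once so the OS page cache keeps it in memory. Tools like vmtouch do the same job more directly.

    import os

    def warm_page_cache(root):
        """Read every file under root once so the OS page cache holds it in RAM."""
        total = 0
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as f:
                        # Stream in 1 MiB chunks: Python's memory use stays small,
                        # while the kernel caches the file's pages in RAM.
                        while chunk := f.read(1 << 20):
                            total += len(chunk)
                except OSError:
                    pass  # skip unreadable or special files
        return total

    # Hypothetical usage: warm the cache for a project directory after boot.
    # warm_page_cache("/home/me/projects")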
These days I even have enough memory to run local test VMs, so I don't need to use server resources.
I went into the article thinking "why should this be interesting to me, it will only benefit AI bros anyway, but eh, it's not like I have anything better to read", and lo and behold...
> Mian Quddus, chairman of the JEDEC Board of Directors, said: “JEDEC members are actively shaping the standards that will define next generation modules for use in AI data centers, driving the future of innovation in infrastructure and performance.”
It's nice to see that there is still progress to be made, given that a lot of modern semiconductor technology is at the edge of what plain physics and chemistry allow... but hell, I can't say I'm happy that, like with low-latency/high-bandwidth communications and HFT, it will again be only the uber-rich who get to enjoy the new and fancy stuff for years. It's not like you can afford a decent mid/upper-range GPU these days, thanks to the AI bros.
In ~2016 I wrote on HN that foundry progress would stall around the 3nm/GAA time frame and that we would slow to a three-year node cadence by 2020-2023. It was AI and GPGPU that single-handedly pushed the technology forward to what we have today, including PCI-Express 8.0, multi-layer packaging, optical interconnects, etc. A lot of this will filter down to consumer-market usage or benefits.
> A lot of this will filter down to consumer-market usage or benefits.
Yeah, maybe in a decade. And the "benefits" will be a metric shit-ton of job losses plus a crash that will make the 2000 dotcom bust and the 2007-onward real-estate/euro crises combined look harmless...
Why do you feel entitled to top-of-market products in this space? Are the nicest houses or cars commercially available to you? It's fine to have products outside the limited financial reach of mere mortals.
> It's not like you can afford a decent mid/upper-range GPU these days, thanks to the AI bros.
I mean, Nvidia was greedy even before then, and AMD just did “Nvidia - 50 USD” or thereabouts.
Intel Arc tried to shake up the entry level (retailers spit on that MSRP, though) but sadly didn't make that big of a splash, despite the daily experience being okay (I have the B580). Who knows, maybe their B770 will provide an okay mid-range experience that doesn't feel like being robbed.
Over here, to get an Nvidia 5060 Ti 16 GB I'd have to pay over 500 EUR, which is fucking bullshit, so I don't.
The Intel–Nvidia collaboration has just received the green light from the competition authority, with Nvidia purchasing a 4% stake.
Nvidia is expected to sell GPU intellectual property at a bargain for the entry-level segment, making it unprofitable for Intel to develop a competitive product range of its own. That way, Intel would lack both the in-house competence and the infrastructure to eventually chip away at Nvidia's market share in the higher segments.
> Intel Arc tried to shake up the entry level (retailers spit on that MSRP, though) but sadly didn't make that big of a splash
The Intel Arc B60 probably would have made a splash if they had actually produced any of the damn things. 24 GB of VRAM at a low price would have been huge for the AI crowd, and there was a lot of excitement, but then Intel just didn't offer them for sale.
The company is too screwed up to take advantage of any opportunities.
Hmm, duopolies don't work, you say? I doubt three players will make any difference (see the memory manufacturers). Then again, looking at market share, Nvidia is a monopoly in practice.
The bad part is that everyone wants a seat on the AI money circle-line train (see the various money-flow diagrams floating around), and so everything caters to that. At this point I'd rather have Nvidia and AMD quit the GPU business and focus on "AI" only; that way a new competitor could enter the business and cater to niche applications like consumer GPUs.
https://news.ycombinator.com/item?id=46302002
https://morethanmoore.substack.com/p/solving-the-problems-of...