Steam reached a new peak of 42 million concurrent players today [1]. An average mid-tier gaming PC draws about 0.2 kWh per hour [2]. 42 million * 0.2 kWh gives 8,400,000 kWh per hour, or 8,400 MWh per hour.
By contrast, training GPT-3 was estimated to have used 1,300 MWh of energy [3].
This does not account for the training costs of newer models, nor for inference costs. But we know inference is extraordinarily inexpensive and energy efficient [2]. Even using the low-end estimate of per-PC power draw, one hour of Steam's peak concurrent player count uses roughly 6.5x the total energy that went into training GPT-3.
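For anyone who wants to poke at the assumptions, here is the same back-of-envelope arithmetic as a runnable sketch; the three constants are just the cited figures above, so swap in your own if you dispute them:

```python
# Back-of-envelope comparison: one hour of peak Steam play vs. GPT-3 training.
peak_players = 42_000_000    # Steam concurrent peak [1]
pc_kwh_per_hour = 0.2        # mid-tier gaming PC, ~200 W draw [2]
gpt3_training_mwh = 1_300    # estimated GPT-3 training energy [3]

hourly_mwh = peak_players * pc_kwh_per_hour / 1_000   # kWh -> MWh
print(f"One hour of peak Steam play: {hourly_mwh:,.0f} MWh")              # 8,400 MWh
print(f"Ratio to GPT-3 training: {hourly_mwh / gpt3_training_mwh:.1f}x")  # ~6.5x
```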
I was skeptical of the LLM energy use claim, so I went looking for numbers on energy usage in a domain that most people do not worry about or actively perceive as a net negative. Gaming is a very big industry ($197 billion in 2025 [1], compared to the $252 billion in private AI investment for 2025 [2]) and mostly runs on the same hardware as LLMs. So it's a good gut check.
I have not seen evidence that LLM energy usage is out of control. It appears to be much less than gaming's. But please feel free to provide sources that prove otherwise.
The question is whether claims about AI energy use have substance, or whether there are other industries that should concern us more. Either people are truly worried about the cost of energy, or it's a misplaced excuse to reinforce opinions they already hold.
I see no point in making this a numbers game. (Like, I was supposed to say "five" or something?)
Let's make it more of a category thing: when AI proves responsible for a new category of life-saving technique, like a cure for cancer or Alzheimer's, then I'd have to reconsider.
(And even then, it will be balanced against rising sea levels, extinctions, and other energy use effects.)
Search GitHub for commits authored from .edu, .ac.uk, etc. email addresses and spend a few days understanding what they've been building over the past few years. Once you've picked your jaw up off the floor, take another 10 minutes to appreciate that this is just the public code from some researchers, and is crumbs compared to what is being built right now behind closed doors.
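If you want a starting point, GitHub's REST API exposes commit search with an author-email qualifier. A minimal sketch in Python follows; matching on a bare domain like "mit.edu" rather than a full address is an assumption on my part, so verify the qualifier's behavior against the docs:

```python
# Sketch: list recent commits whose author email matches a university domain,
# via GitHub's commit-search endpoint (GET /search/commits).
import requests

resp = requests.get(
    "https://api.github.com/search/commits",
    params={"q": "author-email:mit.edu", "sort": "committer-date", "order": "desc"},
    headers={"Accept": "application/vnd.github+json"},
    timeout=30,
)
resp.raise_for_status()
for item in resp.json()["items"][:10]:
    subject = (item["commit"]["message"].splitlines() or [""])[0]
    print(item["repository"]["full_name"], "-", subject)
```

Unauthenticated requests to the search API are heavily rate-limited, so pass a token for any real digging.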
Tenured professors are abdicating their teaching positions to work on startups. Commercial labs are pouring billions into tech that was unreachable just a few years ago. Academic labs are scaling their intern cohorts down 20x. Historically secretive companies are opening their doors to build industry partnerships.
The scale of what is happening is difficult to comprehend.
Local LLMs that you can run on consumer hardware don't really do anything, though. They're amusing, and maybe you could use them for basic text search, but they don't have any real knowledge the way the hosted ones do.
Gemma 3 27B, plus plenty of models in the 8-16B range and up to about 32B, can run on hardware that fits the "consumer" bracket. RAM is more expensive now, but most people can afford a machine with 32 GB and maybe a small graphics card.
Small models don't have as much world knowledge as very large models (proprietary or open source), but you don't always need it. They can still do a lot: OCR and image captioning, tagging, following well-defined instructions, general chat, and some coding are all things local models do pretty well.
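As a concrete example, here is a minimal sketch of a tagging task using llama-cpp-python. The GGUF filename is a placeholder; point it at whatever quantized build you actually downloaded (a ~4-bit Gemma 3 27B quant fits in roughly 16-20 GB):

```python
# Sketch: a well-defined instruction-following task (topic tagging)
# running entirely on local hardware.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-3-27b-it-Q4_K_M.gguf",  # placeholder: your downloaded quant
    n_ctx=4096,
    n_gpu_layers=-1,  # offload as many layers as the GPU will hold
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Reply with 3-5 comma-separated topic tags only."},
        {"role": "user", "content": "Steam hit 42 million concurrent players while debate continues over LLM energy use."},
    ],
    max_tokens=64,
    temperature=0.2,
)
print(out["choices"][0]["message"]["content"])
```

No world knowledge required; the model only has to follow the instruction, which is exactly where small local models hold up.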