Hacker News | subroutine's comments

Are you asking if the 10 seconds it takes AI to generate an image is more costly to the environment than a commissioned graphics artist using a laptop for 5-6 hours, or a painter who uses physical media sourced from all over the world?

In short, yes.

A modern laptop is running almost fanless, like a 486 from the days of yore.

A single H200 pumps out 700W continuously in a data center, and you run thousands of them.
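For what it's worth, those two figures reduce to a per-image comparison (a sketch; the 10 s, 700 W, and ~30 W laptop draw are all rough assumptions from this thread, not measurements):

```python
# Per-image energy, back-of-envelope. All inputs are rough assumptions
# from the discussion, not measurements.
H200_POWER_W = 700            # claimed continuous draw of a single H200
INFERENCE_TIME_S = 10         # claimed time to generate one image

LAPTOP_POWER_W = 30           # assumed draw of a near-fanless modern laptop
COMMISSION_TIME_S = 6 * 3600  # a 5-6 hour commission, taking 6 h

def watt_hours(watts: float, seconds: float) -> float:
    """Convert a power draw sustained for some time into watt-hours."""
    return watts * seconds / 3600

gpu_wh = watt_hours(H200_POWER_W, INFERENCE_TIME_S)
laptop_wh = watt_hours(LAPTOP_POWER_W, COMMISSION_TIME_S)
print(f"one AI image: {gpu_wh:.1f} Wh, one commissioned icon: {laptop_wh:.0f} Wh")
```

Per image the GPU comes out far cheaper; the disagreement in this thread is about aggregates (thousands of GPUs, training runs), which this omits.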

Also, don't forget the training and fine tuning runs required for the models.

Mass transportation / global logistics can be very efficient and cheap.

Before the pandemic, it was in some cases cheaper to import fresh tomatoes from half a world away than to grow them locally. A single container of painting supplies is nothing in the grand scheme of things, esp. when compared with what data centers are consuming and emitting.


This argument is so flawed that its conclusion almost loops back around to being correct again:

No, in terms of unit economics, I'm almost certain that the painting supplies have a bigger ecological/resource footprint than an LLM per icon generated, and I'm pretty sure the cost of shipping tomatoes does not decrease that footprint, even if it possibly dwarfs it.

But yes, due to Jevons paradox, the total resource use might well increase despite all that. I, for example, would never have commissioned a professional icon for my silly little iOS shortcuts on my home screen, so my silly-icon-related carbon footprint went from exactly zero to slightly above that.


these are unfair comparisons. it's not just a single laptop running all day, it's all the graphic designer laptops that get replaced. it's not a single container of painting supplies, it's all of them (which are toxic, by the way).

so if power were plentiful and environmental you'd be onboard with it?


> these are unfair comparisons. it's not just a single laptop running all day, it's all the graphic designer laptops that get replaced. it's not a single container of painting supplies, it's all of them (which are toxic, by the way).

Please see my other comment about energy consumption and connect the dots with how open loop DLC systems are harmful to fresh water supplies (which is another comment of mine).

> so if power were plentiful and environmental you'd be onboard with it?

This is a pretty loaded way to ask this. Let me put it straight: I'm not against AI. I'm against how this thing is built. Namely:

    - Use of copyrighted and copylefted materials to train models, hiding under "fair use" to exploit people.
      - Moreover, belittling the people who create things with their blood, sweat and tears while poorly imitating their art just for kicks or a quick buck.
    - Playing fast and loose with the environment and energy consumption, without trying to make things efficient and sustainable, to reduce initial costs and time to market.
    - Gaslighting users and the general community about how these things are built and how much of it is theater, again to make people use them, offload their thinking, atrophy their skills and become dependent.

I work in HPC. I support AI workloads and projects, but the projects we tackle have real, tangible benefits for the future of humanity: ecosystem monitoring, long-term climate science, water level warning and prediction systems, etc. Moreover, we're part of other projects trying to minimize the environmental impact of computation.

So it's pretty nuanced, and the AI iceberg goes well below OpenAI/Anthropic/Mistral trio.


> I support AI workloads and projects, but the projects we tackle have real benefits [...]

As opposed to the illusory/fake/immoral benefits of using LLMs for entertainment purposes (leaving aside all other applications for now)?

How do you feel about Hollywood, or even your local theater production? I bet the environmental unit economics don't look great on those either, yet I wouldn't be so quick to pass moral judgement.

Why not just focus on the environmental impact instead of moralizing about the utility? It seems hard to impossible to get consensus there, and the impact should be able to speak for itself if it's concerning.


This is a plainly dishonest comparison. A single H200 does not need to run continuously for you to generate a dozen pictures. And then you immediately pivot to comparing the paint usage against "the grand scheme of things": 700 W is nothing in the grand scheme of things, either.

In fact it's pretty fair.

Many people think that when a piece of hardware is idle, its power consumption becomes irrelevant, and that's true for home appliances and personal computers.

However, the picture is pretty different for datacenter hardware.

Looking now, an idle V100 (I don't have an idle H200 at hand) draws at least 40 watts. That's more than the TDP of many modern consumer laptops and systems. A MacBook Air charges from a 35W power supply, and it charges pretty quickly even under relatively high load.

I want to clarify a few more things. A modern GPU server houses 4-8 high-end GPUs. That means 3 kW to 5 kW of maximum power draw per server. A single rack runs around 75-100 kW, and you house hundreds of these racks. So we're talking about megawatts of power. CERN's main power line on the Swiss side had a capacity of around 10 MW, to put things in perspective.

Let's assume an H200 draws 60 W when idle. That's ~500 W of wasted power per server just for sitting around. If a complete rack is idle, that's 10 kW. So you're wasting the power draw of 3-5 houses by sitting around doing nothing.

This computation only accounts for the GPUs. The rest of the server hardware adds around 40% on top of these numbers. Go figure. That's wasting a lot for cat pictures.

And, these "small" numbers add up to a lot.
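The idle arithmetic in that comment can be laid out explicitly (the 60 W idle draw, 8 GPUs per server, a 20-server rack and the ~40% overhead are all assumptions from the comment above, not measurements):

```python
# Idle power wasted by a fully populated GPU rack, per the assumptions above.
IDLE_GPU_W = 60          # assumed H200 idle draw
GPUS_PER_SERVER = 8      # high-end GPU server
SERVERS_PER_RACK = 20    # hypothetical layout consistent with a 75-100 kW rack
NON_GPU_OVERHEAD = 0.40  # rest of the server hardware, per the ~40% figure

server_idle_w = IDLE_GPU_W * GPUS_PER_SERVER      # ~500 W per server
rack_idle_w = server_idle_w * SERVERS_PER_RACK    # ~10 kW per rack, GPUs only
rack_idle_total_w = rack_idle_w * (1 + NON_GPU_OVERHEAD)

print(f"server: {server_idle_w} W, rack: {rack_idle_w / 1000:.1f} kW, "
      f"rack incl. overhead: {rack_idle_total_w / 1000:.1f} kW")
```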


Definitely worth considering in a world in which there are any H200s idling in data centers.

Now that's one fine No True Scotsman.

    A: GPUs use a lot of power!
    B: Not all of them are running 100% continuously, eh?,
    A: They waste too much power when they're idle, too!
    C: None of the H200s are sitting idle, you knob!
I mean, they are either wasting energy sitting idle or doing barely useful work. I don't know what to say anymore.

We'll cook ourselves, anyway. Why bother? Enjoy the sauna. ¯\_(ツ)_/¯


B is supposed to be me? I said the H200 doesn't need to be running continuously to generate a dozen images. If a million people generate a dozen images each, it no longer makes sense to compare against the costs of a single artist for 6 hours. I really don't understand why this is hard, and that makes this feel very uncharitable.

I'm not saying that this isn't "true idling", I'm saying that idling H200s simply don't exist, i.e., I disagree with B. Do you, A, even disagree?

> they are either wasting energy sitting idle or doing barely useful work

Now here's a true (inverse) Scotsman, or more accurately, a moved goalpost: work on things you don't deem valuable is basically the same thing as idling?

> We'll cook ourselves, anyway. Why bother? Enjoy the sauna. ¯\_(ツ)_/¯

I'm very concerned about that too, but I don't think we'll avoid the sauna with fatalism or logically unsound appeals to morality about resource consumption.


Cheaper/faster tech increases overall consumption though. Without the friction of commissioning a graphics artist to design something, a user can generate thousands of images (and iterate on those images multiple times to achieve what they want), resulting in way more images overall.

I'm not really well versed on the environmental cost, more just (neutrally) pointing out that comparing a single 10s image to a 5-6 hour commission ignores the fact that the majority of these images probably would never have existed in the first place without AI.


Also, ignoring training when talking about the environmental costs is bad faith. Without training this image would not exist, and if nobody were generating images like these, the training would not happen. So we should really count the 10 seconds it took for inference plus the weeks or months of high-intensity compute it took to train the model.

You'd want to compare against the fraction of training attributable to the image
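That amortization could be sketched like this (every number is a made-up placeholder to show the accounting, not data about any real model):

```python
# Amortized footprint: per-image inference energy plus the training run
# spread over every image the model ever serves. All numbers are placeholders.
TRAINING_ENERGY_KWH = 1_000_000       # hypothetical total training energy
TOTAL_IMAGES_SERVED = 1_000_000_000   # hypothetical lifetime inference count
INFERENCE_WH_PER_IMAGE = 2.0          # hypothetical per-image inference energy

training_wh_per_image = TRAINING_ENERGY_KWH * 1000 / TOTAL_IMAGES_SERVED
total_wh_per_image = INFERENCE_WH_PER_IMAGE + training_wh_per_image

print(f"amortized: {total_wh_per_image:.1f} Wh per image")
```

The attributable fraction shrinks as the served-image count grows, which is why per-image accounting is so sensitive to how popular a model ends up being.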

Wow, do you hold a degree in false dichotomies?

Has anyone tested the system from the other end... sending a prompt and getting a response?

Copy?

SIGINT

Gmail is free. How much customer support resources should someone reasonably expect a company to dedicate towards their free-of-charge services?

Increasingly of the opinion that "free service with no support that's structurally essential for an economy" is some kind of trap. Possibly just the most comfortable kind of trap, a local optimum from which it's difficult to escape.

This is starting to become important as countries (very unwisely!) start tying things like national ID and banking to smartphones.


I don't know if it's that simple. As a litmus test, try to set up your own mail server. See how many milliseconds it takes for it to be blacklisted by gmail. And then observe the response time for their support, when you try to clear up the confusion that google has about your intentions.

I run my own mail server, not blacklisted. Now, I'm a bit of a special case: I know mail well.

But when a moderately technical colleague wanted to do the same, I told her to use Mox, she set it up and Gmail doesn't block her either.

So... would you please elaborate?


I find there are three kinds of people who comment about hosting email. A small group like us who set it up correctly and never have problems. A larger group who set it up but got the DNS wrong and warn people not to try. And a third, bigger group who never tried but listen to the second group and always comment that you'll have 1% deliverability.

It is different than it once was.

It was dead-nuts simple in the 1990s: Just learn enough about DNS to put in an MX record that points to an A record, get sendmail working, and have it begin delivering mail. The end. (Open relay? No spam filter? No virus scanning? No nothin'? Yeah, that kind of was the style at the time...)

It's got a lot more steps today, but it's still do-able. Operationally, keeping a mail server online and treated well just takes one or two people to spend a little bit of time occasionally to stay proactively ahead of new expectations and requirements instead of reacting to them after things change.
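For context, "a lot more steps" largely means publishing authentication records alongside the MX. A minimal sketch of the DNS entries the big receivers now expect (example.com, the IP, and the DKIM selector/key are placeholders, not a real setup):

```
example.com.                  MX  10 mail.example.com.
mail.example.com.             A   203.0.113.25
example.com.                  TXT "v=spf1 mx -all"
sel1._domainkey.example.com.  TXT "v=DKIM1; k=rsa; p=<base64-public-key>"
_dmarc.example.com.           TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

On top of that, the sending IP needs matching forward and reverse DNS (PTR), and the server has to actually sign outgoing mail with the DKIM key it publishes.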

It also helps if Carla, from marketing, doesn't wake up one day and decide to spam the entire customer list without asking for guidance first. Maybe I should have put some automatic mitigation into place for that, but whatever: We chatted about that and it never happened again.

(Or at least, I find that to be true with smaller companies. Bigger ones obviously may require more elaborate systems to handle more volume and/or provide better uptime. But the requirements of keeping the reputation up are about the same regardless of scale, and that still only takes one or two people to pay attention to things sometimes. [And the only reason two might be required is in case one of them gets hit by a bus.])


"Blacklisted" probably doesn't have a sufficiently clear definition. I don't even run my own server, just use a custom family domain that is served by protonmail, and discovered when trying to go through foster licensing that virtually all of the agencies were not reading my e-mails because Microsoft and Google alike were routing them into the spam folder, but they weren't being blocked or bounced. I wouldn't have even known if I hadn't called a few and asked them to check.

I am definitely not being flagged for any actual spam-like behavior. I might send out 40 e-mails a year, and even though it's a "family" domain, I'm the only one who has ever used it, ironically enough, as part of my decade-old effort to de-Google.


I'm curious, what about Microsoft/Outlook?

I also have my own MTA. No problems with anyone ..... except Microsoft, who (silently) never delivers the mail.


Microsoft is the worst. I think our emails usually go through to them now, but they've been blocking our emails when no one else was.

I've built mail servers before Gmail existed that lasted long enough to get blacklisted by Gmail.

Fixing it was always pretty simple -- or at least, non-mysterious. They'd bounce some things, I'd look at the headers of the bounced messages, and therein were links to instructions that showed how to resolve whatever the issue was that year.

Just follow the steps, implement the new thing, and stuff started flowing again in rather short order. Not so bad.

IIRC, the only time it ever cost us any money was when the RBLs started keeping track of dynamic IP pools and we needed to finally shift over to something actually-static.


It’s free, but it’s not like they’re running Gmail as a charity, either. It has revenue and contributes to their other businesses.

Google’s support for paying customers isn’t much better unless you’re spending well into the millions per year.

AWS, on the other hand, has proven willing to move mountains for me as a $15/mo customer.


If it didn't provide value it wouldn't exist.

Maybe it's only legacy, but Gmail brings customers to Google and their related services. Escalation then brings them on as paying Customers. As a loss leader it may make a loss if looked at in a bubble, but looked at as part of the "Customer Lifecycle", other areas of profit would likely be much smaller without the free gateway.

It takes me active resistance to avoid Google's paid services, and I'm staunchly independent in relatively rarefied air. The minor capitulation required to turn into a paying Customer would capture a good percentage of their erstwhile-free Gmail users (I would think. Yes, conjecture; interested in explanations of alternative theories).


> How much customer support resources should someone reasonably expect

Zero. OTOH, I'm sure they are training on emails and archiving/profiling everything forever even if we delete messages, so those constant threats to become a paying customer before hitting some arbitrarily small quota are still villainous.


We might not be paying money, but we don't know what happens to our private data. Maybe it's not used at all, maybe it's used just internally, maybe it could even be sold. Data from millions of users is very, very valuable, even just considering how many targeted adverts could be placed with it.

It isn't sold directly. There are robust internal controls, so random employees can't just snoop on, e.g., an ex-girlfriend's email without being fired.

Source: Used to work there.


Gmail shows ads to make money so it is not loss making. Google Workspace charges money per user (and still offers abysmal support).

Gmail is profitable. How much harm should profitable services be allowed to perpetuate in the world to enable their profit?

Enough that they're not facilitating abuse.

Some items on Thingiverse provide .obj files, like the king in this chess set...

https://www.thingiverse.com/thing:1078513/files

or this army tank...

https://www.thingiverse.com/thing:4618182/files

(n.b. under the main image viewer click the "files" tab to explore individual files/extensions)


From the Wikipedia article...

For planning Operation Epic Fury, the US military utilized the Maven Smart System, an artificial intelligence software designed to streamline the targeting process and greatly reduce the amount of personnel involved in it. Capable of producing 1,000 target packages in one hour, with the use of the system the US military said it had struck 6,000 targets in Iran during the first two weeks of the war.

...it goes on to say...

The [NYT] inquiry suggested that the school was likely targeted due to outdated coordinates provided by the Defense Intelligence Agency

Advanced rockets bolted onto mainframes guided by data from Palantir.

https://en.wikipedia.org/wiki/Project_Maven#Technology


They finally did release 2.0 under the MIT license. That was the last version (a 1.5-billion-parameter model) they would release open source. GPT-3, for comparison, has 175 billion parameters.


No, that's not true: https://huggingface.co/openai/gpt-oss-120b was released after.



It was not the last version they released open source. GPT-OSS-120B was released open source after it.


Oh duh, you are absolutely right. I thought you were saying they never released GPT-2.


For clarity here, "after" was 6 years later, once Meta and then the Chinese labs had already established the ecosystem


At 20 min per task you might as well code it yourself. Bill James needs to write a book on sabermetrics for LLM benchmarks.


See French Decimal Time:

https://en.wikipedia.org/wiki/Decimal_time

Not to be confused with Metric Time:

https://en.wikipedia.org/wiki/Metric_time

Timekeeping units of measurement:

https://en.wikipedia.org/wiki/Unit_of_time


It's such an odd request to make something less enjoyable. If the EU wants a time limit on app use they should just impose it themselves.


I think you don't consider that this is politics, and that's why it's conducted through press releases.

