Hacker News | kingstnap's comments

Interest is a percentage of debt.

Income through taxes is roughly a percentage of GDP.

You could also just compare interest spending vs. the budget, and lots of people do. Spending on interest is roughly $1T out of $7T in total spending, against $5.23T in income.

$1T of an incoming $5.23T is pretty concerning. Especially given projections that the $1T is likely to go up significantly over the next decade.
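The back-of-envelope arithmetic behind that concern, using the same rough figures quoted above:

```python
# Rough federal figures quoted above, in trillions of dollars
interest = 1.0      # annual interest spending
spending = 7.0      # total annual spending
revenue = 5.23      # total annual tax income

print(f"Interest as a share of revenue:  {interest / revenue:.1%}")
print(f"Interest as a share of spending: {interest / spending:.1%}")
```

Roughly a fifth of every incoming tax dollar already goes to interest before anything else, which is why the projected growth matters more than the absolute number.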


It's not hard to imagine how this happens. I assume most people here have used these models extensively.

The help bot system prompt probably includes some statement about how Claude should phrase everything as "we".

The system prompt includes statements about how it doesn't have tools for managing funds.

A little bit of A and a bit of B, and you get a message from Haiku telling you that you can't get your money back, phrased as though a refund weren't a trivial customer service thing to do.


> The help bot system prompt probably includes some statement about how Claude should phrase everything as "we".

Yes, why did Anthropic do that when everyone knew it could result in this situation we're discussing?

> The system prompt includes statements about how it doesn't have tools for managing funds.

Yes, why did Anthropic do that when everyone knew it could result in this situation we're discussing?

What you've been describing are all effects of the cause, which is poor management decisions to have poor support and poor customer service. Clearly those decisions resulted in poor support bot system prompts, too.

To wit: this would likely not have happened if the prompt included something like "in a scenario like this, or any scenario where the customer asks, simply transfer them to a human", and if Anthropic had not decided to have dysfunctional support and customer service.

The feedback from folks here is not that poor decisions can have poor effects. It's 'for the love of god, please stop making poor decisions that repeatedly, invariably, lead to unforced errors like the one in TFA'.


Of the things you could complain about in modern cars as being too complicated, you chose turning on seat heating???

Like you push the seat heating button if your seat feels cold. What is there to think about?


On an electric car that constantly tells you your remaining range and warns that you won't make it to your destination unless you charge, turning on the seat warmers drops that range. So you have to think about whether you'd rather have a toasty butt and stop to charge, or just be colder and get there sooner. But you have to charge anyway.

That sounds like a problem with whatever brand of car that is. Is it one made by a certain white supremacist perhaps? That could be the problem.

Using the heated seats will cause you to lose range on every car, not just electric ones.

Ohrly? Are you reading HN or pretending to be stupid?

I’m often stupid, but usually not on purpose. What’s your point?

In an ICE powered car, running the heater doesn't have the same effect on range. Because an ICE is hot due to how it works, sending hot air to the car's interior is basically free because the heater uses waste heat from the engine.

We're talking about the seat heaters though. I'm pretty sure heated seats use resistive elements, not waste heat from the engine.
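For what it's worth, the power at stake can be put in rough numbers. Both wattage figures below are assumptions for illustration, not measurements:

```python
# Back-of-envelope: range cost of one resistive seat heater on an EV.
# Both figures are assumed, order-of-magnitude values.
SEAT_HEATER_W = 75       # typical resistive seat heater, one seat (assumed)
TRACTION_W = 15_000      # average traction draw at highway speed (assumed)

extra_fraction = SEAT_HEATER_W / (SEAT_HEATER_W + TRACTION_W)
print(f"Range lost to one seat heater: ~{extra_fraction:.1%}")
```

Under these assumptions the hit is well under 1%: the seat heater costs range on any car, but far less than a resistive cabin heater would.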

They were losing money giving absurdly generous agentic usage of expensive models to people on $10 to $40 flat-rate subscriptions.

They weren't selling inference.


It's unclear what he means, so it would be good to clarify. Neither the Axios article nor this one provides the details.

> "For my team, the cost of compute is far beyond the costs of the employees," Bryan Catanzaro, vice president of applied deep learning at Nvidia, told Axios.

If he means they spend more on in-house GPUs than on employees because of experiments and research, that's one thing.

If he means they are running up an OpenAI/Anthropic API bill on coding agents that would be surprising.


Knowing what his team does, I am quite confident that it is the former.

Even years ago Nvidia gave their engineers generous access to some of their own internal GPU farms, so that they could run all sorts of different experiments for software and hardware features. You can look into his team's publications, if you want to learn more about the sort of thing they do.


The real question is what do you get out of advertising to people who don't have any money? Kinda squeezing blood from a stone.

You'd be better off saying you use those people to A/B test changes and to fill idle GPU batches while giving paying customers a more consistent experience.


> The real question is what do you get out of advertising to people who don't have any money?

Psychographic data. What they learn from these folks will create the most powerful manipulation technology yet.


A bunch of people pay to remove ads, and a bunch of people are happy to give businesses their attention (view ads) in exchange for services, e.g. Gmail and YouTube, but don't feel they use them enough / are annoyed enough to warrant $15-25/month.

Some brands are okay with impressions: you can build trust in your product by advertising it for weeks or months, and when the user does make a purchase, that brand is top of mind.


There are lots of people who are willing to spend a lot of money on 'real things' while not spending anything on bytes. It's the tech companies which have created this expectation of free services. Many non-tech people I know are relatively wealthy and think like this.

This is like asking why you'd advertise on YouTube to people who aren't paying for YouTube Premium.

The exception being blamed on "stupidity" makes it sound like an oversight.

It was not an oversight. It was corruption.


There would need to be another paradigm shift if they wanna keep inflating AI usage.

We went from simple chatbots to thinking models which massively exploded token utilization.

We then went from simple thinking models to tool calls and agents. Agents, and particularly long-horizon agents, burn truly insane numbers of tokens, blowing thinking models well out of the water.
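The scale of that jump is easier to see as raw token counts. The per-request budgets below are purely illustrative guesses, not measurements:

```python
# Illustrative tokens consumed per user request, per paradigm (all assumed)
paradigms = {
    "simple chat reply": 1_000,
    "thinking model": 20_000,
    "long-horizon agent": 2_000_000,
}
baseline = paradigms["simple chat reply"]
for name, tokens in paradigms.items():
    print(f"{name}: {tokens:,} tokens ({tokens // baseline}x chat)")
```

Even with these made-up numbers, each paradigm shift multiplies token demand by one to two orders of magnitude.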

People are trying agentic swarms as the next step, but I don't think those make sense right now. In particular, they're just insanely expensive and not that useful.

Plus right now the models just aren't good at it. It's like early agents when they first started making tool calls.

Agents are really quite bad at using subagents. They don't really internalize how to deploy them and they also don't utilize them in the ways that make sense (produce planning documents, have verifiable artifacts, break down tasks in ways that minimize risk, recognize model limitations in instruction following, iterate on results, etc).


So there needs to be a new paradigm shift every few months or so? Because I remember people hailing AI reaching a new level of capability less than half a year ago, and saying it’d still be so much worth it even at ten times the price. And that already has lost momentum? If that’s the case, then AI companies are hugely overvalued. These contrasts are just wild to me.

Your last paragraph is also striking in that it exemplifies how far away from general intelligence they still are.


> AI companies are hugely overvalued

Most of everything tends to suck. Most projects go nowhere, most companies fail, most scientific papers are garbage.

> how far away from general intelligence they still are

Economically the real question is to what extent these systems can replace or augment human labour. And I think right now the extent is pretty shocking, even if not yet very well integrated.

Scientifically the fact they are bad at using subagents is sort of expected. How to use agents effectively is still a bit of an open question. A human from mid 2025 would be bad at it. Why should a model trained on data from 2025 be good at it?

If these things are to be generally intelligent, they need feedback and retraining, which presumably the labs will do once these sorts of questions start having good answers and we can create good benchmarks and measures for meta-orchestration.


> Most of everything tends to suck. Most projects go nowhere, most companies fail, most scientific papers are garbage.

Umm, what's your point? We aren't spending $1.4T on other shitty things that are poised to fail.


I just started trying these out.

Claude uses up its 6-hour (or whatever) quota in a couple of coding prompts. Buy extra credits for the same amount as a monthly subscription and they're used up in 3 hours.

Kimi gives me about double what Claude does per window but uses up its entire weekly quota in the same time, for the same price as Claude. And I get worse results.

Gemini worked OK for a day or two and is now running one tool every 30m and getting nothing done; apparently they've been in constant outage status for nearly a month: https://aistudio.google.com/status

I haven't tried ChatGPT because of ethical issues but well, I'm not sure that makes any sense.

Four prompts a day isn't something where I go, wow, this has revolutionized my programming. I might well be getting more done if I weren't fighting constant CLI bugs and leaving work half finished for 3 hours to 5 days while my quota is used up.


Grabbing git repos instead of just tarballs is useful.

A) You can update them, because you can git pull to fetch changes.

B) If you want to apply patches on top, it's better to have version control so you can keep track of what you changed, which is especially useful if you want to rebase.


A) only valid if you want to stay with the devel version

B) See A

I use OpenBSD, and before that I was on Alpine, Debian, and Arch. If it was software I wanted to try, I downloaded the tarball. If it was something I wanted to keep for longer, I created a port or a custom package.


You should invert your framing.

It's only *not valid* if you intend to use a fixed version forever. Otherwise you might as well include versioning for any other case.


> Otherwise you might as well include versioning for any other case.

It’s easier to version a port and its patches than to try to keep a patch series on top of a dev branch. Not saying that your use cases are invalid, but the point of the thread was using git for building software. If you’re not developing the software, there’s no need to go from something that is working well to an unstable build every week.


Of course it's valid for release versions too: just fetch and checkout the release tag you want. I do this all the time.

Juggling multiple directories and tarballs is a pastime from a bygone era. It's even more commands if you want to reuse the existing directory!


It made more sense in the old days, when a request was basically just a chat message in a sidebar that could also edit code. Then saying someone can use 300 chat messages a month kinda makes sense.

Turns out that when a request can spawn tens of subagents and use millions of tokens over many turns of tool calls, suddenly GitHub Copilot has a massive financial problem on their hands.
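A hedged sketch of that financial problem, with every number hypothetical:

```python
# Hypothetical numbers: provider cost when each "request" is agentic
price_per_million = 10.0          # assumed blended $ per 1M tokens
tokens_per_request = 3_000_000    # one agentic request with subagents (assumed)
requests_per_month = 300          # the old chat-era monthly request quota
subscription = 10.0               # flat monthly fee, dollars

cost = requests_per_month * tokens_per_request / 1e6 * price_per_million
print(f"Provider cost at full quota: ${cost:,.0f} vs ${subscription:.0f} in revenue")
```

Under these assumed numbers, a single heavy user costs nearly a thousand times what they pay, which is exactly the flat-rate problem described above.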

