Hacker News | rafram's comments

That, and:

- Every competitor is planning for the demand to be much higher in a few years than it is now, and aiming to capture as much of that as they can, which starts by getting companies hooked on their models now

- The data center capacity will get used no matter who captures the most demand


I can somewhat understand companies getting users dependent on their harnesses or workflows, but for model vendors, as in this DeepSeek case, I have absolutely zero model loyalty when switching is a simple config change away. I will always optimize for capability or price (or whatever bang-per-buck metric you can determine).

Both of them say they use Gemini Nano.

I can’t believe I’m defending a Yahoo tech blogger, but this is unnecessarily rude. You don’t need to be an attorney to report on legislation, and being from Portland has absolutely no bearing on his qualifications. You’re just appealing to lazy stereotypes.

It’s not just reporting; this is legal analysis, and in my view it’s not very good.

It really isn’t legal analysis, unless you think all reporting on legislation is legal analysis.



You're getting downvoted, but it's unmistakably written by AI.

Not if you add it to the price.

The red one goes faster. I mean costs more.

All of these were frankly terrible. I guess Grok’s “informal” version sounded the most like a real human, but only because it reads exactly like an Elon tweet (including his favorite emoji!). It’s obvious what they’ve been training on.

There’s one obvious alternative:

   fetch("https://api.openai.com/v1/chat/completions", { ... });
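Fleshed out, that one-liner looks something like this (a sketch only; the helper name, model id, and prompt are illustrative, not from the thread):

```javascript
// Hypothetical helper that builds the fetch options for a chat completion call.
function chatCompletionRequest(apiKey, prompt) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer " + apiKey, // key assumed to come from the caller
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // placeholder model id
      messages: [{ role: "user", content: prompt }],
    }),
  };
}

// Usage (sketch):
// fetch("https://api.openai.com/v1/chat/completions",
//       chatCompletionRequest(key, "Summarize this page"));
```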

Right and that means people have to send their data to an external service.

Give it X months (or years??) and people will realize this is actually a privacy/data autonomy issue.

It's just dominated right now by the anti-AI/anti-technology sentiment in the West. That will gradually go away as more people use AI and robotics and realize how wrong they were about it.


>Right and that means people have to send their data to an external service.

Nothing in this proposal says it has to be a local model. That just happens to be how Chrome and Edge implement it (for now, at least; I'd imagine Google will eventually move this API toward hosted Gemini).


That's an important aspect of this that should really be part of the discussion on GitHub. But I've been told I'm not qualified to interject, so I'm not going to bother.

I will use WebLLM if I want something like this (with local AI guaranteed).


The trees are unambiguously blooming earlier because of climate change.

As a human, I do tend to mostly care about the period of the Earth's history that has allowed humans to exist. I'm sure the Mesozoic was nice, but I wouldn't want to live there!

Will this be any less ridiculously loud than the conventional helicopters that fly over Brooklyn all day ferrying people to JFK?

According to the Guardian’s report[0], yes: 45 dB, compared to 100 dB for a helicopter.

[0]: https://www.theguardian.com/us-news/2026/apr/28/electric-air...


> [...] but it does generate a significant amount of noise when it takes off and lands. The company didn’t share information on that, but it was certainly enough to make one wince – even if it nowhere near approaches the sensory assault of a regular helicopter.

Could be worse, I guess.


Notably, that's 45 dB when it's 500 meters away.
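For scale, decibels are logarithmic, so the cited 55 dB gap is enormous in power terms. A back-of-envelope check (assuming the two figures are directly comparable, which the distance caveat above puts in doubt):

```javascript
// Convert a decibel difference to a sound-power ratio: ratio = 10^(dBdiff / 10).
const evtolDb = 45;  // figure cited from the Guardian report
const heliDb = 100;  // ditto
const powerRatio = 10 ** ((heliDb - evtolDb) / 10);
console.log(Math.round(powerRatio)); // ~316228x the acoustic power
```

Even if the measurement conditions differ, anything close to that ratio is the difference between background noise and a sensory assault.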

