
I'd help build Gas City, Gas State, and Gas Country if it meant we would actually solve the things AI promised to solve: sickness, famine, wealth ...

The problem is, we're just fidgeting yolo-fizzbuzz ad nauseam.

The return on investment right now is probably one of the worst in the history of human investment.

AI does still improve over time, even today, but we're going to run out of planet before we get there...





As of yet, the AI models doing important work are still pretty specialized. I'd be happy to pitch in to run something like an open source version of AlphaFold, but I'm not aware of any such projects.

I have trouble seeing LLMs making meaningful progress on those frontiers without reaching ASI, but I'd be happy to be wrong.


I think part of the problem/difference is that all "important work" needs to be auditable and understood by humans. We need to be able to fix bugs, not just roll the dice and hope that a lack of symptoms means everything is cured.

Even AlphaFold generated a bunch of slop, like impossible proteins and such.

That doesn't make any sense.

Yegge named it Gas Town as in "refinery" because the main job for the human at this stage is reviewing the generated code and merging.

The whole point of the project is to be in control. Yegge even says the programmers who can read/review a lot of code fast are the new 10x (paraphrasing).


"I’ve never seen the code, and I never care to, which might give you pause"

https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...


Oof, he changed that. I stand corrected.

Yeah, and it's not a big focus of the posts, which is interesting. I'd have thought he'd spend a lot more time talking about the workflow he's using, the specs/feature definitions he's writing, and so on.

The Wright brothers were idiots; if it were me, I'd have built a supersonic jet from the get-go and not wasted my time mucking around with prototypes.

The prototype phase meant data centers are now measured in MW instead of TFLOPS.

At a time when we're desperate to reduce emissions, data centers now consume around 20% of the energy consumed by the entire aviation sector, with consumption rising at 15% YoY.
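
(Back-of-the-envelope, my arithmetic: 15% YoY compounds fast. The doubling time is ln 2 / ln 1.15 ≈ 5 years, i.e. roughly 4x in a decade.)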

Never mind the water required to cool them, or the energy and resources required to build them, the capital allocation, and the opportunity cost of not allocating all of that to something else.

And this is, in your words, the prototype phase.


Emissions and energy consumption don't necessarily have to be linked.

We have plenty of ways to make clean energy; it's only a matter of incentives.

As long as burning coal is simply cheaper, businesses will burn coal.


The computing power in a crappy cheap modern phone used to fill a warehouse and cost a ton of energy, relatively speaking. Moore's law might not remain steadfast, but if history is any indication, we'll find a way to make the technology more efficient.

So, yes, prototypes often use more energy than the final product. That doesn't mean we shouldn't build data centers sustainably, but that's conflating two issues.


The Wright brothers sold me a subscription to a supersonic jet, and all I've got is a bundle of matchsticks and some canvas.

On the other hand, flight is ubiquitous and has changed everything.

Flight changed everything when it comes to warfare. But as far as individuals are concerned, the average human on the planet will take a handful of flights in their lifetime, at best. Nearly all flights taken are for recreation, which is ultimately fungible with other forms of recreation that don't involve flying, and of the flights that aren't for recreation, most could be replaced by things like video calls. The vast and overwhelming majority of the goods that make up the lifeblood of the global economy are still moved by ship, not by air.

Which is to say, the commercial aviation industry could permanently collapse tomorrow and it would have only a marginal impact on most people, who would just replace planes with train, car, or boat travel. The lesson here is that even if normal people experience some tangential beneficial effects from LLMs, their most enduring legacy will likely be to entrench authority and cement existing power structures.


It's silly to say that the ability to fly has not changed society. Or that it won't continue to change society, if we manage to become space-faring before ruining our home planet.

The phrase, "The average human on the planet will take a handful of flights in their lifetime" is doing a lot of work. What are those flights to? How meaningful/important were the experiences? What cultural knowledge was exchanged? What about crucial components that enable industries we depend on? For example, a nuclear plant might constantly be ordering parts that are flown in overnight.

In general, you're minimizing the importance of aviation without providing anything to back up your claims.


We were promised supersonic jets today, or very soon, and our economies have been held hostage waiting on that promise.

The passive voice is doing a lot of work in your sentence.

We are perpetually just months away from software jobs being obsolete.

AGI was achieved internally at OpenAI a year ago.

Multiple companies have already re-hired staff they had fired and replaced with AI.

etc.


Your problem is thinking that hype artists, professionals and skeptics are all the same voice with the same opinion. Because of that, you can't recognize when sentiment is changing among the more skeptical.

You are responding to some voices in your head, not to the context of the conversation.

You're also presuming too much about what I'm thinking and being dead wrong about that.


I am responding to what you wrote:

> We are perpetually just months away from software jobs being obsolete.

Only hype artists are saying this, and you're using it to negate the argument of more skeptical people.


Functional illiteracy and the inability to hold any context longer than two sentences have long been a plague on HN. Now that we've outsourced our entire thinking process to "@grok is this true", it has claimed almost the entirety of the human race.

soulofmischief: complains that AI skeptics would say the Wright brothers were idiots because they didn't immediately implement a supersonic jet

ares623: we were promised supersonic jets today or very soon (translation: AI hype and scam artists have already promised a lot now)

eru: The passive voice is doing a lot of work in your sentence. (Translation: he questions the validity of ares623's statement)

me: Here are just three examples of hype and scams promising the equivalent of a supersonic jet today, with some companies already having been burned by those promises.

soulofmischief: some incoherent rambling


Apply your own "functional literacy". I clarified that those outside an industry have to separate the opinions of professionals from those of hype artists.

The irony of your comment would be salient if it didn't feel like I was speaking with a child. This conversation is over; there's no reason to continue speaking with you as long as you maintain this obnoxious attitude coupled with bad reading comprehension.


It's a hilarious attempt to save face.

"Separate opinions of professionals" etc.

Here's Ryan Dahl, cofounder of Deno and creator of Node.js, tweeting today:

--- start quote ---

This has been said a thousand times before, but allow me to add my own voice: the era of humans writing code is over. Disturbing for those of us who identify as SWEs, but no less true. That's not to say SWEs don't have work to do, but writing syntax directly is not it.

https://x.com/rough__sea/status/2013280952370573666

--- end quote ---

Professional enough for you?


They have everything to gain by saying those things. It doesn’t even need to be true. All the benefits arrive at the point of tweeting.

If it turns out to be not true then they don’t lose anything.

So we are in a state where people can just say things all the time. Worse, they _have_ to say them. To them, not saying anything is just as bad as being directly against the hype. Zero accountability.


What does he have to gain? This is Deno: https://deno.com

Yes, my point is that industry professionals are re-calibrating based on the last year of agentic coding advancements, and that this is different from hype men on YouTube from 1-2 years ago claiming that they don't have to write code anymore.

Congratulations, now you're starting to understand! :)


What is incorrect or bad about his statement?

Your posts here remind me of Trumpists citing random Twitter leftists as Democratic party leaders.

Lol. "random leftists"

The first two come directly from OpenAI, Anthropic, and others.

The last one literally made the rounds even on HN, e.g. Klarna bringing back its support staff after trying to replace them with AI.


The last one is irrelevant. Of course some companies are miscalculating.

OpenAI never claimed they had achieved AGI internally. Sam was very obviously joking, and despite the joke being so obvious he even clarified hours later.

>In a post to the Reddit forum r/singularity, Mr Altman wrote “AGI has been achieved internally”, referring to artificial general intelligence – AI systems that match or exceed human intelligence.

>Mr Altman then edited his original post to add: “Obviously this is just memeing, y’all have no chill, when AGI is achieved it will not be announced with a Reddit comment.”

Dario has not said "we are months away from software jobs being obsolete". He said:

>"I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code"

He's maybe off by some months, but not at all a bad prediction.

Arguing with AI skeptics reminds me of debating other very zealous ideologues. It's such a strange thing to me.

Like, just use the stuff. It's right there. It's mostly the people using the stuff vs. the people who refuse to use it because they feel it'll make them ideologically impure, or they used it once two years ago when it was way worse and haven't touched it since.


The insecurity is mind-boggling. So many engineers afraid to touch this stuff for one reason or another.

I pride myself on being an extremely capable engineer who can solve any problem given the right time and resources.

But now, random unskilled people can do in an afternoon what it might have taken me a week or more to do before. Of course, I know their work might be filled with major security issues, or terrible architectural decisions and hidden tech debt that will eventually grind development to a complete halt.

I can be negative and point out these issues, or I can adopt these tools myself, and have the skilled hand required to keep things on rails. Now what I can do in a week cannot be matched by an unskilled engineer in an afternoon, because we have the same velocity multipliers.

I remember being such a purist in my youth that I didn't even want autocomplete or intellisense, because I feared it would affect my recall or stunt my growth. How far we have come. How I code has changed completely in the last year.

I code 8-20 hours a day, every day. I actively work on several projects at once, flipping between contexts to check results, review code, iterate on design/implementation, and hand off new units of work to various agents. It is not a perfect process; I am constantly screaming and pulling my hair out over how stupid and forgetful and stubborn these tools can be sometimes. My output has still dramatically increased, and I have plenty of extra time to ensure the code is secure and good enough.

I've given up on expecting perfection from code I didn't write myself, but what else is new? Any skilled individual who has managed engineers knows you have to get over this quickly and accept that code from other engineers will not match your standards 100%.

Your role is to develop and enforce guidelines and processes which ensure that any code which hits production has been thoroughly reviewed, made secure and performant. There might be some stupid inline metacomments from the LLM that slip through, but if your processes are tight enough, you can produce much more code with correct interfaces, even if the insides aren't perfect. Even then, targeted refactors are more painless than ever.

Engineers who only know how to code, and at a relatively mediocre level, which I imagine describes the majority of engineers now in the field who got into it for the money, are probably feeling the heat and worrying that they won't be employable. I do not share that fear, provided anyone at all is employable.

When running a business, you'll still need to split the workload, especially as keeping pace with competition becomes an increasingly brutal exercise. The money is still in the industry, and people with money will still find ways to use it to develop an edge.


The AI bubble will pop any month now.

See? I can do this too.


The first recorded supersonic flight was in 1947.

Supersonic passenger planes failed commercially.

Are you saying that people can't work out what to code using these? Or that code is not a worthy subject to use AI for? 'Cause I've got news for you...

1. Improving coding improved reasoning in the models. Having a verifiable answer that is not a single thing is a good training test.

2. Software has been used for fairly serious things. We used to have skyscrapers of people doing manual math; now we have campuses of people doing manual code. You might argue that nobody would trust AI to write code when it matters. History tells us that if that is ever true, it will pass.

3. We are not going to run out of planet. It just feels to folks like there is not enough planet for their dreams, and we get population panic, energy panic, etc. There is a huge fusion reactor conveniently holding us in its gravity well and spewing out many orders of magnitude more energy than we can currently use. Chill.

I think at Gas Country levels we will need better networking systems. Maybe that backbone Nvidia just built...


Replacing human computers with electronic computers is nothing like what LLMs do or how they work. The electronic computer is straight-up automation: the same input gives you the same output every time. Electronic computers are actually pretty simple. They just do simple mathematical operations like add, subtract, multiply, and divide. What makes them so powerful is that they can do billions of those simple operations a second.

LLMs are not simple deterministic machines that automate rote tasks like computers or compilers. People, please stop believing and repeating that they are the next level of abstraction and automation. They aren't.
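
To illustrate the distinction, here's a toy Python sketch (my illustration; toy_llm and its made-up token probabilities are hypothetical, not how any real model works):

    import random

    def add(a, b):
        # Conventional computation: pure automation.
        # The same inputs always produce the same output.
        return a + b

    def toy_llm(prompt, temperature=1.0, seed=None):
        # Stand-in for LLM decoding, vastly simplified: the model
        # defines a probability distribution over next tokens and
        # *samples* from it, so repeated calls can differ.
        rng = random.Random(seed)
        candidates = ["Paris", "London", "Rome"]
        weights = [0.7, 0.2, 0.1]  # made-up next-token probabilities
        if temperature == 0:
            # Greedy decoding: always pick the most likely token.
            return max(zip(weights, candidates))[1]
        return rng.choices(candidates, weights=weights)[0]

    print(add(2, 2), add(2, 2))            # 4 4, every time
    print(toy_llm("Capital of France?"))   # usually "Paris"...
    print(toy_llm("Capital of France?"))   # ...but not guaranteed

Even greedy decoding isn't strictly deterministic on real serving stacks (batching and floating-point effects), but the sketch captures the categorical difference.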


AI can't even find a cure for the common cold.


