Hacker News

After reading the comments, the themes I'm seeing are:

- AI will provide a big mess for wizards to clean up

- AI will replace juniors and then seniors within a short timeframe

- AI will soon plateau and the bubble will burst

- "Pshaw I'm not paid to code; I'm a problem solver"

- AI is useless in the face of true coding mastery

It is interesting to me that this forum of expert technical people is so divided on this (broad) subject.



To be honest, HN is like this with any topic. In the domains I know well, I've seen some of the dumbest takes imaginable on HN, as well as some really well-reasoned and well-articulated stuff. The limiting factor tends to be the number of people who know enough about the topic to opine.

AI happens to be a topic that everyone has an opinion on.


The biggest surprise to me (generally across HN) is that people expect LLMs to improve on a really slow timeline.

In the last two years LLM capabilities have gone from "produces a plausible sentence" to "can generate a functioning web app". Sure it's not as masterful as one produced by a team of senior engineers, but a year ago it was impossible.

But everyone seems to evaluate LLMs as if they're fixed at today's capabilities. I keep seeing "10-20 year" estimates for when "LLMs are smart enough to write code". That's a very head-in-the-sand attitude toward the trajectory of the last two years.


Probably because we see stuff like this every decade. Ten years ago, no one was ever going to drive again because self-driving cars were imminent. It turns out a lot of problems can be partially solved very quickly, but as anyone with experience knows, solving the last 10% takes at least as much time as solving the first 90%.


> Ten years ago no one was ever going to drive again because self-driving cars were imminent

Right, but self-driving cars are here. And if you've taken Waymo anywhere it's pretty amazing.

Of course, just because the technology is available doesn't mean distribution is solved. The production of corn has been technically solved for a long time, but that doesn't mean starvation has been eliminated.


>And if you've taken Waymo anywhere it's pretty amazing.

Yeah, about that: https://ca.news.yahoo.com/hilarious-video-shows-waymo-self-1...


You can’t extrapolate the future trajectory of progress from the past. It comes in pushes and phases. We had long phases of AI stagnation in the past, we might see them again. The past five years or so might turn out to be a phase transition from pre-LLM to post-LLM, rather than the beginning of endless dramatic improvements.


It would be different, too, if we didn't know that the secret sauce here is massive amounts of data, and that the jump in capability was directly tied to a jump in the amount of data.

Some of the logic here is akin to: I lost 30 lbs in 2024, so at this pace I'll weigh -120 lbs by 2034!
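The absurdity of that extrapolation is easy to make concrete: it's a straight line pushed far past the point where the underlying model makes sense. A minimal sketch, assuming a hypothetical current weight of 150 lbs at the end of 2024:

```python
# Naive linear extrapolation of a one-year trend (hypothetical numbers).
weight_end_2024 = 150       # hypothetical current weight, lbs
rate_per_year = 30          # lbs lost during 2024
years_ahead = 2034 - 2025   # 9 more full years to reach 2034

# Project the 2024 trend forward as if nothing ever saturates.
projected = weight_end_2024 - rate_per_year * years_ahead
print(projected)  # -120
```

The trend line is accurate over the interval it was measured on and nonsense outside it, which is exactly the worry about extrapolating LLM progress from a data-driven jump.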


> It comes in pushes and phases. We had long phases of AI stagnation in the past

Isn't that still extrapolating the future from the past? You see a pattern of pushes and phases and are assuming that's what we'll see again.


I am not a software engineer, and I made working stock-charting software with React/Python/TypeScript in April 2023 when GPT-4 came out, while barely knowing TypeScript at all. Of course, after a while it became impossible to update or add anything, and it basically fell apart because I don't know what I'm doing.

That will be two years ago before you know it. Sonnet is better at using more obscure Python libraries, but beyond that the improvement over GPT-4 is not that large.

I never tried GPT-4 with Julia or R, but the current models are pretty bad with both.

Personally, I think OpenAI made a brilliant move to release 3.5 and then 4 a few months later. It made it feel like AGI was just around the corner at that pace.

Imagine what people would have thought in April 2023 if you had told them that in December 2024 there would be a $200-a-month model.

I waited forever for Sora, and it is complete garbage. OpenAI was crafting this narrative about putting Hollywood out of business, when in reality these video models are nearly useless for anything much beyond social media posts about how great the models are.

It is all beside the point anyway. The way to future-proof yourself is to be intellectually curious and constantly learning, no matter what field you are in or what you are doing. You'll probably have to reinvent your career a few times, whether you want to or not.


"In the last two years LLM capabilities have gone from "produces a plausible sentence" to "can generate a functioning web app". Sure it's not as masterful as one produced by a team of senior engineers, but a year ago it was impossible."

Illegally ingesting the Internet, copyrighted and IP-protected information included, then cleverly spitting it back out in generic-sounding tidbits will do that.


Even o1 just floored me. I put in heaps of C++ code and some segfault stack traces, and it gave me an actual cause and a fix.

I gave it thousands of lines of C++ and it pinpointed the problem.


Many commenters suffer from first-experience bias: they tried ChatGPT, it was "meh", so they see no impact.

I have tried cursor.ai's agent mode, and I see a clear, big impact.


As soon as you replace the subject of LLMs with nebulous "AI", you have ventured into a la-la land where any claim can reasonably be made. That's why we should try to stick to the topic at hand.



