Hacker News

The only plausible explanation for the amount of resources poured into these language models is the hope that they somehow become the origin of AGI, which I think is pretty fanciful.

I can feel the cold wind of the next AI winter coming on. It's inevitable. Computers are good at emulating intelligent behavior, people get excited that it's around the corner, and the hype boils over. This isn't the last time this will happen.



I think the amount of money is explained in part by hubris. People in high positions think they're smarter and more capable than people at the bottom of the org, in proportion to what they earn. So to them it's reasonable, expected, borderline obvious that a bot can replace that person. So you're betting on its ability to get rid of, if not your junior devs, at least the majority of your customer support staff.

In reality, people doing "menial" jobs are smart, and they learn and operate with a lot more nuance than people give them credit for, out of unfamiliarity or just prejudice. Do you prefer to talk to a chatbot or a real human when you have a problem? And how confident are you, really, that even if the bot knows what the problem is it would be able to solve it?

A lot of the problems with customer care are anchored in the fact that support staff aren't allowed to fix or resolve problems without escalation, or are tasked with keeping you from costing the company more money. The bot might be better at that, from the company's perspective, because it will frustrate you enough to give up on that 30-buck refund, idk.

AI does seem to be changing the dynamics of corporate jobs a lot, but I haven't yet seen anything that would be a game changer outside of that. It's great for searching a company's unorganised and messy knowledge bases.


I think this still applies https://x.com/dwarkesh_sp/status/1888164523984470055, LLMs now are useful but we need something else for AGI.


Didn't take long for this comment to age poorly :) https://news.ycombinator.com/item?id=43102528


Fun! Being a scientist, I'll try it out with some LLM.


I can't feel any cold right now at all.

Everywhere you look, people are working on so many small pieces, advancing what we have.

And plenty of obvious things aren't here yet, like a full local dev cycle: the AI uses the IDE to change code, then executes it, fixes compiler issues and unit tests, and then opens a PR.

Or local agents, and agents having secure, selective access to our data, like giving my agent read-only access to my bank account and a two-factor way for it to send money.

DeepSeek's reinforcement learning is also a huge new lead.

And in parallel, robots are coming too.

GenAI is getting better and better: faster, better, cheaper video; 3D meshes and textures; the first GenAI ads.


I predict this comment will age very, very poorly. Bookmarked.


I feel like there's a 50/50 chance of his comment or yours aging poorly.


I feel there's a high probability your comment doesn't mean what you think it does (unless you truly believe both outcomes are as likely).


Not sure how else I could have meant that.


It seemed like you intended to present your comment as a tautology (e.g. "I feel there's a 100% chance of his or your comment aging poorly"), but I'll give you the benefit of the doubt!


Yeah, that's a good point. I just think it can go either way. I remember how hyped we were about self-driving cars in 2015, thinking "in 10 years the majority of cars will be like that". Right now we may see a steady increase in AI capabilities for years to come, or we may see it plateau.


Cool. Invest in it then. That way you get paid instead of saying "I told you so" to some screen name.


I think the snag I feel in your argument comes from

>Computers are good at emulating intelligent behavior

Which implies that the brain is some kind of transcendent device that can backdoor physics to output incredible intelligence unique to its magical structure.

Maybe LLMs aren't the key, but as far as we can tell the brain is also just another computer.


Holy strawman batman.


Care to differentiate intelligence from emulating intelligence?


No; it would be very hard, and you've already shown you're not arguing in good faith, so I don't want to invest the time and effort.

And let me be very clear on why, because I normally love having conversations on this theme: this one promises to be an adversarial and frustrating exchange.


Everyone seems to have a different definition for AGI. Is there some kind of standard there?


No, but the main issue is that all the reasonable ones I can conceive of lead inevitably to the technological Singularity, and pretty quickly, since we seem determined to throw as much silicon as possible at the problem. Hopefully the final step is intractable.


Precisely; however, this time we will have tangible results from the ongoing AI summer: generative art, and coding/writing/journalism assistants.


There are always dividends. We got a lot of interest in Lisp from the first summer, and it arguably informed all currently used programming languages.


Though back then the dividends weren't obvious to laypeople, unlike now, which means the upcoming winter won't be as cold.




