
Why is it that whenever there is news about AI, it's either a new scam or something vile? Like all this harm being done to the environment, people's sanity, and lives, just so companies can pay less to their employees. Great work.




"Local man uses AI to try slightly different casserole recipe" just doesn't have that click-driving wow factor.

Unless the AI's suggestion to add glue kills him:

https://www.forbes.com/sites/jackkelly/2024/05/31/google-ai-...


Lots of kids used to eat glue; they turned into living adults eventually. No promises as to their status, though.

It's very similar to what Europeans did to American Indian tribes a few centuries ago: they gave them alcohol. A neutral substance by itself, which could be used for disinfection or to light fires. But it has a destructive side too, and if the populace isn't resistant to it, they stand no chance. AI is very similar: some positive potential alongside a powerful destructive potential. Human tribes, as we know, have loose morals and thus aren't resistant at all to AI's destructive side. We are like those American Indians now, unable to resist the temptation.

Because it has a lot of potential for abuse.

BUT, notice the absolutely opposite approach to AI and Web3 on HN. Things that highlight Web3 scams are upvoted and celebrated. But AI deepfakes and scams at scale are always downvoted, flagged and minimized with a version of the comment:

“This has always been the case. AI doesn’t do anything new. This is a nothingburger, move on.”

You can probably see multiple versions in this thread or the sibling post just next to it on HN front page: https://news.ycombinator.com/item?id=46603535

It comes up so often as to be systematic. Both downvoting Web3 and upvoting AI. Almost like there is brigading, or even automation.

Why?

I kept saying for years that AI has far larger downsides than Web3, because in Web3 you can only lose what you voluntarily put in, but AI can cause many, many, many people to lose their jobs, their reputations, etc., and even lives if weaponized. Web3 and blockchain can… enforce integrity?


At this point I think HN is flooded with wannabe founders who think this is "their" gold rush and any pushback against AI is against them personally, against their enterprise, against their code. This is exactly what happens on every vibe coding thread, every AI adjacent thread.


Mass participation in systems can create emergent effects larger than the net sum of the parts. I opt out because first movers are unfairly advantaged; and because lacking proper safeguards, my participation would implicitly support those participants who profit from producing misery. I don't want to accidentally launder the profits from human trafficking nor commit my labor to build my own prison. The rhetoric promoting Web3 as an engine of progress and freedom simply oversold the capabilities of its initial design. That underlying long term vision may still be viable.

We can't rebuild the economy without also rebuilding the State, and that requires careful nuanced engineering and then the consent of the governed.


There are plenty of posts critical of AI on HN that reach the front page, and even more threads filled with AI criticism whether on-topic or not.

What you're noticing is a form of selection bias:

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
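If you'd rather check than take my word for it, here's a rough sketch against the public HN Algolia search API (https://hn.algolia.com/api/v1/search). The query terms and the 100-point cutoff are just illustrative assumptions for eyeballing how many high-scoring AI-critical stories exist, not a rigorous method:

    # Rough sketch: count high-scoring HN stories matching a query via the
    # public Algolia HN Search API. The query string and the point threshold
    # below are illustrative assumptions, not a real measure of "front page".
    import requests

    resp = requests.get(
        "https://hn.algolia.com/api/v1/search",
        params={
            "query": "AI scam",               # swap in whatever terms you like
            "tags": "story",
            "numericFilters": "points>100",   # rough proxy for front-page reach
        },
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()

    print(f"high-scoring matches: {data['nbHits']}")
    for hit in data["hits"][:10]:
        print(f"{hit['points']:>5}  {hit['title']}")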


> BUT, notice the absolutely opposite approach to AI and Web3 on HN. Things that highlight Web3 scams are upvoted and celebrated. But AI deepfakes and scams at scale are always downvoted, flagged and minimized…

It took a few years for that to happen.

Plenty of folks here were all-in on NFTs.


Good news doesn't get clicks. Usually doesn't even get reported.

News is whatever people would care about reading.

Also, saving me a bit of time in coding is objectively not a good trade if the same tool very easily emboldens pedophiles and other fringe groups.

Media in the US is obsessed with fear mongering:

https://flowingdata.com/2025/10/08/mortality-in-the-news-vs-...

If they reported on heart disease, people might get healthy. But there's an instinctual understanding that people dying all over just improves journalists' odds in our society. Keep them anxious with crime stats!

Such an unserious joke of a society.


This is purely economic. Fear mongering gets clicks, which boosts ad revenues.

I've read statistics to the effect that bad news (fear or rage bait) often gets as much as 10,000X the engagement vs good news.


So do you dispute that this is happening? And it's all over my country too.

Expecting tech bros to take responsibility for what they have unleashed is asking too much I suppose.


The early days of the internet were mostly about how it enabled porn, spam, and scams... just so people could order things online.

We are now talking about AI in how it enables porn, spam, and scams....


That is news in general, nothing special about AI.

The era of new technologies being used to work for us rather than net against us is something we took for granted and it's in the past. Those who'd scam or enshittify have the most power now. This new era of AI isn't unique in that, but it's a powerful force multiplier, and more for the predatory than the good.

there's literally a NFT/crypto scammer occupying the oval office

can't wait until he figures out AI


Can be said about so many things in life. It's almost like we don't learn and just repeat in loops.

Sounds like confirmation bias you are not interested in challenging

What's worse is a significant number of folks here seem to be celebrating it. Or trivializing what makes us human. Or celebrating the death of human creativity.

What is it, do you think, that has attracted so many misanthropes into tech over the last decade?


Well, it's one thing AI actually revolutionized.

Don't discount the fact that bad news sells.

Are you suggesting people shouldn't develop AI because it basically just produces unemployment and scams? Like that they should just be good people and stop, or that government should ban the development of AI?

I mean you are clearly equivocating AI with unemployment and scams, which I think is a very incomplete picture. What do you think should be done in light of that?


"Guns don't kill people, I do."

Blaming the technology for bad human behavior seems an error and it's not clear that the GP made it.

People could and likely will also increase economic activity, flexibility, and evolve how we participate in the world. The alternative would get pretty ugly pretty quick. My pitchfork is sharp and the powers that be prefer it continues being used on straw.


People without guns kill less.

The statistics are that our car use, pollution, and many other problems kill far more people.

But those have greater benefits, like mobility, and the bad things are a side effect of their use.

But for weapons, death is the result of their purpose.


> What do you think should be done in light of that?

you suggested it:

> government should ban the development of AI?

works for me!


If the harm outweighs the benefits stopping should be an option, don’t you think?

I don't think AI just brings scams and unemployment.

Therefore the "outweighs"

>I mean you are clearly equivocating AI with unemployment and scams, which I think is a very incomplete picture.

What else, let me guess, slop in software, AI psychosis, environmental concerns, growing wealth inequality. And yes, maybe we can write some crappy software faster. That should cover it.

I have no suggestions on how to solve it. The only way is to watch OpenAI/Claude lose more money, and then hopefully models get cheaper or become completely useless.


I mean from your perspective it just sounds like it should be stopped somehow. Either people collectively decide it's a waste of time or something. I guess I'm very surprised to hear someone thinks it brings no value. I can relate to some of the negative outcomes, but to not see any significant value seems kind of crazy to me.

No, it's the best way to burn money, so no reason to stop it. But I would like it if people used it less.

>What else, let me guess, slop in software

Are you a developer? If so, does this mean you have not been able to employ AI to increase the speed or quality of your work?


Yes, but I am talking about slop in all the software I use, not just what I make. Every app is trying to do everything. Everywhere there's a summarise button or some cobbled-together AI-generated feature. Software continuously fails, and companies provide no support, as that has all been automated to save money.

Because news about scams or something vile using AI gets you to click and read.

Almost all of the good news, once you read a little bit more, turns out to be due to traditional ML, and nearly all of it is in the medical imaging field. Then OpenAI tries to take credit and say "Oh look, AI is doing that too," which is not true. Go ahead and read deeper on any of those news stories and you will quickly find LLMs haven't done much good.

They helped me make some damn good brownies and be a better parent in the last month. Maybe I should write a blog for all of the great things LLMs are doing for me.

Oh yeah, and one rewrote the 7-minute-workout app for me without the porn ads before and after the workout so I can enjoy working out with one of my kids.


What makes you think you couldn't have made brownies without LLMs? Go to Google and just scroll 20cm and there it is, a recipe, the same one ChatGPT gave you. I won't comment on rewriting an app, because LLMs can definitely do that.

Because, "Why are the edges burnt and the middle is too soft? How are these supposed to actually look? I used a clear 8"x8" pan, and I'm in Utah, which is at 4,600 ft elevation"

Oh, it's a higher elevation, I need to change the recipe and lower the temperature. Oh, after it looked at the picture, the top is supposed to be crackly and shiny. Now I know what to look for. It's okay if it's a little soft while still in the oven because it'll firm up after taking them out? Great!

Another one, "Uh oh, I don't have Dutch-processed cocoa powder. Can I still use the normal stuff for this recipe?" Yeah, Google can answer that, but so can an LLM.


You make it sound like brownie making is a scientific endeavour. I wouldn't think it's hard, but I guess I haven't made brownies in all conditions.

All baking is a scientific endeavor in my house! You should try my brownies! :D

What makes you think you couldn't have made brownies without Google? Just go to your local library and grab the first baking cookbook you find. And there it is, a better recipe than Google's, without all the SEO blog spam.

To avoid my comment just being snarky: I agree there's a difference between comparing Google to LLMs and comparing the library to Google... but still, I hope you can acknowledge that LLMs can do a lot more than Google, such as answering questions about recipe alterations or baking theory, which a simple recipe website can't/won't.


fwiw modern recipe sites are awful - you have to scroll for literal minutes until you get to the recipe. LLMs give you the answer you want in seconds.

I’m certainly no LLM enthusiast but pretending they are useless won’t make the issues with them go away


I doubt this bonanza is gonna last... These chatbots, feeding from the very sources that can't seem to surface quality stuff, by the way, will likely degrade just like search has over the last 20 years. There will be ads, there will be manipulation and deception, there will be pointless preambles, and they will spit out even more wrong instructions and unusable garbage. And on top of it all, it won't take 20 years to degrade this time; it's rather likely it will take less than 5.

Maybe open source models will hold them accountable, or maybe those will degrade too somehow. Or maybe the world will be going through too hard a collapse for any of us to care.


The model weights for the leading open source offerings have already been downloaded thousands, if not millions, of times. There's no unsqueezing that tube of toothpaste.

For me personally, LLMs have helped me learn 10x faster than I would be able to otherwise. IMO, in 15 years, teachers with university degrees will be as rare as teachers with PhDs are today, because the actual teaching will be left to the LLMs.


