Why is it that whenever there is news about AI, it's either a new scam or something vile? All this harm done to the environment, to people's sanity and lives, just so companies can pay their employees less. Great work.
It's very similar to what Europeans did to American Indian tribes a few centuries ago: they gave them alcohol. A neutral substance by itself, which could be used for disinfection or to light fires. But it has a destructive side too, and if the populace isn't resistant to it, they stand no chance. AI is very similar: some positive potential alongside a powerful destructive potential. Human tribes, as we know, have loose morals and thus aren't resistant at all to AI's destructive side. We are like those American Indians now, unable to resist the temptation.
BUT, notice the absolutely opposite approach to AI and Web3 on HN. Things that highlight Web3 scams are upvoted and celebrated. But AI deepfakes and scams at scale are always downvoted, flagged and minimized with a version of the comment:
“This has always been the case. AI doesn’t do anything new. This is a nothingburger, move on.”
It comes up so often as to be systematic. Both downvoting Web3 and upvoting AI. Almost like there is brigading, or even automation.
Why?
I kept saying for years that AI has far larger downsides than Web3, because in Web3 you can only lose what you voluntarily put in, but AI can cause many, many, many people to lose their jobs, their reputations, etc., and even lives if weaponized. Web3 and blockchain can… enforce integrity?
At this point I think HN is flooded with wannabe founders who think this is "their" gold rush, and any pushback against AI is against them personally, against their enterprise, against their code. This is exactly what happens on every vibe-coding thread, every AI-adjacent thread.
Mass participation in systems can create emergent effects larger than the net sum of the parts. I opt out because first movers are unfairly advantaged; and because lacking proper safeguards, my participation would implicitly support those participants who profit from producing misery. I don't want to accidentally launder the profits from human trafficking nor commit my labor to build my own prison. The rhetoric promoting Web3 as an engine of progress and freedom simply oversold the capabilities of its initial design. That underlying long term vision may still be viable.
We can't rebuild the economy without also rebuilding the State, and that requires careful nuanced engineering and then the consent of the governed.
> BUT, notice the absolutely opposite approach to AI and Web3 on HN. Things that highlight Web3 scams are upvoted and celebrated. But AI deepfakes and scams at scale are always downvoted, flagged and minimized…
If they reported on heart disease, people might get healthy. But there's an instinctual understanding that people dying all over just improves journalists' odds in our society. Keep them anxious with crime stats!
The era of new technologies working for us rather than net against us is something we took for granted, and it's in the past. Those who'd scam or enshittify have the most power now. This new era of AI isn't unique in that, but it's a powerful force multiplier, and more so for the predatory than the good.
What's worse is a significant number of folks here seem to be celebrating it. Or trivializing what makes us human. Or celebrating the death of human creativity.
What is it, do you think, that has attracted so many misanthropes into tech over the last decade?
Are you suggesting people shouldn't develop AI because it basically just produces unemployment and scams? That they should just be good people and stop, or that the government should ban the development of AI?
I mean, you are clearly equating AI with unemployment and scams, which I think is a very incomplete picture. What do you think should be done in light of that?
Blaming the technology for bad human behavior seems an error and it's not clear that the GP made it.
People could and likely will also increase economic activity, flexibility, and evolve how we participate in the world. The alternative would get pretty ugly pretty quick. My pitchfork is sharp and the powers that be prefer it continues being used on straw.
>I mean, you are clearly equating AI with unemployment and scams, which I think is a very incomplete picture.
What else? Let me guess: slop in software, AI psychosis, environmental concerns, growing wealth inequality. And yes, maybe we can write some crappy software faster. That should cover it.
I have no suggestions on how to solve it. The only way is to watch OpenAI and Anthropic lose more money, and then hopefully models get cheaper or prove completely useless.
I mean, from your perspective it just sounds like it should be stopped somehow. Either people collectively decide it's a waste of time or something. I guess I'm very surprised to hear someone thinks it brings no value. I can relate to some of the negative outcomes, but to not see any significant value seems kind of crazy to me.
Yes, but I am talking about slop in all the software I use, not just what I make. Every app is trying to do everything. Everywhere there's a summarise button, some cobbled-together AI-gen features. Software continuously fails, and companies provide no support, as that is all automated to save money.
Just about all of the good news, once you read a little bit more, turns out to be due to traditional ML, and nearly all of it is in the medical imaging field. Then OpenAI tries to take credit and say "Oh look, AI is doing that too," which is not true. Go ahead and read deeper into any of those stories and you will quickly find LLMs haven't done much good.
They helped me make some damn good brownies and be a better parent in the last month. Maybe I should write a blog about all of the great things LLMs are doing for me.
Oh yeah, and one rewrote the 7-minute-workout app for me without the porn ads before and after the workout so I can enjoy working out with one of my kids.
What makes you think you couldn't have made brownies without LLMs? Go to Google, scroll 20cm, and there it is: a recipe, the same one ChatGPT gave you. I won't comment on rewriting an app, because LLMs can definitely do that.
Because, "Why are the edges burnt and the middle is too soft? How are these supposed to actually look? I used a clear 8"x8" pan, and I'm in Utah, which is at 4,600 ft elevation"
Oh, it's a higher elevation, I need to change the recipe and lower the temperature. Oh, after it looked at the picture, the top is supposed to be crackly and shiny. Now I know what to look for. It's okay if it's a little soft while still in the oven because it'll firm up after taking them out? Great!
Another one: "Uh oh, I don't have Dutch-processed cocoa powder. Can I still use the normal stuff for this recipe?" Yeah, Google can answer that, but so can an LLM.
What makes you think you couldn't have made brownies without Google? Just go to your local library and grab the first baking cookbook you can find. And there it is, a better recipe than Google's, without all the SEO blog spam.
To avoid my comment just being snarky, I agree that there's a difference between comparing Google to LLMs, and the library to Google... but still I hope you can acknowledge that LLMs can do a lot more than Google such as answering questions about recipe alterations or baking theory which a simple recipe website can't/won't.
fwiw modern recipe sites are awful - you have to scroll for literal minutes until you get to the recipe. LLMs give you the answer you want in seconds.
I’m certainly no LLM enthusiast but pretending they are useless won’t make the issues with them go away
I doubt this bonanza is gonna last... These chatbots, feeding from the very sources that can't seem to surface quality stuff, will likely degrade just like search has over the last 20 years. There will be ads, there will be manipulation and deception, there will be pointless preambles, and they will spit out even more wrong instructions and unusable garbage. On top of it all, it won't take 20 years to degrade this time; it's rather likely it will take less than 5.
Maybe open source models will hold them accountable, or maybe they will degrade too somehow. Or maybe the world will be going through too hard a collapse for any of us to care.
The model weights for the leading open source offerings have already been downloaded thousands, if not millions, of times. There's no unsqueezing that tube of toothpaste.
For me personally, LLMs have helped me learn 10x faster than I would be able to otherwise. IMO, in 15 years, teachers with university degrees will be as rare as teachers with PhDs today, because the actual teaching will be left to the LLMs.