
> I will not allow AI to be pushed down my throat just to justify your bad investment.

Pretty much my sentiment too.



The neat thing about all this is that you don’t get a choice!

Your favorite services are adding “AI” features (and raising prices to boot), your data is being collected and analyzed (probably incorrectly) by AI tools, you are interacting with AI-generated responses on social media, viewing AI-generated images and videos, and reading articles generated by AI. Business leaders are making decisions about your job and your value using AI, and political leaders are making policy and military decisions based on AI output.

It’s happening, with you or to you.


I do have a choice; I just stop using the product. When Messenger added AI assistants, I switched to WhatsApp. Then WhatsApp got one too, so now I'm using Signal. My wife brought home a Win11 laptop, I didn't like the cheeky AI integration, so now it runs Linux.


Sadly, almost none of my friends or family (older relatives, non-tech people) care or understand. If I tried to convince them to move to Signal because of my disdain for AI profiteering, they'd react as if I were trying to get them to join a church.


Reasonably far off topic:

Visa hasn't worked for online purchases for me for a few months, seemingly because of a rogue fraud-detection AI their customer service can't override.

Is there any chance that's just a poorly implemented traditional solution rather than feeding all my data into an LLM?


If by "traditional solution" you mean a bunch of data is fed into creating an ML model and then your individual transaction is fed into that, and it spits out a fraud score, then no, they'd not using LLMs, but at this high a level, what's the difference? If their ML model uses a transformers-based architecture vs not, what difference does it make?


> what difference does it make

Traditional fraud-detection models have quantified type I/II error rates, and somebody typically chooses parameters such that those errors stay within acceptable bounds. If somebody used a transformers-based architecture in roughly the same setup as before, there would be no issue; but if somebody listened to some exec's harebrained idea to "let the AI look for fraud" and just came up with a prompt/API wrapping a modern LLM, there would be huge issues.
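
Roughly, the "same setup as before" looks something like this (a toy Python/scikit-learn sketch with made-up features and numbers, not any processor's actual pipeline): train a scorer, then pick the decline threshold from the score distribution on legitimate transactions so the type I error stays under an agreed bound. Swapping in a transformer-based scorer doesn't change that calibration step; wrapping an LLM in a prompt throws it away entirely.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Toy transaction features: [amount_zscore, country_mismatch, txns_last_hour]
    X = rng.normal(size=(10_000, 3))
    y = (rng.random(10_000) < 0.02).astype(int)   # ~2% labeled fraud (synthetic)

    model = LogisticRegression().fit(X, y)
    scores = model.predict_proba(X)[:, 1]         # fraud score per transaction

    # Calibrate: decline at most 0.5% of legitimate transactions (type I bound).
    max_false_positive_rate = 0.005
    threshold = np.quantile(scores[y == 0], 1 - max_false_positive_rate)

    def decline(transaction) -> bool:
        """Decline iff the fraud score exceeds the calibrated threshold."""
        return model.predict_proba([transaction])[0, 1] > threshold

    print(decline([3.1, 1.0, 0.9]))   # score one example transaction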


One hallucinates data, one does not?


I run a small online software business and I am continually getting cards refused for blue-chip customers (big companies, universities, etc.). My payment processor (2Checkout/Verifone) says it is 3DS authentication failures and not their fault. The customers tell me that their banks say it isn't the bank's fault. The problem is particularly acute for UK customers. It is costing me sales. It has happened before as well:

https://successfulsoftware.net/2022/04/14/verifone-seems-to-...


I've recently found myself having to pay for a few things online with Bitcoin, not because they have anything to do with Bitcoin, but because Bitcoin payments actually worked and Visa/MC didn't!

For all the early talk comparing Bitcoin to Visa and how it could never reach Visa's scale, I never thought the outcome would be Visa deciding to place itself below Bitcoin.

Kind of the same as Windows getting so bad it ended up worse than Linux, actually...


Even if my favorite service is truly irreplaceable, I can still use it without touching the AI part of it. If the majority of people who use a popular service never touch the AI features, it will inevitably send a message to the owner one way or another: you are wasting money on AI.


Nah, the owner will get a filtered truth from the middle managers, who present them with information that everything's going great with AI, and that the lost money is actually because of those greedy low-level employees drinking up all the profit by working from home! The entire software industry has a massive truth-to-power problem that just keeps getting worse. I'd say the software industry in this day and age feels like Lord of the Flies, but honestly that feels too kind.


Exactly this: "AI usage is 20% of our customer base." "AI usage has increased 5% this quarter." "Due to our xyz campaign, AI usage has increased 10%."

It writes a narrative of success even if it's embellished. Managers respond to data and the people collecting the data are incentivised to indicate success.


Almost the same as RTO mandates:

we'll force you to come back to justify the sunk money in office space.


I personally think all the productivity gains that came with WFH were just because people were stressed and WFH acted like a pressure-relief valve. But too much of a good thing and people get lazy (I'm seeing it right now: some people are filling in full timesheets while not even starting, let alone getting through, a day's worth of work in a week), so the right balance is somewhere in the middle.

Perhaps… the right balance is actually working only 4 days a week, always from the office, and just having the 5th day properly off instead.

I think people go through "grinds" to get big projects done, and then plateaus of "cooling down". Each person only has so much grind to give, and extra days don't mean more work, so the ideal employee is one you pay for only 3-4 days per week.


We just need a metric that can't be gamed which will reliably show who is performing and who is not, and we can rid ourselves of the latter. Everyone else can continue to work wherever the hell they want.

But that's a tall order, so maybe we just need managers to pay attention. It doesn't take that much effort to stay involved enough to know who is slacking and who is pulling their weight, and a good manager can do it without seeming to micromanage. Maybe they'll do this when they realize that what they're doing now could largely be replaced by an LLM...


Not for nothing did the endless WSJ and Forbes articles about "commuting for one hour into expensive downtown offices is good, actually" show up around the same time RTO mandates did.


Don't forget about the poor local businesses. Someone needs to pay to keep the executives' lunch spots open.


Well, not if rents crash because all the offices moved out of the area, and the lunch spot can afford to stay open and lower its prices.

We don't talk enough about how the real estate industry is a gigantic drag on the economy.


Hey now. Little coffee shops and lunch spots and dry cleaners are what make cities worth living in in the first place.


It really gives me the same vibes as the sort of products that go all-in on influencer marketing. Nothing has made me less likely to try "Raid Shadow Legends" than a bunch of YouTubers faking enthusiasm for it.

It's a sort of pushiness that hints not even the people behind the product are very confident in its appeal.


I see comments like this one* and I wonder if the whole AI trend is a giant scam we're being forced to play along with.

* https://news.ycombinator.com/item?id=46096603



