
An example from a few days ago, the first time I decided to use AI Google Search: I Googled how to reset the time on my Nixon Ripley watch (I forget the exact search term I used). Google "AI" helpfully brought up a list of instructions, prefaced with "to reset the time on a Nixon Ripley watch ..." But the instructions referred to a mode that's not on this watch, which was suspicious. It also included a link to a website with instructions that looked like what the AI had generated. Except the website didn't reference Nixon at all, but some other watches (Casio G-Shock, I think), so of course the instructions didn't work.

So I went back to the "old-fashioned" way: searching the links for the actual Nixon website and finding the watch manual, which had the correct steps.



That seems pretty par for the course with Google these days. “Hmm, you want to know something about a specific watch? Well, I know something else about watches!”


It seems like LLMs are good at broad and generic things but fail miserably at precision. And instead of admitting they don't know something, they confidently respond with nonsensical answers.


I thought the commenter was talking about Google Search. It sometimes pushes similar sites you don’t want because they might generate more ad revenue. Now they have two products that do this.


They were talking about something they did through Google Search, but it hasn't been just search for a long time. Pretty often it'll add a section on top of the search results that attempts to answer a question directly, using data pulled from sites, instead of linking to another site.

This feature has existed since long before LLMs, but it sounds like they may have mixed LLM output into it too.


The summaries have been around for a long time, but this was a new “AI” section I hadn’t seen before.


My favorite part of Copilot is when it auto-completes a call to a function that does exactly what I need to do, like magic!

Except that function doesn't exist and never did.
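
For anyone who hasn't hit this: a minimal sketch of the failure mode, in Python. The real function below works; the commented-out completion is the kind of thing Copilot offers. The module and function names in the suggestion are invented here on purpose, because they don't exist anywhere, which is the point.

    import re

    def slugify(title: str) -> str:
        # The boring version you actually have to write yourself.
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    # What a Copilot-style completion might confidently offer instead:
    #
    #     from utils.text import slugify_unicode  # no such module
    #     slug = slugify_unicode(title, max_len=80)
    #
    # It reads plausibly and looks like exactly what you need,
    # but it fails at import time because none of it exists.

    print(slugify("Hello, World!"))  # prints: hello-world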

LLMs don't know what they don't know, so they just make something up because they have to say something. The danger is that most people don't understand that's how they work and don't know when to call BS.

This is where I think companies have a responsibility: to ensure that _every_ response carries a disclaimer that the answer from their AI could be right or completely wrong, and that it's up to the user to figure out which, because the AI can't at the moment.



