Hacker News | new | past | comments | ask | show | jobs | submit | AnimalMuppet's comments | login

LLM output is also an echo.

Since rafram is not the only one confused, yes, you really do.

It isn't that hard to understand:

> Just wait until there are entire classes of vulnerabilities related to LLM usage

This is a valid concern.

There is going to be a new class of vulnerabilities in which an LLM is involved, and once they are discovered, they will make it possible to cause catastrophic damage to a company very easily.

This won't be surprising, since we have companies building casual remote-code-execution tools for "agents" that are just waiting to be hijacked.


I understand that. What about that relates specifically to the Android CLI? That was rafram's question, and mine, and as far as I can tell still hasn't been answered.

I mean, I guess if you're going to say "don't use LLMs", then you also don't want to let agents use the Android CLI, but it seems like raising an awfully general concern in a discussion about a very specific article.


On the flip side, I cause a medium panic in my daughter when I text "please call me when you can" without a why attached. She assumes someone's in the hospital or dying or something.

Yes, like those people who send meeting invites with a generic or useless title and no agenda or topic text in the invite. I'm not attending.

My mom had to lay down a rule that if I called her at a weird hour I needed to open with whether or not I was okay. Almost 30 now and still do the same thing.

How far do you take that? One medical condition that has been considered militarily relevant is flat feet. If the state can draft people, can they make you show them your feet? If you can't avoid the draft but can avoid the medical exam, isn't that a way of avoiding the draft?

> can they make you show them your feet?

I see a subtle difference between being barefoot and showing cock and balls to a half dozen strangers. I know of at least one American president who avoided being drafted because of his "feet situation".


And to not go in a vehicle with a license plate that is traceable to you.

"For the little stealing, they give you prison, soon or late. For the big stealing, they names you emperor, and puts you in the hall of fame when you croaks. If there's one thing I've learned from twenty years on the Pullman cars listening to the white quality talk, it's dat same fact."

From "The Emperor Jones", quoted from memory.


I read that in Jar Jar Binks voice. :D

I think this means that if lawyers use it, they have also lost confidentiality. That could be a significant issue in a big case.

[Edit: Or maybe not, legally. But they have definitely lost confidentiality in the "corporate secrets" sense, and that may still matter.]


If lawyers use it, they may be able to claim a work-product exemption, although that itself will depend on a lot more factors than I can analyze.

This is really the question. Conversely, why would an attorney get privilege over chatbot interactions when an individual using a chatbot for their own defense would not?

"Industrial" cannot rely on any one individual. You have to be able to scale your process (or whatever), duplicate your process, have your process survive multiple people leaving, and so on.

Which means that any true agile cannot be industrial. And therefore any industrial agile cannot be true to the principles of the Agile Manifesto.


It sounds like you don't have motivation to keep up with the changes of the AI state of the art. That's fine. Don't.

But motivation in general? Let it find you. When you notice "hey, I really want to do that" - well, that's motivation. You want to do that. You don't have to manufacture motivation - you have motivation.

And once you have motivation, if you want to experiment with using an AI as you do it, that's fine. If you don't, that's fine too.


There are two errors.

Error 1: "Something must be done, this is something, therefore this must be done." Yeah, but "this" is something stupid, with no real-world chance of working.

Error 2: "We will not do anything until we can prove that it will work." You can analyze things to death and waste years in the process, and never do anything.

Somewhere in between is the right answer. You see plausible success, though still far from certain. Then you experiment.

Now, is New York striking the right balance? I have no idea. I'm not privy to their internal discussions. But I know that, if it fails, everyone is going to mock them for trying. That is, everyone is going to assume they fell into error 1. But did they? Getting the balance right still means that the experiments fail a fair amount of the time.

