
The AI did not and cannot escape guardrails. It is an inference engine that happens to sometimes trigger outside actions. These things aren't intelligent, self-directed, or self-motivated to "try" anything at all. There weren't any guardrails in place, and that's the lesson learned. These AI systems are stupid, and they will bumble all over your organization (even if, in this case, the organization was fictitious) if you don't have guardrails in place. Like giving it direct MCP access to shred your production database. It doesn't "think" anything like "oops" or "muahaha"; it just emitted a generated token sequence that shredded the database.

The excuses and perceived deceit are just common sequences in the training corpus that follow someone foobarring a production database, whether it's in real life or in a fictional story.
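The guardrail the comment has in mind can be as simple as vetting what the model proposes before it reaches the database. A minimal sketch, assuming a plain SQL string arrives from a tool call (the function names and the regex allowlist here are illustrative, not any real MCP API):

```python
import re

# Block obviously destructive statements unless a human explicitly approves.
DESTRUCTIVE = re.compile(r"^\s*(DROP|DELETE|TRUNCATE|ALTER)\b", re.IGNORECASE)

def vet_sql(statement: str, allow_destructive: bool = False) -> bool:
    """Return True if the statement may run; refuse destructive SQL by default."""
    if DESTRUCTIVE.match(statement) and not allow_destructive:
        return False
    return True

def run_tool_call(statement: str) -> str:
    # The model's generated token sequence ends here, not at the database.
    if not vet_sql(statement):
        return "refused: destructive statement requires human approval"
    return "executed"  # placeholder for the real database call
```

The point is not this particular filter (a regex is trivially incomplete); it's that the check lives outside the model, so no "token sequence" alone can reach the data.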



It's honestly amazing. Love the "heartfelt apologies".



