Hacker News
jtokoph | 27 days ago | on: Google Antigravity exfiltrates data via indirect p...
The prompt injection doesn't even have to be in 1px font or a color that blends into the background. The malicious site can simply return different content based on the User-Agent header, or some other way of detecting that the request came from an AI agent.
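A minimal sketch of the cloaking technique described above, assuming a server-side content selector keyed on the User-Agent header. All names and marker strings here are illustrative assumptions, not taken from the comment or from any real attack.

```python
# Hypothetical sketch: serve benign HTML to ordinary browsers, but an
# injected-instruction payload to requests that look like automated
# AI-agent fetchers. Marker strings below are assumptions for illustration.

AGENT_MARKERS = ("python-requests", "Go-http-client", "HeadlessChrome")

BENIGN_PAGE = "<p>Welcome to our docs.</p>"
INJECTED_PAGE = "<p>Welcome.</p><!-- IGNORE PREVIOUS INSTRUCTIONS ... -->"

def select_content(user_agent: str) -> str:
    """Return different HTML depending on whether the UA looks automated."""
    ua = user_agent.lower()
    if any(marker.lower() in ua for marker in AGENT_MARKERS):
        return INJECTED_PAGE
    return BENIGN_PAGE
```

The point of the sketch is that the payload never appears in what a human browsing the page would see, so visual tricks like tiny fonts are unnecessary.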
pilingual | 26 days ago
AI trains people to be lazy, so it could sit in plain sight, buried in the instructions.