
And the LLM can take total garbage as input and still understand the writer's intent? I know that when I'm vague with an LLM, I get junk or inappropriate output.

As an optimist, I would say that it could be better at teasing out your intent from you interactively, and then producing something along those lines. People aren't ashamed to answer questions from an AI.
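
To make that concrete, a clarify-first loop might look something like the sketch below. This is just an illustration: ask_llm, the prompt wording, and the READY sentinel are placeholders I made up, not any particular vendor's API.

    # Rough sketch of the clarify-first loop described above.
    # ask_llm() is a stand-in for whatever chat-completion call you actually use.

    def ask_llm(messages):
        # Placeholder: send `messages` to your LLM of choice and return its reply text.
        raise NotImplementedError

    def clarify_then_produce(vague_request, max_questions=3):
        # Stage 1: have the model interview the user instead of guessing.
        messages = [
            {"role": "system",
             "content": "Ask one short clarifying question at a time. "
                        "When you have enough detail, reply with exactly READY."},
            {"role": "user", "content": vague_request},
        ]
        for _ in range(max_questions):
            reply = ask_llm(messages)
            if reply.strip() == "READY":
                break
            answer = input(reply + "\n> ")  # the user answers the model's question
            messages += [{"role": "assistant", "content": reply},
                         {"role": "user", "content": answer}]

        # Stage 2: only now ask for the actual output, using the gathered context.
        messages.append({"role": "user",
                         "content": "Now produce the result based on everything above."})
        return ask_llm(messages)

The point is just that the model spends its first few turns asking rather than producing, so the vague prompt gets sharpened before any output is generated.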


