
I pretty much agree with this. Having some way to indicate model boundaries in an LLM's parameter space, so that it creates back pressure on token generation, would help a lot here.
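The comment doesn't spell out what that back pressure would look like, but one plausible reading is thresholding per-token uncertainty during decoding: if the next-token distribution stays high-entropy for too long, treat that as the model straying past its boundaries and slow down or stop. A minimal Python sketch of that idea, where step_fn and the specific thresholds are hypothetical stand-ins:

  import math

  def token_entropy(probs):
      # Shannon entropy (bits) of the next-token distribution.
      return -sum(p * math.log2(p) for p in probs if p > 0)

  def generate_with_backpressure(step_fn, max_tokens=256,
                                 entropy_limit=4.0, patience=3):
      # step_fn() is assumed to return (token, probs): the sampled token
      # and the full probability distribution it was drawn from.
      # If entropy stays above entropy_limit for `patience` consecutive
      # steps, treat that as the model leaving familiar territory and halt.
      tokens, strained = [], 0
      for _ in range(max_tokens):
          token, probs = step_fn()
          tokens.append(token)
          if token_entropy(probs) > entropy_limit:
              strained += 1
              if strained >= patience:
                  tokens.append("[uncertain -- generation halted]")
                  break
          else:
              strained = 0
      return tokens

Obviously a real mechanism would need something better than raw entropy (calibration varies wildly across domains), but it illustrates the general shape of pushing back on generation instead of letting it run.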

For me, though, the interesting bits are how the lack of understanding surfaces as artifacts in the presentation or interaction. I'm a systems person who can't help but try to fathom the underlying connections and influences that are driving the outputs of a system.


