Confidence levels aren't necessarily low for incorrect replies; that's the problem. The LLM doesn't "know" that what it's outputting is incorrect. It just knows that the words it's writing are probable given the inputs: "this is what answers tend to look like".
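
To make that concrete, here's a minimal sketch (assuming the Hugging Face transformers library, GPT-2 purely as an illustrative model, and two made-up example sentences): it scores continuations by the token probabilities the model assigns to them. A fluent but wrong answer can score about as well as a correct one, because the score only measures "how likely is this text", not "is this text true".

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 chosen only because it's small and public; any causal LM works the same way.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def avg_logprob(text: str) -> float:
    """Average per-token log-probability the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Shift so each position predicts the *next* token.
    logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = ids[:, 1:]
    token_lp = logprobs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    return token_lp.mean().item()

# Both sentences are "plausible-looking answers"; the score says nothing about
# which one is actually true.
print(avg_logprob("The capital of Australia is Canberra."))
print(avg_logprob("The capital of Australia is Sydney."))
```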

You can make improvements, as your parent comment already said, but it's not a problem that can be solved, only reduced to some degree.
