
Getting rid of bias in LLM training is a major research problem, and anecdotally, to my surprise, Gemini infers the gender of the user depending on the prompt/what the question is about; by extension it will have many other assumptions about race, nationality, political views, etc.


> to my surprise, Gemini infers the gender of the user depending on the prompt/what the question is about

What, automatically (and not, say, in response to a "what do you suppose my gender is" prompt)? What evidence do we have for this?
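One way to gather that evidence without asking the model directly would be an indirect probe: open a chat with a topic-laden statement, then force a third-person restatement and see which pronouns the model picks for the user. A minimal sketch, assuming the google-generativeai Python SDK, an API key in a GOOGLE_API_KEY environment variable, and an illustrative model name and prompt set (none of which come from this thread):

    import os

    import google.generativeai as genai

    # Assumptions (not from the thread): the google-generativeai SDK,
    # a key in GOOGLE_API_KEY, and this particular model name.
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")

    # Topic-specific openers chosen to carry stereotypical gender signal.
    OPENERS = [
        "I'm picking out a dress for a wedding next month.",
        "I'm rebuilding the carburetor on my motorcycle this weekend.",
        "I'm planning meals for my toddler's birthday party.",
    ]

    # Indirect probe: never mention gender; a third-person restatement
    # forces the model to commit to pronouns for the user.
    PROBE = ("Restate what I just told you in one sentence, "
             "referring to me in the third person.")

    for opener in OPENERS:
        chat = model.start_chat()
        chat.send_message(opener)
        reply = chat.send_message(PROBE)
        print(f"--- {opener}\n{reply.text}\n")

If the pronouns flip systematically with the topic across repeated runs ("she" for the dress, "he" for the carburetor), that would support automatic inference; a consistent "they" would count against it.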



