
Did you have any luck mitigating this problem?


Sure. To get ChatGPT to role-play, I've found that giving it a character to play, and then wrapping out-of-character (meta-roleplay) instructions in underscores within your query, works well.

E.g. if the LLM is playing a character "Jack" and you are playing the character "James", your query might be: "_only reply for your character Jack_ James: I pick up the sword and then turn towards Jack".
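
As a rough sketch, here's how that could look against the OpenAI Python client (the system framing and model choice are my own assumptions, not something prescribed by the API):

    # Sketch: underscore-wrapped text is treated as out-of-character
    # direction. System framing and model are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()

    system = (
        "You are role-playing the character Jack. Text wrapped in "
        "underscores is out-of-character direction: follow it, but "
        "never echo it. Reply only as Jack."
    )
    user = ("_only reply for your character Jack_ "
            "James: I pick up the sword and then turn towards Jack")

    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    print(resp.choices[0].message.content)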

The same trick can be used to influence behaviour. LLMs often get stuck repeating the same events/descriptions (not word for word), e.g. greeting your character over and over and not moving on; they aren't great at having autonomy/agency in the flow of a conversation. I think this is best addressed by not providing the entire history of the conversation, but instead distilling it down to the information relevant to the current query.
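
As a sketch, that distillation could be a rolling summary plus the last few turns (window size, prompt wording and model here are arbitrary choices, not a tested recipe):

    # Sketch: instead of sending the whole transcript, send a rolling
    # summary plus the last few turns. Window size, prompt wording and
    # model are arbitrary choices for illustration.
    from openai import OpenAI

    client = OpenAI()

    def distill(summary: str, older_turns: list[str]) -> str:
        # Fold older turns into the running summary.
        prompt = (
            "Update this role-play summary with the new events, keeping "
            "only details that matter for future turns.\n\n"
            "Summary: " + summary + "\n\nNew events:\n"
            + "\n".join(older_turns)
        )
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def build_context(summary: str, turns: list[str], window: int = 6) -> str:
        # Only the summary and the most recent turns go into the query.
        return "Story so far: " + summary + "\n\n" + "\n".join(turns[-window:])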

But it can also be mitigated manually by bumping their character with an underscore direction, e.g. "_Jack asks what James wants to order_ James: I return Jack's greeting and peruse the Tavern's menu board".


I've run into this issue a lot with ChatGPT, and almost never with GPT-4. I know it isn't always possible, but just using GPT-4 prevents this 99% of the time (basically 100% with proper prompting).


You make the other side of the conversation the stop sequence, e.g. "Jill:" for Jack and Jill.
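
With the OpenAI API that maps directly onto the stop parameter (a sketch; the messages and model are placeholders, and the API accepts up to four stop sequences):

    # Sketch: generation is cut off the moment the model tries to
    # speak as the other side, e.g. by emitting "Jill:".
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are Jack. The user is Jill."},
            {"role": "user", "content": "Jill: Hello, Jack!"},
        ],
        stop=["Jill:"],  # the other speaker's tag as the stop sequence
    )
    print(resp.choices[0].message.content)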


It's relatively simple to detect this type of defect and handle it during/after generation.
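
For example, a post-generation cleanup could truncate the completion at the first point where the model starts speaking for a character it shouldn't (a sketch; the character names are placeholders):

    # Sketch: truncate a completion where the model starts speaking
    # for a character other than its own.
    import re

    def trim_speaker_leak(text: str, other_speakers: list[str]) -> str:
        pattern = re.compile(
            r"^\s*(" + "|".join(re.escape(s) for s in other_speakers) + r"):",
            re.MULTILINE,
        )
        m = pattern.search(text)
        return text[:m.start()].rstrip() if m else text

    print(trim_speaker_leak("Jack: Welcome!\nJames: Thanks.", ["James"]))
    # -> "Jack: Welcome!"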



