It's always so weird to me that this works at all. There is no 'you'. It's weights in an impossibly complex network. It seems to me that there surely must be another approach to prompt-making that would be more effective than 'this is another intelligence like me, I will tell it how I want it to act'. It's really not, it's something else.
> It's always so weird to me that this works at all. There is no 'you'. It's weights in an impossibly complex network. It seems to me that there surely must be another approach to prompt-making that would be more effective than 'this is another intelligence like me, I will tell it how I want it to act'. It's really not, it's something else.
Yes, but that "something else" is designed (both via architecture and training data) to predict the responses humans give to language used by humans to communicate with humans, so addressing it like a human addresses a human doesn't just work well coincidentally; it works by design.
Although you're correct that it's not exactly 'another intelligence like me,' what it IS is an algorithm that's trained to respond in the way that another intelligence like you would respond. In the corpus of human text, second person instructions are generally followed by text that adheres to the instructions.
There is an alternative that I've found has tradeoffs, where you give it its instructions in third person, e.g. 'Sam is an intelligent personal assistant. The following is a discussion between Sam and Max --- Max: [question]? --- Sam:' You tend to get slightly more coherent responses with that format, because you've hooked into the part of its mind that knows how text looks in textbooks and guides, which are usually well-edited. However, it often gives more 'dry' responses, because you've moved away from the part of its mind that's familiar with human-to-human forum RP.
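The two framings can be sketched as plain string templates. (This is just illustrative; the names, the question, and the delimiters are arbitrary, and a real setup would send these strings to whatever completion API you're using.)

```python
# Sketch of the two prompt framings discussed above. Nothing here calls
# a model; it only builds the strings you'd feed to a completion API.

QUESTION = "What's the capital of France?"

# Second-person framing: address the model directly with instructions.
second_person = (
    "You are an intelligent personal assistant. "
    "Answer the user's question.\n\n"
    f"User: {QUESTION}\n"
    "Assistant:"
)

# Third-person framing: describe a scene for the model to continue,
# leaning on transcript/textbook-style text in the training corpus.
third_person = (
    "Sam is an intelligent personal assistant. "
    "The following is a discussion between Sam and Max.\n"
    "---\n"
    f"Max: {QUESTION}\n"
    "---\n"
    "Sam:"
)

print(second_person)
print(third_person)
```

Either string ends at the point where the model is expected to continue ("Assistant:" or "Sam:"), which is what makes a plain completion model fill in the answer.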
Ah, that's interesting. So you're able to lean it towards particular contexts by the way you frame the prompt? That would follow, and makes sense.
This implies that the system's behavior isn't controlled only by the content of the prompt, but by how you frame the prompt. So if you start believing it's a person and address it as such, it's going to lean toward engaging with you as if it were a person, further misleading you.
> there surely must be another approach to prompt-making that would be more effective than 'this is another intelligence like me, I will tell it how I want it to act
I don't think this is especially beneficial for the LLMs; the benefit of a chat interface is that humans are social animals with lots of experience forming prompts like this.