What is the current hypothesis on what substantially larger context windows would enable LLMs to do that is beyond the capabilities of current models (other than the obvious one of not getting forgetful/confused once you've exhausted the context)?
I mean, not getting confused / forgetful is a pretty big one!
I think one thing it does is help you get rid of the UX where you have to manage a bunch of distinct chats. I think that pattern is not long for this world: current models are perfectly capable of recognizing when the subject of a conversation has changed.
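To make that last point concrete, here is a minimal sketch of how a chat UI might use the model itself to detect a topic shift, so one long-context conversation could replace many separate chats. The `topic_changed` helper and the `ask_llm` callable are hypothetical names, not any real product's API; `ask_llm` stands in for whatever function sends a prompt to your model and returns its text reply.

```python
def topic_changed(history, new_message, ask_llm):
    """Ask an LLM whether new_message starts a new topic.

    ask_llm: hypothetical callable taking a prompt string and
    returning the model's text reply (wire it to any real API).
    """
    prompt = (
        "Conversation so far:\n"
        + "\n".join(history)
        + f"\n\nNew message:\n{new_message}\n\n"
        + "Does the new message change the subject of the "
        + "conversation? Answer YES or NO."
    )
    # Treat any reply beginning with YES as a topic change.
    return ask_llm(prompt).strip().upper().startswith("YES")

# Usage with a stub standing in for a real model call:
history = [
    "How do I center a div in CSS?",
    "Use flexbox with justify-content: center.",
]
print(topic_changed(history,
                    "Actually, what's a good pasta recipe?",
                    lambda p: "YES"))  # stub reply; prints True
```

With this in place, the UI could silently start a fresh "thread" (or just insert a divider) whenever the check fires, instead of making the user manage distinct chats by hand.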