In the article, an LLM "knows" something if it is able to answer correctly under the right circumstances. The article suggests that even if an LLM answers incorrectly on the first try, trying again may produce a correct answer, and it then proposes a way to pick the right one.
I know some people don't like applying anthropomorphic terms to LLMs, but you still have to give things names. I mean, when you say you kill a process, you don't imply the process is a life form. It is just a short way of saying that you halt its execution and deallocate its resources in a way that can't be overridden. The analogy works, everyone working in the field understands it, so where is the problem?
I prefer a formulation closer to the mathematical representation.
With "kill", there is not much room for interpretation; that is why it works.
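As a concrete illustration of how little room for interpretation "kill" leaves (a minimal, POSIX-only sketch; not from the original comment): a process can choose to ignore SIGTERM, but SIGKILL cannot be caught, blocked, or ignored.

```python
import os
import signal
import subprocess
import time

# Child process that explicitly ignores SIGTERM.
child = subprocess.Popen(
    ["python3", "-c",
     "import signal, time; "
     "signal.signal(signal.SIGTERM, signal.SIG_IGN); "
     "time.sleep(60)"])
time.sleep(0.5)  # give the child time to install its handler

os.kill(child.pid, signal.SIGTERM)  # the child ignores this
time.sleep(0.5)
alive = child.poll() is None
print("alive after SIGTERM:", alive)  # True

os.kill(child.pid, signal.SIGKILL)  # this cannot be overridden
child.wait()
print("return code:", child.returncode)  # -9: terminated by signal 9
```

That "cannot be overridden" property is exactly the sense in which the word carries one unambiguous meaning for everyone in the field.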
Take, for example, the name "Convolutional Neural Networks". Do you prefer that, or, say, "Vision Neural Networks"?
I prefer the first, because it is closer to the mathematical representation. And it does not force you to think the architecture can only be used for "vision", which would bias your understanding of the model.
This kind of complaint makes it look like you stopped at the title and didn't even bother with the abstract, which says this:
> In this work, we show that the internal representations of LLMs encode much more information about truthfulness than previously recognized. We first discover that the truthfulness information is concentrated in specific tokens, and leveraging this property significantly enhances error detection performance.
"LLMs encode information about truthfulness and leveraging how they encode it enhances error detection" is a meaningful, empirically testable statement.
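To show that this kind of claim really is testable, here is a minimal sketch of the generic probing setup it invites: train a linear probe on hidden states to predict whether an answer was correct. Everything here is an assumption for illustration; the data is synthetic Gaussian clusters standing in for real LLM activations at an answer token, and the probe is plain logistic regression, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for hidden states at an "answer token":
# correct answers cluster around +mu, incorrect ones around -mu.
d, n = 32, 2000
mu = rng.normal(size=d) * 0.5
y = rng.integers(0, 2, size=n)                      # 1 = correct, 0 = incorrect
X = rng.normal(size=(n, d)) + np.where(y[:, None] == 1, mu, -mu)

Xtr, ytr, Xte, yte = X[:1500], y[:1500], X[1500:], y[1500:]

# Logistic-regression probe trained with plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(300):
    p = 1 / (1 + np.exp(-(Xtr @ w + b)))            # predicted P(correct)
    g = p - ytr                                     # gradient of log loss
    w -= 0.1 * Xtr.T @ g / len(ytr)
    b -= 0.1 * g.mean()

acc = (((Xte @ w + b) > 0).astype(int) == yte).mean()
print(f"held-out probe accuracy: {acc:.2f}")        # well above 0.5 chance
```

If real hidden states behaved like the synthetic clusters above, the probe's held-out accuracy would beat chance, which is what makes "LLMs encode information about truthfulness" a falsifiable statement rather than a slogan.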
If I look at the comments on this post, my conclusion is that using "true" and "know" hides what the actual result is from a lot of people, so they get stuck in meaningless discussions. Seeing this makes me conclude it is a bad choice from the knowledge-transfer point of view for the scientific community. It would be better to use more objective, less emotionally charged words (which is what I understand by "scientific language").
As for my personal experience: when I read this article, I have to "translate" every appearance of those words into something objective that I can work with, and I find that annoying.
Is "LLMs know" a true sentence in the sense of the article? Is it not? Can LLMs know something? We will never know.