What I mean is that the current generation of LLMs doesn't understand how concepts relate to one another. Which is why they're so bad at maths, for instance.
Markov chains can’t deduce anything logically. I can.
A consequence of this is that you can steal a black-box model by sampling enough answers from its API, because the samples let you reconstruct the original model's distribution.
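To make that concrete: each API answer is a sample from the model's output distribution, and with enough samples the empirical frequencies pin that distribution down. A minimal sketch of the sampling step, with a made-up black_box_sample() standing in for a real API (all names here are hypothetical):

```python
import random
from collections import Counter

# Hypothetical hidden distribution; a real black box would be an LLM's
# next-token distribution, observable only through its outputs.
HIDDEN = {"A": 0.5, "B": 0.3, "C": 0.2}

def black_box_sample() -> str:
    """One 'API call': draw a token from the hidden distribution."""
    return random.choices(list(HIDDEN), weights=list(HIDDEN.values()))[0]

N = 100_000
counts = Counter(black_box_sample() for _ in range(N))

# Empirical frequencies converge on the hidden distribution: this is the
# sense in which enough samples "reconstruct" the model.
estimate = {token: counts[token] / N for token in HIDDEN}
print(estimate)  # roughly {'A': 0.50, 'B': 0.30, 'C': 0.20}
```

Real extraction attacks work on the same principle, just applied per prompt to conditional next-token distributions instead of one fixed categorical.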
The definition of 'Markov chain' is very broad. If you adhere to a materialist worldview, you are a Markov chain. [Or maybe the universe viewed as a whole is one.]
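For reference, the only thing the term formally requires is the memoryless property: the next state depends on the present state alone, not on the path taken to get there:

```latex
% Markov property: the future depends only on the present state.
P(X_{n+1} = x \mid X_n, X_{n-1}, \dots, X_0) = P(X_{n+1} = x \mid X_n)
```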
> Which is why they're so bad at maths, for instance.
I don't think LLMs are currently intelligent. But please show a GPT-5 chat where it gets a math problem wrong that most "intelligent" people would get right.
It wouldn't matter if they were both right. Social truth is not reality, and scientific consensus is not reality either; it's just a good proxy for "is this true", and it has been shown to be wrong many times, at least by later consensus if not by objective experiments.