"using periodic features with dominant periods at T=2, 5, 10" seems inconsistent with "platonic representation" and more consistent with "specific patterns noticed in commonly-used human symbolic representations of numbers."
Edit: to be clear I think these patterns are real and meaningful, but only loosely connected to a platonic representation of the number concept.
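To make the base-10 connection concrete: the reported dominant periods are exactly the periods that decimal notation imposes on integer features. A throwaway sketch (mine, not from the paper):

```python
# Illustrative only: integer features that are periodic with
# T = 2, 5, 10 fall directly out of base-10 (decimal) notation.
def parity(n):      return n % 2    # even/odd: period 2
def mod5(n):        return n % 5    # period 5
def last_digit(n):  return n % 10   # final decimal digit: period 10

for f, period in [(parity, 2), (mod5, 5), (last_digit, 10)]:
    # each feature repeats exactly every `period` integers...
    assert all(f(n) == f(n + period) for n in range(1000))
    # ...and no sooner
    assert all(any(f(n) != f(n + t) for n in range(1000))
               for t in range(1, period))
print("periods 2, 5, 10 confirmed")
```

Periods 5 and 10 are pure artifacts of the base; a model trained on base-12 text would presumably show T = 3, 4, 6, 12 instead.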
The "platonic representation" argument is "different models converge on similar representations because they are exposed to the same reality", and "how humans represent things" is a significant part of reality they're exposed to.
I don't think this is a correct formulation of the platonic representation argument:
different models converge on similar representations because they are exposed to the same reality
because that would be true for any statistical system based on real data. I am sure the platonic representation argument is saying something more interesting than that. I believe they are arguing against people like me, who say that LLMs are entirely surface correlations over human symbolic representations of ideas, and not actually capable of understanding the underlying ideas. In particular, humans can speak about things chimpanzees cannot speak about but that we both understand: chimps grasp "2 + 2 = 4" - not the human sentence, but the idea that if you have a pair of pairs on one hand and a quadruplet on the other, you can uniquely match each item between the collections. Humans and chimps both seem to have some understanding of the underlying "platonic reality," whatever that means.
"Not actually capable of understanding" is worthless unfalsifiable garbage, in my eyes. Philosophy at its absolute worst rather than science.
Trying to drag an operational definition of "actual understanding" out of anyone doing this song and dance is like pulling teeth. People have been trying to make the case for decades, and there's still no ActualUnderstandingBench to actually measure things with.
No, it is partially falsifiable. LLMs clearly don't understand the concept of quantity. They fail at tests designed to assess number understanding in dogs and pigeons; in fact they are quite likely to fail these tests, because they are wildly out of distribution.
We don't know how to demonstrate actual understanding, but we sure can demonstrate a lack of it. When it comes to abstract concepts like "three" or even "more," LLMs have a clear lack of understanding. Birds and mammals do not.
You're right, it's just that 'platonic' is an argument that numbers exist in the universe as objects in and of themselves, completely independent of human reality. If we don't assume this, and instead take numbers to be a system that humans created (formalism), then sure, we can be happy that LLMs are picking up common representations that map well onto our subjective notions of what numbers are.
FWIW it's objectively false that numbers are a system humans created. That's almost certainly true of symbolic numerals, and therefore of large numbers (> 20). But pretty much every bird and mammal is capable of quantitative reasoning; a classic experiment is training a rat to press a lever X times when it hears X tones, or training a pigeon to always pick the pile with fewer rocks even when the rocks are much larger (i.e. ruling out simpler geometric heuristics). Even bees seem to understand counting: one experiment set up 5 identical, clearly artificial landmarks pointing the way to a big vat of yummy sugar water. When the experimenters moved the landmarks closer together, the bees undershot the vat, and likewise overshot when the landmarks were moved further apart.
And of course similar findings have been reproduced etc etc. The important thing to note is how strange and artificial these experiments must seem to the animals involved - maybe not to the bees - e.g. it seems unlikely that a rat evolved to push a lever X times; it is much more plausible that in some sense the rat figured it out. At least in birds and mammals there seems to be a very specific center of the brain responsible for coordinating quantitative sensory information with quantitative motor output, handling the 1-1 mapping fundamental to counting. More broadly, it seems quite plausible that animals which have to raise an indeterminate number of live young would need a robust sense of small-number quantitative reasoning.
It is an interesting question whether this is some cognitive trick that evolved 200m years ago and humans are just utterly beholden to it. But I think it requires jumping through fewer hoops to conclude that the human theory of numbers is pointing to a real law of the universe. It's a consequence of conservation of mass/energy: if you have 5 apples and 5 oranges, you can match each apple to a unique orange and vice versa. If you're not able to do that, someone destroyed an apple or added an orange, etc. It is this naive intuitive sense of numbers that we think of as the "platonic concept," and we share it with animals. It seems to be inconsistent and flaky in SOTA reasoning LLMs.

I don't think it's true that LLMs have stumbled into a meaningful platonic representation of numbers. Like any artificial neural network, they've just found a bunch of suggestive and interesting correlations. This research shows the correlations are real! But let's not overinflate them.
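The matching notion above can be made concrete without any numerals at all - a toy sketch of equinumerosity as pure 1-1 pairing (the function name is mine):

```python
# Toy sketch: deciding "same quantity" by 1-1 matching alone,
# with no numerals or counting symbols anywhere in the logic.
def same_quantity(xs, ys):
    xs, ys = list(xs), list(ys)
    while xs and ys:          # remove one item from each pile at a time
        xs.pop()
        ys.pop()
    return not xs and not ys  # equal iff both piles empty out together

apples  = ["a1", "a2", "a3", "a4", "a5"]
oranges = ["o1", "o2", "o3", "o4", "o5"]
assert same_quantity(apples, oranges)
assert not same_quantity(apples, oranges[:-1])  # an orange was "destroyed"
print("matched")
```

This is essentially what the rat and pigeon experiments probe: tracking the pairing itself, not the symbol "5".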
Regardless of whether the convergence is superficial or not, I am especially interested in what this could mean for future compression of weights. Quantization of models is currently very dumb (per my limited understanding). Could exploitable patterns make it smarter?
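Purely speculative illustration of what "smarter" might look like: if an embedding dimension across number tokens really concentrated its energy at a few known periods, you could store a handful of Fourier coefficients instead of the full row. A synthetic sketch - nothing here reflects any real model's weights:

```python
import math

N = 100  # pretend these are one embedding dimension's values for tokens 0..99

# synthetic "weight row" with energy concentrated at periods 10 and 5
row = [0.3 + 1.0 * math.cos(2 * math.pi * n / 10)
           + 0.5 * math.cos(2 * math.pi * n / 5) for n in range(N)]

def cos_coef(xs, period):
    # projection onto cos(2*pi*n/period); exact here because N is a
    # multiple of the period, making the basis functions orthogonal
    return 2 / len(xs) * sum(x * math.cos(2 * math.pi * n / period)
                             for n, x in enumerate(xs))

mean = sum(row) / N
coefs = {T: cos_coef(row, T) for T in (10, 5)}  # 3 numbers instead of 100

recon = [mean + sum(c * math.cos(2 * math.pi * n / T)
                    for T, c in coefs.items()) for n in range(N)]
err = max(abs(a - b) for a, b in zip(row, recon))
print(f"max reconstruction error: {err:.2e}")
```

On this hand-built row the reconstruction is exact up to float error; real weight rows would have a residual, and the open question is whether that residual is small enough to be worth the bookkeeping.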