"The question [whether computers can think] is just as relevant and just as meaningful as the question whether submarines can swim." -- Edsger W. Dijkstra, 24 November 1983
I don't agree with the parallel. Submarines can move through water - whether you call that swimming or not isn't an interesting question, and doesn't illuminate the function of a submarine.
With thinking or reasoning, there's no precise definition of what it is, but we nevertheless know that LLMs - and machines more generally - currently can't reproduce many of the human behaviours we refer to as thinking.
The question of what tasks machines can currently accomplish is certainly meaningful, if not urgent, and the reason LLMs are getting so much attention now is that they're accomplishing tasks that machines previously couldn't do.
To some extent there might always remain a question about whether we call what the machine is doing "thinking" - but that's the uninteresting verbal question. To get at the meaningful questions we might need a more precise, higher-resolution map of what we mean by thinking. But the crucial element is what functions a machine can perform and what tasks it can accomplish; whether we call that "thinking" doesn't seem important.
Maybe that was even Dijkstra's point, but it's hard to tell without context...
To be clearer about why I disagree that the cases are parallel:
We know how a submarine moves through water, whether it's "swimming" isn't an interesting question.
We don't know to what extent a machine can reproduce the cognitive functions of a human. There are substantive, significant questions about whether, or to what extent, a particular machine or program can do so.
So I might have phrased my original comment badly. It doesn't matter whether we use the word "thinking" or not, but it does matter whether a machine can reproduce human cognitive functions - and if that's what we mean by the question of whether a machine can think, then that question does matter.
"We know how it moves" is not the reason the question of whether a submarine swims is not interesting. It's because the question is mainly about the definition of the word "swim" rather than about capabilities.
> if that's what we mean by the question whether a machine can think
That's the issue. The question of whether a machine can think (or reason) is a question of word definitions, not capabilities. The capabilities questions are the ones that matter.
> The capabilities questions are the ones that matter.
Yes, that's what I'm saying. I also think there's a clear sense in which asking whether machines can think is a question about capabilities, even though we would need a more precise definition of "thinking" to be able to answer it.
So that's how I'd sum it up: we know the capabilities of submarines, and whether we say they're swimming or not doesn't answer any further question about those capabilities. We don't know the capabilities of machines; the interesting questions are about what they can do, and one (imprecise) way of asking that is to ask whether they can think.
> I also think there's a clear sense in which asking whether machines can think is a question about capabilities, even though we would need a more precise definition of "thinking" to be able to answer it.
The second half of the sentence contradicts the first. It can't be a clear question about capabilities without widespread agreement on a more rigorous definition of the word "think". Dijkstra's point is that the debate about word definitions is irrelevant and a distraction. We can measure and judge capabilities directly.
And the often-missed caveat is that we should only care about whether the software does what it is supposed to do.
In that light, LLMs are just buggy, and have been for years. Where is the LLM that does what it says it should do? "Hallucination" and "do they reason" are distractions. They fail. They're buggy.