What can I say? You are mistaken. Computation is as much about physics as it is about math. It is the study of what can actually be done in this universe with real hardware (including human brains). If you doubt this, read the opening paragraph of this paper:
The only reason that P=NP? matters at all (let alone why it is a foundational question) is because the theory of computation concerns itself with what can be done with actual physical hardware in actual physical time.
> Per analog and quantum, indeed, and they are also mathematics.
No, they aren't. Analog computers are machines. Quantum computers are machines (or at least they will be if we ever actually manage to build one). We can describe the behavior of these machines mathematically, but that is not why they matter. They matter because we can actually build them.
> In 1900, David Hilbert challenged mathematicians to design a “purely mechanical procedure” to determine the truth or falsehood of any mathematical statement. That goal turned out to be impossible. But the question — does such a procedure exist, and why or why not? — helped launch two related revolutions that shaped the twentieth century: one in science and philosophy, as the results of Gödel, Church, Turing, and Post made the limits of reasoning itself a subject of mathematical analysis; and the other in technology, as the electronic computer achieved, not all of Hilbert’s dream, but enough of it to change the daily experience of most people on earth.
> Although there’s no “purely mechanical procedure” to determine if a mathematical statement S is true or false, there is a mechanical procedure to determine if S has a proof of some bounded length n: simply enumerate over all proofs of length at most n, and check if any of them prove S.
> This method, however, takes exponential time. The P ?= NP problem asks whether there’s a fast algorithm to find such a proof (or to report that no proof of length at most n exists), for a suitable meaning of the word “fast.” One can think of P ?= NP as a modern refinement of Hilbert’s 1900 question. The problem was explicitly posed in the early 1970s in the works of Cook and Levin, though versions were stated earlier — including by Gödel in 1956, and as we see above, by John Nash in 1955.
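The bounded-length proof search described above can be sketched as a brute-force enumeration. Here's a toy version (the "proof system" is just a hypothetical checker over bit-strings, not a real logic): the enumeration visits exponentially many candidates, while the check on each candidate stays fast.

```python
from itertools import product

def find_proof(checks, n, alphabet="01"):
    """Enumerate every candidate 'proof' of length at most n and
    return the first one the (fast) checker accepts, else None.
    The number of candidates grows exponentially in n."""
    for length in range(n + 1):
        for candidate in product(alphabet, repeat=length):
            s = "".join(candidate)
            if checks(s):  # fast, mechanical verification
                return s
    return None

# Toy 'statement': is there a bit-string of length <= 4 with
# exactly three 1s? The checker runs in linear time per candidate.
print(find_proof(lambda s: s.count("1") == 3, 4))
```

The asymmetry is the whole point: the outer loops are where the exponential cost lives, and P ?= NP asks whether that search can be replaced by something fast.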
> Think of a large jigsaw puzzle with (say) 10^1000 possible ways of arranging the pieces, or an encrypted message with a similarly huge number of possible decrypts, or an airline with astronomically many ways of scheduling its flights, or a neural network with millions of weights that can be set independently. All of these examples share two key features:
> (1) a finite but exponentially-large space of possible solutions; and
> (2) a fast, mechanical way to check whether any claimed solution is “valid.”
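The two quoted features show up clearly in subset-sum, a standard NP problem. A minimal sketch (the weights and target here are made up for illustration): the solution space has 2^n subsets, yet checking any one claimed solution takes time linear in its size.

```python
def verify(weights, target, claimed_indices):
    """Feature (2): checking a claimed solution is fast --
    linear in the size of the claim."""
    return sum(weights[i] for i in claimed_indices) == target

weights = [3, 9, 8, 4, 5, 7]
target = 15
# Feature (1): there are 2**len(weights) = 64 candidate subsets
# here; for n items in general, 2**n of them.
print(verify(weights, target, [0, 2, 4]))  # 3 + 8 + 5 = 16 -> False
print(verify(weights, target, [0, 2, 3]))  # 3 + 8 + 4 = 15 -> True
```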
I agree that much of the motivation comes from real world imperatives, but, for example, P=NP is of deep mathematical interest in its own right, and if/when mathematicians solve it, they will move on and let us sort out the details.
We can often only actually build things (beyond a few somewhat blessed prototypes) once we've understood the mathematics behind them - edit: or is that the other way around??
I'm not sure that any rigorous discipline can consider itself outside or unbound by mathematics.
> P=NP is of deep mathematical interest in its own right
Sure. Computation and math are closely related, but they are not identical. Physics is described almost exclusively by differential equations, but differential equations and physics are nonetheless two distinct fields of study.
The complexity theory that gives us P = NP is not actually about time, but about the number of steps.
Of course that matters physically because if you only have a machine that performs one step at a time (or a somewhat better one that performs N steps at a time, for some fixed N), then the number of steps does translate to amount of time.
If you have access to unlimited parallelism, then, for instance, some recursive algorithms that completely process a tree structure can drop from linear time to logarithmic.
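The linear-to-logarithmic drop can be made concrete with the standard work/span accounting for a balanced-tree reduction. This is a sketch of the counting argument, not of any real parallel runtime: "work" is what a one-step-at-a-time machine pays, "span" is the critical-path depth an unboundedly parallel machine would pay.

```python
def work_and_span(n):
    """For a balanced binary reduction over n leaves:
    work = total combine operations (serial cost),
    span = longest chain of dependent combines (parallel cost)."""
    work, span, width = 0, 0, n
    while width > 1:
        work += width // 2       # combines performed at this level
        width = (width + 1) // 2  # results feeding the next level
        span += 1                 # each level's combines run in parallel
    return work, span

print(work_and_span(1024))  # 1023 combines of work, but only 10 levels deep
```

So with unbounded parallelism the 1023 sequential steps collapse to a chain of 10, which is the log-depth claim above.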
> Of course that matters physically because if you only have a machine that performs one step at a time (or a somewhat better one that performs N steps at a time, for some fixed N), then the number of steps does translate to amount of time.
It's much more fundamental than that. If you have any finite machine then the amount of time it takes to perform N steps will be proportional to N for sufficiently large N.
> If you have access to unlimited parallelism...
And if you had some magic pixie dust...
Sorry to be the one to break this to you but you live in a finite universe.
As N grows without bound, any computation that isn't O(1) takes arbitrarily long.
Complexity analysis goes out to infinity, but the way we use it as a tool is to inform us about what happens with our practical, finite inputs.
If we know that something requires a non-polynomial number of steps, it quickly becomes intractable even for small inputs. (So if we code that in a practical program, we need some justification that either those inputs won't occur, or we actively guard against them.)
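A quick back-of-envelope calculation (assuming, generously, a machine doing 10^9 steps per second) shows how small "small inputs" is for exponential algorithms:

```python
def years_for(steps, ops_per_sec=10**9):
    """Rough wall-clock estimate for a machine doing a (generous)
    10^9 steps per second."""
    return steps / ops_per_sec / (3600 * 24 * 365)

# 2**n steps for a few modest n: n=40 is seconds, n=60 is decades
# (~36 years), n=80 is tens of millions of years.
for n in (40, 60, 80):
    print(n, years_for(2**n))
```

That cliff between n=40 and n=80 is exactly why complexity classes, despite being asymptotic statements, are a useful guide for finite practical inputs.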
Whether the entire universe is actually finite is not known, and not really relevant because that subset of it which is available to us as resources is vastly tiny.
The universe is vast, and vastly parallel: things are happening all over it at once, with more objects than have ever been crammed into any of our computing machines.
> Sorry to be the one to break this to you but you live in a finite universe
I don't think cosmologists make that claim in such a strong way. It's mainly worded something along the lines of, "given our current understanding, it's most likely to be finite."
"You live in a finite universe" is not the same as "the universe is finite". You live in a light cone, and unless you are immortal that light cone comprises a finite amount of space-time.
https://www.scottaaronson.com/papers/pnp.pdf