The cool thing about Conway's Game of Life is that you can't predict it without running the full simulation; there is no shortcut. It relates to the undecidability, from the outside, of recurrent processes.
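To make that concrete, here is a minimal sketch of a single Life step (the B3/S23 rule). The grid size and dead borders are my own simplifying assumptions; the point is just that the only general way to know the board at step n is to apply an update like this n times.

    /* One Game of Life step (B3/S23). Sketch only: fixed grid, dead border cells. */
    #include <string.h>

    #define N 64

    void step(unsigned char g[N][N]) {
        unsigned char next[N][N] = {0};
        for (int y = 1; y < N - 1; y++) {
            for (int x = 1; x < N - 1; x++) {
                int live = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        if (dy || dx)
                            live += g[y + dy][x + dx];
                /* birth on exactly 3 neighbors, survival on 2 or 3 */
                next[y][x] = (live == 3) || (g[y][x] && live == 2);
            }
        }
        memcpy(g, next, sizeof(next));
    }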
"Computation irreducibility"[0] is Mr. Wolfram's word for it and I believe it has some relationship to his CA physics, but I won't pretend I understand.
Yes. He relates an interpretation/definition of the second law of thermodynamics (the increasing-entropy thing) to his irreducibility, and builds a computational theory about the role and importance of the physics "observer" in these analyses. Computational irreducibility is basically a statement both about the intrinsic requirement to do the computation in order to arrive at the future, and about the computational capability of the "observer" (a model, or our brains) to arrive, or not, at the future more efficiently.
I am a computational person, not a scientist, and I think science people find him to be speaking total garbage. That seems a correct assessment to me; his model of the world from a physics perspective seems wrong. Nevertheless, personally I find his computational lens/bias to be useful.
> I am a computational person, not a scientist, and I think science people find him to be speaking total garbage. That seems a correct assessment to me; his model of the world from a physics perspective seems wrong.
I don't think it's that he is speaking garbage; he is basically talking about digital physics, which is a real theory being considered and researched, not pseudoscience.
But he doesn't work with the scientific community at all: he just writes his long essays, uses his own terms, and ignores anyone else doing similar work. He then gets upset when scientists don't simply defer to him.
Though most academic disciplines have a strong "not invented here" bias. It makes you ignore anything outside your citation bubble, or from other fields with somewhat different conventions and terms, even if the other guys are academics as well.
Kaggle's competitions [0] do pull in a lot of impressive little pieces of code, a number of which actually do take shortcuts. They do define a few things to make that more tractable, and there is luck involved, not just deterministic results.
Wouldn't every Turing-complete cellular automaton have this property? What would be an example of a nontrivial (i.e., sufficiently expressive) CA that is "predictable"?
One example would be a CA that takes an exponential number of steps to emulate n steps of a Turing machine. That lets you predict exponentially far into the CA's future by running the TM instead.
This insight is why I stopped trying to use CA as my underlying computational substrate in genetic programming experiments. It is much, much cheaper to run something like brainfuck programs than it is to simulate CA on a practical computer.
A switch statement over 8 instructions contained in a contiguous byte array will essentially teleport its way through your CPU's architecture.
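For illustration, a minimal sketch of the kind of interpreter I mean (hypothetical, not the actual code from my experiments): all 8 brainfuck instructions sit in one flat byte string, a single switch dispatches them, and both the program and the tape stay hot in cache. It assumes a well-formed program with balanced brackets.

    /* Tiny brainfuck interpreter sketch: one switch over the 8 instructions,
       program and tape both in contiguous byte arrays. Assumes balanced brackets. */
    #include <stdio.h>

    void run(const char *prog) {
        unsigned char tape[30000] = {0};
        unsigned char *p = tape;
        for (const char *ip = prog; *ip; ip++) {
            switch (*ip) {
            case '>': p++; break;
            case '<': p--; break;
            case '+': (*p)++; break;
            case '-': (*p)--; break;
            case '.': putchar(*p); break;
            case ',': *p = (unsigned char)getchar(); break;
            case '[': /* skip forward to matching ']' if cell is zero */
                if (!*p) { int d = 1; while (d) { ip++; if (*ip == '[') d++; if (*ip == ']') d--; } }
                break;
            case ']': /* jump back to matching '[' if cell is nonzero */
                if (*p) { int d = 1; while (d) { ip--; if (*ip == ']') d++; if (*ip == '[') d--; } }
                break;
            }
        }
    }

    int main(void) {
        run("++++++++[>++++++++<-]>+.");  /* prints 'A' */
        return 0;
    }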
I feel like CA (single- or multi-state) would work quite well on dedicated hardware; how big could the grid even be? I may be missing the obvious, but it does seem easier to scale compared to cores and manual dispatch.
But otherwise yeah, not the most efficient on current CPUs.
To be fair, a one-dimensional CA is effectively a sort of UTM with a weird program counter and instruction set (see the sketch below). I think the more useful CAs will tend to be of the higher-dimensional variety (beyond 2D/3D). Simulating a CA in hyperspace seems problematic even if you intend to purpose-build the hardware.
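Here is a minimal sketch of what I mean by that first sentence, using an elementary (1D) CA step such as rule 110; the width and wraparound are assumptions of the example. The 8-entry rule table plays the role of the "instruction set" and the sweep over cells is the "program counter".

    /* One step of an elementary 1D CA (e.g. rule = 110). Sketch only:
       fixed width, periodic (wraparound) boundary. */
    #include <string.h>

    #define W 256

    void ca_step(unsigned char cells[W], unsigned char rule) {
        unsigned char next[W];
        for (int i = 0; i < W; i++) {
            int l = cells[(i + W - 1) % W];
            int c = cells[i];
            int r = cells[(i + 1) % W];
            int idx = (l << 2) | (c << 1) | r;  /* 3-cell neighborhood as a 3-bit index */
            next[i] = (rule >> idx) & 1;        /* look up the new state in the rule table */
        }
        memcpy(cells, next, sizeof(next));
    }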