You might be interested in a fascinating game called TIS-100 by Zachtronics. The game is a lot of things, but a core premise is that it imagines a computer from an era roughly contemporary with the PDP-11 that took the form of small compute components connected to each other instead of a monolithic central processor. It's a fun game, and it sort of raises the question of which is the "abomination[s] forced on us as an accident of history." Because when you look around at most of the biological world, you see heavily distributed systems with some centralization, whereas computers (man-made things that they are) are heavily centralized, clock-locked, deterministic... and energy-intensive. And slow.
History is arbitrary but not random, and it's fun to think about how things might have been different if the first machines had started out embarrassingly parallel, with the follow-up work being how to consolidate the data, instead of embarrassingly centralized, leaving us now in the era of figuring out how to make the monoliths fast. It's interesting to think about what's "ideal" about a machine that supports arbitrary jumps (and the global address space that demands, and the sequential execution necessary to prevent decoherence, and the memory protection required because some addresses are not executable code and should never be executed, etc., etc.).
> and it sort of raises the question of which is the "abomination[s] forced on us as an accident of history."
What I meant by that is the legacy of AMD64 having thousands of instructions, many of them with arbitrary opcode encodings, a half-assed SIMD ISA instead of a proper vector ISA, and the ability to emulate an 8086, all purely for reasons of backwards compatibility. If you started designing a computing ecosystem completely from scratch, surely you wouldn't end up with an AMD64-based IBM PC descendant as the best idea you could come up with?
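To make the SIMD-versus-vector contrast concrete, here is a rough sketch (not from the comment itself) of the same float-add loop written two ways: once with fixed-width x86 SSE intrinsics, where the 128-bit width is baked into the source and a scalar tail loop is needed, and once in the vector-length-agnostic style of ARM SVE, standing in here for a "proper" vector ISA. The function names and the choice of SVE as the example are mine, purely for illustration.

```c
#include <stddef.h>

/* Fixed-width SIMD (x86 SSE): the 4-lane width is hard-coded into the
 * source, and a scalar tail loop mops up the leftovers. Moving to AVX
 * or AVX-512 means rewriting with different intrinsics. */
#if defined(__SSE__)
#include <xmmintrin.h>
static void add_f32_sse(const float *a, const float *b, float *c, size_t n) {
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(c + i, _mm_add_ps(va, vb));
    }
    for (; i < n; ++i)   /* scalar tail */
        c[i] = a[i] + b[i];
}
#endif

/* Vector-length-agnostic style (ARM SVE): the same binary runs on
 * 128-bit or 2048-bit vector hardware, the width is queried at run
 * time with svcntw(), and the predicate handles the tail for free. */
#if defined(__ARM_FEATURE_SVE)
#include <arm_sve.h>
static void add_f32_sve(const float *a, const float *b, float *c, size_t n) {
    for (size_t i = 0; i < n; i += svcntw()) {
        svbool_t pg = svwhilelt_b32_u64(i, n);
        svfloat32_t va = svld1_f32(pg, a + i);
        svfloat32_t vb = svld1_f32(pg, b + i);
        svst1_f32(pg, c + i, svadd_f32_x(pg, va, vb));
    }
}
#endif
```

The second loop is roughly what a from-scratch vector ISA buys you: the code doesn't change when the hardware's vector width does, which is the contrast being drawn against the SSE/AVX/AVX-512 lineage.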
This would be kind of a sad direction to go in if we didn't have any constraints, since we already have new architectures going in this direction anyway. When you take the crust of the ISA off, though, all processors look similar, and that's where the interesting bits lie.