> And then going on to make more ludicrous statements…
> > Axons carry spike signals at 75 meters per second or less (Kandel et al. 2000). That speed is a fixed consequence of our physiology. In contrast, software minds could be ported to faster hardware, and could therefore process information more rapidly.
> You’re just going to increase the speed of the computations — how are you going to do that without disrupting the interactions between all of the subunits? You’ve assumed you’ve got this gigantic database of every cell and synapse in the brain, and you’re going to just tweak the clock speed…how? You’ve got varying length constants in different axons, different kinds of processing, different kinds of synaptic outputs and receptor responses, and you’re just going to wave your hand and say, “Make them go faster!” Jebus. As if timing and hysteresis and fatigue and timing-based potentiation don’t play any role in brain function; as if sensory processing wasn’t dependent on timing. We’ve got cells that respond to phase differences in the activity of inputs, and oh, yeah, we just have a dial that we’ll turn up to 11 to make it go faster.
The only explanation I can think of is that he thinks the proposal is to scan the brain so that it can be duplicated and run as another fleshy brain. But the idea, obviously, is to simulate the entire brain on a computer, so that running it faster than real time is just a matter of having a fast enough computer. If he missed this, it makes me a bit skeptical of his other criticisms.
Yes, it was a flawed analogy (aren't they all...).
However, these DOS games usually fail in sped-up emulation because they make assumptions about external inputs, such as the real-time clock (RTC).
I think OP's point was that we can't reasonably speed up the "RTC" of a brain emulation if we want it to interact with the real world, because that would break all sorts of hardwired assumptions.
For a simple example: if you ran your brain at 4x speed, it would perceive everything in super-slow-motion. Even at that modest speedup it would have difficulty understanding you when you speak to it (at the least, it would have to be a very patient brain).
At higher speeds, pretty much all cognitive functions would probably break down, unless you feed it recorded inputs that have been accelerated to match the brain's speed.
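The mismatch can be sketched in a few lines. All numbers here are made up for illustration, not measured values: suppose the emulated brain is calibrated to expect a spoken syllable to span a fixed number of its internal ticks. The same syllable then stretches in proportion to the speedup.

```python
REAL_SYLLABLE_MS = 100        # assumed duration of one spoken syllable
TICKS_PER_MS_AT_1X = 1.0      # assumed internal ticks per real ms at 1x speed

def subjective_ticks(speedup: float) -> float:
    """Internal ticks that elapse while one real-time syllable plays."""
    return REAL_SYLLABLE_MS * TICKS_PER_MS_AT_1X * speedup

print(subjective_ticks(1))   # 100.0 -> the tick count the brain was wired for
print(subjective_ticks(4))   # 400.0 -> same syllable, 4x slower subjectively
```

At 4x the brain hears every syllable stretched fourfold; at 1000x speech would be subjectively inaudible rumble.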
That is an issue, but there is a huge difference between having to slow the brain back down in certain situations and being unable to speed it up at all.
I don't expect to play an entire game on fast forward, after all.
Edit, re the line about breakdown: I just assumed that whatever input was given would be sped up too. That part of the project seems far less complicated than the brain simulation itself.
My knowledge of brains is limited, but I'd think the issue remains the same even if you cut off all external inputs.
Basics like memory decay are also tied to the system clock. So if you ran your brain at 1000x speed, it would probably simply forget everything almost immediately.
And if you made a "simple" patch that prevents it from ever forgetting anything, it would be overwhelmed, because it is only wired to deal with a certain number of memories at a time.
In terms of the DOS game analogy: we may be able to patch a game that originally ran in 256 KB of RAM to run in 2 GB and actually fill that up (because we disabled the garbage collector). But the game probably uses algorithms that break down when faced with such a large dataset.
At this point we're down to having to actually understand the game (or brain) in detail, in order to make the changes required for running at higher capacity.
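The decay point above can be put as a toy forgetting curve (the rate constant is a made-up placeholder, not a real neuroscience figure): if decay is calibrated in the brain's own subjective time, then measured against the real world a 1000x brain forgets 1000x faster.

```python
import math

# Assumed, illustrative rate: memories fade on a ~1-year subjective timescale.
SUBJECTIVE_DECAY_RATE = 1.0 / (365 * 24 * 3600)  # per subjective second

def strength_after_real_time(real_seconds: float, speedup: float) -> float:
    """Remaining memory strength after some wall-clock time has passed."""
    subjective_seconds = real_seconds * speedup
    return math.exp(-SUBJECTIVE_DECAY_RATE * subjective_seconds)

print(strength_after_real_time(24 * 3600, 1))     # ~0.997: one real day at 1x
print(strength_after_real_time(24 * 3600, 1000))  # ~0.065: one real day at 1000x
```

In other words, something the sped-up brain learned from the outside world yesterday (real time) is, to it, nearly three years stale.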
Actually having a higher capacity will be tricky, yes. But at least there won't be cell decay in the scientists working 4000-hour weeks to figure it out.
I agree, but you could always simulate the brain's environment as well, and speed up the environment along with the brain. Bridging the gap might be annoying for the brains, stuck communicating at the glacial pace of the squishy, rubbish real-world humans, but I'm sure they'd get over it.
Although it is also a very good point that even with modern hardware many orders of magnitude faster than the emulated hardware, most emulators have to resort to timing hacks to make everything run smoothly. And because locking introduces timing inconsistencies, parallelism is of limited use even when emulating multiple hardware components that originally ran in parallel. To emulate a human brain, we're probably either going to need far more sensitive locking across multiple cores than is currently even imagined, or we're going to have to emulate the whole massively parallel thing on a single thread, on a CPU much, much more powerful than a brain.
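For what it's worth, the single-thread option is roughly what emulators already do: step each emulated component round-robin in small, fixed time slices, which keeps relative timing exact without any cross-thread locking. A minimal sketch (component names are invented for illustration):

```python
class Component:
    """One emulated hardware unit (CPU, video chip, ...)."""
    def __init__(self, name: str):
        self.name = name
        self.ticks = 0

    def step(self) -> None:
        self.ticks += 1  # one emulated cycle of work would happen here

def run_lockstep(components, cycles: int) -> None:
    """Interleave all components on one thread, one cycle at a time."""
    for _ in range(cycles):
        for c in components:  # deterministic order, no locks required
            c.step()

parts = [Component("cpu"), Component("video"), Component("audio")]
run_lockstep(parts, 1000)
print(all(c.ticks == 1000 for c in parts))  # True: everything stays in sync
```

The catch is exactly the one raised above: a brain has billions of "components", so a faithful single-thread lockstep pass over all of them each tick demands absurd serial speed.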
My interpretation of this paragraph:
Brain function depends intimately not just on the relative timing of things internal to the brain, but also on the timing of inputs to the brain. It is unreasonable to assume that we can speed up sensory information without affecting brain function. If that is the case, it doesn't matter how fast you can run the simulated brain: you won't get meaningful outputs unless it's run at a "normal" rate.
One interesting thing about brains is that they are rarely found by themselves foraging in the wild. Usually they are attached to things. In fact, they are involved in many feedback loops involving sensory input and various effectors. As well as being directly attached to peripheral nervous systems and encased in massive bodies with various physical constraints, circulating hormones, social contexts...
It is hard to imagine raising a baby brain to chess-playing maturity without tons of informational input acquired by interaction with the world. (Even just keeping normal children in the basement is a profound intervention, and they still actually experience quite a bit). So I suppose you will have to speed up your realistic world simulation as well.
The brain is not a personal computer and its development is not a matter of factory production because it is not a piece of technology designed by people.
I agree, assuming that you can completely isolate the system time from real time and that we'll have powerful enough machines to be able to dial it up.
Everything else he said is on the mark, though. We're nowhere close to even understanding everything that's going on inside a brain, let alone migrating it to a completely different medium.