IMHO, this guy is 6 different kinds of crazy. Parallelism is of course more efficient. Parallel algorithms are about determining which steps in an algorithm can be executed independently of one another to take full advantage of multiple CPUs. Some algorithms can be decomposed this way, some can't. As we hit the upper limit of single-core performance, we have to scale out across multiple cores to get more speed. My impression is this guy never really understood how parallel algorithms work, so he decided to hate them.
I don't think you really read the article, since the author never said that parallelism is inefficient. He's saying, among other things, that decomposition (which you seem to favor) is not the right way to design parallel programs. In other words, if sequential programs are composed of sequential objects, it follows that parallel programs should be composed of parallel objects. Sequential order then becomes a consequence of signaling, i.e., of communication between parallel objects.
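To make that concrete, here's a rough sketch in Python (my own toy example, nothing to do with the author's actual model): two "objects" run in parallel, and the only sequential ordering comes from a signal one sends to the other, not from the order the code starts them.

```python
import threading
import queue

# Two "parallel objects" running concurrently; the only ordering
# between them comes from the message one sends to the other.
log = []
mailbox = queue.Queue()

def producer():
    log.append("produced")
    mailbox.put("done")      # the signal: tells the consumer it may proceed

def consumer():
    mailbox.get()            # blocks until the producer's signal arrives
    log.append("consumed")

# Note the consumer is deliberately started *first*.
threads = [threading.Thread(target=consumer), threading.Thread(target=producer)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(log)  # always ['produced', 'consumed']
```

The consumer starts first, but "consumed" still lands after "produced": the sequential order is a consequence of the signal, not of program order.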
The problem is coming up with these "parallel objects" through anything other than decomposition. For example, in the quick sort demo you linked to, his first step is to decompose quick sort into 5 sections which can run in parallel. But then in this article, he claims that decomposition is somehow "fake".
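For what it's worth, the decomposition version is easy to sketch. This is a generic parallel quicksort in Python, not the author's demo; the thread pool and the cutoff size are my own choices, and a real speedup would need processes or a runtime without a global interpreter lock.

```python
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor()

def quicksort(xs, parallel_cutoff=4):
    if len(xs) <= 1:
        return xs
    # Partition around a pivot.
    pivot, rest = xs[0], xs[1:]
    lo = [x for x in rest if x < pivot]
    hi = [x for x in rest if x >= pivot]
    # The two recursive sorts are independent of each other -- that
    # independence is exactly what the decomposition exposes.
    if len(xs) > parallel_cutoff:
        left, right = pool.map(quicksort, [lo, hi])
    else:
        left, right = quicksort(lo), quicksort(hi)
    # Recombine the sorted halves.
    return list(left) + [pivot] + list(right)

print(quicksort([4, 2, 7, 1, 9, 3, 8, 6, 5]))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The point being: the parallel structure here comes straight out of decomposing the sequential algorithm, which is hard to square with calling decomposition "fake".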
Message passing is a great idea; check out Erlang if you want more. Reading this guy's articles, it seems like he's halfway towards something interesting, but has blown it all out of proportion. He's at war with his daemons, and he even lists some of them on his "Enemies" page.
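For anyone who hasn't seen Erlang: the gist, in a little Python sketch of my own (a real Erlang process is far more than this), is a thread that owns a mailbox and only ever changes its state in response to messages.

```python
import threading
import queue

# An "actor" is a thread that owns a mailbox; all of its state changes
# happen in response to messages, never by direct access from outside.
def actor(mailbox, replies):
    state = 0
    while True:
        msg = mailbox.get()
        if msg == "stop":
            replies.put(state)   # report the final state and exit
            return
        state += msg             # every update arrives as a message

mailbox, replies = queue.Queue(), queue.Queue()
threading.Thread(target=actor, args=(mailbox, replies)).start()
for n in [1, 2, 3]:
    mailbox.put(n)
mailbox.put("stop")
result = replies.get()
print(result)  # 6
```

No shared mutable state, no locks: the mailbox serialises everything, which is most of why the Erlang crowd sleeps well at night.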
Very well put; I get the same impression about decomposition. The bottom line is that each CPU core can execute some sequence of instructions. If there's more than one core, there can be more than one sequence. Threads are the best current abstraction for that, and this guy goes off about how threads are evil and all this. At the heart of it there are no objects, just data flowing from disk to RAM to registers, calculations being performed in the core, and data flowing back.
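That view is easy to illustrate (toy Python example of mine, not from the article): two threads, two independent instruction streams, each crunching half of the data, with the main thread combining the partial results.

```python
import threading

# Two cores, two independent instruction streams: each thread sums
# half of the data; the main thread combines the partial results.
data = list(range(1000))
partial = [0, 0]

def sum_half(i, chunk):
    partial[i] = sum(chunk)   # one sequential stream of work

mid = len(data) // 2
threads = [
    threading.Thread(target=sum_half, args=(0, data[:mid])),
    threading.Thread(target=sum_half, args=(1, data[mid:])),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = partial[0] + partial[1]
print(total)  # 499500
```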
There's always Haskell, whose monads really put concurrency front and center...
It could really use some examples of what the hell he's talking about.
Wide ranging examples, too. I'm sure he could come up with one or two, but can he come up with a whole bunch, which span a reasonably wide swath of algorithm space?
Show me how to, say, invert a matrix, which is a possible-yet-annoyingly-hard problem to parallelize ordinarily, and I'll take it seriously.
If I understand correctly, the way to build a parallel system is to have everything concurrent by default and then mark the specific parts that must be sequential. Just like building a secure system: you disallow everything by default and then specify the specific operations that are allowed.
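A crude sketch of that idea in Python (my own toy, not the author's system): spawn everything concurrently by default, and use a lock to mark the one section we explicitly declare sequential.

```python
import threading

# Concurrent by default: a hundred tasks run at once. The lock marks
# the one part we explicitly declare sequential (the counter update).
counter = 0
counter_lock = threading.Lock()

def task():
    global counter
    # ...arbitrary concurrent work would happen here...
    with counter_lock:        # the explicitly-sequential part
        counter += 1

threads = [threading.Thread(target=task) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 100
```

The default is parallel; sequencing is the thing you have to ask for, which matches the deny-by-default analogy.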
Sorta. I'd have preferred to see an example or two.
The problem is that "sequential" vs. "concurrent" isn't solely an implementation choice. It's a property of the problem. Some problems are inherently sequential and can't be meaningfully scaled to an arbitrary number of cores. So color me skeptical, basically. Concurrency is hard because it is hard. While there may be a paradigm out there that makes it less so, I'm going to need to see it in operation before I make any judgements.
In the frame of reference of the ants, all clocks agree with one another. This means that timing is global within their FOR. There is no problem in determining simultaneity because there is only one FOR in question. And even with multiple FORs, their relative speed would have to approach that of light for any ant to notice a difference between local times. Deny it at your own peril.
Frankly, I think you've brought up the speed of light as a distraction.
More interesting than whether they are moving at the same speed is whether they are thinking at the same speed, and in particular whether they think with a common tick. Since time appears, for the most part, to be continuous, that's not going to give you a tick: there's nothing for the ants to synchronise their clocks to.
If being in the same frame of reference were enough, then you could just put two computers in the same room and call it a synchronised system.