I've been excited about STMs since I read "Composable Memory Transactions" back in 02005, shortly before it was published, and I still believe in the glorious transactional future. But it's difficult to adopt an STM piecemeal; it wants to own your entire program, the same way that garbage collection or nonblocking I/O do, and more so than multithreading with locks. You pretty much have to commit to an STM entirely. The efforts in C# to adopt an STM, which ended around 02010, were a disaster as a result.
The article says a couple of things about STMs that are not true of STMs in general, just true of the Haskell STM the author is familiar with, like a small Brazilian child confidently telling you that almost everyone speaks Portuguese.
One of these is, "STM is an optimistic concurrency system." The possibility of making your concurrency 100% lock-free is one of the most appealing things about STMs, and I think it could be a key to solving the UI latency problem, which just keeps getting worse and worse. Actors and CSP don't normally help here; an Actor is just as "blocking" as a lock. But you can implement an STM with partly pessimistic concurrency, or purely pessimistic, and it might even be a good idea.
Another is, "One last benefit of STM which we haven't yet discussed is that it supports intelligent transaction retries based on conditions of the synchronized data itself." This was an innovation introduced by "Composable Memory Transactions", and many STMs do not support it, including Keir Fraser's awesomely fast version. I am even less certain that it is the correct tradeoff for all uses than I am about purely optimistic synchronization.
But all of this is why I'm rereading Gray and Reuter's Transaction Processing right now after 25 years. With the benefit of 35 years of hindsight, it's a frustrating mix of inspiring long-range vision and myopic boneheadedness. But it shares a lot of hard-won wisdom about problems like long transactions that pop up in a new guise in STMs.
I think writing concurrent programs will always be a hard problem, relative to the difficulty of writing non-concurrent programs, and the only "solution" is to isolate, minimize, and regulate contention. The implementation details of TM, locks, monitors, semaphores, actors, message queues, transactions, etc., are at best "distractions", at worst hindrances. I think a good model of a concurrent program, one that lends itself to writing the program simply, will be applicable across many different implementations. Anything that obscures the development of such a model is harmful. Worst of all is the sheer prevalence of shared resources (especially shared memory). Sharing brings contention, so control sharing.
I don't agree that whether you're using TM, shared-memory monitors, or actors with message queues is an implementation detail or that there is a better programming model that hides the differences between them. You can implement any of them on top of any of the others, but you're still programming to whatever model you put on top.
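(As a sketch of what I mean, with made-up names: an actor-style mailbox built on top of Haskell's STM. Callers still program against the message-passing model, even though STM is doing the work underneath.)

    import Control.Concurrent (forkIO)
    import Control.Concurrent.STM

    -- Hypothetical sketch: an "actor" whose mailbox is an STM TQueue.
    -- spawnActor forks a thread that handles messages one at a time
    -- and returns a send function, so callers see only the actor model.
    spawnActor :: (msg -> IO ()) -> IO (msg -> IO ())
    spawnActor handle = do
      mailbox <- newTQueueIO
      let loop = do
            m <- atomically (readTQueue mailbox)
            handle m
            loop
      _ <- forkIO loop
      return (atomically . writeTQueue mailbox)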
In the implementation that runs, yes, you have to choose something. However, I think the fundamental design is independent of those options, and probably should be developed independently.
There is surely some sense in which that's true; the choice of concurrency primitives, even among such radically different choices, won't change literally every design decision in your system. But it is very pervasive, and it regularly provokes failures that are visible to users.
On the problems with STM in C#, see https://joeduffyblog.com/2010/01/03/a-brief-retrospective-on... (I can't believe nobody else has posted this link yet). As with the Chris Penner article, there are a lot of things described as features of STMs in general which are actually just properties of the STM he worked on, which explains some of the things that sound like nonsense if you've only worked with Haskell's STM or Clojure's. (Duffy is much better about delineating the boundaries of the systems he's talking about, though, because he knows there are alternatives.)
Interesting, the notation also implies 2005 and 2010 A.D. and not B.C., or maybe the notation is specifically about A.D.? Either way, an interesting choice if it was intentional. We say "year 515" without disambiguation, right?
I wish people would comment about transactions, optimistic synchronization, CSP, actors, priority inversion, Fraser's astounding code (https://www.cl.cam.ac.uk/research/srg/netos/projects/archive...) etc., but I guess we each do what we can, and you have to meet people where they are. I probably couldn't have posted a comment any better than yours when I was 12, and maybe you're 12, so maybe you can't do any better. Hopefully, eventually, you will.