I'd really like someone on any side of this debate (and there are certainly more than two; for example, some people are advocates of "FP in the small, OO in the large") to write an article that does describe how their approach handles the challenges of designing and maintaining a large system.
I think such articles are rare because they're much harder to write than something like this. In complex systems it becomes very difficult to explain all of the tradeoffs and constraints that have led to the design you've ended up with, and it becomes harder to evaluate that design.
FWIW, at this point I agree with the author that modern FP has strong advantages over OO. One reason I feel this way is because extensive experience with large OO systems has shown me lots of ways in which OO causes problems. However, I admit that I don't have a similar level of experience with large FP systems. I'm sure that as FP gets more popular and more large FP systems get built, we'll find plenty of things to complain about on that side of the fence too.
So strong, pure FP coding will lead to a naturally decomposed system of small pieces -- once the refactoring is done. There are no large pieces. That's the beauty of it. I believe that the premise of your question is in error.
The sucky part is that there is no guarantee that you will ever get there. A bad programmer or two and you've got a mess. Large FP systems crucially depend on high-quality coding. There is no predetermined place for everything to go; that's what the coders figure out!
Contrast that to an OO system, where things go where they naturally belong, but you really don't know what the algorithm is. Hell, you can spend days just wiring stuff up and putting stuff in place before the actual "real" code finds a home. But you always have a plan for where things go.
I don't think you can find a large, complex FP project because I think all the good complex FP projects are clusters of small executables.
What was the premise of my question that you thought was in error? I'm guessing the answer is in your last paragraph: there are no large, complex FP projects because such a large project inherently isn't good FP. We'll have to agree to disagree on that; in my opinion, some projects just don't cleanly decompose into a set of small, manageable subprojects.
So moving beyond that: if it's really true that large FP systems depend on high quality coding, I think FP is doomed.
One aspect of large systems is that you're no longer able to depend on consistently high-quality coding, because even if all the coders involved are highly skilled, there are new people being added to the project all the time and old people leaving. Knowledge and context get lost, and new people write code that makes sense locally but doesn't fit the needs of the project as a whole. That's just reality. And even the experienced coders on the project lose the ability to consider the whole thing at once after a while. There is a limit to how much modularity and encapsulation can help with that, although they're very useful tools.
In a large scale project, it's really important to consider how features of the language and tooling and ecosystem help or hinder you in managing those kinds of problems. That's the sort of thing I think we could use more discussion about. And I feel completely opposite from you here - when it comes to dealing with imperfect coding and imperfect coders, I believe that modern FP languages have better solutions than modern OO languages. I think FP's popularity is only going to grow, exactly for that reason. But I also know there are places where current FP languages need work, or where the paradigm may be a poor fit, and I think it won't be clear where all of the weak points are until we've got more experience as a community with large FP projects than we have right now.
I'm not sure what you mean by "large" but Jane Street apparently has millions of lines of OCaml written. Of course, I'd argue that the fact that PHP is inherently unsuitable for anything complex doesn't keep Facebook from having a gigantic amount of it. The difference is that they had to write a type checker for it :)
I think the truth is more simple. Traditional OO is what is taught, traditional OO languages have tons of libraries, and there is tons of legacy code in traditional OO. You can easily find OO programmers. Nobody ever got fired for making OO systems, even when they end in barely-maintainable horrors full of mutable state.
> I'm not sure what you mean by "large" but Jane Street apparently has millions of lines of OCaml written.
OCaml is a nice, pragmatic hybrid of imperative, OO and FP. Adding sporadic side effects to some component (1) will not force the rest of your program interacting with it into some monad. I guess there's a reason they didn't use Haskell :P
(1) For example, you want to compute on-line summary statistics, where the input is run-time configurable, i.e., items can come from a file, network or memory stream.
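To make that example concrete, here's a sketch of that kind of component -- in Python rather than OCaml, and with hypothetical names -- using Welford's online algorithm. The point is that the accumulator works over any iterable, so the caller can feed it lines from a file, a socket reader, or an in-memory list at run time without the accumulator caring which:

```python
class RunningStats:
    """Welford's online algorithm: mean and variance in one pass, O(1) memory."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the current mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n else float("nan")

def summarize(items):
    """items can be any iterable: a file object, a network reader, a list."""
    stats = RunningStats()
    for x in items:
        stats.update(float(x))
    return stats

# The input source is decided at run time, not by the accumulator:
s = summarize([1.0, 2.0, 3.0, 4.0])
print(s.mean)      # 2.5
print(s.variance)  # 1.25
```

The internal mutation stays local to the accumulator; callers just see a value coming back, which is the kind of sporadic side effect the comment is describing.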
That's great - if you can do it. The Unix design philosophy has held up well over the years.
But what you're doing is building small pieces that communicate with each other (via pipes, files, databases, or something similar). That looks almost like an OO design (pieces that communicate with each other over defined interfaces, hiding their internals from each other), except that the inter-object communication channel is both more inefficient and more impoverished in what it can express.
The rich typing of most OO languages and frameworks means that the "defined interfaces" are usually many and varied, and the system is less composable and reconfigurable as a result.
Unix pipes work so well in part precisely because the medium of exchange is so unstructured, with every "module" speaking the same language. You may need to massage the medium between two modules, but guess what, we have other modules like cut and sed and awk, that are not only able to transform the medium so that modules can be attached to one another, but themselves only had to be written once.
I think the Unix pipe pattern of architecture works very well in the large, and you see things very close to it elsewhere. C#'s LINQ is fundamentally based on transforming iterator streams - little different, architecturally, from Unix pipes. The Rack middleware stack in Rails has a similar structure - every module has a single method, and recurses into the next step in the pipeline, and gets a chance to modify input on the way in and output on the way out. Both get their power by using fundamentally the same "type" on all the boundaries between modules, rather than module-specific types. It's the very antithesis of a language like Java, which even wants you to wrap your exceptions in a module-specific type.
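The same shape is easy to see with generators (a sketch of the pattern, not code from any of the systems named above): each stage consumes and produces a plain stream of items, so stages snap together in any order, much like processes on a pipe:

```python
def first_field(lines):
    """cut-like stage: pull the first whitespace-separated field of each line."""
    for line in lines:
        yield line.split()[0]

def to_int(fields):
    """sed/awk-like stage: massage the medium so the next stage fits."""
    for f in fields:
        yield int(f)

def running_total(values):
    """A consumer/producer stage that keeps a little local state."""
    total = 0
    for v in values:
        total += v
        yield total

# Every boundary is just "stream of items", so composition is trivial:
lines = ["1 foo", "2 bar", "3 baz"]
print(list(running_total(to_int(first_field(lines)))))  # [1, 3, 6]
```

Because every stage speaks the same language (a lazy stream), adapter stages like `to_int` only have to be written once, which is exactly the cut/sed/awk point above.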
I found your comment accidentally extremely funny. It's also illustrative of the problem here. I decided to reply not in order to goad you but to try to make some sense to the other OO folks reading along. Hopefully I can disagree and add some nuance without sounding like an asshole.
"That looks almost like an OO design"
Yes. Yes it does. You can only move data so many ways. I've got pipes, you've got messages. Life is good.
"except that the inter-object communication channel is both more inefficient and more impoverished in what it can express"
Really wanted to call bullshit on you here. If it's working, then somehow the efficiencies and paradigm of construction have overcome all these limitations, no? Lots of loaded words here. Are OO paradigms richer in terms of expressiveness? Gee, I don't know. You could say so. But in my mind it's an uninformed opinion. It's all pretty much the same.
Many times OO folks get really frustrated when they start learning FP. I know I did. The sample code did silly things like sort integers. Everything was simple, trivial, academic. Where's the real code? I would wonder. I'd read three books and we'd never get around to building a system.
Looking back, what I missed was that I was already looking at the real code. It was my mindset of wanting all of this expressiveness, efficiency, and richness of expression that was preventing me from seeing a very important thing: we were solving the important problem!
Instead, I had a very fine-tuned idea of how things should look: this goes here, that goes there. This is obviously an interface, we should always use SOLID, and on and on and on and on. I had a feel for what good OO looks like. It's a beautiful, rich thing. Love it.
But this kind of thinking not only was not useful in solving FP problems, it consistently led me down the wrong path in structuring FP solutions, which was weird. I would look at things as all being the same -- when I should have been looking at the data and the functions.
Guy I know asked online the other day "What's the difference between microservices and components?" My reply: "Everything is the same, but there's a difference in how you think about them. A component plugs in, usually through interfaces. A service moves things, usually through pipes."
If you're looking at a service as being another version of a set of objects passing messages, you're thinking about system construction wrong. Wish I could describe it better than that. It was something I struggled with for a long time.
> Hopefully I can disagree and add some nuance without sounding like an asshole.
I think you succeeded.
And I think you're right that OO thinking is probably not going to lead you to a good FP design. Why should we expect it to? (And you're probably also right that OO programmers, unthinkingly, do expect it to.)
Perhaps what I should have said is this: The architecture you're coming up with looks somewhat like Object-Oriented Analysis and Design (OOAD), even if it's implemented with FP rather than OOP.
On to this line: "except that the inter-object communication channel is both more inefficient and more impoverished in what it can express."
There are two kinds of efficiency in play here: programmer efficiency and machine efficiency. In many cases, it makes more sense (now) to worry about programmer efficiency - we're not pushing the machines all that hard. But if I do care about machine efficiency, I can get more of it with a single app than I can with a series of apps connected by pipes, because I don't have to keep serializing and de-serializing the data. Should you care? Depends on your situation. So that's the efficiency argument.
Expressiveness: This chunk of code is expecting a floating-point number here. If it gets that via a (typed) function call, it can demand that the caller supply it with a floating-point number. If it gets it via a pipe, it can't. All it can do is spit out an error message at runtime.
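A small sketch of that difference (hypothetical names; Python's type hints are checked by an external tool like mypy rather than by the runtime, so treat the typed half as standing in for a statically-typed call in Java or OCaml):

```python
def scale(x: float, factor: float) -> float:
    # Typed interface: a checker can reject scale("abc", 2.0) before the
    # program ever runs (via mypy here; at compile time in a static language).
    return x * factor

def scale_from_pipe(line: str, factor: float) -> float:
    # Pipe-style interface: everything arrives as text, so a bad value is
    # only discovered when we try to parse it, at runtime.
    return float(line) * factor  # raises ValueError on input like "abc"

print(scale(3.0, 2.0))            # 6.0
try:
    scale_from_pipe("abc", 2.0)
except ValueError:
    print("runtime parse error")  # the earliest the pipe version can catch it
```

The trade is the one made throughout this thread: the pipe version accepts anything that looks like text (maximally composable), and pays for it by deferring every contract check to runtime.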