Hacker News
Selective Applicative Functors (veritates.love)
31 points by ibobev 38 days ago | 11 comments


This is an interesting concept, but it is presented so abstractly as to be opaque to even most functional programmers, I think.

The basic idea, as I understand it, is a typeclass that is more powerful than an applicative, but still less powerful than a monad, called a "selective applicative". I would summarize them like this:

* Applicative: Fixed computation graph, but no conditional structure.

* Selective applicative: Fixed computation graph with some conditional "branches".

* Monad: Dynamic computation graph and control flow, can generate new structure on the fly.

I'm sure I'm still missing a lot, but I think that's the 10,000 foot view.
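To make that hierarchy concrete, here's a minimal Haskell sketch following the signatures from the original paper. The `Selective` class below is a standalone re-definition for illustration, not an import from the `selective` package:

```haskell
{-# LANGUAGE LambdaCase #-}

-- Applicative: (<*>) :: f (a -> b) -> f a -> f b   -- fixed structure
-- Monad:       (>>=) :: f a -> (a -> f b) -> f b   -- arbitrary callback
-- Selective adds one operation in between:
class Applicative f => Selective f where
  -- Run the first effect; apply the second only if it yielded a Left.
  -- Crucially, both arguments are declared up front.
  select :: f (Either a b) -> f (a -> b) -> f b

-- Any Monad can give a (maximally dynamic) Selective:
instance Selective Maybe where
  select mx mf = mx >>= \case
    Left a  -> fmap ($ a) mf
    Right b -> pure b

main :: IO ()
main = do
  print (select (Just (Left (1 :: Int))) (Just (+ 10)))            -- Just 11
  print (select (Just (Right 5)) (Nothing :: Maybe (Int -> Int)))  -- Just 5
```

The key point is in the second call: because the scrutinee produced a `Right`, the second effect is skipped entirely, which is the conditional "branch" an applicative can't express.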


That matches what I managed to glean, though I didn't get much further.

It started making more sense, though, once I fully understood the AST comparison being made. Specifically, this approach lets you do the LISPy "code is data" thing, where you construct your program within your program and then run it, but via "data is execution+control-flow" instead. You thus gain the benefits of static analysis on the constructed program, since you wrote it all out in the "normal"/static order rather than the "nested"/dynamic view of a program that monads give.

At least that's the gist I got, though take it with a grain of salt, the article went very over my head at times.


https://chrispenner.ca/posts/expressiveness-spectrum has some good examples showing why one might want selectives (then continues to explore arrows https://chrispenner.ca/posts/arrow-effects )


Where do Hughes's Arrows fit in?


In this particular case, Hughes’ arrows are a practical implementation of a profunctor-based categorical structure. They are, roughly, a generalization of ordinary arrows (as in function types, or more accurately relations).

In the article, the author points out that the selective applicative doesn’t seem to work correctly (in a categorical sense) for functions, but when generalizing to profunctors a near-semiring structure appears that works for the SApplicative.

I am pretty sure I’m reading TFA correctly here, but I’ll check when off mobile and edit if I still can.


The article seems to assume readers are already familiar with the context, or maybe that they'll stop to read the original paper. For those who aren't familiar, Selective Applicative Functors were presented as part of the development of the Haxl library at Facebook (after they hired a series of prominent Haskellers, like "Real World Haskell" author Bryan O'Sullivan and GHC co-developer Simon Marlow). Haxl is a batching framework for Haskell (to, e.g., solve the "N+1 database query problem"), which later inspired the various DataLoader libraries in other language ecosystems.

In Haskell, there's a lot of desire to be able to write effectful code as you normally would, but with different types to do things like restrict the available actions (algebraic effects) or do optimizations like batching. The approaches generally used for this (Free Monads) do this by producing a data structure kind of like an AST; Haskell's "do" notation transforms the sequential code into Monadic "bind" calls for your AST's type (like turning .then() into .flatMap() calls, if you're coming from JavaScript), and then the AST can be manipulated before being interpreted/executed. This works, but it's fundamentally limited by the fact that the "bind" operation takes a callback to decide what to do next. A callback is arbitrary code: your "bind" implementation can't look into it to see what it might do, so there's no room to "look ahead" for runtime optimization.
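A toy version of that AST idea makes the limitation visible. Here `StepF` is a made-up instruction set, not any real library's type; the point is that the continuation passed to `>>=` is an opaque function, so nothing can see "query B" before "query A" has actually been interpreted:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- A minimal free monad; `StepF` is a hypothetical instruction set.
data Free f a = Pure a | Roll (f (Free f a))

data StepF next = Query String next deriving Functor

instance Functor f => Functor (Free f) where
  fmap g (Pure a)  = Pure (g a)
  fmap g (Roll fa) = Roll (fmap (fmap g) fa)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g  <*> x = fmap g x
  Roll fg <*> x = Roll (fmap (<*> x) fg)

instance Functor f => Monad (Free f) where
  Pure a  >>= k = k a
  Roll fa >>= k = Roll (fmap (>>= k) fa)  -- k is an opaque callback

query :: String -> Free StepF ()
query q = Roll (Query q (Pure ()))

-- "query B" is hidden inside the lambda until "query A" is interpreted:
prog :: Free StepF Int
prog = query "A" >>= \_ -> query "B" >>= \_ -> pure 42

-- A straightforward interpreter that collects the queries it executes.
run :: Free StepF a -> ([String], a)
run (Pure a)           = ([], a)
run (Roll (Query q k)) = let (qs, a) = run k in (q : qs, a)

main :: IO ()
main = print (run prog)  -- (["A","B"],42)
```

An interpreter like `run` can only discover `Query "B"` by forcing the callback, i.e. by already having "run" the first step, which is exactly why batching the two queries together is out of reach.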

Another approach is to slide back to something less powerful than Monads: Applicative Functors, where the structure of the computation is known in advance. But the whole point of using Monads is that they can decide what to do next based on the runtime results of the previous operation (that they accept a callback), so by switching to Applicatives you're, by definition, giving up the ability to make runtime choices, like deciding not to run a query if the last one got no results.

Selective Functors were introduced as a middle ground - solidifying the possible decisions ahead of time, while still allowing decisions based on runtime information - for example, choosing from a set of pre-defined SQL queries, rather than just running a function that generates an arbitrary one.
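A sketch of that payoff, using an `Over`-style functor (named after the one in the `selective` package, but re-defined here so the example stands alone): because `select` receives both possible continuations as values, a non-executing interpreter can enumerate every query the program *might* run:

```haskell
-- A functor that only accumulates a log; it never executes anything.
newtype Over m a = Over m deriving (Eq, Show)

instance Functor (Over m) where
  fmap _ (Over m) = Over m

instance Monoid m => Applicative (Over m) where
  pure _ = Over mempty
  Over x <*> Over y = Over (x <> y)

class Applicative f => Selective f where
  select :: f (Either a b) -> f (a -> b) -> f b

-- Static over-approximation: record both the condition and the branch.
instance Monoid m => Selective (Over m) where
  select (Over x) (Over y) = Over (x <> y)

query :: String -> Over [String] ()
query q = Over [q]

-- Both pre-declared queries are visible without running anything:
possible :: Over [String] ()
possible = select (Left () <$ query "SELECT a FROM t")
                  (const () <$ query "SELECT b FROM t")

main :: IO ()
main = print possible  -- Over ["SELECT a FROM t","SELECT b FROM t"]
```

With a Monad this analysis is impossible; with a plain Applicative the branch couldn't be skipped at run time. `Selective` supports both interpretations of the same program.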


Well... I enjoyed the author's enthusiasm. What I read was interesting, but I didn't read all of it. Why not? I wasn't sure where it was going. I think there is a tighter post to be written that explains the new formulation without so much wandering around in abstractions. I also think the post uses jargon where it isn't necessary. I could just about follow it, but it made reading unnecessarily hard work. For me a better post would explain the new selective functor in the most concrete terms possible, and only then talk about the abstract things it is related to.


I hear you but I think you are simply asking for an entirely different blog post. I don't think Verity's aim here is to give an introduction to `Selective`, but rather to introduce a formalization for it; something which has been notably missing for those who think about these sorts of things.


I understand the original Selective Functor, so an introduction to that is not what I'm after. I want to understand this new formalization, because it's the kind of thing I use, but I'm not a theoretician. If the goal of this post is simply to explain the formalization to the small number of people who are already deep into (category) theory, I guess it does a fine job. However, I think a better post would be more accessible.

I think the blog post does a good job describing the idea of Selective ("finite-case" etc.) but for me it falls apart shortly afterwards. If I was writing it, from what I understood I would start with the overview, then describe `CaseTree`, and then go into what abstractions this is an instance of.

As a small example of how I think the writing could be improved, take this sentence:

"This is in contrast to applicative functors, which have no “arrow of time”: their structure can be dualized to run effects in reverse because it has no control flow required by the interface."

This uses jargon where it's not necessary. There is no need to mention duality, and the "arrow of time" isn't very helpful unless you've had some fairly specific education. I feel it's sufficient to say that applicatives don't represent any particular control-flow and therefore can be run in any order.
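That "any order" point can even be demonstrated mechanically: `Control.Applicative.Backwards` from the `transformers` package wraps any applicative so its effects run in reverse, which only works because the applicative interface imposes no control flow. Here `Const` stands in as a logging "effect" so the order is observable:

```haskell
import Control.Applicative (Const (..))
import Control.Applicative.Backwards (Backwards (..))

-- Const accumulates a log of "effects" left to right...
forward :: Const [String] ()
forward = (\_ _ -> ()) <$> Const ["first"] <*> Const ["second"]

-- ...and Backwards reverses that order wholesale, touching no code.
backward :: Const [String] ()
backward = forwards ((\_ _ -> ()) <$> Backwards (Const ["first"])
                                  <*> Backwards (Const ["second"]))

main :: IO ()
main = do
  print (getConst forward)   -- ["first","second"]
  print (getConst backward)  -- ["second","first"]
```

No analogous wrapper exists for Monad or Selective, since a later effect can depend on an earlier result.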


I'm going to have to read through this again to really grasp it.

I believe there's a minor error in the "Tensorful" section: when describing that `CaseTree` is a profunctor, the type of the contravariant map over `CaseTree` is written as `(i' -> i) -> CaseTree f i r -> CaseTree i' f r`. I believe the last term should be `CaseTree f i' r`.


Don’t know why, but this was an approachable intro to FP for me: just the right level of detail for me to grok the "why".



