Days 12, 13 and 14 - Functors, Applicatives, Monads

Yes, I’m trying to cram 3 days of updates into one blog post. So this is going to be more of an update. Lots of reasons as to why I’m doing this, the most prominent being lack of time. And I ran out of time because it was birthday week (both for me and for Alvina!) and because I’ve been playing Pokemon Go. Grinding Magikarps right now, almost 300 candies, just a few more days of grinding.

On the birthday side, Alvina gave me books! She got me Super Freakonomics, and I’m pretty excited to read it since I really love the first book. She also got me the Soviet Chess Primer! Yes, I used to play chess pretty seriously. Lots of great things have been said about this book, even from the likes of Karpov, Kasparov and Dvoretsky. So I wanna see what makes this book so dear to such chess personalities.

I ordered a Hoya RM-72 49mm Infrared Lens filter for Alvina from Amazon but the package hasn’t arrived yet. Hopefully it gets here in a few days. I can’t wait to see her in action with it. And as soon as she does, we’ll be able to update this blog with some of her stuff.

Back to the title of the post, even with birthday week around, I tried to sneak in a few hours to study the Haskell Book. Like I said previously, I’ve been very eager to read these three chapters because for me, and a lot of other struggling programmers trying to grasp Haskell, these topics are the meat of our learning process and often where we feel like giving up (and I have in the past). It seems like a huge obstacle, and from my previous experience with learning Haskell, I know it feels very overwhelming and confusing because there are just so many ways to interpret these concepts.

On top of that, none of our previous programming experience actually helps. In fact, it can be slightly counterintuitive for a few things. For example, return in Haskell is an ordinary function with a completely different meaning from the return keyword in other, more conventional languages such as C or Python. These concepts can be approached mathematically, from the viewpoint of abstract algebra, and also from a more programmatic viewpoint directly through their Haskell implementations. And often a mixture of these two, combined with different analogies of burritos, space suits, boxes and computations, leads to a very frustrating learning experience.

The approach that this book takes is kinda unique, at least for me. I’m pretty sure that the only thing that has changed between me understanding from this book and not understanding from my past learning material is just the perspective. After all, I’m learning the same thing, maybe just from a perspective that’s more accessible to me.

The book builds up from Functors to Applicatives to Monads, through the concept of function application over structure. I won’t say I understand everything completely yet, because I’m sure there are holes in my understanding right now. But the book goes something like this.

DISCLAIMER: I sincerely believe that no one other than myself will understand the explanation below. I’m writing this as an exercise for myself to make sure I understand it. It’s very Zen-like, everyone has their own way of understanding this, this is mine.

Functors

A Functor basically gives us an operation called fmap, which takes a function a -> b and applies it inside a structure.

By structure, we just mean values sitting inside some kinda data constructor. For example, [1, 2, 3] is a list. The list is the structure and it contains the values 1, 2, 3. We can say that those values are inside the list structure. Similarly, for Just 5, 5 is the value and Just is the structure.

Functor’s fmap basically lets us take a normal function and apply that function to the value inside some structure, without touching the structure itself.

fmap (+1) [1, 2, 3] -- [2, 3, 4]

In the example above, (+1) is the function we are passing to fmap. The list is the structure. fmap took our function and applied it inside our structure/list, giving us back a structure/list but with the function applied to the values inside it.

Similarly, for Maybe we have the following -

fmap (+1) $ Just 5 -- Just 6

Here, fmap applied the function (+1) to the value 5, which was inside the structure Just, resulting in Just 6, which has the same structure as the original argument but with the function applied to the value inside it.
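
And since fmap leaves the structure alone, mapping over Nothing just gives us Nothing back:

fmap (+1) Nothing -- Nothing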

We generalize this in the type of fmap as:

fmap :: Functor f => (a -> b) -> f a -> f b

The f in the type declaration represents the structure inside which the a and b values lie. The structure must obviously have an instance of Functor to be able to use fmap.
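
To make the idea of ‘structure’ concrete, here’s a minimal sketch of writing a Functor instance for a made-up, single-value container. The Box type below is purely hypothetical, just to show the shape of an instance:

-- A hypothetical container holding exactly one value.
data Box a = Box a deriving Show

-- fmap applies the function to the value inside, keeping the Box intact.
instance Functor Box where
    fmap f (Box x) = Box (f x)

-- fmap (+1) (Box 5) -- Box 6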

Applicatives

Applicative is similar to Functor; I like to think of it as sort of an upgrade.

It’s defined something like this:

class Functor f => Applicative f where
    pure :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b

From the class declaration, it’s clear that all applicatives must also have an instance of Functor.

The pure function is very simple: it just takes a value and wraps it in the f structure. So for Maybe, pure 5 would give us Just 5.
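
For example (the type annotations are only there to tell GHCi which structure we want):

pure 5 :: Maybe Int -- Just 5
pure 5 :: [Int]     -- [5]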

The second function, <*>, also called ‘apply’ I think, is very similar to fmap. We can see it if we compare them type-wise side by side:

fmap :: (a -> b) -> f a -> f b
(<*>) :: f (a -> b) -> f a -> f b

They are very similar; they both apply a function to values inside some structure and return the results of that function application inside that structure. The only difference is that in the case of Applicative, the function that’s being applied is also inside some structure. In the case of fmap, however, the function being applied isn’t in any structure.

[(+1), (+2)] <*> [1, 2, 3] -- [2, 3, 4, 3, 4, 5]
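
The same idea works with Maybe as the structure. If the function or the value is missing, there’s nothing to apply, so we get Nothing back:

Just (+1) <*> Just 5  -- Just 6
Just (+1) <*> Nothing -- Nothing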

There is also a monoidal part to applicatives. Think of it this way: our ‘apply’ function has the arguments f (a -> b) and f a, and it returns f b. When we used fmap, we could return the f, or the structure, unchanged because there was only one of it. But here we have two f structures, and we need to make sure we are combining them properly. Of course the two f will have the same type, but they do not necessarily have to be built from the same data constructor (for Maybe, one could be Just and the other Nothing). In such cases we have to decide how the structures combine, and if they have a Monoid instance, we can usually make use of that to combine them. This is why applicatives are also called monoidal functors. We apply the function inside the structure functorially, like fmap, and then combine the structures for the result monoidally.

A good example of the monoidal combining is with the two-tuple applicative. The instance for it is defined something like this:

instance Monoid a => Applicative ((,) a) where
 ...
 ...
 ...

The first type argument of (,) is already fixed in the instance declaration, along with its Monoid constraint. So when we use the two-tuple applicative, the second component of the tuple is produced functorially by applying the function, but the first component of the tuple has to be combined monoidally. An example for better clarity, because I know it sounds confusing -

("lel", (+1)) <*> ("lol", 1) -- ("lellol", 2)

Notice how the (+1) has been applied to the second component and how the first components of the tuples magically combine. It’s basically the list monoid in play, monoidally combining “lel” and “lol” to produce the first value in our resulting tuple (a String is just a list of characters, after all). If you recall, ++ is the mappend for lists. And here, we have just that, a concatenation of two lists.
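
Filling in the elided bits from the instance above, it looks roughly like this (a sketch from memory, not the exact code from base):

instance Monoid a => Applicative ((,) a) where
    pure x = (mempty, x)                -- the first slot gets the 'empty' element
    (u, f) <*> (v, x) = (u <> v, f x)   -- <> is mappend, combining the first slots monoidally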

So we can think of applicatives as function application inside a structure, similar to fmap from Functor, but with the function itself being inside a structure as well; on top of that, we need to make sure the structures are combined properly in the result. We do need to be mindful about how the structures combine, since a type may sometimes have more than one reasonable monoid. Also, for types like Maybe, the way the Applicative instance combines structure isn’t guaranteed to match what the type’s Monoid instance does.

Monads

Finally, monads, the most complicated thing in the world, except it’s not. Like fmap and <*>, monads are all about function application in a structure as well. We’ll see what’s different in just a bit. Let’s start off by checking a basic version of the Monad typeclass:

class Applicative m => Monad m where
    (>>=) :: m a -> (a -> m b) -> m b
    (>>) :: m a -> m b -> m b
    return :: a -> m a

Let’s get the easy stuff out of the way first, which is return. It’s the same as pure from Applicative. It takes a value of type a and puts it in a structure, giving back m a.

Also, notice that Monad has a constraint of Applicative. So all monads are applicatives as well.

Next is the >> operator. And it does exactly what the type signature says: it takes two monadic values, not necessarily containing the same type of value, both wrapped in some structure m, and returns the second of the two. What use is that? It’s useful when we only want the effect of the first one but not the value it returns. It’ll get clearer when we check the next operator. The >> operator is called the sequencing operator, because all it does is sequence two monadic actions.
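
A quick example with Maybe, where the ‘effect’ is just the possibility of failure. The result of the first value is thrown away, but its structure still matters:

Just 1 >> Just 2  -- Just 2
Nothing >> Just 2 -- Nothing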

The star of the Monad typeclass is the bind operator, >>=. To put it into our function application over structure perspective, we can say that the bind operator takes a value inside a structure, m a, and a function a -> m b, which maps a value to another value (not necessarily of the same type) put inside a structure, and returns that, m b, as the final result. Sounds simple, but let’s compare it with fmap.

fmap :: (a -> b) -> f a -> f b
(>>=) :: f a -> (a -> f b) -> f b
(=<<) :: (a -> f b) -> f a -> f b

I’ve changed the m in the Monad class to f so we can see the comparison clearly. The =<< operator is the same as the bind operator except with the arguments flipped; it gives the same result though.

As you can see from the comparison above, the bind operator differs from fmap in that the function passed to it creates more structure. In fmap, we have a -> b. Simple, plain old function application, applied to values inside the structure. But if we were to, say, fmap a function of type a -> m b over a structure, we’d end up with more structure inside it, giving us something of type m (m b) in the result. For example:

fmap (\x -> [x+1]) [1, 2, 3] -- [[2], [3], [4]]

In the example above, the function we passed to fmap creates more structure. What we want is a plain m b, but because our function creates more structure inside our original structure, we get m (m b), or in our specific case, nested lists. What we want is to be able to map a function that creates more structure over a structure, and then remove the nested layers of structure from the result, leaving us with only one layer. Essentially, in the example above, we want to get back [2, 3, 4], without the nested lists in the current result.

And that is exactly what a monad is. It is function application over structure where the function itself creates more structure, but the result is collapsed to have only one layer of structure. Or, we can say that the extra layers of structure get flattened away.

How would we collapse the layers though? In the case of our list example, we can use concat, which has the following type:

concat :: [[a]] -> [a]

It takes a list of lists of a and returns a list of a. But it works only for lists. Fortunately, Control.Monad comes with a function called join that is the same idea as concat but generalized to monads, with the following type:

join :: m (m a) -> m a

If you look at the type signatures of concat and join, you’ll see that concat is just join specialized to lists.

So we could make our original fmap example act like a monad like so:

join $ fmap (\x -> [x+1]) [1, 2, 3] -- [2, 3, 4]

There, we have successfully applied a function that creates more structure and managed to remove the extra layers to get the result with only one layer of structure around it.

And the bind operator does exactly that for us: it applies the function that creates more structure to the value inside a structure, collapses everything until only one layer of structure remains, and returns that result.

So, in summary, a monad is about functorially applying a function that itself creates more structure, and then flattening the nested structure that results.
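
That summary can literally be written down as code. Here’s a minimal sketch (I’m calling it bind' so it doesn’t clash with the real operator):

import Control.Monad (join)

-- Map the structure-creating function, then flatten the extra layer.
bind' :: Monad m => m a -> (a -> m b) -> m b
bind' x f = join (fmap f x)

-- bind' [1, 2, 3] (\x -> [x, x]) -- [1, 1, 2, 2, 3, 3]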

[1..10] >>= \x -> if x*2 < 10 then [x] else [] -- [1, 2, 3, 4]
[1..10] >>= \x -> [x*2] >>= \y -> if y < 10 then [y] else [] -- [2, 4, 6, 8]

The examples above use monads with the structure being the list. In both cases, our m a is the list [1..10]. The function in the first example checks whether twice its argument is less than 10; if it is, it returns the original item wrapped in a list, otherwise it returns an empty list. Notice that the function returns its result wrapped in the structure, much like the type signature we saw previously, a -> m b. We are assured that the nested lists will be taken care of by the list’s bind implementation.

The second example shows how monadic computations can be chained: the result from the first function is passed off to another function. The first function doubles its argument and returns it in a list; the second checks if that doubled element is less than 10 and returns it in a list if it is, otherwise it returns an empty list. Again, bind ensures that the nested lists are flattened out in the end. Chaining like this is how monads are strung together to form longer computations.
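
The same chaining works for any monad, not just lists. Here’s a small sketch with Maybe, where safeDiv is a helper I’m making up just for this example:

-- Hypothetical helper: division that fails gracefully on zero.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Just 100 >>= \x -> safeDiv x 5 >>= \y -> safeDiv y 2 -- Just 10
-- Just 100 >>= \x -> safeDiv x 0 >>= \y -> safeDiv y 2 -- Nothing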

The do notation is nothing but syntactic sugar on top of bind. For example, we can write the earlier list examples like so with do notation:

func1 = do
    x <- [1..10]
    if x*2 < 10
      then [x]
      else []

func2 = do
    x <- [1..10]
    y <- [x*2]
    if y < 10
      then [y]
      else []
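
And the Maybe chain from earlier reads just as nicely in do notation (still using the made-up safeDiv helper):

func3 = do
    x <- Just 100
    y <- safeDiv x 5
    safeDiv y 2
-- func3 == Just 10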

There’s a lot of stuff to cover regarding monads, but I think I’ll stop with my messy attempt at trying to explain it here. I feel like I’m getting the hang of it but the rough edges at the moment are very clear. I hope to keep getting better as I continue practicing more Haskell.

Conclusion

Functors allow us to apply a normal function over a structure leaving the structure unchanged.

Applicatives allow us to apply a function which is itself wrapped in a structure, over a structure, with the structures combined (monoidally, where needed) in the result.

Monads allow us to apply a function that creates more structure, over a structure, and then reduce the resulting nested structure (this is what join does) so we’re left with one layer of structure.

I know all of this sounds confusing, and I know that no one except for me will ever understand what I wrote in this post. But that’s just how most monad tutorials go, so you can say I’m just following an age-old tradition.

Up next, we’ll be looking at Foldables and Traversables.

Good luck and cheers to everyone!
