Maybe not “like I’m five” but could someone explain monads to me?
Monads are definitely “functional”, but they are not limited to functional programming languages.
This is a fairly good example of a Monad on Stack Overflow, along with a surrounding explanation of what that means. The code is in Ruby.
This is what I’ve been calling the “Monad Tutorial Challenge”.
I heard someone say that there’s a kind of inescapable curse that strikes a person immediately upon learning them, forever preventing the ability to explain them to anyone else.
The challenge is to be the first one to ever avoid the curse. This person would be the greatest gift to all of education.
I haven’t learned monads yet, but I know I use them all the time. I think it’s like cooking, you can do it just fine while remaining oblivious to the details of the underlying chemistry, etc.
I admit to being guided by a possibly ignorant fantasy that when the time is right, if the stars happen to align… I wouldn’t be so arrogant to think I could be The One, but I do find it to be a useful source of inspiration for becoming a more effective teacher.
Monad tutorials have spawned a cottage industry in fp languages. I apologize in advance if my explanation contributes to the confusion or is redundant!
A monad is an abstraction which lets us chain together “effectful” computations.
By “effectful computation” we mean that the function changes something, or relies on something, outside of its immediate scope. For example, if I have a function that takes in a value and increments a global counter, the function is impacting the state of the entire program, not just something within its definition. Likewise, if a function reads from a file, it is relying on the external state of the file system and could actually fail because the file does not exist or is corrupted. In typed functional languages, it’s common to represent these effects with types that decorate the function signatures, e.g. IO File or State MyCounter.
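As a small Haskell sketch of the counter effect just described (the names `incrCounter` and the use of `IORef` are my own invention for illustration), note how `IO` in the type advertises that the function touches state outside its own definition:

```haskell
import Data.IORef

-- A hypothetical effectful function: the effect (mutating shared state)
-- is visible in the IO type, not hidden in the implementation.
incrCounter :: IORef Int -> IO Int
incrCounter ref = do
  modifyIORef ref (+ 1)  -- impacts state outside the function's definition
  readIORef ref

main :: IO ()
main = do
  ref <- newIORef 0
  a <- incrCounter ref
  b <- incrCounter ref
  print (a, b)  -- (1,2): each call observes the previous call's effect
```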
A monad is an “abstraction” because it’s not exactly one concrete “thing”: you can’t define the one true Monad, but monads describe a common “shape” of computation which involves effects. You might have noticed in functional languages that there are a lot of datatypes that have a flatMap function defined for them. flatMap is a way of understanding the “shape” of a monad.
In Unison, the signature for List.flatMap is:
List.flatMap : (a -> List b) -> List a -> List b
Compare this to the signature for flatMap on Optional (a datatype which wraps values that could otherwise cause null pointer exceptions):
Optional.flatMap : (a -> Optional b) -> Optional a -> Optional b
What’s similar in both of these functions? Both take a function which operates on a plain element of type a, but that function transforms the element and returns it in some context. The ability to chain together values which need to be expressed in some context like “List” or “Optional” ends up being pretty powerful!
Look at the following series of type signatures for finding a dog (without monads):
findPet : Pound -> Pet
isDog : Pet -> Dog
adoptDog : Dollars -> Dog -> Dog
Simple enough: go to the pound, see if there are any dogs, attempt to adopt a dog. You can even call it in one line with a pipe operator, findPet pound |> isDog |> adoptDog dollars, because the types all match up!
However, at any point in this series of events something could actually go wrong and we could end up with no dog! The local pound could have no dogs, or I could have zero monies, etc. As a functional programmer, rather than doing null checks or try/catch blocks at each step, I’d choose to express the possibility of failure with the Optional data type and use flatMaps.
findPet : Pound -> Optional Pet
isDog : Pet -> Optional Dog
adoptDog : Dollars -> Dog -> Optional Dog
At any point, if one of these functions returns None (the null case of the Optional datatype), the chain of computations will stop (no sense trying to call adoptDog if the pound has no pets today), so we’ve bought ourselves some safety. The fact that the Optional datatype is monadic lets us retain the ability to chain these functions together with the addition of flatMap, something like: findPet pound |> flatMap isDog |> flatMap adoptDog dollars.
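The same pipeline can be sketched in Haskell, whose Maybe plays the role of Optional and whose >>= plays the role of flatMap. The stub implementations (an empty-pound check, a "dog" string check, a made-up 50-dollar adoption fee) are invented for illustration:

```haskell
type Pet     = String
type Dog     = String
type Pound   = [Pet]
type Dollars = Int

-- Each step may fail, so each returns Maybe (Haskell's Optional).
findPet :: Pound -> Maybe Pet
findPet []      = Nothing       -- pound has no pets today
findPet (p : _) = Just p

isDog :: Pet -> Maybe Dog
isDog p = if p == "dog" then Just p else Nothing

adoptDog :: Dollars -> Dog -> Maybe Dog
adoptDog cash d = if cash >= 50 then Just d else Nothing  -- hypothetical fee

-- Chaining with >>= : any Nothing stops the rest of the chain.
adopt :: Pound -> Dollars -> Maybe Dog
adopt pound dollars = findPet pound >>= isDog >>= adoptDog dollars

main :: IO ()
main = do
  print (adopt ["dog", "cat"] 100)  -- Just "dog"
  print (adopt ["cat"] 100)         -- Nothing: first pet isn't a dog
  print (adopt [] 100)              -- Nothing: empty pound
```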
So, monads are a useful abstraction! They describe the composition of functions that would otherwise not “fit” together because of the extra types needed to represent a computational effect.
I see no mention of functors as yet.
@axtens Make sure you grok functors first. All monads are functors, and this is relevant.
Maybe you’ll like Functors and Monads For People Who Have Read Too Many “Tutorials”. Also, see the Typeclassopedia. It is a bit dense, but contains a lot of useful information.
All in all, a monad is just a data structure `m` that:

- allows trivial injection of a value into the structure (`pure :: a -> m a`), and
- allows elimination of nesting (`join :: m (m a) -> m a`), e.g. a `Tree (Tree a)` could be made into a single `Tree a`,

provided that these two operations are lawful. The `join` version of the laws is:

- Given `xss :: m (m a)`, `join (fmap pure xss) == xss`, i.e. adding an `m` layer on the inside and then immediately merging the layers changes nothing.
- Given `xs :: m a`, `join (pure xs) == xs`, i.e. adding an `m` layer on the outside and then immediately merging the layers changes nothing.
- Given some `xsss :: m (m (m a))`, it doesn’t matter whether you merge the inner two layers first, or the outer two: `join (fmap join xsss) == join (join xsss)`.

Now, the question remains why this is useful. The answer is that `join` (combined with `fmap`) allows us to define `(>>=) :: m a -> (a -> m b) -> m b` (“bind” / “flatMap”), i.e. ‘chaining’ of effectful computations. The definition is `xs >>= f = join (fmap f xs)`.
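That definition can be checked concretely for lists, where fmap nests and join flattens. (Here `bind'` is just a hypothetical name to avoid clashing with the built-in `>>=`.)

```haskell
import Control.Monad (join)

-- bind defined via join and fmap, exactly as described above
bind' :: Monad m => m a -> (a -> m b) -> m b
bind' xs f = join (fmap f xs)

main :: IO ()
main = do
  -- For lists, fmap-ing a list-producing function nests...
  print (fmap (\x -> [x, x]) [1, 2])     -- [[1,1],[2,2]]
  -- ...and join flattens the nesting away:
  print (join [[1, 1], [2, 2]])          -- [1,1,2,2]
  -- so bind' behaves like the familiar flatMap:
  print ([1, 2] `bind'` (\x -> [x, x]))  -- [1,1,2,2]
```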
To put it simply:
Monads are a mathematical concept that most developers don’t need to understand.
What you need to take from it is this:
 To make your type a monad you need to implement certain interfaces.
 You have to adhere to some contracts while doing so.
Considering this, there is not much difference from an IQueryable
in the .NET world…
This view on monads gives me enough understanding to be able to read some Haskell programs, but not necessarily to do a deep reasoning about them.
@rlmark’s most concrete examples are both data structures. The monad interface is much broader though: some control structures are monadic as well, such as functions, and coroutines, and…
A non-data-structure that is familiar to most, and that is actually a monad, is the statement. (Fun fact: statements were the motivating example for the monad interface!)
Statements may produce two kinds of things:
 side effects (variable bindings, printing to screen, receiving user input, …)
 a result value
For simplicity, let’s suppose they always do both.
We can compose two statements f()
and g()
with the ;
operator: f() ; g()
is the statement that has the side effects of f()
, followed by the side effects of g()
, and that produces as a resulting value the value produced by g()
.
What if we want to compose more than two statements? f() ; g() ; h()
is ambiguous, much like a + b + c
is ambiguous: is (a + b) + c
meant, or a + (b + c)
? We’ll come back to this below.
We can bind the result of a statement to a variable, like this: x = f()
. When we do, the variable is available for use by (“following”) statements composed on the right.
A special kind of statement is the return
statement, e.g. return 5
, which has no side effects whatsoever (or rather: the neutral effect) but does produce a result value.
Now, statements follow a few laws:

“Left Identity”
The composite statement
{ x′ = return(x); f(x′) }
does exactly the same thing as the simple statement
{ f(x) }

“Right Identity”
The composite statement
{ x = m() ; return x }
does exactly the same thing as the simple statement
{ m() }

“Associativity”
This law is about that ambiguity mentioned earlier. Suppose we have
{ x = m(); y = f(x); g(y) }
Should we interpret this as
y = { x = m(); f(x) }; g(y)
or as
x = m(); { y = f(x); g(y) }
?
The law of associativity says that it doesn’t matter: the statements denoted by both interpretations have identical side effects and identical result values.
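The associativity law can be checked concretely in Haskell with Maybe standing in for statements; the particular m, f, and g below are invented stand-ins for the pseudocode above:

```haskell
-- Stand-ins for m(), f(x), g(y) in the pseudocode
m :: Maybe Int
m = Just 3

f :: Int -> Maybe Int
f x = Just (x + 1)

g :: Int -> Maybe Int
g y = Just (y * 2)

-- Interpretation 1: group the first two steps together
grouped1 :: Maybe Int
grouped1 = (m >>= f) >>= g

-- Interpretation 2: group the last two steps together
grouped2 :: Maybe Int
grouped2 = m >>= (\x -> f x >>= g)

main :: IO ()
main = print (grouped1 == grouped2)  -- True: both are Just 8
```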
@axtens does the above make sense?
It turns out many more constructs than statements alone satisfy these laws. To circle back to lists and Optional
, compare
# statements
{ x = m();
y = f(x);
g(y) }
# lists
[ foreach x in m()
foreach y in f(x)
extend with g(y) ]
# Optional
supposing m() produces some x
supposing f(x) produces some y
the result of g(y)
(and Nothing otherwise)
With a change of notation, the same laws apply.
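In Haskell, do-notation makes that change of notation literal: the same three-step shape works for lists and Maybe unchanged. (The particular values below are invented for illustration.)

```haskell
-- The "statement" shape, written for lists:
pipelineList :: [Int]
pipelineList = do
  x <- [1, 2]       -- "foreach x in m()"
  y <- [x, x * 10]  -- "foreach y in f(x)"
  [y + 1]           -- "extend with g(y)"

-- The identical shape, written for Maybe (Optional):
pipelineMaybe :: Maybe Int
pipelineMaybe = do
  x <- Just 1        -- "supposing m() produces some x"
  y <- Just (x + 1)  -- "supposing f(x) produces some y"
  Just (y * 2)       -- "the result of g(y)"

main :: IO ()
main = do
  print pipelineList   -- [2,11,3,21]
  print pipelineMaybe  -- Just 4
```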
This is the way: the laws are the way they are precisely so that stuff works as naively expected.
I quite liked this series on F# For Fun and Profit, which tries to explain monads in a somewhat story-driven way that might appeal more to the reader than a purely mathematical viewpoint.
I saw this come up on my Mastodon feed today. I haven’t watched it yet because I’m observing a strict no-media policy to save precious data, but it was shared by someone who I know would not waste our time:
I’ll be preparing a transcript later using youtube-transcript-api, and perhaps I’ll come back here with a link to a gist or something.
EDIT: Transcript: YOW! 2013 Philip Wadler - The First Monad Tutorial #YOW (github.com)
Wow, thank you very much!
Phil Wadler is the one who brought monads into the collective functional programming consciousness.
By the way:
The question “please explain monads” comes up a lot, but I do not have much insight into what sort of explanations actually work. I’d be interested in learning about this. Pray tell.
I mentioned above,
“I haven’t learned monads yet, but I know I use them all the time.”
It’s worth expanding on this because it’s somewhat ironically what has gotten me to the point of almost understanding them.
I was at a Clojure conference and someone gave a “lightning talk” about how we are already using monads and never even knew it.
It helped that this was in person; that, combined with the unique angle of relating monads to what we already use, made it very unthreatening. A lot of it seems to be getting over a mental block where the complexity of a concept gets in the way of our own learning. We have a reflexive aversion to taking in something that we’re unable to connect to our current knowledge, which seems to make our brains immediately “tune out”. That’s my experience anyway… I hear “something something monoids… something something functors…”, and the silly voices in my head start singing “La la la la! Meow meow meow!”
I talked to him afterwards and told him how helpful it was for me, and he ended up going home and refining the talk and recording a YouTube video!
One of the reasons I love Clojure is that Rich Hickey made a conscious decision to “protect” us from advanced concepts of functional programming that are indeed an essential part of the paradigm, but not necessary to understand in detail in order to use them. So they are never mentioned anywhere, and kind of remain as “implementation details”. Continuations are another one. I don’t even know what they are, because he decided for me that I don’t need to know! Some people have even asked, “Why doesn’t Clojure have continuations?” And he responded by effectively saying, “Of course Clojure has continuations, you can’t have a Lisp without them! But the user doesn’t need to know about them.”
Here is the video:
Here’s another good one. While these seem to be Clojure-specific, I don’t think you actually need to know Clojure to get the idea. Both speakers made a specific effort to make their talks accessible to a general audience.
This is a brilliant explanation of how a monadic parsing library works.
Teaser: it was written by a child. Mark Engelberg is a software veteran who was a contributor to the mp3 specification, a pioneer of the early VR industry, and an engineer for Sierra Online (the 90s point-and-click adventure games). This library was the result of him teaching monads to his son Alex.