You can think of a monad as a design pattern with a clear contract. Basically, to have a monad, you must implement bind (also known as flatMap) on top of the operations from Functor and Applicative, make sure the laws hold, and that's it.
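
For concreteness, here is a minimal sketch of that contract in Haskell, using a hand-rolled Maybe-like type (Opt, None, and Some are illustrative names, not from any library):

```haskell
data Opt a = None | Some a

instance Functor Opt where
  fmap _ None     = None
  fmap f (Some x) = Some (f x)

instance Applicative Opt where
  pure = Some
  None   <*> _ = None
  Some f <*> x = fmap f x

instance Monad Opt where
  None   >>= _ = None     -- bind short-circuits on None
  Some x >>= f = f x      -- bind (a.k.a. flatMap) feeds the value onward
```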

If you want, you can just ignore category theory, although it provides a nice framework for thinking about problems in abstract terms.

P.S.: One interesting fact I recall that might clarify the mathematical inclination of functional programming is related to the origins of programming itself. The formalisms behind programming languages were conceived before computers existed, in order to settle the Entscheidungsproblem (decision problem) posed by Hilbert's program. Three such formalisms were devised at the time: general recursive functions by Gödel, Turing machines by Turing, and the lambda calculus by Church. It’s therefore only natural to use mathematics and logic for abstraction, and I think category theory is a nice fit, even for imperative programming.

If you’ve used Rust’s Option and Result types, they can be understood in categorical terms: both are coproducts (sum types) and monads. But it doesn’t matter if you understand the math behind them; you can use these types without ever worrying about that aspect. You can just view them as design patterns. The benefit is that, because they were developed with mathematical rigor, they are all well-defined and obey specific laws. In contrast, while many object-oriented patterns are useful, not all of them are well-defined or bound by rigid laws.
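
To make the analogy concrete in Haskell terms (a hedged sketch; Maybe and Either are the Haskell analogues of Option and Result, and safeDiv is an illustrative name):

```haskell
-- Either's Monad instance short-circuits on Left, much like
-- Rust's ? operator does with Result.
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv x y = Right (x `div` y)

calc :: Either String Int
calc = do
  a <- safeDiv 10 2   -- Right 5
  b <- safeDiv a 0    -- Left "division by zero"; the rest is skipped
  pure (a + b)
```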

***

Thanks for your honest and objective remarks. I can relate to your remarks on the lack of materials on tooling. However, I don't agree with your view that FP is trying to impose a mathematical view on top of a "fundamentally imperative" execution model. Every abstraction layer in computing is just that, an abstraction layer, and there is nothing more "fundamental" about the imperative model. After all, the hardware doesn't start at the assembly level: even something like the program counter must be implemented using logic gates. Modern CPUs actually spend a lot of effort parallelising the execution of machine instructions while preserving the illusion of sequentiality.

***

Thanks for taking the time to comment!

Just because CPU instructions are parallelized or speculatively executed doesn't mean they're no longer imperative. Also, every imperative language has ways of executing code in parallel (e.g. using threads or coroutines).

What makes the CPU instructions imperative is the fact that they're commands that tell the CPU to do something (per the definition of "imperative programming").

I've programmed microcontrollers, DSPs and more general CPUs in the lowest level languages possible for those devices. The instruction sets for all of them are imperative. None of them are inherently functional, declarative, etc.

Every higher level language needs to ultimately be translated into a specific CPU's (imperative) instruction set so that it can actually be executed.

***

Yes, CPUs are imperative, but in many cases, higher-level code can make it easier for the compiler to translate it into optimized machine code, precisely because the programmer makes fewer assumptions about how it will be executed. A common first step in most imperative-language compilers is to make the code more pure by translating it into Static Single Assignment (SSA) form. What the compiler really wants to know is the flow of data, not a sequence of instructions that it first has to translate into that data-flow graph.
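
As a toy illustration (my own, not a compiler's actual output): an imperative sequence of reassignments, once in SSA form, is exactly a chain of single-use bindings, which is what pure let-expressions already are:

```haskell
-- The imperative sequence   x = 1; x = x + 2; x = x * 3
-- becomes, after SSA renaming, three single assignments:
ssaStyle :: Int
ssaStyle =
  let x0 = 1
      x1 = x0 + 2
      x2 = x1 * 3
  in x2   -- 9
```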

I’m not saying that Haskell is the sweet spot for making optimization easy, far from it, just that moving away from what machines do isn’t necessarily a bad thing for compilers. Something like Single-Assignment C is more likely to be the sweet spot (and it can also be compiled to GPUs and FPGAs). Ignoring GC, GHC can often compile pure code into something very similar to the corresponding optimized C code, though it can also do far worse in cases where the code is harder for the compiler to untangle.

***

It's not always necessary to translate a high-level language into an imperative instruction set. Have a look at the Clash hardware description language (https://clash-lang.org/) for programming FPGAs.

***

Firstly, I'm not saying it's not possible to interact with electronic hardware in non-imperative ways. What I am saying is that modern general-purpose CPUs' instruction sets are imperative.

Secondly, FPGAs are not CPUs, just as quantum computers are nothing like general-purpose CPUs. They're also programmed by only a tiny minority of engineers, compared with the general-purpose CPUs in today's computers, whose programmers are the intended audience of my post.

***

I agree with Thane. At some point, you must translate declarative, functional code into a set of imperative instructions. However, I would argue that functional code is more akin to how humans think, because mathematics was designed for us to understand, whereas imperative code is more like how machines operate. While both are abstractions, the former can be considered a more natural way of thinking for humans.

In my opinion, most people don’t like the functional approach simply because they are accustomed to imperative programming; it’s more a matter of culture than anything else. In the end, that is what matters subjectively, and for me, the criticisms he makes about Haskell are fair, but they are more related to what he is used to than to the language itself.

***

Regarding the Set/Map lookup, that’s a compromise for the sake of purity/immutability. If you want O(1) lookup, you’ll either need a mutable data structure or you’ll need to copy the entire structure for each change (O(n)). Both options are available in Haskell (in the hashtables package), but then you can only change it without a full copy in the IO or ST monad.
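
A minimal sketch of the second option, assuming the hashtables package's Data.HashTable.ST.Basic API: the mutation is confined to ST, and runST seals it off so the overall result stays pure.

```haskell
import Control.Monad.ST (runST)
import qualified Data.HashTable.ST.Basic as H

-- Build a mutable hash table, mutate it, and extract a pure result.
lookupAnswer :: Maybe Int
lookupAnswer = runST $ do
  table <- H.new
  H.insert table "answer" (42 :: Int)
  H.lookup table "answer"
```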

***

Monads are generic types that have a lawful (contract-obeying) instance of the Monad typeclass, which is essentially an interface.

It's just really, really abstract. The three essential methods are fmap, pure, and join (>>= is fmap followed by join), and the laws must be observed; that is, fmap, pure, and join are defined by their types and their lawfulness, not by any specific implementation.
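
A sketch of that decomposition (bind here is an illustrative name; the real >>= is the typeclass method):

```haskell
import Control.Monad (join)

-- >>= is fmap followed by join: map the function over the structure,
-- then flatten the resulting m (m b) into m b.
bind :: Monad m => m a -> (a -> m b) -> m b
bind m f = join (fmap f m)
```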

As an example of how abstract it is, Proxy is a monad, even though Proxy a is a type with only one value.

Haskellers make a big deal out of monads because they can be used to model and control side effects, while also allowing for code that is generic over monads (mtl monad classes, effect systems); and the basic, universal IO type supports the monad interface.

You can think of it as "funky imperative programming", thanks to do-notation desugaring: Reader gives me a read-only environment, Writer write-only output, State read-write state, etc. The list monad executes the rest of the block once per item in the list, and so on.
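
For example, the same do-notation reads imperatively, but the monad decides what "executing" means; in the list monad, each statement runs once per element:

```haskell
-- pairs == [(1,'a'),(1,'b'),(2,'a'),(2,'b')]
pairs :: [(Int, Char)]
pairs = do
  n <- [1, 2]
  c <- "ab"        -- a String is a list of Char
  pure (n, c)
```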

Effect-system libraries allow me to limit a monadic block's access to IO; for example, I can have code that is allowed to write to the console but not to make a network call.
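
The mtl-style version of that idea, as a hedged sketch (MonadConsole and putLine are hypothetical names, not from a particular library):

```haskell
-- greet can print, because its constraint says so, but its type
-- rules out arbitrary IO such as network calls.
class Monad m => MonadConsole m where
  putLine :: String -> m ()

instance MonadConsole IO where
  putLine = putStrLn

greet :: MonadConsole m => String -> m ()
greet name = putLine ("hello, " ++ name)
```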

The problem with monads is that they impose a performance penalty and they're often verbose, with classy generic monads having huge type signatures.

For smaller apps, you can just stick to basic IO and call it a day. For larger apps, classy monads improve maintainability more than they clunk up the code.

***

As for Haskell as a problem-solving language: use let expressions and where clauses a lot, with the foo = error "todo" idiom for the parts you can't yet figure out how to express.
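
A small illustration of the idiom (my own toy example): name the pieces in a where clause and stub what you can't express yet.

```haskell
-- The program type-checks and runs until it hits the stub.
averageScore :: [Int] -> Double
averageScore scores = total / count
  where
    total = fromIntegral (sum scores)
    count = error "todo: handle the empty list"
```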

The language is asking you, basically, to define your problem, and the functions are the ones that actually solve it.

When working out problems, write the problem out in your native language, and then repeatedly define the terms and subterms you use in your definition.

Then use equational reasoning to build a more performant and readable solution.
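
For instance (a toy example, nothing beyond the Prelude), the law map f (map g xs) = map (f . g) xs lets you fuse two list traversals into one:

```haskell
-- Two passes over the list...
slow :: [Int] -> [Int]
slow = map (+ 1) . map (* 2)

-- ...rewritten by equational reasoning into a single pass.
fast :: [Int] -> [Int]
fast = map ((+ 1) . (* 2))
```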

***

Maps and Sets have O(log n) complexity because they’re implemented as balanced trees. If you want faster hash-based lookup, you should use HashMap/HashSet from unordered-containers.
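
A quick sketch, assuming the unordered-containers package (capitals and lookupCapital are illustrative names):

```haskell
import qualified Data.HashMap.Strict as HM

-- Hash-based, yet still an immutable, persistent structure.
capitals :: HM.HashMap String String
capitals = HM.fromList [("France", "Paris"), ("Japan", "Tokyo")]

lookupCapital :: String -> Maybe String
lookupCapital country = HM.lookup country capitals
```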

And you really, really don’t need category theory to use monads. I barely know what a monad in category theory is, but I use Haskell daily. No problem.

I think what would help more is a basic understanding of the lambda calculus, which is the foundation of functional programming. The lambda calculus and the Turing machine are computationally equivalent.

After realising that both models of computation can express the same ideas, it became clear to me that imperative programming is not "the right way" but just one way we choose to do things. We could express the same concepts with beta reduction and rewriting rules, as the lambda calculus does. And in fact, GHC does a lot of exactly that for optimisation.
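
A toy reduction trace makes the point: evaluation is just rewriting. (Plain Haskell syntax, but the steps are ordinary beta reduction.)

```haskell
--   (\f -> f (f 1)) (\x -> x * 2)
-- = (\x -> x * 2) ((\x -> x * 2) 1)   -- beta-reduce the outer lambda
-- = (\x -> x * 2) 2                   -- beta-reduce the inner application
-- = 4                                 -- beta-reduce once more
twiceDoubled :: Int
twiceDoubled = (\f -> f (f 1)) (\x -> x * 2)
```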

But I can understand why people might not like to program in Haskell. It’s very different from mainstream imperative programming, probably as far to the opposite end as possible. Maybe something less different would be the sweet spot for you? OCaml or Scala?

***

I cannot help but notice that you do not mention the type system of Haskell in any detail. You say nothing about polymorphism. In my opinion, you are missing out on something absolutely essential. Also, it is the type system that is key to understanding monads, functors and applicatives in Haskell – not category theory.

Also, I wonder why you claim to like mathematics but are still of the opinion that referential transparency is strange. It is precisely because side effects exist in imperative programming languages that reasoning about the behaviour of imperative programs becomes so difficult. Contrast this with the way we reason in "ordinary" mathematics.
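
A two-line example of the kind of reasoning referential transparency enables (mine, not from the post): because f is pure, you may substitute equals for equals, exactly as in ordinary mathematics.

```haskell
f :: Int -> Int
f x = x * x

viaSharing, viaSubstitution :: Int
viaSharing      = let y = f 3 in y + y
viaSubstitution = f 3 + f 3   -- always equal to viaSharing
```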

Haskell is not "a clumsy version of C, the real thing" just as Spanish is not a clumsy version of English.

***

I did actually mention the type system a few times in the post.
