Why Functional Programming? The Benefits of Referential Transparency

Having covered what functional programming is, I wanted to spend a minute or two discussing why I want to learn functional programming in the first place. I’m sure we have all heard vague things about “side-effects”, “immutability”, and “composition”, but I wanted to dive a bit deeper on the topic to describe what — to me — is important about functional programming.


Referential Transparency

The key differentiating feature of (pure) functional programs is that they provide referential transparency. An expression is said to be referentially transparent if it can be replaced with its corresponding value without changing the program’s behaviour.

An interesting consequence of referential transparency is that it makes your code context-independent: expressions can be evaluated in any order, in any context, and will always produce the same result. By declaring what the goal is without detailing the steps to reach it, a pure function can easily be reused in different contexts. A pure function is also easier to optimize, because you describe what you want without prescribing how to compute it. The compiler or runtime is free to choose how to evaluate or rearrange expressions, including evaluating some of them in parallel where possible.

Another consequence of referential transparency is that it rules out side-effects in your code. Referential transparency requires that a function be free of anything that modifies the behaviour of the program outside of that function. If you substitute a function call with its return value, any side-effects of executing the function are lost, so the substitution no longer preserves the program’s behaviour. This is why functions with side-effects break referential transparency.
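To make this concrete, here is a small sketch of my own (written in Scala 3 purely for illustration; the names are made up) of a pure function that can be substituted with its value, and an impure one that cannot:

```scala
// A pure function: referentially transparent.
def add(a: Int, b: Int): Int = a + b

// Every occurrence of add(2, 3) can be replaced with 5
// without changing the program's behaviour.
val total = add(2, 3) + add(2, 3) // identical to 5 + 5

// An impure function: it mutates state outside of itself.
var callCount = 0
def addAndCount(a: Int, b: Int): Int = {
  callCount += 1 // side-effect, observable from outside the function
  a + b
}

// Replacing addAndCount(2, 3) with 5 is NOT behaviour-preserving:
// callCount would no longer be incremented.
val total2 = addAndCount(2, 3) + addAndCount(2, 3)
```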

Not having side-effects also implies immutability, because any piece of code that mutates variables, objects, or data structures outside of a function is not referentially transparent: the call cannot be cleanly substituted with the function’s value while keeping the program’s behaviour the same.

Why Functional Programming Matters

Referential transparency (or some variation of it) is often cited as the primary advantage of functional programming. Yet, according to John Hughes in Why Functional Programming Matters:

Such a catalogue of “advantages” is all very well, but one must not be surprised if outsiders don’t take it too seriously. It says a lot about what functional programming isn’t (it has no assignment, no side effects, no flow of control) but not much about what it is. The functional programmer sounds rather like a medieval monk, denying himself the pleasures of life in the hope that it will make him virtuous. To those more interested in material benefits, these “advantages” are totally unconvincing.

At the time of this paper’s publication, the proponents of structured programming had taken their views to the somewhat logical extreme of object-oriented programming as a means of coping with the increased complexity inherent in larger programs. This led to the development of languages like Smalltalk and Eiffel, which emphasized purely object-oriented structures as a way to focus on software quality. The success of these languages led to the introduction of object-oriented concepts into existing languages like C (in the form of C++ and Objective-C), and the creation of new languages like Java (started in 1991). Since that time, Java has gone on to become one of the most widely used languages in the world, with an estimated 9 million developers.

In the midst of the fever surrounding object-oriented and structured programming, Hughes’ paper argues why functional programming principles still matter, and why we shouldn’t be so quick to ignore the advantages functional programming offers. In his analysis, Hughes first tries to go beyond the typical list of arguments for functional programming that reference immutability and side-effects, to get to the core property that makes functional programming important: modularity.

Our ability to decompose a problem into parts depends directly on our ability to glue solutions together. … Functional programs provide two new kinds of glue — higher-order functions and lazy evaluation.

Support for higher-order functions means that functions can be passed as arguments to, and returned from, other functions. This allows you to compose your program as a series of small functions that can be glued together to produce new results. The typical examples of this are map and reduce, which let you apply a function across a collection of values.
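As a rough illustration (a Scala sketch of my own; compose, double, and doubleThenIncrement are names I made up for this post), here is what gluing functions together looks like:

```scala
// A higher-order function: compose takes two functions and returns
// a new function that applies g first and then f.
def compose[A, B, C](f: B => C, g: A => B): A => C =
  a => f(g(a))

val double: Int => Int    = _ * 2
val increment: Int => Int = _ + 1

// Glue the two functions together into a new one.
val doubleThenIncrement: Int => Int = compose(increment, double)

// map applies a function to every element of a collection;
// reduce glues the intermediate results together with a binary function.
val numbers = List(1, 2, 3, 4)
val doubled = numbers.map(doubleThenIncrement) // List(3, 5, 7, 9)
val sum     = doubled.reduce(_ + _)            // 24
```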

Support for lazy evaluation, which delays the evaluation of an expression until its value is needed, makes it possible to glue whole programs together. Lazy evaluation,

makes it practical to modularize a program as a generator that constructs a large number of possible answers, and a selector that chooses the appropriate one. While some other systems allow programs to be run together in this manner, only functional languages (and not even all of them) use lazy evaluation uniformly for every function call, allowing any part of a program to be modularized in this way. Lazy evaluation is perhaps the most powerful tool for modularization in the functional programmer’s repertoire.
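Hughes illustrates this generator/selector idea with a Newton-Raphson square-root example in the paper. The following is my own loose sketch of the same idea in Scala (2.13+ or 3), using LazyList for the lazy stream; the names approximations and within are hypothetical:

```scala
// Generator: an infinite, lazily evaluated stream of ever-better
// approximations to sqrt(n) using Newton's method. No element is
// computed until something downstream demands it.
def approximations(n: Double): LazyList[Double] =
  LazyList.iterate(n / 2.0)(x => (x + n / x) / 2.0)

// Selector: walk the stream until two successive approximations
// are within eps of each other, then stop. Because the stream is
// lazy, only as many approximations as needed are ever generated.
def within(eps: Double, xs: LazyList[Double]): Double = {
  val a = xs.head
  val b = xs.tail.head
  if (math.abs(a - b) <= eps) b else within(eps, xs.tail)
}

val root = within(1e-9, approximations(2.0)) // ~1.4142135623730951
```

The generator knows nothing about when to stop, and the selector knows nothing about square roots; lazy evaluation is the glue that lets the two be written, and reused, independently.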

Together, higher-order functions and lazy evaluation provide strategies for modularizing your program that are impossible without functional programming. And this strength, according to Hughes, is why functional programming matters.

Simple Made Easy

So, functional programming offers referential transparency, and referential transparency enables powerful modularization techniques like higher-order functions and lazy evaluation. We intuitively know that more modular programs are somehow better, but why exactly? The key to answering this question is to understand simplicity.

Rich Hickey, the author of Clojure, sums this up nicely in his talk Simple Made Easy. In this talk, he contrasts the notion of “Easy” with the notion of “Simple”: “Easy” means approachable, available, and ready to use, while “Simple” means uncomplicated and free from elaboration. In software, “Easy” implies ready-to-use libraries that do a lot of work under the hood, without necessarily making you aware of it. These libraries can be very complex, with a web of dependencies and frameworks, but they make it “Easy” to build a basic application. Ruby on Rails is a great example of “Easy” software that is very complex. “Simple”, on the other hand, implies uncomplicated pieces that are easy to understand, easy to change, easy to debug, and that can be flexibly combined.

Hickey argues that functional programming in a modular style allows you to build simple software that is also easy. Modular programming enables you to partition the elements of your design horizontally (partitioning), and vertically (stratification).

Partitioning and stratification don’t imply simplicity, but are enabled by it. If you make simple components, you can horizontally separate them and you can vertically stratify them.

Referential transparency (and therefore immutability) makes things simple too.

Having state in your program is never simple. State is easy, but introduces complexity because it intertwines value and time. State intertwines everything it touches directly or indirectly, and it is not mitigated by modules and encapsulation.

What matters in software is: does the software do what it is supposed to do? Is it of high quality? Can we rely on it? Can problems be fixed along the way? Can requirements change over time? The answers to these questions are what matters when writing software, not the look and feel of the experience of writing the code, or the cultural implications of it. It just turns out that referential transparency, and the new ways it lets you modularize your programs, helps us answer these questions by keeping things as simple as possible. And this is why we do functional programming.
