Yet Another Haskell Tutorial/Recursion

Recursion and Induction

Informally, a function is recursive if its definition depends on itself. The prototypical example is factorial, whose definition is:

\mathrm{fact}(n) = \begin{cases} 1 & \text{if } n = 0 \\ n \times \mathrm{fact}(n - 1) & \text{if } n > 0 \end{cases}

Here, we can see that in order to calculate fact(n), we need to calculate fact(n − 1), but in order to calculate fact(n − 1), we need to calculate fact(n − 2), and so on.

Recursive function definitions always contain a number of non-recursive base cases and a number of recursive cases. In the case of factorial, we have one of each. The base case is when n = 0 and the recursive case is when n > 0.
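As a minimal Haskell rendering of this definition (the name factorial is chosen here just for illustration), the two equations line up directly with the two cases above:

-- A direct transcription of the mathematical definition of factorial.
factorial :: Integer -> Integer
factorial 0 = 1                      -- base case: n = 0
factorial n = n * factorial (n - 1)  -- recursive case: n > 0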

One can actually think of the natural numbers themselves as recursive (in fact, if you ask set theorists about this, they'll say this is how it is). That is, there is a zero element, and every element has a successor: 1 = succ(0), 2 = succ(1), ..., 573 = succ(572), and so on forever. We can actually implement this system of natural numbers in Haskell:

data Nat = Zero | Succ Nat

This is a recursive type definition. Here, we represent one as Succ Zero and three as Succ (Succ (Succ Zero)). One thing we might want to do is be able to convert back and forth between Nats and Ints. Clearly, we can write a base case as:

natToInt Zero = 0

In order to write the recursive case, we realize that we're going to have something of the form Succ n. We can make the assumption that we'll be able to take n and produce an Int. Assuming we can do this, all we need to do is add one to this result. This gives rise to our recursive case:

natToInt (Succ n) = natToInt n + 1
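Collecting the pieces, here is a small self-contained sketch covering both conversion directions; intToNat is a name assumed here for the reverse conversion mentioned above, not something the tutorial defines:

data Nat = Zero | Succ Nat

-- Count the Succ constructors wrapped around Zero.
natToInt :: Nat -> Int
natToInt Zero     = 0
natToInt (Succ n) = natToInt n + 1

-- The reverse direction: wrap Zero in n applications of Succ.
-- (Only sensible for non-negative arguments.)
intToNat :: Int -> Nat
intToNat 0 = Zero
intToNat n = Succ (intToNat (n - 1))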

There is a close connection between recursion and mathematical induction. Induction is a proof technique which typically breaks problems down into base cases and "inductive" cases, very analogous to our analysis of recursion.

Let's say we want to prove the statement n! ≥ 1 for all n ≥ 0. First we formulate a base case: namely, we wish to prove the statement when n = 0. When n = 0, n! = 1 by definition. Since 1 ≥ 1, we get that 0! ≥ 1 as desired.

Now, suppose that n > 0. Then n = k + 1 for some value k ≥ 0. We now invoke the inductive hypothesis and claim that the statement holds for n = k. That is, we assume that k! ≥ 1. Now, we use k to formulate the statement for our value of n. That is, we wish to show that (k + 1)! ≥ 1. We now apply the definition of factorial and get (k + 1)! = (k + 1) × k!. We know k! ≥ 1 by the inductive hypothesis, and k + 1 ≥ 1 since k ≥ 0, so (k + 1) × k! ≥ 1. That is, (k + 1)! ≥ 1, which is exactly the statement n! ≥ 1. Thus it is proven.
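Written as a single chain, the inductive step is just:

(k+1)! \;=\; (k+1) \times k! \;\ge\; (k+1) \times 1 \;=\; k+1 \;\ge\; 1

using k! ≥ 1 (the inductive hypothesis) and k ≥ 0.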

It may seem a bit counter-intuitive that we are assuming that the claim is true for k in our proof that it is true for n. You can think of it like this: we've proved the statement for the case when n = 0. Now, we know it's true for n = 0, so we use our inductive argument to show that it's true for n = 1. Now, we know that it is true for n = 1, so we reuse our inductive argument to show that it's true for n = 2. We can continue this argument as long as we want and then see that it's true for all n.

It's much like pushing down dominoes. You know that when you push down the first domino, it's going to knock over the second one. This, in turn, will knock over the third, and so on. The base case is like pushing down the first domino, and the inductive case is like showing that pushing down domino k will cause the (k + 1)st domino to fall.

In fact, we can use induction to prove that our natToInt function does the right thing. First we prove the base case: does natToInt Zero evaluate to 0? Yes, obviously it does. Now, we can assume that natToInt n evaluates to the correct value (this is the inductive hypothesis) and ask whether natToInt (Succ n) produces the correct value. It does: by the definition it evaluates to natToInt n + 1, and since natToInt n is correct by the inductive hypothesis, adding one to it gives exactly the integer corresponding to Succ n.
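To see the induction reflected in evaluation, here is how natToInt unfolds on the Nat representing three (a hand-written reduction, not interpreter output):

  natToInt (Succ (Succ (Succ Zero)))
= natToInt (Succ (Succ Zero)) + 1
= (natToInt (Succ Zero) + 1) + 1
= ((natToInt Zero + 1) + 1) + 1
= ((0 + 1) + 1) + 1
= 3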

Let's consider a more complex example: addition of Nats. We can write this concisely as:

addNat Zero m = m
addNat (Succ n) m = addNat n (Succ m)

Now, let's prove that this does the correct thing. First, as the base case, suppose the first argument is Zero. We know that 0 + m = m regardless of what m is; thus in the base case the algorithm does the correct thing. Now, suppose that addNat n m does the correct thing for all m and we want to show that addNat (Succ n) m does the correct thing. We know that (n + 1) + m = n + (m + 1), and thus since addNat n (Succ m) does the correct thing (by the inductive hypothesis), our program is correct.
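Concretely, computing 2 + 1 in the Nat representation unfolds as shown in the comments below (again a hand-written reduction), and a one-line sanity check against ordinary addition can be written using the natToInt and intToNat sketches from above; checkAddNat is an illustrative name, not part of the tutorial:

-- addNat (Succ (Succ Zero)) (Succ Zero)       -- 2 + 1
-- = addNat (Succ Zero) (Succ (Succ Zero))     -- 1 + 2
-- = addNat Zero (Succ (Succ (Succ Zero)))     -- 0 + 3
-- = Succ (Succ (Succ Zero))                   -- 3

-- Assumes Nat, addNat, natToInt and intToNat from the sketches above are in scope.
checkAddNat :: Bool
checkAddNat = natToInt (addNat (intToNat 2) (intToNat 1)) == 3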