Haskell/Algorithm complexity
Complexity Theory is the study of how long a program will take to run, depending on the size of its input. There are many good introductory books on complexity theory, and the basics are explained in any good algorithms book; I'll keep the discussion here to a minimum.
The idea is to say how well a program scales with more data. If you have a program that runs quickly on very small amounts of data but chokes on huge amounts of data, it's not very useful (unless you know you'll only be working with small amounts of data, of course). Consider the following Haskell function to return the sum of the elements in a list:
sum [] = 0
sum (x:xs) = x + sum xs
How long does it take this function to complete? That's a very difficult question; it would depend on all sorts of things: your processor speed, your amount of memory, the exact way in which the addition is carried out, the length of the list, how many other programs are running on your computer, and so on. This is far too much to deal with, so we need to invent a simpler model. The model we use is sort of an arbitrary "machine step." So the question is "how many machine steps will it take for this program to complete?" In this case, it only depends on the length of the input list.
If the input list is of length 0, the function will take either 1 or 2 or 3 or some very small number of machine steps, depending exactly on how you count them (perhaps 1 step to do the pattern matching and 1 more to return the value 0). What if the list is of length 1? Well, it would take however much time the list of length 0 would take, plus a few more steps for doing the first (and only) element.
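To see where these steps come from, here is a rough hand evaluation of sum on a three-element list (an informal sketch only; it ignores the details of lazy evaluation and exactly how reductions are counted):

sum [1, 2, 3]
  = 1 + sum [2, 3]
  = 1 + (2 + sum [3])
  = 1 + (2 + (3 + sum []))
  = 1 + (2 + (3 + 0))
  = 6

Each element of the list contributes one pattern match and one addition, plus a fixed amount of work at the end for the empty list.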
If the input list is of length n, it will take however many steps an empty list would take (call this value b) and then, for each element, it would take a certain number of steps to do the addition and the recursive call (call this number c). Then, the total time this function will take is c*n + b, since it needs to do those additions n many times. These c and b values are called constant values, since they are independent of n and dependent only on exactly how we define a machine step, so we really don't want to consider them all that important. Therefore, we say that the complexity of this sum function is O(n) (read "order n"). Basically, saying something is O(n) means that for some constant factors x and y, the function takes x*n + y machine steps to complete.
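One way to make this concrete is to instrument the function so that it reports its own step count. The sumSteps helper below is purely illustrative and not part of any standard library; it charges one step per element and one step for the empty-list case, so the count it returns is exactly c*n + b with c = 1 and b = 1:

-- Hypothetical instrumented sum: returns the result together with a rough
-- step count (one step per element, one step for the empty-list base case).
sumSteps :: Num a => [a] -> (a, Int)
sumSteps []     = (0, 1)
sumSteps (x:xs) = (x + s, 1 + steps)
  where (s, steps) = sumSteps xs

For example, sumSteps [1,2,3] evaluates to (6, 4): three steps for the three elements plus one for the base case.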
Consider the following sorting algorithm for lists (commonly called "insertion sort"):
sort [] = []
sort [x] = [x]
sort (x:xs) = insert (sort xs)
  where insert [] = [x]
        insert (y:ys)
          | x <= y    = x : y : ys
          | otherwise = y : insert ys
The way this algorithm works is as follows: if we want to sort an empty list or a list of just one element, we return them as they are, as they are already sorted. Otherwise, we have a list of the form x:xs. In this case, we sort xs and then want to insert x in the appropriate location. That's what the insert function does: it traverses the now-sorted tail and inserts x wherever it naturally fits.
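For instance, sorting the list [3,1,2] unfolds roughly like this (an informal trace; the recursive call sort [1,2] is computed by the same process and yields [1,2]):

sort [3,1,2]
  = insert (sort [1,2])   -- this insert puts x = 3 into place
  = insert [1,2]
  = 1 : insert [2]        -- 3 <= 1 fails, so keep 1 and recurse
  = 1 : 2 : insert []     -- 3 <= 2 fails, so keep 2 and recurse
  = 1 : 2 : [3]           -- insert [] = [x] = [3]
  = [1,2,3]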
Let's analyze how long this function takes to complete. Suppose it takes f(n) steps to sort a list of length n. Then, in order to sort a list of n-many elements, we first have to sort the tail of the list, which takes f(n-1) time. Then, we have to insert x into this new list. If x has to go at the end, this will take O(n) steps. Putting all of this together, we see that we have to do O(n) amount of work O(n) many times, which means that the entire complexity of this sorting algorithm is O(n^2). Here, the n^2 is not a constant value, so we cannot throw it out.
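The same argument can be written as a recurrence. As a sketch, write f(n) for the number of steps needed to sort a list of length n, and assume inserting into a list of length n costs at most about c*n steps while the base cases cost b steps (the exact constants depend on how machine steps are counted):

\[
f(0) = b, \qquad f(n) \le f(n-1) + c \cdot n
\]

Unrolling the recurrence gives

\[
f(n) \le b + c \, (1 + 2 + \cdots + n) = b + c \, \frac{n(n+1)}{2} \in O(n^2).
\]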
What does this mean? Simply that for really long lists, the sum function won't take very long, but the sort function will take quite some time. Of course there are algorithms that run much more slowly than O(n^2) and there are ones that run more quickly than O(n). (Also note that an O(n^2) algorithm may actually be much faster than an O(n) algorithm in practice, if a single step of the O(n^2) algorithm takes much less time to perform.)
Consider the random access functions for lists and arrays. In the worst case, accessing an arbitrary element in a list of length n will take O(n) time (think about accessing the last element). However, with arrays you can access any element immediately, which is said to be in constant time, or O(1), which is basically as fast as any algorithm can go.
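A small sketch of the difference, using the Data.Array module from the array package that ships with GHC (the point is only the shape of the cost, not the exact timings on your machine):

import Data.Array (listArray, (!))

main :: IO ()
main = do
  let n   = 1000000 :: Int
      xs  = [0 .. n - 1]               -- a plain linked list
      arr = listArray (0, n - 1) xs    -- an immutable array with the same elements
  -- (!!) has to walk the list one cons cell at a time: O(n) to reach the last element.
  print (xs !! (n - 1))
  -- (!) on an array jumps straight to the right slot: O(1), independent of n.
  print (arr ! (n - 1))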
There's much more to complexity theory than this, but this should be enough to allow you to understand all the discussions in this tutorial. Just keep in mind that O(1) is faster than O(n), which is faster than O(n^2), etc.
Optimising
This section is a stub. You can help Haskell by expanding it.