# Introduction to Programming Languages/Tail Call Cost Model

We start this section by recalling the implementation of `reverse`, which we discussed previously:

```
reverse_orig([],[]).
reverse_orig([Head|Tail],Rev) :- reverse_orig(Tail,TailRev), append(TailRev,[Head],Rev).
```

This implementation is quadratic in the number of elements of the list that we want to invert. Below, we show a different implementation, which is linear in the number of elements of that list:

```
reverse_opt(X,Y) :- rev(X,[],Y).
rev([],Sofar,Sofar).
rev([Head|Tail],Sofar,Rev) :- rev(Tail,[Head|Sofar],Rev).
```

This predicate uses an auxiliary predicate, `rev`, which has three arguments.
The second of these arguments works as an accumulator: we move into the construction of the recursive call's arguments the work that we previously performed upon its return.
In this particular example, that work consists of adding an element to the beginning of the current version of the inverted list.
It is easy to see that `rev` runs in time linear in the number of elements of its first list.
The predicate has two clauses.
The first, the base case, is O(1).
The second, the inductive case, performs one recursive call, and the cost of each call is O(1): it consists of adding an element to the beginning of a list.
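To make this cost difference concrete, the following Python sketch (Python is used here only for illustration; the function names are ours) mirrors both Prolog predicates and counts the elementary "cons" steps each one performs:

```python
def reverse_orig(lst):
    """Naive reverse: concatenate after the recursive call (quadratic).
    Returns the reversed list and the number of cons steps performed."""
    if not lst:
        return [], 0
    rev, steps = reverse_orig(lst[1:])
    # Appending [head] after the reversed tail walks the whole tail:
    # O(n) work per call, just like append/3 in the Prolog version.
    return rev + [lst[0]], steps + len(rev) + 1

def reverse_opt(lst):
    """Accumulator-based reverse: one cons per element (linear)."""
    def rev(rest, sofar, steps):
        if not rest:
            return sofar, steps
        return rev(rest[1:], [rest[0]] + sofar, steps + 1)
    return rev(lst, [], 0)

print(reverse_orig(list(range(100)))[1])  # 5050 steps: 100*101/2, quadratic
print(reverse_opt(list(range(100)))[1])   # 100 steps: linear
```

Both functions produce the same reversed list; only the amount of work differs, and the gap grows quadratically with the input length.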
Additionally, this predicate has another advantage, which we illustrate in GNU Prolog:

```
| ?- length(L, 10000), reverse_opt(L, X).
L = [A,B ... ]
(354 ms) yes
| ?- length(L, 10000), reverse_orig(L, X).
Fatal Error: global stack overflow (size: 32768 Kb, environment variable used: GLOBALSZ)
```

So, what happened? The first version of reverse does not even terminate for large lists, whereas the second not only terminates just fine, but also runs very fast. The secret in this case is an optimization called tail call elimination. If the last action of a function is a recursive call, then the activation record of that function can be reused to store the new data. In other words, no new activation record is created; the same memory space is reused. Effectively, this optimization transforms a recursive function into a C-style while loop. In principle, any recursive function can be rewritten in this way, by threading an accumulator through the calls. As an example, we show two functions, this time in SML, that are not tail recursive, together with the corresponding tail-recursive versions:
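The "recursion becomes a loop" claim can be seen directly in this Python sketch (our own illustration; note that CPython does not perform tail call elimination itself, so the loop form must be written by hand):

```python
def sum_rec(lst, sofar=0):
    """Tail-recursive sum: the recursive call is the last action,
    so nothing remains to be done when it returns."""
    if not lst:
        return sofar
    return sum_rec(lst[1:], sofar + lst[0])

def sum_loop(lst):
    """What tail call elimination effectively produces: a while loop
    that reuses the same activation record (here, the local variables)."""
    sofar = 0
    while lst:
        lst, sofar = lst[1:], sofar + lst[0]
    return sofar

print(sum_rec([1, 2, 3, 4]))   # 10
print(sum_loop([1, 2, 3, 4]))  # 10
```

The loop body performs exactly the same state update as the recursive call's argument list; that correspondence is what lets a compiler reuse the stack frame.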

```
fun sum [] = 0
  | sum (a::l) = sum l + a;

fun sum_opt thelist =
  let
    fun sum (nil, sofar) = sofar
      | sum (head::tail, sofar) = sum (tail, sofar + head)
  in
    sum (thelist, 0)
  end;

fun length nil = 0
  | length (head::tail) = 1 + length tail;

fun length_opt thelist =
  let
    fun len (nil, sofar) = sofar
      | len (head::tail, sofar) = len (tail, sofar + 1)
  in
    len (thelist, 0)
  end;
```

The trick to transform a non-tail-recursive function into a tail-recursive one is simple: create an auxiliary function with an accumulator parameter, and move into the construction of that accumulator the computation that would otherwise be performed after the recursive call returns. As an example, SML's `foldl` is implemented in a tail-recursive way. The application of the folding operator happens during the construction of the new reduction seed, as we can see below:

```
fun foldl _ c nil = c
| foldl f c (a::b) = foldl f (f(a,c)) b
```
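A Python analogue of this left fold (our own sketch, not SML's actual source) makes the same point: the operator is applied while the new seed is being built, so the recursive call sits in tail position, and functions like the sum and length above become one-line instances of the fold:

```python
def foldl(f, c, lst):
    """Left fold: apply f to the head and the seed while constructing
    the argument of the recursive call, which is the last action."""
    if not lst:
        return c
    return foldl(f, f(lst[0], c), lst[1:])

# The accumulator-based sum and length are instances of foldl:
print(foldl(lambda a, c: c + a, 0, [1, 2, 3]))  # 6  (sum)
print(foldl(lambda a, c: c + 1, 0, [1, 2, 3]))  # 3  (length)
```

Because every step folds the current head into the seed before recursing, a compiler that performs tail call elimination runs this fold in constant stack space.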