# Statistics Ground Zero/Degrees of freedom

## Degrees of freedom

This is a good point to introduce the idea of degrees of freedom (*df*). This notion causes some anxiety, but there is no reason for this in practice: good statistical software will compute the degrees of freedom for you.

Let us consider an example: to compute the variance I first sum the squared deviations from the *mean*. The mean is a *parameter*: it is a characteristic of the variable under examination as a whole and is part of describing the overall distribution of values. If you know all the parameters you can accurately describe the data. The more parameters you know, that is to say the more you fix, the fewer samples fit this model of the data. If you know only the mean, there will be many possible sets of data that are consistent with this model, but if you know the mean and the standard deviation, fewer possible sets of data fit this model.
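The computation described above can be sketched as follows. This is a minimal illustration with made-up numbers, not part of the original text; note that the sum of squared deviations is divided by *N-1*, the degrees of freedom, rather than *N*.

```python
# Hypothetical data for illustration only.
data = [4, 8, 6, 5, 3]
n = len(data)

# Step 1: fix the parameter (the mean).
mean = sum(data) / n

# Step 2: sum the squared deviations from the mean.
squared_devs = [(x - mean) ** 2 for x in data]

# Step 3: divide by the degrees of freedom, N - 1, to get the sample variance.
variance = sum(squared_devs) / (n - 1)
print(variance)
```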

So in computing the variance I had first to calculate the mean. Once the mean is calculated, I could vary any of the scores in the data *except for one*: if I leave one score unexamined, it can always be calculated exactly from the rest of the data and the mean itself. An example may make this clearer.

I take the ages of a class of students and find the mean. If I fix the mean, how many of the other scores (there are **N** of them, remember) could still vary? The answer is *N-1*. There are N-1 independent pieces of information that could vary while the mean is known. These are the degrees of freedom. One piece of information cannot vary because its value is fully determined by the parameter (in this case the mean) and the other scores. Each parameter that is fixed during our computations constitutes the loss of a degree of freedom.
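This determination of the last score can be demonstrated directly. In the sketch below (the ages are hypothetical), once the mean is fixed, knowing any N-1 of the scores pins down the remaining one:

```python
# Hypothetical ages for a small class.
ages = [19, 22, 20, 25, 24]
n = len(ages)

# Fix the parameter: the mean.
mean = sum(ages) / n

# Suppose we know only N-1 of the scores...
known = ages[:-1]

# ...then the last score cannot vary: it is fully determined
# by the mean and the other scores.
recovered = mean * n - sum(known)
print(recovered)  # equals ages[-1]
```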

If we imagine starting with a small number of data points and then fixing a relatively large number of parameters as we compute some statistic, we see that as more degrees of freedom are lost, fewer and fewer different situations are accounted for by our model since fewer and fewer pieces of information could in principle be different from what is actually observed.

So, the interest, to put it very informally, in our data is determined by the degrees of freedom: if there is nothing that can vary once our parameter is fixed (because we have so very few data points - maybe just one) then there is nothing to investigate. Degrees of freedom can be seen as linking sample size to explanatory power.
