Last modified on 24 July 2009, at 15:43

Famous Theorems of Mathematics/Law of large numbers

Given an infinite sequence X_1, X_2, \ldots of i.i.d. random variables with finite expected value E(X_1) = E(X_2) = \cdots = \mu < \infty, we are interested in the convergence of the sample average

\overline{X}_n=\tfrac1n(X_1+\cdots+X_n).
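As a quick numerical illustration (not part of the proof), the sketch below simulates sample averages of a fair six-sided die, for which \mu = 3.5; the choice of distribution and the sample sizes are arbitrary:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

mu = 3.5  # mean of a fair six-sided die

def sample_average(n):
    """Average of n i.i.d. die rolls X_1, ..., X_n."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

# The sample average drifts toward mu as n grows.
for n in (10, 1000, 100000):
    print(n, sample_average(n))
```

Rerunning with different seeds gives different trajectories, but for large n the averages cluster ever more tightly around \mu, which is exactly what the weak law asserts.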

The weak law

Theorem: \overline{X}_n \, \xrightarrow{P} \, \mu \qquad\textrm{for}\qquad n \to \infty.

Proof:

This proof uses the additional assumption of finite variance \operatorname{Var}(X_i)=\sigma^2 (for all i). The independence of the random variables implies that they are uncorrelated, so the variance of the sum is the sum of the variances, and we have that

\operatorname{Var}(\overline{X}_n) = \frac{1}{n^2}\operatorname{Var}(X_1+\cdots+X_n) = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}.
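The 1/n decay of the variance can be checked empirically. The following sketch (an illustration, not part of the proof) estimates \operatorname{Var}(\overline{X}_n) for die rolls, where \sigma^2 = 35/12, by averaging over many independent trials; the trial count is an arbitrary choice:

```python
import random
import statistics

random.seed(1)  # fixed seed so the illustration is reproducible

sigma2 = 35 / 12  # variance of a fair six-sided die

def var_of_sample_mean(n, trials=2000):
    """Empirical variance of the sample mean X̄_n over many trials."""
    means = [sum(random.randint(1, 6) for _ in range(n)) / n
             for _ in range(trials)]
    return statistics.pvariance(means)

# The empirical variance shrinks like sigma^2 / n.
for n in (5, 20, 80):
    print(n, var_of_sample_mean(n), sigma2 / n)
```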

Since each X_i has the common mean μ, linearity of expectation shows that μ is also the mean of the sample average:

E(\overline{X}_n) = \mu.

Applying Chebyshev's inequality to \overline{X}_n results in

\operatorname{P}( \left| \overline{X}_n-\mu \right| \geq \varepsilon) \leq \frac{\operatorname{Var}(\overline{X}_n)}{\varepsilon^2} = \frac{\sigma^2}{n\varepsilon^2}.
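The Chebyshev bound can also be checked numerically. This sketch (again an illustration only) estimates P(|X̄_n − μ| ≥ ε) for die rolls and compares it against \sigma^2/(n\varepsilon^2); the values of n, ε, and the trial count are arbitrary choices:

```python
import random

random.seed(2)  # fixed seed so the illustration is reproducible

mu, sigma2 = 3.5, 35 / 12  # fair die: mean and variance
n, eps, trials = 50, 0.5, 5000

# Count how often the sample mean deviates from mu by at least eps.
hits = 0
for _ in range(trials):
    xbar = sum(random.randint(1, 6) for _ in range(n)) / n
    if abs(xbar - mu) >= eps:
        hits += 1

empirical = hits / trials
bound = sigma2 / (n * eps * eps)
print(empirical, bound)
```

As is typical of Chebyshev's inequality, the bound is far from tight: the empirical deviation probability is much smaller than \sigma^2/(n\varepsilon^2), but it never exceeds it.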

This may be used to obtain the following:

\operatorname{P}( \left| \overline{X}_n-\mu \right| < \varepsilon) = 1 - \operatorname{P}( \left| \overline{X}_n-\mu \right| \geq \varepsilon) \geq 1 - \frac{\sigma^2}{n \varepsilon^2 }.

As n approaches infinity, the right-hand side tends to 1; since a probability can never exceed 1, the left-hand side tends to 1 as well. By the definition of convergence in probability (see Convergence of random variables), we have obtained

\overline{X}_n \, \xrightarrow{P} \, \mu \qquad\textrm{for}\qquad n \to \infty.