# Famous Theorems of Mathematics/Law of large numbers

Given an infinite sequence $X_{1},X_{2},\ldots$ of i.i.d. random variables with finite expected value $E(X_{1})=E(X_{2})=\cdots =\mu <\infty$, we are interested in the convergence of the sample average

${\overline {X}}_{n}={\tfrac {1}{n}}(X_{1}+\cdots +X_{n}).$

## The weak law

Theorem: ${\overline {X}}_{n}\,{\xrightarrow {P}}\,\mu \qquad {\textrm {for}}\qquad n\to \infty .$

Proof:

This proof uses the assumption of finite variance $\operatorname {Var} (X_{i})=\sigma ^{2}$ (for all $i$). The independence of the random variables implies no correlation between them, so the variance of the sum is the sum of the variances, and we have that

$\operatorname {Var} ({\overline {X}}_{n})={\frac {1}{n^{2}}}\operatorname {Var} (X_{1}+\cdots +X_{n})={\frac {n\sigma ^{2}}{n^{2}}}={\frac {\sigma ^{2}}{n}}.$

By linearity of expectation, the mean of the sample average is the common mean $\mu$ of the sequence:

$E({\overline {X}}_{n})=\mu .$

Using Chebyshev's inequality on ${\overline {X}}_{n}$  results in

$\operatorname {P} (\left|{\overline {X}}_{n}-\mu \right|\geq \varepsilon )\leq {\frac {\sigma ^{2}}{n\varepsilon ^{2}}}.$

This may be used to obtain the following:

$\operatorname {P} (\left|{\overline {X}}_{n}-\mu \right|<\varepsilon )=1-\operatorname {P} (\left|{\overline {X}}_{n}-\mu \right|\geq \varepsilon )\geq 1-{\frac {\sigma ^{2}}{n\varepsilon ^{2}}}.$

As $n$ approaches infinity, the right-hand side approaches 1. By the definition of convergence in probability (see Convergence of random variables), we have obtained

${\overline {X}}_{n}\,{\xrightarrow {P}}\,\mu \qquad {\textrm {for}}\qquad n\to \infty .$
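To see the convergence concretely, a minimal simulation (assuming i.i.d. Uniform(0, 1) draws, so $\mu =0.5$; any i.i.d. sequence with finite mean would do) shows the deviation $|{\overline {X}}_{n}-\mu |$ shrinking as $n$ grows:

```python
import random

random.seed(1)

# Sample averages of i.i.d. Uniform(0, 1) draws, whose mean is mu = 0.5.
mu = 0.5
deviations = {}
for n in [10, 1000, 100_000]:
    xbar = sum(random.random() for _ in range(n)) / n
    deviations[n] = abs(xbar - mu)
    print(f"n = {n:>6}: |mean - mu| = {deviations[n]:.5f}")
```

For large $n$ the deviation is on the order of $\sigma /{\sqrt {n}}$, matching the $\sigma ^{2}/n$ decay of the variance derived above.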