# Famous Theorems of Mathematics/Law of large numbers

Given an infinite sequence ${\displaystyle X_{1},X_{2},\ldots }$ of i.i.d. random variables with finite expected value ${\displaystyle E(X_{1})=E(X_{2})=\cdots =\mu <\infty }$, we are interested in the convergence of the sample average

${\displaystyle {\overline {X}}_{n}={\tfrac {1}{n}}(X_{1}+\cdots +X_{n}).}$
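As a quick numerical illustration (a sketch added here, not part of the formal statement), one can simulate i.i.d. fair-die rolls, whose common mean is μ = 3.5, and watch the sample average stabilize as n grows:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def sample_average(n):
    """Sample average of n i.i.d. fair-die rolls (common mean mu = 3.5)."""
    rolls = [random.randint(1, 6) for _ in range(n)]
    return sum(rolls) / n

for n in (10, 1_000, 100_000):
    print(n, sample_average(n))
```

For small n the average fluctuates noticeably; by n = 100,000 it sits very close to 3.5.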

## The weak law

Theorem: ${\displaystyle {\overline {X}}_{n}\,{\xrightarrow {P}}\,\mu \qquad {\textrm {for}}\qquad n\to \infty .}$

Proof:

This proof uses the additional assumption of finite variance ${\displaystyle \operatorname {Var} (X_{i})=\sigma ^{2}}$  (for all ${\displaystyle i}$ ). Independence of the random variables implies that they are pairwise uncorrelated, so the variance of their sum is the sum of their variances, and we have

${\displaystyle \operatorname {Var} ({\overline {X}}_{n})={\frac {1}{n^{2}}}\sum _{i=1}^{n}\operatorname {Var} (X_{i})={\frac {n\sigma ^{2}}{n^{2}}}={\frac {\sigma ^{2}}{n}}.}$

By linearity of expectation, the mean of the sample average equals the common mean μ of the sequence:

${\displaystyle E({\overline {X}}_{n})=\mu .}$

Applying Chebyshev's inequality to ${\displaystyle {\overline {X}}_{n}}$  gives, for any ${\displaystyle \varepsilon >0}$,

${\displaystyle \operatorname {P} (\left|{\overline {X}}_{n}-\mu \right|\geq \varepsilon )\leq {\frac {\sigma ^{2}}{n\varepsilon ^{2}}}.}$
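The Chebyshev bound above can be checked numerically. The following sketch (illustrative only; the fair die, with σ² = 35/12, is an assumption chosen for concreteness) estimates the tail probability by Monte Carlo and compares it with σ²/(nε²):

```python
import random

random.seed(1)

MU = 3.5        # mean of a fair die
VAR = 35 / 12   # variance of a fair die

def tail_probability(n, eps, trials=2000):
    """Monte Carlo estimate of P(|X_bar_n - mu| >= eps) for die rolls."""
    hits = 0
    for _ in range(trials):
        xbar = sum(random.randint(1, 6) for _ in range(n)) / n
        if abs(xbar - MU) >= eps:
            hits += 1
    return hits / trials

n, eps = 100, 0.3
bound = VAR / (n * eps ** 2)  # Chebyshev bound: sigma^2 / (n * eps^2)
print(tail_probability(n, eps), "<=", bound)
```

The empirical tail probability comes out well below the Chebyshev bound, as expected: Chebyshev is a worst-case inequality, so it is typically far from tight.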

Taking the complementary event, this yields

${\displaystyle \operatorname {P} (\left|{\overline {X}}_{n}-\mu \right|<\varepsilon )=1-\operatorname {P} (\left|{\overline {X}}_{n}-\mu \right|\geq \varepsilon )\geq 1-{\frac {\sigma ^{2}}{n\varepsilon ^{2}}}.}$

As n approaches infinity, the right-hand side approaches 1, so the probability on the left tends to 1 for every fixed ε > 0. By the definition of convergence in probability (see Convergence of random variables), we have obtained

${\displaystyle {\overline {X}}_{n}\,{\xrightarrow {P}}\,\mu \qquad {\textrm {for}}\qquad n\to \infty .}$
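The conclusion can also be seen empirically. This sketch (again using fair-die rolls as an assumed concrete distribution) estimates ${\displaystyle \operatorname {P} (\left|{\overline {X}}_{n}-\mu \right|<\varepsilon )}$ for increasing n and shows it approaching 1:

```python
import random

random.seed(2)

MU = 3.5  # mean of a fair die

def coverage(n, eps=0.25, trials=1000):
    """Monte Carlo estimate of P(|X_bar_n - mu| < eps) for die rolls."""
    inside = 0
    for _ in range(trials):
        xbar = sum(random.randint(1, 6) for _ in range(n)) / n
        if abs(xbar - MU) < eps:
            inside += 1
    return inside / trials

for n in (10, 100, 1000):
    print(n, coverage(n))
```

For small n only a modest fraction of sample averages land within ε of μ; for n = 1000 essentially all of them do, matching the convergence in probability stated above.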