Convergence in probability is going to be a very useful tool for deriving asymptotic distributions later on in this book. Alongside convergence in distribution it will be the most commonly seen mode of convergence.
Almost-sure convergence bears a marked similarity to convergence in probability; however, the conditions for this mode of convergence are stronger. As we will see later, convergence almost surely implies that the sequence also converges in probability.
Convergence in distribution will appear very frequently in our econometric models through the use of the Central Limit Theorem. So let's define this type of convergence.
A sequence of random variables $X_1, X_2, \ldots$ converges in distribution to the random variable $X$ if $\lim_{n \to \infty} F_{X_n}(a) = F_X(a)$ at all continuity points of $F_X$. Here $F_{X_n}$ and $F_X$ are the cumulative distribution functions of $X_n$ and $X$ respectively.
It is the distribution of the random variable that we are concerned with here. Think of a Student's t-distribution: as the degrees of freedom $n$ increase, its distribution becomes closer and closer to that of a Gaussian distribution. Therefore the random variable $T_n$ converges in distribution to a standard normal random variable $Z$ (n.b. writing $T_n \xrightarrow{d} N(0,1)$ is a notational crutch; what we really should write is $T_n \xrightarrow{d} Z$ with $Z \sim N(0,1)$).
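To see this numerically, here is a minimal sketch (assuming SciPy is available; the evaluation point $a = 1.5$ is an arbitrary choice of ours) that tabulates the gap between the Student's t CDF and the standard normal CDF as the degrees of freedom grow:

```python
# As the degrees of freedom grow, the Student's t CDF approaches
# the standard normal CDF at every point.
from scipy.stats import t, norm

a = 1.5  # arbitrary evaluation point; every real a is a continuity point here
for df in (1, 5, 30, 1000):
    gap = abs(t.cdf(a, df) - norm.cdf(a))
    print(f"df = {df:>4}:  |F_t(a) - Phi(a)| = {gap:.6f}")
```

The printed gap shrinks towards zero as `df` increases, which is exactly the pointwise CDF convergence in the definition above.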
Let's consider a random variable $X_n$ whose sample space consists of two points, $1/n$ and $1$, each with equal probability $1/2$. Let $X$ be the Bernoulli random variable with $p = 1/2$, i.e. taking the values $0$ and $1$ with equal probability. Then $X_n$ converges in distribution to $X$.
The proof is simple: we ignore $0$ and $1$ (where the distribution of $X$ is discontinuous) and prove that $F_{X_n}(a) \to F_X(a)$ at all other points $a$. Since for $a < 0$ all the CDFs are $0$, and for $a > 1$ all the CDFs are $1$, it remains to prove the convergence for $0 < a < 1$. But $F_{X_n}(a) = \tfrac{1}{2}[1/n \le a] + \tfrac{1}{2}[1 \le a]$ (using Iverson brackets), so for any such $a$ choose $N > 1/a$; for $n > N$ we have $1/n < a$ and hence $F_{X_n}(a) = \tfrac{1}{2} = F_X(a)$.
So the sequence $F_{X_n}(a)$ converges to $F_X(a)$ at all points where $F_X$ is continuous.
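The Iverson-bracket step can be checked directly. Below is a small sketch in plain Python (the function names `F_Xn` and `F_X` and the test point $a = 0.3$ are our own choices) that evaluates both CDFs at a continuity point:

```python
def F_Xn(a, n):
    # CDF of X_n: mass 1/2 on each of the two points 1/n and 1
    return 0.5 * (a >= 1 / n) + 0.5 * (a >= 1)

def F_X(a):
    # CDF of the Bernoulli(1/2) limit: mass 1/2 on each of 0 and 1
    return 0.5 * (a >= 0) + 0.5 * (a >= 1)

a = 0.3  # a continuity point of F_X; any a in (0, 1) works
for n in (1, 2, 4, 8):
    # F_Xn(a, n) matches F_X(a) = 0.5 once n > 1/a, i.e. from n = 4 onwards here
    print(f"n = {n}:  F_Xn(a) = {F_Xn(a, n)},  F_X(a) = {F_X(a)}")
```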
Given a real number $r \ge 1$, a sequence of random variables $X_1, X_2, \ldots$ converges in $r$-th mean (or in the $L^r$ norm) to the random variable $X$ if, provided that $\mathbb{E}|X_n|^r < \infty$ for all $n$ and $\mathbb{E}|X|^r < \infty$, $\lim_{n \to \infty} \mathbb{E}\left[\,|X_n - X|^r\,\right] = 0$.
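As an illustration (our own example, not from the definition above): the sequence $X_n \sim \mathrm{Uniform}[0, 1/n]$ converges in $r$-th mean to $X = 0$, since $\mathbb{E}|X_n - 0|^r = (1/n)^r / (r+1) \to 0$. A Monte Carlo sketch, assuming NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(0)
r = 2  # r = 2 gives convergence in quadratic mean
for n in (1, 10, 100, 1000):
    xn = rng.uniform(0.0, 1.0 / n, size=100_000)  # draws of X_n; the limit X is 0
    # sample mean of |X_n - X|^r estimates E|X_n - X|^r, which should shrink to 0
    print(f"n = {n:>4}:  E|X_n - X|^r ~= {np.mean(xn ** r):.2e}")
```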
Let $X_1, X_2, X_3, \ldots$ be a sequence of random variables which are defined on the same probability space, share the same probability distribution $D$, and are independent. Assume that both the expected value $\mu$ and the standard deviation $\sigma$ of $D$ exist and are finite.
Consider the sum $S_n = X_1 + X_2 + \cdots + X_n$. Then the expected value of $S_n$ is $n\mu$ and its standard deviation is $\sigma \sqrt{n}$. Furthermore, informally speaking, the distribution of $S_n$ approaches the normal distribution $N(n\mu, \sigma^2 n)$ as $n$ approaches $\infty$.
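A minimal simulation sketch of this statement (assuming NumPy; taking $D = \mathrm{Uniform}[0,1]$, so $\mu = 1/2$ and $\sigma^2 = 1/12$, is our own choice): the standardised sum $(S_n - n\mu)/(\sigma\sqrt{n})$ should behave like a standard normal draw:

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps = 500, 10_000
mu, sigma = 0.5, np.sqrt(1.0 / 12.0)  # mean and s.d. of D = Uniform[0, 1]

sums = rng.uniform(0.0, 1.0, size=(reps, n)).sum(axis=1)  # many draws of S_n
z = (sums - n * mu) / (sigma * np.sqrt(n))                # standardised sums

# For a standard normal, about 95% of the mass lies within +/- 1.96.
print(f"P(|Z| <= 1.96) ~= {np.mean(np.abs(z) <= 1.96):.3f}")
```

The printed proportion should come out close to 0.95, matching the normal approximation $S_n \approx N(n\mu, \sigma^2 n)$.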