Consider a four-state Markov chain with state space $\{1,2,3,4\}$, initial state $X_0 = 1$, and transition probability matrix
$$\begin{pmatrix}1/4&1/4&1/4&1/4\\1/6&1/3&1/6&1/3\\0&0&1&0\\0&0&0&1\end{pmatrix}$$
(a) Compute $\lim_{n\to\infty} P(X_n = 3)$.
(b) Let $\tau = \inf\{n \geq 0 : X_n \in \{3,4\}\}$. Compute $E(\tau)$.
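Since states 3 and 4 are absorbing and states 1 and 2 are transient, $\lim_{n\to\infty} P(X_n = 3)$ is the probability of absorption in state 3, and both (a) and (b) reduce to linear first-step equations. As a quick exact check, here is a minimal sketch (the equations are our own setup, read off the matrix above):

```python
from fractions import Fraction as F

# First-step equations (states 3 and 4 are absorbing).  Writing
# h_i = P(hit state 3 before state 4 | X_0 = i):
#   h1 = (1/4) h1 + (1/4) h2 + (1/4)(1) + (1/4)(0)
#   h2 = (1/6) h1 + (1/3) h2 + (1/6)(1) + (1/3)(0)
# which rearranges to 3 h1 - h2 = 1 and -h1 + 4 h2 = 1.
det = F(3) * 4 - F(-1) * (-1)
h1 = (F(1) * 4 - F(-1) * 1) / det
h2 = (F(3) * 1 - F(-1) * 1) / det

# Mean absorption times t_i = E(tau | X_0 = i):
#   t1 = 1 + (1/4) t1 + (1/4) t2
#   t2 = 1 + (1/6) t1 + (1/3) t2
a, b = F(3, 4), F(-1, 4)
c, d = F(-1, 6), F(2, 3)
det2 = a * d - b * c
t1 = (F(1) * d - b * F(1)) / det2
t2 = (a * F(1) - c * F(1)) / det2

print(h1, t1)  # 5/11 2
```

Exact rational arithmetic avoids any rounding ambiguity in the $2\times 2$ solves.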
Let $X_1, \dots, X_n$ be independent random variables, uniformly distributed on $[0,1]$, and let $X_{(2),n}$ denote the second smallest among them. Find a nonrandom sequence $a_n$ such that $T_n = a_n - \log X_{(2),n}$ converges in distribution, and compute the limiting distribution.
$$\begin{aligned}P(T_n \leq x) &= P\left(e^{a_n - \log X_{(2),n}} \leq e^x\right) = P\left(\frac{e^{a_n}}{X_{(2),n}} \leq e^x\right)\\ &= P\left(X_{(2),n} \geq e^{a_n - x}\right) = n\left(1 - e^{a_n - x}\right)^{n-1} e^{a_n - x} + \left(1 - e^{a_n - x}\right)^n,\end{aligned}$$
where the last equality uses that $X_{(2),n} \geq t$ precisely when at most one of the $n$ uniform samples lies below $t$.
The two terms on the right-hand side resemble the limit definition of the exponential function. Can we choose $a_n$ so that they converge to it?
Let $a_n = -\log n$. Then
$$\begin{aligned}P(T_n \leq x) &= n\left(1 - \frac{1}{ne^x}\right)^{n-1}\frac{1}{ne^x} + \left(1 - \frac{1}{ne^x}\right)^n\\ &\to e^{-e^{-x}}e^{-x} + e^{-e^{-x}}.\end{aligned}$$
Hence $T_n$ converges in distribution, with limiting CDF given by the expression above.
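This limit can be checked by simulation. A sketch (variable names are ours): it uses the standard fact that the $k$-th smallest of $n$ iid Uniform[0,1] variables is Beta$(k, n-k+1)$ distributed, so $X_{(2),n} \sim \mathrm{Beta}(2, n-1)$ can be drawn directly instead of sorting $n$ values, and compares the empirical CDF of $T_n$ at $x = 0$ with the limiting value $e^{-e^{-0}}(1 + e^{-0}) = 2/e$.

```python
import math
import random

random.seed(7)

n = 10_000        # sample size per trial
trials = 50_000   # Monte Carlo repetitions
x = 0.0           # point at which to compare the CDFs

hits = 0
for _ in range(trials):
    # Second smallest of n uniforms, drawn directly as Beta(2, n-1)
    x2 = random.betavariate(2, n - 1)
    t_n = -math.log(n) - math.log(x2)   # T_n with a_n = -log n
    hits += t_n <= x

empirical = hits / trials
limit_cdf = math.exp(-math.exp(-x)) * (1 + math.exp(-x))   # = 2/e at x = 0
print(round(empirical, 3), round(limit_cdf, 3))
```

The two printed values should agree to within Monte Carlo error (a few parts in a thousand at these sizes).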
(a)
$$\begin{aligned}P(\zeta \leq x) &= P(\xi \leq x - \eta) = \sum_{n=-\infty}^{\infty} P(\eta = n)\int_{-\infty}^{x-n} p_\xi(t)\,dt\\ &= \lim_{N\to\infty}\sum_{n=-N}^{N} P(\eta = n)\int_{-\infty}^{x-n} p_\xi(t)\,dt = \lim_{N\to\infty}\sum_{n=-N}^{N} P(\eta = n)\int_{-\infty}^{x} p_\xi(t-n)\,dt\\ &= \lim_{N\to\infty}\int_{-\infty}^{x}\sum_{n=-N}^{N} P(\eta = n)\,p_\xi(t-n)\,dt = \int_{-\infty}^{x}\sum_{n=-\infty}^{\infty} P(\eta = n)\,p_\xi(t-n)\,dt,\end{aligned}$$
where the last equality follows from the monotone convergence theorem, since the partial sums of the nonnegative integrand increase in $N$.
Hence we have shown explicitly that $\zeta$ has a density, and it is given by
$$q_\zeta(t) = \sum_{n=-\infty}^{\infty} P(\eta = n)\,p_\xi(t-n).$$
(b)
When $\xi \sim$ Uniform[0,1] and $\eta \sim$ Poisson(1), we have $p_\xi(x) = \chi_{[0,1]}(x)$ (the indicator of $[0,1]$) and $p_\eta(k) = \frac{1}{k!\,e}$, with support on $k = 0, 1, 2, \dots$
Then from part (a), the density is
$$q_\zeta(x) = \sum_{n=-\infty}^{\infty} p_\eta(n)\,p_\xi(x-n) = \sum_{n=0}^{\infty} \frac{1}{n!\,e}\,\chi_{[0,1]}(x-n) = \sum_{n=0}^{\infty} \frac{1}{n!\,e}\,\chi_{[n,\,n+1]}(x).$$
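As a sanity check (a sketch; `q_zeta` is our own helper name): the density is constant $\frac{1}{n!\,e}$ on each interval $[n, n+1)$, so its total integral reduces to $\sum_{n\geq 0} \frac{1}{n!\,e} = e/e = 1$.

```python
import math

# Density of zeta = xi + eta from part (b): the summand p_xi(x - n) is the
# indicator of [n, n+1), so q_zeta equals 1/(n! e) on that interval.
def q_zeta(x, terms=60):
    return sum(1 / (math.factorial(n) * math.e)
               for n in range(terms) if n <= x < n + 1)

# Total mass: each interval [n, n+1) contributes 1/(n! e), and
# sum_{n>=0} 1/n! = e, so the density integrates to 1.
total = sum(1 / (math.factorial(n) * math.e) for n in range(60))
print(round(total, 12))  # 1.0
```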
Let $(N(t),\, t \geq 0)$ be a Poisson process with unit rate, and let
$$W_{m,n} = \sum_{k=1}^{n} I\left\{N\!\left(\tfrac{mk}{n}\right) - N\!\left(\tfrac{m(k-1)}{n}\right) \geq 2\right\},$$
where $I(A)$ is the indicator of the event $A$.
(a) Find a formula for $E(W_{m,n})$ in terms of $m, n$.
(b) Show that if $m = n^\alpha$ with $\alpha > 1/2$ a fixed constant, then $W_{m,n} \to \infty$ in probability.
We know that $N\!\left(\tfrac{mk}{n}\right) - N\!\left(\tfrac{m(k-1)}{n}\right)$ is Poisson distributed with parameter $m/n$. So
$$P\!\left(N\!\left(\tfrac{mk}{n}\right) - N\!\left(\tfrac{m(k-1)}{n}\right) \geq 2\right) = 1 - P\!\left(N\!\left(\tfrac{mk}{n}\right) - N\!\left(\tfrac{m(k-1)}{n}\right) = 0 \text{ or } 1\right) = 1 - e^{-m/n} - \frac{m}{n}e^{-m/n}.$$
Then
$$\begin{aligned}E(W_{m,n}) &= E\left[\sum_{k=1}^{n} I\left(N\!\left(\tfrac{mk}{n}\right) - N\!\left(\tfrac{m(k-1)}{n}\right) \geq 2\right)\right]\\ &= \sum_{k=1}^{n} E\left(I\left(N\!\left(\tfrac{mk}{n}\right) - N\!\left(\tfrac{m(k-1)}{n}\right) \geq 2\right)\right) \text{ by linearity}\\ &= \sum_{k=1}^{n} P\!\left(N\!\left(\tfrac{mk}{n}\right) - N\!\left(\tfrac{m(k-1)}{n}\right) \geq 2\right) = n\left(1 - e^{-m/n} - \frac{m}{n}e^{-m/n}\right) = n - e^{-m/n}(n+m).\end{aligned}$$
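The closed form can be cross-checked numerically against a term-by-term Poisson tail sum (a sketch; the function names are ours):

```python
import math

def expected_w(m, n):
    """Closed form E(W_{m,n}) = n(1 - e^{-m/n} - (m/n) e^{-m/n})."""
    lam = m / n
    return n * (1 - math.exp(-lam) - lam * math.exp(-lam))

def expected_w_tail(m, n, terms=60):
    """Same quantity as n * P(Poisson(m/n) >= 2), summing the pmf directly."""
    lam = m / n
    tail = sum(math.exp(-lam) * lam**j / math.factorial(j)
               for j in range(2, terms))
    return n * tail

# The two computations agree to floating-point accuracy.
for m, n in [(3, 10), (100, 100), (1000, 250)]:
    assert abs(expected_w(m, n) - expected_w_tail(m, n)) < 1e-9
print("closed form matches tail sum")
```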
With $m = n^\alpha$, each of the $n$ increments is Poisson with parameter $\lambda_n = m/n = n^{\alpha - 1}$, so by part (a)
$$E(W_{m,n}) = n\left(1 - e^{-\lambda_n} - \lambda_n e^{-\lambda_n}\right).$$
For $\alpha \geq 1$, $\lambda_n$ is bounded away from $0$, so $E(W_{m,n}) \geq cn \to \infty$ for some constant $c > 0$. For $1/2 < \alpha < 1$, $\lambda_n \to 0$ and the expansion $1 - e^{-\lambda}(1 + \lambda) = \frac{\lambda^2}{2} + O(\lambda^3)$ gives
$$E(W_{m,n}) \sim \frac{n\lambda_n^2}{2} = \frac{n^{2\alpha - 1}}{2} \to \infty,$$
since $2\alpha - 1 > 0$. In both cases $E(W_{m,n}) \to \infty$.
To upgrade this to convergence in probability, note that the increments are over disjoint intervals, so the $n$ indicators are independent Bernoulli variables and
$$\operatorname{Var}(W_{m,n}) \leq E(W_{m,n}).$$
By Chebyshev's inequality, for any fixed $M > 0$, once $E(W_{m,n}) > 2M$ we have
$$P(W_{m,n} \leq M) \leq P\!\left(|W_{m,n} - E(W_{m,n})| \geq \tfrac{1}{2}E(W_{m,n})\right) \leq \frac{4\operatorname{Var}(W_{m,n})}{E(W_{m,n})^2} \leq \frac{4}{E(W_{m,n})} \to 0.$$
Hence $W_{m,n} \to \infty$ in probability for every $\alpha > 1/2$.
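For a numerical illustration of (b) in the regime $1/2 < \alpha < 1$, one can simulate the $n$ independent Poisson increments directly. A sketch (helper names are ours; Knuth's product-of-uniforms sampler is used because the Python stdlib has no Poisson generator, and it is cheap here since $\lambda = n^{\alpha-1}$ is small):

```python
import math
import random

random.seed(1)

def poisson(lam):
    """Sample Poisson(lam) via Knuth's method: count uniforms until
    their running product drops below e^{-lam}."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < threshold:
            return k
        k += 1

def simulate_w(n, alpha):
    """One realization of W_{m,n} with m = n**alpha: the number of the n
    increments (each Poisson(m/n)) that are >= 2."""
    lam = n ** (alpha - 1)
    return sum(1 for _ in range(n) if poisson(lam) >= 2)

alpha = 0.75  # a value in (1/2, 1)
results = {n: simulate_w(n, alpha) for n in (1_000, 10_000, 100_000)}
# E(W_{m,n}) grows like n**(2*alpha - 1) / 2 = sqrt(n)/2 here, so the
# counts should grow by roughly a factor of sqrt(10) per row.
for n, w in results.items():
    print(n, w)
```

Choosing $\alpha$ closer to $1/2$ slows the growth, matching the $n^{2\alpha-1}$ rate in the argument above.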