'Tauberian' refers to a class of theorems with many applications. The full scope of Tauberian theorems is too large to cover here.
We prove here only a particular Tauberian theorem, due to Hardy, Littlewood and Karamata, which is useful in analytic combinatorics.
Proof due to Hardy.[3]
If
$$f(x)=\frac{1}{(1-x)^{\sigma}}\,L\!\left(\frac{1}{1-x}\right)=\sum_{n=0}^{\infty}a_{n}x^{n}$$
then this method actually finds the asymptotic estimate of
$$s_{n}=\sum_{i=0}^{n}a_{i},$$
after which we can find $a_{n}$ by $s_{n}-s_{n-1}$, or by finding $s_{n}$ in $(1-x)f(x)$.
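To make the statement concrete (this illustrative check is not part of Hardy's proof), take $\sigma=2$ and $L\equiv 1$, so $f(x)=(1-x)^{-2}$ and $a_{n}=n+1$; the theorem then predicts $s_{n}\sim n^{2}/\Gamma(3)$. A minimal Python sketch:

```python
from math import gamma

# Illustrative check (not part of the proof): f(x) = 1/(1-x)^2, i.e. sigma = 2 and L = 1,
# has coefficients a_n = n + 1, so s_n should grow like n^2 / Gamma(3) = n^2 / 2.
sigma = 2
for n in (10, 100, 1000, 10_000):
    s_n = sum(i + 1 for i in range(n + 1))        # exact partial sum
    prediction = n ** sigma / gamma(sigma + 1)    # predicted asymptotic
    print(n, s_n / prediction)                    # ratio should approach 1 as n grows
```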
Let $a(t)$ be a step function where $a(n)-a(n-1)=a_{n}$, $a(n)=\sum_{i=0}^{n}a_{i}$ and $a(0)=0$.[4]
$$a(y^{-1})=\int_{0}^{y^{-1}}da(t)$$
Using integration by parts:[5]
$$\int_{0}^{y^{-1}}da(t)=a(y^{-1})-a(0)-\int_{0}^{y^{-1}}a(t)\,d1=a(y^{-1})$$
because $a(0)=\int_{0}^{y^{-1}}a(t)\,d1=0$.
If
$$g(x)=x^{-1}\quad(e^{-1}\leq x\leq 1),\qquad g(x)=0\quad(0\leq x<e^{-1})$$
then
$$\int_{0}^{y^{-1}}da(t)=\int_{0}^{\infty}e^{-yt}g(e^{-yt})\,da(t).$$
[6]
This is because, when $0\leq t\leq y^{-1}$, the argument $e^{-yt}$ lies between $e^{-1}$ and $e^{0}$, so $e^{-yt}g(e^{-yt})=\frac{e^{yt}}{e^{yt}}=1$; and when $t>y^{-1}$, $e^{-yt}<e^{-1}$, so $e^{-yt}g(e^{-yt})=0$.
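Since $a(t)$ jumps only at the integers, both sides of this identity are just sums over $n$, and $e^{-yn}g(e^{-yn})$ acts as an indicator of $n\leq y^{-1}$. A small numerical sketch (reusing the illustrative sequence $a_{n}=n+1$ from above; the truncation $N$ is arbitrary) confirms that the two sides agree:

```python
from math import exp, floor

# Illustrative check (not part of the proof): a(t) jumps by a_n at each integer n,
# so a Stieltjes integral against da(t) is a sum over n. Reuse a_n = n + 1 from above.
def g(x):
    return 1.0 / x if x >= exp(-1) else 0.0   # g as defined in the text

def a_n(n):
    return n + 1

y = 0.013
N = 10_000                                     # truncation of the infinite sum

lhs = sum(a_n(n) for n in range(floor(1 / y) + 1))                    # jumps in [0, 1/y]
rhs = sum(exp(-y * n) * g(exp(-y * n)) * a_n(n) for n in range(N))    # weighted sum over all jumps
print(lhs, rhs)   # equal up to floating-point rounding
```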
$$\int_{0}^{\infty}e^{-yt}g(e^{-yt})\,da(t)\sim\frac{1}{\Gamma(\sigma)}y^{-\sigma}L(y^{-1})\int_{0}^{\infty}e^{-t}t^{\sigma-1}g(e^{-t})\,dt$$
as $y\to 0$.[7]
To prove this we need two further lemmas.
$$\int_{0}^{\infty}e^{-yt}Q(e^{-yt})\,da(t)\sim\frac{1}{\Gamma(\sigma)}y^{-\sigma}L(y^{-1})\int_{0}^{\infty}e^{-t}t^{\sigma-1}Q(e^{-t})\,dt$$
as $y\to 0$, where $Q$ is any polynomial.[8]
$$\int_{0}^{\infty}e^{-yt}Q(e^{-yt})\,da(t)=\int_{0}^{\infty}e^{-yt}(\cdots+e^{-ytn}+\cdots)\,da(t)=\int_{0}^{\infty}\cdots+\int_{0}^{\infty}e^{-yt}e^{-ytn}\,da(t)+\int_{0}^{\infty}\cdots$$
[9]
$$\int_{0}^{\infty}e^{-yt}e^{-ytn}\,da(t)=\int_{0}^{\infty}e^{-(n+1)yt}\,da(t)\sim(n+1)^{-\sigma}y^{-\sigma}L\!\left(\frac{1}{(n+1)y}\right)$$
as $y\to 0$, by the assumption in the theorem.
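Spelled out (a sketch of this step, using the hypothesis on $f$ and footnote [2]): with $a(t)$ as above, the Laplace–Stieltjes transform satisfies, as $s\to 0$,
$$\int_{0}^{\infty}e^{-st}\,da(t)=\sum_{n\geq 0}a_{n}e^{-sn}=f(e^{-s})=\frac{1}{(1-e^{-s})^{\sigma}}\,L\!\left(\frac{1}{1-e^{-s}}\right)\sim s^{-\sigma}L(s^{-1}),$$
and the display above is this relation with $s=(n+1)y$.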
$$(n+1)^{-\sigma}y^{-\sigma}L\!\left(\frac{1}{(n+1)y}\right)\sim(n+1)^{-\sigma}y^{-\sigma}L\!\left(\frac{1}{y}\right)$$
by the definition of a slowly varying function.
$$(n+1)^{-\sigma}y^{-\sigma}L\!\left(\frac{1}{y}\right)=y^{-\sigma}L\!\left(\frac{1}{y}\right)\frac{1}{\Gamma(\sigma)}\int_{0}^{\infty}e^{-t}e^{-tn}t^{\sigma-1}\,dt$$
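The equality above is the Euler integral for the gamma function after the substitution $u=(n+1)t$ (a step worked out here for completeness):
$$\frac{1}{\Gamma(\sigma)}\int_{0}^{\infty}e^{-t}e^{-tn}t^{\sigma-1}\,dt=\frac{1}{\Gamma(\sigma)}\int_{0}^{\infty}e^{-(n+1)t}t^{\sigma-1}\,dt=\frac{(n+1)^{-\sigma}}{\Gamma(\sigma)}\int_{0}^{\infty}e^{-u}u^{\sigma-1}\,du=(n+1)^{-\sigma}.$$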
$$\cdots+y^{-\sigma}L\!\left(\frac{1}{y}\right)\frac{1}{\Gamma(\sigma)}\int_{0}^{\infty}e^{-t}e^{-tm}t^{\sigma-1}\,dt+y^{-\sigma}L\!\left(\frac{1}{y}\right)\frac{1}{\Gamma(\sigma)}\int_{0}^{\infty}e^{-t}e^{-tn}t^{\sigma-1}\,dt+\cdots=y^{-\sigma}L\!\left(\frac{1}{y}\right)\frac{1}{\Gamma(\sigma)}\int_{0}^{\infty}e^{-t}Q(e^{-t})t^{\sigma-1}\,dt$$
[9]
If $g(x)$ is real-valued and Riemann integrable in the open interval $(0,1)$ and $\sigma>0$, then there exist polynomials $p(x)$ and $P(x)$ such that $p(x)<g(x)<P(x)$ and
$$\int_{0}^{1}(P(x)-p(x))\left(\log\frac{1}{x}\right)^{\sigma-1}dx=\int_{0}^{\infty}e^{-t}t^{\sigma-1}(P(e^{-t})-p(e^{-t}))\,dt<\epsilon\Gamma(\sigma).$$
[10]
Construct continuous functions $h$ and $H$[11] such that
$$h\leq g\leq H,\qquad\int_{0}^{1}(H-g)\,dx<\epsilon,\qquad\int_{0}^{1}(g-h)\,dx<\epsilon.$$
Then
$$\int_{0}^{1}(H-g)\left(\log\frac{1}{x}\right)^{\sigma-1}dx<\epsilon\int_{0}^{1}\left(\log\frac{1}{x}\right)^{\sigma-1}dx=\epsilon\Gamma(\sigma)$$
[12]
and
$$\int_{0}^{1}(g-h)\left(\log\frac{1}{x}\right)^{\sigma-1}dx<\epsilon\int_{0}^{1}\left(\log\frac{1}{x}\right)^{\sigma-1}dx=\epsilon\Gamma(\sigma).$$
By the Weierstrass approximation theorem there are polynomials $q$ and $Q$ such that $|Q-H|<\epsilon$ and $|h-q|<\epsilon$. If $p(x)=q(x)-\epsilon$ and $P(x)=Q(x)+\epsilon$ then $p(x)<g(x)<P(x)$
as required by the lemma and
$$\int_{0}^{1}(P(x)-g(x))\left(\log\frac{1}{x}\right)^{\sigma-1}dx\leq\int_{0}^{1}(P(x)-Q(x))\left(\log\frac{1}{x}\right)^{\sigma-1}dx+\int_{0}^{1}|Q(x)-H(x)|\left(\log\frac{1}{x}\right)^{\sigma-1}dx+\int_{0}^{1}(H(x)-g(x))\left(\log\frac{1}{x}\right)^{\sigma-1}dx<3\epsilon\Gamma(\sigma)$$
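The Weierstrass step can be made concrete with Bernstein polynomials, which converge uniformly to any continuous function on $[0,1]$. The sketch below is purely illustrative (the proof does not depend on this particular construction, and the target function is a made-up example):

```python
from math import comb

# Illustrative only: Bernstein polynomials give an explicit witness for the Weierstrass
# approximation step. B_m(h)(x) = sum_k h(k/m) * C(m, k) * x^k * (1-x)^(m-k).
def bernstein(h, m):
    def B(x):
        return sum(h(k / m) * comb(m, k) * x**k * (1 - x)**(m - k) for k in range(m + 1))
    return B

h = lambda x: abs(x - 0.4)        # an arbitrary continuous function on [0, 1] (made-up example)
grid = [i / 200 for i in range(201)]
for m in (10, 100, 1000):
    B = bernstein(h, m)
    print(m, max(abs(B(x) - h(x)) for x in grid))   # uniform error shrinks as the degree grows
```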
By virtue of $g(x)$ being Riemann-integrable, we can find finite step functions $g_{1}$ and $g_{2}$ such that
$$g_{1}(x)\leq g(x)\leq g_{2}(x)$$
and
$$\int_{0}^{1}(g_{2}(x)-g_{1}(x))\left(\log\frac{1}{x}\right)^{\sigma-1}dx<3\epsilon\Gamma(\sigma).$$
Then, as we have proven above, there are polynomials $p(x)$ and $P(x)$ such that
$$p(x)<g_{1}(x)\leq g(x)\leq g_{2}(x)<P(x)$$
and
$$\int_{0}^{1}(P(x)-g_{2}(x))\left(\log\frac{1}{x}\right)^{\sigma-1}dx<\epsilon\Gamma(\sigma),\qquad\int_{0}^{1}(g_{1}(x)-p(x))\left(\log\frac{1}{x}\right)^{\sigma-1}dx<\epsilon\Gamma(\sigma).$$
Combining these we can complete the proof of lemma 3:
$$\int_{0}^{1}(P(x)-p(x))\left(\log\frac{1}{x}\right)^{\sigma-1}dx<5\epsilon\Gamma(\sigma)$$
Going back to the proof of lemma 1,
$$\limsup_{y\to 0}\int_{0}^{\infty}e^{-yt}g(e^{-yt})\,da(t)\leq\lim_{y\to 0}\int_{0}^{\infty}e^{-yt}P(e^{-yt})\,da(t)=\frac{1}{\Gamma(\sigma)}y^{-\sigma}L(y^{-1})\int_{0}^{\infty}e^{-t}t^{\sigma-1}P(e^{-t})\,dt$$
by lemma 2.
Lemma 3 implies that
$$\int_{0}^{\infty}e^{-t}t^{\sigma-1}(P(e^{-t})-g(e^{-t}))\,dt<\epsilon\Gamma(\sigma)$$
so that
$$\frac{1}{\Gamma(\sigma)}y^{-\sigma}L(y^{-1})\int_{0}^{\infty}e^{-t}t^{\sigma-1}P(e^{-t})\,dt-\frac{1}{\Gamma(\sigma)}y^{-\sigma}L(y^{-1})\int_{0}^{\infty}e^{-t}t^{\sigma-1}g(e^{-t})\,dt<\epsilon$$
and
$$\frac{1}{\Gamma(\sigma)}y^{-\sigma}L(y^{-1})\int_{0}^{\infty}e^{-t}t^{\sigma-1}P(e^{-t})\,dt<\frac{1}{\Gamma(\sigma)}y^{-\sigma}L(y^{-1})\int_{0}^{\infty}e^{-t}t^{\sigma-1}g(e^{-t})\,dt+\epsilon$$
and finally
$$\limsup_{y\to 0}\int_{0}^{\infty}e^{-yt}g(e^{-yt})\,da(t)<\frac{1}{\Gamma(\sigma)}y^{-\sigma}L(y^{-1})\int_{0}^{\infty}e^{-t}t^{\sigma-1}g(e^{-t})\,dt+\epsilon$$
By a similar argument, one can prove
$$\liminf_{y\to 0}\int_{0}^{\infty}e^{-yt}g(e^{-yt})\,da(t)>\frac{1}{\Gamma(\sigma)}y^{-\sigma}L(y^{-1})\int_{0}^{\infty}e^{-t}t^{\sigma-1}g(e^{-t})\,dt-\epsilon$$
Combining both these results, we conclude the proof of lemma 1:
$$\int_{0}^{\infty}e^{-yt}g(e^{-yt})\,da(t)\sim\frac{1}{\Gamma(\sigma)}y^{-\sigma}L(y^{-1})\int_{0}^{\infty}e^{-t}t^{\sigma-1}g(e^{-t})\,dt$$
Putting it all together:
$$a(y^{-1})=\int_{0}^{y^{-1}}da(t)=\int_{0}^{\infty}e^{-yt}g(e^{-yt})\,da(t)\sim\frac{1}{\Gamma(\sigma)}y^{-\sigma}L(y^{-1})\int_{0}^{\infty}e^{-t}t^{\sigma-1}g(e^{-t})\,dt=\frac{1}{\Gamma(\sigma)}y^{-\sigma}L(y^{-1})\int_{0}^{1}t^{\sigma-1}\,dt=\frac{1}{\Gamma(\sigma+1)}y^{-\sigma}L(y^{-1})$$
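The last two equalities use the definition of $g$: here $g(e^{-t})=e^{t}$ for $0\leq t\leq 1$ and $g(e^{-t})=0$ for $t>1$, so
$$\int_{0}^{\infty}e^{-t}t^{\sigma-1}g(e^{-t})\,dt=\int_{0}^{1}t^{\sigma-1}\,dt=\frac{1}{\sigma},\qquad\frac{1}{\Gamma(\sigma)}\cdot\frac{1}{\sigma}=\frac{1}{\Gamma(\sigma+1)}.$$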
or $a(y)\sim\frac{y^{\sigma}}{\Gamma(\sigma+1)}L(y)$ as $y\to\infty$. Then, if $f(x)=\frac{1}{(1-x)^{\sigma}}L\!\left(\frac{1}{1-x}\right)$,
$$a_{n}=[x^{n}]\,\frac{1}{(1-x)^{\sigma}}L\!\left(\frac{1}{1-x}\right)=[s_{n}]\,\frac{1-x}{(1-x)^{\sigma}}L\!\left(\frac{1}{1-x}\right)=[s_{n}]\,\frac{1}{(1-x)^{\sigma-1}}L\!\left(\frac{1}{1-x}\right)\sim\frac{n^{\sigma-1}}{\Gamma(\sigma)}L(n)$$
where $[s_{n}]$ denotes the $n$-th partial sum of the coefficients of the series that follows it.
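As a final illustrative check (again with $L\equiv 1$, and $\sigma$ chosen arbitrarily), take $f(x)=(1-x)^{-\sigma}$ with $\sigma=3/2$, whose exact coefficients are $a_{n}=\Gamma(n+\sigma)/(\Gamma(\sigma)\,n!)$; the ratio to $n^{\sigma-1}/\Gamma(\sigma)$ should approach 1:

```python
from math import exp, lgamma, gamma

# Illustrative check (not part of the proof) of a_n ~ n^(sigma-1) / Gamma(sigma)
# for f(x) = (1-x)^(-sigma), i.e. L = 1. The value sigma = 1.5 is arbitrary.
sigma = 1.5
for n in (10, 100, 1000, 10_000):
    # exact coefficient a_n = Gamma(n + sigma) / (Gamma(sigma) * n!), via log-gamma to avoid overflow
    a_n = exp(lgamma(n + sigma) - lgamma(sigma) - lgamma(n + 1))
    prediction = n ** (sigma - 1) / gamma(sigma)
    print(n, a_n / prediction)    # ratio should approach 1 as n grows
```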
↑ Hardy 1949, pp. 166.
↑ Because $e^{-y}=1-y+o(y)$ as $y\to 0$, which is equivalent to $e^{-y}\sim 1-y$ as $y\to 0$. See De Bruijn 1981, pp. 10 and Hardy 1949, pp. 155.
↑ Hardy 1949, pp. 166-168.
↑ Hardy 1949, pp. 158.
↑ w:Riemann–Stieltjes_integral#Properties
↑ Hardy 1949, pp. 158.
↑ Hardy 1949, pp. 166.
↑ Hardy 1949, pp. 168.
↑ a b By the linearity of integration: the integral of a finite sum is the sum of the integrals.
↑ Hardy 1949, pp. 166.
↑ For example, you could construct piecewise continuous functions by partitioning $[0,1]$ into $\{x_{i}\}$, putting $h(x_{i})=\inf\{g(x)\mid x\in[x_{i},x_{i+1}]\}$ and $H(x_{i})=\sup\{g(x)\mid x\in[x_{i},x_{i+1}]\}$ and then "joining the dots" between these values. Refine the partition until the conditions are met.
↑ w:Gamma_function#Integral_representations
Hardy, G.H. (1949). Divergent Series (1st ed.). Oxford University Press.
Flajolet, Philippe; Sedgewick, Robert (2009). Analytic Combinatorics (PDF). Cambridge University Press.
De Bruijn, N.G. (1981). Asymptotic Methods in Analysis. Dover Publications.