# Analytic Combinatorics/Tauberian Theorem

## Introduction

'Tauberian' refers to a class of theorems that recover asymptotic information about the coefficients of a series from the behaviour of its sum near a boundary point, subject to an extra ('Tauberian') condition on the coefficients. The full scope of Tauberian theorems is too large to cover here.

We prove here only a particular Tauberian theorem due to Hardy, Littlewood and Karamata, which is useful in analytic combinatorics.

## Theorem

Theorem due to Hardy, Littlewood and Karamata.

If $a_{n}\geq 0$ and the general Dirichlet series $f(y)=\sum a_{n}e^{-ny}$ satisfies $f(y)\sim y^{-\sigma }L(y^{-1})$ as $y\to 0$, where $\sigma >0$ and $L(y)$ is a slowly varying function, then $a_{n}\sim {\frac {n^{\sigma -1}}{\Gamma (\sigma )}}L(n)$ as $n\to \infty$.

Bear in mind that if $f(x)=\sum a_{n}x^{n}={\frac {1}{(1-x)^{\sigma }}}L({\frac {1}{1-x}})$ we can convert it to a general Dirichlet series (without changing the values of the coefficients we are interested in) by the substitution $x=e^{-y}$, so that

$f(y)=\sum a_{n}(e^{-y})^{n}={\frac {1}{(1-e^{-y})^{\sigma }}}L({\frac {1}{1-e^{-y}}})\sim y^{-\sigma }L(y^{-1})$

as $y\to 0$ .
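The substitution can be checked numerically. A minimal sketch with $L\equiv 1$ and the arbitrary illustrative choice $\sigma =2.5$:

```python
import math

sigma = 2.5  # illustrative value; any sigma > 0 works, with L = 1
# compare f(y) = (1 - e^{-y})^{-sigma} against the claimed asymptotic y^{-sigma}
ratios = [(1 - math.exp(-y)) ** (-sigma) / y ** (-sigma)
          for y in (0.1, 0.01, 0.0001)]
print(ratios)  # approaches 1 as y -> 0
```

The ratio tends to $1$ because $1-e^{-y}=y+O(y^{2})$ as $y\to 0$.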

This gives us

$[x^{n}]{\frac {1}{(1-x)^{\sigma }}}L({\frac {1}{1-x}})\sim {\frac {n^{\sigma -1}}{\Gamma (\sigma )}}L(n)$

which is the form in which it is given in Flajolet and Sedgewick 2009, p. 435.
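As a sanity check on this formula, take $L\equiv 1$: the coefficients of $(1-x)^{-\sigma }$ are $\binom{n+\sigma -1}{n}$, computable by a simple recurrence, and should approach $n^{\sigma -1}/\Gamma (\sigma )$. A sketch with the arbitrary choice $\sigma =2.5$:

```python
import math

sigma, N = 2.5, 2000   # illustrative values
c = 1.0                # c_0 = [x^0] (1 - x)^(-sigma) = 1
for n in range(1, N + 1):
    c *= (n - 1 + sigma) / n   # c_n = c_{n-1} * (n - 1 + sigma) / n
predicted = N ** (sigma - 1) / math.gamma(sigma)
print(c / predicted)   # close to 1 for large N
```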

## Proof

Proof due to Hardy.

If $f(x)={\frac {1}{(1-x)^{\sigma }}}L({\frac {1}{1-x}})=\sum _{n=0}^{\infty }a_{n}x^{n}$, then this method actually finds an asymptotic estimate of the partial sums $s_{n}=\sum _{i=0}^{n}a_{i}$, after which we can recover $a_{n}$ either as $s_{n}-s_{n-1}$ or by noting that $a_{n}$ is the $n$-th partial sum of the coefficients of $(1-x)f(x)$.
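For instance (a toy case with $\sigma =2$, $L\equiv 1$): $f(x)=(1-x)^{-2}$ has $a_{n}=n+1$, while $(1-x)f(x)=(1-x)^{-1}$ has all coefficients equal to $1$, whose partial sums are exactly $a_{n}$:

```python
# toy case: f(x) = (1-x)^{-2}, so a_n = n + 1
a = [n + 1 for n in range(10)]
s = [sum(a[: n + 1]) for n in range(10)]       # s_n = a_0 + ... + a_n
# recovering a_n as s_n - s_{n-1}
diffs = [s[0]] + [s[n] - s[n - 1] for n in range(1, 10)]
# (1-x) f(x) = (1-x)^{-1} has coefficients b_n = 1; their partial sums give a_n
b = [1] * 10
partial_b = [sum(b[: n + 1]) for n in range(10)]
print(diffs == a, partial_b == a)
```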

Let $a(t)$ be the step function with $a(t)=\sum _{0\leq i\leq t}a_{i}$ for $t\geq 0$ and $a(t)=0$ for $t<0$, so that $a(n)-a(n-1)=a_{n}$ and $a(n)=\sum _{i=0}^{n}a_{i}$.

$a(y^{-1})=\int _{0}^{y^{-1}}da(t)$

Using integration by parts:

$\int _{0}^{y^{-1}}1\,da(t)=\left[a(t)\right]_{0}^{y^{-1}}-\int _{0}^{y^{-1}}a(t)\,d(1)=a(y^{-1})$

because $d(1)=0$ and the step function vanishes just below the lower limit (so the jump $a_{0}$ at $t=0$ is included in the integral).

If $g(x)=x^{-1}\quad (e^{-1}\leq x\leq 1),\quad g(x)=0\quad (0\leq x<e^{-1})$ then

$\int _{0}^{y^{-1}}da(t)=\int _{0}^{\infty }e^{-yt}g(e^{-yt})da(t)$ 

Because when $0\leq t\leq y^{-1}$, $e^{-yt}$ ranges over $[e^{-1},1]$, so $g(e^{-yt})=e^{yt}$ and $e^{-yt}g(e^{-yt})=e^{-yt}e^{yt}=1$; and when $t>y^{-1}$, $e^{-yt}<e^{-1}$, so $g(e^{-yt})=0$ and $e^{-yt}g(e^{-yt})=0$.
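In other words, $e^{-yt}g(e^{-yt})$ is exactly the indicator function of $0\leq t\leq y^{-1}$. A small numerical sketch (the helper `g` transcribes the definition above):

```python
import math

def g(x):
    # the cut-off function defined above
    return 1.0 / x if math.exp(-1) <= x <= 1 else 0.0

y = 0.25
inside = [math.exp(-y * t) * g(math.exp(-y * t)) for t in (0.0, 1.0, 1 / y)]
outside = [math.exp(-y * t) * g(math.exp(-y * t)) for t in (1 / y + 0.01, 100.0)]
print(inside, outside)  # [1.0, 1.0, 1.0] and [0.0, 0.0], up to rounding
```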

### Lemma 1

$\int _{0}^{\infty }e^{-yt}g(e^{-yt})da(t)\sim {\frac {1}{\Gamma (\sigma )}}y^{-\sigma }L(y^{-1})\int _{0}^{\infty }e^{-t}t^{\sigma -1}g(e^{-t})dt$ 

as $y\to 0$ .

To prove this we need two further lemmas.

### Lemma 2

$\int _{0}^{\infty }e^{-yt}Q(e^{-yt})da(t)\sim {\frac {1}{\Gamma (\sigma )}}y^{-\sigma }L(y^{-1})\int _{0}^{\infty }e^{-t}t^{\sigma -1}Q(e^{-t})dt$ 

where $Q$ is any polynomial, as $y\to 0$.

$\int _{0}^{\infty }e^{-yt}Q(e^{-yt})da(t)=\int _{0}^{\infty }e^{-yt}(\cdots +e^{-ytn}+\cdots )da(t)=\int _{0}^{\infty }\cdots +\int _{0}^{\infty }e^{-yt}e^{-ytn}da(t)+\int _{0}^{\infty }\cdots$ 
$\int _{0}^{\infty }e^{-yt}e^{-ytn}da(t)=\int _{0}^{\infty }e^{-(n+1)yt}da(t)\sim (n+1)^{-\sigma }y^{-\sigma }L({\frac {1}{(n+1)y}})$

as $y\to 0$, by the hypothesis of the theorem applied with $y$ replaced by $(n+1)y$.

$(n+1)^{-\sigma }y^{-\sigma }L({\frac {1}{(n+1)y}})\sim (n+1)^{-\sigma }y^{-\sigma }L({\frac {1}{y}})$

by the definition of a slowly varying function ($L(cy)\sim L(y)$ as $y\to \infty$ for any fixed $c>0$).

$(n+1)^{-\sigma }y^{-\sigma }L({\frac {1}{y}})=y^{-\sigma }L({\frac {1}{y}}){\frac {1}{\Gamma (\sigma )}}\int _{0}^{\infty }e^{-t}e^{-tn}t^{\sigma -1}dt$
Summing over all the terms of the polynomial:

$\cdots +y^{-\sigma }L({\frac {1}{y}}){\frac {1}{\Gamma (\sigma )}}\int _{0}^{\infty }e^{-t}e^{-tm}t^{\sigma -1}dt+y^{-\sigma }L({\frac {1}{y}}){\frac {1}{\Gamma (\sigma )}}\int _{0}^{\infty }e^{-t}e^{-tn}t^{\sigma -1}dt+\cdots =y^{-\sigma }L({\frac {1}{y}}){\frac {1}{\Gamma (\sigma )}}\int _{0}^{\infty }e^{-t}Q(e^{-t})t^{\sigma -1}dt$

which proves lemma 2.
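The Gamma-integral identity used in each term, $\int _{0}^{\infty }e^{-(n+1)t}t^{\sigma -1}dt={\frac {\Gamma (\sigma )}{(n+1)^{\sigma }}}$, can be verified by crude numerical quadrature; $\sigma =2.5$ and $n=3$ are arbitrary illustrative choices:

```python
import math

sigma, n = 2.5, 3          # illustrative values
h, T = 1e-4, 30.0          # quadrature step and truncation point
total = 0.0
t = h
while t < T:
    total += math.exp(-(n + 1) * t) * t ** (sigma - 1) * h
    t += h
exact = math.gamma(sigma) / (n + 1) ** sigma
print(total, exact)
```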

### Lemma 3

If $g(x)$ is real-valued and Riemann integrable on the interval $(0,1)$ and $\sigma >0$, then for every $\epsilon >0$ there exist polynomials $p(x)$ and $P(x)$ such that $p(x)\leq g(x)\leq P(x)$ and

$\int _{0}^{1}(P(x)-p(x))(\log {\frac {1}{x}})^{\sigma -1}dx=\int _{0}^{\infty }e^{-t}t^{\sigma -1}(P(e^{-t})-p(e^{-t}))dt<\epsilon \Gamma (\sigma )$

Assume there are continuous functions $h$  and $H$  such that

$h\leq g\leq H,\quad \int _{0}^{1}(H-g)dx<\epsilon ,\quad \int _{0}^{1}(g-h)dx<\epsilon$

Then

$\int _{0}^{1}(H-g)(\log {\frac {1}{x}})^{\sigma -1}dx<\epsilon \int _{0}^{1}(\log {\frac {1}{x}})^{\sigma -1}dx=\epsilon \Gamma (\sigma )$

and

$\int _{0}^{1}(g-h)(\log {\frac {1}{x}})^{\sigma -1}dx<\epsilon \int _{0}^{1}(\log {\frac {1}{x}})^{\sigma -1}dx=\epsilon \Gamma (\sigma )$
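Both bounds use the identity $\int _{0}^{1}(\log {\frac {1}{x}})^{\sigma -1}dx=\Gamma (\sigma )$ (substitute $x=e^{-t}$), which a quick numerical check confirms; $\sigma =2.5$ is an arbitrary choice:

```python
import math

sigma = 2.5            # illustrative value
h = 1e-5               # quadrature step on (0, 1)
total = sum(math.log(1.0 / (k * h)) ** (sigma - 1) * h
            for k in range(1, int(1 / h)))
print(total, math.gamma(sigma))
```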

By the Weierstrass approximation theorem there are polynomials $q$ and $Q$ such that $|Q-H|<\epsilon$ and $|h-q|<\epsilon$. If $p(x)=q(x)-\epsilon$ and $P(x)=Q(x)+\epsilon$ then $p(x)\leq h(x)\leq g(x)\leq H(x)\leq P(x)$ as required by the lemma and

$\int _{0}^{1}(P(x)-g(x))(\log {\frac {1}{x}})^{\sigma -1}dx\leq \int _{0}^{1}(P(x)-Q(x))(\log {\frac {1}{x}})^{\sigma -1}dx+\int _{0}^{1}|Q(x)-H(x)|(\log {\frac {1}{x}})^{\sigma -1}dx+\int _{0}^{1}(H(x)-g(x))(\log {\frac {1}{x}})^{\sigma -1}dx<3\epsilon \Gamma (\sigma )$
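The Weierstrass step is constructive: Bernstein polynomials $B_{m}(H;x)=\sum _{k}H(k/m){\binom{m}{k}}x^{k}(1-x)^{m-k}$ converge uniformly to any continuous $H$ on $[0,1]$. A sketch with the sample choice $H(x)=\sin(\pi x)$:

```python
import math

def bernstein(f, m, x):
    # Bernstein polynomial B_m(f; x), a constructive Weierstrass approximant
    return sum(f(k / m) * math.comb(m, k) * x ** k * (1 - x) ** (m - k)
               for k in range(m + 1))

def H(x):
    return math.sin(math.pi * x)   # sample continuous function on [0, 1]

# uniform error on a grid; shrinks as m grows
err = max(abs(bernstein(H, 200, i / 100) - H(i / 100)) for i in range(101))
print(err)
```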

Since $g$ is Riemann integrable, we can also find finite step functions $g_{1}$ and $g_{2}$ such that $g_{1}(x)\leq g(x)\leq g_{2}(x)$ and

$\int _{0}^{1}(g_{2}(x)-g_{1}(x))(\log {\frac {1}{x}})^{\sigma -1}dx<3\epsilon \Gamma (\sigma )$

Then, by the argument above, there are polynomials $p(x)$ and $P(x)$ such that $p(x)\leq g_{1}(x)\leq g(x)\leq g_{2}(x)\leq P(x)$ and

$\int _{0}^{1}(P(x)-g_{2}(x))(\log {\frac {1}{x}})^{\sigma -1}dx<\epsilon \Gamma (\sigma ),\quad \int _{0}^{1}(g_{1}(x)-p(x))(\log {\frac {1}{x}})^{\sigma -1}dx<\epsilon \Gamma (\sigma )$

Combining these we can complete the proof of lemma 3:

$\int _{0}^{1}(P(x)-p(x))(\log {\frac {1}{x}})^{\sigma -1}dx<5\epsilon \Gamma (\sigma )$

Going back to the proof of lemma 1, since $g\leq P$ and $a(t)$ is non-decreasing:

$\limsup _{y\to 0}{\frac {\int _{0}^{\infty }e^{-yt}g(e^{-yt})da(t)}{y^{-\sigma }L(y^{-1})}}\leq \lim _{y\to 0}{\frac {\int _{0}^{\infty }e^{-yt}P(e^{-yt})da(t)}{y^{-\sigma }L(y^{-1})}}={\frac {1}{\Gamma (\sigma )}}\int _{0}^{\infty }e^{-t}t^{\sigma -1}P(e^{-t})dt$

by lemma 2.

Lemma 3 implies that

$\int _{0}^{\infty }e^{-t}t^{\sigma -1}(P(e^{-t})-g(e^{-t}))dt<\epsilon \Gamma (\sigma )$

so that

${\frac {1}{\Gamma (\sigma )}}\int _{0}^{\infty }e^{-t}t^{\sigma -1}P(e^{-t})dt-{\frac {1}{\Gamma (\sigma )}}\int _{0}^{\infty }e^{-t}t^{\sigma -1}g(e^{-t})dt<\epsilon$

and

${\frac {1}{\Gamma (\sigma )}}\int _{0}^{\infty }e^{-t}t^{\sigma -1}P(e^{-t})dt<{\frac {1}{\Gamma (\sigma )}}\int _{0}^{\infty }e^{-t}t^{\sigma -1}g(e^{-t})dt+\epsilon$

and finally

$\limsup _{y\to 0}{\frac {\int _{0}^{\infty }e^{-yt}g(e^{-yt})da(t)}{y^{-\sigma }L(y^{-1})}}<{\frac {1}{\Gamma (\sigma )}}\int _{0}^{\infty }e^{-t}t^{\sigma -1}g(e^{-t})dt+\epsilon$

By a similar argument with $p$ in place of $P$, one can prove

$\liminf _{y\to 0}{\frac {\int _{0}^{\infty }e^{-yt}g(e^{-yt})da(t)}{y^{-\sigma }L(y^{-1})}}>{\frac {1}{\Gamma (\sigma )}}\int _{0}^{\infty }e^{-t}t^{\sigma -1}g(e^{-t})dt-\epsilon$

Since $\epsilon$ is arbitrary, combining these two results concludes the proof of lemma 1:

$\int _{0}^{\infty }e^{-yt}g(e^{-yt})da(t)\sim {\frac {1}{\Gamma (\sigma )}}y^{-\sigma }L(y^{-1})\int _{0}^{\infty }e^{-t}t^{\sigma -1}g(e^{-t})dt$

Putting it all together:

$a(y^{-1})=\int _{0}^{y^{-1}}da(t)=\int _{0}^{\infty }e^{-yt}g(e^{-yt})da(t)\sim {\frac {1}{\Gamma (\sigma )}}y^{-\sigma }L(y^{-1})\int _{0}^{\infty }e^{-t}t^{\sigma -1}g(e^{-t})dt={\frac {1}{\Gamma (\sigma )}}y^{-\sigma }L(y^{-1})\int _{0}^{1}t^{\sigma -1}dt={\frac {1}{\Gamma (\sigma +1)}}y^{-\sigma }L(y^{-1})$

or

$a(y)\sim {\frac {y^{\sigma }}{\Gamma (\sigma +1)}}L(y)$

as $y\to \infty$. Then, if $f(x)={\frac {1}{(1-x)^{\sigma }}}L({\frac {1}{1-x}})$

$a_{n}=[x^{n}]{\frac {1}{(1-x)^{\sigma }}}L({\frac {1}{1-x}})$ is the $n$-th partial sum of the coefficients of $(1-x)f(x)={\frac {1}{(1-x)^{\sigma -1}}}L({\frac {1}{1-x}})$, so applying the result above with $\sigma -1$ in place of $\sigma$ (valid for $\sigma >1$) gives

$a_{n}\sim {\frac {n^{\sigma -1}}{\Gamma (\sigma )}}L(n)$
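As a final numerical check of the partial-sum estimate $a(y)\sim {\frac {y^{\sigma }}{\Gamma (\sigma +1)}}L(y)$, take $L\equiv 1$ and the arbitrary choice $\sigma =1.5$: the partial sums of the coefficients of $(1-x)^{-\sigma }$ should approach $m^{\sigma }/\Gamma (\sigma +1)$. A sketch:

```python
import math

sigma, m = 1.5, 5000   # illustrative values
c, s = 1.0, 1.0        # c_n = [x^n](1-x)^(-sigma) and its running sum a(n)
for n in range(1, m + 1):
    c *= (n - 1 + sigma) / n
    s += c
print(s / (m ** sigma / math.gamma(sigma + 1)))  # close to 1 for large m
```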