# Sequences and Series/Print version

Sequences and Series

The current, editable version of this book is available in Wikibooks, the open-content textbooks collection, at
https://en.wikibooks.org/wiki/Sequences_and_Series

Permission is granted to copy, distribute, and/or modify this document under the terms of the Creative Commons Attribution-ShareAlike 3.0 License.

# Multiple limits

Theorem (interchanging summation and integration):

Let ${\displaystyle (\Omega ,{\mathcal {F}},\mu )}$ be a measure space, and let ${\displaystyle (f_{n})_{n\in \mathbb {N} }}$ be a sequence of functions from ${\displaystyle \Omega }$ to ${\displaystyle \mathbb {K} ^{d}}$, where ${\displaystyle \mathbb {K} =\mathbb {R} }$ or ${\displaystyle \mathbb {C} }$. If either of the two expressions

${\displaystyle \int _{\Omega }\sum _{n=1}^{\infty }\|f_{n}(\omega )\|_{\infty }\mu (d\omega )}$ or ${\displaystyle \sum _{n=1}^{\infty }\int _{\Omega }\|f_{n}(\omega )\|_{\infty }\mu (d\omega )}$

converges, so does the other, and we have

${\displaystyle \int _{\Omega }\sum _{n=1}^{\infty }f_{n}(\omega )\mu (d\omega )=\sum _{n=1}^{\infty }\int _{\Omega }f_{n}(\omega )\mu (d\omega )}$.

Proof: Regarding the summation as integration over ${\displaystyle \mathbb {N} }$ with σ-algebra ${\displaystyle 2^{\mathbb {N} }}$ and counting measure, this theorem is an immediate consequence of Fubini's theorem, given that integration and summation are defined pointwise. ${\displaystyle \Box }$
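The interchange can be sanity-checked numerically. The following sketch uses the illustrative choice ${\displaystyle f_{n}(x)=x^{n}}$ on ${\displaystyle \Omega =[0,1/2]}$ with Lebesgue measure (my example, not one from the text), for which both orders of summation and integration give ${\displaystyle \ln 2-1/2}$:

```python
import math

# Illustrative choice: f_n(x) = x^n on Omega = [0, 1/2], Lebesgue measure.
# Sum first:      sum_n x^n = x/(1-x), and  int_0^{1/2} x/(1-x) dx = ln 2 - 1/2.
# Integrate first: int_0^{1/2} x^n dx = (1/2)^(n+1)/(n+1).

N = 200  # truncation; the tail is O(2^-N)

# integrate-then-sum, using the closed form of each inner integral
sum_of_integrals = sum((0.5 ** (n + 1)) / (n + 1) for n in range(1, N + 1))

# sum-then-integrate, approximating int_0^{1/2} x/(1-x) dx by a midpoint rule
M = 100_000
h = 0.5 / M
integral_of_sum = sum(h * (x / (1 - x)) for x in
                      ((k + 0.5) * h for k in range(M)))

exact = math.log(2) - 0.5
print(sum_of_integrals, integral_of_sum, exact)
```

Both computed values agree with ${\displaystyle \ln 2-1/2}$ up to truncation and quadrature error.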

Theorem (interchanging summation and real differentiation):

Let ${\displaystyle (f_{n})_{n\in \mathbb {N} }}$ be a sequence of continuously differentiable functions from an open subset ${\displaystyle U}$ of ${\displaystyle \mathbb {R} ^{d}}$ to ${\displaystyle \mathbb {R} ^{k}}$. Suppose that both

${\displaystyle \sum _{n=1}^{\infty }\|f_{n}(x)\|_{\infty }}$ and ${\displaystyle \sum _{n=1}^{\infty }\|Df_{n}(x)\|_{\infty }}$

converge for all ${\displaystyle x\in U}$, and that for each ${\displaystyle x\in U}$ there exist ${\displaystyle \delta >0}$ and a sequence ${\displaystyle (a_{n})_{n\in \mathbb {N} }}$ of nonnegative real numbers such that

${\displaystyle \sum _{n=1}^{\infty }a_{n}<\infty }$ and ${\displaystyle \forall n\in \mathbb {N} :\forall y\in {\overline {B_{\delta }(x)}}:a_{n}\geq \|Df_{n}(y)\|_{\infty }}$.

Then

${\displaystyle D\sum _{n=1}^{\infty }f_{n}(x)=\sum _{n=1}^{\infty }Df_{n}(x)}$

for all ${\displaystyle x\in U}$.

Proof: ${\displaystyle \Box }$
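A numerical illustration of the theorem, using the illustrative choice ${\displaystyle f_{n}(x)=x^{n}}$ on ${\displaystyle U=(-1,1)}$ (my example, not one from the text); locally ${\displaystyle |f_{n}'(y)|\leq nr^{n-1}=:a_{n}}$ for ${\displaystyle |y|\leq r<1}$ and ${\displaystyle \sum a_{n}<\infty }$, so the hypotheses hold:

```python
# Compare the sum of the term-by-term derivatives with a finite-difference
# derivative of the summed series; both should match 1/(1-x)^2.

N = 400          # truncation; the tail decays geometrically
x = 0.3
eps = 1e-6       # step for a central finite difference

def partial_sum(t, terms=N):
    # partial sum of sum_{n>=1} t^n
    return sum(t ** n for n in range(1, terms + 1))

sum_of_derivatives = sum(n * x ** (n - 1) for n in range(1, N + 1))
derivative_of_sum = (partial_sum(x + eps) - partial_sum(x - eps)) / (2 * eps)

closed_form = 1 / (1 - x) ** 2  # derivative of x/(1-x) ... wait: of sum x^n
print(sum_of_derivatives, derivative_of_sum, closed_form)
```

Here ${\displaystyle \sum _{n\geq 1}x^{n}=x/(1-x)}$ has derivative ${\displaystyle 1/(1-x)^{2}}$, which both computations reproduce.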

# Series and integration

Theorem (Abelian partial summation):

Let ${\displaystyle (a_{n})_{n\in \mathbb {N} }}$ be a sequence of complex numbers, and let ${\displaystyle f:[1,\infty )\to \mathbb {C} }$ be differentiable on ${\displaystyle (1,\infty )}$. Finally define

${\displaystyle A(x):=\sum _{1\leq n\leq x}a_{n}}$.

Then for ${\displaystyle x\geq 1}$ we have

${\displaystyle \sum _{1\leq n\leq x}a_{n}f(n)=A(x)f(x)-\int _{1}^{x}A(t)f'(t)dt}$.

Proof: If ${\displaystyle m=\lfloor x\rfloor }$, we have

{\displaystyle {\begin{aligned}\int _{1}^{x}A(t)f'(t)dt&=\sum _{k=2}^{m}\int _{k-1}^{k}A(t)f'(t)dt+\int _{m}^{x}A(t)f'(t)dt\\&=\sum _{k=2}^{m}\sum _{1\leq n\leq k-1}a_{n}\int _{k-1}^{k}f'(t)dt+\sum _{1\leq n\leq m}a_{n}\int _{m}^{x}f'(t)dt\\&=\sum _{k=2}^{m}A(k-1)(f(k)-f(k-1))+A(x)f(x)-A(m)f(m).\end{aligned}}}

But

{\displaystyle {\begin{aligned}\sum _{k=2}^{m}A(k-1)(f(k)-f(k-1))&=\sum _{k=2}^{m}A(k-1)f(k)-\sum _{k=2}^{m}A(k-1)f(k-1)\\&=\sum _{k=2}^{m}A(k-1)f(k)-\sum _{k=1}^{m-1}A(k)f(k)\\&=\sum _{k=2}^{m-1}f(k)(\underbrace {A(k-1)-A(k)} _{=-a_{k}})+A(m-1)f(m)-A(1)f(1).\end{aligned}}}

so that

${\displaystyle \int _{1}^{x}A(t)f'(t)dt=A(x)f(x)-\sum _{k=1}^{m}a_{k}f(k)=A(x)f(x)-\sum _{1\leq n\leq x}a_{n}f(n)}$,

which upon rearranging is the claimed formula. ${\displaystyle \Box }$
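The identity can be checked numerically. The sketch below uses the illustrative choices ${\displaystyle a_{n}=1}$ (so ${\displaystyle A(t)=\lfloor t\rfloor }$) and ${\displaystyle f(t)=1/t}$ at an integer ${\displaystyle x=m}$; since ${\displaystyle A}$ is constant on each ${\displaystyle [k,k+1)}$, the integral can be evaluated exactly piece by piece:

```python
# For a step function A, int_k^{k+1} A(t) f'(t) dt = A(k) * (f(k+1) - f(k)),
# so the right-hand side of the partial summation formula is exact here.

m = 1000
f = lambda t: 1 / t
A = lambda k: k  # A(k) = sum_{n <= k} 1 = k

lhs = sum(f(n) for n in range(1, m + 1))               # sum_{n <= m} a_n f(n)
integral = sum(A(k) * (f(k + 1) - f(k)) for k in range(1, m))
rhs = A(m) * f(m) - integral

print(lhs, rhs)  # both equal the harmonic number H_m
```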

# Power series

Proposition (identity theorem for one-dimensional power series):

Let

${\displaystyle f(z):=\sum _{n=0}^{\infty }a_{n}(z-z_{0})^{n}}$ and ${\displaystyle g(z):=\sum _{n=0}^{\infty }b_{n}(z-z_{0})^{n}}$

be two (complex or real) power series that converge on ${\displaystyle B_{\epsilon }(z_{0})}$ for some ${\displaystyle \epsilon >0}$. Suppose that ${\displaystyle z_{0}}$ is an accumulation point of the set ${\displaystyle \{z\in B_{\epsilon }(z_{0})|f(z)=g(z)\}}$. Then we have ${\displaystyle a_{n}=b_{n}}$ for all ${\displaystyle n\in \mathbb {N} _{0}}$.

Proof: Assume for a contradiction that ${\displaystyle a_{n}=b_{n}}$ does not hold for all ${\displaystyle n\in \mathbb {N} _{0}}$. Then there exists a least ${\displaystyle n}$ (call it ${\displaystyle n_{0}}$) such that ${\displaystyle a_{n_{0}}\neq b_{n_{0}}}$. Consider the function

${\displaystyle h(z):=f(z)-g(z)=\sum _{n=0}^{\infty }(a_{n}-b_{n})(z-z_{0})^{n}}$,

which is defined on at least ${\displaystyle B_{\epsilon }(z_{0})}$. Since ${\displaystyle a_{n}=b_{n}}$ for ${\displaystyle n<n_{0}}$, the power series of ${\displaystyle h}$ starts at ${\displaystyle (z-z_{0})^{n_{0}}}$. Therefore,

${\displaystyle j(z):={\frac {h(z)}{(z-z_{0})^{n_{0}}}}=\sum _{n=n_{0}}^{\infty }(a_{n}-b_{n})(z-z_{0})^{n-n_{0}}}$

is a well-defined function on ${\displaystyle B_{\epsilon }(z_{0})}$ which is also continuous due to the continuity of power series. Moreover,

${\displaystyle j(z_{0})=a_{n_{0}}-b_{n_{0}}\neq 0}$,

and by continuity of ${\displaystyle |j(z)|}$, there exists a ${\displaystyle \delta >0}$ such that ${\displaystyle |j(z)|>0}$ for all ${\displaystyle z\in B_{\delta }(z_{0})}$. But by definition,

${\displaystyle h(z)=(z-z_{0})^{n_{0}}j(z)}$,

so that for ${\displaystyle z\in B_{\delta }(z_{0})\setminus \{z_{0}\}}$ we have ${\displaystyle |h(z)|=|z-z_{0}|^{n_{0}}|j(z)|>0}$ and consequently ${\displaystyle h(z)\neq 0}$, that is, ${\displaystyle f(z)\neq g(z)}$. But this contradicts the assumption that ${\displaystyle z_{0}}$ is an accumulation point of ${\displaystyle \{z\in B_{\epsilon }(z_{0})|f(z)=g(z)\}}$. ${\displaystyle \Box }$

Example (falsity of the identity theorem for multi-dimensional power series):

For multi-dimensional power series, that is power series of the type

${\displaystyle h(z):=\sum _{\alpha \in \mathbb {N} _{0}^{d}}a_{\alpha }(z-z_{0})^{\alpha }}$ for a ${\displaystyle z_{0}=(z_{0,1},\ldots ,z_{0,d})\in \mathbb {C} ^{d}}$,

the set ${\displaystyle \{z|h(z)=0\}}$ may have ${\displaystyle z_{0}}$ as an accumulation point even when ${\displaystyle h}$ is not identically zero. An easy example (which works in any dimension ${\displaystyle d\geq 2}$) is ${\displaystyle z_{0}=0}$ and

${\displaystyle h(z)=z_{1}z_{2}}$.
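A quick check of this counterexample: the zero set of ${\displaystyle h}$ contains the whole coordinate axes, so points such as ${\displaystyle (1/k,0)}$ accumulate at ${\displaystyle z_{0}=0}$, yet ${\displaystyle h}$ is not identically zero.

```python
# h(z) = z_1 * z_2 vanishes on the axes but not everywhere.
h = lambda z1, z2: z1 * z2

on_axis = [h(1 / k, 0) for k in range(1, 100)]   # points accumulating at 0
print(all(v == 0 for v in on_axis), h(1, 1))
```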


Theorem (Abel's theorem):

Let

${\displaystyle \sum _{n=1}^{\infty }a_{n}z^{n}}$

be a real or complex power series with radius of convergence ${\displaystyle 1}$, and suppose that the series

${\displaystyle \sum _{n=1}^{\infty }a_{n}}$

converges, with value ${\displaystyle \alpha \in \mathbb {C} }$. Then

${\displaystyle \lim _{x\to 1^{-}}\sum _{n=1}^{\infty }a_{n}x^{n}=\alpha }$.

Proof: By Abelian partial summation, we have

${\displaystyle \sum _{1\leq n\leq x}a_{n}z^{n}=z^{x}A(x)-\ln(z)\int _{1}^{x}A(t)z^{t}dt}$

for ${\displaystyle |z|<1}$ and ${\displaystyle x\geq 1}$, where we denote as usual

${\displaystyle A(x):=\sum _{1\leq n\leq x}a_{n}}$.

Substituting ${\displaystyle z=\exp(w)}$, we get

${\displaystyle \sum _{1\leq n\leq x}a_{n}\exp(wn)=\exp(wx)A(x)-w\int _{1}^{x}A(t)\exp(wt)dt}$.

We then put ${\displaystyle }$
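Abel's theorem can be illustrated numerically with the classical example ${\displaystyle a_{n}=(-1)^{n+1}/n}$ (my choice, not from the text), for which the power series equals ${\displaystyle \ln(1+x)}$ on ${\displaystyle (-1,1)}$ and ${\displaystyle \sum _{n=1}^{\infty }a_{n}=\ln 2}$:

```python
import math

# a_n = (-1)^(n+1)/n:  sum_n a_n x^n = ln(1 + x) for |x| < 1, sum_n a_n = ln 2.
# As x -> 1^- the power series approaches ln 2.

def abel_sum(x, terms=20_000):
    return sum((-1) ** (n + 1) * x ** n / n for n in range(1, terms + 1))

for x in (0.9, 0.99, 0.999):
    print(x, abel_sum(x), math.log(1 + x))
print(math.log(2))
```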

# Dirichlet‒Hurwitz series

Definition (Dirichlet‒Hurwitz series):

Let ${\displaystyle f:\mathbb {N} \to \mathbb {C} }$ be a function, and let ${\displaystyle a\in \mathbb {C} \setminus \{-1,-2,\ldots \}}$. The Dirichlet‒Hurwitz series associated to ${\displaystyle f}$ and ${\displaystyle a}$ is the function of ${\displaystyle s\in \mathbb {C} }$ given by the series

${\displaystyle \sum _{n=1}^{\infty }{\frac {f(n)}{(n+a)^{s}}}}$.

Definition (abscissa of absolute convergence of Dirichlet‒Hurwitz series):

Let ${\displaystyle f:\mathbb {N} \to \mathbb {C} }$ be a function, and let ${\displaystyle a\in \mathbb {C} \setminus \{-1,-2,\ldots \}}$. Suppose that there exists a number ${\displaystyle \sigma _{a}\in \mathbb {R} }$ such that

${\displaystyle \sum _{n=1}^{\infty }\left|{\frac {f(n)}{(n+a)^{s}}}\right|}$

converges whenever ${\displaystyle \Re s>\sigma _{a}}$ and diverges whenever ${\displaystyle \Re s<\sigma _{a}}$. Then ${\displaystyle \sigma _{a}}$ is called the abscissa of absolute convergence of the Dirichlet‒Hurwitz series associated to ${\displaystyle f}$ and ${\displaystyle a}$.
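As an illustration (my choice of parameters, not from the text): with ${\displaystyle f(n)=1}$ and ${\displaystyle a=0}$ the Dirichlet‒Hurwitz series becomes the Riemann zeta function ${\displaystyle \sum _{n\geq 1}n^{-s}}$, whose abscissa of absolute convergence is ${\displaystyle \sigma _{a}=1}$; at ${\displaystyle s=2}$ it converges to ${\displaystyle \pi ^{2}/6}$.

```python
import math

# Truncated Dirichlet-Hurwitz series; f and a as in the definition above.
def dirichlet_hurwitz(f, a, s, terms):
    return sum(f(n) / (n + a) ** s for n in range(1, terms + 1))

# f(n) = 1, a = 0, s = 2: the series is zeta(2) = pi^2/6
zeta2 = dirichlet_hurwitz(lambda n: 1, 0, 2, 100_000)
print(zeta2, math.pi ** 2 / 6)
```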

Proposition (existence of abscissa of absolute convergence of Dirichlet‒Hurwitz series):

Let ${\displaystyle f:\mathbb {N} \to \mathbb {C} }$ be a function, and let ${\displaystyle a\in \mathbb {C} \setminus \{-1,-2,\ldots \}}$. Suppose that

# Infinite products

Definition (infinite product):

Let ${\displaystyle (b_{n})_{n\in \mathbb {N} }}$ be a sequence of numbers in ${\displaystyle \mathbb {K} =\mathbb {R} }$ or ${\displaystyle \mathbb {C} }$. If the limit

${\displaystyle \lim _{N\to \infty }\prod _{n=1}^{N}b_{n}}$

exists, it is called the infinite product of ${\displaystyle (b_{n})_{n\in \mathbb {N} }}$ and denoted by

${\displaystyle \prod _{n=1}^{\infty }b_{n}}$.

Proposition (necessary condition for convergence of infinite products):

In order for the infinite product

${\displaystyle \prod _{n=1}^{\infty }b_{n}}$

of a sequence ${\displaystyle (b_{n})_{n\in \mathbb {N} }}$ to exist and not to be zero, it is necessary that

${\displaystyle \lim _{n\to \infty }b_{n}=1}$.

Proof: Suppose that ${\displaystyle \lim _{n\to \infty }b_{n}=1}$ does not hold. Then there exist ${\displaystyle \epsilon >0}$ and a strictly increasing sequence ${\displaystyle (n_{k})_{k\in \mathbb {N} }}$ of indices such that for all ${\displaystyle k\in \mathbb {N} }$ we have ${\displaystyle |b_{n_{k}}-1|>\epsilon }$. Thus, upon denoting

${\displaystyle P_{N}:=\prod _{n=1}^{N}b_{n}}$,

we will have

${\displaystyle |P_{n_{k}}-P_{n_{k}-1}|=|P_{n_{k}-1}||b_{n_{k}}-1|>|P_{n_{k}-1}|\epsilon }$.

Suppose for a contradiction that ${\displaystyle \lim _{N\to \infty }P_{N}}$ existed and was equal to some ${\displaystyle c\in \mathbb {K} \setminus \{0\}}$. Then ${\displaystyle (P_{N})_{N\in \mathbb {N} }}$ is a Cauchy sequence, so that ${\displaystyle |P_{n_{k}}-P_{n_{k}-1}|\to 0}$ as ${\displaystyle k\to \infty }$. On the other hand, ${\displaystyle |P_{n_{k}-1}|\to |c|>0}$, so that for sufficiently large ${\displaystyle k}$ we will have

${\displaystyle |P_{n_{k}}-P_{n_{k}-1}|=|P_{n_{k}-1}||b_{n_{k}}-1|\geq |c|\epsilon /2>0}$,

which is a contradiction. ${\displaystyle \Box }$
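Note that the condition is necessary but not sufficient: with ${\displaystyle b_{n}=1+1/n}$ we have ${\displaystyle b_{n}\to 1}$, yet the partial products telescope to ${\displaystyle \prod _{n\leq N}(n+1)/n=N+1}$, which diverges. A quick check:

```python
# b_n = 1 + 1/n -> 1, but the partial products P_N = N + 1 diverge.
def partial_product(N):
    p = 1.0
    for n in range(1, N + 1):
        p *= 1 + 1 / n
    return p

print(partial_product(10), partial_product(1000))  # approx 11, approx 1001
```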

Proposition (series criterion for the convergence of infinite products):

Let ${\displaystyle (a_{n})_{n\in \mathbb {N} }}$ be a sequence of real numbers. If

${\displaystyle \sum _{n=1}^{\infty }|a_{n}|<\infty }$,

then

${\displaystyle \prod _{n=1}^{\infty }(1+a_{n})}$

converges.

Proof: ${\displaystyle \Box }$
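The criterion can be illustrated with ${\displaystyle a_{n}=1/n^{2}}$ (my choice of example): here ${\displaystyle \sum |a_{n}|<\infty }$, so the product converges, and by Euler's product formula for the sine its value is ${\displaystyle \prod _{n=1}^{\infty }(1+1/n^{2})=\sinh(\pi )/\pi }$.

```python
import math

# a_n = 1/n^2 is absolutely summable, so prod (1 + a_n) converges;
# Euler's sine product gives the closed form sinh(pi)/pi.
N = 100_000
product = 1.0
for n in range(1, N + 1):
    product *= 1 + 1 / n ** 2

print(product, math.sinh(math.pi) / math.pi)
```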