# Ordinary Differential Equations/Preliminaries from calculus

In this section, we make some preparations that will come in handy later, when we prove existence and uniqueness theorems. Those proofs rely heavily on techniques from analysis which are not usually taught within a calculus course; hence this section.

We shall begin with some very useful estimation inequalities, called Gronwall's inequalities or inequalities of Gronwall type. Given one type of estimate (involving an integral of a product of functions), they allow us to conclude another type of estimate (involving the exponential function).

## Gronwall's inequalities

Theorem 2.1 (Right Gronwall's inequality):

Let $t_{0},M\in \mathbb {R}$  and let $f,h\in {\mathcal {C}}([t_{0},\infty ))$  with $h\geq 0$  such that for all $t\geq t_{0}$

$f(t)\leq M+\int _{t_{0}}^{t}f(s)h(s)ds$ .

Then for all $t\geq t_{0}$ , also

$f(t)\leq M\cdot e^{\int _{t_{0}}^{t}h(s)ds}$ .

Proof:

We define a new function by

$r(t):=M+\int _{t_{0}}^{t}f(s)h(s)ds$ .

By the fundamental theorem of calculus, we immediately obtain

$r'(t)=f(t)h(t)\leq h(t)\left(M+\int _{t_{0}}^{t}f(s)h(s)ds\right)=h(t)r(t)$ ,

where the inequality follows from the assumption on $f$  (here the nonnegativity of $h$  is used). It follows that

$r'(t)-h(t)r(t)\leq 0$ .

We may now multiply both sides of this inequality by the positive factor $e^{-\int _{t_{0}}^{t}h(s)ds}$  and use the identity

$\left(r(t)e^{-\int _{t_{0}}^{t}h(s)ds}\right)'=r'(t)e^{-\int _{t_{0}}^{t}h(s)ds}+r(t)\left((-h(t))\cdot e^{-\int _{t_{0}}^{t}h(s)ds}\right)=(r'(t)-h(t)r(t))e^{-\int _{t_{0}}^{t}h(s)ds}$  (by the product and chain rules)

to justify

$\left(r(t)e^{-\int _{t_{0}}^{t}h(s)ds}\right)'\leq 0$ .

Hence, the function

$t\mapsto r(t)e^{-\int _{t_{0}}^{t}h(s)ds}$

is non-increasing. Furthermore, if we set $t=t_{0}$  in that function, we obtain

$r(t_{0})e^{0}=M$ .

Hence,

$r(t)e^{-\int _{t_{0}}^{t}h(s)ds}\leq r(t_{0})=M\Leftrightarrow r(t)\leq Me^{\int _{t_{0}}^{t}h(s)ds}$ .

Since $f(t)\leq r(t)$  by assumption, the claim follows.$\Box$
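As a quick numerical sanity check of theorem 2.1, one can verify both the hypothesis and the conclusion on a grid. The data below ($t_{0}=0$ , $M=1$ , $h\equiv 2$ , $f(t)=1+t$ ) are illustrative choices, not taken from the text:

```python
import math

# Illustrative data (not from the text): t0 = 0, M = 1, h(t) = 2, f(t) = 1 + t.
# Hypothesis: f(t) <= M + int_{t0}^{t} f(s)h(s) ds = 1 + 2t + t^2 (true for t >= 0).
# Gronwall's conclusion: f(t) <= M * exp(int_{t0}^{t} h(s) ds) = exp(2t).

def f(t):
    return 1.0 + t

def h(t):
    return 2.0

def integral(fn, a, b, n=10_000):
    """Composite trapezoidal rule for the integral of fn over [a, b]."""
    step = (b - a) / n
    total = 0.5 * (fn(a) + fn(b)) + sum(fn(a + i * step) for i in range(1, n))
    return total * step

M, t0 = 1.0, 0.0
for t in (0.5, 1.0, 2.0):
    assert f(t) <= M + integral(lambda s: f(s) * h(s), t0, t)  # hypothesis
    assert f(t) <= M * math.exp(integral(h, t0, t))            # Gronwall bound
```

Of course, such a check is no substitute for the proof; it merely illustrates how much slack the exponential bound typically leaves.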

This result was for functions extending from $t_{0}$  to the right. An analogous result holds for functions extending from $t_{0}$  to the left:

Theorem 2.2 (Left Gronwall's inequality):

Let $t_{0},M\in \mathbb {R}$  and $f,h\in {\mathcal {C}}((-\infty ,t_{0}])$  with $h\geq 0$  such that for all $t\leq t_{0}$

$f(t)\leq M+\int _{t}^{t_{0}}f(s)h(s)ds$ ,

then for all $t\leq t_{0}$

$f(t)\leq M\cdot e^{\int _{t}^{t_{0}}h(s)ds}$ .

Note that this time we are not integrating from $t_{0}$  to $t$ , but from $t$  to $t_{0}$ . This is the more natural choice, since it means we are integrating in the positive direction.

Proof 1:

We adapt the proof of theorem 2.1 to our purposes.

This time, we set

$r(t)=M+\int _{t}^{t_{0}}f(s)h(s)ds$ ,

reversing the order of integration in contrast to the last proof.

Once again, we get $r'(t)\geq -h(t)r(t)\Leftrightarrow r'(t)+h(t)r(t)\geq 0$ . This time we use

$\left(r(t)e^{-\int _{t}^{t_{0}}h(s)ds}\right)'=r'(t)e^{-\int _{t}^{t_{0}}h(s)ds}+h(t)r(t)e^{-\int _{t}^{t_{0}}h(s)ds}$

and multiply $r'(t)+h(t)r(t)\geq 0$  by $e^{-\int _{t}^{t_{0}}h(s)ds}$  to obtain

$\left(r(t)e^{-\int _{t}^{t_{0}}h(s)ds}\right)'\geq 0$ ,

which is why

$t\mapsto r(t)e^{-\int _{t}^{t_{0}}h(s)ds}$

is non-decreasing. Now inserting $t_{0}$  in the thus defined function gives

$r(t_{0})e^{0}=M$ ,

and thus for $t\leq t_{0}$

$r(t)e^{-\int _{t}^{t_{0}}h(s)ds}\leq M\Leftrightarrow r(t)\leq Me^{\int _{t}^{t_{0}}h(s)ds}$ . Since $f(t)\leq r(t)$  by assumption, the claim follows.$\Box$

Proof 2:

We deduce the theorem from theorem 2.1. Indeed, for $t\geq -t_{0}$  we set ${\tilde {f}}(t):=f(-t)$  and ${\tilde {h}}(t):=h(-t)$ . Then we have

${\tilde {f}}(t)\leq M+\int _{-t}^{t_{0}}f(s)h(s)ds=M+\int _{-t_{0}}^{t}{\tilde {f}}(s){\tilde {h}}(s)ds$

by the substitution $s\mapsto -s$ . Hence, we obtain by theorem 2.1 (applied with $-t_{0}$  in place of $t_{0}$ ) that

${\tilde {f}}(t)\leq Me^{\int _{-t_{0}}^{t}{\tilde {h}}(s)ds}$

for $t\geq -t_{0}$ . Therefore, if now $t\leq t_{0}$ ,

$f(t)={\tilde {f}}(-t)\leq Me^{\int _{-t_{0}}^{-t}{\tilde {h}}(s)ds}=Me^{\int _{t}^{t_{0}}h(s)ds}$ .$\Box$

## The Arzelà–Ascoli theorem

Theorem 2.3 (Arzelà–Ascoli):

Let $(f_{n})_{n\in \mathbb {N} }$  be a sequence of functions defined on an interval $[a,b]$ , $a<b$ , which is

• equicontinuous (that is, for any $\epsilon >0$  there exists $\delta >0$  such that $|x-y|<\delta \Rightarrow \forall n\in \mathbb {N} :|f_{n}(x)-f_{n}(y)|<\epsilon$ ) and
• uniformly bounded (that is, there exists $M>0$  such that $\forall n\in \mathbb {N} :\forall x\in [a,b]:|f_{n}(x)|\leq M$ ).

Then $(f_{n})_{n\in \mathbb {N} }$  contains a uniformly convergent subsequence.

Proof:

Let $(x_{n})_{n\in \mathbb {N} }$  be an enumeration of the set $[a,b]\cap \mathbb {Q}$ . The set $\{f_{n}(x_{1})|n\in \mathbb {N} \}$  is bounded, and hence has a convergent subsequence $(f_{k_{1,n}}(x_{1}))_{n\in \mathbb {N} }$  due to the Bolzano–Weierstrass theorem. Now the sequence $(f_{k_{1,n}}(x_{2}))_{n\in \mathbb {N} }$  also has a convergent subsequence $(f_{k_{2,n}}(x_{2}))_{n\in \mathbb {N} }$ , and continuing in this way we may successively define $f_{k_{m,n}}$  for every $m$ .

Set $f_{l_{m}}:=f_{k_{m,m}}$  for all $m\in \mathbb {N}$ . We claim that the sequence $(f_{l_{m}})_{m\in \mathbb {N} }$  is uniformly convergent. Indeed, let $\epsilon >0$  be arbitrary and let $\delta >0$  be such that $|x-y|<\delta \Rightarrow \forall n\in \mathbb {N} :|f_{n}(x)-f_{n}(y)|<\epsilon /3$ .

Let $N_{1}\in \mathbb {N}$  be sufficiently large that if we order $a,x_{1},\ldots ,x_{N_{1}},b$  ascendingly, the maximum difference between successive elements is less than $\delta$  (possible since $\mathbb {Q}$  is dense in $\mathbb {R}$ ).

Let $N_{2}\in \mathbb {N}$  be sufficiently large that for all $n\in \{1,\ldots ,N_{1}\}$  and $k\geq 1$  $\left|f_{l_{N_{2}+k}}(x_{n})-f_{l_{N_{2}}}(x_{n})\right|<\epsilon /3$ .

Set $N:=\max\{N_{1},N_{2}\}$ , and let $k\geq N$ . Let $y\in [a,b]$  be arbitrary. Choose $x_{n}$  such that $|x_{n}-y|<\delta$  (possible due to the choice of $N_{1}$ ). Due to the choice of $\delta$ , the choice of $N_{2}$  and the triangle inequality we get

$\left|f_{l_{N+k}}(y)-f_{l_{N}}(y)\right|\leq \left|f_{l_{N+k}}(y)-f_{l_{N+k}}(x_{n})\right|+\left|f_{l_{N+k}}(x_{n})-f_{l_{N}}(x_{n})\right|+\left|f_{l_{N}}(x_{n})-f_{l_{N}}(y)\right|<\epsilon /3+\epsilon /3+\epsilon /3=\epsilon$ .

Hence, we have a Cauchy sequence, which converges due to the completeness of ${\mathcal {C}}([a,b])$ .$\Box$
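The theorem can also be illustrated numerically. In the following sketch (the family $f_{n}(x)=\sin(x+n)$  on $[0,1]$  and all tolerances are illustrative choices, not from the text), the family is uniformly bounded by $1$  and equicontinuous (each $f_{n}$  is $1$ -Lipschitz), the full sequence does not converge, yet indices $n$  lying close to multiples of $2\pi$  yield a subsequence converging uniformly to $\sin$ :

```python
import math

# f_n(x) = sin(x + n) on [0, 1]: uniformly bounded by 1 and equicontinuous
# (1-Lipschitz), so Arzela-Ascoli guarantees a uniformly convergent
# subsequence. Indices n close to a multiple of 2*pi exhibit one explicitly.

xs = [i / 200 for i in range(201)]  # grid on [0, 1]

def sup_dist(n):
    """Grid approximation of the sup-norm distance between f_n and sin."""
    return max(abs(math.sin(x + n) - math.sin(x)) for x in xs)

def residue(n):
    """Distance from n to the nearest integer multiple of 2*pi."""
    r = n % (2 * math.pi)
    return min(r, 2 * math.pi - r)

# first five indices whose distance to a multiple of 2*pi is below 1e-2
subseq = [n for n in range(1, 100_000) if residue(n) < 1e-2][:5]
dists = [sup_dist(n) for n in subseq]
assert all(d < 1e-2 for d in dists)  # f_n is uniformly close to sin along it
```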

## Convergence considerations

In this section, we shall prove a few more or less elementary results from analysis, which aren't particularly exciting, but are useful preparations for the work to come.

Theorem 2.4:

Let $(f_{n})_{n\in \mathbb {N} }$  be a sequence of functions defined on an interval $[a,b]\subset \mathbb {R}$ , whose image is contained within a compact set $K\subset \mathbb {R} ^{n}$ , let $g:K\to \mathbb {R} ^{m}$  be a continuous function, and assume further that $f_{n}\to f$  uniformly. Then

$g\circ f_{n}\to g\circ f$  uniformly.

Proof: Let $\epsilon >0$  be arbitrary. Since $g$  is a continuous function defined on a compact set, it is even uniformly continuous (this is due to the Heine–Cantor theorem). This means that we may pick $\delta >0$  such that $\|x-y\|<\delta \Rightarrow \|g(x)-g(y)\|<\epsilon$  for all $x,y\in K$ . Since $f_{n}\to f$  uniformly, we may pick $N\in \mathbb {N}$  such that for all $k\geq N$  and $t\in [a,b]$ , $\|f_{k}(t)-f(t)\|<\delta$ . Then we have for $k\geq N$  and $t\in [a,b]$  that

$\|g\circ f_{k}(t)-g\circ f(t)\|<\epsilon$ .$\Box$
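A quick numerical check of theorem 2.4 (the data $f_{n}(t)=t+1/n$  on $[0,1]$ , $K=[0,2]$  and $g(x)=x^{2}$  are illustrative choices, not from the text):

```python
# f_n(t) = t + 1/n -> f(t) = t uniformly on [0, 1]; images lie in the
# compact set K = [0, 2]; g(x) = x^2 is continuous on K.  Theorem 2.4
# then gives g(f_n) -> g(f) uniformly; the sup-norm errors confirm this.

ts = [i / 100 for i in range(101)]  # grid on [0, 1]

def g(x):
    return x * x

def sup_err(n):
    """Grid approximation of sup_t |g(f_n(t)) - g(f(t))|."""
    return max(abs(g(t + 1 / n) - g(t)) for t in ts)

errs = [sup_err(n) for n in (1, 10, 100, 1000)]
assert all(a > b for a, b in zip(errs, errs[1:]))  # errors strictly decrease
assert errs[-1] < 5e-3                             # sup error is 2/n + 1/n**2
```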

The next result is very similar; it is an extension of the former theorem making $g$  time-dependent.

Theorem 2.5:

Let $(f_{n})_{n\in \mathbb {N} }$  be a sequence of functions defined on an interval $[a,b]\subset \mathbb {R}$ , whose image is contained within a compact set $K\subset \mathbb {R} ^{n}$  such that $f_{n}\to f$  uniformly, and let this time $g$  be a continuous function from $[a,b]\times K$  to $\mathbb {R} ^{m}$ . Then

$g(t,f_{n}(t))\to g(t,f(t))$

uniformly in $t\in [a,b]$ .

Proof:

First, we note that the set $[a,b]\times K$  is compact. This can be seen either by noting that this set is still bounded and closed, or by noting that for a sequence in this space, we may first choose a convergent subsequence of the "induced" sequence of $K$  and then a convergent subsequence of what's left in $[a,b]$  (or the other way round).

Thus, the function $g$  is uniformly continuous as before. Let $\epsilon >0$  be arbitrary. We may then choose $\delta >0$  such that $|t-s|+\|x-y\|<\delta$  implies $\|g(t,x)-g(s,y)\|<\epsilon$  (note that $|\cdot |+\|\cdot \|$  is a norm on $[a,b]\times K$ , and since this space is still finite-dimensional, all norms there are equivalent; in particular it is equivalent to the norm with respect to which continuity is measured).

Since $f_{n}\to f$  uniformly, we may pick $N\in \mathbb {N}$  such that for all $k\geq N$  and $t\in [a,b]$ , $\|f_{k}(t)-f(t)\|<\delta$ . Then for $k\geq N$  and all $t\in [a,b]$ , we have

$\|g(t,f_{k}(t))-g(t,f(t))\|<\epsilon$ .$\Box$
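The same kind of check works for the time-dependent version (again with illustrative data, not from the text): $f_{n}(t)=t+1/n\to f(t)=t$  uniformly on $[0,1]$ , images inside $K=[0,2]$ , and $g(t,x)=tx$  continuous on $[0,1]\times K$ :

```python
# g(t, x) = t * x is continuous on [0, 1] x [0, 2], and f_n(t) = t + 1/n
# converges uniformly to f(t) = t.  Theorem 2.5 gives
# g(t, f_n(t)) -> g(t, f(t)) uniformly; here the sup over t equals 1/n.

ts = [i / 100 for i in range(101)]  # grid on [0, 1]

def g(t, x):
    return t * x

def sup_err(n):
    """Grid approximation of sup_t |g(t, f_n(t)) - g(t, f(t))|."""
    return max(abs(g(t, t + 1 / n) - g(t, t)) for t in ts)

assert sup_err(10) > sup_err(100) > sup_err(1000)
assert sup_err(1000) < 2e-3
```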

## Banach's fixed-point theorem

We shall later give two proofs of the Picard–Lindelöf theorem on the existence of solutions; one can be given using the machinery above, whereas the other rests upon the following result due to Stefan Banach.

Theorem 2.6:

Let $(M,d)$  be a complete metric space, and let $f:M\to M$  be a strict contraction; that is, there exists a constant $0\leq \lambda <1$  such that

$\forall m,n\in M:d(f(m),f(n))\leq \lambda d(m,n)$ .

Then $f$  has a unique fixed point, which means that there is a unique $x\in M$  such that $f(x)=x$ . Furthermore, if we start with a completely arbitrary point $y\in M$ , then the sequence

$y,f(y),f(f(y)),f(f(f(y))),\ldots$

converges to $x$ .

Proof:

First, we prove uniqueness of the fixed point. Assume $x,y$  are both fixed points. Then

$d(x,y)=d(f(x),f(y))\leq \lambda d(x,y)\Rightarrow (1-\lambda )d(x,y)=0$ .

Since $0\leq \lambda <1$ , this implies $d(x,y)=0\Rightarrow x=y$ .

Now we prove existence and simultaneously the claim about the convergence of the sequence $y,f(y),f(f(y)),f(f(f(y))),\ldots$ . For notation, we thus set $z_{0}:=y$  and if $z_{n}$  is already defined, we set $z_{n+1}=f(z_{n})$ . Then the sequence $(z_{n})_{n\in \mathbb {N} }$  is nothing else but the sequence $y,f(y),f(f(y)),f(f(f(y))),\ldots$ .

Let $n\geq 0$ . We claim that

$d(z_{n+1},z_{n})\leq \lambda ^{n}d(z_{1},z_{0})$ .

Indeed, this follows by induction on $n$ . The case $n=0$  is trivial, and if the claim is true for $n$ , then $d(z_{n+2},z_{n+1})=d(f(z_{n+1}),f(z_{n}))\leq \lambda d(z_{n+1},z_{n})\leq \lambda \cdot \lambda ^{n}d(z_{1},z_{0})$ .

Hence, by the triangle inequality,

${\begin{aligned}d(z_{n+m},z_{n})&\leq \sum _{j=n+1}^{n+m}d(z_{j},z_{j-1})\\&\leq \sum _{j=n+1}^{n+m}\lambda ^{j-1}d(z_{1},z_{0})\\&\leq \sum _{j=n+1}^{\infty }\lambda ^{j-1}d(z_{1},z_{0})\\&=d(z_{1},z_{0})\lambda ^{n}{\frac {1}{1-\lambda }}\end{aligned}}$ .

The latter expression goes to zero as $n\to \infty$ , and hence we are dealing with a Cauchy sequence. As we are in a complete metric space, it converges to a limit $x$ . This limit is moreover a fixed point, since the continuity of $f$  ($f$  is Lipschitz continuous with constant $\lambda$ ) implies

$x=\lim _{n\to \infty }z_{n}=\lim _{n\to \infty }f(z_{n-1})=f(\lim _{n\to \infty }z_{n-1})=f(x)$ .$\Box$
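The iteration appearing in the proof is also a practical algorithm. A minimal sketch (the choices $M=\mathbb {R}$  with $d(x,y)=|x-y|$ , $f=\cos$  and the starting point $0.5$  are illustrative, not from the text): $\cos$  maps $[0,1]$  into itself and is a strict contraction there with $\lambda =\sin(1)<1$ , so the iterates converge to its unique fixed point there (the so-called Dottie number, about $0.739085$ ).

```python
import math

# Fixed-point iteration z_{n+1} = f(z_n) from the proof of theorem 2.6.
# cos maps [0, 1] into itself and |cos'| = |sin| <= sin(1) < 1 there,
# so cos is a strict contraction and the iteration converges.

def iterate(f, y, tol=1e-12, max_steps=10_000):
    """Follow z_{n+1} = f(z_n) until successive iterates are within tol."""
    for _ in range(max_steps):
        z = f(y)
        if abs(z - y) < tol:
            return z
        y = z
    raise RuntimeError("no convergence within max_steps")

x = iterate(math.cos, 0.5)
assert abs(math.cos(x) - x) < 1e-11  # x is (numerically) a fixed point
```

Note that the theorem's error estimate $d(z_{n},x)\leq d(z_{1},z_{0})\lambda ^{n}/(1-\lambda )$  tells us in advance how many iterations a given tolerance requires.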