In this section, we shall make some preparations that will come in handy later, when we need them in order to prove existence and uniqueness theorems. This is because those proofs rely heavily on some techniques from calculus which are not usually taught within a calculus course. Hence this section.
We shall begin with very useful estimation inequalities, called Gronwall's inequalities or inequalities of Gronwall type. These allow us, if we are given one type of estimation (involving an integral with a product of functions), to conclude another type of estimation (involving the exponential function).
Theorem 2.1 (Gronwall's inequality):
Let $u, \beta \colon [t_0, T] \to \mathbb{R}$ be continuous functions with $\beta \ge 0$ and let $C \ge 0$ such that for all $t \in [t_0, T]$
$$u(t) \le C + \int_{t_0}^t \beta(s) u(s) \, ds ,$$
then for all $t \in [t_0, T]$
$$u(t) \le C e^{\int_{t_0}^t \beta(s) \, ds} .$$
Proof:
We set
$$\psi(t) := C + \int_{t_0}^t \beta(s) u(s) \, ds .$$
By the fundamental theorem of calculus, we immediately obtain
$$\psi'(t) = \beta(t) u(t) \le \beta(t) \psi(t) ,$$
where the inequality follows from the assumption on $u$ (namely $u(t) \le \psi(t)$). From this follows that
$$\psi'(t) - \beta(t) \psi(t) \le 0 .$$
We may now multiply both sides of this inequality by $e^{-\int_{t_0}^t \beta(s) \, ds} > 0$ and use the equation
$$\frac{d}{dt} \left( \psi(t) \, e^{-\int_{t_0}^t \beta(s) \, ds} \right) = \left( \psi'(t) - \beta(t) \psi(t) \right) e^{-\int_{t_0}^t \beta(s) \, ds}$$
(by the product and chain rules) to justify
$$\frac{d}{dt} \left( \psi(t) \, e^{-\int_{t_0}^t \beta(s) \, ds} \right) \le 0 .$$
Hence, the function
$$t \mapsto \psi(t) \, e^{-\int_{t_0}^t \beta(s) \, ds}$$
is non-increasing. Furthermore, if we set $t = t_0$ in that function, we obtain
$$\psi(t_0) \, e^{-\int_{t_0}^{t_0} \beta(s) \, ds} = \psi(t_0) = C .$$
Hence, for all $t \in [t_0, T]$
$$\psi(t) \, e^{-\int_{t_0}^t \beta(s) \, ds} \le C , \qquad \text{that is,} \qquad \psi(t) \le C e^{\int_{t_0}^t \beta(s) \, ds} .$$
From $u(t) \le \psi(t)$ (the assumption) the claim follows.
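As a quick illustration (this special case is not spelled out above, but is a standard consequence of the theorem): if $\beta \equiv L$ is a non-negative constant, the hypothesis reads $u(t) \le C + L \int_{t_0}^t u(s) \, ds$, and the theorem yields
$$u(t) \le C e^{L (t - t_0)} \qquad \text{for all } t \in [t_0, T] .$$
In particular, if $C = 0$, then $u \le 0$ on all of $[t_0, T]$; this is the form in which Gronwall-type estimates are typically used in uniqueness arguments.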
This result was for functions extending from $t_0$ to the right. An analogous result holds for functions extending from $t_0$ to the left:
Theorem 2.2 (Left Gronwall's inequality):
Let $u, \beta \colon [T, t_0] \to \mathbb{R}$ be continuous functions with $\beta \ge 0$ and let $C \ge 0$ such that for all $t \in [T, t_0]$
$$u(t) \le C + \int_t^{t_0} \beta(s) u(s) \, ds ,$$
then for all $t \in [T, t_0]$
$$u(t) \le C e^{\int_t^{t_0} \beta(s) \, ds} .$$
Note that this time we are not integrating from $t_0$ to $t$, but from $t$ to $t_0$. This is in fact more natural, since it means we are integrating in the positive direction (recall that $t \le t_0$ here).
Proof 1:
We rewrite the proof of theorem 2.1 for our purposes.
This time, we set
$$\psi(t) := C + \int_t^{t_0} \beta(s) u(s) \, ds ,$$
reversing the order of integration in contrast to the last proof.
Once again, we get $u(t) \le \psi(t)$. This time we use
$$\psi'(t) = -\beta(t) u(t) \ge -\beta(t) \psi(t)$$
and multiply by $e^{-\int_t^{t_0} \beta(s) \, ds} > 0$ to obtain
$$\frac{d}{dt} \left( \psi(t) \, e^{-\int_t^{t_0} \beta(s) \, ds} \right) = \left( \psi'(t) + \beta(t) \psi(t) \right) e^{-\int_t^{t_0} \beta(s) \, ds} \ge 0 ,$$
which is why
$$t \mapsto \psi(t) \, e^{-\int_t^{t_0} \beta(s) \, ds}$$
is non-decreasing. Now inserting $t = t_0$ in the thus defined function gives
$$\psi(t_0) \, e^{-\int_{t_0}^{t_0} \beta(s) \, ds} = \psi(t_0) = C ,$$
and thus for $t \in [T, t_0]$
$$u(t) \le \psi(t) \le C e^{\int_t^{t_0} \beta(s) \, ds} .$$
Proof 2:
We prove the theorem from theorem 2.1. Indeed, for $s \in [0, t_0 - T]$ we set $\tilde{u}(s) := u(t_0 - s)$ and $\tilde{\beta}(s) := \beta(t_0 - s)$. Then we have
$$\tilde{u}(s) = u(t_0 - s) \le C + \int_{t_0 - s}^{t_0} \beta(r) u(r) \, dr = C + \int_0^s \tilde{\beta}(\sigma) \tilde{u}(\sigma) \, d\sigma$$
by the substitution $r = t_0 - \sigma$. Hence, we obtain by theorem 2.1 that
$$\tilde{u}(s) \le C e^{\int_0^s \tilde{\beta}(\sigma) \, d\sigma}$$
for all $s \in [0, t_0 - T]$; substituting back $s = t_0 - t$ (and $\sigma = t_0 - r$ in the exponent) gives exactly the claimed inequality $u(t) \le C e^{\int_t^{t_0} \beta(r) \, dr}$.
Theorem 2.3 (Arzelà–Ascoli):
Let $(f_n)_{n \in \mathbb{N}}$ be a sequence of functions defined on a compact interval $[a, b]$ which is
equicontinuous (that is, for any $\epsilon > 0$ there exists $\delta > 0$ such that $|x - y| < \delta \Rightarrow |f_n(x) - f_n(y)| < \epsilon$ for all $n \in \mathbb{N}$ and $x, y \in [a, b]$) and
uniformly bounded (that is, there exists $M > 0$ such that $|f_n(x)| \le M$ for all $n \in \mathbb{N}$ and $x \in [a, b]$).
Then $(f_n)_{n \in \mathbb{N}}$ contains a uniformly convergent subsequence.
Proof:
Let $(q_j)_{j \in \mathbb{N}}$ be an enumeration of the set $\mathbb{Q} \cap [a, b]$. The set $\{ f_n(q_1) : n \in \mathbb{N} \}$ is bounded, and hence the sequence $(f_n(q_1))_{n \in \mathbb{N}}$ has a convergent subsequence $(f_{n_k^1}(q_1))_{k \in \mathbb{N}}$ due to the Heine–Borel theorem. Now the sequence $(f_{n_k^1}(q_2))_{k \in \mathbb{N}}$ also has a convergent subsequence $(f_{n_k^2}(q_2))_{k \in \mathbb{N}}$, and successively we may define $(n_k^m)_{k \in \mathbb{N}}$ for every $m \in \mathbb{N}$ in that way.
Set $g_k := f_{n_k^k}$ for all $k \in \mathbb{N}$; note that for each $j$, the sequence $(g_k(q_j))_{k \ge j}$ is a subsequence of the convergent sequence $(f_{n_k^j}(q_j))_{k \in \mathbb{N}}$ and hence converges. We claim that the sequence $(g_k)_{k \in \mathbb{N}}$ is uniformly convergent. Indeed, let $\epsilon > 0$ be arbitrary and let $\delta > 0$ be such that $|x - y| < \delta \Rightarrow |f_n(x) - f_n(y)| < \epsilon/3$ for all $n \in \mathbb{N}$ (possible by equicontinuity).
Let $m \in \mathbb{N}$ be sufficiently large that if we order $q_1, \ldots, q_m$ ascendingly, the maximum difference between successive elements (and between $a$, $b$ and their nearest $q_j$) is less than $\delta$ (possible since $\mathbb{Q} \cap [a, b]$ is dense in $[a, b]$).
Let $N_j \in \mathbb{N}$ be sufficiently large that $|g_k(q_j) - g_l(q_j)| < \epsilon/3$ for all $k, l \ge N_j$, for each $j \in \{1, \ldots, m\}$.
Set $N := \max\{N_1, \ldots, N_m\}$, and let $k, l \ge N$. Let $x \in [a, b]$ be arbitrary. Choose $j \in \{1, \ldots, m\}$ such that $|x - q_j| < \delta$ (possible due to the choice of $m$). Due to the choice of $\delta$, the choice of $N$ and the triangle inequality we get
$$|g_k(x) - g_l(x)| \le |g_k(x) - g_k(q_j)| + |g_k(q_j) - g_l(q_j)| + |g_l(q_j) - g_l(x)| < \frac{\epsilon}{3} + \frac{\epsilon}{3} + \frac{\epsilon}{3} = \epsilon .$$
Hence, we have a Cauchy sequence with respect to the supremum norm, which converges uniformly due to the completeness of $\mathbb{R}$.
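To see why the equicontinuity hypothesis cannot be dropped (an illustrative counterexample, not part of the proof above): the sequence $f_n(x) := x^n$ on $[0, 1]$ is uniformly bounded by $1$, but it is not equicontinuous near $x = 1$, and no subsequence converges uniformly, since the pointwise limit (which is $0$ for $x < 1$ and $1$ at $x = 1$) is not continuous, whereas a uniform limit of continuous functions would be continuous.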
In this section, we shall prove two or three more or less elementary results from analysis, which aren't particularly exciting, but are useful preparations for the work to come.
Theorem 2.4:
Let $(f_n)_{n \in \mathbb{N}}$ be a sequence of functions defined on an interval $[a, b]$, whose images are contained within a compact set $K \subseteq \mathbb{R}^d$, let $g \colon K \to \mathbb{R}^d$ be a continuous function, and assume further that $f_n \to f$ uniformly. Then
$$g \circ f_n \to g \circ f$$
uniformly.
Proof: Let $\epsilon > 0$ be arbitrary. Since $g$ is a continuous function defined on a compact set, it is even uniformly continuous (this is due to the Heine–Cantor theorem). This means that we may pick $\delta > 0$ such that $\|x - y\| < \delta \Rightarrow \|g(x) - g(y)\| < \epsilon$ for all $x, y \in K$. Since $f_n \to f$ uniformly, we may pick $N \in \mathbb{N}$ such that for all $n \ge N$ and $t \in [a, b]$, $\|f_n(t) - f(t)\| < \delta$. Then we have for $n \ge N$ and $t \in [a, b]$ that
$$\|g(f_n(t)) - g(f(t))\| < \epsilon .$$
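For a concrete instance (chosen here only for illustration, not taken from the text above): take $f_n(t) := t + \tfrac{1}{n}$ on $[0, 1]$, so that $f_n \to f$ uniformly with $f(t) = t$ and all images are contained in the compact set $K = [0, 2]$, and take $g(x) := x^2$. Then indeed
$$g(f_n(t)) = t^2 + \tfrac{2t}{n} + \tfrac{1}{n^2} \to t^2 = g(f(t))$$
uniformly on $[0, 1]$, since $\sup_{t \in [0, 1]} \left| \tfrac{2t}{n} + \tfrac{1}{n^2} \right| \le \tfrac{2}{n} + \tfrac{1}{n^2} \to 0$.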
The next result is very similar; it is an extension of the former theorem making $g$ time-dependent.
Theorem 2.5:
Let $(f_n)_{n \in \mathbb{N}}$ be a sequence of functions defined on an interval $[a, b]$, whose images are contained within a compact set $K \subseteq \mathbb{R}^d$ such that $f_n \to f$ uniformly, and let $g$ this time be a continuous function from $[a, b] \times K$ to $\mathbb{R}^d$. Then
$$g(t, f_n(t)) \to g(t, f(t))$$
uniformly in $t \in [a, b]$.
Proof:
First, we note that the set $[a, b] \times K$ is compact. This can be seen either by noting that this set is still bounded and closed, or by noting that for a sequence in this space, we may first choose a subsequence along which the "induced" sequence of components in $[a, b]$ converges and then, from that, a further subsequence along which the components in $K$ converge as well (or the other way round).
Thus, the function $g$ is uniformly continuous as before. Let $\epsilon > 0$ be arbitrary. Hence, we may choose $\delta > 0$ such that $\|(t, x) - (s, y)\| < \delta$ implies $\|g(t, x) - g(s, y)\| < \epsilon$ (note that $\|\cdot\|$ is a norm on $\mathbb{R} \times \mathbb{R}^d = \mathbb{R}^{d+1}$, and since this space is still finite-dimensional, all norms on it are equivalent; in particular equivalent to the norm with respect to which the continuity of $g$ is measured).
Since $f_n \to f$ uniformly, we may pick $N \in \mathbb{N}$ such that for all $n \ge N$ and $t \in [a, b]$, $\|f_n(t) - f(t)\| < \delta$. Then for $n \ge N$ and all $t \in [a, b]$, we have $\|(t, f_n(t)) - (t, f(t))\| = \|f_n(t) - f(t)\| < \delta$ and hence
$$\|g(t, f_n(t)) - g(t, f(t))\| < \epsilon .$$
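An outlook on how this will typically be used (a consequence under the assumptions of theorem 2.5, not a statement from the text above): since the integrands converge uniformly and the interval of integration has finite length, it follows that
$$\int_a^t g(s, f_n(s)) \, ds \longrightarrow \int_a^t g(s, f(s)) \, ds$$
uniformly in $t \in [a, b]$; this is the kind of limit interchange needed when passing to the limit in integral equations.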
We shall later give two proofs of the Picard–Lindelöf theorem on the existence of solutions; one can be given using the machinery above, whereas the other rests upon the following result by Stefan Banach.
Theorem 2.6:
Let $(M, d)$ be a complete metric space, and let $f \colon M \to M$ be a strict contraction; that is, there exists a constant $\lambda \in [0, 1)$ such that
$$d(f(x), f(y)) \le \lambda \, d(x, y) \qquad \text{for all } x, y \in M .$$
Then $f$ has a unique fixed point, which means that there is a unique $x^* \in M$ such that $f(x^*) = x^*$. Furthermore, if we start with a completely arbitrary point $x_0 \in M$, then the sequence
$$x_0, \ f(x_0), \ f(f(x_0)), \ \ldots$$
converges to $x^*$.
Proof:
First, we prove uniqueness of the fixed point. Assume $x, y \in M$ are both fixed points. Then
$$d(x, y) = d(f(x), f(y)) \le \lambda \, d(x, y) .$$
Since $\lambda < 1$, this implies $d(x, y) = 0$, that is, $x = y$.
Now we prove existence and simultaneously the claim about the convergence of the sequence $x_0, f(x_0), f(f(x_0)), \ldots$. For notation, we thus set $x_1 := f(x_0)$, and if $x_n$ is already defined, we set $x_{n+1} := f(x_n)$. Then the sequence $(x_n)_{n \in \mathbb{N}_0}$ is nothing else but the sequence $x_0, f(x_0), f(f(x_0)), \ldots$.
Let $n \in \mathbb{N}_0$. We claim that
$$d(x_{n+1}, x_n) \le \lambda^n \, d(x_1, x_0) .$$
Indeed, this follows by induction on $n$. The case $n = 0$ is trivial, and if the claim is true for $n$, then
$$d(x_{n+2}, x_{n+1}) = d(f(x_{n+1}), f(x_n)) \le \lambda \, d(x_{n+1}, x_n) \le \lambda^{n+1} \, d(x_1, x_0) .$$
Hence, by the triangle inequality, for all $m > n$
$$d(x_m, x_n) \le \sum_{j=n}^{m-1} d(x_{j+1}, x_j) \le \sum_{j=n}^{m-1} \lambda^j \, d(x_1, x_0) \le \frac{\lambda^n}{1 - \lambda} \, d(x_1, x_0) .$$
The latter expression goes to zero as $n \to \infty$, and hence we are dealing with a Cauchy sequence. As we are in a complete metric space, it converges to a limit $x^*$. This limit further is a fixed point, as the continuity of $f$ ($f$ is Lipschitz continuous with constant $\lambda$) implies
$$f(x^*) = f\left( \lim_{n \to \infty} x_n \right) = \lim_{n \to \infty} f(x_n) = \lim_{n \to \infty} x_{n+1} = x^* .$$
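To make the iteration in the proof concrete, here is a minimal numerical sketch (the function name `fixed_point`, the tolerance, and the example map $x \mapsto \cos(x)$ are illustrative choices and not taken from the text above); it simply computes $x_{n+1} = f(x_n)$ until successive iterates are close, which by the theorem approximates the unique fixed point whenever $f$ is a strict contraction on a complete metric space such as $\mathbb{R}$ with the usual distance.

```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = f(x_n) until successive iterates differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Example: x -> cos(x) maps [0, 1] into itself and satisfies |cos'(x)| = |sin(x)| <= sin(1) < 1
# there, so it is a strict contraction; the iteration converges to the unique fixed point
# x* ~ 0.739085 regardless of the starting value chosen in [0, 1].
print(fixed_point(math.cos, 0.5))
```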