Introduction and first examples
What is a partial differential equation?
Let $d \in \mathbb{N}$ be a natural number, and let $O \subseteq \mathbb{R}^d$ be an arbitrary set. A partial differential equation on $O$ looks like this:
$$\forall x \in O : F\big(x, u(x), (\partial_\alpha u(x))_\alpha\big) = 0$$
$F$ is an arbitrary function here, specific to the partial differential equation, which goes from $O \times \mathbb{R}^k$ to $\mathbb{R}$, where $k$ is a natural number (the arguments collect the function value and finitely many derivatives $\partial_\alpha u(x)$). And a solution to this partial differential equation on $O$ is a function $u : O \to \mathbb{R}$ satisfying the above logical statement. The solutions of some partial differential equations describe processes in nature; this is one reason why they are so important.
Throughout the theory of partial differential equations, multiindices are extremely important. Only with their help are we able to write down certain formulas much more briefly.
A $d$-dimensional multiindex is a vector $\alpha = (\alpha_1, \ldots, \alpha_d) \in \mathbb{N}_0^d$, where $\mathbb{N}_0 := \mathbb{N} \cup \{0\}$ are the natural numbers and zero.

If $\alpha \in \mathbb{N}_0^d$ is a multiindex, then its absolute value $|\alpha|$ is defined by
$$|\alpha| := \alpha_1 + \cdots + \alpha_d$$

If $\alpha$ is a $d$-dimensional multiindex, $O \subseteq \mathbb{R}^d$ is an arbitrary set and $f : O \to \mathbb{R}$ is sufficiently often differentiable, we define $\partial_\alpha f$, the $\alpha$-th derivative of $f$, as follows:
$$\partial_\alpha f := \partial_{x_1}^{\alpha_1} \cdots \partial_{x_d}^{\alpha_d} f$$
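These conventions translate directly into code. Here is a minimal sketch (the function names are my own) representing a multiindex as a tuple of non-negative integers; the multiindex factorial is not defined in the text above, but it is another standard multiindex convention:

```python
from math import factorial

def abs_multiindex(alpha):
    """Absolute value |alpha| = alpha_1 + ... + alpha_d."""
    return sum(alpha)

def factorial_multiindex(alpha):
    """alpha! = alpha_1! * ... * alpha_d! (a further standard convention,
    not defined in the text above)."""
    result = 1
    for a in alpha:
        result *= factorial(a)
    return result

# For alpha = (1, 2, 0): |alpha| = 3 and alpha! = 1! * 2! * 0! = 2.
print(abs_multiindex((1, 2, 0)))        # 3
print(factorial_multiindex((1, 2, 0)))  # 2
```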
Types of partial differential equations
We classify partial differential equations into several types, because partial differential equations of one type will need different solution techniques than differential equations of other types. We classify them into linear and nonlinear equations, and into equations of different orders.
A linear partial differential equation is an equation of the form
$$\sum_{\alpha} a_\alpha \partial_\alpha u = f$$
where only finitely many of the $a_\alpha$s are not the constant zero function. A solution takes the form of a function $u : O \to \mathbb{R}$. Each $a_\alpha$ is a function $O \to \mathbb{R}$, $f : O \to \mathbb{R}$ is an arbitrary function, and the sum in the formula is taken over all possible $d$-dimensional multiindices $\alpha$. If $f = 0$, the equation is called homogeneous.
A partial differential equation is called nonlinear iff it is not a linear partial differential equation.
Let $n \in \mathbb{N}$. We say that a partial differential equation has $n$-th order iff $n$ is the smallest number such that it is of the form
$$F\big(x, u(x), (\partial_\alpha u(x))_{|\alpha| \le n}\big) = 0$$
First example of a partial differential equation
Now we are of course curious what practical examples of partial differential equations actually look like.
Theorem and definition 1.4:

If $\varphi : \mathbb{R} \to \mathbb{R}$ is a differentiable function and $c \in \mathbb{R}$, then the function
$$u(t,x) := \varphi(x + ct)$$
solves the one-dimensional homogeneous transport equation
$$\forall (t,x) \in \mathbb{R} \times \mathbb{R} : \partial_t u(t,x) - c \, \partial_x u(t,x) = 0$$
Proof: Exercise 2.
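Although the proof is left as an exercise, the claim is easy to check numerically. The following sketch (the choices of $\varphi$ and the speed $c$ are mine) approximates both partial derivatives of $u(t,x) = \varphi(x + ct)$ by central differences and verifies that the residual of the transport equation is negligible:

```python
import math

def phi(x):
    # an arbitrary continuously differentiable function (our choice)
    return math.sin(x)

c = 0.5  # transport speed, chosen arbitrarily

def u(t, x):
    return phi(x + c * t)

# Central-difference check of the PDE  d/dt u - c * d/dx u = 0
h = 1e-6
t, x = 0.3, 1.2
du_dt = (u(t + h, x) - u(t - h, x)) / (2 * h)
du_dx = (u(t, x + h) - u(t, x - h)) / (2 * h)
residual = du_dt - c * du_dx
print(abs(residual) < 1e-6)  # True
```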
We therefore see that the one-dimensional transport equation has many different solutions; one for each differentiable function $\varphi$ in existence. However, if we require the solution to have a specific initial state, the solution becomes unique.
Theorem and definition 1.5:

If $\varphi : \mathbb{R} \to \mathbb{R}$ is a differentiable function and $c \in \mathbb{R}$, then the function
$$u(t,x) := \varphi(x + ct)$$
is the unique solution to the initial value problem for the one-dimensional homogeneous transport equation
$$\begin{cases} \partial_t u(t,x) - c \, \partial_x u(t,x) = 0 \\ u(0,x) = \varphi(x) \end{cases}$$
Surely $u(0,x) = \varphi(x + c \cdot 0) = \varphi(x)$. Further, theorem 1.4 shows that also:
$$\partial_t u(t,x) - c \, \partial_x u(t,x) = 0$$
Now suppose we have an arbitrary other solution to the initial value problem. Let's name it $v$. Then for all $(t,x) \in \mathbb{R} \times \mathbb{R}$, the function
$$\mu_{(t,x)}(s) := v(t - s, x + cs)$$
is constant, since by the chain rule
$$\mu_{(t,x)}'(s) = -\partial_t v(t - s, x + cs) + c \, \partial_x v(t - s, x + cs) = 0$$
Therefore, in particular
$$\mu_{(t,x)}(0) = \mu_{(t,x)}(t)$$
which means, inserting the definition of $\mu_{(t,x)}$, that
$$v(t,x) = v(0, x + ct) = \varphi(x + ct) = u(t,x)$$
which shows that $v = u$. Since $v$ was an arbitrary solution, this shows uniqueness.
In the next chapter, we will consider the inhomogeneous arbitrary-dimensional transport equation.
- Have a look at the definition of an ordinary differential equation (see for example the Wikipedia page on that) and show that every ordinary differential equation is a partial differential equation.
- Prove Theorem 1.4 using direct calculation.
- What is the order of the transport equation?
- Find a function such that and .
The transport equation
In the first chapter, we saw the one-dimensional transport equation. In this chapter we will see that we can quite easily generalise the solution method and the uniqueness proof we used there to multiple dimensions. Let $d \in \mathbb{N}$. The inhomogeneous $d$-dimensional transport equation looks like this:
$$\forall (t,x) \in \mathbb{R} \times \mathbb{R}^d : \partial_t u(t,x) - \mathbf{v} \cdot \nabla_x u(t,x) = f(t,x)$$
where $f : \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}$ is a function and $\mathbf{v} \in \mathbb{R}^d$ is a vector.
The following definition will become a useful shorthand notation on many occasions. Since we can use it right from the beginning of this chapter, we start with it.
Let $O \subseteq \mathbb{R}^d$ be a set, let $f : O \to \mathbb{R}$ be a function and let $n \in \mathbb{N}$. We say that $f$ is $n$ times continuously differentiable iff all the partial derivatives
$$\partial_\alpha f, \quad |\alpha| \le n$$
exist and are continuous. We write $f \in C^n(O)$.
Before we prove a solution formula for the transport equation, we need a theorem from analysis which will play a crucial role in the proof of the solution formula.
Theorem 2.2: (Leibniz' integral rule)

Let $O \subseteq \mathbb{R}^d$ be open and $B \subseteq \mathbb{R}^k$, where $k \in \mathbb{N}$ is arbitrary, and let $f : O \times B \to \mathbb{R}$. If the conditions

- for all $x \in O$, the function $y \mapsto f(x, y)$ is integrable over $B$,
- for all $x \in O$ and $y \in B$, the partial derivative $\partial_{x_i} f(x, y)$ exists,
- there is an integrable function $g : B \to \mathbb{R}$ such that $|\partial_{x_i} f(x, y)| \le g(y)$ for all $x \in O$ and $y \in B$,

are satisfied, then
$$\partial_{x_i} \int_B f(x, y) \, dy = \int_B \partial_{x_i} f(x, y) \, dy$$

We will omit the proof.
Theorem 2.3:

If $\mathbf{v} \in \mathbb{R}^d$, $f \in C^1(\mathbb{R} \times \mathbb{R}^d)$ and $\varphi \in C^1(\mathbb{R}^d)$, then the function
$$u(t,x) := \varphi(x + t\mathbf{v}) + \int_0^t f(s, x + (t-s)\mathbf{v}) \, ds$$
solves the inhomogeneous $d$-dimensional transport equation
$$\partial_t u(t,x) - \mathbf{v} \cdot \nabla_x u(t,x) = f(t,x)$$

Note that, as in chapter 1, there are many solutions, one for each continuously differentiable $\varphi$ in existence.
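The solution formula $u(t,x) = \varphi(x + t\mathbf{v}) + \int_0^t f(s, x + (t-s)\mathbf{v}) \, ds$ can also be sanity-checked numerically. In the sketch below (one dimension, with $\varphi = \sin$ and the source term $f(t,x) = x$ chosen by me so that the integral has the closed form $xt + vt^2/2$), the PDE residual is approximated by central differences:

```python
import math

v = 0.7                 # transport velocity (d = 1 for simplicity)
phi = math.sin          # initial data, chosen arbitrarily

def f(t, x):
    # source term f(t, x) = x, chosen so that
    # int_0^t f(s, x + (t - s) v) ds = x t + v t^2 / 2 is explicit
    return x

def u(t, x):
    # the solution formula with the integral evaluated in closed form
    return phi(x + v * t) + x * t + v * t**2 / 2

h = 1e-6
t, x = 0.4, -0.8
du_dt = (u(t + h, x) - u(t - h, x)) / (2 * h)
du_dx = (u(t, x + h) - u(t, x - h)) / (2 * h)
residual = du_dt - v * du_dx - f(t, x)
print(abs(residual) < 1e-5)  # True
```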
We show that $u$ is sufficiently often differentiable. From the chain rule it follows that $(t,x) \mapsto \varphi(x + t\mathbf{v})$ is continuously differentiable in all the directions $t, x_1, \ldots, x_d$. The existence of
$$\partial_t \int_0^t f(s, x + (t-s)\mathbf{v}) \, ds$$
follows from the Leibniz integral rule (see exercise 1). The expression
$$\nabla_x \int_0^t f(s, x + (t-s)\mathbf{v}) \, ds$$
we will show later in this proof to be equal to
$$\int_0^t \nabla_x f(s, x + (t-s)\mathbf{v}) \, ds$$
which exists because $\nabla_x f$ just consists of the derivatives $\partial_{x_i} f$, $i \in \{1, \ldots, d\}$, which are continuous since $f \in C^1(\mathbb{R} \times \mathbb{R}^d)$.
We show that
$$\partial_t u(t,x) - \mathbf{v} \cdot \nabla_x u(t,x) = f(t,x)$$
in three substeps.

We show that
$$\partial_t \varphi(x + t\mathbf{v}) = \mathbf{v} \cdot \nabla_x \varphi(x + t\mathbf{v})$$
This is left to the reader as an exercise in the application of the multi-dimensional chain rule (see exercise 2).
We show that
$$\partial_t \int_0^t f(s, x + (t-s)\mathbf{v}) \, ds = f(t,x) + \int_0^t \mathbf{v} \cdot \nabla_x f(s, x + (t-s)\mathbf{v}) \, ds$$
so that we have
$$\partial_t u(t,x) = \mathbf{v} \cdot \nabla_x \varphi(x + t\mathbf{v}) + f(t,x) + \int_0^t \mathbf{v} \cdot \nabla_x f(s, x + (t-s)\mathbf{v}) \, ds$$
By the multi-dimensional chain rule, we obtain
But on the one hand, we have by the fundamental theorem of calculus, that and therefore
and on the other hand
seeing that the difference quotients defining both sides are equal. And since on the third hand the remaining terms coincide, the second substep of the second part of the proof is finished.
We add the two results together, use the linearity of derivatives and see that the equation is indeed satisfied.
Initial value problem
Theorem and definition 2.4:

If $\varphi \in C^1(\mathbb{R}^d)$ and $f \in C^1(\mathbb{R} \times \mathbb{R}^d)$, then the function
$$u(t,x) := \varphi(x + t\mathbf{v}) + \int_0^t f(s, x + (t-s)\mathbf{v}) \, ds$$
is the unique solution of the initial value problem of the transport equation
$$\begin{cases} \partial_t u(t,x) - \mathbf{v} \cdot \nabla_x u(t,x) = f(t,x) \\ u(0,x) = \varphi(x) \end{cases}$$
Quite easily, $u(0,x) = \varphi(x)$. Therefore, and due to theorem 2.3, $u$ is a solution to the initial value problem of the transport equation. So we proceed to show uniqueness.
Assume that $\tilde{u}$ is an arbitrary other solution. We show that $\tilde{u} = u$, thereby excluding the possibility of a different solution.

We define $w := u - \tilde{u}$. Then
$$\partial_t w(t,x) - \mathbf{v} \cdot \nabla_x w(t,x) = 0 \quad \text{and} \quad w(0,x) = 0$$
Analogous to the proof of uniqueness of solutions for the one-dimensional homogeneous initial value problem of the transport equation in the first chapter, we define for arbitrary $(t,x) \in \mathbb{R} \times \mathbb{R}^d$,
$$\mu_{(t,x)}(s) := w(t - s, x + s\mathbf{v})$$
Using the multi-dimensional chain rule, we calculate $\mu_{(t,x)}'(s)$:
$$\mu_{(t,x)}'(s) = -\partial_t w(t - s, x + s\mathbf{v}) + \mathbf{v} \cdot \nabla_x w(t - s, x + s\mathbf{v}) = 0$$
Therefore, $\mu_{(t,x)}$ is constant for all $(t,x)$, and thus
$$w(t,x) = \mu_{(t,x)}(0) = \mu_{(t,x)}(t) = w(0, x + t\mathbf{v}) = 0$$
which shows that $w = 0$ and thus $u = \tilde{u}$.
- Let $\mathbf{v} \in \mathbb{R}^d$ and $f \in C^1(\mathbb{R} \times \mathbb{R}^d)$. Using Leibniz' integral rule, show that for all $(t,x) \in \mathbb{R} \times \mathbb{R}^d$ the derivative
$$\partial_t \int_0^t f(s, x + (t-s)\mathbf{v}) \, ds$$
is equal to
$$f(t,x) + \int_0^t \mathbf{v} \cdot \nabla_x f(s, x + (t-s)\mathbf{v}) \, ds$$
and therefore exists.
- Let and . Calculate .
Find the unique solution to the initial value problem
Before we dive deeply into the chapter, let's first motivate the notion of a test function. Let's consider two functions which are piecewise constant on the intervals $[0,1), [1,2), [2,3), [3,4), [4,5)$ and zero elsewhere; like, for example, these two:

Let's call the left function $f$, and the right function $g$.

Of course we can easily see that the two functions are different; they differ on one of the intervals. However, let's pretend that we are blind and our only way of finding out something about either function is evaluating the integrals
$$\int_{\mathbb{R}} f(x) \varphi(x) \, dx \quad \text{and} \quad \int_{\mathbb{R}} g(x) \varphi(x) \, dx$$
for functions $\varphi$ in a given set of functions $\mathcal{X}$.
We proceed with choosing $\mathcal{X}$ sufficiently cleverly such that five evaluations of both integrals suffice to show that $f \neq g$. To do so, we first introduce the characteristic function. Let $A$ be any set. The characteristic function of $A$ is defined as
$$\chi_A(x) := \begin{cases} 1 & x \in A \\ 0 & x \notin A \end{cases}$$
With this definition, we choose the set of functions as
$$\mathcal{X} := \{\chi_{[0,1)}, \chi_{[1,2)}, \chi_{[2,3)}, \chi_{[3,4)}, \chi_{[4,5)}\}$$
It is easy to see (see exercise 1) that for $i \in \{0, 1, 2, 3, 4\}$, the expression
$$\int_{\mathbb{R}} f(x) \chi_{[i,i+1)}(x) \, dx$$
equals the value of $f$ on the interval $[i, i+1)$, and the same is true for $g$. But as both functions are uniquely determined by their values on the intervals (since they are zero everywhere else), we can implement the following equality test:
$$f = g \iff \forall i \in \{0, \ldots, 4\} : \int_{\mathbb{R}} f(x) \chi_{[i,i+1)}(x) \, dx = \int_{\mathbb{R}} g(x) \chi_{[i,i+1)}(x) \, dx$$
This obviously needs five evaluations of each integral, as $|\mathcal{X}| = 5$.

Since we used the functions in $\mathcal{X}$ to test $f$ and $g$, we call them test functions. What we ask ourselves now is whether this notion generalises from functions like $f$ and $g$, which are piecewise constant on certain intervals and zero everywhere else, to continuous functions. The following chapter shows that this is true.
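The five-evaluation equality test described above can be carried out literally in code. The following sketch (the interval boundaries and the constant values are my own example choices) integrates the difference of two piecewise constant functions against the five characteristic functions and finds exactly where they disagree:

```python
def chi(a, b):
    """Characteristic function of the interval [a, b)."""
    return lambda x: 1.0 if a <= x < b else 0.0

def integral(h, lo=0.0, hi=5.0, n=5000):
    """Midpoint-rule approximation of the integral of h over [lo, hi]."""
    dx = (hi - lo) / n
    return sum(h(lo + (i + 0.5) * dx) for i in range(n)) * dx

# Two functions, piecewise constant on [0,1), ..., [4,5), zero elsewhere
# (example values chosen by me; the functions differ on [2, 3)):
vals_f = [1.0, 3.0, 2.0, 0.5, 4.0]
vals_g = [1.0, 3.0, 7.0, 0.5, 4.0]
f = lambda x: vals_f[int(x)] if 0.0 <= x < 5.0 else 0.0
g = lambda x: vals_g[int(x)] if 0.0 <= x < 5.0 else 0.0

# Integrating (f - g) against the five test functions locates the mismatch:
diffs = [abs(integral(lambda x: (f(x) - g(x)) * chi(i, i + 1)(x)))
         for i in range(5)]
print([round(d, 6) for d in diffs])  # [0.0, 0.0, 5.0, 0.0, 0.0]
```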
In order to write down the definition of a bump function more concisely, we need the following two definitions:
Let $O \subseteq \mathbb{R}^d$, and let $f : O \to \mathbb{R}$. We say that $f$ is smooth iff all the partial derivatives
$$\partial_\alpha f, \quad \alpha \in \mathbb{N}_0^d$$
exist in all points of $O$ and are continuous. We write $f \in C^\infty(O)$.

Let $f : O \to \mathbb{R}$ be a function. We define the support of $f$, $\operatorname{supp} f$, as follows:
$$\operatorname{supp} f := \overline{\{x \in O : f(x) \neq 0\}}$$
Now we are ready to define a bump function in a brief way:

$\varphi : \mathbb{R}^d \to \mathbb{R}$ is called a bump function iff $\varphi \in C^\infty(\mathbb{R}^d)$ and $\operatorname{supp} \varphi$ is compact. The set of all bump functions is denoted by $\mathcal{D}(\mathbb{R}^d)$.
These two properties make the function really look like a bump, as the following example shows:
The standard mollifier
Example 3.4: The standard mollifier $\eta : \mathbb{R}^d \to \mathbb{R}$, given by
$$\eta(x) := \frac{1}{c} \begin{cases} e^{\frac{1}{\|x\|^2 - 1}} & \|x\| < 1 \\ 0 & \|x\| \ge 1 \end{cases}$$
where $c := \int_{B_1(0)} e^{\frac{1}{\|x\|^2 - 1}} \, dx$, is a bump function (see exercise 2).
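In one dimension the standard mollifier can be evaluated numerically; the normalisation constant $c$ has no closed form, so the sketch below (all names are mine) approximates it by a midpoint rule:

```python
import math

def bump_shape(x):
    """exp(1/(x^2 - 1)) inside (-1, 1), 0 outside; here d = 1."""
    return math.exp(1.0 / (x * x - 1.0)) if abs(x) < 1.0 else 0.0

# Normalisation constant c, approximated by a midpoint rule over [-1, 1]:
n = 20000
dx = 2.0 / n
c = sum(bump_shape(-1.0 + (i + 0.5) * dx) for i in range(n)) * dx

def eta(x):
    """The one-dimensional standard mollifier: smooth, supported in [-1, 1],
    integrating to 1 (up to the quadrature error in c)."""
    return bump_shape(x) / c

print(eta(0.0) > eta(0.5) > 0.0)    # True: a bump centred at 0
print(eta(1.0) == 0.0 == eta(2.0))  # True: support contained in [-1, 1]
```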
As for the bump functions, in order to write down the definition of Schwartz functions concisely, we first need two helpful definitions.

Let $X$ be an arbitrary set, and let $f : X \to \mathbb{R}$ be a function. Then we define the supremum norm of $f$ as follows:
$$\|f\|_\infty := \sup_{x \in X} |f(x)|$$

For a vector $x \in \mathbb{R}^d$ and a $d$-dimensional multiindex $\alpha$, we define $x^\alpha$, $x$ to the power of $\alpha$, as follows:
$$x^\alpha := x_1^{\alpha_1} \cdots x_d^{\alpha_d}$$
Now we are ready to define a Schwartz function.
We call $\phi : \mathbb{R}^d \to \mathbb{R}$ a Schwartz function iff the following two conditions are satisfied:

- $\phi \in C^\infty(\mathbb{R}^d)$
- for all $d$-dimensional multiindices $\alpha, \beta$: $\|x^\alpha \partial_\beta \phi\|_\infty < \infty$

By $x^\alpha \partial_\beta \phi$ we mean the function $x \mapsto x^\alpha \partial_\beta \phi(x)$. The set of all Schwartz functions is denoted by $\mathcal{S}(\mathbb{R}^d)$.
For example, the Gaussian $x \mapsto e^{-\|x\|^2}$ is a Schwartz function.
Every bump function is also a Schwartz function.

This means for example that the standard mollifier is a Schwartz function.

Let $\varphi$ be a bump function. Then, by definition of a bump function, $\varphi \in C^\infty(\mathbb{R}^d)$. By the definition of bump functions, $\operatorname{supp} \varphi$ is compact, so we may choose $R > 0$ such that
$$\operatorname{supp} \varphi \subseteq \overline{B_R(0)}$$
as in $\mathbb{R}^d$, a set is compact iff it is closed & bounded. Further, for $\alpha, \beta$ arbitrary,
$$\|x^\alpha \partial_\beta \varphi\|_\infty = \sup_{\|x\| \le R} |x^\alpha \partial_\beta \varphi(x)| < \infty$$
since a continuous function is bounded on a compact set.
Convergence of bump and Schwartz functions
Now we define what convergence of a sequence of bump (Schwartz) functions to a bump (Schwartz) function means.
A sequence of bump functions $(\varphi_n)_{n \in \mathbb{N}}$ is said to converge to another bump function $\varphi$ iff the following two conditions are satisfied:

- There is a compact set $K \subset \mathbb{R}^d$ such that $\operatorname{supp} \varphi_n \subseteq K$ for all $n \in \mathbb{N}$
- For all multiindices $\alpha$, $\partial_\alpha \varphi_n \to \partial_\alpha \varphi$ uniformly

We say that the sequence of Schwartz functions $(\phi_n)_{n \in \mathbb{N}}$ converges to $\phi$ iff the following condition is satisfied:

- For all multiindices $\alpha, \beta$: $\|x^\alpha \partial_\beta (\phi_n - \phi)\|_\infty \to 0$ as $n \to \infty$
Let $(\varphi_n)_{n \in \mathbb{N}}$ be an arbitrary sequence of bump functions. If $\varphi_n \to \varphi$ with respect to the notion of convergence for bump functions, then also $\varphi_n \to \varphi$ with respect to the notion of convergence for Schwartz functions.

Let $O \subseteq \mathbb{R}^d$ be open, and let $(\varphi_n)_{n \in \mathbb{N}}$ be a sequence in $\mathcal{D}(O)$, the set of bump functions with support in $O$, such that $\varphi_n \to \varphi$ with respect to the notion of convergence of $\mathcal{D}(O)$. Let $K$ thus be the compact set in which all the $\operatorname{supp} \varphi_n$ are contained. From this it also follows that $\operatorname{supp} \varphi \subseteq K$, since otherwise $\|\varphi_n - \varphi\|_\infty \ge |b|$ for all $n$, where $b$ is any nonzero value $\varphi$ takes outside $K$; this would contradict $\varphi_n \to \varphi$ with respect to our notion of convergence.

In $\mathbb{R}^d$, ‘compact’ is equivalent to ‘bounded and closed’. Therefore, $K \subseteq \overline{B_R(0)}$ for an $R > 0$. Therefore, we have for all multiindices $\alpha, \beta$:
$$\|x^\alpha \partial_\beta (\varphi_n - \varphi)\|_\infty \le \sup_{\|x\| \le R} |x^\alpha| \, \|\partial_\beta \varphi_n - \partial_\beta \varphi\|_\infty \to 0, \quad n \to \infty$$
Therefore the sequence converges with respect to the notion of convergence for Schwartz functions.
The ‘testing’ property of test functions
In this section, we want to show that we can test equality of continuous functions by evaluating the integrals
$$\int_{\mathbb{R}^d} f(x) \varphi(x) \, dx$$
for all $\varphi \in \mathcal{D}(\mathbb{R}^d)$ (thus, evaluating the integrals for all $\phi \in \mathcal{S}(\mathbb{R}^d)$ will also suffice, as $\mathcal{D}(\mathbb{R}^d) \subset \mathcal{S}(\mathbb{R}^d)$ due to theorem 3.9).

But before we are able to show that, we need a modified mollifier, whose modification depends on a parameter, and two lemmas about that modified mollifier.

For $\delta > 0$, we define
$$\eta_\delta(x) := \frac{1}{\delta^d} \, \eta\!\left(\frac{x}{\delta}\right)$$
Let $\delta > 0$. Then
$$\operatorname{supp} \eta_\delta = \overline{B_\delta(0)}$$

From the definition of $\eta$ it follows that
$$\operatorname{supp} \eta = \overline{B_1(0)}$$
Therefore, and since
$$\eta_\delta(x) \neq 0 \iff \eta(x/\delta) \neq 0$$
we have:
$$\operatorname{supp} \eta_\delta = \delta \cdot \operatorname{supp} \eta = \overline{B_\delta(0)}$$
In order to prove the next lemma, we need the following theorem from integration theory:
Theorem 3.15: (Multi-dimensional integration by substitution)

If $O, U \subseteq \mathbb{R}^d$ are open, and $\psi : U \to O$ is a diffeomorphism, then
$$\int_O f(x) \, dx = \int_U f(\psi(y)) \, |\det J_\psi(y)| \, dy$$
We will omit the proof, as understanding it is not very important for understanding this wikibook.
Let $\delta > 0$. Then
$$\int_{\mathbb{R}^d} \eta_\delta(x) \, dx = 1$$
Now we are ready to prove the ‘testing’ property of test functions:
Let $f, g : \mathbb{R}^d \to \mathbb{R}$ be continuous. If
$$\forall \varphi \in \mathcal{D}(\mathbb{R}^d) : \int_{\mathbb{R}^d} f(x) \varphi(x) \, dx = \int_{\mathbb{R}^d} g(x) \varphi(x) \, dx$$
then $f = g$.

Let $x \in \mathbb{R}^d$ be arbitrary, and let $\epsilon > 0$. Since $f$ is continuous, there exists a $\delta > 0$ such that
$$\|y - x\| < \delta \Rightarrow |f(y) - f(x)| < \epsilon$$
Then we have, using that $y \mapsto \eta_\delta(x - y)$ integrates to $1$ and is supported in $B_\delta(x)$,
$$\left| \int_{\mathbb{R}^d} f(y) \eta_\delta(x - y) \, dy - f(x) \right| = \left| \int_{B_\delta(x)} (f(y) - f(x)) \, \eta_\delta(x - y) \, dy \right| \le \epsilon$$
Therefore, $\int_{\mathbb{R}^d} f(y) \eta_\delta(x - y) \, dy \to f(x)$ as $\delta \to 0$. An analogous reasoning also shows that $\int_{\mathbb{R}^d} g(y) \eta_\delta(x - y) \, dy \to g(x)$. But due to the assumption, we have
$$\int_{\mathbb{R}^d} f(y) \eta_\delta(x - y) \, dy = \int_{\mathbb{R}^d} g(y) \eta_\delta(x - y) \, dy$$
since $y \mapsto \eta_\delta(x - y)$ is a bump function. As limits in the reals are unique, it follows that $f(x) = g(x)$, and since $x$ was arbitrary, we obtain $f = g$.
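The mechanism of this proof, that $\int f(y) \eta_\delta(x - y) \, dy$ approaches $f(x)$ as $\delta \to 0$, can be observed numerically. A one-dimensional sketch (the sample function $f(y) = y^2$ and all quadrature parameters are my own choices):

```python
import math

bump_shape = lambda x: math.exp(1.0 / (x * x - 1.0)) if abs(x) < 1.0 else 0.0

# normalise numerically so that the mollifier eta integrates to 1 (d = 1)
n = 20000
c = sum(bump_shape(-1.0 + (i + 0.5) * 2.0 / n) for i in range(n)) * (2.0 / n)
eta = lambda x: bump_shape(x) / c

def mollified(f, x, delta, n=20000):
    """int f(y) * eta_delta(x - y) dy with eta_delta(z) = eta(z/delta)/delta,
    integrated over the support [x - delta, x + delta] by a midpoint rule."""
    dy = 2.0 * delta / n
    total = 0.0
    for i in range(n):
        y = x - delta + (i + 0.5) * dy
        total += f(y) * eta((x - y) / delta) / delta * dy
    return total

f = lambda y: y * y
# a small delta already recovers the point value f(0.5) = 0.25 closely:
print(abs(mollified(f, 0.5, 0.01) - 0.25) < 1e-3)  # True
```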
Let $f, g : \mathbb{R}^d \to \mathbb{R}$ be continuous. If
$$\forall \phi \in \mathcal{S}(\mathbb{R}^d) : \int_{\mathbb{R}^d} f(x) \phi(x) \, dx = \int_{\mathbb{R}^d} g(x) \phi(x) \, dx$$
then $f = g$.

This follows from all bump functions being Schwartz functions, which is why the requirements for theorem 3.17 are met.
Let and be constant on the interval . Show that
- Prove that the standard mollifier as defined in example 3.4 is a bump function by proceeding as follows:
Prove that the function
is contained in .
Prove that the function
is contained in .
- Conclude that .
- Prove that is compact by calculating explicitly.
- Let be open, let and let . Prove that if , then and .
- Let be open, let be bump functions and let . Prove that .
- Let $\phi_1, \phi_2$ be Schwartz functions and let $c \in \mathbb{R}$. Prove that $\phi_1 + c \phi_2$ is a Schwartz function.
- Let , let be a polynomial, and let in the sense of Schwartz functions. Prove that in the sense of Schwartz functions.
Distributions and tempered distributions
Let $O \subseteq \mathbb{R}^d$ be open, and let $T : \mathcal{D}(O) \to \mathbb{R}$ be a function, where $\mathcal{D}(O)$ denotes the set of bump functions with support contained in $O$. We call $T$ a distribution iff

- $T$ is linear ($T(a\varphi + b\vartheta) = aT(\varphi) + bT(\vartheta)$ for all $\varphi, \vartheta \in \mathcal{D}(O)$ and $a, b \in \mathbb{R}$)
- $T$ is sequentially continuous (if $\varphi_n \to \varphi$ in the notion of convergence of bump functions, then $T(\varphi_n) \to T(\varphi)$ in the reals)

The set of all distributions for $O$ we denote by $\mathcal{D}'(O)$.
Let $T : \mathcal{S}(\mathbb{R}^d) \to \mathbb{R}$ be a function. We call $T$ a tempered distribution iff

- $T$ is linear ($T(a\phi + b\theta) = aT(\phi) + bT(\theta)$ for all $\phi, \theta \in \mathcal{S}(\mathbb{R}^d)$ and $a, b \in \mathbb{R}$)
- $T$ is sequentially continuous (if $\phi_n \to \phi$ in the notion of convergence of Schwartz functions, then $T(\phi_n) \to T(\phi)$ in the reals)

The set of all tempered distributions we denote by $\mathcal{S}'(\mathbb{R}^d)$.
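Since a (tempered) distribution is nothing but a linear, sequentially continuous functional on test functions, the simplest concrete illustration is evaluation at a point, the delta distribution (not introduced in the text above; it is the standard first example). The sketch below checks the linearity condition on two sample inputs:

```python
import math

def delta(phi):
    """The delta distribution: evaluate a test function at the origin.
    (A standard example; it is NOT of the form 'integrate against f'.)"""
    return phi(0.0)

# Linearity check, T(a*phi1 + b*phi2) = a*T(phi1) + b*T(phi2):
phi1 = lambda x: math.exp(-x * x)        # Schwartz-like sample input
phi2 = lambda x: x * math.exp(-x * x)
a, b = 2.0, -3.0
lhs = delta(lambda x: a * phi1(x) + b * phi2(x))
rhs = a * delta(phi1) + b * delta(phi2)
print(lhs == rhs)  # True
```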
Let $T$ be a tempered distribution. Then the restriction of $T$ to bump functions is a distribution.

Let $T$ be a tempered distribution, and let $O \subseteq \mathbb{R}^d$ be open.

We show that $T$ has a well-defined value for every $\varphi \in \mathcal{D}(O)$.

Due to theorem 3.9, every bump function is a Schwartz function, which is why the expression
$$T(\varphi)$$
makes sense for every $\varphi \in \mathcal{D}(O)$.

We show that the restriction is linear.

Let $\varphi, \vartheta \in \mathcal{D}(O)$ and $a, b \in \mathbb{R}$. Since due to theorem 3.9 $\varphi$ and $\vartheta$ are Schwartz functions as well, we have
$$T(a\varphi + b\vartheta) = aT(\varphi) + bT(\vartheta)$$
due to the linearity of $T$ for all Schwartz functions. Thus $T$ is also linear for bump functions.

We show that the restriction of $T$ to $\mathcal{D}(O)$ is sequentially continuous. Let $\varphi_n \to \varphi$ in the notion of convergence of bump functions. Due to theorem 3.11, $\varphi_n \to \varphi$ in the notion of convergence of Schwartz functions. Since $T$ as a tempered distribution is sequentially continuous, $T(\varphi_n) \to T(\varphi)$.
Let $f, g : \mathbb{R}^d \to \mathbb{R}$ be functions. The integral
$$\int_{\mathbb{R}^d} f(x - y) g(y) \, dy$$
is called the convolution of $f$ and $g$ and denoted by $(f * g)(x)$ if it exists.

The convolution of two functions may not always exist, but there are sufficient conditions for it to exist:
Let $p, q \in [1, \infty]$ such that $\frac{1}{p} + \frac{1}{q} = 1$ and let $f \in L^p(\mathbb{R}^d)$ and $g \in L^q(\mathbb{R}^d)$. Then for all $x \in \mathbb{R}^d$, the integral
$$\int_{\mathbb{R}^d} f(x - y) g(y) \, dy$$
has a well-defined real value.

Due to Hölder's inequality,
$$\int_{\mathbb{R}^d} |f(x - y) g(y)| \, dy \le \|f(x - \cdot)\|_{L^p} \|g\|_{L^q} = \|f\|_{L^p} \|g\|_{L^q} < \infty$$
We shall now prove that the convolution is commutative, i. e. $f * g = g * f$.

Let $p, q \in [1, \infty]$ such that $\frac{1}{p} + \frac{1}{q} = 1$ and let $f \in L^p(\mathbb{R}^d)$ and $g \in L^q(\mathbb{R}^d)$. Then for all $x \in \mathbb{R}^d$:
$$(f * g)(x) = (g * f)(x)$$

We apply multi-dimensional integration by substitution using the diffeomorphism $y \mapsto x - y$ to obtain
$$(f * g)(x) = \int_{\mathbb{R}^d} f(x - y) g(y) \, dy = \int_{\mathbb{R}^d} f(y) g(x - y) \, dy = (g * f)(x)$$
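Commutativity can also be observed numerically. In the sketch below (two arbitrarily chosen, rapidly decaying functions; the quadrature is a plain midpoint rule), both orders of the convolution agree to within quadrature error:

```python
import math

def conv(f, g, x, lo=-10.0, hi=10.0, n=4000):
    """Midpoint-rule approximation of (f * g)(x) = int f(x - y) g(y) dy."""
    dy = (hi - lo) / n
    total = 0.0
    for i in range(n):
        y = lo + (i + 0.5) * dy
        total += f(x - y) * g(y) * dy
    return total

# Two rapidly decaying integrable functions (chosen arbitrarily):
f = lambda x: math.exp(-x * x)
g = lambda x: math.exp(-2.0 * (x - 1.0) ** 2)

x = 0.7
print(abs(conv(f, g, x) - conv(g, f, x)) < 1e-8)  # True
```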
Let $\varphi, \vartheta \in \mathcal{D}(\mathbb{R}^d)$ be bump functions. Then $\varphi * \vartheta \in C^\infty(\mathbb{R}^d)$.

Let $\alpha$ be an arbitrary multiindex. Then, since for all $x$ the integrand and its $x$-derivatives are continuous and supported in a fixed compact set,
Leibniz' integral rule (theorem 2.2) is applicable, and by repeated application of Leibniz' integral rule we obtain
$$\partial_\alpha (\varphi * \vartheta) = (\partial_\alpha \varphi) * \vartheta$$
In this section, we briefly study a class of distributions which we call regular distributions. In particular, we will see that for certain kinds of functions there exist corresponding distributions.
Let $O \subseteq \mathbb{R}^d$ be an open set and let $T \in \mathcal{D}'(O)$. If for all $\varphi \in \mathcal{D}(O)$, $T(\varphi)$ can be written as
$$T(\varphi) = \int_O f(x) \varphi(x) \, dx$$
for a function $f$ which is independent of $\varphi$, then we call $T$ a regular distribution.

Let $T \in \mathcal{S}'(\mathbb{R}^d)$. If for all $\phi \in \mathcal{S}(\mathbb{R}^d)$, $T(\phi)$ can be written as
$$T(\phi) = \int_{\mathbb{R}^d} f(x) \phi(x) \, dx$$
for a function $f$ which is independent of $\phi$, then we call $T$ a regular tempered distribution.
Two questions related to this definition could be asked: Given a function $f$, is $T_f$, for open $O \subseteq \mathbb{R}^d$ given by
$$T_f(\varphi) := \int_O f(x) \varphi(x) \, dx$$
well-defined and a distribution? Or is $T_f$ given by
$$T_f(\phi) := \int_{\mathbb{R}^d} f(x) \phi(x) \, dx$$
well-defined and a tempered distribution? In general, the answer to these two questions is no. But both questions can be answered with yes if the respective function $f$ has the right properties, as the following two theorems show. Before we state the first theorem, we have to define what local integrability means, because in the case of bump functions, local integrability is exactly the property which $f$ needs in order to define a corresponding regular distribution:
Let $O \subseteq \mathbb{R}^d$ be open, and let $f : O \to \mathbb{R}$ be a function. We say that $f$ is locally integrable iff for all compact subsets $K$ of $O$:
$$\int_K |f(x)| \, dx < \infty$$
We write $f \in L^1_{\text{loc}}(O)$.

Now we are ready to give some sufficient conditions on $f$ to define a corresponding regular distribution or regular tempered distribution by the way of
$$T_f(\varphi) := \int f(x) \varphi(x) \, dx$$
Let $O \subseteq \mathbb{R}^d$ be open, and let $f : O \to \mathbb{R}$ be a function. Then
$$T_f(\varphi) := \int_O f(x) \varphi(x) \, dx$$
is a regular distribution iff $f \in L^1_{\text{loc}}(O)$.
We show that if $f \in L^1_{\text{loc}}(O)$, then $T_f$ is a distribution.

Well-definedness follows from the triangle inequality of the integral and the monotonicity of the integral:
$$\left| \int_O f(x) \varphi(x) \, dx \right| \le \int_{\operatorname{supp} \varphi} |f(x)| |\varphi(x)| \, dx \le \|\varphi\|_\infty \int_{\operatorname{supp} \varphi} |f(x)| \, dx < \infty$$
In order to have an absolute value strictly less than infinity, the first integral must have a well-defined value in the first place. Therefore, $T_f$ really maps to $\mathbb{R}$ and well-definedness is proven.

Continuity follows similarly due to
$$|T_f(\varphi_n) - T_f(\varphi)| \le \|\varphi_n - \varphi\|_\infty \int_K |f(x)| \, dx \to 0, \quad n \to \infty$$
where $K$ is the compact set in which all the supports of the $\varphi_n$ and $\varphi$ are contained (remember: the existence of a compact set such that all the supports of the $\varphi_n$ are contained in it is a part of the definition of convergence in $\mathcal{D}(O)$, see the last chapter; as in the proof of theorem 3.11, we also conclude that the support of $\varphi$ is contained in $K$).

Linearity follows due to the linearity of the integral.
We show that if $T_f$ is a distribution, then $f \in L^1_{\text{loc}}(O)$. In fact, we even show that if $T_f(\varphi)$ has a well-defined real value for every $\varphi \in \mathcal{D}(O)$, then $f \in L^1_{\text{loc}}(O)$. Together with part 1 of this proof, which showed that if $f \in L^1_{\text{loc}}(O)$ then $T_f$ is a distribution, this gives: if $T_f(\varphi)$ is a well-defined real number for every $\varphi \in \mathcal{D}(O)$, then $T_f$ is a distribution.
Let be an arbitrary compact set. We define
is continuous, even Lipschitz continuous with Lipschitz constant : Let . Due to the triangle inequality, both
, which can be seen by applying the triangle inequality twice.
We choose sequences and in such that and and consider two cases. First, we consider what happens if . Then we have
Second, we consider what happens if :
Since always either or , we have proven Lipschitz continuity and thus continuity. By the extreme value theorem, therefore has a minimum . Since would mean that for a sequence in which is a contradiction as is closed and , we have .
Hence, if we define , then . Further, the function
has support contained in , is equal to within and further is contained in due to lemma 4.7. Hence, it is also contained in . Since therefore, by the monotonicity of the integral
, $f$ is indeed locally integrable.
Let $f \in L^p(\mathbb{R}^d)$ for a $p \in [1, \infty]$, i. e.
$$\|f\|_{L^p} < \infty$$
Then
$$T_f(\phi) := \int_{\mathbb{R}^d} f(x) \phi(x) \, dx$$
is a regular tempered distribution.
From Hölder's inequality we obtain
$$\int_{\mathbb{R}^d} |f(x) \phi(x)| \, dx \le \|f\|_{L^p} \|\phi\|_{L^q} < \infty$$
where $q$ is the conjugate exponent of $p$; note that every Schwartz function is contained in every $L^q$ space. Hence, $T_f$ is well-defined.

Due to the triangle inequality for integrals and Hölder's inequality, we have
$$|T_f(\phi_n) - T_f(\phi)| \le \|f\|_{L^p} \|\phi_n - \phi\|_{L^q}$$
If $\phi_n \to \phi$ in the notion of convergence of the Schwartz function space, then this expression goes to zero. Therefore, continuity is verified.

Linearity follows from the linearity of the integral.
We now introduce the concept of equicontinuity.
Let $X$ be a metric space equipped with a metric which we shall denote by $\operatorname{dist}$ here, let $Q \subseteq X$ be a set in $X$, and let $\mathcal{F}$ be a set of continuous functions mapping from $Q$ to the real numbers $\mathbb{R}$. We call this set equicontinuous if and only if
$$\forall \epsilon > 0 \, \exists \delta > 0 \, \forall f \in \mathcal{F} \, \forall x, y \in Q : \operatorname{dist}(x, y) < \delta \Rightarrow |f(x) - f(y)| < \epsilon$$
So equicontinuity is in fact defined for sets of continuous functions mapping from $Q$ (a set in a metric space) to the real numbers $\mathbb{R}$.

Let $X$ be a metric space equipped with a metric which we shall denote by $\operatorname{dist}$, let $Q \subseteq X$ be a sequentially compact set in $X$, and let $\mathcal{F}$ be an equicontinuous set of continuous functions from $Q$ to the real numbers $\mathbb{R}$. Then the following holds: if $(f_n)_{n \in \mathbb{N}}$ is a sequence in $\mathcal{F}$ such that $(f_n(x))_{n \in \mathbb{N}}$ has a limit for each $x \in Q$, then for the function $f(x) := \lim_{n \to \infty} f_n(x)$, which maps from $Q$ to $\mathbb{R}$, it follows that $f_n \to f$ uniformly.
In order to prove uniform convergence, by definition we must prove that for all $\epsilon > 0$, there exists an $N \in \mathbb{N}$ such that $|f_n(x) - f(x)| < \epsilon$ for all $n \ge N$ and all $x \in Q$.

So let's assume the contrary, which, by negating the logical statement, is equivalent to
$$\exists \epsilon > 0 \, \forall N \in \mathbb{N} \, \exists n \ge N \, \exists x \in Q : |f_n(x) - f(x)| \ge \epsilon$$
We choose a sequence $(x_m)_{m \in \mathbb{N}}$ in $Q$: We take $x_1$ such that $|f_{n_1}(x_1) - f(x_1)| \ge \epsilon$ for an arbitrarily chosen $n_1$, and if we have already chosen $x_k$ and $n_k$ for all $k \le m$, we choose $x_{m+1}$ such that $|f_{n_{m+1}}(x_{m+1}) - f(x_{m+1})| \ge \epsilon$, where $n_{m+1}$ is greater than $n_m$.

As $Q$ is sequentially compact, there is a convergent subsequence $(x_{m_j})_{j \in \mathbb{N}}$ of $(x_m)_{m \in \mathbb{N}}$. Let us call the limit of that subsequence $t$.

As $\mathcal{F}$ is equicontinuous, we can choose $\delta > 0$ such that
$$\forall g \in \mathcal{F} : \operatorname{dist}(x, y) < \delta \Rightarrow |g(x) - g(y)| < \frac{\epsilon}{4}$$
(passing to the pointwise limit, the same bound with $\le$ also holds for $f$). Further, since $f_n(t) \to f(t)$ (and $x_{m_j} \to t$, of course), we may choose $j$ such that
$$|f_{n_{m_j}}(t) - f(t)| < \frac{\epsilon}{4} \quad \text{and} \quad \operatorname{dist}(x_{m_j}, t) < \delta$$
But then follows, using the triangle inequality and the choices above:
$$|f_{n_{m_j}}(x_{m_j}) - f(x_{m_j})| \le |f_{n_{m_j}}(x_{m_j}) - f_{n_{m_j}}(t)| + |f_{n_{m_j}}(t) - f(t)| + |f(t) - f(x_{m_j})| < \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{4} = \frac{3\epsilon}{4}$$
Since we had $|f_{n_{m_j}}(x_{m_j}) - f(x_{m_j})| \ge \epsilon$, this is a contradiction. Thus the assumption is false, and uniform convergence is proven.
Let $\mathcal{F}$ be a set of differentiable functions, mapping from the convex set $Q \subseteq \mathbb{R}^d$ to $\mathbb{R}$. If there exists a constant $b > 0$ such that $\|\nabla f(x)\| \le b$ for all functions $f$ in $\mathcal{F}$ and all $x \in Q$ (the gradient $\nabla f$ exists for each function in $\mathcal{F}$ because all functions there were required to be differentiable), then $\mathcal{F}$ is equicontinuous.
Proof: We have to prove equicontinuity, so we have to prove
$$\forall \epsilon > 0 \, \exists \delta > 0 \, \forall f \in \mathcal{F} \, \forall x, y \in Q : \|x - y\| < \delta \Rightarrow |f(x) - f(y)| < \epsilon$$
Let $\epsilon > 0$ be arbitrary.

We choose $\delta := \frac{\epsilon}{b}$.

Let $x, y \in Q$ such that $\|x - y\| < \delta$, and let $f \in \mathcal{F}$ be arbitrary. By the mean-value theorem in multiple dimensions, we obtain that there exists a $\lambda \in [0, 1]$ such that:
$$f(x) - f(y) = \nabla f(\lambda x + (1 - \lambda) y) \cdot (x - y)$$
The element $\lambda x + (1 - \lambda) y$ is inside $Q$, because $Q$ is convex. From the Cauchy-Schwarz inequality then follows:
$$|f(x) - f(y)| \le \|\nabla f(\lambda x + (1 - \lambda) y)\| \, \|x - y\| < b \delta = \epsilon$$
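The lemma can be illustrated numerically: a family with a common derivative bound admits one $\delta$ for all members at once. In the sketch below, the family $f_c(x) = \sin(cx)/c$, whose derivatives $\cos(cx)$ are uniformly bounded by $1$, is my own example choice:

```python
import math

# A family with uniformly bounded derivatives: f_c(x) = sin(c x)/c has
# f_c'(x) = cos(c x), so |f_c'| <= 1 for every parameter c != 0.
def make_f(c):
    return lambda x, c=c: math.sin(c * x) / c

family = [make_f(c) for c in (1.0, 5.0, 25.0, 125.0)]

# Common Lipschitz bound |f(x) - f(y)| <= |x - y| gives equicontinuity:
# given eps, the SAME delta = eps works for every member of the family.
eps = 1e-3
delta = eps
ok = all(abs(f(x) - f(y)) < eps
         for f in family
         for x in (0.0, 0.3, 2.0)
         for y in (x + 0.5 * delta, x - 0.9 * delta))
print(ok)  # True
```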
The generalised product rule
If $\alpha, \beta$ are two $d$-dimensional multiindices, we define the binomial coefficient of $\alpha$ over $\beta$ as
$$\binom{\alpha}{\beta} := \binom{\alpha_1}{\beta_1} \binom{\alpha_2}{\beta_2} \cdots \binom{\alpha_d}{\beta_d}$$
We also define a less-or-equal relation on the set of multiindices.

Let $\alpha, \beta$ be two $d$-dimensional multiindices. We define $\beta$ to be less or equal than $\alpha$, $\beta \le \alpha$, if and only if
$$\forall i \in \{1, \ldots, d\} : \beta_i \le \alpha_i$$
For $d \ge 2$, there are vectors $\alpha, \beta$ such that neither $\alpha \le \beta$ nor $\beta \le \alpha$. For $d = 2$, the following two vectors are examples for this:
$$\alpha = (1, 0), \quad \beta = (0, 1)$$
This example can be generalised to higher dimensions (see exercise 6).
With these multiindex definitions, we are able to write down a more general version of the product rule. But in order to prove it, we need another lemma.
If $e_i := (0, \ldots, 0, 1, 0, \ldots, 0)$, where the $1$ is at the $i$-th place, we have
$$\binom{\alpha + e_i}{\beta} = \binom{\alpha}{\beta} + \binom{\alpha}{\beta - e_i}$$
for arbitrary multiindices $\alpha, \beta$ with $\beta_i \ge 1$.

For the ordinary binomial coefficients for natural numbers, we had the formula
$$\binom{n + 1}{k} = \binom{n}{k} + \binom{n}{k - 1}$$
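The multiindex recurrence can be checked directly, since the multiindex binomial coefficient is just a product of componentwise ordinary binomial coefficients. A small sketch (the multiindices are chosen arbitrarily):

```python
from math import comb

def multi_binom(alpha, beta):
    """Binomial coefficient of multiindices: the product of the
    componentwise ordinary binomial coefficients."""
    result = 1
    for a, b in zip(alpha, beta):
        result *= comb(a, b)
    return result

def shift(alpha, i, amount):
    """alpha + amount * e_i as a tuple (amount may be +1 or -1)."""
    return tuple(a + amount if j == i else a for j, a in enumerate(alpha))

# Pascal-type recurrence in the i-th component (requires beta_i >= 1):
alpha, beta, i = (3, 2, 4), (1, 2, 2), 2
lhs = multi_binom(shift(alpha, i, +1), beta)
rhs = multi_binom(alpha, beta) + multi_binom(alpha, shift(beta, i, -1))
print(lhs, rhs)  # 30 30
```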
This is the general product rule:

Let $f, g \in C^n(O)$ and let $\alpha$ be a $d$-dimensional multiindex with $|\alpha| \le n$. Then
$$\partial_\alpha (fg) = \sum_{\beta \le \alpha} \binom{\alpha}{\beta} \, \partial_\beta f \, \partial_{\alpha - \beta} g$$
We prove the claim by induction over $|\alpha|$.

We start with the induction base $|\alpha| = 0$. Then the formula just reads
$$fg = fg$$
and this is true. Therefore, we have completed the induction base.

Next, we do the induction step. Let's assume the claim is true for all $\alpha$ such that $|\alpha| = n$. Let now $\alpha$ be such that $|\alpha| = n + 1$. Let's choose