Partial Differential Equations/Print version

Introduction and first examples


What is a partial differential equation?


Let $d \in \mathbb{N}$ be a natural number, and let $O \subseteq \mathbb{R}^d$ be an arbitrary set. A partial differential equation on $O$ looks like this:

$$\forall x \in O : F\big(x, u(x), \partial_{x_1} u(x), \ldots, \partial_{x_d} u(x), \partial_{x_1} \partial_{x_1} u(x), \ldots\big) = 0$$

$F$ is an arbitrary function here, specific to the partial differential equation, which goes from $\mathbb{R}^k$ to $\mathbb{R}$, where $k$ is a natural number. And a solution to this partial differential equation on $O$ is a function $u : O \to \mathbb{R}$ satisfying the above logical statement. The solutions of some partial differential equations describe processes in nature; this is one reason why they are so important.

Multiindices


In the whole theory of partial differential equations, multiindices are extremely important. Only with their help are we able to write down certain formulas much more briefly.

Definitions 1.1:

A $d$-dimensional multiindex is a vector $\alpha = (\alpha_1, \ldots, \alpha_d) \in \mathbb{N}_0^d$, where $\mathbb{N}_0 := \mathbb{N} \cup \{0\}$ are the natural numbers and zero.

If $\alpha \in \mathbb{N}_0^d$ is a multiindex, then its absolute value $|\alpha|$ is defined by

$$|\alpha| := \alpha_1 + \cdots + \alpha_d$$

If $\alpha \in \mathbb{N}_0^d$ is a $d$-dimensional multiindex, $O \subseteq \mathbb{R}^d$ is an arbitrary set and $f : O \to \mathbb{R}$ is sufficiently often differentiable, we define $\partial_\alpha f$, the $\alpha$-th derivative of $f$, as follows:

$$\partial_\alpha f := \partial_{x_1}^{\alpha_1} \cdots \partial_{x_d}^{\alpha_d} f$$

Types of partial differential equations


We classify partial differential equations into several types, because partial differential equations of one type will require different solution techniques than differential equations of other types. We classify them into linear and nonlinear equations, and into equations of different orders.

Definitions 1.2:

A linear partial differential equation is an equation of the form

$$\forall x \in O : \sum_{\alpha \in \mathbb{N}_0^d} a_\alpha(x) \partial_\alpha u(x) = f(x)$$

, where only finitely many of the $a_\alpha$s are not the constant zero function. A solution takes the form of a function $u : O \to \mathbb{R}$. We have $a_\alpha : O \to \mathbb{R}$ for an arbitrary multiindex $\alpha$, $f : O \to \mathbb{R}$ is an arbitrary function and the sum in the formula is taken over all possible $d$-dimensional multiindices. If $f = 0$, the equation is called homogenous.

A partial differential equation is called nonlinear iff it is not a linear partial differential equation.

Definition 1.3:

Let $n \in \mathbb{N}$. We say that a partial differential equation has $n$-th order iff $n$ is the smallest number such that it is of the form

$$\forall x \in O : F\big(x, u(x), \partial_{\alpha_1} u(x), \ldots, \partial_{\alpha_k} u(x)\big) = 0, \quad |\alpha_1|, \ldots, |\alpha_k| \le n$$

First example of a partial differential equation


Now we are very curious what practical examples of partial differential equations look like after all.

Theorem and definition 1.4:

If $f : \mathbb{R} \to \mathbb{R}$ is a differentiable function and $c \in \mathbb{R}$, then the function

$$u : \mathbb{R} \times \mathbb{R} \to \mathbb{R}, \quad u(t, x) := f(x - ct)$$

solves the one-dimensional homogenous transport equation

$$\forall (t, x) \in \mathbb{R} \times \mathbb{R} : \partial_t u(t, x) + c \, \partial_x u(t, x) = 0$$

Proof: Exercise 2.

We therefore see that the one-dimensional transport equation has many different solutions; one for each continuously differentiable function $f$ in existence. However, if we require the solution to have a specific initial state, the solution becomes unique.
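Theorem 1.4 can also be made concrete with a quick numerical experiment. The sketch below is not from the text; it assumes the usual solution formula $u(t, x) = f(x - ct)$ for the equation $\partial_t u + c\,\partial_x u = 0$ and checks the equation at one point with central finite differences.

```python
# Sanity check by finite differences (illustrative, not part of the text):
# u(t, x) = f(x - c*t) should satisfy ∂_t u + c ∂_x u = 0 at every point,
# up to discretisation error.
import math

c = 2.0                      # transport speed (arbitrary choice)
f = math.sin                 # any continuously differentiable profile

def u(t, x):
    return f(x - c * t)

h = 1e-6
t0, x0 = 0.7, 1.3            # arbitrary evaluation point
du_dt = (u(t0 + h, x0) - u(t0 - h, x0)) / (2 * h)
du_dx = (u(t0, x0 + h) - u(t0, x0 - h)) / (2 * h)
residual = du_dt + c * du_dx
print(abs(residual) < 1e-6)  # True
```

Any other differentiable profile $f$ and speed $c$ would give the same result, which mirrors the remark above that there is one solution per function $f$.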

Theorem and definition 1.5:

If $f : \mathbb{R} \to \mathbb{R}$ is a differentiable function and $c \in \mathbb{R}$, then the function

$$u(t, x) := f(x - ct)$$

is the unique solution to the initial value problem for the one-dimensional homogenous transport equation

$$\begin{cases} \partial_t u(t, x) + c \, \partial_x u(t, x) = 0 \\ u(0, x) = f(x) \end{cases}$$

Proof:

Surely $u(0, x) = f(x - c \cdot 0) = f(x)$. Further, theorem 1.4 shows that also $\partial_t u + c \, \partial_x u = 0$.

Now suppose we have an arbitrary other solution to the initial value problem. Let's name it $v$. Then for all $(t, x) \in \mathbb{R} \times \mathbb{R}$, the function

$$\mu_{(t,x)}(s) := v(t - s, x - cs)$$

is constant:

$$\mu_{(t,x)}'(s) = -\partial_t v(t - s, x - cs) - c \, \partial_x v(t - s, x - cs) = 0$$

Therefore, in particular

$$\mu_{(t,x)}(0) = \mu_{(t,x)}(t)$$

, which means, inserting the definition of $\mu_{(t,x)}$, that

$$v(t, x) = v(0, x - ct) = f(x - ct)$$

, which shows that $v = u$. Since $v$ was an arbitrary solution, this shows uniqueness.

In the next chapter, we will consider the inhomogenous arbitrary-dimensional transport equation.

Exercises

  1. Have a look at the definition of an ordinary differential equation (see for example the Wikipedia page on that) and show that every ordinary differential equation is a partial differential equation.
  2. Prove Theorem 1.4 using direct calculation.
  3. What is the order of the transport equation?
  4. Find a function such that and .

Sources

  • Martin Brokate (2011/2012), Partielle Differentialgleichungen, Vorlesungsskript (PDF) (in German)
  • Daniel Matthes (2013/2014), Partial Differential Equations, lecture notes

The transport equation


In the first chapter, we had already seen the one-dimensional transport equation. In this chapter we will see that we can quite easily generalise the solution method and the uniqueness proof we used there to multiple dimensions. Let $d \in \mathbb{N}$. The inhomogenous $d$-dimensional transport equation looks like this:

$$\forall (t, x) \in \mathbb{R} \times \mathbb{R}^d : \partial_t u(t, x) + \mathbf{v} \cdot \nabla_x u(t, x) = f(t, x)$$

, where $f : \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}$ is a function and $\mathbf{v} \in \mathbb{R}^d$ is a vector.

Solution


The following definition will become a useful shorthand notation on many occasions. Since we can use it right from the beginning of this chapter, we start with it.

Definition 2.1:

Let $O \subseteq \mathbb{R}^d$ be open, let $f : O \to \mathbb{R}$ be a function and let $n \in \mathbb{N}$. We say that $f$ is $n$ times continuously differentiable iff all the partial derivatives

$$\partial_\alpha f, \quad |\alpha| \le n$$

exist and are continuous. We write $f \in \mathcal{C}^n(O)$.

Before we prove a solution formula for the transport equation, we need a theorem from analysis which will play a crucial role in the proof of the solution formula.

Theorem 2.2: (Leibniz' integral rule)

Let $O \subseteq \mathbb{R}^d$ be open and let $f : O \times X \to \mathbb{R}$, where $X$ is an arbitrary measure space, and let $i \in \{1, \ldots, d\}$. If the conditions

  • for all $x \in O$, the function $t \mapsto f(x, t)$ is integrable,
  • for all $t \in X$ and $x \in O$, $\partial_{x_i} f(x, t)$ exists,
  • there is an integrable function $g : X \to \mathbb{R}$ such that $|\partial_{x_i} f(x, t)| \le g(t)$ for all $x \in O$ and $t \in X$

hold, then

$$\partial_{x_i} \int_X f(x, t) \, dt = \int_X \partial_{x_i} f(x, t) \, dt$$

We will omit the proof.

Theorem 2.3: If $f \in \mathcal{C}^1(\mathbb{R} \times \mathbb{R}^d)$, $g \in \mathcal{C}^1(\mathbb{R}^d)$ and $\mathbf{v} \in \mathbb{R}^d$, then the function

$$u(t, x) := g(x - \mathbf{v} t) + \int_0^t f(s, x + (s - t)\mathbf{v}) \, ds$$

solves the inhomogenous $d$-dimensional transport equation

$$\forall (t, x) \in \mathbb{R} \times \mathbb{R}^d : \partial_t u(t, x) + \mathbf{v} \cdot \nabla_x u(t, x) = f(t, x)$$

Note that, as in chapter 1, there are many solutions, one for each continuously differentiable $g$ in existence.

Proof:

1.

We show that $u$ is sufficiently often differentiable. From the chain rule it follows that $(t, x) \mapsto g(x - \mathbf{v} t)$ is continuously differentiable in all directions. The existence of

follows from the Leibniz integral rule (see exercise 1). The expression

we will later in this proof show to be equal to

,

which exists because

just consists of the derivatives

2.

We show that

in three substeps.

2.1

We show that

This is left to the reader as an exercise in the application of the multi-dimensional chain rule (see exercise 2).

2.2

We show that

We choose

so that we have

By the multi-dimensional chain rule, we obtain

But on the one hand, we have by the fundamental theorem of calculus, that and therefore

and on the other hand

, seeing that the differential quotients in the definitions of both sides are equal. And since in addition

, the second substep of the second part of the proof is finished.

2.3

We add the results of 2.1 and 2.2 together, use the linearity of derivatives and see that the equation is satisfied.
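The solution formula of theorem 2.3 can be spot-checked numerically. The following sketch is my own illustration; it assumes the standard formula $u(t, x) = g(x - t\mathbf{v}) + \int_0^t f(s, x + (s - t)\mathbf{v}) \, ds$ and checks the inhomogenous transport equation in dimension $d = 2$ with a midpoint-rule integral and finite differences.

```python
# Check ∂_t u + v·∇_x u = f for the candidate solution, d = 2 (illustrative).
import math

v = (1.0, -0.5)                               # transport velocity

def f(t, x):                                  # inhomogeneity (smooth, arbitrary)
    return math.sin(t + x[0]) * math.cos(x[1])

def g(x):                                     # initial data (smooth, arbitrary)
    return math.exp(-(x[0] ** 2 + x[1] ** 2))

def u(t, x, n=500):
    # g(x - t v) plus a midpoint-rule approximation of the integral term
    head = g((x[0] - t * v[0], x[1] - t * v[1]))
    ss = [t * (k + 0.5) / n for k in range(n)]
    tail = sum(f(s, (x[0] + (s - t) * v[0], x[1] + (s - t) * v[1])) for s in ss) * (t / n)
    return head + tail

h = 1e-4
t0, x0 = 0.8, (0.3, -0.2)
du_dt = (u(t0 + h, x0) - u(t0 - h, x0)) / (2 * h)
du_dx1 = (u(t0, (x0[0] + h, x0[1])) - u(t0, (x0[0] - h, x0[1]))) / (2 * h)
du_dx2 = (u(t0, (x0[0], x0[1] + h)) - u(t0, (x0[0], x0[1] - h))) / (2 * h)
residual = du_dt + v[0] * du_dx1 + v[1] * du_dx2 - f(t0, x0)
print(abs(residual) < 1e-3)  # True
```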

Initial value problem


Theorem and definition 2.4: If $f \in \mathcal{C}^1(\mathbb{R} \times \mathbb{R}^d)$ and $g \in \mathcal{C}^1(\mathbb{R}^d)$, then the function

$$u(t, x) := g(x - \mathbf{v} t) + \int_0^t f(s, x + (s - t)\mathbf{v}) \, ds$$

is the unique solution of the initial value problem of the transport equation

$$\begin{cases} \partial_t u(t, x) + \mathbf{v} \cdot \nabla_x u(t, x) = f(t, x) \\ u(0, x) = g(x) \end{cases}$$

Proof:

Quite easily, $u(0, x) = g(x)$. Therefore, and due to theorem 2.3, $u$ is a solution to the initial value problem of the transport equation. So we proceed to show uniqueness.

Assume that $v$ is an arbitrary other solution. We show that $v = u$, thereby excluding the possibility of a different solution.

We define $w := u - v$. Then

$$\begin{cases} \partial_t w(t, x) + \mathbf{v} \cdot \nabla_x w(t, x) = 0 \\ w(0, x) = 0 \end{cases}$$

Analogous to the proof of uniqueness of solutions for the one-dimensional homogenous initial value problem of the transport equation in the first chapter, we define for arbitrary $(t, x) \in \mathbb{R} \times \mathbb{R}^d$,

$$\mu_{(t,x)}(s) := w(t - s, x - s\mathbf{v})$$

Using the multi-dimensional chain rule, we calculate $\mu_{(t,x)}'(s)$:

$$\mu_{(t,x)}'(s) = -\partial_t w(t - s, x - s\mathbf{v}) - \mathbf{v} \cdot \nabla_x w(t - s, x - s\mathbf{v}) = 0$$

Therefore, $\mu_{(t,x)}$ is constant for all $(t, x)$, and thus

$$w(t, x) = \mu_{(t,x)}(0) = \mu_{(t,x)}(t) = w(0, x - t\mathbf{v}) = 0$$

, which shows that $w = 0$ and thus $u = v$.

Exercises

  1. Let and . Using Leibniz' integral rule, show that for all the derivative

    is equal to

    and therefore exists.

  2. Let and . Calculate .
  3. Find the unique solution to the initial value problem

    .


Test functions


Motivation


Before we dive deeply into the chapter, let's first motivate the notion of a test function. Let's consider two functions which are piecewise constant on the intervals $[0,1), [1,2), [2,3), [3,4), [4,5)$ and zero elsewhere; like, for example, these two:

Let's call the left function $f$, and the right function $g$.

Of course we can easily see that the two functions are different; they differ on one of the intervals; however, let's pretend that we are blind and our only way of finding out something about either function is evaluating the integrals

$$\int_{\mathbb{R}} f(x)\varphi(x)\,dx$$

and

$$\int_{\mathbb{R}} g(x)\varphi(x)\,dx$$

for functions $\varphi$ in a given set of functions $\mathcal{X}$.

We proceed with choosing $\mathcal{X}$ sufficiently cleverly such that five evaluations of both integrals suffice to show that $f \neq g$. To do so, we first introduce the characteristic function. Let $A \subseteq \mathbb{R}$ be any set. The characteristic function of $A$ is defined as

$$\chi_A(x) := \begin{cases} 1 & x \in A \\ 0 & x \notin A \end{cases}$$

With this definition, we choose the set of functions as

$$\mathcal{X} := \{\chi_{[j, j+1)} : j \in \{0, 1, 2, 3, 4\}\}$$

It is easy to see (see exercise 1), that for $j \in \{0, 1, 2, 3, 4\}$, the expression

$$\int_{\mathbb{R}} f(x) \chi_{[j, j+1)}(x)\,dx$$

equals the value of $f$ on the interval $[j, j+1)$, and the same is true for $g$. But as both functions are uniquely determined by their values on the intervals (since they are zero everywhere else), we can implement the following equality test:

$$f = g \iff \forall \varphi \in \mathcal{X} : \int_{\mathbb{R}} f(x)\varphi(x)\,dx = \int_{\mathbb{R}} g(x)\varphi(x)\,dx$$

This obviously needs five evaluations of each integral, as $|\mathcal{X}| = 5$.

Since we used the functions in $\mathcal{X}$ to test $f$ and $g$, we call them test functions. What we ask ourselves now is if this notion generalises from functions like $f$ and $g$, which are piecewise constant on certain intervals and zero everywhere else, to continuous functions. The following sections show that this is true.
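The 'blind testing' idea above can be sketched in a few lines of code. The step values below are invented for illustration; the point is only that five integrals against the characteristic functions $\chi_{[j,j+1)}$ recover the five step values and hence decide equality.

```python
# For a function that is constant on [j, j+1) and zero elsewhere, the integral
# against χ_[j, j+1) is just its value on that interval (interval length 1).
def integral_against_indicator(step_values, j):
    return step_values[j] * 1.0

f_vals = [1, 3, 2, 2, 0]     # hypothetical values of f on [0,1), ..., [4,5)
g_vals = [1, 3, 5, 2, 0]     # hypothetical values of g; they differ on [2,3)

equal = all(integral_against_indicator(f_vals, j) == integral_against_indicator(g_vals, j)
            for j in range(5))
print(equal)  # False
```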

Bump functions


In order to write down the definition of a bump function more concisely, we need the following two definitions:

Definition 3.1:

Let $O \subseteq \mathbb{R}^d$ be open, and let $f : O \to \mathbb{R}$. We say that $f$ is smooth iff all the partial derivatives

$$\partial_\alpha f, \quad \alpha \in \mathbb{N}_0^d$$

exist in all points of $O$ and are continuous. We write $f \in \mathcal{C}^\infty(O)$.

Definition 3.2:

Let $f : \mathbb{R}^d \to \mathbb{R}$. We define the support of $f$, $\operatorname{supp} f$, as follows:

$$\operatorname{supp} f := \overline{\{x \in \mathbb{R}^d : f(x) \neq 0\}}$$

Now we are ready to define a bump function in a brief way:

Definition 3.3:

$\varphi : \mathbb{R}^d \to \mathbb{R}$ is called a bump function iff $\varphi \in \mathcal{C}^\infty(\mathbb{R}^d)$ and $\operatorname{supp} \varphi$ is compact. The set of all bump functions is denoted by $\mathcal{D}(\mathbb{R}^d)$.

These two properties make the function really look like a bump, as the following example shows:

The standard mollifier in dimension $d = 1$

Example 3.4: The standard mollifier $\eta : \mathbb{R}^d \to \mathbb{R}$, given by

$$\eta(x) := \frac{1}{c} \begin{cases} e^{\frac{1}{\|x\|^2 - 1}} & \|x\| < 1 \\ 0 & \|x\| \ge 1 \end{cases}$$

, where $c := \int_{B_1(0)} e^{\frac{1}{\|x\|^2 - 1}} \, dx$, is a bump function (see exercise 2).
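A one-dimensional numerical sketch of the standard mollifier (my own illustration; the normalising constant $c$ is approximated here by a midpoint rule rather than computed in closed form):

```python
# η(x) = (1/c)·exp(1/(x² − 1)) for |x| < 1 and 0 otherwise, with c chosen
# so that η integrates to 1 over ℝ.
import math

def eta_unnormalised(x):
    return math.exp(1.0 / (x * x - 1.0)) if abs(x) < 1 else 0.0

n = 20000                    # midpoint rule on (−1, 1)
grid = [-1 + 2 * (k + 0.5) / n for k in range(n)]
c = sum(eta_unnormalised(x) for x in grid) * (2 / n)

def eta(x):
    return eta_unnormalised(x) / c

print(eta(2.0))                                       # 0.0 (compact support)
print(round(sum(eta(x) for x in grid) * (2 / n), 6))  # 1.0 (unit integral)
```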

Schwartz functions


As for the bump functions, in order to write down the definition of Schwartz functions concisely, we first need two helpful definitions.

Definition 3.5:

Let $X$ be an arbitrary set, and let $f : X \to \mathbb{R}$ be a function. Then we define the supremum norm of $f$ as follows:

$$\|f\|_\infty := \sup_{x \in X} |f(x)|$$

Definition 3.6:

For a vector $x \in \mathbb{R}^d$ and a $d$-dimensional multiindex $\alpha$ we define $x^\alpha$, $x$ to the power of $\alpha$, as follows:

$$x^\alpha := x_1^{\alpha_1} \cdots x_d^{\alpha_d}$$

Now we are ready to define a Schwartz function.

Definition 3.7:

We call $\phi : \mathbb{R}^d \to \mathbb{R}$ a Schwartz function iff the following two conditions are satisfied:

  1. $\phi \in \mathcal{C}^\infty(\mathbb{R}^d)$
  2. $\forall \alpha, \beta \in \mathbb{N}_0^d : \|x^\alpha \partial_\beta \phi\|_\infty < \infty$

By $x^\alpha \partial_\beta \phi$ we mean the function $x \mapsto x^\alpha \partial_\beta \phi(x)$. The set of all Schwartz functions is denoted by $\mathcal{S}(\mathbb{R}^d)$.

Example 3.8: The function

$$\phi : \mathbb{R} \to \mathbb{R}, \quad \phi(x) := e^{-x^2}$$

is a Schwartz function.
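A classic example of a Schwartz function (presumably the one meant in example 3.8) is $x \mapsto e^{-x^2}$. The following sketch, not from the text, illustrates numerically that its decay beats any polynomial growth:

```python
# For f(x) = exp(−x²), the products x^k · f(x) stay bounded and tend to 0
# as |x| grows, for every fixed power k.
import math

def f(x):
    return math.exp(-x * x)

for k in (1, 5, 10):
    tail = max(abs(x ** k * f(x)) for x in (10.0, 20.0, 50.0))
    print(tail < 1e-10)  # True for each k
```

In contrast, $x \mapsto 1/(1 + x^2)$ is smooth and bounded but fails the same test already for $k = 2$, so it is not a Schwartz function.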

Theorem 3.9:

Every bump function is also a Schwartz function.

This means for example that the standard mollifier is a Schwartz function.

Proof:

Let $\varphi$ be a bump function. Then, by definition of a bump function, $\varphi \in \mathcal{C}^\infty(\mathbb{R}^d)$. By the definition of bump functions, we choose $R > 0$ such that

$$\operatorname{supp} \varphi \subseteq \overline{B_R(0)}$$

, as in $\mathbb{R}^d$, a set is compact iff it is closed & bounded. Further, for $\alpha, \beta \in \mathbb{N}_0^d$ arbitrary,

$$\|x^\alpha \partial_\beta \varphi\|_\infty = \sup_{x \in \overline{B_R(0)}} |x^\alpha \partial_\beta \varphi(x)| \le R^{|\alpha|} \sup_{x \in \overline{B_R(0)}} |\partial_\beta \varphi(x)| < \infty$$

Convergence of bump and Schwartz functions


Now we define what convergence of a sequence of bump (Schwartz) functions to a bump (Schwartz) function means.

Definition 3.10:

A sequence of bump functions $(\varphi_n)_{n \in \mathbb{N}}$ is said to converge to another bump function $\varphi$ iff the following two conditions are satisfied:

  1. There is a compact set $K \subset \mathbb{R}^d$ such that $\operatorname{supp} \varphi_n \subseteq K$ for all $n \in \mathbb{N}$.
  2. For all $\alpha \in \mathbb{N}_0^d$: $\|\partial_\alpha \varphi_n - \partial_\alpha \varphi\|_\infty \to 0$ as $n \to \infty$.

Definition 3.11:

We say that the sequence of Schwartz functions $(\phi_n)_{n \in \mathbb{N}}$ converges to $\phi$ iff the following condition is satisfied:

$$\forall \alpha, \beta \in \mathbb{N}_0^d : \|x^\alpha \partial_\beta (\phi_n - \phi)\|_\infty \to 0, \quad n \to \infty$$

Theorem 3.12:

Let $(\varphi_n)_{n \in \mathbb{N}}$ be an arbitrary sequence of bump functions. If $\varphi_n \to \varphi$ with respect to the notion of convergence for bump functions, then also $\varphi_n \to \varphi$ with respect to the notion of convergence for Schwartz functions.

Proof:

Let $(\varphi_n)_{n \in \mathbb{N}}$ be a sequence in $\mathcal{D}(\mathbb{R}^d)$ such that $\varphi_n \to \varphi$ with respect to the notion of convergence of $\mathcal{D}(\mathbb{R}^d)$. Let thus $K$ be the compact set in which all the $\operatorname{supp} \varphi_n$ are contained. From this it also follows that $\operatorname{supp} \varphi \subseteq K$, since otherwise $\|\varphi_n - \varphi\|_\infty \ge |c| > 0$ for all $n$, where $c$ is any nonzero value $\varphi$ takes outside $K$; this would contradict $\varphi_n \to \varphi$ with respect to our notion of convergence.

In $\mathbb{R}^d$, 'compact' is equivalent to 'bounded and closed'. Therefore, $K \subseteq \overline{B_R(0)}$ for an $R > 0$. Therefore, we have for all multiindices $\alpha, \beta \in \mathbb{N}_0^d$:

$$\|x^\alpha \partial_\beta (\varphi_n - \varphi)\|_\infty \le R^{|\alpha|} \|\partial_\beta \varphi_n - \partial_\beta \varphi\|_\infty \to 0, \quad n \to \infty$$

Therefore the sequence converges with respect to the notion of convergence for Schwartz functions.

The ‘testing’ property of test functions


In this section, we want to show that we can test equality of continuous functions by evaluating the integrals

$$\int_{\mathbb{R}^d} f(x)\varphi(x)\,dx$$

and

$$\int_{\mathbb{R}^d} g(x)\varphi(x)\,dx$$

for all $\varphi \in \mathcal{D}(\mathbb{R}^d)$ (thus, evaluating the integrals for all $\phi \in \mathcal{S}(\mathbb{R}^d)$ will also suffice, as $\mathcal{D}(\mathbb{R}^d) \subset \mathcal{S}(\mathbb{R}^d)$ due to theorem 3.9).

But before we are able to show that, we need a modified mollifier, where the modification is dependent on a parameter, and two lemmas about that modified mollifier.

Definition 3.13:

For $\delta > 0$, we define

$$\eta_\delta(x) := \frac{1}{\delta^d} \eta\left(\frac{x}{\delta}\right)$$.

Lemma 3.14:

Let $\delta > 0$. Then

$$\operatorname{supp} \eta_\delta = \overline{B_\delta(0)}$$.

Proof:

From the definition of follows

.

Further, for

Therefore, and since

, we have:

In order to prove the next lemma, we need the following theorem from integration theory:

Theorem 3.15: (Multi-dimensional integration by substitution)

If $O, U \subseteq \mathbb{R}^d$ are open, and $\psi : O \to U$ is a diffeomorphism, then

$$\int_U f(y)\,dy = \int_O f(\psi(x)) \, |\det J_\psi(x)|\,dx$$

for every integrable $f : U \to \mathbb{R}$.

We will omit the proof, as understanding it is not very important for understanding this wikibook.

Lemma 3.16:

Let $\delta > 0$. Then

$$\int_{\mathbb{R}^d} \eta_\delta(x)\,dx = 1$$.

Proof:

Now we are ready to prove the ‘testing’ property of test functions:

Theorem 3.17:

Let $f, g : \mathbb{R}^d \to \mathbb{R}$ be continuous. If

$$\forall \varphi \in \mathcal{D}(\mathbb{R}^d) : \int_{\mathbb{R}^d} f(x)\varphi(x)\,dx = \int_{\mathbb{R}^d} g(x)\varphi(x)\,dx$$,

then $f = g$.

Proof:

Let $x \in \mathbb{R}^d$ be arbitrary, and let $\epsilon > 0$. Since $f$ is continuous, there exists a $\delta > 0$ such that

$$\forall y \in B_\delta(x) : |f(y) - f(x)| < \epsilon$$

Then we have

$$\left| f(x) - \int_{\mathbb{R}^d} f(y)\eta_\delta(x - y)\,dy \right| = \left| \int_{\mathbb{R}^d} \big(f(x) - f(y)\big)\eta_\delta(x - y)\,dy \right| \le \epsilon$$

Therefore, $\lim_{\delta \downarrow 0} \int_{\mathbb{R}^d} f(y)\eta_\delta(x - y)\,dy = f(x)$. An analogous reasoning also shows that $\lim_{\delta \downarrow 0} \int_{\mathbb{R}^d} g(y)\eta_\delta(x - y)\,dy = g(x)$. But due to the assumption, we have

$$\forall \delta > 0 : \int_{\mathbb{R}^d} f(y)\eta_\delta(x - y)\,dy = \int_{\mathbb{R}^d} g(y)\eta_\delta(x - y)\,dy$$

As limits in the reals are unique, it follows that $f(x) = g(x)$, and since $x$ was arbitrary, we obtain $f = g$.

Remark 3.18: Let $f, g : \mathbb{R}^d \to \mathbb{R}$ be continuous. If

$$\forall \phi \in \mathcal{S}(\mathbb{R}^d) : \int_{\mathbb{R}^d} f(x)\phi(x)\,dx = \int_{\mathbb{R}^d} g(x)\phi(x)\,dx$$,

then $f = g$.

Proof:

This follows since all bump functions are Schwartz functions, which is why the requirements for theorem 3.17 are met.

Exercises

  1. Let and be constant on the interval . Show that

  2. Prove that the standard mollifier as defined in example 3.4 is a bump function by proceeding as follows:
    1. Prove that the function

      is contained in .

    2. Prove that the function

      is contained in .

    3. Conclude that .
    4. Prove that is compact by calculating explicitly.
  3. Let be open, let and let . Prove that if , then and .
  4. Let be open, let be bump functions and let . Prove that .
  5. Let be Schwartz functions and let . Prove that is a Schwartz function.
  6. Let , let be a polynomial, and let in the sense of Schwartz functions. Prove that in the sense of Schwartz functions.

Distributions


Distributions and tempered distributions


Definition 4.1:

Let $O \subseteq \mathbb{R}^d$ be open, and let $T : \mathcal{D}(O) \to \mathbb{R}$ be a function. We call $T$ a distribution iff

  • $T$ is linear ($T(a\varphi + b\vartheta) = aT(\varphi) + bT(\vartheta)$ for all $a, b \in \mathbb{R}$ and $\varphi, \vartheta \in \mathcal{D}(O)$)
  • $T$ is sequentially continuous (if $\varphi_n \to \varphi$ in the notion of convergence of bump functions, then $T(\varphi_n) \to T(\varphi)$ in the reals)

The set of all distributions for $O$ we denote by $\mathcal{D}'(O)$.

Definition 4.2:

Let $T : \mathcal{S}(\mathbb{R}^d) \to \mathbb{R}$ be a function. We call $T$ a tempered distribution iff

  • $T$ is linear ($T(a\phi + b\theta) = aT(\phi) + bT(\theta)$ for all $a, b \in \mathbb{R}$ and $\phi, \theta \in \mathcal{S}(\mathbb{R}^d)$)
  • $T$ is sequentially continuous (if $\phi_n \to \phi$ in the notion of convergence of Schwartz functions, then $T(\phi_n) \to T(\phi)$ in the reals)

The set of all tempered distributions we denote by $\mathcal{S}'(\mathbb{R}^d)$.

Theorem 4.3:

Let $T$ be a tempered distribution and let $O \subseteq \mathbb{R}^d$ be open. Then the restriction of $T$ to the bump functions $\mathcal{D}(O)$ is a distribution.

Proof:

Let $T$ be a tempered distribution, and let $O \subseteq \mathbb{R}^d$ be open.

1.

We show that $T$ has a well-defined value for every $\varphi \in \mathcal{D}(O)$.

Due to theorem 3.9, every bump function is a Schwartz function, which is why the expression

$$T(\varphi)$$

makes sense for every $\varphi \in \mathcal{D}(O)$.

2.

We show that the restriction is linear.

Let $a, b \in \mathbb{R}$ and $\varphi, \vartheta \in \mathcal{D}(O)$. Since due to theorem 3.9 $\varphi$ and $\vartheta$ are Schwartz functions as well, we have

$$T(a\varphi + b\vartheta) = aT(\varphi) + bT(\vartheta)$$

due to the linearity of $T$ for all Schwartz functions. Thus $T$ is also linear for bump functions.

3.

We show that the restriction of $T$ to $\mathcal{D}(O)$ is sequentially continuous. Let $\varphi_n \to \varphi$ in the notion of convergence of bump functions. Due to theorem 3.12, $\varphi_n \to \varphi$ in the notion of convergence of Schwartz functions. Since $T$ as a tempered distribution is sequentially continuous, $T(\varphi_n) \to T(\varphi)$.

The convolution


Definition 4.4:

Let $f, g : \mathbb{R}^d \to \mathbb{R}$ be functions. The integral

$$(f * g)(x) := \int_{\mathbb{R}^d} f(x - y) g(y)\,dy$$

is called the convolution of $f$ and $g$ and denoted by $f * g$ if it exists.

The convolution of two functions may not always exist, but there are sufficient conditions for it to exist:

Theorem 4.5:

Let $p, q \in [1, \infty]$ such that $\frac{1}{p} + \frac{1}{q} = 1$ and let $f \in L^p(\mathbb{R}^d)$ and $g \in L^q(\mathbb{R}^d)$. Then for all $x \in \mathbb{R}^d$, the integral

$$\int_{\mathbb{R}^d} f(x - y) g(y)\,dy$$

has a well-defined real value.

Proof:

Due to Hölder's inequality,

$$\int_{\mathbb{R}^d} |f(x - y) g(y)|\,dy \le \|f(x - \cdot)\|_{L^p} \|g\|_{L^q} = \|f\|_{L^p} \|g\|_{L^q} < \infty$$.

We shall now prove that the convolution is commutative, i.e. $f * g = g * f$.

Theorem 4.6:

Let $p, q \in [1, \infty]$ such that $\frac{1}{p} + \frac{1}{q} = 1$ and let $f \in L^p(\mathbb{R}^d)$ and $g \in L^q(\mathbb{R}^d)$. Then for all $x \in \mathbb{R}^d$:

$$(f * g)(x) = (g * f)(x)$$

Proof:

We apply multi-dimensional integration by substitution using the diffeomorphism $y \mapsto x - y$ to obtain

$$(f * g)(x) = \int_{\mathbb{R}^d} f(x - y)g(y)\,dy = \int_{\mathbb{R}^d} f(y)g(x - y)\,dy = (g * f)(x)$$.
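Commutativity has a discrete analogue that is easy to check in code. The sketch below (an analogy of my own, not the integral itself) convolves two finitely supported sequences both ways:

```python
# Full discrete convolution: (f*g)[i] = Σ_k f[k]·g[i−k].
def conv(f, g):
    n = len(f) + len(g) - 1
    return [sum(f[k] * g[i - k] for k in range(len(f)) if 0 <= i - k < len(g))
            for i in range(n)]

f = [1.0, 2.0, 3.0]
g = [0.5, -1.0, 4.0, 2.0]
print(conv(f, g) == conv(g, f))  # True
```

The same index substitution $k \mapsto i - k$ that proves the discrete identity is the counting analogue of the substitution $y \mapsto x - y$ used in the proof above.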

Lemma 4.7:

Let $f \in L^1(\mathbb{R}^d)$ and let $\varphi \in \mathcal{D}(\mathbb{R}^d)$. Then $f * \varphi \in \mathcal{C}^\infty(\mathbb{R}^d)$.

Proof:

Let be arbitrary. Then, since for all

and further

,

Leibniz' integral rule (theorem 2.2) is applicable, and by repeated application of Leibniz' integral rule we obtain

$$\partial_\alpha (f * \varphi) = f * \partial_\alpha \varphi$$.

Regular distributions


In this section, we briefly study a class of distributions which we call regular distributions. In particular, we will see that for certain kinds of functions there exist corresponding distributions.

Definition 4.8:

Let $O \subseteq \mathbb{R}^d$ be an open set and let $T \in \mathcal{D}'(O)$. If for all $\varphi \in \mathcal{D}(O)$, $T(\varphi)$ can be written as

$$T(\varphi) = \int_O f(x)\varphi(x)\,dx$$

for a function $f : O \to \mathbb{R}$ which is independent of $\varphi$, then we call $T$ a regular distribution.

Definition 4.9:

Let $T \in \mathcal{S}'(\mathbb{R}^d)$. If for all $\phi \in \mathcal{S}(\mathbb{R}^d)$, $T(\phi)$ can be written as

$$T(\phi) = \int_{\mathbb{R}^d} f(x)\phi(x)\,dx$$

for a function $f : \mathbb{R}^d \to \mathbb{R}$ which is independent of $\phi$, then we call $T$ a regular tempered distribution.

Two questions related to this definition could be asked: Given a function $f : O \to \mathbb{R}$, is $T_f$ for open $O \subseteq \mathbb{R}^d$ given by

$$T_f(\varphi) := \int_O f(x)\varphi(x)\,dx$$

well-defined and a distribution? Or is $T_f$ given by

$$T_f(\phi) := \int_{\mathbb{R}^d} f(x)\phi(x)\,dx$$

well-defined and a tempered distribution? In general, the answer to these two questions is no, but both questions can be answered with yes if the respective function $f$ has the respectively right properties, as the following two theorems show. But before we state the first theorem, we have to define what local integrability means, because in the case of bump functions, local integrability will be exactly the property which $f$ needs in order to define a corresponding regular distribution:

Definition 4.10:

Let $O \subseteq \mathbb{R}^d$ be open, and let $f : O \to \mathbb{R}$ be a function. We say that $f$ is locally integrable iff for all compact subsets $K$ of $O$

$$\int_K |f(x)|\,dx < \infty$$

We write $f \in L^1_{\text{loc}}(O)$.

Now we are ready to give some sufficient conditions on $f$ to define a corresponding regular distribution or regular tempered distribution by the way of

$$T_f(\varphi) := \int_O f(x)\varphi(x)\,dx$$

or

$$T_f(\phi) := \int_{\mathbb{R}^d} f(x)\phi(x)\,dx$$

:

Theorem 4.11:

Let $O \subseteq \mathbb{R}^d$ be open, and let $f : O \to \mathbb{R}$ be a function. Then

$$T_f(\varphi) := \int_O f(x)\varphi(x)\,dx$$

is a regular distribution iff $f \in L^1_{\text{loc}}(O)$.

Proof:

1.

We show that if , then is a distribution.

Well-definedness follows from the triangle inequality of the integral and the monotonicity of the integral:

$$\left| \int_O f(x)\varphi(x)\,dx \right| \le \int_O |f(x)||\varphi(x)|\,dx \le \|\varphi\|_\infty \int_{\operatorname{supp} \varphi} |f(x)|\,dx < \infty$$

In order to have an absolute value strictly less than infinity, the first integral must have a well-defined value in the first place. Therefore, $T_f$ really maps to $\mathbb{R}$ and well-definedness is proven.

Continuity follows similarly due to

$$|T_f(\varphi_n) - T_f(\varphi)| \le \|\varphi_n - \varphi\|_\infty \int_K |f(x)|\,dx \to 0, \quad n \to \infty$$

, where $K$ is the compact set in which all the supports of the $\varphi_n$ and $\varphi$ are contained (remember: the existence of a compact set such that all the supports of the $\varphi_n$ are contained in it is a part of the definition of convergence in $\mathcal{D}(O)$, see the last chapter. As in the proof of theorem 3.12, we also conclude that the support of $\varphi$ is also contained in $K$).

Linearity follows due to the linearity of the integral.

2.

We show that if $T_f$ is a distribution, then $f \in L^1_{\text{loc}}(O)$. In fact, we even show that if $T_f(\varphi)$ has a well-defined real value for every $\varphi \in \mathcal{D}(O)$, then $f \in L^1_{\text{loc}}(O)$. Combined with part 1 of this proof, which showed that if $f \in L^1_{\text{loc}}(O)$ then $T_f$ is a distribution, this gives: if $T_f(\varphi)$ is a well-defined real number for every $\varphi \in \mathcal{D}(O)$, then $T_f$ is a distribution.

Let be an arbitrary compact set. We define

is continuous, even Lipschitz continuous with Lipschitz constant : Let . Due to the triangle inequality, both

and

, which can be seen by applying the triangle inequality twice.

We choose sequences and in such that and and consider two cases. First, we consider what happens if . Then we have

.

Second, we consider what happens if :

Since always either or , we have proven Lipschitz continuity and thus continuity. By the extreme value theorem, therefore has a minimum . Since would mean that for a sequence in which is a contradiction as is closed and , we have .

Hence, if we define , then . Further, the function

has support contained in , is equal to within and further is contained in due to lemma 4.7. Hence, it is also contained in . Since therefore, by the monotonicity of the integral

, is indeed locally integrable.

Theorem 4.12:

Let $f \in L^2(\mathbb{R}^d)$, i.e.

$$\int_{\mathbb{R}^d} |f(x)|^2\,dx < \infty$$

Then

$$T_f(\phi) := \int_{\mathbb{R}^d} f(x)\phi(x)\,dx$$

is a regular tempered distribution.

Proof:

From Hölder's inequality we obtain

$$\int_{\mathbb{R}^d} |f(x)\phi(x)|\,dx \le \|f\|_{L^2} \|\phi\|_{L^2} < \infty$$.

Hence, $T_f$ is well-defined.

Due to the triangle inequality for integrals and Hölder's inequality, we have

Furthermore

.

If in the notion of convergence of the Schwartz function space, then this expression goes to zero. Therefore, continuity is verified.

Linearity follows from the linearity of the integral.

Equicontinuity


We now introduce the concept of equicontinuity.

Definition 4.13:

Let $X$ be a metric space equipped with a metric which we shall denote by $d$ here, let $Q \subseteq X$ be a set in $X$, and let $\mathcal{F}$ be a set of continuous functions mapping from $Q$ to the real numbers $\mathbb{R}$. We call this set equicontinuous if and only if

$$\forall \epsilon > 0 : \exists \delta > 0 : \forall f \in \mathcal{F} : \forall x, y \in Q : \big(d(x, y) < \delta \Rightarrow |f(x) - f(y)| < \epsilon\big)$$.

So equicontinuity is in fact defined for sets of continuous functions mapping from $Q$ (a set in a metric space) to the real numbers $\mathbb{R}$.

Theorem 4.14:

Let $X$ be a metric space equipped with a metric which we shall denote by $d$, let $Q \subseteq X$ be a sequentially compact set in $X$, and let $\mathcal{F}$ be an equicontinuous set of continuous functions from $Q$ to the real numbers $\mathbb{R}$. Then the following holds: if $(f_n)_{n \in \mathbb{N}}$ is a sequence in $\mathcal{F}$ such that $(f_n(x))_{n \in \mathbb{N}}$ has a limit for each $x \in Q$, then for the function $f(x) := \lim_{n \to \infty} f_n(x)$, which maps from $Q$ to $\mathbb{R}$, it follows that $f_n \to f$ uniformly.

Proof:

In order to prove uniform convergence, by definition we must prove that for all $\epsilon > 0$, there exists an $N \in \mathbb{N}$ such that $\|f_n - f\|_\infty < \epsilon$ for all $n \ge N$.

So let's assume the contrary, which, negating the logical statement, equals

$$\exists \epsilon > 0 : \forall N \in \mathbb{N} : \exists n \ge N : \|f_n - f\|_\infty \ge \epsilon$$.

We choose a sequence in . We take in such that for an arbitrarily chosen and if we have already chosen and for all , we choose such that , where is greater than .

As $Q$ is sequentially compact, there is a convergent subsequence of the sequence so constructed. Let us call the limit of that subsequence $x$.

As is equicontinuous, we can choose such that

.

Further, since (if of course), we may choose such that

.

But then follows for and the reverse triangle inequality:

Since we had , by the reverse triangle inequality and the definition of the limit, we obtain:

Thus we have a contradiction to .

Theorem 4.15:

Let $\mathcal{F}$ be a set of differentiable functions, mapping from the convex set $Q \subseteq \mathbb{R}^d$ to $\mathbb{R}$. If there exists a constant $b \in \mathbb{R}$ such that for all functions $f$ in $\mathcal{F}$, $\|\nabla f\|_\infty \le b$ (the gradient $\nabla f$ exists for each function in $\mathcal{F}$ because all functions there were required to be differentiable), then $\mathcal{F}$ is equicontinuous.

Proof: We have to prove equicontinuity, so we have to prove

$$\forall \epsilon > 0 : \exists \delta > 0 : \forall f \in \mathcal{F} : \forall x, y \in Q : \big(\|x - y\| < \delta \Rightarrow |f(x) - f(y)| < \epsilon\big)$$.

Let $\epsilon > 0$ be arbitrary.

We choose $\delta := \frac{\epsilon}{b}$ (where without loss of generality $b > 0$).

Let $x, y \in Q$ such that $\|x - y\| < \delta$, and let $f \in \mathcal{F}$ be arbitrary. By the mean-value theorem in multiple dimensions, we obtain that there exists a $\lambda \in [0, 1]$ such that:

$$f(x) - f(y) = \nabla f(\lambda x + (1 - \lambda)y) \cdot (x - y)$$

The element $\lambda x + (1 - \lambda)y$ is inside $Q$, because $Q$ is convex. From the Cauchy-Schwarz inequality then follows:

$$|f(x) - f(y)| \le \|\nabla f(\lambda x + (1 - \lambda)y)\| \, \|x - y\| \le b \|x - y\| < b\delta = \epsilon$$
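The mechanism of theorem 4.15 can be illustrated numerically (my own sketch): for the family $f_n(x) = \sin(n + x)$ we have $|f_n'| \le 1$ for every $n$, so a single Lipschitz constant $b = 1$, and hence a single $\delta = \epsilon / b$, works for all members of the family simultaneously.

```python
# Uniform gradient bound b  ⇒  uniform Lipschitz estimate |f(x) − f(y)| ≤ b·|x − y|,
# checked here for the family f_n(x) = sin(n + x) at a few point pairs.
import math

b = 1.0
pairs = [(0.3, 0.31), (-2.0, -1.999), (5.0, 5.0005)]
ok = all(abs(math.sin(n + x) - math.sin(n + y)) <= b * abs(x - y) + 1e-15
         for n in range(10) for (x, y) in pairs)
print(ok)  # True
```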

The generalised product rule


Definition 4.16:

If $\alpha, \beta \in \mathbb{N}_0^d$ are two $d$-dimensional multiindices, we define the binomial coefficient of $\alpha$ over $\beta$ as

$$\binom{\alpha}{\beta} := \binom{\alpha_1}{\beta_1} \binom{\alpha_2}{\beta_2} \cdots \binom{\alpha_d}{\beta_d}$$.

We also define the less or equal relation on the set of multiindices.

Definition 4.17:

Let $\alpha, \beta \in \mathbb{N}_0^d$ be two $d$-dimensional multiindices. We define $\beta$ to be less or equal than $\alpha$, $\beta \le \alpha$, if and only if

$$\forall i \in \{1, \ldots, d\} : \beta_i \le \alpha_i$$.

For $d \ge 2$, there are vectors $\alpha, \beta \in \mathbb{N}_0^d$ such that neither $\alpha \le \beta$ nor $\beta \le \alpha$. For $d = 2$, the following two vectors are examples for this:

$$\alpha = (1, 0), \quad \beta = (0, 1)$$

This example can be generalised to higher dimensions (see exercise 6).

With these multiindex definitions, we are able to write down a more general version of the product rule. But in order to prove it, we need another lemma.

Lemma 4.18:

If $d \in \mathbb{N}$ and $e_i := (0, \ldots, 0, 1, 0, \ldots, 0)$, where the $1$ is at the $i$-th place, we have

$$\binom{\alpha}{\beta - e_i} + \binom{\alpha}{\beta} = \binom{\alpha + e_i}{\beta}$$

for arbitrary multiindices $\alpha, \beta \in \mathbb{N}_0^d$ (with the convention that a binomial coefficient vanishes if an entry of the lower multiindex is negative).

Proof:

For the ordinary binomial coefficients for natural numbers, we had the formula

$$\binom{n}{k - 1} + \binom{n}{k} = \binom{n + 1}{k}$$.

Therefore,

$$\binom{\alpha}{\beta - e_i} + \binom{\alpha}{\beta} = \left( \binom{\alpha_i}{\beta_i - 1} + \binom{\alpha_i}{\beta_i} \right) \prod_{j \neq i} \binom{\alpha_j}{\beta_j} = \binom{\alpha_i + 1}{\beta_i} \prod_{j \neq i} \binom{\alpha_j}{\beta_j} = \binom{\alpha + e_i}{\beta}$$
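Definition 4.16 makes the multiindex binomial coefficient a componentwise product of ordinary ones, and lemma 4.18 is then a componentwise Pascal identity. The sketch below is my own illustration and checks $\binom{\alpha}{\beta - e_i} + \binom{\alpha}{\beta} = \binom{\alpha + e_i}{\beta}$ for one concrete pair of multiindices:

```python
# Multiindex binomial coefficient as a componentwise product, and the
# Pascal-type identity from lemma 4.18 for one example.
import math

def multi_binom(alpha, beta):
    return math.prod(math.comb(a, b) for a, b in zip(alpha, beta))

alpha, beta, i = (3, 2), (1, 2), 0
e_i = tuple(1 if j == i else 0 for j in range(len(alpha)))
alpha_plus = tuple(a + e for a, e in zip(alpha, e_i))
beta_minus = tuple(b - e for b, e in zip(beta, e_i))

lhs = multi_binom(alpha, beta_minus) + multi_binom(alpha, beta)
rhs = multi_binom(alpha_plus, beta)
print(lhs == rhs)  # True
```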

This is the general product rule:

Theorem 4.19:

Let $O \subseteq \mathbb{R}^d$ be open, let $f, g \in \mathcal{C}^\infty(O)$ and let $\alpha \in \mathbb{N}_0^d$. Then

$$\partial_\alpha (fg) = \sum_{\beta \le \alpha} \binom{\alpha}{\beta} \partial_\beta f \, \partial_{\alpha - \beta} g$$


Proof:

We prove the claim by induction over $|\alpha|$.

1.

We start with the induction base $|\alpha| = 0$. Then the formula just reads

$$fg = fg$$

, and this is true. Therefore, we have completed the induction base.

2.

Next, we do the induction step. Let's assume the claim is true for all $\alpha \in \mathbb{N}_0^d$ such that $|\alpha| = n$. Let now $\alpha \in \mathbb{N}_0^d$ such that $|\alpha| = n + 1$. Let's choose $i \in \{1, \ldots, d\}$ such that $\alpha_i > 0$ (we may do this because $|\alpha| > 0$). We define again $e_i := (0, \ldots, 0, 1, 0, \ldots, 0)$, where the $1$ is at the $i$-th place. Due to Schwarz' theorem and the ordinary product rule, we have

$$\partial_\alpha (fg) = \partial_{\alpha - e_i} \partial_{e_i} (fg) = \partial_{\alpha - e_i} \big( \partial_{e_i} f \, g + f \, \partial_{e_i} g \big)$$.

By linearity of derivatives and induction hypothesis, we have

.

Since

and

,

we are allowed to shift indices in the first of the two above sums, and furthermore we have by definition

.

With this, we obtain

Due to lemma 4.18,

.

Further, we have

where in ,

and

(these two rules may be checked from the definition of ). It follows

.

Operations on Distributions


For bump functions $\varphi, \vartheta \in \mathcal{D}(\mathbb{R}^d)$ there are operations such as the differentiation of $\varphi$, the convolution of $\varphi$ and $\vartheta$ and the multiplication of $\varphi$ and $\vartheta$. In the following section, we want to define these three operations (differentiation, convolution with $\vartheta$ and multiplication with $\vartheta$) for a distribution $T$ instead of $\varphi$.

Lemma 4.20:

Let be open sets and let be a linear function. If there is a linear and sequentially continuous (in the sense of definition 4.1) function such that

, then for every distribution , the function is a distribution. Therefore, the function

really maps to . This function has the property

Proof:

We have to prove two claims: First, that the function is a distribution, and second that as defined above has the property

1.

We show that the function is a distribution.

has a well-defined value in as maps to , which is exactly the preimage of . The function is continuous since it is the composition of two continuous functions, and it is linear for the same reason (see exercise 2).

2.

We show that has the property

For every , we have

Since equality of two functions is equivalent to equality of these two functions evaluated at every point, this shows the desired property.

We also have a similar lemma for Schwartz distributions:

Lemma 4.21:

Let be a linear function. If there is a linear and sequentially continuous (in the sense of definition 4.2) function such that

, then for every tempered distribution , the function is a tempered distribution. Therefore, we may define a function

This function has the property

The proof is exactly word-for-word the same as the one for lemma 4.20.

Noting that multiplication, differentiation and convolution are linear, we will define these operations for distributions by taking, as the linear function in the two above lemmas, the respective one of these three operations.

Theorem and definitions 4.22:

Let , and let be open. Then for all , the pointwise product is contained in , and if further and all of its derivatives are bounded by polynomials, then for all the pointwise product is contained in . Also, if in the sense of bump functions, then in the sense of bump functions, and if and all of its derivatives are bounded by polynomials, then in the sense of Schwartz functions implies in the sense of Schwartz functions. Further:

  • Let be a distribution. If we define

    ,

    then the expression on the right hand side is well-defined and for all we have

    ,

    and is a distribution.

  • Assume that and all of its derivatives are bounded by polynomials. Let be a tempered distribution. If we define

    ,

    then the expression on the right hand side is well-defined and for all we have

    ,

    and is a tempered distribution.

Proof:

The product of two functions is again , and further, if , then also . Hence, .

Also, if in the sense of bump functions, then, if is a compact set such that for all ,

.

Hence, in the sense of bump functions.

Further, also . Let be arbitrary. Then

.

Since all the derivatives of are bounded by polynomials, by the definition of that we obtain

, where are polynomials. Hence,

.

Similarly, if in the sense of Schwartz functions, then by exercise 3.6

and hence in the sense of Schwartz functions.

If we define , from lemmas 4.20 and 4.21 follow the other claims.

Theorem and definitions 4.23:

Let be open. We define

, where such that only finitely many of the are different from the zero function (such a function is also called a linear partial differential operator), and further we define

.
  • Let be a distribution. If we define

    ,

    then the expression on the right hand side is well-defined, for all we have

    ,

    and is a distribution.

  • Assume that all the coefficient functions and all their derivatives are bounded by polynomials. Let be a tempered distribution. If we define

    ,

    then the expression on the right hand side is well-defined, for all we have

    ,

    and is a tempered distribution.

Proof:

We want to apply lemmas 4.20 and 4.21. Hence, we prove that the requirements of these lemmas are met.

Since the derivatives of bump functions are again bump functions, the derivatives of Schwartz functions are again Schwartz functions (see exercise 3.3 for both), and because of theorem 4.22, we have that and map to , and if further all and all their derivatives are bounded by polynomials, then and map to .

The sequential continuity of follows from theorem 4.22.

Further, for all ,

.

Further, if we single out an , by Fubini's theorem and integration by parts we obtain

.

Hence,

and the lemmas are applicable.
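The adjoint in theorem 4.23 rests on integration by parts, which can be spot-checked numerically. In this sketch of mine, $\varphi$ is an unnormalised bump supported in $(-1, 1)$ and $f(x) = \sin x$ stands in for a smooth function: since $\varphi$ vanishes near the boundary, $\int f'\varphi\,dx = -\int f\varphi'\,dx$ with no boundary terms.

```python
# Integration by parts against a compactly supported test function:
# boundary terms vanish, so ∫ f'·φ = −∫ f·φ'.
import math

def phi(x):                  # unnormalised bump supported in (−1, 1)
    return math.exp(1.0 / (x * x - 1.0)) if abs(x) < 1 else 0.0

def d(fun, x, h=1e-5):       # central finite-difference derivative
    return (fun(x + h) - fun(x - h)) / (2 * h)

n = 4000
xs = [-1 + 2 * (k + 0.5) / n for k in range(n)]   # midpoint grid on (−1, 1)
w = 2 / n
lhs = sum(d(math.sin, x) * phi(x) for x in xs) * w
rhs = -sum(math.sin(x) * d(phi, x) for x in xs) * w
print(abs(lhs - rhs) < 1e-4)  # True
```

Repeating this argument once per derivative is exactly what produces the sign $(-1)^{|\alpha|}$ in the distributional derivative.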

Definition 4.24:

Let $T \in \mathcal{D}'(\mathbb{R}^d)$ and let $\varphi \in \mathcal{D}(\mathbb{R}^d)$. Then we define the function

$$(T * \varphi)(x) := T\big(\varphi(x - \cdot)\big)$$.

This function is called the convolution of $T$ and $\varphi$.

Theorem 4.25:

Let $T \in \mathcal{D}'(\mathbb{R}^d)$ and let $\varphi \in \mathcal{D}(\mathbb{R}^d)$. Then

  1. $T * \varphi$ is continuous,
  2. $\partial_\alpha (T * \varphi) = T * \partial_\alpha \varphi$ for all $\alpha \in \mathbb{N}_0^d$ and
  3. $T * \varphi \in \mathcal{C}^\infty(\mathbb{R}^d)$.

Proof:

1.

Let be arbitrary, and let be a sequence converging to and let such that . Then

is compact. Hence, if is arbitrary, then uniformly. But outside , . Hence, uniformly. Further, for all . Hence, in the sense of bump functions. Thus, by continuity of ,

.

2.

We proceed by induction on .

The induction base is obvious, since for all functions by definition.

Let the statement be true for all such that . Let such that . We choose such that (this is possible since otherwise ). Further, we define

.

Then , and hence .

Furthermore, for all ,

.

But due to Schwarz' theorem, in the sense of bump functions, and thus

.

Hence, , since is a bump function (see exercise 3.3).

3.

This follows from 1. and 2., since is a bump function for all (see exercise 3.3).

Exercises

  1. Let be (tempered) distributions and let . Prove that also is a (tempered) distribution.
  2. Let be essentially bounded. Prove that is a tempered distribution.
  3. Prove that if is a set of differentiable functions which go from to , such that there exists a such that for all it holds , and if is a sequence in for which the pointwise limit exists for all , then converges to a function uniformly on (hint: is sequentially compact; this follows from the Bolzano–Weierstrass theorem).
  4. Let such that is a distribution. Prove that for all .
  5. Prove that for $x \in \mathbb{R}^d$ the function $\delta_x : \mathcal{S}(\mathbb{R}^d) \to \mathbb{R}, \delta_x(\phi) := \phi(x)$ is a tempered distribution (this function is called the Dirac delta distribution after Paul Dirac).
  6. For each , find such that neither nor .


Fundamental solutions, Green's functions and Green's kernels


In the last two chapters, we have studied test function spaces and distributions. In this chapter we will demonstrate a method to obtain solutions to linear partial differential equations which uses test function spaces and distributions.

Distributional and fundamental solutions


In the last chapter, we defined the multiplication of a distribution with a smooth function and derivatives of distributions. Therefore, for a distribution $T$, we are able to calculate such expressions as

for a smooth function and a -dimensional multiindex . We therefore observe that in a linear partial differential equation of the form

we could insert any distribution $T$ instead of $u$ in the left hand side. However, equality would not hold in this case, because on the right hand side we have a function, but the left hand side would give us a distribution (as finite sums of distributions are distributions again due to exercise 4.1; remember that only finitely many of the coefficient functions are allowed to be nonzero, see definition 1.2). If we however replace the right hand side by $T_f$ (the regular distribution corresponding to $f$), then there might be distributions which satisfy the equation. In this case, we speak of a distributional solution. Let's summarise this definition in a box.

Definition 5.1:

Let be open, let

be a linear partial differential equation, and let . is called a distributional solution to the above linear partial differential equation if and only if

.

Definition 5.2:

Let be open and let

be a linear partial differential equation. If has the two properties

  1. is continuous and
  2. ,

we call a fundamental solution for that partial differential equation.

For the definition of see exercise 4.5.

Lemma 5.3:

Let be open and let be a set of distributions, where . Let's further assume that for all , the function is continuous and bounded, and let be compactly supported. Then

is a distribution.

Proof:

Let be the support of . For , let us denote the supremum norm of the function by

.

For or , is identically zero and hence a distribution. Hence, we only need to treat the case where both and .

For each , is a compact set since it is bounded and closed. Therefore, we may cover by finitely many pairwise disjoint sets with diameter at most (for convenience, we choose these sets to be subsets of ). Furthermore, we choose .

For each , we define

, which is a finite linear combination of distributions and therefore a distribution (see exercise 4.1).

Let now and be arbitrary. We choose such that for all

.

This we may do because continuous functions are uniformly continuous on compact sets. Further, we choose such that

.

This we may do due to dominated convergence. Since for

,

. Thus, the claim follows from theorem AI.33.

Theorem 5.4:

Let be open, let

be a linear partial differential equation such that is integrable and has compact support. Let be a fundamental solution of the PDE. Then

is a distribution which is a distributional solution for the partial differential equation.

Proof: Since by the definition of fundamental solutions the function is continuous for all , lemma 5.3 implies that is a distribution.

Further, by definitions 4.16,

.

Lemma 5.5:

Let , , and . Then

.

Proof:

By theorem 4.21 2., for all

.

Theorem 5.6:

Let be a solution of the equation

,

where only finitely many are nonzero, and let . Then solves

.

Proof:

By lemma 5.5, we have

.

Partitions of unity


In this section you will get to know a very important tool in mathematics, namely partitions of unity. We will use it in this chapter and also later in the book. In order to prove the existence of partitions of unity (we will soon define what this is), we need a few definitions first.

Definitions 5.7:

Let be a set. We define:

is called the boundary of and is called the interior of . Further, if , we define

.

We also need definition 3.13 in the proof, which is why we restate it now:

Definition 3.13:

For , we define

.

Theorem and definitions 5.8: Let be an open set, and let be open subsets of such that (i. e. the sets form an open cover of ). Then there exists a sequence of functions in such that the following conditions are satisfied:

The sequence is called a partition of unity for with respect to .

Proof: We will prove this by explicitly constructing such a sequence of functions.

1. First, we construct a sequence of open balls with the properties

  • .

In order to do this, we start by defining a sequence of compact sets; for each , we define

.

This sequence has the properties

  • .

We now construct such that

  • and

for some . We do this in the following way: To meet the first condition, we first cover with balls by choosing for every a ball such that for an . Since these balls cover , and is compact, we may choose a finite subcover .

To meet the second condition, we proceed analogously, noting that for all is compact and is open.

This sequence of open balls has the properties which we wished for.

2. We choose the respective functions. Since each is an open ball, it has the form

where and .

It is easy to prove that the function defined by

satisfies if and only if . Hence, also . We define

and, for each ,

.

Then, since is never zero, the sequence is a sequence of functions and furthermore, it has the properties 1.–4., as is easily checked.
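The construction in this proof can be illustrated numerically in one dimension. The sketch below is an illustration only: the cover, centres and radii are made-up choices, and the bump function exp(-1/(1-|x|^2)) is the standard one used in step 2 of the proof. The normalised bumps sum to 1 wherever the cover overlaps the point, which is the defining property of a partition of unity.

```python
import math

def bump(x, center=0.0, radius=1.0):
    """Standard bump function supported in the open ball B(center, radius):
    exp(-1/(1 - |y|^2)) with y = (x - center)/radius, zero outside."""
    y = (x - center) / radius
    if abs(y) >= 1.0:
        return 0.0
    return math.exp(-1.0 / (1.0 - y * y))

# Balls covering the interval (-1, 1): centres -0.5, 0.0, 0.5, radius 0.8
# (an arbitrary finite cover, standing in for the one built in the proof).
balls = [(-0.5, 0.8), (0.0, 0.8), (0.5, 0.8)]

def partition(x):
    """Normalised bump functions: where the total is positive, they sum to 1."""
    values = [bump(x, c, r) for c, r in balls]
    total = sum(values)
    return [v / total for v in values] if total > 0 else values

# On points covered by at least one ball, the normalised functions sum to 1.
sums = [sum(partition(x)) for x in [-0.9, -0.3, 0.0, 0.4, 0.9]]
```

Each normalised function is still smooth and compactly supported inside one ball of the cover, mirroring properties 1.–4. of the theorem.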

Green's functions and Green's kernels


Definition 5.9:

Let

be a linear partial differential equation. A function such that for all is well-defined and

is a fundamental solution of that partial differential equation is called a Green's function of that partial differential equation.

Definition 5.10:

Let

be a linear partial differential equation. A function such that the function

is a Green's function for that partial differential equation is called a Green's kernel of that partial differential equation.

Theorem 5.11:

Let

be a linear partial differential equation (in the following, we will sometimes abbreviate partial differential equation as PDE) such that , and let be a Green's kernel for that PDE. If

exists and exists and is continuous, then solves the partial differential equation.

Proof:

We choose to be a partition of unity of , where the open cover of shall consist only of the set . Then by definition of partitions of unity

.

For each , we define

and

.

By Fubini's theorem, for all and

.

Hence, as given in theorem 4.11 is a well-defined distribution.

Theorem 5.4 implies that is a distributional solution to the PDE

.

Thus, for all we have, using theorem 4.19,

.

Since and are both continuous, they must be equal due to theorem 3.17. Summing both sides of the equation over yields the theorem.

Theorem 5.12:

Let and let be open. Then for all , the function is continuous.

Proof:

If , then

for sufficiently large , where the maximum in the last expression converges to as , since the support of is compact and therefore is uniformly continuous by the Heine–Cantor theorem.

The last theorem shows that if we have found a locally integrable function such that

,

we have found a Green's kernel for the respective PDEs. We will rely on this theorem in our procedure to get solutions to the heat equation and Poisson's equation.

Exercises


The heat equation


This chapter is about the heat equation, which looks like this:

for some . Using distribution theory, we will prove an explicit solution formula (if is sufficiently often differentiable), and we will even prove a solution formula for the initial value problem.

Green's kernel and solution


Lemma 6.1:

Proof:

Taking the square root on both sides finishes the proof.

Lemma 6.2:

Proof:

By lemma 6.1,

.

If we apply to this integration by substitution (theorem 5.5) with the diffeomorphism , we obtain

and multiplying with

Therefore, calculating the innermost integrals first and then pulling out the resulting constants,
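The computation above rests on the one-dimensional Gaussian integral: lemma 6.1's proof ("taking the square root on both sides") is the classical argument that the integral of exp(-x^2) over the real line equals the square root of pi, and in dimension d the integral factorises. A quick numerical check of the one-dimensional statement (the quadrature parameters are arbitrary choices):

```python
import math

def gauss_integral_1d(half_width=8.0, n=4000):
    """Approximate the integral of exp(-x^2) over R by the trapezoidal rule
    on [-half_width, half_width]; the tails beyond are negligible."""
    h = 2 * half_width / n
    xs = [-half_width + i * h for i in range(n + 1)]
    ys = [math.exp(-x * x) for x in xs]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

approx = gauss_integral_1d()
exact = math.sqrt(math.pi)
# In dimension d the integrand factorises, so the integral is pi**(d/2).
```

Because the integrand is smooth and decays rapidly, the trapezoidal rule is extremely accurate here.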

Theorem 6.3:

The function

is a Green's kernel for the heat equation.

Proof:

1.

We show that is locally integrable.

Let a compact set, and let such that . We first show that the integral

exists:

By transformation of variables in the inner integral using the diffeomorphism , and lemma 6.2, we obtain:

Therefore the integral

exists. But since

, where is the characteristic function of , the integral

exists. Since was an arbitrary compact set, we thus have local integrability.

2.

We calculate and (see exercise 1).

3.

We show that

Let be arbitrary.

In this last step of the proof, we will only manipulate the term .

If we choose and such that

, we have even

Using the dominated convergence theorem (theorem 5.1), we can rewrite the term again:

, where is the characteristic function of .

We split the limit term in half to manipulate each summand separately:

The last integrals are taken over for . In this area and its boundary, is differentiable. Therefore, we are allowed to integrate by parts.

In the last two manipulations, we used integration by parts, where and exchanged the role of the function in theorem 5.4, and and exchanged the role of the vector field. In the latter manipulation, we did not apply theorem 5.4 directly, but rather with the boundary term subtracted on both sides.

Let's also integrate the other integral by parts.

Now we add the two terms back together and see that

The derivative calculations from above show that , which is why the last two integrals cancel and therefore

Using that and with multi-dimensional integration by substitution with the diffeomorphism we obtain:

Since is continuous (even smooth), we have

Therefore
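The kernel of theorem 6.3 can be sanity-checked numerically. This sketch assumes the standard heat kernel (4*pi*t)^(-d/2) * exp(-|x|^2/(4t)) for t > 0 (the formula itself is not rendered in this text); the evaluation points, step sizes and grid are arbitrary choices. It checks unit mass in the space variable and the heat equation away from the singularity:

```python
import math

def heat_kernel(x, t, d=1):
    """Standard heat kernel (4*pi*t)^(-d/2) * exp(-|x|^2/(4t)) for t > 0;
    here d = 1 and x is a scalar."""
    return (4 * math.pi * t) ** (-d / 2) * math.exp(-x * x / (4 * t))

# 1) For fixed t > 0 it integrates to 1 in x (Riemann sum on [-10, 10]).
t = 0.3
h = 0.01
mass = h * sum(heat_kernel(-10 + i * h, t) for i in range(2001))

# 2) It satisfies d_t E = d_x^2 E away from the origin, checked by
#    central finite differences at the (arbitrary) point (x, t) = (0.7, 0.3).
x, eps = 0.7, 1e-4
dt = (heat_kernel(x, t + eps) - heat_kernel(x, t - eps)) / (2 * eps)
dxx = (heat_kernel(x + eps, t) - 2 * heat_kernel(x, t)
       + heat_kernel(x - eps, t)) / eps**2
```

Both checks reflect properties used in the proof: local integrability plus unit mass in the limit, and the kernel solving the heat equation away from the singular point.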

Theorem 6.4: If is bounded, once continuously differentiable in the -variable and twice continuously differentiable in the -variable, then

solves the heat equation

Proof:

1.

We show that is sufficiently often differentiable such that the equations are satisfied.

2.

We invoke theorem 5.11, which states exactly that a convolution with a Green's kernel is a solution, provided that the convolution is sufficiently often differentiable (which we showed in part 1 of the proof).

Initial Value Problem


Definition 6.5: Let and be two functions. The spatial convolution of and is given by:

Theorem and definition 6.6: Let be bounded, once continuously differentiable in the -variable and twice continuously differentiable in the -variable, and let be continuous and bounded. If we define

, then the function

is a continuous solution of the initial value problem for the heat equation, that is

Note that if we do not require the solution to be continuous, we may take any solution and simply set it to at .

Proof:

1.

We show

From theorem 6.4, we already know that solves

Therefore, we have for ,

which is why would follow if

This we shall now check.

By definition of the spatial convolution, we have

and

By applying Leibniz' integral rule (see exercise 2) we find that

for all .

2.

We show that is continuous.

It is clear that is continuous on , since all the first-order partial derivatives exist and are continuous (see exercise 2). It remains to be shown that is continuous on .

To do so, we first note that for all

Furthermore, due to the continuity of , we may choose for arbitrary and any a such that

.

From these last two observations, we may conclude:

But due to integration by substitution using the diffeomorphism , we obtain

which is why

Since was arbitrary, continuity is proven.
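The initial value solution formula can be tested against a case with a closed form. This sketch assumes the standard one-dimensional heat kernel; the initial datum f(y) = exp(-y^2) is an arbitrary test choice, for which the spatial convolution with the kernel has the explicit value (1 + 4t)^(-1/2) * exp(-x^2/(1 + 4t)), since a convolution of Gaussians is again a Gaussian:

```python
import math

def heat_kernel(x, t):
    """Standard 1-d heat kernel (assumed form), t > 0."""
    return (4 * math.pi * t) ** -0.5 * math.exp(-x * x / (4 * t))

def u(x, t, h=0.01, half_width=12.0):
    """Spatial convolution (heat kernel * f)(x) at time t for f(y) = exp(-y^2),
    approximated by a Riemann sum over [-half_width, half_width]."""
    n = int(2 * half_width / h)
    return h * sum(
        heat_kernel(x - (-half_width + i * h), t)
        * math.exp(-(-half_width + i * h) ** 2)
        for i in range(n + 1)
    )

# Closed form for this initial datum: (1 + 4t)^(-1/2) * exp(-x^2/(1 + 4t)).
t = 0.5
numeric = u(0.8, t)
exact = (1 + 4 * t) ** -0.5 * math.exp(-0.8 ** 2 / (1 + 4 * t))
```

At t = 0 the closed form reduces to the initial datum itself, matching the continuity statement of the theorem.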

Exercises


Poisson's equation


This chapter deals with Poisson's equation

Provided that , we will prove a solution formula using distribution theory, and for domains whose boundaries satisfy a certain property we will even show a solution formula for the boundary value problem. We will also study solutions of the homogeneous Poisson's equation

The solutions to the homogeneous Poisson's equation are called harmonic functions.

Important theorems from multi-dimensional integration


In section 2, we saw Leibniz' integral rule, and in section 4, Fubini's theorem. In this section, we collect the other theorems from multi-dimensional integration which we need in order to carry on with applying the theory of distributions to partial differential equations. Proofs will not be given, since they are not essential for the understanding of this wikibook. The only exception is theorem 6.3, which follows from theorem 6.2; its proof is an exercise.

Theorem 6.2: (Divergence theorem)

Let a compact set with smooth boundary. If is a vector field, then

, where is the outward normal vector.

Theorem 6.3: (Multi-dimensional integration by parts)

Let a compact set with smooth boundary. If is a function and is a vector field, then

, where is the outward normal vector.

Proof: See exercise 1.

The volume and surface area of d-dimensional spheres


Definition 6.5:

The Gamma function is defined by

The Gamma function satisfies the following equation:

Theorem 6.6:

Proof:

If the Gamma function is shifted by 1, it is an interpolation of the factorial (see exercise 2):

As you can see, in the above plot the Gamma function also has values on negative numbers. This is because what is plotted above is some sort of a natural continuation of the Gamma function which one can construct using complex analysis.

Definition and theorem 6.7:

The -dimensional spherical coordinates, given by

are a diffeomorphism. The determinant of the Jacobian matrix of , , is given by

Proof:

Theorem 6.8:

The volume of the -dimensional ball with radius , is given by

Proof:

Theorem 6.9:

The area of the surface of the -dimensional ball with radius (i. e. the area of ) is given by

The surface area and the volume of the -dimensional ball with radius are related to each other "in a differential way" (see exercise 3).

Proof:
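Theorems 6.8 and 6.9 can be checked numerically. This sketch assumes the standard formulas V_d(r) = pi^(d/2) * r^d / Gamma(d/2 + 1) and A_d(r) = d * V_d(r) / r (the "differential" relation of exercise 3), since the book's own formulas are not rendered here; the Monte Carlo sample size and seed are arbitrary choices.

```python
import math, random

def ball_volume(d, r=1.0):
    """Assumed formula: volume of the d-ball, pi^(d/2)/Gamma(d/2 + 1) * r^d."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1) * r ** d

def sphere_area(d, r=1.0):
    """Assumed relation: surface area of the (d-1)-sphere, d * V_d(r) / r."""
    return d * ball_volume(d, r) / r

# Known low-dimensional values as a check.
v2 = ball_volume(2)   # area of the unit disc: pi
v3 = ball_volume(3)   # 4/3 * pi
a3 = sphere_area(3)   # 4 * pi

# Monte Carlo check of the volume formula in dimension 4.
random.seed(0)
n = 200_000
hits = sum(
    1 for _ in range(n)
    if sum(random.uniform(-1, 1) ** 2 for _ in range(4)) <= 1.0
)
mc_volume = 16.0 * hits / n   # the cube [-1, 1]^4 has volume 16
```

The Monte Carlo estimate fluctuates with the sample, but with this sample size it lands well within a few percent of the Gamma-function value.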

Green's kernel


We recall a fact from integration theory:

Lemma 6.11: is integrable is integrable.

We omit the proof.

Theorem 6.12:

The function , given by

is a Green's kernel for Poisson's equation.

We only prove the theorem for . For see exercise 4.

Proof:

1.

We show that is locally integrable. Let be compact. We have to show that

is a real number, which by lemma 6.11 is equivalent to

is a real number. As compact in is equivalent to bounded and closed, we may choose an such that . Without loss of generality we choose , since if it turns out that the chosen is , any will do as well. Then we have

For ,

For ,

, where we applied integration by substitution using spherical coordinates from the first to the second line.

2.

We calculate some derivatives of (see exercise 5):

For , we have

For , we have

For all , we have

3.

We show that

Let and be arbitrary. In this last step of the proof, we will only manipulate the term . Since , has compact support. Let's define

Since the support of

, where is the characteristic function of .

The last integral is taken over (which is bounded and as the intersection of the closed sets and closed and thus compact as well). In this area, due to the above second part of this proof, is continuously differentiable. Therefore, we are allowed to integrate by parts. Thus, noting that is the outward normal vector in of , we obtain

Let's furthermore choose . Then

.

From Gauß' theorem, we obtain

, where the minus in the right hand side occurs because we need the inward normal vector. From this follows immediately that

We can now calculate the following, using the Cauchy–Schwarz inequality:

Now we define , which gives:

Applying Gauß' theorem on gives us therefore

, noting that .

We furthermore note that

Therefore, we have

due to the continuity of .

Thus we can conclude that

.

Therefore, is a Green's kernel for Poisson's equation for .

QED.
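In dimension d = 3, the Green's kernel above is, up to the sign convention chosen for Poisson's equation, the Newtonian potential 1/(4*pi*|x|) (an assumption here, since the book's formula is not rendered). A finite-difference check that it is harmonic away from its singularity, a fact used in part 2 of the proof (the step size and evaluation point are arbitrary choices):

```python
import math

def G(x, y, z):
    """Candidate Green's kernel for d = 3 (Newtonian potential, up to the
    sign convention used for Poisson's equation): 1 / (4*pi*|x|)."""
    return 1.0 / (4 * math.pi * math.sqrt(x * x + y * y + z * z))

def laplacian(f, x, y, z, h=1e-3):
    """Seven-point finite-difference Laplacian."""
    return (
        f(x + h, y, z) + f(x - h, y, z)
        + f(x, y + h, z) + f(x, y - h, z)
        + f(x, y, z + h) + f(x, y, z - h)
        - 6 * f(x, y, z)
    ) / h**2

# Away from the singularity at the origin, Delta G vanishes.
lap = laplacian(G, 0.5, -0.3, 0.8)
```

The Laplacian of the kernel is zero only away from the origin; at the origin the distributional Laplacian produces the delta distribution, which is the content of part 3 of the proof.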

Integration over spheres


Theorem 6.12:

Let be a function.

Proof: We choose as an orientation the boundary orientation of the sphere. We know that for , an outward normal vector field is given by . As a parametrisation of , we simply choose the identity function, obtaining that the basis for the tangent space there is the standard basis, which in turn means that the volume form of is

Now, we use the normal vector field to obtain the volume form of :

We insert the formula for and then use Laplace's determinant formula:

As a parametrisation of we choose spherical coordinates with constant radius .

We calculate the Jacobian matrix for the spherical coordinates:

We observe that in the first column, we have only the spherical coordinates divided by . If we fix , the first column disappears. Let's call the resulting matrix and our parametrisation, namely spherical coordinates with constant , . Then we have:

Recalling that

, the claim follows using the definition of the surface integral.


Theorem 6.13:

Let be a function. Then

Proof:

We have , where are the spherical coordinates. Therefore, by integration by substitution, Fubini's theorem and the above formula for integration over the unit sphere,

Harmonic functions


Definition 6.14: Let be open and let be a function. If and

is called a harmonic function.

Theorem 6.15:

Let be open and let . The following conditions are equivalent:

  • is harmonic

Proof: Let's define the following function:

From first coordinate transformation with the diffeomorphism and then applying our formula for integration on the unit sphere twice, we obtain:

From first differentiation under the integral sign and then Gauss' theorem, we know that

Case 1: If is harmonic, then we have

, which is why is constant. Now we can use the dominated convergence theorem for the following calculation:

Therefore for all .

With the relationship

, which is true because of our formula for , we obtain that

, which proves the first formula.

Furthermore, we can prove the second formula by first transformation of variables, then integrating by onion skins, then using the first formula of this theorem and then integration by onion skins again:

This shows that if is harmonic, then the two formulas for calculating , hold.

Case 2: Suppose that is not harmonic. Then there exists an such that . Without loss of generality, we assume that ; the proof for is completely analogous except that the directions of the inequalities are interchanged. Then, since as above, due to the dominated convergence theorem, we have

Since is continuous (by the dominated convergence theorem), this is why grows at , which is a contradiction to the first formula.

The contradiction to the second formula can be obtained by observing that is continuous and therefore there exists a

This means that since

and therefore

, that

and therefore, by the same calculation as above,

This shows (by contradiction) that if one of the two formulas holds, then is harmonic.
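The mean value property of theorem 6.15 can be observed numerically for a concrete harmonic function, here u(x, y) = x^2 - y^2 (the centre and radius below are arbitrary choices): the average of u over any circle equals its value at the centre.

```python
import math

def u(x, y):
    """A harmonic function on R^2: the real part of (x + iy)^2."""
    return x * x - y * y

def sphere_mean(cx, cy, r, n=10_000):
    """Average of u over n equally spaced points on the circle of
    radius r around (cx, cy)."""
    return sum(
        u(cx + r * math.cos(2 * math.pi * k / n),
          cy + r * math.sin(2 * math.pi * k / n))
        for k in range(n)
    ) / n

center_value = u(1.2, -0.7)
mean_value = sphere_mean(1.2, -0.7, 0.5)
```

For this polynomial the agreement is exact up to rounding, because the oscillatory terms average to zero over equally spaced points.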

Definition 6.16:

A domain is an open and connected subset of .

For the proof of the next theorem, we need two theorems from other subjects, the first from integration theory and the second from topology.

Theorem 6.17:

Let and let be a function. If

then for almost every .

Theorem 6.18:

In a connected topological space, the only simultaneously open and closed sets are the whole space and the empty set.

We will omit the proofs.

Theorem 6.19:

Let be a domain and let be harmonic. If there exists an such that

, then is constant.

Proof:

We choose

Since is open by assumption and , for every exists an such that

By theorem 6.15, we obtain in this case:

Further,

, which is why

Since

, we have even

By theorem 6.17 we conclude that

almost everywhere in , and since

is continuous, even

really everywhere in (see exercise 6). Therefore , and since was arbitrary, is open.

Also,

and is continuous. Thus, as a one-point set is closed, lemma 3.13 says is closed in . Thus is simultaneously open and closed. By theorem 6.18, we obtain that either or . And since by assumption is not empty, we have .

Theorem 6.20:

Let be a domain and let be harmonic. If there exists an such that

, then is constant.

Proof: See exercise 7.

Corollary 6.20:

Let be a bounded domain and let be harmonic on and continuous on . Then

Proof:

Theorem 6.20:

Let be open and be a harmonic function, let and let such that . Then

Proof:

What we will do next is showing that every harmonic function is in fact automatically contained in .

Theorem 6.25: Let be open, and let be harmonic. Then . Furthermore, for all , there is a constant depending only on the dimension and such that for all and such that

Proof:

Definition 6.26:

Let be a sequence of harmonic functions, and let be a function. converges locally uniformly to iff

Theorem 6.27:

Let be open and let be harmonic functions such that the sequence converges locally uniformly to a function . Then also is harmonic.

Proof:

Definition 6.28:

Theorem 6.29: (Arzelà-Ascoli) Let be a set of continuous functions, which are defined on a compact set . Then the following two statements are equivalent:

  1. (the closure of ) is compact
  2. is bounded and equicontinuous

Proof:

Definition 6.30:

Theorem 6.31:

Let be a locally uniformly bounded sequence of harmonic functions. Then it has a locally uniformly convergent subsequence.

Proof:

Boundary value problem


The Dirichlet problem for the Poisson equation is to find a solution for

Uniqueness of solutions


If is bounded, then the following holds: if the problem

has a solution , then this solution is unique on .


Proof: Let be another solution. If we define , then obviously solves the problem

, since for and for .

Due to the above corollary from the minimum and maximum principle, we obtain that is constantly zero not only on the boundary, but on the whole domain . Therefore on . This is what we wanted to prove.

Green's functions of the first kind


Let be a domain. Let be the Green's kernel of Poisson's equation, which we have calculated above, i.e.

, where denotes the surface area of .

Suppose there is a function which satisfies

Then the Green's function of the first kind for for is defined as follows:

is automatically a Green's function for . This is verified in exactly the same way as verifying that is a Green's kernel. The only additional thing we need to know is that does not play any role in the limit processes, because it is bounded.

A property of this function is that it satisfies

The second of these equations is clear from the definition, and the first follows recalling that we calculated above (where we calculated the Green's kernel), that for .

Representation formula


Let be a domain, and let be a solution to the Dirichlet problem

. Then the following representation formula for holds:

, where is a Green's function of the first kind for .


Proof: Let's define

. By the theorem of dominated convergence, we have that

Using multi-dimensional integration by parts, it can be obtained that:

When we proved the formula for the Green's kernel of Poisson's equation, we had already shown that

and

The only additional thing needed to verify this is that , which is why it stays bounded, while goes to infinity as ; hence plays no role in the limit process.

This proves the formula.

Harmonic functions on the ball: A special case of the Dirichlet problem


Green's function of the first kind for the ball


Let's choose

Then

is a Green's function of the first kind for .

Proof: Since and therefore

Furthermore, we obtain:

, which is why is a Green's function.

The property for the boundary comes from the following calculation:

, which is why , since is radially symmetric.

Solution formula


Let's consider the following problem:

Here shall be continuous on . Then the following holds: The unique solution for this problem is given by:

Proof: Uniqueness we have already proven; we have shown that for all Dirichlet problems for on bounded domains (and the unit ball is of course bounded), the solutions are unique.

Therefore, it only remains to show that the above function is a solution to the problem. To do so, we note first that

Let be arbitrary. Since is continuous in , it is bounded there. Therefore, by the fundamental estimate, we know that the integral is bounded, since the sphere, the set over which we integrate, is a bounded set, and therefore the whole integral always stays below a certain constant. But this means that we are allowed to differentiate under the integral sign on , and since was arbitrary, we can directly conclude that on ,

Furthermore, we have to show that , i. e. that is continuous on the boundary.

To do this, we notice first that

This follows due to the fact that if , then solves the problem

and the application of the representation formula.

Furthermore, if and , we have due to the second triangle inequality:

In addition, another application of the second triangle inequality gives:

Let then be arbitrary, and let . Then, due to the continuity of , we are allowed to choose such that

.

In the end, with the help of all the previous estimations we have made, we may unleash the last chain of inequalities which shows that the representation formula is true:

Since implies , we may choose close enough to such that

. Since was arbitrary, this finishes the proof.
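For the unit disc (d = 2), the solution formula specialises to the classical Poisson integral u(p) = (1/(2*pi)) times the integral over the unit circle of (1 - |p|^2)/|p - q|^2 * g(q) dS(q); this explicit 2-d form is assumed here, since the book's formula is not rendered. A numerical sketch with an arbitrary boundary datum and evaluation point, using that g(x, y) = x extends harmonically to u(x, y) = x:

```python
import math

def poisson_disc(px, py, g, n=20_000):
    """Poisson-kernel solution formula on the unit disc, approximated
    by a Riemann sum over the boundary circle."""
    r2 = px * px + py * py
    total = 0.0
    for k in range(n):
        theta = 2 * math.pi * k / n
        qx, qy = math.cos(theta), math.sin(theta)
        dist2 = (px - qx) ** 2 + (py - qy) ** 2
        total += (1 - r2) / dist2 * g(qx, qy)
    return total / n   # each sample carries weight dtheta/(2*pi) = 1/n

# Boundary data g(x, y) = x extends harmonically to u(x, y) = x,
# so at the interior point (0.3, 0.4) the formula should return 0.3.
val = poisson_disc(0.3, 0.4, lambda x, y: x)
```

Because the integrand is smooth and periodic, the equally spaced Riemann sum converges very quickly here.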

Barriers


Let be a domain. A function is called a barrier with respect to if and only if the following properties are satisfied:

  1. is continuous
  2. is superharmonic on

Exterior sphere condition


Let be a domain. We say that it satisfies the exterior sphere condition, if and only if for all there is a ball such that for some and .

Subharmonic and superharmonic functions


Let be a domain and .

We call subharmonic if and only if:

We call superharmonic if and only if:

From this definition we can see that a function is harmonic if and only if it is subharmonic and superharmonic.

Minimum principle for superharmonic functions


A superharmonic function on attains its minimum on the boundary of .

Proof: Almost the same as the proof of the minimum and maximum principle for harmonic functions. As an exercise, you might try to prove this minimum principle yourself.

Harmonic lowering


Let , and let . If we define

, then .

Proof: For this proof, the very important thing to notice is that the formula for inside is nothing but the solution formula for the Dirichlet problem on the ball. Therefore, we immediately obtain that is superharmonic, and furthermore, the values on don't change, which is why . This is what we wanted to show.

Definition 3.1


Let . Then we define the following set:

Lemma 3.2


is not empty and

Proof: The first part follows by choosing the constant function , which is harmonic and therefore superharmonic. The second part follows from the minimum principle for superharmonic functions.

Lemma 3.3


Let . If we now define , then .

Proof: The condition on the border is satisfied, because

is superharmonic because, if we (without loss of generality) assume that , then it follows that

, due to the monotonicity of the integral. This argument is valid for all , and therefore is superharmonic.

Lemma 3.4


If is bounded and , then the function

is harmonic.

Proof:

Lemma 3.5


If satisfies the exterior sphere condition, then for all there is a barrier function.

Existence theorem of Perron


Let be a bounded domain which satisfies the exterior sphere condition. Then the Dirichlet problem for the Poisson equation, which is, writing it again:

has a solution .

Proof:

Let's summarise the results of this section.

Corollary 6.last:

Let be a domain satisfying the exterior sphere condition, let , let be continuous and let be a Green's function of the first kind for . Then

is the unique continuous solution to the boundary value problem

In the next chapter, we will have a look at the heat equation.

Exercises

  1. Prove theorem 6.3 using theorem 6.2 (Hint: Choose in theorem 6.2).
  2. Prove that , where is the factorial of .
  3. Calculate . Have you seen the obtained function before?
  4. Prove that for , the function as defined in theorem 6.11 is a Green's kernel for Poisson's equation (hint: use integration by parts twice).
  5. For all and , calculate and .
  6. Let be open and be continuous. Prove that almost everywhere in implies everywhere in .
  7. Prove theorem 6.20 by modelling your proof on the proof of theorem 6.19.
  8. For all dimensions , give an example for vectors such that neither nor .


The Fourier transform


In this chapter, we introduce the Fourier transform. The Fourier transform transforms functions into other functions. It can be used to solve certain types of linear differential equations.

Definition and calculation rules


Definition 8.1:

Let . Then the Fourier transform of is defined as follows:

We recall that is integrable is integrable.

Now we're ready to prove the next theorem:

Theorem 8.2: The Fourier transform of an integrable is well-defined.

Proof: Since is integrable, lemma 8.2 tells us that is integrable. But

, and therefore is integrable. But then, is integrable, which is why

has a unique complex value, by definition of integrability.
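Numerically, the Fourier transform of a Gaussian can be computed directly from the definition. This sketch uses the unitary convention (2*pi)^(-1/2) times the integral of f(x) * exp(-i*x*xi) dx, in which exp(-x^2/2) is its own transform; the book's normalisation constant may differ, and the quadrature parameters below are arbitrary choices.

```python
import math, cmath

def fourier(f, xi, half_width=10.0, n=4000):
    """Numerical Fourier transform in the unitary convention:
    (1/sqrt(2*pi)) * integral of f(x) * exp(-i*x*xi) dx."""
    h = 2 * half_width / n
    total = 0j
    for k in range(n + 1):
        x = -half_width + k * h
        total += f(x) * cmath.exp(-1j * x * xi)
    return total * h / math.sqrt(2 * math.pi)

# The Gaussian exp(-x^2/2) is its own Fourier transform in this convention.
f = lambda x: math.exp(-x * x / 2)
val = fourier(f, 1.5)
expected = math.exp(-1.5 ** 2 / 2)
```

The imaginary part of the result is zero up to rounding, since the integrand's odd part cancels for an even function f.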

Theorem 8.3: Let . Then the Fourier transform of , , is bounded.

Proof:

Once we have calculated the Fourier transform of a function , we can easily find the Fourier transforms of some functions similar to . The following calculation rules show examples how you can do this. But just before we state the calculation rules, we recall a definition from chapter 2, namely the power of a vector to a multiindex, because it is needed in the last calculation rule.

Definition 2.6:

For a vector and a -dimensional multiindex we define , to the power of , as follows:

Now we write down the calculation rules, using the following notation:

Notation 8.4:

We write

to mean the sentence 'the function is the Fourier transform of the function '.

Theorem 8.5:

Let be the Fourier transform of . Then the following calculation rules hold:

  1. for arbitrary
  2. for arbitrary
  3. for arbitrary

If additionally is the Fourier transform of , we have

4.

Proof: To prove the first rule, we only need one of the rules for the exponential function (and the symmetry of the standard dot product):

1.

For the next two rules, we apply the general integration by substitution rule, using the diffeomorphisms and , which are bijections from to itself.

2.

3.

4.
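The translation rule (rule 1 of theorem 8.5) holds in any normalisation convention and is easy to verify numerically; the shift a, the frequency xi and the test function below are arbitrary choices, and the unitary convention used here is an assumption (the book's constant may differ).

```python
import math, cmath

def fourier(f, xi, half_width=10.0, n=4000):
    """Numerical Fourier transform, unitary convention:
    (1/sqrt(2*pi)) * integral of f(x) * exp(-i*x*xi) dx."""
    h = 2 * half_width / n
    return sum(
        f(-half_width + k * h) * cmath.exp(-1j * (-half_width + k * h) * xi)
        for k in range(n + 1)
    ) * h / math.sqrt(2 * math.pi)

f = lambda x: math.exp(-x * x)   # a convenient Schwartz function
a, xi = 0.7, 1.3

# Translation rule: the transform of x -> f(x - a) is exp(-i*a*xi) * Ff(xi).
lhs = fourier(lambda x: f(x - a), xi)
rhs = cmath.exp(-1j * a * xi) * fourier(f, xi)
```

The dilation and modulation rules (2 and 3) can be verified the same way by composing f with a scaling or multiplying it by a complex exponential before transforming.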

The Fourier transform of Schwartz functions


In order to proceed with further rules for the Fourier transform which involve Schwartz functions, we first need some further properties of Schwartz functions.

Theorem 8.6:

Let be a Schwartz function and let . Then the function

is a Schwartz function as well.

Proof:

Let . Due to the general product rule, we have:

We note that for all and , equals to to some multiindex power. Since is a Schwartz function, there exist constants such that:

Hence, the triangle inequality for implies:

Theorem 8.7:

Every Schwartz function is integrable.

Proof:

We use that if the absolute value of a function is almost everywhere smaller than the value of an integrable function, then the first function is integrable.

Let be a Schwartz function. Then there exist such that for all :

The latter function is integrable, and integrability of follows.

Now we can prove all three of the following rules for the Fourier transform involving Schwartz functions.

Theorem 8.8:

If is the Fourier transform of a function in the Schwartz space , in addition to the rules in theorem 8.4, also the following rules hold:

  1. for arbitrary
  2. for arbitrary

If additionally is the Fourier transform of a , then

3.

Proof:

1.

For the first rule, we use induction over .

It is clear that the claim is true for (then the rule states that the Fourier transform of is the Fourier transform of ).

We proceed to the induction step: Let , and assume that the claim is true for all such that . Let such that . We show that the claim is also true for .

Remember that we have . We choose such that (this is possible since otherwise ), define

and obtain

by Schwarz' theorem, which implies that one may interchange the order of partial derivation arbitrarily.

Let be an arbitrary positive real number. From Fubini's theorem and integration by parts, we obtain:

Due to the dominated convergence theorem (with dominating function ), the integral on the left hand side of this equation converges to

as . Further, since is a Schwartz function, there are such that:

Hence, the function within the large parentheses on the right hand side of the last line of the last equation is dominated by the function

and hence, by the dominated convergence theorem, the integral over that function converges, as , to:

From the uniqueness of limits of real sequences we obtain 1.
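Rule 1 can be sanity-checked numerically in one dimension. In the sketch below, the Gaussian test function and the sample frequency 1.3 are my choices for illustration; the normalising prefactor of the transform cancels from both sides, so no convention needs to be fixed:

```python
import cmath
import math

def integrate(g, a=-12.0, b=12.0, n=24000):
    # midpoint rule; very accurate for rapidly decaying smooth integrands
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

f  = lambda x: math.exp(-x * x / 2)        # Gaussian, a Schwartz function
df = lambda x: -x * math.exp(-x * x / 2)   # its derivative

xi = 1.3  # arbitrary sample frequency
# rule 1 in dimension 1: the transform of f' equals (i*xi) times the transform of f
lhs = integrate(lambda x: df(x) * cmath.exp(-1j * x * xi))
rhs = 1j * xi * integrate(lambda x: f(x) * cmath.exp(-1j * x * xi))
assert abs(lhs - rhs) < 1e-8
```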

2.

We again use induction on . The claim is trivially true for . Assume that the claim is true for all such that ; choose such that and such that , and define .

Theorems 8.6 and 8.7 imply that

  • for all , and
  • for all , .

Further,

  • exists for all .

Hence, Leibniz' integral rule implies:

3.

Theorem 8.9:

Let , and let be the Fourier transform of . Then .

Proof:

Let be two arbitrary -dimensional multiindices, and let . By theorem 8.6, is a Schwartz function as well. Theorem 8.8 implies:

By theorem 8.3, is bounded. Since were arbitrary, this shows that .

Definitions 8.10:

We define the Fourier transform on the Schwartz space to be the function

.

Theorem 8.9 assures that this function really maps to . Furthermore, we define the inverse Fourier transform on the Schwartz space to be the function

.

This function maps to since .

Both the Fourier transform and the inverse Fourier transform are sequentially continuous:

Theorem 8.11:

Let and let be a sequence of Schwartz functions such that . Then and , both in the sense of Schwartz function convergence as defined in definition 3.11.

Proof:

1. We prove .

Let . Due to theorem 8.8 1. and 2. and the linearity of derivatives, integrals and multiplication, we have

.

As in the proof of theorem 8.3, we hence obtain

.

Due to the multi-dimensional product rule,

.

Let now be arbitrary. Since as defined in definition 3.11, for each we may choose such that

.

Further, we may choose such that

.

Hence follows for :

Since was arbitrary, we obtain .

2. From 1., we deduce .

If in the sense of Schwartz functions, then also in the sense of Schwartz functions, where we define

and .

Therefore, by 1. and integration by substitution using the diffeomorphism , .

In the next theorem, we prove that is the inverse function of the Fourier transform. But for the proof of that theorem (which will be a bit long, and hence a very good exercise to read), we need two more lemmas:

Lemma 8.12:

If we define the function

,

then and .

Proof:

1. :

We define

.

By the product rule, we have for all

.

Due to 1. of theorem 8.8, we have

;

from 2. of theorem 8.8 we further obtain

.

Hence, is constant. Further,

.

2. :

By substitution using the diffeomorphism ,

.
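The function in lemma 8.12 is presumably the Gaussian, which reproduces itself under the Fourier transform. A quick numerical check of this in one dimension, under the symmetric normalisation (an assumption here, since the original formula was lost):

```python
import cmath
import math

def fourier_gauss(xi, a=-12.0, b=12.0, n=24000):
    # (2*pi)^{-1/2} * integral of e^{-x^2/2} e^{-i x xi} dx, via midpoint rule
    h = (b - a) / n
    s = 0j
    for k in range(n):
        x = a + (k + 0.5) * h
        s += math.exp(-x * x / 2) * cmath.exp(-1j * x * xi)
    return s * h / math.sqrt(2 * math.pi)

# the Gaussian is (numerically) its own Fourier transform
for xi in (0.0, 0.7, 2.0):
    assert abs(fourier_gauss(xi) - math.exp(-xi * xi / 2)) < 1e-10
```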

For the next lemma, we need example 3.4 again, which is why we restate it:

Example 3.4: The standard mollifier , given by

, where , is a bump function (see exercise 3.2).
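The standard mollifier can be implemented directly. In this one-dimensional sketch the normalising constant is computed numerically rather than in closed form (there is no elementary expression for it):

```python
import math

def eta_raw(x):
    # unnormalised bump: exp(1/(x^2 - 1)) on (-1, 1), zero outside
    return math.exp(1.0 / (x * x - 1.0)) if abs(x) < 1 else 0.0

# choose the constant so that the mollifier integrates to 1
n = 20000
h = 2.0 / n
mass = sum(eta_raw(-1.0 + (k + 0.5) * h) for k in range(n)) * h
eta = lambda x: eta_raw(x) / mass

# eta is supported in [-1, 1] and has unit integral
assert eta(1.0) == 0.0 and eta(-2.0) == 0.0
total = sum(eta(-1.0 + (k + 0.5) * h) for k in range(n)) * h
assert abs(total - 1.0) < 1e-9
```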

Lemma 8.13:

Let , and for each define . Then in the sense of Schwartz functions.

Proof:

Let be arbitrary. Due to the generalised product rule,

.

By the triangle inequality, we may hence deduce

.

Since both and are Schwartz functions (see exercise 3.2 and theorem 3.9), for each we may choose such that

and .

Further, for each , we may choose such that

.

Let now be arbitrary. We choose such that for all

.

Further, we choose such that

.

This is possible since

due to our choice of .

Then we choose such that for all

.

Inserting all this in the above equation gives for . Since , and were arbitrary, this proves in the sense of Schwartz functions.

Theorem 8.14:

Let . Then and .

Proof:

1. We prove that if is a Schwartz function vanishing at the origin (i. e. ), then .

So let be a Schwartz function vanishing at the origin. By the fundamental theorem of calculus, the multi-dimensional chain rule and the linearity of the integral, we have

.

Defining ,

,

and multiplying both sides of the above equation by , we obtain

.

Since by repeated application of Leibniz' integral rule for all

,

all the are bump functions (due to theorem 4.15 and exercise 3.?), and hence Schwartz functions (theorem 3.9). Hence, by theorem 8.8 and the linearity of the Fourier transform (which follows from the linearity of the integral),

.

Hence,

.

Let . By Fubini's theorem, the fundamental theorem of calculus and since is a bump function, we have

.

If we let , theorem 8.11 and lemma 8.13 give the claim.

2. We deduce from 1. that if is an arbitrary Schwartz function, then .

As in lemma 8.12, we define

.

Let now be any Schwartz function. Then is also a Schwartz function (see exercise 3.?). Further, since , it vanishes at the origin. Hence, by 1.,

.

Further, due to lemma 8.12 and the linearity of the Fourier transform,

.

3. We deduce from 2. that if is a Schwartz function and is arbitrary, then (i. e. ).

Let and be arbitrary. Due to the definition of ,

.

Further, if we define ,

.

Hence, by 2.,

.

4. We deduce from 3. that for any Schwartz function we have .

Let and be arbitrary. Then we have

.

The Fourier transform of tempered distributions


Definition 8.15:

Let be a tempered distribution. We define

.
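The defining formula was lost here; the standard definition by duality, on which the continuity and linearity arguments of theorem 8.16 rely, is presumably:

```latex
% Fourier transform of a tempered distribution u, defined by duality:
\mathcal{F}u(\varphi) := u(\mathcal{F}\varphi)
    \qquad \text{for all } \varphi \in \mathcal{S}(\mathbb{R}^d)
```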

Theorem 8.16:

is a tempered distribution.

Proof:

1. Sequential continuity follows from the sequential continuity of and (theorem 8.11) and that the composition of two sequentially continuous functions is sequentially continuous again.

2. Linearity follows from the linearity of and and that the composition of two linear functions is linear again.

Definition 8.17:

Let be a tempered distribution. We define

.

Exercises


Sources


The Malgrange-Ehrenpreis theorem


Vandermonde's matrix


Definition 10.1:

Let and let . Then the Vandermonde matrix associated to is defined to be the matrix

.

For pairwise different (i. e. for ), the matrix is invertible, as the following theorem shows:

Theorem 10.2:

Let be the Vandermonde matrix associated to the pairwise different points . Then the matrix whose -th entry is given by

is the inverse matrix of .

Proof:

We prove that , where is the identity matrix.

Let . We first note that, by direct multiplication,

.

Therefore, if is the -th entry of the matrix , then by the definition of matrix multiplication

.
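Theorem 10.2 can be verified exactly with rational arithmetic. The conventions below are assumptions, since the original matrix entries were lost: row i of the Vandermonde matrix holds the powers of the point x_i, and the inverse is assembled from Lagrange basis polynomial coefficients, which is what a closed inversion formula of this kind amounts to:

```python
from fractions import Fraction

def vandermonde(xs):
    n = len(xs)
    # row i holds the powers x_i^0, ..., x_i^{n-1} (assumed convention)
    return [[xs[i] ** j for j in range(n)] for i in range(n)]

def lagrange_coeffs(xs, i):
    # coefficients (lowest degree first) of l_i(x) = prod_{m != i} (x - x_m)/(x_i - x_m)
    coeffs = [Fraction(1)]
    for m, xm in enumerate(xs):
        if m == i:
            continue
        d = xs[i] - xm
        new = [Fraction(0)] * (len(coeffs) + 1)
        for j, c in enumerate(coeffs):
            new[j + 1] += c / d      # contribution of the x * c_j term
            new[j] -= c * xm / d     # contribution of the -x_m * c_j term
        coeffs = new
    return coeffs

xs = [Fraction(v) for v in (0, 1, -2, 3)]  # pairwise different points
n = len(xs)
V = vandermonde(xs)
cols = [lagrange_coeffs(xs, i) for i in range(n)]
W = [[cols[i][j] for i in range(n)] for j in range(n)]  # column i = coeffs of l_i

# (V W)_{ik} = l_k(x_i) = delta_{ik}, so W is the inverse of V
for i in range(n):
    for k in range(n):
        entry = sum(V[i][j] * W[j][k] for j in range(n))
        assert entry == (1 if i == k else 0)
```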

The Malgrange-Ehrenpreis theorem


Lemma 10.3:

Let be pairwise different. The solution to the equation

is given by

, .

Proof:

We multiply both sides of the equation by on the left, where is as in theorem 10.2, and since is the inverse of

,

we end up with the equation

.

Calculating the last expression directly leads to the desired formula.

Exercises


Sources


Sobolev spaces


There are some partial differential equations which have no solution. However, some of them have something like ‘almost a solution’, which we call a weak solution. Among these are partial differential equations whose weak solutions model processes in nature, just as the solutions of solvable partial differential equations do.

These weak solutions will be elements of the so-called Sobolev spaces. By proving properties which elements of Sobolev spaces in general have, we will thus obtain properties of weak solutions to partial differential equations, which therefore are properties of some processes in nature.

In this chapter we show some properties of elements of Sobolev spaces. Furthermore, we will show that Sobolev spaces are Banach spaces (this will help us in the next section, where we investigate existence and uniqueness of weak solutions).

The fundamental lemma of the calculus of variations


First we shall repeat the definition of the standard mollifier from chapter 3.

Example 3.4: The standard mollifier , given by

, where , is a bump function (see exercise 3.2).

Definition 3.13:

For , we define

.

Lemma 12.1:

Let be a simple function, i. e.

,

where are intervals and is the indicator function. If

,

then .

The following lemma, which is important for some theorems about Sobolev spaces, is known as the fundamental lemma of the calculus of variations:

Lemma 12.2:

Let and let be functions such that and . Then almost everywhere.

Proof:

We define

Weak derivatives


Definition 12.1:

Let be a set, and . If is a -dimensional multiindex and such that

, we call an th-weak derivative of .
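A concrete example: the absolute value function is not classically differentiable at 0, but it has the sign function as a weak derivative. The sketch below numerically checks the defining integration-by-parts identity against a bump test function; the shift 0.3 is an arbitrary choice to make both integrals nonzero:

```python
import math

c = 0.3  # shift the bump off-centre so both integrals are nonzero (arbitrary choice)

def phi(x):
    # smooth test function supported in (c - 1, c + 1)
    u = (x - c) ** 2 - 1.0
    return math.exp(1.0 / u) if abs(x - c) < 1 else 0.0

def phi_prime(x):
    u = (x - c) ** 2 - 1.0
    return phi(x) * (-2.0 * (x - c) / (u * u)) if abs(x - c) < 1 else 0.0

def midpoint(g, a, b, n):
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

a, b, n = c - 1.0, c + 1.0, 200000
# defining identity of the weak derivative (first order):
#   integral of |x| phi'(x) dx  =  - integral of sgn(x) phi(x) dx
lhs = midpoint(lambda x: abs(x) * phi_prime(x), a, b, n)
rhs = -midpoint(lambda x: math.copysign(1.0, x) * phi(x), a, b, n)
assert abs(lhs - rhs) < 1e-4
```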

Remarks 12.2: If is a function and is a -dimensional multiindex, any two th-weak derivatives of are equal except on a null set. Furthermore, if exists, it also is an th-weak derivative of .

Proof:

1. We prove that any two th-weak derivatives are equal except on a null set.

Let be two th-weak derivatives of . Then we have

Notation 12.3: If it exists, we denote the th-weak derivative of by , which is of course the same symbol as for the ordinary derivative.

Theorem 12.4:

Let be open, , and . Assume that have -weak derivatives, which we denote, consistent with notation 12.3, by and . Then for all :

Proof:

Definition and first properties of Sobolev spaces


Definition and theorem 12.6:

Let be open, , and . The Sobolev space is defined as follows:

A norm on is defined as follows:

With respect to this norm, is a Banach space.

In the above definition, denotes the th-weak derivative of .
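As a concrete instance of the norm, consider sin on the interval (0, π) with p = 2 and first-order derivatives; its weak derivative is cos, and the squared norm is ∫ sin² + ∫ cos² = π. A numerical sketch (the midpoint quadrature is my choice):

```python
import math

def midpoint(g, a, b, n=100000):
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

# f = sin on (0, pi); its weak (here even classical) derivative is cos
a, b = 0.0, math.pi
norm_sq = midpoint(lambda x: math.sin(x) ** 2, a, b) \
        + midpoint(lambda x: math.cos(x) ** 2, a, b)
sobolev_norm = math.sqrt(norm_sq)
assert abs(sobolev_norm - math.sqrt(math.pi)) < 1e-6
```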

Proof:

1.

We show that

is a norm.

We have to check the three defining properties for a norm:

  • (definiteness)
  • for every (absolute homogeneity)
  • (triangle inequality)

We start with definiteness: If , then , since all the directional derivatives of the constant zero function are again the zero function. Furthermore, if , then it follows that , implying that , as is a norm.

We proceed to absolute homogeneity. Let .

And the triangle inequality has to be shown:

2.

We prove that is a Banach space.

Let be a Cauchy sequence in . Since for all -dimensional multiindices with and

since we only added non-negative terms, we obtain that for all -dimensional multiindices with , is a Cauchy sequence in . Since is a Banach space, this sequence converges to a limit in , which we shall denote by .

We show now that and with respect to the norm , thereby showing that is a Banach space.

To do so, we show that for all -dimensional multiindices with the th-weak derivative of is given by . Convergence then automatically follows, as

where in the last line all the summands converge to zero provided that for all -dimensional multiindices with .

Let . Since and by the second triangle inequality

, the sequence is, for large enough , dominated by the function , and the sequence is dominated by the function .

(It remains to verify that these dominating functions are integrable.)

Therefore

, which is why is the th-weak derivative of for all -dimensional multiindices with .

Approximation by smooth functions


We shall now prove that for any function, we can find a sequence of bump functions converging to that function in norm.

(Proof idea: approximation by simple functions and lemma 12.1, using $\|f_\epsilon - f\| \le \|f_\epsilon - g_\epsilon\| + \|g_\epsilon - g\| + \|g - f\|$.)

Let be a domain, let , and , such that . Let furthermore . Then is in for and .

Proof: The first claim, that , follows if we choose

Then, due to the above section about mollifying -functions, we know that the first claim is true.

The second claim follows from the following calculation, using the one-dimensional chain rule:

Due to the above section about mollifying -functions, we immediately know that , and the second statement therefore follows from the definition of the -norm.

Let be an open set. Then for all functions , there exists a sequence of functions in approximating it.

Proof:

Let's choose

and

One sees that the are an open cover of . Therefore, we can choose a sequence of functions (a partition of unity) such that

By defining and

, we even obtain the properties

where the properties are the same as before, except that the third property has changed. Let , be a bump function and be a sequence which approximates in the -norm. The calculation

reveals, by taking the limit on both sides, that implies , since the limit of must be in , as we may choose a sequence of bump functions converging to 1.

Let's choose now

We may now choose an arbitrary and so small that

Let's now define

This function is infinitely often differentiable, since by construction there are only finitely many elements of the sum which do not vanish on each , and also since the elements of the sum are infinitely differentiable due to the Leibniz rule of differentiation under the integral sign. But we also have:

Since was arbitrary, this finishes the proof.

Let be a bounded domain, and let have the property that for every point , there is a neighbourhood such that

for a continuous function . Then every function in can be approximated by -functions in the -norm.

Proof:

to follow

Hölder spaces and Morrey's inequality


Continuous representatives


The Gagliardo–Nirenberg–Sobolev inequality


Sobolev embedding theorems


Exercises


Sources
