Distributions are a very helpful tool for solving partial differential equations: in many cases they lead to a surprisingly direct way of finding a solution.
This chapter explains what distributions are, what properties they have, and how they can be manipulated. How they are actually applied is shown not in this, but in the next chapter (about fundamental solutions, Green's functions and Green's kernels).
Important preliminary definitions
The support of a function
Let $f: \mathbb{R}^d \to \mathbb{R}$ be a function. We define the support of $f$ as follows:

$\operatorname{supp}(f) := \overline{\{x \in \mathbb{R}^d : f(x) \neq 0\}}$

A multiindex is a vector with entries of natural numbers and zero, i.e. $\alpha \in \mathbb{N}_0^d$ for some $d \in \mathbb{N}$: $\alpha = (\alpha_1, \ldots, \alpha_d)$.
The absolute value of a multiindex $\alpha$ is defined by

$|\alpha| := \sum_{i=1}^d \alpha_i$

For a given multiindex $\alpha$, we define the $\alpha$-th derivative as follows:

$\partial^\alpha f := \frac{\partial^{|\alpha|} f}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}}$

For a given vector $x \in \mathbb{R}^d$ and a multiindex $\alpha$, we define $x$ to the power of $\alpha$ as follows:

$x^\alpha := x_1^{\alpha_1} \cdots x_d^{\alpha_d}$
Let $\alpha, \beta \in \mathbb{N}_0^d$ be two multiindices. We define the addition of $\alpha$ and $\beta$ componentwise:
- $(\alpha + \beta)_i := \alpha_i + \beta_i$, for every $i \in \{1, \ldots, d\}$.
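As a quick numerical illustration (a sketch of my own, not part of the text), multiindices can be modelled as integer tuples, with $|\alpha|$ as the sum of the entries and $x^\alpha$ as a product of componentwise powers:

```python
# Hypothetical helper functions illustrating the multiindex conventions above.
import numpy as np

def abs_multiindex(alpha):
    """|alpha| = alpha_1 + ... + alpha_d."""
    return sum(alpha)

def power(x, alpha):
    """x^alpha = x_1^alpha_1 * ... * x_d^alpha_d."""
    return np.prod([xi ** ai for xi, ai in zip(x, alpha)])

alpha = (2, 0, 1)            # a multiindex in N_0^3
x = (2.0, 5.0, 3.0)

print(abs_multiindex(alpha))  # 2 + 0 + 1 = 3
print(power(x, alpha))        # 2^2 * 5^0 * 3^1 = 12.0
```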
The bump functions
Definition of a bump function
Let $\Omega$ be an open subset of $\mathbb{R}^d$. We call $\varphi: \Omega \to \mathbb{R}$ a bump function, if and only if the following two conditions hold:
- $\varphi \in C^\infty(\Omega)$ (this means $\varphi$ is infinitely often differentiable)
- and also $\operatorname{supp}(\varphi)$ is compact.
Example: The standard mollifier $\eta$, given by

$\eta(x) := \frac{1}{c} \begin{cases} e^{-\frac{1}{1-\|x\|^2}} & \text{if } \|x\| < 1 \\ 0 & \text{if } \|x\| \ge 1 \end{cases}$

, where $c := \int_{B_1(0)} e^{-\frac{1}{1-\|x\|^2}}\, dx$, is a bump function.
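The standard mollifier is easy to explore numerically. The following is a sketch of my own (in dimension $d = 1$, with the normalizing constant approximated by the trapezoidal rule); it shows that $\eta$ is strictly positive inside $(-1, 1)$ and exactly zero from $\|x\| = 1$ onwards:

```python
# Numerical sketch of the standard mollifier in d = 1 (names are my own).
import numpy as np

def eta_unnormalized(x):
    """exp(-1/(1-x^2)) for |x| < 1, and 0 otherwise."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

# Normalizing constant c = integral over (-1, 1), via the trapezoidal rule.
xs = np.linspace(-1.0, 1.0, 20001)
ys = eta_unnormalized(xs)
dx = xs[1] - xs[0]
c = dx * (0.5 * ys[0] + ys[1:-1].sum() + 0.5 * ys[-1])

def eta(x):
    return eta_unnormalized(x) / c

vals = eta(np.array([0.0, 0.5, 1.0, 1.5]))
print(vals)   # positive inside (-1, 1), exactly 0 from |x| = 1 on
```

By construction the numerical integral of `eta` over its support is 1, mirroring the role of the constant $c$.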
The space of bump functions
We define the space of bump functions for a domain $\Omega$ (a domain is an open and connected set) as the set of all bump functions on this domain:

$\mathcal{D}(\Omega) := \{\varphi: \Omega \to \mathbb{R} \mid \varphi \text{ is a bump function}\}$
This space has a notion of convergence: We say that a sequence of bump functions $(\varphi_n)_{n \in \mathbb{N}}$ converges to another bump function $\varphi$ iff the following two conditions are satisfied:
- There is a compact set $K \subset \Omega$ such that $\operatorname{supp}(\varphi_n) \subseteq K$ for all $n \in \mathbb{N}$, and:
- $\partial^\alpha \varphi_n \to \partial^\alpha \varphi$ uniformly, for every multiindex $\alpha$
The Schwartz functions
Definition of a Schwartz function
We call $\phi: \mathbb{R}^d \to \mathbb{R}$ a Schwartz function, if and only if the following two conditions hold:
- $\phi \in C^\infty(\mathbb{R}^d)$ (this means again $\phi$ is infinitely often differentiable)
- $\sup_{x \in \mathbb{R}^d} |x^\alpha \partial^\beta \phi(x)| < \infty$ for all multiindices $\alpha, \beta$
Example: The function

$\phi(x) := e^{-\|x\|^2}$

is a Schwartz function.
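The defining decay condition can be checked numerically on a grid. The sketch below (my own, in $d = 1$) contrasts the Gaussian from the example with $1/(1+x^2)$, which is smooth and bounded but not a Schwartz function, because multiplying it by a high enough power of $x$ gives an unbounded product:

```python
# Numerical sanity check of the Schwartz decay condition (sketch of my own).
import numpy as np

xs = np.linspace(-50.0, 50.0, 100001)
phi = np.exp(-xs ** 2)          # the Gaussian from the example
f = 1.0 / (1.0 + xs ** 2)       # smooth and bounded, but NOT Schwartz

for k in (1, 3, 5):
    # sup |x^k phi(x)| stays small for every k: rapid decay wins
    print(k, np.max(np.abs(xs ** k * phi)))

# For f, already x^3 * f(x) grows like |x| and is unbounded:
print(np.max(np.abs(xs ** 3 * f)))
```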
The space of Schwartz functions
Analogously to the space of bump functions, we can also define the space of Schwartz functions:

$\mathcal{S}(\mathbb{R}^d) := \{\phi: \mathbb{R}^d \to \mathbb{R} \mid \phi \text{ is a Schwartz function}\}$

The space of Schwartz functions also has a notion of convergence: We say that the sequence of Schwartz functions $(\phi_n)_{n \in \mathbb{N}}$ converges to $\phi$ iff the following condition is satisfied:

$\sup_{x \in \mathbb{R}^d} |x^\alpha \partial^\beta (\phi_n - \phi)(x)| \to 0, \; n \to \infty$, for all multiindices $\alpha, \beta$
Relations between bump functions and Schwartz functions
Theorem 1.1: Every bump function on $\mathbb{R}^d$ is also a Schwartz function, i.e. $\mathcal{D}(\mathbb{R}^d) \subseteq \mathcal{S}(\mathbb{R}^d)$.
Proof: A bump function $\varphi$ has compact support. Outside the support, $\varphi$ and all its derivatives are zero, because $\varphi$ is constantly zero there. The support is a compact set, and the function $x \mapsto |x^\alpha \partial^\beta \varphi(x)|$ is continuous due to the properties of a bump function; therefore it attains its maximum on the support (see Wikipedia: Extreme value theorem). But since $\varphi$ and all its derivatives are zero outside the support, this maximum is a global maximum. We furthermore obtain, for all multiindices $\alpha, \beta$:

$\sup_{x \in \mathbb{R}^d} |x^\alpha \partial^\beta \varphi(x)| = \max_{x \in \operatorname{supp}(\varphi)} |x^\alpha \partial^\beta \varphi(x)| < \infty,$

where the first equality is true because $\partial^\beta \varphi$ is zero outside $\operatorname{supp}(\varphi)$. This is what we wanted.
Theorem 1.2: Let $(\varphi_n)_{n \in \mathbb{N}}$ be an arbitrary sequence of bump functions. If $\varphi_n \to \varphi$ with respect to the notion of convergence for bump functions, then also $\varphi_n \to \varphi$ with respect to the notion of convergence for Schwartz functions.
Proof: Let $K$ be the compact set in which all the $\operatorname{supp}(\varphi_n)$ are contained. In $\mathbb{R}^d$, 'compact' is the same as 'bounded and closed'. Therefore, $K \subseteq B_R(0)$ for some $R > 0$. Then we have for all multiindices $\alpha, \beta$ that

$\sup_{x \in \mathbb{R}^d} |x^\alpha \partial^\beta (\varphi_n - \varphi)(x)| \le R^{|\alpha|} \sup_{x \in K} |\partial^\beta (\varphi_n - \varphi)(x)| \to 0, \; n \to \infty,$

due to the definition of convergence for bump functions. Therefore the sequence converges with respect to the notion of convergence for Schwartz functions.
Let $\mathcal{F}$ be a function space with a notion of convergence. A distribution is a mapping $T: \mathcal{F} \to \mathbb{R}$ with two properties:
- $T$ is linear
- $T$ is sequentially continuous; i.e. if $\varphi_n \to \varphi$ in the notion of convergence of the function space, then it must follow that $T(\varphi_n) \to T(\varphi)$ in the ordinary notion of convergence in the real numbers (i.e. $|T(\varphi_n) - T(\varphi)| \to 0$)
If $\mathcal{F}$ is the space of the bump functions, we simply call such a mapping a distribution (because usually distributions are distributions with the bump functions as function space). If however $\mathcal{F}$ is the space of Schwartz functions, then we call it a tempered distribution.
An example of a distribution is the Dirac delta distribution $\delta_x$ for an $x \in \mathbb{R}^d$, which is defined by

$\delta_x(\varphi) := \varphi(x)$

for functions $\varphi$ in the underlying function space.
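In code, $\delta_x$ is simply "evaluate the test function at $x$". The sketch below (my own; the Gaussian approximation is the standard approximate-identity construction, not something from the text) also checks numerically that the regular distributions of narrow Gaussians $\eta_\varepsilon$ tend to $\delta_0$:

```python
# Sketch of the Dirac delta as a functional, plus its Gaussian approximation.
import numpy as np

def delta(x0):
    """delta_{x0}(phi) := phi(x0)."""
    return lambda phi: phi(x0)

phi = lambda x: np.cos(x)        # a smooth test function

d0 = delta(0.0)
print(d0(phi))                   # phi(0) = 1.0

# integral of eta_eps(x) phi(x) dx -> phi(0) as eps -> 0
xs = np.linspace(-5.0, 5.0, 200001)
dx = xs[1] - xs[0]
approx = []
for eps in (1.0, 0.1, 0.01):
    eta_eps = np.exp(-(xs / eps) ** 2) / (eps * np.sqrt(np.pi))
    approx.append(np.sum(eta_eps * phi(xs)) * dx)
print(approx)                    # tends to phi(0) = 1.0
```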
Let $f: \mathbb{R}^d \to \mathbb{R}$ be a function and let $\mathcal{F} \subseteq L^\infty(\mathbb{R}^d)$ be a function space, where $L^\infty$ denotes the set of the essentially bounded functions (i.e. the functions which are below a certain constant except for a Lebesgue null set). Then we can define a mapping $T_f: \mathcal{F} \to \mathbb{R}$ as follows:

$T_f(\varphi) := \int_{\mathbb{R}^d} f(x) \varphi(x)\, dx$
We call a distribution $T$ a regular distribution, if and only if there is a function $f$ such that $T = T_f$.
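A regular distribution is straightforward to approximate by quadrature. The following sketch (my own; the trapezoidal rule and the choice $f = \operatorname{sign}$ are illustrative assumptions) builds $T_f$ on a grid and checks its linearity:

```python
# Numerical sketch of a regular distribution T_f (names are my own).
import numpy as np

xs = np.linspace(-10.0, 10.0, 100001)
dx = xs[1] - xs[0]

def T(f):
    """T_f(phi) := integral of f(x) phi(x) dx, via the trapezoidal rule."""
    def functional(phi):
        y = f(xs) * phi(xs)
        return dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])
    return functional

f = lambda x: np.sign(x)             # bounded, hence locally integrable
phi = lambda x: np.exp(-x ** 2)      # even test function
psi = lambda x: x * np.exp(-x ** 2)  # odd test function

Tf = T(f)
print(Tf(phi))   # ~ 0, since sign(x) * phi(x) is odd
print(Tf(psi))   # ~ 1 = integral of |x| exp(-x^2) over R
```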
The following three claims are true:
- If $f$ is an integrable function and $\mathcal{F} \hookrightarrow L^\infty(\mathbb{R}^d)$, where the embedding is sequentially continuous (i.e. convergence in $\mathcal{F}$ implies uniform convergence), then $T_f$ as defined above is a distribution.
- If $f$ is a locally integrable function, $\Omega$ is a domain and $\mathcal{F} = \mathcal{D}(\Omega)$, then $T_f$ as defined above is a distribution.
- If $f \in L^p(\mathbb{R}^d)$ and $\mathcal{F} = \mathcal{S}(\mathbb{R}^d)$, then $T_f$ as defined above is a distribution.
1) The linearity is due to the linearity of the integral. Well-definedness follows from the calculation

$|T_f(\varphi)| \le \int_{\mathbb{R}^d} |f(x)| |\varphi(x)|\, dx \le \|f\|_{L^1} \|\varphi\|_\infty < \infty$

Since the embedding $\mathcal{F} \hookrightarrow L^\infty$ is sequentially continuous, we have $\|\varphi_n - \varphi\|_\infty \to 0$ whenever $\varphi_n \to \varphi$ in $\mathcal{F}$. Therefore, continuity follows from

$|T_f(\varphi_n) - T_f(\varphi)| \le \|f\|_{L^1} \|\varphi_n - \varphi\|_\infty \to 0$
2) The proof follows by observing that $f$ is integrable over every compact set, since $f$ is locally integrable, and that the notion of convergence in $\mathcal{D}(\Omega)$ requires that if $\varphi_n \to \varphi$, then there exists a compact set $K$ such that $\operatorname{supp}(\varphi_n) \subseteq K$ for all $n$, and then performing almost the same calculations as above with $\|f\|_{L^1(K)}$ in place of $\|f\|_{L^1}$.
3) Due to the triangle inequality for integrals and Hölder's inequality, we have, with $q$ such that $\frac{1}{p} + \frac{1}{q} = 1$:

$|T_f(\varphi)| \le \int_{\mathbb{R}^d} |f(x) \varphi(x)|\, dx \le \|f\|_{L^p} \|\varphi\|_{L^q}$

But we furthermore have (for $q < \infty$; the case $q = \infty$ is analogous and simpler)

$\|\varphi\|_{L^q} \le \left( \int_{\mathbb{R}^d} (1 + \|x\|^2)^{-dq}\, dx \right)^{1/q} \sup_{x \in \mathbb{R}^d} \left| (1 + \|x\|^2)^d \varphi(x) \right|,$

and the supremum on the right-hand side is bounded by a finite sum of expressions of the form $\sup_x |x^\alpha \varphi(x)|$. If $\varphi_n \to \varphi$ in the notion of convergence of the Schwartz function space, then this expression, applied to $\varphi_n - \varphi$, goes to zero. Therefore, continuity is verified. Linearity again follows by the properties of the integral. Well-definedness follows from the estimate

$|T_f(\varphi)| \le \|f\|_{L^p} \|\varphi\|_{L^q} < \infty$
If $\mathcal{F}$ is a function space of functions defined on $\mathbb{R}^d$ with a notion of convergence, then the set of all distributions on this space is usually denoted by $\mathcal{F}^*$. This set is also called a "distribution space". It is the dual space of $\mathcal{F}$.
Theorem: Every tempered distribution is also a distribution; i.e. restricting $T \in \mathcal{S}(\mathbb{R}^d)^*$ to $\mathcal{D}(\mathbb{R}^d)$ gives an element of $\mathcal{D}(\mathbb{R}^d)^*$.

Proof: Let $T \in \mathcal{S}(\mathbb{R}^d)^*$, let $(\varphi_n)_{n \in \mathbb{N}}$ be a convergent sequence of bump functions with limit $\varphi$, and let $\psi_1, \psi_2$ be two bump functions.

Theorem 1.1 gives us that $\varphi_n, \varphi$ are Schwartz functions.

Theorem 1.2 gives us that $\varphi_n \to \varphi$ in the sense of Schwartz functions.

From these two statements we can conclude, due to the sequential continuity of $T$ on the Schwartz functions, that $T(\varphi_n) \to T(\varphi)$.

Theorem 1.1 tells us furthermore that $\psi_1, \psi_2$ are Schwartz functions. From this we can conclude, due to the linearity of $T$ on the Schwartz functions, that $T(a \psi_1 + b \psi_2) = a T(\psi_1) + b T(\psi_2)$ for all $a, b \in \mathbb{R}$.

This completes the proof.
Operations on Distributions
Lemma: Let $\mathcal{F}_1, \mathcal{F}_2$ be function spaces with a notion of convergence, and let $O$ be a linear function mapping functions to functions.

If there exists a linear operator $L: \mathcal{F}_2 \to \mathcal{F}_1$, which is sequentially continuous, and it holds for regular distributions that:

$T_{Of}(\varphi) = T_f(L\varphi)$ for all $\varphi \in \mathcal{F}_2$

Then, under these conditions, we may define the operator

$O: \mathcal{F}_1^* \to \mathcal{F}_2^*, \quad (OT)(\varphi) := T(L\varphi)$

, which really maps $\mathcal{F}_1^*$ to $\mathcal{F}_2^*$, and for regular distributions $T_f$ and $T_{Of}$ it will have the property

$O T_f = T_{Of}$

Proof: Well-definedness follows from the fact that $L\varphi$ is an element of $\mathcal{F}_1$ due to the first requirement on $L$, so $T(L\varphi)$ makes sense. Linearity follows from the linearity of $T$ and the linearity of $L$:

$(OT)(a\varphi + b\psi) = T(L(a\varphi + b\psi)) = T(a L\varphi + b L\psi) = a T(L\varphi) + b T(L\psi) = a (OT)(\varphi) + b (OT)(\psi)$

Continuity follows just the same way from the sequential continuity of $T$ and $L$: Let $\varphi_n \to \varphi$ w.r.t. the notion of convergence of $\mathcal{F}_2$. Then $L\varphi_n \to L\varphi$ in $\mathcal{F}_1$, and hence

$(OT)(\varphi_n) = T(L\varphi_n) \to T(L\varphi) = (OT)(\varphi)$

The property $O T_f = T_{Of}$ follows directly from the equation $T_{Of}(\varphi) = T_f(L\varphi)$.
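The scheme of the lemma, defining an operation on a distribution by handing an operator over to the test function, fits in a few lines of code. This is a structural sketch of my own (class and function names are hypothetical); the operator `L` here is just a scaling, chosen only to check the plumbing:

```python
# Sketch of the adjoint scheme (O T)(phi) := T(L phi); names are my own.
class Distribution:
    def __init__(self, action):
        self.action = action         # a linear functional on test functions

    def __call__(self, phi):
        return self.action(phi)

def apply_via_adjoint(T, L):
    """(O T)(phi) := T(L phi) -- the definition from the lemma."""
    return Distribution(lambda phi: T(L(phi)))

delta0 = Distribution(lambda phi: phi(0.0))    # the delta distribution
L = lambda phi: (lambda x: 3.0 * phi(x))       # a linear operator on functions
T2 = apply_via_adjoint(delta0, L)
print(T2(lambda x: x + 1.0))                   # 3 * (0 + 1) = 3.0
```

With `L` replaced by, say, $\varphi \mapsto -\partial_{x_i}\varphi$, the same plumbing yields the distributional derivative defined below.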
Multiplication by a smooth function
Let $g \in C^\infty(\Omega)$ be a smooth function ("smooth" means it is infinitely often differentiable). Then, by defining $Of := g \cdot f$ and $L\varphi := g \cdot \varphi$, we meet the requirements of the above lemma and may define the multiplication of distributions by smooth functions as follows:
- Let $T \in \mathcal{D}(\Omega)^*$, then $(gT)(\varphi) := T(g\varphi)$
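For the delta distribution this definition collapses to a scalar multiple: $(g \, \delta_{x_0})(\varphi) = \delta_{x_0}(g\varphi) = g(x_0)\varphi(x_0)$, i.e. $g \, \delta_{x_0} = g(x_0)\, \delta_{x_0}$. A minimal sketch (names my own):

```python
# (g T)(phi) := T(g * phi), applied to T = delta_{x0}; names are my own.
import math

def delta(x0):
    return lambda phi: phi(x0)

def multiply(g, T):
    """(g T)(phi) := T(g * phi)."""
    return lambda phi: T(lambda x: g(x) * phi(x))

g = math.cos
T = multiply(g, delta(0.0))
phi = lambda x: x + 5.0
print(T(phi))            # cos(0) * phi(0) = 1.0 * 5.0 = 5.0
```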
For the bump functions and the Schwartz functions, we also may define the differentiation of distributions. Let $\mathcal{F} \in \{\mathcal{D}(\Omega), \mathcal{S}(\mathbb{R}^d)\}$ and $i \in \{1, \ldots, d\}$. Let's now define

$L\varphi := -\partial_{x_i} \varphi$

Then, for the spaces $\mathcal{D}(\Omega)$ or $\mathcal{S}(\mathbb{R}^d)$, the requirements of the above lemma are met and we may define the differentiation of a distribution in the following way:

$(\partial_{x_i} T)(\varphi) := -T(\partial_{x_i} \varphi), \quad (\partial^\alpha T)(\varphi) := (-1)^{|\alpha|}\, T(\partial^\alpha \varphi)$

This definition also satisfies $\partial^\alpha T_f = T_{\partial^\alpha f}$ for sufficiently differentiable $f$.
Proof: By integration by parts, we obtain:

$\int_\Omega \partial_{x_i} f(x)\, \varphi(x)\, dx = \int_{\partial\Omega} f(x)\, \varphi(x)\, \nu_i(x)\, dx - \int_\Omega f(x)\, \partial_{x_i} \varphi(x)\, dx$

, where $\nu_i$ is the $i$-th component of the outward normal vector and $\partial\Omega$ is the boundary of $\Omega$. For bump functions, the boundary integral vanishes anyway, because the functions in $\mathcal{D}(\Omega)$ are zero there. For Schwartz functions, we may use the identity

$\int_{\mathbb{R}^d} \partial_{x_i} f(x)\, \varphi(x)\, dx = \lim_{R \to \infty} \int_{B_R(0)} \partial_{x_i} f(x)\, \varphi(x)\, dx$

and the decreasing property of the Schwartz functions to see that the boundary integral over $\partial B_R(0)$ goes to zero, and therefore

$T_{\partial_{x_i} f}(\varphi) = -T_f(\partial_{x_i} \varphi)$

To derive the equation

$T_{\partial^\alpha f}(\varphi) = (-1)^{|\alpha|}\, T_f(\partial^\alpha \varphi)$

, we may apply the formula from above several times. This finishes the proof, because this equation was the only non-trivial property which we need for applying the above lemma.
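The distributional derivative is worth seeing numerically. In the classic example below (a sketch of my own, with quadrature on a grid), $f$ is the Heaviside function, and $-T_H(\varphi') = \varphi(0)$ shows that $H' = \delta_0$ in the sense of distributions:

```python
# Numerical sketch: (d/dx T_H)(phi) := -T_H(phi') equals phi(0), i.e.
# the distributional derivative of the Heaviside function is delta_0.
import numpy as np

xs = np.linspace(-10.0, 10.0, 200001)
dx = xs[1] - xs[0]

def integrate(y):
    """Trapezoidal rule on the fixed grid."""
    return dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

phi = lambda x: np.exp(-x ** 2)              # test function
dphi = lambda x: -2.0 * x * np.exp(-x ** 2)  # its classical derivative

H = (xs >= 0).astype(float)                  # Heaviside function on the grid
lhs = -integrate(H * dphi(xs))               # (H')(phi) = -T_H(phi')
print(lhs, phi(0.0))                         # both ~ 1.0
```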
Let $\Omega_1, \Omega_2 \subseteq \mathbb{R}^d$ be domains, and let $\mu$ be a smooth function from $\Omega_1$ to $\Omega_2$, such that for all compact subsets $K \subseteq \Omega_2$, the preimage $\mu^{-1}(K)$ is compact. Then we call the function

$\mu^* \varphi := \varphi \circ \mu, \quad \varphi \in \mathcal{D}(\Omega_2)$

the pull-back of bump functions.
If we choose $\Omega_1 = \Omega_2 = \mathbb{R}^d$, i.e. $\mu$ is a smooth function from $\mathbb{R}^d$ to $\mathbb{R}^d$ such that for all compact sets $K \subseteq \mathbb{R}^d$, $\mu^{-1}(K)$ is compact, then we also define the pull-back of Schwartz functions just exactly the same way:

$\mu^* \phi := \phi \circ \mu, \quad \phi \in \mathcal{S}(\mathbb{R}^d)$
For bump functions and Schwartz functions, we may define the push-forward of distributions:
For the bump functions

$\mu_*: \mathcal{D}(\Omega_1)^* \to \mathcal{D}(\Omega_2)^*, \quad (\mu_* T)(\varphi) := T(\mu^* \varphi)$

or, for Schwartz functions:

$\mu_*: \mathcal{S}(\mathbb{R}^d)^* \to \mathcal{S}(\mathbb{R}^d)^*, \quad (\mu_* T)(\phi) := T(\mu^* \phi)$
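A small sketch (my own; function names are hypothetical) shows the characteristic behaviour of the push-forward on the delta distribution: pushing $\delta_x$ forward along $\mu$ gives $\delta_{\mu(x)}$, since $(\mu_* \delta_x)(\varphi) = \delta_x(\varphi \circ \mu) = \varphi(\mu(x))$:

```python
# Pull-back on test functions and the induced push-forward on distributions.
def pullback(mu):
    """mu^* phi := phi o mu."""
    return lambda phi: (lambda x: phi(mu(x)))

def pushforward(mu, T):
    """(mu_* T)(phi) := T(mu^* phi)."""
    return lambda phi: T(pullback(mu)(phi))

delta = lambda x0: (lambda phi: phi(x0))

mu = lambda x: 2.0 * x + 1.0          # a smooth, proper map R -> R
T = pushforward(mu, delta(3.0))       # should behave like delta_{mu(3)}
phi = lambda x: x ** 2
print(T(phi))                         # phi(mu(3)) = 7^2 = 49.0
```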
Let $f \in L^1(\mathbb{R}^d)$, and let $\varphi \in \mathcal{F}$. Let's define

$(f * \varphi)(x) := \int_{\mathbb{R}^d} f(y)\, \varphi(x - y)\, dy$

The mapping $\varphi \mapsto f * \varphi$ is linear, because the integral is linear. $f * \varphi$ is called the convolution of $f$ and $\varphi$.
We can also define: $f^-(x) := f(-x)$, and: $L\varphi := f^- * \varphi$.
By the theorem of Fubini, we can calculate as follows:

$T_{f*g}(\varphi) = \int (f * g)(x)\, \varphi(x)\, dx = \int \int f(y)\, g(x - y)\, \varphi(x)\, dy\, dx = \int g(z) \int f(y)\, \varphi(z + y)\, dy\, dz = \int g(z)\, (f^- * \varphi)(z)\, dz = T_g(f^- * \varphi)$

Therefore, the first assumption of the above lemma holds.
Due to the Leibniz integral rule, we obtain that for $f \in L^1(\mathbb{R}^d)$ (i.e. $f$ is integrable) and $\varphi \in C^k$ (i.e. the partial derivatives of $\varphi$ exist up to order $k$ and are also continuous):

$\partial^\alpha (f * \varphi) = f * (\partial^\alpha \varphi), \quad |\alpha| \le k$

With this formula, we can see (due to the monotony of the integral) that

$\|\partial^\alpha (f * \varphi)\|_\infty \le \|f\|_{L^1}\, \|\partial^\alpha \varphi\|_\infty$
From this follows the sequential continuity of $L$ for Schwartz and bump functions, by defining $Og := f * g$ and $L\varphi := f^- * \varphi$. Thus, with the help of the above lemma, we can define the convolution of $f$ with a distribution $T \in \mathcal{D}(\mathbb{R}^d)^*$ or $T \in \mathcal{S}(\mathbb{R}^d)^*$ as follows:

$(f * T)(\varphi) := T(f^- * \varphi)$
- "Sequentially continuous" means in this case that if $\varphi_n \to \varphi$ with respect to the notion of convergence of $\mathcal{F}$, then $L\varphi_n \to L\varphi$ must also hold w.r.t. (= "with respect to") the notion of convergence of $\mathcal{F}$
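The definition can be sanity-checked numerically in the special case $T = \delta_0$: there, $(f * \delta_0)(\varphi) = (f^- * \varphi)(0) = \int f(u)\varphi(u)\, du = T_f(\varphi)$, so convolving $\delta_0$ with $f$ recovers the regular distribution $T_f$. A sketch of my own, using trapezoidal quadrature on a grid:

```python
# Numerical check that (f * delta_0)(phi) = T_f(phi); names are my own.
import numpy as np

xs = np.linspace(-10.0, 10.0, 40001)
dx = xs[1] - xs[0]

f = lambda x: np.exp(-np.abs(x))     # an integrable function
phi = lambda x: np.exp(-x ** 2)      # test function

def trapz(y):
    return dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def conv_at(g, h, x):
    """(g * h)(x) = integral of g(y) h(x - y) dy, quadrature in y."""
    return trapz(g(xs) * h(x - xs))

f_minus = lambda x: f(-x)
lhs = conv_at(f_minus, phi, 0.0)     # (f * delta_0)(phi) = (f^- * phi)(0)
rhs = trapz(f(xs) * phi(xs))         # T_f(phi)
print(lhs, rhs)                      # the two values agree
```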