
Partial Differential Equations/Distributions


In order to solve some of the more difficult partial differential equations (for example Poisson's equation, the heat equation and Helmholtz' equation), we will now take a look at distributions.

Distributions and tempered distributions

Definition 4.1:

Let O \subseteq \mathbb R^d be open, and let \mathcal T: \mathcal D(O) \to \mathbb R be a function. We call \mathcal T a distribution iff

  • \mathcal T is linear (\forall \varphi, \vartheta \in \mathcal D(O), b, c \in \mathbb R : \mathcal T(b \varphi + c \vartheta) = b \mathcal T(\varphi) + c \mathcal T(\vartheta))
  • \mathcal T is sequentially continuous (if \varphi_l \to \varphi in the notion of convergence of bump functions, then \mathcal T(\varphi_l) \to \mathcal T(\varphi) in the reals)

We denote the set of all distributions on \mathcal D(O) by \mathcal D(O)^*.

Definition 4.2:

Let \mathcal T: \mathcal S(\mathbb R^d) \to \mathbb R be a function. We call \mathcal T a tempered distribution iff

  • \mathcal T is linear (\forall \varphi, \vartheta \in \mathcal S(\mathbb R^d), b, c \in \mathbb R : \mathcal T(b \varphi + c \vartheta) = b \mathcal T(\varphi) + c \mathcal T(\vartheta))
  • \mathcal T is sequentially continuous (if \varphi_l \to \varphi in the notion of convergence of Schwartz functions, then \mathcal T(\varphi_l) \to \mathcal T(\varphi) in the reals)

We denote the set of all tempered distributions by \mathcal S(\mathbb R^d)^*.
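
As a small illustrative aside (not part of the formal text), a distribution can be thought of programmatically as a functional acting on test functions. The Dirac delta \delta(\varphi) := \varphi(0) is the standard example of a distribution (and tempered distribution) which is not given by integration against an ordinary function. The Python sketch below models this viewpoint; all names (bump, delta, T_f) and the quadrature-based functional are illustrative assumptions, not definitions from this chapter.

# Minimal sketch: distributions modelled as functionals acting on test functions.
# All names (bump, delta, T_f) are illustrative and not taken from the text.
import numpy as np
from scipy.integrate import quad

def bump(x, center=0.0, radius=1.0):
    """A smooth, compactly supported test function (bump function)."""
    t = (x - center) / radius
    return np.exp(-1.0 / (1.0 - t**2)) if abs(t) < 1.0 else 0.0

# The Dirac delta: delta(phi) = phi(0).  It is linear and sequentially continuous,
# hence a distribution (and a tempered distribution), but not given by any function.
delta = lambda phi: phi(0.0)

def T_f(f, a=-10.0, b=10.0):
    """Numerical stand-in for the functional phi -> \int f(x) phi(x) dx."""
    return lambda phi: quad(lambda x: f(x) * phi(x), a, b)[0]

if __name__ == "__main__":
    phi = lambda x: bump(x)
    theta = lambda x: bump(x, center=0.3, radius=0.5)
    # spot check of linearity: delta(2*phi + 3*theta) = 2*delta(phi) + 3*delta(theta)
    lhs = delta(lambda x: 2 * phi(x) + 3 * theta(x))
    rhs = 2 * delta(phi) + 3 * delta(theta)
    print(abs(lhs - rhs) < 1e-12)
    print(T_f(lambda x: np.exp(-x**2))(phi))   # a functional induced by a function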

Theorem 4.3:

Let \mathcal T be a tempered distribution. Then the restriction of \mathcal T to bump functions is a distribution.

Proof:

Let \mathcal T be a tempered distribution, and let O \subseteq \mathbb R^d be open.

1.

We show that \mathcal T(\varphi) has a well-defined value for \varphi \in \mathcal D(O).

Due to theorem 3.9, every bump function is a Schwartz function, which is why the expression

\mathcal T (\varphi)

makes sense for every \varphi \in \mathcal D(O).

2.

We show that the restriction is linear.

Let a, b \in \mathbb R and \varphi, \vartheta \in \mathcal D(O). Since due to theorem 3.9 \varphi and \vartheta are Schwartz functions as well, we have

\mathcal T (a \varphi + b \vartheta) = a \mathcal T (\varphi) + b \mathcal T (\vartheta)

due to the linearity of \mathcal T on the Schwartz functions. Thus the restriction of \mathcal T to bump functions is also linear.

3.

We show that the restriction of \mathcal T to \mathcal D(O) is sequentially continuous. Let \varphi_l \to \varphi in the notion of convergence of bump functions. Due to theorem 3.11, \varphi_l \to \varphi in the notion of convergence of Schwartz functions. Since \mathcal T as a tempered distribution is sequentially continuous, \mathcal T(\varphi_l) \to \mathcal T(\varphi).

////

Convolution and approximation of L^p functions

Definition 4.4:

Let p, q \in [1, \infty] such that \frac{1}{p} + \frac{1}{q} = 1 (where \frac{1}{\infty} = 0) and let f \in L^p(\mathbb R^d) and g \in L^q(\mathbb R^d). The convolution of f and g, f * g, is defined as:

f * g : \mathbb R^d \to \mathbb R, (f * g)(y) := \int_{\mathbb R^d} f(x) g(y - x) dx
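
As a purely numerical aside (not part of the definition), the convolution can be approximated on a uniform grid by a Riemann sum; the helper below and the chosen f, g are illustrative assumptions.

# Numerical sketch: approximate (f * g)(y) = \int f(x) g(y - x) dx by a Riemann sum.
import numpy as np

def convolve_grid(f, g, a=-10.0, b=10.0, n=4001):
    """Approximate f * g on the grid [a, b] with n points (illustrative only)."""
    x = np.linspace(a, b, n)
    dx = x[1] - x[0]
    fx, gx = f(x), g(x)
    # np.convolve computes the discrete sum  sum_k f[k] g[m - k]; multiplying by dx
    # turns it into a Riemann sum.  The grid is symmetric about 0, so "same" mode
    # lines up with the grid points x.
    return x, np.convolve(fx, gx, mode="same") * dx

if __name__ == "__main__":
    f = lambda x: np.exp(-x**2)                          # f in L^1 and L^2
    g = lambda x: np.where(np.abs(x) <= 1.0, 0.5, 0.0)   # g in L^infinity
    y, fg = convolve_grid(f, g)
    print(fg[len(y) // 2])   # (f * g)(0) = 0.5 * sqrt(pi) * erf(1), roughly 0.747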

The convolution is well-defined:

Theorem 4.5:

Let p, q \in [1, \infty] such that \frac{1}{p} + \frac{1}{q} = 1 and let f \in L^p(\mathbb R^d) and g \in L^q(\mathbb R^d). Then for all y \in \mathbb R^d, the integral

\int_{\mathbb R^d} f(x) g(y - x) dx

has a well-defined real value.

Proof:

The convolution is also commutative, i. e. f * g = g * f:

Theorem 4.6:

Let p, q \in [1, \infty] such that \frac{1}{p} + \frac{1}{q} = 1 (where \frac{1}{\infty} = 0) and let f \in L^p(\mathbb R^d) and g \in L^q(\mathbb R^d). Then

\forall y \in \mathbb R^d : (f * g)(y) = (g * f)(y)
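
Commutativity can be spot-checked numerically at a few points by comparing \int f(x) g(y - x) dx with \int g(x) f(y - x) dx; the quadrature helper and the example functions below are illustrative assumptions.

# Spot check of commutativity of the convolution at a few points y.
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x**2)            # f in L^1(R)
g = lambda x: 1.0 / (1.0 + x**2)       # g in L^infinity(R)

def conv_at(u, v, y):
    """Approximate (u * v)(y) = \int u(x) v(y - x) dx by quadrature."""
    return quad(lambda x: u(x) * v(y - x), -np.inf, np.inf)[0]

for y in (0.0, 0.5, -1.3):
    print(abs(conv_at(f, g, y) - conv_at(g, f, y)) < 1e-6)   # expected: True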

Theorem 4.7:

Let f \in L^p(\mathbb R^d) and let \eta_\epsilon be the standard mollifier. Then

  1. \eta_\epsilon * f \in \mathcal C^\infty(\mathbb R^d)
  2. \eta_\epsilon * f \to f, \epsilon \to 0 \text{ uniformly}
  3. \text{supp } \eta_\epsilon * f \subseteq \{x + y \in \mathbb R^d : x \in \text{supp } f, y \in \overline{B_\epsilon(0)}\}
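
As a purely numerical illustration of the approximation property (not a proof), one can mollify a continuous, compactly supported function and watch the error shrink as \epsilon \to 0. The concrete formula for \eta_\epsilon below is one common choice of standard mollifier and is an assumption of this sketch.

# Numerical sketch: eta_eps * f approximates f as eps -> 0 (illustrative only).
import numpy as np
from scipy.integrate import quad

# normalization constant so that the mollifier integrates to 1
_C = quad(lambda t: np.exp(-1.0 / (1.0 - t**2)), -1.0, 1.0)[0]

def eta(x, eps):
    """One common choice of standard mollifier, supported in [-eps, eps]."""
    y = abs(x / eps)
    return np.exp(-1.0 / (1.0 - y**2)) / (_C * eps) if y < 1.0 else 0.0

def mollify(f, eps, y):
    """(eta_eps * f)(y) = \int eta_eps(x) f(y - x) dx, approximated by quadrature."""
    return quad(lambda x: eta(x, eps) * f(y - x), -eps, eps)[0]

if __name__ == "__main__":
    f = lambda x: max(0.0, 1.0 - abs(x))       # continuous, supported in [-1, 1]
    for eps in (0.5, 0.1, 0.02):
        err = max(abs(mollify(f, eps, y) - f(y)) for y in np.linspace(-2.0, 2.0, 41))
        print(eps, err)                        # the maximal error shrinks with eps
    # support property: eta_eps * f vanishes outside supp f + closure of B_eps(0)
    print(abs(mollify(f, 0.1, 1.2)) < 1e-12)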

Proof:

Regular distributions

In this section, we briefly study a class of distributions which we call regular distributions. In particular, we will see that for certain kinds of functions there exist corresponding regular distributions. But let's first define what regular distributions are.

Definition 4.8:

Let O \subseteq \mathbb R^d be an open set and let \mathcal T \in \mathcal D(O)^*. If for all \varphi \in \mathcal D(O) \mathcal T(\varphi) can be written as

\mathcal T(\varphi) = \int_O f(x) \varphi(x) dx

for a function f: O \to \mathbb R which is independent of \varphi, then we call \mathcal T a regular distribution.

Definition 4.9:

Let \mathcal T \in \mathcal S(\mathbb R^d)^*. If for all \phi \in \mathcal S(\mathbb R^d) \mathcal T(\phi) can be written as

\mathcal T(\phi) = \int_{\mathbb R^d} f(x) \phi(x) dx

for a function f: \mathbb R^d \to \mathbb R which is independent of \phi, then we call \mathcal T a regular tempered distribution.

Two questions related to this definition could be asked: Given a function f: \mathbb R^d \to \mathbb R, is \mathcal T_f: \mathcal D(O) \to \mathbb R for O \subseteq \mathbb R^d open given by

\mathcal T_f(\varphi) := \int_O f(x) \varphi(x) dx

well-defined and a distribution? Or is \mathcal T_f: \mathcal S(\mathbb R^d) \to \mathbb R given by

\mathcal T_f(\phi) := \int_{\mathbb R^d} f(x) \phi(x) dx

well-defined and a tempered distribution? In general, the answer to both questions is no, but both questions can be answered with yes if the respective function f has the right properties, as the following two theorems show. Before we state the first theorem, we have to define what local integrability means, because in the case of bump functions, local integrability will be exactly the property which f needs in order to define a corresponding regular distribution:

Definition 4.10:

Let O \subseteq \mathbb R^d be open and let f: O \to \mathbb R be a function. We say that f is locally integrable iff for all compact subsets K of O

\int_K |f(x)| dx < \infty

We write f \in L^1_\text{loc}(O).

Now we are ready to give some sufficient conditions on f to define a corresponding regular distribution or regular tempered distribution by the way of

\mathcal T_f : \mathcal D(O) \to \mathbb R, \mathcal T_f(\varphi) := \int_O f(x) \varphi(x) dx

or

\mathcal T_f : \mathcal S(\mathbb R^d) \to \mathbb R, \mathcal T_f(\phi) := \int_{\mathbb R^d} f(x) \phi(x) dx
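
As an illustrative sketch of the first question (with arbitrarily chosen f and \varphi, and a quadrature stand-in for the integral), note that f only has to be integrable on the compact support of \varphi, not on all of O; the function f(x) = |x|^{-1/2} used below is locally integrable on \mathbb R although it is unbounded near 0 and not integrable over all of \mathbb R.

# Sketch: evaluating a regular distribution T_f on a bump function by quadrature.
# The concrete f and phi below are arbitrary illustrative choices.
import numpy as np
from scipy.integrate import quad

def bump(x, center=0.0, radius=1.0):
    """Smooth test function supported in [center - radius, center + radius]."""
    t = (x - center) / radius
    return np.exp(-1.0 / (1.0 - t**2)) if abs(t) < 1.0 else 0.0

def T(f, phi, support, **quad_kwargs):
    """Approximate T_f(phi) = \int f(x) phi(x) dx over the (compact) support of phi."""
    a, b = support
    return quad(lambda x: f(x) * phi(x), a, b, **quad_kwargs)[0]

if __name__ == "__main__":
    # f(x) = |x|^(-1/2) is locally integrable (integrable on every compact set),
    # although it is unbounded near 0 and not integrable over all of R.
    f = lambda x: 1.0 / np.sqrt(abs(x)) if x != 0 else 0.0
    phi = lambda x: bump(x, radius=2.0)
    print(T(f, phi, (-2.0, 2.0), points=[0.0]))   # finite value of T_f(phi)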

Theorem 4.11:

Let U \subseteq \mathbb R^d be open, and let f: U \to \mathbb R be a function. Then

\mathcal T_f : \mathcal D(U) \to \mathbb R, \mathcal T_f(\varphi) := \int_{U} f(x) \varphi(x) dx

is a regular distribution iff f \in L^1_\text{loc}(U).

Proof:

1.

We show that if f \in L^1_\text{loc}(U), then \mathcal T_f : \mathcal D(U) \to \mathbb R is a distribution.

Well-definedness follows from the triangle inequality for integrals and the monotonicity of the integral:

\begin{align}
\left| \int_U \varphi(x) f(x) dx \right| \le \int_U |\varphi(x) f(x)| dx = \int_{\text{supp } \varphi} |\varphi(x) f(x)| dx\\
\le \int_{\text{supp } \varphi} \|\varphi\|_\infty |f(x)| dx = \|\varphi\|_\infty \int_{\text{supp } \varphi} |f(x)| dx < \infty
\end{align}

Since \int_U |\varphi(x) f(x)| dx is finite, \varphi f is integrable over U, and hence the first integral has a well-defined real value. Therefore, \mathcal T_f really maps to \mathbb R and well-definedness is proven.

Continuity follows similarly due to

|\mathcal T_f(\varphi_l) - \mathcal T_f(\varphi)| = \left| \int_K (\varphi_l - \varphi)(x) f(x) dx \right| \le \|\varphi_l - \varphi\|_\infty \underbrace{\int_K |f(x)| dx}_{\text{independent of } l} \to 0, l \to \infty

, where K is a compact set which contains the supports of all \varphi_l, l \in \mathbb N, and of \varphi (remember: the existence of a compact set containing the supports of all \varphi_l, l \in \mathbb N, is part of the definition of convergence in \mathcal D(O), see the last chapter; as in the proof of theorem 3.11, we conclude that the support of \varphi is contained in K as well).

Linearity follows due to the linearity of the integral.

2.

We show that if \mathcal T_f is a distribution, then f \in L^1_\text{loc}(U). In fact, we even show that if \mathcal T_f(\varphi) has a well-defined real value for every \varphi \in \mathcal D(U), then f \in L^1_\text{loc}(U). Together with part 1 of this proof, this shows that if \mathcal T_f(\varphi) is a well-defined real number for every \varphi \in \mathcal D(U), then \mathcal T_f is a distribution in \mathcal D(U)^*.

Let K \subset U be an arbitrary compact set. We define

\mu: K \to \mathbb R, \mu(\xi) := \inf_{x \in \mathbb R^d \setminus U} \|\xi - x\|

\mu is continuous, and even Lipschitz continuous with Lipschitz constant 1: Let \xi, \iota \in \mathbb R^d. By applying the triangle inequality twice, we obtain both

\forall x, y \in \mathbb R^d : \|\xi - x\| \le \|\xi - \iota\| + \|\iota - y\| + \|y - x\| ~~~~~(*)

and

\forall x, y \in \mathbb R^d : \|\iota - y\| \le \|\iota - \xi\| + \|\xi - x\| + \|x - y\| ~~~~~(**)

We choose sequences (x_l)_{l \in \mathbb N} and (y_m)_{m \in \mathbb N} in \mathbb R^d \setminus U such that \lim_{l \to \infty} \|\xi - x_l\| = \mu(\xi) and \lim_{m \to \infty} \|\iota - y_m\| = \mu(\iota) and consider two cases. First, we consider what happens if \mu(\xi) \ge \mu(\iota). Then we have

\begin{align}
|\mu(\xi) - \mu(\iota)| & = \mu(\xi) - \mu(\iota) & \\
& = \inf_{x \in \mathbb R^d \setminus U} \|\xi - x\| - \inf_{y \in \mathbb R^d \setminus U} \|\iota - y\| & \\
& = \inf_{x \in \mathbb R^d \setminus U} \|\xi - x\| - \lim_{m \to \infty} \|\iota - y_m\| & \\
& = \lim_{m \to \infty} \inf_{x \in \mathbb R^d \setminus U} \left( \|\xi - x\| - \|\iota - y_m\| \right) & \\
& \le \lim_{m \to \infty} \inf_{x \in \mathbb R^d \setminus U} \left( \|\xi - \iota\| + \|x - y_m\| \right) & (*) \text{ with } y = y_m \\
& = \|\xi - \iota\| &
\end{align}

Second, we consider what happens if \mu(\xi) \le \mu(\iota):

\begin{align}
|\mu(\xi) - \mu(\iota)| & = \mu(\iota) - \mu(\xi) & \\
& = \inf_{y \in \mathbb R^d \setminus U} \|\iota - y\| - \inf_{x \in \mathbb R^d \setminus U} \|\xi - x\| & \\
& = \inf_{y \in \mathbb R^d \setminus U} \|\iota - y\| - \lim_{l \to \infty} \|\xi - x_l\| & \\
& = \lim_{l \to \infty} \inf_{y \in \mathbb R^d \setminus U} \left( \|\iota - y\| - \|\xi - x_l\| \right) & \\
& \le \lim_{l \to \infty} \inf_{y \in \mathbb R^d \setminus U} \left( \|\xi - \iota\| + \|y - x_l\| \right) & (**) \text{ with } x = x_l \\
& = \|\xi - \iota\| &
\end{align}

Since always either \mu(\xi) \ge \mu(\iota) or \mu(\xi) \le \mu(\iota), we have proven Lipschitz continuity and thus continuity. By the extreme value theorem, \mu therefore attains a minimum at some \xi_0 \in K. \mu(\xi_0) = 0 would mean that \|\xi_0 - x_l\| \to 0, l \to \infty for a sequence (x_l)_{l \in \mathbb N} in \mathbb R^d \setminus U, which is a contradiction since \mathbb R^d \setminus U is closed and \xi_0 \in K \subset U. Hence \mu(\xi_0) > 0.
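
A quick numerical spot check of the 1-Lipschitz estimate |\mu(\xi) - \mu(\iota)| \le \|\xi - \iota\| derived above; the open set U (the unit ball in \mathbb R^2), the sampling of its complement and the random test points are arbitrary choices made for this illustration.

# Spot check: the distance to the complement of U is 1-Lipschitz (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def mu(xi, complement_points):
    """Discrete stand-in for the infimum of ||xi - x|| over x in the complement of U."""
    return np.min(np.linalg.norm(complement_points - xi, axis=1))

# U = open unit ball in R^2; approximate its complement by sample points.
samples = rng.uniform(-3.0, 3.0, size=(20000, 2))
complement = samples[np.linalg.norm(samples, axis=1) >= 1.0]

ok = True
for _ in range(200):
    xi = rng.uniform(-0.9, 0.9, size=2)
    iota = rng.uniform(-0.9, 0.9, size=2)
    if abs(mu(xi, complement) - mu(iota, complement)) > np.linalg.norm(xi - iota) + 1e-9:
        ok = False
print(ok)   # expected: True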

Theorem 4.12:

Let f \in L^2(\mathbb R^d), i. e.

\int_{\mathbb R^d} |f(x)|^2 dx < \infty

Then

\mathcal T_f : \mathcal S(\mathbb R^d) \to \mathbb R, \mathcal T_f(\phi) := \int_{\mathbb R^d} f(x) \phi(x) dx

is a regular tempered distribution.

Proof:

Well-definedness follows from Hölder's inequality:

\int_{\R^d} |\phi(x)| |f(x)| dx \le \|\phi\|_{L^2} \|f\|_{L^2} < \infty

Due to the triangle inequality for integrals and Hölder's inequality, we have

|\mathcal T_f(\phi_i) - \mathcal T_f(\phi)| \le \int_{\R^d} |(\phi_i - \phi)(x)| |f(x)| dx \le \|\phi_i - \phi\|_{L^2} \|f\|_{L^2}

Furthermore

\begin{align}
\|\phi_i - \phi\|_{L^2}^2 & \le \|\phi_i - \phi\|_{L^\infty} \int_{\R^d} |(\phi_i - \phi)(x)| dx \\
& = \|\phi_i - \phi\|_{L^\infty} \int_{\R^d} \prod_{j=1}^d (1 + x_j^2) |(\phi_i - \phi)(x)| \frac{1}{\prod_{j=1}^d (1 + x_j^2)} dx \\
& \le \|\phi_i - \phi\|_{L^\infty} \|\prod_{j=1}^d (1 + x_j^2) (\phi_i - \phi)\|_{L^\infty} \underbrace{\int_{\R^d} \frac{1}{\prod_{j=1}^d (1 + x_j^2)} dx}_{= \pi^d}
\end{align}

If \phi_i \to \phi in the notion of convergence of the Schwartz function space, then both \|\phi_i - \phi\|_{L^\infty} and \|\prod_{j=1}^d (1 + x_j^2) (\phi_i - \phi)\|_{L^\infty} tend to zero, and therefore the whole expression goes to zero. Therefore, continuity is verified.

Linearity follows from the linearity of the integral.
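
As a small numerical aside (illustrative only), one can check the constant appearing in the estimate above, namely \int_{\mathbb R} \frac{1}{1+t^2} dt = \pi (so that the d-fold product integral equals \pi^d), as well as Hölder's inequality for a concrete pair f, \phi; the chosen functions are arbitrary.

# Numerical aside: the constant in the estimate and a Hölder-type check.
import numpy as np
from scipy.integrate import quad

# the integral of 1/(1+t^2) over R equals pi, so the d-fold product integral is pi^d
val = quad(lambda t: 1.0 / (1.0 + t**2), -np.inf, np.inf)[0]
print(abs(val - np.pi) < 1e-8)

# Hölder's inequality |T_f(phi)| <= ||phi||_{L^2} ||f||_{L^2} for sample f, phi
f = lambda x: np.exp(-abs(x))                 # f in L^2(R)
phi = lambda x: np.exp(-x**2) * np.cos(x)     # a Schwartz function
lhs = abs(quad(lambda x: f(x) * phi(x), -np.inf, np.inf)[0])
l2 = lambda g: np.sqrt(quad(lambda x: g(x)**2, -np.inf, np.inf)[0])
print(lhs <= l2(phi) * l2(f) + 1e-10)         # expected: True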

Operations on Distributions

For \varphi, \vartheta \in \mathcal D(\mathbb R^d) there are operations such as the differentiation of \varphi, the convolution of \varphi and \vartheta and the multiplication of \varphi and \vartheta. In the following section, we want to define these three operations (differentiation, convolution with \vartheta and multiplication with \vartheta) for a distribution \mathcal T instead of \varphi.

Lemma 4.13:

Let O, U \subseteq \mathbb R^d be open sets and let L : \mathcal D(O) \to L^1_\text{loc}(U) be a linear function. If there is a linear and continuous function \mathcal L : \mathcal D(U) \to \mathcal D(O) such that

\forall \varphi \in \mathcal D(O), \vartheta \in \mathcal D(U) : \int_O \varphi(x) \mathcal L(\vartheta)(x) dx = \int_U L(\varphi)(x) \vartheta(x) dx

, then for every distribution \mathcal T \in \mathcal D(O)^*, the function \vartheta \mapsto \mathcal T(\mathcal L(\vartheta)) is a distribution in \mathcal D(U)^*. Therefore, we may define a function

\Lambda : \mathcal D(O)^* \to \mathcal D(U)^*, \Lambda(\mathcal T) := \mathcal T \circ \mathcal L

This function has the property

\forall \varphi \in \mathcal D(O) : \Lambda(\mathcal T_\varphi) = \mathcal T_{L \varphi}

Noticing that differentiation, convolution and multiplication are linear, we will define these operations for distributions by taking L to be the respective operation.

Proof:

We have to prove two claims: first, that the function \vartheta \mapsto \mathcal T(\mathcal L(\vartheta)) is a distribution, and second, that \Lambda as defined above has the property

\forall \varphi \in \mathcal D(O) : \Lambda(\mathcal T_\varphi) = \mathcal T_{L \varphi}

1.

We show that the function \vartheta \mapsto \mathcal T(\mathcal L(\vartheta)) is a distribution.

\mathcal T(\mathcal L(\vartheta)) has a well-defined value in \mathbb R as \mathcal L maps to \mathcal D(O), which is exactly the domain of \mathcal T. The function \vartheta \mapsto \mathcal T(\mathcal L(\vartheta)) is sequentially continuous since it is the composition of two sequentially continuous functions, and it is linear for the same reason (see exercise 2).

2.

We show that \Lambda has the property

\forall \varphi \in \mathcal D(O) : \Lambda(\mathcal T_\varphi) = \mathcal T_{L \varphi}

For every \vartheta \in \mathcal D(U), we have

\Lambda(\mathcal T_\varphi)(\vartheta) := (\mathcal T_\varphi \circ \mathcal L)(\vartheta) := \int_O \varphi(x) \mathcal L(\vartheta)(x) dx \overset{\text{by assumption}}{=} \int_U L(\varphi)(x) \vartheta(x) dx =: \mathcal T_{L \varphi}(\vartheta)

Since equality of two functions is equivalent to equality of these two functions evaluated at every point, this shows the desired property.

////
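
A tiny programmatic sketch of the construction in lemma 4.13 (all helper names are hypothetical): distributions are modelled as functionals on test functions, and \Lambda pulls such a functional back along \mathcal L. In the example we take O = U = \mathbb R, L = \frac{d}{dx} and \mathcal L(\vartheta) = -\vartheta', for which the integration-by-parts identity required by the lemma holds; rapidly decaying functions serve as numerical stand-ins for test functions, and \Lambda(\mathcal T_\varphi) = \mathcal T_{L\varphi} is spot-checked by quadrature.

# Sketch of lemma 4.13: Lambda(T) := T o Lcal defines a new functional.
import numpy as np
from scipy.integrate import quad

h = 1e-5
def ddx(phi):
    """Central-difference stand-in for the derivative L = d/dx."""
    return lambda x: (phi(x + h) - phi(x - h)) / (2 * h)

L = ddx
Lcal = lambda theta: (lambda x: -ddx(theta)(x))   # Lcal(theta) = -theta'

def Lambda(T):
    """The pullback of the functional T along Lcal, i.e. Lambda(T) = T o Lcal."""
    return lambda theta: T(Lcal(theta))

def T_reg(f, a=-10.0, b=10.0):
    """Numerical stand-in for the regular functional T_f(phi) = \int f phi dx."""
    return lambda phi: quad(lambda x: f(x) * phi(x), a, b)[0]

if __name__ == "__main__":
    varphi = lambda x: np.exp(-x**2)            # rapidly decaying "density"
    theta = lambda x: np.exp(-(x - 0.5)**2)     # stand-in for a test function
    lhs = Lambda(T_reg(varphi))(theta)          # = T_varphi(Lcal(theta))
    rhs = T_reg(L(varphi))(theta)               # = T_{L varphi}(theta)
    print(abs(lhs - rhs) < 1e-4)                # integration by parts: both agree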

Using lemma 4.13, we now define operations such as multiplication with a smooth function, differentiation and convolution with a bump function for distributions.

Differentiation

Definition 4.14:

For bump functions and for Schwartz functions, we may also define the differentiation of distributions. Let k \in \N and let L = \sum_{|\alpha| \le k} a_\alpha (x) \frac{\partial^\alpha}{\partial x^\alpha} be a linear differential operator with smooth coefficient functions a_\alpha. Let's now define

L^*(\phi) := \sum_{|\alpha| \le k} (-1)^{|\alpha|}\frac{\partial^\alpha}{\partial x^\alpha} (a_\alpha (x) \phi (x)).

Then, for both \mathcal D(\Omega) and \mathcal S(\R^d), the requirements of lemma 4.13 are met, and we may define the differentiation of a distribution \mathcal T in the following way:

(L \mathcal T)(\varphi) := \mathcal T(L^* \varphi)

This definition also satisfies L \mathcal T_f = \mathcal T_{Lf}.


Proof: By integration by parts, we obtain:

\int_\Omega \phi(x) a(x) \frac{\partial}{\partial x_i} \psi(x) dx = -\int_\Omega \frac{\partial}{\partial x_i} (\phi(x) a(x)) \psi(x) dx + \int_{\partial \Omega} a(x) \phi(x) \psi(x) \nu_i(x) dx

, where a denotes one of the coefficient functions a_\alpha, \nu_i is the i-th component of the outward normal vector and \partial \Omega is the boundary of \Omega. For bump functions, the boundary integral \int_{\partial \Omega} a(x) \phi(x) \psi(x) \nu_i(x) dx vanishes, because the functions in \mathcal D (\Omega) are zero there. For Schwartz functions, we may use the identity

\int_{\R^d} \phi(x) a(x) \frac{\partial}{\partial x_i} \psi(x) dx = \lim_{r \to \infty} \int_{B_r(0)} \phi(x) a(x) \frac{\partial}{\partial x_i} \psi(x) dx

and the rapid decay of the Schwartz functions to see that the boundary integral goes to zero, and therefore

\int_{\R^d} \phi(x) a(x) \frac{\partial}{\partial x_i} \psi(x) dx = -\int_{\R^d} \frac{\partial}{\partial x_i} (\phi(x) a(x)) \psi(x) dx

To derive the equation

\int\limits_{\Omega} \varphi(x) (L^*\psi)(x) dx = \int\limits_{\Omega} (L \varphi)(x) \psi(x) dx

, we may apply the formula from above several times. This finishes the proof, because this equation is the only non-trivial property of L^* that we need in order to apply lemma 4.13.
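
A classical instance of this definition is the distributional derivative of the Heaviside function H (H(x) = 1 for x \ge 0 and 0 otherwise): with L = \frac{d}{dx} one gets (L \mathcal T_H)(\varphi) = \mathcal T_H(-\varphi') = -\int_0^\infty \varphi'(x) dx = \varphi(0), i.e. the distributional derivative of H is the Dirac delta. The Python sketch below spot-checks this numerically; the bump function and the finite-difference derivative are illustrative assumptions.

# Numerical spot check: the distributional derivative of the Heaviside function is delta.
import numpy as np
from scipy.integrate import quad

def bump(x, center=0.0, radius=1.0):
    """A smooth, compactly supported test function (illustrative)."""
    t = (x - center) / radius
    return np.exp(-1.0 / (1.0 - t**2)) if abs(t) < 1.0 else 0.0

h = 1e-5
def dphi(phi):
    """Central-difference stand-in for the derivative of a test function."""
    return lambda x: (phi(x + h) - phi(x - h)) / (2 * h)

H = lambda x: 1.0 if x >= 0 else 0.0   # Heaviside function

def dT_H(phi, a=-5.0, b=5.0):
    """(L T_H)(phi) = T_H(L* phi) = -\int H(x) phi'(x) dx, approximated by quadrature."""
    return -quad(lambda x: H(x) * dphi(phi)(x), a, b, points=[0.0])[0]

if __name__ == "__main__":
    phi = lambda x: bump(x, center=0.2, radius=1.5)
    print(dT_H(phi))   # approximately equal to ...
    print(phi(0.0))    # ... phi(0), as predicted by (T_H)' = delta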

Convolution

Let \vartheta \in \mathcal D(B_r(0)), and let \Omega_1 \supseteq \Omega_2 + B_r(0). Let's define

L: \mathcal D(\Omega_1) \to C^\infty(\Omega_2), (L \varphi)(y) = (\varphi * \vartheta)(y) := \int_{\Omega_1} \varphi(x) \vartheta(y - x) dx.

This function L is linear, because the integral is linear. L \varphi is called the convolution of \vartheta and \varphi.

We can also define: \tilde \vartheta(x) = \vartheta(-x), and:

L^* \varphi := \tilde \vartheta * \varphi

By the theorem of Fubini, we can calculate as follows:

\int_{\Omega_2} (L \varphi)(x) \psi(x) dx = \int_{\Omega_2} \int_{\Omega_1} \vartheta(x - y) \varphi(y) \psi(x) dy dx
= \int_{\Omega_1} \int_{\Omega_2} \vartheta(x - y) \varphi(y) \psi(x) dx dy = \int_{\Omega_1} \varphi(y) (L^*\psi)(y) dy

Therefore, the first assumption of lemma 4.13 holds.

Due to the Leibniz integral rule, we obtain that for f \in L^1 (i. e. f is integrable) and g \in C^k (\R^d) (i. e. the partial derivatives of g exist up to order k and are also continuous):

\frac{\partial^\alpha}{\partial x^\alpha} (f * g) = f * \left( \frac{\partial^\alpha}{\partial x^\alpha} g \right), |\alpha| \le k

With this formula, we can see (due to the triangle inequality for integrals and the monotonicity of the integral) that

\sup_{x \in \R^d} \left|\frac{\partial^\alpha}{\partial x^\alpha} (f * g)(x)\right| = \sup_{x \in \R^d} \left| \int_{\R^d} f(y) \frac{\partial^\alpha}{\partial x^\alpha} g(x-y)dy \right| \le \overbrace{\int_{\R^d} |f(y)| dy}^{\text{constant}} \cdot \sup_{x \in \R^d} \left| \frac{\partial^\alpha}{\partial x^\alpha} g(x) \right|

Sequential continuity for Schwartz and bump functions follows by setting f = \vartheta and g = \phi_i - \phi. Thus, with the help of lemma 4.13, we can define the convolution of a bump function with a distribution in \mathcal D(\Omega)^* or \mathcal S(\R^d)^* as follows:

(\vartheta * T)(\varphi) := T(\tilde \vartheta * \varphi)
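
As a standard consistency check of this definition: for the Dirac delta \delta(\varphi) = \varphi(0) we get (\vartheta * \delta)(\varphi) = \delta(\tilde \vartheta * \varphi) = (\tilde \vartheta * \varphi)(0) = \int \vartheta(x) \varphi(x) dx = \mathcal T_\vartheta(\varphi), i.e. convolving \delta with \vartheta gives the regular distribution induced by \vartheta. The Python sketch below checks this numerically; the helpers and chosen bump functions are illustrative assumptions.

# Spot check: convolving the Dirac delta with a test function theta gives T_theta.
import numpy as np
from scipy.integrate import quad

def bump(x, center=0.0, radius=1.0):
    """A smooth, compactly supported test function (illustrative)."""
    t = (x - center) / radius
    return np.exp(-1.0 / (1.0 - t**2)) if abs(t) < 1.0 else 0.0

delta = lambda phi: phi(0.0)                      # the Dirac delta

def conv(u, v, a=-5.0, b=5.0):
    """(u * v)(y) approximated by quadrature over [a, b]."""
    return lambda y: quad(lambda x: u(x) * v(y - x), a, b)[0]

def conv_with_delta(theta):
    """(theta * delta)(phi) := delta(theta~ * phi) with theta~(x) = theta(-x)."""
    theta_tilde = lambda x: theta(-x)
    return lambda phi: delta(conv(theta_tilde, phi))

if __name__ == "__main__":
    theta = lambda x: bump(x, radius=0.5)
    phi = lambda x: bump(x, center=0.3, radius=1.0)
    lhs = conv_with_delta(theta)(phi)
    rhs = quad(lambda x: theta(x) * phi(x), -5.0, 5.0)[0]   # T_theta(phi)
    print(abs(lhs - rhs) < 1e-8)                            # expected: True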

Exercises

  1. Show that \mathbb R^d endowed with the usual topology is a topological vector space.
