
Partial Differential Equations/Poisson's equation


This chapter deals with Poisson's equation

\forall x \in \mathbb R^d : -\Delta u(x) = f(x)

Provided that f \in \mathcal C^2(\mathbb R^d), we will prove a solution formula using distribution theory, and for domains whose boundaries satisfy a certain property we will even show a solution formula for the boundary value problem. We will also study solutions of the homogeneous Poisson's equation

\forall x \in \mathbb R^d : -\Delta u(x) = 0

The solutions to the homogeneous Poisson's equation (also known as Laplace's equation) are called harmonic functions.
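
For example, the function u(x_1, x_2) = x_1^2 - x_2^2 is harmonic on \mathbb R^2, since

-\Delta u(x) = -\left( \partial_{x_1}^2 u(x) + \partial_{x_2}^2 u(x) \right) = -(2 - 2) = 0

More trivially, all affine functions x \mapsto \langle a, x \rangle + b are harmonic in every dimension.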

Important theorems from multi-dimensional integration

In section 2, we saw Leibniz' integral rule, and in section 4, Fubini's theorem. In this section, we recall the other theorems from multi-dimensional integration which we need in order to carry on with applying the theory of distributions to partial differential equations. Proofs will not be given, since understanding them is not very important for the understanding of this wikibook. The only exception is theorem 6.3, which follows from theorem 6.2; the proof of theorem 6.3 is left as an exercise.

Theorem 6.1: (Dominated convergence theorem)

Let (f_l)_{l \in \mathbb N} be a sequence of integrable functions f_l : \mathbb R^d \to \mathbb R such that

  • \lim_{l \to \infty} f_l(x) = f(x) \text{ almost everywhere}
  • \forall l \in \mathbb N : |f_l(x)| \le |g(x)| \text{ almost everywhere}

for a g \in L^1(\mathbb R^d) which is independent of l. Then

\lim_{l \to \infty} \int_{\mathbb R^d} f_l(x) dx = \int_{\mathbb R^d} f(x) dx
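
The dominating function g is essential here. Consider for instance d = 1 and f_l := 1_{[l, l+1]}: then f_l(x) \to 0 for every x, but

\lim_{l \to \infty} \int_{\mathbb R} f_l(x) dx = 1 \neq 0 = \int_{\mathbb R} \lim_{l \to \infty} f_l(x) dx

, and indeed no single integrable g dominates all the f_l simultaneously.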

Theorem 6.2: (Divergence theorem)

Let K \subset \mathbb R^d be a compact set with smooth boundary. If \mathbf V : K \to \mathbb R^d is a continuously differentiable vector field, then

\int_K \nabla \cdot \mathbf V (x) dx = \int_{\partial K} \nu(x) \cdot \mathbf V(x) dx

, where \nu: \partial K \to \mathbb R^d is the outward normal vector.

Theorem 6.3: (Multi-dimensional integration by parts)

Let K \subset \mathbb R^d be a compact set with smooth boundary. If f: K \to \mathbb R is a continuously differentiable function and \mathbf W : K \to \mathbb R^d is a continuously differentiable vector field, then

\int_K f(x) \nabla \cdot \mathbf W (x) dx = \int_{\partial K} f(x) \nu(x) \cdot \mathbf W (x) dx - \int_K \mathbf W (x) \cdot \nabla f(x) dx

, where \nu: \partial K \to \mathbb R^d is the outward normal vector.

Proof: See exercise 1.
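
For d = 1 and K = [a, b], we have \partial K = \{a, b\} with \nu(a) = -1 and \nu(b) = 1, and theorem 6.3 reduces to the familiar one-dimensional integration by parts formula

\int_a^b f(x) \mathbf W'(x) dx = f(b)\mathbf W(b) - f(a)\mathbf W(a) - \int_a^b \mathbf W(x) f'(x) dx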

Theorem 6.4: (Multi-dimensional integration by substitution)

If O, U \subseteq \mathbb R^d are open, and \psi : U \to O is a diffeomorphism, then

\int_O f(x) dx = \int_U f(\psi(x)) | \det J_\psi (x) | dx
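
For example, the two-dimensional polar coordinates \psi(r, \varphi) := (r \cos(\varphi), r \sin(\varphi)) map U = (0, \infty) \times (0, 2\pi) diffeomorphically onto \mathbb R^2 with a half-line removed (a null set, which does not affect the integrals), and |\det J_\psi(r, \varphi)| = r, so that

\int_{\mathbb R^2} f(x) dx = \int_0^\infty \int_0^{2\pi} f(r \cos(\varphi), r \sin(\varphi)) \, r \, d\varphi \, dr

We will use this in the proof of lemma 6.10 below.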

The volume and surface area of d-dimensional spheres

Definition 6.5:

The Gamma function \Gamma : \mathbb R_{>0} \to \mathbb R is defined by

\Gamma(x) := \int_0^\infty s^{x-1} e^{-s} ds

The Gamma function satisfies the following equation:

Theorem 6.6:

\forall x \in \mathbb R_{>0} : \Gamma(x+1) = x \Gamma(x)

Proof:

\Gamma(x+1) = \int_0^\infty s^x e^{-s} ds \overset{\text{integration by parts}}{=} \underbrace{- s^x e^{-s} \big|^{s=\infty}_{s=0}}_{=0} - \int_0^\infty - x s^{x-1} e^{-s} ds = x \Gamma(x)
////

If the Gamma function is shifted by 1, it is an interpolation of the factorial (see exercise 2):

[Figure: plot of the generalized factorial function x \mapsto \Gamma(x+1).]

As the plot shows, the function also takes values at negative numbers. This is because what is plotted is a natural continuation of the Gamma function, which one can construct using complex analysis.

Definition and theorem 6.7:

The d-dimensional spherical coordinates, given by \Psi : (0, \infty) \times (0, 2\pi) \times (-\pi/2, \pi/2)^{d-2} \to \mathbb R^d \setminus \{(x_1, \ldots, x_d) \in \mathbb R^d : x_1 \ge 0 \wedge x_2 = 0\}

\Psi(r, \Phi, \Theta_1, \ldots, \Theta_{d-2}) = \begin{pmatrix}
r \cos(\Phi) \cos(\Theta_1) \cdots \cos(\Theta_{d-2}) \\
r \sin(\Phi) \cos(\Theta_1) \cdots \cos(\Theta_{d-2}) \\
r \sin(\Theta_1) \cos(\Theta_2) \cdots \cos(\Theta_{d-2}) \\
\vdots \\
r \sin(\Theta_{d-3}) \cos(\Theta_{d-2}) \\
r \sin(\Theta_{d-2}) \\
\end{pmatrix}

define a diffeomorphism. The determinant of the Jacobian matrix of \Psi, \det J_\Psi, is given by

\det J_\Psi(r, \Phi, \Theta_1, \ldots, \Theta_{d-2}) = r^{d-1} \cos(\Theta_1) \cos(\Theta_2)^2 \cdots \cos(\Theta_{d-2})^{d-2}
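
For d = 2, \Psi(r, \Phi) = (r \cos(\Phi), r \sin(\Phi)) are the usual polar coordinates with \det J_\Psi = r, and for d = 3 we recover the usual spherical coordinates

\Psi(r, \Phi, \Theta_1) = (r \cos(\Phi) \cos(\Theta_1), r \sin(\Phi) \cos(\Theta_1), r \sin(\Theta_1)), \quad \det J_\Psi(r, \Phi, \Theta_1) = r^2 \cos(\Theta_1)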

Proof:

Theorem 6.8:

The volume of the d-dimensional ball B_R(0) with radius R \in \mathbb R_{>0} is given by

V_d(R) := \frac{\pi^{d/2}}{\Gamma(d/2 + 1)} R^d
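
For example, for d = 2 we obtain V_2(R) = \frac{\pi}{\Gamma(2)} R^2 = \pi R^2, and for d = 3, using theorem 6.6 and \Gamma(1/2) = \sqrt \pi (see the remark after lemma 6.10 below), we have \Gamma(5/2) = \frac{3}{2} \cdot \frac{1}{2} \sqrt \pi = \frac{3}{4}\sqrt{\pi} and hence the familiar

V_3(R) = \frac{\pi^{3/2}}{\frac{3}{4}\sqrt{\pi}} R^3 = \frac{4}{3}\pi R^3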

Proof:

Theorem 6.9:

The area of the surface of the d-dimensional ball with radius R \in \mathbb R_{>0} (i.e. the area of \partial B_R(0)) is given by

A_d(R) := \frac{d\pi^{d/2}}{\Gamma(d/2 + 1)} R^{d-1}

The surface area and the volume of the d-dimensional ball with radius R \in \mathbb R_{>0} are related to each other (see exercise 3).
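
For example, A_2(R) = \frac{2\pi}{\Gamma(2)} R = 2\pi R is the circumference of the circle of radius R, and A_3(R) = \frac{3\pi^{3/2}}{\frac{3}{4}\sqrt \pi} R^2 = 4\pi R^2 is the surface area of the sphere of radius R.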

Proof:

Green's kernel

Lemma 6.10:

\int_\R e^{-x^2} dx = \sqrt{\pi}

Proof:

\left( \int_\R e^{-x^2} dx \right)^2 = \left( \int_\R e^{-x^2} dx \right) \cdot \left( \int_\R e^{-y^2} dy \right) = \int_\R \int_\R e^{-(x^2 + y^2)} dx\, dy = \int_{\R^2} e^{-\|z\|^2} dz

Now we transform variables using two-dimensional polar coordinates, and then apply the one-dimensional substitution r \mapsto \sqrt{r}:

 = \int_0^\infty \int_0^{2\pi} r e^{-r^2} d \varphi dr = 2 \pi \int_0^\infty r e^{-r^2} dr = 2 \pi \int_0^\infty \frac{1}{2\sqrt{r}}\sqrt{r} e^{-r} dr = \pi

Taking the square root on both sides of the equation finishes the proof.
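
In particular, lemma 6.10 allows us to evaluate the Gamma function at 1/2: substituting s = t^2, ds = 2t \, dt, we obtain

\Gamma(1/2) = \int_0^\infty s^{-1/2} e^{-s} ds = \int_0^\infty \frac{1}{t} e^{-t^2} \, 2t \, dt = 2 \int_0^\infty e^{-t^2} dt = \int_\R e^{-t^2} dt = \sqrt \pi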

Theorem 6.11:

The function P : \mathbb R^d \to \mathbb R, given by

P(x) :=
\begin{cases}
-\frac{1}{2} |x| & d=1 \\
-\frac{1}{2\pi}\ln \|x\| & d=2 \\
\frac{1}{(d - 2) A_d(1) \|x\|^{d-2}} & d \ge 3
\end{cases}

is a Green's kernel for Poisson's equation.
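
For d = 3 we have A_3(1) = 4\pi, so that P is the well-known Newtonian potential

P(x) = \frac{1}{4\pi \|x\|}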

Proof:

First, we show that P is locally integrable; throughout this proof, we abbreviate \tilde G := P, anticipating the notation of the sections below. Let's choose an arbitrary compact \Omega \subset \R^d and an R > 0 such that \Omega \subseteq B_R(0). For d=1, we can see:

\int_\Omega \left| -\frac{1}{2} |x| \right| dx \le \frac{1}{2} \int_{-R}^R |x| dx = \int_0^R r dr = \frac{1}{2} R^2 < \infty

By transformation with polar coordinates, we obtain for d \ge 2:

d=2: ~ \int_\Omega \left| -\frac{1}{2\pi}\ln \|x\| \right| dx \le \frac{1}{2\pi} \int_{B_R(0)} |\ln \|x\|| dx = \int_0^R r |\ln r| dr < \infty, \text{ since } r \mapsto r|\ln r| \text{ is bounded on } (0, R]
d \ge 3: ~ \int_\Omega \frac{1}{(d-2)c} \frac{1}{\|x\|^{d-2}} dx \le \frac{1}{(d-2)c} \int_{B_R(0)} \frac{1}{\|x\|^{d-2}} dx
 = \frac{1}{(d-2)c} \int_0^R \frac{r^{d-1}}{r^{d-2}} \underbrace{\int_{\partial B_1(0)} 1 dz}_{=: c} dr = \frac{1}{2(d-2)} R^2 < \infty

This shows us that we are allowed to apply lemma 2.4, which shows us that \xi \mapsto T_{\tilde G(\cdot - \xi)} is continuous. Well-definedness follows from theorem 1.3.

Furthermore, we now calculate the gradient and the Laplacian of \tilde G(z) for z \in \R^d \setminus \{0\}, because we will need them later:

d=1: ~ \nabla -\frac{1}{2} |z| = -\frac{z}{2|z|}
\Delta -\frac{1}{2} |z| = 0, since -\frac{z}{2|z|} is constant on each of the intervals (-\infty, 0) and (0, \infty)


d=2: ~ \nabla -\frac{1}{2\pi}\ln \|z\| = -\frac{z}{2\pi \|z\|^2}
\Delta -\frac{1}{2\pi}\ln \|z\| = - \frac{1}{2\pi}\frac{\|z\|^2 - 2z_1^2 + \|z\|^2 - 2z_2^2}{\|z\|^4} = 0


d \ge 3: ~ \nabla \frac{1}{(d-2)c} \frac{1}{\|z\|^{d-2}} = -\frac{1}{c} \frac{1}{\|z\|^{d-1}} \cdot \frac{z}{\|z\|} = -\frac{1}{c} \frac{z}{\|z\|^d}
\Delta \frac{1}{(d-2)c} \frac{1}{\|z\|^{d-2}} = -\frac{1}{c} \frac{\|z\|^d - d \|z\|^{d-2} z_1^2 + \cdots + \|z\|^d - d \|z\|^{d-2} z_d^2}{\|z\|^{2d}} = 0

Consider first the case d \ge 2, and let \xi \in \R^d and \varphi \in \mathcal D(\R^d) be arbitrary.

We now define

J_0(R) := -\int_{\R^d \setminus B_R(\xi)} \tilde G(x - \xi) \Delta \varphi(x) dx

Due to the dominated convergence theorem, we have

\lim_{R \to 0} J_0(R) = \lim_{R \to 0} -\int_{\R^d} \tilde G(x - \xi) \Delta \varphi(x) 1_{\R^d \setminus B_R(\xi)}(x) dx = -\int_{\R^d} \tilde G(x - \xi) \Delta \varphi(x) dx = -\Delta T_{\tilde G(\cdot - \xi)}(\varphi)

Let's furthermore choose w(x) = \tilde G(x - \xi) \nabla \varphi(x). Then

\nabla \cdot w(x) = \Delta \varphi(x) \tilde G(x - \xi) + \langle \nabla \tilde G(x - \xi), \nabla \varphi(x) \rangle.

From Gauß' theorem, we obtain

\int_{\R^d \setminus B_R(\xi)} \Delta \varphi(x) \tilde G(x - \xi) + \langle \nabla \tilde G(x - \xi), \nabla \varphi(x) \rangle dx = -\int_{\partial B_R(\xi)} \langle \tilde G(x - \xi) \nabla \varphi(x), \frac{x-\xi}{\|x-\xi\|} \rangle dx

, where the minus sign on the right hand side occurs because on \partial B_R(\xi), the outward normal vector of \R^d \setminus B_R(\xi) is the inward normal vector of the ball. From this follows immediately that

\int_{\R^d \setminus B_R(\xi)} -\Delta \varphi(x) \tilde G(x - \xi) dx = \underbrace{\int_{\partial B_R(\xi)} \langle \tilde G(x - \xi) \nabla \varphi(x), \frac{x-\xi}{\|x-\xi\|} \rangle dx }_{:= J_1(R)} + \underbrace{\int_{\R^d \setminus B_R(\xi)} \langle \nabla \tilde G(x - \xi), \nabla \varphi(x) \rangle dx}_{:=J_2(R)}

We can now estimate J_1(R), using the Cauchy-Schwarz inequality:

|J_1(R)| \le \int_{\partial B_R(\xi)} \| \tilde G(x - \xi) \nabla \varphi(x)\| \overbrace{\left\| \frac{x-\xi}{\|x-\xi\|} \right\|}^{=1} dx
 =
\begin{cases}
\displaystyle\int_{\partial B_R(\xi)} \frac{1}{2\pi} |\ln \|x-\xi\|| \, \|\nabla \varphi(x)\|dx = R \int_{\partial B_1(\xi)} \frac{1}{2\pi} |\ln R| \, \|\nabla \varphi(\xi + R(x - \xi))\|dx & d=2 \\
\displaystyle\int_{\partial B_R(\xi)} \frac{1}{(d-2)c} \frac{1}{\|x-\xi\|^{d-2}} \|\nabla \varphi(x)\|dx = R^{d-1} \int_{\partial B_1(\xi)} \frac{1}{(d-2)c} \frac{1}{R^{d-2}} \|\nabla \varphi(\xi + R(x - \xi))\| dx & d \ge 3
\end{cases}
 \le
\begin{cases}
\displaystyle\max\limits_{x \in \R^d} \|\nabla \varphi(x)\| \frac{c}{2\pi} R |\ln R| \to 0, R \to 0 & d=2 \\
\displaystyle\max\limits_{x \in \R^d} \|\nabla \varphi(x)\| \frac{R}{d-2} \to 0, R \to 0 & d \ge 3
\end{cases}

Now we define v(x) = \varphi(x) \nabla \tilde G(x - \xi), which gives:

\nabla \cdot v(x) = \varphi(x) \underbrace{\Delta \tilde G(x - \xi)}_{=0, x \neq \xi} + \langle \nabla \varphi(x), \nabla \tilde G(x - \xi) \rangle

Applying Gauß' theorem on v on \R^d \setminus B_R(\xi) (again with the inward normal vector on \partial B_R(\xi)) therefore gives us

J_2(R) = -\int_{\partial B_R(\xi)} \varphi(x) \langle \nabla \tilde G(x - \xi), \frac{x-\xi}{\|x-\xi\|} \rangle dx
 = -\int_{\partial B_R(\xi)} \varphi(x) \langle -\frac{x-\xi}{c \|x-\xi\|^d}, \frac{x-\xi}{\|x-\xi\|} \rangle dx = \frac{1}{c}\int_{\partial B_R(\xi)} \frac{1}{R^{d-1}} \varphi(x) dx

, noting that d = 2 \Rightarrow c = 2\pi.

We furthermore note that

\varphi(\xi) = \frac{1}{c} \int_{\partial B_1(\xi)} \varphi(\xi) dx = \frac{1}{c} \int_{\partial B_R(\xi)} \frac{1}{R^{d-1}} \varphi(\xi) dx

Therefore, we have

\lim_{R \to 0} |J_2(R) - \varphi(\xi)| \le \frac{1}{c} \lim_{R \to 0} \int_{\partial B_R(\xi)} \frac{1}{R^{d-1}} |\varphi(x) - \varphi(\xi)| dx \le \lim_{R \to 0} \frac{1}{c} \max_{x \in \overline{B_R(\xi)}} |\varphi(x) - \varphi(\xi)| \int_{\partial B_1(\xi)} 1 dx
 = \lim_{R \to 0} \max_{x \in \overline{B_R(\xi)}} |\varphi(x) - \varphi(\xi)| = 0

due to the continuity of \varphi.

Thus we can conclude that

\forall \Omega \text{ domain of } \R^d: \forall \varphi \in \mathcal D(\Omega): -\Delta T_{\tilde G( \cdot - \xi)}(\varphi) = \lim_{R \to 0} J_0(R) = \lim_{R \to 0} J_1(R) + J_2(R) = 0 + \varphi(\xi) = \delta_\xi (\varphi).

Therefore, \tilde G is a Green's kernel for Poisson's equation for d \ge 2.

For d = 1, we can calculate directly, using one-dimensional integration by parts:

-2\Delta T_{\tilde G( \cdot - \xi)}(\varphi) = \int_\R |x - \xi| \varphi''(x) dx = \int_\xi^{\infty} (x - \xi) \varphi''(x) dx + \int_{-\infty}^\xi (\xi - x) \varphi''(x) dx
= \overbrace{-(\xi - \xi)\varphi'(\xi) + (\xi - \xi) \varphi'(\xi)}^{=0} + \overbrace{\lim_{x \to \infty} (x - \xi) \varphi'(x) - \lim_{x \to -\infty} (\xi - x) \varphi'(x)}^{= 0 \text{ since supp } \varphi \text{ is compact} } - \int_\xi^{\infty} 1 \varphi'(x) dx - \int_{-\infty}^\xi (- 1) \varphi'(x) dx
 = 1 \varphi(\xi) -(- 1) \varphi(\xi) + 0 = 2\varphi(\xi) = 2 \delta_\xi(\varphi)

, and dividing by 2 gives the result that we wanted.

QED.


Integration over spheres

Theorem 6.12:

Let f: \mathbb R^d \to \mathbb R be an integrable function. Then

\int_{\partial B_R(0)} f(x) dx = R^{d-1} \int_0^{2\pi} \underbrace{\int_{-\frac{\pi}{2}}^\frac{\pi}{2} \cdots \int_{-\frac{\pi}{2}}^\frac{\pi}{2}}_{d-2 \text{ times}} f\left(\Psi(R, \Phi, \Theta_1, \ldots, \Theta_{d-2})\right) \cos(\Theta_1) \cos(\Theta_2)^2 \cdots \cos(\Theta_{d-2})^{d-2} d\Theta_1 d\Theta_2 \cdots d\Theta_{d-2} d\Phi
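
For d = 2, this is the familiar formula for integrating over a circle:

\int_{\partial B_R(0)} f(x) dx = R \int_0^{2\pi} f(R \cos(\Phi), R \sin(\Phi)) d\Phi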

Proof: We choose as an orientation the boundary orientation of the sphere. We know that for \partial B_r(0), an outward normal vector field is given by \nu(x) = \frac{x}{r}. As a parametrisation of B_r(0), we simply choose the identity function, obtaining that the basis for the tangent space there is the standard basis, which in turn means that the volume form of B_r(0) is

\omega_{B_r(0)}(x) = e_1^* \wedge \cdots \wedge e_d^*

Now, we use the normal vector field to obtain the volume form of \partial B_r(0):

\omega_{\partial B_r(0)}(x)(v_1, \ldots, v_{d-1}) = \omega_{B_r(0)}(x)(\nu(x), v_1, \ldots, v_{d-1})

We insert the formula for \omega_{B_r(0)}(x) and then use Laplace's determinant formula:

=e_1^* \wedge \cdots \wedge e_d^* (\nu(x), v_1, \ldots, v_{d-1}) = \frac{1}{r} \sum_{i=1}^d (-1)^{i+1} x_i e_1^* \wedge \cdots \wedge e_{i-1}^* \wedge e_{i+1}^* \wedge \cdots \wedge e_d^*(v_1,  \ldots, v_{d-1})

As a parametrisation of \partial B_r(0) we choose spherical coordinates with constant radius r.

We calculate the Jacobian matrix for the spherical coordinates:


J = \left(
 \begin{smallmatrix}
  \cos(\varphi) \cos(\vartheta_1) \cdots \cos(\vartheta_{d-2}) & -r \sin(\varphi) \cos(\vartheta_1) \cdots \cos(\vartheta_{d-2}) & -r \cos(\varphi) \sin(\vartheta_1) \cdots \cos(\vartheta_{d-2}) & \cdots & \cdots & -r \cos(\varphi) \cos(\vartheta_1) \cdots \sin(\vartheta_{d-2}) \\
  \sin(\varphi) \cos(\vartheta_1) \cdots \cos(\vartheta_{d-2}) & r \cos(\varphi) \cos(\vartheta_1) \cdots \cos(\vartheta_{d-2}) & -r \sin(\varphi) \sin(\vartheta_1) \cdots \cos(\vartheta_{d-2}) & \cdots & \cdots & -r \sin(\varphi) \cos(\vartheta_1) \cdots \sin(\vartheta_{d-2}) \\
  \vdots  & 0  & \ddots & \ddots & \ddots & \vdots \\
  \vdots  & \vdots  & \ddots & \ddots & \ddots &  \\
  \sin(\vartheta_{d-3}) \cos(\vartheta_{d-2}) & 0 & \cdots & 0 & r \cos(\vartheta_{d-3}) \cos(\vartheta_{d-2}) & r \sin(\vartheta_{d-3}) \cos(\vartheta_{d-2}) \\
  \sin(\vartheta_{d-2}) & 0 & \cdots & \cdots & 0 & r \cos(\vartheta_{d-2})
 \end{smallmatrix} \right)

We observe that in the first column, we have only the spherical coordinates divided by r. If we fix r, the first column disappears. Let's call the resulting matrix J' and our parametrisation, namely spherical coordinates with constant r, \Psi. Then we have:

\Psi^*\omega_{\partial B_r(0)}(x)(v_1, \ldots, v_{d-1}) = \omega_{\partial B_r(0)}(\Psi(x))(J' v_1, \ldots, J' v_{d-1})
= \frac{1}{r} \sum_{i=1}^d (-1)^{i+1} \Psi(x)_i e_1^* \wedge \cdots \wedge e_{i-1}^* \wedge e_{i+1}^* \wedge \cdots \wedge e_d^*(J' v_1, \ldots, J' v_{d-1})
= \frac{1}{r} \sum_{i=1}^d (-1)^{i+1} \Psi(x)_i \det(e_j^*(J' v_k))_{j \neq i} = \det J \cdot \det(v_1, \ldots, v_{d-1})

Recalling that

\det J = r^{d-1}\cos(\vartheta_1)\cos(\vartheta_2)^2 \cdots \cos(\vartheta_{d-2})^{d-2}

, the claim follows using the definition of the surface integral.


Theorem 6.13:

Let R > 0 and let f: B_R(0) \to \mathbb R be integrable. Then

\int_{B_R(0)} f(x) dx = \int_0^R r^{d-1} \int_{\partial B_1(0)} f(rx) dx dr


Proof: Let again \Psi be the spherical coordinates. Due to transformation of variables, we obtain:

\int_{B_R(0)} f(x) dx = \int_0^R r^{d-1} \int_0^{2\pi} \underbrace{\int_{-\frac{\pi}{2}}^\frac{\pi}{2} \cdots \int_{-\frac{\pi}{2}}^\frac{\pi}{2}}_{d-2 \text{ times}} f(\Psi(r, \Phi, \Theta_1, \ldots, \Theta_{d-2})) \cos(\Theta_1) \cos(\Theta_2)^2 \cdots \cos(\Theta_{d-2})^{d-2} d\Theta_1 \cdots d\Theta_{d-2} d\Phi dr

But due to the formula for integrating on the sphere surface, we also have that

\int_{\partial B_1(0)} f(rx) dx = \int_0^{2\pi} \underbrace{\int_{-\frac{\pi}{2}}^\frac{\pi}{2} \cdots \int_{-\frac{\pi}{2}}^\frac{\pi}{2}}_{d-2 \text{ times}} f(r\Psi(1, \Phi, \Theta_1, \ldots, \Theta_{d-2})) \cos(\Theta_1) \cos(\Theta_2)^2 \cdots \cos(\Theta_{d-2})^{d-2} d\Theta_1 \cdots d\Theta_{d-2} d\Phi

Combining this formula with r\Psi(1, \Phi, \Theta_1, \ldots, \Theta_{d-2}) = \Psi(r, \Phi, \Theta_1, \ldots, \Theta_{d-2}) finishes the proof.
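
As a quick consistency check, choosing f \equiv 1 in theorem 6.13 gives

\int_{B_R(0)} 1 dx = \int_0^R r^{d-1} \int_{\partial B_1(0)} 1 dx \, dr = A_d(1) \frac{R^d}{d} = \frac{\pi^{d/2}}{\Gamma(d/2+1)} R^d = V_d(R)

, in accordance with theorems 6.8 and 6.9.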

Harmonic functions

Definition 6.14: Let O \subseteq \mathbb R^d be open and let u : O \to \mathbb R be a function. If u \in \mathcal C^2(O) and

\forall x \in O : \Delta u(x) = 0

, then u is called a harmonic function.

Theorem 6.15:

Let O \subseteq \mathbb R^d be open and let u \in \mathcal C^2(O). The following conditions are equivalent:

  • u is harmonic
  • \forall x \in O : \forall R \text{ such that } \overline{B_R(x)} \subset O : u(x) = \frac{1}{A_d(R)} \int_{\partial B_R(x)} u(y) dy
  • \forall x \in O : \forall R \text{ such that } \overline{B_R(x)} \subset O : u(x) = \frac{1}{V_d(R)} \int_{B_R(x)} u(y) dy

Proof: Let's define the following function:

\phi(r) = \frac{1}{r^{d-1}} \int_{\partial B_r(x)} u(y) dy

By first transforming coordinates with the diffeomorphism y \mapsto y + x and then applying our formula for integration over the sphere (theorem 6.12) twice, we obtain:

\phi(r) = \frac{1}{r^{d-1}} \int_{\partial B_r(0)} u(y + x) dy = \int_{\partial B_1(0)} u(x + ry) dy

From first differentiating under the integral sign and then applying Gauss' theorem, we obtain

\phi'(r) = \int_{\partial B_1(0)} \langle \nabla u(x + ry), y \rangle dy = r \int_{B_1(0)} \Delta u (x + ry) dy

Case 1: If u is harmonic, then we have

\int_{B_1(0)} \Delta u (x + ry) dy = 0

, which is why \phi is constant. Now we can use the dominated convergence theorem for the following calculation:

\lim_{r \to 0} \phi(r) = \int_{\partial B_1(0)} \lim_{r \to 0} u(x + ry) dy = c(1) u(x)

, where c(\rho) := A_d(\rho) denotes the surface area of \partial B_\rho(0) and, below, d(\rho) := V_d(\rho) denotes the volume of B_\rho(0).

Therefore \phi(r) = c(1) u(x) for all r.

With the relationship

r^{d-1} c(1) = c(r)

, which is true because of our formula for c(r) = A_d(r) = A_d(1) r^{d-1}, we obtain that

u(x) = \frac{\phi(r)}{c(1)} = \frac{1}{c(1)} \frac{1}{r^{d-1}} \int_{\partial B_r(x)} u(y) dy = \frac{1}{c(r)} \int_{\partial B_r(x)} u(y) dy

, which proves the first formula.

Furthermore, we can prove the second formula by first transforming variables, then integrating by onion skins (theorem 6.13) and then using the first formula of this theorem:

\int_{B_r(x)} u(y) dy = \int_{B_r(0)} u(y + x) dy = \int_0^r s^{d-1} \int_{\partial B_1(0)} u(x + sy) dy ds = \int_0^r s^{d-1} c(1) u(x) ds = u(x) c(1) \frac{r^d}{d} = u(x) d(r)

This shows that if u is harmonic, then the two mean value formulas for calculating u hold.

Case 2: Suppose that u is not harmonic. Then there exists an x \in O such that \Delta u(x) \neq 0. Without loss of generality, we assume that \Delta u(x) > 0; the proof for \Delta u(x) < 0 is completely analogous, except that the directions of the inequalities interchange. As above, due to the dominated convergence theorem, we have

\lim_{r \to 0} \frac{\phi'(r)}{r} = \int_{B_1(0)} \lim_{r\to 0} \Delta u (x + ry) dy = V_d(1) \Delta u(x) > 0

Since r \mapsto \phi'(r)/r is continuous (again by the dominated convergence theorem), there exists a \sigma \in \R_{>0} such that \phi'(r) > 0 for all r \in (0, \sigma). Hence \phi strictly grows at 0, which is a contradiction to the first formula, by which \phi would have to be constant.

The contradiction to the second formula can be obtained by using the same \sigma \in \R_{>0} as above, for which

\forall r \in (0, \sigma) : \phi'(r) > 0

This means that since

\lim_{r \to 0} \phi(r) = \int_{\partial B_1(0)} \lim_{r \to 0} u(x + ry) dy = c(1) u(x)

and therefore

\phi(0) = c(1) u(x)

, that

\forall r \in (0, \sigma) : \phi(r) > c(1) u(x)

and therefore, by the same calculation as above, for r \in (0, \sigma),

\int_{B_r(x)} u(y) dy = \int_{B_r(0)} u(y + x) dy = \int_0^r s^{d-1} \int_{\partial B_1(0)} u(x + sy) dy ds = \int_0^r s^{d-1} \phi(s) ds > \int_0^r s^{d-1} c(1) u(x) ds = u(x) d(r)

This shows (by contradiction) that if one of the two formulas holds, then u \in \mathcal C^2(O) is harmonic.

Definition 6.16:

A domain is an open and connected subset of \mathbb R^d.

Theorem 6.17:

Let \Omega \subseteq \mathbb R^d be a domain and let u: \Omega \to \mathbb R be harmonic. If there exists an x \in \Omega such that

u(x) = \sup_{y \in \Omega} u(y)

, then u is constant.

Proof:

Let's define S := \sup_{x \in \Omega} u(x). Let A = \{y \in \Omega : u(y) = S\}. Due to the assumption, we have that A is not empty. Furthermore, since \Omega is open, there is an open ball around every y \in A such that B_r(y) \subseteq \Omega. With one of the mean-value formulas (see above), we obtain the inequality

S = u(y) = \frac{1}{d(r)} \int_{B_r(y)} u(x) dx \le \frac{1}{d(r)} \int_{B_r(y)} S dx = S \frac{d(r)}{d(r)} = S

, which implies that u(x) = S almost everywhere on B_r(y). But since u and the function which is constantly S are both continuous, we even have u(x) = S on the whole ball B_r(y). Thus B_r(y) \subseteq A, and therefore A is open.

But since A = u^{-1}(S), and u is continuous, we also have that A is relatively closed in \Omega, and since \Omega is connected, the only possibility is that A = \Omega.

Theorem 6.18:

Let \Omega \subseteq \mathbb R^d be a domain and let u: \Omega \to \mathbb R be harmonic. If there exists an x \in \Omega such that

u(x) = \inf_{y \in \Omega} u(y)

, then u is constant.

Proof: See exercise 6.

Corollary 6.19:

Let \Omega \subseteq \mathbb R^d be a bounded domain and let u: \overline \Omega \to \mathbb R be harmonic on \Omega and continuous on \overline \Omega. Then

\forall x \in \overline \Omega : \inf_{y \in \partial \Omega} u(y) \le u(x) \le \sup_{y \in \partial \Omega} u(y)

Proof:

Theorem 6.20:

Let O \subseteq \mathbb R^d be open and u: O \to \mathbb R be a harmonic function, let x_0 \in O and let R > 0 such that \overline{B_R(x_0)} \subset O. Then

\forall x \in B_R(x_0) : u(x) = \frac{R^2 - \|x - x_0\|^2}{R A_d(1)} \int_{\partial B_R(x_0)}\frac{u(y)}{\|x - y\|^d} dy
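
Note that for x = x_0, this formula reduces to the mean value formula of theorem 6.15, since then \|x - y\| = R for all y \in \partial B_R(x_0) and

u(x_0) = \frac{R^2}{R A_d(1)} \int_{\partial B_R(x_0)} \frac{u(y)}{R^d} dy = \frac{1}{A_d(1) R^{d-1}} \int_{\partial B_R(x_0)} u(y) dy = \frac{1}{A_d(R)} \int_{\partial B_R(x_0)} u(y) dy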

Proof:

What we will do next is show that every harmonic function u \in \mathcal C^2(O) is in fact automatically contained in \mathcal C^\infty(O). But before we do so, we need another multiindex definition: using the definition of the usual binomial coefficient, we define a binomial coefficient for multiindices.

Definition 6.21:

If \alpha = (\alpha_1, \ldots, \alpha_d), \beta = (\beta_1, \ldots, \beta_d) \in \N_0^d are two d-dimensional multiindices, we define the binomial coefficient of \alpha over \beta as:

\binom{\alpha}{\beta} := \binom{\alpha_1}{\beta_1} \binom{\alpha_2}{\beta_2} \cdots \binom{\alpha_d}{\beta_d}
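
For example, for d = 2, \alpha = (2, 1) and \beta = (1, 1) we obtain

\binom{\alpha}{\beta} = \binom{2}{1} \binom{1}{1} = 2 \cdot 1 = 2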

With these multiindex definitions, we are able to write down a more general version of Leibniz' product rule for derivatives. But in order to prove this rule, we need another lemma first.

Lemma 6.22:

If n \in \{1, \ldots, d\} and e_n := (0, \ldots, 0, 1, 0, \ldots, 0), where the 1 is at the n-th place, we have

\binom{\alpha - e_n}{\beta -e_n} + \binom{\alpha - e_n}{\beta} = \binom{\alpha}{\beta}

for arbitrary multiindices \alpha, \beta \in \mathbb N_0^d.

Proof:

For the ordinary binomial coefficients for natural numbers, we had the formula

\binom{n - 1}{k - 1} + \binom{n - 1}{k} = \binom{n}{k}.

Therefore,

\begin{align}
\binom{\alpha - e_n}{\beta -e_n} + \binom{\alpha - e_n}{\beta} &= \binom{\alpha_1}{\beta_1} \cdots \binom{\alpha_n - 1}{\beta_n - 1} \cdots \binom{\alpha_d}{\beta_d} + \binom{\alpha_1}{\beta_1} \cdots \binom{\alpha_n - 1}{\beta_n} \cdots \binom{\alpha_d}{\beta_d} \\
&= \binom{\alpha_1}{\beta_1} \cdots \left( \binom{\alpha_n - 1}{\beta_n - 1} + \binom{\alpha_n - 1}{\beta_n} \right) \cdots \binom{\alpha_d}{\beta_d} \\
&= \binom{\alpha_1}{\beta_1} \cdots \binom{\alpha_n}{\beta_n} \cdots \binom{\alpha_d}{\beta_d}
= \binom{\alpha}{\beta}
\end{align}
////

We also define a less-or-equal relation on the set of multiindices.

Definition 6.23:

Let \alpha = (\alpha_1, \ldots, \alpha_d), \beta = (\beta_1, \ldots, \beta_d) \in \N_0^d be two d-dimensional multiindices. We define \beta to be less than or equal to \alpha iff

\beta \le \alpha :\Leftrightarrow \forall 1 \le i \le d : \beta_i \le \alpha_i

For d \ge 2, there are vectors \alpha, \beta \in \mathbb N_0^d such that neither \alpha \le \beta nor \beta \le \alpha. For d = 2, the following two vectors are examples for this:

\alpha = (1, 0), \beta = (0, 1)

This example can be generalised to higher dimensions (see exercise 7).

This is the general product rule:

Theorem 6.24:

Let f, g \in \mathcal C^n(\mathbb R^d) and let |\alpha| \le n. Then

\partial_\alpha (f(x) \cdot g(x)) = \sum_{\beta \le \alpha} \binom{\alpha}{\beta} \partial_\beta f(x) \cdot \partial_{\alpha - \beta} g(x)
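
For example, for d = 2 and \alpha = (1, 1), all four multiindices \beta \le \alpha have \binom{\alpha}{\beta} = 1, and theorem 6.24 reads

\partial_{(1,1)} (f(x) \cdot g(x)) = f(x) \partial_{(1,1)} g(x) + \partial_{(1,0)} f(x) \cdot \partial_{(0,1)} g(x) + \partial_{(0,1)} f(x) \cdot \partial_{(1,0)} g(x) + \partial_{(1,1)} f(x) \cdot g(x)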


Proof:

We prove the claim by induction over |\alpha|.

1.

We start with the induction base |\alpha| = 0. Then the formula just reads

f(x)g(x) = f(x)g(x)

, and this is true. Therefore, we have completed the induction base.

2.

Next, we do the induction step. Let's assume the claim is true for all \alpha \in \N_0^d such that |\alpha| = j. Let now \alpha \in \N_0^d such that |\alpha| = j+1. Let's choose i \in \{1, \ldots, d\} such that \alpha_i > 0 (we may do this because |\alpha| = j+1 > 0). We define again e_i = (0, \ldots, 0, 1, 0, \ldots, 0), where the 1 is at the i-th place. Then we have, due to the Schwarz theorem and the ordinary Leibniz product rule:

\frac{\partial^\alpha}{\partial x^\alpha} (f(x) \cdot g(x)) = \frac{\partial^{\alpha - e_i}}{\partial x^{\alpha - e_i}} \left( \frac{\partial}{\partial x_i} (f(x) \cdot g(x)) \right) = \frac{\partial^{\alpha - e_i}}{\partial x^{\alpha - e_i}} \left( \frac{\partial}{\partial x_i} f(x) \cdot g(x) + f(x) \cdot \frac{\partial}{\partial x_i} g(x) \right)

Now we may use the linearity of differentiation and the induction hypothesis to obtain:

\frac{\partial^{\alpha - e_i}}{\partial x^{\alpha - e_i}} \left(\frac{\partial}{\partial x_i} f(x) \cdot g(x) + f(x) \cdot \frac{\partial}{\partial x_i} g(x) \right) = \frac{\partial^{\alpha - e_i}}{\partial x^{\alpha - e_i}} \left( \frac{\partial}{\partial x_i} f(x) \cdot g(x) \right) + \frac{\partial^{\alpha - e_i}}{\partial x^{\alpha - e_i}} \left( f(x) \cdot \frac{\partial}{\partial x_i} g(x) \right)
 = \sum_{\beta \le \alpha - e_i} \binom{\alpha - e_i}{\beta} \frac{\partial^\beta}{\partial x^\beta} \frac{\partial}{\partial x_i} f(x) \cdot \frac{\partial^{\alpha - e_i - \beta}}{\partial x^{\alpha - e_i - \beta}} g(x) + \sum_{\beta \le \alpha - e_i} \binom{\alpha - e_i}{\beta} \frac{\partial^\beta}{\partial x^\beta} f(x) \cdot \frac{\partial^{\alpha - e_i - \beta}}{\partial x^{\alpha - e_i - \beta}} \frac{\partial}{\partial x_i} g(x)

Here comes a key ingredient of the proof: noticing that

\frac{\partial^{\alpha - e_i - \beta}}{\partial x^{\alpha - e_i - \beta}} = \frac{\partial^{\alpha - (\beta + e_i)}}{\partial x^{\alpha - (\beta + e_i)}}

and

\{\beta \in \N_0^d | 0 \le \beta \le \alpha - e_i\} = \{\beta - e_i \in \N_0^d | e_i \le \beta \le \alpha\}

, we notice that we are allowed to shift indices in the first of the two above sums, and furthermore simplify both sums with the rule

\frac{\partial^\beta}{\partial x^\beta} \frac{\partial}{\partial x_i} = \frac{\partial^{\beta + e_i}}{\partial x^{\beta + e_i}}.

Therefore, we obtain:

\sum_{\beta \le \alpha - e_i} \binom{\alpha - e_i}{\beta} \frac{\partial^\beta}{\partial x^\beta} \frac{\partial}{\partial x_i} f(x) \cdot \frac{\partial^{\alpha - e_i - \beta}}{\partial x^{\alpha - e_i - \beta}} g(x) + \sum_{\beta \le \alpha - e_i} \binom{\alpha - e_i}{\beta} \frac{\partial^\beta}{\partial x^\beta} f(x) \cdot \frac{\partial^{\alpha - e_i - \beta}}{\partial x^{\alpha - e_i - \beta}} \frac{\partial}{\partial x_i} g(x)
 = \sum_{e_i \le \beta \le \alpha} \binom{\alpha - e_i}{\beta - e_i} \frac{\partial^\beta}{\partial x^\beta} f(x) \cdot \frac{\partial^{\alpha - \beta}}{\partial x^{\alpha - \beta}} g(x) + \sum_{\beta \le \alpha - e_i} \binom{\alpha - e_i}{\beta} \frac{\partial^\beta}{\partial x^\beta} f(x) \cdot \frac{\partial^{\alpha - \beta}}{\partial x^{\alpha - \beta}} g(x)

Now we just sort the sum differently, and then apply our observation

\binom{\alpha - e_i}{\beta -e_i} + \binom{\alpha - e_i}{\beta} = \binom{\alpha}{\beta},

which we made immediately after we defined the binomial coefficients, as well as the observations that

\binom{\alpha - e_i}{0} = \binom{\alpha}{0} = 1, where 0 = (0, \ldots, 0) \in \N_0^d, and \binom{\alpha - e_i}{\alpha - e_i} = \binom{\alpha}{\alpha} = 1 (these two rules may be checked from the definition of \binom{\alpha}{\beta})

, to find in conclusion:

\frac{\partial^\alpha}{\partial x^\alpha} (f(x) \cdot g(x)) = \sum_{e_i \le \beta \le \alpha} \binom{\alpha - e_i}{\beta - e_i} \frac{\partial^\beta}{\partial x^\beta} f(x) \cdot \frac{\partial^{\alpha - \beta}}{\partial x^{\alpha - \beta}} g(x) + \sum_{\beta \le \alpha - e_i} \binom{\alpha - e_i}{\beta} \frac{\partial^\beta}{\partial x^\beta} f(x) \cdot \frac{\partial^{\alpha - \beta}}{\partial x^{\alpha - \beta}} g(x)
 = \binom{\alpha - e_i}{0} f(x) \frac{\partial^\alpha}{\partial x^\alpha} g(x) + \sum_{e_i \le \beta \le \alpha - e_i} \left[ \binom{\alpha - e_i}{\beta - e_i} + \binom{\alpha - e_i}{\beta} \right] \frac{\partial^\beta}{\partial x^\beta} f(x) \cdot \frac{\partial^{\alpha - \beta}}{\partial x^{\alpha - \beta}} g(x) + \binom{\alpha - e_i}{\alpha - e_i} \frac{\partial^\alpha}{\partial x^\alpha} f(x) \cdot g(x)
= \sum_{\beta \le \alpha} \binom{\alpha}{\beta} \frac{\partial^\beta}{\partial x^\beta} f(x) \cdot \frac{\partial^{\alpha - \beta}}{\partial x^{\alpha - \beta}} g(x)
\Box

Theorem 6.25: Let O \subseteq \mathbb R^d be open, and let u : O \to \mathbb R be harmonic. Then u \in \mathcal C^\infty(O). Furthermore, for all n \in \mathbb N, there is a constant C_{d, n} depending only on the dimension d and n such that for all x_0 \in O and R > 0 such that B_R(x_0) \subseteq O

\forall \alpha \in \N_0^d \text{ with } |\alpha| = n : |\partial_\alpha u(x_0)| \le \frac{C_{d, n}}{R^{d+n}}\int_{B_R(x_0)}|u(y)| dy
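
A classical consequence of this estimate is Liouville's theorem: if u is harmonic and bounded on all of \mathbb R^d, say |u(y)| \le M for all y, then for every x_0 \in \mathbb R^d and every \alpha with |\alpha| = 1,

|\partial_\alpha u(x_0)| \le \frac{C_{d, 1}}{R^{d+1}} \int_{B_R(x_0)} |u(y)| dy \le \frac{C_{d, 1} M V_d(1) R^d}{R^{d+1}} \to 0, R \to \infty

, so that \nabla u \equiv 0 and u is constant.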

Proof:

Definition 6.26:

Let O \subseteq \mathbb R^d be open, let (u_l)_{l \in \mathbb N} be a sequence of functions u_l : O \to \mathbb R, and let u : O \to \mathbb R be a function. (u_l)_{l \in \mathbb N} converges locally uniformly to u iff every x \in O has a neighbourhood on which (u_l)_{l \in \mathbb N} converges uniformly to u.

Theorem 6.27:

Let O \subseteq \mathbb R^d be open and let u_l : O \to \mathbb R, l \in \mathbb N be harmonic functions such that the sequence (u_l)_{l \in \mathbb N} converges locally uniformly to a function u : O \to \mathbb R. Then also u is harmonic.

Proof:

Definition 6.28:

Let K \subseteq \mathbb R^d and let F be a set of functions f : K \to \mathbb R. We call F bounded iff \sup_{f \in F} \sup_{x \in K} |f(x)| < \infty, and we call F equicontinuous iff

\forall \epsilon > 0 : \exists \delta > 0 : \forall f \in F : \forall x, y \in K : \|x - y\| < \delta \Rightarrow |f(x) - f(y)| < \epsilon

Theorem 6.29: (Arzelà-Ascoli) Let F be a set of continuous functions which are defined on a compact set K \subset \mathbb R^d. Then the following two statements are equivalent:

  1. \overline F (the closure of F with respect to the supremum norm) is compact
  2. F is bounded and equicontinuous

Proof:

Definition 6.30:

Let O \subseteq \mathbb R^d be open. A sequence of functions (u_l)_{l \in \mathbb N}, u_l : O \to \mathbb R, is called locally uniformly bounded iff for every compact K \subset O there exists a constant C_K > 0 such that

\forall l \in \mathbb N : \forall x \in K : |u_l(x)| \le C_K

Theorem 6.31:

Let (u_l)_{l \in \mathbb N} be a locally uniformly bounded sequence of harmonic functions. Then it has a locally uniformly convergent subsequence.

Proof:

Boundary value problem

The Dirichlet problem for Poisson's equation is to find a solution for

\begin{cases}
-\Delta u(x) = f(x) & x \in \Omega \\
u(x) = g(x) & x \in \partial \Omega
\end{cases}

Uniqueness of solutions

If \Omega is bounded, then we know that if the problem

\begin{cases}
-\Delta u(x) = f(x) & x \in \Omega \\
u(x) = g(x) & x \in \partial \Omega
\end{cases}

has a solution u_1, then this solution is unique on \Omega.


Proof: Let u_2 be another solution. If we define u = u_1 - u_2, then u obviously solves the problem

\begin{cases}
-\Delta u(x) = 0 & x \in \Omega \\
u(x) = 0 & x \in \partial \Omega
\end{cases}

, since -\Delta (u_1(x) - u_2(x)) = -\Delta u_1 (x) - (-\Delta u_2(x)) = f(x) - f(x) = 0 for x \in \Omega and u_1(x) - u_2(x) = g(x) - g(x) = 0 for x \in \partial \Omega.

Due to the above corollary of the minimum and maximum principles, we obtain that u is constantly zero not only on the boundary, but on the whole domain \Omega. Therefore u_1(x) - u_2(x) = 0, i.e. u_1(x) = u_2(x), on \Omega. This is what we wanted to prove.

Green's functions of the first kind

Let \Omega \subseteq \R^d be a domain. Let \tilde G be the Green's kernel of Poisson's equation, which we have calculated above, i.e.

\tilde G(x) :=
\begin{cases}
-\frac{1}{2} |x| & d=1 \\
-\frac{1}{2\pi}\ln \|x\| & d=2 \\
\frac{1}{(d-2)c} \frac{1}{\|x\|^{d-2}} & d \ge 3
\end{cases}

, where c := \int_{\partial B_1(0)} 1 dz denotes the surface area of \partial B_1(0) \subset \R^d.

Suppose there is a function h: \Omega \times \Omega \to \R which satisfies

\begin{cases}
-\Delta h(x, \xi) = 0 & x \in \Omega \\
h(x, \xi) = \tilde G(x - \xi) & x \in \partial \Omega
\end{cases}

Then the Green's function of the first kind for -\Delta for \Omega is defined as follows:

\tilde G_\Omega(x, \xi) := \tilde G(x - \xi) - h(x, \xi)

\tilde G(x - \xi) - h(x, \xi) is then automatically a Green's function for -\Delta. This is verified in exactly the same way as verifying that \tilde G is a Green's kernel; the only additional thing we need to know is that h does not play any role in the limit processes, because it is bounded.

A property of this function is that it satisfies

\begin{cases}
-\Delta \tilde G_\Omega(x, \xi) = 0 & x \in \Omega \setminus \{\xi\} \\
\tilde G_\Omega(x, \xi) = 0 & x \in \partial \Omega
\end{cases}

The second of these equations is clear from the definition, and the first follows recalling that we calculated above (where we calculated the Green's kernel), that \Delta \tilde G(x) = 0 for x \neq 0.
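
As an example, consider the half-space \Omega = \{x \in \R^d : x_d > 0\}. For \xi \in \Omega, let \xi^* := (\xi_1, \ldots, \xi_{d-1}, -\xi_d) be the reflection of \xi across \partial \Omega. Then h(x, \xi) := \tilde G(x - \xi^*) is harmonic in x on \Omega, since \xi^* \notin \Omega, and for x \in \partial \Omega we have \|x - \xi^*\| = \|x - \xi\| and hence h(x, \xi) = \tilde G(x - \xi). Therefore

\tilde G_\Omega(x, \xi) = \tilde G(x - \xi) - \tilde G(x - \xi^*)

is a Green's function of the first kind for the half-space. This reflection trick is known as the method of images.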

Representation formula

Let \Omega \subseteq \R^d be a domain, and let u \in C^2(\Omega) be a solution to the Dirichlet problem

\begin{cases}
-\Delta u(x) = f(x) & x \in \Omega \\
u(x) = g(x) & x \in \partial \Omega
\end{cases}

. Then the following representation formula for u holds:

u(\xi) = \int_\Omega -\Delta u(y) \tilde G_\Omega(y, \xi) dy - \int_{\partial \Omega} u(y) \langle \nu(y), \nabla_y \tilde G_\Omega(y, \xi) \rangle dy

, where \tilde G_\Omega is a Green's function of the first kind for \Omega.


Proof: Let's define

J(\epsilon) := \int_{\Omega \setminus B_\epsilon(\xi)} -\Delta u(y) \tilde G_\Omega(y, \xi) dy

. By the theorem of dominated convergence, we have that

\lim_{\epsilon \to 0} J(\epsilon) = \int_\Omega -\Delta u(y) \tilde G_\Omega(y, \xi) dy

Using multi-dimensional integration by parts, it can be obtained that:

J(\epsilon) = - \int_{\partial \Omega} \underbrace{\tilde G_\Omega(y, \xi)}_{=0} \langle \nabla u(y), \nu(y) \rangle dy + \int_{\partial B_\epsilon(\xi)} \tilde G_\Omega(y, \xi) \langle \nabla u(y), \frac{y - \xi}{\|y - \xi\|} \rangle dy + \int_{\Omega \setminus B_\epsilon(\xi)} \langle \nabla u(y), \nabla_y \tilde G_\Omega(y, \xi) \rangle dy
= \underbrace{\int_{\partial B_\epsilon(\xi)} \tilde G_\Omega(y, \xi) \langle \nabla u(y), \frac{y - \xi}{\|y - \xi\|} \rangle dy}_{:= J_1(\epsilon)} - \int_{\Omega \setminus B_\epsilon(\xi)} \underbrace{\Delta_y \tilde G_\Omega(y, \xi)}_{=0} u(y) dy
- \underbrace{\int_{\partial B_\epsilon(\xi)} u(y) \langle \nabla_y \tilde G_\Omega(y, \xi), \frac{y - \xi}{\|y - \xi\|} \rangle dy}_{:=J_2(\epsilon)} + \int_{\partial \Omega} u(y) \langle \nabla_y \tilde G_\Omega(y, \xi), \nu(y) \rangle dy

When we proved the formula for the Green's kernel of Poisson's equation, we essentially already showed that

\lim_{\epsilon \to 0} -J_2(\epsilon) = u(\xi) and
\lim_{\epsilon \to 0} J_1(\epsilon) = 0

The only additional thing which needs to be verified is that h(\cdot, \xi) is smooth and in particular stays bounded near \xi, while \tilde G goes to infinity as \epsilon \to 0; this is why h does not play a role in the limit processes.

This proves the formula.

Harmonic functions on the ball: A special case of the Dirichlet problem

Green's function of the first kind for the ball

Let's choose

h(x, \xi) = \tilde G\left(\frac{\|\xi\|}{r}\left(x - \frac{r^2}{\|\xi\|^2} \xi \right)\right)

Then

\tilde G_{B_r(x_0)}(x, \xi) := \tilde G(x - \xi) - h(x - x_0, \xi - x_0)

is a Green's function of the first kind for B_r(x_0).

Proof: Since \xi - x_0 \in B_r(0) \Rightarrow \frac{r^2}{\|\xi - x_0\|^2} (\xi - x_0) \notin B_r(0), the function x \mapsto h(x - x_0, \xi - x_0) has no singularity inside the ball, and therefore

\forall x, \xi \in B_r(x_0) : -\Delta_x h(x - x_0, \xi - x_0) = 0

Furthermore, for all \varphi \in \mathcal D(B_r(x_0)) we obtain, integrating by parts twice in the second summand:

\int_{B_r(x_0)} -\Delta \varphi(x) \tilde G_{B_r(x_0)}(x, \xi) dx = \int_{B_r(x_0)} -\Delta \varphi(x) \tilde G(x - \xi) dx + \int_{B_r(x_0)} \varphi(x) \underbrace{\Delta_x h(x - x_0, \xi - x_0)}_{=0} dx = \varphi(\xi) + 0

, which is why \tilde G_{B_r(x_0)}(x, \xi) is a Green's function.

The property for the boundary comes from the following calculation:

\forall x \in \partial B_r(0) : \|x - \xi\|^2 = \langle x - \xi, x - \xi \rangle = r^2 + \|\xi\|^2 - 2 \langle x, \xi \rangle = \frac{\|\xi\|^2}{r^2} (\langle x - \frac{r^2}{\|\xi\|^2} \xi, x - \frac{r^2}{\|\xi\|^2} \xi \rangle) = \frac{\|\xi\|^2}{r^2} \|x - \frac{r^2}{\|\xi\|^2} \xi\|^2

, which is why x \in \partial B_r(0) \Rightarrow h(x, \xi) = \tilde G(x - \xi), since \tilde G(x) depends only on \|x\| (it is radially symmetric) and, by the above calculation, \left\| \frac{\|\xi\|}{r} \left( x - \frac{r^2}{\|\xi\|^2} \xi \right) \right\| = \|x - \xi\|.
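
A direct computation, which we will use implicitly in the estimates below, shows that for x_0 = 0 the boundary term -\langle \nu(y), \nabla_y \tilde G_{B_r(0)}(y, \xi) \rangle, y \in \partial B_r(0), is exactly the Poisson kernel

\frac{r^2 - \|\xi\|^2}{r c \|y - \xi\|^d}

, which also appears in theorem 6.20.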

Solution formula

Let's consider the following problem:

\begin{cases}
-\Delta u(x) = 0 & x \in B_r(0) \\
u(x) = \varphi(x) & x \in \partial B_r(0)
\end{cases}

Here \varphi shall be continuous on \partial B_r(0). Then the following holds: The unique solution u \in C(\overline{B_r(0)}) \cap C^2(B_r(0)) for this problem is given by:

u(\xi) = \begin{cases}
\int_{\partial B_r(0)} \langle -\nu(y), \nabla_y \tilde G_{B_r(0)}(y, \xi) \rangle \varphi(y) dy & \xi \in B_r(0) \\
\varphi(\xi) & \xi \in \partial B_r(0)
\end{cases}

Proof: Uniqueness we have already proven; we have shown that for all Dirichlet problems for -\Delta on bounded domains (and the ball is of course bounded), the solutions are unique.

Therefore, it only remains to show that the above function is a solution to the problem. To do so, we note first that

-\Delta \int_{\partial B_r(0)} \langle -\nu(y), \nabla_y \tilde G_{B_r(0)}(y, \xi) \rangle \varphi(y) dy = -\Delta \int_{\partial B_r(0)} \langle -\nu(y), \nabla_y (\tilde G(y - \xi) - h(y, \xi)) \rangle \varphi(y) dy

Let 0 < s < r be arbitrary. Since \tilde G_{B_r(0)} is continuous in B_s(0), it is bounded on B_s(0). Therefore, by the fundamental estimate, we know that the integral is bounded, since the sphere, the set over which is integrated, is a bounded set, and therefore the whole integral must always stay below a certain constant. But this means that we are allowed to differentiate under the integral sign on B_s(0), and since r > s > 0 was arbitrary, we can directly conclude that on B_r(0),

-\Delta u(\xi) = \int_{\partial B_r(0)} \overbrace{-\Delta_\xi \left( \langle -\nu(y), \nabla_y (\tilde G(y - \xi) - h(y, \xi)) \rangle \varphi(y) \right)}^{=0} dy = 0

Furthermore, we have to show that \forall x \in \partial B_r(0): \lim_{y \to x} u(y) = \varphi(x), i.e. that u is continuous on the boundary.

To do this, we notice first that

\int_{\partial B_r(0)} \langle -\nu(y), \nabla_y \tilde G_{B_r(0)}(y, \xi) \rangle dy = 1

This follows due to the fact that if u \equiv 1, then u solves the problem

\begin{cases}
-\Delta u(x) = 0 & x \in B_r(0) \\
u(x) = 1 & x \in \partial B_r(0)
\end{cases}

and the application of the representation formula.

Furthermore, if \|x - x^*\| < \frac{1}{2} \delta and \|y - x^*\| \ge \delta, we have due to the second triangle inequality:

\|x - y\| \ge | \|y - x^*\| - \|x^* - x\| | \ge \frac{1}{2} \delta

In addition, another application of the second triangle inequality gives:

(r^2 - \|x\|^2) = (r + \|x\|)(r - \|x\|) = (r + \|x\|)(\|x^*\| - \|x\|) \le 2r \|x^* - x\|

Let then \epsilon > 0 be arbitrary, and let x^* \in \partial B_r(0). Then, due to the continuity of \varphi, we are allowed to choose \delta > 0 such that

\|x - x^*\| < \delta \Rightarrow |\varphi(x) - \varphi(x^*)| < \frac{\epsilon}{2}.

In the end, with the help of all the previous estimates, we may unleash the last chain of inequalities, which shows the claimed boundary continuity:

|u(x) - u(x^*)| = |u(x) - 1 \cdot \varphi(x^*)| = \left| \int_{\partial B_r(0)} \langle -\nu(y), \nabla_y \tilde G_{B_r(0)}(y, x) \rangle (\varphi(y) - \varphi(x^*)) dy \right|
\le \frac{\epsilon}{2} \int_{\partial B_r(0) \cap B_\delta(x^*)} |\langle -\nu(y), \nabla_y \tilde G_{B_r(0)}(y, x) \rangle| dy + 2 \|\varphi\|_\infty \int_{\partial B_r(0) \setminus B_\delta(x^*)} |\langle -\nu(y), \nabla_y \tilde G_{B_r(0)}(y, x) \rangle| dy
\le \frac{\epsilon}{2} + 2 \|\varphi\|_\infty \int_{\partial B_r(0) \setminus B_\delta(x^*)} \frac{r^2 - \|x\|^2}{rc(1)\left(\frac{\delta}{2}\right)^d} dy \le \frac{\epsilon}{2} + 2 \|\varphi\|_\infty r^{d-2} \frac{r^2 - \|x\|^2}{\left(\frac{\delta}{2}\right)^d}

Since x \to x^* implies r^2 - \|x\|^2 \to 0, we may choose x close enough to x^* such that

2 \|\varphi\|_\infty r^{d-2} \frac{r^2 - \|x\|^2}{\left(\frac{\delta}{2}\right)^d} < \frac{\epsilon}{2}. Since \epsilon > 0 was arbitrary, this finishes the proof.

Barriers

Let \Omega \subseteq \R^d be a domain. A function b: \R^d \to \R is called a barrier with respect to y \in \partial \Omega if and only if the following properties are satisfied:

  1. b is continuous
  2. b is superharmonic on \Omega
  3. b(y) = 0
  4. \forall x \in \R^d \setminus \Omega : b(x) > 0

Exterior sphere condition

Let \Omega \subseteq \R^d be a domain. We say that it satisfies the exterior sphere condition, if and only if for all x \in \partial \Omega there is a ball B_r(z) \subseteq \R^d \setminus \Omega such that x \in \partial B_r(z) for some z \in \R^d \setminus \Omega and r \in \R_{>0}.

Subharmonic and superharmonic functions

Let \Omega \subseteq \R^d be a domain and v \in C(\Omega).

We call v subharmonic if and only if:

\forall x \in \Omega : \forall r > 0 \text{ such that } \overline{B_r(x)} \subset \Omega : v(x) \le \frac{1}{d(r)} \int_{B_r(x)} v(y) dy

We call v superharmonic if and only if:

\forall x \in \Omega : \forall r > 0 \text{ such that } \overline{B_r(x)} \subset \Omega : v(x) \ge \frac{1}{d(r)} \int_{B_r(x)} v(y) dy

From this definition we can see that a function is harmonic if and only if it is subharmonic and superharmonic.
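
For example, v(x) := \|x\|^2 is subharmonic on \R^d: writing y = x + z, we have by symmetry \int_{B_r(0)} \langle x, z \rangle dz = 0, and thus

\frac{1}{d(r)} \int_{B_r(x)} \|y\|^2 dy = \frac{1}{d(r)} \int_{B_r(0)} \left( \|x\|^2 + 2 \langle x, z \rangle + \|z\|^2 \right) dz = \|x\|^2 + \frac{1}{d(r)} \int_{B_r(0)} \|z\|^2 dz \ge \|x\|^2 = v(x)

Accordingly, -\|x\|^2 is superharmonic.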

Minimum principle for superharmonic functions

A superharmonic function u on a bounded domain \Omega, which is continuous on \overline \Omega, attains its minimum on \Omega's boundary \partial \Omega.

Proof: Almost the same as the proof of the minimum and maximum principle for harmonic functions. As an exercise, you might try to prove this minimum principle yourself.

Harmonic lowering

Let u \in \mathcal S_\varphi(\Omega) (see definition 3.1 below), and let B_r(x_0) be a ball with \overline{B_r(x_0)} \subset \Omega. If we define

\tilde u(x) = \begin{cases}
u(x) & x \notin B_r(x_0) \\
\int_{\partial B_r(x_0)} \langle -\nu(y), \nabla_y \tilde G_{B_r(x_0)}(y, x) \rangle u(y) dy & x \in B_r(x_0)
\end{cases}

, then \tilde u \in \mathcal S_\varphi(\Omega).

Proof: For this proof, the very important thing to notice is that the formula for \tilde u inside B_r(x_0) is nothing but the solution formula for the Dirichlet problem on the ball, with boundary values u. Therefore, we immediately obtain that \tilde u is superharmonic, and furthermore, the values on \partial \Omega don't change, which is why \tilde u \in \mathcal S_\varphi(\Omega). This is what we wanted to show.

Definition 3.1

Let \varphi \in C(\partial \Omega). Then we define the following set:

\mathcal S_\varphi(\Omega) := \{u \in C(\overline{\Omega}) : u \text{ superharmonic and } x \in \partial \Omega \Rightarrow u(x) \ge \varphi(x)\}

Lemma 3.2

\mathcal S_\varphi(\Omega) is not empty and

\forall u \in \mathcal S_\varphi(\Omega) : \forall x \in \Omega : u(x) \ge \min_{y \in \partial \Omega} \varphi(y)

Proof: The first part follows by choosing the constant function u(x) = \max_{y \in \partial \Omega} \varphi(y), which is harmonic and therefore superharmonic. The second part follows from the minimum principle for superharmonic functions.

Lemma 3.3

Let u_1, u_2 \in \mathcal S_\varphi(\Omega). If we now define u(x) = \min\{u_1(x), u_2(x)\}, then u \in \mathcal S_\varphi(\Omega).

Proof: The condition on the border is satisfied, because

\forall x \in \partial \Omega : u_1(x) \ge \varphi(x) \wedge u_2(x) \ge \varphi(x)

u is superharmonic because, if we (without loss of generality) assume that u(x) = u_1(x), then it follows that

u(x) = u_1(x) \ge \frac{1}{d(r)} \int_{B_r(x)} u_1(y) dy \ge \frac{1}{d(r)} \int_{B_r(x)} u(y) dy

, due to the monotonicity of the integral. This argument is valid for all x \in \Omega, and therefore u is superharmonic.

Lemma 3.4

If \Omega \subseteq \R^d is bounded and \varphi \in C(\partial \Omega), then the function

u(x) = \inf \{v(x) | v \in \mathcal S_\varphi(\Omega) \}

is harmonic.

Proof:

Lemma 3.5

If \Omega satisfies the exterior sphere condition, then for all y \in \partial \Omega there is a barrier function.

Existence theorem of Perron

Let \Omega \subset \R^d be a bounded domain which satisfies the exterior sphere condition. Then the Dirichlet problem for the Poisson equation, which is, writing it again:

\begin{cases}
-\Delta u(x) = f(x) & x \in \Omega \\
u(x) = g(x) & x \in \partial \Omega
\end{cases}

has a solution u \in C^\infty(\Omega) \cap C(\overline{\Omega}).

Proof:

Let's summarise the results of this section.

Corollary 6.last:

Let \Omega \subset \mathbb R^d be a domain satisfying the exterior sphere condition, let f \in \mathcal C^2(\mathbb R^d), let g: \partial \Omega \to \mathbb R be continuous and let P_\Omega be a Green's function of the first kind for \Omega. Then

u(x) = \int_\Omega f(y) P_\Omega(y, x) dy - \int_{\partial \Omega} g(y) \nu(y) \cdot \nabla_y P_\Omega(y, x) dy

is the unique continuous solution to the boundary value problem

\begin{cases}
\forall x \in \Omega :& - \Delta u(x) = f(x) \\
\forall x \in \partial \Omega :& u(x) = g(x)
\end{cases}

In the next chapter, we will have a look at the heat equation.

Exercises

  1. Prove theorem 6.3 using theorem 6.2 (Hint: Choose \mathbf V (x) = \mathbf W (x) f(x) in theorem 6.2).
  2. Prove that \forall n \in \mathbb N : \Gamma(n+1) = n!, where n! is the factorial of n.
  3. Calculate V_d'(R). Have you seen the obtained function before?
  4. Prove that for d = 1, the function P as defined in theorem 6.11 is a Green's kernel for Poisson's equation (hint: use integration by parts twice).
  5. For all d \ge 2 and x \in \mathbb R^d \setminus \{0\}, calculate \nabla P(x) and \Delta P(x).
  6. Prove theorem 6.18 by copying the proof of theorem 6.17, exchanging the \le sign for a \ge sign and changing all occurrences of \sup to \inf.
  7. For all dimensions d \ge 2, give an example for vectors \alpha, \beta \in \mathbb N_0^d such that neither \alpha \le \beta nor \beta \le \alpha.
