# Partial Differential Equations/Poisson's equation

In this chapter, we will consider Poisson's equation:

$-\Delta u = f, f \in C^2(\R^d)$

## Green's kernel

### The Gaussian integral

The following integral formula holds:

$\int_\R e^{-x^2} dx = \sqrt{\pi}$

Proof:

$\left( \int_\R e^{-x^2} dx \right)^2 = \left( \int_\R e^{-x^2} dx \right) \cdot \left( \int_\R e^{-y^2} dy \right)$ $= \int_\R \int_\R e^{-(x^2 + y^2)} dx\, dy = \int_{\R^2} e^{-\|z\|^2} dz$

Now we transform variables using two-dimensional polar coordinates, and then apply the one-dimensional substitution $r \mapsto \sqrt{r}$:

$= \int_0^\infty \int_0^{2\pi} r e^{-r^2} d \varphi dr = 2 \pi \int_0^\infty r e^{-r^2} dr = 2 \pi \int_0^\infty \frac{1}{2\sqrt{r}}\sqrt{r} e^{-r} dr = \pi$

Taking the square root on both sides of the equation finishes the proof.
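For readers who like to double-check such identities numerically, here is a small Python sketch (illustrative only, not part of the proof); it truncates the integral to $[-10, 10]$, where the neglected tails are smaller than $e^{-100}$:

```python
import math

def gaussian_integral(a=-10.0, b=10.0, n=200000):
    """Trapezoidal approximation of the integral of exp(-x^2) over [a, b]."""
    h = (b - a) / n
    total = 0.5 * (math.exp(-a * a) + math.exp(-b * b))
    for i in range(1, n):
        x = a + i * h
        total += math.exp(-x * x)
    return total * h

print(abs(gaussian_integral() - math.sqrt(math.pi)))  # tiny discretisation error
```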

### The Gamma function

The so-called Gamma function is defined as follows:

$\Gamma: \{t \in \C : \Re (t) > 0\} \to \C, \Gamma(t) := \int_0^\infty x^{t-1} e^{-x} dx$

If one shifts the Gamma function by $-1$, it interpolates the factorial: $\Gamma(n+1) = n!$ for all $n \in \N_0$.

For $t \in \R_+$, the Gamma function satisfies the functional equation

$\Gamma(t+1) = t\Gamma(t)$

This functional equation can be proven in the following way (with integration by parts):

$\int_0^\infty x^t e^{-x} dx = \overbrace{- x^t e^{-x} \big|_0^\infty}^{=0} + \int_0^\infty t x^{t-1} e^{-x} dx$
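Python's standard library exposes the Gamma function as `math.gamma`, so both the functional equation and the factorial interpolation can be checked directly (a sanity check, not a proof):

```python
import math

# Functional equation Gamma(t+1) = t * Gamma(t) for a few real t > 0.
for t in [0.5, 1.0, 2.5, 7.3]:
    assert abs(math.gamma(t + 1) - t * math.gamma(t)) < 1e-9 * math.gamma(t + 1)

# Shifted by -1, Gamma interpolates the factorial: Gamma(n+1) = n!.
for n in range(10):
    assert abs(math.gamma(n + 1) - math.factorial(n)) < 1e-9 * math.factorial(n)
```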

### Integration over the surface of d-dimensional spheres

Let

$\Psi(r, \varphi, \vartheta_1, \ldots, \vartheta_{d-2}) =\left( \begin{smallmatrix} r \cos(\varphi) \cos(\vartheta_1) \cdots \cos(\vartheta_{d-2}) \\ r \sin(\varphi) \cos(\vartheta_1) \cdots \cos(\vartheta_{d-2}) \\ r \sin(\vartheta_1) \cdots \cos(\vartheta_{d-2}) \\ \vdots \\ r \sin(\vartheta_{d-3}) \cos(\vartheta_{d-2}) \\ r \sin(\vartheta_{d-2}) \\ \end{smallmatrix} \right)$

be the spherical coordinates. Then we know:

$\int_{\partial B_r(0)} f(x) dx = r^{d-1} \int_0^{2\pi} \underbrace{\int_{-\frac{\pi}{2}}^\frac{\pi}{2} \cdots \int_{-\frac{\pi}{2}}^\frac{\pi}{2}}_{d-2 \text{ times}} f(\Psi(r, \varphi, \vartheta_1, \ldots, \vartheta_{d-2})) \cos(\vartheta_1) \cos(\vartheta_2)^2 \cdots \cos(\vartheta_{d-2})^{d-2} d\vartheta_1 d\vartheta_2 \cdots d\vartheta_{d-2} d\varphi$

(Figure: Stereographic projection of the 4-sphere into 3-dimensional space.)

Proof: We choose as orientation the boundary orientation of the sphere. We know that for $\partial B_r(0)$, an outward normal vector field is given by $\nu(x) = \frac{x}{r}$. As a parametrisation of $B_r(0)$, we simply choose the identity, so that the basis of each tangent space is the standard basis, which in turn means that the volume form of $B_r(0)$ is

$\omega_{B_r(0)}(x) = e_1^* \wedge \cdots \wedge e_d^*$

Now, we use the normal vector field to obtain the volume form of $\partial B_r(0)$:

$\omega_{\partial B_r(0)}(x)(v_1, \ldots, v_{d-1}) = \omega_{B_r(0)}(x)(\nu(x), v_1, \ldots, v_{d-1})$

We insert the formula for $\omega_{B_r(0)}(x)$ and then use Laplace's determinant formula:

$=e_1^* \wedge \cdots \wedge e_d^* (\nu(x), v_1, \ldots, v_{d-1}) = \frac{1}{r} \sum_{i=1}^d (-1)^{i+1} x_i e_1^* \wedge \cdots \wedge e_{i-1}^* \wedge e_{i+1}^* \wedge \cdots \wedge e_d^*(v_1, \ldots, v_{d-1})$

As a parametrisation of $\partial B_r(0)$ we choose spherical coordinates with constant radius $r$.

We calculate the Jacobian matrix for the spherical coordinates:

$J = \left( \begin{smallmatrix} \cos(\varphi) \cos(\vartheta_1) \cdots \cos(\vartheta_{d-2}) & -r \sin(\varphi) \cos(\vartheta_1) \cdots \cos(\vartheta_{d-2}) & -r \cos(\varphi) \sin(\vartheta_1) \cdots \cos(\vartheta_{d-2}) & \cdots & \cdots & -r \cos(\varphi) \cos(\vartheta_1) \cdots \sin(\vartheta_{d-2}) \\ \sin(\varphi) \cos(\vartheta_1) \cdots \cos(\vartheta_{d-2}) & r \cos(\varphi) \cos(\vartheta_1) \cdots \cos(\vartheta_{d-2}) & -r \sin(\varphi) \sin(\vartheta_1) \cdots \cos(\vartheta_{d-2}) & \cdots & \cdots & -r \sin(\varphi) \cos(\vartheta_1) \cdots \sin(\vartheta_{d-2}) \\ \vdots & 0 & \ddots & \ddots & \ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & \ddots & \\ \sin(\vartheta_{d-3}) \cos(\vartheta_{d-2}) & 0 & \cdots & 0 & r \cos(\vartheta_{d-3}) \cos(\vartheta_{d-2}) & r \sin(\vartheta_{d-3}) \cos(\vartheta_{d-2}) \\ \sin(\vartheta_{d-2}) & 0 & \cdots & \cdots & 0 & r \cos(\vartheta_{d-2}) \end{smallmatrix} \right)$

We observe that the first column is just the spherical coordinates divided by $r$. If we fix $r$, this first column disappears. Let's call the resulting matrix $J'$, and write $\Psi$ for our parametrisation, namely spherical coordinates with constant $r$. Then we have:

$\Psi^*\omega_{\partial B_r(0)}(x)(v_1, \ldots, v_{d-1}) = \omega_{\partial B_r(0)}(\Psi(x))(J' v_1, \ldots, J' v_{d-1})$
$= \frac{1}{r} \sum_{i=1}^d (-1)^{i+1} \Psi(x)_i e_1^* \wedge \cdots \wedge e_{i-1}^* \wedge e_{i+1}^* \wedge \cdots \wedge e_d^*(J' v_1, \ldots, J' v_{d-1})$
$= \frac{1}{r} \sum_{i=1}^d (-1)^{i+1} \Psi(x)_i \det(e_j^*(J' v_k))_{j \neq i} = \det J \cdot \det(v_1, \ldots, v_{d-1})$

Recalling that

$\det J = r^{d-1}\cos(\vartheta_1)\cos(\vartheta_2)^2\cdots \cos(\vartheta_{d-2})^{d-2}$

, the claim follows using the definition of the surface integral.

### The surface area of d-dimensional spheres

Let $x \in \R^d, d \ge 2$. Then:

$\int_{\partial B_r(x)} 1 dx = d r^{d-1} \frac{\sqrt{\pi}^d}{\Gamma\left(\frac{d}{2} + 1\right)}$

Proof: Due to the above formula for integration over $\partial B_r(0)$, we obtain:

$\int_{\partial B_r(x)} 1 dx = \int_{\partial B_r(0)} 1 dx = r^{d-1} \int_0^{2\pi} \underbrace{\int_{-\frac{\pi}{2}}^\frac{\pi}{2} \cdots \int_{-\frac{\pi}{2}}^\frac{\pi}{2}}_{d-2 \text{ times}} \cos(\varphi_1) \cos(\varphi_2)^2 \cdots \cos(\varphi_{d-2})^{d-2} d\varphi_1 d\varphi_2 \cdots d\varphi_{d-2} d\phi$
$= r^{d-1} \left(\int_0^{2\pi} 1 d\phi\right) \left(\int_{-\frac{\pi}{2}}^\frac{\pi}{2} \cos(\varphi_1) d\varphi_1\right) \cdots \left(\int_{-\frac{\pi}{2}}^\frac{\pi}{2} \cos(\varphi_{d-2})^{d-2}d\varphi_{d-2} \right)$

Let's also define

$I_i := \int_{-\frac{\pi}{2}}^\frac{\pi}{2} \cos(\varphi)^i d\varphi$

Through direct calculation, we easily obtain that $I_0 = \pi$ and $I_1 = 2$. With one-dimensional integration by parts, we can obtain the following result:

$I_i = \int_{-\frac{\pi}{2}}^\frac{\pi}{2} \cos(\varphi)^i d\varphi = \overbrace{\cos(\varphi)^{i-1} \sin(\varphi) \big|_{-\frac{\pi}{2}}^\frac{\pi}{2}}^{=0} - (i-1) \int_{-\frac{\pi}{2}}^\frac{\pi}{2} \cos(\varphi)^{i-2} (-\sin(\varphi)) \sin(\varphi) d\varphi$

With the help of the equation $-\sin(\varphi)^2 = \cos(\varphi)^2 - 1$, we obtain the result

$I_i = (i-1)(I_{i-2} - I_i) \quad \Rightarrow \quad I_i = \frac{i-1}{i} I_{i-2}$

From this we see that

$I_{2i} = \frac{2i - 1}{2i} \frac{2i - 3}{2i - 2} \cdots \frac{1}{2} I_0$ and
$I_{2i+1} = \frac{2i}{2i + 1} \frac{2i - 2}{2i - 1} \cdots \frac{2}{3} I_1$

, and therefore

$I_i I_{i-1} = \frac{1}{i} I_0 I_1 = \frac{2\pi}{i}$.

From this we obtain that

$\int_{\partial B_r(x)} 1 dx = r^{d-1} 2 \pi I_{d-2} \cdots I_1 = \begin{cases} r^{d-1} \frac{\pi^{\frac{d}{2}}d}{\left(\frac{d}{2}\right)!} & d \text{ even} \\ r^{d-1} \frac{2^{\frac{d+1}{2}} \pi^{\frac{d-1}{2}}}{(d-2)(d-4) \cdots 3} & d \text{ odd} \end{cases}$

This is an explicit expression, but it is still complicated. To bring it into the form stated in the claim of this lemma, we use induction:

Let's define

$\omega_d = \begin{cases} r^{d-1} \frac{\pi^{\frac{d}{2}}d}{\left(\frac{d}{2}\right)!} & d \text{ even} \\ r^{d-1} \frac{2^{\frac{d+1}{2}} \pi^{\frac{d-1}{2}}}{(d-2)(d-4) \cdots 3} & d \text{ odd} \end{cases}$ and
$\kappa_d = d r^{d-1} \frac{\sqrt{\pi}^d}{\Gamma\left(\frac{d}{2} + 1\right)}$

We first calculate $\omega_1$, $\omega_2$ and $\kappa_2$:

$\omega_1 = 2$
$\omega_2 = 2 \pi r$
$\kappa_2 = 2 \pi r$

Then we note that

$\Gamma(\frac{1}{2}) = \int_0^\infty x^{-\frac{1}{2}} e^{-x} dx = \int_0^\infty \frac{2s}{s} e^{-s^2} ds = 2 \frac{\sqrt{\pi}}{2} = \sqrt{\pi}$

which is why

$\kappa_1 = \frac{\sqrt{\pi}}{\Gamma\left(\frac{1}{2} + 1\right)} = \frac{\sqrt{\pi}}{\frac{1}{2}\Gamma\left(\frac{1}{2}\right)} = 2$

Furthermore, we have for $d \ge 1$ that

$\omega_{d+2} = r^2 I_d I_{d-1} \omega_d = r^2 \frac{2\pi}{d} \omega_d$

And the important thing is now: $\kappa_d$ satisfies the same recursion; for $d \ge 1$:

$\frac{\kappa_{d+2}}{\kappa_d} = \frac{(d+2) r^2 \pi \Gamma\left(\frac{d}{2} + 1\right)}{\Gamma\left(\frac{d+2}{2} + 1\right) d} = \frac{2 r^2 \pi \Gamma\left(\frac{d}{2} + 1\right)}{\Gamma\left(\frac{d}{2} + 1\right) d} = r^2 \frac{2\pi}{d}$

Since $\omega_d$ and $\kappa_d$ agree for $d = 1$ and $d = 2$ and satisfy the same recursion, they must be equal for all $d \ge 1$, which proves the claim.
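The Wallis-integral recursion above also makes it easy to cross-check the claimed surface-area formula numerically for small $d$ (a sketch, not part of the induction):

```python
import math

def wallis(i):
    """I_i = integral of cos^i over [-pi/2, pi/2]: I_0 = pi, I_1 = 2, I_i = (i-1)/i * I_{i-2}."""
    if i == 0:
        return math.pi
    if i == 1:
        return 2.0
    return (i - 1) / i * wallis(i - 2)

def omega(d, r=1.0):
    """Surface area of the sphere of radius r in R^d via the product 2*pi*I_{d-2}*...*I_1."""
    prod = 2 * math.pi
    for i in range(1, d - 1):
        prod *= wallis(i)
    return r ** (d - 1) * prod

def kappa(d, r=1.0):
    """The closed form d * r^(d-1) * sqrt(pi)^d / Gamma(d/2 + 1)."""
    return d * r ** (d - 1) * math.sqrt(math.pi) ** d / math.gamma(d / 2 + 1)

for d in range(2, 12):
    assert abs(omega(d) - kappa(d)) < 1e-9 * kappa(d)

print(omega(2), omega(3), omega(4))  # 2*pi, 4*pi, 2*pi^2 for r = 1
```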

### The volume of d-dimensional spheres

Let $x \in \R^d$. Then:

$\int_{B_r(x)} 1 dx = r^d \frac{\sqrt{\pi}^d}{\Gamma\left(\frac{d}{2} + 1\right)}$

Proof: Since the volume is invariant under the translation $y \mapsto y + x$, it suffices to treat $x = 0$. With Gauss' theorem, we find:

$d \int_{B_r(0)} 1 dy = \int_{B_r(0)} \underbrace{\nabla \cdot y}_{=d} dy = \int_{\partial B_r(0)} \underbrace{\langle y, \frac{y}{\|y\|} \rangle}_{=r} dy$

We only need our formula for the surface of the sphere to finish the proof.
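As a sanity check of the volume formula, one can compare it with the familiar low-dimensional values and with a crude Monte Carlo estimate (fixed seed and a loose tolerance; illustrative only):

```python
import math
import random

def ball_volume(d, r=1.0):
    """Volume of the d-dimensional ball of radius r: r^d * sqrt(pi)^d / Gamma(d/2 + 1)."""
    return r ** d * math.sqrt(math.pi) ** d / math.gamma(d / 2 + 1)

# Familiar values: interval length, disk area, 3-ball volume.
assert abs(ball_volume(1) - 2.0) < 1e-12
assert abs(ball_volume(2) - math.pi) < 1e-12
assert abs(ball_volume(3) - 4 * math.pi / 3) < 1e-12

# Monte Carlo estimate of the unit 3-ball volume inside the cube [-1, 1]^3.
random.seed(0)
n = 200000
hits = sum(1 for _ in range(n)
           if sum(random.uniform(-1, 1) ** 2 for _ in range(3)) <= 1.0)
assert abs(8 * hits / n - ball_volume(3)) < 0.1
```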

### Integration by onion skins

Let f be an integrable function. Then:

$\int_{\R^d} f(x) dx = \int_0^\infty r^{d-1} \int_{\partial B_1(0)} f(rx) dx dr$ and $\int_{B_\epsilon(0)} f(x) dx = \int_0^\epsilon r^{d-1} \int_{\partial B_1(0)} f(rx) dx dr$

Proof: Let again $\Psi$ be the spherical coordinates. Due to transformation of variables, we obtain:

$\int_{\R^d} f(x) dx = \int_0^\infty r^{d-1} \int_0^{2\pi} \underbrace{\int_{-\frac{\pi}{2}}^\frac{\pi}{2} \cdots \int_{-\frac{\pi}{2}}^\frac{\pi}{2}}_{d-2 \text{ times}} f(\Psi(r, \varphi, \vartheta_1, \ldots, \vartheta_{d-2})) \cos(\varphi_1) \cos(\varphi_2)^2 \cdots \cos(\varphi_{d-2})^{d-2} d\varphi_1 d\varphi_2 \cdots d\varphi_{d-2} d\phi dr$

and

$\int_{B_\epsilon(0)} f(x) dx = \int_0^\epsilon r^{d-1} \int_0^{2\pi} \underbrace{\int_{-\frac{\pi}{2}}^\frac{\pi}{2} \cdots \int_{-\frac{\pi}{2}}^\frac{\pi}{2}}_{d-2 \text{ times}} f(\Psi(r, \varphi, \vartheta_1, \ldots, \vartheta_{d-2})) \cos(\varphi_1) \cos(\varphi_2)^2 \cdots \cos(\varphi_{d-2})^{d-2} d\varphi_1 d\varphi_2 \cdots d\varphi_{d-2} d\phi dr$

But due to the formula for integrating on the sphere surface, we also have that

$\int_{\partial B_1(0)} f(rx) = \int_0^{2\pi} \underbrace{\int_{-\frac{\pi}{2}}^\frac{\pi}{2} \cdots \int_{-\frac{\pi}{2}}^\frac{\pi}{2}}_{d-2 \text{ times}} f(r\Psi(1, \varphi, \vartheta_1, \ldots, \vartheta_{d-2})) \cos(\varphi_1) \cos(\varphi_2)^2 \cdots \cos(\varphi_{d-2})^{d-2} d\varphi_1 d\varphi_2 \cdots d\varphi_{d-2}$

Combining this formula with $r\Psi(1, \varphi, \vartheta_1, \ldots, \vartheta_{d-2}) = \Psi(r, \varphi, \vartheta_1, \ldots, \vartheta_{d-2})$ finishes the proof.
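Integration by onion skins is convenient for radial integrands. As an illustration, $\int_{\R^d} e^{-\|x\|^2} dx = \sqrt{\pi}^d$ can be recovered as a one-dimensional radial integral times the surface area of the unit sphere; a numerical sketch using the surface-area formula proven above:

```python
import math

def unit_sphere_area(d):
    """Surface area of the unit sphere in R^d: d * sqrt(pi)^d / Gamma(d/2 + 1)."""
    return d * math.sqrt(math.pi) ** d / math.gamma(d / 2 + 1)

def gaussian_by_onion_skins(d, rmax=10.0, n=100000):
    """Approximate the integral of exp(-|x|^2) over R^d as
    area(unit sphere) * int_0^rmax r^(d-1) exp(-r^2) dr (trapezoidal rule)."""
    h = rmax / n
    radial = 0.5 * h * rmax ** (d - 1) * math.exp(-rmax * rmax)  # f(0) = 0 for d >= 2
    for i in range(1, n):
        r = i * h
        radial += h * r ** (d - 1) * math.exp(-r * r)
    return unit_sphere_area(d) * radial

for d in [2, 3, 4, 5]:
    exact = math.sqrt(math.pi) ** d
    assert abs(gaussian_by_onion_skins(d) - exact) < 1e-6 * exact
```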

### Green's kernel of Poisson's equation

Poisson's equation has, depending on the dimension, the following Green's kernel:

$\tilde G(x) := \begin{cases} -\frac{1}{2} |x| & d=1 \\ -\frac{1}{2\pi}\ln \|x\| & d=2 \\ \frac{1}{(d-2)c} \frac{1}{\|x\|^{d-2}} & d \ge 3 \end{cases}$

, where $c := \int_{\partial B_1(0)} 1 dz$ denotes the surface area of $B_1(0) \subset \R^d$.

Proof:

First, we show that $\tilde G$ is locally integrable. Let's choose an arbitrary compact $\Omega \subset \R^d$ and $R > 0$ such that $\Omega \subseteq B_R(0)$. For $d=1$, we can see:

$\int_\Omega \left| -\frac{1}{2} |x| \right| dx \le \frac{1}{2} \int_{-R}^R |x| dx = \int_0^R r dr = \frac{1}{2} R^2 < \infty$

By transformation with polar coordinates, we obtain for $d \ge 2$:

$d=2: ~ \int_\Omega \left| \frac{1}{2\pi}\ln \|x\| \right| dx \le \frac{1}{2\pi} \int_{B_R(0)} \left| \ln \|x\| \right| dx = \int_0^R r \left| \ln r \right| dr < \infty$, since $r \ln r$ is bounded on $(0, R]$
$d \ge 3: ~ \int_\Omega \frac{1}{(d-2)c} \frac{1}{\|x\|^{d-2}} dx \le \frac{1}{(d-2)c} \int_{B_R(0)} \frac{1}{\|x\|^{d-2}} dx$
$= \frac{1}{(d-2)c} \int_0^R \frac{r^{d-1}}{r^{d-2}} \underbrace{\int_{\partial B_1(0)} 1 dz}_{= c} dr = \frac{1}{2(d-2)} R^2 < \infty$

This shows that we are allowed to apply lemma 2.4, which shows that $\xi \mapsto T_{\tilde G(\cdot - \xi)}$ is continuous. Well-definedness follows from theorem 1.3.
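The crucial point in the $d = 2$ bound is that $r \ln r$ stays bounded near $0$; for instance $\int_0^1 r |\ln r| dr = \frac{1}{4}$ exactly, which a short midpoint-rule check confirms (illustrative only):

```python
import math

def radial_log_integral(n=100000):
    """Midpoint-rule approximation of the integral of r * |ln r| over (0, 1)."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        total += r * abs(math.log(r)) * h
    return total

assert abs(radial_log_integral() - 0.25) < 1e-6
```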

Furthermore, we now calculate the gradient and the Laplacian of $\tilde G(z)$ for $z \in \R^d \setminus \{0\}$, because we will need them later:

$d=1: ~ \nabla -\frac{1}{2} |z| = -\frac{z}{2|z|}$
$\Delta -\frac{1}{2} |z| = 0$, since $\nabla -\frac{1}{2} |z|$ is constant on each of the half-lines $z > 0$ and $z < 0$

$d=2: ~ \nabla -\frac{1}{2\pi}\ln \|z\| = -\frac{z}{2\pi \|z\|^2}$
$\Delta -\frac{1}{2\pi}\ln \|z\| = - \frac{1}{2\pi}\frac{\|z\|^2 - 2z_1^2 + \|z\|^2 - 2z_2^2}{\|z\|^4} = 0$

$d \ge 3: ~ \nabla \frac{1}{(d-2)c} \frac{1}{\|z\|^{d-2}} = -\frac{1}{c} \frac{1}{\|z\|^{d-1}} \cdot \frac{z}{\|z\|} = -\frac{1}{c} \frac{z}{\|z\|^d}$
$\Delta \frac{1}{(d-2)c} \frac{1}{\|z\|^{d-2}} = -\frac{1}{c} \sum_{i=1}^d \frac{\|z\|^d - d \|z\|^{d-2} z_i^2}{\|z\|^{2d}} = -\frac{1}{c} \frac{d \|z\|^d - d \|z\|^{d-2} \|z\|^2}{\|z\|^{2d}} = 0$
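For $d = 3$ the kernel specialises to $\tilde G(z) = \frac{1}{4\pi\|z\|}$ (since $c = 4\pi$), and the vanishing Laplacian away from the origin can be observed with finite differences (a numerical sanity check, not a proof):

```python
import math

def G3(x, y, z):
    """Green's kernel for d = 3: 1 / (4*pi*|z|)."""
    return 1.0 / (4 * math.pi * math.sqrt(x * x + y * y + z * z))

def laplacian_fd(f, p, h=1e-3):
    """Central second-difference approximation of the Laplacian of f at the point p."""
    x, y, z = p
    lap = (f(x + h, y, z) - 2 * f(x, y, z) + f(x - h, y, z)) / h ** 2
    lap += (f(x, y + h, z) - 2 * f(x, y, z) + f(x, y - h, z)) / h ** 2
    lap += (f(x, y, z + h) - 2 * f(x, y, z) + f(x, y, z - h)) / h ** 2
    return lap

# Away from the singularity at the origin, the kernel is harmonic.
assert abs(laplacian_fd(G3, (1.0, 0.5, 0.3))) < 1e-4
```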

Let first $d \ge 2$.

Let $\varphi \in \mathcal D(\R^d)$ be an arbitrary test function and let $\xi \in \R^d$. We now define

$J_0(R) := -\int_{\R^d \setminus B_R(\xi)} \tilde G(x - \xi) \Delta \varphi(x) dx$

Due to the dominated convergence theorem, we have

$\lim_{R \to 0} J_0(R) = -\lim_{R \to 0} \int_{\R^d} \tilde G(x - \xi) \Delta \varphi(x) 1_{\R^d \setminus B_R(\xi)}(x) dx = -\int_{\R^d} \tilde G(x - \xi) \Delta \varphi(x) dx = -\Delta T_{\tilde G(\cdot - \xi)}(\varphi)$

Let's furthermore choose $w(x) = \tilde G(x - \xi) \nabla \varphi(x)$. Then

$\nabla \cdot w(x) = \Delta \varphi(x) \tilde G(x - \xi) + \langle \nabla \tilde G(x - \xi), \nabla \varphi(x) \rangle$.

From Gauß' theorem, we obtain

$\int_{\R^d \setminus B_R(\xi)} \Delta \varphi(x) \tilde G(x - \xi) + \langle \nabla \tilde G(x - \xi), \nabla \varphi(x) \rangle dx = -\int_{\partial B_R(\xi)} \langle \tilde G(x - \xi) \nabla \varphi(x), \frac{x-\xi}{\|x-\xi\|} \rangle dx$

, where the minus in the right hand side occurs because we need the inward normal vector. From this follows immediately that

$\int_{\R^d \setminus B_R(\xi)} -\Delta \varphi(x) \tilde G(x - \xi) dx = \underbrace{\int_{\partial B_R(\xi)} \langle \tilde G(x - \xi) \nabla \varphi(x), \frac{x-\xi}{\|x-\xi\|} \rangle dx }_{:= J_1(R)} - \underbrace{\left( -\int_{\R^d \setminus B_R(\xi)} \langle \nabla \tilde G(x - \xi), \nabla \varphi(x) \rangle dx \right)}_{:=J_2(R)}$

We can now estimate $J_1$ using the Cauchy-Schwarz inequality:

$|J_1(R)| \le \int_{\partial B_R(\xi)} |\tilde G(x - \xi)| \|\nabla \varphi(x)\| \overbrace{\left\| \frac{x-\xi}{\|x-\xi\|} \right\|}^{=1} dx \le \max_{x \in \R^d} \|\nabla \varphi(x)\| \int_{\partial B_R(\xi)} |\tilde G(x - \xi)| dx$
$= \max_{x \in \R^d} \|\nabla \varphi(x)\| \cdot \begin{cases} R |\ln R| & d=2 \\ \frac{R}{d-2} & d \ge 3 \end{cases} \to 0, \quad R \to 0$

since on $\partial B_R(\xi)$ we have $|\tilde G(x - \xi)| = \frac{1}{2\pi} |\ln R|$ for $d = 2$ and $\tilde G(x - \xi) = \frac{1}{(d-2)c R^{d-2}}$ for $d \ge 3$, while the surface area of $\partial B_R(\xi)$ is $c R^{d-1}$.

Now we define $v(x) = \varphi(x) \nabla \tilde G(x - \xi)$, which gives:

$\nabla \cdot v(x) = \varphi(x) \underbrace{\Delta \tilde G(x - \xi)}_{=0, x \neq \xi} + \langle \nabla \varphi(x), \nabla \tilde G(x - \xi) \rangle$

Applying Gauß' theorem on $v$ gives us therefore

$J_2(R) = \int_{\partial B_R(\xi)} \varphi(x) \langle \nabla \tilde G(x - \xi), \frac{x-\xi}{\|x-\xi\|} \rangle dx$
$= \int_{\partial B_R(\xi)} \varphi(x) \langle -\frac{x-\xi}{c \|x-\xi\|^d}, \frac{x-\xi}{\|x-\xi\|} \rangle dx = -\frac{1}{c}\int_{\partial B_R(\xi)} \frac{1}{R^{d-1}} \varphi(x) dx$

, noting that $d = 2 \Rightarrow c = 2\pi$.

We furthermore note that

$\varphi(\xi) = \frac{1}{c} \int_{\partial B_1(\xi)} \varphi(\xi) dx = \frac{1}{c} \int_{\partial B_R(\xi)} \frac{1}{R^{d-1}} \varphi(\xi) dx$

Therefore, we have

$\lim_{R \to 0} |-J_2(R) - \varphi(\xi)| \le \frac{1}{c} \lim_{R \to 0} \int_{\partial B_R(\xi)} \frac{1}{R^{d-1}} |\varphi(\xi) - \varphi(x)| dx \le \lim_{R \to 0} \frac{1}{c} \max_{x \in B_R(\xi)} |\varphi(x) - \varphi(\xi)| \int_{\partial B_1(\xi)} 1 dx$
$= \lim_{R \to 0} \max_{x \in B_R(\xi)} |\varphi(x) - \varphi(\xi)| = 0$

due to the continuity of $\varphi$.

Thus we can conclude that

$\forall \Omega \text{ domain of } \R^d: \forall \varphi \in \mathcal D(\Omega): -\Delta T_{\tilde G( \cdot - \xi)}(\varphi) = \lim_{R \to 0} J_0(R) = \lim_{R \to 0} \left( J_1(R) - J_2(R) \right) = 0 + \varphi(\xi) = \delta_\xi (\varphi)$.

Therefore, $\tilde G$ is a Green's kernel for Poisson's equation for $d \ge 2$.

For $d = 1$, we can calculate directly, using one-dimensional integration by parts:

$-2\Delta T_{\tilde G( \cdot - \xi)}(\varphi) = \int_\R |x - \xi| \varphi''(x) dx = \int_\xi^{\infty} (x - \xi) \varphi''(x) dx + \int_{-\infty}^\xi (\xi - x) \varphi''(x) dx$
$= \overbrace{-(\xi - \xi)\varphi'(\xi) + (\xi - \xi) \varphi'(\xi)}^{=0} + \overbrace{\lim_{x \to \infty} (x - \xi) \varphi'(x) - \lim_{x \to -\infty} (\xi - x) \varphi'(x)}^{= 0 \text{ since supp } \varphi \text{ is compact} } - \int_\xi^{\infty} 1 \varphi'(x) dx - \int_{-\infty}^\xi (- 1) \varphi'(x) dx$
$= 1 \varphi(\xi) -(- 1) \varphi(\xi) + 0 = 2\varphi(\xi) = 2 \delta_\xi(\varphi)$

, and dividing by 2 gives the result that we wanted.

QED.

## Harmonic functions: Elementary properties

### Definitions: Laplace's equation and harmonic functions

The special case of Poisson's equation where $f = 0$, i.e.

$-\Delta u = 0$

is called Laplace's equation. A function $u$ which satisfies this equation is called a harmonic function.

### Mean-value formulas

Let $u$ be a harmonic function, i. e. $-\Delta u = 0$, and let $u$ be defined on a superset of $\overline{B_r(x)}$. Then the following is true:

$u(x) = \frac{1}{c(r)} \int_{\partial B_r(x)} u(y) dy = \frac{1}{d(r)} \int_{B_r(x)} u(y) dy$

, where $c(r) = \int_{\partial B_r(x)} 1 dy$ is the surface area and $d(r) = \int_{B_r(0)} 1 dy$ is the volume of the ball of radius $r$. The two formulas above are average value formulas: they tell us that $u(x)$ is equal to its own average value on the boundary of a ball and equal to its own average value on the whole ball.

Also, the following holds: If $\Omega \subseteq \R^d$ is a domain and $u$ is two times continuously differentiable on $\Omega$ (i. e. $u \in C^2(\Omega)$), and if $u$ satisfies one of the two formulas

$u(x) = \frac{1}{c(r)} \int_{\partial B_r(x)} u(y) dy$ or $u(x)= \frac{1}{d(r)} \int_{B_r(x)} u(y) dy$

for all $r>0$ below a certain constant, then $u$ is harmonic.

Proof: Let's define the following function:

$\phi(r) = \frac{1}{r^{d-1}} \int_{\partial B_r(x)} u(y) dy$

From first coordinate transformation with the diffeomorphism $y \mapsto x + y$ and then applying our formula for integration on the unit sphere twice, we obtain:

$\phi(r) = \frac{1}{r^{d-1}} \int_{\partial B_r(0)} u(y + x) dy = \int_{\partial B_1(0)} u(x + ry) dy$

From first differentiating under the integral sign and then applying Gauss' theorem, we know that

$\phi'(r) = \int_{\partial B_1(0)} \langle \nabla u(x + ry), y \rangle dy = r \int_{B_1(0)} \Delta u (x + ry) dy$

Case 1: If $u$ is harmonic, then we have

$\int_{B_1(0)} \Delta u (x + ry) dy = 0$

, which is why $\phi$ is constant. Now we can use the dominated convergence theorem for the following calculation:

$\lim_{r \to 0} \phi(r) = \int_{\partial B_1(0)} \lim_{r \to 0} u(x + ry) dy = c(1) u(x)$

Therefore $\phi(r) = c(1) u(x)$ for all $r$.

With the relationship

$r^{d-1} c(1) = c(r)$

, which is true because of our formula for $c(x), x \in \R_{> 0}$, we obtain that

$u(x) = \frac{\phi(r)}{c(1)} = \frac{1}{c(1)} \frac{1}{r^{d-1}} \int_{\partial B_r(x)} u(y) dy = \frac{1}{c(r)} \int_{\partial B_r(x)} u(y) dy$

, which proves the first formula.

Furthermore, we can prove the second formula by first transformation of variables, then integrating by onion skins, then using the first formula of this theorem and then integration by onion skins again:

$\int_{B_r(x)} u(y) dy = \int_{B_r(0)} u(y + x) dy = \int_0^r s^{d-1} \int_{\partial B_1(0)} u(x + sy) dy ds = \int_0^r s^{d-1} u(x) \int_{\partial B_1(0)} 1 dy ds = u(x) d(r)$

This shows that if $u$ is harmonic, then the two formulas for calculating $u$ hold.

Case 2: Suppose that $u$ is not harmonic. Then there exists an $x \in \Omega$ such that $-\Delta u(x) \neq 0$. Without loss of generality, we assume that $-\Delta u(x) < 0$; the proof for $-\Delta u(x) > 0$ is completely analogous, except that the directions of the inequalities interchange. Since $\Delta u$ is continuous and $\Delta u(x) > 0$, there exists a $\sigma \in \R_{>0}$ such that

$\forall r \in [0, \sigma) : \int_{B_1(0)} \Delta u (x + ry) dy > 0$

and therefore

$\forall r \in (0, \sigma) : \phi'(r) > 0$

In particular, $\phi$ is strictly increasing on $(0, \sigma)$. This already contradicts the first formula, which forces $\phi$ to be constant.

The second formula is contradicted as well: since

$\lim_{r \to 0} \phi(r) = \int_{\partial B_1(0)} \lim_{r \to 0} u(x + ry) dy = c(1) u(x)$

and therefore

$\phi(0) = c(1) u(x)$

, we obtain that

$\forall r \in (0, \sigma) : \phi(r) > c(1) u(x)$

and therefore, by the same calculation as above, for $r \in (0, \sigma)$:

$\int_{B_r(x)} u(y) dy = \int_{B_r(0)} u(y + x) dy = \int_0^r s^{d-1} \int_{\partial B_1(0)} u(x + sy) dy ds > \int_0^r s^{d-1} u(x) \int_{\partial B_1(0)} 1 dy ds = u(x) d(r)$

This shows, by contradiction, that if one of the two formulas holds for all sufficiently small $r > 0$, then $u \in C^2(\Omega)$ is harmonic.
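The mean-value property is easy to observe numerically. For the harmonic function $u(x, y) = x^2 - y^2$ on $\R^2$, the average over any circle equals the value at the centre (a sketch; the sample function is our own choice):

```python
import math

def u(x, y):
    """A harmonic function on R^2."""
    return x * x - y * y

def circle_average(cx, cy, r, n=10000):
    """Average of u over the circle of radius r around (cx, cy)."""
    total = 0.0
    for i in range(n):
        t = 2 * math.pi * i / n
        total += u(cx + r * math.cos(t), cy + r * math.sin(t))
    return total / n

assert abs(circle_average(1.3, -0.7, 2.0) - u(1.3, -0.7)) < 1e-6
```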

### Multi-dimensional mollifiers

In the chapter about distributions, an example for a bump function was the standard mollifier, given by

$\eta(x) = \frac{1}{z}\begin{cases} e^{-\frac{1}{1-\|x\|^2}}& \text{ if } \|x\| < 1\\ 0& \text{ if } \|x\|\geq 1 \end{cases}$

, where $z := \int_{B_1(0)} e^{-\frac{1}{1-\|x\|^2}} dx$.

We can also define mollifiers with different support sizes as follows:

$\eta_\epsilon(x) = \frac{1}{\epsilon^d} \eta\left( \frac{x}{\epsilon} \right)$

With transformation of variables, we have that

$\int_{B_\epsilon(0)} \eta_\epsilon(x) dx = \int_{B_1(0)} \eta(x) dx = 1$
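In one dimension ($d = 1$), this normalisation can be verified with a direct quadrature (illustrative only):

```python
import math

def eta_unnormalised(x):
    """The bump e^(-1/(1-x^2)) on (-1, 1), without the normalising constant."""
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def integrate(f, a, b, n=100000):
    """Midpoint rule on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

z = integrate(eta_unnormalised, -1.0, 1.0)   # the constant z from the definition
eps = 0.25

def eta_eps(x):
    """Mollifier with support size eps, in dimension d = 1."""
    return eta_unnormalised(x / eps) / (z * eps)

assert abs(integrate(eta_eps, -eps, eps) - 1.0) < 1e-9
```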

### Minimum and maximum principles

Let $u$ be a harmonic function on the connected domain $\Omega \subset \R^d$ such that $\sup_{x \in \Omega} u(x) < \infty$ or $\inf_{x \in \Omega} u(x) > -\infty$. If $u$ attains maximum or minimum in $\Omega$, i. e. if $u(y) = \sup_{x \in \Omega} u(x)$ or $u(y) = \inf_{x \in \Omega} u(x)$ for a $y \in \Omega$, then $u$ is a constant function.

Proof: We prove the statement for the supremum; the case of the infimum is completely analogous, it just reverses the only inequality in the proof.

Let's define $S := \sup_{x \in \Omega} u(x)$. Let $A = \{y \in \Omega : u(y) = S\}$. Due to the assumption, we have that $A$ is not empty. Furthermore, since $\Omega$ is open, there is an open ball around every $y \in A$ such that $B_r(y) \subseteq \Omega$. With one of the mean-value formulas (see above), we obtain the inequality

$S = u(y) = \frac{1}{d(r)} \int_{B_r(y)} u(x) dx \le \frac{1}{d(r)} \int_{B_r(y)} S dx = S \frac{d(r)}{d(r)} = S$

, which implies that on $B_r(y)$ it holds $u(x) = S$ almost everywhere. But since the function which is constantly $S$ and $u$ are both continuous, we even have $u(x) = S$ on the whole ball $B_r(y)$. Thus $B_r(y) \subseteq A$, and therefore $A$ is open.

But since $A = u^{-1}(S)$, and $u$ is continuous, we also have that $A$ is relatively closed in $\Omega$, and since $\Omega$ is connected, the only possibility is that $A = \Omega$.

#### Corollary

Let $u$ be a harmonic function on the connected and bounded domain $\Omega \subset \R^d$ and also continuous on $\overline{\Omega}$. Then:

$\forall x \in \overline{\Omega}: \inf_{y \in \partial \Omega} u(y) \le u(x) \le \sup_{y \in \partial \Omega} u(y)$

## Dirichlet problem: Elementary properties

The Dirichlet problem for Poisson's equation is to find a solution of

$\begin{cases} -\Delta u(x) = f(x) & x \in \Omega \\ u(x) = g(x) & x \in \partial \Omega \end{cases}$

### Uniqueness of solutions

If $\Omega$ is bounded, then the following holds: if the problem

$\begin{cases} -\Delta u(x) = f(x) & x \in \Omega \\ u(x) = g(x) & x \in \partial \Omega \end{cases}$

has a solution $u_1$, then this solution is unique on $\Omega$.

Proof: Let $u_2$ be another solution. If we define $u = u_1 - u_2$, then $u$ obviously solves the problem

$\begin{cases} -\Delta u(x) = 0 & x \in \Omega \\ u(x) = 0 & x \in \partial \Omega \end{cases}$

, since $-\Delta (u_1(x) - u_2(x)) = -\Delta u_1 (x) - (-\Delta u_2(x)) = f(x) - f(x) = 0$ for $x \in \Omega$ and $u_1(x) - u_2(x) = g(x) - g(x) = 0$ for $x \in \partial \Omega$.

Due to the above corollary from the minimum and maximum principle, we obtain that $u$ is constantly zero not only on the boundary, but on the whole domain $\Omega$. Therefore $u_1(x) - u_2(x) = 0 \Leftrightarrow u_1(x) = u_2(x)$ on $\Omega$. This is what we wanted to prove.

### Green's functions of the first kind

Let $\Omega \subseteq \R^d$ be a domain. Let $\tilde G$ be the Green's kernel of Poisson's equation, which we have calculated above, i.e.

$\tilde G(x) := \begin{cases} -\frac{1}{2} |x| & d=1 \\ -\frac{1}{2\pi}\ln \|x\| & d=2 \\ \frac{1}{(d-2)c} \frac{1}{\|x\|^{d-2}} & d \ge 3 \end{cases}$

, where $c := \int_{\partial B_1(0)} 1 dz$ denotes the surface area of $B_1(0) \subset \R^d$.

Suppose there is a function $h: \Omega \times \Omega \to \R$ which satisfies

$\begin{cases} -\Delta h(x, \xi) = 0 & x \in \Omega \\ h(x, \xi) = \tilde G(x - \xi) & x \in \partial \Omega \end{cases}$

Then the Green's function of the first kind for $-\Delta$ for $\Omega$ is defined as follows:

$\tilde G_\Omega(x, \xi) := \tilde G(x - \xi) - h(x, \xi)$

$\tilde G(x - \xi) - h(x, \xi)$ is automatically a Green's function for $-\Delta$. This is verified in exactly the same way as verifying that $\tilde G$ is a Green's kernel. The only additional thing we need to know is that $h$ does not play any role in the limit processes, because it is bounded.

A property of this function is that it satisfies

$\begin{cases} -\Delta \tilde G_\Omega(x, \xi) = 0 & x \in \Omega \setminus \{\xi\} \\ \tilde G_\Omega(x, \xi) = 0 & x \in \partial \Omega \end{cases}$

The second of these equations is clear from the definition, and the first follows recalling that we calculated above (where we calculated the Green's kernel), that $\Delta \tilde G(x) = 0$ for $x \neq 0$.

### Representation formula

Let $\Omega \subseteq \R^d$ be a domain, and let $u \in C^2(\Omega)$ be a solution to the Dirichlet problem

$\begin{cases} -\Delta u(x) = f(x) & x \in \Omega \\ u(x) = g(x) & x \in \partial \Omega \end{cases}$

. Then the following representation formula for $u$ holds:

$u(\xi) = \int_\Omega -\Delta u(y) \tilde G_\Omega(y, \xi) dy - \int_{\partial \Omega} u(y) \langle \nu(y), \nabla_y \tilde G_\Omega(y, \xi) \rangle dy$

, where $\tilde G_\Omega$ is a Green's function of the first kind for $\Omega$.

Proof: Let's define

$J(\epsilon) := \int_{\Omega \setminus B_\epsilon(\xi)} -\Delta u(y) \tilde G_\Omega(y, \xi) dy$

. By the theorem of dominated convergence, we have that

$\lim_{\epsilon \to 0} J(\epsilon) = \int_\Omega -\Delta u(y) \tilde G_\Omega(y, \xi) dy$

Using multi-dimensional integration by parts, it can be obtained that:

$J(\epsilon) = - \int_{\partial \Omega} \underbrace{\tilde G_\Omega(y, \xi)}_{=0} \langle \nabla u(y), \nu(y) \rangle dy + \int_{\partial B_\epsilon(\xi)} \tilde G_\Omega(y, \xi) \langle \nabla u(y), \frac{y - \xi}{\|y - \xi\|} \rangle dy + \int_{\Omega \setminus B_\epsilon(\xi)} \langle \nabla u(y), \nabla_y \tilde G_\Omega(y, \xi) \rangle dy$
$= \underbrace{\int_{\partial B_\epsilon(\xi)} \tilde G_\Omega(y, \xi) \langle \nabla u(y), \frac{y - \xi}{\|y - \xi\|} \rangle dy}_{:= J_1(\epsilon)} - \int_{\Omega \setminus B_\epsilon(\xi)} \underbrace{\Delta_y \tilde G_\Omega(y, \xi)}_{=0} u(y) dy$
$- \underbrace{\int_{\partial B_\epsilon(\xi)} u(y) \langle \nabla_y \tilde G_\Omega(y, \xi), \frac{y - \xi}{\|y - \xi\|} \rangle dy}_{:=J_2(\epsilon)} + \int_{\partial \Omega} u(y) \langle \nabla_y \tilde G_\Omega(y, \xi), \nu(y) \rangle dy$

When we proved the formula for the Green's kernel of Poisson's equation, we had already shown that

$\lim_{\epsilon \to 0} -J_2(\epsilon) = u(\xi)$ and
$\lim_{\epsilon \to 0} J_1(\epsilon) = 0$

The only additional thing which is needed to verify this is that $h \in C^\infty(\Omega)$, which is why it stays bounded, while $\tilde G$ goes to infinity as $\epsilon \to 0$, which is why $h$ doesn't play a role in the limit process.

This proves the formula.

### Harmonic functions on the ball: A special case of the Dirichlet problem

#### Green's function of the first kind for the ball

Let's choose

$h(x, \xi) = \tilde G\left(\frac{\|\xi\|}{r}\left(x - \frac{r^2}{\|\xi\|^2} \xi \right)\right)$

Then

$\tilde G_{B_r(x_0)}(x, \xi) := \tilde G(x - \xi) - h(x - x_0, \xi - x_0)$

is a Green's function of the first kind for $B_r(x_0)$.

Proof: Since $\xi - x_0 \in B_r(0) \Rightarrow \frac{r^2}{\|\xi - x_0\|^2} (\xi - x_0) \notin B_r(0)$, we have

$\forall x, \xi \in B_r(x_0) : -\Delta_x h(x - x_0, \xi - x_0) = 0$

Furthermore, we obtain:

$\int_{B_r(x_0)} -\Delta \varphi(x) \tilde G_{B_r(x_0)}(x, \xi) dx = \int_{B_r(x_0)} -\Delta \varphi(x) \tilde G(x - \xi) dx - \int_{B_r(x_0)} \varphi(x) \underbrace{\left( -\Delta_x h(x - x_0, \xi - x_0) \right)}_{=0} dx = \varphi(\xi) - 0$

, which is why $\tilde G_{B_r(x_0)}(x, \xi)$ is a Green's function.

The property for the boundary comes from the following calculation:

$\forall x \in \partial B_r(0) : \|x - \xi\|^2 = \langle x - \xi, x - \xi \rangle = r^2 + \|\xi\|^2 - 2 \langle x, \xi \rangle = \frac{\|\xi\|^2}{r^2} (\langle x - \frac{r^2}{\|\xi\|^2} \xi, x - \frac{r^2}{\|\xi\|^2} \xi \rangle) = \frac{\|\xi\|^2}{r^2} \|x - \frac{r^2}{\|\xi\|^2} \xi\|^2$

, which is why $x \in \partial B_r(0) \Rightarrow h(x, \xi) = \tilde G(x - \xi)$, since $\tilde G$ is radially symmetric.
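For $d = 3$, where $\tilde G(z) = \frac{1}{4\pi\|z\|}$, this boundary identity can be observed numerically: on $\|x\| = r$ the Green's function of the ball vanishes up to rounding error (a sketch for $r = 1$, $x_0 = 0$):

```python
import math

def G(v):
    """Green's kernel for d = 3: 1 / (4*pi*|v|)."""
    return 1.0 / (4 * math.pi * math.sqrt(sum(c * c for c in v)))

def G_ball(x, xi, r=1.0):
    """Green's function of the first kind for B_r(0) in R^3, via the reflected point."""
    s2 = sum(c * c for c in xi)                    # |xi|^2
    scale = math.sqrt(s2) / r
    reflected = tuple(scale * (xc - (r * r / s2) * xic) for xc, xic in zip(x, xi))
    diff = tuple(xc - xic for xc, xic in zip(x, xi))
    return G(diff) - G(reflected)

xi = (0.3, 0.2, 0.1)
x_boundary = (1 / 3, 2 / 3, 2 / 3)                 # a point with |x| = 1
assert abs(G_ball(x_boundary, xi)) < 1e-12
```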

#### Solution formula

Let's consider the following problem:

$\begin{cases} -\Delta u(x) = 0 & x \in B_r(0) \\ u(x) = \varphi(x) & x \in \partial B_r(0) \end{cases}$

Here $\varphi$ shall be continuous on $\partial B_r(0)$. Then the following holds: The unique solution $u \in C(\overline{B_r(0)}) \cap C^2(B_r(0))$ for this problem is given by:

$u(\xi) = \begin{cases} \int_{\partial B_r(0)} \langle -\nu(y), \nabla_y \tilde G_{B_r(0)}(y, \xi) \rangle \varphi(y) dy & \xi \in B_r(0) \\ \varphi(\xi) & \xi \in \partial B_r(0) \end{cases}$

Proof: Uniqueness we have already proven: we have shown that for Dirichlet problems for $-\Delta$ on bounded domains (and the ball $B_r(0)$ is of course bounded), solutions are unique.

Therefore, it only remains to show that the above function is a solution to the problem. To do so, we note first that

$-\Delta \int_{\partial B_r(0)} \langle -\nu(y), \nabla_y \tilde G_{B_r(0)}(y, \xi) \rangle \varphi(y) dy = -\Delta \int_{\partial B_r(0)} \langle -\nu(y), \nabla_y (\tilde G(y - \xi) - h(y, \xi)) \rangle \varphi(y) dy$

Let $0 < s < r$ be arbitrary. Since $\tilde G_{B_r(0)}(y, \cdot)$ is continuous on $\overline{B_s(0)}$ for every $y \in \partial B_r(0)$, it is bounded there. Therefore, by the fundamental estimate, the integral is bounded, since the sphere, the set over which we integrate, is a bounded set; hence the whole integral always stays below a certain constant. But this means that we are allowed to differentiate under the integral sign on $B_s(0)$, and since $s \in (0, r)$ was arbitrary, we can conclude that on $B_r(0)$,

$-\Delta u(\xi) = \int_{\partial B_r(0)} \overbrace{-\Delta_\xi \left( \langle -\nu(y), \nabla_y \tilde G_{B_r(0)}(y, \xi) \rangle \varphi(y) \right)}^{=0} dy = 0$

Furthermore, we have to show that $\forall x \in \partial B_r(0): \lim_{y \to x} u(y) = \varphi(x)$, i. e. that $u$ is continuous on the boundary.

To do this, we notice first that

$\int_{\partial B_r(0)} \langle -\nu(y), \nabla_y \tilde G_{B_r(0)}(y, \xi) \rangle dy = 1$

This follows due to the fact that if $u \equiv 1$, then $u$ solves the problem

$\begin{cases} -\Delta u(x) = 0 & x \in B_r(0) \\ u(x) = 1 & x \in \partial B_r(0) \end{cases}$

and the application of the representation formula.

Furthermore, if $\|x - x^*\| < \frac{1}{2} \delta$ and $\|y - x^*\| \ge \delta$, we have due to the second triangle inequality:

$\|x - y\| \ge | \|y - x^*\| - \|x^* - x\| | \ge \frac{1}{2} \delta$

In addition, since $\|x^*\| = r$, another application of the second triangle inequality gives:

$(r^2 - \|x\|^2) = (r + \|x\|)(r - \|x\|) = (r + \|x\|)(\|x^*\| - \|x\|) \le 2r \|x^* - x\|$

Let then $\epsilon > 0$ be arbitrary, and let $x^* \in \partial B_r(0)$. Then, due to the continuity of $\varphi$, we are allowed to choose $\delta > 0$ such that

$\|x - x^*\| < \delta \Rightarrow |\varphi(x) - \varphi(x^*)| < \frac{\epsilon}{2}$.

In the end, with the help of all the previous estimates, we may write down the chain of inequalities which shows that $u$ is continuous at the boundary:

$|u(x) - u(x^*)| = |u(x) - 1 \cdot \varphi(x^*)| = \left| \int_{\partial B_r(0)} \langle -\nu(y), \nabla_y \tilde G_{B_r(0)}(y, x) \rangle (\varphi(y) - \varphi(x^*)) dy \right|$
$\le \frac{\epsilon}{2} \int_{\partial B_r(0) \cap B_\delta(x^*)} |\langle -\nu(y), \nabla_y \tilde G_{B_r(0)}(y, x) \rangle| dy + 2 \|\varphi\|_\infty \int_{\partial B_r(0) \setminus B_\delta(x^*)} |\langle -\nu(y), \nabla_y \tilde G_{B_r(0)}(y, x) \rangle| dy$
$\le \frac{\epsilon}{2} + 2 \|\varphi\|_\infty \int_{\partial B_r(0) \setminus B_\delta(x^*)} \frac{r^2 - \|x\|^2}{rc(1)\left(\frac{\delta}{2}\right)^d} dy \le \frac{\epsilon}{2} + 2 \|\varphi\|_\infty r^{d-2} \frac{r^2 - \|x\|^2}{\left(\frac{\delta}{2}\right)^d}$

Since $x \to x^*$ implies $r^2 - \|x\|^2 \to 0$, we may choose $x$ close enough to $x^*$ such that

$2 \|\varphi\|_\infty r^{d-2} \frac{r^2 - \|x\|^2}{\left(\frac{\delta}{2}\right)^d} < \frac{\epsilon}{2}$. Since $\epsilon > 0$ was arbitrary, this finishes the proof.

### Multiindex binomial coefficient and multiindex order

In order to proceed, we need a new version of the binomial coefficient, and an order for multi-indices. We start with the multi-index version of the binomial coefficient, into which we insert two multi-indices instead of two integers. We define it in the following way: If $\alpha = (\alpha_1, \ldots, \alpha_d), \beta = (\beta_1, \ldots, \beta_d) \in \N_0^d$,

$\binom{\alpha}{\beta} := \binom{\alpha_1}{\beta_1} \binom{\alpha_2}{\beta_2} \cdots \binom{\alpha_d}{\beta_d}$.

We directly observe one property of these multi-index binomial coefficients: If $i \in \{1, \ldots, d\} \subset \N$ is arbitrary, and $e_i = (0, \ldots, 0, 1, 0, \ldots, 0)$, where the $1$ is at the $i$-th place, we find:

$\binom{\alpha - e_i}{\beta -e_i} + \binom{\alpha - e_i}{\beta} = \binom{\alpha}{\beta}$

This formula follows from the definition of $\binom{\alpha}{\beta}$ and the formula

$\binom{n - 1}{k - 1} + \binom{n - 1}{k} = \binom{n}{k}$.
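As a quick numerical check of this Pascal-like identity for multi-index binomial coefficients, here is a minimal Python sketch (the helper names `multi_binom` and `e` are our own, not from the text):

```python
from math import comb, prod

def multi_binom(alpha, beta):
    # Componentwise product of binomial coefficients, as defined above.
    return prod(comb(a, b) for a, b in zip(alpha, beta))

def e(i, d):
    # The multi-index e_i with a 1 at the i-th place (0-based here).
    return tuple(1 if j == i else 0 for j in range(d))

alpha, beta = (3, 2, 4), (1, 1, 2)
ei = e(0, 3)
# binom(alpha - e_i, beta - e_i) + binom(alpha - e_i, beta) = binom(alpha, beta)
lhs = multi_binom(tuple(a - x for a, x in zip(alpha, ei)),
                  tuple(b - x for b, x in zip(beta, ei))) \
    + multi_binom(tuple(a - x for a, x in zip(alpha, ei)), beta)
print(lhs, multi_binom(alpha, beta))  # 36 36
```

The identity reduces componentwise to the classical one, which is exactly what the computation exercises.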

Next, we want to define an order for multi-indices: Let $\alpha = (\alpha_1, \ldots, \alpha_d), \beta = (\beta_1, \ldots, \beta_d) \in \N_0^d$, then we say:

$\beta \le \alpha :\Leftrightarrow \forall 1 \le i \le d : \beta_i \le \alpha_i$.

Notice that there might be vectors $\alpha, \beta$ such that neither $\alpha \le \beta$ nor $\beta \le \alpha$ holds. An example for this are $\alpha, \beta \in \N_0^2$,

$\alpha = (1, 0), \beta = (0, 1)$

Another way to say this is that the order is not total.
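The non-totality of the order is easy to exhibit with the example above; a short Python sketch (the helper name `leq` is ours):

```python
def leq(beta, alpha):
    # Componentwise multi-index order: beta <= alpha iff beta_i <= alpha_i for all i.
    return all(b <= a for b, a in zip(beta, alpha))

alpha, beta = (1, 0), (0, 1)
print(leq(alpha, beta), leq(beta, alpha))  # False False: incomparable
```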

### Generalized Leibniz product rule

Now, having defined these two things for multi-indices, we may prove the following generalized form of the Leibniz product rule:

$\frac{\partial^\alpha}{\partial x^\alpha} (f(x) \cdot g(x)) = \sum_{\beta \le \alpha} \binom{\alpha}{\beta} \frac{\partial^\beta}{\partial x^\beta} f(x) \cdot \frac{\partial^{\alpha - \beta}}{\partial x^{\alpha - \beta}} g(x)$

Proof: We prove the claim by induction on $|\alpha|$.

We start with the induction base $|\alpha| = 0$. Then the formula just reads

$f(x)g(x) = f(x)g(x)$

which is true. This completes the induction base.

Next, we do the induction step. Let's assume the claim is true for all $\alpha \in \N_0^d$ such that $|\alpha| = j$. Let now $\alpha \in \N_0^d$ such that $|\alpha| = j+1$. Let's choose $i \in \{1, \ldots, d\}$ such that $\alpha_i > 0$ (we may do this because $|\alpha| = j+1 > 0$). We define again $e_i = (0, \ldots, 0, 1, 0, \ldots, 0)$, where the $1$ is at the $i$-th place. Then we have, due to the Schwarz theorem and the ordinary Leibniz product rule:

$\frac{\partial^\alpha}{\partial x^\alpha} (f(x) \cdot g(x)) = \frac{\partial^{\alpha - e_i}}{\partial x^{\alpha - e_i}} \left( \frac{\partial}{\partial x_i} (f(x) \cdot g(x)) \right) = \frac{\partial^{\alpha - e_i}}{\partial x^{\alpha - e_i}} \left( \frac{\partial}{\partial x_i} f(x) \cdot g(x) + f(x) \cdot \frac{\partial}{\partial x_i} g(x) \right)$

Now we may use the linearity of differentiation and the induction hypothesis to obtain:

$\frac{\partial^{\alpha - e_i}}{\partial x^{\alpha - e_i}} \left(\frac{\partial}{\partial x_i} f(x) \cdot g(x) + f(x) \cdot \frac{\partial}{\partial x_i} g(x) \right) = \frac{\partial^{\alpha - e_i}}{\partial x^{\alpha - e_i}} \left( \frac{\partial}{\partial x_i} f(x) \cdot g(x) \right) + \frac{\partial^{\alpha - e_i}}{\partial x^{\alpha - e_i}} \left( f(x) \cdot \frac{\partial}{\partial x_i} g(x) \right)$
$= \sum_{\beta \le \alpha - e_i} \binom{\alpha - e_i}{\beta} \frac{\partial^\beta}{\partial x^\beta} \frac{\partial}{\partial x_i} f(x) \cdot \frac{\partial^{\alpha - e_i - \beta}}{\partial x^{\alpha - e_i - \beta}} g(x) + \sum_{\beta \le \alpha - e_i} \binom{\alpha - e_i}{\beta} \frac{\partial^\beta}{\partial x^\beta} f(x) \cdot \frac{\partial^{\alpha - e_i - \beta}}{\partial x^{\alpha - e_i - \beta}} \frac{\partial}{\partial x_i} g(x)$

Then, here comes a key ingredient for the proof: Noticing that

$\frac{\partial^{\alpha - e_i - \beta}}{\partial x^{\alpha - e_i - \beta}} = \frac{\partial^{\alpha - (\beta + e_i)}}{\partial x^{\alpha - (\beta + e_i)}}$

and

$\{\beta \in \N_0^d | 0 \le \beta \le \alpha - e_i\} = \{\beta - e_i \in \N_0^d | e_i \le \beta \le \alpha\}$

, we see that we may shift indices in the first of the two sums above, and furthermore simplify both sums with the rule

$\frac{\partial^\beta}{\partial x^\beta} \frac{\partial}{\partial x_i} = \frac{\partial^{\beta + e_i}}{\partial x^{\beta + e_i}}$.

Therefore, we obtain:

$\sum_{\beta \le \alpha - e_i} \binom{\alpha - e_i}{\beta} \frac{\partial^\beta}{\partial x^\beta} \frac{\partial}{\partial x_i} f(x) \cdot \frac{\partial^{\alpha - e_i - \beta}}{\partial x^{\alpha - e_i - \beta}} g(x) + \sum_{\beta \le \alpha - e_i} \binom{\alpha - e_i}{\beta} \frac{\partial^\beta}{\partial x^\beta} f(x) \cdot \frac{\partial^{\alpha - e_i - \beta}}{\partial x^{\alpha - e_i - \beta}} \frac{\partial}{\partial x_i} g(x)$
$= \sum_{e_i \le \beta \le \alpha} \binom{\alpha - e_i}{\beta - e_i} \frac{\partial^\beta}{\partial x^\beta} f(x) \cdot \frac{\partial^{\alpha - \beta}}{\partial x^{\alpha - \beta}} g(x) + \sum_{\beta \le \alpha - e_i} \binom{\alpha - e_i}{\beta} \frac{\partial^\beta}{\partial x^\beta} f(x) \cdot \frac{\partial^{\alpha - \beta}}{\partial x^{\alpha - \beta}} g(x)$

Now we just sort the sum differently, and then apply our observation

$\binom{\alpha - e_i}{\beta -e_i} + \binom{\alpha - e_i}{\beta} = \binom{\alpha}{\beta}$,

which we made immediately after we defined the binomial coefficients, as well as the observations that

$\binom{\alpha - e_i}{0} = \binom{\alpha}{0} = 1$ where $0 = (0, \ldots, 0)$ in $\N_0^d$, and $\binom{\alpha - e_i}{\alpha - e_i} = \binom{\alpha}{\alpha} = 1$ (these two rules may be checked from the definition of $\binom{\alpha}{\beta}$)

, to find in conclusion:

$\frac{\partial^\alpha}{\partial x^\alpha} (f(x) \cdot g(x)) = \sum_{e_i \le \beta \le \alpha} \binom{\alpha - e_i}{\beta - e_i} \frac{\partial^\beta}{\partial x^\beta} f(x) \cdot \frac{\partial^{\alpha - \beta}}{\partial x^{\alpha - \beta}} g(x) + \sum_{\beta \le \alpha - e_i} \binom{\alpha - e_i}{\beta} \frac{\partial^\beta}{\partial x^\beta} f(x) \cdot \frac{\partial^{\alpha - \beta}}{\partial x^{\alpha - \beta}} g(x)$
$= \binom{\alpha - e_i}{0} f(x) \frac{\partial^\alpha}{\partial x^\alpha} g(x) + \sum_{e_i \le \beta \le \alpha - e_i} \left[ \binom{\alpha - e_i}{\beta - e_i} + \binom{\alpha - e_i}{\beta} \right] \frac{\partial^\beta}{\partial x^\beta} f(x) \cdot \frac{\partial^{\alpha - \beta}}{\partial x^{\alpha - \beta}} g(x) + \binom{\alpha - e_i}{\alpha - e_i} \frac{\partial^\alpha}{\partial x^\alpha} f(x) \cdot g(x)$
$= \sum_{\beta \le \alpha} \binom{\alpha}{\beta} \frac{\partial^\beta}{\partial x^\beta} f(x) \cdot \frac{\partial^{\alpha - \beta}}{\partial x^{\alpha - \beta}} g(x)$
$\Box$
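The generalized Leibniz rule can be verified exactly on polynomials, where multi-index derivatives are exact integer computations. The following Python sketch (all helper names are ours) compares $\partial^\alpha (fg)$ against the right-hand side of the rule:

```python
from math import comb, prod
from itertools import product as iproduct
from collections import defaultdict

def pmul(f, g):
    # Multiply polynomials stored as {exponent tuple: coefficient}.
    h = defaultdict(int)
    for m1, c1 in f.items():
        for m2, c2 in g.items():
            h[tuple(a + b for a, b in zip(m1, m2))] += c1 * c2
    return dict(h)

def pdiff(f, alpha):
    # Apply the multi-index derivative d^alpha/dx^alpha to a polynomial.
    h = {}
    for m, c in f.items():
        if all(mi >= ai for mi, ai in zip(m, alpha)):
            # Falling factorial mi * (mi-1) * ... * (mi-ai+1) per coordinate.
            coef = c * prod(prod(range(mi - ai + 1, mi + 1)) for mi, ai in zip(m, alpha))
            new = tuple(mi - ai for mi, ai in zip(m, alpha))
            h[new] = h.get(new, 0) + coef
    return {m: c for m, c in h.items() if c}

def multi_binom(alpha, beta):
    return prod(comb(a, b) for a, b in zip(alpha, beta))

f = {(2, 1): 3, (0, 2): 1}        # 3 x^2 y + y^2
g = {(1, 1): 2, (3, 0): -1}       # 2 x y - x^3
alpha = (2, 1)

lhs = pdiff(pmul(f, g), alpha)
rhs = defaultdict(int)
for beta in iproduct(*(range(a + 1) for a in alpha)):   # all beta <= alpha
    for m, c in pmul(pdiff(f, beta),
                     pdiff(g, tuple(a - b for a, b in zip(alpha, beta)))).items():
        rhs[m] += multi_binom(alpha, beta) * c
rhs = {m: c for m, c in rhs.items() if c}
print(lhs == rhs)  # True
```

Since both sides are computed with exact integer arithmetic, equality here is an exact instance of the rule, not a floating-point approximation.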

### Representation formula

Let $\Omega \subseteq \R^d$ be a domain, and let $\overline{B_r(x_0)} \subset \Omega$, and let $u \in C^\infty(\Omega)$ be a harmonic function on $\Omega$, i. e. $-\Delta u = 0$. Then the following representation formula for $u$ holds:

$u(x) = \frac{r^2 - \|x - x_0\|^2}{r c(1)} \int_{\partial B_r(x_0)}\frac{u(y)}{\|x - y\|^d} dy$

Proof: The proof of this theorem is just calculating the solution formula for the Dirichlet problem

$\begin{cases} -\Delta u (x) = 0 & x \in B_r(x_0) \\ u(x) = u(x) & x \in \partial B_r(x_0) \end{cases}$

explicitly.

Below we calculated that a Green's function of the first kind for $B_r(x_0)$ is given by

$\tilde G_{B_r(x_0)}(x, \xi) := \tilde G(x - \xi) - h(x - x_0, \xi - x_0)$

, where

$h(x, \xi) = \tilde G\left(\frac{\|\xi\|}{r}\left(x - \frac{r^2}{\|\xi\|^2} \xi \right)\right)$

Furthermore, we have shown below that the representation formula for the general Dirichlet problem

$\begin{cases} -\Delta u(x) = f(x) & x \in \Omega \\ u(x) = g(x) & x \in \partial \Omega \end{cases}$

is

$u(\xi) = \int_\Omega -\Delta u(y) \tilde G_\Omega(y, \xi) dy - \int_{\partial \Omega} u(y) \langle \nu(y), \nabla_y \tilde G_\Omega(y, \xi) \rangle dy$

. But since in our case, we have $\Omega = B_r(x_0)$ and $-\Delta u (x) = 0$ for $x \in B_r(x_0)$, we know that the first term vanishes, which leads to the expression

$u(\xi) = \int_{\partial B_r(x_0)} u(y) \langle -\nu(y), \nabla_y \tilde G_{B_r(x_0)}(y, \xi) \rangle dy$

Calculating this expression explicitly gives the theorem.
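The resulting formula can be sanity-checked numerically in dimension $d = 2$, where $c(1)$, the surface area of the unit sphere, equals $2\pi$. The sketch below (the helper `poisson_disk` is our own name) evaluates the boundary integral for the harmonic function $u(x, y) = x^2 - y^2$ on the unit disk and compares it with the exact value; with $u \equiv 1$ it also checks that the kernel integrates to $1$:

```python
import math

def poisson_disk(u, x, r=1.0, n=4000):
    # Evaluate u(x) via the Poisson integral over the circle of radius r
    # centred at the origin: (r^2 - |x|^2)/(2 pi r) * int u(y)/|x-y|^2 ds(y).
    px, py = x
    acc = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        yx, yy = r * math.cos(t), r * math.sin(t)
        acc += u(yx, yy) / ((px - yx) ** 2 + (py - yy) ** 2)
    ds = 2 * math.pi * r / n                       # arc-length element
    return (r * r - (px * px + py * py)) / (2 * math.pi * r) * acc * ds

u = lambda x, y: x * x - y * y                     # harmonic in the plane
val = poisson_disk(u, (0.3, 0.2))
print(val, u(0.3, 0.2))                            # both close to 0.05
print(poisson_disk(lambda x, y: 1.0, (0.3, 0.2)))  # kernel integrates to ~1
```

The trapezoidal rule on a periodic analytic integrand converges very fast, so the agreement is essentially to machine precision.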

### Smoothness and boundedness of the derivatives

Let $\Omega \subseteq \R^d$ be a domain. If $u \in C^2(\Omega)$ is a harmonic function, then it is automatically infinitely differentiable, i. e. $u \in C^\infty(\Omega)$. Furthermore, for every $i \in \N$, there is a constant $C_{d, i}$ such that:

$\forall \overline{B_r(x_0)} \subset \Omega: \forall \alpha \in \N_0^d: |\alpha| = i \Rightarrow \left| \frac{\partial^\alpha}{\partial x^\alpha} u(x_0) \right| \le \frac{C_{d, i}}{r^{d+i}}\int_{B_r(x_0)}|u(y)| dy$

Proof: Due to our representation formula, $u$ equals an integral over $\partial B_r(x_0)$ of a kernel which is smooth in $x$; differentiating under the integral sign and estimating the derivatives of the kernel yields the claim.

### Convergence theorem for harmonic functions

If $u_i \to u$ locally uniformly and all the $u_i$ are harmonic, then the limit $u$ is also harmonic.

Proof:

### Theorem of Arzelà-Ascoli

Let $F$ be a set of continuous functions defined on a compact set $K$. Then the following two statements are equivalent:

1. $\overline F$ (the closure of $F$) is compact
2. $F$ is bounded and equicontinuous

Proof:

### Bolzano-Weierstrass-like theorem for harmonic functions

If $(u_i)_{i \in \N}$ is a sequence of harmonic functions which is locally uniformly bounded, we can find a subsequence $(u_{i_l})_{l \in \N}$ which converges locally uniformly towards a harmonic function $u$.

Proof:

## Dirichlet problem: Existence of solutions

### Barriers

Let $\Omega \subseteq \R^d$ be a domain. A function $b: \R^d \to \R$ is called a barrier with respect to $y \in \partial \Omega$ if and only if the following properties are satisfied:

1. $b$ is continuous
2. $b$ is superharmonic on $\Omega$
3. $b(y) = 0$
4. $\forall x \in \R^d \setminus \Omega : b(x) > 0$

### Exterior sphere condition

Let $\Omega \subseteq \R^d$ be a domain. We say that it satisfies the exterior sphere condition if and only if for every $x \in \partial \Omega$ there are $z \in \R^d \setminus \Omega$ and $r > 0$ such that $B_r(z) \subseteq \R^d \setminus \Omega$ and $x \in \partial B_r(z)$.

### Subharmonic and superharmonic functions

Let $\Omega \subseteq \R^d$ be a domain and $v \in C(\Omega)$.

We call $v$ subharmonic if and only if for all balls $\overline{B_r(x)} \subseteq \Omega$:

$v(x) \le \frac{1}{d(r)} \int_{B_r(x)} v(y) dy$

We call $v$ superharmonic if and only if for all balls $\overline{B_r(x)} \subseteq \Omega$:

$v(x) \ge \frac{1}{d(r)} \int_{B_r(x)} v(y) dy$

From this definition we can see that a function is harmonic if and only if it is subharmonic and superharmonic.
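These mean value inequalities can be illustrated numerically for $d = 2$, where $d(r) = \pi r^2$ is the volume of the disk. The sketch below (the helper `ball_average` is our own name) averages $v(x, y) = x^2 + y^2$, whose Laplacian is $4 > 0$, over a disk and confirms the subharmonic inequality:

```python
import math

def ball_average(v, x0, r, n=200):
    # Average of v over the disk B_r(x0), computed on a polar grid
    # with the area element rho * d(rho) * d(theta).
    total = area = 0.0
    for i in range(n):
        rho = (i + 0.5) * r / n                      # midpoint rule in rho
        for j in range(n):
            t = 2 * math.pi * j / n
            w = rho * (r / n) * (2 * math.pi / n)
            total += w * v(x0[0] + rho * math.cos(t), x0[1] + rho * math.sin(t))
            area += w
    return total / area

v = lambda x, y: x * x + y * y      # Laplacian = 4 > 0: subharmonic
x0, r = (0.5, -0.2), 0.7
avg = ball_average(v, x0, r)
print(v(*x0), avg)                   # v(x0) <= average over the ball
```

For this particular $v$ the exact average is $v(x_0) + r^2/2$, so the inequality is strict; negating $v$ gives a superharmonic function satisfying the reverse inequality.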

### Minimum principle for superharmonic functions

A superharmonic function $u$ on $\Omega$ attains its minimum on $\Omega$'s boundary $\partial \Omega$.

Proof: Almost the same as the proof of the minimum and maximum principle for harmonic functions. As an exercise, you might try to prove this minimum principle yourself.

### Harmonic lowering

Let $u \in \mathcal S_\varphi(\Omega)$, and let $B_r(x_0) \subset \Omega$. If we define

$\tilde u(x) = \begin{cases} u(x) & x \notin B_r(x_0) \\ \int_{\partial B_r(x_0)} \langle -\nu(y), \nabla_y \tilde G_{B_r(x_0)}(y, x) \rangle u(y) dy & x \in B_r(x_0) \end{cases}$

, then $\tilde u \in \mathcal S_\varphi(\Omega)$.

Proof: The important thing to notice is that the formula for $\tilde u$ inside $B_r(x_0)$ is nothing but the solution formula for the Dirichlet problem on the ball with boundary data $u$. Therefore, we immediately obtain that $\tilde u$ is superharmonic, and furthermore, the values on $\partial \Omega$ don't change, which is why $\tilde u \in \mathcal S_\varphi(\Omega)$, which was to be shown.

### Definition 3.1

Let $\varphi \in C(\partial \Omega)$. Then we define the following set:

$\mathcal S_\varphi(\Omega) := \{u \in C(\overline{\Omega}) : u \text{ superharmonic and } x \in \partial \Omega \Rightarrow u(x) \ge \varphi(x)\}$

### Lemma 3.2

$\mathcal S_\varphi(\Omega)$ is not empty and

$\forall u \in \mathcal S_\varphi(\Omega) : \forall x \in \Omega : u(x) \ge \min_{y \in \partial \Omega} \varphi(y)$

Proof: The first part follows by choosing the constant function $u(x) = \max_{y \in \partial \Omega} \varphi(y)$, which is harmonic and therefore superharmonic. The second part follows from the minimum principle for superharmonic functions.

### Lemma 3.3

Let $u_1, u_2 \in \mathcal S_\varphi(\Omega)$. If we now define $u(x) = \min\{u_1(x), u_2(x)\}$, then $u \in \mathcal S_\varphi(\Omega)$.

Proof: The condition on the border is satisfied, because

$\forall x \in \partial \Omega : u_1(x) \ge \varphi(x) \wedge u_2(x) \ge \varphi(x)$

$u$ is superharmonic because, if we (without loss of generality) assume that $u(x) = u_1(x)$, then it follows that

$u(x) = u_1(x) \ge \frac{1}{d(r)} \int_{B_r(x)} u_1(y) dy \ge \frac{1}{d(r)} \int_{B_r(x)} u(y) dy$

, due to the monotonicity of the integral. This argument is valid for all $x \in \Omega$, and therefore $u$ is superharmonic.

### Lemma 3.4

If $\Omega \subseteq \R^d$ is bounded and $\varphi \in C(\partial \Omega)$, then the function

$u(x) = \inf \{v(x) | v \in \mathcal S_\varphi(\Omega) \}$

is harmonic.

Proof:

### Lemma 3.5

If $\Omega$ satisfies the exterior sphere condition, then for all $y \in \partial \Omega$ there is a barrier function.

### Existence theorem of Perron

Let $\Omega \subset \R^d$ be a bounded domain which satisfies the exterior sphere condition. Then the Dirichlet problem for Poisson's equation, which we write out again:

$\begin{cases} -\Delta u(x) = f(x) & x \in \Omega \\ u(x) = g(x) & x \in \partial \Omega \end{cases}$

has a solution $u \in C^\infty(\Omega) \cap C(\overline{\Omega})$.

Proof: