This chapter deals with Poisson's equation

$$\forall x \in \mathbb{R}^d : -\Delta u(x) = f(x)$$
Provided that $f \in \mathcal{C}^2(\mathbb{R}^d)$, we will use distribution theory to prove a solution formula, and for domains whose boundaries satisfy a certain property we will even show a solution formula for the boundary value problem. We will also study solutions of the homogeneous Poisson equation
$$\forall x \in \mathbb{R}^d : -\Delta u(x) = 0$$

The solutions of the homogeneous Poisson equation are called harmonic functions.
Important theorems from multi-dimensional integration
In section 2, we saw Leibniz's integral rule, and in section 4, Fubini's theorem. In this section, we collect the other theorems from multi-dimensional integration which we need in order to carry on with applying the theory of distributions to partial differential equations. Proofs will not be given, since understanding them is not very important for the understanding of this wikibook. The only exception is theorem 6.3, which follows from theorem 6.2; the proof of that theorem is an exercise.
Theorem 6.2 : (Divergence theorem)
Let $K \subset \mathbb{R}^d$ be a compact set with smooth boundary. If $\mathbf{V} : K \to \mathbb{R}^d$ is a continuously differentiable vector field, then
$$\int_K \nabla \cdot \mathbf{V}(x)\,dx = \int_{\partial K} \nu(x) \cdot \mathbf{V}(x)\,dx,$$
where $\nu : \partial K \to \mathbb{R}^d$ is the outward normal vector.
Proof : See exercise 1.
The volume and surface area of d-dimensional spheres
Definition 6.5 :
The Gamma function $\Gamma : \mathbb{R}_{>0} \to \mathbb{R}$ is defined by
$$\Gamma(x) := \int_0^\infty s^{x-1} e^{-s}\,ds$$
The Gamma function satisfies the following equation:
Theorem 6.6 :
$$\forall x \in \mathbb{R}_{>0} : \Gamma(x+1) = x\,\Gamma(x)$$
Proof :
$$\Gamma(x+1) = \int_0^\infty s^x e^{-s}\,ds \overset{\text{integration by parts}}{=} \underbrace{-s^x e^{-s}\Big|_{s=0}^{s=\infty}}_{=0} - \int_0^\infty -x s^{x-1} e^{-s}\,ds = x\,\Gamma(x)$$
$\Box$
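As a quick numerical sanity check of theorem 6.6 (not part of the original argument), one can approximate the defining integral by a midpoint rule and compare it with Python's built-in `math.gamma`; the function name `gamma_integral`, the truncation point $s = 40$ and the step count are our own choices:

```python
import math

def gamma_integral(x, n=50_000, upper=40.0):
    # Midpoint-rule approximation of Gamma(x) = ∫_0^∞ s^(x-1) e^(-s) ds,
    # truncated at s = upper (the tail beyond 40 is negligibly small).
    h = upper / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        total += s ** (x - 1.0) * math.exp(-s) * h
    return total

for x in (1.0, 2.5, 4.0):
    lhs = gamma_integral(x + 1.0)
    rhs = x * gamma_integral(x)
    # the functional equation Gamma(x+1) = x * Gamma(x)
    assert abs(lhs - rhs) < 1e-6 * max(1.0, abs(rhs))
    # the quadrature agrees with the library implementation
    assert abs(gamma_integral(x) - math.gamma(x)) < 1e-4
```

For integer arguments this reproduces the factorial, e.g. $\Gamma(4) = 3! = 6$.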
If the Gamma function is shifted by 1, it is an interpolation of the factorial: $\Gamma(n+1) = n!$ for $n \in \mathbb{N}_0$ (see exercise 2). Plots of the Gamma function usually also show values at negative arguments; these come from a natural continuation of the Gamma function which one can construct using complex analysis.
Definition and theorem 6.7 :
The $d$-dimensional spherical coordinates, given by
$$\Psi : (0,\infty) \times (0,2\pi) \times (-\pi/2,\pi/2)^{d-2} \to \mathbb{R}^d \setminus \{(x_1,\ldots,x_d) \in \mathbb{R}^d : x_1 \geq 0 \wedge x_2 = 0\}$$
$$\Psi(r,\Phi,\Theta_1,\ldots,\Theta_{d-2}) = \begin{pmatrix} r\cos(\Phi)\cos(\Theta_1)\cdots\cos(\Theta_{d-2}) \\ r\sin(\Phi)\cos(\Theta_1)\cdots\cos(\Theta_{d-2}) \\ r\sin(\Theta_1)\cos(\Theta_2)\cdots\cos(\Theta_{d-2}) \\ \vdots \\ r\sin(\Theta_{d-3})\cos(\Theta_{d-2}) \\ r\sin(\Theta_{d-2}) \end{pmatrix}$$
define a diffeomorphism. The determinant of the Jacobian matrix of $\Psi$, $\det J_\Psi$, is given by
$$\det J_\Psi(r,\Phi,\Theta_1,\ldots,\Theta_{d-2}) = r^{d-1}\cos(\Theta_1)\cos(\Theta_2)^2 \cdots \cos(\Theta_{d-2})^{d-2}$$
Proof : We omit the proof.

Theorem 6.8 :
The volume of the $d$-dimensional ball with radius $R \in \mathbb{R}_{>0}$ is
$$V_d(R) = \frac{\pi^{d/2}}{\Gamma(d/2+1)} R^d.$$
The surface area and the volume of the $d$-dimensional ball with radius $R \in \mathbb{R}_{>0}$ are related to each other "in a differential way" (see exercise 3).
Proof :
$$\begin{aligned}A_d(R) &:= \frac{d\,\pi^{d/2}}{\Gamma(d/2+1)}R^{d-1} & \\ &= \frac{d}{R}V_d(R) & \\ &= \frac{d}{R}\int_{B_R(0)} 1\,dx & \text{theorem 6.8} \\ &= \int_{B_R(0)} \frac{1}{R}\nabla\cdot x\,dx & \\ &= \int_{\partial B_R(0)} \frac{1}{R}\frac{x}{R}\cdot x\,dx & \text{divergence theorem} \\ &= \int_{\partial B_R(0)} 1\,dx &\end{aligned}$$
$\Box$
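These formulas are easy to sanity-check numerically in low dimensions, where the surface area and volume are classically known. The following sketch (ours, not part of the text) uses only `math.gamma` from Python's standard library:

```python
import math

def V(d, R):
    # Volume of the d-dimensional ball: V_d(R) = pi^(d/2) / Gamma(d/2 + 1) * R^d
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1) * R ** d

def A(d, R):
    # Surface area: A_d(R) = d * pi^(d/2) / Gamma(d/2 + 1) * R^(d-1) = (d/R) V_d(R)
    return d * math.pi ** (d / 2) / math.gamma(d / 2 + 1) * R ** (d - 1)

R = 2.0
assert math.isclose(A(2, R), 2 * math.pi * R)           # circumference of a circle
assert math.isclose(V(2, R), math.pi * R ** 2)          # area of a disk
assert math.isclose(A(3, R), 4 * math.pi * R ** 2)      # surface area of a sphere
assert math.isclose(V(3, R), 4 / 3 * math.pi * R ** 3)  # volume of a ball

# "related in a differential way": A_d(R) is the derivative of V_d(R) in R
h = 1e-6
assert abs(A(3, R) - (V(3, R + h) - V(3, R)) / h) < 1e-3
```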
We recall a fact from integration theory:
Lemma 6.11 :
$f$ is integrable $\Leftrightarrow$ $|f|$ is integrable.
We omit the proof.
Theorem 6.12 :
The function $P : \mathbb{R}^d \to \mathbb{R}$, given by
$$P(x) := \begin{cases} -\frac{1}{2}|x| & d = 1 \\ -\frac{1}{2\pi}\ln(\|x\|) & d = 2 \\ \frac{1}{(d-2)A_d(1)\|x\|^{d-2}} & d \geq 3 \end{cases}$$
is a Green's kernel for Poisson's equation.
We only prove the theorem for $d \geq 2$. For $d = 1$, see exercise 4.
Proof :
1.
We show that $P$ is locally integrable. Let $K \subseteq \mathbb{R}^d$ be compact. We have to show that $\int_K P(x)\,dx$ is a real number, which by lemma 6.11 is equivalent to $\int_K |P(x)|\,dx$ being a real number. Since a subset of $\mathbb{R}^d$ is compact if and only if it is bounded and closed, we may choose an $R > 0$ such that $K \subset B_R(0)$. Without loss of generality we choose $R > 1$: if the chosen $R$ turns out to be $\leq 1$, any $R > 1$ will do as well. Then we have
$$\int_K |P(x)|\,dx \leq \int_{B_R(0)} |P(x)|\,dx$$
For $d = 2$,
$$\begin{aligned}\int_{B_R(0)} |P(x)|\,dx &= \int_{B_1(0)} -\frac{1}{2\pi}\ln(\|x\|)\,dx + \int_{B_R(0)\setminus B_1(0)} \frac{1}{2\pi}\ln(\|x\|)\,dx & \\ &= \int_{B_1(0)} -\frac{1}{2\pi}\ln(\|x\|)\,dx + \int_{B_R(0)} \frac{1}{2\pi}\ln(\|x\|)\,dx - \int_{B_1(0)} \frac{1}{2\pi}\ln(\|x\|)\,dx & \\ &= \int_0^1 -2r\ln(r)\,dr + \int_0^R r\ln(r)\,dr & \text{int. by subst. using spherical coords.} \\ &= \frac{1}{2} + \frac{R^2}{2}\left(\ln(R) - \frac{1}{2}\right) < \infty\end{aligned}$$
For $d \geq 3$,
$$\begin{aligned}\int_{B_R(0)} |P(x)|\,dx &= \int_{B_R(0)} \frac{1}{(d-2)A_d(1)\|x\|^{d-2}}\,dx \\ &= \int_0^R \int_0^{2\pi} \underbrace{\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \cdots \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}}_{d-2\text{ times}} \overbrace{\left|r^{d-1}\cos(\Theta_1)\cdots\cos(\Theta_{d-2})^{d-2}\right|}^{\leq r^{d-1}} \frac{1}{(d-2)A_d(1)r^{d-2}}\,d\Theta_1\cdots d\Theta_{d-2}\,d\Phi\,dr \\ &\leq \int_0^R 2\frac{r^{d-1}}{(d-2)A_d(1)r^{d-2}}\pi^{d-1}\,dr \\ &= \frac{\pi^{d-1}}{(d-2)A_d(1)}R^2\end{aligned}$$
, where we applied integration by substitution using spherical coordinates from the first to the second line.
2.
We calculate some derivatives of $P$ (see exercise 5):
For $d = 2$, we have
$$\forall x \in \mathbb{R}^d \setminus \{0\} : \nabla P(x) = -\frac{x}{2\pi\|x\|^2}$$
For $d \geq 3$, we have
$$\forall x \in \mathbb{R}^d \setminus \{0\} : \nabla P(x) = -\frac{x}{A_d(1)\|x\|^d}$$
For all $d \geq 2$, we have
$$\forall x \in \mathbb{R}^d \setminus \{0\} : \Delta P(x) = 0$$
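The harmonicity of $P$ away from the origin can also be checked numerically. Below is a small finite-difference test for $d = 3$, where $A_3(1) = 4\pi$ and hence $P(x) = \frac{1}{4\pi\|x\|}$; the helper names and step size are our own choices, not part of the text:

```python
import math

def P(x, y, z):
    # Green's kernel for d = 3: P = 1 / ((d-2) A_d(1) ||x||^(d-2)) = 1 / (4*pi*r)
    r = math.sqrt(x * x + y * y + z * z)
    return 1.0 / (4.0 * math.pi * r)

def laplacian(f, x, y, z, h=1e-3):
    # Second-order central finite-difference approximation of the Laplacian.
    c = f(x, y, z)
    return (f(x + h, y, z) + f(x - h, y, z)
          + f(x, y + h, z) + f(x, y - h, z)
          + f(x, y, z + h) + f(x, y, z - h) - 6.0 * c) / (h * h)

# Away from the origin the kernel is harmonic: ΔP = 0.
for point in [(1.0, 0.0, 0.0), (0.5, -0.7, 1.2), (2.0, 1.0, -1.0)]:
    assert abs(laplacian(P, *point)) < 1e-5
```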
3.
We show that
$$\forall x \in \mathbb{R}^d : -\Delta\mathcal{T}_{P(\cdot - x)} = \delta_x$$
Let $x \in \mathbb{R}^d$ and $\varphi \in \mathcal{D}(\mathbb{R}^d)$ be arbitrary. In this last step of the proof, we will only manipulate the term $-\Delta\mathcal{T}_{P(\cdot-x)}(\varphi)$. Since $\varphi \in \mathcal{D}(\mathbb{R}^d)$, $\varphi$ has compact support. Let's define
$$K := \operatorname{supp}\varphi \cup \overline{B_1(x)}$$
Since the support of $\varphi$ is contained in $K$, we have
$$\begin{aligned}-\Delta\mathcal{T}_{P(\cdot-x)}(\varphi) &= \mathcal{T}_{P(\cdot-x)}(-\Delta\varphi) & \\ &= \int_{\mathbb{R}^d} P(y-x)(-\Delta\varphi(y))\,dy & \\ &= \int_K P(y-x)(-\Delta\varphi(y))\,dy & \\ &= \lim_{\epsilon\downarrow 0}\int_K P(y-x)(-\Delta\varphi(y))\,\chi_{K\setminus B_\epsilon(x)}(y)\,dy & \text{dominated convergence} \\ &= \lim_{\epsilon\downarrow 0}\int_{K\setminus B_\epsilon(x)} P(y-x)(-\Delta\varphi(y))\,dy &\end{aligned}$$
, where $\chi_{K\setminus B_\epsilon(x)}$ is the characteristic function of $K\setminus B_\epsilon(x)$.
The last integral is taken over $K\setminus B_\epsilon(x)$, which is bounded and, as the intersection of the closed sets $K$ and $\mathbb{R}^d\setminus B_\epsilon(x)$, closed, hence compact. On this set, due to the second part of this proof, $P(y-x)$ is continuously differentiable in $y$. Therefore, we are allowed to integrate by parts. Thus, noting that $\frac{x-y}{\|y-x\|}$ is the outward normal vector in $y \in \partial B_\epsilon(x)$ of $K\setminus B_\epsilon(x)$, we obtain
$$\int_{K\setminus B_\epsilon(x)} P(y-x)\overbrace{(-\Delta\varphi(y))}^{=-\nabla\cdot\nabla\varphi(y)}\,dy = \int_{\partial B_\epsilon(x)} P(y-x)\nabla\varphi(y)\cdot\frac{x-y}{\|y-x\|}\,dy - \int_{\mathbb{R}^d\setminus B_\epsilon(x)} \nabla\varphi(y)\cdot\nabla P(y-x)\,dy$$
Let's furthermore choose $w(x) = \tilde{G}(x-\xi)\nabla\varphi(x)$ (here and in the following, $\tilde{G} := P$, and $\xi$ denotes the point previously called $x$). Then
. Then
∇
⋅
w
(
x
)
=
Δ
φ
(
x
)
G
~
(
x
−
ξ
)
+
⟨
∇
G
~
(
x
−
ξ
)
,
∇
φ
(
x
)
⟩
{\displaystyle \nabla \cdot w(x)=\Delta \varphi (x){\tilde {G}}(x-\xi )+\langle \nabla {\tilde {G}}(x-\xi ),\nabla \varphi (x)\rangle }
.
From Gauß' theorem, we obtain
$$\int_{\mathbb{R}^d\setminus B_R(\xi)} \Delta\varphi(x)\,\tilde{G}(x-\xi) + \langle\nabla\tilde{G}(x-\xi),\nabla\varphi(x)\rangle\,dx = -\int_{\partial B_R(\xi)} \left\langle \tilde{G}(x-\xi)\nabla\varphi(x), \frac{x-\xi}{\|x-\xi\|}\right\rangle dx$$
, where the minus sign on the right-hand side occurs because we need the inward normal vector. From this it follows immediately that
$$\int_{\mathbb{R}^d\setminus B_R(\xi)} -\Delta\varphi(x)\,\tilde{G}(x-\xi)\,dx = \underbrace{\int_{\partial B_R(\xi)} \left\langle\tilde{G}(x-\xi)\nabla\varphi(x), \frac{x-\xi}{\|x-\xi\|}\right\rangle dx}_{:=J_1(R)} - \underbrace{\int_{\mathbb{R}^d\setminus B_R(\xi)} \langle\nabla\tilde{G}(x-\xi),\nabla\varphi(x)\rangle\,dx}_{:=J_2(R)}$$
We can now calculate the following, using the Cauchy-Schwarz inequality:
$$|J_1(R)| \leq \int_{\partial B_R(\xi)} \|\tilde{G}(x-\xi)\nabla\varphi(x)\|\,\overbrace{\left\|\frac{x-\xi}{\|x-\xi\|}\right\|}^{=1}\,dx$$
$$= \begin{cases} \displaystyle\int_{\partial B_R(\xi)} -\frac{1}{2\pi}\ln\|x-\xi\|\,\|\nabla\varphi(x)\|\,dx = \int_{\partial B_1(\xi)} -R\frac{1}{2\pi}\ln\|R(x-\xi)\|\,\|\nabla\varphi(Rx)\|\,dx & d = 2 \\ \displaystyle\int_{\partial B_R(\xi)} \frac{1}{(d-2)c}\frac{1}{\|x-\xi\|^{d-2}}\|\nabla\varphi(x)\|\,dx = \int_{\partial B_1(\xi)} R^{d-1}\frac{1}{(d-2)c}\frac{1}{\|R(x-\xi)\|^{d-2}}\|\nabla\varphi(Rx)\|\,dx & d \geq 3 \end{cases}$$
$$\leq \begin{cases} \displaystyle\max_{x\in\mathbb{R}^d}\|\nabla\varphi(Rx)\| \int_{\partial B_1(\xi)} -R\frac{1}{2\pi}\ln R^2\,dx = -\max_{x\in\mathbb{R}^d}\|\nabla\varphi(Rx)\|\frac{c}{2\pi}R\ln R^2 \to 0,\ R\to 0 & d = 2 \\ \displaystyle\max_{x\in\mathbb{R}^d}\|\nabla\varphi(Rx)\| \int_{\partial B_1(\xi)} \frac{1}{(d-2)c}R\,dx = \max_{x\in\mathbb{R}^d}\|\nabla\varphi(Rx)\|\frac{R}{d-2} \to 0,\ R\to 0 & d \geq 3 \end{cases}$$
Now we define $v(x) = \varphi(x)\nabla\tilde{G}(x-\xi)$, which gives:
$$\nabla\cdot v(x) = \varphi(x)\underbrace{\Delta\tilde{G}(x-\xi)}_{=0,\ x\neq\xi} + \langle\nabla\varphi(x), \nabla\tilde{G}(x-\xi)\rangle$$
Applying Gauß' theorem to $v$ therefore gives us
$$J_2(R) = \int_{\partial B_R(\xi)} \varphi(x)\left\langle\nabla\tilde{G}(x-\xi), \frac{x-\xi}{\|x-\xi\|}\right\rangle dx$$
$$= \int_{\partial B_R(\xi)} \varphi(x)\left\langle -\frac{x-\xi}{c\|x-\xi\|^d}, \frac{x-\xi}{\|x-\xi\|}\right\rangle dx = -\frac{1}{c}\int_{\partial B_R(\xi)} \frac{1}{R^{d-1}}\varphi(x)\,dx$$
, noting that $d = 2 \Rightarrow c = 2\pi$.
We furthermore note that
$$\varphi(\xi) = \frac{1}{c}\int_{\partial B_1(\xi)} \varphi(\xi)\,dx = \frac{1}{c}\int_{\partial B_R(\xi)} \frac{1}{R^{d-1}}\varphi(\xi)\,dx$$
Therefore, we have
$$\lim_{R\to 0}|-J_2(R) - \varphi(\xi)| \leq \frac{1}{c}\lim_{R\to 0}\int_{\partial B_R(\xi)} \frac{1}{R^{d-1}}|\varphi(\xi)-\varphi(x)|\,dx \leq \lim_{R\to 0}\frac{1}{c}\max_{x\in B_R(\xi)}|\varphi(x)-\varphi(\xi)|\int_{\partial B_1(\xi)} 1\,dx$$
$$= \lim_{R\to 0}\max_{x\in B_R(\xi)}|\varphi(x)-\varphi(\xi)| = 0$$
due to the continuity of $\varphi$.
Thus we can conclude that
$$\forall\Omega\text{ domain of }\mathbb{R}^d : \forall\varphi\in\mathcal{D}(\Omega) : -\Delta T_{\tilde{G}(\cdot-\xi)}(\varphi) = \lim_{R\to 0}\left(J_1(R) - J_2(R)\right) = 0 + \varphi(\xi) = \delta_\xi(\varphi).$$
Therefore, $\tilde{G}$ is a Green's kernel for Poisson's equation for $d \geq 2$.
QED.
Integration over spheres
Theorem 6.12 :
Let $f : \mathbb{R}^d \to \mathbb{R}$ be a function. Then
$$\int_{\partial B_R(0)} f(x)\,dx = R^{d-1}\int_0^{2\pi} \underbrace{\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \cdots \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}}_{d-2\text{ times}} f\left(\Psi(R,\Phi,\Theta_1,\ldots,\Theta_{d-2})\right)\cos(\Theta_1)\cos(\Theta_2)^2\cdots\cos(\Theta_{d-2})^{d-2}\,d\Theta_1\,d\Theta_2\cdots d\Theta_{d-2}\,d\Phi$$
Proof : We choose as an orientation the border orientation of the sphere. We know that for $\partial B_r(0)$, an outward normal vector field is given by $\nu(x) = \frac{x}{r}$.
As a parametrisation of $B_r(0)$ we simply choose the identity function, so that the basis of the tangent space is the standard basis, which in turn means that the volume form of $B_r(0)$ is
$$\omega_{B_r(0)}(x) = e_1^* \wedge \cdots \wedge e_d^*$$
Now, we use the normal vector field to obtain the volume form of $\partial B_r(0)$:
$$\omega_{\partial B_r(0)}(x)(v_1,\ldots,v_{d-1}) = \omega_{B_r(0)}(x)(\nu(x),v_1,\ldots,v_{d-1})$$
We insert the formula for $\omega_{B_r(0)}(x)$ and then use Laplace's determinant formula:
$$= e_1^*\wedge\cdots\wedge e_d^*(\nu(x),v_1,\ldots,v_{d-1}) = \frac{1}{r}\sum_{i=1}^d (-1)^{i+1} x_i\, e_1^*\wedge\cdots\wedge e_{i-1}^*\wedge e_{i+1}^*\wedge\cdots\wedge e_d^*(v_1,\ldots,v_{d-1})$$
As a parametrisation of $\partial B_r(0)$ we choose spherical coordinates with constant radius $r$.
We calculate the Jacobian matrix for the spherical coordinates:
$$J = \left(\begin{smallmatrix}\cos(\varphi)\cos(\vartheta_1)\cdots\cos(\vartheta_{d-2}) & -r\sin(\varphi)\cos(\vartheta_1)\cdots\cos(\vartheta_{d-2}) & -r\cos(\varphi)\sin(\vartheta_1)\cdots\cos(\vartheta_{d-2}) & \cdots & \cdots & -r\cos(\varphi)\cos(\vartheta_1)\cdots\sin(\vartheta_{d-2}) \\ \sin(\varphi)\cos(\vartheta_1)\cdots\cos(\vartheta_{d-2}) & r\cos(\varphi)\cos(\vartheta_1)\cdots\cos(\vartheta_{d-2}) & -r\sin(\varphi)\sin(\vartheta_1)\cdots\cos(\vartheta_{d-2}) & \cdots & \cdots & -r\sin(\varphi)\cos(\vartheta_1)\cdots\sin(\vartheta_{d-2}) \\ \vdots & 0 & \ddots & \ddots & \ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & \ddots & \vdots \\ \sin(\vartheta_{d-3})\cos(\vartheta_{d-2}) & 0 & \cdots & 0 & r\cos(\vartheta_{d-3})\cos(\vartheta_{d-2}) & -r\sin(\vartheta_{d-3})\sin(\vartheta_{d-2}) \\ \sin(\vartheta_{d-2}) & 0 & \cdots & \cdots & 0 & r\cos(\vartheta_{d-2})\end{smallmatrix}\right)$$
We observe that the first column contains just the spherical coordinates divided by $r$. If we fix $r$, the first column disappears. Let's call the resulting matrix $J'$ and our parametrisation, namely spherical coordinates with constant $r$, $\Psi$. Then we have:
$$\Psi^*\omega_{\partial B_r(0)}(x)(v_1,\ldots,v_{d-1}) = \omega_{\partial B_r(0)}(\Psi(x))(J'v_1,\ldots,J'v_{d-1})$$
$$= \frac{1}{r}\sum_{i=1}^d (-1)^{i+1}\Psi(x)_i\, e_1^*\wedge\cdots\wedge e_{i-1}^*\wedge e_{i+1}^*\wedge\cdots\wedge e_d^*(J'v_1,\ldots,J'v_{d-1})$$
$$= \frac{1}{r}\sum_{i=1}^d (-1)^{i+1}\Psi(x)_i\det\left(e_j^*(J'v_k)\right)_{j\neq i} = \det J\cdot\det(v_1,\ldots,v_{d-1})$$
Recalling that
$$\det J = r^{d-1}\cos(\vartheta_1)\cos(\vartheta_2)^2\cdots\cos(\vartheta_{d-2})^{d-2},$$
the claim follows using the definition of the surface integral.
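For $d = 3$ the theorem reads $\int_{\partial B_R(0)} f(x)\,dx = R^2\int_0^{2\pi}\int_{-\pi/2}^{\pi/2} f(\Psi(R,\Phi,\Theta_1))\cos(\Theta_1)\,d\Theta_1\,d\Phi$. Here is a quick numerical sanity check of ours (not part of the text), using a midpoint rule in both angles:

```python
import math

def sphere_surface_integral(f, R, n_phi=400, n_theta=400):
    # R^2 * ∫_0^{2π} ∫_{-π/2}^{π/2} f(Ψ(R,Φ,Θ)) cos(Θ) dΘ dΦ for d = 3,
    # where Ψ(R,Φ,Θ) = (R cosΦ cosΘ, R sinΦ cosΘ, R sinΘ).
    h_phi = 2.0 * math.pi / n_phi
    h_theta = math.pi / n_theta
    total = 0.0
    for i in range(n_phi):
        phi = (i + 0.5) * h_phi
        for j in range(n_theta):
            theta = -math.pi / 2 + (j + 0.5) * h_theta
            x = R * math.cos(phi) * math.cos(theta)
            y = R * math.sin(phi) * math.cos(theta)
            z = R * math.sin(theta)
            total += f(x, y, z) * math.cos(theta) * h_phi * h_theta
    return R * R * total

R = 1.5
# f ≡ 1 recovers the surface area 4πR^2
assert abs(sphere_surface_integral(lambda x, y, z: 1.0, R) - 4.0 * math.pi * R * R) < 1e-3
# f = z^2 integrates to (4π/3) R^4 over the sphere of radius R
assert abs(sphere_surface_integral(lambda x, y, z: z * z, R) - 4.0 * math.pi / 3.0 * R ** 4) < 1e-2
```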
Theorem 6.13 :
Let $f : \mathbb{R}^d \to \mathbb{R}$ be a function. Then
$$\int_{\mathbb{R}^d} f(x)\,dx = \int_0^\infty r^{d-1}\int_{\partial B_1(0)} f(rx)\,dx\,dr$$
Proof :
We have
$$r\,\Psi(1,\Phi,\Theta_1,\ldots,\Theta_{d-2}) = \Psi(r,\Phi,\Theta_1,\ldots,\Theta_{d-2}),$$
where $\Psi$ denotes the spherical coordinates. Therefore, by integration by substitution, Fubini's theorem and the above formula for integration over the unit sphere,
$$\begin{aligned}\int_{\mathbb{R}^d} f(x)\,dx &= \int_{(0,\infty)\times(0,2\pi)\times(-\pi/2,\pi/2)^{d-2}} f(\Psi(x))\,|\det J_\Psi(x)|\,dx \\ &= \int_0^\infty\int_0^{2\pi} \underbrace{\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \cdots \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}}_{d-2\text{ times}} f(\Psi(r,\Phi,\Theta_1,\ldots,\Theta_{d-2}))\,r^{d-1}\cos(\Theta_1)\cdots\cos(\Theta_{d-2})^{d-2}\,d\Theta_1\cdots d\Theta_{d-2}\,d\Phi\,dr \\ &= \int_0^\infty r^{d-1}\int_0^{2\pi} \underbrace{\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \cdots \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}}_{d-2\text{ times}} f(r\,\Psi(1,\Phi,\Theta_1,\ldots,\Theta_{d-2}))\cos(\Theta_1)\cdots\cos(\Theta_{d-2})^{d-2}\,d\Theta_1\cdots d\Theta_{d-2}\,d\Phi\,dr \\ &= \int_0^\infty r^{d-1}\int_{\partial B_1(0)} f(rx)\,dx\,dr\end{aligned}$$
$\Box$
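As an illustration for $d = 2$: for $f(x) = e^{-\|x\|^2}$ the inner integral is $\int_{\partial B_1(0)} f(rx)\,dx = 2\pi e^{-r^2}$, so the theorem predicts $\int_{\mathbb{R}^2} e^{-\|x\|^2}\,dx = \int_0^\infty 2\pi r\, e^{-r^2}\,dr = \pi$. A quick numerical check of ours (standard library only; the truncation at $r = 10$ is an assumption that suffices here):

```python
import math

def radial_integral(n=100_000, upper=10.0):
    # Midpoint-rule approximation of ∫_0^∞ r * (2π e^{-r^2}) dr, which the
    # radial decomposition says equals ∫_{R^2} exp(-||x||^2) dx = π.
    h = upper / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        inner = 2.0 * math.pi * math.exp(-r * r)  # ∫_{∂B_1(0)} f(rx) dx
        total += r * inner * h
    return total

assert abs(radial_integral() - math.pi) < 1e-6
```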
Theorem 6.15 : (Mean value formulas) Let $\Omega \subseteq \mathbb{R}^d$ be a domain and let $u \in \mathcal{C}^2(\Omega)$. Then $u$ is harmonic if and only if for all $x \in \Omega$ and $r > 0$ with $\overline{B_r(x)} \subseteq \Omega$,
$$u(x) = \frac{1}{c(r)}\int_{\partial B_r(x)} u(y)\,dy = \frac{1}{d(r)}\int_{B_r(x)} u(y)\,dy,$$
where $c(r)$ and $d(r)$ denote the surface area and the volume of $B_r(0)$, respectively.
Proof : Let's define the following function:
$$\phi(r) = \frac{1}{r^{d-1}}\int_{\partial B_r(x)} u(y)\,dy$$
From first transforming the coordinates with the diffeomorphism $y \mapsto x + y$ and then applying our formula for integration over the unit sphere, we obtain:
$$\phi(r) = \frac{1}{r^{d-1}}\int_{\partial B_r(0)} u(y+x)\,dy = \int_{\partial B_1(0)} u(x+ry)\,dy$$
From first differentiating under the integral sign and then applying Gauß' theorem, we know that
$$\phi'(r) = \int_{\partial B_1(0)} \langle\nabla u(x+ry), y\rangle\,dy = r\int_{B_1(0)} \Delta u(x+ry)\,dy$$
Case 1 :
If $u$ is harmonic, then we have
$$\int_{B_1(0)} \Delta u(x+ry)\,dy = 0,$$
which is why $\phi$ is constant. Now we can use the dominated convergence theorem for the following calculation:
$$\lim_{r\to 0}\phi(r) = \int_{\partial B_1(0)}\lim_{r\to 0} u(x+ry)\,dy = c(1)u(x)$$
Therefore $\phi(r) = c(1)u(x)$ for all $r$.
With the relationship $r^{d-1}c(1) = c(r)$, which is true because of our formula for $c(x), x \in \mathbb{R}_{>0}$, we obtain
$$u(x) = \frac{\phi(r)}{c(1)} = \frac{1}{c(1)}\frac{1}{r^{d-1}}\int_{\partial B_r(x)} u(y)\,dy = \frac{1}{c(r)}\int_{\partial B_r(x)} u(y)\,dy,$$
which proves the first formula.
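The first mean value formula is easy to test numerically. For $d = 2$ we have $c(r) = 2\pi r$, and $u(x,y) = x^2 - y^2$ is harmonic, so the average of $u$ over any circle around a point should equal the value of $u$ at that point. A sketch of ours (the helper names are not from the text):

```python
import math

def u(x, y):
    # A harmonic function on R^2: Δu = 2 - 2 = 0.
    return x * x - y * y

def sphere_average(cx, cy, r, n=10_000):
    # (1/c(r)) ∫_{∂B_r((cx,cy))} u(y) dy with c(r) = 2πr,
    # computed by averaging over n equally spaced points on the circle.
    total = 0.0
    for i in range(n):
        t = 2.0 * math.pi * (i + 0.5) / n
        total += u(cx + r * math.cos(t), cy + r * math.sin(t))
    return total / n

for (cx, cy, r) in [(0.3, -0.4, 1.0), (2.0, 1.0, 0.5)]:
    assert abs(sphere_average(cx, cy, r) - u(cx, cy)) < 1e-9
```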
Furthermore, we can prove the second formula by first transformation of variables, then integrating by onion skins, then using the first formula of this theorem and then integration by onion skins again:
$$\int_{B_r(x)} u(y)\,dy = \int_{B_r(0)} u(y+x)\,dy = \int_0^r s^{d-1}\int_{\partial B_1(0)} u(x+sy)\,dy\,ds = \int_0^r s^{d-1}u(x)\int_{\partial B_1(0)} 1\,dy\,ds = u(x)d(r)$$
This shows that if $u$ is harmonic, then the two formulas for calculating $u$ hold.
Case 2 :
Suppose that $u$ is not harmonic. Then there exists an $x \in \Omega$ such that $-\Delta u(x) \neq 0$. Without loss of generality, we assume that $-\Delta u(x) > 0$; the proof for $-\Delta u(x) < 0$ is completely analogous except that the directions of the inequalities are interchanged. As above, due to the dominated convergence theorem, we have
$$\lim_{r\to 0}\frac{\phi'(r)}{r} = \int_{B_1(0)}\lim_{r\to 0}\Delta u(x+ry)\,dy < 0$$
Since $\phi'$ is continuous (by the dominated convergence theorem), this is why $\phi$ strictly decreases near $0$, which is a contradiction to the first formula.
The contradiction to the second formula can be obtained by observing that $\phi'/r$ is continuous with negative limit at $0$, and therefore there exists a $\sigma \in \mathbb{R}_{>0}$ such that
$$\forall r \in (0,\sigma) : \phi'(r) < 0$$
This means, since
$$\lim_{r\to 0}\phi(r) = \int_{\partial B_1(0)}\lim_{r\to 0} u(x+ry)\,dy = c(1)u(x)$$
and therefore $\phi(0) = c(1)u(x)$, that
$$\forall r \in (0,\sigma) : \phi(r) < c(1)u(x),$$
and therefore, by the same calculation as above, for $r \in (0,\sigma)$,
$$\int_{B_r(x)} u(y)\,dy = \int_{B_r(0)} u(y+x)\,dy = \int_0^r s^{d-1}\int_{\partial B_1(0)} u(x+sy)\,dy\,ds < \int_0^r s^{d-1}u(x)\int_{\partial B_1(0)} 1\,dy\,ds = u(x)d(r)$$
This shows (by contradiction) that if one of the two formulas holds, then $u \in \mathcal{C}^2(\Omega)$ is harmonic.
Definition 6.16 :
A domain is an open and connected subset of $\mathbb{R}^d$.
For the proof of the next theorem, we need two theorems from other subjects, the first from integration theory and the second from topology.
Theorem 6.17 :
Let $B \subseteq \mathbb{R}^d$ and let $f : B \to \mathbb{R}$ be a function. If
$$\int_B |f(x)|\,dx = 0,$$
then $f(x) = 0$ for almost every $x \in B$.
Theorem 6.18 :
In a connected topological space, the only simultaneously open and closed sets are the whole space and the empty set.
We will omit the proofs.
The next statement is the maximum principle: a harmonic function $u$ on a domain $\Omega$ which attains its supremum in $\Omega$ is constant.
Proof :
We choose
$$B := \left\{x \in \Omega : u(x) = \sup_{y\in\Omega} u(y)\right\}$$
Since $\Omega$ is open by assumption and $B \subseteq \Omega$, for every $x \in B$ there exists an $R \in \mathbb{R}_{>0}$ such that
$$\overline{B_R(x)} \subseteq \Omega$$
By theorem 6.15, we obtain in this case:
$$\sup_{y\in\Omega} u(y) = u(x) = \frac{1}{V_d(R)}\int_{B_R(x)} u(z)\,dz$$
Further,
$$\sup_{y\in\Omega} u(y) = \sup_{y\in\Omega} u(y)\,\frac{V_d(R)}{V_d(R)} = \frac{1}{V_d(R)}\int_{B_R(x)} \sup_{y\in\Omega} u(y)\,dz$$
, which is why
$$\begin{aligned}&\frac{1}{V_d(R)}\int_{B_R(x)} u(z)\,dz = \frac{1}{V_d(R)}\int_{B_R(x)} \sup_{y\in\Omega} u(y)\,dz \\ \Leftrightarrow\; &\frac{1}{V_d(R)}\int_{B_R(x)} \left(u(z) - \sup_{y\in\Omega} u(y)\right)dz = 0\end{aligned}$$
Since
$$\forall z \in \Omega : \sup_{y\in\Omega} u(y) \geq u(z),$$
we even have
$$0 = \frac{1}{V_d(R)}\int_{B_R(x)} \left(u(z) - \sup_{y\in\Omega} u(y)\right)dz = -\frac{1}{V_d(R)}\int_{B_R(x)} \left|u(z) - \sup_{y\in\Omega} u(y)\right|dz$$
By theorem 6.17 we conclude that $u(z)=\sup _{y\in \Omega }u(y)$ almost everywhere in $B_{R}(x)$, and since $z\mapsto u(z)-\sup _{y\in \Omega }u(y)$ is continuous, $u(z)=\sup _{y\in \Omega }u(y)$ holds everywhere in $B_{R}(x)$ (see exercise 6). Therefore $B_{R}(x)\subseteq B$, and since $x\in B$ was arbitrary, $B$ is open.
Also,
$$B=u^{-1}\left(\left\{\sup _{y\in \Omega }u(y)\right\}\right)$$
and $u$ is continuous. Thus, as a one-point set is closed, lemma 3.13 says that $B$ is closed in $\Omega$. Hence $B$ is simultaneously open and closed, and by theorem 6.18 either $B=\emptyset$ or $B=\Omega$. Since by assumption $B$ is not empty, we have $B=\Omega$.
$\Box$
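The mean value property driving this proof is easy to test numerically. The following sketch (an illustration of ours, not part of the text; the function and sample point are arbitrary choices) averages the harmonic function $u(x,y)=x^{2}-y^{2}$ over a disk by Monte Carlo and compares the result with the value at the center.

```python
import math, random

# Mean value property check: for the harmonic function
# u(x, y) = x^2 - y^2, the average over any disk B_R(x0)
# equals u(x0). We approximate the volume average by
# Monte Carlo sampling of uniform points in the disk.
def u(x, y):
    return x * x - y * y

def disk_average(x0, y0, R, n=200_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # uniform point in the disk via rejection sampling
        while True:
            dx = rng.uniform(-R, R)
            dy = rng.uniform(-R, R)
            if dx * dx + dy * dy <= R * R:
                break
        total += u(x0 + dx, y0 + dy)
    return total / n

x0, y0, R = 1.3, -0.7, 0.5
avg = disk_average(x0, y0, R)
print(avg, u(x0, y0))  # the two values agree up to Monte Carlo error
```

For this particular $u$ the agreement is in fact exact in expectation: the quadratic correction terms $dx^{2}-dy^{2}$ average to zero over the disk.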
Proof : See exercise 7.
What we will do next is to show that every harmonic function $u\in {\mathcal {C}}^{2}(O)$ is in fact automatically contained in ${\mathcal {C}}^{\infty }(O)$.
Theorem 6.31 :
Let $(u_{l})_{l\in \mathbb {N} }$ be a locally uniformly bounded sequence of harmonic functions. Then it has a locally uniformly convergent subsequence.
Boundary value problem
The Dirichlet problem for the Poisson equation is to find a solution of
$$\begin{cases}-\Delta u(x)=f(x)&x\in \Omega \\u(x)=g(x)&x\in \partial \Omega \end{cases}$$
Uniqueness of solutions
If $\Omega$ is bounded, then whenever the problem
$$\begin{cases}-\Delta u(x)=f(x)&x\in \Omega \\u(x)=g(x)&x\in \partial \Omega \end{cases}$$
has a solution $u_{1}$, this solution is unique on $\Omega$.
Proof :
Let $u_{2}$ be another solution. If we define $u=u_{1}-u_{2}$, then $u$ obviously solves the problem
$$\begin{cases}-\Delta u(x)=0&x\in \Omega \\u(x)=0&x\in \partial \Omega \end{cases}$$
since
$$-\Delta (u_{1}(x)-u_{2}(x))=-\Delta u_{1}(x)-(-\Delta u_{2}(x))=f(x)-f(x)=0$$
for $x\in \Omega$ and
$$u_{1}(x)-u_{2}(x)=g(x)-g(x)=0$$
for $x\in \partial \Omega$.
Due to the above corollary from the minimum and maximum principle, we obtain that $u$ is constantly zero not only on the boundary, but on the whole domain $\Omega$. Therefore
$$u_{1}(x)-u_{2}(x)=0\Leftrightarrow u_{1}(x)=u_{2}(x)$$
on $\Omega$. This is what we wanted to prove.
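The uniqueness statement can be illustrated numerically (a sketch of ours, not part of the original argument): a Jacobi iteration for the discrete Laplace equation on a square grid, started from two different initial guesses but with the same boundary values, converges to the same grid function.

```python
# Numerical illustration of uniqueness: solve the discrete
# Laplace equation on a square grid with fixed boundary values
# by Jacobi iteration, starting from two different interior
# initial guesses. Both runs converge to the same solution.
def jacobi_laplace(boundary, n, start, iters=4000):
    # (n+2) x (n+2) grid; boundary(i, j) prescribes the border,
    # `start` fills the interior points
    g = [[boundary(i, j) if i in (0, n + 1) or j in (0, n + 1) else start
          for j in range(n + 2)] for i in range(n + 2)]
    for _ in range(iters):
        new = [row[:] for row in g]
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                new[i][j] = 0.25 * (g[i - 1][j] + g[i + 1][j]
                                    + g[i][j - 1] + g[i][j + 1])
        g = new
    return g

bnd = lambda i, j: float(i + j)  # some continuous boundary data
a = jacobi_laplace(bnd, 8, start=0.0)
b = jacobi_laplace(bnd, 8, start=50.0)
diff = max(abs(a[i][j] - b[i][j]) for i in range(10) for j in range(10))
print(diff)  # tiny: both initial guesses reach the same solution
```

The iteration count is generous on purpose: on this small grid the Jacobi error contracts geometrically, so the two runs agree to machine precision.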
Green's functions of the first kind
Let $\Omega \subseteq \mathbb {R} ^{d}$ be a domain. Let ${\tilde {G}}$ be the Green's kernel of Poisson's equation, which we have calculated above, i.e.
$${\tilde {G}}(x):={\begin{cases}-{\frac {1}{2}}|x|&d=1\\-{\frac {1}{2\pi }}\ln \|x\|&d=2\\{\frac {1}{(d-2)c}}{\frac {1}{\|x\|^{d-2}}}&d\geq 3\end{cases}}$$
where
$$c:=\int _{\partial B_{1}(0)}1dz$$
denotes the surface area of $B_{1}(0)\subset \mathbb {R} ^{d}$.
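The kernel above can be transcribed directly into code. The following is an illustrative sketch of ours; the only ingredient not taken from the text is the closed form for the surface area, $c=2\pi ^{d/2}/\Gamma (d/2)$, which we assume here (it is consistent with the Gamma function of definition 6.5).

```python
import math

# Direct transcription of the Green's kernel G~ given above,
# for dimensions d = 1, 2 and d >= 3. The constant c is the
# surface area of the unit sphere in R^d, computed from the
# (assumed) closed form c = 2 * pi^(d/2) / Gamma(d/2).
def unit_sphere_area(d):
    return 2.0 * math.pi ** (d / 2) / math.gamma(d / 2)

def green_kernel(x):
    d = len(x)
    norm = math.sqrt(sum(t * t for t in x))
    if d == 1:
        return -0.5 * abs(x[0])
    if d == 2:
        return -math.log(norm) / (2.0 * math.pi)
    c = unit_sphere_area(d)
    return 1.0 / ((d - 2) * c * norm ** (d - 2))

print(green_kernel((1.0,)))          # -0.5
print(green_kernel((1.0, 0.0)))      # -ln(1)/(2 pi) = 0
print(green_kernel((1.0, 0.0, 0.0))) # 1/(4 pi), about 0.0796
```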
Suppose there is a function $h:\Omega \times \Omega \to \mathbb {R}$ which satisfies
$$\begin{cases}-\Delta h(x,\xi )=0&x\in \Omega \\h(x,\xi )={\tilde {G}}(x-\xi )&x\in \partial \Omega \end{cases}$$
Then the Green's function of the first kind for $-\Delta$ on $\Omega$ is defined as follows:
$${\tilde {G}}_{\Omega }(x,\xi ):={\tilde {G}}(x-\xi )-h(x,\xi )$$
${\tilde {G}}(x-\xi )-h(x,\xi )$ is automatically a Green's function for $-\Delta$. This is verified in exactly the same way as verifying that ${\tilde {G}}$ is a Green's kernel. The only additional thing we need to know is that $h$ does not play any role in the limit processes, because it is bounded.
A property of this function is that it satisfies
$$\begin{cases}-\Delta {\tilde {G}}_{\Omega }(x,\xi )=0&x\in \Omega \setminus \{\xi \}\\{\tilde {G}}_{\Omega }(x,\xi )=0&x\in \partial \Omega \end{cases}$$
The second of these equations is clear from the definition, and the first follows by recalling that, where we calculated the Green's kernel above, we showed that $\Delta {\tilde {G}}(x)=0$ for $x\neq 0$.
Let $\Omega \subseteq \mathbb {R} ^{d}$ be a domain, and let $u\in C^{2}(\Omega )$ be a solution to the Dirichlet problem
$$\begin{cases}-\Delta u(x)=f(x)&x\in \Omega \\u(x)=g(x)&x\in \partial \Omega \end{cases}$$
Then the following representation formula for $u$ holds:
$$u(\xi )=\int _{\Omega }-\Delta u(y){\tilde {G}}_{\Omega }(y,\xi )dy-\int _{\partial \Omega }u(y)\langle \nu (y),\nabla _{y}{\tilde {G}}_{\Omega }(y,\xi )\rangle dy$$
where ${\tilde {G}}_{\Omega }$ is a Green's function of the first kind for $\Omega$.
Proof :
Let's define
$$J(\epsilon ):=\int _{\Omega \setminus B_{\epsilon }(\xi )}-\Delta u(y){\tilde {G}}_{\Omega }(y,\xi )dy$$
By the theorem of dominated convergence, we have
$$\lim _{\epsilon \to 0}J(\epsilon )=\int _{\Omega }-\Delta u(y){\tilde {G}}_{\Omega }(y,\xi )dy$$
Using multi-dimensional integration by parts, one obtains:
$$J(\epsilon )=-\int _{\partial \Omega }\underbrace {{\tilde {G}}_{\Omega }(y,\xi )} _{=0}\langle \nabla u(y),\nu (y)\rangle dy+\int _{\partial B_{\epsilon }(\xi )}{\tilde {G}}_{\Omega }(y,\xi )\left\langle \nabla u(y),{\frac {y-\xi }{\|y-\xi \|}}\right\rangle dy+\int _{\Omega \setminus B_{\epsilon }(\xi )}\langle \nabla u(y),\nabla _{y}{\tilde {G}}_{\Omega }(y,\xi )\rangle dy$$
$$=\underbrace {\int _{\partial B_{\epsilon }(\xi )}{\tilde {G}}_{\Omega }(y,\xi )\left\langle \nabla u(y),{\frac {y-\xi }{\|y-\xi \|}}\right\rangle dy} _{:=J_{1}(\epsilon )}-\int _{\Omega \setminus B_{\epsilon }(\xi )}\Delta {\tilde {G}}_{\Omega }(y,\xi )u(y)dy-\underbrace {\int _{\partial B_{\epsilon }(\xi )}u(y)\left\langle \nabla {\tilde {G}}_{\Omega }(y,\xi ),{\frac {y-\xi }{\|y-\xi \|}}\right\rangle dy} _{:=J_{2}(\epsilon )}-\int _{\partial \Omega }u(y)\langle \nabla {\tilde {G}}_{\Omega }(y,\xi ),\nu (y)\rangle dy$$
When we proved the formula for the Green's kernel of Poisson's equation, we had already shown that
$$\lim _{\epsilon \to 0}-J_{2}(\epsilon )=u(\xi )$$
and
$$\lim _{\epsilon \to 0}J_{1}(\epsilon )=0$$
The only additional thing needed to verify this is that $h\in C^{\infty }(\Omega )$: it stays bounded while ${\tilde {G}}$ goes to infinity as $\epsilon \to 0$, which is why $h$ doesn't play a role in the limit process.
This proves the formula.
Harmonic functions on the ball: A special case of the Dirichlet problem
Green's function of the first kind for the ball
Let's choose
$$h(x,\xi )={\tilde {G}}\left({\frac {\|\xi \|}{r}}\left(x-{\frac {r^{2}}{\|\xi \|^{2}}}\xi \right)\right)$$
Then
$${\tilde {G}}_{B_{r}(x_{0})}(x,\xi ):={\tilde {G}}(x-\xi )-h(x-x_{0},\xi -x_{0})$$
is a Green's function of the first kind for $B_{r}(x_{0})$.
Proof : Since
$$\xi -x_{0}\in B_{r}(0)\Rightarrow {\frac {r^{2}}{\|\xi -x_{0}\|^{2}}}(\xi -x_{0})\notin B_{r}(0)$$
we therefore have
$$\forall x,\xi \in B_{r}(0):-\Delta _{x}h(x-x_{0},\xi -x_{0})=0$$
Furthermore, we obtain:
$$\int _{B_{r}(x_{0})}-\Delta \varphi (x){\tilde {G}}_{\Omega }(x,\xi )dx=\int _{B_{r}(x_{0})}-\Delta \varphi (x){\tilde {G}}(x-\xi )dx+\int _{B_{r}(x_{0})}\varphi (x)\left(-\Delta h(x,\xi )\right)dx=\varphi (\xi )+0$$
which is why ${\tilde {G}}_{\Omega }(x,\xi )$ is a Green's function.
The property for the boundary comes from the following calculation:
$$\forall x\in \partial B_{r}(0):\|x-\xi \|^{2}=\langle x-\xi ,x-\xi \rangle =r^{2}+\|\xi \|^{2}-2\langle x,\xi \rangle ={\frac {\|\xi \|^{2}}{r^{2}}}\left\langle x-{\frac {r^{2}}{\|\xi \|^{2}}}\xi ,x-{\frac {r^{2}}{\|\xi \|^{2}}}\xi \right\rangle ={\frac {\|\xi \|^{2}}{r^{2}}}\left\|x-{\frac {r^{2}}{\|\xi \|^{2}}}\xi \right\|^{2}$$
, which is why
$$x\in \partial B_{r}(0)\Rightarrow h(x,\xi )={\tilde {G}}(x-\xi )$$
since ${\tilde {G}}$ is radially symmetric.
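The distance identity just used can be spot-checked numerically. The following sketch (our own illustration; the radius, the point $\xi$ inside the ball, and the boundary point $x$ are arbitrary choices, $d=3$) compares $\|x-\xi \|$ with the scaled reflected distance for a point on the sphere.

```python
import math

# Numerical spot-check (d = 3) of the reflection identity:
# for |x| = r, the scaled distance
#   (|xi| / r) * |x - (r^2 / |xi|^2) xi|
# equals |x - xi|, which is why h agrees with G~ on the sphere.
def norm(v):
    return math.sqrt(sum(t * t for t in v))

r = 2.0
xi = (0.5, -0.3, 0.7)                      # any point inside B_r(0)
x = tuple(r * t for t in (0.6, 0.8, 0.0))  # a point with |x| = r
nxi = norm(xi)
reflected = tuple(a - (r * r / nxi ** 2) * b for a, b in zip(x, xi))
lhs = norm(tuple(a - b for a, b in zip(x, xi)))  # |x - xi|
rhs = (nxi / r) * norm(reflected)                # scaled reflected distance
print(abs(lhs - rhs))  # ~ 0: the two distances agree on the sphere
```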
Let's consider the following problem:
$$\begin{cases}-\Delta u(x)=0&x\in B_{r}(0)\\u(x)=\varphi (x)&x\in \partial B_{r}(0)\end{cases}$$
Here $\varphi$ shall be continuous on $\partial B_{r}(0)$. Then the following holds: the unique solution $u\in C({\overline {B_{r}(0)}})\cap C^{2}(B_{r}(0))$ of this problem is given by:
$$u(\xi )={\begin{cases}\int _{\partial B_{r}(0)}\langle -\nu (y),\nabla _{y}{\tilde {G}}_{B_{r}(0)}(y,\xi )\rangle \varphi (y)dy&\xi \in B_{r}(0)\\\varphi (\xi )&\xi \in \partial B_{r}(0)\end{cases}}$$
Proof : Uniqueness we have already proven; we have shown that for all Dirichlet problems for $-\Delta$ on bounded domains (and the ball $B_{r}(0)$ is of course bounded), the solutions are unique.
Therefore, it only remains to show that the above function is a solution to the problem. To do so, we note first that
$$-\Delta \int _{\partial B_{r}(0)}\langle -\nu (y),\nabla _{y}{\tilde {G}}_{B_{r}(0)}(y,\xi )\rangle \varphi (y)dy=-\Delta \int _{\partial B_{r}(0)}\langle -\nu (y),\nabla _{y}({\tilde {G}}(y-\xi )-h(y,\xi ))\rangle \varphi (y)dy$$
Let $0<s<r$ be arbitrary. Since ${\tilde {G}}_{B_{r}(0)}$ is continuous on $B_{s}(0)$, it is bounded there. Therefore, by the fundamental estimate, the integral is bounded as well, since the sphere over which we integrate is a bounded set; the whole integral always stays below a certain constant. This means we are allowed to differentiate under the integral sign on $B_{s}(0)$, and since $0<s<r$ was arbitrary, we may directly conclude that on $B_{r}(0)$,
$$-\Delta u(\xi )=\int _{\partial B_{r}(0)}\overbrace {-\Delta \left(\langle -\nu (y),\nabla _{y}({\tilde {G}}(y-\xi )-h(y,\xi ))\rangle \varphi (y)\right)} ^{=0}dy=0$$
Furthermore, we have to show that
$$\forall x\in \partial B_{r}(0):\lim _{y\to x}u(y)=\varphi (x)$$
i.e. that $u$ is continuous on the boundary.
To do this, we notice first that
$$\int _{\partial B_{r}(0)}\langle -\nu (y),\nabla _{y}{\tilde {G}}_{B_{r}(0)}(y,\xi )\rangle dy=1$$
This follows from the fact that if $u\equiv 1$, then $u$ solves the problem
$$\begin{cases}-\Delta u(x)=0&x\in B_{r}(0)\\u(x)=1&x\in \partial B_{r}(0)\end{cases}$$
together with the application of the representation formula.
Furthermore, if $\|x-x^{*}\|<{\frac {1}{2}}\delta$ and $\|y-x^{*}\|\geq \delta$, then by the second triangle inequality:
$$\|x-y\|\geq {\big |}\|y-x^{*}\|-\|x^{*}-x\|{\big |}\geq {\frac {1}{2}}\delta $$
In addition, another application of the second triangle inequality gives:
$$r^{2}-\|x\|^{2}=(r+\|x\|)(r-\|x\|)=(r+\|x\|)(\|x^{*}\|-\|x\|)\leq 2r\|x^{*}-x\|$$
Let now $\epsilon >0$ be arbitrary, and let $x^{*}\in \partial B_{r}(0)$. Then, due to the continuity of $\varphi$, we may choose $\delta >0$ such that
$$\|x-x^{*}\|<\delta \Rightarrow |\varphi (x)-\varphi (x^{*})|<{\frac {\epsilon }{2}}$$
In the end, with the help of all the previous estimates, we may assemble the final chain of inequalities, which establishes the boundary continuity:
$$|u(x)-u(x^{*})|=|u(x)-1\cdot \varphi (x^{*})|=\left|\int _{\partial B_{r}(0)}\langle -\nu (y),\nabla _{y}{\tilde {G}}_{B_{r}(0)}(y,x)\rangle (\varphi (y)-\varphi (x^{*}))dy\right|$$
$$\leq {\frac {\epsilon }{2}}\int _{\partial B_{r}(0)\cap B_{\delta }(x^{*})}|\langle -\nu (y),\nabla _{y}{\tilde {G}}_{B_{r}(0)}(y,x)\rangle |dy+2\|\varphi \|_{\infty }\int _{\partial B_{r}(0)\setminus B_{\delta }(x^{*})}|\langle -\nu (y),\nabla _{y}{\tilde {G}}_{B_{r}(0)}(y,x)\rangle |dy$$
$$\leq {\frac {\epsilon }{2}}+2\|\varphi \|_{\infty }\int _{\partial B_{r}(0)\setminus B_{\delta }(x^{*})}{\frac {r^{2}-\|x\|^{2}}{rc(1)\left({\frac {\delta }{2}}\right)^{d}}}dy\leq {\frac {\epsilon }{2}}+2\|\varphi \|_{\infty }r^{d-2}{\frac {r^{2}-\|x\|^{2}}{\left({\frac {\delta }{2}}\right)^{d}}}$$
Since $x\to x^{*}$ implies $r^{2}-\|x\|^{2}\to 0$, we may choose $x$ close enough to $x^{*}$ such that
$$2\|\varphi \|_{\infty }r^{d-2}{\frac {r^{2}-\|x\|^{2}}{\left({\frac {\delta }{2}}\right)^{d}}}<{\frac {\epsilon }{2}}$$
Since $\epsilon >0$ was arbitrary, this finishes the proof.
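For $d=2$ the boundary integrand of the solution formula can be written out explicitly. The sketch below assumes the closed form $(r^{2}-\|\xi \|^{2})/(2\pi r\|y-\xi \|^{2})$ for $\langle -\nu (y),\nabla _{y}{\tilde {G}}_{B_{r}(0)}(y,\xi )\rangle$ (the classical Poisson kernel of the disk, the same quantity that is estimated in the proof above); it integrates the kernel numerically and compares against a known harmonic function.

```python
import math

# Sketch for d = 2: integrate the (assumed) Poisson kernel
#   K(y, xi) = (r^2 - |xi|^2) / (2 pi r |y - xi|^2)
# over the circle of radius r with the trapezoidal rule, for
# the boundary data phi(y) = y1^2 - y2^2, whose harmonic
# extension to the disk is x^2 - y^2.
def poisson_integral(phi, r, xi, n=2000):
    total = 0.0
    for k in range(n):
        t = 2.0 * math.pi * k / n
        y = (r * math.cos(t), r * math.sin(t))
        dist2 = (y[0] - xi[0]) ** 2 + (y[1] - xi[1]) ** 2
        kernel = (r * r - (xi[0] ** 2 + xi[1] ** 2)) / (2.0 * math.pi * r * dist2)
        total += kernel * phi(y) * (2.0 * math.pi * r / n)  # arc length element
    return total

phi = lambda y: y[0] ** 2 - y[1] ** 2  # boundary values of a harmonic function
xi = (0.3, -0.4)
print(poisson_integral(phi, 1.0, xi))  # close to 0.3^2 - 0.4^2 = -0.07
```

Since the integrand is smooth and periodic, the trapezoidal rule converges very fast here, and the value matches $\xi _{1}^{2}-\xi _{2}^{2}$ to high accuracy.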
Let $\Omega \subset \mathbb {R} ^{d}$ be a domain. A function $b:\mathbb {R} ^{d}\to \mathbb {R}$ is called a barrier with respect to $y\in \partial \Omega$ if and only if the following properties are satisfied:
- $b$ is continuous
- $b$ is superharmonic on $\Omega$
- $b(y)=0$
- $\forall x\in \mathbb {R} ^{d}\setminus \Omega :b(x)>0$
Exterior sphere condition
Let $\Omega \subseteq \mathbb {R} ^{d}$ be a domain. We say that it satisfies the exterior sphere condition if and only if for all $x\in \partial \Omega$ there is a ball $B_{r}(z)\subseteq \mathbb {R} ^{d}\setminus \Omega$ such that $x\in \partial B_{r}(z)$ for some $z\in \mathbb {R} ^{d}\setminus \Omega$ and $r\in \mathbb {R} _{\geq 0}$.
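The exterior sphere condition is typically exploited through an explicit barrier built from the exterior ball. The following is a sketch of that standard construction; the concrete formula and its sign convention are assumptions of this sketch, not taken from the text.

```latex
% Sketch: a barrier from the exterior sphere condition, d >= 3.
% Let B_r(z) \subseteq \mathbb{R}^d \setminus \Omega touch the
% boundary at y, i.e. y \in \partial B_r(z), with r > 0. Define
b(x) := \frac{1}{r^{d-2}} - \frac{1}{\|x-z\|^{d-2}}
% Then b is continuous, harmonic (hence superharmonic) for
% x \neq z, in particular on \Omega; b(y) = 0 since
% \|y-z\| = r; and b(x) > 0 whenever \|x-z\| > r, a region
% which contains \Omega because B_r(z) avoids \Omega.
% For d = 2 one takes b(x) := \ln( \|x-z\| / r ) instead.
```

Matching this construction to the exact sign requirements of the barrier definition above is the routine part of the proof of the barrier existence statement below.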
Subharmonic and superharmonic functions
Let $\Omega \subseteq \mathbb {R} ^{d}$ be a domain and $v\in C(\Omega )$.
We call $v$ subharmonic if and only if, for all $B_{r}(x)\subseteq \Omega$:
$$v(x)\leq {\frac {1}{d(r)}}\int _{B_{r}(x)}v(y)dy$$
We call $v$ superharmonic if and only if, for all $B_{r}(x)\subseteq \Omega$:
$$v(x)\geq {\frac {1}{d(r)}}\int _{B_{r}(x)}v(y)dy$$
From this definition we can see that a function is harmonic if and only if it is subharmonic and superharmonic.
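A quick numerical sanity check of these definitions (a sketch of ours; the function, center and radius are arbitrary choices): $v(x,y)=x^{2}+y^{2}$ should be subharmonic, since its average over a disk of radius $R$ exceeds its center value by exactly $R^{2}/2$.

```python
import random

# Sanity check of the sub-mean-value inequality:
# v(x, y) = x^2 + y^2 should satisfy
#   v(center) <= average of v over any disk around the center.
# The disk average is approximated by Monte Carlo sampling.
def v(x, y):
    return x * x + y * y

def disk_average(x0, y0, R, n=100_000, seed=1):
    rng = random.Random(seed)
    total, count = 0.0, 0
    while count < n:
        dx = rng.uniform(-R, R)
        dy = rng.uniform(-R, R)
        if dx * dx + dy * dy <= R * R:  # keep points inside the disk
            total += v(x0 + dx, y0 + dy)
            count += 1
    return total / n

x0, y0, R = 0.5, 0.2, 1.0
print(v(x0, y0) <= disk_average(x0, y0, R))  # True: subharmonic
```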
Minimum principle for superharmonic functions
A superharmonic function $u$ on $\Omega$ attains its minimum on $\Omega$'s boundary $\partial \Omega$.
Proof : Almost the same as the proof of the minimum and maximum principle for harmonic functions. As an exercise, you might try to prove this minimum principle yourself.
Let $u\in {\mathcal {S}}_{\varphi }(\Omega )$, and let $B_{r}(x_{0})\subset \Omega$. If we define
$${\tilde {u}}(x)={\begin{cases}u(x)&x\notin B_{r}(x_{0})\\\int _{\partial B_{r}(x_{0})}\langle -\nu (y),\nabla _{y}{\tilde {G}}_{B_{r}(x_{0})}(y,x)\rangle u(y)dy&x\in B_{r}(x_{0})\end{cases}}$$
then ${\tilde {u}}\in {\mathcal {S}}_{\varphi }(\Omega )$.
Proof : For this proof, the important thing to notice is that the formula for ${\tilde {u}}$ inside $B_{r}(x_{0})$ is nothing but the solution formula for the Dirichlet problem on the ball. Therefore, we immediately obtain that ${\tilde {u}}$ is superharmonic, and furthermore, the values on $\partial \Omega$ don't change, which is why ${\tilde {u}}\in {\mathcal {S}}_{\varphi }(\Omega )$. This is what we wanted to show.
Let $\varphi \in C(\partial \Omega )$. Then we define the following set:
$${\mathcal {S}}_{\varphi }(\Omega ):=\{u\in C({\overline {\Omega }}):u{\text{ superharmonic and }}x\in \partial \Omega \Rightarrow u(x)\geq \varphi (x)\}$$
${\mathcal {S}}_{\varphi }(\Omega )$ is not empty, and
$$\forall u\in {\mathcal {S}}_{\varphi }(\Omega ):\forall x\in \Omega :u(x)\geq \min _{y\in \partial \Omega }\varphi (y)$$
Proof : The first part follows by choosing the constant function $u(x)=\max _{y\in \partial \Omega }\varphi (y)$, which is harmonic and therefore superharmonic. The second part follows from the minimum principle for superharmonic functions.
Let $u_{1},u_{2}\in {\mathcal {S}}_{\varphi }(\Omega )$. If we now define $u(x)=\min\{u_{1}(x),u_{2}(x)\}$, then $u\in {\mathcal {S}}_{\varphi }(\Omega )$.
Proof : The condition on the boundary is satisfied because
$$\forall x\in \partial \Omega :u_{1}(x)\geq \varphi (x)\wedge u_{2}(x)\geq \varphi (x)$$
$u$ is superharmonic because, if we (without loss of generality) assume that $u(x)=u_{1}(x)$, then it follows that
$$u(x)=u_{1}(x)\geq {\frac {1}{d(r)}}\int _{B_{r}(x)}u_{1}(y)dy\geq {\frac {1}{d(r)}}\int _{B_{r}(x)}u(y)dy$$
due to the monotonicity of the integral. This argument is valid for all $x\in \Omega$, and therefore $u$ is superharmonic.
If $\Omega \subset \mathbb {R} ^{d}$ is bounded and $\varphi \in C(\partial \Omega )$, then the function
$$u(x)=\inf\{v(x)\mid v\in {\mathcal {S}}_{\varphi }(\Omega )\}$$
is harmonic.
If $\Omega$ satisfies the exterior sphere condition, then for all $y\in \partial \Omega$ there is a barrier function.
Existence theorem of Perron
Let $\Omega \subset \mathbb {R} ^{d}$ be a bounded domain which satisfies the exterior sphere condition. Then the Dirichlet problem for the Poisson equation, which is, writing it again:
$$\begin{cases}-\Delta u(x)=f(x)&x\in \Omega \\u(x)=g(x)&x\in \partial \Omega \end{cases}$$
has a solution $u\in C^{\infty }(\Omega )\cap C({\overline {\Omega }})$.
Let's summarise the results of this section.
In the next chapter, we will have a look at the heat equation.
1. Prove theorem 6.3 using theorem 6.2 (hint: choose $\mathbf {V} (x)=\mathbf {W} (x)f(x)$ in theorem 6.2).
2. Prove that $\forall n\in \mathbb {N} :\Gamma (n+1)=n!$, where $n!$ is the factorial of $n$.
3. Calculate $V_{d}'(R)$. Have you seen the obtained function before?
4. Prove that for $d=1$, the function $P_{d}$ as defined in theorem 6.11 is a Green's kernel for Poisson's equation (hint: use integration by parts twice).
5. For all $d\geq 2$ and $x\in \mathbb {R} ^{d}\setminus \{0\}$, calculate $\nabla P_{d}(x)$ and $\Delta P_{d}(x)$.
6. Let $O\subseteq \mathbb {R} ^{d}$ be open and $f:O\to \mathbb {R} ^{d}$ be continuous. Prove that $f(x)=0$ almost everywhere in $O$ implies $f(x)=0$ everywhere in $O$.
7. Prove theorem 6.20 by modelling your proof on the proof of theorem 6.19.
8. For all dimensions $d\geq 2$, give an example for vectors $\alpha ,\beta \in \mathbb {N} _{0}^{d}$ such that neither $\alpha \leq \beta$ nor $\beta \leq \alpha$.