Lemma 6.1:
$$\int_{\mathbb{R}} e^{-x^2}\,dx = \sqrt{\pi}$$
Proof:
$$\begin{aligned}
\left(\int_{\mathbb{R}} e^{-x^2}\,dx\right)^2 &= \left(\int_{\mathbb{R}} e^{-x^2}\,dx\right)\cdot\left(\int_{\mathbb{R}} e^{-y^2}\,dy\right) & \\
&= \int_{\mathbb{R}}\int_{\mathbb{R}} e^{-(x^2+y^2)}\,dx\,dy & \\
&= \int_{\mathbb{R}^2} e^{-\|(x,y)\|^2}\,d(x,y) & \text{Fubini}\\
&= \int_0^\infty \int_0^{2\pi} r e^{-r^2}\,d\varphi\,dr & \text{integration by substitution using polar coordinates}\\
&= 2\pi\int_0^\infty r e^{-r^2}\,dr & \\
&= 2\pi\int_0^\infty \frac{1}{2\sqrt{r}}\,\sqrt{r}\,e^{-r}\,dr & \text{integration by substitution using } r\mapsto\sqrt{r}\\
&= \pi &
\end{aligned}$$
Taking the square root on both sides finishes the proof. $\Box$
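As a quick sanity check (not part of the proof), lemma 6.1 can also be verified numerically, for instance with SciPy's quadrature routine; the tolerance below is an arbitrary choice:

```python
# Numerical check of lemma 6.1: the Gaussian integral over R equals sqrt(pi).
import numpy as np
from scipy.integrate import quad

value, abs_err = quad(lambda x: np.exp(-x**2), -np.inf, np.inf)
print(value, np.sqrt(np.pi))              # both approximately 1.7724538509
assert abs(value - np.sqrt(np.pi)) < 1e-8
```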
Lemma 6.2:
$$\int_{\mathbb{R}^d} e^{-\|x\|^2/2}\,dx = \sqrt{2\pi}^{\,d}$$
Proof:
$$\begin{aligned}
\int_{\mathbb{R}^d} e^{-\frac{\|x\|^2}{2}}\,dx &= \overbrace{\int_{-\infty}^\infty \cdots \int_{-\infty}^\infty}^{d\text{ times}} e^{-\frac{x_1^2+\cdots+x_d^2}{2}}\,dx_1\cdots dx_d & \text{Fubini's theorem}\\
&= \int_{-\infty}^\infty e^{-\frac{x_d^2}{2}}\cdots\int_{-\infty}^\infty e^{-\frac{x_1^2}{2}}\,dx_1\cdots dx_d & \text{pulling the constants out of the integrals}
\end{aligned}$$
By lemma 6.1, $\int_{\mathbb{R}} e^{-x^2}\,dx = \sqrt{\pi}$.
If we apply integration by substitution (theorem 5.5) to this integral with the diffeomorphism $x \mapsto \frac{x}{\sqrt{2}}$, we obtain
$$\sqrt{\pi} = \int_{\mathbb{R}} \frac{1}{\sqrt{2}}\, e^{-\frac{x^2}{2}}\,dx$$
and multiplying by $\sqrt{2}$ yields
$$\sqrt{2\pi} = \int_{\mathbb{R}} e^{-\frac{x^2}{2}}\,dx$$
Therefore, calculating the innermost integrals first and then pulling out the resulting constants,
$$\overbrace{\int_{-\infty}^\infty e^{-\frac{x_d^2}{2}}\cdots\int_{-\infty}^\infty e^{-\frac{x_1^2}{2}}}^{d\text{ times}}\,dx_1\cdots dx_d = \sqrt{2\pi}^{\,d}$$
$\Box$
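Lemma 6.2 can likewise be checked numerically in low dimensions. The sketch below (dimensions 1 to 3, an arbitrary choice) exploits the same factorization as the proof: a one-dimensional quadrature raised to the $d$-th power.

```python
# Numerical check of lemma 6.2: the integral of exp(-||x||^2 / 2) over R^d
# equals sqrt(2*pi)^d. The integrand factorizes, so a 1D quadrature suffices.
import numpy as np
from scipy.integrate import quad

one_dim, _ = quad(lambda x: np.exp(-x**2 / 2), -np.inf, np.inf)
for d in (1, 2, 3):
    print(d, one_dim**d, np.sqrt(2 * np.pi)**d)   # the two columns agree
```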
Theorem 6.3:
The function
$$E : \mathbb{R}\times\mathbb{R}^d \to \mathbb{R},\quad E(t,x) = \begin{cases} \sqrt{4\pi t}^{\,-d}\, e^{-\frac{\|x\|^2}{4t}} & t > 0\\ 0 & t \leq 0\end{cases}$$
is a Green's kernel for the heat equation.
Proof:
1. We show that $E$ is locally integrable.
Let $K \subset \mathbb{R}\times\mathbb{R}^d$ be a compact set, and let $T > 0$ be such that $K \subset (-T,T)\times\mathbb{R}^d$. We first show that the integral $\int_{(-T,T)\times\mathbb{R}^d} E(s,y)\,d(s,y)$ exists:
$$\begin{aligned}
\int_{(-T,T)\times\mathbb{R}^d} E(s,y)\,d(s,y) &= \int_{(0,T)\times\mathbb{R}^d} E(s,y)\,d(s,y) & \forall s \leq 0 : E(s,y) = 0\\
&= \int_0^T \int_{\mathbb{R}^d} \frac{1}{\sqrt{4\pi s}^{\,d}}\, e^{-\frac{\|y\|^2}{4s}}\,dy\,ds & \text{Fubini's theorem}
\end{aligned}$$
By transformation of variables in the inner integral using the diffeomorphism $y \mapsto \sqrt{2s}\,y$, and lemma 6.2, we obtain:
$$= \int_0^T \int_{\mathbb{R}^d} \frac{\sqrt{2s}^{\,d}}{\sqrt{4\pi s}^{\,d}}\, e^{-\frac{\|y\|^2}{2}}\,dy\,ds = \int_0^T 1\,ds = T$$
Therefore the integral $\int_{(-T,T)\times\mathbb{R}^d} E(s,y)\,d(s,y)$ exists. But since
$$\forall (s,y)\in\mathbb{R}\times\mathbb{R}^d : |\chi_K(s,y)\, E(s,y)| \leq |E(s,y)|,$$
where $\chi_K$ is the characteristic function of $K$, the integral
$$\int_{(-T,T)\times\mathbb{R}^d} \chi_K(s,y)\, E(s,y)\,d(s,y) = \int_K E(s,y)\,d(s,y)$$
exists. Since $K$ was an arbitrary compact set, we thus have local integrability.
2. We calculate $\partial_t E$ and $\Delta_x E$ (see exercise 1).
$$\partial_t E(t,x) = \left(\frac{\|x\|^2}{4t^2} - \frac{d}{2t}\right) E(t,x)$$
$$\Delta_x E(t,x) = \left(\frac{\|x\|^2}{4t^2} - \frac{d}{2t}\right) E(t,x)$$
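These formulas can be double-checked symbolically. The following SymPy sketch (not part of the proof; the dimension $d = 3$ is an arbitrary choice) verifies both derivative formulas and the resulting identity $\partial_t E = \Delta_x E$ for $t > 0$:

```python
# Symbolic sanity check of the derivative formulas for the heat kernel E (d = 3).
import sympy as sp

t, x1, x2, x3 = sp.symbols("t x1 x2 x3", real=True)
d = 3
r2 = x1**2 + x2**2 + x3**2                                   # ||x||^2
E = (4 * sp.pi * t) ** sp.Rational(-d, 2) * sp.exp(-r2 / (4 * t))

dt_E = sp.diff(E, t)
lap_E = sum(sp.diff(E, xi, 2) for xi in (x1, x2, x3))
claimed = (r2 / (4 * t**2) - d / (2 * t)) * E

print(sp.simplify(dt_E - claimed))    # 0: matches the formula for the t-derivative
print(sp.simplify(lap_E - claimed))   # 0: matches the formula for the Laplacian
print(sp.simplify(dt_E - lap_E))      # 0: E solves the heat equation for t > 0
```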
3. We show that
$$\forall \varphi \in \mathcal{D}(\mathbb{R}\times\mathbb{R}^d),\ (t,x) \in \mathbb{R}\times\mathbb{R}^d : (\partial_t - \Delta_x) T_{E(\cdot - (t,x))}(\varphi) = \delta_{(t,x)}(\varphi)$$
Let $\varphi \in \mathcal{D}(\mathbb{R}\times\mathbb{R}^d)$ and $(t,x) \in \mathbb{R}\times\mathbb{R}^d$ be arbitrary.
In this last step of the proof, we will only manipulate the term $(\partial_t - \Delta_x) T_{E(\cdot - (t,x))}(\varphi)$.
$$\begin{aligned}
(\partial_t - \Delta_x) T_{E(\cdot - (t,x))}(\varphi) &= T_{E(\cdot - (t,x))}\big((-\partial_t - \Delta_x)\varphi\big) & \text{by definition of the distributional derivative}\\
&= \int_{\mathbb{R}\times\mathbb{R}^d} (-\partial_t - \Delta_x)\varphi(s,y)\, E(s-t, y-x)\,d(s,y) & \\
&= \int_{(t,\infty)\times\mathbb{R}^d} (-\partial_t - \Delta_x)\varphi(s,y)\, E(s-t, y-x)\,d(s,y) & \forall t \leq 0 : E(t,x) = 0
\end{aligned}$$
If we choose $R > 0$ and $T > 0$ such that $\operatorname{supp}\varphi \subset (-\infty, t+T)\times B_R(x)$, we even have
$$(\partial_t - \Delta_x) T_{E(\cdot - (t,x))}(\varphi) = \int_{(t,t+T)\times B_R(x)} (-\partial_t - \Delta_x)\varphi(s,y)\, E(s-t, y-x)\,d(s,y)$$
Using the dominated convergence theorem (theorem 5.1), we can rewrite the term again:
$$\begin{aligned}
(\partial_t - \Delta_x) T_{E(\cdot - (t,x))}(\varphi) &= \int_{(t,t+T)\times B_R(x)} (-\partial_t - \Delta_x)\varphi(s,y)\, E(s-t, y-x)\,d(s,y)\\
&= \lim_{\epsilon \downarrow 0} \int_{(t,t+T)\times B_R(x)} (-\partial_t - \Delta_x)\varphi(s,y)\, E(s-t, y-x)\,\big(1 - \chi_{[t,t+\epsilon]}(s)\big)\,d(s,y)\\
&= \lim_{\epsilon \downarrow 0} \int_{(t+\epsilon,t+T)\times B_R(x)} (-\partial_t - \Delta_x)\varphi(s,y)\, E(s-t, y-x)\,d(s,y)
\end{aligned}$$
where $\chi_{[t,t+\epsilon]}$ is the characteristic function of $[t,t+\epsilon]$.
We split the limit term in half to manipulate each summand separately:
$$\begin{aligned}
&\int_{(t+\epsilon,t+T)\times B_R(x)} (-\partial_t - \Delta_x)\varphi(s,y)\, E(s-t, y-x)\,d(s,y)\\
&\qquad= -\int_{(t+\epsilon,t+T)\times B_R(x)} \Delta_x\varphi(s,y)\, E(s-t, y-x)\,d(s,y) - \int_{(t+\epsilon,t+T)\times B_R(x)} \partial_t\varphi(s,y)\, E(s-t, y-x)\,d(s,y)
\end{aligned}$$
The last integrals are taken over $(t+\epsilon,t+T)\times B_R(x)$ for $\epsilon > 0$. On this domain and its boundary, $E(s-t, y-x)$ is differentiable. Therefore, we are allowed to integrate by parts.
$$\begin{aligned}
\int_{(t+\epsilon,t+T)\times B_R(x)} \Delta_x\varphi(s,y)\, E(s-t, y-x)\,d(s,y) &= \int_{t+\epsilon}^{t+T}\int_{B_R(x)} \Delta_x\varphi(s,y)\, E(s-t, y-x)\,dy\,ds & \text{Fubini}\\
&= \int_{t+\epsilon}^{t+T}\int_{\partial B_R(x)} E(s-t, y-x)\, n(y)\cdot\underbrace{\nabla_x\varphi(s,y)}_{=0}\,dy\,ds - \int_{t+\epsilon}^{t+T}\int_{B_R(x)} \nabla_x\varphi(s,y)\cdot\nabla_x E(s-t, y-x)\,dy\,ds & \text{integration by parts in } y\\
&= \int_{t+\epsilon}^{t+T}\int_{B_R(x)} \varphi(s,y)\,\Delta_x E(s-t, y-x)\,dy\,ds - \int_{t+\epsilon}^{t+T}\int_{\partial B_R(x)} \underbrace{\varphi(s,y)}_{=0}\, n(y)\cdot\nabla_x E(s-t, y-x)\,dy\,ds & \text{integration by parts in } y
\end{aligned}$$
In the last two manipulations, we used integration by parts, where $\varphi$ and $E(s-t, \cdot - x)$ exchanged the role of the function in theorem 5.4, and $\nabla_x E(s-t, \cdot - x)$ and $\nabla_x\varphi$ exchanged the role of the vector field. In the latter manipulation, we did not apply theorem 5.4 directly, but with the boundary term subtracted on both sides.
Let's also integrate the other integral by parts.
$$\begin{aligned}
\int_{(t+\epsilon,t+T)\times B_R(x)} \partial_t\varphi(s,y)\, E(s-t, y-x)\,d(s,y) &= \int_{B_R(x)}\int_{t+\epsilon}^{t+T} \partial_t\varphi(s,y)\, E(s-t, y-x)\,ds\,dy & \text{Fubini}\\
&= \int_{B_R(x)} \underbrace{\varphi(s,y)\, E(s-t, y-x)\,\Big|_{s=t+\epsilon}^{s=t+T}}_{=\,-\varphi(t+\epsilon,y)\, E(\epsilon,\, y-x)}\,dy - \int_{B_R(x)}\int_{t+\epsilon}^{t+T} \varphi(s,y)\,\partial_t E(s-t, y-x)\,ds\,dy & \text{integration by parts in } s
\end{aligned}$$
Here the boundary term at $s = t+T$ vanishes because $\operatorname{supp}\varphi \subset (-\infty, t+T)\times B_R(x)$ implies $\varphi(t+T, \cdot) = 0$.
Now we add the two terms back together and see that
$$\begin{aligned}
(\partial_t - \Delta_x) T_{E(\cdot - (t,x))}(\varphi) = \lim_{\epsilon \downarrow 0}\;&-\int_{B_R(x)} -\varphi(t+\epsilon,y)\, E(\epsilon, y-x)\,dy\\
&+ \int_{B_R(x)}\int_{t+\epsilon}^{t+T} \varphi(s,y)\,\partial_t E(s-t, y-x)\,ds\,dy - \int_{t+\epsilon}^{t+T}\int_{B_R(x)} \varphi(s,y)\,\Delta_x E(s-t, y-x)\,dy\,ds
\end{aligned}$$
The derivative calculations from above show that $\partial_t E = \Delta_x E$, which is why the last two integrals cancel, and therefore
$$(\partial_t - \Delta_x) T_{E(\cdot - (t,x))}(\varphi) = \lim_{\epsilon \downarrow 0} \int_{B_R(x)} \varphi(t+\epsilon,y)\, E(\epsilon, y-x)\,dy$$
Using that $\operatorname{supp}\varphi(t+\epsilon, \cdot) \subset B_R(x)$ and applying multi-dimensional integration by substitution with the diffeomorphism $y \mapsto x + \sqrt{2\epsilon}\,y$, we obtain:
$$\begin{aligned}
\int_{B_R(x)} \varphi(t+\epsilon,y)\, E(\epsilon, y-x)\,dy &= \int_{\mathbb{R}^d} \varphi(t+\epsilon,y)\, E(\epsilon, y-x)\,dy\\
&= \int_{\mathbb{R}^d} \varphi(t+\epsilon,y)\, \frac{1}{\sqrt{4\pi\epsilon}^{\,d}}\, e^{-\frac{\|y-x\|^2}{4\epsilon}}\,dy\\
&= \int_{\mathbb{R}^d} \varphi(t+\epsilon, x+\sqrt{2\epsilon}\,y)\, \frac{\sqrt{2\epsilon}^{\,d}}{\sqrt{4\pi\epsilon}^{\,d}}\, e^{-\frac{\|y\|^2}{2}}\,dy = \frac{1}{\sqrt{2\pi}^{\,d}} \int_{\mathbb{R}^d} \varphi(t+\epsilon, x+\sqrt{2\epsilon}\,y)\, e^{-\frac{\|y\|^2}{2}}\,dy
\end{aligned}$$
Since $\varphi$ is continuous (even smooth), we have
$$\forall x \in \mathbb{R}^d : \lim_{\epsilon \to 0} \varphi(t+\epsilon, x+\sqrt{2\epsilon}\,y) = \varphi(t,x)$$
Therefore,
$$\begin{aligned}
(\partial_t - \Delta_x) T_{E(\cdot - (t,x))}(\varphi) &= \lim_{\epsilon \downarrow 0} \frac{1}{\sqrt{2\pi}^{\,d}} \int_{\mathbb{R}^d} \varphi(t+\epsilon, x+\sqrt{2\epsilon}\,y)\, e^{-\frac{\|y\|^2}{2}}\,dy & \\
&= \frac{1}{\sqrt{2\pi}^{\,d}} \int_{\mathbb{R}^d} \varphi(t,x)\, e^{-\frac{\|y\|^2}{2}}\,dy & \text{dominated convergence}\\
&= \varphi(t,x) & \text{lemma 6.2}\\
&= \delta_{(t,x)}(\varphi) &
\end{aligned}$$
$\Box$
Theorem 6.4:
If $f : \mathbb{R}\times\mathbb{R}^d \to \mathbb{R}$ is bounded, once continuously differentiable in the $t$-variable and twice continuously differentiable in the $x$-variable, then
$$u(t,x) := (E * f)(t,x)$$
solves the heat equation
$$\forall (t,x) \in \mathbb{R}\times\mathbb{R}^d : \partial_t u(t,x) - \Delta_x u(t,x) = f(t,x)$$
Proof:
1. We show that $(E * f)(t,x)$ is sufficiently often differentiable for the equation to be satisfied.
2. We invoke theorem 5.?, which states exactly that a convolution with a Green's kernel is a solution, provided that the convolution is sufficiently often differentiable (which we showed in part 1 of the proof). $\Box$
Theorem and definition 6.6:
Let $f : [0,\infty)\times\mathbb{R}^d \to \mathbb{R}$ be bounded, once continuously differentiable in the $t$-variable and twice continuously differentiable in the $x$-variable, and let $g : \mathbb{R}^d \to \mathbb{R}$ be continuous and bounded. If we define
$$\tilde{f} : \mathbb{R}\times\mathbb{R}^d \to \mathbb{R},\quad \tilde{f}(t,x) = \begin{cases} f(t,x) & t \geq 0\\ 0 & t < 0\end{cases}$$
then the function
$$u : [0,\infty)\times\mathbb{R}^d \to \mathbb{R},\quad u(t,x) = \begin{cases} (E *_x g)(t,x) + (\tilde{f} * E)(t,x) & t > 0\\ g(x) & t = 0\end{cases}$$
is a continuous solution of the initial value problem for the heat equation, that is,
$$\begin{cases} \forall (t,x) \in (0,\infty)\times\mathbb{R}^d : & \partial_t u(t,x) - \Delta_x u(t,x) = f(t,x)\\ \forall x \in \mathbb{R}^d : & u(0,x) = g(x)\end{cases}$$
Note that if we do not require the solution to be continuous, we may take any solution and simply set it to $g$ at $t = 0$.
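The role of the first summand can be illustrated numerically. The sketch below (not part of the theorem) treats the homogeneous case $f = 0$, where the formula reduces to $u(t,x) = (E *_x g)(t,x)$ for $t > 0$; it approximates this spatial convolution in one space dimension with a Riemann sum for the arbitrarily chosen initial datum $g(x) = 1/(1+x^2)$ and checks that $u(t,\cdot)$ approaches $g$ as $t \to 0$, in line with the continuity claim.

```python
# Numerical illustration: u(t, x) = (E *_x g)(t, x) approaches g(x) as t -> 0 (d = 1).
import numpy as np

def heat_kernel(t, x):
    """E(t, x) for t > 0 in one space dimension."""
    return (4 * np.pi * t) ** (-0.5) * np.exp(-x**2 / (4 * t))

def g(x):
    return 1.0 / (1.0 + x**2)

y = np.linspace(-50, 50, 20001)     # integration grid for the Riemann sum
dy = y[1] - y[0]
x = np.linspace(-3, 3, 7)           # evaluation points

for t in (1.0, 0.1, 0.01):
    u = np.array([np.sum(heat_kernel(t, xi - y) * g(y)) * dy for xi in x])
    print(f"t = {t:5.2f}, max |u(t,x) - g(x)| = {np.max(np.abs(u - g(x))):.2e}")
# The printed error shrinks as t -> 0, matching the initial condition u(0, x) = g(x).
```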
Proof:
1. We show
$$\forall (t,x) \in (0,\infty)\times\mathbb{R}^d : \partial_t u(t,x) - \Delta_x u(t,x) = f(t,x) \qquad (*)$$
From theorem 6.4, we already know that $\tilde{f} * E$ solves
$$\forall (t,x) \in (0,\infty)\times\mathbb{R}^d : \partial_t(\tilde{f} * E)(t,x) - \Delta_x(\tilde{f} * E)(t,x) = \tilde{f}(t,x) \overset{t>0}{=} f(t,x)$$
Therefore, we have for all $(t,x) \in (0,\infty)\times\mathbb{R}^d$:
$$\begin{aligned}
\partial_t u(t,x) - \Delta_x u(t,x) = {}& \partial_t(E *_x g)(t,x) + \partial_t(\tilde{f} * E)(t,x)\\
& - \Delta_x(E *_x g)(t,x) - \Delta_x(\tilde{f} * E)(t,x)\\
= {}& f(t,x) + \partial_t(E *_x g)(t,x) - \Delta_x(E *_x g)(t,x)
\end{aligned}$$
which is why $(*)$ would follow if
$$\forall (t,x) \in (0,\infty)\times\mathbb{R}^d : \partial_t(E *_x g)(t,x) - \Delta_x(E *_x g)(t,x) = 0$$
This we shall now check.
By definition of the spatial convolution, we have
$$\partial_t(E *_x g)(t,x) = \partial_t \int_{\mathbb{R}^d} E(t,x-y)\,g(y)\,dy$$
and
$$\Delta_x(E *_x g)(t,x) = \Delta_x \int_{\mathbb{R}^d} E(t,x-y)\,g(y)\,dy$$
By applying Leibniz' integral rule (see exercise 2), we find that
$$\begin{aligned}
\partial_t(E *_x g)(t,x) - \Delta_x(E *_x g)(t,x) &= \partial_t \int_{\mathbb{R}^d} E(t,x-y)\,g(y)\,dy - \Delta_x \int_{\mathbb{R}^d} E(t,x-y)\,g(y)\,dy & \\
&= \int_{\mathbb{R}^d} \partial_t E(t,x-y)\,g(y)\,dy - \int_{\mathbb{R}^d} \Delta_x E(t,x-y)\,g(y)\,dy & \text{Leibniz' integral rule}\\
&= \int_{\mathbb{R}^d} \big(\partial_t E(t,x-y) - \Delta_x E(t,x-y)\big)\,g(y)\,dy & \text{linearity of the integral}\\
&= 0 & \text{exercise 1}
\end{aligned}$$
for all $(t,x) \in (0,\infty)\times\mathbb{R}^d$.
2. We show that $u$ is continuous.
It is clear that $u$ is continuous on $(0,\infty)\times\mathbb{R}^d$, since all the first-order partial derivatives exist and are continuous (see exercise 2). It remains to be shown that $u$ is continuous on $\{0\}\times\mathbb{R}^d$.
To do so, we first note that for all $(t,x) \in (0,\infty)\times\mathbb{R}^d$,
$$\begin{aligned}
\int_{\mathbb{R}^d} E(t,x-y)\,dy &= \int_{\mathbb{R}^d} E(t,y)\,dy & \text{integration by substitution using } y \mapsto x-y\\
&= \int_{\mathbb{R}^d} \sqrt{4\pi t}^{\,-d}\, e^{-\frac{\|y\|^2}{4t}}\,dy & \\
&= \int_{\mathbb{R}^d} \sqrt{2\pi}^{\,-d}\, e^{-\frac{\|y\|^2}{2}}\,dy & \text{integration by substitution using } y \mapsto \sqrt{2t}\,y\\
&= 1 & \text{lemma 6.2}
\end{aligned}$$
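This normalization can also be observed numerically; the following sketch (one space dimension, grid chosen arbitrarily) integrates the kernel for several times $t$:

```python
# Numerical check that the heat kernel has total mass 1 for every t > 0 (d = 1).
import numpy as np

y = np.linspace(-60, 60, 200001)
dy = y[1] - y[0]
for t in (0.01, 0.1, 1.0, 10.0):
    E = (4 * np.pi * t) ** (-0.5) * np.exp(-y**2 / (4 * t))
    print(t, np.sum(E) * dy)   # approximately 1 for each t
```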
Furthermore, due to the continuity of $g$, we may choose for arbitrary $\epsilon > 0$ and any $x \in \mathbb{R}^d$ a $\delta > 0$ such that
$$\forall y \in B_\delta(x) : |g(y) - g(x)| < \epsilon.$$
From these last two observations, we may conclude:
$$\begin{aligned}
|g(x) - (E *_x g)(t,x)| &= \left|1\cdot g(x) - \int_{\mathbb{R}^d} E(t,x-y)\,g(y)\,dy\right| & \\
&= \left|\int_{\mathbb{R}^d} E(t,x-y)\,g(x)\,dy - \int_{\mathbb{R}^d} E(t,x-y)\,g(y)\,dy\right| & \\
&= \left|\int_{B_\delta(x)} E(t,x-y)\,\big(g(x)-g(y)\big)\,dy + \int_{\mathbb{R}^d\setminus B_\delta(x)} E(t,x-y)\,\big(g(x)-g(y)\big)\,dy\right| & \\
&\leq \left|\int_{B_\delta(x)} E(t,x-y)\,\big(g(x)-g(y)\big)\,dy\right| + \left|\int_{\mathbb{R}^d\setminus B_\delta(x)} E(t,x-y)\,\big(g(x)-g(y)\big)\,dy\right| & \text{triangle ineq. in } \mathbb{R}\\
&\leq \int_{B_\delta(x)} |E(t,x-y)|\,\underbrace{|g(y)-g(x)|}_{<\epsilon}\,dy + \int_{\mathbb{R}^d\setminus B_\delta(x)} \big|E(t,x-y)\,(g(y)-g(x))\big|\,dy & \text{triangle ineq. for }\textstyle\int\\
&< \int_{\mathbb{R}^d} |E(t,x-y)|\,\epsilon\,dy + \int_{\mathbb{R}^d\setminus B_\delta(x)} |E(t,x-y)|\,\underbrace{\big(|g(y)|+|g(x)|\big)}_{\leq 2\|g\|_\infty}\,dy & \text{monotonicity of the }\textstyle\int\\
&\leq \epsilon + 2\|g\|_\infty \left|\int_{\mathbb{R}^d\setminus B_\delta(x)} E(t,x-y)\,dy\right| &
\end{aligned}$$
But due to integration by substitution using the diffeomorphism $x \mapsto \sqrt{2t}\,x$, we obtain
$$\int_{\mathbb{R}^d\setminus B_\delta(x)} E(t,x-y)\,dy = \int_{\mathbb{R}^d\setminus B_\delta(0)} E(t,y)\,dy = \int_{\mathbb{R}^d\setminus B_{\frac{\delta}{\sqrt{2t}}}(0)} \frac{1}{\sqrt{2\pi}^{\,d}}\, e^{-\frac{\|y\|^2}{2}}\,dy \to 0, \quad t \to 0$$
which is why
$$\lim_{t \to 0} |g(x) - (E *_x g)(t,x)| < \epsilon$$
Since $\epsilon > 0$ was arbitrary, continuity is proven. $\Box$