Derive the following error equation for $e_{k+1}=x_{k+1}-x^{*}$:
$$e_{k+1}=B_{k}^{-1}\left((B_{k}-f'(x^{*}))e_{k}-\int_{0}^{1}\bigl(f'(sx_{k}+(1-s)x^{*})-f'(x^{*})\bigr)e_{k}\,ds\right)$$
Note the following identity, which follows from the fundamental theorem of calculus applied to $s\mapsto F(sx+(1-s)y)$:
$$F(x)-F(y)=\int_{0}^{1}F'(sx+(1-s)y)(x-y)\,ds\qquad(*)$$
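As a quick sanity check, the identity can be verified numerically. The following is a minimal sketch; the smooth map $F:\mathbb{R}^{2}\to\mathbb{R}^{2}$ and the points $x,y$ are arbitrary choices for illustration only.

```python
# Numerical check of identity (*) with an arbitrary smooth test map F.
import numpy as np

def F(v):
    x, y = v
    return np.array([x**2 + np.sin(y), np.exp(x) - y])

def Fprime(v):
    x, y = v
    return np.array([[2*x, np.cos(y)],
                     [np.exp(x), -1.0]])

x = np.array([0.7, -0.3])
y = np.array([0.1, 0.4])

# Approximate the integral in (*) with a fine midpoint rule.
n = 2000
s = (np.arange(n) + 0.5) / n
integral = sum(Fprime(si*x + (1 - si)*y) @ (x - y) for si in s) / n

print(np.allclose(F(x) - F(y), integral, atol=1e-6))  # expect True
```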
The error $e_{k+1}$ is given by
$$\begin{aligned}
e_{k+1}&=x_{k+1}-x^{*}\\
&=\underbrace{x_{k}-x^{*}}_{e_{k}}-B_{k}^{-1}f(x_{k})\\
&=e_{k}-B_{k}^{-1}(f(x_{k})-\underbrace{f(x^{*})}_{0})\\
&=e_{k}-B_{k}^{-1}\int_{0}^{1}f'(sx_{k}+(1-s)x^{*})e_{k}\,ds\qquad\text{by }(*)\\
&=B_{k}^{-1}\left\{(B_{k}-f'(x^{*}))e_{k}-\int_{0}^{1}\left[f'(sx_{k}+(1-s)x^{*})-f'(x^{*})\right]e_{k}\,ds\right\},
\end{aligned}$$
where the last step factors out $B_{k}^{-1}$ and adds and subtracts $f'(x^{*})e_{k}$, using $\int_{0}^{1}f'(x^{*})e_{k}\,ds=f'(x^{*})e_{k}$.
Let $B_{k}=B\in\mathbb{R}^{n\times n}$ be a fixed matrix. Find conditions on $B$ that guarantee local convergence. What rate of convergence do you expect and why?
Assume $B$ is invertible, $f'$ is Lipschitz with constant $L$, and $\|B-f'(x^{*})\|$ is sufficiently small. Taking norms in the error equation and using the triangle inequality,
$$\begin{aligned}
\|e_{k+1}\|&\leq\|B^{-1}\|\left\{\|B-f'(x^{*})\|\,\|e_{k}\|+\int_{0}^{1}\underbrace{\|f'(sx_{k}+(1-s)x^{*})-f'(x^{*})\|}_{\leq L\|sx_{k}+(1-s)x^{*}-x^{*}\|=Ls\|e_{k}\|}\,\|e_{k}\|\,ds\right\}\\
&\leq\|B^{-1}\|\left\{\|B-f'(x^{*})\|\,\|e_{k}\|+\int_{0}^{1}Ls\|e_{k}\|^{2}\,ds\right\}\\
&=\|B^{-1}\|\left\{\|B-f'(x^{*})\|+\frac{L}{2}\|e_{k}\|\right\}\|e_{k}\|.
\end{aligned}$$
Hence if $x_{0}$ is close enough to $x^{*}$ and
$$\|B^{-1}\|\left(\|B-f'(x^{*})\|+\frac{L}{2}\|e_{0}\|\right)\leq\rho<1,$$
then $\|e_{k+1}\|\leq\rho\|e_{k}\|$ for all $k$, and the iteration converges locally. Since $B$ is fixed, the contraction factor does not shrink as $k\to\infty$, so we expect only linear convergence.
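A minimal numerical illustration of this linear rate; the test function $f$, the root $x^{*}=(1,1)$, and the perturbation of $f'(x^{*})$ used to build $B$ are arbitrary choices for the example.

```python
# Chord iteration x_{k+1} = x_k - B^{-1} f(x_k) with a fixed matrix B.
import numpy as np

def f(x):
    return np.array([x[0]**2 + x[1] - 2.0, x[0] - x[1]**2])

def fprime(x):
    return np.array([[2*x[0], 1.0], [1.0, -2*x[1]]])

x_star = np.array([1.0, 1.0])          # a root: f(x_star) = 0
B = fprime(x_star) + 0.1*np.eye(2)     # fixed matrix near f'(x*)
x = np.array([1.3, 0.8])

errs = []
for _ in range(15):
    x = x - np.linalg.solve(B, f(x))
    errs.append(np.linalg.norm(x - x_star))

# Ratios e_{k+1}/e_k settle near a positive constant < 1: linear convergence.
print([errs[k+1]/errs[k] for k in range(8) if errs[k] > 0])
```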
Find sufficient conditions on $\|B_{k}-f'(x_{k})\|$ for the convergence to be superlinear. What choice of $B_{k}$ corresponds to the Newton method and what rate of convergence do you expect?
Superlinear convergence
$$\frac{\|e_{k+1}\|}{\|e_{k}\|}\rightarrow 0\quad\text{as }k\rightarrow\infty$$
Condition for superlinear convergence
From part (b),
$$\frac{\|e_{k+1}\|}{\|e_{k}\|}\leq\|B_{k}^{-1}\|\left\{\|B_{k}-f'(x^{*})\|+\frac{L}{2}\|e_{k}\|\right\}.$$
Since $\|e_{k}\|\rightarrow 0$ as $k\rightarrow\infty$, the ratio tends to zero provided $\|B_{k}-f'(x^{*})\|\rightarrow 0$ as $k\rightarrow\infty$ (with $\|B_{k}^{-1}\|$ bounded). By the Lipschitz property,
$$\|B_{k}-f'(x^{*})\|\leq\|B_{k}-f'(x_{k})\|+L\|e_{k}\|,$$
so the sufficient condition $\|B_{k}-f'(x_{k})\|\rightarrow 0$ yields superlinear convergence. The choice
$$B_{k}=f'(x_{k})\qquad\text{(Newton iteration)}$$
satisfies this exactly, and for Newton's method we expect local quadratic convergence, since then $\|B_{k}-f'(x^{*})\|\leq L\|e_{k}\|$ makes the bracket itself $\mathcal{O}(\|e_{k}\|)$.
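A minimal sketch of the Newton case ($B_{k}=f'(x_{k})$) on the same illustrative test problem as above, showing the characteristic squaring of the error.

```python
# Newton's method: errors roughly square at each step (quadratic convergence).
import numpy as np

def f(x):
    return np.array([x[0]**2 + x[1] - 2.0, x[0] - x[1]**2])

def fprime(x):
    return np.array([[2*x[0], 1.0], [1.0, -2*x[1]]])

x_star = np.array([1.0, 1.0])
x = np.array([1.3, 0.8])

for _ in range(6):
    x = x - np.linalg.solve(fprime(x), f(x))
    print(np.linalg.norm(x - x_star))   # roughly 1e-2, 1e-4, 1e-8, ...
```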
Let $f(t,y)$ be uniformly Lipschitz with respect to $y$. Let $y(t)$ be the solution to the initial value problem $y'=f(t,y)$, $y(0)=y_{0}$. Consider the trapezoid method
$$(1)\qquad y_{i+1}=y_{i}+\frac{h}{2}\left(f(t_{i},y_{i})+f(t_{i+1},y_{i+1})\right),\qquad(i\geq 0).$$
Find a condition on the stepsize $h$ that ensures (1) can be solved uniquely for $y_{i+1}$.
The implicit method can be viewed as a fixed-point iteration:
$$y_{i+1}=\underbrace{y_{i}+\frac{h}{2}\bigl(f(t_{i},y_{i})+f(t_{i+1},y_{i+1})\bigr)}_{g(y_{i+1})}$$
We want $g$ to be a contraction. If $L$ denotes the uniform Lipschitz constant of $f$ with respect to $y$, then for any $u,v$,
$$|g(u)-g(v)|=\frac{h}{2}\,|f(t_{i+1},u)-f(t_{i+1},v)|\leq\frac{hL}{2}\,|u-v|,$$
so $g$ is a contraction whenever
$$h<\frac{2}{L},$$
and the Banach fixed-point theorem then guarantees that (1) has a unique solution $y_{i+1}$.
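A minimal sketch of one such fixed-point solve, assuming the illustrative test equation $y'=-5y$ (so $L=5$) and a stepsize satisfying $h<2/L$.

```python
# Solving each trapezoid step by fixed-point iteration on the contraction g.
import numpy as np

def f(t, y):
    return -5.0 * y          # Lipschitz in y with constant L = 5

h = 0.1                      # satisfies h < 2/L = 0.4

def trapezoid_step(t, y):
    """Advance y(t) to y(t+h) by iterating z <- g(z)."""
    z = y                    # initial guess for y_{i+1}
    for _ in range(50):
        z_new = y + h/2 * (f(t, y) + f(t + h, z))
        if abs(z_new - z) < 1e-14:
            break
        z = z_new
    return z_new

t, y = 0.0, 1.0
for i in range(10):
    y = trapezoid_step(t, y)
    t += h
print(y, np.exp(-5.0 * t))   # compare with the exact solution e^{-5t}
```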
Define a local truncation error and estimate it. Examine the additional regularity of $f$ needed for this estimate.
Rewriting (1), replacing $y_{k}$ with $y(t_{k})$ and $f(t_{k},y_{k})$ with $y'(t_{k})$, we obtain the condition for consistency of order $p$:
$$y(t_{i+1})-y(t_{i})-\frac{h}{2}y'(t_{i})-\frac{h}{2}y'(t_{i+1})=\mathcal{O}(h^{p+1})$$
For uniform stepsize $h$, $t_{i+1}=t_{i}+h$. Expanding in Taylor series about $t_{i}$ gives
$$y(t_{i+1})=y(t_{i})+hy'(t_{i})+\frac{h^{2}}{2}y''(t_{i})+\frac{h^{3}}{6}y'''(t_{i})+\mathcal{O}(h^{4}),$$
$$y'(t_{i+1})=y'(t_{i})+hy''(t_{i})+\frac{h^{2}}{2}y'''(t_{i})+\mathcal{O}(h^{3}).$$
Substituting these expansions,
$$y(t_{i+1})-y(t_{i})-\frac{h}{2}\bigl(y'(t_{i})+y'(t_{i+1})\bigr)=\left(\frac{1}{6}-\frac{1}{4}\right)h^{3}y'''(t_{i})+\mathcal{O}(h^{4})=-\frac{h^{3}}{12}y'''(t_{i})+\mathcal{O}(h^{4}),$$
so the local truncation error is $\mathcal{O}(h^{3})$ and the method is consistent of order $p=2$. This estimate requires $y\in C^{3}$, i.e. $f\in C^{2}$, which is the additional regularity beyond the Lipschitz assumption.
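A minimal empirical check of the resulting second-order accuracy, assuming the illustrative problem $y'=-y$, $y(0)=1$ on $[0,1]$; for this linear $f$ the implicit step can be solved in closed form.

```python
# Global error of the trapezoid method on y' = -y should scale like h^2.
import numpy as np

def solve(h, T=1.0):
    n = int(round(T / h))
    y = 1.0
    for i in range(n):
        # For f(t, y) = -y the implicit trapezoid step solves exactly to:
        y = y * (1 - h/2) / (1 + h/2)
    return y

exact = np.exp(-1.0)
errs = [abs(solve(h) - exact) for h in (0.1, 0.05, 0.025)]
# Successive ratios near 4 confirm second-order convergence.
print([errs[k]/errs[k+1] for k in range(2)])
```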
Prove a global error estimate for (1)
Consider the 2-point boundary value problem
$$(2)\qquad -au''+bu=f(x),\quad 0<x<1,\qquad u(0)=u_{0},\;u(1)=u_{1},$$
with $a,b>0$ constants and $f\in C[0,1]$. Let $\mathcal{P}=\{x_{i}\}_{i=0}^{n+1}$ be a uniform partition of $[0,1]$ with meshsize $h$.
Use centered finite differences to discretize (2). Write the system as $A\mathbf{U}=\mathbf{F}$ and identify $A\in\mathbb{R}^{n\times n}$ and $\mathbf{U},\mathbf{F}\in\mathbb{R}^{n}$. Prove that $A$ is nonsingular.
Using Taylor expansions, we can approximate the second derivative as
$$u''(x)\approx\frac{u(x+h)-2u(x)+u(x-h)}{h^{2}},$$
with an $\mathcal{O}(h^{2})$ error for smooth $u$.
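A minimal check of this $\mathcal{O}(h^{2})$ accuracy, assuming the arbitrary smooth test function $u(x)=\sin x$.

```python
# Centered second difference: the error should shrink by ~100x per 10x in h.
import numpy as np

u = np.sin
x0 = 0.7
for h in (1e-1, 1e-2, 1e-3):
    approx = (u(x0 + h) - 2*u(x0) + u(x0 - h)) / h**2
    print(h, abs(approx - (-np.sin(x0))))   # error decays like h^2
```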
Substituting this approximation into (2) at the nodes gives $n+2$ unknowns; we can eliminate two of them by substituting the boundary conditions $u(0)=u_{0}$, $u(1)=u_{1}$ into the equations for $U_{1}$ and $U_{n}$, respectively.
We then have the system
$$\underbrace{\begin{bmatrix}\frac{2a}{h^{2}}+b&-\frac{a}{h^{2}}&&&\\-\frac{a}{h^{2}}&\frac{2a}{h^{2}}+b&-\frac{a}{h^{2}}&&\\&-\frac{a}{h^{2}}&\frac{2a}{h^{2}}+b&-\frac{a}{h^{2}}&\\&&\ddots&\ddots&\ddots\\&&&-\frac{a}{h^{2}}&\frac{2a}{h^{2}}+b\end{bmatrix}}_{A}\underbrace{\begin{bmatrix}U_{1}\\U_{2}\\U_{3}\\\vdots\\U_{n}\end{bmatrix}}_{\mathbf{U}}=\underbrace{\begin{bmatrix}f(x_{1})+\frac{a}{h^{2}}u_{0}\\f(x_{2})\\f(x_{3})\\\vdots\\f(x_{n})+\frac{a}{h^{2}}u_{1}\end{bmatrix}}_{\mathbf{F}}$$
$A$ is nonsingular since it is strictly diagonally dominant: in each row the diagonal entry $\frac{2a}{h^{2}}+b$ exceeds the sum of the absolute values of the off-diagonal entries, which is at most $\frac{2a}{h^{2}}$, because $b>0$.
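A minimal sketch of assembling and solving this system; the data $a$, $b$, $f$, $u_{0}$, $u_{1}$ below are arbitrary illustrative choices with a known exact solution.

```python
# Assemble A and F for (2) and solve AU = F.
import numpy as np

a, b = 1.0, 2.0
u0, u1 = 0.0, 0.0
f = lambda x: np.sin(np.pi * x)

n = 99
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)           # interior nodes x_1, ..., x_n

A = (np.diag((2*a/h**2 + b) * np.ones(n))
     + np.diag(-a/h**2 * np.ones(n - 1), 1)
     + np.diag(-a/h**2 * np.ones(n - 1), -1))
F = f(x)
F[0] += a/h**2 * u0
F[-1] += a/h**2 * u1

U = np.linalg.solve(A, F)
# Exact solution for this f: u(x) = sin(pi x) / (a*pi^2 + b).
print(np.max(np.abs(U - np.sin(np.pi*x) / (a*np.pi**2 + b))))
```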
Define truncation error and derive a bound for this method in terms of $h$. State without proof an error estimate of the form
$$\max_{1\leq i\leq n}|u(x_{i})-U_{i}|\leq Ch^{s}$$
and specify the order $s$.
Local Truncation Error
$$LTE=\frac{u(x_{i}+h)+u(x_{i}-h)-2u(x_{i})}{h^{2}}-u''(x_{i})=\frac{u^{(4)}(\xi)}{12}h^{2}$$
for some $\xi\in(x_{i}-h,x_{i}+h)$, by Taylor expansion; this requires $u\in C^{4}$.
Bound for Local Truncation Error
$$|LTE|\leq\frac{1}{12}\|u^{(4)}\|_{\infty}h^{2}$$
Derive Bound for Max Error
Let $L=-a\partial_{xx}+b$, $L_{h}=A$, and let $R_{h}$ denote the local truncation error.
Then
$$\begin{aligned}Lu&=f\\L_{h}u&=f+R_{h}\\L_{h}U&=f.\end{aligned}$$
Subtracting the last two equations gives
$$L_{h}(u-U)=R_{h}.$$
Hence,
$$\|u-U\|_{\infty}\leq\|L_{h}^{-1}\|_{\infty}\|R_{h}\|_{\infty}.$$
Since $A$ is an $M$-matrix whose row sums are at least $b$, we have $\|L_{h}^{-1}\|_{\infty}\leq\frac{1}{b}$, a bound independent of $h$. Therefore
$$\|u-U\|_{\infty}\leq Ch^{2},$$
that is, the error has order $s=2$.
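A minimal empirical confirmation of the $\mathcal{O}(h^{2})$ max-norm error, reusing the illustrative problem assembled above; halving $h$ should cut the error by about 4.

```python
# Convergence study for the centered-difference scheme on (2).
import numpy as np

def max_error(n, a=1.0, b=2.0):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    A = (np.diag((2*a/h**2 + b) * np.ones(n))
         + np.diag(-a/h**2 * np.ones(n - 1), 1)
         + np.diag(-a/h**2 * np.ones(n - 1), -1))
    U = np.linalg.solve(A, np.sin(np.pi*x))
    return np.max(np.abs(U - np.sin(np.pi*x) / (a*np.pi**2 + b)))

errs = [max_error(n) for n in (24, 49, 99)]   # h = 1/25, 1/50, 1/100
print([errs[k]/errs[k+1] for k in range(2)])  # ratios near 4: order 2
```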
Note that $A$ is an $M$-matrix and hence the discrete maximum principle applies. (See the January 05 667 exam for the definition of an $M$-matrix.)
Discrete Maximum Principle
Suppose $AU=F$. If $F\leq 0$ (componentwise), then $U\leq 0$.

Specifically, let $AU_{1}=F_{1}$ and $AU_{2}=F_{2}$ with $F=F_{1}-F_{2}\leq 0$. By linearity, $A(U_{1}-U_{2})=F\leq 0$, so $U_{1}-U_{2}\leq 0$, which implies $U_{1}\leq U_{2}$.
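A minimal numerical sketch consistent with these facts, assuming the same illustrative coefficients as above: for this $A$ the inverse is entrywise nonnegative, which is exactly what makes $F\leq 0$ force $U=A^{-1}F\leq 0$.

```python
# Check the M-matrix property: A^{-1} >= 0, and the norm bound 1/b.
import numpy as np

a, b, n = 1.0, 2.0, 50
h = 1.0 / (n + 1)
A = (np.diag((2*a/h**2 + b) * np.ones(n))
     + np.diag(-a/h**2 * np.ones(n - 1), 1)
     + np.diag(-a/h**2 * np.ones(n - 1), -1))

Ainv = np.linalg.inv(A)
print(np.all(Ainv >= -1e-10))                        # expect True: A^{-1} >= 0
print(np.linalg.norm(Ainv, np.inf) <= 1/b + 1e-9)    # consistent with ||A^{-1}|| <= 1/b
```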