Claim: There exists $\eta \in (x_0, x_2)$ such that

$$f''(\eta) - q''(\eta) = 0$$
Proof:
The interpolation polynomial may be expressed using divided difference coefficients, i.e.
$$q(x) = f[x_0] + f[x_0, x_1](x - x_0) + f[x_0, x_1, x_2](x - x_0)(x - x_1)$$
which implies
$$q''(x) = 2 f[x_0, x_1, x_2]$$
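The Newton form above can be checked numerically. The sketch below (the choice $f = \sin$ and the nodes $0, 0.5, 1$ are illustrative, not from the problem) computes the divided-difference coefficients by the standard recurrence and confirms that $q''(x) = 2f[x_0,x_1,x_2]$ stays within the range of $f''$ on the interval, as the mean value property predicts.

```python
# Sketch: divided differences for the quadratic Newton-form interpolant.
# f = sin and nodes [0, 0.5, 1] are illustrative choices.
import math

def divided_differences(xs, f):
    """Return [f[x0], f[x0,x1], f[x0,x1,x2], ...] via the in-place recurrence."""
    coef = [f(x) for x in xs]
    n = len(xs)
    for level in range(1, n):
        for i in range(n - 1, level - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - level])
    return coef

xs = [0.0, 0.5, 1.0]
c = divided_differences(xs, math.sin)

# q(x) = c0 + c1 (x - x0) + c2 (x - x0)(x - x1), so q''(x) = 2*c2 for every x.
q_second = 2 * c[2]
```

Since $f'' = -\sin$ on $[0,1]$, the value `q_second` must land between $-\sin(1)$ and $0$.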
In general, a divided difference coefficient equals a factorial-weighted derivative of $f$ evaluated at some point of the nodes' interval, i.e.
$$(\triangle) \qquad f[x_0, \ldots, x_n] = \frac{f^{(n)}(\xi)}{n!}, \qquad \xi \in I[x_0, \ldots, x_n]$$
Hence,
$$f[x_0, x_1, x_2] = \frac{f''(\eta)}{2!}, \qquad \eta \in [x_0, x_2]$$
which implies
$$q''(\eta) - f''(\eta) = 0$$
From the hint we know that
$$f''(\eta) - q''(\eta) = 0$$
which implies
$$f''(x) - q''(x) = f''(x) - q''(x) - f''(\eta) + q''(\eta)$$
Since $q(x)$ is quadratic, $q''(x)$ is constant, i.e. $q''(x) = K$ for all $x$.
Therefore,
$$f''(x) - q''(x) = f''(x) - q''(x) - f''(\eta) + q''(\eta) = f''(x) - f''(\eta)$$
By the fundamental theorem of calculus,
$$\begin{aligned} f''(x) - f''(\eta) &= \int_\eta^x f'''(t)\,dt \\ &\le \|f'''\|_\infty \, |x - \eta| \\ &\le 2h \, \|f'''\|_\infty \end{aligned}$$
Therefore,
$$\max_{x \in [x_0, x_2]} |f''(x) - q''(x)| \le \underbrace{2\|f'''\|_\infty}_{C} \, h^1$$
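As a sanity check on this bound (not part of the original argument), the sketch below uses the illustrative choice $f = \sin$ on $[x_0, x_0 + 2h]$, where $\|f'''\|_\infty = \max|{-\cos}| \le 1$, and verifies that the maximum error in the second derivative stays below $2h\|f'''\|_\infty$.

```python
# Sketch: verify max |f''(x) - q''(x)| <= 2h ||f'''||_inf for f = sin.
# The interval [0, 0.2] (h = 0.1) and the 201-point grid are illustrative.
import math

def max_second_deriv_error(x0, h, f=math.sin, f2=lambda x: -math.sin(x)):
    x1, x2 = x0 + h, x0 + 2 * h
    q_second = (f(x0) - 2 * f(x1) + f(x2)) / h**2   # q'' = 2 f[x0,x1,x2]
    grid = [x0 + 2 * h * t / 200 for t in range(201)]
    return max(abs(f2(x) - q_second) for x in grid)

h = 0.1
err = max_second_deriv_error(0.0, h)
bound = 2 * h * 1.0   # ||f'''||_inf <= 1 for sin
```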
Now suppose $h = x_2 - x_1 = x_1 - x_0$ and $f$ has 4 continuous derivatives. In this case, show
$$|f''(x_1) - q''(x_1)| \le C' h^\beta$$
where $\beta > \alpha$. What is $C'$ in terms of the derivatives of $f$?
Third Derivative of f Has a Zero
We know that $f[x_0, x_1, x_2, x] = 0$, because $p \in P_2$. Now, by $(\triangle)$ we can conclude that there exists $\eta^* \in I[x_0, x_1, x_2, x]$ such that $f'''(\eta^*) = 0$.
Application of Fundamental Theorem of Calculus (Twice)
$$\begin{aligned} f''(x_1) - f''(\eta) &= \int_\eta^{x_1} f^{(3)}(x)\,dx \\ &= \int_\eta^{x_1} f^{(3)}(x) - \underbrace{f^{(3)}(\eta^*)}_{0}\,dx \\ &= \int_\eta^{x_1} \int_{\eta^*}^{x} f^{(4)}(t)\,dt\,dx \\ &\le \|f^{(4)}\|_\infty \, |x - \eta^*| \, |x_1 - \eta| \\ &\le \underbrace{2\|f^{(4)}\|_\infty}_{C'} h^2, \end{aligned}$$
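This $h^2$ rate at the midpoint can be observed numerically. In the sketch below (the choices $f = \sin$ and $x_1 = 0.5$ are illustrative), $q''(x_1) = 2f[x_0,x_1,x_2]$ reduces to the central second difference for equally spaced nodes, and halving $h$ cuts the error by roughly a factor of 4.

```python
# Sketch: the error |f''(x1) - q''(x1)| at the midpoint is O(h^2)
# for equally spaced nodes x1 - h, x1, x1 + h (f = sin is illustrative).
import math

def second_diff_error(h, x1=0.5, f=math.sin):
    q_second = (f(x1 - h) - 2 * f(x1) + f(x1 + h)) / h**2  # = 2 f[x0,x1,x2]
    f_second = -math.sin(x1)                               # exact f'' for sin
    return abs(f_second - q_second)

errs = [second_diff_error(h) for h in (0.1, 0.05, 0.025)]
# Second-order convergence: halving h should divide the error by about 4.
ratios = [errs[i] / errs[i + 1] for i in range(2)]
```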
Find $\{p_0, p_1, p_2\}$ such that $p_i$ is a polynomial of degree $i$ and the set is orthogonal on $[0, \infty)$ with respect to the weight function $w(x) = e^{-x}$. (Note: $\int_0^\infty x^n e^{-x}\,dx = n!$, $0! = 1$.)
To find an orthogonal set $\{p_0, p_1, p_2\}$, use the Gram-Schmidt method.
Let $\{v_0 = 1,\ v_1 = x,\ v_2 = x^2\}$ be a basis of $P_2$.
Choose $p_0 = v_0 = 1$.
From Gram-Schmidt, we have
$$p_1 = \underbrace{x}_{v_1} - \frac{(v_1, p_0)}{(p_0, p_0)} \cdot \underbrace{1}_{p_0},$$

where
$$(v_1, p_0) = \int_0^\infty e^{-x} \cdot 1 \cdot x\,dx = 1$$
$$(p_0, p_0) = \int_0^\infty e^{-x} \cdot 1 \cdot 1\,dx = 1$$
Therefore
$$p_1 = x - 1$$
Proceeding with Gram-Schmidt, we have
$$p_2 = \underbrace{x^2}_{v_2} - \frac{(v_2, p_1)}{(p_1, p_1)} \cdot \underbrace{(x-1)}_{p_1} - \frac{(v_2, p_0)}{(p_0, p_0)} \cdot \underbrace{1}_{p_0}$$
where
$$\begin{aligned} (v_2, p_1) &= \int_0^\infty e^{-x} \cdot x^2 \cdot (x-1)\,dx \\ &= \underbrace{\int_0^\infty e^{-x} x^3\,dx}_{3!} - \underbrace{\int_0^\infty e^{-x} x^2\,dx}_{2!} \\ &= 4 \end{aligned}$$
$$\begin{aligned} (p_1, p_1) &= \int_0^\infty e^{-x} \cdot (x-1) \cdot (x-1)\,dx \\ &= \int_0^\infty e^{-x} \cdot (x^2 - 2x + 1)\,dx \\ &= \underbrace{\int_0^\infty e^{-x} x^2\,dx}_{2!} - 2\underbrace{\int_0^\infty e^{-x} x\,dx}_{1!} + \underbrace{\int_0^\infty e^{-x}\,dx}_{0!} \\ &= 1 \end{aligned}$$
$$(v_2, p_0) = \int_0^\infty e^{-x} \cdot x^2 \cdot 1\,dx = 2! = 2$$
$$(p_0, p_0) = \int_0^\infty e^{-x} \cdot 1 \cdot 1\,dx = 1$$
Therefore
$$p_2 = x^2 - 4x + 2$$
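Since every inner product here reduces to the identity $\int_0^\infty x^n e^{-x}\,dx = n!$, orthogonality can be verified exactly, without numerical quadrature. The sketch below expands each inner product monomial by monomial; the coefficient-list representation is an implementation choice, not from the original.

```python
# Sketch: verify that p0 = 1, p1 = x - 1, p2 = x^2 - 4x + 2 are orthogonal
# under w(x) = e^{-x}, using only the moment identity ∫_0^∞ x^n e^{-x} dx = n!.
from math import factorial

def inner(p, q):
    """(p, q) = ∫_0^∞ p(x) q(x) e^{-x} dx; p, q are coefficient lists [c0, c1, ...]."""
    total = 0
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            total += a * b * factorial(i + j)  # x^{i+j} integrates to (i+j)!
    return total

p0 = [1]
p1 = [-1, 1]      # x - 1
p2 = [2, -4, 1]   # x^2 - 4x + 2
```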
Derive the 2-point Gaussian formula
$$\int_0^\infty f(x) e^{-x}\,dx \approx w_1 f(x_1) + w_2 f(x_2)$$
i.e., find the weights and nodes.
The nodes $x_1$ and $x_2$ are the roots of the $n$th orthogonal polynomial, i.e.

$$p_2(x) = x^2 - 4x + 2$$
Applying the quadratic formula yields the roots:
$$x_1 = 2 - \sqrt{2}, \qquad x_2 = 2 + \sqrt{2}$$
The approximation is exact for polynomials of degree at most $2n - 1 = 3$. Hence, we have the following system of equations:
$$\int_0^\infty e^{-x}\,dx = 1 = w_1 \cdot \underbrace{1}_{f(x_1)} + w_2 \cdot \underbrace{1}_{f(x_2)}$$
$$\int_0^\infty x \, e^{-x}\,dx = 1 = w_1 \cdot \underbrace{(2 - \sqrt{2})}_{f(x_1)} + w_2 \cdot \underbrace{(2 + \sqrt{2})}_{f(x_2)}$$
Solving the system of equations by substitution yields the weights:
$$w_1 = \frac{2 + \sqrt{2}}{4}, \qquad w_2 = \frac{2 - \sqrt{2}}{4}$$
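The derived nodes and weights can be checked against the exactness requirement: the sketch below confirms the rule reproduces $\int_0^\infty x^k e^{-x}\,dx = k!$ for $k = 0, \ldots, 3$.

```python
# Sketch: the 2-point rule with the nodes/weights derived above should be
# exact for monomials x^k up to degree 2n - 1 = 3.
from math import sqrt, factorial

x1, x2 = 2 - sqrt(2), 2 + sqrt(2)
w1, w2 = (2 + sqrt(2)) / 4, (2 - sqrt(2)) / 4

def rule(f):
    return w1 * f(x1) + w2 * f(x2)

approx = [rule(lambda x, k=k: x**k) for k in range(4)]
exact = [factorial(k) for k in range(4)]   # ∫_0^∞ x^k e^{-x} dx = k!
```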
Write down the Jacobi iteration for solving $Ax = b$ in a way that it would be programmed on a computer.
Choose $x^{(0)}$

for $k = 0, 1, 2, \dots$

$$x^{(k+1)} = D^{-1}\left(b - (L+U)\,x^{(k)}\right)$$

<convergence condition>

end
where $A = D + L + U$, $D$ is diagonal, and $L$ and $U$ are strictly lower and strictly upper triangular, respectively.
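A minimal runnable sketch of this iteration is below, written componentwise so no explicit matrix inverse is formed; the $3\times 3$ test system, the max-norm stopping rule, and the tolerances are illustrative choices, not part of the problem.

```python
# Sketch: Jacobi iteration x^(k+1) = D^{-1}(b - (L+U) x^(k)), componentwise:
#   x_i^(k+1) = (b_i - sum_{j != i} a_ij x_j^(k)) / a_ii
def jacobi(A, b, x0, tol=1e-10, max_iter=500):
    n = len(A)
    x = list(x0)
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        # illustrative convergence condition: successive iterates agree in max norm
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [1.0, 2.0, 6.0]]   # strictly diagonally dominant, so Jacobi converges
b = [6.0, 8.0, 9.0]     # exact solution is (1, 1, 1)
x = jacobi(A, b, [0.0, 0.0, 0.0])
```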
The $n$ diagonal entries of $A$ are non-zero, since otherwise $D^{-1}$ would not exist. Therefore $A$ contains $m - n$ non-zero off-diagonal entries.
The computation during each iteration is given by
$$x^{(k+1)} = \underbrace{D^{-1}\Big(b - \underbrace{(L+U)\,x^{(k)}}_{m-n\ \text{multiplies}}\Big)}_{n\ \text{multiplies}}$$
Therefore there are $(m - n) + n = m$ multiplies in each iteration.
Assume that $A$ is strictly diagonally dominant: for $i = 1, \ldots, n$,
$$|a_{ii}| > \sum_{j=1,\ j \neq i}^{n} |a_{ij}|$$
Show that the Jacobi iteration converges for any initial guess $x^{(0)}$. (Hint: You may use Gerschgorin's theorem without proving it.)
Let $E := D^{-1}(L+U)$.
Theorem 8.2.1 [SB] states that $\rho(E) < 1$ if and only if the Jacobi iteration converges.
Matrix multiplication and the definitions of $D^{-1}$, $L$, and $U$ give the explicit entrywise values of $E$:

$$e_{ij} = \frac{a_{ij}}{a_{ii}} \quad \text{for } j \neq i, \qquad e_{ii} = 0$$
Then, using Gerschgorin's Theorem and diagonal dominance, we have the result.
$$|\lambda - \underbrace{e_{ii}}_{0}| \le \sum_{j=1,\ j \neq i}^{n} |e_{ij}| = \sum_{j=1,\ j \neq i}^{n} \frac{|a_{ij}|}{|a_{ii}|} < 1$$
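The Gerschgorin argument can be illustrated concretely. In the sketch below (the $3\times 3$ matrix is an arbitrary strictly diagonally dominant example, not from the problem), every Gerschgorin disc of $E = D^{-1}(L+U)$ is centered at $0$ with radius below $1$, which caps every eigenvalue of $E$, and hence $\rho(E)$, below $1$.

```python
# Sketch: for a strictly diagonally dominant A, the Gerschgorin disc radii of
# E = D^{-1}(L+U) are all < 1, so rho(E) < 1 and Jacobi converges.
A = [[10.0, 2.0, 3.0],
     [ 1.0, 8.0, 2.0],
     [ 2.0, 2.0, 9.0]]   # illustrative strictly diagonally dominant matrix
n = len(A)

# e_ij = a_ij / a_ii for j != i and e_ii = 0, so the disc for row i is
# centered at 0 with radius sum_{j != i} |a_ij| / |a_ii|.
radii = [sum(abs(A[i][j]) / abs(A[i][i]) for j in range(n) if j != i)
         for i in range(n)]
```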