# Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/January 2000

## Problem 1

 Let ${\displaystyle f\!\,}$  be a function with 3 continuous derivatives. Let ${\displaystyle q\!\,}$  be a quadratic polynomial that interpolates ${\displaystyle f\!\,}$  at ${\displaystyle x_{0}<x_{1}<x_{2}\!\,}$ . Let ${\displaystyle h=\max(x_{1}-x_{0},x_{2}-x_{1})\!\,}$  and ${\displaystyle \max _{x_{0}\leq x\leq x_{2}}|f'''(x)|=K\!\,}$ .

## Problem 1a

 Show that ${\displaystyle \max _{x_{0}\leq x\leq x_{2}}|f''(x)-q''(x)|\leq Ch^{\alpha }\!\,}$ , where ${\displaystyle C>0\!\,}$  depends only on ${\displaystyle K\!\,}$ , and determine ${\displaystyle \alpha \!\,}$ . (Hint: the key is to prove that ${\displaystyle f''(x)-q''(x)\!\,}$  vanishes at some point in ${\displaystyle [x_{0},x_{2}]\!\,}$ . The result can then be obtained by integration.)

## Solution 1a

### Proof of Hint

Claim: There exists ${\displaystyle \eta \in (x_{0},x_{2})\!\,}$  such that ${\displaystyle f^{\prime \prime }(\eta )-q^{\prime \prime }(\eta )=0\!\,}$

Proof: The interpolation polynomial may be expressed using divided difference coefficients, i.e.

${\displaystyle q(x)=f[x_{0}]+f[x_{0},x_{1}](x-x_{0})+f[x_{0},x_{1},x_{2}](x-x_{0})(x-x_{1})\!\,}$

which implies

${\displaystyle q^{\prime \prime }(x)=2f[x_{0},x_{1},x_{2}]\!\,}$

In general, a divided difference equals a factorially weighted derivative of ${\displaystyle f\!\,}$  evaluated at some intermediate point, i.e.

${\displaystyle (\triangle )\qquad f[x_{0},\ldots ,x_{n}]={\frac {f^{(n)}(\xi )}{n!}}\qquad \xi \in I[x_{0},\ldots ,x_{n}]\!\,}$

Hence,

${\displaystyle f[x_{0},x_{1},x_{2}]={\frac {f^{\prime \prime }(\eta )}{2!}}\qquad \eta \in (x_{0},x_{2})\!\,}$

which implies

${\displaystyle q^{\prime \prime }(\eta )-f^{\prime \prime }(\eta )=0\!\,}$
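The claim can be checked numerically. The sketch below (an illustration, not part of the proof) uses a hypothetical choice ${\displaystyle f(x)=x^{3}\!\,}$ on unevenly spaced nodes and verifies that the constant ${\displaystyle q^{\prime \prime }=2f[x_{0},x_{1},x_{2}]\!\,}$ equals ${\displaystyle f^{\prime \prime }(\eta )\!\,}$ for some ${\displaystyle \eta \!\,}$ inside the node interval:

```python
def divided_difference(xs, ys):
    """Recursively compute the divided difference f[x_0, ..., x_n]."""
    if len(xs) == 1:
        return ys[0]
    return (divided_difference(xs[1:], ys[1:])
            - divided_difference(xs[:-1], ys[:-1])) / (xs[-1] - xs[0])

f = lambda x: x ** 3
xs = [0.0, 0.5, 2.0]                                   # three distinct nodes
qpp = 2 * divided_difference(xs, [f(x) for x in xs])   # q''(x) is this constant

# For f = x^3, f''(x) = 6x, so eta = qpp / 6 solves f''(eta) = q''(eta).
eta = qpp / 6
assert xs[0] < eta < xs[-1]    # eta lies in (x0, x2), as claimed
```

For this cubic the second divided difference is exactly ${\displaystyle x_{0}+x_{1}+x_{2}\!\,}$, so `qpp` is `5.0` and `eta` is well inside ${\displaystyle (x_{0},x_{2})\!\,}$.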

### Application of Hint

From the hint we know that

${\displaystyle f^{\prime \prime }(\eta )-q^{\prime \prime }(\eta )=0\!\,}$

which implies

${\displaystyle f^{\prime \prime }(x)-q^{\prime \prime }(x)=f^{\prime \prime }(x)-q^{\prime \prime }(x)-f^{\prime \prime }(\eta )+q^{\prime \prime }(\eta )\!\,}$

Since ${\displaystyle q(x)\!\,}$  is quadratic, ${\displaystyle q^{\prime \prime }(x)=2f[x_{0},x_{1},x_{2}]\!\,}$  is the same constant for all ${\displaystyle x\!\,}$

Therefore,

${\displaystyle f^{\prime \prime }(x)-q^{\prime \prime }(x)=f^{\prime \prime }(x)-q^{\prime \prime }(x)-f^{\prime \prime }(\eta )+q^{\prime \prime }(\eta )=f^{\prime \prime }(x)-f^{\prime \prime }(\eta )\!\,}$

By the fundamental theorem of calculus,

{\displaystyle {\begin{aligned}|f^{\prime \prime }(x)-f^{\prime \prime }(\eta )|&=\left|\int _{\eta }^{x}f^{\prime \prime \prime }(t)\,dt\right|\\&\leq \|f^{\prime \prime \prime }\|_{\infty }|x-\eta |\\&\leq 2hK\end{aligned}}}

since ${\displaystyle |x-\eta |\leq x_{2}-x_{0}\leq 2h\!\,}$  and ${\displaystyle \|f^{\prime \prime \prime }\|_{\infty }=K\!\,}$ .

Therefore,

${\displaystyle \max _{x\in [x_{0},x_{2}]}|f^{\prime \prime }(x)-q^{\prime \prime }(x)|\leq \underbrace {2K} _{C}h^{1}\!\,}$ , i.e. ${\displaystyle \alpha =1\!\,}$ .
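The first-order bound can be observed numerically. A minimal sketch, assuming the illustrative choice ${\displaystyle f=\sin \!\,}$ (so ${\displaystyle K=\max |f'''|=1\!\,}$) and uneven node spacings with maximum gap ${\displaystyle h\!\,}$, which samples ${\displaystyle |f''-q''|\!\,}$ over ${\displaystyle [x_{0},x_{2}]\!\,}$ and asserts the bound ${\displaystyle 2Kh\!\,}$:

```python
import math

def divided_difference(xs, ys):
    """Recursively compute the divided difference f[x_0, ..., x_n]."""
    if len(xs) == 1:
        return ys[0]
    return (divided_difference(xs[1:], ys[1:])
            - divided_difference(xs[:-1], ys[:-1])) / (xs[-1] - xs[0])

f, fpp, K = math.sin, lambda x: -math.sin(x), 1.0   # |f'''| = |cos| <= 1 = K

for h in (0.4, 0.2, 0.1):
    xs = [1.0, 1.0 + 0.7 * h, 1.0 + 1.7 * h]        # uneven gaps 0.7h and h
    qpp = 2 * divided_difference(xs, [f(x) for x in xs])   # the constant q''
    # sample |f''(x) - q''(x)| on [x0, x2]
    err = max(abs(fpp(xs[0] + t * (xs[-1] - xs[0]) / 200) - qpp)
              for t in range(201))
    assert err <= 2 * K * h                         # the bound C h with C = 2K
```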

## Problem 1b

 Now suppose ${\displaystyle h=x_{2}-x_{1}=x_{1}-x_{0}\!\,}$  and ${\displaystyle f\!\,}$  has 4 continuous derivatives. In this case show ${\displaystyle |f''(x_{1})-q''(x_{1})|\leq C'h^{\beta }\!\,}$  where ${\displaystyle \beta >\alpha \!\,}$ . What is ${\displaystyle C'\!\,}$  in terms of the derivatives of ${\displaystyle f\!\,}$ ?

## Solution 1b

### Taylor Expansion About the Midpoint

Since the nodes are equally spaced, the constant value of ${\displaystyle q^{\prime \prime }\!\,}$  is a central second difference:

${\displaystyle q^{\prime \prime }(x_{1})=2f[x_{0},x_{1},x_{2}]={\frac {f(x_{0})-2f(x_{1})+f(x_{2})}{h^{2}}}\!\,}$

Expanding ${\displaystyle f(x_{1}\pm h)\!\,}$  about ${\displaystyle x_{1}\!\,}$  with fourth-order remainder gives, for some ${\displaystyle \xi _{\pm }\in (x_{0},x_{2})\!\,}$ ,

${\displaystyle f(x_{1}\pm h)=f(x_{1})\pm hf^{\prime }(x_{1})+{\frac {h^{2}}{2}}f^{\prime \prime }(x_{1})\pm {\frac {h^{3}}{6}}f^{\prime \prime \prime }(x_{1})+{\frac {h^{4}}{24}}f^{(4)}(\xi _{\pm })\!\,}$

Adding the two expansions cancels the odd-order terms:

${\displaystyle f(x_{0})-2f(x_{1})+f(x_{2})=h^{2}f^{\prime \prime }(x_{1})+{\frac {h^{4}}{24}}\left(f^{(4)}(\xi _{-})+f^{(4)}(\xi _{+})\right)\!\,}$

Dividing by ${\displaystyle h^{2}\!\,}$  and bounding the remainder,

${\displaystyle |f^{\prime \prime }(x_{1})-q^{\prime \prime }(x_{1})|\leq \underbrace {{\frac {\|f^{(4)}\|_{\infty }}{12}}} _{C'}h^{2}\!\,}$

so ${\displaystyle \beta =2>1=\alpha \!\,}$  and ${\displaystyle C'={\frac {1}{12}}\max _{x_{0}\leq x\leq x_{2}}|f^{(4)}(x)|\!\,}$ .
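The quadratic rate at the midpoint can also be seen numerically. With the illustrative choice ${\displaystyle f=\sin \!\,}$ (so ${\displaystyle \max |f^{(4)}|=1\!\,}$), the symmetric Taylor expansion about ${\displaystyle x_{1}\!\,}$ gives the sharp constant ${\displaystyle h^{2}/12\!\,}$, and the sketch below asserts exactly that bound:

```python
import math

def divided_difference(xs, ys):
    """Recursively compute the divided difference f[x_0, ..., x_n]."""
    if len(xs) == 1:
        return ys[0]
    return (divided_difference(xs[1:], ys[1:])
            - divided_difference(xs[:-1], ys[:-1])) / (xs[-1] - xs[0])

f, fpp = math.sin, lambda x: -math.sin(x)     # max |f''''| = max |sin| = 1
x1 = 1.0

for h in (0.4, 0.2, 0.1):
    xs = [x1 - h, x1, x1 + h]                 # equally spaced nodes
    qpp = 2 * divided_difference(xs, [f(x) for x in xs])
    err = abs(fpp(x1) - qpp)
    assert err <= h ** 2 / 12      # C' h^2 with C' = max|f''''| / 12 = 1/12
```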

## Problem 2a

 Find ${\displaystyle \{p_{0},p_{1},p_{2}\}\!\,}$  such that ${\displaystyle p_{i}\!\,}$  is a polynomial of degree ${\displaystyle i\!\,}$  and this set is orthogonal on ${\displaystyle [0,\infty )\!\,}$  with respect to the weight function ${\displaystyle w(x)=e^{-x}\!\,}$ . (Note: ${\displaystyle \int _{0}^{\infty }x^{n}e^{-x}dx=n!\!\,}$ , ${\displaystyle 0!=1.\!\,}$ )

## Solution 2a

### Apply Gram Schmidt

To find orthogonal ${\displaystyle \{p_{0},p_{1},p_{2}\}\!\,}$  use the Gram Schmidt method.

Let ${\displaystyle \{v_{0}=1,v_{1}=x,v_{2}=x^{2}\}\!\,}$  be a basis of ${\displaystyle P_{2}\!\,}$ .

### Calculate p_0

Choose ${\displaystyle p_{0}=v_{0}=1\!\,}$ .

### Calculate p_1

From Gram Schmidt, we have

${\displaystyle p_{1}=\underbrace {x} _{v_{1}}-{\frac {(v_{1},p_{0})}{(p_{0},p_{0})}}\cdot \underbrace {1} _{p_{0}}\!\,}$ , where

${\displaystyle (v_{1},p_{0})=\int _{0}^{\infty }e^{-x}\cdot 1\cdot xdx=1\!\,}$

${\displaystyle (p_{0},p_{0})=\int _{0}^{\infty }e^{-x}\cdot 1\cdot 1dx=1\!\,}$

Therefore ${\displaystyle p_{1}=x-1\!\,}$

### Calculate p_2

Proceeding with Gram Schmidt, we have

${\displaystyle p_{2}=\underbrace {x^{2}} _{v_{2}}-{\frac {(v_{2},p_{1})}{(p_{1},p_{1})}}\cdot \underbrace {(x-1)} _{p_{1}}-{\frac {(v_{2},p_{0})}{(p_{0},p_{0})}}\cdot \underbrace {1} _{p_{0}}\!\,}$  where

{\displaystyle {\begin{aligned}(v_{2},p_{1})&=\int _{0}^{\infty }e^{-x}\cdot x^{2}\cdot (x-1)dx\\&=\underbrace {\int _{0}^{\infty }e^{-x}\cdot x^{3}dx} _{3!}-\underbrace {\int _{0}^{\infty }e^{-x}\cdot x^{2}dx} _{2!}\\&=4\end{aligned}}}

{\displaystyle {\begin{aligned}(p_{1},p_{1})&=\int _{0}^{\infty }e^{-x}\cdot (x-1)\cdot (x-1)dx\\&=\int _{0}^{\infty }e^{-x}\cdot (x^{2}-2x+1)dx\\&=\underbrace {\int _{0}^{\infty }e^{-x}x^{2}dx} _{2!}-2\underbrace {\int _{0}^{\infty }e^{-x}\cdot xdx} _{1!}+\underbrace {\int _{0}^{\infty }e^{-x}\cdot 1dx} _{0!}\\&=1\end{aligned}}}

${\displaystyle (v_{2},p_{0})=\int _{0}^{\infty }e^{-x}\cdot x^{2}\cdot 1dx=2!=2\!\,}$

${\displaystyle (p_{0},p_{0})=\int _{0}^{\infty }e^{-x}\cdot 1\cdot 1dx=1\!\,}$

Therefore

${\displaystyle p_{2}=x^{2}-4x+2\!\,}$
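The orthogonality of ${\displaystyle \{p_{0},p_{1},p_{2}\}\!\,}$ (and the inner products computed above) can be checked directly from the moment formula ${\displaystyle \int _{0}^{\infty }x^{n}e^{-x}dx=n!\!\,}$. A small sketch representing each polynomial by its coefficient list:

```python
from math import factorial
from itertools import product

# Coefficient lists: [c0, c1, c2] stands for c0 + c1*x + c2*x^2.
p0, p1, p2 = [1], [-1, 1], [2, -4, 1]

def inner(p, q):
    """(p, q) = integral_0^inf p(x) q(x) e^{-x} dx, via the moments n!."""
    return sum(a * b * factorial(i + j)
               for (i, a), (j, b) in product(enumerate(p), enumerate(q)))

assert inner(p0, p1) == 0 and inner(p0, p2) == 0 and inner(p1, p2) == 0
assert inner(p0, p0) == 1 and inner(p1, p1) == 1   # matches the values above
```

(Up to normalization and sign, these are the Laguerre polynomials, the classical orthogonal family for this weight.)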

## Problem 2b

 Derive the 2-point Gaussian formula ${\displaystyle \int _{0}^{\infty }f(x)e^{-x}dx\approx w_{1}f(x_{1})+w_{2}f(x_{2})\!\,}$  i.e. find the weights and nodes

## Solution 2b

### Find the Nodes

The nodes ${\displaystyle x_{1}\!\,}$  and ${\displaystyle x_{2}\!\,}$  are the roots of the ${\displaystyle n\!\,}$ th orthogonal polynomial i.e. ${\displaystyle p_{2}(x)=x^{2}-4x+2\!\,}$

Applying the quadratic formula yields the roots:

${\displaystyle x_{1}=2-{\sqrt {2}}\!\,}$

${\displaystyle x_{2}=2+{\sqrt {2}}\!\,}$

### Find the Weights

The approximation is exact for polynomials of degree at most ${\displaystyle 2n-1=3\!\,}$ . Hence, we have the following system of equations:

${\displaystyle \int _{0}^{\infty }e^{-x}dx=1=w_{1}\cdot \underbrace {1} _{f(x_{1})}+w_{2}\cdot \underbrace {1} _{f(x_{2})}\!\,}$

${\displaystyle \int _{0}^{\infty }x\cdot e^{-x}dx=1=w_{1}\cdot \underbrace {(2-{\sqrt {2}})} _{f(x_{1})}+w_{2}\cdot \underbrace {(2+{\sqrt {2}})} _{f(x_{2})}\!\,}$

Solving the system of equations by substitution yields the weights:

${\displaystyle w_{1}={\frac {2+{\sqrt {2}}}{4}}\!\,}$

${\displaystyle w_{2}={\frac {2-{\sqrt {2}}}{4}}\!\,}$
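As a sanity check, the rule should reproduce the moments ${\displaystyle \int _{0}^{\infty }x^{n}e^{-x}dx=n!\!\,}$ exactly for ${\displaystyle n=0,\ldots ,3\!\,}$ but not for ${\displaystyle n=4\!\,}$. A short sketch:

```python
from math import factorial, sqrt

x1, x2 = 2 - sqrt(2), 2 + sqrt(2)                 # nodes: roots of p2
w1, w2 = (2 + sqrt(2)) / 4, (2 - sqrt(2)) / 4     # weights from the 2x2 system

# Exactness up to degree 2n - 1 = 3: the rule reproduces the moments n!.
for n in range(4):
    assert abs(w1 * x1 ** n + w2 * x2 ** n - factorial(n)) < 1e-12

# Degree 4 is no longer exact: the rule gives 20, while the moment is 4! = 24.
assert abs(w1 * x1 ** 4 + w2 * x2 ** 4 - 24) > 1
```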

## Problem 3

 Let ${\displaystyle A\!\,}$  be an ${\displaystyle n\times n\!\,}$  nonsingular matrix, and consider the linear system ${\displaystyle Ax=b\!\,}$

## Problem 3a

 Write down the Jacobi iteration for solving ${\displaystyle Ax=b\!\,}$  in a way that it would be programmed on a computer

## Solution 3a

Choose ${\displaystyle x^{(0)}\!\,}$

for ${\displaystyle k=0,1,2,\dots \!\,}$

${\displaystyle x^{(k+1)}=D^{-1}(b-(L+U)x^{(k)})\!\,}$
check <convergence condition>

end

Where ${\displaystyle A=D+L+U\!\,}$ , ${\displaystyle D\!\,}$  is the diagonal of ${\displaystyle A\!\,}$  and ${\displaystyle L{\mbox{ and }}U\!\,}$  are its strictly lower and strictly upper triangular parts, respectively.
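As a concrete sketch of the loop above, here is a plain-Python dense implementation; the ${\displaystyle 3\times 3\!\,}$ matrix, right-hand side, and tolerance are illustrative choices, not part of the exam. Writing the update row by row avoids forming ${\displaystyle D^{-1}\!\,}$, ${\displaystyle L\!\,}$, and ${\displaystyle U\!\,}$ explicitly:

```python
def jacobi(A, b, x0, tol=1e-10, max_iter=500):
    """Jacobi iteration x^(k+1) = D^{-1} (b - (L+U) x^(k)) on dense lists."""
    n = len(A)
    x = list(x0)
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
                 / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:  # convergence test
            return x_new
        x = x_new
    return x

# Strictly diagonally dominant example, so convergence is guaranteed (cf. 3c).
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [0.0, 1.0, 3.0]]
b = [6.0, 8.0, 4.0]
x = jacobi(A, b, [0.0, 0.0, 0.0])          # exact solution is [1, 1, 1]
residual = max(abs(sum(A[i][j] * x[j] for j in range(3)) - b[i])
               for i in range(3))
assert residual < 1e-8
```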

## Problem 3b

 Suppose ${\displaystyle A\!\,}$  has ${\displaystyle m\!\,}$  non-zero elements with ${\displaystyle m<n^{2}\!\,}$ . How many operations per iteration does the Jacobi iteration take?

## Solution 3b

For the Jacobi iteration to be defined, the ${\displaystyle n\!\,}$  diagonal entries of ${\displaystyle A\!\,}$  must be non-zero, since otherwise ${\displaystyle D^{-1}\!\,}$  would not exist.

Therefore ${\displaystyle A\!\,}$  contains ${\displaystyle m-n\!\,}$  off-diagonal non-zero entries.

The computation during each iteration is given by

${\displaystyle x^{(i+1)}=\underbrace {D^{-1}(b-\underbrace {(L+U)x^{(i)}} _{m-n{\mbox{ multiplies}}})} _{n{\mbox{ multiplies}}}\!\,}$

Therefore there are ${\displaystyle (m-n)+n=m\!\,}$  multiplies in each iteration.

## Problem 3c

 Assume that ${\displaystyle A\!\,}$  is strictly diagonally dominant: for ${\displaystyle i=1,\ldots ,n,\!\,}$  ${\displaystyle |a_{ii}|>\sum _{j=1,j\neq i}^{n}|a_{ij}|\!\,}$ . Show that the Jacobi iteration converges for any guess ${\displaystyle x^{(0)}\!\,}$ . (Hint: You may use Gerschgorin's theorem without proving it.)

## Solution 3c

Let ${\displaystyle E:=D^{-1}(L+U)\,\!}$ .

Theorem 8.2.1 [SB] states that ${\displaystyle \rho (E)<1\,\!}$  if and only if the Jacobi iteration converges.

Matrix multiplication and the definitions of ${\displaystyle D^{-1},L,U\!\,}$  gives the explicit entrywise value of ${\displaystyle E\!\,}$

${\displaystyle e_{ij}={\frac {a_{ij}}{a_{ii}}}\,\!}$  for ${\displaystyle j\neq i\!\,}$  and ${\displaystyle e_{ii}=0\,\!}$

Then Gerschgorin's theorem together with strict diagonal dominance gives, for every eigenvalue ${\displaystyle \lambda \!\,}$  of ${\displaystyle E\!\,}$  and some row ${\displaystyle i\!\,}$ ,

${\displaystyle |\lambda -\underbrace {e_{ii}} _{0}|\leq \sum _{\underset {j\neq i}{j=1}}^{n}|e_{ij}|=\sum _{\underset {j\neq i}{j=1}}^{n}{\frac {|a_{ij}|}{|a_{ii}|}}<1\!\,}$

Hence ${\displaystyle \rho (E)<1\!\,}$  and the Jacobi iteration converges for any ${\displaystyle x^{(0)}\!\,}$ .
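A small numeric illustration of this bound, using a matrix chosen here only as an example of strict diagonal dominance. The row sums of ${\displaystyle |e_{ij}|\!\,}$ are the Gerschgorin radii of ${\displaystyle E\!\,}$, and their maximum is ${\displaystyle \|E\|_{\infty }\geq \rho (E)\!\,}$:

```python
# Gerschgorin radii of E = D^{-1}(L+U); strict diagonal dominance of A
# forces every radius below 1, hence rho(E) <= ||E||_inf < 1.
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [0.0, 1.0, 3.0]]
n = len(A)
E = [[0.0 if i == j else A[i][j] / A[i][i] for j in range(n)]
     for i in range(n)]
radii = [sum(abs(E[i][j]) for j in range(n)) for i in range(n)]
assert max(radii) < 1     # every eigenvalue of E satisfies |lambda| < 1
```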