# Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/January 2000

## Problem 1

 Let $f\!\,$ be a function with 3 continuous derivatives. Let $q\!\,$ be a quadratic polynomial that interpolates $f\!\,$ at $x_{0},x_{1},x_{2}\!\,$ . Let $h=\max(x_{1}-x_{0},x_{2}-x_{1})\!\,$ and $\max _{x_{0}\leq x\leq x_{2}}|f'''(x)|=K\!\,$ .

## Problem 1a

 Show that $\max _{x_{0}\leq x\leq x_{2}}|f''(x)-q''(x)|\leq Ch^{\alpha }\!\,$ , where $C>0\!\,$ depends only on $K\!\,$ , and determine $\alpha \!\,$ . (Hint: the key to this is to prove that $f''(x)-q''(x)\!\,$ vanishes at some point in $[x_{0},x_{2}]\!\,$ . The result can then be obtained by integration.)

## Solution 1a

### Proof of Hint

Claim: There exists $\eta \in (x_{0},x_{2})\!\,$  such that $f^{\prime \prime }(\eta )-q^{\prime \prime }(\eta )=0\!\,$

Proof: The interpolation polynomial may be expressed using divided difference coefficients, i.e.

$q(x)=f[x_{0}]+f[x_{0},x_{1}](x-x_{0})+f[x_{0},x_{1},x_{2}](x-x_{0})(x-x_{1})\!\,$

which implies

$q^{\prime \prime }(x)=2f[x_{0},x_{1},x_{2}]\!\,$

In general, a divided difference coefficient may be expressed as a factorial-weighted value of a derivative of $f\!\,$ , i.e.

$(\triangle )\qquad f[x_{0},\ldots ,x_{n}]={\frac {f^{(n)}(\xi )}{n!}}\qquad \xi \in I[x_{0},\ldots ,x_{n}]\!\,$

Hence,

$f[x_{0},x_{1},x_{2}]={\frac {f^{\prime \prime }(\eta )}{2!}}\qquad \eta \in [x_{0},x_{2}]\!\,$

which implies

$q^{\prime \prime }(\eta )-f^{\prime \prime }(\eta )=0\!\,$

### Application of Hint

From the hint we know that

$f^{\prime \prime }(\eta )-q^{\prime \prime }(\eta )=0\!\,$

which implies

$f^{\prime \prime }(x)-q^{\prime \prime }(x)=f^{\prime \prime }(x)-q^{\prime \prime }(x)-f^{\prime \prime }(\eta )+q^{\prime \prime }(\eta )\!\,$

Since $q(x)\!\,$  is quadratic, $q^{\prime \prime }(x)\!\,$  is constant, i.e. $q^{\prime \prime }(x)=q^{\prime \prime }(\eta )\!\,$  for all $x\!\,$

Therefore,

$f^{\prime \prime }(x)-q^{\prime \prime }(x)=f^{\prime \prime }(x)-q^{\prime \prime }(x)-f^{\prime \prime }(\eta )+q^{\prime \prime }(\eta )=f^{\prime \prime }(x)-f^{\prime \prime }(\eta )\!\,$

By the fundamental theorem of calculus,

{\begin{aligned}|f^{\prime \prime }(x)-f^{\prime \prime }(\eta )|&=\left|\int _{\eta }^{x}f^{\prime \prime \prime }(t)dt\right|\\&\leq \|f^{\prime \prime \prime }\|_{\infty }|x-\eta |\\&\leq 2h\|f^{\prime \prime \prime }\|_{\infty }\end{aligned}}

since $|x-\eta |\leq x_{2}-x_{0}\leq 2h\!\,$ .

Therefore,

$\max _{x\in [x_{0},x_{2}]}|f^{\prime \prime }(x)-q^{\prime \prime }(x)|\leq \underbrace {2\|f^{\prime \prime \prime }\|_{\infty }} _{C}h^{1}\!\,$

i.e. $C=2K\!\,$  and $\alpha =1\!\,$ .
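The first-order rate can be illustrated numerically. The sketch below is not part of the exam solution; the choice $f=\sin \!\,$  and the uneven node spacing are arbitrary. It uses $q^{\prime \prime }=2f[x_{0},x_{1},x_{2}]\!\,$  and checks that halving $h\!\,$  roughly halves the error:

```python
import math

def second_deriv_error(h):
    # Unevenly spaced nodes so that h = max(x1 - x0, x2 - x1)
    x0, x1, x2 = 0.0, 0.6 * h, 1.6 * h     # gaps: 0.6h and h
    f = math.sin
    # q''(x) = 2 f[x0, x1, x2] (constant, since q is quadratic)
    dd01 = (f(x1) - f(x0)) / (x1 - x0)
    dd12 = (f(x2) - f(x1)) / (x2 - x1)
    q2 = 2.0 * (dd12 - dd01) / (x2 - x0)
    # max |f''(x) - q''(x)| on a grid over [x0, x2]; f'' = -sin for f = sin
    return max(abs(-math.sin(x0 + t * (x2 - x0) / 200) - q2)
               for t in range(201))

# Halving h should roughly halve the error (alpha = 1)
e1, e2 = second_deriv_error(0.2), second_deriv_error(0.1)
ratio = e1 / e2
```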

## Problem 1b

 Now suppose $h=x_{2}-x_{1}=x_{1}-x_{0}\!\,$ and $f\!\,$ has 4 continuous derivatives. In this case show $|f''(x_{1})-q''(x_{1})|\leq C'h^{\beta }\!\,$ where $\beta >\alpha \!\,$ . What is $C'\!\,$ in terms of the derivatives of $f\!\,$ ?

## Solution 1b

### Central Difference Representation of q''(x_1)

Since $q^{\prime \prime }(x)=2f[x_{0},x_{1},x_{2}]\!\,$  and the nodes are equally spaced,

$q^{\prime \prime }(x_{1})=2f[x_{0},x_{1},x_{2}]={\frac {f(x_{0})-2f(x_{1})+f(x_{2})}{h^{2}}}\!\,$

### Taylor Expansion About x_1

Since $f\!\,$  has 4 continuous derivatives, Taylor's theorem gives

$f(x_{1}\pm h)=f(x_{1})\pm hf^{\prime }(x_{1})+{\frac {h^{2}}{2}}f^{\prime \prime }(x_{1})\pm {\frac {h^{3}}{6}}f^{\prime \prime \prime }(x_{1})+{\frac {h^{4}}{24}}f^{(4)}(\xi _{\pm })\qquad \xi _{\pm }\in (x_{0},x_{2})\!\,$

Adding the two expansions cancels the odd-order terms:

$f(x_{0})+f(x_{2})=2f(x_{1})+h^{2}f^{\prime \prime }(x_{1})+{\frac {h^{4}}{24}}\left(f^{(4)}(\xi _{+})+f^{(4)}(\xi _{-})\right)\!\,$

Substituting into the central difference representation of $q^{\prime \prime }(x_{1})\!\,$  yields

$f^{\prime \prime }(x_{1})-q^{\prime \prime }(x_{1})=-{\frac {h^{2}}{24}}\left(f^{(4)}(\xi _{+})+f^{(4)}(\xi _{-})\right)\!\,$

and hence

$|f^{\prime \prime }(x_{1})-q^{\prime \prime }(x_{1})|\leq \underbrace {{\frac {\|f^{(4)}\|_{\infty }}{12}}} _{C'}h^{2}\!\,$

so $\beta =2>1=\alpha \!\,$  and $C'={\frac {1}{12}}\max _{x_{0}\leq x\leq x_{2}}|f^{(4)}(x)|\!\,$ .
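The second-order rate at the midpoint can also be checked numerically (an illustration with the arbitrary choice $f=\exp \!\,$ , not part of the solution), using $q^{\prime \prime }(x_{1})=2f[x_{0},x_{1},x_{2}]=(f(x_{0})-2f(x_{1})+f(x_{2}))/h^{2}\!\,$  for equally spaced nodes:

```python
import math

def midpoint_error(h):
    # Equally spaced nodes centred at x1
    x1 = 0.5
    x0, x2 = x1 - h, x1 + h
    f = math.exp
    # q''(x1) = 2 f[x0, x1, x2] = (f(x0) - 2 f(x1) + f(x2)) / h^2
    q2 = (f(x0) - 2.0 * f(x1) + f(x2)) / h**2
    return abs(f(x1) - q2)             # f'' = exp for f = exp

# Halving h should divide the error by about 4 (beta = 2)
e1, e2 = midpoint_error(0.2), midpoint_error(0.1)
ratio = e1 / e2
```

For this choice the error also tracks the constant $C'=\|f^{(4)}\|_{\infty }/12\!\,$  closely, since the expansion of the central difference has leading term $h^{2}f^{(4)}(x_{1})/12\!\,$ .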

## Problem 2a

 Find $\{p_{0},p_{1},p_{2}\}\!\,$ such that $p_{i}\!\,$ is a polynomial of degree $i\!\,$ and this set is orthogonal on $[0,\infty )\!\,$ with respect to the weight function $w(x)=e^{-x}\!\,$ . (Note: $\int _{0}^{\infty }x^{n}e^{-x}dx=n!\!\,$ , $0!=1.\!\,$ )

## Solution 2a

### Apply Gram Schmidt

To find orthogonal $\{p_{0},p_{1},p_{2}\}\!\,$  use the Gram Schmidt method.

Let $\{v_{0}=1,v_{1}=x,v_{2}=x^{2}\}\!\,$  be a basis of $P_{2}\!\,$ .

### Calculate p_0

Choose $p_{0}=v_{0}=1\!\,$ .

### Calculate p_1

From Gram Schmidt, we have

$p_{1}=\underbrace {x} _{v_{1}}-{\frac {(v_{1},p_{0})}{(p_{0},p_{0})}}\cdot \underbrace {1} _{p_{0}}\!\,$ , where

$(v_{1},p_{0})=\int _{0}^{\infty }e^{-x}\cdot 1\cdot xdx=1\!\,$

$(p_{0},p_{0})=\int _{0}^{\infty }e^{-x}\cdot 1\cdot 1dx=1\!\,$

Therefore $p_{1}=x-1\!\,$

### Calculate p_2

Proceeding with Gram Schmidt, we have

$p_{2}=\underbrace {x^{2}} _{v_{2}}-{\frac {(v_{2},p_{1})}{(p_{1},p_{1})}}\cdot \underbrace {(x-1)} _{p_{1}}-{\frac {(v_{2},p_{0})}{(p_{0},p_{0})}}\cdot \underbrace {1} _{p_{0}}\!\,$  where

{\begin{aligned}(v_{2},p_{1})&=\int _{0}^{\infty }e^{-x}\cdot x^{2}\cdot (x-1)dx\\&=\underbrace {\int _{0}^{\infty }e^{-x}\cdot x^{3}dx} _{3!}-\underbrace {\int _{0}^{\infty }e^{-x}\cdot x^{2}dx} _{2!}\\&=4\end{aligned}}

{\begin{aligned}(p_{1},p_{1})&=\int _{0}^{\infty }e^{-x}\cdot (x-1)\cdot (x-1)dx\\&=\int _{0}^{\infty }e^{-x}\cdot (x^{2}-2x+1)dx\\&=\underbrace {\int _{0}^{\infty }e^{-x}x^{2}dx} _{2!}-2\underbrace {\int _{0}^{\infty }e^{-x}\cdot xdx} _{1!}+\underbrace {\int _{0}^{\infty }e^{-x}\cdot 1dx} _{0!}\\&=1\end{aligned}}

$(v_{2},p_{0})=\int _{0}^{\infty }e^{-x}\cdot x^{2}\cdot 1dx=2!=2\!\,$

$(p_{0},p_{0})=\int _{0}^{\infty }e^{-x}\cdot 1\cdot 1dx=1\!\,$

Therefore

$p_{2}=x^{2}-4x+2\!\,$
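Because every moment $\int _{0}^{\infty }x^{n}e^{-x}dx\!\,$  equals $n!\!\,$ , the orthogonality of $\{p_{0},p_{1},p_{2}\}\!\,$  can be verified exactly in integer arithmetic. A small illustrative sketch (not part of the exam solution), storing each polynomial as a coefficient list:

```python
from math import factorial

def inner(p, q):
    """Weighted inner product (p, q) = integral of p(x) q(x) e^{-x} over [0, inf),
    computed exactly via the moment formula: the x^i * x^j term contributes
    a_i * b_j * (i + j)!.  Coefficient lists are lowest degree first."""
    return sum(a * b * factorial(i + j)
               for i, a in enumerate(p)
               for j, b in enumerate(q))

p0 = [1]             # p0(x) = 1
p1 = [-1, 1]         # p1(x) = x - 1
p2 = [2, -4, 1]      # p2(x) = x^2 - 4x + 2
```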

## Problem 2b

 Derive the 2-point Gaussian formula $\int _{0}^{\infty }f(x)e^{-x}dx\approx w_{1}f(x_{1})+w_{2}f(x_{2})\!\,$ , i.e. find the weights and nodes.

## Solution 2b

### Find the Nodes

The nodes $x_{1}\!\,$  and $x_{2}\!\,$  are the roots of the $n\!\,$ th orthogonal polynomial, here ($n=2\!\,$ ) $p_{2}(x)=x^{2}-4x+2\!\,$ .

Applying the quadratic formula yields the roots:

$x_{1}=2-{\sqrt {2}}\!\,$

$x_{2}=2+{\sqrt {2}}\!\,$

### Find the Weights

The approximation is exact for polynomials of degree at most $2n-1=3\!\,$ . Hence, taking $f(x)=1\!\,$  and $f(x)=x\!\,$  gives the following system of equations

$\int _{0}^{\infty }e^{-x}dx=1=w_{1}\cdot \underbrace {1} _{f(x_{1})}+w_{2}\cdot \underbrace {1} _{f(x_{2})}\!\,$

$\int _{0}^{\infty }x\cdot e^{-x}dx=1=w_{1}\cdot \underbrace {(2-{\sqrt {2}})} _{f(x_{1})}+w_{2}\cdot \underbrace {(2+{\sqrt {2}})} _{f(x_{2})}\!\,$

Solving the system of equations by substitution yields the weights:

$w_{1}={\frac {2+{\sqrt {2}}}{4}}\!\,$

$w_{2}={\frac {2-{\sqrt {2}}}{4}}\!\,$
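The derived rule can be checked against the moment formula $\int _{0}^{\infty }x^{k}e^{-x}dx=k!\!\,$ ; the sketch below (illustrative only) confirms exactness for degrees $0\!\,$  through $3\!\,$ :

```python
from math import sqrt, factorial

x1, x2 = 2 - sqrt(2), 2 + sqrt(2)
w1, w2 = (2 + sqrt(2)) / 4, (2 - sqrt(2)) / 4

def gauss2(f):
    # 2-point Gaussian rule for the integral of f(x) e^{-x} over [0, inf)
    return w1 * f(x1) + w2 * f(x2)

# Exact on monomials up to degree 2n - 1 = 3
errors = [abs(gauss2(lambda x: x**k) - factorial(k)) for k in range(4)]
```

As expected for a 2-point rule, exactness fails at degree 4: the rule returns $20\!\,$  for $f(x)=x^{4}\!\,$ , whereas $4!=24\!\,$ .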

## Problem 3

 Let $A\!\,$ be an $n\times n\!\,$ nonsingular matrix, and consider the linear system $Ax=b\!\,$ .

## Problem 3a

 Write down the Jacobi iteration for solving $Ax=b\!\,$ in a way that it would be programmed on a computer

## Solution 3a

Choose $x^{(0)}\!\,$

for $k=0,1,2,\dots \!\,$

$x^{(k+1)}=D^{-1}(b-(L+U)x^{(k)})\!\,$

check <convergence condition>

end

where $A=D+L+U\!\,$ , $D\!\,$  is the diagonal of $A\!\,$ , and $L\!\,$  and $U\!\,$  are its strictly lower and strictly upper triangular parts, respectively.
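The loop can be written out concretely. The following Python sketch is illustrative (the componentwise stopping test is one possible choice of the <convergence condition>); it works row by row, which is how $D^{-1}\!\,$  and $(L+U)x^{(k)}\!\,$  would actually be applied:

```python
def jacobi(A, b, x0, tol=1e-10, max_iter=500):
    """Jacobi iteration x^(k+1) = D^{-1} (b - (L+U) x^(k)) for a dense A
    given as a list of rows; converges e.g. when A is strictly
    diagonally dominant."""
    n = len(A)
    x = list(x0)
    for _ in range(max_iter):
        # Row i: subtract the off-diagonal part, divide by the diagonal
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
                 / A[i][i] for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:  # convergence condition
            return x_new
        x = x_new
    return x

# Strictly diagonally dominant test system
A = [[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [0.0, 1.0, 3.0]]
b = [6.0, 8.0, 4.0]
x = jacobi(A, b, [0.0, 0.0, 0.0])
```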

## Problem 3b

 Suppose $A\!\,$ has $m\!\,$ non-zero elements with $m\ll n^{2}\!\,$ . How many operations per iteration does the Jacobi iteration take?

## Solution 3b

The $n\!\,$  diagonal entries of $A\!\,$  are non-zero since otherwise $D^{-1}\!\,$  would not exist.

Therefore $A\!\,$  contains $m-n\!\,$  off-diagonal non-zero entries.

The computation during each iteration is given by

$x^{(k+1)}=\underbrace {D^{-1}(b-\underbrace {(L+U)x^{(k)}} _{m-n{\mbox{ multiplies}}})} _{n{\mbox{ multiplies}}}\!\,$

Therefore there are $(m-n)+n=m\!\,$  multiplies in each iteration.
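The count can be made concrete by storing only the non-zero entries. In the sketch below (illustrative; the coordinate-list storage is an assumption, not part of the exam answer) one sweep performs exactly one multiply per off-diagonal non-zero plus one division per row, i.e. $(m-n)+n=m\!\,$  operations:

```python
def jacobi_step_sparse(diag, offdiag, b, x):
    """One Jacobi sweep with A stored as its diagonal plus a list of
    off-diagonal triples (i, j, a_ij); cost: (m - n) multiplies + n divides."""
    n = len(diag)
    ops = 0
    s = [0.0] * n                    # s = (L + U) x
    for i, j, a in offdiag:
        s[i] += a * x[j]
        ops += 1                     # one multiply per off-diagonal non-zero
    x_new = [(b[i] - s[i]) / diag[i] for i in range(n)]   # n divisions
    return x_new, ops + n            # total: (m - n) + n = m operations

# 3x3 example with m = 8 non-zeros (3 diagonal + 5 off-diagonal)
diag = [4.0, 5.0, 3.0]
offdiag = [(0, 1, 1.0), (0, 2, 1.0), (1, 0, 1.0), (1, 2, 2.0), (2, 1, 1.0)]
b = [6.0, 8.0, 4.0]
x_next, ops = jacobi_step_sparse(diag, offdiag, b, [0.0, 0.0, 0.0])
```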

## Problem 3c

 Assume that $A\!\,$ is strictly diagonally dominant: for $i=1,\ldots ,n,\!\,$ $|a_{ii}|>\sum _{j=1,j\neq i}^{n}|a_{ij}|\!\,$ . Show that the Jacobi iteration converges for any guess $x^{(0)}\!\,$ . (Hint: You may use Gerschgorin's theorem without proving it.)

## Solution 3c

Let $E:=D^{-1}(L+U)\,\!$ .

Theorem 8.2.1 [SB] states that $\rho (E)<1\,\!$  if and only if the Jacobi iteration converges.

Matrix multiplication and the definitions of $D^{-1},L,U\!\,$  gives the explicit entrywise value of $E\!\,$

$e_{ij}={\frac {a_{ij}}{a_{ii}}}\,\!$  for $j\neq i\!\,$  and $e_{ii}=0\,\!$

Then Gerschgorin's theorem together with strict diagonal dominance bounds every eigenvalue $\lambda \!\,$  of $E\!\,$ : for some $i\!\,$ ,

$|\lambda -\underbrace {e_{ii}} _{0}|\leq \sum _{\underset {j\neq i}{j=1}}^{n}|e_{ij}|=\sum _{\underset {j\neq i}{j=1}}^{n}{\frac {|a_{ij}|}{|a_{ii}|}}<1\!\,$

Hence $\rho (E)<1\!\,$  and the Jacobi iteration converges.
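The bound can be checked numerically: for a strictly diagonally dominant matrix, the maximal Gerschgorin radius of $E\!\,$  (which equals $\|E\|_{\infty }\!\,$ , an upper bound for $\rho (E)\!\,$ ) is below 1. A small illustrative sketch:

```python
def jacobi_matrix_norm(A):
    """Infinity norm of E = D^{-1}(L + U): the largest Gerschgorin radius
    sum_{j != i} |a_ij| / |a_ii|.  By Gerschgorin, rho(E) <= this value."""
    n = len(A)
    return max(sum(abs(A[i][j]) for j in range(n) if j != i) / abs(A[i][i])
               for i in range(n))

# Strictly diagonally dominant example; row radii are 0.5, 0.6, 1/3
A = [[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [0.0, 1.0, 3.0]]
bound = jacobi_matrix_norm(A)
```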