# Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/January 2005

## Problem 1

 Given ${\displaystyle f\in C[a,b]\!\,}$ , let ${\displaystyle p_{n}^{*}\!\,}$  denote the best uniform approximation to ${\displaystyle f\!\,}$  among polynomials of degree ${\displaystyle \leq n\!\,}$ , i.e. ${\displaystyle \max _{x\in [a,b]}|f(x)-p_{n}(x)|\!\,}$  is minimized by the choice ${\displaystyle p_{n}=p_{n}^{*}\!\,}$ . Let ${\displaystyle e(x)=f(x)-p_{n}^{*}(x)\!\,}$ . Prove that there are at least two points ${\displaystyle x_{1},x_{2}\in [a,b]\!\,}$  such that ${\displaystyle |e(x_{1})|=|e(x_{2})|=\max _{x\in [a,b]}|f(x)-p_{n}^{*}(x)|\!\,}$  and ${\displaystyle e(x_{1})=-e(x_{2})\!\,}$

## Solution 1

Since both ${\displaystyle f(x)\!\,}$  and ${\displaystyle p_{n}^{*}(x)\!\,}$  are continuous, so is the function

${\displaystyle e(x)=f(x)-p_{n}^{*}(x)\!\,}$

and therefore ${\displaystyle e(x)\!\,}$  attains both a maximum and a minimum on ${\displaystyle [a,b]\!\,}$ .

Let ${\displaystyle M=\max _{x\in [a,b]}e(x)\!\,}$  and ${\displaystyle m=\min _{x\in [a,b]}e(x)\!\,}$

If ${\displaystyle M\neq -m\!\,}$ , consider the polynomial ${\displaystyle q(x)=p_{n}^{*}(x)+{\frac {M+m}{2}}\!\,}$  (a vertical shift of ${\displaystyle p_{n}^{*}(x)\!\,}$ , hence still a polynomial of degree ${\displaystyle \leq n\!\,}$ ). We show ${\displaystyle q\!\,}$  is a strictly better approximation to ${\displaystyle f(x)\!\,}$ , contradicting the optimality of ${\displaystyle p_{n}^{*}\!\,}$ . For all ${\displaystyle x\in [a,b]\!\,}$ ,

{\displaystyle {\begin{aligned}f(x)-q(x)&=f(x)-p_{n}^{*}(x)-{\frac {M+m}{2}}\\&\leq M-{\frac {M+m}{2}}\\&={\frac {M-m}{2}}\\\\\\f(x)-q(x)&\geq m-{\frac {M+m}{2}}\\&={\frac {m-M}{2}}\end{aligned}}\!\,}

Therefore,

${\displaystyle \|f-q\|_{\infty }\leq {\frac {M-m}{2}}={\frac {M+(-m)}{2}}\leq \max(M,-m)=\|f-p_{n}^{*}\|_{\infty }\!\,}$

where the last equality holds because ${\displaystyle |e(x)|=\max(e(x),-e(x))\!\,}$ . The middle inequality (an average is at most the maximum) is strict whenever ${\displaystyle M\neq -m\!\,}$ , in which case ${\displaystyle q\!\,}$  would be strictly better than ${\displaystyle p_{n}^{*}\!\,}$ , a contradiction. Hence ${\displaystyle M=-m\!\,}$ . Since ${\displaystyle e\!\,}$  is continuous, there exist ${\displaystyle x_{1},x_{2}\in [a,b]\!\,}$  with ${\displaystyle e(x_{1})=M\!\,}$  and ${\displaystyle e(x_{2})=m=-M\!\,}$ , and these points satisfy ${\displaystyle |e(x_{1})|=|e(x_{2})|=\max _{x\in [a,b]}|f(x)-p_{n}^{*}(x)|\!\,}$  and ${\displaystyle e(x_{1})=-e(x_{2})\!\,}$ , as required.

## Problem 2a

 Find the node ${\displaystyle x_{1}\!\,}$  and the weight ${\displaystyle w_{1}\!\,}$  so that the integration rule ${\displaystyle I=\int _{0}^{1}x^{-{\frac {1}{2}}}f(x)dx\approx w_{1}f(x_{1})\!\,}$  is exact for linear functions. (No knowledge of orthogonal polynomials is required.)

## Solution 2a

Let ${\displaystyle f(x)=1\!\,}$ . Then

${\displaystyle \int _{0}^{1}x^{-{\frac {1}{2}}}\,dx=\left[2x^{\frac {1}{2}}\right]_{0}^{1}=2=w_{1}\cdot 1\!\,}$

which implies

${\displaystyle w_{1}=2\!\,}$

Let ${\displaystyle f(x)=x\!\,}$ . Then

${\displaystyle \int _{0}^{1}{\frac {x}{x^{\frac {1}{2}}}}\,dx=\int _{0}^{1}x^{\frac {1}{2}}\,dx={\frac {2}{3}}=w_{1}x_{1}=2x_{1}\!\,}$

which implies

${\displaystyle x_{1}={\frac {1}{3}}\!\,}$
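As a numerical sanity check (not part of the exam solution; the helper name `weighted_integral` and the test functions are illustrative choices), the substitution ${\displaystyle x=t^{2}\!\,}$  removes the ${\displaystyle x^{-{\frac {1}{2}}}\!\,}$  singularity, since ${\displaystyle \int _{0}^{1}x^{-{\frac {1}{2}}}f(x)\,dx=\int _{0}^{1}2f(t^{2})\,dt\!\,}$ , and the midpoint rule then verifies that the one-point rule is exact on linear functions:

```python
# Sanity check of the rule w1 = 2, x1 = 1/3 on linear functions.
# The substitution x = t**2 removes the x**(-1/2) singularity:
#   integral_0^1 x**(-1/2) f(x) dx = integral_0^1 2 f(t**2) dt,
# which the midpoint rule handles easily.
def weighted_integral(f, n=100_000):
    """Midpoint-rule approximation of the weighted integral after x = t**2."""
    h = 1.0 / n
    return sum(2.0 * f(((i + 0.5) * h) ** 2) for i in range(n)) * h

w1, x1 = 2.0, 1.0 / 3.0
for f in (lambda x: 1.0, lambda x: x, lambda x: 4.0 - 7.0 * x):
    assert abs(weighted_integral(f) - w1 * f(x1)) < 1e-6
```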

## Problem 2b

 Show that no one-point rule for approximating ${\displaystyle I\!\,}$  can be exact for quadratics.

## Solution 2b

Suppose some one-point rule ${\displaystyle w_{1}f(x_{1})\!\,}$  were exact for all quadratics. In particular it would be exact for ${\displaystyle f(x)=1\!\,}$  and ${\displaystyle f(x)=x\!\,}$ , which by the computation in part (a) forces ${\displaystyle w_{1}=2\!\,}$  and ${\displaystyle x_{1}={\frac {1}{3}}\!\,}$ .

Now let ${\displaystyle f(x)=x^{2}\!\,}$ .

Then,

${\displaystyle \int _{0}^{1}{\frac {x^{2}}{x^{\frac {1}{2}}}}\,dx=\int _{0}^{1}x^{\frac {3}{2}}\,dx={\frac {2}{5}}\!\,}$

but

${\displaystyle w_{1}x_{1}^{2}=2\left({\frac {1}{3}}\right)^{2}={\frac {2}{9}}\neq {\frac {2}{5}}\!\,}$ ,

a contradiction. Hence no one-point rule can be exact for quadratics.

## Problem 2c

 In fact ${\displaystyle I-w_{1}f(x_{1})=cf''(\xi ){\mbox{ for some }}\xi \in [0,1]\!\,}$ . Find ${\displaystyle c\!\,}$ .

## Solution 2c

Let ${\displaystyle f(x)=x^{2}\!\,}$ . Then ${\displaystyle f''(x)=2\!\,}$  for every ${\displaystyle x\!\,}$ , so ${\displaystyle f''(\xi )=2\!\,}$  regardless of ${\displaystyle \xi \!\,}$ .

Using the values computed in parts (a) and (b), we have

${\displaystyle {\frac {2}{5}}-{\frac {2}{9}}=c\cdot 2\!\,}$

which implies

${\displaystyle c={\frac {4}{45}}\!\,}$
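Since the error functional ${\displaystyle f\mapsto I-w_{1}f(x_{1})\!\,}$  is linear in ${\displaystyle f\!\,}$  and vanishes on linear functions, ${\displaystyle c={\frac {4}{45}}\!\,}$  can be checked exactly on arbitrary quadratics (for which ${\displaystyle f''\!\,}$  is constant) using the moments ${\displaystyle \int _{0}^{1}x^{-{\frac {1}{2}}}x^{k}\,dx={\frac {2}{2k+1}}\!\,}$ . A sketch with rational arithmetic (the helper names and test coefficients are my own):

```python
from fractions import Fraction

# Exact check of c = 4/45 on quadratics f = c0 + c1 x + c2 x^2,
# for which f'' = 2*c2 is constant.
# Moments: integral_0^1 x**(-1/2) * x**k dx = 2/(2k + 1).
def weighted_integral(c0, c1, c2):
    return 2 * c0 + Fraction(2, 3) * c1 + Fraction(2, 5) * c2

def one_point_rule(c0, c1, c2):
    x1 = Fraction(1, 3)
    return 2 * (c0 + c1 * x1 + c2 * x1**2)

c = Fraction(4, 45)
for (c0, c1, c2) in [(0, 0, 1), (1, -2, 3), (5, 0, -7)]:
    f_second = 2 * c2  # f''(xi) for a quadratic, independent of xi
    assert weighted_integral(c0, c1, c2) - one_point_rule(c0, c1, c2) == c * f_second
```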

## Problem 2d

 Let ${\displaystyle f(x)\!\,}$  and ${\displaystyle g(x)\!\,}$  be two polynomials of degree ${\displaystyle \leq 3\!\,}$ . Suppose ${\displaystyle f\!\,}$  and ${\displaystyle g\!\,}$  agree at ${\displaystyle x=a\!\,}$ , ${\displaystyle x=b\!\,}$ , and ${\displaystyle x=(a+b)/2\!\,}$ . Show ${\displaystyle \int _{a}^{b}f(x)dx=\int _{a}^{b}g(x)dx\!\,}$

## Solution 2d

Simpson's rule, i.e., the interpolatory quadrature with nodes ${\displaystyle a\!\,}$ , ${\displaystyle b\!\,}$ , ${\displaystyle {\frac {a+b}{2}}\!\,}$  and weights ${\displaystyle w_{1}=w_{2}={\frac {b-a}{6}}\!\,}$ , ${\displaystyle w_{3}={\frac {4(b-a)}{6}}\!\,}$ , is exact for every polynomial of degree ${\displaystyle \leq 3\!\,}$ . That is, if ${\displaystyle f(x)\!\,}$  is a polynomial of degree ${\displaystyle \leq 3\!\,}$ ,

${\displaystyle \int _{a}^{b}f(x)dx=w_{1}f(a)+w_{2}f(b)+w_{3}f({\frac {a+b}{2}})\!\,}$

Hence, since ${\displaystyle f\!\,}$  and ${\displaystyle g\!\,}$  agree at the three nodes,

{\displaystyle {\begin{aligned}\int _{a}^{b}f(x)dx&=w_{1}f(a)+w_{2}f(b)+w_{3}f({\frac {a+b}{2}})\\&=w_{1}g(a)+w_{2}g(b)+w_{3}g({\frac {a+b}{2}})\\&=\int _{a}^{b}g(x)dx\end{aligned}}\!\,}
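A small exact check of this conclusion (the interval ${\displaystyle [0,2]\!\,}$  and the coefficient lists are my own illustrative choices): if ${\displaystyle g=f+(x-a)(x-{\frac {a+b}{2}})(x-b)\!\,}$ , then ${\displaystyle g\!\,}$  is again a cubic agreeing with ${\displaystyle f\!\,}$  at the three nodes, and the two integrals coincide.

```python
from fractions import Fraction as F

# Two cubics agreeing at a, b, and the midpoint have equal integrals on [a, b].
a, b = F(0), F(2)
m = (a + b) / 2

def integral(coeffs):
    """Exact integral over [a, b] of sum(c_k * x**k)."""
    return sum(c * (b**(k + 1) - a**(k + 1)) / (k + 1) for k, c in enumerate(coeffs))

def value(coeffs, x):
    return sum(c * x**k for k, c in enumerate(coeffs))

f = [F(1), F(2), F(-1), F(4)]          # 1 + 2x - x^2 + 4x^3
w = [F(0), F(2), F(-3), F(1)]          # (x - a)(x - m)(x - b) = x^3 - 3x^2 + 2x
g = [fc + wc for fc, wc in zip(f, w)]  # agrees with f at the three nodes

for node in (a, m, b):
    assert value(f, node) == value(g, node)
assert integral(f) == integral(g)
```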

## Problem 3

 Let ${\displaystyle A\in R^{n\times n}\!\,}$  be nonsingular and ${\displaystyle b\in R^{n}\!\,}$ . Consider the following iteration for the solution of ${\displaystyle Ax=b\!\,}$ : ${\displaystyle x_{k+1}=x_{k}+\alpha (b-Ax_{k})\!\,}$

## Problem 3a

 Show that if all eigenvalues of ${\displaystyle A\!\,}$  have positive real part then there will be some real ${\displaystyle \alpha \!\,}$  such that the iterates converge for any starting vector ${\displaystyle x_{0}\!\,}$ . Discuss how to choose ${\displaystyle \alpha \!\,}$  optimally in case ${\displaystyle A\!\,}$  is symmetric and determine the rate of convergence.

## Solution 3a

### Convergence of any starting vector

Let ${\displaystyle \,\!x^{*}=A^{-1}b}$  denote the exact solution and ${\displaystyle \,\!e_{k}=x_{k}-x^{*}}$  the error. Using the iteration ${\displaystyle \,\!x_{k+1}=x_{k}+\alpha (b-Ax_{k})}$ , we have that

{\displaystyle {\begin{aligned}e_{k+1}&=x_{k+1}-x^{*}\\&=x_{k}-x^{*}+\alpha (b-Ax_{k})\\&=x_{k}-x^{*}+\alpha (Ax^{*}-Ax_{k})\\&=e_{k}-\alpha Ae_{k}\\&=(I-\alpha A)e_{k}\end{aligned}}}

Hence

${\displaystyle \,\!e_{k}=(I-\alpha A)^{k}e_{0}}$

which tends to zero for every starting vector ${\displaystyle \,\!x_{0}}$  if and only if the spectral radius satisfies ${\displaystyle \,\!\rho (I-\alpha A)<1}$ .

If ${\displaystyle \,\!\lambda }$  is an eigenvalue of the matrix ${\displaystyle \,\!A}$ , then ${\displaystyle \,\!1-\alpha \lambda }$  is an eigenvalue of ${\displaystyle \,\!I-\alpha A}$ , and every eigenvalue of ${\displaystyle \,\!I-\alpha A}$  has this form. Thus ${\displaystyle \,\!\rho (I-\alpha A)<1}$  holds precisely when, for every eigenvalue ${\displaystyle \,\!\lambda }$  of ${\displaystyle \,\!A}$ ,

${\displaystyle |1-\alpha \lambda |^{2}=1-2\alpha Re(\lambda )+\alpha ^{2}|\lambda |^{2}<1\!\,}$

i.e.

${\displaystyle -2\alpha Re(\lambda )+\alpha ^{2}|\lambda |^{2}<0\!\,}$

The left-hand side is quadratic in ${\displaystyle \alpha \!\,}$  and opens upward; its roots are

${\displaystyle \alpha _{1}=0\quad \alpha _{2}=2{\frac {Re(\lambda )}{|\lambda |^{2}}}\!\,}$

so the inequality holds exactly when ${\displaystyle \,\!\alpha }$  lies strictly between them. Then, if ${\displaystyle \,\!Re(\lambda )>0}$  for all the eigenvalues of the matrix ${\displaystyle \,\!A}$ , every real

${\displaystyle \alpha \in \left(0,\ \min _{\lambda }2{\frac {Re(\lambda )}{|\lambda |^{2}}}\right)\!\,}$

works simultaneously for all eigenvalues, i.e., the method converges for every starting vector ${\displaystyle \,\!x_{0}}$ .

### Choosing alpha if A symmetric

On the other hand, if the matrix ${\displaystyle \,\!A}$  is symmetric, then so is ${\displaystyle \,\!I-\alpha A}$ , and from ${\displaystyle \,\!e_{k+1}=(I-\alpha A)e_{k}}$  we get ${\displaystyle \,\!\|e_{k+1}\|_{2}\leq \|I-\alpha A\|_{2}\|e_{k}\|_{2}}$  with ${\displaystyle \,\!\|I-\alpha A\|_{2}=\rho \left(I-\alpha A\right)}$ . By the spectral theorem there is an orthogonal matrix ${\displaystyle \,\!U}$  and a diagonal matrix ${\displaystyle \,\!\Lambda =\mathrm {diag} (\lambda _{1},\dots ,\lambda _{n})}$ , with the (real) eigenvalues ordered ${\displaystyle \,\!0<\lambda _{1}\leq \dots \leq \lambda _{n}}$  (positive by the assumption of this part), such that ${\displaystyle \,\!A=U^{T}\Lambda U}$ . Since the 2-norm is invariant under orthogonal transformations,

{\displaystyle {\begin{aligned}\|I-\alpha A\|_{2}&=\|I-\alpha \Lambda \|_{2}\\&=\rho (I-\alpha \Lambda )\\&=\max _{i=1,\dots ,n}|1-\alpha \lambda _{i}|\\\end{aligned}}}

For ${\displaystyle \,\!\alpha >0}$  this maximum is attained at ${\displaystyle \,\!\lambda _{1}}$  or ${\displaystyle \,\!\lambda _{n}}$ , and it is minimized when ${\displaystyle \,\!1-\alpha \lambda _{1}=-(1-\alpha \lambda _{n})}$ , i.e, if ${\displaystyle \,\!\alpha _{opt}=2/(\lambda _{1}+\lambda _{n})}$ . With this optimal value of ${\displaystyle \,\!\alpha }$ , we obtain that

${\displaystyle \,\!(I-\alpha _{opt}\Lambda )_{i,i}=1-{\frac {2}{\lambda _{1}+\lambda _{n}}}\lambda _{i},}$

which implies that

${\displaystyle \,\!\|I-\alpha _{opt}\Lambda \|_{2}={\frac {\lambda _{n}-\lambda _{1}}{\lambda _{1}+\lambda _{n}}}}$ .

Finally, we've obtained that

${\displaystyle \,\!\|e_{k+1}\|_{2}\leq {\frac {\lambda _{n}-\lambda _{1}}{\lambda _{1}+\lambda _{n}}}\|e_{k}\|_{2}}$ .
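A minimal sketch of the iteration with the optimal ${\displaystyle \,\!\alpha }$ , on an assumed symmetric ${\displaystyle \,\!2\times 2}$  example: the eigenvalues of ${\displaystyle \,\!{\begin{pmatrix}3&1\\1&3\end{pmatrix}}}$  are ${\displaystyle \,\!2}$  and ${\displaystyle \,\!4}$ , so ${\displaystyle \,\!\alpha _{opt}=2/(2+4)=1/3}$  and the contraction factor is ${\displaystyle \,\!(4-2)/(4+2)=1/3}$ .

```python
# Richardson iteration x_{k+1} = x_k + alpha*(b - A x_k) with the optimal alpha
# for an illustrative symmetric matrix (eigenvalues 2 and 4 => alpha_opt = 1/3,
# contraction factor 1/3 per step).
A = [[3.0, 1.0], [1.0, 3.0]]
b = [5.0, 7.0]
alpha = 2.0 / (2.0 + 4.0)

def matvec(M, v):
    return [sum(m * vi for m, vi in zip(row, v)) for row in M]

x = [0.0, 0.0]
for _ in range(60):
    r = [bi - yi for bi, yi in zip(b, matvec(A, x))]  # residual b - A x_k
    x = [xi + alpha * ri for xi, ri in zip(x, r)]

# The exact solution of A x = b here is x = (1, 2) (solve the 2x2 system by hand).
assert abs(x[0] - 1.0) < 1e-10 and abs(x[1] - 2.0) < 1e-10
```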

## Problem 3b

 Show that if some eigenvalues of ${\displaystyle A\!\,}$  have negative real part and some have positive real part, then there is no real ${\displaystyle \alpha \!\,}$  for which the iterations converge.

## Solution 3b

From part (a), the iteration converges for every starting vector if and only if ${\displaystyle |1-\alpha \lambda |<1\!\,}$  for every eigenvalue ${\displaystyle \lambda \!\,}$  of ${\displaystyle A\!\,}$ , i.e.,

${\displaystyle -2\alpha Re(\lambda )+\alpha ^{2}|\lambda |^{2}<0\!\,}$

If ${\displaystyle Re(\lambda )>0\!\,}$ , this holds only for

${\displaystyle \alpha \in (0,2{\frac {Re(\lambda )}{|\lambda |^{2}}})\!\,}$

which requires ${\displaystyle \alpha >0\!\,}$ .

Similarly, if ${\displaystyle Re(\lambda )<0\!\,}$ , it holds only for

${\displaystyle \alpha \in (2{\frac {Re(\lambda )}{|\lambda |^{2}}},0)\!\,}$

which requires ${\displaystyle \alpha <0\!\,}$ .

Hence, if some eigenvalues of ${\displaystyle A\!\,}$  have positive real part and some have negative real part, no real ${\displaystyle \alpha \!\,}$  can satisfy both conditions simultaneously (and ${\displaystyle \alpha =0\!\,}$  gives ${\displaystyle x_{k+1}=x_{k}\!\,}$ , which converges only if ${\displaystyle x_{0}=x^{*}\!\,}$ ), so there is no real ${\displaystyle \alpha \!\,}$  for which the iterations converge.
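A concrete illustration (my own example, not from the exam): for ${\displaystyle A=\mathrm {diag} (1,-1)\!\,}$ , the iteration matrix is ${\displaystyle I-\alpha A=\mathrm {diag} (1-\alpha ,1+\alpha )\!\,}$ , whose spectral radius ${\displaystyle \max(|1-\alpha |,|1+\alpha |)\!\,}$  never drops below ${\displaystyle 1\!\,}$ .

```python
# For A = diag(1, -1), rho(I - alpha*A) = max(|1 - alpha|, |1 + alpha|) >= 1
# for every real alpha (equality only at alpha = 0, where the iterates never
# move).  Scan a grid of alpha values to illustrate this.
best = min(max(abs(1.0 - a / 100.0), abs(1.0 + a / 100.0))
           for a in range(-500, 501))
assert best >= 1.0
```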

## Problem 3c

 Let ${\displaystyle \rho =\|I-\alpha A\|<1\!\,}$  for a matrix norm associated to a vector norm. Show that the error can be expressed in terms of the difference between consecutive iterates, namely ${\displaystyle \|x-x_{k+1}\|\leq {\frac {\rho }{1-\rho }}\|x_{k}-x_{k+1}\|\!\,}$ (The proof of this is short but a little tricky.)

## Solution 3c

Since ${\displaystyle x-x_{k+1}=(I-\alpha A)(x-x_{k})\!\,}$ , where ${\displaystyle x=A^{-1}b\!\,}$  is the exact solution (as in part (a)), we have

{\displaystyle {\begin{aligned}\|x-x_{k+1}\|&\leq \rho \|x-x_{k}\|\\&=\rho \|x-x_{k+1}+x_{k+1}-x_{k}\|\\&\leq \rho \|x-x_{k+1}\|+\rho \|x_{k+1}-x_{k}\|\\(1-\rho )\|x-x_{k+1}\|&\leq \rho \|x_{k+1}-x_{k}\|\\\|x-x_{k+1}\|&\leq {\frac {\rho }{1-\rho }}\|x_{k+1}-x_{k}\|\end{aligned}}\!\,}
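The bound can be observed numerically; a sketch under assumed data (the matrix ${\displaystyle \,\!{\begin{pmatrix}3&1\\1&3\end{pmatrix}}}$  with ${\displaystyle \,\!\alpha =1/3}$ , for which ${\displaystyle \,\!I-\alpha A}$  is symmetric with 2-norm ${\displaystyle \,\!1/3}$ , and exact solution ${\displaystyle \,\!(1,2)}$ ):

```python
import math

# Check the a posteriori bound  ||x - x_{k+1}|| <= rho/(1-rho) * ||x_k - x_{k+1}||
# on an illustrative example: A = [[3,1],[1,3]], alpha = 1/3, so that
# I - alpha*A = [[0,-1/3],[-1/3,0]] and rho = ||I - alpha*A||_2 = 1/3.
A = [[3.0, 1.0], [1.0, 3.0]]
b = [5.0, 7.0]
alpha, rho = 1.0 / 3.0, 1.0 / 3.0
x_exact = [1.0, 2.0]  # solves A x = b

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

x = [10.0, -10.0]
for _ in range(20):
    r = [bi - sum(m * xj for m, xj in zip(row, x)) for row, bi in zip(A, b)]
    x_new = [xi + alpha * ri for xi, ri in zip(x, r)]
    error = norm([xe - xn for xe, xn in zip(x_exact, x_new)])
    step = norm([xo - xn for xo, xn in zip(x, x_new)])
    assert error <= rho / (1.0 - rho) * step + 1e-12  # the claimed bound
    x = x_new
```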

## Problem 3d

 Let ${\displaystyle A\!\,}$  be the tridiagonal matrix ${\displaystyle {\begin{pmatrix}3&1&0&0&\cdots &0\\-1&3&1&0&\cdots &0\\\vdots &\ddots &\ddots &\ddots &\vdots &\vdots \\\vdots &\vdots &\ddots &\ddots &\ddots &\vdots \\\vdots &\vdots &\vdots &\ddots &3&1\\0&\cdots &\cdots &\cdots &-1&3\end{pmatrix}}\!\,}$  Find a value of ${\displaystyle \alpha \!\,}$  that guarantees convergence

## Solution 3d

By Gerschgorin's theorem, every eigenvalue of ${\displaystyle A\!\,}$  lies in the disc

${\displaystyle |\lambda -3|\leq 2\!\,}$

(each diagonal entry is ${\displaystyle 3\!\,}$ , and the off-diagonal entries in each row have total magnitude at most ${\displaystyle 2\!\,}$ ). Note that the eigenvalues need not be real, since ${\displaystyle A\!\,}$  is not symmetric.

The eigenvalues of ${\displaystyle I-\alpha A\!\,}$  are ${\displaystyle 1-\alpha \lambda \!\,}$ , and for ${\displaystyle \alpha >0\!\,}$ ,

${\displaystyle |1-\alpha \lambda |=|(1-3\alpha )-\alpha (\lambda -3)|\leq |1-3\alpha |+2\alpha \!\,}$

For ${\displaystyle 0<\alpha \leq {\frac {1}{3}}\!\,}$  this bound equals ${\displaystyle 1-\alpha <1\!\,}$ , and for ${\displaystyle {\frac {1}{3}}<\alpha <{\frac {2}{5}}\!\,}$  it equals ${\displaystyle 5\alpha -1<1\!\,}$ . Therefore

${\displaystyle \rho (I-\alpha A)<1\quad {\mbox{ for all }}\quad 0<\alpha <{\frac {2}{5}}\!\,}$

and any such value, e.g. ${\displaystyle \alpha ={\frac {1}{3}}\!\,}$ , guarantees convergence.
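As a sketch (the size ${\displaystyle n=6\!\,}$  and the right-hand side of ones are my own choices), running the iteration with ${\displaystyle \alpha ={\frac {1}{3}}\!\,}$  on this tridiagonal matrix drives the residual to zero:

```python
# Richardson iteration with alpha = 1/3 (inside the convergence interval
# (0, 2/5) from the Gerschgorin estimate) on the tridiagonal matrix with
# diagonal 3, superdiagonal +1, subdiagonal -1, for an illustrative n = 6.
n = 6
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = 3.0
    if i + 1 < n:
        A[i][i + 1] = 1.0   # superdiagonal +1
        A[i + 1][i] = -1.0  # subdiagonal -1
b = [1.0] * n
alpha = 1.0 / 3.0

def residual(x):
    return [bi - sum(m * xj for m, xj in zip(row, x)) for row, bi in zip(A, b)]

x = [0.0] * n
for _ in range(200):
    x = [xi + alpha * ri for xi, ri in zip(x, residual(x))]

assert max(abs(ri) for ri in residual(x)) < 1e-10
```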