# Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/January 2006

## Problem 1

 Let ${\displaystyle f\!\,}$  be continuous on ${\displaystyle [0,1]\!\,}$ . A polynomial ${\displaystyle p\!\,}$  of degree not greater than ${\displaystyle n\!\,}$  is said to be a best (or Chebyshev) approximation to ${\displaystyle f\!\,}$  if ${\displaystyle p\!\,}$  minimizes the expression ${\displaystyle E(p)=\max _{x\in [0,1]}|f(x)-p(x)|\!\,}$

## Problem 1a

 Show that a sufficient condition for ${\displaystyle p\!\,}$  to be a best approximation is that there exist points ${\displaystyle x_{0}<x_{1}<\cdots <x_{n+1}\!\,}$  in ${\displaystyle [0,1]\!\,}$  such that ${\displaystyle f(x_{i})-p(x_{i})=-[f(x_{i+1})-p(x_{i+1})]=\pm E(p),\qquad i=0,1,2,\dots ,n\!\,}$ .

## Solution 1a

Assume, for contradiction, that there exists ${\displaystyle q\in \Pi ^{n}\!\,}$  such that

${\displaystyle E(q)<E(p)\!\,}$

Then for ${\displaystyle i=0,1,\ldots ,n+1\!\,}$

${\displaystyle |f(x_{i})-q(x_{i})|\leq E(q)<E(p)=|f(x_{i})-p(x_{i})|\!\,}$

Let ${\displaystyle e_{q}(x)=f(x)-q(x)\!\,}$  and ${\displaystyle e_{p}(x)=f(x)-p(x)\!\,}$ .

Then ${\displaystyle e(x)=e_{p}(x)-e_{q}(x)=q(x)-p(x)\!\,}$  takes on the sign of ${\displaystyle e_{p}\!\,}$  at each ${\displaystyle x_{i}\!\,}$ , since

${\displaystyle |e_{p}(x_{i})|>|e_{q}(x_{i})|\!\,}$

By hypothesis ${\displaystyle e_{p}\!\,}$  alternates in sign at the ${\displaystyle n+2\!\,}$  points ${\displaystyle x_{0},\ldots ,x_{n+1}\!\,}$ , so ${\displaystyle e\!\,}$  changes sign ${\displaystyle n+1\!\,}$  times and, by the intermediate value theorem, has at least ${\displaystyle n+1\!\,}$  zeros.

However ${\displaystyle e=q-p\!\,}$  is a polynomial of degree at most ${\displaystyle n\!\,}$  and thus has at most ${\displaystyle n\!\,}$  zeros unless it is identically zero. Therefore ${\displaystyle e\equiv 0\!\,}$  and ${\displaystyle p=q\!\,}$ , contradicting ${\displaystyle E(q)<E(p)\!\,}$ . Hence ${\displaystyle p\!\,}$  is a best approximation.

## Problem 1b

 Compute the best linear approximation to ${\displaystyle x^{2}\;{\mbox{in}}\;[0,1]\!\,}$ . [Hint: Drawing a line through the parabola will allow you to conjecture where two points of oscillation must lie. Use conditions from (a) to determine the third point and coefficients of ${\displaystyle p\!\,}$ .]

## Solution 1b

For ${\displaystyle f(x)=x^{2}\!\,}$ , the error of linear interpolation at nodes ${\displaystyle x_{1},x_{2}\!\,}$  is exactly ${\displaystyle (x-x_{1})(x-x_{2})\!\,}$  since ${\displaystyle f''\equiv 2\!\,}$ . This error equioscillates when the nodes are the roots of the Chebyshev polynomial ${\displaystyle T_{2}\!\,}$  mapped from ${\displaystyle [-1,1]\!\,}$  to ${\displaystyle [0,1]\!\,}$ , which are given by

${\displaystyle x_{k}={\frac {1}{2}}+{\frac {1}{2}}\cos({\frac {2k-1}{4}}\pi )\!\,}$

So our points at which to interpolate are

${\displaystyle x_{1}={\frac {1}{2}}+{\frac {1}{2}}\cos({\frac {\pi }{4}})\!\,}$

${\displaystyle x_{2}={\frac {1}{2}}+{\frac {1}{2}}\cos({\frac {3\pi }{4}})={\frac {1}{2}}-{\frac {1}{2}}\cos({\frac {\pi }{4}})\!\,}$

Our linear interpolant passes through the points ${\displaystyle (x_{1},x_{1}^{2})\!\,}$  and ${\displaystyle (x_{2},x_{2}^{2})\!\,}$ , which using point-slope form gives the equation

${\displaystyle y-x_{1}^{2}={\frac {x_{2}^{2}-x_{1}^{2}}{x_{2}-x_{1}}}(x-x_{1})=(x_{1}+x_{2})(x-x_{1})\!\,}$

Since ${\displaystyle x_{1}+x_{2}=1\!\,}$  and ${\displaystyle x_{1}x_{2}={\frac {1}{4}}-{\frac {1}{8}}={\frac {1}{8}}\!\,}$ , this simplifies to

${\displaystyle y=x-x_{1}+x_{1}^{2}=x-x_{1}x_{2}=x-{\frac {1}{8}}\!\,}$

The error ${\displaystyle x^{2}-(x-{\tfrac {1}{8}})\!\,}$  equals ${\displaystyle +{\tfrac {1}{8}},-{\tfrac {1}{8}},+{\tfrac {1}{8}}\!\,}$  at ${\displaystyle x=0,{\tfrac {1}{2}},1\!\,}$ , so by the sufficient condition of part (a) with ${\displaystyle n=1\!\,}$ , this is the best linear approximation.
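As a quick numerical sanity check (a NumPy sketch, not part of the original solution), we can confirm that ${\displaystyle p(x)=x-{\tfrac {1}{8}}\!\,}$  has maximum error ${\displaystyle {\tfrac {1}{8}}\!\,}$  with alternating signs at ${\displaystyle 0,{\tfrac {1}{2}},1\!\,}$ :

```python
import numpy as np

# Dense grid on [0, 1] to estimate the sup-norm error of p(x) = x - 1/8.
x = np.linspace(0.0, 1.0, 100001)
err = x**2 - (x - 0.125)

# Maximum deviation should be E(p) = 1/8.
E = np.max(np.abs(err))
print(E)  # 0.125

# The error alternates in sign at the three points 0, 1/2, 1,
# as required by the condition in part (a) with n = 1.
for pt in (0.0, 0.5, 1.0):
    print(pt, pt**2 - (pt - 0.125))
```

The three printed error values are ${\displaystyle +{\tfrac {1}{8}},-{\tfrac {1}{8}},+{\tfrac {1}{8}}\!\,}$ , the equioscillation pattern from part (a).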

## Problem 2

 We will be concerned with the least squares problem of minimizing ${\displaystyle \rho ^{2}(x)=\|b-Ax\|^{2}\!\,}$ . Here ${\displaystyle A\!\,}$  is an ${\displaystyle m\times n\!\,}$  matrix of rank ${\displaystyle n\!\,}$  (which implies ${\displaystyle m\geq n\!\,}$ ) and ${\displaystyle \|\cdot \|\!\,}$  is the Euclidean vector norm. Let ${\displaystyle A={\begin{pmatrix}Q_{1}&Q_{2}\end{pmatrix}}{\begin{pmatrix}R\\0\end{pmatrix}}\!\,}$ be the QR decomposition of ${\displaystyle A\!\,}$ . Here ${\displaystyle Q_{1},Q_{2}\;{\mbox{and}}\;R\!\,}$  are respectively ${\displaystyle m\times n,m\times (m-n),\;{\mbox{and}}\;n\times n\!\,}$ .

## Problem 2a

 Show that the solution of the least squares problem satisfies the QR equation ${\displaystyle Rx=Q_{1}^{T}b\!\,}$  and that the solution is unique. Further show that ${\displaystyle \rho (x)=\|Q_{2}^{T}b\|\!\,}$ .

## Solution 2a

### The two problems are equivalent

First notice

${\displaystyle A={\begin{pmatrix}Q_{1}&Q_{2}\end{pmatrix}}{\begin{pmatrix}R\\0\end{pmatrix}}=Q_{1}R}$

Then, since ${\displaystyle Q={\begin{pmatrix}Q_{1}&Q_{2}\end{pmatrix}}\!\,}$  is an orthogonal matrix and multiplying by an orthogonal matrix does not change the Euclidean norm, we can write

{\displaystyle {\begin{aligned}\|b-Ax\|^{2}&=\|Q^{T}(b-Ax)\|^{2}\\&=\left\|{\begin{pmatrix}Q_{1}^{T}b-Rx\\Q_{2}^{T}b\end{pmatrix}}\right\|^{2}\\&=\|Q_{1}^{T}b-Rx\|^{2}+\|Q_{2}^{T}b\|^{2}\end{aligned}}}

using ${\displaystyle Q_{1}^{T}Q_{1}=I\!\,}$  and ${\displaystyle Q_{2}^{T}Q_{1}=0\!\,}$ .

The second term is independent of ${\displaystyle x\!\,}$ , so minimizing ${\displaystyle \|b-Ax\|^{2}\!\,}$  is equivalent to minimizing ${\displaystyle \|Q_{1}^{T}b-Rx\|^{2}\!\,}$ , whose minimum value of zero is attained exactly when ${\displaystyle Rx=Q_{1}^{T}b\!\,}$ . Since ${\displaystyle A\!\,}$  has rank ${\displaystyle n\!\,}$ , ${\displaystyle R\!\,}$  is ${\displaystyle n\times n\!\,}$  and non-singular, so this solution exists and is unique.

### Show that ${\displaystyle \rho (x)=\|Q_{2}^{T}b\|}$

At the least squares solution the first term vanishes, leaving

${\displaystyle \rho ^{2}(x)=\|b-Ax\|^{2}=\|Q_{2}^{T}b\|^{2}}$ , or simply ${\displaystyle \rho (x)=\|Q_{2}^{T}b\|}$ , as desired.
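The QR equation and the residual formula can be checked numerically (a NumPy sketch with randomly generated data, not part of the original solution):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 3
A = rng.standard_normal((m, n))   # has full column rank with probability 1
b = rng.standard_normal(m)

# Full QR: Q is m-by-m orthogonal; Q1 = Q[:, :n], Q2 = Q[:, n:].
Q, R_full = np.linalg.qr(A, mode="complete")
Q1, Q2 = Q[:, :n], Q[:, n:]
R = R_full[:n, :]                 # n-by-n upper triangular, non-singular

# Solve the QR equation R x = Q1^T b.
x = np.linalg.solve(R, Q1.T @ b)

# Compare with a reference least squares solution.
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.allclose(x, x_ref))      # True

# Residual norm rho(x) = ||b - A x|| equals ||Q2^T b||.
rho = np.linalg.norm(b - A @ x)
print(np.isclose(rho, np.linalg.norm(Q2.T @ b)))  # True
```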

## Problem 2b

 Use the QR equation to show that the least squares solution satisfies the normal equations ${\displaystyle (A^{T}A)x=A^{T}b\!\,}$ .

## Solution 2b

{\displaystyle {\begin{aligned}A^{T}Ax&=(Q_{1}R)^{T}Q_{1}Rx\\&=R^{T}Q_{1}^{T}Q_{1}Rx\\&=R^{T}Rx\\\\A^{T}b&=(Q_{1}R)^{T}b\\&=R^{T}\underbrace {Q_{1}^{T}b} _{Rx}\\&=R^{T}Rx\end{aligned}}\!\,}

Hence ${\displaystyle A^{T}Ax=R^{T}Rx=A^{T}b\!\,}$ .
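The same identity can be verified numerically (a NumPy sketch with random data, not part of the original solution): the solution of the QR equation also satisfies the normal equations.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 4))
b = rng.standard_normal(10)

# Thin QR: Q1 is 10x4 with orthonormal columns, R is 4x4 upper triangular.
Q1, R = np.linalg.qr(A)

# x from the QR equation R x = Q1^T b ...
x = np.linalg.solve(R, Q1.T @ b)

# ... also satisfies the normal equations A^T A x = A^T b.
print(np.allclose(A.T @ A @ x, A.T @ b))  # True
```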

## Problem 3

 ${\displaystyle \!\,}$  Let ${\displaystyle \!\,A}$  be real symmetric and let ${\displaystyle u_{0}\neq 0\!\,}$  be given. For ${\displaystyle k=1,2,\ldots \!\,}$ , define ${\displaystyle u_{k+1}\!\,}$  as the linear combination of the vectors ${\displaystyle Au_{k},u_{k},\ldots ,u_{0}\!\,}$  with the coefficient of ${\displaystyle Au_{k}\!\,}$  equal to one and orthogonal to the vectors ${\displaystyle u_{k},\ldots ,u_{0}\!\,}$ ; i.e.  ${\displaystyle u_{k+1}=Au_{k}-\alpha _{k}u_{k}-\beta _{k}u_{k-1}-\gamma _{k}u_{k-2}-\cdots \!\,}$  

## Problem 3a

 Find formulas for ${\displaystyle \alpha _{k}\!\,}$  and ${\displaystyle \beta _{k}\!\,}$

## Solution 3a

Using Gram–Schmidt orthogonalization (taking the inner product of the recurrence with ${\displaystyle u_{k}\!\,}$  and with ${\displaystyle u_{k-1}\!\,}$ , respectively), we have

{\displaystyle {\begin{aligned}\alpha _{k}&={\frac {(Au_{k},u_{k})}{\|u_{k}\|^{2}}}\\\beta _{k}&={\frac {(Au_{k},u_{k-1})}{\|u_{k-1}\|^{2}}}\end{aligned}}}

## Problem 3b

 Show that ${\displaystyle \gamma _{k}=\delta _{k}=\ldots =0\!\,}$ . Where do you use the symmetry of ${\displaystyle A\!\,}$ ?

## Solution 3b

Since

${\displaystyle \gamma _{k}={\frac {(Au_{k},u_{k-2})}{\|u_{k-2}\|^{2}}}}$

it suffices to show that

${\displaystyle (Au_{k},u_{k-2})=0\!\,}$

Since ${\displaystyle A\!\,}$  is symmetric,

${\displaystyle (Au_{k},u_{k-2})=(u_{k},A^{T}u_{k-2})=(u_{k},Au_{k-2})\!\,}$

By construction, the vectors are mutually orthogonal: ${\displaystyle (u_{i},u_{j})=0\;{\mbox{for }}i\neq j\!\,}$ .

Also from hypothesis,

{\displaystyle {\begin{aligned}Au_{k}&=u_{k+1}+\alpha _{k}u_{k}+\beta _{k}u_{k-1}+\gamma _{k}u_{k-2}+\ldots \\Au_{k-2}&=u_{k-1}+\alpha _{k-2}u_{k-2}+\beta _{k-2}u_{k-3}+\gamma _{k-2}u_{k-4}+\ldots \end{aligned}}}

Using the above results, and noting that every vector in the expansion of ${\displaystyle Au_{k-2}\!\,}$  has index at most ${\displaystyle k-1<k\!\,}$ , we have

{\displaystyle {\begin{aligned}(Au_{k},u_{k-2})&=(u_{k},Au_{k-2})\\&=(u_{k},u_{k-1}+\alpha _{k-2}u_{k-2}+\beta _{k-2}u_{k-3}+\gamma _{k-2}u_{k-4}+\ldots )\\&=(u_{k},u_{k-1})+\alpha _{k-2}(u_{k},u_{k-2})+\beta _{k-2}(u_{k},u_{k-3})+\gamma _{k-2}(u_{k},u_{k-4})+\ldots \\&=0\end{aligned}}}

The same argument with ${\displaystyle u_{k-2}\!\,}$  replaced by ${\displaystyle u_{k-3},u_{k-4},\ldots \!\,}$  shows ${\displaystyle \delta _{k}=\ldots =0\!\,}$ . The symmetry of ${\displaystyle A\!\,}$  is used in the first equality, to move ${\displaystyle A\!\,}$  from ${\displaystyle u_{k}\!\,}$  onto ${\displaystyle u_{k-2}\!\,}$ .
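The vanishing of the higher coefficients can be observed numerically (a NumPy sketch with a random symmetric matrix, not part of the original solution): running the recurrence with only the ${\displaystyle \alpha _{k}\!\,}$  and ${\displaystyle \beta _{k}\!\,}$  terms still yields mutually orthogonal vectors.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                      # real symmetric

u = [rng.standard_normal(n)]           # u_0 != 0
for k in range(4):
    v = A @ u[k]
    # alpha_k = (A u_k, u_k) / ||u_k||^2
    v = v - ((A @ u[k]) @ u[k]) / (u[k] @ u[k]) * u[k]
    if k >= 1:
        # beta_k = (A u_k, u_{k-1}) / ||u_{k-1}||^2
        v = v - ((A @ u[k]) @ u[k - 1]) / (u[k - 1] @ u[k - 1]) * u[k - 1]
    u.append(v)

# The u_k come out mutually orthogonal even though the recurrence only
# subtracted the u_k and u_{k-1} components: gamma_k = delta_k = ... = 0.
cosines = [abs(u[i] @ u[j]) / (np.linalg.norm(u[i]) * np.linalg.norm(u[j]))
           for i in range(len(u)) for j in range(i)]
print(max(cosines))   # close to machine precision
```

This is exactly the three-term recurrence underlying the Lanczos process for symmetric matrices.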

## Problem 3c

 For which non-zero vectors ${\displaystyle u_{0}\!\,}$  does ${\displaystyle u_{1}=0\!\,}$  hold?

## Solution 3c

For ${\displaystyle k=0\!\,}$ ,

${\displaystyle u_{1}=Au_{0}-\alpha _{0}u_{0}\!\,}$

If ${\displaystyle u_{1}=0\!\,}$ , then

${\displaystyle Au_{0}=\alpha _{0}u_{0}\!\,}$

Since ${\displaystyle \alpha _{0}\!\,}$  is a scalar, ${\displaystyle u_{0}\!\,}$  must be an eigenvector of ${\displaystyle A\!\,}$ . Conversely, if ${\displaystyle Au_{0}=\lambda u_{0}\!\,}$ , then ${\displaystyle \alpha _{0}={\frac {(Au_{0},u_{0})}{\|u_{0}\|^{2}}}=\lambda \!\,}$ , so ${\displaystyle u_{1}=Au_{0}-\lambda u_{0}=0\!\,}$ . Hence ${\displaystyle u_{1}=0\!\,}$  holds exactly when ${\displaystyle u_{0}\!\,}$  is an eigenvector of ${\displaystyle A\!\,}$ .
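A small numerical check of the converse direction (a NumPy sketch, not part of the original solution): starting from an eigenvector, ${\displaystyle \alpha _{0}\!\,}$  recovers the eigenvalue and ${\displaystyle u_{1}\!\,}$  vanishes.

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2                  # real symmetric

# Take u_0 to be an eigenvector of A.
lam, V = np.linalg.eigh(A)
u0 = V[:, 0]

alpha0 = ((A @ u0) @ u0) / (u0 @ u0)   # Rayleigh quotient
u1 = A @ u0 - alpha0 * u0

print(np.allclose(u1, 0))          # True: u_1 vanishes
print(np.isclose(alpha0, lam[0]))  # True: alpha_0 is the eigenvalue
```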