Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/January 2005

Problem 1

Given  , let   denote the best uniform approximation to   among polynomials of degree  , i.e.


is minimized by the choice  . Let  . Prove that there are at least two points   such that



Solution 1

Since both   and   are continuous, so is the function


and therefore   attains both a maximum and a minimum on  .

Let   and  

If  , then there exists a polynomial   (a vertical shift of the supposed best approximation  ) which is a better approximation to  .




Equality holds if and only if  .
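The vertical-shift argument above can be checked numerically. The sketch below is a hedged illustration, not the exam's solution: it uses the hypothetical example f(x) = |x| on [-1, 1] with a deliberately biased candidate p(x) = x² + 0.1, and shows that when the error does not attain its maximum and minimum with equal magnitude, shifting p by the constant (max e + min e)/2 strictly reduces the uniform norm, so the biased p cannot be the best approximation.

```python
import numpy as np

# Hedged numerical sketch of the shift argument, with the assumed example
# f(x) = |x| on [-1, 1] and a deliberately non-optimal candidate polynomial.
x = np.linspace(-1.0, 1.0, 2001)
f = np.abs(x)
p = x**2 + 0.1                 # hypothetical, biased approximant
e = f - p
c = (e.max() + e.min()) / 2.0  # vertical-shift correction
assert c != 0.0                # the two extremes are unbalanced

old_norm = np.abs(e).max()     # uniform error of p
new_norm = np.abs(e - c).max() # uniform error of the shifted polynomial p + c
assert new_norm < old_norm     # the shift strictly improves the approximation
```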

Problem 2a

Find the node   and the weight   so that the integration rule


is exact for linear functions. (No knowledge of orthogonal polynomials is required.)

Solution 2a

Let  . Then


which, after computation, implies


Let  . Then


which, after computation, implies


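The same two-equation procedure can be carried out in code. The sketch below is hedged: the exam's actual integrand (and any weight function) is not reproduced here, so it assumes the plain integral of f over [0, 1]; under that assumption, exactness on f = 1 and f = x yields the midpoint rule.

```python
from fractions import Fraction

# Hedged sketch assuming the rule is  int_0^1 f(x) dx ~ w * f(xbar).
# Impose exactness on f = 1 and f = x and solve the 2x2 system:
#   f(x) = 1:  1   = w            =>  w    = 1
#   f(x) = x:  1/2 = w * xbar     =>  xbar = 1/2
w = Fraction(1)
xbar = Fraction(1, 2)

def rule(f):
    """Apply the one-point rule w * f(xbar)."""
    return w * f(xbar)

# Verify exactness on an arbitrary linear f(x) = a + b*x,
# whose exact integral over [0, 1] is a + b/2.
a, b = Fraction(3), Fraction(-7)
assert rule(lambda x: a + b * x) == a + b * Fraction(1, 2)
```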
Problem 2b

Show that no one-point rule for approximating   can be exact for quadratics.

Solution 2b

Suppose there is a one-point rule for approximating   that is exact for quadratics.

Let  .





a contradiction.
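The contradiction can be exhibited concretely. Again hedged by the assumption that the integral is the plain one over [0, 1]: exactness on f = 1 and f = x forces the unique one-point rule of part (a), and that rule already fails on the quadratic f(x) = x².

```python
from fractions import Fraction

# Hedged sketch (assuming the plain integral over [0, 1]): the unique
# one-point rule exact for linears has w = 1, xbar = 1/2, but it is
# not exact for f(x) = x**2.
w, xbar = Fraction(1), Fraction(1, 2)
approx = w * xbar**2      # rule applied to f(x) = x**2  ->  1/4
exact = Fraction(1, 3)    # int_0^1 x**2 dx
assert approx != exact    # 1/4 != 1/3: the contradiction of part (b)
```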

Problem 2c

In fact


Find  .

Solution 2c

Let  . Then  

Using the values computed in part (b), we have


which implies


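The constant can also be pinned down numerically. The sketch below is hedged by two assumptions not verifiable from the stripped equations: that the rule is the midpoint rule on [0, 1] and that the error functional has the form C·f″(ξ). Since f(x) = x² has constant second derivative f″ ≡ 2, plugging it in determines C directly.

```python
from fractions import Fraction

# Hedged sketch: assuming the midpoint rule on [0, 1] and an error of the
# form C * f''(xi), the quadratic f(x) = x**2 (with f'' = 2 everywhere)
# determines C exactly.
exact = Fraction(1, 3)        # int_0^1 x**2 dx
approx = Fraction(1, 2)**2    # w * f(xbar) with w = 1, xbar = 1/2
C = (exact - approx) / 2      # divide the error by f'' = 2
assert C == Fraction(1, 24)
```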
Problem 2d

Let   and   be two polynomials of degree  . Suppose   and   agree at  ,  , and  . Show


Solution 2d

There exist   such that if   is a polynomial of degree  




Problem 3

Let   be nonsingular and  . Consider the following iteration for the solution  


Problem 3a

Show that if all eigenvalues of   have positive real part then there will be some real   such that the iterates converge for any starting vector  . Discuss how to choose   optimally in case   is symmetric and determine the rate of convergence.

Solution 3a

Convergence for any starting vector

Using the iteration  , we have that


Then, computing norms we obtain


In particular, considering  , we have that


Now, in order to study the convergence of the method, we need to analyze  . We use the following characterization:


Let's denote by   an eigenvalue of the matrix  . Then   implies




The above equation is quadratic with respect to   and opens upward. Hence using the quadratic formula we can find the roots of the equation to be


Then, if   for all the eigenvalues of the matrix  , we can find   such that  , i.e., the method converges.
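The convergence claim can be illustrated numerically. The sketch below is hedged: it assumes the iteration analyzed here is the standard Richardson form x_{k+1} = x_k + α(b − Ax_k), and uses a hypothetical small SPD test matrix whose eigenvalues all have positive real part, with α chosen inside the interval derived above.

```python
import numpy as np

# Hedged sketch of the (assumed) iteration x_{k+1} = x_k + alpha*(b - A x_k)
# on a hypothetical SPD matrix; all eigenvalues are real and positive.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)     # symmetric positive definite
b = rng.standard_normal(4)
x_true = np.linalg.solve(A, b)

lam_max = np.linalg.eigvalsh(A).max()
alpha = 1.0 / lam_max           # any alpha in (0, 2/lam_max) converges
x = np.zeros(4)
for _ in range(500):
    x = x + alpha * (b - A @ x)

assert np.linalg.norm(x - x_true) < 1e-8
```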

Choosing alpha when A is symmetric

On the other hand, if the matrix   is symmetric, we have that  . Then, by the spectral theorem, we can find an orthogonal matrix   and a diagonal matrix   such that  , and consequently,


This last expression is minimized if  , i.e., if  . With this optimal value of  , we obtain that


which implies that


Finally, we obtain


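The optimality of this choice can be checked numerically. Hedged sketch: for a hypothetical symmetric positive definite test matrix, the standard result is α_opt = 2/(λ_min + λ_max), with spectral radius (λ_max − λ_min)/(λ_max + λ_min) for the iteration matrix I − αA; the code verifies both, and checks that no α on a fine grid does better.

```python
import numpy as np

# Hedged sketch: optimal Richardson parameter for a hypothetical
# symmetric test matrix with eigenvalues 1, 2, 5.
A = np.diag([1.0, 2.0, 5.0])
lam = np.linalg.eigvalsh(A)
lam_min, lam_max = lam[0], lam[-1]

alpha_opt = 2.0 / (lam_min + lam_max)

def rho(a):
    """Spectral radius of the iteration matrix I - a*A (A symmetric)."""
    return np.abs(1.0 - a * lam).max()

predicted = (lam_max - lam_min) / (lam_max + lam_min)
assert abs(rho(alpha_opt) - predicted) < 1e-12

# No alpha on a fine grid inside (0, 2/lam_max) beats the optimal one.
grid = np.linspace(0.01, 2.0 / lam_max - 0.01, 500)
assert all(rho(a) >= rho(alpha_opt) - 1e-12 for a in grid)
```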
Problem 3b

Show that if some eigenvalues of   have negative real part and some have positive real part, then there is no real   for which the iterations converge.

Solution 3b

If   for  , then convergence occurs if


Hence  .

Similarly, if   for  , then convergence occurs if


Hence  .

If some eigenvalues are positive and some negative, there does not exist a real   for which the iterations converge since   cannot be both positive and negative simultaneously.
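This can be seen concretely on the smallest hypothetical example with mixed-sign spectrum, A = diag(1, −1): the iteration matrix I − αA has spectral radius max(|1 − α|, |1 + α|) = 1 + |α| ≥ 1 for every real α, so no choice of α converges.

```python
import numpy as np

# Hedged sketch: for the hypothetical mixed-sign example A = diag(1, -1),
# the spectral radius of I - alpha*A is at least 1 for every real alpha,
# so the iteration cannot converge for a general starting vector.
lam = np.array([1.0, -1.0])
for alpha in np.linspace(-5.0, 5.0, 1001):
    rho = np.abs(1.0 - alpha * lam).max()  # = 1 + |alpha|
    assert rho >= 1.0
```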

Problem 3c

Let   for a matrix norm associated to a vector norm. Show that the error can be expressed in terms of the difference between consecutive iterates, namely


(The proof of this is short but a little tricky.)

Solution 3c


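The solution body appears to be missing here. A standard contraction argument, sketched under the assumptions that ρ = ‖I − αA‖ < 1 and that the bound to show is ‖x − x_k‖ ≤ ρ/(1 − ρ)·‖x_k − x_{k−1}‖, runs as follows:

```latex
% Hedged sketch, assuming rho = ||I - alpha A|| < 1 and the claimed bound
% ||x - x_k|| <= rho/(1 - rho) * ||x_k - x_{k-1}||.
% Subtracting the iteration from the fixed-point identity for x gives
% x - x_k = (I - alpha A)(x - x_{k-1}); then split x - x_{k-1}:
\begin{align*}
x - x_k &= (I-\alpha A)\bigl((x - x_k) + (x_k - x_{k-1})\bigr),\\
\|x - x_k\| &\le \rho\,\|x - x_k\| + \rho\,\|x_k - x_{k-1}\|,\\
(1-\rho)\,\|x - x_k\| &\le \rho\,\|x_k - x_{k-1}\|,
\qquad\text{hence}\qquad
\|x - x_k\| \le \frac{\rho}{1-\rho}\,\|x_k - x_{k-1}\|.
\end{align*}
```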
Problem 3d

Let   be the tridiagonal matrix


Find a value of   that guarantees convergence.

Solution 3d

By Gerschgorin's theorem, the magnitudes of all the eigenvalues lie between 1 and 5, i.e.




We want
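The choice can be verified numerically. Hedged sketch: the exam's tridiagonal matrix is not reproduced above, so the code assumes it is tridiag(−1, 3, −1); any tridiagonal matrix with diagonal 3 and off-diagonal entries of modulus 1 has the same Gerschgorin discs, centered at 3 with radius at most 2, so its eigenvalues lie in [1, 5]. Then α = 2/(1 + 5) = 1/3 gives |1 − αλ| ≤ 2/3 < 1 for every eigenvalue, which guarantees convergence.

```python
import numpy as np

# Hedged sketch assuming the matrix is tridiag(-1, 3, -1); its Gerschgorin
# discs put all eigenvalues in [1, 5], so alpha = 1/3 makes the iteration
# matrix I - alpha*A a contraction in the spectral norm.
n = 8
A = 3 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lam = np.linalg.eigvalsh(A)
assert lam.min() >= 1.0 - 1e-12 and lam.max() <= 5.0 + 1e-12

alpha = 1.0 / 3.0
rho = np.abs(1.0 - alpha * lam).max()  # spectral radius of I - alpha*A
assert rho < 1.0                       # convergence is guaranteed
```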