Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/January 2005

Problem 1

Given $f \in C[a,b]$, let $p_n^*$ denote the best uniform approximation to $f$ among polynomials of degree at most $n$, i.e.

$$\max_{x \in [a,b]} |f(x) - p(x)|$$

is minimized over polynomials $p$ of degree at most $n$ by the choice $p = p_n^*$. Let

$$E_n = \max_{x \in [a,b]} |f(x) - p_n^*(x)|.$$

Prove that there are at least two points $x_1, x_2 \in [a,b]$ such that

$$f(x_1) - p_n^*(x_1) = E_n$$

and $f(x_2) - p_n^*(x_2) = -E_n$.

Solution 1

Since both $f$ and $p_n^*$ are continuous, so is the error function

$$e(x) = f(x) - p_n^*(x),$$

and therefore $e$ attains both a maximum and a minimum on $[a,b]$.


Let $M = \max_{x \in [a,b]} e(x)$ and $m = \min_{x \in [a,b]} e(x)$.

If $m \neq -M$, then there exists a polynomial $q = p_n^* + \frac{M+m}{2}$ (a vertical shift of the supposed best approximation $p_n^*$) which is a better approximation to $f$:

$$\max_{x \in [a,b]} |f(x) - q(x)| = \max_{x \in [a,b]} \left| e(x) - \frac{M+m}{2} \right| = \frac{M-m}{2}.$$

Therefore,

$$\frac{M-m}{2} \le \max(|M|, |m|) = E_n.$$

Equality holds if and only if $m = -M$. Since no polynomial of degree at most $n$ can improve on $p_n^*$, we must have $m = -M$, so $e$ attains the value $E_n$ at some point $x_1$ and the value $-E_n$ at some point $x_2$.
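
To see the equioscillation concretely, here is a minimal numerical sketch (assuming NumPy; the function $e^x$ on $[0,1]$ is illustrative) for the degree-zero case, where the best uniform approximation is the constant $(\max f + \min f)/2$ and the error attains both $+E_0$ and $-E_0$:

```python
import numpy as np

# Degree-0 case: the best constant approximation to f on [a, b] is
# c* = (max f + min f)/2, and the error f - c* attains both +E_0 and -E_0.
f = np.exp
a, b = 0.0, 1.0
x = np.linspace(a, b, 10001)

M, m = f(x).max(), f(x).min()
c_star = (M + m) / 2            # best degree-0 approximation
e = f(x) - c_star               # error function
E0 = np.abs(e).max()

print(f"E_0 = {E0:.6f}")
print(f"max error = {e.max():.6f} near x = {x[e.argmax()]:.3f}")
print(f"min error = {e.min():.6f} near x = {x[e.argmin()]:.3f}")
# The max error is +E_0 and the min error is -E_0, as the proof guarantees.
```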

Problem 2a

Find the node $x_1$ and the weight $w_1$ so that the integration rule

$$\int_0^1 f(x)\,dx \approx w_1 f(x_1)$$

is exact for linear functions. (No knowledge of orthogonal polynomials is required.)

Solution 2a

Let $f(x) = 1$. Then

$$\int_0^1 1\,dx = w_1 \cdot 1,$$

which implies after computation

$$w_1 = 1.$$

Let $f(x) = x$. Then

$$\int_0^1 x\,dx = w_1 x_1 = x_1,$$

which implies after computation

$$x_1 = \frac{1}{2}.$$
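
A quick sanity check in Python (a sketch assuming NumPy) confirms that the rule $w_1 = 1$, $x_1 = \tfrac{1}{2}$ (the midpoint rule) reproduces $\int_0^1 (a + bx)\,dx$ exactly:

```python
import numpy as np

# Check that the one-point rule  int_0^1 f(x) dx ≈ 1 * f(1/2)
# is exact for linear functions f(x) = a + b*x.
rng = np.random.default_rng(0)

for _ in range(5):
    a_coef, b_coef = rng.normal(size=2)
    exact = a_coef + b_coef / 2            # int_0^1 (a + b x) dx = a + b/2
    rule = 1.0 * (a_coef + b_coef * 0.5)   # w1 * f(x1) with w1 = 1, x1 = 1/2
    assert abs(exact - rule) < 1e-14
print("midpoint rule is exact on random linear functions")
```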

Problem 2b

Show that no one-point rule for approximating $\int_0^1 f(x)\,dx$ can be exact for quadratics.

Solution 2b

Suppose there is a one-point rule $w_1 f(x_1)$ for approximating $\int_0^1 f(x)\,dx$ that is exact for quadratics.

Let $f(x) = (x - x_1)^2$.

Then,

$$w_1 f(x_1) = w_1 \cdot 0 = 0,$$

but

$$\int_0^1 (x - x_1)^2\,dx = \frac{(1 - x_1)^3 + x_1^3}{3} > 0,$$

a contradiction.
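
The residual can also be checked symbolically; a small sketch (assuming SymPy) shows that $\int_0^1 (x - x_1)^2\,dx$ has no real root in $x_1$, so it can never vanish:

```python
import sympy as sp

# Symbolic check that no single node x1 can make the rule exact for the
# quadratic f(x) = (x - x1)^2: the rule gives 0, the integral never does.
x, x1 = sp.symbols('x x1', real=True)
residual = sp.integrate((x - x1) ** 2, (x, 0, 1))
print(sp.expand(residual))                # x1**2 - x1 + 1/3
print(sp.solve(sp.Eq(residual, 0), x1))   # [] -- no real root, so never zero
```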

Problem 2c

In fact, for $f \in C^2[0,1]$ there exists $\xi \in (0,1)$ such that

$$\int_0^1 f(x)\,dx - f\!\left(\tfrac{1}{2}\right) = C f''(\xi).$$

Find $C$.

Solution 2c

Let $f(x) = x^2$. Then $f''(\xi) = 2$ for every $\xi \in (0,1)$.

Using the values $w_1 = 1$ and $x_1 = \tfrac{1}{2}$ computed in part (a), we have

$$\int_0^1 x^2\,dx - \left(\tfrac{1}{2}\right)^2 = \frac{1}{3} - \frac{1}{4} = \frac{1}{12} = 2C,$$

which implies

$$C = \frac{1}{24}.$$
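
As a numerical cross-check, here is a short sketch (assuming NumPy-compatible SciPy is available) confirming that the midpoint-rule error for $f(x) = x^2$ equals $2C = \tfrac{1}{12}$:

```python
from scipy.integrate import quad

# Numerical check of the midpoint-rule error constant C = 1/24 on [0, 1]:
# for f(x) = x^2 (f'' = 2 everywhere) the error should be exactly 2 * (1/24).
f = lambda x: x ** 2
integral, _ = quad(f, 0.0, 1.0)
error = integral - f(0.5)
print(error, 2 / 24)   # both equal 1/12 ≈ 0.083333...
```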

Problem 2d

Let $p$ and $q$ be two polynomials of degree at most 2. Suppose $p$ and $q$ agree at three distinct points $x_0$, $x_1$, and $x_2$. Show

$$p \equiv q.$$

Solution 2d

There exist polynomials $\ell_0, \ell_1, \ell_2$ (the Lagrange basis polynomials for the nodes $x_0, x_1, x_2$) such that if $r$ is a polynomial of degree at most 2,

$$r(x) = \sum_{i=0}^{2} r(x_i)\,\ell_i(x).$$

Hence,

$$p(x) = \sum_{i=0}^{2} p(x_i)\,\ell_i(x) = \sum_{i=0}^{2} q(x_i)\,\ell_i(x) = q(x).$$
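
The same uniqueness can be seen numerically; a brief sketch (assuming NumPy, with illustrative nodes and coefficients) recovers a quadratic exactly from its values at three nodes:

```python
import numpy as np

# Two quadratics that agree at three distinct points must be identical:
# interpolation at x0, x1, x2 determines a degree-2 polynomial uniquely.
nodes = np.array([0.0, 0.5, 1.0])
p = np.array([2.0, -3.0, 1.0])     # coefficients of p (highest degree first)
values = np.polyval(p, nodes)      # p's values at the nodes

# Reconstruct from the three values alone; this recovers p exactly.
q = np.polyfit(nodes, values, deg=2)
print(np.allclose(p, q))           # True
```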

Problem 3

Let $A \in \mathbb{R}^{n \times n}$ be nonsingular and $b \in \mathbb{R}^n$. Consider the following iteration for the solution of $Ax = b$:

$$x_{k+1} = x_k + \alpha (b - A x_k), \qquad k = 0, 1, 2, \ldots$$

Problem 3a

Show that if all eigenvalues of $A$ have positive real part then there will be some real $\alpha$ such that the iterates converge for any starting vector $x_0$. Discuss how to choose $\alpha$ optimally in case $A$ is symmetric and determine the rate of convergence.

Solution 3a

Convergence for any starting vector

Using the iteration $x_{k+1} = x_k + \alpha(b - A x_k)$ together with $b = Ax$, we have that

$$x_{k+1} - x = (I - \alpha A)(x_k - x).$$

Then, computing norms, we obtain

$$\|x_{k+1} - x\| \le \|I - \alpha A\|\,\|x_k - x\|.$$

In particular, applying this bound repeatedly, we have that

$$\|x_k - x\| \le \|I - \alpha A\|^k\,\|x_0 - x\|.$$

Now, in order to study the convergence of the method, we need to analyze $\rho(I - \alpha A)$, the spectral radius of the iteration matrix. We use the following characterization:

$$x_k \to x \ \text{for every } x_0 \quad \Longleftrightarrow \quad \rho(I - \alpha A) < 1.$$


Let's denote by $\lambda = a + ib$ an eigenvalue of the matrix $A$. Then $|1 - \alpha\lambda| < 1$ implies

$$(1 - \alpha a)^2 + (\alpha b)^2 < 1,$$

i.e.

$$(a^2 + b^2)\,\alpha^2 - 2a\,\alpha < 0.$$

The left-hand side is quadratic with respect to $\alpha$ and opens upward. Hence, using the quadratic formula, we can find the roots of the equation to be

$$\alpha = 0 \quad \text{and} \quad \alpha = \frac{2a}{a^2 + b^2} = \frac{2\,\mathrm{Re}(\lambda)}{|\lambda|^2}.$$

Then, if $a = \mathrm{Re}(\lambda) > 0$ for all the eigenvalues of the matrix $A$, we can find $\alpha$ with

$$0 < \alpha < \min_{\lambda} \frac{2\,\mathrm{Re}(\lambda)}{|\lambda|^2}$$

such that $\rho(I - \alpha A) < 1$, i.e., the method converges.

Choosing α when A is symmetric

On the other hand, if the matrix $A$ is symmetric, we have that its eigenvalues are real, and by assumption $0 < \lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$. Then, using the Schur decomposition, we can find a unitary matrix $Q$ and a diagonal matrix $\Lambda$ such that $A = Q \Lambda Q^T$, and in consequence,

$$\|I - \alpha A\|_2 = \|Q (I - \alpha \Lambda) Q^T\|_2 = \max_{1 \le i \le n} |1 - \alpha \lambda_i| = \max\{|1 - \alpha \lambda_1|,\ |1 - \alpha \lambda_n|\}.$$

This last expression is minimized if $1 - \alpha\lambda_1 = -(1 - \alpha\lambda_n)$, i.e., if $\alpha = \frac{2}{\lambda_1 + \lambda_n}$. With this optimal value of $\alpha$, we obtain that

$$\|I - \alpha A\|_2 = 1 - \frac{2\lambda_1}{\lambda_1 + \lambda_n} = \frac{\lambda_n - \lambda_1}{\lambda_n + \lambda_1},$$

which implies that

$$\|I - \alpha A\|_2 = \frac{\kappa - 1}{\kappa + 1}, \qquad \kappa = \frac{\lambda_n}{\lambda_1} = \mathrm{cond}_2(A).$$

Finally, we've obtained that

$$\|x_k - x\|_2 \le \left( \frac{\kappa - 1}{\kappa + 1} \right)^k \|x_0 - x\|_2.$$
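
The following Python sketch (assuming NumPy; the matrix and sizes are illustrative) runs the iteration with the optimal $\alpha = 2/(\lambda_1 + \lambda_n)$ and compares the observed error with the theoretical rate $\left(\frac{\kappa-1}{\kappa+1}\right)^k$:

```python
import numpy as np

# Richardson iteration x_{k+1} = x_k + alpha*(b - A x_k) on a random SPD
# matrix, with the optimal step alpha = 2/(lambda_min + lambda_max).
rng = np.random.default_rng(1)
n = 50
B = rng.normal(size=(n, n))
A = B @ B.T + n * np.eye(n)              # symmetric positive definite
b = rng.normal(size=n)
x_true = np.linalg.solve(A, b)

lam = np.linalg.eigvalsh(A)              # real eigenvalues, ascending
alpha = 2.0 / (lam[0] + lam[-1])         # optimal step size
rate = (lam[-1] - lam[0]) / (lam[-1] + lam[0])   # = (kappa-1)/(kappa+1)

x = np.zeros(n)                          # so ||x_0 - x|| = ||x_true||
for k in range(1, 31):
    x = x + alpha * (b - A @ x)
    if k % 10 == 0:
        err = np.linalg.norm(x - x_true)
        bound = rate ** k * np.linalg.norm(x_true)
        print(f"k={k:2d}  error={err:.3e}  bound={bound:.3e}")
```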

Problem 3b

Show that if some eigenvalues of $A$ have negative real part and some have positive real part, then there is no real $\alpha$ for which the iterations converge.

Solution 3b

If $\mathrm{Re}(\lambda_i) > 0$ for some eigenvalue $\lambda_i$ of $A$, then, by the computation in part (a), convergence requires

$$0 < \alpha < \frac{2\,\mathrm{Re}(\lambda_i)}{|\lambda_i|^2}.$$

Hence $\alpha > 0$.

Similarly, if $\mathrm{Re}(\lambda_j) < 0$ for some eigenvalue $\lambda_j$, then convergence requires

$$\frac{2\,\mathrm{Re}(\lambda_j)}{|\lambda_j|^2} < \alpha < 0.$$

Hence $\alpha < 0$.

If some eigenvalues have positive real part and some have negative real part, there does not exist a real $\alpha$ for which the iterations converge, since $\alpha$ cannot be both positive and negative simultaneously.
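
A two-line check (a sketch assuming NumPy, with the toy matrix $A = \mathrm{diag}(1, -1)$) shows the spectral radius of $I - \alpha A$ exceeding 1 for either sign of $\alpha$:

```python
import numpy as np

# With eigenvalues of mixed sign (here A = diag(1, -1)), the spectral
# radius of I - alpha*A is greater than 1 for every nonzero real alpha.
eigenvalues = np.array([1.0, -1.0])
for alpha in (0.5, -0.5, 0.1, -0.1):
    radius = np.max(np.abs(1 - alpha * eigenvalues))
    print(f"alpha={alpha:+.1f}  rho(I - alpha*A) = {radius:.2f}")
# Each line prints a radius > 1, so the iteration never converges.
```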

Problem 3c

Let $c = \|I - \alpha A\| < 1$ for a matrix norm associated to a vector norm. Show that the error can be expressed in terms of the difference between consecutive iterates, namely

$$\|x_k - x\| \le \frac{c}{1 - c}\,\|x_k - x_{k-1}\|.$$

(The proof of this is short but a little tricky.)

Solution 3c

$$\|x_k - x\| = \|(I - \alpha A)(x_{k-1} - x)\| \le c\,\|x_{k-1} - x\| \le c\,\big(\|x_{k-1} - x_k\| + \|x_k - x\|\big),$$

so $(1 - c)\,\|x_k - x\| \le c\,\|x_k - x_{k-1}\|$, which is the desired bound.
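
The bound is easy to test in practice; here is a minimal sketch (assuming NumPy, with an illustrative SPD system and a safe step size) verifying it at each iteration:

```python
import numpy as np

# Check the a posteriori bound ||x_k - x|| <= c/(1-c) * ||x_k - x_{k-1}||
# along a Richardson run on a small SPD system.
rng = np.random.default_rng(2)
n = 20
B = rng.normal(size=(n, n))
A = B @ B.T + n * np.eye(n)
b = rng.normal(size=n)
x_true = np.linalg.solve(A, b)

alpha = 1.0 / np.linalg.eigvalsh(A)[-1]       # 0 < alpha < 2/lambda_max
c = np.linalg.norm(np.eye(n) - alpha * A, 2)  # induced 2-norm, c < 1

x_prev = np.zeros(n)
x = x_prev + alpha * (b - A @ x_prev)
for _ in range(20):
    lhs = np.linalg.norm(x - x_true)
    rhs = c / (1 - c) * np.linalg.norm(x - x_prev)
    assert lhs <= rhs + 1e-12
    x_prev, x = x, x + alpha * (b - A @ x)
print("bound holds at every step, c =", round(c, 4))
```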

Problem 3d

Let $A$ be the $n \times n$ tridiagonal matrix

$$A = \begin{pmatrix} 3 & -1 & & \\ -1 & 3 & \ddots & \\ & \ddots & \ddots & -1 \\ & & -1 & 3 \end{pmatrix}.$$

Find a value of $\alpha$ that guarantees convergence.

Solution 3d

By Gerschgorin's theorem, the magnitudes of all the eigenvalues lie between 1 and 5; since $A$ is symmetric, its eigenvalues are real, i.e.

$$1 \le \lambda \le 5.$$

Therefore,

$$\rho(I - \alpha A) \le \max_{1 \le \lambda \le 5} |1 - \alpha \lambda|.$$

We want

$$|1 - \alpha \lambda| < 1 \quad \text{for all } \lambda \in [1, 5].$$

Therefore,

$$0 < \alpha < \frac{2}{5},$$

and any such $\alpha$ (for example, $\alpha = \tfrac{1}{5}$) guarantees convergence.
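
To close the loop, a short sketch (assuming NumPy; the off-diagonal entries $-1$ follow the matrix as reconstructed above, consistent with the Gerschgorin bounds) builds the matrix and confirms convergence with $\alpha = \tfrac{1}{5}$:

```python
import numpy as np

# Build the tridiagonal matrix (3 on the diagonal, -1 on the off-diagonals)
# and verify that alpha = 1/5 in (0, 2/5) makes Richardson iteration converge.
n = 10
A = 3 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x_true = np.linalg.solve(A, b)

alpha = 0.2
radius = np.max(np.abs(1 - alpha * np.linalg.eigvalsh(A)))
print("spectral radius of I - alpha*A:", radius)   # < 1

x = np.zeros(n)
for _ in range(200):
    x = x + alpha * (b - A @ x)
print("final error:", np.linalg.norm(x - x_true))  # ~ machine precision
```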