Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/August 2006

Problem 1

Consider the definite integral

 

Let   denote the approximation of  by the composite midpoint rule with   uniform subintervals. For every   set

 .

Let   be defined by

 .

Assume that  .
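
For concreteness, the composite midpoint rule is easy to state in code. Below is a minimal NumPy sketch; the interval [0, 1], the test integrand, and the name composite_midpoint are placeholders chosen for illustration, not part of the problem statement.

import numpy as np

def composite_midpoint(f, a, b, n):
    # Approximate the integral of f over [a, b] using n uniform subintervals,
    # sampling f only at the midpoint of each subinterval.
    h = (b - a) / n
    midpoints = a + h * (np.arange(n) + 0.5)
    return h * np.sum(f(midpoints))

# Example usage with a placeholder integrand on [0, 1]:
approx = composite_midpoint(np.exp, 0.0, 1.0, 100)
print(approx, np.exp(1.0) - 1.0)   # approximation vs. exact value of the test integral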

Problem 1a

Show that the quadrature error   satisfies

 

Hint: Use integration by parts over each subinterval  .

Solution 1a

Integrating   by parts between arbitrary points   and   gives

 


Since   is defined on   we use

 

and

 


Using the first interval we get

 


And for the second we get

 


Since these apply to arbitrary half-subintervals, we can rewrite equation   with its indices shifted by one unit. The equation for the interval   is

 


Combining   and   and writing the result in the same form as the integration-by-parts identity, we have

 


The last step is to sum this over all   subintervals, noting that  

 

Problem 1b

Derive a sharp bound on the error of the form

  for every  .

Here   denotes the maximum norm over  . Recall that the above bound is sharp when the inequality is an equality for some nonzero  .

Solution 1b

Applying the result of part (a), the triangle inequality, and pulling out the constant  , we have


 


  is a finite constant, since   is compact and   is continuous on each of the finitely many intervals on which it is defined.


The above inequality becomes an equality when


 


where   is any constant.
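
As a numerical sanity check of sharpness, the function just described (up to a constant multiple, a sawtooth whose derivative has constant magnitude and changes sign at each subinterval midpoint) can be fed to the midpoint rule. The sketch below assumes the integral is taken over [0, 1] and compares the observed error with a bound of the form (h/4)·max|f'|, the classical sharp constant for the midpoint rule under a first-derivative bound; check this against the constant actually derived above.

import numpy as np

def composite_midpoint(f, a, b, n):
    h = (b - a) / n
    midpoints = a + h * (np.arange(n) + 0.5)
    return h * np.sum(f(midpoints))

# Sawtooth integrand: f(x) = distance from x to the nearest subinterval midpoint,
# so |f'(x)| = 1 almost everywhere and f' changes sign at each midpoint.
n = 8
h = 1.0 / n
f = lambda x: np.abs(((x / h) % 1.0) - 0.5) * h

exact = n * (h ** 2 / 4.0)                       # exact integral of the sawtooth on [0, 1]
approx = composite_midpoint(f, 0.0, 1.0, n)      # every midpoint value is 0
print(exact - approx, h / 4.0)                   # observed error vs. (h/4)*max|f'|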

Problem 2

Consider the (unshifted) QR method for finding the eigenvalues of an invertible matrix  .

Problem 2a

Give the algorithm.

Solution 2a

The QR algorithm produces a sequence of similar matrices   that tends toward an upper triangular (or nearly upper triangular) matrix. This is useful because the eigenvalues of an upper triangular matrix lie on its diagonal.


i = 1
A_1 = A
while ( error > tolerance )
   A_i = Q_i R_i      (QR decomposition/factorization)
   A_{i+1} = R_i Q_i  (multiply R and Q in reverse order)
   i = i + 1          (increment)
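
The same iteration can be written in a few lines of NumPy. This is a minimal sketch; the stopping test based on the size of the strictly lower triangular part is one common choice and is not prescribed by the problem.

import numpy as np

def qr_algorithm(A, tol=1e-10, max_iter=1000):
    # Unshifted QR iteration: factor A_i = Q_i R_i, then set A_{i+1} = R_i Q_i.
    A_i = np.array(A, dtype=float)
    for _ in range(max_iter):
        Q, R = np.linalg.qr(A_i)
        A_i = R @ Q
        if np.linalg.norm(np.tril(A_i, -1)) < tol:   # below-diagonal part negligible
            break
    return A_i

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
T = qr_algorithm(A)
print(np.sort(np.diag(T)))            # diagonal of the (near) triangular limit
print(np.sort(np.linalg.eigvals(A)))  # eigenvalues of A, for comparison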

Problem 2b

Show that each of the matrices   generated by this algorithm are orthogonally similar to  .

Solution 2b

From the factor step (QR decomposition) of the algorithm, we have


A_i = Q_i R_i,


which implies, since Q_i is orthogonal,


R_i = Q_i^T A_i.


Substituting into the reverse multiply step, we have


A_{i+1} = R_i Q_i = Q_i^T A_i Q_i,


so each A_{i+1} is orthogonally similar to A_i and hence, by induction, to A_1 = A.
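
A quick numerical confirmation of this similarity relation, using an arbitrary test matrix (a sketch; the matrix here is random and unrelated to the problem):

import numpy as np

rng = np.random.default_rng(0)
A_i = rng.standard_normal((5, 5))

Q, R = np.linalg.qr(A_i)
A_next = R @ Q

# A_{i+1} = Q_i^T A_i Q_i, so the iterates share the same eigenvalues.
print(np.allclose(A_next, Q.T @ A_i @ Q))
print(np.allclose(np.sort(np.linalg.eigvals(A_next)),
                  np.sort(np.linalg.eigvals(A_i))))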

Problem 2c

Show that if   is upper Hessenberg then so are each of the matrices   generated by this algorithm.


Solution 2c

A sequence of Givens rotation matrices pre-multiplying  , an upper Hessenberg matrix, yields an upper triangular matrix, i.e.


 


Since each Givens rotation matrix is orthogonal, we can write


 


i.e.


 


If we let  , we have  , or more generally for  


 .


In each case, the sequence of Givens rotation matrices that compose   has the following structure


 


So   is upper Hessenberg.


From the algorithm, we have


 


We conclude that   is upper Hessenberg because, for  , the  th column of   is a linear combination of the first   columns of  , since   is also upper Hessenberg.
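
The preservation of the upper Hessenberg structure is also easy to observe numerically. The sketch below builds an upper Hessenberg test matrix by zeroing everything below the first subdiagonal of a random matrix and then monitors that region across a few QR iterations.

import numpy as np

rng = np.random.default_rng(1)
H = np.triu(rng.standard_normal((6, 6)), k=-1)   # upper Hessenberg test matrix

for _ in range(5):
    Q, R = np.linalg.qr(H)
    H = R @ Q
    # Entries below the first subdiagonal stay at round-off level.
    print(np.max(np.abs(np.tril(H, -2))))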

Problem 2d

Let


 


For this   the sequence   has a limit. Find this limit. Give your reasoning.

Solution 2d

The eigenvalues of   can be computed directly; they are   and  . Furthermore, carrying out the matrix multiplications in the algorithm shows that the diagonal difference   is constant for all  .


Since the limit of   is an upper triangular matrix with the eigenvalues of   on the diagonal, the limit is


 
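
The same reasoning can be illustrated numerically with an arbitrary 2x2 matrix (the matrix below is only an example, not the matrix from the statement):

import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])   # illustrative example only
A_i = A.copy()
for _ in range(50):
    Q, R = np.linalg.qr(A_i)
    A_i = R @ Q

print(A_i)                    # (nearly) upper triangular limit
print(np.linalg.eigvals(A))   # its diagonal carries the eigenvalues of A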

Problem 3

Let   be symmetric and positive definite. Let  . Consider solving   using the conjugate gradient method. The   iterate   then satisfies


  for every  ,


where   denotes the vector A-norm,   is the initial residual, and


 .
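
For reference, a textbook conjugate gradient iteration in NumPy is sketched below; the function and variable names are placeholders, and the example system is random, not taken from the problem.

import numpy as np

def conjugate_gradient(A, b, x0, n_iter):
    # Standard CG iteration for a symmetric positive definite matrix A.
    x = x0.copy()
    r = b - A @ x                       # initial residual r^(0)
    p = r.copy()
    for _ in range(n_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x

# Example usage on a random SPD system:
rng = np.random.default_rng(2)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20.0 * np.eye(20)
b = rng.standard_normal(20)
x = conjugate_gradient(A, b, np.zeros(20), 20)
print(np.linalg.norm(A @ x - b))        # residual norm after 20 iterations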

Problem 3a

Prove that the error   is bounded by


 ,


where   is any real polynomial of degree   or less that satisfies  . Here   denotes the matrix norm of   that is induced by the vector A-norm.

Solution 3a

edit

We know that   for every  , so if we can choose a   such that


 ,


then the result of part (a) follows.

Rewrite r^(0)

First, note from the definition of  


 

Rewrite Krylov space

Therefore, we can rewrite   as follows:


 

Write y explicitly

We can then write   explicitly as follows:


 

Substitute and Apply Inequality

We substitute   into the hypothesis inequality and apply the compatibility inequality of the induced matrix norm to get the desired result.


 

Problem 3b

Let   denote the   Chebyshev polynomial. Let   and   denote respectively the smallest and largest eigenvalues of  . Apply the result of part (a) to

 

to show that

 .

You can use without proof the fact that

 ,

where   denotes the set of eigenvalues of  , and the facts that for every   the polynomial   has degree  , is positive for  , and satisfies

 .
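
In terms of the condition number kappa = lambda_max/lambda_min, the resulting bound takes the classical form 2((sqrt(kappa)-1)/(sqrt(kappa)+1))^n times the initial A-norm error; assuming that is the bound intended above, the sketch below compares it with the A-norm error actually produced by CG on a matrix with a prescribed spectrum.

import numpy as np

def cg_a_norm_errors(A, b, x_exact, x0, n_iter):
    # Run CG and record the A-norm error ||x - x^(n)||_A after each iteration.
    x, r = x0.copy(), b - A @ x0
    p = r.copy()
    errors = []
    for _ in range(n_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
        e = x_exact - x
        errors.append(np.sqrt(e @ (A @ e)))
    return np.array(errors)

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((30, 30)))
eigs = np.linspace(1.0, 100.0, 30)            # prescribed spectrum, kappa = 100
A = Q @ np.diag(eigs) @ Q.T
x_exact = rng.standard_normal(30)
b = A @ x_exact
x0 = np.zeros(30)

e0 = x_exact - x0
err0 = np.sqrt(e0 @ (A @ e0))
kappa = eigs.max() / eigs.min()
rho = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)

errors = cg_a_norm_errors(A, b, x_exact, x0, 15)
for n, err in enumerate(errors, start=1):
    print(n, err, 2.0 * rho ** n * err0)      # observed A-norm error vs. Chebyshev bound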

Solution 3b

Overview

We want to show that


 

Maximize Numerator of p_n(z)

By hypothesis,


 


Since only the numerator of   depends on  , we need only maximize the numerator in order to maximize  . That is, find


 


Rewrite T_n

Let  . Then


 


Hence,


 


so


 


Max of T_n is 1

Denote the argument of   by  , since the argument depends on  . Hence,


 ,


Then,


 


 


Thus  .


Now, since   is real for  ,


 


Hence,


 

Show T_n(1)=1

Let  . Then


 


Using our formula we have,


 


In other words, if  ,   achieves its maximum value of  .
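
The Chebyshev facts used above, namely that |T_n| is bounded by 1 on [-1, 1] and that T_n(1) = 1, can be confirmed numerically with numpy.polynomial.chebyshev (a small sketch):

import numpy as np
from numpy.polynomial import chebyshev as C

n = 7
coeffs = np.zeros(n + 1)
coeffs[n] = 1.0                      # coefficients of T_n in the Chebyshev basis

x = np.linspace(-1.0, 1.0, 1001)
values = C.chebval(x, coeffs)

print(np.max(np.abs(values)))        # max of |T_n| on [-1, 1] is 1
print(C.chebval(1.0, coeffs))        # T_n(1) = 1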