Numerical Methods Qualification Exam Problems and Solutions (University of Maryland)/January 2007

Problem 1


Let $x_0, x_1, \ldots, x_n$ be distinct real numbers, and consider the $(n+1) \times (n+1)$ Vandermonde matrix

$$ V = \begin{pmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^n \\ 1 & x_1 & x_1^2 & \cdots & x_1^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^n \end{pmatrix}. $$

Prove that $V^{-1}$ can be expressed as

$$ V^{-1} = \begin{pmatrix} a_0 & a_1 & \cdots & a_n \end{pmatrix}, $$

where each column vector $a_j = (a_{0j}, a_{1j}, \ldots, a_{nj})^T$ generates a polynomial

$$ p_j(x) = \sum_{k=0}^{n} a_{kj} x^k $$

satisfying

$$ p_j(x_i) = \begin{cases} 1, & i = j, \\ 0, & i \neq j. \end{cases} $$

You may cite anything you know about polynomial interpolation to justify that $V$ is nonsingular.

Solution 1


We want to show that

$$ V V^{-1} = V \begin{pmatrix} a_0 & a_1 & \cdots & a_n \end{pmatrix} = I, $$

or equivalently that the $i$th, $j$th entry of $V V^{-1}$ is $1$ for $i = j$ and $0$ for $i \neq j$, i.e.

$$ \left( V V^{-1} \right)_{ij} = \delta_{ij}. $$

First notice that, since $V_{ik} = x_i^k$,

$$ \left( V V^{-1} \right)_{ij} = \sum_{k=0}^{n} V_{ik} a_{kj} = \sum_{k=0}^{n} a_{kj} x_i^k. $$

Also notice that

$$ p_j(x_i) = \sum_{k=0}^{n} a_{kj} x_i^k = \delta_{ij}, $$

where each $p_j$ exists and is unique because interpolation at the $n+1$ distinct nodes $x_0, \ldots, x_n$ by a polynomial of degree at most $n$ is uniquely solvable; this same fact justifies that $V$ is nonsingular. Hence,

$$ \left( V V^{-1} \right)_{ij} = p_j(x_i) = \delta_{ij}, $$

so $V V^{-1} = I$.
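
This identity is easy to verify numerically. Below is a minimal NumPy sketch (the nodes are an arbitrary illustrative choice, not from the exam): the $j$th column of $V^{-1}$ is read as the coefficient vector of $p_j$, and evaluating each $p_j$ at every node reproduces the identity matrix.

```python
import numpy as np

# Distinct interpolation nodes (arbitrary illustrative choice).
x = np.array([-1.0, 0.0, 0.5, 2.0])
n = len(x)

# Vandermonde matrix with V[i, j] = x_i**j.
V = np.vander(x, increasing=True)

# Columns of V^{-1} are the coefficient vectors a_j.
C = np.linalg.inv(V)

# Evaluate p_j(x_i) = sum_k a_{kj} x_i^k; np.polyval wants highest degree first.
P = np.array([[np.polyval(C[::-1, j], x[i]) for j in range(n)]
              for i in range(n)])
print(np.allclose(P, np.eye(n)))  # True: p_j(x_i) = delta_ij
```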

Problem 2


Let $A$ be a real matrix of order $n$ with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ and $n$ linearly independent eigenvectors $v_1, v_2, \ldots, v_n$.

Suppose you want to find an eigenvector $v_j$ corresponding to the eigenvalue $\lambda_j$, and you are given $\alpha$ such that

$$ 0 < |\alpha - \lambda_j| < |\alpha - \lambda_i| \quad \text{for all } i \neq j. $$

Specify a numerical algorithm for finding $v_j$, and give a proof demonstrating that the algorithm converges under appropriate circumstances.

Solution 2


Shifted Inverse Power Method


Let

$$ B = (A - \alpha I)^{-1}. $$

Then,

$$ A v_i = \lambda_i v_i \quad \Longrightarrow \quad (A - \alpha I) v_i = (\lambda_i - \alpha) v_i, $$

which implies

$$ B v_i = (A - \alpha I)^{-1} v_i = \frac{1}{\lambda_i - \alpha} \, v_i. $$

Since shifting the eigenvalues and inverting the matrix does not affect the eigenvectors, $B$ has the same eigenvectors $v_i$ with eigenvalues

$$ \mu_i = \frac{1}{\lambda_i - \alpha}, \qquad |\mu_j| > |\mu_i| \ \text{ for all } i \neq j. $$


Assume $|\mu_j| > |\mu_i|$ for all $i \neq j$. Generate the sequence $\{z^{(k)}\}$ to find $v_j$. Start with an arbitrary $z^{(0)}$ such that $\|z^{(0)}\|_2 = 1$.


For $k = 1, 2, \ldots$
    Solve $(A - \alpha I) w^{(k)} = z^{(k-1)}$
    $z^{(k)} = w^{(k)} / \|w^{(k)}\|_2$
    $\lambda^{(k)} = \left(z^{(k)}\right)^T A z^{(k)}$ (Rayleigh quotient)
End
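
For concreteness, here is a short NumPy sketch of this loop (the test matrix, shift, and iteration count are illustrative assumptions, not part of the problem):

```python
import numpy as np

def shifted_inverse_power(A, alpha, iters=50):
    """Approximate the eigenpair whose eigenvalue is closest to the shift alpha."""
    n = A.shape[0]
    M = A - alpha * np.eye(n)                 # shifted matrix A - alpha*I
    z = np.random.default_rng(0).standard_normal(n)
    z /= np.linalg.norm(z)                    # ||z^(0)||_2 = 1
    for _ in range(iters):
        w = np.linalg.solve(M, z)             # solve (A - alpha I) w^(k) = z^(k-1)
        z = w / np.linalg.norm(w)             # z^(k) = w^(k) / ||w^(k)||_2
    lam = z @ A @ z                           # Rayleigh quotient lambda^(k)
    return lam, z

# Illustrative use: eigenvalues near 1, 2, 3; alpha = 2.1 targets the middle one.
A = np.diag([1.0, 2.0, 3.0]) + 0.01 * np.ones((3, 3))
lam, v = shifted_inverse_power(A, alpha=2.1)
print(lam)   # close to the eigenvalue nearest 2.1
```

In practice $A - \alpha I$ would be factored once (e.g. by an LU factorization) and the factorization reused each iteration; the repeated `np.linalg.solve` here just keeps the sketch short.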



Convergence of Power Method


If $|\mu_j| > |\mu_i|$ for all $i \neq j$, then $z^{(k)}$ will be dominated by $v_j$.


Since $v_1, v_2, \ldots, v_n$ are linearly independent, they form a basis of $\mathbb{R}^n$. Hence,

$$ z^{(0)} = \sum_{i=1}^{n} c_i v_i. $$


From the definition of eigenvectors,

$$ B v_i = \mu_i v_i, \qquad \text{so} \qquad B^k v_i = \mu_i^k v_i. $$


To find a general form of $z^{(k)}$, the approximate eigenvector at the $k$th step, examine a few steps of the algorithm:

$$ z^{(1)} = \frac{B z^{(0)}}{\|B z^{(0)}\|_2}, \qquad z^{(2)} = \frac{B z^{(1)}}{\|B z^{(1)}\|_2} = \frac{B^2 z^{(0)}}{\|B^2 z^{(0)}\|_2}. $$


By induction,

$$ z^{(k)} = \frac{B^k z^{(0)}}{\|B^k z^{(0)}\|_2}. $$

Hence,

$$ B^k z^{(0)} = \sum_{i=1}^{n} c_i \mu_i^k v_i = \mu_j^k \left( c_j v_j + \sum_{i \neq j} c_i \left( \frac{\mu_i}{\mu_j} \right)^{k} v_i \right). $$

Comparing a weighted $z^{(k)}$ and $c_j v_j$,

$$ \frac{\|B^k z^{(0)}\|_2}{\mu_j^k} \, z^{(k)} - c_j v_j = \sum_{i \neq j} c_i \left( \frac{\mu_i}{\mu_j} \right)^{k} v_i, $$

where $c_j \neq 0$ by assumption, so the $v_j$ term does not vanish.

The right-hand side goes to $0$ as $k \to \infty$ since $|\mu_i / \mu_j| < 1$ for all $i \neq j$. Hence as $k$ grows, $z^{(k)}$ becomes parallel to $v_j$. Because $\|z^{(k)}\|_2 = 1$, it must be that $z^{(k)}$ converges, up to sign, to $v_j / \|v_j\|_2$.
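
The geometric rate $\max_{i \neq j} |\mu_i / \mu_j|$ is easy to observe numerically. Here is a small sketch (the diagonal test matrix, shift, and starting vector are illustrative assumptions, not part of the proof): the error should shrink roughly like $\max_{i \neq j} |\mu_i / \mu_j|^k$.

```python
import numpy as np

# Diagonal test matrix: eigenvalues 1, 2, 3 with standard-basis eigenvectors.
A = np.diag([1.0, 2.0, 3.0])
alpha = 2.2                                   # closest eigenvalue: lambda_j = 2
B = np.linalg.inv(A - alpha * np.eye(3))      # B = (A - alpha I)^{-1}
mu = 1.0 / (np.diag(A) - alpha)               # eigenvalues of B
ratio = np.sort(np.abs(mu))[-2] / np.abs(mu).max()   # max |mu_i / mu_j|, i != j

v_j = np.array([0.0, 1.0, 0.0])               # eigenvector for lambda_j = 2
z = np.ones(3) / np.sqrt(3.0)                 # z^(0) with c_j != 0
for k in range(1, 6):
    z = B @ z
    z /= np.linalg.norm(z)
    err = np.linalg.norm(z - np.sign(z @ v_j) * v_j)   # sign-corrected error
    print(k, err, ratio**k)                   # err decays like ratio**k
```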


Problem 3


Suppose $A$ is an upper triangular, nonsingular matrix. Show that both the Jacobi and Gauss-Seidel iterations converge in finitely many steps when used to solve $Ax = b$.

Solution 3


Derivation of Iterations


Let $A = D + L + U$, where $D$ is a diagonal matrix, $L$ is a lower triangular matrix with a zero diagonal, and $U$ is an upper triangular matrix also with a zero diagonal.


The Jacobi iteration can be found by substituting $A = D + L + U$ into $Ax = b$, grouping $Dx$, and solving for $x$, i.e.


$$ Dx = b - (L + U)x \quad \Longrightarrow \quad x^{(k+1)} = D^{-1} \left( b - (L + U) x^{(k)} \right). $$


Since $L = 0$ by hypothesis, the iteration is


$$ x^{(k+1)} = D^{-1} \left( b - U x^{(k)} \right). $$


Similarly, the Gauss-Seidel iteration can be found by substituting into $Ax = b$, grouping $(D + L)x$, and solving for $x$, i.e.


$$ (D + L)x = b - Ux \quad \Longrightarrow \quad x^{(k+1)} = (D + L)^{-1} \left( b - U x^{(k)} \right). $$


Since $L = 0$ by hypothesis, the iteration has the identical form as Jacobi:


$$ x^{(k+1)} = D^{-1} \left( b - U x^{(k)} \right). $$

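As a quick numerical sanity check of this derivation (the matrix and starting vector below are arbitrary illustrative choices), one step of the general Jacobi formula and one step of the general Gauss-Seidel formula coincide when $L = 0$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Upper triangular, nonsingular test matrix (diagonal shifted away from zero).
A = np.triu(rng.standard_normal((n, n))) + 4.0 * np.eye(n)
b = rng.standard_normal(n)

D = np.diag(np.diag(A))   # diagonal part
L = np.tril(A, -1)        # strictly lower part: identically zero here
U = np.triu(A, 1)         # strictly upper part

x = rng.standard_normal(n)                        # arbitrary current iterate
x_jacobi = np.linalg.solve(D, b - (L + U) @ x)    # D x^(k+1) = b - (L+U) x^(k)
x_gs = np.linalg.solve(D + L, b - U @ x)          # (D+L) x^(k+1) = b - U x^(k)
print(np.allclose(x_jacobi, x_gs))                # True, since L = 0
```
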
Convergence in Finite Number of Steps


Jacobi and Gauss-Seidel are iterative methods that split the matrix $A$ into $D$, $U$, and $L$: the diagonal, upper (everything above the diagonal), and lower (everything below the diagonal) triangular parts, respectively. Their iterations are


$$ x^{(k+1)} = (D + L)^{-1} \left( b - U x^{(k)} \right) \quad \text{(Gauss-Seidel)} $$

$$ x^{(k+1)} = D^{-1} \left( b - (L + U) x^{(k)} \right) \quad \text{(Jacobi)} $$


In our case $A$ is upper triangular, so $L$ is the zero matrix. As a result, the Gauss-Seidel and Jacobi methods take on the following identical form:


$$ x^{(k+1)} = D^{-1} \left( b - U x^{(k)} \right). $$


Additionally, the exact solution $x$ can be written


$$ x = D^{-1} \left( b - U x \right). $$


Subtracting $x$ from $x^{(k+1)}$ we get


$$ x^{(k+1)} - x = -D^{-1} U \left( x^{(k)} - x \right), \qquad \text{and hence} \qquad x^{(k)} - x = \left( -D^{-1} U \right)^{k} \left( x^{(0)} - x \right). $$


In our problem, $D^{-1}$ is diagonal and $U$ is upper triangular with zeros along the diagonal. Notice that the product $D^{-1} U$ will also be upper triangular with zeros along the diagonal.

Let $N = D^{-1} U$,


$$ N = \begin{pmatrix} 0 & n_{12} & n_{13} & \cdots & n_{1n} \\ & 0 & n_{23} & \cdots & n_{2n} \\ & & \ddots & \ddots & \vdots \\ & & & 0 & n_{n-1,n} \\ & & & & 0 \end{pmatrix}. $$


Also, let $\hat{N}$ be a related matrix with the same strictly upper triangular structure,


$$ \hat{N} = \begin{pmatrix} 0 & \hat{n}_{12} & \hat{n}_{13} & \cdots & \hat{n}_{1n} \\ & 0 & \hat{n}_{23} & \cdots & \hat{n}_{2n} \\ & & \ddots & \ddots & \vdots \\ & & & 0 & \hat{n}_{n-1,n} \\ & & & & 0 \end{pmatrix}, $$


where $n_{ij} = 0$ whenever $j \leq i$, $\hat{n}_{ij} = 0$ whenever $j \leq i$, and both matrices are $n \times n$.


Finally, the product $N \hat{N}$ (call it (1)) has entries


$$ \left( N \hat{N} \right)_{ij} = \sum_{k=1}^{n} n_{ik} \hat{n}_{kj}, $$


in which every term vanishes unless $i < k < j$, so $\left( N \hat{N} \right)_{ij} = 0$ whenever $j \leq i + 1$.

Here $N \hat{N}$ is almost identical in structure to $N$, except that its first superdiagonal entries are also zeros.

At this point the convergence in $n$ steps (the size of the starting matrix) should be apparent, since $x^{(k)} - x$ is just $(-N)^k \left( x^{(0)} - x \right)$ where $N = D^{-1} U$, and each time a power of $N$ is multiplied by $N$, the next superdiagonal is zeroed out: $N^k$ has zeros on its $0$th through $(k-1)$th superdiagonals (where $k = 0$ denotes the diagonal itself). After $n$ applications of $N$, the result will be the zero matrix of size $n \times n$.

In brief, $N^n$ is the zero matrix of size $n \times n$.


Therefore $x^{(n)} - x = (-N)^n \left( x^{(0)} - x \right) = 0$, i.e. the Jacobi and Gauss-Seidel methods used to solve $Ax = b$ converge in at most $n$ steps when $A$ is upper triangular.
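
To see the argument in action, here is a short numerical sketch (the size and entries are chosen arbitrarily for illustration): each power of $N = D^{-1} U$ loses one more superdiagonal, $N^n = 0$, and the common iteration lands on the exact solution after $n$ steps.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
# Nonsingular upper triangular test matrix.
A = np.triu(rng.standard_normal((n, n))) + 4.0 * np.eye(n)
b = rng.standard_normal(n)

D = np.diag(np.diag(A))
U = A - D                          # strictly upper triangular part (L = 0)
N = np.linalg.solve(D, U)          # N = D^{-1} U, strictly upper triangular

# N^k has one fewer nonzero superdiagonal each time; N^n is exactly zero.
for k in range(1, n + 1):
    print(k, np.count_nonzero(np.linalg.matrix_power(N, k)))

# The common Jacobi/Gauss-Seidel iteration reaches the exact solution in n steps.
x_exact = np.linalg.solve(A, b)
x = np.zeros(n)
for _ in range(n):
    x = np.linalg.solve(D, b - U @ x)   # x^(k+1) = D^{-1}(b - U x^(k))
print(np.allclose(x, x_exact))          # True
```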

Examples of (1)
