Linear Algebra/Mechanics of Matrix Multiplication/Solutions

Solutions

This exercise is recommended for all readers.
Problem 1

Predict the result of each multiplication by an elementary reduction matrix, and then check by multiplying it out.

  1.  
  2.  
  3.  
  4.  
  5.  
Answer
  1. The second matrix has its first row multiplied by   and its second row multiplied by  .
     
  2. The second matrix has its first row multiplied by   and its second row multiplied by  .
     
  3. The second matrix undergoes the pivot operation of replacing the second row with   times the first row added to the second.
     
  4. The first matrix undergoes the column operation of: the second column is replaced by   times the first column plus the second.
     
  5. The first matrix has its columns swapped.
     
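The five cases above can be checked numerically. The sketch below does not use the matrices of the exercise (which are not reproduced in this text); it applies made-up elementary matrices to an arbitrary $2\times 2$ matrix to show the general pattern: multiplying on the left acts on rows, multiplying on the right acts on columns.

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])

# Scaling rows: a diagonal elementary matrix on the left multiplies the rows.
D = np.array([[5., 0.],
              [0., 7.]])
print(D @ A)    # first row of A times 5, second row times 7

# Row combination: the identity with an extra 2 in the 2,1 spot, on the left,
# replaces row 2 with 2*(row 1) + row 2.
K = np.array([[1., 0.],
              [2., 1.]])
print(K @ A)    # second row becomes 2*(1, 2) + (3, 4) = (5, 8)

# The same kinds of matrices, multiplied on the right, act on columns instead.
P = np.array([[0., 1.],
              [1., 0.]])
print(A @ P)    # columns of A swapped
```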
This exercise is recommended for all readers.
Problem 2

The need to take linear combinations of rows and columns in tables of numbers arises often in practice. For instance, this is a map of part of Vermont and New York.

In part because of Lake Champlain, there are no roads directly connecting some pairs of towns. For instance, there is no way to go from Winooski to Grand Isle without going through Colchester. (Of course, many other roads and towns have been left off to simplify the graph. From top to bottom of this map is about forty miles.)  
  1. The incidence matrix of a map is the square matrix whose $i,j$ entry is the number of roads from city $i$ to city $j$. Produce the incidence matrix of this map (take the cities in alphabetical order).
  2. A matrix is symmetric if it equals its transpose. Show that an incidence matrix is symmetric. (These are all two-way streets. Vermont doesn't have many one-way streets.)
  3. What is the significance of the square of the incidence matrix? The cube?
Answer
  1. The incidence matrix is this (e.g., the first row shows that there is only one connection including Burlington, the road to Winooski).
     
  2. Because these are two-way roads, any road connecting city $i$ to city $j$ gives a connection between city $j$ and city $i$.
  3. The square of the incidence matrix tells how cities are connected by trips involving two roads.
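To make the last item precise: writing $A$ for the incidence matrix, the definition of matrix multiplication gives

$(A^2)_{i,j}=\sum_k a_{i,k}\,a_{k,j}$

and each summand $a_{i,k}a_{k,j}$ counts the ways to travel from city $i$ to city $k$ by one road and then from city $k$ to city $j$ by one road. So the $i,j$ entry of the square counts the two-road trips from $i$ to $j$; likewise the cube counts three-road trips.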
This exercise is recommended for all readers.
Problem 3

This table gives the number of hours of each type done by each worker, and the associated pay rates. Use matrices to compute the wages due.

  regular   overtime
Alan     40   12
Betty     35   6
Catherine     40   18
Donald     28   0
  wage
regular     $25.00
overtime     $45.00

(Remark. This illustrates, as did the prior problem, that in practice we often want to compute linear combinations of rows and columns in a context where we really aren't interested in any associated linear maps.)

Answer

The pay due each person appears in the matrix product of the two arrays.
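The product itself is not reproduced in this text, but with the two tables above it works out to this (hours matrix times wage column).

$\begin{pmatrix} 40&12\\ 35&6\\ 40&18\\ 28&0 \end{pmatrix}\begin{pmatrix} 25.00\\ 45.00 \end{pmatrix}=\begin{pmatrix} 40(25.00)+12(45.00)\\ 35(25.00)+6(45.00)\\ 40(25.00)+18(45.00)\\ 28(25.00)+0(45.00) \end{pmatrix}=\begin{pmatrix} 1540.00\\ 1145.00\\ 1810.00\\ 700.00 \end{pmatrix}$

That is, Alan is due \$1540, Betty \$1145, Catherine \$1810, and Donald \$700.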

Problem 4

Find the product of this matrix with its transpose.

 
Answer

The product is the identity matrix (recall that  ). An explanation is that the given matrix represents, with respect to the standard bases, a rotation in   of   radians while the transpose represents a rotation of   radians. The two cancel.
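The particular matrix of the exercise is not reproduced in this text, but the rotation argument can be illustrated in general: for a rotation through any angle $\theta$,

$\begin{pmatrix} \cos\theta&-\sin\theta\\ \sin\theta&\cos\theta \end{pmatrix}\begin{pmatrix} \cos\theta&\sin\theta\\ -\sin\theta&\cos\theta \end{pmatrix}=\begin{pmatrix} \cos^2\theta+\sin^2\theta&0\\ 0&\sin^2\theta+\cos^2\theta \end{pmatrix}=I$

since the transpose of the rotation-by-$\theta$ matrix is the rotation-by-$-\theta$ matrix.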

This exercise is recommended for all readers.
Problem 5

Prove that the diagonal matrices form a subspace of  . What is its dimension?

Answer

The set of diagonal matrices is nonempty as the zero matrix is diagonal. Clearly it is closed under scalar multiples and sums. Therefore it is a subspace. The dimension is  ; here is a basis.
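The basis displayed in the original is not reproduced here. Assuming the ambient space is $\mathcal{M}_{n\times n}$, one natural choice is the $n$ matrices each having a single one on the diagonal and zeroes elsewhere, $\langle E_{1,1},E_{2,2},\ldots,E_{n,n}\rangle$, where $E_{i,i}$ has its only nonzero entry, a one, in row $i$ and column $i$; under that assumption the dimension is $n$.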

 
Problem 6

Does the identity matrix represent the identity map if the bases are unequal?

Answer

No. In  , with respect to the unequal bases   and  , the identity transformation is represented by this matrix.
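The space and bases used in the text are not shown in this extraction; as a separate small illustration with made-up bases, take $\mathbb{R}^2$ with $B=\langle \vec e_1,\vec e_2\rangle$ and $D=\langle \vec e_1,\vec e_1+\vec e_2\rangle$. Then

${\rm Rep}_{B,D}({\rm id})=\begin{pmatrix} 1&-1\\ 0&1 \end{pmatrix}$

because $\vec e_1=1\cdot\vec e_1+0\cdot(\vec e_1+\vec e_2)$ while $\vec e_2=-1\cdot\vec e_1+1\cdot(\vec e_1+\vec e_2)$, and this representation is not the identity matrix.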

 
Problem 7

Show that every multiple of the identity commutes with every square matrix. Are there other matrices that commute with all square matrices?

Answer

For any scalar $r$ and square matrix $H$ we have $(rI)H=r(IH)=rH=r(HI)=(rH)I=H(rI)$.

There are no other such matrices; here is an argument for   matrices that is easily extended to  . If a matrix commutes with all others then it commutes with this unit matrix.

 

From this we first conclude that the upper left entry   must equal its lower right entry  . We also conclude that the lower left entry   is zero. The argument for the upper right entry   is similar.
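Here is a sketch of that computation. The unit matrix used in the text is not reproduced; this sketch uses the one with its single one in the $1,2$ position.

$\begin{pmatrix} a&b\\ c&d \end{pmatrix}\begin{pmatrix} 0&1\\ 0&0 \end{pmatrix}=\begin{pmatrix} 0&a\\ 0&c \end{pmatrix} \qquad \begin{pmatrix} 0&1\\ 0&0 \end{pmatrix}\begin{pmatrix} a&b\\ c&d \end{pmatrix}=\begin{pmatrix} c&d\\ 0&0 \end{pmatrix}$

Setting the two products equal forces $c=0$ and $a=d$; commuting with the unit matrix whose one is in the $2,1$ position similarly forces $b=0$, so the matrix is a scalar multiple of the identity.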

Problem 8

Prove or disprove: nonsingular matrices commute.

Answer

It is false; these two don't commute.
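The pair displayed in the text is not reproduced here, but many pairs work; for instance, these two matrices are nonsingular and do not commute.

$\begin{pmatrix} 1&1\\ 0&1 \end{pmatrix}\begin{pmatrix} 1&0\\ 1&1 \end{pmatrix}=\begin{pmatrix} 2&1\\ 1&1 \end{pmatrix} \qquad \begin{pmatrix} 1&0\\ 1&1 \end{pmatrix}\begin{pmatrix} 1&1\\ 0&1 \end{pmatrix}=\begin{pmatrix} 1&1\\ 1&2 \end{pmatrix}$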

 
This exercise is recommended for all readers.
Problem 9

Show that the product of a permutation matrix and its transpose is an identity matrix.

Answer

A permutation matrix has a single one in each row and column, and all its other entries are zeroes. Fix such a matrix. Suppose that the $i$-th row has its one in the $j$-th column. Then no other row has its one in the $j$-th column; every other row has a zero in the $j$-th column. Thus the dot product of the $i$-th row and any other row is zero.

The $i$-th row of the product is made up of the dot products of the $i$-th row of the matrix and the columns of the transpose. By the last paragraph, all such dot products are zero except for the $i$-th one, which is one.
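A small numerical instance of the argument, using a made-up permutation rather than one from the text:

$\begin{pmatrix} 0&1&0\\ 0&0&1\\ 1&0&0 \end{pmatrix}\begin{pmatrix} 0&0&1\\ 1&0&0\\ 0&1&0 \end{pmatrix}=\begin{pmatrix} 1&0&0\\ 0&1&0\\ 0&0&1 \end{pmatrix}$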

Problem 10

Show that if the first and second rows of $G$ are equal then so are the first and second rows of $GH$. Generalize.

Answer

The generalization is to go from the first and second rows to the $i$-th and $j$-th rows. Row $i$ of $GH$ is made up of the dot products of row $i$ of $G$ and the columns of $H$. Thus if rows $i$ and $j$ of $G$ are equal then so are rows $i$ and $j$ of $GH$.

Problem 11

Describe the product of two diagonal matrices.

Answer

If the product of two diagonal matrices is defined— if both are $n\times n$— then the product is also diagonal, with the products of the corresponding diagonal entries on its diagonal: where $G$ and $H$ are equal-sized diagonal matrices, $GH$ is all zeros except that each $i,i$ entry is $g_{i,i}h_{i,i}$.
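A small instance, with made-up entries:

$\begin{pmatrix} 2&0\\ 0&3 \end{pmatrix}\begin{pmatrix} 5&0\\ 0&7 \end{pmatrix}=\begin{pmatrix} 10&0\\ 0&21 \end{pmatrix}$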

Problem 12

Write

 

as the product of two elementary reduction matrices.

Answer

One way to produce this matrix from the identity is to use the column operations of first multiplying the second column by three, and then adding the negative of the resulting second column to the first.

 

Column operations, in contrast with row operations, are written from left to right, so doing the above two operations is expressed with this matrix product.

 

Remark. Alternatively, we could get the required matrix with row operations. Starting with the identity, first adding the negative of the first row to the second, and then multiplying the second row by three will work. Because successive row operations are written as matrix products from right to left, doing these two row operations is expressed with the same matrix product.

This exercise is recommended for all readers.
Problem 13

Show that if $G$ has a row of zeros then $GH$ (if defined) has a row of zeros. Does that work for columns?

Answer

The $i$-th row of $GH$ is made up of the dot products of the $i$-th row of $G$ with the columns of $H$. The dot product of a zero row with a column is zero.

It works for columns if stated correctly: if $H$ has a column of zeros then $GH$ (if defined) has a column of zeros. The proof is easy.

Problem 14

Show that the set of unit matrices forms a basis for  .

Answer

Perhaps the easiest way is to show that each   matrix is a linear combination of unit matrices in one and only one way:

 

has the unique solution  ,  , etc.
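For instance, in the $2\times 2$ case (which is only an illustration; the exercise concerns the general space of matrices) the equation reads

$\begin{pmatrix} a&b\\ c&d \end{pmatrix}=c_1\begin{pmatrix} 1&0\\ 0&0 \end{pmatrix}+c_2\begin{pmatrix} 0&1\\ 0&0 \end{pmatrix}+c_3\begin{pmatrix} 0&0\\ 1&0 \end{pmatrix}+c_4\begin{pmatrix} 0&0\\ 0&1 \end{pmatrix}$

and comparing entries forces $c_1=a$, $c_2=b$, $c_3=c$, $c_4=d$, with no other possibility.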

Problem 15

Find the formula for the  -th power of this matrix.

 
Answer

Call that matrix  . We have

 

In general,

 

where $f_k$ is the $k$-th Fibonacci number, defined by $f_k=f_{k-1}+f_{k-2}$ with $f_0=0$ and $f_1=1$, which is verified by induction, based on this equation.

 
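The matrix of the exercise is not reproduced in this text. Assuming it is the usual Fibonacci matrix $\left(\begin{smallmatrix}1&1\\1&0\end{smallmatrix}\right)$ (the answer's reference to Fibonacci numbers suggests this, but it is an assumption), the claimed pattern can be checked numerically.

```python
import numpy as np

# Assumed matrix; the exercise's actual matrix is not shown in this text.
T = np.array([[1, 1],
              [1, 0]])

def fib(k):
    """k-th Fibonacci number with fib(0) = 0 and fib(1) = 1."""
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

# For n >= 1 the power T**n should be [[fib(n+1), fib(n)], [fib(n), fib(n-1)]].
for n in range(1, 10):
    Tn = np.linalg.matrix_power(T, n)
    assert Tn[0, 0] == fib(n + 1) and Tn[0, 1] == fib(n) and Tn[1, 1] == fib(n - 1)
    print(n, Tn.tolist())
```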
This exercise is recommended for all readers.
Problem 16

The trace of a square matrix is the sum of the entries on its diagonal (its significance appears in Chapter Five). Show that $\operatorname{trace}(GH)=\operatorname{trace}(HG)$.

Answer

Chapter Five gives a less computational reason— the trace of a matrix is the second coefficient in its characteristic polynomial— but for now we can use indices. We have

$\operatorname{trace}(GH)=\bigl(g_{1,1}h_{1,1}+g_{1,2}h_{2,1}+\cdots+g_{1,n}h_{n,1}\bigr)+\bigl(g_{2,1}h_{1,2}+g_{2,2}h_{2,2}+\cdots+g_{2,n}h_{n,2}\bigr)+\cdots+\bigl(g_{n,1}h_{1,n}+g_{n,2}h_{2,n}+\cdots+g_{n,n}h_{n,n}\bigr)$

while

$\operatorname{trace}(HG)=\bigl(h_{1,1}g_{1,1}+h_{1,2}g_{2,1}+\cdots+h_{1,n}g_{n,1}\bigr)+\bigl(h_{2,1}g_{1,2}+h_{2,2}g_{2,2}+\cdots+h_{2,n}g_{n,2}\bigr)+\cdots+\bigl(h_{n,1}g_{1,n}+h_{n,2}g_{2,n}+\cdots+h_{n,n}g_{n,n}\bigr)$

and the two are equal since both are $\sum_{i}\sum_{j}g_{i,j}h_{j,i}$.
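A quick numerical sanity check of the identity (illustrative only; the matrices are randomly generated):

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.integers(-5, 6, size=(4, 4))
H = rng.integers(-5, 6, size=(4, 4))

# trace(GH) and trace(HG) agree even though GH and HG generally differ.
assert np.trace(G @ H) == np.trace(H @ G)
print(np.trace(G @ H), np.trace(H @ G))
```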

This exercise is recommended for all readers.
Problem 17

A square matrix is upper triangular if its only nonzero entries lie above, or on, the diagonal. Show that the product of two upper triangular matrices is upper triangular. Does this hold for lower triangular also?

Answer

A matrix is upper triangular if and only if its $i,j$ entry is zero whenever $i>j$. Thus, if $G,H$ are upper triangular then $g_{i,j}$ and $h_{i,j}$ are zero when $i>j$. An entry in the product $p_{i,j}=g_{i,1}h_{1,j}+g_{i,2}h_{2,j}+\cdots$ is zero unless at least some of the terms are nonzero, that is, unless for at least some of the summands $g_{i,r}h_{r,j}$ both $i\leq r$ and $r\leq j$. Of course, if $i>j$ this cannot happen and so the product of two upper triangular matrices is upper triangular. (A similar argument works for lower triangular matrices.)

Problem 18

A square matrix is a Markov matrix if each entry is between zero and one and the sum along each row is one. Prove that a product of Markov matrices is Markov.

Answer

The sum along the $i$-th row of the product is this.

$\sum_j\bigl(g_{i,1}h_{1,j}+g_{i,2}h_{2,j}+\cdots+g_{i,n}h_{n,j}\bigr)=g_{i,1}\sum_j h_{1,j}+g_{i,2}\sum_j h_{2,j}+\cdots+g_{i,n}\sum_j h_{n,j}=g_{i,1}\cdot 1+g_{i,2}\cdot 1+\cdots+g_{i,n}\cdot 1=1$

Each entry of the product is also between zero and one: it is a sum of products of numbers between zero and one, so it is at least zero, and it is at most the row sum, which is one.
This exercise is recommended for all readers.
Problem 19

Give an example of two matrices of the same rank with squares of differing rank.

Answer

Matrices representing (say, with respect to  ) the maps that send

 

and

 

will do.
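The particular maps and matrices displayed in the text are not reproduced here; one standard pair that works is

$N=\begin{pmatrix} 0&1\\ 0&0 \end{pmatrix},\quad M=\begin{pmatrix} 1&0\\ 0&0 \end{pmatrix}$

both of rank one, while $N^2$ is the zero matrix (rank zero) and $M^2=M$ (rank one).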

Problem 20

Combine the two generalizations of the identity matrix, the one allowing entries to be other than ones, and the one allowing the single one in each row and column to be off the diagonal. What is the action of this type of matrix?

Answer

The combination is to have all entries of the matrix be zero except for one (possibly) nonzero entry in each row and column. Such a matrix can be written as the product of a permutation matrix and a diagonal matrix, e.g.,

 

and its action is thus to rescale the rows and permute them.
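For instance, with made-up nonzero entries $2$ and $3$,

$\begin{pmatrix} 0&3\\ 2&0 \end{pmatrix}=\begin{pmatrix} 0&1\\ 1&0 \end{pmatrix}\begin{pmatrix} 2&0\\ 0&3 \end{pmatrix}$

and acting on the left of another matrix it rescales that matrix's rows by $2$ and $3$ and then swaps them.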

Problem 21

On a computer multiplications are more costly than additions, so people are interested in reducing the number of multiplications used to compute a matrix product.

  1. How many real number multiplications are needed in the formula we gave for the product of a   matrix and a   matrix?
  2. Matrix multiplication is associative, so all associations yield the same result. The cost in number of multiplications, however, varies. Find the association requiring the fewest real number multiplications to compute the matrix product of a   matrix, a   matrix, a   matrix, and a   matrix.
  3. (Very hard.) Find a way to multiply two   matrices using only seven multiplications instead of the eight suggested by the naive approach.
Answer
  1. Each entry   takes   multiplications and there are   entries. Thus there are   multiplications.
  2. Let   be  , let   be  , let   be  , let   be  . Then, using the formula from the prior part,

     

    shows which is cheapest.

  3. This is reported by Knuth as an improvement by S. Winograd of a formula due to V. Strassen: where  ,
     
     
    takes seven multiplications and fifteen additions (save the intermediate results).
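The seven-multiplication scheme itself is not reproduced above. As an illustration, here is the classic Strassen version of the idea (the Winograd refinement mentioned in the text uses the same seven multiplications but fewer additions); the function name and layout are made up for this sketch.

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices (as nested lists) using 7 scalar multiplications.

    This is Strassen's classic scheme, shown for illustration; the Winograd
    variant referred to in the text rearranges it to use fewer additions.
    """
    (a, b), (c, d) = A
    (e, f), (g, h) = B

    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)

    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

# Quick check against the naive eight-multiplication product.
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
naive = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert strassen_2x2(A, B) == naive
print(strassen_2x2(A, B))
```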
? Problem 22

If $A$ and $B$ are square matrices of the same size such that $ABAB=0$, does it follow that $BABA=0$? (Putnam Exam 1990)

Answer

This is how the answer was given in the cited source.

No, it does not. Let   and   represent, with respect to the standard bases, these transformations of  .

 

Observe that

 
Problem 23

Demonstrate these four assertions to get an alternate proof that column rank equals row rank. (Liebeck 1966)

  1.   iff  .
  2.   iff  .
  3.  .
  4.  .
Answer

This is how the answer was given in the cited source.

  1. Obvious.
  2. If   then   where  . Hence   by (a). The converse is obvious.
  3. By (b),  , ... ,  are linearly independent iff  , ... ,   are linearly independent.
  4. We have  . Thus also   and so we have  .
Problem 24

Prove (where   is an   matrix and so defines a transformation of any  -dimensional space   with respect to   where   is a basis) that  . Conclude

  1.   iff  ;
  2.   iff  ;
  3.   iff   and   ;
  4.   iff   ;
  5. (Requires the Direct Sum subsection, which is optional.)   iff  .
(Ackerson 1955)
Answer

This is how the answer was given in the cited source.

Let   be a basis for   (  might be  ). Let   be such that  . Note   is linearly independent, and extend to a basis for  :   where  .

Now take  . Write

 

and so

 

But  , so   and we now know

 

spans  .

To see   is linearly independent, write

 

and, since   we get a contradiction unless it is   (clearly it is in  , but   is a basis for  ).

Hence  .

References

  • Ackerson, R. H. (1955), "A Note on Vector Spaces", American Mathematical Monthly, American Mathematical Society, 62 (10): 721.
  • Liebeck, Hans (1966), "A Proof of the Equality of Column Rank and Row Rank of a Matrix", American Mathematical Monthly, American Mathematical Society, 73 (10): 1114.
  • William Lowell Putnam Mathematical Competition, Problem A-5, 1990.