Linear Algebra/Row Equivalence/Solutions

Solutions

This exercise is recommended for all readers.
Problem 1

Decide if the matrices are row equivalent.

  1.  
  2.  
  3.  
  4.  
  5.  
Answer

Bring each to reduced echelon form and compare.

  1. The first gives
     
    while the second gives
     
    The two reduced echelon form matrices are not identical, and so the original matrices are not row equivalent.
  2. The first is this.
     
    The second is this.
     
    These two are row equivalent.
  3. These two are not row equivalent because they have different sizes.
  4. The first,
     
    and the second.
     
    These are not row equivalent.
  5. Here the first is
     
    while this is the second.
     
    These are not row equivalent.
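The comparisons above can be sketched in code. Since the exercise's matrices did not survive transcription, the matrices below are stand-ins, and the `rref` and `row_equivalent` helpers are illustrative rather than from the text: two same-sized matrices are row equivalent exactly when they share a reduced echelon form.

```python
from fractions import Fraction

def rref(rows):
    """Bring a matrix (a list of rows) to reduced echelon form."""
    m = [[Fraction(x) for x in row] for row in rows]
    lead = 0
    for r in range(len(m)):
        if lead >= len(m[0]):
            break
        i = r
        while m[i][lead] == 0:
            i += 1
            if i == len(m):          # no pivot in this column; move right
                i = r
                lead += 1
                if lead == len(m[0]):
                    return m
        m[i], m[r] = m[r], m[i]      # swap the pivot row into place
        m[r] = [x / m[r][lead] for x in m[r]]    # rescale to a leading 1
        for j in range(len(m)):
            if j != r:               # clear the rest of the pivot column
                m[j] = [a - m[j][lead] * b for a, b in zip(m[j], m[r])]
        lead += 1
    return m

def row_equivalent(A, B):
    """Same size and same reduced echelon form <=> row equivalent."""
    return (len(A), len(A[0])) == (len(B), len(B[0])) and rref(A) == rref(B)
```

For instance, `row_equivalent([[1, 2], [3, 4]], [[2, 0], [0, 3]])` holds because both stand-ins reduce to the identity, while `[[1, 2], [2, 4]]` reduces to a matrix with a zero row and so lies in a different class.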
Problem 2

Describe the matrices in each of the classes represented in Example 2.10.

Answer

First, the only matrix row equivalent to the matrix of all  's is itself (since row operations have no effect on a zero matrix).

Second, the matrices that reduce to

 

have the form

 

(where  , and   and   are not both zero).

Next, the matrices that reduce to

 

have the form

 

(where  , and not both are zero).

Finally, the matrices that reduce to

 

are the nonsingular matrices. That's because a linear system for which this is the matrix of coefficients will have a unique solution, and that is the definition of nonsingular. (Another way to say the same thing is to say that they fall into none of the above classes.)

Problem 3

Describe all matrices in the row equivalence class of these.

  1.  
  2.  
  3.  
Answer
  1. They have the form
     
    where  .
  2. They have this form (for  ).
     
  3. They have the form
     
    (for  ) where  . (This is the formula that determines when a   matrix is nonsingular.)
Problem 4

How many row equivalence classes are there?

Answer

Infinitely many. For instance, in

 

each   gives a different class.
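The family displayed above did not render, but any one-parameter family of reduced echelon form matrices makes the point; the   matrices below are an assumed illustration. A matrix already in reduced echelon form is the unique such matrix in its class, so distinct members of the family represent distinct classes.

```python
# Each [1, k] is already in reduced echelon form (a single row with
# leading entry 1), so it is the canonical representative of its class.
# Distinct values of k therefore give distinct classes.
representatives = [[1, k] for k in range(100)]
distinct = {tuple(m) for m in representatives}
assert len(distinct) == 100   # one class per value of k
```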

Problem 5

Can row equivalence classes contain different-sized matrices?

Answer

No. Row operations do not change the size of a matrix.

Problem 6

How big are the row equivalence classes?

  1. Show that the class of any zero matrix is finite.
  2. Do any other classes contain only finitely many members?
Answer
  1. A row operation on a zero matrix has no effect. Thus each zero matrix is alone in its row equivalence class.
  2. No. Any nonzero entry can be rescaled.
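A minimal sketch of the second item, with an assumed single-row example: rescaling the row by any nonzero scalar is a row operation, so the class of any nonzero matrix already contains infinitely many members.

```python
from fractions import Fraction

A = [1, 2]   # a single nonzero row; rescaling it is a row operation
members = [[Fraction(t) * x for x in A] for t in range(1, 1001)]
# Every rescaling by a nonzero t stays in A's class, and they are all distinct.
assert len({tuple(m) for m in members}) == 1000
```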
This exercise is recommended for all readers.
Problem 7

Give two reduced echelon form matrices that have their leading entries in the same columns, but that are not row equivalent.

Answer

Here are two.

 
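The pair shown above did not survive transcription; here is an assumed pair with the stated property, checked by a small `is_rref` predicate that is not from the text. Both matrices have their leading entries in the first and third columns, both are in reduced echelon form, and they differ, so they lie in different classes.

```python
def is_rref(m):
    """Check the reduced-echelon-form conditions directly."""
    last_lead = -1
    for row in m:
        leads = [j for j, x in enumerate(row) if x != 0]
        if not leads:
            last_lead = len(row)   # zero rows must stay at the bottom
            continue
        j = leads[0]
        if row[j] != 1 or j <= last_lead:       # leading 1s move rightward
            return False
        if any(other[j] != 0 for other in m if other is not row):
            return False                        # pivot columns are otherwise zero
        last_lead = j
    return True

A = [[1, 2, 0], [0, 0, 1]]
B = [[1, 3, 0], [0, 0, 1]]
assert is_rref(A) and is_rref(B) and A != B
```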
This exercise is recommended for all readers.
Problem 8

Show that any two   nonsingular matrices are row equivalent. Are any two singular matrices row equivalent?

Answer

Any two   nonsingular matrices have the same reduced echelon form, namely the matrix with all  's except for  's down the diagonal.

 

Two same-sized singular matrices need not be row equivalent. For example, these two   singular matrices are not row equivalent.

 
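With the displayed matrices missing from the transcription, both claims can be spot-checked on stand-ins. The `rref` helper below is illustrative: every   matrix with nonzero determinant reduces to the identity, while the two singular matrices at the end are each already in reduced echelon form and differ.

```python
from fractions import Fraction

def rref(rows):
    """Bring a matrix (a list of rows) to reduced echelon form."""
    m = [[Fraction(x) for x in row] for row in rows]
    lead = 0
    for r in range(len(m)):
        if lead >= len(m[0]):
            break
        i = r
        while m[i][lead] == 0:
            i += 1
            if i == len(m):          # no pivot in this column; move right
                i = r
                lead += 1
                if lead == len(m[0]):
                    return m
        m[i], m[r] = m[r], m[i]      # swap the pivot row into place
        m[r] = [x / m[r][lead] for x in m[r]]    # rescale to a leading 1
        for j in range(len(m)):
            if j != r:               # clear the rest of the pivot column
                m[j] = [a - m[j][lead] * b for a, b in zip(m[j], m[r])]
        lead += 1
    return m

# Each stand-in has nonzero determinant, so each reduces to the identity.
for A in ([[2, 1], [7, 4]], [[1, 3], [2, 7]], [[5, 0], [0, 3]]):
    assert rref(A) == [[1, 0], [0, 1]]

# Two same-sized singular matrices need not share a class.
assert rref([[1, 0], [0, 0]]) != rref([[0, 1], [0, 0]])
```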
This exercise is recommended for all readers.
Problem 9

Describe all of the row equivalence classes containing these.

  1.   matrices
  2.   matrices
  3.   matrices
  4.   matrices
Answer

Since there is one and only one reduced echelon form matrix in each class, we can just list the possible reduced echelon form matrices.

For that list, see the answer for Problem 1.5.

Problem 10
  1. Show that a vector   is a linear combination of members of the set   if and only if there is a linear relationship   where   is not zero. (Hint. Watch out for the   case.)
  2. Use that to simplify the proof of Lemma 2.5.
Answer
  1. If there is a linear relationship where   is not zero then we can subtract   from both sides and divide by   to get   as a linear combination of the others. (Remark: if there are no other vectors in the set— if the relationship is, say,  — then the statement is still true because the zero vector is by definition the sum of the empty set of vectors.) Conversely, if   is a combination of the others   then subtracting   from both sides gives a relationship where at least one of the coefficients is nonzero; namely, the   in front of  .
  2. The first row is not a linear combination of the others for the reason given in the proof: in the equation of components from the column containing the leading entry of the first row, the only nonzero entry is the leading entry from the first row, so its coefficient must be zero. Thus, from the prior part of this exercise, the first row is in no linear relationship with the other rows. Thus, when considering whether the second row can be in a linear relationship with the other rows, we can leave the first row out. But now the argument just applied to the first row will apply to the second row. (That is, we are arguing here by induction.)
This exercise is recommended for all readers.
Problem 11

Finish the proof of Lemma 2.5.

  1. First illustrate the inductive step by showing that  .
  2. Do the full inductive step: where  , assume that   for   and deduce that   also.
  3. Find the contradiction.
Answer
  1. In the equation
     
    we already know that  . Let   be the column number of the leading entry of the second row. Consider the prior equation on entries in that column.
     
    Because   is the column of the leading entry in the second row,   for  . Thus the equation reduces to
     
    and since   is not   we have that  .
  2. In the equation
     
    we already know that  . Let   be the column number of the leading entry of row  . Consider the above equation on entries in that column.
     
    Because   is the column of the leading entry in the row  , we have that   for  . Thus the equation reduces to
     
    and since   is not   we have that  .
  3. From the prior item in this exercise we know that in the equation
     
    we already know that  . Let   be the column number of the leading entry of row  . Rewrite the above equation on entries in that column.
     
    Because   is the column of the leading entry in the row  , we have that   for  . That makes the right side of the equation sum to  , but the left side is not   since it is the leading entry of the row. That's the contradiction.
Problem 12

Finish the induction argument in Lemma 2.6.

  1. State the inductive hypothesis. Also state what must be shown to follow from that hypothesis.
  2. Check that the inductive hypothesis implies that in the relationship   the coefficients   are each zero.
  3. Finish the inductive step by arguing, as in the base case, that   and   are impossible.
Answer
  1. The inductive step is to show that if the statement holds on rows   through   then it also holds on row  . That is, we assume that  , and  , ..., and  , and we will show that   also holds (for   in  ).
  2. Corollary 2.3 gives the relationship   between rows. Inside of those row vectors, consider the relationship between the entries in the column  . Because by the induction hypothesis this is a row greater than the first  , the row   has a zero in entry   (the matrix   is in echelon form). But the row   has a nonzero entry in column  ; by definition of   it is the leading entry in the first row of  . Thus, in that column, the above relationship among rows resolves to this equation among numbers:  , with  . Therefore  . With  , a similar argument shows that  . With those two, another turn gives that  . That is, inside of the larger induction argument used to prove the entire lemma, here is a subargument by induction that shows   for all   in  . (We won't write out the details since it is just like the induction done in Problem 11.)
  3. Note that the prior item of this exercise shows that the relationship between rows   reduces to  . Consider the column   entries in this equation. By definition of   as the column number of the leading entry of  , the entries in this column of the other rows   are zeros. Now if   then the equation of entries from column   would be  , which is impossible as   isn't zero as it leads its row. A symmetric argument shows that   also is impossible.
Problem 13

Why, in the proof of Theorem 2.7, do we bother to restrict to the nonzero rows? Why not just stick to the relationship that we began with,  , with   instead of  , and argue using it that the only nonzero coefficient is  , which is  ?

Answer

The zero rows could have nonzero coefficients, and so the statement would not be true.

This exercise is recommended for all readers.
Problem 14

Three truck drivers went into a roadside cafe. One truck driver purchased four sandwiches, a cup of coffee, and ten doughnuts for $ . Another driver purchased three sandwiches, a cup of coffee, and seven doughnuts for $ . What did the third truck driver pay for a sandwich, a cup of coffee, and a doughnut? (Trono 1991)

Answer

We know that   and that  , and we'd like to know what   is. Fortunately,   is a linear combination of   and  . Calling the unknown price  , we have this reduction.

 

The price paid is $ .
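The arithmetic can be checked directly. Writing each order as a (sandwiches, coffees, doughnuts) triple, the third driver's order (1, 1, 1) is the combination -2*(4, 1, 10) + 3*(3, 1, 7), so the same combination of the two bills gives the third price. The dollar amounts in the problem did not survive transcription, so `p1` and `p2` below are stand-in prices, not the exercise's figures.

```python
from fractions import Fraction

o1, o2 = (4, 1, 10), (3, 1, 7)   # the first two drivers' orders
a, b = -2, 3                     # solves a*o1 + b*o2 == (1, 1, 1)
assert tuple(a * x + b * y for x, y in zip(o1, o2)) == (1, 1, 1)

# Stand-in totals (the exercise's actual dollar amounts are not shown here).
p1, p2 = Fraction("8.45"), Fraction("6.30")
third_price = a * p1 + b * p2    # the same combination of the two bills
```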

Problem 15

The fact that Gaussian reduction disallows multiplication of a row by zero is needed for the proof of uniqueness of reduced echelon form, or else every matrix would be row equivalent to a matrix of all zeros. Where is it used?

Answer

If multiplication of a row by zero were allowed then Lemma 2.6 would not hold. That is, where

 

all the rows of the second matrix can be expressed as linear combinations of the rows of the first, but the converse does not hold. The second row of the first matrix is not a linear combination of the rows of the second matrix.
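A minimal stand-in for the missing display: zeroing the second row of `A` below produces `B`, and every linear combination of `B`'s rows is a multiple of (1, 2), so `A`'s second row (3, 4) cannot be recovered.

```python
A = [[1, 2], [3, 4]]
B = [[1, 2], [0, 0]]   # A with its second row multiplied by zero
# A combination of B's rows is c*(1, 2) + d*(0, 0) = (c, 2c).
c = 3                  # forced by the first component: 3 = c*1
assert (c, 2 * c) != (3, 4)   # but the second component would need 4 = 2c
```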

This exercise is recommended for all readers.
Problem 16

The Linear Combination Lemma says which equations can be obtained by Gaussian reduction from a given linear system.

  1. Produce an equation not implied by this system.
     
  2. Can any equation be derived from an inconsistent system?
Answer
  1. An easy answer is this:
     
    For a less wise-guy-ish answer, solve the system:
     
    gives   and  . Now any equation not satisfied by   will do, e.g.,  .
  2. Every equation can be derived from an inconsistent system. For instance, here is how to derive " " from " ". First,
     
    (validity of the   case is separate but clear). Similarly,  . Ditto for  . But now,   gives  .
Problem 17

Extend the definition of row equivalence to linear systems. Under your definition, do equivalent systems have the same solution set? (Hoffman & Kunze 1971)

Answer

Define linear systems to be equivalent if their augmented matrices are row equivalent. The proof that equivalent systems have the same solution set is easy.

This exercise is recommended for all readers.
Problem 18

In this matrix

 

the first and second columns add to the third.

  1. Show that this remains true under any row operation.
  2. Make a conjecture.
  3. Prove that it holds.
Answer
  1. The three possible row swaps are easy, as are the three possible rescalings. One of the six possible pivots is  :
     
    and again the first and second columns add to the third. The other five pivots are similar.
  2. The obvious conjecture is that row operations do not change linear relationships among columns.
  3. A case-by-case proof follows the sketch given in the first item.
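The conjecture can be spot-checked in code. The matrix below is an assumed illustration (the book's own example did not survive transcription) in which the first two columns sum to the third; applying one row operation of each kind leaves that relation intact.

```python
def col(m, j):
    """Extract column j of a matrix given as a list of rows."""
    return [row[j] for row in m]

A = [[1, 2, 3], [4, 1, 5], [0, 2, 2]]   # first column + second column = third
assert all(a + b == c for a, b, c in zip(col(A, 0), col(A, 1), col(A, 2)))

# Apply one row operation of each kind and re-check the relation.
swap = [A[1], A[0], A[2]]                              # swap rows 1 and 2
scale = [[3 * x for x in A[0]], A[1], A[2]]            # rescale row 1 by 3
pivot = [A[0], [y - 4 * x for x, y in zip(A[0], A[1])], A[2]]  # row2 - 4*row1
for B in (swap, scale, pivot):
    assert all(a + b == c for a, b, c in zip(col(B, 0), col(B, 1), col(B, 2)))
```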

References

  • Hoffman, Kenneth; Kunze, Ray (1971), Linear Algebra (Second ed.), Prentice Hall 
  • Trono, Tony (compiler) (1991), University of Vermont Mathematics Department High School Prize Examinations 1958-1991, mimeographed printing