Linear Algebra/Row Equivalence
We will close this section and this chapter by proving that every matrix is row equivalent to one and only one reduced echelon form matrix. The ideas that appear here will reappear, and be further developed, in the next chapter.
The underlying theme here is that one way to understand a mathematical situation is by being able to classify the cases that can happen. We have met this theme several times already. We have classified solution sets of linear systems into the no-elements, one-element, and infinitely-many-elements cases. We have also classified linear systems with the same number of equations as unknowns into the nonsingular and singular cases. We adopted these classifications because they give us a way to understand the situations that we were investigating. Here, where we are investigating row equivalence, we know that the set of all matrices breaks into the row equivalence classes. When we finish the proof here, we will have a way to understand each of those classes: its matrices can be thought of as derived by row operations from the unique reduced echelon form matrix in that class.
To understand how row operations act to transform one matrix into another, we consider the effect that they have on the parts of a matrix. The crucial observation is that row operations combine the rows linearly.
- Definition 2.1
A linear combination of $x_{1},\ldots,x_{m}$ is an expression of the form
$$c_{1}x_{1}+c_{2}x_{2}+\cdots+c_{m}x_{m}$$
where the $c$'s are scalars.
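For instance, with three objects $x_{1}$, $x_{2}$, and $x_{3}$, one linear combination is
$$3x_{1}-2x_{2}+\frac{1}{2}x_{3}$$
here $c_{1}=3$, $c_{2}=-2$, and $c_{3}=1/2$.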
(We have already used the phrase "linear combination" in this book. The meaning is unchanged, but the next result's statement makes a more formal definition in order.)
- Lemma 2.2 (Linear Combination Lemma)
A linear combination of linear combinations is a linear combination.
- Proof
Given the linear combinations $c_{1,1}x_{1}+\cdots+c_{1,n}x_{n}$ through $c_{m,1}x_{1}+\cdots+c_{m,n}x_{n}$, consider a combination of those
$$d_{1}(c_{1,1}x_{1}+\cdots+c_{1,n}x_{n})+\cdots+d_{m}(c_{m,1}x_{1}+\cdots+c_{m,n}x_{n})$$
where the $d$'s are scalars along with the $c$'s. Distributing those $d$'s and regrouping gives
$$(d_{1}c_{1,1}+\cdots+d_{m}c_{m,1})x_{1}+\cdots+(d_{1}c_{1,n}+\cdots+d_{m}c_{m,n})x_{n}$$
which is a linear combination of the $x$'s.
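For a small instance of the lemma, suppose that $y_{1}=2x_{1}+x_{2}$ and $y_{2}=x_{1}-x_{2}$. Then a combination of those two combinations,
$$3y_{1}+4y_{2}=3(2x_{1}+x_{2})+4(x_{1}-x_{2})=10x_{1}-x_{2}$$
is itself a linear combination of $x_{1}$ and $x_{2}$, with coefficients given by the distribute-and-regroup computation above.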
In this subsection we will use the convention that, where a matrix is named with an upper case roman letter, the matching lower-case greek letter names the rows. Thus the rows of a matrix $A$ are $\alpha_{1},\alpha_{2},\ldots,\alpha_{m}$, and the rows of $B$ are $\beta_{1},\ldots,\beta_{m}$.
- Corollary 2.3
Where one matrix reduces to another, each row of the second is a linear combination of the rows of the first.
The proof below uses induction on the number of row operations used to reduce one matrix to the other. Before we proceed, here is an outline of the argument (readers unfamiliar with induction may want to compare this argument with the one used in the "General = Particular + Homogeneous" proof).[1] First, for the base step of the argument, we will verify that the proposition is true when reduction can be done in zero row operations. Second, for the inductive step, we will argue that if being able to reduce the first matrix to the second in $t$ or fewer operations implies that each row of the second is a linear combination of the rows of the first, then being able to reduce the first to the second in $t+1$ operations implies the same thing. Together, this base step and inductive step prove the result because by the base step the proposition is true in the zero operations case, and by the inductive step the fact that it is true in the zero operations case implies that it is true in the one operation case, and the inductive step applied again gives that it is therefore true in the two operations case, etc.
- Proof
We proceed by induction on the minimum number of row operations that take a first matrix $A$ to a second one $B$.
In the base step, that zero reduction operations suffice, the two matrices are equal and each row of $B$ is obviously a combination of $A$'s rows: $\beta_{i}=0\cdot\alpha_{1}+\cdots+1\cdot\alpha_{i}+\cdots+0\cdot\alpha_{m}$.
For the inductive step, assume the inductive hypothesis: with $t\geq 0$, if a matrix can be derived from $A$ in $t$ or fewer operations then its rows are linear combinations of $A$'s rows. Consider a $B$ that takes $t+1$ operations. Because there are more than zero operations, there must be a next-to-last matrix $G$ so that $A\longrightarrow\cdots\longrightarrow G\longrightarrow B$. This $G$ is only $t$ operations away from $A$ and so the inductive hypothesis applies to it, that is, each row of $G$ is a linear combination of the rows of $A$.
If the last operation, the one from $G$ to $B$, is a row swap then the rows of $B$ are just the rows of $G$ reordered, and thus each row of $B$ is also a linear combination of the rows of $A$. The other two possibilities for this last operation, that it multiplies a row by a scalar and that it adds a multiple of one row to another, both result in the rows of $B$ being linear combinations of the rows of $G$. But then, by the Linear Combination Lemma, each row of $B$ is a linear combination of the rows of $A$.
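In symbols, writing $\gamma_{1},\ldots,\gamma_{m}$ for the rows of $G$ and $\beta_{1},\ldots,\beta_{m}$ for the rows of $B$, the three cases are these: a swap $\rho_{p}\leftrightarrow\rho_{q}$ gives $\beta_{p}=\gamma_{q}$ and $\beta_{q}=\gamma_{p}$, a rescaling $k\rho_{p}$ (with $k\neq 0$) gives $\beta_{p}=k\gamma_{p}$, and a row combination $k\rho_{p}+\rho_{q}$ gives $\beta_{q}=k\gamma_{p}+\gamma_{q}$. In each case every other row is untouched, $\beta_{j}=\gamma_{j}$, so each row of $B$ is visibly a linear combination of the rows of $G$.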
With that, we have both the base step and the inductive step, and so the proposition follows.
- Example 2.4
In the reduction
$$\begin{pmatrix} 0&2\\ 1&1 \end{pmatrix} \xrightarrow{\rho_{1}\leftrightarrow\rho_{2}} \begin{pmatrix} 1&1\\ 0&2 \end{pmatrix} \xrightarrow{(1/2)\rho_{2}} \begin{pmatrix} 1&1\\ 0&1 \end{pmatrix} \xrightarrow{-\rho_{2}+\rho_{1}} \begin{pmatrix} 1&0\\ 0&1 \end{pmatrix}$$
call the matrices $A$, $D$, $G$, and $B$. The methods of the proof show that there are three sets of linear relationships.
$$\begin{array}{lll} \delta_{1}=0\cdot\alpha_{1}+1\cdot\alpha_{2} & \gamma_{1}=0\cdot\alpha_{1}+1\cdot\alpha_{2} & \beta_{1}=(-1/2)\alpha_{1}+1\cdot\alpha_{2} \\ \delta_{2}=1\cdot\alpha_{1}+0\cdot\alpha_{2} & \gamma_{2}=(1/2)\alpha_{1}+0\cdot\alpha_{2} & \beta_{2}=(1/2)\alpha_{1}+0\cdot\alpha_{2} \end{array}$$
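These relationships can be verified directly. For instance, the first relationship for $B$ asserts that
$$\beta_{1}=(-1/2)\alpha_{1}+1\cdot\alpha_{2}=(-1/2)\begin{pmatrix} 0&2 \end{pmatrix}+\begin{pmatrix} 1&1 \end{pmatrix}=\begin{pmatrix} 1&0 \end{pmatrix}$$
which is indeed the first row of $B$.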
The prior result gives us the insight that Gauss' method works by taking linear combinations of the rows. But to what end; why do we go to echelon form as a particularly simple, or basic, version of a linear system? The answer, of course, is that echelon form is suitable for back substitution, because we have isolated the variables. For instance, in this matrix
$$R=\begin{pmatrix} 2&3&7&8&0&0\\ 0&0&1&5&1&1\\ 0&0&0&3&3&0\\ 0&0&0&0&2&1 \end{pmatrix}$$
$x_{1}$ has been removed from $x_{5}$'s equation. That is, Gauss' method has made $x_{5}$'s row independent of $x_{1}$'s row.
Independence of a collection of row vectors, or of any kind of vectors, will be precisely defined and explored in the next chapter. But a first take on it is that we can show that, say, the third row above is not made up of the other rows, that $\rho_{3}\neq c_{1}\rho_{1}+c_{2}\rho_{2}+c_{4}\rho_{4}$. For, suppose that there are scalars $c_{1}$, $c_{2}$, and $c_{4}$ such that this relationship holds.
$$\begin{pmatrix} 0&0&0&3&3&0 \end{pmatrix} = c_{1}\begin{pmatrix} 2&3&7&8&0&0 \end{pmatrix} + c_{2}\begin{pmatrix} 0&0&1&5&1&1 \end{pmatrix} + c_{4}\begin{pmatrix} 0&0&0&0&2&1 \end{pmatrix}$$
The first row's leading entry is in the first column, and narrowing our consideration of the above relationship to only the entries from the first column gives $0=2c_{1}$, so that $c_{1}=0$. The second row's leading entry is in the third column, and the equation of entries in that column $0=7c_{1}+1c_{2}$, along with the knowledge that $c_{1}=0$, gives that $c_{2}=0$. Now, to finish, the third row's leading entry is in the fourth column, and the equation of entries in that column $3=8c_{1}+5c_{2}+0c_{4}$, along with $c_{1}=0$ and $c_{2}=0$, gives an impossibility.
The following result shows that this effect always holds. It shows that what Gauss' linear elimination method eliminates is linear relationships among the rows.
- Lemma 2.5
In an echelon form matrix, no nonzero row is a linear combination of the other rows.
- Proof
Let $R$ be in echelon form. Suppose, to obtain a contradiction, that some nonzero row $\rho_{i}$ is a linear combination of the others.
$$\rho_{i}=c_{1}\rho_{1}+\cdots+c_{i-1}\rho_{i-1}+c_{i+1}\rho_{i+1}+\cdots+c_{m}\rho_{m}$$
We will first use induction to show that the coefficients $c_{1}$, ..., $c_{i-1}$ associated with rows above $\rho_{i}$ are all zero. The contradiction will come from consideration of $\rho_{i}$ and the rows below it.
The base step of the induction argument is to show that the first coefficient $c_{1}$ is zero. Let the first row's leading entry be in column number $\ell_{1}$ and consider the equation of entries in that column.
$$\rho_{i,\ell_{1}}=c_{1}\rho_{1,\ell_{1}}+\cdots+c_{i-1}\rho_{i-1,\ell_{1}}+c_{i+1}\rho_{i+1,\ell_{1}}+\cdots+c_{m}\rho_{m,\ell_{1}}$$
The matrix is in echelon form so the entries $\rho_{2,\ell_{1}}$, ..., $\rho_{m,\ell_{1}}$, including $\rho_{i,\ell_{1}}$, are all zero. Because the entry $\rho_{1,\ell_{1}}$ is nonzero as it leads its row, the coefficient $c_{1}$ must be zero.
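To illustrate the base step with the echelon form matrix $R$ displayed earlier: if, say, $\rho_{2}=c_{1}\rho_{1}+c_{3}\rho_{3}+c_{4}\rho_{4}$ then the equation of entries in the first column reads
$$0=c_{1}\cdot 2+c_{3}\cdot 0+c_{4}\cdot 0$$
and since $2\neq 0$ this forces $c_{1}=0$.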
The inductive step is to show that for each row index $k$ between $1$ and $i-2$, if the coefficient $c_{1}$ and the coefficients $c_{2}$, ..., $c_{k}$ are all zero then $c_{k+1}$ is also zero. That argument, and the contradiction that finishes this proof, is saved for Problem 11.
We can now prove that each matrix is row equivalent to one and only one reduced echelon form matrix. We will find it convenient to break the first half of the argument off as a preliminary lemma. For one thing, it holds for any echelon form whatever, not just reduced echelon form.
- Lemma 2.6
If two echelon form matrices are row equivalent then the leading entries in their first rows lie in the same column. The same is true of all the nonzero rows: the leading entries in their second rows lie in the same column, etc.
For the proof we rephrase the result in more technical terms. Define the form of an $m\times n$ matrix to be the sequence $\langle \ell_{1},\ell_{2},\ldots,\ell_{m}\rangle$ where $\ell_{i}$ is the column number of the leading entry in row $i$ and $\ell_{i}=\infty$ if there is no leading entry in that row. The lemma says that if two echelon form matrices are row equivalent then their forms are equal sequences.
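For instance, the echelon form matrix $R$ displayed earlier in this section has leading entries in columns $1$, $3$, $4$, and $5$, so its form is $\langle 1,3,4,5\rangle$. An echelon form matrix with those same nonzero rows plus a fifth, all-zero, row would instead have the form $\langle 1,3,4,5,\infty\rangle$.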
- Proof
Let $B$ and $D$ be echelon form matrices that are row equivalent. Because they are row equivalent they must be the same size, say $m\times n$. Let the column number of the leading entry in row $i$ of $B$ be $\ell_{i}$ and let the column number of the leading entry in row $i$ of $D$ be $k_{i}$. We will show that $\ell_{1}=k_{1}$, that $\ell_{2}=k_{2}$, etc., by induction.
This induction argument relies on the fact that the matrices are row equivalent, because the Linear Combination Lemma and its corollary therefore give that each row of $B$ is a linear combination of the rows of $D$ and vice versa:
$$\beta_{i}=s_{i,1}\delta_{1}+s_{i,2}\delta_{2}+\cdots+s_{i,m}\delta_{m} \qquad\qquad \delta_{i}=t_{i,1}\beta_{1}+t_{i,2}\beta_{2}+\cdots+t_{i,m}\beta_{m}$$
where the $s$'s and $t$'s are scalars.
The base step of the induction is to verify the lemma for the first rows of the matrices, that is, to verify that $\ell_{1}=k_{1}$. If either row is a zero row then the entire matrix is a zero matrix since it is in echelon form, and therefore both matrices are zero matrices (by Corollary 2.3), and so both $\ell_{1}$ and $k_{1}$ are $\infty$. For the case where neither $\beta_{1}$ nor $\delta_{1}$ is a zero row, consider the $i=1$ instance of the linear relationship above.
$$\beta_{1}=s_{1,1}\delta_{1}+s_{1,2}\delta_{2}+\cdots+s_{1,m}\delta_{m}$$
First, note that $\ell_{1}<k_{1}$ is impossible: in the columns of $D$ to the left of column $k_{1}$ the entries are all zeroes (as $\delta_{1}$ leads the first row), and so if $\ell_{1}<k_{1}$ then the equation of entries from column $\ell_{1}$ would be $b_{1,\ell_{1}}=s_{1,1}\cdot 0+\cdots+s_{1,m}\cdot 0$, but $b_{1,\ell_{1}}$ isn't zero since it leads its row, and so this is an impossibility. Next, a symmetric argument shows that $k_{1}<\ell_{1}$ also is impossible. Thus the $\ell_{1}=k_{1}$ base case holds.
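For instance, if $\beta_{1}$ led in column $1$ while $\delta_{1}$ led in column $3$, then every row of $D$ would have a zero in column $1$, so any combination $s_{1,1}\delta_{1}+\cdots+s_{1,m}\delta_{m}$ would also have a zero in column $1$, and so could not equal $\beta_{1}$, whose column $1$ entry is its nonzero leading entry.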
The inductive step is to show that if $\ell_{1}=k_{1}$, and $\ell_{2}=k_{2}$, ..., and $\ell_{r}=k_{r}$, then also $\ell_{r+1}=k_{r+1}$ (for $r$ in the interval $1$ to $m-1$). This argument is saved for Problem 12.
That lemma answers two of the questions that we have posed: (i) any two echelon form versions of a matrix have the same free variables, and consequently, (ii) any two echelon form versions have the same number of free variables. There is no linear system and no combination of row operations such that, say, we could solve the system one way and get $y$ and $z$ free but solve it another way and get $y$ and $w$ free, or solve it one way and get two free variables while solving it another way yields three.
We finish now by specializing to the case of reduced echelon form matrices.
- Theorem 2.7
Each matrix is row equivalent to a unique reduced echelon form matrix.
- Proof
Clearly any matrix is row equivalent to at least one reduced echelon form matrix, via Gauss-Jordan reduction. For the other half, that any matrix is equivalent to at most one reduced echelon form matrix, we will show that if a matrix Gauss-Jordan reduces to each of two others then those two are equal.
Suppose that a matrix is row equivalent to two reduced echelon form matrices $B$ and $D$, which are therefore row equivalent to each other. The Linear Combination Lemma and its corollary allow us to write the rows of one, say $B$, as a linear combination of the rows of the other: $\beta_{i}=c_{i,1}\delta_{1}+\cdots+c_{i,m}\delta_{m}$. The preliminary result, Lemma 2.6, says that in the two matrices, the same collection of rows are nonzero. Thus, if $\beta_{1}$ through $\beta_{r}$ are the nonzero rows of $B$ then the nonzero rows of $D$ are $\delta_{1}$ through $\delta_{r}$. Zero rows don't contribute to the sum so we can rewrite the relationship to include just the nonzero rows.
$$\beta_{i}=c_{i,1}\delta_{1}+\cdots+c_{i,r}\delta_{r} \qquad (*)$$
The preliminary result also says that for each row index $i$ between $1$ and $r$, the leading entries of the $i$-th row of $B$ and $D$ appear in the same column, denoted $\ell_{i}$. Rewriting the above relationship to focus on the entries in the $\ell_{j}$-th column
$$b_{i,\ell_{j}}=c_{i,1}d_{1,\ell_{j}}+\cdots+c_{i,j}d_{j,\ell_{j}}+\cdots+c_{i,r}d_{r,\ell_{j}}$$
gives this set of equations for $i=1$ up to $i=r$.
Since $D$ is in reduced echelon form, all of the $d$'s in column $\ell_{j}$ are zero except for $d_{j,\ell_{j}}$, which is $1$. Thus each equation above simplifies to $b_{i,\ell_{j}}=c_{i,j}d_{j,\ell_{j}}=c_{i,j}\cdot 1$. But $B$ is also in reduced echelon form and so all of the $b$'s in column $\ell_{j}$ are zero except for $b_{j,\ell_{j}}$, which is $1$. Therefore, each $c_{i,j}$ is zero, except that $c_{1,1}=1$, and $c_{2,2}=1$, ..., and $c_{r,r}=1$.
We have shown that the only nonzero coefficient in the linear combination labelled $(*)$ is $c_{i,i}$, which is $1$. Therefore $\beta_{i}=\delta_{i}$. Because this holds for all nonzero rows, $B=D$.
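To see the mechanism of that argument in a small case, let $B$ and $D$ be row equivalent reduced echelon form matrices, each with two nonzero rows leading in columns $\ell_{1}=1$ and $\ell_{2}=3$, and consider $\beta_{1}=c_{1,1}\delta_{1}+c_{1,2}\delta_{2}$. The equation of entries in column $1$ is $1=c_{1,1}\cdot 1+c_{1,2}\cdot 0$, giving $c_{1,1}=1$, and the equation of entries in column $3$ is $0=c_{1,1}\cdot 0+c_{1,2}\cdot 1$, giving $c_{1,2}=0$. Hence $\beta_{1}=\delta_{1}$, and the same reasoning applied to the second rows gives $\beta_{2}=\delta_{2}$.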
We end with a recap. In Gauss' method we start with a matrix and then derive a sequence of other matrices. We defined two matrices to be related if one can be derived from the other by row operations. That relation is an equivalence relation, called row equivalence, and so it partitions the set of all matrices into row equivalence classes.
[Figure: the set of all matrices partitioned into row equivalence classes, with a couple of matrices shown in one class.] (There are infinitely many matrices in the pictured class, but we've only got room to show two.) We have proved there is one and only one reduced echelon form matrix in each row equivalence class. So the reduced echelon form is a canonical form[2] for row equivalence: the reduced echelon form matrices are representatives of the classes.
We can answer questions about the classes by translating them into questions about the representatives.
- Example 2.8
We can decide if matrices are interreducible by seeing if Gauss-Jordan reduction produces the same reduced echelon form result. Thus, for instance, these two are not row equivalent
$$\begin{pmatrix} 1&2\\ 3&4 \end{pmatrix} \qquad \begin{pmatrix} 1&2\\ 2&4 \end{pmatrix}$$
because their reduced echelon forms
$$\begin{pmatrix} 1&0\\ 0&1 \end{pmatrix} \qquad \begin{pmatrix} 1&2\\ 0&0 \end{pmatrix}$$
are not equal.
- Example 2.9
Any nonsingular $3\times 3$ matrix Gauss-Jordan reduces to this.
$$\begin{pmatrix} 1&0&0\\ 0&1&0\\ 0&0&1 \end{pmatrix}$$
- Example 2.10
We can describe the classes by listing all possible reduced echelon form matrices. Any $2\times 2$ matrix lies in one of these: the class of matrices row equivalent to this,
$$\begin{pmatrix} 0&0\\ 0&0 \end{pmatrix}$$
the infinitely many classes of matrices row equivalent to one of this type
$$\begin{pmatrix} 1&a\\ 0&0 \end{pmatrix}$$
where $a\in\mathbb{R}$ (including $a=0$), the class of matrices row equivalent to this,
$$\begin{pmatrix} 0&1\\ 0&0 \end{pmatrix}$$
and the class of matrices row equivalent to this
$$\begin{pmatrix} 1&0\\ 0&1 \end{pmatrix}$$
(this is the class of nonsingular $2\times 2$ matrices).
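Theorem 2.7 is what guarantees that this list has no repeats. For instance, two matrices of the $\begin{pmatrix} 1&a\\ 0&0 \end{pmatrix}$ type with different values of $a$ are different reduced echelon form matrices, and so by the theorem they cannot be row equivalent.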
Exercises
- This exercise is recommended for all readers.
- Problem 1
Decide if the matrices are row equivalent.
- Problem 2
Describe the matrices in each of the classes represented in Example 2.10.
- Problem 3
Describe all matrices in the row equivalence class of these.
- Problem 4
How many row equivalence classes are there?
- Problem 5
Can row equivalence classes contain different-sized matrices?
- Problem 6
How big are the row equivalence classes?
- Show that the class of any zero matrix is finite.
- Do any other classes contain only finitely many members?
- This exercise is recommended for all readers.
- Problem 7
Give two reduced echelon form matrices that have their leading entries in the same columns, but that are not row equivalent.
- This exercise is recommended for all readers.
- Problem 8
Show that any two $n\times n$ nonsingular matrices are row equivalent. Are any two singular matrices row equivalent?
- This exercise is recommended for all readers.
- Problem 9
Describe all of the row equivalence classes containing these.
- $2\times 2$ matrices
- $2\times 3$ matrices
- $3\times 2$ matrices
- $3\times 3$ matrices
- Problem 10
- Show that a vector $\vec{\beta}_{0}$ is a linear combination of members of the set $\{\vec{\beta}_{1},\ldots,\vec{\beta}_{n}\}$ if and only if there is a linear relationship $\vec{0}=c_{0}\vec{\beta}_{0}+\cdots+c_{n}\vec{\beta}_{n}$ where $c_{0}$ is not zero. (Hint. Watch out for the $\vec{\beta}_{0}=\vec{0}$ case.)
- Use that to simplify the proof of Lemma 2.5.
- This exercise is recommended for all readers.
- Problem 11
Finish the proof of Lemma 2.5.
- First illustrate the inductive step by showing that $c_{2}=0$.
- Do the full inductive step: where $1\leq k\leq i-2$, assume that $c_{1}$, ..., $c_{k}$ are all zero and deduce that $c_{k+1}$ is also zero.
- Find the contradiction.
- Problem 12
Finish the induction argument in Lemma 2.6.
- State the inductive hypothesis. Also state what must be shown to follow from that hypothesis.
- Check that the inductive hypothesis implies that in the relationship $\beta_{r+1}=s_{r+1,1}\delta_{1}+s_{r+1,2}\delta_{2}+\cdots+s_{r+1,m}\delta_{m}$ the coefficients $s_{r+1,1}$, ..., $s_{r+1,r}$ are each zero.
- Finish the inductive step by arguing, as in the base case, that $\ell_{r+1}<k_{r+1}$ and $k_{r+1}<\ell_{r+1}$ are impossible.
- Problem 13
Why, in the proof of Theorem 2.7, do we bother to restrict to the nonzero rows? Why not just stick to the relationship that we began with, $\beta_{i}=c_{i,1}\delta_{1}+\cdots+c_{i,m}\delta_{m}$, with $m$ instead of $r$, and argue using it that the only nonzero coefficient is $c_{i,i}$, which is $1$?
- This exercise is recommended for all readers.
- Problem 14
Three truck drivers went into a roadside cafe. One truck driver purchased four sandwiches, a cup of coffee, and ten doughnuts for $8.45. Another driver purchased three sandwiches, a cup of coffee, and seven doughnuts for $6.30. What did the third truck driver pay for a sandwich, a cup of coffee, and a doughnut? (Trono 1991)
- Problem 15
The fact that Gaussian reduction disallows multiplying a row by zero is needed for the proof of uniqueness of reduced echelon form; otherwise every matrix would be row equivalent to a matrix of all zeroes. Where is that fact used in the proof?
- This exercise is recommended for all readers.
- Problem 16
The Linear Combination Lemma says which equations can be gotten, via Gaussian reduction, from a given linear system.
- Produce an equation not implied by this system.
- Can any equation be derived from an inconsistent system?
- Problem 17
Extend the definition of row equivalence to linear systems. Under your definition, do equivalent systems have the same solution set? (Hoffman & Kunze 1971)
- This exercise is recommended for all readers.
- Problem 18
In this matrix
$$\begin{pmatrix} 1&2&3\\ 3&0&3\\ 1&4&5 \end{pmatrix}$$
the first and second columns add to the third.
- Show that this remains true under any row operation.
- Make a conjecture.
- Prove that it holds.
Footnotes
[1] More information on mathematical induction is in the appendix.
[2] More information on canonical representatives is in the appendix.
References
edit- Hoffman, Kenneth; Kunze, Ray (1971), Linear Algebra (Second ed.), Prentice Hall
- Trono, Tony (compiler) (1991), University of Vermont Mathematics Department High School Prize Examinations 1958-1991, mimeographed printing