The two reduced echelon form matrices are not identical, and so the
original matrices are not row equivalent.
The first is this.
The second is this.
These two are row equivalent.
These two are not row equivalent because they have different
sizes.
The first,
and the second.
These are not row equivalent.
Here the first is
while this is the second.
These are not row equivalent.
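Two matrices are row equivalent exactly when they have the same size and the same reduced echelon form, so checks like the ones above can be automated. Here is a minimal sketch in Python, using exact rational arithmetic; the example matrices are illustrative stand-ins, not the elided ones from the exercise.

```python
from fractions import Fraction

def rref(mat):
    """Return the reduced echelon form of `mat` (a list of rows),
    computed with exact rational arithmetic."""
    m = [[Fraction(x) for x in row] for row in mat]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if pivot is None:
            continue                        # no leading entry in this column
        m[r], m[pivot] = m[pivot], m[r]     # swap the pivot row up
        m[r] = [x / m[r][c] for x in m[r]]  # rescale so the leading entry is 1
        for i in range(rows):               # clear the rest of the column
            if i != r:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
        if r == rows:
            break
    return m

def row_equivalent(a, b):
    """Same size and same reduced echelon form."""
    return (len(a), len(a[0])) == (len(b), len(b[0])) and rref(a) == rref(b)

# Both of these reduce to ((1, 2), (0, 0)), so they are row equivalent ...
assert row_equivalent([[1, 2], [2, 4]], [[1, 2], [3, 6]])
# ... while a nonsingular matrix reduces to the identity instead.
assert not row_equivalent([[1, 2], [3, 4]], [[1, 2], [2, 4]])
```

Comparing reduced echelon forms decides the question because, as this section shows, each class contains one and only one reduced echelon form matrix.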
Problem 2
Describe the matrices in each of the classes represented in
Example 2.10.
Answer
First, the only matrix row equivalent to the matrix of all
$0$'s is itself (since row operations have no effect).
Second, the matrices that reduce to
$\begin{pmatrix} 1&a \\ 0&0 \end{pmatrix}$
have the form
$\begin{pmatrix} b&ba \\ c&ca \end{pmatrix}$
(where $a,b,c\in\mathbb{R}$, and $b$ and $c$ are not both zero).
Next, the matrices that reduce to
$\begin{pmatrix} 0&1 \\ 0&0 \end{pmatrix}$
have the form
$\begin{pmatrix} 0&a \\ 0&b \end{pmatrix}$
(where $a,b\in\mathbb{R}$, and not both are zero).
Finally, the matrices that reduce to
$\begin{pmatrix} 1&0 \\ 0&1 \end{pmatrix}$
are the nonsingular matrices.
That's because a linear system for which this is the matrix of
coefficients will have a unique solution, and that is the definition
of nonsingular.
(Another way to say this is that they fall into none
of the previous classes.)
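The four classes just described can be told apart by simple tests on the entries, since the class of a $2\times 2$ matrix is determined by its reduced echelon form. A sketch in Python; the function name and the informal class labels are hypothetical conveniences, not from the text.

```python
from fractions import Fraction

def classify_2x2(mat):
    """Name the row equivalence class of a 2x2 matrix, using the fact
    that the class is determined by the reduced echelon form."""
    (a, b), (c, d) = [[Fraction(x) for x in row] for row in mat]
    if a == b == c == d == 0:
        return "zero matrix"
    if a * d - b * c != 0:
        return "nonsingular"               # reduces to the identity
    if a != 0 or c != 0:
        return "reduces to (1 a ; 0 0)"    # singular, first column nonzero
    return "reduces to (0 1 ; 0 0)"        # singular, first column zero

assert classify_2x2([[0, 0], [0, 0]]) == "zero matrix"
assert classify_2x2([[1, 2], [3, 4]]) == "nonsingular"
assert classify_2x2([[2, 4], [1, 2]]) == "reduces to (1 a ; 0 0)"
assert classify_2x2([[0, 3], [0, 7]]) == "reduces to (0 1 ; 0 0)"
```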
Problem 3
Describe all matrices in the row equivalence class of
these.
Answer
They have the form
where .
They have this form (for ).
They have the form
(for ) where .
(This is the formula that determines when a $2\times 2$ matrix
is nonsingular.)
Problem 4
How many row equivalence classes are there?
Answer
Infinitely many.
For instance, among the $1\times 2$ matrices
each $k\in\mathbb{R}$ gives a different class, represented by the
reduced echelon form matrix $\begin{pmatrix} 1&k \end{pmatrix}$.
Problem 5
Can row equivalence classes contain different-sized matrices?
Answer
No.
Row operations do not change the size of a matrix.
Problem 6
How big are the row equivalence classes?
Show that the class of any zero matrix is finite.
Do any other classes contain only finitely many members?
Answer
A row operation on a zero matrix has no effect.
Thus each zero matrix is alone in its row equivalence class.
No.
In any other class some entry is nonzero, and rescaling that entry's
row by different factors produces infinitely many distinct members.
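To see the rescaling argument concretely: multiplying a row by each of the infinitely many nonzero factors yields a different member of the class. A small Python check on a hypothetical example matrix (any matrix with a nonzero entry would do).

```python
from fractions import Fraction

base = [[1, 2], [3, 4]]   # any matrix with a nonzero entry (illustrative choice)

# Rescaling the first row by n = 1, 2, ..., 5 is a legal row operation,
# so each result is in the class of `base`; and all five results differ.
members = [((Fraction(n) * base[0][0], Fraction(n) * base[0][1]),
            tuple(base[1])) for n in range(1, 6)]
assert len(set(members)) == 5
```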
This exercise is recommended for all readers.
Problem 7
Give two reduced echelon form matrices that have their leading
entries in the same columns,
but that are not row equivalent.
Answer
Here are two:
$\begin{pmatrix} 1&1&0 \\ 0&0&1 \end{pmatrix}$ and
$\begin{pmatrix} 1&0&0 \\ 0&0&1 \end{pmatrix}$.
Both are in reduced echelon form with leading entries in columns
$1$ and $3$, but they differ, so they are not row equivalent.
This exercise is recommended for all readers.
Problem 8
Show that any two $n\times n$ nonsingular matrices are
row equivalent.
Are any two singular matrices row equivalent?
Answer
Any two $n\times n$ nonsingular matrices have
the same reduced echelon
form, namely the identity matrix, with all $0$'s except for $1$'s down
the diagonal.
Two same-sized singular matrices need not be row equivalent.
For example, these two singular matrices
$\begin{pmatrix} 1&0 \\ 0&0 \end{pmatrix}$ and
$\begin{pmatrix} 0&1 \\ 0&0 \end{pmatrix}$
are not row equivalent, since each is already in reduced echelon form
and they differ.
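This can be checked mechanically: same-size nonsingular matrices all share the identity as their reduced echelon form, while singular ones can land in different classes. A sketch in Python with exact arithmetic; the particular matrices are illustrative.

```python
from fractions import Fraction

def rref(mat):
    """Reduced echelon form via Gauss-Jordan reduction, exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
        if r == len(m):
            break
    return m

identity = [[1, 0], [0, 1]]
# Two nonsingular matrices reduce to the same form, the identity ...
assert rref([[1, 2], [3, 4]]) == identity
assert rref([[2, 7], [1, 5]]) == identity
# ... while these two singular matrices have different reduced forms,
# so they are not row equivalent.
assert rref([[1, 0], [0, 0]]) != rref([[0, 1], [0, 0]])
```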
This exercise is recommended for all readers.
Problem 9
Describe all of the row equivalence classes containing these.
matrices
matrices
matrices
matrices
Answer
Since there is one and only one reduced echelon form matrix in each
class, we can just list the possible reduced echelon form matrices.
Show that a vector $\vec{v}_0$ is a linear combination
of members of the set
$\{\vec{v}_1,\dots,\vec{v}_n\}$
if and only if there is a linear relationship
$\vec{0}=c_0\vec{v}_0+c_1\vec{v}_1+\cdots+c_n\vec{v}_n$
where $c_0$ is not zero.
(Hint. Watch out for the $n=0$ case.)
If there is a linear relationship in which $c_0$ is not zero
then we can subtract $c_0\vec{v}_0$ from both sides and divide
by $-c_0$ to get $\vec{v}_0$ as a linear
combination of the others.
(Remark:
if there are no other vectors in the set, that is, if the
relationship is, say, $\vec{0}=3\vec{v}_0$,
then the statement is still true because
the zero vector is by definition the sum of the empty set
of vectors.)
Conversely, if $\vec{v}_0$ is a combination of the others,
$\vec{v}_0=c_1\vec{v}_1+\cdots+c_n\vec{v}_n$,
then subtracting $\vec{v}_0$
from both sides gives a relationship
$\vec{0}=-1\cdot\vec{v}_0+c_1\vec{v}_1+\cdots+c_n\vec{v}_n$
where at least one
of the coefficients is nonzero, namely
the $-1$ in front of $\vec{v}_0$.
The first row is not a linear combination of the
others for
the reason given in the proof: in the equation of components from
the column containing the leading entry of the first row, the
only nonzero entry is the leading entry from the first row, so
its coefficient must be zero.
Thus, from the prior part of this exercise, the first row is in
no linear relationship with the other rows.
Thus, when considering whether the second row can be in a linear
relationship
with the other rows, we can leave the first row out.
But now the argument just applied to the first row will apply
to the second row.
(That is, we are arguing here by induction.)
First illustrate the inductive step by showing
that $c_2=0$.
Do the full inductive step: where $1<n\leq m$,
assume that $c_k=0$ for $1\leq k<n$
and deduce that $c_n=0$
also.
Find the contradiction.
Answer
In the equation
$\vec{0}=c_1\vec{\rho}_1+c_2\vec{\rho}_2+\cdots+c_m\vec{\rho}_m$
(where $\vec{\rho}_1,\dots,\vec{\rho}_m$ are the rows of the
echelon form matrix) we already know that $c_1=0$.
Let $\ell_2$ be the column number of the leading entry of the
second row.
Consider the prior equation on entries in that column.
Because $\ell_2$ is the column of the leading entry in the second
row, $\rho_{i,\ell_2}=0$ for $i>2$.
Thus the equation reduces to
$0=c_2\rho_{2,\ell_2}$
and since $\rho_{2,\ell_2}$ is not $0$ we have that $c_2=0$.
In the equation
$\vec{0}=c_1\vec{\rho}_1+c_2\vec{\rho}_2+\cdots+c_m\vec{\rho}_m$
we already know that $0=c_1=c_2=\cdots=c_{n-1}$.
Let $\ell_n$ be the column number of the leading entry of
row $n$.
Consider the above equation on entries in that column.
Because $\ell_n$ is the column of the leading entry in the
row $n$, we have that $\rho_{i,\ell_n}=0$ for $i>n$.
Thus the equation reduces to
$0=c_n\rho_{n,\ell_n}$
and since $\rho_{n,\ell_n}$ is not $0$ we have that $c_n=0$.
From the prior items in this exercise we know that in the equation
$\vec{\rho}_i=c_1\vec{\rho}_1+\cdots+c_{i-1}\vec{\rho}_{i-1}
  +c_{i+1}\vec{\rho}_{i+1}+\cdots+c_m\vec{\rho}_m$
the coefficients satisfy $c_1=\cdots=c_{i-1}=0$.
Let $\ell_i$ be the column number of the leading entry of
row $i$.
Rewrite the above equation on entries in that column.
Because $\ell_i$ is the column of the leading entry in the
row $i$, we have that $\rho_{j,\ell_i}=0$ for $j>i$.
That makes the right side of the equation sum to $0$, but the
left side is $\rho_{i,\ell_i}$, which is not $0$ since it is the
leading entry of the row.
That's the contradiction.
(Here $\vec{\beta}_i$ and $\vec{\delta}_i$ are the rows of the two
reduced echelon form matrices, and $\ell_i$ and $k_i$ are the column
numbers of their leading entries.)
State the inductive hypothesis.
Also state what must be shown to follow from that hypothesis.
Check that the inductive hypothesis implies that
in the relationship
$\vec{\beta}_{r+1}=s_{r+1,1}\vec{\delta}_1+s_{r+1,2}\vec{\delta}_2
  +\cdots+s_{r+1,m}\vec{\delta}_m$
the coefficients
$s_{r+1,1},\dots,s_{r+1,r}$ are each zero.
Finish the inductive step by arguing, as in the base
case, that $\ell_{r+1}<k_{r+1}$ and $\ell_{r+1}>k_{r+1}$
are impossible.
Answer
The inductive step is to show that if
the statement holds on rows $1$ through $r$ then it also holds on
row $r+1$.
That is, we assume that $\ell_1=k_1$, and
$\ell_2=k_2$, ..., and $\ell_r=k_r$,
and we will show that $\ell_{r+1}=k_{r+1}$ also holds
(for $r$ in $1,\dots,m-1$).
Corollary 2.3 gives the
relationship
$\vec{\beta}_{r+1}=s_{r+1,1}\vec{\delta}_1+s_{r+1,2}\vec{\delta}_2
  +\cdots+s_{r+1,m}\vec{\delta}_m$
between rows.
Inside of those row vectors, consider the relationship between
the entries in the column $\ell_1=k_1$.
Because this is a row greater than the
first ($r+1>1$), the row $\vec{\beta}_{r+1}$ has a zero in
entry $\ell_1$
(the matrix is in echelon form).
But the row $\vec{\delta}_1$
has a nonzero entry in column $k_1$; by definition of $k_1$
it is the leading entry in the first row of $D$.
Thus, in that column, the above relationship among rows resolves
to this equation among numbers: $0=s_{r+1,1}\cdot\delta_{1,k_1}$,
with $\delta_{1,k_1}\neq 0$.
Therefore $s_{r+1,1}=0$.
With $s_{r+1,1}=0$, a similar argument shows that
$s_{r+1,2}=0$.
With those two, another turn gives that $s_{r+1,3}=0$.
That is, inside of the larger induction argument used to
prove the entire lemma, here is a subargument by induction
that shows $s_{r+1,j}=0$ for all $j$ in $1,\dots,r$.
(We won't write out the details since it is just like
the induction done in Problem 11.)
Note that the prior item of this exercise shows that the relationship
between rows
$\vec{\beta}_{r+1}=s_{r+1,1}\vec{\delta}_1+\cdots+s_{r+1,m}\vec{\delta}_m$
reduces to
$\vec{\beta}_{r+1}=s_{r+1,r+1}\vec{\delta}_{r+1}+\cdots+s_{r+1,m}\vec{\delta}_m$.
Consider the column $k_{r+1}$ entries in this equation.
By definition of $k_{r+1}$ as the column number of the leading
entry of $\vec{\delta}_{r+1}$, the entries in this column of the other
rows $\vec{\delta}_{r+2},\dots,\vec{\delta}_m$
are zeros.
Now if
$\ell_{r+1}<k_{r+1}$
then the equation of entries from
column $\ell_{r+1}$ would be
$\beta_{r+1,\ell_{r+1}}=s_{r+1,r+1}\cdot 0+\cdots+s_{r+1,m}\cdot 0$,
which is impossible as $\beta_{r+1,\ell_{r+1}}$ isn't zero, as it leads
its row.
A symmetric argument
shows that $\ell_{r+1}>k_{r+1}$ also is impossible.
Problem 13
Why, in the proof of Theorem 2.7,
do we bother to restrict to the nonzero rows?
Why not just stick to the relationship that we began with,
$\vec{\beta}_i=s_{i,1}\vec{\delta}_1+\cdots+s_{i,m}\vec{\delta}_m$,
with $m$ instead of $r$ (that is, with the sum over all rows instead
of only the nonzero rows),
and argue using it that the only nonzero coefficient
is $s_{i,i}$, which is $1$?
Answer
The zero rows could have nonzero coefficients, and
so the statement would not be true.
This exercise is recommended for all readers.
Problem 14
Three truck drivers went into a roadside cafe.
One truck driver purchased four sandwiches, a cup of coffee, and ten
doughnuts for $.
Another driver purchased three sandwiches, a cup of coffee, and seven
doughnuts for $.
What did the third truck driver pay for a sandwich, a cup of coffee, and
a doughnut?
(Trono 1991)
Answer
Writing $s$, $c$, and $d$ for the prices of a sandwich, a cup of
coffee, and a doughnut, we know the value of $4s+c+10d$ and of
$3s+c+7d$, and we'd like to know $s+c+d$.
Fortunately, $(1\;1\;1)$ is a linear combination of $(4\;1\;10)$
and $(3\;1\;7)$:
$(1\;1\;1)=-2\cdot(4\;1\;10)+3\cdot(3\;1\;7)$.
Calling the unknown price $p$, we have this reduction.
The price paid is $.
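The arithmetic can be sketched in Python. The dollar totals are elided above, so the values below are assumed stand-ins used only to illustrate the method; the combination $-2$ and $3$ itself is forced by the coefficient vectors $(4,1,10)$ and $(3,1,7)$ that do appear in the statement.

```python
from fractions import Fraction

# Assumed stand-in totals (the actual dollar amounts are elided above).
t1 = Fraction("8.45")   # hypothetical: 4 sandwiches + 1 coffee + 10 doughnuts
t2 = Fraction("6.30")   # hypothetical: 3 sandwiches + 1 coffee + 7 doughnuts

# (1, 1, 1) = x*(4, 1, 10) + y*(3, 1, 7) forces x = -2, y = 3:
x, y = -2, 3
assert (4*x + 3*y, 1*x + 1*y, 10*x + 7*y) == (1, 1, 1)

# So the same combination of the two bills prices one of each item.
price = x * t1 + y * t2   # equals $2.00 for these assumed totals
```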
Problem 15
The fact that Gaussian reduction disallows multiplying
a row by zero is needed for the proof of uniqueness of reduced echelon
form; without that restriction every matrix would
be row equivalent to a matrix of all zeros.
Where is the restriction used in the proof?
Answer
If multiplication of a row by zero were allowed then
Lemma 2.6
would not hold.
That is, with a zero-rescaling step such as
$\begin{pmatrix} 1&2 \\ 3&4 \end{pmatrix}
  \xrightarrow{0\rho_2}
  \begin{pmatrix} 1&2 \\ 0&0 \end{pmatrix}$
all the rows of the second matrix can be expressed as linear combinations
of the rows of the first, but the converse does not hold.
The second row of the first matrix is not a linear combination of the
rows of the second matrix.
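A quick Python check of that asymmetry, on a hypothetical pair of matrices related by the disallowed step of multiplying row two by zero: every combination of the new rows has the form $(c,\,2c)$, so the discarded row $(3,4)$ can never be recovered.

```python
# Hypothetical example: `first` becomes `second` by the disallowed
# operation "multiply row 2 by zero".
first = [(1, 2), (3, 4)]
second = [(1, 2), (0, 0)]

# Each row of `second` is a combination of the rows of `first`:
#   (1, 2) = 1*(1, 2) + 0*(3, 4)  and  (0, 0) = 0*(1, 2) + 0*(3, 4).
# But not conversely: combinations c1*(1, 2) + c2*(0, 0) = (c1, 2*c1)
# always have second entry twice the first, which (3, 4) violates.
reachable = {(c1, 2 * c1) for c1 in range(-10, 11)}
assert (3, 6) in reachable and (3, 4) not in reachable
```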
This exercise is recommended for all readers.
Problem 16
The Linear Combination Lemma says which equations can be gotten by
Gaussian reduction from a given linear system.
Produce an equation not implied by this system.
Can any equation be derived from an inconsistent system?
Answer
An easy answer is this:
For a less wise-guy-ish answer, solve the system;
that gives the values of the variables, and then any equation not
satisfied by that solution will do.
Every equation can be derived from an inconsistent system.
For instance, here is how to derive $ax+by=d$ from an inconsistent
equation $0=c$ with $c\neq 0$.
First, multiplying both sides by $(a/c)x$ gives $0=ax$
(validity of the $x=0$ case is separate but clear).
Similarly, multiplying by $(b/c)y$ gives $0=by$.
Ditto for $0=-d$, from multiplying by $-d/c$.
But now, adding the three equations gives $0=ax+by-d$, that is,
$ax+by=d$.
Problem 17
Extend the definition of row equivalence to linear systems.
Under your definition, do equivalent systems have the same solution set?
(Hoffman & Kunze 1971)
Answer
Define linear systems to be equivalent if their augmented
matrices are row equivalent.
The proof that equivalent systems have the same solution set is easy.
This exercise is recommended for all readers.
Problem 18
In this matrix
the first and second columns add to the third.
Show that this remains true under any row operation.
Make a conjecture.
Prove that it holds.
Answer
The three possible row swaps are easy,
as are the three possible rescalings.
One of the six possible pivots is :
and again the first and second columns add to the third.
The other five pivots are similar.
The obvious conjecture is that row operations do not change
linear relationships among columns.
A case-by-case
proof follows the sketch given in the first item.
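The conjecture can be spot-checked in Python. The matrix below is a hypothetical example whose first and second columns add to the third (the text's own matrix is not reproduced above); each kind of row operation is applied and the column relationship re-tested.

```python
from fractions import Fraction

# Hypothetical example matrix: column 1 + column 2 = column 3.
M = [[Fraction(x) for x in row] for row in [[1, 2, 3], [4, 5, 9], [0, 7, 7]]]

def relation_holds(m):
    """Does the first column plus the second equal the third?"""
    return all(row[0] + row[1] == row[2] for row in m)

swapped = [M[1], M[0], M[2]]                                     # swap rows 1, 2
scaled = [[5 * x for x in M[0]]] + M[1:]                         # rescale row 1 by 5
pivoted = [M[0], [a - 4 * b for a, b in zip(M[1], M[0])], M[2]]  # row 2 minus 4*row 1

assert all(relation_holds(m) for m in (M, swapped, scaled, pivoted))
```

Each operation acts on whole rows, so it treats the entries of the three columns in lockstep; that is why any linear relationship among columns survives, as the case-by-case proof shows.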