The first subsection shows how to convert the representation
of a vector with respect to one basis to the representation of that
same vector with respect to another basis.
Here we will see how to
convert the representation of a map with
respect to one pair of bases to the representation of that
map with respect to a different pair.
That is, we want the relationship between the
matrices in this arrow diagram.
To move from the lower-left of this diagram
to the lower-right we can either go straight over, or
else up to the upper-left, over to the upper-right, and then down.
Restated in terms of the matrices,
we can calculate $\hat{H}=\text{Rep}_{\hat{B},\hat{D}}(h)$
either by simply using $\hat{B}$ and $\hat{D}$,
or else by first changing bases with $\text{Rep}_{\hat{B},B}(\text{id})$,
then multiplying by $H=\text{Rep}_{B,D}(h)$,
and then changing bases with $\text{Rep}_{D,\hat{D}}(\text{id})$.
This equation summarizes.
$$\hat{H}=\text{Rep}_{D,\hat{D}}(\text{id})\cdot H\cdot\text{Rep}_{\hat{B},B}(\text{id})$$
(To compare this equation with the sentence before it, remember that the
equation is read from right to left because
function composition is read right to left and matrix multiplication
represents the composition.)
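The two routes around the diagram can be checked numerically. In this sketch the matrices $H$, $\hat{B}$, $\hat{D}$, and the test vector are illustrative choices made here, not an example from the text.

```python
import numpy as np

# H is the matrix of a map h: R^2 -> R^2 with respect to the standard
# bases (an illustrative choice, not the book's example).
H = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Columns of Bhat / Dhat are the new domain / codomain basis vectors
# written in standard coordinates, so Bhat = Rep_{Bhat,E2}(id) and
# inv(Dhat) = Rep_{E2,Dhat}(id).
Bhat = np.array([[1.0, 1.0],
                 [0.0, 1.0]])
Dhat = np.array([[2.0, 0.0],
                 [1.0, 1.0]])

# The arrow-diagram equation: change bases, multiply by H, change bases.
Hhat = np.linalg.inv(Dhat) @ H @ Bhat

# Both routes agree: applying Hhat to v's representation in the new
# domain basis matches representing h(v) in the new codomain basis.
v = np.array([5.0, -2.0])
left = Hhat @ np.linalg.inv(Bhat) @ v       # straight over
right = np.linalg.inv(Dhat) @ (H @ v)       # up, over, and down
assert np.allclose(left, right)
```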
A matrix represents, with respect to the standard bases,
the transformation that rotates vectors through a fixed angle.
We can translate that representation with respect to
the standard bases to one with respect to another pair of bases
by using the arrow diagram and formula above.
From this, we can use the formula to compute the new representation.
Note that the change of basis matrix $\text{Rep}_{\hat{D},D}(\text{id})$ can be calculated as
the matrix inverse of $\text{Rep}_{D,\hat{D}}(\text{id})$.
Although the new matrix is messier-appearing,
the map that it represents is the same.
For instance, to replicate the effect of the rotation in the picture,
start with a vector represented with respect to the new basis,
apply the new matrix, and check the result
to see that it is the same as above.
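This check can be carried out numerically. In the sketch below the rotation angle and the second basis are illustrative choices made here, not the figures from the text.

```python
import numpy as np

# T rotates vectors of R^2 through theta, with respect to the standard
# basis on both sides (angle chosen for illustration).
theta = np.pi / 6
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Columns of D are the new basis vectors; That represents the same
# rotation with respect to that basis on both sides.
D = np.array([[1.0, 0.0],
              [1.0, 2.0]])
That = np.linalg.inv(D) @ T @ D

# Messier matrix, same map: applying That to v's representation in the
# new basis agrees with representing the rotated vector in that basis.
v = np.array([3.0, 1.0])
assert np.allclose(That @ np.linalg.inv(D) @ v,
                   np.linalg.inv(D) @ (T @ v))
```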
A map
that is represented with respect to the standard basis in one way
can also be represented with respect to another basis
in a way that is simpler, in that the action of a diagonal matrix is easy to understand.
Naturally, we usually prefer basis changes that make the
representation easier to understand.
When the representation with respect to equal starting
and ending bases is a diagonal matrix we say the map or matrix
has been diagonalized.
In Chapter Five we shall see which maps and matrices are diagonalizable,
and where one is not, we shall see how to get a representation that is nearly diagonal.
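As a small preview sketch of diagonalization (the matrix below is an illustrative choice), the eigenvector basis returned by NumPy's `eig` can serve as the equal starting and ending basis:

```python
import numpy as np

# An illustrative matrix; the columns of P are its eigenvectors.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
eigvals, P = np.linalg.eig(A)

# Representing the map with respect to the eigenvector basis, used as
# both the starting and the ending basis, gives a diagonal matrix.
Adiag = np.linalg.inv(P) @ A @ P
assert np.allclose(Adiag, np.diag(eigvals))
```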
We finish this subsection by considering the easier case
where representations are with respect to possibly different starting and
ending bases.
Recall that the prior subsection
shows that a matrix changes bases if and only if it is nonsingular.
That gives us another version of the above arrow diagram
and equation.
Same-sized matrices $H$ and $\hat{H}$ are
matrix equivalent if there are nonsingular matrices
$P$ and $Q$ such that $\hat{H}=PHQ$.
Matrix equivalent matrices represent the same map, with respect to appropriate pairs of bases.
Problem 10 checks that
matrix equivalence is an equivalence relation.
Thus it partitions the set of matrices into matrix equivalence classes.
We can get some insight into the classes by comparing matrix equivalence
with row equivalence
(recall that matrices are row equivalent when they can be reduced to each
other by row operations).
In the definition of matrix equivalence, the matrices $P$ and $Q$ are nonsingular and
thus each can be written as a product of elementary reduction matrices
(see Lemma 4.8 in the previous subsection).
Left-multiplication by the reduction matrices making up $P$
has the effect of performing row operations.
Right-multiplication by the reduction matrices making up $Q$
performs column operations.
Therefore, matrix equivalence is a generalization of row equivalence: two
matrices are row equivalent if one can be converted to the other by
a sequence of row reduction steps, while
two matrices are matrix equivalent if one can be converted to the other by a
sequence of row reduction steps followed by a sequence of column reduction
steps.
Thus, if matrices are row equivalent then they are also
matrix equivalent (since we can take $Q$ to be the identity matrix and so
perform no column operations).
The converse, however, does not hold.
For instance,
$$\begin{pmatrix}1&0\\0&0\end{pmatrix} \quad\text{and}\quad \begin{pmatrix}1&1\\0&0\end{pmatrix}$$
are matrix equivalent because the second can be reduced to the first by the column operation of taking $-1$ times the first column and adding it to the second. They are not row equivalent because they have different reduced echelon forms (in fact, both are already in reduced form).
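A numeric check of such a pair (the two matrices here are choices made for illustration): the column operation is right-multiplication by an elementary matrix $Q$, with $P$ the identity, so the pair satisfies the definition of matrix equivalence.

```python
import numpy as np

# Both matrices are already in reduced echelon form, and they differ,
# so they are not row equivalent.
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])
B = np.array([[1.0, 1.0],
              [0.0, 0.0]])

# Right-multiplying by Q adds -1 times column 1 to column 2.
Q = np.array([[1.0, -1.0],
              [0.0,  1.0]])

# With P the identity, P @ B @ Q = A, so A and B are matrix equivalent.
assert np.allclose(np.eye(2) @ B @ Q, A)
```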
We will close this section by finding a set
of representatives for the matrix equivalence classes.
Any $m\times n$ matrix of rank $k$ is matrix equivalent to
the $m\times n$ matrix that is all zeros except that
the first $k$ diagonal entries are ones.
Sometimes this is described as a block partial-identity form.
As discussed above, Gauss-Jordan reduce the given matrix and combine all the reduction matrices used there to make $P$. Then use the leading entries to do column reduction and finish by swapping columns to put the leading ones on the diagonal. Combine the reduction matrices used for those column operations into $Q$.
We illustrate the proof by finding the $P$ and $Q$ for a sample matrix.
First Gauss-Jordan row-reduce.
Then column-reduce, which involves right-multiplication.
Finish by swapping columns.
Finally, combine the left-multipliers together as $P$ and the
right-multipliers together as $Q$ to get the equation $PHQ$ equal to the block partial-identity matrix.
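The proof's recipe can be sketched as a short program. This is an illustrative implementation written for this note (the function name and the sample matrix are choices made here, not the book's): it Gauss-Jordan row-reduces while accumulating $P$, then column-reduces and swaps columns while accumulating $Q$.

```python
import numpy as np

def partial_identity_form(H, tol=1e-12):
    """Return (P, Q, N) with N = P @ H @ Q the block partial-identity
    matrix: ones in the first rank-many diagonal entries, zeros elsewhere."""
    m, n = H.shape
    P = np.eye(m)            # accumulates the row operations
    Q = np.eye(n)            # accumulates the column operations
    A = H.astype(float).copy()
    row = 0
    # Gauss-Jordan row reduction, mirroring every operation on P.
    for col in range(n):
        pivot = next((r for r in range(row, m) if abs(A[r, col]) > tol), None)
        if pivot is None:
            continue
        A[[row, pivot]] = A[[pivot, row]]
        P[[row, pivot]] = P[[pivot, row]]
        P[row] /= A[row, col]
        A[row] /= A[row, col]
        for r in range(m):
            if r != row and abs(A[r, col]) > tol:
                P[r] -= A[r, col] * P[row]
                A[r] -= A[r, col] * A[row]
        row += 1
    # Use each leading one to clear the rest of its row by column
    # operations, then swap its column onto the diagonal; mirror on Q.
    for r in range(row):
        col = next(c for c in range(n) if abs(A[r, c]) > tol)
        for c in range(n):
            if c != col and abs(A[r, c]) > tol:
                Q[:, c] -= A[r, c] * Q[:, col]
                A[:, c] -= A[r, c] * A[:, col]
        A[:, [r, col]] = A[:, [col, r]]
        Q[:, [r, col]] = Q[:, [col, r]]
    return P, Q, A

# A rank-2 sample matrix (chosen here): PHQ is the partial identity.
H = np.array([[1.0, 2.0, 2.0],
              [2.0, 4.0, 6.0]])
P, Q, N = partial_identity_form(H)
assert np.allclose(P @ H @ Q, N)
assert np.allclose(N, [[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0]])
```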
Two same-sized matrices are matrix equivalent if and only if they have
the same rank. That is, the matrix equivalence classes are
characterized by rank.
Two same-sized matrices with the same rank are each matrix equivalent to the same block partial-identity matrix, and matrix equivalence is symmetric and transitive, so they are matrix equivalent to each other.
The $2\times 2$ matrices have
only three possible ranks: zero, one, or two.
Thus there are three equivalence classes.
Each class consists of all of the matrices with the same rank. There is only one rank zero matrix, so that class has only one member, but the other two classes each have infinitely many members.
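A quick numeric illustration of why rank characterizes the classes (the representatives and the random nonsingular factors below are choices made here): multiplying by nonsingular $P$ and $Q$ leaves the rank unchanged, so every member of a class shares its representative's rank.

```python
import numpy as np

# One block partial-identity representative per possible 2x2 rank.
reps = [np.zeros((2, 2)),                     # rank 0
        np.array([[1.0, 0.0], [0.0, 0.0]]),  # rank 1
        np.eye(2)]                            # rank 2

rng = np.random.default_rng(0)
for N in reps:
    # Random matrices are nonsingular with probability 1, so P @ N @ Q
    # is matrix equivalent to N, and it has the same rank.
    P = rng.normal(size=(2, 2))
    Q = rng.normal(size=(2, 2))
    assert np.linalg.matrix_rank(P @ N @ Q) == np.linalg.matrix_rank(N)
```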
In this subsection
we have seen how to change the representation of a map with
respect to a first pair of bases to one with respect to a second pair.
That led to the definition of matrix equivalence.
Finally we noted that,
with the proper choice of (possibly different) starting and ending bases, any
map can be represented in block partial-identity form.
One of the nice things about this representation is that,
in some sense, we can completely understand the map when it is
expressed in this way:
if the bases are $B=\langle\vec{\beta}_1,\dots,\vec{\beta}_n\rangle$
and $D=\langle\vec{\delta}_1,\dots,\vec{\delta}_m\rangle$ then the map sends
$$c_1\vec{\beta}_1+\dots+c_k\vec{\beta}_k+c_{k+1}\vec{\beta}_{k+1}+\dots+c_n\vec{\beta}_n
\;\longmapsto\;
c_1\vec{\delta}_1+\dots+c_k\vec{\delta}_k+\vec{0}+\dots+\vec{0}$$
where $k$ is the map's rank.
Thus, we can understand any linear map as a kind of projection.
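A sketch of that projection-like action (the sizes and the rank $k=2$ are chosen for illustration): the block partial-identity matrix keeps the first $k$ coordinates and sends the rest to zero.

```python
import numpy as np

# Block partial-identity matrix with k = 2 leading ones.
N = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])

# Acting on a coordinate vector, it keeps the first two coordinates
# and kills the last -- a projection-like action.
c = np.array([4.0, -1.0, 7.0])
assert np.allclose(N @ c, [4.0, -1.0, 0.0])
```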
Of course, "understanding" a map expressed in this way
requires that we understand the relationship between the domain basis $B$ and the codomain basis $D$.
However, despite that difficulty,
this is a good classification of linear maps.