This subsection is optional, although it is necessary for later material in this section and in the next one.
A linear transformation $t\colon V\to V$, because it has the same domain and
codomain, can be iterated.
That is, compositions of $t$ with itself such as $t^2=t\circ t$
and $t^3=t\circ t\circ t$ are defined.
Note that this
power notation for the linear transformation functions dovetails with
the notation that we've used earlier for their squared matrix representations
because if $\mathrm{Rep}_{B,B}(t)=T$ then $\mathrm{Rep}_{B,B}(t^j)=T^j$.
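To make this dovetailing concrete, here is a small numerical sketch; the matrix below represents the derivative map with respect to the basis $\langle 1,x,x^2,x^3\rangle$ (this particular representation is chosen here as an assumption for illustration). Applying the map twice to a coordinate vector agrees with multiplying by the squared matrix.

```python
import numpy as np

# Matrix representing the derivative map d/dx on P_3 with respect
# to the basis <1, x, x^2, x^3> (an assumption for illustration).
T = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 0]])

v = np.array([1, 2, 3, 4])   # coordinates of 1 + 2x + 3x^2 + 4x^3

# Iterating the map (applying T twice) matches the squared matrix.
assert np.array_equal(T @ (T @ v), (T @ T) @ v)

# The same holds for higher powers.
assert np.array_equal(np.linalg.matrix_power(T, 3), T @ T @ T)
```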
For the derivative map $d/dx\colon \mathcal{P}_3\to\mathcal{P}_3$ given by
$$a+bx+cx^2+dx^3 \mapsto b+2cx+3dx^2$$
the second power is the second derivative
$$a+bx+cx^2+dx^3 \mapsto 2c+6dx$$
the third power is the third derivative
$$a+bx+cx^2+dx^3 \mapsto 6d$$
and any higher power is the zero map.
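A quick check of these powers, sketched with a matrix representation of the derivative map (the basis $\langle 1,x,x^2,x^3\rangle$ is an assumption for illustration):

```python
import numpy as np

# Derivative map on P_3 in the basis <1, x, x^2, x^3>
# (an assumption for illustration).
T = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 0]])

# Each application lowers the degree, so the third power is still
# nonzero but the fourth power (and every higher one) is the zero map.
assert np.any(np.linalg.matrix_power(T, 3))
assert not np.any(np.linalg.matrix_power(T, 4))
```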
This transformation of the space of $2\times 2$ matrices
$$\begin{pmatrix} a&b \\ c&d \end{pmatrix} \mapsto \begin{pmatrix} b&a \\ d&0 \end{pmatrix}$$
has this second power
$$\begin{pmatrix} a&b \\ c&d \end{pmatrix} \mapsto \begin{pmatrix} a&b \\ 0&0 \end{pmatrix}$$
and this third power.
$$\begin{pmatrix} a&b \\ c&d \end{pmatrix} \mapsto \begin{pmatrix} b&a \\ 0&0 \end{pmatrix}$$
After that, $t^4=t^2$ and $t^5=t^3$, etc.
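As a sketch of this settling down on the space of $2\times 2$ matrices, encoding a matrix as the 4-tuple of its entries (the particular map used here is an assumption for illustration):

```python
# A sample transformation of 2x2 matrices, encoded on tuples (a, b, c, d):
# it sends (a b / c d) to (b a / d 0).  The map is an assumption
# for illustration.
def t(m):
    a, b, c, d = m
    return (b, a, d, 0)

def power(f, n, m):
    """Apply the map f to m, n times in succession."""
    for _ in range(n):
        m = f(m)
    return m

m = (1, 2, 3, 4)
assert power(t, 4, m) == power(t, 2, m)   # t^4 = t^2
assert power(t, 5, m) == power(t, 3, m)   # t^5 = t^3
```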
These examples suggest that on iteration more and more zeros appear
until there is a settling down.
The next result makes this precise.
For any transformation $t\colon V\to V$, the rangespaces of the powers
form a descending chain
$$V \supseteq \mathscr{R}(t) \supseteq \mathscr{R}(t^2) \supseteq \cdots$$
and the nullspaces form an ascending chain.
$$\{\vec{0}\} \subseteq \mathscr{N}(t) \subseteq \mathscr{N}(t^2) \subseteq \cdots$$
Further, there is a $k$ such that for powers less than $k$ the subsets are proper (if $j<k$ then $\mathscr{R}(t^j)\supset\mathscr{R}(t^{j+1})$ and $\mathscr{N}(t^j)\subset\mathscr{N}(t^{j+1})$), while for powers greater than $k$ the sets are equal (if $j\geq k$ then $\mathscr{R}(t^j)=\mathscr{R}(t^{j+1})$ and $\mathscr{N}(t^j)=\mathscr{N}(t^{j+1})$).
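The two chains can be checked numerically for a sample transformation; the particular $4\times 4$ matrix below is an assumption for illustration.

```python
import numpy as np

# A sample 4x4 transformation (an assumption for illustration).
T = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 1]])

# Dimensions of the rangespaces and nullspaces of T^1, ..., T^5.
ranks = [int(np.linalg.matrix_rank(np.linalg.matrix_power(T, j)))
         for j in range(1, 6)]
nullities = [4 - r for r in ranks]

# Rangespace dimensions never increase; nullspace dimensions never
# decrease; and both chains eventually stabilize.
assert all(r1 >= r2 for r1, r2 in zip(ranks, ranks[1:]))
assert all(n1 <= n2 for n1, n2 in zip(nullities, nullities[1:]))
```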
We will do the rangespace half and leave the rest for the exercises.
Recall, however, that for any map the dimension of its rangespace
plus the dimension of its nullspace equals
the dimension of its domain.
So if the rangespaces shrink then the nullspaces must grow.
That the rangespaces form chains is clear
because if $\vec{w}\in\mathscr{R}(t^{j+1})$, so that
$\vec{w}=t^{j+1}(\vec{v})$ for some $\vec{v}$,
then $\vec{w}=t^{j}(\,t(\vec{v})\,)$ and so $\vec{w}\in\mathscr{R}(t^{j})$.
To verify the "further" property,
first observe that if any pair of rangespaces in the
chain are equal, $\mathscr{R}(t^{k})=\mathscr{R}(t^{k+1})$,
then all subsequent ones are also equal.
This is because $t\colon\mathscr{R}(t^{k+1})\to\mathscr{R}(t^{k+2})$
is the same map, with the same domain, as
$t\colon\mathscr{R}(t^{k})\to\mathscr{R}(t^{k+1})$ and
it therefore has the same range:
$\mathscr{R}(t^{k+2})=\mathscr{R}(t^{k+1})$
(and induction shows that it holds for all higher powers).
So if the chain of rangespaces ever stops being strictly decreasing then it is
stable from that point onward.
But the chain must stop decreasing. Each rangespace is a subspace of the one before it. For it to be a proper subspace it must be of strictly lower dimension (see Problem 4). These spaces are finite-dimensional and so the chain can fall for only finitely many steps, that is, the power $k$ is at most the dimension of $V$.
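A numerical sketch of this bound: for a transformation of an $n$-dimensional space, the chains have stabilized by the $n$-th power. The sample matrices below are assumptions for illustration.

```python
import numpy as np

# A few sample 3x3 transformations (assumptions for illustration).
samples = [
    np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]]),   # nilpotent: rank falls to 0
    np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0]]),   # projection: stable at once
    np.array([[2, 0, 0], [0, 0, 1], [0, 0, 0]]),   # mixed behavior
]
for T in samples:
    n = T.shape[0]
    # The rank of T^j has stopped falling by j = n, the dimension.
    rn = np.linalg.matrix_rank(np.linalg.matrix_power(T, n))
    rn1 = np.linalg.matrix_rank(np.linalg.matrix_power(T, n + 1))
    assert rn == rn1
```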
The derivative map $a+bx+cx^2+dx^3 \mapsto b+2cx+3dx^2$
of Example 1.1 has this chain of rangespaces
$$\mathcal{P}_3 \supset \mathcal{P}_2 \supset \mathcal{P}_1 \supset \mathcal{P}_0 \supset \{\vec{0}\} = \{\vec{0}\} = \cdots$$
and this chain of nullspaces.
$$\{\vec{0}\} \subset \mathcal{P}_0 \subset \mathcal{P}_1 \subset \mathcal{P}_2 \subset \mathcal{P}_3 = \mathcal{P}_3 = \cdots$$
The transformation $\pi\colon\mathbb{C}^3\to\mathbb{C}^3$ projecting onto the
first two coordinates $(c_1,c_2,c_3)\mapsto(c_1,c_2,0)$
has $\mathscr{R}(\pi)=\mathscr{R}(\pi^2)=\cdots$ and $\mathscr{N}(\pi)=\mathscr{N}(\pi^2)=\cdots$.
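A sketch of a projection's immediate stabilization, using a matrix over the reals for simplicity:

```python
import numpy as np

# Projection onto the first two coordinates (over the reals
# for simplicity).
P = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 0]])

# Projections are idempotent, so both chains stabilize immediately.
assert np.array_equal(P @ P, P)
assert np.linalg.matrix_rank(P) == 2   # dim R(pi) = dim R(pi^2) = 2
```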
Let $t\colon\mathcal{P}_2\to\mathcal{P}_2$ be the map
$c_0+c_1x+c_2x^2 \mapsto 2c_0+c_2x$.
As the lemma describes, on
iteration the rangespace shrinks
$$\mathscr{R}(t^0)=\mathcal{P}_2 \supset \mathscr{R}(t)=\{a+bx \mid a,b\in\mathbb{C}\} \supset \mathscr{R}(t^2)=\{a \mid a\in\mathbb{C}\}$$
and then stabilizes $\mathscr{R}(t^2)=\mathscr{R}(t^3)=\cdots$,
while the nullspace grows
$$\mathscr{N}(t^0)=\{0\} \subset \mathscr{N}(t)=\{c_1x \mid c_1\in\mathbb{C}\} \subset \mathscr{N}(t^2)=\{c_1x+c_2x^2 \mid c_1,c_2\in\mathbb{C}\}$$
and then stabilizes $\mathscr{N}(t^2)=\mathscr{N}(t^3)=\cdots$.
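A numerical check of this shrink-then-stabilize pattern; the particular map and basis below are assumptions for illustration.

```python
import numpy as np

# The map c0 + c1*x + c2*x^2  |->  2*c0 + c2*x on P_2, represented
# in the basis <1, x, x^2> (map and basis are assumptions for
# illustration).
T = np.array([[2, 0, 0],
              [0, 0, 1],
              [0, 0, 0]])

ranks = [int(np.linalg.matrix_rank(np.linalg.matrix_power(T, j)))
         for j in (1, 2, 3)]

# The rangespace shrinks and then stabilizes ...
assert ranks == [2, 1, 1]
# ... while the nullspace grows and then stabilizes.
assert [3 - r for r in ranks] == [1, 2, 2]
```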
This graph illustrates Lemma 1.3.
The horizontal axis gives the power $j$ of a transformation.
The vertical axis gives
the dimension of the rangespace of $t^j$
as the distance above zero, and thus also shows the dimension of the
nullspace as the distance below the gray horizontal line,
because the two add to the dimension of the domain.
As sketched, on iteration
the rank falls and with it the nullity grows
until the two reach a steady state.
This state must be reached by the $n$-th iterate, where $n$ is the dimension of the space.
The steady state's distance above zero is the dimension of the
generalized rangespace and its distance below is the dimension of
the generalized nullspace.
Let $t\colon V\to V$ be a transformation on an $n$-dimensional space. The generalized rangespace (or the closure of the rangespace) is $\mathscr{R}_\infty(t)=\mathscr{R}(t^n)$. The generalized nullspace (or the closure of the nullspace) is $\mathscr{N}_\infty(t)=\mathscr{N}(t^n)$.
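Under this definition, the dimensions of the two generalized spaces can be read off from the $n$-th matrix power; a sketch for a sample matrix (the matrix itself is an assumption for illustration):

```python
import numpy as np

# A sample 4x4 transformation (an assumption for illustration).
T = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 1]])
n = T.shape[0]

# The generalized rangespace is the rangespace of T^n, and the
# generalized nullspace is its nullspace.
Tn = np.linalg.matrix_power(T, n)
dim_gen_range = int(np.linalg.matrix_rank(Tn))
dim_gen_null = n - dim_gen_range

# The two dimensions still add to the dimension of the domain.
assert dim_gen_range + dim_gen_null == n
```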
Give the chains of rangespaces and nullspaces for the zero and
identity transformations.
For each map,
give the chain of rangespaces and the chain of nullspaces,
and the generalized rangespace and the generalized nullspace.
Prove that function composition is associative $(t\circ t)\circ t=t\circ (t\circ t)$
and so we can write $t^3$ without specifying a grouping.
Check that a subspace must be of dimension less than or equal to the
dimension of its superspace.
Check that if the subspace is proper (the subspace does not equal the
superspace) then the dimension is strictly less.
(This is used in the proof of Lemma 1.3.)
Prove that the generalized rangespace is the
entire space, and the generalized nullspace is trivial,
if the transformation is nonsingular.
Is this "only if" also?