# Linear Algebra/Mechanics of Matrix Multiplication/Solutions

## Solutions

This exercise is recommended for all readers.
Problem 1

Predict the result of each multiplication by an elementary reduction matrix, and then check by multiplying it out.

1. ${\begin{pmatrix}3&0\\0&0\end{pmatrix}}{\begin{pmatrix}1&2\\3&4\end{pmatrix}}$
2. ${\begin{pmatrix}4&0\\0&2\end{pmatrix}}{\begin{pmatrix}1&2\\3&4\end{pmatrix}}$
3. ${\begin{pmatrix}1&0\\-2&1\end{pmatrix}}{\begin{pmatrix}1&2\\3&4\end{pmatrix}}$
4. ${\begin{pmatrix}1&2\\3&4\end{pmatrix}}{\begin{pmatrix}1&-1\\0&1\end{pmatrix}}$
5. ${\begin{pmatrix}1&2\\3&4\end{pmatrix}}{\begin{pmatrix}0&1\\1&0\end{pmatrix}}$
1. The second matrix has its first row multiplied by $3$  and its second row multiplied by $0$ .
${\begin{pmatrix}3&6\\0&0\end{pmatrix}}$
2. The second matrix has its first row multiplied by $4$  and its second row multiplied by $2$ .
${\begin{pmatrix}4&8\\6&8\end{pmatrix}}$
3. The second matrix undergoes the pivot operation of replacing the second row with $-2$  times the first row added to the second.
${\begin{pmatrix}1&2\\1&0\end{pmatrix}}$
4. The first matrix undergoes the column operation of: the second column is replaced by $-1$  times the first column plus the second.
${\begin{pmatrix}1&1\\3&1\end{pmatrix}}$
5. The first matrix has its columns swapped.
${\begin{pmatrix}2&1\\4&3\end{pmatrix}}$
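These predictions are also easy to check mechanically; the following sketch uses NumPy (an assumed dependency; any matrix library would serve).

```python
import numpy as np

# Check the five elementary-matrix products against the predictions above.
M = np.array([[1, 2], [3, 4]])

checks = [
    (np.array([[3, 0], [0, 0]]) @ M, [[3, 6], [0, 0]]),   # scale rows by 3, 0
    (np.array([[4, 0], [0, 2]]) @ M, [[4, 8], [6, 8]]),   # scale rows by 4, 2
    (np.array([[1, 0], [-2, 1]]) @ M, [[1, 2], [1, 0]]),  # row2 <- -2*row1 + row2
    (M @ np.array([[1, -1], [0, 1]]), [[1, 1], [3, 1]]),  # col2 <- -1*col1 + col2
    (M @ np.array([[0, 1], [1, 0]]), [[2, 1], [4, 3]]),   # swap columns
]
results = [bool((got == np.array(want)).all()) for got, want in checks]
```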
This exercise is recommended for all readers.
Problem 2

The need to take linear combinations of rows and columns in tables of numbers arises often in practice. For instance, this is a map of part of Vermont and New York.

 In part because of Lake Champlain, there are no roads directly connecting some pairs of towns. For instance, there is no way to go from Winooski to Grand Isle without going through Colchester. (Of course, many other roads and towns have been left off to simplify the graph. From top to bottom of this map is about forty miles.)

1. The incidence matrix of a map is the square matrix whose $i,j$  entry is the number of roads from city $i$  to city $j$ . Produce the incidence matrix of this map (take the cities in alphabetical order).
2. A matrix is symmetric if it equals its transpose. Show that an incidence matrix is symmetric. (These are all two-way streets. Vermont doesn't have many one-way streets.)
3. What is the significance of the square of the incidence matrix? The cube?
1. The incidence matrix is this (e.g., the first row shows that there is only one connection including Burlington, the road to Winooski).
${\begin{pmatrix}0&0&0&0&1\\0&0&1&1&1\\0&1&0&1&0\\0&1&1&0&0\\1&1&0&0&0\end{pmatrix}}$
2. Because these are two-way roads, any road connecting city $i$  to city $j$  gives a connection between city $j$  and city $i$ .
3. The square of the incidence matrix tells how cities are connected by trips involving two roads; similarly, the cube counts the connections made by trips of three roads.
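A small computation illustrates this; the sketch below assumes NumPy, with the cities in the alphabetical order used above (per the solution, city 0 is Burlington and city 4 is Winooski).

```python
import numpy as np

# Powers of the incidence matrix count multi-road trips.
A = np.array([
    [0, 0, 0, 0, 1],
    [0, 0, 1, 1, 1],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
])

symmetric = bool((A == A.T).all())  # two-way roads
A2 = A @ A   # (i, j) entry: number of two-road trips from city i to city j
A3 = A2 @ A  # number of three-road trips
```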
This exercise is recommended for all readers.
Problem 3

This table gives the number of hours of each type done by each worker, and the associated pay rates. Use matrices to compute the wages due.

|           | regular | overtime |
|-----------|---------|----------|
| Alan      | 40      | 12       |
| Betty     | 35      | 6        |
| Catherine | 40      | 18       |
| Donald    | 28      | 0        |

|          | wage    |
|----------|---------|
| regular  | $25.00  |
| overtime | $45.00  |

(Remark. This illustrates, as did the prior problem, that in practice we often want to compute linear combinations of rows and columns in a context where we really aren't interested in any associated linear maps.)

The pay due each person appears in the matrix product of the two arrays.
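Explicitly, with the hours array on the left and the wage column on the right, the product is this.

${\begin{pmatrix}40&12\\35&6\\40&18\\28&0\end{pmatrix}}{\begin{pmatrix}25.00\\45.00\end{pmatrix}}={\begin{pmatrix}1540.00\\1145.00\\1810.00\\700.00\end{pmatrix}}$

So Alan is due 1540.00, Betty 1145.00, Catherine 1810.00, and Donald 700.00 (in dollars).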

Problem 4

Find the product of this matrix with its transpose.

${\begin{pmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \end{pmatrix}}$

The product is the identity matrix (recall that $\cos ^{2}\theta +\sin ^{2}\theta =1$ ). An explanation is that the given matrix represents, with respect to the standard bases, a rotation in $\mathbb {R} ^{2}$  of $\theta$  radians while the transpose represents a rotation of $-\theta$  radians. The two cancel.

This exercise is recommended for all readers.
Problem 5

Prove that the diagonal matrices form a subspace of ${\mathcal {M}}_{n\!\times \!n}$ . What is its dimension?

The set of diagonal matrices is nonempty as the zero matrix is diagonal. Clearly it is closed under scalar multiples and sums. Therefore it is a subspace. The dimension is $n$ ; here is a basis.

$\{{\begin{pmatrix}1&0&\ldots \\0&0\\&&\ddots \\0&0&&0\end{pmatrix}},\ldots ,{\begin{pmatrix}0&0&\ldots \\0&0\\&&\ddots \\0&0&&1\end{pmatrix}}\}$
Problem 6

Does the identity matrix represent the identity map if the bases are unequal?

No. In ${\mathcal {P}}_{1}$ , with respect to the unequal bases $B=\langle 1,x\rangle$  and $D=\langle 1+x,1-x\rangle$ , the identity transformation is represented by this matrix.

${\rm {Rep}}_{B,D}({\text{id}})={\begin{pmatrix}1/2&1/2\\1/2&-1/2\end{pmatrix}}_{B,D}$
Problem 7

Show that every multiple of the identity commutes with every square matrix. Are there other matrices that commute with all square matrices?

For any scalar $r$  and square matrix $H$  we have $(rI)H=r(IH)=rH=r(HI)=(Hr)I=H(rI)$ .

There are no other such matrices; here is an argument for $2\!\times \!2$  matrices that is easily extended to $n\!\times \!n$ . If a matrix commutes with all others then it commutes with this unit matrix.

${\begin{pmatrix}0&a\\0&c\end{pmatrix}}={\begin{pmatrix}a&b\\c&d\end{pmatrix}}{\begin{pmatrix}0&1\\0&0\end{pmatrix}}={\begin{pmatrix}0&1\\0&0\end{pmatrix}}{\begin{pmatrix}a&b\\c&d\end{pmatrix}}={\begin{pmatrix}c&d\\0&0\end{pmatrix}}$

From this we first conclude that the upper left entry $a$  must equal its lower right entry $d$ . We also conclude that the lower left entry $c$  is zero. The argument for the upper right entry $b$  is similar.

Problem 8

Prove or disprove: nonsingular matrices commute.

It is false; these two don't commute.

${\begin{pmatrix}1&0\\0&0\end{pmatrix}}\qquad {\begin{pmatrix}0&0\\1&0\end{pmatrix}}$
This exercise is recommended for all readers.
Problem 9

Show that the product of a permutation matrix and its transpose is an identity matrix.

A permutation matrix has a single one in each row and column, and all its other entries are zeroes. Fix such a matrix. Suppose that the $i$ -th row has its one in its $j$ -th column. Then no other row has its one in the $j$ -th column; every other row has a zero in the $j$ -th column. Thus the dot product of the $i$ -th row and any other row is zero.

The $i$ -th row of the product is made up of the dot products of the $i$ -th row of the matrix and the columns of the transpose. By the last paragraph, all such dot products are zero except for the $i$ -th one, which is one.
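As a concrete check, here is a sketch (assuming NumPy; the helper name is made up) that builds a permutation matrix and multiplies it by its transpose.

```python
import numpy as np

# permutation_matrix is a made-up helper: row i gets its single one
# in column perm[i].
def permutation_matrix(perm):
    n = len(perm)
    P = np.zeros((n, n), dtype=int)
    for i, j in enumerate(perm):
        P[i, j] = 1
    return P

P = permutation_matrix([2, 0, 3, 1])
# Rows are pairwise orthogonal and each has unit length, so:
identity_check = bool((P @ P.T == np.eye(4, dtype=int)).all())
```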

Problem 10

Show that if the first and second rows of $G$  are equal then so are the first and second rows of $GH$ . Generalize.

The generalization is to go from the first and second rows to the $i_{1}$ -th and $i_{2}$ -th rows. Row $i$  of $GH$  is made up of the dot products of row $i$  of $G$  and the columns of $H$ . Thus if rows $i_{1}$  and $i_{2}$  of $G$  are equal then so are rows $i_{1}$  and $i_{2}$  of $GH$ .

Problem 11

Describe the product of two diagonal matrices.

If the product of two diagonal matrices is defined— if both are $n\!\times \!n$ — then the product is again diagonal, with each diagonal entry the product of the corresponding entries: where $G,H$  are equal-sized diagonal matrices, $GH$  is all zeros except that each $i,i$  entry is $g_{i,i}h_{i,i}$ .

Problem 12

Write

${\begin{pmatrix}1&0\\-3&3\end{pmatrix}}$

as the product of two elementary reduction matrices.

One way to produce this matrix from the identity is to use the column operations of first multiplying the second column by three, and then adding the negative of the resulting second column to the first.

${\begin{pmatrix}1&0\\0&1\end{pmatrix}}\;{\xrightarrow[{}]{}}\;{\begin{pmatrix}1&0\\0&3\end{pmatrix}}\;{\xrightarrow[{}]{}}\;{\begin{pmatrix}1&0\\-3&3\end{pmatrix}}$

Column operations (in contrast with row operations) are written from left to right, so doing the above two operations is expressed with this matrix product.

${\begin{pmatrix}1&0\\0&3\end{pmatrix}}{\begin{pmatrix}1&0\\-1&1\end{pmatrix}}$

Remark. Alternatively, we could get the required matrix with row operations. Starting with the identity, first adding the negative of the first row to the second, and then multiplying the second row by three will work. Because successive row operations are written as matrix products from right to left, doing these two row operations is expressed with the same matrix product.
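Either way, the factorization is easy to verify numerically (a sketch assuming NumPy):

```python
import numpy as np

# The two elementary matrices from above and their product.
E_scale = np.array([[1, 0], [0, 3]])     # multiplies the second line by three
E_combine = np.array([[1, 0], [-1, 1]])  # combines in the negative of a line
target = np.array([[1, 0], [-3, 3]])
factorization_ok = bool((E_scale @ E_combine == target).all())
```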

This exercise is recommended for all readers.
Problem 13

Show that if $G$  has a row of zeros then $GH$  (if defined) has a row of zeros. Does that work for columns?

The $i$ -th row of $GH$  is made up of the dot products of the $i$ -th row of $G$  with the columns of $H$ . The dot product of a zero row with a column is zero.

It works for columns if stated correctly: if $H$  has a column of zeros then $GH$  (if defined) has a column of zeros. The proof is easy.

Problem 14

Show that the set of unit matrices forms a basis for ${\mathcal {M}}_{n\!\times \!m}$ .

Perhaps the easiest way is to show that each $n\!\times \!m$  matrix is a linear combination of unit matrices in one and only one way:

$c_{1}{\begin{pmatrix}1&0&\ldots \\0&0\\\vdots \end{pmatrix}}+\dots +c_{n,m}{\begin{pmatrix}0&0&\ldots \\\vdots \\0&\ldots &&1\end{pmatrix}}={\begin{pmatrix}a_{1,1}&a_{1,2}&\ldots \\\vdots \\a_{n,1}&\ldots &&a_{n,m}\end{pmatrix}}$

has the unique solution $c_{1}=a_{1,1}$ , $c_{2}=a_{1,2}$ , etc.

Problem 15

Find the formula for the $n$ -th power of this matrix.

${\begin{pmatrix}1&1\\1&0\end{pmatrix}}$

Call that matrix $F$ . We have

$F^{2}={\begin{pmatrix}2&1\\1&1\end{pmatrix}}\quad F^{3}={\begin{pmatrix}3&2\\2&1\end{pmatrix}}\quad F^{4}={\begin{pmatrix}5&3\\3&2\end{pmatrix}}$

In general,

$F^{n}={\begin{pmatrix}f_{n+1}&f_{n}\\f_{n}&f_{n-1}\end{pmatrix}}$

where $f_{i}$  is the $i$ -th Fibonacci number $f_{i}=f_{i-1}+f_{i-2}$  and $f_{0}=0$ , $f_{1}=1$ , which is verified by induction, based on this equation.

${\begin{pmatrix}f_{i-1}&f_{i-2}\\f_{i-2}&f_{i-3}\end{pmatrix}}{\begin{pmatrix}1&1\\1&0\end{pmatrix}}={\begin{pmatrix}f_{i}&f_{i-1}\\f_{i-1}&f_{i-2}\end{pmatrix}}$
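The pattern is quick to confirm for small $n$ ; here is a sketch assuming NumPy.

```python
import numpy as np

F = np.array([[1, 1], [1, 0]])

def fib(i):
    # f_0 = 0, f_1 = 1, f_i = f_{i-1} + f_{i-2}
    a, b = 0, 1
    for _ in range(i):
        a, b = b, a + b
    return a

# Compare F^n with [[f_{n+1}, f_n], [f_n, f_{n-1}]] for n = 1..14
# (small exponents keep the entries well inside integer range).
ok = True
P = F.copy()
for n in range(1, 15):
    expected = np.array([[fib(n + 1), fib(n)], [fib(n), fib(n - 1)]])
    ok = ok and bool((P == expected).all())
    P = P @ F
```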
This exercise is recommended for all readers.
Problem 16

The trace of a square matrix is the sum of the entries on its diagonal (its significance appears in Chapter Five). Show that ${\text{trace}}\,(GH)={\text{trace}}\,(HG)$ .

Chapter Five gives a less computational reason— the trace of a matrix is the second coefficient in its characteristic polynomial— but for now we can use indices. We have

${\begin{array}{rl}{\text{trace}}\,(GH)&=(g_{1,1}h_{1,1}+g_{1,2}h_{2,1}+\dots +g_{1,n}h_{n,1})\\&\quad +(g_{2,1}h_{1,2}+g_{2,2}h_{2,2}+\dots +g_{2,n}h_{n,2})\\&\quad +\cdots +(g_{n,1}h_{1,n}+g_{n,2}h_{2,n}+\dots +g_{n,n}h_{n,n})\end{array}}$

while

${\begin{array}{rl}{\text{trace}}\,(HG)&=(h_{1,1}g_{1,1}+h_{1,2}g_{2,1}+\dots +h_{1,n}g_{n,1})\\&\quad +(h_{2,1}g_{1,2}+h_{2,2}g_{2,2}+\dots +h_{2,n}g_{n,2})\\&\quad +\cdots +(h_{n,1}g_{1,n}+h_{n,2}g_{2,n}+\dots +h_{n,n}g_{n,n})\end{array}}$

and the two are equal.
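The identity is also easy to spot-check; the sketch below (assuming NumPy) tries it on random integer matrices.

```python
import numpy as np

# trace(GH) = trace(HG) on a handful of random 4x4 integer matrices.
rng = np.random.default_rng(0)
ok = True
for _ in range(20):
    G = rng.integers(-9, 10, size=(4, 4))
    H = rng.integers(-9, 10, size=(4, 4))
    ok = ok and bool(np.trace(G @ H) == np.trace(H @ G))
```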

This exercise is recommended for all readers.
Problem 17

A square matrix is upper triangular if its only nonzero entries lie above, or on, the diagonal. Show that the product of two upper triangular matrices is upper triangular. Does this hold for lower triangular also?

A matrix is upper triangular if and only if its $i,j$  entry is zero whenever $i>j$ . Thus, if $G,H$  are upper triangular then $g_{i,j}$  and $h_{i,j}$  are zero when $i>j$ . A summand $g_{i,r}h_{r,j}$  of the entry $p_{i,j}=g_{i,1}h_{1,j}+\dots +g_{i,n}h_{n,j}$  of the product is nonzero only when both $i\leq r$  and $r\leq j$ . If $i>j$  then no such $r$  exists, so every summand is zero and the product of two upper triangular matrices is upper triangular. (A similar argument works for lower triangular matrices.)

Problem 18

A square matrix is a Markov matrix if each entry is between zero and one and the sum along each row is one. Prove that a product of Markov matrices is Markov.

Each entry of the product is a sum of products of nonnegative numbers and so is nonnegative; and since no entry can exceed its row's sum, it remains only to check that the sum along the $i$ -th row of the product is one. That sum is this.

${\begin{array}{rl}p_{i,1}+\cdots +p_{i,n}&=(h_{i,1}g_{1,1}+h_{i,2}g_{2,1}+\dots +h_{i,n}g_{n,1})\\&\quad +(h_{i,1}g_{1,2}+h_{i,2}g_{2,2}+\dots +h_{i,n}g_{n,2})\\&\quad +\dots +(h_{i,1}g_{1,n}+h_{i,2}g_{2,n}+\dots +h_{i,n}g_{n,n})\\&=h_{i,1}(g_{1,1}+g_{1,2}+\dots +g_{1,n})\\&\quad +h_{i,2}(g_{2,1}+g_{2,2}+\dots +g_{2,n})\\&\quad +\dots +h_{i,n}(g_{n,1}+g_{n,2}+\dots +g_{n,n})\\&=h_{i,1}\cdot 1+\dots +h_{i,n}\cdot 1\\&=1\end{array}}$
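A numerical spot-check of both Markov conditions on the product (a sketch assuming NumPy; the helper name is made up):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_markov(n):
    # Made-up helper: nonnegative entries, each row normalized to sum one.
    M = rng.random((n, n))
    return M / M.sum(axis=1, keepdims=True)

H, G = random_markov(5), random_markov(5)
P = H @ G
entries_in_range = bool(((P >= 0) & (P <= 1)).all())
rows_sum_to_one = bool(np.allclose(P.sum(axis=1), 1.0))
```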
This exercise is recommended for all readers.
Problem 19

Give an example of two matrices of the same rank with squares of differing rank.

Matrices representing (say, with respect to ${\mathcal {E}}_{2},{\mathcal {E}}_{2}\subset \mathbb {R} ^{2}$ ) the maps that send

${\vec {\beta }}_{1}{\stackrel {h}{\longmapsto }}{\vec {\beta }}_{1}\quad {\vec {\beta }}_{2}{\stackrel {h}{\longmapsto }}{\vec {0}}$

and

${\vec {\beta }}_{1}{\stackrel {g}{\longmapsto }}{\vec {\beta }}_{2}\quad {\vec {\beta }}_{2}{\stackrel {g}{\longmapsto }}{\vec {0}}$

will do.
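With respect to the standard basis those two maps are represented by the matrices below, checked here with NumPy's rank routine (a sketch).

```python
import numpy as np

H = np.array([[1, 0],
              [0, 0]])  # h: fixes the first basis vector, kills the second
G = np.array([[0, 0],
              [1, 0]])  # g: sends the first basis vector to the second

same_rank = int(np.linalg.matrix_rank(H)) == int(np.linalg.matrix_rank(G))
square_ranks = (int(np.linalg.matrix_rank(H @ H)),
                int(np.linalg.matrix_rank(G @ G)))  # H^2 = H but G^2 = 0
```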

Problem 20

Combine the two generalizations of the identity matrix, the one allowing entries to be other than ones, and the one allowing the single one in each row and column to be off the diagonal. What is the action of this type of matrix?

The combination is to have all entries of the matrix be zero except for one (possibly) nonzero entry in each row and column. Such a matrix can be written as the product of a permutation matrix and a diagonal matrix, e.g.,

${\begin{pmatrix}0&4&0\\2&0&0\\0&0&-5\end{pmatrix}}={\begin{pmatrix}0&1&0\\1&0&0\\0&0&1\end{pmatrix}}{\begin{pmatrix}2&0&0\\0&4&0\\0&0&-5\end{pmatrix}}$

and its action is thus to rescale the rows and permute them.

Problem 21

On a computer multiplications are more costly than additions, so people are interested in reducing the number of multiplications used to compute a matrix product.

1. How many real number multiplications are needed in the formula we gave for the product of an $m\!\times \!r$  matrix and an $r\!\times \!n$  matrix?
2. Matrix multiplication is associative, so all associations yield the same result. The cost in number of multiplications, however, varies. Find the association requiring the fewest real number multiplications to compute the matrix product of a $5\!\times \!10$  matrix, a $10\!\times \!20$  matrix, a $20\!\times \!5$  matrix, and a $5\!\times \!1$  matrix.
3. (Very hard.) Find a way to multiply two $2\!\times \!2$  matrices using only seven multiplications instead of the eight suggested by the naive approach.
1. Each entry $p_{i,j}=g_{i,1}h_{1,j}+\dots +g_{i,r}h_{r,j}$  takes $r$  multiplications and there are $m\cdot n$  entries. Thus there are $m\cdot n\cdot r$  multiplications.
2. Let $H_{1}$  be $5\!\times \!10$ , let $H_{2}$  be $10\!\times \!20$ , let $H_{3}$  be $20\!\times \!5$ , let $H_{4}$  be $5\!\times \!1$ . Then, using the formula from the prior part,

${\begin{array}{l|l}{\textit {this\ association}}&{\textit {uses\ this\ many\ multiplications}}\\\hline ((H_{1}H_{2})H_{3})H_{4}&1000+500+25=1525\\(H_{1}(H_{2}H_{3}))H_{4}&1000+250+25=1275\\(H_{1}H_{2})(H_{3}H_{4})&1000+100+100=1200\\H_{1}(H_{2}(H_{3}H_{4}))&100+200+50=350\\H_{1}((H_{2}H_{3})H_{4})&1000+50+50=1100\end{array}}$

shows which is cheapest.
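The search over associations can be automated; this sketch brute-forces the minimum cost by recursion over split points (a standard matrix-chain computation).

```python
from functools import lru_cache

# The i-th matrix is dims[i] x dims[i+1]: 5x10, 10x20, 20x5, 5x1.
dims = [5, 10, 20, 5, 1]

@lru_cache(maxsize=None)
def min_mults(i, j):
    """Fewest multiplications to compute the product of matrices i, ..., j-1."""
    if j - i == 1:
        return 0
    return min(
        min_mults(i, k) + min_mults(k, j) + dims[i] * dims[k] * dims[j]
        for k in range(i + 1, j)
    )

best = min_mults(0, 4)  # matches the cheapest row of the table: 350
```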

3. This is reported by Knuth as an improvement by S. Winograd of a formula due to V. Strassen: where $w=aA-(a-c-d)(A-B+D)$ ,
${\begin{pmatrix}a&b\\c&d\end{pmatrix}}{\begin{pmatrix}A&B\\C&D\end{pmatrix}}$
$={\begin{pmatrix}aA+bC&w+(c+d)(B-A)+(a+b-c-d)D\\w+(a-c)(D-B)-d(A-B-C+D)&w+(a-c)(D-B)+(c+d)(B-A)\end{pmatrix}}$
takes seven multiplications and fifteen additions (save the intermediate results).
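The multiplication count is easy to verify mechanically; the sketch below implements a seven-multiplication scheme of this kind (the function name is made up) and compares it entry-by-entry with the direct product.

```python
# Seven-multiplication scheme for the 2x2 product (made-up function name).
def seven_mult_product(a, b, c, d, A, B, C, D):
    m1 = a * A                      # the seven multiplications...
    m2 = (a - c - d) * (A - B + D)
    m3 = b * C
    m4 = (c + d) * (B - A)
    m5 = (a + b - c - d) * D
    m6 = (a - c) * (D - B)
    m7 = d * (A - B - C + D)
    w = m1 - m2                     # ...everything else is addition
    return (m1 + m3, w + m4 + m5, w + m6 - m7, w + m6 + m4)

ok = True
for a, b, c, d, A, B, C, D in [(1, 2, 3, 4, 5, 6, 7, 8),
                               (2, 0, -1, 3, 1, 4, -2, 5)]:
    direct = (a*A + b*C, a*B + b*D, c*A + d*C, c*B + d*D)
    ok = ok and seven_mult_product(a, b, c, d, A, B, C, D) == direct
```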
Problem 22

If $A$  and $B$  are square matrices of the same size such that $ABAB=0$ , does it follow that $BABA=0$ ? (Putnam Exam 1990)

This is how the answer was given in the cited source.

No, it does not. Let $A$  and $B$  represent, with respect to the standard bases, these transformations of $\mathbb {R} ^{3}$ .

${\begin{pmatrix}x\\y\\z\end{pmatrix}}{\stackrel {a}{\longmapsto }}{\begin{pmatrix}x\\y\\0\end{pmatrix}}\qquad {\begin{pmatrix}x\\y\\z\end{pmatrix}}{\stackrel {b}{\longmapsto }}{\begin{pmatrix}0\\x\\y\end{pmatrix}}$

Observe that

${\begin{pmatrix}x\\y\\z\end{pmatrix}}{\stackrel {abab}{\longmapsto }}{\begin{pmatrix}0\\0\\0\end{pmatrix}}\quad {\text{but}}\quad {\begin{pmatrix}x\\y\\z\end{pmatrix}}{\stackrel {baba}{\longmapsto }}{\begin{pmatrix}0\\0\\x\end{pmatrix}}.$
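In matrix form (a sketch assuming NumPy) the two transformations and the claim look like this.

```python
import numpy as np

A = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 0]])  # a: kills the z-coordinate
B = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 1, 0]])  # b: (x, y, z) -> (0, x, y)

abab_zero = bool((A @ B @ A @ B == 0).all())  # ABAB = 0
baba_zero = bool((B @ A @ B @ A == 0).all())  # but BABA != 0
```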
Problem 23

Demonstrate these four assertions to get an alternate proof that column rank equals row rank. (Liebeck 1966)

1. ${\vec {y}}\cdot {\vec {y}}=0$  iff ${\vec {y}}={\vec {0}}$ .
2. $A{\vec {x}}={\vec {0}}$  iff ${{A}^{\rm {trans}}}A{\vec {x}}={\vec {0}}$ .
3. $\dim({\mathcal {R}}(A))=\dim({\mathcal {R}}({{A}^{\rm {trans}}}A))$ .
4. ${\text{col rank}}(A)={\text{col rank}}({{A}^{\rm {trans}}})={\text{row rank}}(A)$ .

This is how the answer was given in the cited source.

1. Obvious.
2. If ${{A}^{\rm {trans}}}A{\vec {x}}={\vec {0}}$  then ${\vec {y}}\cdot {\vec {y}}=0$  where ${\vec {y}}=A{\vec {x}}$ . Hence ${\vec {y}}={\vec {0}}$  by (a). The converse is obvious.
3. By (b), $A{\vec {x}}_{1}$ , ... ,$A{\vec {x}}_{n}$  are linearly independent iff ${{A}^{\rm {trans}}}A{\vec {x}}_{1}$ , ... , ${{A}^{\rm {trans}}}A{\vec {x}}_{n}$  are linearly independent.
4. We have ${\text{col rank}}(A)={\text{col rank}}({{A}^{\rm {trans}}}A)=\dim\{{{A}^{\rm {trans}}}(A{\vec {x}})\,{\big |}\,{\text{all }}{\vec {x}}\}\leq \dim\{{{A}^{\rm {trans}}}{\vec {y}}\,{\big |}\,{\text{all }}{\vec {y}}\}={\text{col rank}}({{A}^{\rm {trans}}})$ . Thus also ${\text{col rank}}({{A}^{\rm {trans}}})\leq {\text{col rank}}({{{A}^{\rm {trans}}}^{\rm {trans}}})$  and so we have ${\text{col rank}}(A)={\text{col rank}}({{A}^{\rm {trans}}})={\text{row rank}}(A)$ .
Problem 24

Prove (where $A$  is an $n\!\times \!n$  matrix and so defines a transformation of any $n$ -dimensional space $V$  with respect to $B,B$  where $B$  is a basis) that $\dim({\mathcal {R}}(A)\cap {\mathcal {N}}(A))=\dim({\mathcal {R}}(A))-\dim({\mathcal {R}}(A^{2}))$ . Conclude

1. ${\mathcal {N}}(A)\subset {\mathcal {R}}(A)$  iff $\dim({\mathcal {N}}(A))=\dim({\mathcal {R}}(A))-\dim({\mathcal {R}}(A^{2}))$ ;
2. ${\mathcal {R}}(A)\subseteq {\mathcal {N}}(A)$  iff $A^{2}=0$ ;
3. ${\mathcal {R}}(A)={\mathcal {N}}(A)$  iff $A^{2}=0$  and $\dim({\mathcal {N}}(A))=\dim({\mathcal {R}}(A))$  ;
4. $\dim({\mathcal {R}}(A)\cap {\mathcal {N}}(A))=0$  iff $\dim({\mathcal {R}}(A))=\dim({\mathcal {R}}(A^{2}))$  ;
5. (Requires the Direct Sum subsection, which is optional.) $V={\mathcal {R}}(A)\oplus {\mathcal {N}}(A)$  iff $\dim({\mathcal {R}}(A))=\dim({\mathcal {R}}(A^{2}))$ .
(Ackerson 1955)

This is how the answer was given in the cited source.

Let $\langle {\vec {z}}_{1},\dots ,{\vec {z}}_{k}\rangle$  be a basis for ${\mathcal {R}}(A)\cap {\mathcal {N}}(A)$  ($k$  might be $0$ ). Let ${\vec {x}}_{1},\dots ,{\vec {x}}_{k}\in V$  be such that $A{\vec {x}}_{i}={\vec {z}}_{i}$ . Note $\{A{\vec {x}}_{1},\dots ,A{\vec {x}}_{k}\}$  is linearly independent, and extend to a basis for ${\mathcal {R}}(A)$ : $A{\vec {x}}_{1},\ldots ,A{\vec {x}}_{k},A{\vec {x}}_{k+1},\dots ,A{\vec {x}}_{r_{1}}$  where $r_{1}=\dim({\mathcal {R}}(A))$ .

Now take ${\vec {x}}\in V$ . Write

$A{\vec {x}}=a_{1}(A{\vec {x}}_{1})+\dots +a_{r_{1}}(A{\vec {x}}_{r_{1}})$

and so

$A^{2}{\vec {x}}=a_{1}(A^{2}{\vec {x}}_{1})+\dots +a_{r_{1}}(A^{2}{\vec {x}}_{r_{1}}).$

But $A{\vec {x}}_{1},\dots ,A{\vec {x}}_{k}\in {\mathcal {N}}(A)$ , so $A^{2}{\vec {x}}_{1}={\vec {0}},\dots ,A^{2}{\vec {x}}_{k}={\vec {0}}$  and we now know

$A^{2}{\vec {x}}_{k+1},\dots ,A^{2}{\vec {x}}_{r_{1}}$

spans ${\mathcal {R}}(A^{2})$ .

To see $\{A^{2}{\vec {x}}_{k+1},\dots ,A^{2}{\vec {x}}_{r_{1}}\}$  is linearly independent, write

${\begin{array}{rl}b_{k+1}A^{2}{\vec {x}}_{k+1}+\dots +b_{r_{1}}A^{2}{\vec {x}}_{r_{1}}&={\vec {0}}\\A[b_{k+1}A{\vec {x}}_{k+1}+\dots +b_{r_{1}}A{\vec {x}}_{r_{1}}]&={\vec {0}}\end{array}}$

and, since $b_{k+1}A{\vec {x}}_{k+1}+\dots +b_{r_{1}}A{\vec {x}}_{r_{1}}\in {\mathcal {N}}(A)$  we get a contradiction unless it is ${\vec {0}}$  (clearly it is in ${\mathcal {R}}(A)$ , but $A{\vec {x}}_{1},\ldots ,A{\vec {x}}_{k}$  is a basis for ${\mathcal {R}}(A)\cap {\mathcal {N}}(A)$ ).

Hence $\dim({\mathcal {R}}(A^{2}))=r_{1}-k=\dim({\mathcal {R}}(A))-\dim({\mathcal {R}}(A)\cap {\mathcal {N}}(A))$ .