# Linear Algebra/Matrix Multiplication/Solutions


## Solutions

This exercise is recommended for all readers.
Problem 1

Compute, or state "not defined".

1. ${\displaystyle {\begin{pmatrix}3&1\\-4&2\end{pmatrix}}{\begin{pmatrix}0&5\\0&0.5\end{pmatrix}}}$
2. ${\displaystyle {\begin{pmatrix}1&1&-1\\4&0&3\end{pmatrix}}{\begin{pmatrix}2&-1&-1\\3&1&1\\3&1&1\end{pmatrix}}}$
3. ${\displaystyle {\begin{pmatrix}2&-7\\7&4\end{pmatrix}}{\begin{pmatrix}1&0&5\\-1&1&1\\3&8&4\end{pmatrix}}}$
4. ${\displaystyle {\begin{pmatrix}5&2\\3&1\end{pmatrix}}{\begin{pmatrix}-1&2\\3&-5\end{pmatrix}}}$
1. ${\displaystyle {\begin{pmatrix}0&15.5\\0&-19\end{pmatrix}}}$
2. ${\displaystyle {\begin{pmatrix}2&-1&-1\\17&-1&-1\end{pmatrix}}}$
3. Not defined.
4. ${\displaystyle {\begin{pmatrix}1&0\\0&1\end{pmatrix}}}$
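These products can be verified numerically. Here is a minimal sketch in Python (plain nested lists, no libraries; the helper name `matmul` is ours, not from the text). Passing the item 3 factors, a ${\displaystyle 2\!\times \!2}$ and a ${\displaystyle 3\!\times \!3}$, raises the error, matching "not defined".

```python
def matmul(A, B):
    """Row-by-column product; defined only when cols(A) == rows(B)."""
    if len(A[0]) != len(B):
        raise ValueError("not defined")
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

print(matmul([[3, 1], [-4, 2]], [[0, 5], [0, 0.5]]))   # [[0, 15.5], [0, -19.0]]
print(matmul([[1, 1, -1], [4, 0, 3]],
             [[2, -1, -1], [3, 1, 1], [3, 1, 1]]))     # [[2, -1, -1], [17, -1, -1]]
print(matmul([[5, 2], [3, 1]], [[-1, 2], [3, -5]]))    # [[1, 0], [0, 1]]
```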
This exercise is recommended for all readers.
Problem 2

Where

${\displaystyle A={\begin{pmatrix}1&-1\\2&0\\\end{pmatrix}}\quad B={\begin{pmatrix}5&2\\4&4\\\end{pmatrix}}\quad C={\begin{pmatrix}-2&3\\-4&1\\\end{pmatrix}}}$

compute or state "not defined".

1. ${\displaystyle AB}$
2. ${\displaystyle (AB)C}$
3. ${\displaystyle BC}$
4. ${\displaystyle A(BC)}$
1. ${\displaystyle {\begin{pmatrix}1&-2\\10&4\end{pmatrix}}}$
2. ${\displaystyle {\begin{pmatrix}1&-2\\10&4\end{pmatrix}}{\begin{pmatrix}-2&3\\-4&1\end{pmatrix}}={\begin{pmatrix}6&1\\-36&34\end{pmatrix}}}$
3. ${\displaystyle {\begin{pmatrix}-18&17\\-24&16\end{pmatrix}}}$
4. ${\displaystyle {\begin{pmatrix}1&-1\\2&0\end{pmatrix}}{\begin{pmatrix}-18&17\\-24&16\end{pmatrix}}={\begin{pmatrix}6&1\\-36&34\end{pmatrix}}}$
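That ${\displaystyle (AB)C}$ and ${\displaystyle A(BC)}$ come out equal illustrates associativity for these particular matrices; a quick Python check (our own helper, for illustration only):

```python
def matmul(A, B):
    # entry i,j is the dot product of row i of A with column j of B
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, -1], [2, 0]]
B = [[5, 2], [4, 4]]
C = [[-2, 3], [-4, 1]]

AB = matmul(A, B)          # [[1, -2], [10, 4]]
BC = matmul(B, C)          # [[-18, 17], [-24, 16]]
print(matmul(AB, C))       # [[6, 1], [-36, 34]]
print(matmul(A, BC))       # the same matrix
```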
Problem 3

Which products are defined?

1. ${\displaystyle 3\!\times \!2}$  times ${\displaystyle 2\!\times \!3}$
2. ${\displaystyle 2\!\times \!3}$  times ${\displaystyle 3\!\times \!2}$
3. ${\displaystyle 2\!\times \!2}$  times ${\displaystyle 3\!\times \!3}$
4. ${\displaystyle 3\!\times \!3}$  times ${\displaystyle 2\!\times \!2}$
1. Yes.
2. Yes.
3. No.
4. No.
This exercise is recommended for all readers.
Problem 4

Give the size of the product or state "not defined".

1. a ${\displaystyle 2\!\times \!3}$  matrix times a ${\displaystyle 3\!\times \!1}$  matrix
2. a ${\displaystyle 1\!\times \!12}$  matrix times a ${\displaystyle 12\!\times \!1}$  matrix
3. a ${\displaystyle 2\!\times \!3}$  matrix times a ${\displaystyle 2\!\times \!1}$  matrix
4. a ${\displaystyle 2\!\times \!2}$  matrix times a ${\displaystyle 2\!\times \!2}$  matrix
1. ${\displaystyle 2\!\times \!1}$
2. ${\displaystyle 1\!\times \!1}$
3. Not defined.
4. ${\displaystyle 2\!\times \!2}$
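The size rule (an ${\displaystyle m\!\times \!r}$ matrix times an ${\displaystyle r\!\times \!n}$ matrix gives an ${\displaystyle m\!\times \!n}$ matrix) can be sketched in Python; `product_size` is our name for this illustrative helper.

```python
def product_size(m, n):
    """Size of the product of an m[0] x m[1] matrix and an n[0] x n[1] matrix,
    or None when the inner dimensions disagree (product not defined)."""
    return (m[0], n[1]) if m[1] == n[0] else None

print(product_size((2, 3), (3, 1)))    # (2, 1)
print(product_size((1, 12), (12, 1)))  # (1, 1)
print(product_size((2, 3), (2, 1)))    # None
print(product_size((2, 2), (2, 2)))    # (2, 2)
```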
This exercise is recommended for all readers.
Problem 5

Find the system of equations resulting from starting with

${\displaystyle {\begin{array}{*{3}{rc}r}h_{1,1}x_{1}&+&h_{1,2}x_{2}&+&h_{1,3}x_{3}&=&d_{1}\\h_{2,1}x_{1}&+&h_{2,2}x_{2}&+&h_{2,3}x_{3}&=&d_{2}\end{array}}}$

and making this change of variable (i.e., substitution).

${\displaystyle {\begin{array}{*{2}{rc}r}x_{1}&=&g_{1,1}y_{1}&+&g_{1,2}y_{2}\\x_{2}&=&g_{2,1}y_{1}&+&g_{2,2}y_{2}\\x_{3}&=&g_{3,1}y_{1}&+&g_{3,2}y_{2}\end{array}}}$

We have

${\displaystyle {\begin{array}{*{3}{rc}r}h_{1,1}\cdot (g_{1,1}y_{1}+g_{1,2}y_{2})&+&h_{1,2}\cdot (g_{2,1}y_{1}+g_{2,2}y_{2})&+&h_{1,3}\cdot (g_{3,1}y_{1}+g_{3,2}y_{2})&=&d_{1}\\h_{2,1}\cdot (g_{1,1}y_{1}+g_{1,2}y_{2})&+&h_{2,2}\cdot (g_{2,1}y_{1}+g_{2,2}y_{2})&+&h_{2,3}\cdot (g_{3,1}y_{1}+g_{3,2}y_{2})&=&d_{2}\end{array}}}$

which, after expanding and regrouping about the ${\displaystyle y}$ 's yields this.

${\displaystyle {\begin{array}{*{2}{rc}r}(h_{1,1}g_{1,1}+h_{1,2}g_{2,1}+h_{1,3}g_{3,1})y_{1}&+&(h_{1,1}g_{1,2}+h_{1,2}g_{2,2}+h_{1,3}g_{3,2})y_{2}&=&d_{1}\\(h_{2,1}g_{1,1}+h_{2,2}g_{2,1}+h_{2,3}g_{3,1})y_{1}&+&(h_{2,1}g_{1,2}+h_{2,2}g_{2,2}+h_{2,3}g_{3,2})y_{2}&=&d_{2}\end{array}}}$

The starting system, and the system used for the substitutions, can be expressed in matrix language.

${\displaystyle {\begin{pmatrix}h_{1,1}&h_{1,2}&h_{1,3}\\h_{2,1}&h_{2,2}&h_{2,3}\end{pmatrix}}{\begin{pmatrix}x_{1}\\x_{2}\\x_{3}\end{pmatrix}}=H{\begin{pmatrix}x_{1}\\x_{2}\\x_{3}\end{pmatrix}}={\begin{pmatrix}d_{1}\\d_{2}\end{pmatrix}}\qquad {\begin{pmatrix}g_{1,1}&g_{1,2}\\g_{2,1}&g_{2,2}\\g_{3,1}&g_{3,2}\end{pmatrix}}{\begin{pmatrix}y_{1}\\y_{2}\end{pmatrix}}=G{\begin{pmatrix}y_{1}\\y_{2}\end{pmatrix}}={\begin{pmatrix}x_{1}\\x_{2}\\x_{3}\end{pmatrix}}}$

With this, the substitution is ${\displaystyle {\vec {d}}=H{\vec {x}}=H(G{\vec {y}})=(HG){\vec {y}}}$ .
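The identity ${\displaystyle H(G{\vec {y}})=(HG){\vec {y}}}$ can be spot-checked with concrete numbers; the entries of ${\displaystyle H}$, ${\displaystyle G}$, and ${\displaystyle {\vec {y}}}$ below are our own arbitrary choices, not from the exercise.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

H = [[1, 2, 3], [4, 5, 6]]      # 2x3, arbitrary h entries
G = [[1, 0], [2, 1], [0, 3]]    # 3x2, arbitrary g entries
y = [[7], [8]]                  # a column of y values

x = matmul(G, y)                # the substitution x = Gy
print(matmul(H, x))             # H(Gy): [[123], [282]]
print(matmul(matmul(H, G), y))  # (HG)y: the same column
```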

Problem 6

As Definition 2.3 points out, the matrix product operation generalizes the dot product. Is the dot product of a ${\displaystyle 1\!\times \!n}$  row vector and a ${\displaystyle n\!\times \!1}$  column vector the same as their matrix-multiplicative product?

Technically, no. The dot product operation yields a scalar while the matrix product yields a ${\displaystyle 1\!\times \!1}$  matrix. However, we usually will ignore the distinction.

This exercise is recommended for all readers.
Problem 7

Represent the derivative map on ${\displaystyle {\mathcal {P}}_{n}}$  with respect to ${\displaystyle B,B}$  where ${\displaystyle B}$  is the natural basis ${\displaystyle \langle 1,x,\ldots ,x^{n}\rangle }$ . Show that the product of this matrix with itself is defined; what map does it represent?

The action of ${\displaystyle d/dx}$  on ${\displaystyle B}$  is ${\displaystyle 1\mapsto 0}$ , ${\displaystyle x\mapsto 1}$ , ${\displaystyle x^{2}\mapsto 2x}$ , ... and so this is its ${\displaystyle (n+1)\!\times \!(n+1)}$  matrix representation.

${\displaystyle {\rm {Rep}}_{B,B}({\frac {d}{dx}})={\begin{pmatrix}0&1&0&&0\\0&0&2&&0\\&&&\ddots \\0&0&0&&n\\0&0&0&&0\end{pmatrix}}}$

The product of this matrix with itself is defined because the matrix is square.

${\displaystyle {\begin{pmatrix}0&1&0&&0\\0&0&2&&0\\&&&\ddots \\0&0&0&&n\\0&0&0&&0\end{pmatrix}}^{2}={\begin{pmatrix}0&0&2&0&&0\\0&0&0&6&&0\\&&&&\ddots &\\0&0&0&&&n(n-1)\\0&0&0&&&0\\0&0&0&&&0\end{pmatrix}}}$

The map so represented is the composition

${\displaystyle p\;{\stackrel {\frac {d}{dx}}{\longmapsto }}\;{\frac {d\,p}{dx}}\;{\stackrel {\frac {d}{dx}}{\longmapsto }}\;{\frac {d^{2}\,p}{dx^{2}}}}$

which is the second derivative operation.
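This can be checked by building the derivative matrix and squaring it, here for ${\displaystyle n=4}$ (a sketch in Python; the function name is ours).

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def deriv_matrix(n):
    """Representation of d/dx on P_n with respect to the basis <1, x, ..., x^n>."""
    D = [[0] * (n + 1) for _ in range(n + 1)]
    for j in range(1, n + 1):
        D[j - 1][j] = j        # column j records d/dx(x^j) = j*x^(j-1)
    return D

D = deriv_matrix(4)
D2 = matmul(D, D)
# the nonzero entries of D^2 are j*(j-1), from x^j -> j*(j-1)*x^(j-2)
print(D2)
```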

Problem 8

Show that composition of linear transformations on ${\displaystyle \mathbb {R} ^{1}}$  is commutative. Is this true for any one-dimensional space?

It is true for all one-dimensional spaces. Let ${\displaystyle f}$  and ${\displaystyle g}$  be transformations of a one-dimensional space. We must show that ${\displaystyle g\circ f\,({\vec {v}})=f\circ g\,({\vec {v}})}$  for all vectors. Fix a basis ${\displaystyle B}$  for the space and then the transformations are represented by ${\displaystyle 1\!\times \!1}$  matrices.

${\displaystyle F={\rm {Rep}}_{B,B}(f)={\begin{pmatrix}f_{1,1}\end{pmatrix}}\qquad G={\rm {Rep}}_{B,B}(g)={\begin{pmatrix}g_{1,1}\end{pmatrix}}}$

Therefore, the compositions can be represented as ${\displaystyle GF}$  and ${\displaystyle FG}$ .

${\displaystyle GF={\rm {Rep}}_{B,B}(g\circ f)={\begin{pmatrix}g_{1,1}f_{1,1}\end{pmatrix}}\qquad FG={\rm {Rep}}_{B,B}(f\circ g)={\begin{pmatrix}f_{1,1}g_{1,1}\end{pmatrix}}}$

These two matrices are equal and so the compositions have the same effect on each vector in the space.

Problem 9

Why is matrix multiplication not defined as entry-wise multiplication? That would be easier, and commutative too.

It would not represent linear map composition; Theorem 2.6 would fail.

This exercise is recommended for all readers.
Problem 10
1. Prove that ${\displaystyle H^{p}H^{q}=H^{p+q}}$  and ${\displaystyle (H^{p})^{q}=H^{pq}}$  for positive integers ${\displaystyle p,q}$ .
2. Prove that ${\displaystyle (rH)^{p}=r^{p}\cdot H^{p}}$  for any positive integer ${\displaystyle p}$  and scalar ${\displaystyle r\in \mathbb {R} }$ .

Each follows easily from the associated map fact. For instance, ${\displaystyle p}$  applications of the transformation ${\displaystyle h}$ , following ${\displaystyle q}$  applications, is simply ${\displaystyle p+q}$  applications.

This exercise is recommended for all readers.
Problem 11
1. How does matrix multiplication interact with scalar multiplication: is ${\displaystyle r(GH)=(rG)H}$ ? Is ${\displaystyle G(rH)=r(GH)}$ ?
2. How does matrix multiplication interact with linear combinations: is ${\displaystyle F(rG+sH)=r(FG)+s(FH)}$ ? Is ${\displaystyle (rF+sG)H=rFH+sGH}$ ?

Although these can be done by going through the indices, they are best understood in terms of the represented maps. That is, fix spaces and bases so that the matrices represent linear maps ${\displaystyle f,g,h}$ .

1. Yes; we have both ${\displaystyle r\cdot (g\circ h)\,({\vec {v}})=r\cdot g(\,h({\vec {v}})\,)=(r\cdot g)\circ h\,({\vec {v}})}$  and ${\displaystyle g\circ (r\cdot h)\,({\vec {v}})=g(\,r\cdot h({\vec {v}})\,)=r\cdot g(h({\vec {v}}))=r\cdot (g\circ h)\,({\vec {v}})}$  (the second equality holds because of the linearity of ${\displaystyle g}$ ).
2. Both answers are yes. First, ${\displaystyle f\circ (rg+sh)}$  and ${\displaystyle r\cdot (f\circ g)+s\cdot (f\circ h)}$  both send ${\displaystyle {\vec {v}}}$  to ${\displaystyle r\cdot f(g({\vec {v}}))+s\cdot f(h({\vec {v}}))}$ ; the calculation is as in the prior item (using the linearity of ${\displaystyle f}$  for the first one). For the other, ${\displaystyle (rf+sg)\circ h}$  and ${\displaystyle r\cdot (f\circ h)+s\cdot (g\circ h)}$  both send ${\displaystyle {\vec {v}}}$  to ${\displaystyle r\cdot f(h({\vec {v}}))+s\cdot g(h({\vec {v}}))}$ .
Problem 12

We can ask how the matrix product operation interacts with the transpose operation.

1. Show that ${\displaystyle {{(GH)}^{\rm {trans}}}={{H}^{\rm {trans}}}{{G}^{\rm {trans}}}}$ .
2. A square matrix is symmetric if each ${\displaystyle i,j}$  entry equals the ${\displaystyle j,i}$  entry, that is, if the matrix equals its own transpose. Show that the matrices ${\displaystyle H{{H}^{\rm {trans}}}}$  and ${\displaystyle {{H}^{\rm {trans}}}H}$  are symmetric.

We have not seen a map interpretation of the transpose operation, so we will verify these by considering the entries.

1. The ${\displaystyle i,j}$  entry of ${\displaystyle {{(GH)}^{\rm {trans}}}}$  is the ${\displaystyle j,i}$  entry of ${\displaystyle GH}$ , which is the dot product of the ${\displaystyle j}$ -th row of ${\displaystyle G}$  and the ${\displaystyle i}$ -th column of ${\displaystyle H}$ . The ${\displaystyle i,j}$  entry of ${\displaystyle {{H}^{\rm {trans}}}{{G}^{\rm {trans}}}}$  is the dot product of the ${\displaystyle i}$ -th row of ${\displaystyle {{H}^{\rm {trans}}}}$  and the ${\displaystyle j}$ -th column of ${\displaystyle {{G}^{\rm {trans}}}}$ , which is the dot product of the ${\displaystyle i}$ -th column of ${\displaystyle H}$  and the ${\displaystyle j}$ -th row of ${\displaystyle G}$ . Dot product is commutative and so these two are equal.
2. By the prior item each equals its transpose, e.g., ${\displaystyle {{(H{{H}^{\rm {trans}}})}^{\rm {trans}}}={{{H}^{\rm {trans}}}^{\rm {trans}}}{{H}^{\rm {trans}}}=H{{H}^{\rm {trans}}}}$ .
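Both parts can be spot-checked in Python (our helpers; the ${\displaystyle 2\!\times \!3}$ matrices below are arbitrary illustrations).

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(M):
    return [list(row) for row in zip(*M)]

G = [[1, 2, 3], [4, 5, 6]]    # arbitrary 2x3
H = [[1, 0], [0, 2], [3, 1]]  # arbitrary 3x2

# part 1: (GH)^trans equals H^trans G^trans
print(transpose(matmul(G, H)) == matmul(transpose(H), transpose(G)))  # True

# part 2: H H^trans and H^trans H equal their own transposes
HHt = matmul(H, transpose(H))
HtH = matmul(transpose(H), H)
print(HHt == transpose(HHt), HtH == transpose(HtH))  # True True
```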
This exercise is recommended for all readers.
Problem 13

Rotation of vectors in ${\displaystyle \mathbb {R} ^{3}}$  about an axis is a linear map. Show that linear maps do not commute by showing geometrically that rotations do not commute.

Consider ${\displaystyle r_{x},r_{y}:\mathbb {R} ^{3}\to \mathbb {R} ^{3}}$  rotating all vectors ${\displaystyle \pi /2}$  radians counterclockwise about the ${\displaystyle x}$  and ${\displaystyle y}$  axes (counterclockwise in the sense that a person whose head is at ${\displaystyle {\vec {e}}_{1}}$  or ${\displaystyle {\vec {e}}_{2}}$  and whose feet are at the origin sees, when looking toward the origin, the rotation as counterclockwise).

Applying ${\displaystyle r_{x}}$  first and then ${\displaystyle r_{y}}$  is different from applying ${\displaystyle r_{y}}$  first and then ${\displaystyle r_{x}}$ . In particular, ${\displaystyle r_{x}({\vec {e}}_{3})=-{\vec {e}}_{2}}$  so ${\displaystyle r_{y}\circ r_{x}({\vec {e}}_{3})=-{\vec {e}}_{2}}$ , while ${\displaystyle r_{y}({\vec {e}}_{3})={\vec {e}}_{1}}$  so ${\displaystyle r_{x}\circ r_{y}({\vec {e}}_{3})={\vec {e}}_{1}}$ , and hence the maps do not commute.
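With the orientation convention of this solution, the quarter-turn rotations have the standard-basis representations below (a numeric sketch; the matrices are the usual right-hand-rule rotation matrices, which here agree with the stated action on ${\displaystyle {\vec {e}}_{3}}$).

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

Rx = [[1, 0, 0], [0, 0, -1], [0, 1, 0]]   # quarter-turn about the x axis
Ry = [[0, 0, 1], [0, 1, 0], [-1, 0, 0]]   # quarter-turn about the y axis

e3 = [[0], [0], [1]]
print(matmul(matmul(Ry, Rx), e3))  # r_y(r_x(e3)) = -e2: [[0], [-1], [0]]
print(matmul(matmul(Rx, Ry), e3))  # r_x(r_y(e3)) = e1:  [[1], [0], [0]]
```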

Problem 14

In the proof of Theorem 2.12 some maps are used. What are the domains and codomains?

It doesn't matter (as long as the spaces have the appropriate dimensions).

For associativity, suppose that ${\displaystyle F}$  is ${\displaystyle m\!\times \!r}$ , that ${\displaystyle G}$  is ${\displaystyle r\!\times \!n}$ , and that ${\displaystyle H}$  is ${\displaystyle n\!\times \!k}$ . We can take any ${\displaystyle r}$ -dimensional space, any ${\displaystyle m}$ -dimensional space, any ${\displaystyle n}$ -dimensional space, and any ${\displaystyle k}$ -dimensional space; for instance, ${\displaystyle \mathbb {R} ^{r}}$ , ${\displaystyle \mathbb {R} ^{m}}$ , ${\displaystyle \mathbb {R} ^{n}}$ , and ${\displaystyle \mathbb {R} ^{k}}$  will do. We can take any bases ${\displaystyle A}$ , ${\displaystyle B}$ , ${\displaystyle C}$ , and ${\displaystyle D}$  for those spaces. Then, with respect to ${\displaystyle C,D}$  the matrix ${\displaystyle H}$  represents a linear map ${\displaystyle h}$ , with respect to ${\displaystyle B,C}$  the matrix ${\displaystyle G}$  represents a ${\displaystyle g}$ , and with respect to ${\displaystyle A,B}$  the matrix ${\displaystyle F}$  represents an ${\displaystyle f}$ . We can use those maps in the proof.

The second half is done similarly, except that ${\displaystyle G}$  and ${\displaystyle H}$  are added and so we must take them to represent maps with the same domain and codomain.

Problem 15

How does matrix rank interact with matrix multiplication?

1. Can the product of rank ${\displaystyle n}$  matrices have rank less than ${\displaystyle n}$ ? Greater?
2. Show that the rank of the product of two matrices is less than or equal to the minimum of the rank of each factor.
1. The product of rank ${\displaystyle n}$  matrices can have rank less than or equal to ${\displaystyle n}$  but not greater than ${\displaystyle n}$ . To see that the rank can fall, consider the maps ${\displaystyle \pi _{x},\pi _{y}:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}$  projecting onto the axes. Each is rank one but their composition ${\displaystyle \pi _{x}\circ \pi _{y}}$ , which is the zero map, is rank zero. That can be translated over to matrices representing those maps in this way.
${\displaystyle {\rm {Rep}}_{{\mathcal {E}}_{2},{\mathcal {E}}_{2}}(\pi _{x})\cdot {\rm {Rep}}_{{\mathcal {E}}_{2},{\mathcal {E}}_{2}}(\pi _{y})={\begin{pmatrix}1&0\\0&0\end{pmatrix}}{\begin{pmatrix}0&0\\0&1\end{pmatrix}}={\begin{pmatrix}0&0\\0&0\end{pmatrix}}}$
To prove that the product of rank ${\displaystyle n}$  matrices cannot have rank greater than ${\displaystyle n}$ , we can apply the map fact that the image of a linearly dependent set is linearly dependent. Suppose that ${\displaystyle h:V\to W}$  and ${\displaystyle g:W\to X}$  both have rank ${\displaystyle n}$ . Any set of size larger than ${\displaystyle n}$  in the range ${\displaystyle {\mathcal {R}}(g\circ h)}$  is the image under ${\displaystyle g}$  of a set of the same size in the range of ${\displaystyle h}$ . Because the rank of ${\displaystyle h}$  is ${\displaystyle n}$ , that set is linearly dependent, and therefore so is its image. Hence no linearly independent subset of ${\displaystyle {\mathcal {R}}(g\circ h)}$  has more than ${\displaystyle n}$  elements, and so the rank of the composition is at most ${\displaystyle n}$ . (By the way, observe that the rank of ${\displaystyle g}$  was not used. See the next part.)
2. Fix spaces and bases and consider the associated linear maps ${\displaystyle f}$  and ${\displaystyle g}$ . Recall that the dimension of the image of a map (the map's rank) is less than or equal to the dimension of the domain, and consider the arrow diagram.
${\displaystyle {\begin{matrix}V&{\stackrel {f}{\longmapsto }}&{\mathcal {R}}(f)&{\stackrel {g}{\longmapsto }}&{\mathcal {R}}(g\circ f)\end{matrix}}}$
First, the image of ${\displaystyle {\mathcal {R}}(f)}$  must have dimension less than or equal to the dimension of ${\displaystyle {\mathcal {R}}(f)}$ , by the prior sentence. On the other hand, the image of ${\displaystyle {\mathcal {R}}(f)}$  under ${\displaystyle g}$  is a subset of the range ${\displaystyle {\mathcal {R}}(g)}$ , and thus its dimension is less than or equal to the dimension of ${\displaystyle {\mathcal {R}}(g)}$ . Combining those two, the rank of a composition is less than or equal to the minimum of the two ranks. The matrix fact follows immediately.
Problem 16

Is "commutes with" an equivalence relation among ${\displaystyle n\!\times \!n}$  matrices?

The "commutes with" relation is reflexive and symmetric. However, it is not transitive: for instance, with

${\displaystyle G={\begin{pmatrix}1&2\\3&4\end{pmatrix}}\quad H={\begin{pmatrix}1&0\\0&1\end{pmatrix}}\quad J={\begin{pmatrix}5&6\\7&8\end{pmatrix}}}$

${\displaystyle G}$  commutes with ${\displaystyle H}$  and ${\displaystyle H}$  commutes with ${\displaystyle J}$ , but ${\displaystyle G}$  does not commute with ${\displaystyle J}$ .
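The counterexample is easy to confirm numerically (illustrative Python; `matmul` is our helper).

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

G = [[1, 2], [3, 4]]
H = [[1, 0], [0, 1]]   # the identity commutes with every 2x2 matrix
J = [[5, 6], [7, 8]]

print(matmul(G, H) == matmul(H, G))  # True
print(matmul(H, J) == matmul(J, H))  # True
print(matmul(G, J) == matmul(J, G))  # False: transitivity fails
```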

This exercise is recommended for all readers.
Problem 17

(This will be used in the Matrix Inverses exercises.) Here is another property of matrix multiplication that might be puzzling at first sight.

1. Prove that the composition of the projections ${\displaystyle \pi _{x},\pi _{y}:\mathbb {R} ^{3}\to \mathbb {R} ^{3}}$  onto the ${\displaystyle x}$  and ${\displaystyle y}$  axes is the zero map despite that neither one is itself the zero map.
2. Prove that the composition of the derivatives ${\displaystyle d^{2}/dx^{2},\,d^{3}/dx^{3}:{\mathcal {P}}_{4}\to {\mathcal {P}}_{4}}$  is the zero map despite that neither is the zero map.
3. Give a matrix equation representing the first fact.
4. Give a matrix equation representing the second.

When two things multiply to give zero despite that neither is zero, each is said to be a zero divisor.

1. Either of these.
${\displaystyle {\begin{pmatrix}x\\y\\z\end{pmatrix}}{\stackrel {\pi _{x}}{\longmapsto }}{\begin{pmatrix}x\\0\\0\end{pmatrix}}{\stackrel {\pi _{y}}{\longmapsto }}{\begin{pmatrix}0\\0\\0\end{pmatrix}}\qquad {\begin{pmatrix}x\\y\\z\end{pmatrix}}{\stackrel {\pi _{y}}{\longmapsto }}{\begin{pmatrix}0\\y\\0\end{pmatrix}}{\stackrel {\pi _{x}}{\longmapsto }}{\begin{pmatrix}0\\0\\0\end{pmatrix}}}$
2. The composition is the fifth derivative map ${\displaystyle d^{5}/dx^{5}}$  on the space of fourth-degree polynomials; since five differentiations send every member of ${\displaystyle {\mathcal {P}}_{4}}$  to zero, the composition is the zero map.
3. With respect to the natural bases,
${\displaystyle {\rm {Rep}}_{{\mathcal {E}}_{3},{\mathcal {E}}_{3}}(\pi _{x})={\begin{pmatrix}1&0&0\\0&0&0\\0&0&0\end{pmatrix}}\qquad {\rm {Rep}}_{{\mathcal {E}}_{3},{\mathcal {E}}_{3}}(\pi _{y})={\begin{pmatrix}0&0&0\\0&1&0\\0&0&0\end{pmatrix}}}$
and their product (in either order) is the zero matrix.
4. Where ${\displaystyle B=\langle 1,x,x^{2},x^{3},x^{4}\rangle }$ ,
${\displaystyle {\rm {Rep}}_{B,B}({\frac {d^{2}}{dx^{2}}})={\begin{pmatrix}0&0&2&0&0\\0&0&0&6&0\\0&0&0&0&12\\0&0&0&0&0\\0&0&0&0&0\end{pmatrix}}\qquad {\rm {Rep}}_{B,B}({\frac {d^{3}}{dx^{3}}})={\begin{pmatrix}0&0&0&6&0\\0&0&0&0&24\\0&0&0&0&0\\0&0&0&0&0\\0&0&0&0&0\end{pmatrix}}}$
and their product (in either order) is the zero matrix.
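Both matrix equations can be verified directly (a Python sketch with our helper; the matrices are exactly those displayed above).

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

Px = [[1, 0, 0], [0, 0, 0], [0, 0, 0]]   # projection onto the x axis
Py = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]   # projection onto the y axis
Z3 = [[0] * 3 for _ in range(3)]
print(matmul(Px, Py) == Z3 and matmul(Py, Px) == Z3)  # True: zero divisors

D2 = [[0, 0, 2, 0, 0], [0, 0, 0, 6, 0], [0, 0, 0, 0, 12],
      [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]               # d^2/dx^2 on P_4
D3 = [[0, 0, 0, 6, 0], [0, 0, 0, 0, 24], [0, 0, 0, 0, 0],
      [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]               # d^3/dx^3 on P_4
Z5 = [[0] * 5 for _ in range(5)]
print(matmul(D2, D3) == Z5 and matmul(D3, D2) == Z5)  # True
```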
Problem 18

Show that, for square matrices, ${\displaystyle (S+T)(S-T)}$  need not equal ${\displaystyle S^{2}-T^{2}}$ .

Note that ${\displaystyle (S+T)(S-T)=S^{2}-ST+TS-T^{2}}$ , so a reasonable try is to look at matrices that do not commute so that ${\displaystyle -ST}$  and ${\displaystyle TS}$  don't cancel: with

${\displaystyle S={\begin{pmatrix}1&2\\3&4\end{pmatrix}}\quad T={\begin{pmatrix}5&6\\7&8\end{pmatrix}}}$

we have the desired inequality.

${\displaystyle (S+T)(S-T)={\begin{pmatrix}-56&-56\\-88&-88\end{pmatrix}}\qquad S^{2}-T^{2}={\begin{pmatrix}-60&-68\\-76&-84\end{pmatrix}}}$
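The two sides can be computed mechanically (illustrative Python; `matmul`, `madd`, and `msub` are our helper names).

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def msub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

S = [[1, 2], [3, 4]]
T = [[5, 6], [7, 8]]

print(matmul(madd(S, T), msub(S, T)))    # [[-56, -56], [-88, -88]]
print(msub(matmul(S, S), matmul(T, T)))  # [[-60, -68], [-76, -84]]
```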
This exercise is recommended for all readers.
Problem 19

Represent the identity transformation ${\displaystyle {\mbox{id}}:V\to V}$  with respect to ${\displaystyle B,B}$  for any basis ${\displaystyle B}$ . This is the identity matrix ${\displaystyle I}$ . Show that this matrix plays the role in matrix multiplication that the number ${\displaystyle 1}$  plays in real number multiplication: ${\displaystyle HI=IH=H}$  (for all matrices ${\displaystyle H}$  for which the product is defined).

Because the identity map acts on the basis ${\displaystyle B}$  as ${\displaystyle {\vec {\beta }}_{1}\mapsto {\vec {\beta }}_{1}}$ , ..., ${\displaystyle {\vec {\beta }}_{n}\mapsto {\vec {\beta }}_{n}}$ , the representation is this.

${\displaystyle {\begin{pmatrix}1&0&0&&0\\0&1&0&&0\\0&0&1&&0\\&&&\ddots \\0&0&0&&1\end{pmatrix}}}$

The second part of the question is obvious from Theorem 2.6.

Problem 20

In real number algebra, quadratic equations have at most two solutions. That is not so with matrix algebra. Show that the ${\displaystyle 2\!\times \!2}$  matrix equation ${\displaystyle T^{2}=I}$  has more than two solutions, where ${\displaystyle I}$  is the identity matrix (this matrix has ones in its ${\displaystyle 1,1}$  and ${\displaystyle 2,2}$  entries and zeroes elsewhere; see Problem 19).

Here are four solutions.

${\displaystyle T={\begin{pmatrix}\pm 1&0\\0&\pm 1\end{pmatrix}}}$
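A quick check that all four sign choices square to the identity (a Python sketch with our helper):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

I = [[1, 0], [0, 1]]
solutions = [[[a, 0], [0, b]] for a in (1, -1) for b in (1, -1)]
for T in solutions:
    assert matmul(T, T) == I
print(len(solutions))  # 4 distinct solutions of T^2 = I
```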
Problem 21
1. Prove that for any ${\displaystyle 2\!\times \!2}$  matrix ${\displaystyle T}$  there are scalars ${\displaystyle c_{0},\dots ,c_{4}}$  that are not all ${\displaystyle 0}$  such that the combination ${\displaystyle c_{4}T^{4}+c_{3}T^{3}+c_{2}T^{2}+c_{1}T+c_{0}I}$  is the zero matrix (where ${\displaystyle I}$  is the ${\displaystyle 2\!\times \!2}$  identity matrix, with ${\displaystyle 1}$ 's in its ${\displaystyle 1,1}$  and ${\displaystyle 2,2}$  entries and zeroes elsewhere; see Problem 19).
2. Let ${\displaystyle p(x)}$  be a polynomial ${\displaystyle p(x)=c_{n}x^{n}+\dots +c_{1}x+c_{0}}$ . If ${\displaystyle T}$  is a square matrix we define ${\displaystyle p(T)}$  to be the matrix ${\displaystyle c_{n}T^{n}+\dots +c_{1}T+c_{0}I}$  (where ${\displaystyle I}$  is the appropriately-sized identity matrix). Prove that for any square matrix there is a polynomial such that ${\displaystyle p(T)}$  is the zero matrix.
3. The minimal polynomial ${\displaystyle m(x)}$  of a square matrix is the polynomial of least degree, and with leading coefficient ${\displaystyle 1}$ , such that ${\displaystyle m(T)}$  is the zero matrix. Find the minimal polynomial of this matrix.
${\displaystyle {\begin{pmatrix}{\sqrt {3}}/2&-1/2\\1/2&{\sqrt {3}}/2\end{pmatrix}}}$
(This is the representation with respect to ${\displaystyle {\mathcal {E}}_{2},{\mathcal {E}}_{2}}$ , the standard basis, of a rotation through ${\displaystyle \pi /6}$  radians counterclockwise.)
1. The vector space ${\displaystyle {\mathcal {M}}_{2\!\times \!2}}$  has dimension four. The set ${\displaystyle \{T^{4},\dots ,T,I\}}$  has five elements and thus is linearly dependent.
2. Where ${\displaystyle T}$  is ${\displaystyle n\!\times \!n}$ , generalizing the argument from the prior item shows that there is such a polynomial of degree ${\displaystyle n^{2}}$  or less, since ${\displaystyle \{T^{n^{2}},\dots ,T,I\}}$  is an ${\displaystyle (n^{2}+1)}$ -member subset of the ${\displaystyle n^{2}}$ -dimensional space ${\displaystyle {\mathcal {M}}_{n\!\times \!n}}$ .
3. First compute the powers
${\displaystyle T^{2}={\begin{pmatrix}1/2&-{\sqrt {3}}/2\\{\sqrt {3}}/2&1/2\end{pmatrix}}\qquad T^{3}={\begin{pmatrix}0&-1\\1&0\end{pmatrix}}\qquad T^{4}={\begin{pmatrix}-1/2&-{\sqrt {3}}/2\\{\sqrt {3}}/2&-1/2\end{pmatrix}}}$
(observe that rotating by ${\displaystyle \pi /6}$  three times results in a rotation by ${\displaystyle \pi /2}$ , which is indeed what ${\displaystyle T^{3}}$  represents). Then set ${\displaystyle c_{4}T^{4}+c_{3}T^{3}+c_{2}T^{2}+c_{1}T+c_{0}I}$  equal to the zero matrix
${\displaystyle {\begin{pmatrix}-1/2&-{\sqrt {3}}/2\\{\sqrt {3}}/2&-1/2\end{pmatrix}}c_{4}+{\begin{pmatrix}0&-1\\1&0\end{pmatrix}}c_{3}+{\begin{pmatrix}1/2&-{\sqrt {3}}/2\\{\sqrt {3}}/2&1/2\end{pmatrix}}c_{2}+{\begin{pmatrix}{\sqrt {3}}/2&-1/2\\1/2&{\sqrt {3}}/2\end{pmatrix}}c_{1}+{\begin{pmatrix}1&0\\0&1\end{pmatrix}}c_{0}}$
${\displaystyle ={\begin{pmatrix}0&0\\0&0\end{pmatrix}}}$
to get this linear system.
${\displaystyle {\begin{array}{*{5}{rc}r}-(1/2)c_{4}&&&+&(1/2)c_{2}&+&({\sqrt {3}}/2)c_{1}&+&c_{0}&=&0\\-({\sqrt {3}}/2)c_{4}&-&c_{3}&-&({\sqrt {3}}/2)c_{2}&-&(1/2)c_{1}&&&=&0\\({\sqrt {3}}/2)c_{4}&+&c_{3}&+&({\sqrt {3}}/2)c_{2}&+&(1/2)c_{1}&&&=&0\\-(1/2)c_{4}&&&+&(1/2)c_{2}&+&({\sqrt {3}}/2)c_{1}&+&c_{0}&=&0\end{array}}}$
Apply Gaussian reduction.
${\displaystyle {\begin{array}{rcl}&{\xrightarrow[{}]{-\rho _{1}+\rho _{4}}}\;{\xrightarrow[{}]{\rho _{2}+\rho _{3}}}&{\begin{array}{*{5}{rc}r}-(1/2)c_{4}&&&+&(1/2)c_{2}&+&({\sqrt {3}}/2)c_{1}&+&c_{0}&=&0\\-({\sqrt {3}}/2)c_{4}&-&c_{3}&-&({\sqrt {3}}/2)c_{2}&-&(1/2)c_{1}&&&=&0\\&&&&&&&&0&=&0\\&&&&&&&&0&=&0\end{array}}\\&\;{\xrightarrow[{}]{-{\sqrt {3}}\rho _{1}+\rho _{2}}}&{\begin{array}{*{5}{rc}r}-(1/2)c_{4}&&&+&(1/2)c_{2}&+&({\sqrt {3}}/2)c_{1}&+&c_{0}&=&0\\&-&c_{3}&-&{\sqrt {3}}c_{2}&-&2c_{1}&-&{\sqrt {3}}c_{0}&=&0\\&&&&&&&&0&=&0\\&&&&&&&&0&=&0\end{array}}\end{array}}}$
Setting ${\displaystyle c_{4}}$ , ${\displaystyle c_{3}}$ , and ${\displaystyle c_{2}}$  to zero makes ${\displaystyle c_{1}}$  and ${\displaystyle c_{0}}$  also come out to be zero so no degree one or degree zero polynomial will do. Setting ${\displaystyle c_{4}}$  and ${\displaystyle c_{3}}$  to zero (and ${\displaystyle c_{2}}$  to one) gives a linear system
${\displaystyle {\begin{array}{*{3}{rc}r}(1/2)&+&({\sqrt {3}}/2)c_{1}&+&c_{0}&=&0\\-{\sqrt {3}}&-&2c_{1}&-&{\sqrt {3}}c_{0}&=&0\end{array}}}$
that can be solved with ${\displaystyle c_{1}=-{\sqrt {3}}}$  and ${\displaystyle c_{0}=1}$ . Conclusion: the polynomial ${\displaystyle m(x)=x^{2}-{\sqrt {3}}x+1}$  is minimal for the matrix ${\displaystyle T}$ .
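The conclusion can be checked numerically: evaluating ${\displaystyle m(T)=T^{2}-{\sqrt {3}}T+I}$ should give the zero matrix, up to floating-point rounding (an illustrative Python sketch).

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

s = math.sqrt(3) / 2
T = [[s, -0.5], [0.5, s]]   # rotation through pi/6 radians

T2 = matmul(T, T)
# m(T) = T^2 - sqrt(3)*T + I, entry by entry
m_of_T = [[T2[i][j] - math.sqrt(3) * T[i][j] + (1 if i == j else 0)
           for j in range(2)] for i in range(2)]
print(all(abs(entry) < 1e-12 for row in m_of_T for entry in row))  # True
```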
Problem 22

The infinite-dimensional space ${\displaystyle {\mathcal {P}}}$  of all finite-degree polynomials gives a memorable example of the non-commutativity of linear maps. Let ${\displaystyle d/dx:{\mathcal {P}}\to {\mathcal {P}}}$  be the usual derivative and let ${\displaystyle s:{\mathcal {P}}\to {\mathcal {P}}}$  be the shift map.

${\displaystyle a_{0}+a_{1}x+\dots +a_{n}x^{n}\;{\stackrel {s}{\longmapsto }}\;0+a_{0}x+a_{1}x^{2}+\dots +a_{n}x^{n+1}}$

Show that the two maps don't commute ${\displaystyle d/dx\circ s\neq s\circ d/dx}$ ; in fact, not only is ${\displaystyle (d/dx\circ s)-(s\circ d/dx)}$  not the zero map, it is the identity map.

The check is routine:

${\displaystyle a_{0}+a_{1}x+\dots +a_{n}x^{n}{\stackrel {s}{\longmapsto }}a_{0}x+a_{1}x^{2}+\dots +a_{n}x^{n+1}{\stackrel {d/dx}{\longmapsto }}a_{0}+2a_{1}x+\dots +(n+1)a_{n}x^{n}}$

while

${\displaystyle a_{0}+a_{1}x+\dots +a_{n}x^{n}{\stackrel {d/dx}{\longmapsto }}a_{1}+2a_{2}x+\dots +na_{n}x^{n-1}{\stackrel {s}{\longmapsto }}a_{1}x+2a_{2}x^{2}+\dots +na_{n}x^{n}}$

so that under the map ${\displaystyle (d/dx\circ s)-(s\circ d/dx)}$  we have ${\displaystyle a_{0}+a_{1}x+\dots +a_{n}x^{n}\mapsto a_{0}+a_{1}x+\dots +a_{n}x^{n}}$ .
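The commutator identity can be checked on coefficient lists (a Python sketch; `d_dx`, `shift`, and the sample polynomial are our own illustrative choices).

```python
def d_dx(p):
    """Derivative of a0 + a1*x + ... given as the coefficient list [a0, a1, ...]."""
    return [j * p[j] for j in range(1, len(p))]

def shift(p):
    """The shift map s: multiply by x."""
    return [0] + p

p = [3, 1, 4, 1, 5]                   # 3 + x + 4x^2 + x^3 + 5x^4
a = d_dx(shift(p))                    # (d/dx o s)(p)
b = shift(d_dx(p))                    # (s o d/dx)(p)
b = b + [0] * (len(a) - len(b))       # pad with zero leading coefficients
print([x - y for x, y in zip(a, b)])  # the difference is p itself
```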

Problem 23

Recall the notation for the sum of the sequence of numbers ${\displaystyle a_{1},a_{2},\dots ,a_{n}}$ .

${\displaystyle \sum _{i=1}^{n}a_{i}=a_{1}+a_{2}+\dots +a_{n}}$

In this notation, the ${\displaystyle i,j}$  entry of the product of ${\displaystyle G}$  and ${\displaystyle H}$  is this.

${\displaystyle p_{i,j}=\sum _{k=1}^{r}g_{i,k}h_{k,j}}$

Using this notation,

1. reprove that matrix multiplication is associative;
2. reprove Theorem 2.6.
1. Tracing through the remark at the end of the subsection gives that the ${\displaystyle i,j}$  entry of ${\displaystyle (FG)H}$  is this ${\displaystyle \sum _{t=1}^{s}{\bigl (}\sum _{k=1}^{r}f_{i,k}g_{k,t}{\bigr )}h_{t,j}=\sum _{t=1}^{s}\sum _{k=1}^{r}(f_{i,k}g_{k,t})h_{t,j}=\sum _{t=1}^{s}\sum _{k=1}^{r}f_{i,k}(g_{k,t}h_{t,j})}$
${\displaystyle =\sum _{k=1}^{r}\sum _{t=1}^{s}f_{i,k}(g_{k,t}h_{t,j})=\sum _{k=1}^{r}f_{i,k}{\bigl (}\sum _{t=1}^{s}g_{k,t}h_{t,j}{\bigr )}}$
(the first equality comes from using the distributive law to multiply through the ${\displaystyle h}$ 's, the second equality is the associative law for real numbers, the third is the commutative law for reals, and the fourth equality follows on using the distributive law to factor the ${\displaystyle f}$ 's out), which is the ${\displaystyle i,j}$  entry of ${\displaystyle F(GH)}$ .
2. The ${\displaystyle k}$ -th component of ${\displaystyle h({\vec {v}})}$  is
${\displaystyle \sum _{j=1}^{n}h_{k,j}v_{j}}$
and so the ${\displaystyle i}$ -th component of ${\displaystyle g\circ h\,({\vec {v}})}$  is this
${\displaystyle \sum _{k=1}^{r}g_{i,k}{\bigl (}\sum _{j=1}^{n}h_{k,j}v_{j}{\bigr )}=\sum _{k=1}^{r}\sum _{j=1}^{n}g_{i,k}h_{k,j}v_{j}=\sum _{k=1}^{r}\sum _{j=1}^{n}(g_{i,k}h_{k,j})v_{j}}$
${\displaystyle =\sum _{j=1}^{n}\sum _{k=1}^{r}(g_{i,k}h_{k,j})v_{j}=\sum _{j=1}^{n}(\sum _{k=1}^{r}g_{i,k}h_{k,j})\,v_{j}}$
(the first equality holds by using the distributive law to multiply the ${\displaystyle g}$ 's through, the second equality represents the use of associativity of reals, the third follows by commutativity of reals, and the fourth comes from using the distributive law to factor the ${\displaystyle v}$ 's out).