# Linear Algebra/Sums and Scalar Products/Solutions

## Solutions

This exercise is recommended for all readers.
Problem 1

Perform the indicated operations, if defined.

1. ${\displaystyle {\begin{pmatrix}5&-1&2\\6&1&1\end{pmatrix}}+{\begin{pmatrix}2&1&4\\3&0&5\end{pmatrix}}}$
2. ${\displaystyle 6\cdot {\begin{pmatrix}2&-1&-1\\1&2&3\end{pmatrix}}}$
3. ${\displaystyle {\begin{pmatrix}2&1\\0&3\end{pmatrix}}+{\begin{pmatrix}2&1\\0&3\end{pmatrix}}}$
4. ${\displaystyle 4{\begin{pmatrix}1&2\\3&-1\end{pmatrix}}+5{\begin{pmatrix}-1&4\\-2&1\end{pmatrix}}}$
5. ${\displaystyle 3{\begin{pmatrix}2&1\\3&0\end{pmatrix}}+2{\begin{pmatrix}1&1&4\\3&0&5\end{pmatrix}}}$
1. ${\displaystyle {\begin{pmatrix}7&0&6\\9&1&6\end{pmatrix}}}$
2. ${\displaystyle {\begin{pmatrix}12&-6&-6\\6&12&18\end{pmatrix}}}$
3. ${\displaystyle {\begin{pmatrix}4&2\\0&6\end{pmatrix}}}$
4. ${\displaystyle {\begin{pmatrix}-1&28\\2&1\end{pmatrix}}}$
5. Not defined.
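
These answers can be spot-checked in plain Python. The helpers `mat_add` and `scalar_mul` below are my own names, implementing the entry-by-entry definition; they are not part of the text.

```python
# Entry-by-entry matrix sum and scalar multiple, per the definition.
def mat_add(G, H):
    # A sum is defined only when the shapes match.
    if len(G) != len(H) or len(G[0]) != len(H[0]):
        raise ValueError("sum not defined: shapes differ")
    return [[g + h for g, h in zip(gr, hr)] for gr, hr in zip(G, H)]

def scalar_mul(r, H):
    return [[r * h for h in row] for row in H]

# Parts 1-4 of the problem, checked against the answers above.
assert mat_add([[5, -1, 2], [6, 1, 1]],
               [[2, 1, 4], [3, 0, 5]]) == [[7, 0, 6], [9, 1, 6]]
assert scalar_mul(6, [[2, -1, -1], [1, 2, 3]]) == [[12, -6, -6], [6, 12, 18]]
assert mat_add([[2, 1], [0, 3]], [[2, 1], [0, 3]]) == [[4, 2], [0, 6]]
assert mat_add(scalar_mul(4, [[1, 2], [3, -1]]),
               scalar_mul(5, [[-1, 4], [-2, 1]])) == [[-1, 28], [2, 1]]

# Part 5: a 2x2 plus a 2x3 is not defined.
try:
    mat_add([[2, 1], [3, 0]], [[1, 1, 4], [3, 0, 5]])
    assert False
except ValueError:
    pass
```
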
Problem 2

Prove Theorem 1.5.

1. Prove that matrix addition represents addition of linear maps.
2. Prove that matrix scalar multiplication represents scalar multiplication of linear maps.

Represent the domain vector ${\displaystyle {\vec {v}}\in V}$  and the maps ${\displaystyle g,h:V\to W}$  with respect to bases ${\displaystyle B,D}$  in the usual way.

1. The representation of ${\displaystyle (g+h)\,({\vec {v}})=g({\vec {v}})+h({\vec {v}})}$
${\displaystyle {\bigl (}(g_{1,1}v_{1}+\dots +g_{1,n}v_{n}){\vec {\delta }}_{1}+\cdots +(g_{m,1}v_{1}+\dots +g_{m,n}v_{n}){\vec {\delta }}_{m}{\bigr )}}$
${\displaystyle +{\bigl (}(h_{1,1}v_{1}+\cdots +h_{1,n}v_{n}){\vec {\delta }}_{1}+\cdots +(h_{m,1}v_{1}+\dots +h_{m,n}v_{n}){\vec {\delta }}_{m}{\bigr )}}$
regroups
${\displaystyle =((g_{1,1}+h_{1,1})v_{1}+\dots +(g_{1,n}+h_{1,n})v_{n})\cdot {\vec {\delta }}_{1}+\cdots +((g_{m,1}+h_{m,1})v_{1}+\dots +(g_{m,n}+h_{m,n})v_{n})\cdot {\vec {\delta }}_{m}}$
to the entry-by-entry sum of the representation of ${\displaystyle g({\vec {v}})}$  and the representation of ${\displaystyle h({\vec {v}})}$ .
2. The representation of ${\displaystyle (r\cdot h)\,({\vec {v}})=r\cdot {\bigl (}h({\vec {v}}){\bigr )}}$
${\displaystyle r\cdot {\bigl (}(h_{1,1}v_{1}+h_{1,2}v_{2}+\dots +h_{1,n}v_{n}){\vec {\delta }}_{1}+\dots +(h_{m,1}v_{1}+h_{m,2}v_{2}+\dots +h_{m,n}v_{n}){\vec {\delta }}_{m}{\bigr )}}$
${\displaystyle =(rh_{1,1}v_{1}+\dots +rh_{1,n}v_{n})\cdot {\vec {\delta }}_{1}+\dots +(rh_{m,1}v_{1}+\dots +rh_{m,n}v_{n})\cdot {\vec {\delta }}_{m}}$
is the entry-by-entry multiple of ${\displaystyle r}$  and the representation of ${\displaystyle h}$ .
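
The two regroupings above can be checked numerically: applying ${\displaystyle G+H}$ to a coordinate vector gives the same result as applying ${\displaystyle G}$ and ${\displaystyle H}$ separately and adding, and likewise for a scalar multiple. (A spot check on one example, not a proof; `matvec` is my own helper.)

```python
# Matrix-vector action: row i of the result is sum_j M[i][j] * v[j].
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

G = [[1, 2], [3, 4]]
H = [[0, 1], [1, 0]]
v = [5, -2]

# (G + H) v  ==  G v + H v, matching part 1.
GH = [[g + h for g, h in zip(gr, hr)] for gr, hr in zip(G, H)]
assert matvec(GH, v) == [a + b for a, b in zip(matvec(G, v), matvec(H, v))]

# (r * H) v  ==  r * (H v), matching part 2.
rH = [[3 * h for h in row] for row in H]
assert matvec(rH, v) == [3 * x for x in matvec(H, v)]
```
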
This exercise is recommended for all readers.
Problem 3

Prove each, where the operations are defined, where ${\displaystyle G}$ , ${\displaystyle H}$ , and ${\displaystyle J}$  are matrices, where ${\displaystyle Z}$  is the zero matrix, and where ${\displaystyle r}$  and ${\displaystyle s}$  are scalars.

1. Matrix addition is commutative ${\displaystyle G+H=H+G}$ .
2. Matrix addition is associative ${\displaystyle G+(H+J)=(G+H)+J}$ .
3. The zero matrix is an additive identity ${\displaystyle G+Z=G}$ .
4. ${\displaystyle 0\cdot G=Z}$
5. ${\displaystyle (r+s)G=rG+sG}$
6. Matrices have an additive inverse ${\displaystyle G+(-1)\cdot G=Z}$ .
7. ${\displaystyle r(G+H)=rG+rH}$
8. ${\displaystyle (rs)G=r(sG)}$

First, each of these properties is easy to check in an entry-by-entry way. For example, writing

${\displaystyle G={\begin{pmatrix}g_{1,1}&\ldots &g_{1,n}\\\vdots &&\vdots \\g_{m,1}&\ldots &g_{m,n}\end{pmatrix}}\qquad H={\begin{pmatrix}h_{1,1}&\ldots &h_{1,n}\\\vdots &&\vdots \\h_{m,1}&\ldots &h_{m,n}\end{pmatrix}}}$

then, by definition we have

${\displaystyle G+H={\begin{pmatrix}g_{1,1}+h_{1,1}&\ldots &g_{1,n}+h_{1,n}\\\vdots &&\vdots \\g_{m,1}+h_{m,1}&\ldots &g_{m,n}+h_{m,n}\end{pmatrix}}\qquad H+G={\begin{pmatrix}h_{1,1}+g_{1,1}&\ldots &h_{1,n}+g_{1,n}\\\vdots &&\vdots \\h_{m,1}+g_{m,1}&\ldots &h_{m,n}+g_{m,n}\end{pmatrix}}}$

and the two are equal since their entries are equal ${\displaystyle g_{i,j}+h_{i,j}=h_{i,j}+g_{i,j}}$ . That is, each of these is easy to check by using Definition 1.3 alone.

However, each property is also easy to understand in terms of the represented maps, by applying Theorem 1.5 as well as the definition.

1. The two maps ${\displaystyle g+h}$  and ${\displaystyle h+g}$  are equal because ${\displaystyle g({\vec {v}})+h({\vec {v}})=h({\vec {v}})+g({\vec {v}})}$ , as addition is commutative in any vector space. Because the maps are the same, they must have the same representative.
2. As with the prior answer, except that here we apply that vector space addition is associative.
3. As before, except that here we note that ${\displaystyle g({\vec {v}})+z({\vec {v}})=g({\vec {v}})+{\vec {0}}=g({\vec {v}})}$ .
4. Apply that ${\displaystyle 0\cdot g({\vec {v}})={\vec {0}}=z({\vec {v}})}$ .
5. Apply that ${\displaystyle (r+s)\cdot g({\vec {v}})=r\cdot g({\vec {v}})+s\cdot g({\vec {v}})}$ .
6. Apply the prior two items with ${\displaystyle r=1}$  and ${\displaystyle s=-1}$ .
7. Apply that ${\displaystyle r\cdot (g({\vec {v}})+h({\vec {v}}))=r\cdot g({\vec {v}})+r\cdot h({\vec {v}})}$ .
8. Apply that ${\displaystyle (rs)\cdot g({\vec {v}})=r\cdot (s\cdot g({\vec {v}}))}$ .
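
All eight laws can also be spot-checked entry by entry on a sample pair of matrices (a sanity check on one example, not a substitute for the entry argument above; `add` and `smul` are my own helpers).

```python
def add(G, H):
    return [[g + h for g, h in zip(gr, hr)] for gr, hr in zip(G, H)]

def smul(r, G):
    return [[r * g for g in row] for row in G]

G = [[1, 2], [3, 4]]
H = [[5, 6], [7, 8]]
J = [[0, -1], [2, 9]]
Z = [[0, 0], [0, 0]]
r, s = 3, -2

assert add(G, H) == add(H, G)                            # 1. commutativity
assert add(G, add(H, J)) == add(add(G, H), J)            # 2. associativity
assert add(G, Z) == G                                    # 3. additive identity
assert smul(0, G) == Z                                   # 4. 0*G = Z
assert smul(r + s, G) == add(smul(r, G), smul(s, G))     # 5. (r+s)G = rG+sG
assert add(G, smul(-1, G)) == Z                          # 6. additive inverse
assert smul(r, add(G, H)) == add(smul(r, G), smul(r, H)) # 7. r(G+H) = rG+rH
assert smul(r * s, G) == smul(r, smul(s, G))             # 8. (rs)G = r(sG)
```
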
Problem 4

Fix domain and codomain spaces. In general, one matrix can represent many different maps with respect to different bases. However, prove that a zero matrix represents only a zero map. Are there other such matrices?

For any ${\displaystyle V,W}$  with bases ${\displaystyle B,D}$ , the (appropriately-sized) zero matrix represents this map.

${\displaystyle {\vec {\beta }}_{1}\mapsto 0\cdot {\vec {\delta }}_{1}+\dots +0\cdot {\vec {\delta }}_{m}\quad \cdots \quad {\vec {\beta }}_{n}\mapsto 0\cdot {\vec {\delta }}_{1}+\dots +0\cdot {\vec {\delta }}_{m}}$

This is the zero map.

There are no other matrices that represent only one map. For, suppose that ${\displaystyle H}$  is not the zero matrix. Then it has a nonzero entry; assume that ${\displaystyle h_{i,j}\neq 0}$ . With respect to bases ${\displaystyle B,D}$ , it represents ${\displaystyle h_{1}:V\to W}$  sending

${\displaystyle {\vec {\beta }}_{j}\mapsto h_{1,j}{\vec {\delta }}_{1}+\dots +h_{i,j}{\vec {\delta }}_{i}+\dots +h_{m,j}{\vec {\delta }}_{m}}$

and with respect to ${\displaystyle B,2\cdot D}$  it also represents ${\displaystyle h_{2}:V\to W}$  sending

${\displaystyle {\vec {\beta }}_{j}\mapsto h_{1,j}\cdot (2{\vec {\delta }}_{1})+\dots +h_{i,j}\cdot (2{\vec {\delta }}_{i})+\dots +h_{m,j}\cdot (2{\vec {\delta }}_{m})}$

(the notation ${\displaystyle 2\cdot D}$  means to double all of the members of ${\displaystyle D}$ ). These maps are unequal: since ${\displaystyle h_{i,j}\neq 0}$ , the image ${\displaystyle h_{1}({\vec {\beta }}_{j})}$  is nonzero, and ${\displaystyle h_{2}({\vec {\beta }}_{j})=2\cdot h_{1}({\vec {\beta }}_{j})}$  differs from it.
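
A small numerical illustration of that last point, with ${\displaystyle V=W=\mathbb {R} ^{2}}$, the standard domain basis, and basis vectors given in standard coordinates (the setup and the helper `apply` are my own, chosen for concreteness):

```python
# Image of the vector with B-coordinates v_coords: multiply by H to get
# D-coordinates, then expand in the given codomain basis.
def apply(H, basis_D, v_coords):
    m, n = len(H), len(H[0])
    coords = [sum(H[i][j] * v_coords[j] for j in range(n)) for i in range(m)]
    out = [0.0] * len(basis_D[0])
    for c, delta in zip(coords, basis_D):
        out = [o + c * d for o, d in zip(out, delta)]
    return out

H = [[1, 0], [0, 0]]          # a nonzero matrix: h_{1,1} != 0
D = [[1, 0], [0, 1]]          # standard codomain basis
D2 = [[2, 0], [0, 2]]         # every member of D doubled

# The same matrix, read against the two bases, gives two different maps.
assert apply(H, D, [1, 0]) == [1.0, 0.0]
assert apply(H, D2, [1, 0]) == [2.0, 0.0]
```
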

This exercise is recommended for all readers.
Problem 5

Let ${\displaystyle V}$  and ${\displaystyle W}$  be vector spaces of dimensions ${\displaystyle n}$  and ${\displaystyle m}$ . Show that the space ${\displaystyle \mathop {\mathcal {L}} (V,W)}$  of linear maps from ${\displaystyle V}$  to ${\displaystyle W}$  is isomorphic to ${\displaystyle {\mathcal {M}}_{m\!\times \!n}}$ .

Fix bases ${\displaystyle B}$  and ${\displaystyle D}$  for ${\displaystyle V}$  and ${\displaystyle W}$ , and consider ${\displaystyle {\mbox{Rep}}_{B,D}:\mathop {\mathcal {L}} (V,W)\to {\mathcal {M}}_{m\!\times \!n}}$  associating each linear map with the matrix representing that map ${\displaystyle h\mapsto {\rm {Rep}}_{B,D}(h)}$ . From the prior section we know that (under fixed bases) the matrices correspond to linear maps, so the representation map is one-to-one and onto. That it preserves linear operations is Theorem 1.5.

This exercise is recommended for all readers.
Problem 6

Show that it follows from the prior questions that for any six transformations ${\displaystyle t_{1},\dots ,t_{6}:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}$  there are scalars ${\displaystyle c_{1},\dots ,c_{6}\in \mathbb {R} }$  such that ${\displaystyle c_{1}t_{1}+\dots +c_{6}t_{6}}$  is the zero map. (Hint: this is a bit of a misleading question.)

Fix bases and represent the transformations with ${\displaystyle 2\!\times \!2}$  matrices. The space of matrices ${\displaystyle {\mathcal {M}}_{2\!\times \!2}}$  has dimension four, and hence the above six-element set is linearly dependent. By the prior exercise that extends to a dependence of maps. (The misleading part is only that there are six transformations, not five, so that we have more than we need to give the existence of the dependence.)
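
The dependence can be exhibited concretely: flatten six ${\displaystyle 2\!\times \!2}$  matrices into column vectors in ${\displaystyle \mathbb {R} ^{4}}$  and solve the homogeneous system. The six sample matrices and the elimination routine below are my own illustration, not part of the text.

```python
# Find a nonzero x with A x = 0, for A an m x n matrix with n > m,
# by Gauss-Jordan reduction (Fractions keep the arithmetic exact).
from fractions import Fraction

def null_vector(A):
    m, n = len(A), len(A[0])
    A = [[Fraction(a) for a in row] for row in A]
    pivots = {}  # pivot column -> its row
    r = 0
    for c in range(n):
        if r == m:
            break
        pr = next((i for i in range(r, m) if A[i][c] != 0), None)
        if pr is None:
            continue
        A[r], A[pr] = A[pr], A[r]
        A[r] = [a / A[r][c] for a in A[r]]
        for i in range(m):
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots[c] = r
        r += 1
    # Set one free variable to 1; read the pivot variables off the RREF.
    free = next(c for c in range(n) if c not in pivots)
    x = [Fraction(0)] * n
    x[free] = Fraction(1)
    for c, row in pivots.items():
        x[c] = -A[row][free]
    return x

# Six sample transformations of R^2, each flattened to a 4-vector.
mats = [[1, 2, 3, 4], [0, 1, 0, 1], [5, 0, 0, 5],
        [1, 1, 1, 1], [2, 0, 1, 0], [0, 0, 1, 7]]
A = [[mats[j][i] for j in range(6)] for i in range(4)]  # 4 x 6

c = null_vector(A)
assert any(ci != 0 for ci in c)   # a nontrivial dependence exists
assert all(sum(c[j] * A[i][j] for j in range(6)) == 0 for i in range(4))
```
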

Problem 7

The trace of a square matrix is the sum of the entries on the main diagonal (the ${\displaystyle 1,1}$  entry plus the ${\displaystyle 2,2}$  entry, etc.; we will see the significance of the trace in Chapter Five). Show that ${\displaystyle {\mbox{trace}}(H+G)={\mbox{trace}}(H)+{\mbox{trace}}(G)}$ . Is there a similar result for scalar multiplication?

That the trace of a sum is the sum of the traces holds because both ${\displaystyle {\text{trace}}(H+G)}$  and ${\displaystyle {\text{trace}}(H)+{\text{trace}}(G)}$  are the sum of ${\displaystyle h_{1,1}+g_{1,1}}$  with ${\displaystyle h_{2,2}+g_{2,2}}$ , etc. For scalar multiplication we have ${\displaystyle {\text{trace}}(r\cdot H)=r\cdot {\text{trace}}(H)}$ , since both sides equal ${\displaystyle rh_{1,1}+\dots +rh_{n,n}}$ . Thus the trace map is a homomorphism from ${\displaystyle {\mathcal {M}}_{n\!\times \!n}}$  to ${\displaystyle \mathbb {R} }$ .
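
A quick spot check of both trace identities on one sample pair (`trace` here is my own three-line helper):

```python
# Trace: sum of the main-diagonal entries.
def trace(M):
    return sum(M[i][i] for i in range(len(M)))

H = [[1, 2], [3, 4]]
G = [[5, -1], [0, 2]]

HG = [[h + g for h, g in zip(hr, gr)] for hr, gr in zip(H, G)]
assert trace(HG) == trace(H) + trace(G)   # additivity

rH = [[3 * h for h in row] for row in H]
assert trace(rH) == 3 * trace(H)          # homogeneity
```
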

Problem 8

Recall that the transpose of a matrix ${\displaystyle M}$  is another matrix, whose ${\displaystyle i,j}$  entry is the ${\displaystyle j,i}$  entry of ${\displaystyle M}$ . Verify these identities.

1. ${\displaystyle {{(G+H)}^{\rm {trans}}}={{G}^{\rm {trans}}}+{{H}^{\rm {trans}}}}$
2. ${\displaystyle {{(r\cdot H)}^{\rm {trans}}}=r\cdot {{H}^{\rm {trans}}}}$
1. The ${\displaystyle i,j}$  entry of ${\displaystyle {{(G+H)}^{\rm {trans}}}}$  is ${\displaystyle g_{j,i}+h_{j,i}}$ . That is also the ${\displaystyle i,j}$  entry of ${\displaystyle {{G}^{\rm {trans}}}+{{H}^{\rm {trans}}}}$ .
2. The ${\displaystyle i,j}$  entry of ${\displaystyle {{(r\cdot H)}^{\rm {trans}}}}$  is ${\displaystyle rh_{j,i}}$ , which is also the ${\displaystyle i,j}$  entry of ${\displaystyle r\cdot {{H}^{\rm {trans}}}}$ .
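
Both identities can be spot-checked on a sample pair (`transpose` and `add` below are my own helpers):

```python
# Transpose: the i,j entry of the result is the j,i entry of M.
def transpose(M):
    return [list(col) for col in zip(*M)]

def add(G, H):
    return [[g + h for g, h in zip(gr, hr)] for gr, hr in zip(G, H)]

G = [[1, 2, 3], [4, 5, 6]]
H = [[0, 1, 0], [2, 0, 2]]

# 1. (G + H)^trans == G^trans + H^trans
assert transpose(add(G, H)) == add(transpose(G), transpose(H))

# 2. (r * H)^trans == r * H^trans, with r = 5
assert transpose([[5 * h for h in row] for row in H]) == \
    [[5 * h for h in row] for row in transpose(H)]
```
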
This exercise is recommended for all readers.
Problem 9

A square matrix is symmetric if each ${\displaystyle i,j}$  entry equals the ${\displaystyle j,i}$  entry, that is, if the matrix equals its transpose.

1. Prove that for any ${\displaystyle H}$ , the matrix ${\displaystyle H+{{H}^{\rm {trans}}}}$  is symmetric. Does every symmetric matrix have this form?
2. Prove that the set of ${\displaystyle n\!\times \!n}$  symmetric matrices is a subspace of ${\displaystyle {\mathcal {M}}_{n\!\times \!n}}$ .
1. For ${\displaystyle H+{{H}^{\rm {trans}}}}$ , the ${\displaystyle i,j}$  entry is ${\displaystyle h_{i,j}+h_{j,i}}$  and the ${\displaystyle j,i}$  entry is ${\displaystyle h_{j,i}+h_{i,j}}$ . The two are equal and thus ${\displaystyle H+{{H}^{\rm {trans}}}}$  is symmetric. Every symmetric matrix does have that form: a symmetric ${\displaystyle H}$  equals its own transpose, so ${\displaystyle H=G+{{G}^{\rm {trans}}}}$  where ${\displaystyle G=(1/2)\cdot H}$ .
2. The set of symmetric matrices is nonempty as it contains the zero matrix. Clearly a scalar multiple of a symmetric matrix is symmetric. A sum ${\displaystyle H+G}$  of two symmetric matrices is symmetric because ${\displaystyle h_{i,j}+g_{i,j}=h_{j,i}+g_{j,i}}$  (since ${\displaystyle h_{i,j}=h_{j,i}}$  and ${\displaystyle g_{i,j}=g_{j,i}}$ ). Thus the subset is nonempty and closed under the inherited operations, and so it is a subspace.
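
A numerical illustration of part 1 on sample matrices (my own choice of examples; `transpose` and `add` are helpers, and `Fraction` keeps the halving exact):

```python
from fractions import Fraction

def transpose(M):
    return [list(col) for col in zip(*M)]

def add(G, H):
    return [[g + h for g, h in zip(gr, hr)] for gr, hr in zip(G, H)]

# H + H^trans is symmetric for any H.
H = [[1, 7], [2, 5]]
sym = add(H, transpose(H))
assert sym == transpose(sym)

# A symmetric S recovers itself as (1/2)(S + S^trans).
S = [[Fraction(3), Fraction(1)], [Fraction(1), Fraction(0)]]
half = [[Fraction(1, 2) * e for e in row] for row in add(S, transpose(S))]
assert half == S
```
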
Problem 10

1. How does matrix rank interact with scalar multiplication— can a scalar product of a rank ${\displaystyle n}$  matrix have rank less than ${\displaystyle n}$ ? Greater?
2. How does matrix rank interact with matrix addition— can a sum of rank ${\displaystyle n}$  matrices have rank less than ${\displaystyle n}$ ? Greater?

1. A scalar product of a rank ${\displaystyle n}$  matrix can have rank less than ${\displaystyle n}$ : multiplying by the scalar ${\displaystyle 0}$  gives the zero matrix, of rank zero. It cannot have rank greater than ${\displaystyle n}$ : for ${\displaystyle r\neq 0}$  the rank is unchanged, since multiplying every row by ${\displaystyle r}$  is a sequence of row-rescaling operations, which preserve rank.
2. A sum of rank ${\displaystyle n}$  matrices can have rank less than ${\displaystyle n}$ . For instance, for any matrix ${\displaystyle H}$ , the sum ${\displaystyle H+(-1)\cdot H}$  has rank zero. A sum of rank ${\displaystyle n}$  matrices can have rank greater than ${\displaystyle n}$ . Here are rank one matrices that sum to a rank two matrix.
${\displaystyle {\begin{pmatrix}1&0\\0&0\end{pmatrix}}+{\begin{pmatrix}0&0\\0&1\end{pmatrix}}={\begin{pmatrix}1&0\\0&1\end{pmatrix}}}$
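
The ranks in that ${\displaystyle 2\!\times \!2}$  example can be confirmed via determinants: a nonzero ${\displaystyle 2\!\times \!2}$  matrix has rank one exactly when its determinant is zero. (`det2` is my own helper.)

```python
# Determinant of a 2x2 matrix.
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 0], [0, 0]]
B = [[0, 0], [0, 1]]
C = [[a + b for a, b in zip(ar, br)] for ar, br in zip(A, B)]

assert det2(A) == 0 and det2(B) == 0   # each nonzero summand has rank one
assert det2(C) != 0                    # the sum is invertible: rank two
assert C == [[1, 0], [0, 1]]
```
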