# Linear Algebra/Inverses/Solutions

## Solutions

Problem 1

Supply the intermediate steps in Example 4.10.

Here is one way to proceed.

${\xrightarrow[{}]{\rho _{1}\leftrightarrow \rho _{2}}}\;\left({\begin{array}{ccc|ccc}1&0&1&0&1&0\\0&3&-1&1&0&0\\1&-1&0&0&0&1\end{array}}\right)\;{\xrightarrow[{}]{-\rho _{1}+\rho _{3}}}\;\left({\begin{array}{ccc|ccc}1&0&1&0&1&0\\0&3&-1&1&0&0\\0&-1&-1&0&-1&1\end{array}}\right)$
${\begin{aligned}&{\xrightarrow[{}]{(1/3)\rho _{2}+\rho _{3}}}\;\left({\begin{array}{ccc|ccc}1&0&1&0&1&0\\0&3&-1&1&0&0\\0&0&-4/3&1/3&-1&1\end{array}}\right)\;{\xrightarrow[{-(3/4)\rho _{3}}]{(1/3)\rho _{2}}}\;\left({\begin{array}{ccc|ccc}1&0&1&0&1&0\\0&1&-1/3&1/3&0&0\\0&0&1&-1/4&3/4&-3/4\end{array}}\right)\\&{\xrightarrow[{-\rho _{3}+\rho _{1}}]{(1/3)\rho _{3}+\rho _{2}}}\;\left({\begin{array}{ccc|ccc}1&0&0&1/4&1/4&3/4\\0&1&0&1/4&1/4&-1/4\\0&0&1&-1/4&3/4&-3/4\end{array}}\right)\end{aligned}}$
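As a quick numerical confirmation of the result (this NumPy sketch, with array names of our choosing, is not part of the text's solution):

```python
import numpy as np

# The matrix of Example 4.10 (rows as they appear before the first swap).
A = np.array([[0., 3., -1.],
              [1., 0.,  1.],
              [1., -1., 0.]])

# The inverse read off from the right half of the final augmented matrix.
A_inv = np.array([[ 1/4, 1/4,  3/4],
                  [ 1/4, 1/4, -1/4],
                  [-1/4, 3/4, -3/4]])

# Both products should be the 3x3 identity.
assert np.allclose(A @ A_inv, np.eye(3))
assert np.allclose(A_inv @ A, np.eye(3))
```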
This exercise is recommended for all readers.
Problem 2

Use Corollary 4.12 to decide if each matrix has an inverse.

1. ${\begin{pmatrix}2&1\\-1&1\end{pmatrix}}$
2. ${\begin{pmatrix}0&4\\1&-3\end{pmatrix}}$
3. ${\begin{pmatrix}2&-3\\-4&6\end{pmatrix}}$
1. Yes, it has an inverse: $ad-bc=2\cdot 1-1\cdot (-1)\neq 0$ .
2. Yes.
3. No.
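Corollary 4.12's test is simple enough to mechanize. A minimal sketch (the function name is ours) applied to the three matrices:

```python
def has_inverse_2x2(m):
    """Corollary 4.12 test: a 2x2 matrix is invertible iff ad - bc is nonzero."""
    (a, b), (c, d) = m
    return a * d - b * c != 0

print(has_inverse_2x2([[2, 1], [-1, 1]]))   # True:  2*1 - 1*(-1) = 3
print(has_inverse_2x2([[0, 4], [1, -3]]))   # True:  0*(-3) - 4*1 = -4
print(has_inverse_2x2([[2, -3], [-4, 6]]))  # False: 2*6 - (-3)*(-4) = 0
```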
This exercise is recommended for all readers.
Problem 3

For each invertible matrix in the prior problem, use Corollary 4.12 to find its inverse.

1. $\displaystyle {\frac {1}{2\cdot 1-1\cdot (-1)}}\cdot {\begin{pmatrix}1&-1\\1&2\end{pmatrix}}={\frac {1}{3}}\cdot {\begin{pmatrix}1&-1\\1&2\end{pmatrix}}={\begin{pmatrix}1/3&-1/3\\1/3&2/3\end{pmatrix}}$
2. $\displaystyle {\frac {1}{0\cdot (-3)-4\cdot 1}}\cdot {\begin{pmatrix}-3&-4\\-1&0\end{pmatrix}}={\begin{pmatrix}3/4&1\\1/4&0\end{pmatrix}}$
3. The prior question shows that no inverse exists.
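The full formula of Corollary 4.12 codes just as directly; a sketch (names again ours) that reproduces the two inverses found above:

```python
from fractions import Fraction

def inverse_2x2(m):
    """Corollary 4.12: the inverse of ((a,b),(c,d)) is 1/(ad-bc) times ((d,-b),(-c,a))."""
    (a, b), (c, d) = m
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix has no inverse")
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

print(inverse_2x2([[2, 1], [-1, 1]]))   # entries 1/3, -1/3; 1/3, 2/3
print(inverse_2x2([[0, 4], [1, -3]]))   # entries 3/4, 1; 1/4, 0
```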
This exercise is recommended for all readers.
Problem 4

Find the inverse, if it exists, by using the Gauss-Jordan method. Check the answers for the $2\!\times \!2$  matrices with Corollary 4.12.

1. ${\begin{pmatrix}3&1\\0&2\end{pmatrix}}$
2. ${\begin{pmatrix}2&1/2\\3&1\end{pmatrix}}$
3. ${\begin{pmatrix}2&-4\\-1&2\end{pmatrix}}$
4. ${\begin{pmatrix}1&1&3\\0&2&4\\-1&1&0\end{pmatrix}}$
5. ${\begin{pmatrix}0&1&5\\0&-2&4\\2&3&-2\end{pmatrix}}$
6. ${\begin{pmatrix}2&2&3\\1&-2&-3\\4&-2&-3\end{pmatrix}}$
1. The reduction is routine.
$\left({\begin{array}{cc|cc}3&1&1&0\\0&2&0&1\end{array}}\right)\;{\xrightarrow[{(1/2)\rho _{2}}]{(1/3)\rho _{1}}}\;\left({\begin{array}{cc|cc}1&1/3&1/3&0\\0&1&0&1/2\end{array}}\right)\;{\xrightarrow[{}]{-(1/3)\rho _{2}+\rho _{1}}}\;\left({\begin{array}{cc|cc}1&0&1/3&-1/6\\0&1&0&1/2\end{array}}\right)$
The check with Corollary 4.12 agrees.
${\begin{pmatrix}3&1\\0&2\end{pmatrix}}^{-1}={\frac {1}{3\cdot 2-0\cdot 1}}\cdot {\begin{pmatrix}2&-1\\0&3\end{pmatrix}}={\frac {1}{6}}\cdot {\begin{pmatrix}2&-1\\0&3\end{pmatrix}}$
2. This reduction is easy. $\left({\begin{array}{cc|cc}2&1/2&1&0\\3&1&0&1\end{array}}\right)\;{\xrightarrow[{}]{-(3/2)\rho _{1}+\rho _{2}}}\;\left({\begin{array}{cc|cc}2&1/2&1&0\\0&1/4&-3/2&1\end{array}}\right)$
$\;{\xrightarrow[{4\rho _{2}}]{(1/2)\rho _{1}}}\;\left({\begin{array}{cc|cc}1&1/4&1/2&0\\0&1&-6&4\end{array}}\right)\;{\xrightarrow[{}]{-(1/4)\rho _{2}+\rho _{1}}}\;\left({\begin{array}{cc|cc}1&0&2&-1\\0&1&-6&4\end{array}}\right)$
The check agrees.
${\frac {1}{2\cdot 1-3\cdot (1/2)}}\cdot {\begin{pmatrix}1&-1/2\\-3&2\end{pmatrix}}=2\cdot {\begin{pmatrix}1&-1/2\\-3&2\end{pmatrix}}={\begin{pmatrix}2&-1\\-6&4\end{pmatrix}}$
3. Trying the Gauss-Jordan reduction
$\left({\begin{array}{cc|cc}2&-4&1&0\\-1&2&0&1\end{array}}\right)\;{\xrightarrow[{}]{(1/2)\rho _{1}+\rho _{2}}}\;\left({\begin{array}{cc|cc}2&-4&1&0\\0&0&1/2&1\end{array}}\right)$
shows that the left side won't reduce to the identity, so no inverse exists. The check $ad-bc=2\cdot 2-(-4)\cdot (-1)=0$  agrees.
4. This produces an inverse.
$\left({\begin{array}{ccc|ccc}1&1&3&1&0&0\\0&2&4&0&1&0\\-1&1&0&0&0&1\end{array}}\right)\;{\xrightarrow[{}]{\rho _{1}+\rho _{3}}}\;\left({\begin{array}{ccc|ccc}1&1&3&1&0&0\\0&2&4&0&1&0\\0&2&3&1&0&1\end{array}}\right)\;{\xrightarrow[{}]{-\rho _{2}+\rho _{3}}}\;\left({\begin{array}{ccc|ccc}1&1&3&1&0&0\\0&2&4&0&1&0\\0&0&-1&1&-1&1\end{array}}\right)$
${\begin{aligned}&{\xrightarrow[{-\rho _{3}}]{(1/2)\rho _{2}}}\;\left({\begin{array}{ccc|ccc}1&1&3&1&0&0\\0&1&2&0&1/2&0\\0&0&1&-1&1&-1\end{array}}\right)\;{\xrightarrow[{-3\rho _{3}+\rho _{1}}]{-2\rho _{3}+\rho _{2}}}\;\left({\begin{array}{ccc|ccc}1&1&0&4&-3&3\\0&1&0&2&-3/2&2\\0&0&1&-1&1&-1\end{array}}\right)\\&{\xrightarrow[{}]{-\rho _{2}+\rho _{1}}}\;\left({\begin{array}{ccc|ccc}1&0&0&2&-3/2&1\\0&1&0&2&-3/2&2\\0&0&1&-1&1&-1\end{array}}\right)\end{aligned}}$
5. This is one way to do the reduction.
$\left({\begin{array}{ccc|ccc}0&1&5&1&0&0\\0&-2&4&0&1&0\\2&3&-2&0&0&1\end{array}}\right)\;{\xrightarrow[{}]{\rho _{3}\leftrightarrow \rho _{1}}}\;\left({\begin{array}{ccc|ccc}2&3&-2&0&0&1\\0&-2&4&0&1&0\\0&1&5&1&0&0\end{array}}\right)$
${\begin{aligned}&\;{\xrightarrow[{}]{(1/2)\rho _{2}+\rho _{3}}}\;\left({\begin{array}{ccc|ccc}2&3&-2&0&0&1\\0&-2&4&0&1&0\\0&0&7&1&1/2&0\end{array}}\right){\xrightarrow[{\begin{array}{c}-(1/2)\rho _{2}\\(1/7)\rho _{3}\end{array}}]{(1/2)\rho _{1}}}\;\left({\begin{array}{ccc|ccc}1&3/2&-1&0&0&1/2\\0&1&-2&0&-1/2&0\\0&0&1&1/7&1/14&0\end{array}}\right)\\&\;{\xrightarrow[{\rho _{3}+\rho _{1}}]{2\rho _{3}+\rho _{2}}}\;\left({\begin{array}{ccc|ccc}1&3/2&0&1/7&1/14&1/2\\0&1&0&2/7&-5/14&0\\0&0&1&1/7&1/14&0\end{array}}\right){\xrightarrow[{}]{-(3/2)\rho _{2}+\rho _{1}}}\;\left({\begin{array}{ccc|ccc}1&0&0&-2/7&17/28&1/2\\0&1&0&2/7&-5/14&0\\0&0&1&1/7&1/14&0\end{array}}\right)\end{aligned}}$
6. There is no inverse.
${\begin{array}{rl}\left({\begin{array}{ccc|ccc}2&2&3&1&0&0\\1&-2&-3&0&1&0\\4&-2&-3&0&0&1\end{array}}\right)&\;{\xrightarrow[{-2\rho _{1}+\rho _{3}}]{-(1/2)\rho _{1}+\rho _{2}}}\;\left({\begin{array}{ccc|ccc}2&2&3&1&0&0\\0&-3&-9/2&-1/2&1&0\\0&-6&-9&-2&0&1\end{array}}\right)\\&{\xrightarrow[{}]{-2\rho _{2}+\rho _{3}}}\;\left({\begin{array}{ccc|ccc}2&2&3&1&0&0\\0&-3&-9/2&-1/2&1&0\\0&0&0&-1&-2&1\end{array}}\right)\end{array}}$
As a check, note that the third column of the starting matrix is $3/2$  times the second, and so it is indeed singular and therefore has no inverse.
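For cross-checking all six parts at once, the Gauss-Jordan method itself can be sketched in a few lines. This is a bare-bones version in exact rational arithmetic, with names of our choosing, not a production routine:

```python
from fractions import Fraction

def gauss_jordan_inverse(m):
    """Reduce the augmented matrix (M|I) to (I|M^-1); return None if M is singular."""
    n = len(m)
    aug = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
           for i, row in enumerate(m)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if aug[r][col] != 0), None)
        if pivot is None:
            return None                                   # no pivot: singular
        aug[col], aug[pivot] = aug[pivot], aug[col]       # swap rows
        piv = aug[col][col]
        aug[col] = [x / piv for x in aug[col]]            # scale the pivot row to 1
        for r in range(n):
            if r != col and aug[r][col] != 0:             # clear the rest of the column
                factor = aug[r][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

print(gauss_jordan_inverse([[2, -4], [-1, 2]]))                  # None, matching part 3
print(gauss_jordan_inverse([[1, 1, 3], [0, 2, 4], [-1, 1, 0]]))  # matches part 4
```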
This exercise is recommended for all readers.
Problem 5

What matrix has this one for its inverse?

${\begin{pmatrix}1&3\\2&5\end{pmatrix}}$

We can use Corollary 4.12.

${\frac {1}{1\cdot 5-2\cdot 3}}\cdot {\begin{pmatrix}5&-3\\-2&1\end{pmatrix}}={\begin{pmatrix}-5&3\\2&-1\end{pmatrix}}$
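A quick check of the answer (a NumPy sketch of ours): inverting it should recover the given matrix.

```python
import numpy as np

A = np.array([[-5., 3.],
              [2., -1.]])
print(np.linalg.inv(A))   # recovers [[1, 3], [2, 5]]
```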
Problem 6

How does the inverse operation interact with scalar multiplication and addition of matrices?

1. What is the inverse of $rH$ ?
2. Is $(H+G)^{-1}=H^{-1}+G^{-1}$ ?
1. The proof that the inverse is $r^{-1}H^{-1}=(1/r)\cdot H^{-1}$  (provided, of course, that the matrix is invertible and that $r\neq 0$ ) is easy.
2. No. For one thing, the fact that $H+G$  has an inverse doesn't imply that $H$  has an inverse or that $G$  has an inverse. Neither of these matrices is invertible but their sum is.
${\begin{pmatrix}1&0\\0&0\end{pmatrix}}\qquad {\begin{pmatrix}0&0\\0&1\end{pmatrix}}$
Another point is that just because $H$  and $G$  each has an inverse doesn't mean $H+G$  has an inverse; here is an example.
${\begin{pmatrix}1&0\\0&1\end{pmatrix}}\qquad {\begin{pmatrix}-1&0\\0&-1\end{pmatrix}}$
Still a third point is that even if the two matrices have inverses and the sum has an inverse, the equation need not hold:
${\begin{pmatrix}2&0\\0&2\end{pmatrix}}^{-1}={\begin{pmatrix}1/2&0\\0&1/2\end{pmatrix}}\qquad {\begin{pmatrix}3&0\\0&3\end{pmatrix}}^{-1}={\begin{pmatrix}1/3&0\\0&1/3\end{pmatrix}}$
but
${\begin{pmatrix}5&0\\0&5\end{pmatrix}}^{-1}={\begin{pmatrix}1/5&0\\0&1/5\end{pmatrix}}$
and $(1/2)+(1/3)$  does not equal $1/5$ .
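That third point is easy to confirm numerically; a sketch of ours:

```python
import numpy as np

H = np.diag([2., 2.])
G = np.diag([3., 3.])

lhs = np.linalg.inv(H + G)                  # inverse of the sum: diag(1/5)
rhs = np.linalg.inv(H) + np.linalg.inv(G)   # sum of the inverses: diag(1/2 + 1/3)
print(np.allclose(lhs, rhs))                # False
```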
This exercise is recommended for all readers.
Problem 7

Is $(T^{k})^{-1}=(T^{-1})^{k}$ ?

Yes: $T^{k}(T^{-1})^{k}=(TT\cdots T)\cdot (T^{-1}T^{-1}\cdots T^{-1})=T^{k-1}(TT^{-1})(T^{-1})^{k-1}=\dots =I$ .
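A numerical spot-check of the identity, with a test matrix and power of our choosing:

```python
import numpy as np

T = np.array([[1., 2.],
              [3., 5.]])   # any invertible matrix will do
k = 4
lhs = np.linalg.inv(np.linalg.matrix_power(T, k))
rhs = np.linalg.matrix_power(np.linalg.inv(T), k)
print(np.allclose(lhs, rhs))   # True
```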

Problem 8

Is $H^{-1}$  invertible?

Yes, the inverse of $H^{-1}$  is $H$ .

Problem 9

For each real number $\theta$  let $t_{\theta }:\mathbb {R} ^{2}\to \mathbb {R} ^{2}$  be represented with respect to the standard bases by this matrix.

${\begin{pmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \end{pmatrix}}$

Show that $t_{\theta _{1}+\theta _{2}}=t_{\theta _{1}}\cdot t_{\theta _{2}}$ . Show also that ${t_{\theta }}^{-1}=t_{-\theta }$ .

One way to check that the first is true is with the angle sum formulas from trigonometry.

${\begin{array}{rl}{\begin{pmatrix}\cos(\theta _{1}+\theta _{2})&-\sin(\theta _{1}+\theta _{2})\\\sin(\theta _{1}+\theta _{2})&\cos(\theta _{1}+\theta _{2})\end{pmatrix}}&={\begin{pmatrix}\cos \theta _{1}\cos \theta _{2}-\sin \theta _{1}\sin \theta _{2}&-\sin \theta _{1}\cos \theta _{2}-\cos \theta _{1}\sin \theta _{2}\\\sin \theta _{1}\cos \theta _{2}+\cos \theta _{1}\sin \theta _{2}&\cos \theta _{1}\cos \theta _{2}-\sin \theta _{1}\sin \theta _{2}\end{pmatrix}}\\&={\begin{pmatrix}\cos \theta _{1}&-\sin \theta _{1}\\\sin \theta _{1}&\cos \theta _{1}\end{pmatrix}}{\begin{pmatrix}\cos \theta _{2}&-\sin \theta _{2}\\\sin \theta _{2}&\cos \theta _{2}\end{pmatrix}}\end{array}}$

Checking the second equation in this way is similar.

Of course, the equations can be not just checked but also understood by recalling that $t_{\theta }$  is the map that rotates vectors about the origin through an angle of $\theta$  radians.
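Both equations can also be spot-checked numerically for particular angles; a sketch of ours:

```python
import numpy as np

def rot(theta):
    """The matrix representing rotation of the plane through theta radians."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

t1, t2 = 0.7, 1.1   # arbitrary test angles
print(np.allclose(rot(t1 + t2), rot(t1) @ rot(t2)))   # t_{t1+t2} = t_{t1} t_{t2}
print(np.allclose(np.linalg.inv(rot(t1)), rot(-t1)))  # the inverse is t_{-t1}
```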

Problem 10

Do the calculations for the proof of Corollary 4.12.

There are two cases. For the first case we assume that $a$  is nonzero. Then

$\left({\begin{array}{cc|cc}a&b&1&0\\c&d&0&1\end{array}}\right)\;{\xrightarrow[{}]{-(c/a)\rho _{1}+\rho _{2}}}\left({\begin{array}{cc|cc}a&b&1&0\\0&-(bc/a)+d&-c/a&1\end{array}}\right)=\left({\begin{array}{cc|cc}a&b&1&0\\0&(ad-bc)/a&-c/a&1\end{array}}\right)$

shows that the matrix is invertible (in this $a\neq 0$  case) if and only if $ad-bc\neq 0$ . To find the inverse, we finish with the Jordan half of the reduction.

${\xrightarrow[{(a/(ad-bc))\rho _{2}}]{(1/a)\rho _{1}}}\left({\begin{array}{cc|cc}1&b/a&1/a&0\\0&1&-c/(ad-bc)&a/(ad-bc)\end{array}}\right){\xrightarrow[{}]{-(b/a)\rho _{2}+\rho _{1}}}\left({\begin{array}{cc|cc}1&0&d/(ad-bc)&-b/(ad-bc)\\0&1&-c/(ad-bc)&a/(ad-bc)\end{array}}\right)$

The other case is the $a=0$  case. We swap to get $c$  into the $1,1$  position.

${\xrightarrow[{}]{\rho _{1}\leftrightarrow \rho _{2}}}\;\left({\begin{array}{cc|cc}c&d&0&1\\0&b&1&0\end{array}}\right)$

This matrix is nonsingular if and only if both $b$  and $c$  are nonzero (which, under the case assumption that $a=0$ , holds if and only if $ad-bc\neq 0$ ). To find the inverse we do the Jordan half.

${\xrightarrow[{(1/b)\rho _{2}}]{(1/c)\rho _{1}}}\;\left({\begin{array}{cc|cc}1&d/c&0&1/c\\0&1&1/b&0\end{array}}\right)\;{\xrightarrow[{}]{-(d/c)\rho _{2}+\rho _{1}}}\;\left({\begin{array}{cc|cc}1&0&-d/(bc)&1/c\\0&1&1/b&0\end{array}}\right)$

(Note that this is what is required, since $a=0$  gives that $ad-bc=-bc$ ).

Problem 11

Show that this matrix

$H={\begin{pmatrix}1&0&1\\0&1&0\end{pmatrix}}$

has infinitely many right inverses. Show also that it has no left inverse.

With $H$  a $2\!\times \!3$  matrix, in looking for a matrix $G$  such that the combination $HG$  acts as the $2\!\times \!2$  identity we need $G$  to be $3\!\times \!2$ . Setting up the equation

${\begin{pmatrix}1&0&1\\0&1&0\end{pmatrix}}{\begin{pmatrix}m&n\\p&q\\r&s\end{pmatrix}}={\begin{pmatrix}1&0\\0&1\end{pmatrix}}$

and solving the resulting linear system

${\begin{array}{*{6}{rc}r}m&&&&+r&&=&1\\&n&&&&+s&=&0\\&&p&&&&=&0\\&&&q&&&=&1\end{array}}$

gives infinitely many solutions.

$\{{\begin{pmatrix}m\\n\\p\\q\\r\\s\end{pmatrix}}={\begin{pmatrix}1\\0\\0\\1\\0\\0\end{pmatrix}}+r\cdot {\begin{pmatrix}-1\\0\\0\\0\\1\\0\end{pmatrix}}+s\cdot {\begin{pmatrix}0\\-1\\0\\0\\0\\1\end{pmatrix}}\,{\big |}\,r,s\in \mathbb {R} \}$

Thus $H$  has infinitely many right inverses.

As for left inverses, the equation

${\begin{pmatrix}a&b\\c&d\\e&f\end{pmatrix}}{\begin{pmatrix}1&0&1\\0&1&0\end{pmatrix}}={\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}}$

gives rise to a linear system with nine equations and six unknowns.

${\begin{array}{*{6}{rc}r}a&&&&&&&&&&&=&1\\&&b&&&&&&&&&=&0\\a&&&&&&&&&&&=&0\\&&&&c&&&&&&&=&0\\&&&&&&d&&&&&=&1\\&&&&c&&&&&&&=&0\\&&&&&&&&e&&&=&0\\&&&&&&&&&&f&=&0\\&&&&&&&&e&&&=&1\end{array}}$

This system is inconsistent (the first equation conflicts with the third, as do the seventh and ninth) and so there is no left inverse.
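A numerical illustration of both halves (the sketch and its parameter choices are ours): every choice of $r$  and $s$  yields a right inverse, while no left inverse can exist since $GH$  has rank at most ${\mbox{rank}}(H)=2$  and so is never the $3\!\times \!3$  identity.

```python
import numpy as np

H = np.array([[1., 0., 1.],
              [0., 1., 0.]])

def right_inverse(r, s):
    """A member of the family found above: m = 1 - r, n = -s, p = 0, q = 1."""
    return np.array([[1 - r, -s],
                     [0.,    1.],
                     [r,     s]])

# Every choice of r and s gives a right inverse.
for r, s in [(0., 0.), (2., -3.)]:
    assert np.allclose(H @ right_inverse(r, s), np.eye(2))

# But G@H is 3x3 with rank at most rank(H), so it can never be the identity.
print(np.linalg.matrix_rank(H))   # 2
```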

Problem 12

In Example 4.1, how many left inverses has $\eta$ ?

With respect to the standard bases we have

${\rm {Rep}}_{{\mathcal {E}}_{2},{\mathcal {E}}_{3}}(\eta )={\begin{pmatrix}1&0\\0&1\\0&0\end{pmatrix}}$

and setting up the equation to find the matrix inverse

${\begin{pmatrix}a&b&c\\d&e&f\end{pmatrix}}{\begin{pmatrix}1&0\\0&1\\0&0\end{pmatrix}}={\begin{pmatrix}1&0\\0&1\end{pmatrix}}={\rm {Rep}}_{{\mathcal {E}}_{2},{\mathcal {E}}_{2}}({\mbox{id}})$

gives rise to a linear system.

${\begin{array}{*{6}{rc}r}a&&&&&&&&&&&=&1\\&&b&&&&&&&&&=&0\\&&&&&&d&&&&&=&0\\&&&&&&&&e&&&=&1\end{array}}$

There are infinitely many solutions in $a,\ldots ,f$  to this system because two of these variables are entirely unrestricted

$\{{\begin{pmatrix}a\\b\\c\\d\\e\\f\end{pmatrix}}={\begin{pmatrix}1\\0\\0\\0\\1\\0\end{pmatrix}}+c\cdot {\begin{pmatrix}0\\0\\1\\0\\0\\0\end{pmatrix}}+f\cdot {\begin{pmatrix}0\\0\\0\\0\\0\\1\end{pmatrix}}\,{\big |}\,c,f\in \mathbb {R} \}$

and so there are infinitely many solutions to the matrix equation.

$\{{\begin{pmatrix}1&0&c\\0&1&f\end{pmatrix}}\,{\big |}\,c,f\in \mathbb {R} \}$

With the bases still fixed at ${\mathcal {E}}_{3},{\mathcal {E}}_{2}$ , for instance taking $c=2$  and $f=3$  gives a matrix representing this map.

${\begin{pmatrix}x\\y\\z\end{pmatrix}}\;{\stackrel {f_{2,3}}{\longmapsto }}\;{\begin{pmatrix}x+2z\\y+3z\end{pmatrix}}$

The check that $f_{2,3}\circ \eta$  is the identity map on $\mathbb {R} ^{2}$  is easy.

Problem 13

If a matrix has infinitely many right-inverses, can it have infinitely many left-inverses? Must it have?

By Lemma 4.3 it cannot have infinitely many left inverses, because a matrix with both left and right inverses has only one of each (and that one is both: the left and right inverse matrices are equal).

This exercise is recommended for all readers.
Problem 14

Assume that $H$  is invertible and that $HG$  is the zero matrix. Show that $G$  is a zero matrix.

The associativity of matrix multiplication gives on the one hand $H^{-1}(HG)=H^{-1}Z=Z$ , and on the other that $H^{-1}(HG)=(H^{-1}H)G=IG=G$ .

Problem 15

Prove that if $H$  is invertible then the inverse commutes with a matrix ($GH^{-1}=H^{-1}G$ ) if and only if $H$  itself commutes with that matrix ($GH=HG$ ).

Multiply both sides of the first equation by $H$  on the left and on the right: $H(GH^{-1})H=H(H^{-1}G)H$  simplifies to $HG=GH$ . The steps are reversible.

This exercise is recommended for all readers.
Problem 16

Show that if $T$  is square and if $T^{4}$  is the zero matrix then $(I-T)^{-1}=I+T+T^{2}+T^{3}$ . Generalize.

Checking that when $I-T$  is multiplied on both sides by that expression (assuming that $T^{4}$  is the zero matrix) then the result is the identity matrix is easy. The obvious generalization is that if $T^{n}$  is the zero matrix then $(I-T)^{-1}=I+T+T^{2}+\cdots +T^{n-1}$ ; the check again is easy.
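Here is the check carried out numerically for a specific matrix with $T^{4}$  the zero matrix (the nilpotent example is our own choice):

```python
import numpy as np

T = np.diag(np.ones(3), k=1)   # ones on the superdiagonal, so T^4 is the zero matrix
I = np.eye(4)

assert np.allclose(np.linalg.matrix_power(T, 4), np.zeros((4, 4)))

series = I + T + T @ T + T @ T @ T        # I + T + T^2 + T^3
print(np.allclose((I - T) @ series, I))   # True
print(np.allclose(series @ (I - T), I))   # True
```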

This exercise is recommended for all readers.
Problem 17

Let $D$  be diagonal. Describe $D^{2}$ , $D^{3}$ , ... , etc. Describe $D^{-1}$ , $D^{-2}$ , ... , etc. Define $D^{0}$  appropriately.

The powers of the matrix are formed by taking the powers of the diagonal entries. That is, $D^{2}$  is all zeros except for diagonal entries of ${d_{1,1}}^{2}$ , ${d_{2,2}}^{2}$ , etc., and likewise $D^{-1}$  is all zeros except for diagonal entries of $1/d_{1,1}$ , $1/d_{2,2}$ , etc. (so the negative powers exist exactly when every diagonal entry is nonzero). This suggests defining $D^{0}$  to be the identity matrix.
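A numerical illustration with a diagonal matrix of our choosing:

```python
import numpy as np

D = np.diag([2., 3., 5.])
print(np.allclose(np.linalg.matrix_power(D, 3), np.diag([8., 27., 125.])))   # True
print(np.allclose(np.linalg.inv(D), np.diag([1/2, 1/3, 1/5])))               # True
```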

Problem 18

Prove that any matrix row-equivalent to an invertible matrix is also invertible.

Assume that $B$  is row equivalent to $A$  and that $A$  is invertible. Because they are row-equivalent, there is a sequence of row steps carrying one to the other, and each row step can be performed by left-multiplying by an invertible elementary matrix. Thus $B=R_{n}\cdots R_{1}A$  where each $R_{i}$  is invertible. This equation gives $B$  as a product of invertible matrices and by Lemma 4.5 then, $B$  is also invertible.

Problem 19

The first question below appeared as Problem 15 in the Matrix Multiplication subsection.

1. Show that the rank of the product of two matrices is less than or equal to the minimum of the rank of each.
2. Show that if $T$  and $S$  are square then $TS=I$  if and only if $ST=I$ .
1. See the answer to Problem 15 in the Matrix Multiplication subsection.
2. We will show that both conditions are equivalent to the condition that the two matrices be nonsingular. As $T$  and $S$  are square and their product is defined, they are equal-sized, say $n\!\times \!n$ . Consider the $TS=I$  half. By the prior item the rank of $I$  is less than or equal to the minimum of the rank of $T$  and the rank of $S$ . But the rank of $I$  is $n$ , so the rank of $T$  and the rank of $S$  must each be $n$ . Hence each is nonsingular. The same argument shows that $ST=I$  implies that each is nonsingular. Finally, since a square nonsingular matrix is invertible, $TS=I$  gives $T=(TS)S^{-1}=S^{-1}$  and therefore $ST=SS^{-1}=I$ ; the argument from $ST=I$  to $TS=I$  is the same with the roles of $T$  and $S$  swapped.
Problem 20

Show that the inverse of a permutation matrix is its transpose.

Inverses are unique, so we need only show that it works. The check appears above as Problem 9 of the Mechanics of Matrix Multiplication subsection.
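A quick numerical illustration (the permutation is our own choice):

```python
import numpy as np

P = np.eye(4)[[2, 0, 3, 1]]                # a permutation matrix: rows of I reordered
print(np.allclose(np.linalg.inv(P), P.T))  # True: the inverse is the transpose
```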

Problem 21

The first two parts of this question appeared as Problem 12 of the Matrix Multiplication subsection.

1. Show that ${{(GH)}^{\rm {trans}}}={{H}^{\rm {trans}}}{{G}^{\rm {trans}}}$ .
2. A square matrix is symmetric if each $i,j$  entry equals the $j,i$  entry (that is, if the matrix equals its transpose). Show that the matrices $H{{H}^{\rm {trans}}}$  and ${{H}^{\rm {trans}}}H$  are symmetric.
3. Show that the inverse of the transpose is the transpose of the inverse.
4. Show that the inverse of a symmetric matrix is symmetric.
1. See the answer for Problem 12 of the Matrix Multiplication subsection.
2. See the answer for Problem 12 of the Matrix Multiplication subsection.
3. Apply the first part to $I=AA^{-1}$  to get $I={{I}^{\rm {trans}}}={{(AA^{-1})}^{\rm {trans}}}={{(A^{-1})}^{\rm {trans}}}{{A}^{\rm {trans}}}$ .
4. Apply the prior item with ${{A}^{\rm {trans}}}=A$ , as $A$  is symmetric.
This exercise is recommended for all readers.
Problem 22

The items starting this question appeared as Problem 17 of the Matrix Multiplication subsection.

1. Prove that the composition of the projections $\pi _{x},\pi _{y}:\mathbb {R} ^{3}\to \mathbb {R} ^{3}$  is the zero map despite that neither is the zero map.
2. Prove that the composition of the derivatives $d^{2}/dx^{2},\,d^{3}/dx^{3}:{\mathcal {P}}_{4}\to {\mathcal {P}}_{4}$  is the zero map despite that neither map is the zero map.
3. Give matrix equations representing each of the prior two items.

When two things multiply to give zero despite that neither is zero, each is said to be a zero divisor. Prove that no zero divisor is invertible.

For the answer to the items making up the first half, see Problem 17 of the Matrix Multiplication subsection. For the proof in the second half, assume that $A$  is a zero divisor so there is a nonzero matrix $B$  with $AB=Z$  (or else $BA=Z$ ; this case is similar). If $A$  is invertible then $A^{-1}(AB)=(A^{-1}A)B=IB=B$  but also $A^{-1}(AB)=A^{-1}Z=Z$ , contradicting that $B$  is nonzero.

Problem 23

In real number algebra, there are exactly two numbers, $1$  and $-1$ , that are their own multiplicative inverse. Does $H^{2}=I$  have exactly two solutions for $2\!\times \!2$  matrices?

No, there are at least four.

${\begin{pmatrix}\pm 1&0\\0&\pm 1\end{pmatrix}}$
Problem 24

Is the relation "is a two-sided inverse of" transitive? Reflexive? Symmetric?

It is not reflexive since, for instance,

$H={\begin{pmatrix}1&0\\0&2\end{pmatrix}}$

is not a two-sided inverse of itself. The same example shows that it is not transitive. That matrix has this two-sided inverse

$G={\begin{pmatrix}1&0\\0&1/2\end{pmatrix}}$

and while $H$  is a two-sided inverse of $G$  and $G$  is a two-sided inverse of $H$ , we know that $H$  is not a two-sided inverse of $H$ . However, the relation is symmetric: if $G$  is a two-sided inverse of $H$  then $GH=I=HG$  and therefore $H$  is also a two-sided inverse of $G$ .

Problem 25

Prove: if the sum of the elements in each row of a square matrix is $k$ , then the sum of the elements in each row of the inverse matrix is $1/k$ . (Wilansky 1951)

Let $A$  be $m\!\times \!m$ , nonsingular, with each row summing to $k$ . Let $B$  be its inverse. Then for each $n\leq m$ ,
$1=\sum _{r=1}^{m}\delta _{nr}=\sum _{r=1}^{m}\sum _{s=1}^{m}b_{ns}a_{sr}=\sum _{s=1}^{m}b_{ns}\sum _{r=1}^{m}a_{sr}=k\sum _{s=1}^{m}b_{ns}$
and so each row of $B$  sums to $1/k$ .
($A$  is singular if $k=0$ ).
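A numerical illustration with a test matrix of our choosing, where each row sums to $k=5$ :

```python
import numpy as np

A = np.array([[2., 3.],
              [1., 4.]])       # each row sums to k = 5
A_inv = np.linalg.inv(A)
print(A_inv.sum(axis=1))       # [0.2  0.2]: each row of the inverse sums to 1/k
```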