# Linear Algebra/Diagonalizability/Solutions

## Solutions

This exercise is recommended for all readers.
Problem 1

Repeat Example 2.5 for the matrix from Example 2.2.

Because the basis vectors are chosen arbitrarily, many different answers are possible. However, here is one way to go; to diagonalize

$T={\begin{pmatrix}4&-2\\1&1\end{pmatrix}}$

take it as the representation of a transformation with respect to the standard basis $T={\rm {Rep}}_{{\mathcal {E}}_{2},{\mathcal {E}}_{2}}(t)$  and look for $B=\langle {\vec {\beta }}_{1},{\vec {\beta }}_{2}\rangle$  such that

${\rm {Rep}}_{B,B}(t)={\begin{pmatrix}\lambda _{1}&0\\0&\lambda _{2}\end{pmatrix}}$

that is, such that $t({\vec {\beta }}_{1})=\lambda _{1}{\vec {\beta }}_{1}$  and $t({\vec {\beta }}_{2})=\lambda _{2}{\vec {\beta }}_{2}$ .

${\begin{pmatrix}4&-2\\1&1\end{pmatrix}}{\vec {\beta }}_{1}=\lambda _{1}\cdot {\vec {\beta }}_{1}\qquad {\begin{pmatrix}4&-2\\1&1\end{pmatrix}}{\vec {\beta }}_{2}=\lambda _{2}\cdot {\vec {\beta }}_{2}$

We are looking for scalars $x$  such that this equation

${\begin{pmatrix}4&-2\\1&1\end{pmatrix}}{\begin{pmatrix}b_{1}\\b_{2}\end{pmatrix}}=x\cdot {\begin{pmatrix}b_{1}\\b_{2}\end{pmatrix}}$

has solutions $b_{1}$  and $b_{2}$ , which are not both zero. Rewrite that as a linear system

${\begin{array}{*{2}{rc}r}(4-x)\cdot b_{1}&+&-2\cdot b_{2}&=&0\\1\cdot b_{1}&+&(1-x)\cdot b_{2}&=&0\end{array}}$

If $x=4$  then the first equation gives that $b_{2}=0$ , and then the second equation gives that $b_{1}=0$ . The case where both $b$ 's are zero is disallowed so we can assume that $x\neq 4$ .

${\xrightarrow[{}]{(-1/(4-x))\rho _{1}+\rho _{2}}}\;{\begin{array}{*{2}{rc}r}(4-x)\cdot b_{1}&+&-2\cdot b_{2}&=&0\\&&((x^{2}-5x+6)/(4-x))\cdot b_{2}&=&0\end{array}}$

Consider the bottom equation. If $b_{2}=0$  then the first equation gives $b_{1}=0$  or $x=4$ . The $b_{1}=b_{2}=0$  case is disallowed. The other possibility for the bottom equation is that the numerator of the fraction $x^{2}-5x+6=(x-2)(x-3)$  is zero. The $x=2$  case gives a first equation of $2b_{1}-2b_{2}=0$ , and so associated with $x=2$  we have vectors whose first and second components are equal:

${\vec {\beta }}_{1}={\begin{pmatrix}1\\1\end{pmatrix}}\qquad {\text{(so }}{\begin{pmatrix}4&-2\\1&1\end{pmatrix}}{\begin{pmatrix}1\\1\end{pmatrix}}=2\cdot {\begin{pmatrix}1\\1\end{pmatrix}}{\text{, and }}\lambda _{1}=2{\text{).}}$

If $x=3$  then the first equation is $b_{1}-2b_{2}=0$  and so the associated vectors are those whose first component is twice their second:

${\vec {\beta }}_{2}={\begin{pmatrix}2\\1\end{pmatrix}}\qquad {\text{(so }}{\begin{pmatrix}4&-2\\1&1\end{pmatrix}}{\begin{pmatrix}2\\1\end{pmatrix}}=3\cdot {\begin{pmatrix}2\\1\end{pmatrix}}{\text{, and so }}\lambda _{2}=3{\text{).}}$

The arrow diagram relating ${\rm {Rep}}_{{\mathcal {E}}_{2},{\mathcal {E}}_{2}}(t)$  to ${\rm {Rep}}_{B,B}(t)$  (picture omitted) shows how to get the diagonalization.

${\begin{pmatrix}2&0\\0&3\end{pmatrix}}={\begin{pmatrix}1&2\\1&1\end{pmatrix}}^{-1}{\begin{pmatrix}4&-2\\1&1\end{pmatrix}}{\begin{pmatrix}1&2\\1&1\end{pmatrix}}$

Comment. This equation matches the $T=PSP^{-1}$  definition under this renaming.

$T={\begin{pmatrix}2&0\\0&3\end{pmatrix}}\quad P={\begin{pmatrix}1&2\\1&1\end{pmatrix}}^{-1}\quad P^{-1}={\begin{pmatrix}1&2\\1&1\end{pmatrix}}\quad S={\begin{pmatrix}4&-2\\1&1\end{pmatrix}}$
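As a quick numerical check (a NumPy sketch added here, not part of the original text), conjugating by the matrix whose columns are the eigenvectors found above produces the diagonal matrix:

```python
import numpy as np

# T is the matrix from Example 2.2; the columns of P are the
# eigenvectors found above: (1,1) for lambda=2 and (2,1) for lambda=3.
T = np.array([[4.0, -2.0],
              [1.0,  1.0]])
P = np.array([[1.0, 2.0],
              [1.0, 1.0]])

D = np.linalg.inv(P) @ T @ P  # expect diag(2, 3)
```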
Problem 2

Diagonalize these upper triangular matrices.

1. ${\begin{pmatrix}-2&1\\0&2\end{pmatrix}}$
2. ${\begin{pmatrix}5&4\\0&1\end{pmatrix}}$
1. Setting up
${\begin{pmatrix}-2&1\\0&2\end{pmatrix}}{\begin{pmatrix}b_{1}\\b_{2}\end{pmatrix}}=x\cdot {\begin{pmatrix}b_{1}\\b_{2}\end{pmatrix}}\qquad \Longrightarrow \qquad {\begin{array}{*{2}{rc}r}(-2-x)\cdot b_{1}&+&b_{2}&=&0\\&&(2-x)\cdot b_{2}&=&0\end{array}}$
gives the two possibilities that $b_{2}=0$  and $x=2$ . Following the $b_{2}=0$  possibility leads to the first equation $(-2-x)b_{1}=0$  with the two cases that $b_{1}=0$  and that $x=-2$ . Thus, under this first possibility, we find $x=-2$  and the associated vectors whose second component is zero, and whose first component is free.
${\begin{pmatrix}-2&1\\0&2\end{pmatrix}}{\begin{pmatrix}b_{1}\\0\end{pmatrix}}=-2\cdot {\begin{pmatrix}b_{1}\\0\end{pmatrix}}\qquad {\vec {\beta }}_{1}={\begin{pmatrix}1\\0\end{pmatrix}}$
Following the other possibility leads to a first equation of $-4b_{1}+b_{2}=0$  and so the vectors associated with this solution have a second component that is four times their first component.
${\begin{pmatrix}-2&1\\0&2\end{pmatrix}}{\begin{pmatrix}b_{1}\\4b_{1}\end{pmatrix}}=2\cdot {\begin{pmatrix}b_{1}\\4b_{1}\end{pmatrix}}\qquad {\vec {\beta }}_{2}={\begin{pmatrix}1\\4\end{pmatrix}}$
The diagonalization is this.
${\begin{pmatrix}1&1\\0&4\end{pmatrix}}^{-1}{\begin{pmatrix}-2&1\\0&2\end{pmatrix}}{\begin{pmatrix}1&1\\0&4\end{pmatrix}}={\begin{pmatrix}-2&0\\0&2\end{pmatrix}}$
2. The calculations are like those in the prior part.
${\begin{pmatrix}5&4\\0&1\end{pmatrix}}{\begin{pmatrix}b_{1}\\b_{2}\end{pmatrix}}=x\cdot {\begin{pmatrix}b_{1}\\b_{2}\end{pmatrix}}\qquad \Longrightarrow \qquad {\begin{array}{*{2}{rc}r}(5-x)\cdot b_{1}&+&4\cdot b_{2}&=&0\\&&(1-x)\cdot b_{2}&=&0\end{array}}$
The bottom equation gives the two possibilities that $b_{2}=0$  and $x=1$ . Following the $b_{2}=0$  possibility, and discarding the case where both $b_{2}$  and $b_{1}$  are zero, gives that $x=5$ , associated with vectors whose second component is zero and whose first component is free.
${\vec {\beta }}_{1}={\begin{pmatrix}1\\0\end{pmatrix}}$
The $x=1$  possibility gives a first equation of $4b_{1}+4b_{2}=0$  and so the associated vectors have a second component that is the negative of their first component.
${\vec {\beta }}_{2}={\begin{pmatrix}1\\-1\end{pmatrix}}$
We thus have this diagonalization.
${\begin{pmatrix}1&1\\0&-1\end{pmatrix}}^{-1}{\begin{pmatrix}5&4\\0&1\end{pmatrix}}{\begin{pmatrix}1&1\\0&-1\end{pmatrix}}={\begin{pmatrix}5&0\\0&1\end{pmatrix}}$
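Both parts can be verified numerically; this NumPy sketch (an addition to the text) checks that each matrix of eigenvectors conjugates the given matrix to the expected diagonal form:

```python
import numpy as np

# Part 1: eigenvectors (1,0) and (1,4) with eigenvalues -2 and 2.
A1 = np.array([[-2.0, 1.0], [0.0, 2.0]])
P1 = np.array([[1.0, 1.0], [0.0, 4.0]])
D1 = np.linalg.inv(P1) @ A1 @ P1  # expect diag(-2, 2)

# Part 2: eigenvectors (1,0) and (1,-1) with eigenvalues 5 and 1.
A2 = np.array([[5.0, 4.0], [0.0, 1.0]])
P2 = np.array([[1.0, 1.0], [0.0, -1.0]])
D2 = np.linalg.inv(P2) @ A2 @ P2  # expect diag(5, 1)
```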
This exercise is recommended for all readers.
Problem 3

What form do the powers of a diagonal matrix have?

For any nonnegative integer $p$  (and for negative integers as well, provided each $d_{i}$  is nonzero, so that the matrix is invertible),

${\begin{pmatrix}d_{1}&0&\\0&\ddots &\\&&d_{n}\end{pmatrix}}^{p}={\begin{pmatrix}d_{1}^{p}&0&\\0&\ddots &\\&&d_{n}^{p}\end{pmatrix}}.$
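A short NumPy sketch (an addition, assuming nothing beyond the statement above) illustrates the entrywise-power pattern, including a negative power when every diagonal entry is nonzero:

```python
import numpy as np

D = np.diag([2.0, 3.0, 5.0])

# a positive power: the diagonal entries are raised entrywise
P4 = np.linalg.matrix_power(D, 4)   # expect diag(16, 81, 625)

# a negative power works here because every diagonal entry is nonzero
Pm1 = np.linalg.matrix_power(D, -1)  # expect diag(1/2, 1/3, 1/5)
```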
Problem 4

Give two same-sized diagonal matrices that are not similar. Must any two different diagonal matrices come from different similarity classes?

These two are not similar

${\begin{pmatrix}0&0\\0&0\end{pmatrix}}\qquad {\begin{pmatrix}1&0\\0&1\end{pmatrix}}$

because each is alone in its similarity class.

For the second half, these

${\begin{pmatrix}2&0\\0&3\end{pmatrix}}\qquad {\begin{pmatrix}3&0\\0&2\end{pmatrix}}$

are similar via the matrix that changes bases from $\langle {\vec {\beta }}_{1},{\vec {\beta }}_{2}\rangle$  to $\langle {\vec {\beta }}_{2},{\vec {\beta }}_{1}\rangle$ . (Question. Are two diagonal matrices similar if and only if their diagonal entries are permutations of each other's?)

Problem 5

Give a nonsingular diagonal matrix. Can a diagonal matrix ever be singular?

Contrast these two.

${\begin{pmatrix}2&0\\0&1\end{pmatrix}}\qquad {\begin{pmatrix}2&0\\0&0\end{pmatrix}}$

The first is nonsingular, the second is singular.

This exercise is recommended for all readers.
Problem 6

Show that the inverse of a diagonal matrix is the diagonal of the inverses, if no element on that diagonal is zero. What happens when a diagonal entry is zero?

To check that the inverse of a diagonal matrix is the diagonal matrix of the inverses, just multiply.

${\begin{pmatrix}a_{1,1}&0\\0&a_{2,2}\\&&\ddots \\&&&a_{n,n}\end{pmatrix}}{\begin{pmatrix}1/a_{1,1}&0\\0&1/a_{2,2}\\&&\ddots \\&&&1/a_{n,n}\end{pmatrix}}$

(Showing that it is a left inverse is just as easy.)

If a diagonal entry is zero then the diagonal matrix is singular; it has a zero determinant.
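The multiplication can be sketched numerically (NumPy, an addition to the text): the product of a diagonal matrix with the diagonal matrix of reciprocals is the identity, while a zero diagonal entry gives determinant zero:

```python
import numpy as np

D = np.diag([2.0, 4.0, 8.0])
Dinv = np.diag([1.0 / 2.0, 1.0 / 4.0, 1.0 / 8.0])
product = D @ Dinv  # expect the 3x3 identity

# a zero on the diagonal makes the matrix singular
singular = np.diag([2.0, 0.0])
det = np.linalg.det(singular)  # expect 0
```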

Problem 7

The equation ending Example 2.5

${\begin{pmatrix}1&1\\0&-1\end{pmatrix}}^{-1}{\begin{pmatrix}3&2\\0&1\end{pmatrix}}{\begin{pmatrix}1&1\\0&-1\end{pmatrix}}={\begin{pmatrix}3&0\\0&1\end{pmatrix}}$

is a bit jarring because for $P$  we must take the first matrix, which is shown as an inverse, and for $P^{-1}$  we take the inverse of the first matrix, so that the two $-1$  powers cancel and this matrix is shown without a superscript $-1$ .

1. Check that this nicer-appearing equation holds.
${\begin{pmatrix}3&0\\0&1\end{pmatrix}}={\begin{pmatrix}1&1\\0&-1\end{pmatrix}}{\begin{pmatrix}3&2\\0&1\end{pmatrix}}{\begin{pmatrix}1&1\\0&-1\end{pmatrix}}^{-1}$
2. Is the previous item a coincidence? Or can we always switch the $P$  and the $P^{-1}$ ?
1. The check is easy.
${\begin{pmatrix}1&1\\0&-1\end{pmatrix}}{\begin{pmatrix}3&2\\0&1\end{pmatrix}}={\begin{pmatrix}3&3\\0&-1\end{pmatrix}}\qquad {\begin{pmatrix}3&3\\0&-1\end{pmatrix}}{\begin{pmatrix}1&1\\0&-1\end{pmatrix}}^{-1}={\begin{pmatrix}3&0\\0&1\end{pmatrix}}$
2. It is a coincidence, in the sense that if $T=PSP^{-1}$  then $T$  need not equal $P^{-1}SP$ . Even in the case of a diagonal matrix $D$ , the condition that $D=PTP^{-1}$  does not imply that $D$  equals $P^{-1}TP$ . The matrices from Example 2.2 show this.
${\begin{pmatrix}1&2\\1&1\end{pmatrix}}{\begin{pmatrix}4&-2\\1&1\end{pmatrix}}={\begin{pmatrix}6&0\\5&-1\end{pmatrix}}\qquad {\begin{pmatrix}6&0\\5&-1\end{pmatrix}}{\begin{pmatrix}1&2\\1&1\end{pmatrix}}^{-1}={\begin{pmatrix}-6&12\\-6&11\end{pmatrix}}$
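The asymmetry can be seen numerically; this NumPy sketch (not part of the original) conjugates the Example 2.2 matrix both ways and shows the results differ:

```python
import numpy as np

S = np.array([[4.0, -2.0], [1.0, 1.0]])
P = np.array([[1.0, 2.0], [1.0, 1.0]])

one_way = np.linalg.inv(P) @ S @ P    # expect diag(2, 3)
other_way = P @ S @ np.linalg.inv(P)  # expect [[-6, 12], [-6, 11]], not diagonal
```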
Problem 8

Show that the $P$  used to diagonalize in Example 2.5 is not unique.

The columns of the diagonalizing matrix are the vectors associated with the $x$ 's. Both the exact choice of those vectors and the order in which they are taken were arbitrary. We could, for instance, get a different matrix by swapping the two columns.

Problem 9

Find a formula for the powers of this matrix Hint: see Problem 3.

${\begin{pmatrix}-3&1\\-4&2\end{pmatrix}}$

Diagonalizing and then taking powers of the diagonal matrix shows that

${\begin{pmatrix}-3&1\\-4&2\end{pmatrix}}^{k}={\frac {1}{3}}{\begin{pmatrix}-1&1\\-4&4\end{pmatrix}}+{\frac {(-2)^{k}}{3}}{\begin{pmatrix}4&-1\\4&-1\end{pmatrix}}.$
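The closed form can be spot-checked numerically (a NumPy sketch added here; the eigenvalues are $1$  and $-2$ , and the two fixed matrices act as the corresponding projections):

```python
import numpy as np

A = np.array([[-3.0, 1.0], [-4.0, 2.0]])
M1 = np.array([[-1.0, 1.0], [-4.0, 4.0]]) / 3.0  # component for eigenvalue 1
M2 = np.array([[4.0, -1.0], [4.0, -1.0]]) / 3.0  # component for eigenvalue -2

def a_power(k):
    # closed form: A^k = 1^k * M1 + (-2)^k * M2
    return M1 + (-2.0) ** k * M2
```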
This exercise is recommended for all readers.
Problem 10

Diagonalize these.

1. ${\begin{pmatrix}1&1\\0&0\end{pmatrix}}$
2. ${\begin{pmatrix}0&1\\1&0\end{pmatrix}}$
1. ${\begin{pmatrix}1&1\\0&-1\end{pmatrix}}^{-1}{\begin{pmatrix}1&1\\0&0\end{pmatrix}}{\begin{pmatrix}1&1\\0&-1\end{pmatrix}}={\begin{pmatrix}1&0\\0&0\end{pmatrix}}$
2. ${\begin{pmatrix}1&1\\1&-1\end{pmatrix}}^{-1}{\begin{pmatrix}0&1\\1&0\end{pmatrix}}{\begin{pmatrix}1&1\\1&-1\end{pmatrix}}={\begin{pmatrix}1&0\\0&-1\end{pmatrix}}$
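Both diagonalizations can be confirmed numerically; this NumPy sketch (an addition) conjugates each matrix by the matrix whose columns are its eigenvectors:

```python
import numpy as np

# Part 1: eigenvalues 1 and 0, eigenvectors (1,0) and (1,-1).
A1 = np.array([[1.0, 1.0], [0.0, 0.0]])
P1 = np.array([[1.0, 1.0], [0.0, -1.0]])
D1 = np.linalg.inv(P1) @ A1 @ P1  # expect diag(1, 0)

# Part 2: eigenvalues 1 and -1, eigenvectors (1,1) and (1,-1).
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])
P2 = np.array([[1.0, 1.0], [1.0, -1.0]])
D2 = np.linalg.inv(P2) @ A2 @ P2  # expect diag(1, -1)
```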
Problem 11

We can ask how diagonalization interacts with the matrix operations. Assume that $t,s:V\to V$  are each diagonalizable. Is $ct$  diagonalizable for all scalars $c$ ? What about $t+s$ ? $t\circ s$ ?

Yes, $ct$  is diagonalizable by the final theorem of this subsection.

No, $t+s$  need not be diagonalizable. Intuitively, the problem arises when the two maps diagonalize with respect to different bases (that is, when they are not simultaneously diagonalizable). Specifically, these two are diagonalizable but their sum is not:

${\begin{pmatrix}1&1\\0&0\end{pmatrix}}\qquad {\begin{pmatrix}-1&0\\0&0\end{pmatrix}}$

(the second is already diagonal; for the first, see Problem 10). The sum is not diagonalizable because it is nonzero but its square is the zero matrix: if it were similar to a diagonal matrix then that diagonal matrix would also square to zero, so it would be the zero matrix, forcing the sum itself to be zero.

The same intuition suggests that $t\circ s$  need not be diagonalizable. These two are diagonalizable but their product is not:

${\begin{pmatrix}1&0\\0&0\end{pmatrix}}\qquad {\begin{pmatrix}0&1\\1&0\end{pmatrix}}$

(for the second, see Problem 10). The product is the nonzero matrix with a single $1$  in the upper-right entry; its square is the zero matrix, so as with the sum above it is not diagonalizable.
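In both counterexamples the troublesome matrix is nonzero yet squares to zero, which rules out diagonalizability; this NumPy sketch (an addition) verifies the arithmetic:

```python
import numpy as np

# the sum counterexample
A = np.array([[1.0, 1.0], [0.0, 0.0]])
B = np.array([[-1.0, 0.0], [0.0, 0.0]])
S = A + B            # [[0, 1], [0, 0]]: nonzero, but S @ S = 0

# the product counterexample
C = np.array([[1.0, 0.0], [0.0, 0.0]])
E = np.array([[0.0, 1.0], [1.0, 0.0]])
Pr = C @ E           # also [[0, 1], [0, 0]]
```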

This exercise is recommended for all readers.
Problem 12

Show that matrices of this form are not diagonalizable.

${\begin{pmatrix}1&c\\0&1\end{pmatrix}}\qquad c\neq 0$

If

$P{\begin{pmatrix}1&c\\0&1\end{pmatrix}}P^{-1}={\begin{pmatrix}a&0\\0&b\end{pmatrix}}$

then

$P{\begin{pmatrix}1&c\\0&1\end{pmatrix}}={\begin{pmatrix}a&0\\0&b\end{pmatrix}}P$

so

${\begin{array}{rl}{\begin{pmatrix}p&q\\r&s\end{pmatrix}}{\begin{pmatrix}1&c\\0&1\end{pmatrix}}&={\begin{pmatrix}a&0\\0&b\end{pmatrix}}{\begin{pmatrix}p&q\\r&s\end{pmatrix}}\\{\begin{pmatrix}p&cp+q\\r&cr+s\end{pmatrix}}&={\begin{pmatrix}ap&aq\\br&bs\end{pmatrix}}\end{array}}$

The $1,1$  entries show that $a=1$  and the $1,2$  entries then show that $pc=0$ . Since $c\neq 0$  this means that $p=0$ . The $2,1$  entries show that $b=1$  and the $2,2$  entries then show that $rc=0$ . Since $c\neq 0$  this means that $r=0$ . But if both $p$  and $r$  are $0$  then $P$  is not invertible.
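The same conclusion can be reached by counting eigenvectors; this NumPy sketch (with $c=3$  as an arbitrary nonzero sample value) shows that the lone eigenvalue $1$  has only a one-dimensional eigenspace, so no basis of eigenvectors exists:

```python
import numpy as np

c = 3.0  # any nonzero c will do; 3 is an arbitrary sample value
A = np.array([[1.0, c], [0.0, 1.0]])

# 1 is the only eigenvalue; its eigenspace is the null space of A - I,
# whose dimension is 2 minus the rank of A - I
eigenspace_dim = 2 - np.linalg.matrix_rank(A - np.eye(2))
```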

Problem 13

Show that each of these is diagonalizable.

1. ${\begin{pmatrix}1&2\\2&1\end{pmatrix}}$
2. ${\begin{pmatrix}x&y\\y&z\end{pmatrix}}\qquad x,y,z{\text{ scalars}}$
1. Using the formula for the inverse of a $2\!\times \!2$  matrix gives this.
${\begin{array}{rl}{\begin{pmatrix}a&b\\c&d\end{pmatrix}}{\begin{pmatrix}1&2\\2&1\end{pmatrix}}\cdot {\frac {1}{ad-bc}}\cdot {\begin{pmatrix}d&-b\\-c&a\end{pmatrix}}&={\frac {1}{ad-bc}}{\begin{pmatrix}ad+2bd-2ac-bc&-ab-2b^{2}+2a^{2}+ab\\cd+2d^{2}-2c^{2}-cd&-bc-2bd+2ac+ad\end{pmatrix}}\end{array}}$
Now pick scalars $a,\ldots ,d$  so that $ad-bc\neq 0$  and $2d^{2}-2c^{2}=0$  and $2a^{2}-2b^{2}=0$ . For example, these will do.
${\begin{pmatrix}1&1\\1&-1\end{pmatrix}}{\begin{pmatrix}1&2\\2&1\end{pmatrix}}\cdot {\frac {1}{-2}}\cdot {\begin{pmatrix}-1&-1\\-1&1\end{pmatrix}}={\frac {1}{-2}}{\begin{pmatrix}-6&0\\0&2\end{pmatrix}}$
2. As above,
${\begin{array}{rl}{\begin{pmatrix}a&b\\c&d\end{pmatrix}}{\begin{pmatrix}x&y\\y&z\end{pmatrix}}\cdot {\frac {1}{ad-bc}}\cdot {\begin{pmatrix}d&-b\\-c&a\end{pmatrix}}&={\frac {1}{ad-bc}}{\begin{pmatrix}adx+bdy-acy-bcz&-abx-b^{2}y+a^{2}y+abz\\cdx+d^{2}y-c^{2}y-cdz&-bcx-bdy+acy+adz\end{pmatrix}}\end{array}}$
we are looking for scalars $a,\ldots ,d$  so that $ad-bc\neq 0$  and $-abx-b^{2}y+a^{2}y+abz=0$  and $cdx+d^{2}y-c^{2}y-cdz=0$ , no matter what values $x$ , $y$ , and $z$  have. For starters, we assume that $y\neq 0$ , else the given matrix is already diagonal. We shall use that assumption because if we (arbitrarily) let $a=1$  then we get
${\begin{array}{rl}-bx-b^{2}y+y+bz&=0\\(-y)b^{2}+(z-x)b+y&=0\end{array}}$
$b={\frac {-(z-x)\pm {\sqrt {(z-x)^{2}-4(-y)(y)}}}{-2y}}\qquad y\neq 0$
(note that if $x$ , $y$ , and $z$  are real then these two $b$ 's are real as the discriminant is positive). By the same token, if we (arbitrarily) let $c=1$  then
${\begin{array}{rl}dx+d^{2}y-y-dz&=0\\(y)d^{2}+(x-z)d-y&=0\end{array}}$
$d={\frac {-(x-z)\pm {\sqrt {(x-z)^{2}-4(y)(-y)}}}{2y}}\qquad y\neq 0$
(as above, if $x,y,z\in \mathbb {R}$  then this discriminant is positive so a symmetric, real, $2\!\times \!2$  matrix is similar to a real diagonal matrix). For a check we try $x=1$ , $y=2$ , $z=1$ .
$b={\frac {0\pm {\sqrt {0+16}}}{-4}}=\mp 1\qquad d={\frac {0\pm {\sqrt {0+16}}}{4}}=\pm 1$
Note that not all four sign choices for $(b,d)$  satisfy $ad-bc\neq 0$ : with $a=c=1$ , taking $(b,d)=(1,1)$  or $(b,d)=(-1,-1)$  makes $ad-bc=0$ , so only the mixed choices give an invertible matrix.
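The check can also be carried out numerically; this NumPy sketch (an addition) uses $x=1$ , $y=2$ , $z=1$  with $a=c=1$  and picks one root of each quadratic from the text:

```python
import numpy as np

x, y, z = 1.0, 2.0, 1.0
S = np.array([[x, y], [y, z]])

a, c = 1.0, 1.0
# one root of each quadratic above: here b = -1 and d = 1
b = (-(z - x) + np.sqrt((z - x) ** 2 + 4 * y * y)) / (-2 * y)
d = (-(x - z) + np.sqrt((x - z) ** 2 + 4 * y * y)) / (2 * y)

P = np.array([[a, b], [c, d]])
D = P @ S @ np.linalg.inv(P)  # expect diag(-1, 3)
```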