# Linear Algebra/Definition and Examples of Isomorphisms/Solutions

## Solutions

This exercise is recommended for all readers.
Problem 1

Verify, using Example 1.4 as a model, that the two correspondences given before the definition are isomorphisms.

1. Call the map $f$ .
${\begin{pmatrix}a&b\end{pmatrix}}{\stackrel {f}{\longmapsto }}{\begin{pmatrix}a\\b\end{pmatrix}}$
It is one-to-one because if $f$  sends two members of the domain to the same image, that is, if $f\left({\begin{pmatrix}a&b\end{pmatrix}}\right)=f\left({\begin{pmatrix}c&d\end{pmatrix}}\right)$ , then the definition of $f$  gives that
${\begin{pmatrix}a\\b\end{pmatrix}}={\begin{pmatrix}c\\d\end{pmatrix}}$
and since column vectors are equal only if they have equal components, we have that $a=c$  and that $b=d$ . Thus, if $f$  maps two row vectors from the domain to the same column vector then the two row vectors are equal: ${\begin{pmatrix}a&b\end{pmatrix}}={\begin{pmatrix}c&d\end{pmatrix}}$ . To show that $f$  is onto we must show that any member of the codomain $\mathbb {R} ^{2}$  is the image under $f$  of some row vector. That's easy;
${\begin{pmatrix}x\\y\end{pmatrix}}$
is $f\left({\begin{pmatrix}x&y\end{pmatrix}}\right)$ . The computation for preservation of addition is this.
$f\left({\begin{pmatrix}a&b\end{pmatrix}}+{\begin{pmatrix}c&d\end{pmatrix}}\right)=f\left({\begin{pmatrix}a+c&b+d\end{pmatrix}}\right)={\begin{pmatrix}a+c\\b+d\end{pmatrix}}={\begin{pmatrix}a\\b\end{pmatrix}}+{\begin{pmatrix}c\\d\end{pmatrix}}=f\left({\begin{pmatrix}a&b\end{pmatrix}}\right)+f\left({\begin{pmatrix}c&d\end{pmatrix}}\right)$
The computation for preservation of scalar multiplication is similar.
$f\left(r\cdot {\begin{pmatrix}a&b\end{pmatrix}}\right)=f\left({\begin{pmatrix}ra&rb\end{pmatrix}}\right)={\begin{pmatrix}ra\\rb\end{pmatrix}}=r\cdot {\begin{pmatrix}a\\b\end{pmatrix}}=r\cdot f\left({\begin{pmatrix}a&b\end{pmatrix}}\right)$
2. Denote the map from Example 1.2 by $f$ . To show that it is one-to-one, assume that $f(a_{0}+a_{1}x+a_{2}x^{2})=f(b_{0}+b_{1}x+b_{2}x^{2})$ . Then by the definition of the function,
${\begin{pmatrix}a_{0}\\a_{1}\\a_{2}\end{pmatrix}}={\begin{pmatrix}b_{0}\\b_{1}\\b_{2}\end{pmatrix}}$
and so $a_{0}=b_{0}$  and $a_{1}=b_{1}$  and $a_{2}=b_{2}$ . Thus $a_{0}+a_{1}x+a_{2}x^{2}=b_{0}+b_{1}x+b_{2}x^{2}$ , and consequently $f$  is one-to-one. The function $f$  is onto because there is a polynomial sent to
${\begin{pmatrix}a\\b\\c\end{pmatrix}}$
by $f$ , namely, $a+bx+cx^{2}$ . As for structure, this shows that $f$  preserves addition
${\begin{array}{rl}f\left(\,(a_{0}+a_{1}x+a_{2}x^{2})+(b_{0}+b_{1}x+b_{2}x^{2})\,\right)&=f\left(\,(a_{0}+b_{0})+(a_{1}+b_{1})x+(a_{2}+b_{2})x^{2}\,\right)\\&={\begin{pmatrix}a_{0}+b_{0}\\a_{1}+b_{1}\\a_{2}+b_{2}\end{pmatrix}}\\&={\begin{pmatrix}a_{0}\\a_{1}\\a_{2}\end{pmatrix}}+{\begin{pmatrix}b_{0}\\b_{1}\\b_{2}\end{pmatrix}}\\&=f(a_{0}+a_{1}x+a_{2}x^{2})+f(b_{0}+b_{1}x+b_{2}x^{2})\end{array}}$
and this shows
${\begin{array}{rl}f(\,r(a_{0}+a_{1}x+a_{2}x^{2})\,)&=f(\,(ra_{0})+(ra_{1})x+(ra_{2})x^{2}\,)\\&={\begin{pmatrix}ra_{0}\\ra_{1}\\ra_{2}\end{pmatrix}}\\&=r\cdot {\begin{pmatrix}a_{0}\\a_{1}\\a_{2}\end{pmatrix}}\\&=r\,f(a_{0}+a_{1}x+a_{2}x^{2})\end{array}}$
that it preserves scalar multiplication.
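The preservation arguments rest on the fact that adding polynomials (or scaling one) acts componentwise on their coefficients. A quick numerical check of that fact, a sketch of my own rather than part of the text, with `poly_eval` an invented helper:

```python
def poly_eval(coeffs, t):
    # evaluate a0 + a1*t + a2*t^2 from its coefficient triple
    return sum(a * t**i for i, a in enumerate(coeffs))

p, q = (1, -2, 3), (4, 0, -5)
# the polynomial with componentwise-summed coefficients ...
s = tuple(a + b for a, b in zip(p, q))
# ... agrees with the sum of the two polynomials at every sample point,
# which is why f carries addition in P_2 to addition of column vectors
for t in range(-3, 4):
    assert poly_eval(s, t) == poly_eval(p, t) + poly_eval(q, t)
# scalar multiplication scales the coefficients componentwise in the same way
r = 7
assert all(poly_eval(tuple(r * a for a in p), t) == r * poly_eval(p, t)
           for t in range(-3, 4))
```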
This exercise is recommended for all readers.
Problem 2

For the map $f:{\mathcal {P}}_{1}\to \mathbb {R} ^{2}$  given by

$a+bx{\stackrel {f}{\longmapsto }}{\begin{pmatrix}a-b\\b\end{pmatrix}}$

Find the image of each of these elements of the domain.

1. $3-2x$
2. $2+2x$
3. $x$

Show that this map is an isomorphism.

These are the images.

1. ${\begin{pmatrix}5\\-2\end{pmatrix}}$
2. ${\begin{pmatrix}0\\2\end{pmatrix}}$
3. ${\begin{pmatrix}-1\\1\end{pmatrix}}$

To prove that $f$  is one-to-one, assume that it maps two linear polynomials to the same image: $f(a_{1}+b_{1}x)=f(a_{2}+b_{2}x)$ . Then

${\begin{pmatrix}a_{1}-b_{1}\\b_{1}\end{pmatrix}}={\begin{pmatrix}a_{2}-b_{2}\\b_{2}\end{pmatrix}}$

and so, since column vectors are equal only when their components are equal, $b_{1}=b_{2}$  and $a_{1}=a_{2}$ . That shows that the two linear polynomials are equal, and so $f$  is one-to-one.

To show that $f$  is onto, note that this member of the codomain

${\begin{pmatrix}s\\t\end{pmatrix}}$

is the image of this member of the domain $(s+t)+tx$ .

To check that $f$  preserves structure, we can use item 2 of Lemma 1.9.

${\begin{array}{rl}f\left(c_{1}\cdot (a_{1}+b_{1}x)+c_{2}\cdot (a_{2}+b_{2}x)\right)&=f\left((c_{1}a_{1}+c_{2}a_{2})+(c_{1}b_{1}+c_{2}b_{2})x\right)\\&={\begin{pmatrix}(c_{1}a_{1}+c_{2}a_{2})-(c_{1}b_{1}+c_{2}b_{2})\\c_{1}b_{1}+c_{2}b_{2}\end{pmatrix}}\\&=c_{1}\cdot {\begin{pmatrix}a_{1}-b_{1}\\b_{1}\end{pmatrix}}+c_{2}\cdot {\begin{pmatrix}a_{2}-b_{2}\\b_{2}\end{pmatrix}}\\&=c_{1}\cdot f(a_{1}+b_{1}x)+c_{2}\cdot f(a_{2}+b_{2}x)\end{array}}$
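The pieces of this solution can be spot-checked numerically. Below is a small sketch of my own, not part of the text, modeling $a+bx$ as the pair `(a, b)`; `f_inv` encodes the preimage $(s+t)+tx$ from the onto argument.

```python
def f(p):
    a, b = p                     # p represents a + b*x
    return (a - b, b)

def f_inv(v):
    s, t = v
    return (s + t, t)            # the preimage (s+t) + t*x from the onto argument

# the three images computed in the exercise
assert f((3, -2)) == (5, -2)
assert f((2, 2)) == (0, 2)
assert f((0, 1)) == (-1, 1)

# f_inv really inverts f, so f is one-to-one and onto
for p in [(3, -2), (2, 2), (0, 1), (7, 5)]:
    assert f_inv(f(p)) == p

# preservation of a linear combination (item 2 of Lemma 1.9), on sample scalars
c1, c2, p1, p2 = 2, -3, (1, 4), (5, -1)
combo = tuple(c1 * u + c2 * v for u, v in zip(p1, p2))
assert f(combo) == tuple(c1 * u + c2 * v for u, v in zip(f(p1), f(p2)))
```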
Problem 3

Show that the natural map $f_{1}$  from Example 1.5 is an isomorphism.

To verify it is one-to-one, assume that $f_{1}(c_{1}x+c_{2}y+c_{3}z)=f_{1}(d_{1}x+d_{2}y+d_{3}z)$ . Then $c_{1}+c_{2}x+c_{3}x^{2}=d_{1}+d_{2}x+d_{3}x^{2}$  by the definition of $f_{1}$ . Members of ${\mathcal {P}}_{2}$  are equal only when they have the same coefficients, so this implies that $c_{1}=d_{1}$  and $c_{2}=d_{2}$  and $c_{3}=d_{3}$ . Therefore $f_{1}(c_{1}x+c_{2}y+c_{3}z)=f_{1}(d_{1}x+d_{2}y+d_{3}z)$  implies that $c_{1}x+c_{2}y+c_{3}z=d_{1}x+d_{2}y+d_{3}z$ , and so $f_{1}$  is one-to-one.

To verify that it is onto, consider an arbitrary member of the codomain $a_{1}+a_{2}x+a_{3}x^{2}$  and observe that it is indeed the image of a member of the domain, namely, it is $f_{1}(a_{1}x+a_{2}y+a_{3}z)$ . (For instance, $0+3x+6x^{2}=f_{1}(0x+3y+6z)$ .)

The computation checking that $f_{1}$  preserves addition is this.

${\begin{array}{rl}f_{1}\left(\,(c_{1}x+c_{2}y+c_{3}z)+(d_{1}x+d_{2}y+d_{3}z)\,\right)&=f_{1}\left(\,(c_{1}+d_{1})x+(c_{2}+d_{2})y+(c_{3}+d_{3})z\,\right)\\&=(c_{1}+d_{1})+(c_{2}+d_{2})x+(c_{3}+d_{3})x^{2}\\&=(c_{1}+c_{2}x+c_{3}x^{2})+(d_{1}+d_{2}x+d_{3}x^{2})\\&=f_{1}(c_{1}x+c_{2}y+c_{3}z)+f_{1}(d_{1}x+d_{2}y+d_{3}z)\end{array}}$

The check that $f_{1}$  preserves scalar multiplication is this.

${\begin{array}{rl}f_{1}(\,r\cdot (c_{1}x+c_{2}y+c_{3}z)\,)&=f_{1}(\,(rc_{1})x+(rc_{2})y+(rc_{3})z\,)\\&=(rc_{1})+(rc_{2})x+(rc_{3})x^{2}\\&=r\cdot (c_{1}+c_{2}x+c_{3}x^{2})\\&=r\cdot f_{1}(c_{1}x+c_{2}y+c_{3}z)\end{array}}$
This exercise is recommended for all readers.
Problem 4

Decide whether each map is an isomorphism (if it is an isomorphism then prove it and if it isn't then state a condition that it fails to satisfy).

1. $f:{\mathcal {M}}_{2\!\times \!2}\to \mathbb {R}$  given by
${\begin{pmatrix}a&b\\c&d\end{pmatrix}}\mapsto ad-bc$
2. $f:{\mathcal {M}}_{2\!\times \!2}\to \mathbb {R} ^{4}$  given by
${\begin{pmatrix}a&b\\c&d\end{pmatrix}}\mapsto {\begin{pmatrix}a+b+c+d\\a+b+c\\a+b\\a\end{pmatrix}}$
3. $f:{\mathcal {M}}_{2\!\times \!2}\to {\mathcal {P}}_{3}$  given by
${\begin{pmatrix}a&b\\c&d\end{pmatrix}}\mapsto c+(d+c)x+(b+a)x^{2}+ax^{3}$
4. $f:{\mathcal {M}}_{2\!\times \!2}\to {\mathcal {P}}_{3}$  given by
${\begin{pmatrix}a&b\\c&d\end{pmatrix}}\mapsto c+(d+c)x+(b+a+1)x^{2}+ax^{3}$
1. No; this map is not one-to-one. In particular, the matrix of all zeroes and the matrix of all ones are mapped to the same image, since each has determinant $0$ .
2. Yes, this is an isomorphism. It is one-to-one:
${\text{if }}f({\begin{pmatrix}a_{1}&b_{1}\\c_{1}&d_{1}\end{pmatrix}})=f({\begin{pmatrix}a_{2}&b_{2}\\c_{2}&d_{2}\end{pmatrix}}){\text{ then }}{\begin{pmatrix}a_{1}+b_{1}+c_{1}+d_{1}\\a_{1}+b_{1}+c_{1}\\a_{1}+b_{1}\\a_{1}\end{pmatrix}}={\begin{pmatrix}a_{2}+b_{2}+c_{2}+d_{2}\\a_{2}+b_{2}+c_{2}\\a_{2}+b_{2}\\a_{2}\end{pmatrix}}$
gives that $a_{1}=a_{2}$ , and that $b_{1}=b_{2}$ , and that $c_{1}=c_{2}$ , and that $d_{1}=d_{2}$ . It is onto, since this shows
${\begin{pmatrix}x\\y\\z\\w\end{pmatrix}}=f({\begin{pmatrix}w&z-w\\y-z&x-y\end{pmatrix}})$
that any four-tall vector is the image of a $2\!\times \!2$  matrix. Finally, it preserves combinations
${\begin{array}{rl}f(\,r_{1}\cdot {\begin{pmatrix}a_{1}&b_{1}\\c_{1}&d_{1}\end{pmatrix}}+r_{2}\cdot {\begin{pmatrix}a_{2}&b_{2}\\c_{2}&d_{2}\end{pmatrix}}\,)&=f({\begin{pmatrix}r_{1}a_{1}+r_{2}a_{2}&r_{1}b_{1}+r_{2}b_{2}\\r_{1}c_{1}+r_{2}c_{2}&r_{1}d_{1}+r_{2}d_{2}\end{pmatrix}})\\&={\begin{pmatrix}r_{1}a_{1}+\dots +r_{2}d_{2}\\r_{1}a_{1}+\dots +r_{2}c_{2}\\r_{1}a_{1}+\dots +r_{2}b_{2}\\r_{1}a_{1}+r_{2}a_{2}\end{pmatrix}}\\&=r_{1}\cdot {\begin{pmatrix}a_{1}+\dots +d_{1}\\a_{1}+\dots +c_{1}\\a_{1}+b_{1}\\a_{1}\end{pmatrix}}+r_{2}\cdot {\begin{pmatrix}a_{2}+\dots +d_{2}\\a_{2}+\dots +c_{2}\\a_{2}+b_{2}\\a_{2}\end{pmatrix}}\\&=r_{1}\cdot f({\begin{pmatrix}a_{1}&b_{1}\\c_{1}&d_{1}\end{pmatrix}})+r_{2}\cdot f({\begin{pmatrix}a_{2}&b_{2}\\c_{2}&d_{2}\end{pmatrix}})\end{array}}$
and so item 2 of Lemma 1.9 shows that it preserves structure.
3. Yes, it is an isomorphism. To show that it is one-to-one, we suppose that two members of the domain have the same image under $f$ .
$f({\begin{pmatrix}a_{1}&b_{1}\\c_{1}&d_{1}\end{pmatrix}})=f({\begin{pmatrix}a_{2}&b_{2}\\c_{2}&d_{2}\end{pmatrix}})$
This gives, by the definition of $f$ , that $c_{1}+(d_{1}+c_{1})x+(b_{1}+a_{1})x^{2}+a_{1}x^{3}=c_{2}+(d_{2}+c_{2})x+(b_{2}+a_{2})x^{2}+a_{2}x^{3}$  and then the fact that polynomials are equal only when their coefficients are equal gives a set of linear equations
${\begin{array}{rl}c_{1}&=c_{2}\\d_{1}+c_{1}&=d_{2}+c_{2}\\b_{1}+a_{1}&=b_{2}+a_{2}\\a_{1}&=a_{2}\end{array}}$
that has only the solution $a_{1}=a_{2}$ , $b_{1}=b_{2}$ , $c_{1}=c_{2}$ , and $d_{1}=d_{2}$ . To show that $f$  is onto, we note that $p+qx+rx^{2}+sx^{3}$  is the image under $f$  of this matrix.
${\begin{pmatrix}s&r-s\\p&q-p\end{pmatrix}}$
We can check that $f$  preserves structure by using item 2 of Lemma 1.9.
${\begin{array}{rl}f(r_{1}\cdot {\begin{pmatrix}a_{1}&b_{1}\\c_{1}&d_{1}\end{pmatrix}}+r_{2}\cdot {\begin{pmatrix}a_{2}&b_{2}\\c_{2}&d_{2}\end{pmatrix}})&=f({\begin{pmatrix}r_{1}a_{1}+r_{2}a_{2}&r_{1}b_{1}+r_{2}b_{2}\\r_{1}c_{1}+r_{2}c_{2}&r_{1}d_{1}+r_{2}d_{2}\end{pmatrix}})\\&={\begin{array}{rl}&(r_{1}c_{1}+r_{2}c_{2})+(r_{1}d_{1}+r_{2}d_{2}+r_{1}c_{1}+r_{2}c_{2})x\\&\,\quad +(r_{1}b_{1}+r_{2}b_{2}+r_{1}a_{1}+r_{2}a_{2})x^{2}+(r_{1}a_{1}+r_{2}a_{2})x^{3}\end{array}}\\&={\begin{array}{rl}&r_{1}\cdot \left(c_{1}+(d_{1}+c_{1})x+(b_{1}+a_{1})x^{2}+a_{1}x^{3}\right)\\&\,\quad +r_{2}\cdot \left(c_{2}+(d_{2}+c_{2})x+(b_{2}+a_{2})x^{2}+a_{2}x^{3}\right)\end{array}}\\&=r_{1}\cdot f({\begin{pmatrix}a_{1}&b_{1}\\c_{1}&d_{1}\end{pmatrix}})+r_{2}\cdot f({\begin{pmatrix}a_{2}&b_{2}\\c_{2}&d_{2}\end{pmatrix}})\end{array}}$
4. No, this map does not preserve structure. For instance, it does not send the zero matrix to the zero polynomial.
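Each verdict above can be spot-checked numerically. This sketch is my own and not part of the text; `f2` and `f4` are invented names for the maps of items 2 and 4. It confirms the failure in item 1, the preimage formula in item 2, and the failure in item 4.

```python
# (1) determinant map: two different matrices with the same image
det = lambda a, b, c, d: a*d - b*c
assert det(0, 0, 0, 0) == det(1, 1, 1, 1) == 0       # not one-to-one

# (2) the triangular map, and the preimage formula from the onto argument
def f2(a, b, c, d):
    return (a + b + c + d, a + b + c, a + b, a)

x, y, z, w = 10, 7, 4, 1
assert f2(w, z - w, y - z, x - y) == (x, y, z, w)    # every 4-tall vector is hit

# (4) the shifted map sends the zero matrix to a nonzero polynomial
def f4(a, b, c, d):
    return (c, d + c, b + a + 1, a)   # coefficients of c + (d+c)x + (b+a+1)x^2 + ax^3

assert f4(0, 0, 0, 0) != (0, 0, 0, 0)                # zero is not sent to zero
```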
Problem 5

Show that the map $f:\mathbb {R} ^{1}\to \mathbb {R} ^{1}$  given by $f(x)=x^{3}$  is one-to-one and onto. Is it an isomorphism?

It is one-to-one and onto, a correspondence, because it has an inverse (namely, $f^{-1}(x)={\sqrt[{3}]{x}}$ ). However, it is not an isomorphism. For instance, $f(1)+f(1)\neq f(1+1)$ .
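A two-line numerical check of this answer (my own sketch, not part of the text):

```python
f = lambda x: x**3

# f is a correspondence: cubing is inverted by the cube root (exact on this integer)
assert f(2) == 8 and round(8 ** (1 / 3)) == 2

# but f does not preserve addition, so it is not an isomorphism
assert f(1) + f(1) == 2
assert f(1 + 1) == 8
assert f(1) + f(1) != f(1 + 1)
```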

This exercise is recommended for all readers.
Problem 6

Refer to Example 1.1. Produce two more isomorphisms (of course, that they satisfy the conditions in the definition of isomorphism must be verified).

Many maps are possible. Here are two.

${\begin{pmatrix}a&b\end{pmatrix}}\mapsto {\begin{pmatrix}b\\a\end{pmatrix}}\quad {\text{and}}\quad {\begin{pmatrix}a&b\end{pmatrix}}\mapsto {\begin{pmatrix}2a\\b\end{pmatrix}}$

The verifications are straightforward adaptations of the others above.

Problem 7

Refer to Example 1.2. Produce two more isomorphisms (and verify that they satisfy the conditions).

Here are two.

$a_{0}+a_{1}x+a_{2}x^{2}\mapsto {\begin{pmatrix}a_{1}\\a_{0}\\a_{2}\end{pmatrix}}\quad {\text{and}}\quad a_{0}+a_{1}x+a_{2}x^{2}\mapsto {\begin{pmatrix}a_{0}+a_{1}\\a_{1}\\a_{2}\end{pmatrix}}$

Verification is straightforward (for the second, to show that it is onto, note that

${\begin{pmatrix}s\\t\\u\end{pmatrix}}$

is the image of $(s-t)+tx+ux^{2}$ ).

This exercise is recommended for all readers.
Problem 8

Show that, although $\mathbb {R} ^{2}$  is not itself a subspace of $\mathbb {R} ^{3}$ , it is isomorphic to the $xy$ -plane subspace of $\mathbb {R} ^{3}$ .

The space $\mathbb {R} ^{2}$  is not a subspace of $\mathbb {R} ^{3}$  because it is not a subset of $\mathbb {R} ^{3}$ . The two-tall vectors in $\mathbb {R} ^{2}$  are not members of $\mathbb {R} ^{3}$ .

The natural isomorphism $\iota :\mathbb {R} ^{2}\to \mathbb {R} ^{3}$  (called the injection map) is this.

${\begin{pmatrix}x\\y\end{pmatrix}}{\stackrel {\iota }{\longmapsto }}{\begin{pmatrix}x\\y\\0\end{pmatrix}}$

This map is one-to-one because

$\iota ({\begin{pmatrix}x_{1}\\y_{1}\end{pmatrix}})=\iota ({\begin{pmatrix}x_{2}\\y_{2}\end{pmatrix}})\quad {\text{implies}}\quad {\begin{pmatrix}x_{1}\\y_{1}\\0\end{pmatrix}}={\begin{pmatrix}x_{2}\\y_{2}\\0\end{pmatrix}}$

which in turn implies that $x_{1}=x_{2}$  and $y_{1}=y_{2}$ , and therefore the initial two two-tall vectors are equal.

Because

${\begin{pmatrix}x\\y\\0\end{pmatrix}}=\iota ({\begin{pmatrix}x\\y\end{pmatrix}})$

this map is onto the $xy$ -plane.

To show that this map preserves structure, we will use item 2 of Lemma 1.9 and show

$\iota (c_{1}\cdot {\begin{pmatrix}x_{1}\\y_{1}\end{pmatrix}}+c_{2}\cdot {\begin{pmatrix}x_{2}\\y_{2}\end{pmatrix}})=\iota ({\begin{pmatrix}c_{1}x_{1}+c_{2}x_{2}\\c_{1}y_{1}+c_{2}y_{2}\end{pmatrix}})={\begin{pmatrix}c_{1}x_{1}+c_{2}x_{2}\\c_{1}y_{1}+c_{2}y_{2}\\0\end{pmatrix}}$
$=c_{1}\cdot {\begin{pmatrix}x_{1}\\y_{1}\\0\end{pmatrix}}+c_{2}\cdot {\begin{pmatrix}x_{2}\\y_{2}\\0\end{pmatrix}}=c_{1}\cdot \iota ({\begin{pmatrix}x_{1}\\y_{1}\end{pmatrix}})+c_{2}\cdot \iota ({\begin{pmatrix}x_{2}\\y_{2}\end{pmatrix}})$

that it preserves combinations of two vectors.

Problem 9

Find two isomorphisms between $\mathbb {R} ^{16}$  and ${\mathcal {M}}_{4\!\times \!4}$ .

Here are two:

${\begin{pmatrix}r_{1}\\r_{2}\\\vdots \\r_{16}\end{pmatrix}}\mapsto {\begin{pmatrix}r_{1}&r_{2}&r_{3}&r_{4}\\r_{5}&r_{6}&r_{7}&r_{8}\\r_{9}&r_{10}&r_{11}&r_{12}\\r_{13}&r_{14}&r_{15}&r_{16}\end{pmatrix}}\quad {\text{and}}\quad {\begin{pmatrix}r_{1}\\r_{2}\\\vdots \\r_{16}\end{pmatrix}}\mapsto {\begin{pmatrix}r_{1}&r_{5}&r_{9}&r_{13}\\r_{2}&r_{6}&r_{10}&r_{14}\\r_{3}&r_{7}&r_{11}&r_{15}\\r_{4}&r_{8}&r_{12}&r_{16}\end{pmatrix}}$

Verification that each is an isomorphism is easy.

This exercise is recommended for all readers.
Problem 10

For what $k$  is ${\mathcal {M}}_{m\!\times \!n}$  isomorphic to $\mathbb {R} ^{k}$ ?

When $k$  is the product $k=mn$ , here is an isomorphism.

${\begin{pmatrix}r_{1}&r_{2}&\ldots &r_{n}\\r_{n+1}&r_{n+2}&\ldots &r_{2n}\\&\vdots \\r_{(m-1)n+1}&r_{(m-1)n+2}&\ldots &r_{mn}\end{pmatrix}}\mapsto {\begin{pmatrix}r_{1}\\r_{2}\\\vdots \\r_{mn}\end{pmatrix}}$

Checking that this is an isomorphism is easy.
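This correspondence is the familiar row-major flattening of a matrix. A small Python sketch of my own (not part of the text; `vec` and `unvec` are invented names), with $m=2$ and $n=3$:

```python
# row-major correspondence between m-by-n matrices and R^(mn)
def vec(M):
    return [x for row in M for x in row]

def unvec(v, m, n):
    return [v[i * n:(i + 1) * n] for i in range(m)]

M = [[1, 2, 3],
     [4, 5, 6]]
v = vec(M)
assert v == [1, 2, 3, 4, 5, 6]
assert unvec(v, 2, 3) == M           # the map is invertible, hence a correspondence

# linearity on a sample combination of two matrices
N = [[7, 8, 9], [0, -1, -2]]
c1, c2 = 2, -1
comb = [[c1 * a + c2 * b for a, b in zip(r1, r2)] for r1, r2 in zip(M, N)]
assert vec(comb) == [c1 * a + c2 * b for a, b in zip(vec(M), vec(N))]
```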

Problem 11

For what $k$  is ${\mathcal {P}}_{k}$  isomorphic to $\mathbb {R} ^{n}$ ?

If $n\geq 1$  then ${\mathcal {P}}_{n-1}\cong \mathbb {R} ^{n}$ . (If we take ${\mathcal {P}}_{-1}$  and $\mathbb {R} ^{0}$  to be trivial vector spaces, then the relationship extends one dimension lower.) The natural isomorphism between them is this.

$a_{0}+a_{1}x+\dots +a_{n-1}x^{n-1}\mapsto {\begin{pmatrix}a_{0}\\a_{1}\\\vdots \\a_{n-1}\end{pmatrix}}$

Checking that it is an isomorphism is straightforward.

Problem 12

Prove that the map in Example 1.7, from ${\mathcal {P}}_{5}$  to ${\mathcal {P}}_{5}$  given by $p(x)\mapsto p(x-1)$ , is a vector space isomorphism.

This is the map, expanded.

${\begin{array}{rl}f(a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+a_{4}x^{4}+a_{5}x^{5})&={\begin{array}{rl}&a_{0}+a_{1}(x-1)+a_{2}(x-1)^{2}+a_{3}(x-1)^{3}\\&\,\quad +a_{4}(x-1)^{4}+a_{5}(x-1)^{5}\end{array}}\\&={\begin{array}{rl}&a_{0}+a_{1}(x-1)+a_{2}(x^{2}-2x+1)\\&\,\quad +a_{3}(x^{3}-3x^{2}+3x-1)\\&\,\quad +a_{4}(x^{4}-4x^{3}+6x^{2}-4x+1)\\&\,\quad +a_{5}(x^{5}-5x^{4}+10x^{3}-10x^{2}+5x-1)\end{array}}\\&={\begin{array}{rl}&(a_{0}-a_{1}+a_{2}-a_{3}+a_{4}-a_{5})\\&\,\quad +(a_{1}-2a_{2}+3a_{3}-4a_{4}+5a_{5})x\\&\,\quad +(a_{2}-3a_{3}+6a_{4}-10a_{5})x^{2}+(a_{3}-4a_{4}+10a_{5})x^{3}\\&\,\quad +(a_{4}-5a_{5})x^{4}+a_{5}x^{5}\end{array}}\end{array}}$

To finish checking that it is an isomorphism, we apply item 2 of Lemma 1.9 and show that it preserves linear combinations of two polynomials. Briefly, the check goes like this.

$f(c\cdot (a_{0}+a_{1}x+\dots +a_{5}x^{5})+d\cdot (b_{0}+b_{1}x+\dots +b_{5}x^{5}))$
$=\dots =(ca_{0}-ca_{1}+ca_{2}-ca_{3}+ca_{4}-ca_{5}+db_{0}-db_{1}+db_{2}-db_{3}+db_{4}-db_{5})+\dots +(ca_{5}+db_{5})x^{5}$
$=\dots =c\cdot f(a_{0}+a_{1}x+\dots +a_{5}x^{5})+d\cdot f(b_{0}+b_{1}x+\dots +b_{5}x^{5})$
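The expansion above can be verified mechanically. This sketch is my own, not part of the text; `shift_by_minus_one` is an invented name. It computes the coefficients of $p(x-1)$ and checks them against the formulas in the expansion and against direct evaluation.

```python
def shift_by_minus_one(coeffs):
    """Coefficients (low degree first) of p(x-1), given those of p."""
    out = [0] * len(coeffs)
    pw = [1]                          # coefficients of (x-1)^i, starting at i = 0
    for a in coeffs:
        for j, c in enumerate(pw):
            out[j] += a * c
        nxt = [0] * (len(pw) + 1)     # multiply pw by (x - 1)
        for j, c in enumerate(pw):
            nxt[j + 1] += c
            nxt[j] -= c
        pw = nxt
    return out

a = [1, 2, 3, 4, 5, 6]                # sample a0 .. a5
q = shift_by_minus_one(a)

# constant term matches the expansion: a0 - a1 + a2 - a3 + a4 - a5
assert q[0] == 1 - 2 + 3 - 4 + 5 - 6
# x coefficient matches: a1 - 2*a2 + 3*a3 - 4*a4 + 5*a5
assert q[1] == 2 - 2*3 + 3*4 - 4*5 + 5*6
# and q really evaluates to p(t-1) at sample points
p = lambda cs, t: sum(c * t**i for i, c in enumerate(cs))
for t in range(-3, 4):
    assert p(q, t) == p(a, t - 1)
```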
Problem 13

Why, in Lemma 1.8, must there be a ${\vec {v}}\in V$ ? That is, why must $V$  be nonempty?

No vector space has the empty set underlying it, because every vector space contains a zero vector. So we can take ${\vec {v}}$  to be the zero vector.

Problem 14

Are any two trivial spaces isomorphic?

Yes; where the two spaces are $\{{\vec {a}}\}$  and $\{{\vec {b}}\}$ , the map sending ${\vec {a}}$  to ${\vec {b}}$  is clearly one-to-one and onto, and also preserves what little structure there is.

Problem 15

In the proof of Lemma 1.9, what about the zero-summands case (that is, if $n$  is zero)?

A linear combination of $n=0$  vectors adds to the zero vector and so Lemma 1.8 shows that the three statements are equivalent in this case.

Problem 16

Show that any isomorphism $f:{\mathcal {P}}_{0}\to \mathbb {R} ^{1}$  has the form $a\mapsto ka$  for some nonzero real number $k$ .

Consider the basis $\langle 1\rangle$  for ${\mathcal {P}}_{0}$  and let $f(1)\in \mathbb {R}$  be $k$ . For any $a\in {\mathcal {P}}_{0}$  we have that $f(a)=f(a\cdot 1)=af(1)=ak$  and so $f$ 's action is multiplication by $k$ . Note that $k\neq 0$  or else the map is not one-to-one. (Incidentally, any such map $a\mapsto ka$  is an isomorphism, as is easy to check.)

This exercise is recommended for all readers.
Problem 17

These prove that isomorphism is an equivalence relation.

1. Show that the identity map ${\mbox{id}}:V\to V$  is an isomorphism. Thus, any vector space is isomorphic to itself.
2. Show that if $f:V\to W$  is an isomorphism then so is its inverse $f^{-1}:W\to V$ . Thus, if $V$  is isomorphic to $W$  then also $W$  is isomorphic to $V$ .
3. Show that a composition of isomorphisms is an isomorphism: if $f:V\to W$  is an isomorphism and $g:W\to U$  is an isomorphism then so also is $g\circ f:V\to U$ . Thus, if $V$  is isomorphic to $W$  and $W$  is isomorphic to $U$ , then also $V$  is isomorphic to $U$ .

In each item, following item 2 of Lemma 1.9, we show that the map preserves structure by showing that it preserves linear combinations of two members of the domain.

1. The identity map is clearly one-to-one and onto. For linear combinations the check is easy.
${\mbox{id}}(c_{1}\cdot {\vec {v}}_{1}+c_{2}\cdot {\vec {v}}_{2})=c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2}=c_{1}\cdot {\mbox{id}}({\vec {v}}_{1})+c_{2}\cdot {\mbox{id}}({\vec {v}}_{2})$
2. The inverse of a correspondence is also a correspondence (as stated in the appendix), so we need only check that the inverse preserves linear combinations. Assume that ${\vec {w}}_{1}=f({\vec {v}}_{1})$  (so $f^{-1}({\vec {w}}_{1})={\vec {v}}_{1}$ ) and assume that ${\vec {w}}_{2}=f({\vec {v}}_{2})$ .
${\begin{array}{rl}f^{-1}(c_{1}\cdot {\vec {w}}_{1}+c_{2}\cdot {\vec {w}}_{2})&=f^{-1}{\bigl (}\,c_{1}\cdot f({\vec {v}}_{1})+c_{2}\cdot f({\vec {v}}_{2})\,{\bigr )}\\&=f^{-1}{\bigl (}\,f(c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2})\,{\bigr )}\\&=c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2}\\&=c_{1}\cdot f^{-1}({\vec {w}}_{1})+c_{2}\cdot f^{-1}({\vec {w}}_{2})\end{array}}$
3. The composition of two correspondences is a correspondence (as stated in the appendix), so we need only check that the composition map preserves linear combinations.
${\begin{array}{rl}g\circ f\,{\bigl (}c_{1}\cdot {\vec {v}}_{1}+c_{2}\cdot {\vec {v}}_{2}{\bigr )}&=g{\bigl (}\,f(c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2})\,{\bigr )}\\&=g{\bigl (}\,c_{1}\cdot f({\vec {v}}_{1})+c_{2}\cdot f({\vec {v}}_{2})\,{\bigr )}\\&=c_{1}\cdot g{\bigl (}f({\vec {v}}_{1}){\bigr )}+c_{2}\cdot g{\bigl (}f({\vec {v}}_{2}){\bigr )}\\&=c_{1}\cdot (g\circ f)\,({\vec {v}}_{1})+c_{2}\cdot (g\circ f)\,({\vec {v}}_{2})\end{array}}$
Problem 18

Suppose that $f:V\to W$  preserves structure. Show that $f$  is one-to-one if and only if the unique member of $V$  mapped by $f$  to ${\vec {0}}_{W}$  is ${\vec {0}}_{V}$ .

One direction is easy: by definition, if $f$  is one-to-one then for any ${\vec {w}}\in W$  at most one ${\vec {v}}\in V$  has $f({\vec {v}}\,)={\vec {w}}$ , and so in particular, at most one member of $V$  is mapped to ${\vec {0}}_{W}$ . The proof of Lemma 1.8 does not use the fact that the map is a correspondence and therefore shows that any structure-preserving map $f$  sends ${\vec {0}}_{V}$  to ${\vec {0}}_{W}$ .

For the other direction, assume that the only member of $V$  that is mapped to ${\vec {0}}_{W}$  is ${\vec {0}}_{V}$ . To show that $f$  is one-to-one assume that $f({\vec {v}}_{1})=f({\vec {v}}_{2})$ . Then $f({\vec {v}}_{1})-f({\vec {v}}_{2})={\vec {0}}_{W}$  and so $f({\vec {v}}_{1}-{\vec {v}}_{2})={\vec {0}}_{W}$ . Consequently ${\vec {v}}_{1}-{\vec {v}}_{2}={\vec {0}}_{V}$ , so ${\vec {v}}_{1}={\vec {v}}_{2}$ , and so $f$  is one-to-one.
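The contrapositive of this criterion can be seen in a concrete instance (an illustration of my own, not from the text): a structure-preserving map on $\mathbb {R} ^{2}$  that sends some nonzero vector to ${\vec {0}}$  must collapse two distinct inputs.

```python
def g(v):
    # a structure-preserving (linear) map on R^2 with a nontrivial kernel
    x, y = v
    return (x + 2*y, 2*x + 4*y)

# g sends the nonzero vector (2, -1) to the zero vector ...
assert g((2, -1)) == (0, 0)
# ... so it cannot be one-to-one: these two distinct inputs share an image,
# and indeed they differ by exactly the kernel vector (2, -1)
assert g((1, 1)) == g((3, 0))
assert (1, 1) != (3, 0)
```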

Problem 19

Suppose that $f:V\to W$  is an isomorphism. Prove that the set $\{{\vec {v}}_{1},\dots ,{\vec {v}}_{k}\}\subseteq V$  is linearly dependent if and only if the set of images $\{f({\vec {v}}_{1}),\dots ,f({\vec {v}}_{k})\}\subseteq W$  is linearly dependent.

We will prove something stronger: not only is the existence of a dependence preserved by isomorphism, but each instance of a dependence is preserved, that is,

${\vec {v}}_{i}=c_{1}{\vec {v}}_{1}+\cdots +c_{i-1}{\vec {v}}_{i-1}+c_{i+1}{\vec {v}}_{i+1}+\cdots +c_{k}{\vec {v}}_{k}$
$\iff f({\vec {v}}_{i})=c_{1}f({\vec {v}}_{1})+\cdots +c_{i-1}f({\vec {v}}_{i-1})+c_{i+1}f({\vec {v}}_{i+1})+\cdots +c_{k}f({\vec {v}}_{k}).$

The $\implies$  direction of this statement holds by item 3 of Lemma 1.9. The $\Longleftarrow$  direction holds by regrouping

${\begin{array}{rl}f({\vec {v}}_{i})&=c_{1}f({\vec {v}}_{1})+\dots +c_{i-1}f({\vec {v}}_{i-1})+c_{i+1}f({\vec {v}}_{i+1})+\dots +c_{k}f({\vec {v}}_{k})\\&=f(c_{1}{\vec {v}}_{1}+\dots +c_{i-1}{\vec {v}}_{i-1}+c_{i+1}{\vec {v}}_{i+1}+\dots +c_{k}{\vec {v}}_{k})\end{array}}$

and then applying the fact that $f$  is one-to-one: since the two vectors ${\vec {v}}_{i}$  and $c_{1}{\vec {v}}_{1}+\dots +c_{i-1}{\vec {v}}_{i-1}+c_{i+1}{\vec {v}}_{i+1}+\dots +c_{k}{\vec {v}}_{k}$  are mapped to the same image by $f$ , they must be equal.

This exercise is recommended for all readers.
Problem 20

Show that each type of map from Example 1.6 is an automorphism.

1. Dilation $d_{s}$  by a nonzero scalar $s$ .
2. Rotation $t_{\theta }$  through an angle $\theta$ .
3. Reflection $f_{\ell }$  over a line through the origin.

Hint. For the second and third items, polar coordinates are useful.

1. This map is one-to-one because if $d_{s}({\vec {v}}_{1})=d_{s}({\vec {v}}_{2})$  then by definition of the map, $s\cdot {\vec {v}}_{1}=s\cdot {\vec {v}}_{2}$  and so ${\vec {v}}_{1}={\vec {v}}_{2}$ , as $s$  is nonzero. This map is onto as any ${\vec {w}}\in \mathbb {R} ^{2}$  is the image of ${\vec {v}}=(1/s)\cdot {\vec {w}}$  (again, note that $s$  is nonzero). (Another way to see that this map is a correspondence is to observe that it has an inverse: the inverse of $d_{s}$  is $d_{1/s}$ .) To finish, note that this map preserves linear combinations
$d_{s}(c_{1}\cdot {\vec {v}}_{1}+c_{2}\cdot {\vec {v}}_{2})=s(c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2})=c_{1}s{\vec {v}}_{1}+c_{2}s{\vec {v}}_{2}=c_{1}\cdot d_{s}({\vec {v}}_{1})+c_{2}\cdot d_{s}({\vec {v}}_{2})$
and therefore is an isomorphism.
2. As in the prior item, we can show that the map $t_{\theta }$  is a correspondence by noting that it has an inverse, $t_{-\theta }$ . That the map preserves structure is geometrically easy to see. For instance, adding two vectors and then rotating them has the same effect as rotating first and then adding. For an algebraic argument, consider polar coordinates: the map $t_{\theta }$  sends the vector with endpoint $(r,\phi )$  to the vector with endpoint $(r,\phi +\theta )$ . Then the familiar trigonometric formulas $\cos(\phi +\theta )=\cos \phi \,\cos \theta -\sin \phi \,\sin \theta$  and $\sin(\phi +\theta )=\sin \phi \,\cos \theta +\cos \phi \,\sin \theta$  show how to express the map's action in the usual rectangular coordinate system.
${\begin{pmatrix}x\\y\end{pmatrix}}={\begin{pmatrix}r\cos \phi \\r\sin \phi \end{pmatrix}}{\stackrel {t_{\theta }}{\longmapsto }}{\begin{pmatrix}r\cos(\phi +\theta )\\r\sin(\phi +\theta )\end{pmatrix}}={\begin{pmatrix}x\cos \theta -y\sin \theta \\x\sin \theta +y\cos \theta \end{pmatrix}}$
Now the calculation for preservation of addition is routine.
${\begin{pmatrix}x_{1}+x_{2}\\y_{1}+y_{2}\end{pmatrix}}{\stackrel {t_{\theta }}{\longmapsto }}{\begin{pmatrix}(x_{1}+x_{2})\cos \theta -(y_{1}+y_{2})\sin \theta \\(x_{1}+x_{2})\sin \theta +(y_{1}+y_{2})\cos \theta \end{pmatrix}}={\begin{pmatrix}x_{1}\cos \theta -y_{1}\sin \theta \\x_{1}\sin \theta +y_{1}\cos \theta \end{pmatrix}}+{\begin{pmatrix}x_{2}\cos \theta -y_{2}\sin \theta \\x_{2}\sin \theta +y_{2}\cos \theta \end{pmatrix}}$
The calculation for preservation of scalar multiplication is similar.
3. This map is a correspondence because it has an inverse (namely, itself). As in the last item, that the reflection map preserves structure is geometrically easy to see: adding vectors and then reflecting gives the same result as reflecting first and then adding, for instance. For an algebraic proof, suppose that the line $\ell$  has slope $k$  (the case of a line with undefined slope can be done as a separate, but easy, case). We can follow the hint and use polar coordinates: where the line $\ell$  forms an angle of $\phi$  with the $x$ -axis, the action of $f_{\ell }$  is to send the vector with endpoint $(r\cos \theta ,r\sin \theta )$  to the one with endpoint $(r\cos(2\phi -\theta ),r\sin(2\phi -\theta ))$ .

To convert to rectangular coordinates, we will use some trigonometric formulas, as we did in the prior item. First observe that $\cos \phi$  and $\sin \phi$  can be determined from the slope $k$  of the line: the right triangle with horizontal leg $1$  and vertical leg $k$  has hypotenuse ${\sqrt {1+k^{2}}}$ , which gives that $\cos \phi =1/{\sqrt {1+k^{2}}}$  and $\sin \phi =k/{\sqrt {1+k^{2}}}$ . Now,

${\begin{array}{rl}\cos(2\phi -\theta )&=\cos(2\phi )\,\cos \theta +\sin(2\phi )\,\sin \theta \\&=\left(\cos ^{2}\phi -\sin ^{2}\phi \right)\,\cos \theta +\left(2\sin \phi \cos \phi \right)\,\sin \theta \\&=\left(({\frac {1}{\sqrt {1+k^{2}}}})^{2}-({\frac {k}{\sqrt {1+k^{2}}}})^{2}\right)\,\cos \theta +\left(2{\frac {k}{\sqrt {1+k^{2}}}}{\frac {1}{\sqrt {1+k^{2}}}}\right)\,\sin \theta \\&=\left({\frac {1-k^{2}}{1+k^{2}}}\right)\,\cos \theta +\left({\frac {2k}{1+k^{2}}}\right)\,\sin \theta \end{array}}$

and thus the first component of the image vector is this.

$r\cdot \cos(2\phi -\theta )={\frac {1-k^{2}}{1+k^{2}}}\cdot x+{\frac {2k}{1+k^{2}}}\cdot y$

A similar calculation shows that the second component of the image vector is this.

$r\cdot \sin(2\phi -\theta )={\frac {2k}{1+k^{2}}}\cdot x-{\frac {1-k^{2}}{1+k^{2}}}\cdot y$

With this algebraic description of the action of $f_{\ell }$

${\begin{pmatrix}x\\y\end{pmatrix}}{\stackrel {f_{\ell }}{\longmapsto }}{\begin{pmatrix}{\frac {1-k^{2}}{1+k^{2}}}\cdot x+{\frac {2k}{1+k^{2}}}\cdot y\\{\frac {2k}{1+k^{2}}}\cdot x-{\frac {1-k^{2}}{1+k^{2}}}\cdot y\end{pmatrix}}$

checking that it preserves structure is routine.
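The rectangular-coordinate formula for $f_{\ell }$  can be sanity-checked with exact arithmetic. In this sketch of my own (not part of the text; `reflect`, `mat_vec`, and `mat_mul` are invented helpers) the line has slope $k=3$ :

```python
from fractions import Fraction

def reflect(k):
    # matrix of f_l for the line y = k*x, from the rectangular-coordinate formula
    k = Fraction(k)
    d = 1 + k**2
    return [[(1 - k**2) / d, 2*k / d],
            [2*k / d, -(1 - k**2) / d]]

def mat_vec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

def mat_mul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

M = reflect(3)
# points on the line y = 3x are fixed by the reflection
assert mat_vec(M, [1, 3]) == [1, 3]
# a vector perpendicular to the line is negated
assert mat_vec(M, [-3, 1]) == [3, -1]
# reflecting twice is the identity, so the map is its own inverse
assert mat_mul(M, M) == [[1, 0], [0, 1]]
```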

Problem 21

Produce an automorphism of ${\mathcal {P}}_{2}$  other than the identity map, and other than a shift map $p(x)\mapsto p(x-k)$ .

First, the map $p(x)\mapsto p(x+k)$  doesn't count because it is a version of $p(x)\mapsto p(x-k)$ . Here is a correct answer (many others are also correct): $a_{0}+a_{1}x+a_{2}x^{2}\mapsto a_{2}+a_{0}x+a_{1}x^{2}$ . Verification that this is an isomorphism is straightforward.

Problem 22
1. Show that a function $f:\mathbb {R} ^{1}\to \mathbb {R} ^{1}$  is an automorphism if and only if it has the form $x\mapsto kx$  for some $k\neq 0$ .
2. Let $f$  be an automorphism of $\mathbb {R} ^{1}$  such that $f(3)=7$ . Find $f(-2)$ .
3. Show that a function $f:\mathbb {R} ^{2}\to \mathbb {R} ^{2}$  is an automorphism if and only if it has the form
${\begin{pmatrix}x\\y\end{pmatrix}}\mapsto {\begin{pmatrix}ax+by\\cx+dy\end{pmatrix}}$
for some $a,b,c,d\in \mathbb {R}$  with $ad-bc\neq 0$ . Hint. Exercises in prior subsections have shown that
${\begin{pmatrix}b\\d\end{pmatrix}}{\text{ is not a multiple of }}{\begin{pmatrix}a\\c\end{pmatrix}}$
if and only if $ad-bc\neq 0$ .
4. Let $f$  be an automorphism of $\mathbb {R} ^{2}$  with
$f({\begin{pmatrix}1\\3\end{pmatrix}})={\begin{pmatrix}2\\-1\end{pmatrix}}\quad {\text{and}}\quad f({\begin{pmatrix}1\\4\end{pmatrix}})={\begin{pmatrix}0\\1\end{pmatrix}}.$
Find
$f({\begin{pmatrix}0\\-1\end{pmatrix}}).$
1. For the "only if" half, let $f:\mathbb {R} ^{1}\to \mathbb {R} ^{1}$  be an isomorphism. Consider the basis $\langle 1\rangle \subseteq \mathbb {R} ^{1}$ . Designate $f(1)$  by $k$ . Then for any $x$  we have that $f(x)=f(x\cdot 1)=x\cdot f(1)=xk$ , and so $f$ 's action is multiplication by $k$ . To finish this half, just note that $k\neq 0$  or else $f$  would not be one-to-one. For the "if" half we only have to check that such a map is an isomorphism when $k\neq 0$ . To check that it is one-to-one, assume that $f(x_{1})=f(x_{2})$  so that $kx_{1}=kx_{2}$  and divide by the nonzero factor $k$  to conclude that $x_{1}=x_{2}$ . To check that it is onto, note that any $y\in \mathbb {R} ^{1}$  is the image of $x=y/k$  (again, $k\neq 0$ ). Finally, to check that such a map preserves combinations of two members of the domain, we have this.
$f(c_{1}x_{1}+c_{2}x_{2})=k(c_{1}x_{1}+c_{2}x_{2})=c_{1}kx_{1}+c_{2}kx_{2}=c_{1}f(x_{1})+c_{2}f(x_{2})$
2. By the prior item, $f$ 's action is $x\mapsto (7/3)x$ . Thus $f(-2)=-14/3$ .
3. For the "only if" half, assume that $f:\mathbb {R} ^{2}\to \mathbb {R} ^{2}$  is an automorphism. Consider the standard basis ${\mathcal {E}}_{2}$  for $\mathbb {R} ^{2}$ . Let
$f({\vec {e}}_{1})={\begin{pmatrix}a\\c\end{pmatrix}}\quad {\text{and}}\quad f({\vec {e}}_{2})={\begin{pmatrix}b\\d\end{pmatrix}}.$
Then the action of $f$  on any vector is determined by its action on the two basis vectors.
$f({\begin{pmatrix}x\\y\end{pmatrix}})=f(x\cdot {\vec {e}}_{1}+y\cdot {\vec {e}}_{2})=x\cdot f({\vec {e}}_{1})+y\cdot f({\vec {e}}_{2})=x\cdot {\begin{pmatrix}a\\c\end{pmatrix}}+y\cdot {\begin{pmatrix}b\\d\end{pmatrix}}={\begin{pmatrix}ax+by\\cx+dy\end{pmatrix}}$
To finish this half, note that if $ad-bc=0$ , that is, if $f({\vec {e}}_{2})$  is a multiple of $f({\vec {e}}_{1})$ , then $f$  is not one-to-one. For "if" we must check that the map is an isomorphism, under the condition that $ad-bc\neq 0$ . The structure-preservation check is easy; we will here show that $f$  is a correspondence. For the argument that the map is one-to-one, assume this.
$f({\begin{pmatrix}x_{1}\\y_{1}\end{pmatrix}})=f({\begin{pmatrix}x_{2}\\y_{2}\end{pmatrix}})\quad {\text{and so}}\quad {\begin{pmatrix}ax_{1}+by_{1}\\cx_{1}+dy_{1}\end{pmatrix}}={\begin{pmatrix}ax_{2}+by_{2}\\cx_{2}+dy_{2}\end{pmatrix}}$
Then, because $ad-bc\neq 0$ , the resulting system
${\begin{array}{*{2}{rc}r}a(x_{1}-x_{2})&+&b(y_{1}-y_{2})&=&0\\c(x_{1}-x_{2})&+&d(y_{1}-y_{2})&=&0\end{array}}$
has a unique solution, namely the trivial one $x_{1}-x_{2}=0$  and $y_{1}-y_{2}=0$  (this follows from the hint). The argument that this map is onto is closely related— this system
${\begin{array}{*{2}{rc}r}ax_{1}&+&by_{1}&=&x\\cx_{1}&+&dy_{1}&=&y\end{array}}$
has a solution for any $x$  and $y$  if and only if this set
$\{{\begin{pmatrix}a\\c\end{pmatrix}},{\begin{pmatrix}b\\d\end{pmatrix}}\}$
spans $\mathbb {R} ^{2}$ , i.e., if and only if this set is a basis (because it is a two-element subset of $\mathbb {R} ^{2}$ ), i.e., if and only if $ad-bc\neq 0$ .
4. $f({\begin{pmatrix}0\\-1\end{pmatrix}})=f({\begin{pmatrix}1\\3\end{pmatrix}}-{\begin{pmatrix}1\\4\end{pmatrix}})=f({\begin{pmatrix}1\\3\end{pmatrix}})-f({\begin{pmatrix}1\\4\end{pmatrix}})={\begin{pmatrix}2\\-1\end{pmatrix}}-{\begin{pmatrix}0\\1\end{pmatrix}}={\begin{pmatrix}2\\-2\end{pmatrix}}$
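The computation in item 4 can be double-checked by recovering the matrix of $f$  from the two given images and applying it directly (a sketch of my own, not part of the text):

```python
# Recover the matrix entries of f from its action on the two given vectors:
#   f(1,3) = (2,-1) and f(1,4) = (0,1) give the systems
#   a + 3b = 2, a + 4b = 0   and   c + 3d = -1, c + 4d = 1
b = 0 - 2                 # subtract the first equation of each pair from the second
a = 2 - 3*b
d = 1 - (-1)
c = -1 - 3*d

f = lambda v: (a*v[0] + b*v[1], c*v[0] + d*v[1])
assert f((1, 3)) == (2, -1) and f((1, 4)) == (0, 1)

# the exercise's shortcut: (0,-1) = (1,3) - (1,4), so the images subtract as well
assert f((0, -1)) == (2, -2)
```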
Problem 23

Refer to Lemma 1.8 and Lemma 1.9. Find two more things preserved by isomorphism.

There are many answers; two are linear independence and subspaces.

To show that if a set $\{{\vec {v}}_{1},\dots ,{\vec {v}}_{n}\}$  is linearly independent then its image $\{f({\vec {v}}_{1}),\dots ,f({\vec {v}}_{n})\}$  is also linearly independent, consider a linear relationship among members of the image set.

$0=c_{1}f({\vec {v}}_{1})+\dots +c_{n}f({\vec {v_{n}}})=f(c_{1}{\vec {v}}_{1})+\dots +f(c_{n}{\vec {v_{n}}})=f(c_{1}{\vec {v}}_{1}+\dots +c_{n}{\vec {v_{n}}})$

Because this map is an isomorphism, it is one-to-one. So $f$  maps only one vector from the domain to the zero vector in the range, that is, $c_{1}{\vec {v}}_{1}+\dots +c_{n}{\vec {v}}_{n}$  equals the zero vector (in the domain, of course). But, if $\{{\vec {v}}_{1},\dots ,{\vec {v}}_{n}\}$  is linearly independent then all of the $c$ 's are zero, and so $\{f({\vec {v}}_{1}),\dots ,f({\vec {v}}_{n})\}$  is linearly independent also. (Remark. There is a small point about this argument that is worth mentioning. In a set, repeats collapse, that is, strictly speaking, this is a one-element set: $\{{\vec {v}},{\vec {v}}\}$ , because the things listed in it are the same thing. Observe, however, the use of the subscript $n$  in the above argument. In moving from the domain set $\{{\vec {v}}_{1},\dots ,{\vec {v}}_{n}\}$  to the image set $\{f({\vec {v}}_{1}),\dots ,f({\vec {v}}_{n})\}$ , there is no collapsing, because the image set does not have repeats, because the isomorphism $f$  is one-to-one.)
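The preservation of independence can be illustrated concretely. In this sketch (a hypothetical example, with an arbitrarily chosen matrix of nonzero determinant) an isomorphism of $\mathbb {R} ^{2}$ carries the independent pair ${\vec {e}}_{1},{\vec {e}}_{2}$ to an independent pair.

```python
# Hypothetical illustration: an isomorphism of R^2 (here v -> M v with
# det M != 0) carries an independent pair to an independent pair.
M = [[2, 1], [5, 3]]                     # det = 2*3 - 1*5 = 1, so v -> Mv is an isomorphism

def apply(M, v):
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

v1, v2 = (1, 0), (0, 1)                  # independent in the domain
w1, w2 = apply(M, v1), apply(M, v2)
# A two-element subset of R^2 is independent iff this determinant is nonzero.
assert w1[0] * w2[1] - w1[1] * w2[0] != 0
```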

To show that if $f:V\to W$  is an isomorphism and if $U$  is a subspace of the domain $V$  then the set of image vectors $f(U)=\{{\vec {w}}\in W\,{\big |}\,{\vec {w}}=f({\vec {u}}){\text{ for some }}{\vec {u}}\in U\}$  is a subspace of $W$ , we need only show that it is closed under linear combinations of two of its members (it is nonempty because it contains the image of the zero vector). We have

$c_{1}\cdot f({\vec {u}}_{1})+c_{2}\cdot f({\vec {u}}_{2})=f(c_{1}{\vec {u}}_{1})+f(c_{2}{\vec {u}}_{2})=f(c_{1}{\vec {u}}_{1}+c_{2}{\vec {u}}_{2})$

and $c_{1}{\vec {u}}_{1}+c_{2}{\vec {u}}_{2}$  is a member of $U$  because of the closure of a subspace under combinations. Hence the combination of $f({\vec {u}}_{1})$  and $f({\vec {u}}_{2})$  is a member of $f(U)$ .
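The displayed identity driving the subspace argument can also be spot-checked. The sample map and vectors below are hypothetical choices, not from the text.

```python
# Hypothetical spot-check of the displayed identity: for a linear map f,
# c1*f(u1) + c2*f(u2) equals f(c1*u1 + c2*u2), which is why the image
# f(U) is closed under linear combinations.
def f(v):                                # a sample linear map on R^2
    return (2 * v[0] + v[1], 5 * v[0] + 3 * v[1])

u1, u2 = (1, 2), (3, -1)
c1, c2 = 4, -3
lhs = tuple(c1 * a + c2 * b for a, b in zip(f(u1), f(u2)))
rhs = f((c1 * u1[0] + c2 * u2[0], c1 * u1[1] + c2 * u2[1]))
assert lhs == rhs
```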

Problem 24

We show that isomorphisms can be tailored to fit in that, sometimes, given vectors in the domain and in the range we can produce an isomorphism associating those vectors.

1. Let $B=\langle {\vec {\beta }}_{1},{\vec {\beta }}_{2},{\vec {\beta }}_{3}\rangle$  be a basis for ${\mathcal {P}}_{2}$  so that any ${\vec {p}}\in {\mathcal {P}}_{2}$  has a unique representation as ${\vec {p}}=c_{1}{\vec {\beta }}_{1}+c_{2}{\vec {\beta }}_{2}+c_{3}{\vec {\beta }}_{3}$ , which we denote in this way.
${\rm {Rep}}_{B}({\vec {p}})={\begin{pmatrix}c_{1}\\c_{2}\\c_{3}\end{pmatrix}}$
Show that the ${\rm {Rep}}_{B}(\cdot )$  operation is a function from ${\mathcal {P}}_{2}$  to $\mathbb {R} ^{3}$  (this entails showing that with every domain vector ${\vec {v}}\in {\mathcal {P}}_{2}$  there is an associated image vector in $\mathbb {R} ^{3}$ , and further, that with every domain vector ${\vec {v}}\in {\mathcal {P}}_{2}$  there is at most one associated image vector).
2. Show that this ${\rm {Rep}}_{B}(\cdot )$  function is one-to-one and onto.
3. Show that it preserves structure.
4. Produce an isomorphism from ${\mathcal {P}}_{2}$  to $\mathbb {R} ^{3}$  that fits these specifications.
$x+x^{2}\mapsto {\begin{pmatrix}1\\0\\0\end{pmatrix}}\quad {\text{and}}\quad 1-x\mapsto {\begin{pmatrix}0\\1\\0\end{pmatrix}}$
1. The association
${\vec {p}}=c_{1}{\vec {\beta }}_{1}+c_{2}{\vec {\beta }}_{2}+c_{3}{\vec {\beta }}_{3}{\stackrel {{\rm {Rep}}_{B}(\cdot )}{\longmapsto }}{\begin{pmatrix}c_{1}\\c_{2}\\c_{3}\end{pmatrix}}$
is a function if every member ${\vec {p}}$  of the domain is associated with at least one member of the codomain, and if every member ${\vec {p}}$  of the domain is associated with at most one member of the codomain. The first condition holds because the basis $B$  spans the domain— every ${\vec {p}}$  can be written as at least one linear combination of ${\vec {\beta }}$ 's. The second condition holds because the basis $B$  is linearly independent— every member ${\vec {p}}$  of the domain can be written as at most one linear combination of the ${\vec {\beta }}$ 's.
2. For the one-to-one argument, if ${\rm {Rep}}_{B}({\vec {p}})={\rm {Rep}}_{B}({\vec {q}})$ , that is, if ${\rm {Rep}}_{B}(p_{1}{\vec {\beta }}_{1}+p_{2}{\vec {\beta }}_{2}+p_{3}{\vec {\beta }}_{3})={\rm {Rep}}_{B}(q_{1}{\vec {\beta }}_{1}+q_{2}{\vec {\beta }}_{2}+q_{3}{\vec {\beta }}_{3})$  then
${\begin{pmatrix}p_{1}\\p_{2}\\p_{3}\end{pmatrix}}={\begin{pmatrix}q_{1}\\q_{2}\\q_{3}\end{pmatrix}}$
and so $p_{1}=q_{1}$  and $p_{2}=q_{2}$  and $p_{3}=q_{3}$ , which gives the conclusion that ${\vec {p}}={\vec {q}}$ . Therefore this map is one-to-one. For onto, we can just note that
${\begin{pmatrix}a\\b\\c\end{pmatrix}}$
equals ${\rm {Rep}}_{B}(a{\vec {\beta }}_{1}+b{\vec {\beta }}_{2}+c{\vec {\beta }}_{3})$ , and so any member of the codomain $\mathbb {R} ^{3}$  is the image of some member of the domain ${\mathcal {P}}_{2}$ .
3. This map respects addition and scalar multiplication because it respects combinations of two members of the domain (that is, we are using item 2 of Lemma 1.9): where ${\vec {p}}=p_{1}{\vec {\beta }}_{1}+p_{2}{\vec {\beta }}_{2}+p_{3}{\vec {\beta }}_{3}$  and ${\vec {q}}=q_{1}{\vec {\beta }}_{1}+q_{2}{\vec {\beta }}_{2}+q_{3}{\vec {\beta }}_{3}$ , we have this.
${\begin{array}{rl}{\rm {Rep}}_{B}(c\cdot {\vec {p}}+d\cdot {\vec {q}})&={\rm {Rep}}_{B}(\,(cp_{1}+dq_{1}){\vec {\beta }}_{1}+(cp_{2}+dq_{2}){\vec {\beta }}_{2}+(cp_{3}+dq_{3}){\vec {\beta }}_{3}\,)\\&={\begin{pmatrix}cp_{1}+dq_{1}\\cp_{2}+dq_{2}\\cp_{3}+dq_{3}\end{pmatrix}}\\&=c\cdot {\begin{pmatrix}p_{1}\\p_{2}\\p_{3}\end{pmatrix}}+d\cdot {\begin{pmatrix}q_{1}\\q_{2}\\q_{3}\end{pmatrix}}\\&=c\cdot {\rm {Rep}}_{B}({\vec {p}})+d\cdot {\rm {Rep}}_{B}({\vec {q}})\end{array}}$
4. Use any basis $B$  for ${\mathcal {P}}_{2}$  whose first two members are $x+x^{2}$  and $1-x$ , say $B=\langle x+x^{2},1-x,1\rangle$ .
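For the basis $B=\langle x+x^{2},1-x,1\rangle$  the representation map can be computed explicitly by matching coefficients. The sketch below (a hypothetical implementation, with polynomials modeled as coefficient triples) confirms that the map meets the stated specifications.

```python
# Hypothetical sketch of Rep_B for B = <x + x^2, 1 - x, 1>.  A polynomial
# is modeled as a triple (p0, p1, p2) standing for p0 + p1*x + p2*x^2.
def rep_B(p0, p1, p2):
    # Solve c1*(x + x^2) + c2*(1 - x) + c3*1 = p0 + p1*x + p2*x^2:
    # the x^2 terms give c1 = p2, the x terms give c1 - c2 = p1,
    # and the constant terms give c2 + c3 = p0.
    c1 = p2
    c2 = c1 - p1
    c3 = p0 - c2
    return (c1, c2, c3)

assert rep_B(0, 1, 1) == (1, 0, 0)       # x + x^2 maps to the first column vector
assert rep_B(1, -1, 0) == (0, 1, 0)      # 1 - x maps to the second column vector
```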
Problem 25

Prove that a space is $n$ -dimensional if and only if it is isomorphic to $\mathbb {R} ^{n}$ . Hint. Fix a basis $B$  for the space and consider the map sending a vector over to its representation with respect to $B$ .

See the next subsection.

Problem 26

(Requires the subsection on Combining Subspaces, which is optional.) Let $U$  and $W$  be vector spaces. Define a new vector space, consisting of the set $U\times W=\{({\vec {u}},{\vec {w}})\,{\big |}\,{\vec {u}}\in U{\text{ and }}{\vec {w}}\in W\}$  along with these operations.

$({\vec {u}}_{1},{\vec {w}}_{1})+({\vec {u}}_{2},{\vec {w}}_{2})=({\vec {u}}_{1}+{\vec {u}}_{2},{\vec {w}}_{1}+{\vec {w}}_{2})\quad {\text{and}}\quad r\cdot ({\vec {u}},{\vec {w}})=(r{\vec {u}},r{\vec {w}})$

This is a vector space, the external direct sum of $U$  and $W$ .

1. Check that it is a vector space.
2. Find a basis for, and the dimension of, the external direct sum ${\mathcal {P}}_{2}\times \mathbb {R} ^{2}$ .
3. What is the relationship among $\dim(U)$ , $\dim(W)$ , and $\dim(U\times W)$ ?
4. Suppose that $U$  and $W$  are subspaces of a vector space $V$  such that $V=U\oplus W$  (in this case we say that $V$  is the internal direct sum of $U$  and $W$ ). Show that the map $f:U\times W\to V$  given by
$({\vec {u}},{\vec {w}}){\stackrel {f}{\longmapsto }}{\vec {u}}+{\vec {w}}$
is an isomorphism. Thus if the internal direct sum is defined then the internal and external direct sums are isomorphic.
1. Most of the conditions in the definition of a vector space are routine. We here sketch the verification of part 1 of that definition. For closure of $U\times W$ , note that because $U$  and $W$  are closed, we have that ${\vec {u}}_{1}+{\vec {u}}_{2}\in U$  and ${\vec {w}}_{1}+{\vec {w}}_{2}\in W$  and so $({\vec {u}}_{1}+{\vec {u}}_{2},{\vec {w}}_{1}+{\vec {w}}_{2})\in U\times W$ . Commutativity of addition in $U\times W$  follows from commutativity of addition in $U$  and $W$ .
$({\vec {u}}_{1},{\vec {w}}_{1})+({\vec {u}}_{2},{\vec {w}}_{2})=({\vec {u}}_{1}+{\vec {u}}_{2},{\vec {w}}_{1}+{\vec {w}}_{2})=({\vec {u}}_{2}+{\vec {u}}_{1},{\vec {w}}_{2}+{\vec {w}}_{1})=({\vec {u}}_{2},{\vec {w}}_{2})+({\vec {u}}_{1},{\vec {w}}_{1})$
The check for associativity of addition is similar. The zero element is $({\vec {0}}_{U},{\vec {0}}_{W})\in U\times W$  and the additive inverse of $({\vec {u}},{\vec {w}})$  is $(-{\vec {u}},-{\vec {w}})$ . The checks for the second part of the definition of a vector space are also straightforward.
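The componentwise operations can be sketched directly. In this hypothetical illustration, members of ${\mathcal {P}}_{2}\times \mathbb {R} ^{2}$  are modeled as pairs of coefficient tuples, and commutativity is seen to be inherited from the component spaces.

```python
# Hypothetical sketch of the external direct sum operations on pairs,
# with vectors in each component modeled as tuples of numbers.
def add(p, q):
    (u1, w1), (u2, w2) = p, q
    return (tuple(a + b for a, b in zip(u1, u2)),
            tuple(a + b for a, b in zip(w1, w2)))

def scale(r, p):
    u, w = p
    return (tuple(r * a for a in u), tuple(r * a for a in w))

p = ((1, 0, 2), (3, 4))                  # a member of P2 x R^2: (1 + 2x^2, (3,4))
q = ((0, 5, 1), (-1, 2))
assert add(p, q) == add(q, p)            # commutativity, inherited componentwise
assert scale(2, p) == ((2, 0, 4), (6, 8))
```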
2. This is a basis for ${\mathcal {P}}_{2}\times \mathbb {R} ^{2}$
$\langle \,(1,{\begin{pmatrix}0\\0\end{pmatrix}}),(x,{\begin{pmatrix}0\\0\end{pmatrix}}),(x^{2},{\begin{pmatrix}0\\0\end{pmatrix}}),(1,{\begin{pmatrix}1\\0\end{pmatrix}}),(1,{\begin{pmatrix}0\\1\end{pmatrix}})\,\rangle$
because there is one and only one way to represent any member of ${\mathcal {P}}_{2}\times \mathbb {R} ^{2}$  with respect to this set; here is an example.