# Chapter III - Maps Between Spaces

## Section I - Isomorphisms

In the examples following the definition of a vector space we developed the intuition that some spaces are "the same" as others. For instance, the space of two-tall column vectors and the space of two-wide row vectors are not equal because their elements—column vectors and row vectors—are not equal, but we have the idea that these spaces differ only in how their elements appear. We will now make this idea precise.

This section illustrates a common aspect of a mathematical investigation. With the help of some examples, we've gotten an idea. We will next give a formal definition, and then we will produce some results backing our contention that the definition captures the idea. We've seen this happen already, for instance, in the first section of the Vector Space chapter. There, the study of linear systems led us to consider collections closed under linear combinations. We defined such a collection as a vector space, and we followed it with some supporting results.

Of course, that definition wasn't an end point; instead it led to new insights such as the idea of a basis. Here too, after producing a definition and supporting it, we will get two pleasant surprises. First, we will find that the definition applies to some unforeseen, and interesting, cases. Second, the study of the definition will lead to new ideas. In this way, our investigation will build momentum.

### 1 - Definition and Examples

Example 1.1

Consider the example mentioned above, the space of two-wide row vectors and the space of two-tall column vectors. They are "the same" in that if we associate the vectors that have the same components, e.g.,

${\displaystyle {\begin{pmatrix}1&2\end{pmatrix}}\quad \longleftrightarrow \quad {\begin{pmatrix}1\\2\end{pmatrix}}}$

then this correspondence preserves the operations, for instance this addition

${\displaystyle {\begin{pmatrix}1&2\end{pmatrix}}+{\begin{pmatrix}3&4\end{pmatrix}}={\begin{pmatrix}4&6\end{pmatrix}}\quad \longleftrightarrow \quad {\begin{pmatrix}1\\2\end{pmatrix}}+{\begin{pmatrix}3\\4\end{pmatrix}}={\begin{pmatrix}4\\6\end{pmatrix}}}$

and this scalar multiplication.

${\displaystyle 5\cdot {\begin{pmatrix}1&2\end{pmatrix}}={\begin{pmatrix}5&10\end{pmatrix}}\quad \longleftrightarrow \quad 5\cdot {\begin{pmatrix}1\\2\end{pmatrix}}={\begin{pmatrix}5\\10\end{pmatrix}}}$

More generally stated, under the correspondence

${\displaystyle {\begin{pmatrix}a_{0}&a_{1}\end{pmatrix}}\quad \longleftrightarrow \quad {\begin{pmatrix}a_{0}\\a_{1}\end{pmatrix}}}$

both operations are preserved:

${\displaystyle {\begin{pmatrix}a_{0}&a_{1}\end{pmatrix}}+{\begin{pmatrix}b_{0}&b_{1}\end{pmatrix}}={\begin{pmatrix}a_{0}+b_{0}&a_{1}+b_{1}\end{pmatrix}}\longleftrightarrow {\begin{pmatrix}a_{0}\\a_{1}\end{pmatrix}}+{\begin{pmatrix}b_{0}\\b_{1}\end{pmatrix}}={\begin{pmatrix}a_{0}+b_{0}\\a_{1}+b_{1}\end{pmatrix}}}$

and

${\displaystyle r\cdot {\begin{pmatrix}a_{0}&a_{1}\end{pmatrix}}={\begin{pmatrix}ra_{0}&ra_{1}\end{pmatrix}}\quad \longleftrightarrow \quad r\cdot {\begin{pmatrix}a_{0}\\a_{1}\end{pmatrix}}={\begin{pmatrix}ra_{0}\\ra_{1}\end{pmatrix}}}$

(all of the variables are real numbers).
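The correspondence above can be spot-checked by computation. The following sketch (the names and data representations are ours, not the book's) models two-wide row vectors as tuples and two-tall column vectors as lists, and verifies that mapping and then operating gives the same result as operating and then mapping.

```python
def to_column(row):           # the correspondence of Example 1.1
    return [row[0], row[1]]

def add_row(u, v):
    return (u[0] + v[0], u[1] + v[1])

def add_col(u, v):
    return [u[0] + v[0], u[1] + v[1]]

def scale_row(r, u):
    return (r * u[0], r * u[1])

def scale_col(r, u):
    return [r * u[0], r * u[1]]

u, v = (1, 2), (3, 4)
# addition is preserved: map-then-add equals add-then-map
assert to_column(add_row(u, v)) == add_col(to_column(u), to_column(v))
# scalar multiplication is preserved
assert to_column(scale_row(5, u)) == scale_col(5, to_column(u))
```

Such a check on sample vectors is of course only an illustration; the general argument is the coefficient computation displayed above.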

Example 1.2

Another two spaces we can think of as "the same" are ${\displaystyle {\mathcal {P}}_{2}}$, the space of quadratic polynomials (polynomials of degree two or less), and ${\displaystyle \mathbb {R} ^{3}}$. A natural correspondence is this.

${\displaystyle a_{0}+a_{1}x+a_{2}x^{2}\quad \longleftrightarrow \quad {\begin{pmatrix}a_{0}\\a_{1}\\a_{2}\end{pmatrix}}\qquad \qquad ({\text{e.g., }}1+2x+3x^{2}\,\longleftrightarrow \,{\begin{pmatrix}1\\2\\3\end{pmatrix}})}$

The structure is preserved: corresponding elements add in a corresponding way

${\displaystyle {\begin{array}{r}a_{0}+a_{1}x+a_{2}x^{2}\\+\,\,b_{0}+b_{1}x+b_{2}x^{2}\\\hline (a_{0}+b_{0})+(a_{1}+b_{1})x+(a_{2}+b_{2})x^{2}\end{array}}\quad \longleftrightarrow \quad {\begin{pmatrix}a_{0}\\a_{1}\\a_{2}\end{pmatrix}}+{\begin{pmatrix}b_{0}\\b_{1}\\b_{2}\end{pmatrix}}={\begin{pmatrix}a_{0}+b_{0}\\a_{1}+b_{1}\\a_{2}+b_{2}\end{pmatrix}}}$

and scalar multiplication corresponds also.

${\displaystyle r\cdot (a_{0}+a_{1}x+a_{2}x^{2})=(ra_{0})+(ra_{1})x+(ra_{2})x^{2}\quad \longleftrightarrow \quad r\cdot {\begin{pmatrix}a_{0}\\a_{1}\\a_{2}\end{pmatrix}}={\begin{pmatrix}ra_{0}\\ra_{1}\\ra_{2}\end{pmatrix}}}$
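This correspondence, too, can be illustrated in code. In the sketch below (our representation, not the book's), a quadratic polynomial is stored as its coefficient list `[a0, a1, a2]`, so Example 1.2's correspondence becomes the identity on such lists, and preservation of the operations is immediate to check.

```python
def poly_add(p, q):
    return [a + b for a, b in zip(p, q)]

def poly_scale(r, p):
    return [r * a for a in p]

def poly_eval(p, x):          # confirms the list really encodes a0 + a1*x + a2*x^2
    return sum(a * x**k for k, a in enumerate(p))

p, q = [1, 2, 3], [4, 5, 6]   # 1+2x+3x^2 and 4+5x+6x^2
s = poly_add(p, q)
assert s == [5, 7, 9]
# adding coefficient lists agrees with adding the polynomials as functions
assert poly_eval(s, 2) == poly_eval(p, 2) + poly_eval(q, 2)
assert poly_scale(3, p) == [3, 6, 9]
```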

Definition 1.3

An isomorphism between two vector spaces ${\displaystyle V}$ and ${\displaystyle W}$ is a map ${\displaystyle f:V\to W}$ that

1. is a correspondence: ${\displaystyle f}$ is one-to-one and onto;[1]
2. preserves structure: if ${\displaystyle {\vec {v}}_{1},{\vec {v}}_{2}\in V}$ then
${\displaystyle f({\vec {v}}_{1}+{\vec {v}}_{2})=f({\vec {v}}_{1})+f({\vec {v}}_{2})}$
and if ${\displaystyle {\vec {v}}\in V}$ and ${\displaystyle r\in \mathbb {R} }$ then
${\displaystyle f(r{\vec {v}})=r\,f({\vec {v}})}$

(we write ${\displaystyle V\cong W}$, read "${\displaystyle V}$ is isomorphic to ${\displaystyle W}$", when such a map exists).

("Morphism" means map, so "isomorphism" means a map expressing sameness.)

Example 1.4

The vector space ${\displaystyle G=\{c_{1}\cos \theta +c_{2}\sin \theta \,{\big |}\,c_{1},c_{2}\in \mathbb {R} \}}$ of functions of ${\displaystyle \theta }$ is isomorphic to the vector space ${\displaystyle \mathbb {R} ^{2}}$ under this map.

${\displaystyle c_{1}\cos \theta +c_{2}\sin \theta {\stackrel {f}{\longmapsto }}{\begin{pmatrix}c_{1}\\c_{2}\end{pmatrix}}}$

We will check this by going through the conditions in the definition.

We will first verify condition 1, that the map is a correspondence between the sets underlying the spaces.

To establish that ${\displaystyle f}$ is one-to-one, we must prove that ${\displaystyle f({\vec {a}})=f({\vec {b}})}$ only when ${\displaystyle {\vec {a}}={\vec {b}}}$. If

${\displaystyle f(a_{1}\cos \theta +a_{2}\sin \theta )=f(b_{1}\cos \theta +b_{2}\sin \theta )}$

then, by the definition of ${\displaystyle f}$,

${\displaystyle {\begin{pmatrix}a_{1}\\a_{2}\end{pmatrix}}={\begin{pmatrix}b_{1}\\b_{2}\end{pmatrix}}}$

from which we can conclude that ${\displaystyle a_{1}=b_{1}}$ and ${\displaystyle a_{2}=b_{2}}$ because column vectors are equal only when they have equal components. We've proved that ${\displaystyle f({\vec {a}})=f({\vec {b}})}$ implies that ${\displaystyle {\vec {a}}={\vec {b}}}$, which shows that ${\displaystyle f}$ is one-to-one.

To check that ${\displaystyle f}$ is onto we must check that any member of the codomain ${\displaystyle \mathbb {R} ^{2}}$ is the image of some member of the domain ${\displaystyle G}$. But that's clear—any

${\displaystyle {\begin{pmatrix}x\\y\end{pmatrix}}\in \mathbb {R} ^{2}}$

is the image under ${\displaystyle f}$ of ${\displaystyle x\cos \theta +y\sin \theta \in G}$.

Next we will verify condition (2), that ${\displaystyle f}$ preserves structure.

This computation shows that ${\displaystyle f}$ preserves addition.

${\displaystyle f{\bigl (}\,(a_{1}\cos \theta +a_{2}\sin \theta )+(b_{1}\cos \theta +b_{2}\sin \theta )\,{\bigr )}}$
${\displaystyle {\begin{array}{rl}&=f{\bigl (}\,(a_{1}+b_{1})\cos \theta +(a_{2}+b_{2})\sin \theta \,{\bigr )}\\&={\begin{pmatrix}a_{1}+b_{1}\\a_{2}+b_{2}\end{pmatrix}}\\&={\begin{pmatrix}a_{1}\\a_{2}\end{pmatrix}}+{\begin{pmatrix}b_{1}\\b_{2}\end{pmatrix}}\\&=f(a_{1}\cos \theta +a_{2}\sin \theta )+f(b_{1}\cos \theta +b_{2}\sin \theta )\end{array}}}$

A similar computation shows that ${\displaystyle f}$ preserves scalar multiplication.

${\displaystyle {\begin{array}{rl}f{\bigl (}\,r\cdot (a_{1}\cos \theta +a_{2}\sin \theta )\,{\bigr )}&=f(\,ra_{1}\cos \theta +ra_{2}\sin \theta \,)\\&={\begin{pmatrix}ra_{1}\\ra_{2}\end{pmatrix}}\\&=r\cdot {\begin{pmatrix}a_{1}\\a_{2}\end{pmatrix}}\\&=r\cdot \,f(a_{1}\cos \theta +a_{2}\sin \theta )\end{array}}}$

With that, conditions (1) and (2) are verified, so we know that ${\displaystyle f}$ is an isomorphism and we can say that the spaces are isomorphic ${\displaystyle G\cong \mathbb {R} ^{2}}$.
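A computational sketch of Example 1.4's map may help (the function names are ours). A member of ${\displaystyle G}$ is a genuine function of ${\displaystyle \theta }$, and ${\displaystyle f}$ can recover ${\displaystyle (c_{1},c_{2})}$ by evaluating at ${\displaystyle \theta =0}$ and ${\displaystyle \theta =\pi /2}$, where cosine and sine take the values ${\displaystyle 1,0}$ and ${\displaystyle 0,1}$.

```python
import math

def g(c1, c2):                # an element c1*cos(theta) + c2*sin(theta) of G
    return lambda t: c1 * math.cos(t) + c2 * math.sin(t)

def f(func):                  # the isomorphism of Example 1.4
    return (func(0.0), func(math.pi / 2))

a, b = g(2.0, 3.0), g(-1.0, 5.0)
sum_ab = lambda t: a(t) + b(t)            # addition in G is pointwise
got = f(sum_ab)
want = (f(a)[0] + f(b)[0], f(a)[1] + f(b)[1])
# f of the sum equals the sum of the images (up to floating-point error)
assert all(math.isclose(x, y) for x, y in zip(got, want))
```

Note this is a numerical spot check, not the proof; the proof is the coefficient computation just given.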

Example 1.5

Let ${\displaystyle V}$ be the space ${\displaystyle \{c_{1}x+c_{2}y+c_{3}z\,{\big |}\,c_{1},c_{2},c_{3}\in \mathbb {R} \}}$ of linear combinations of three variables ${\displaystyle x}$, ${\displaystyle y}$, and ${\displaystyle z}$, under the natural addition and scalar multiplication operations. Then ${\displaystyle V}$ is isomorphic to ${\displaystyle {\mathcal {P}}_{2}}$, the space of quadratic polynomials.

To show this we will produce an isomorphism map. There is more than one possibility; for instance, here are four.

${\displaystyle {\begin{array}{c}c_{1}x+c_{2}y+c_{3}z\end{array}}\quad {\begin{array}{rl}{\stackrel {f_{1}}{\longmapsto }}&c_{1}+c_{2}x+c_{3}x^{2}\\{\stackrel {f_{2}}{\longmapsto }}&c_{2}+c_{3}x+c_{1}x^{2}\\{\stackrel {f_{3}}{\longmapsto }}&-c_{1}-c_{2}x-c_{3}x^{2}\\{\stackrel {f_{4}}{\longmapsto }}&c_{1}+(c_{1}+c_{2})x+(c_{1}+c_{3})x^{2}\end{array}}}$

The first map is the most natural correspondence in that it simply carries the coefficients over. However, below we shall verify that the second one is an isomorphism, to underline that there are isomorphisms other than the obvious one (showing that ${\displaystyle f_{1}}$ is an isomorphism is Problem 3).

To show that ${\displaystyle f_{2}}$ is one-to-one, we will prove that if ${\displaystyle f_{2}(c_{1}x+c_{2}y+c_{3}z)=f_{2}(d_{1}x+d_{2}y+d_{3}z)}$ then ${\displaystyle c_{1}x+c_{2}y+c_{3}z=d_{1}x+d_{2}y+d_{3}z}$. The assumption that ${\displaystyle f_{2}(c_{1}x+c_{2}y+c_{3}z)=f_{2}(d_{1}x+d_{2}y+d_{3}z)}$ gives, by the definition of ${\displaystyle f_{2}}$, that ${\displaystyle c_{2}+c_{3}x+c_{1}x^{2}=d_{2}+d_{3}x+d_{1}x^{2}}$. Equal polynomials have equal coefficients, so ${\displaystyle c_{2}=d_{2}}$, ${\displaystyle c_{3}=d_{3}}$, and ${\displaystyle c_{1}=d_{1}}$. Thus ${\displaystyle f_{2}(c_{1}x+c_{2}y+c_{3}z)=f_{2}(d_{1}x+d_{2}y+d_{3}z)}$ implies that ${\displaystyle c_{1}x+c_{2}y+c_{3}z=d_{1}x+d_{2}y+d_{3}z}$ and therefore ${\displaystyle f_{2}}$ is one-to-one.

The map ${\displaystyle f_{2}}$ is onto because any member ${\displaystyle a+bx+cx^{2}}$ of the codomain is the image of some member of the domain, namely it is the image of ${\displaystyle cx+ay+bz}$. For instance, ${\displaystyle 2+3x-4x^{2}}$ is ${\displaystyle f_{2}(-4x+2y+3z)}$.

The computations for structure preservation are like those in the prior example. This map preserves addition

${\displaystyle f_{2}{\bigl (}(c_{1}x+c_{2}y+c_{3}z)+(d_{1}x+d_{2}y+d_{3}z){\bigr )}}$
${\displaystyle {\begin{array}{rl}&=f_{2}{\bigl (}(c_{1}+d_{1})x+(c_{2}+d_{2})y+(c_{3}+d_{3})z{\bigr )}\\&=(c_{2}+d_{2})+(c_{3}+d_{3})x+(c_{1}+d_{1})x^{2}\\&=(c_{2}+c_{3}x+c_{1}x^{2})+(d_{2}+d_{3}x+d_{1}x^{2})\\&=f_{2}(c_{1}x+c_{2}y+c_{3}z)+f_{2}(d_{1}x+d_{2}y+d_{3}z)\end{array}}}$

and scalar multiplication.

${\displaystyle {\begin{array}{rl}f_{2}{\bigl (}r\cdot (c_{1}x+c_{2}y+c_{3}z){\bigr )}&=f_{2}(rc_{1}x+rc_{2}y+rc_{3}z)\\&=rc_{2}+rc_{3}x+rc_{1}x^{2}\\&=r\cdot (c_{2}+c_{3}x+c_{1}x^{2})\\&=r\cdot \,f_{2}(c_{1}x+c_{2}y+c_{3}z)\end{array}}}$

Thus ${\displaystyle f_{2}}$ is an isomorphism and we write ${\displaystyle V\cong {\mathcal {P}}_{2}}$.
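As a sketch (our encoding): ${\displaystyle f_{2}}$ acts on the coefficient triple ${\displaystyle (c_{1},c_{2},c_{3})}$ of ${\displaystyle c_{1}x+c_{2}y+c_{3}z}$, returning the coefficient triple of ${\displaystyle c_{2}+c_{3}x+c_{1}x^{2}}$. The code below checks the worked onto instance and that the map is invertible and additive.

```python
def f2(c):
    c1, c2, c3 = c
    return (c2, c3, c1)       # coefficients of c2 + c3*x + c1*x^2

def f2_inverse(p):            # reading the onto argument backwards
    a, b, c = p               # the polynomial a + b*x + c*x^2
    return (c, a, b)

assert f2((-4, 2, 3)) == (2, 3, -4)        # the worked instance: 2+3x-4x^2
assert f2_inverse(f2((7, 8, 9))) == (7, 8, 9)
u, v = (1, 2, 3), (10, 20, 30)
# additivity: f2 of a sum is the sum of the images
assert f2(tuple(x + y for x, y in zip(u, v))) == tuple(x + y for x, y in zip(f2(u), f2(v)))
```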

We are sometimes interested in an isomorphism of a space with itself, called an automorphism. An identity map is an automorphism. The next two examples show that there are others.

Example 1.6

A dilation map ${\displaystyle d_{s}:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}$ that multiplies all vectors by a nonzero scalar ${\displaystyle s}$ is an automorphism of ${\displaystyle \mathbb {R} ^{2}}$.

A rotation or turning map ${\displaystyle t_{\theta }:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}$ that rotates all vectors through an angle ${\displaystyle \theta }$ is an automorphism.

A third type of automorphism of ${\displaystyle \mathbb {R} ^{2}}$ is a map ${\displaystyle f_{\ell }:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}$ that flips or reflects all vectors over a line ${\displaystyle \ell }$ through the origin.

See Problem 20.

Example 1.7

Consider the space ${\displaystyle {\mathcal {P}}_{5}}$ of polynomials of degree 5 or less and the map ${\displaystyle f}$ that sends a polynomial ${\displaystyle p(x)}$ to ${\displaystyle p(x-1)}$. For instance, under this map ${\displaystyle x^{2}\mapsto (x-1)^{2}=x^{2}-2x+1}$ and ${\displaystyle x^{3}+2x\mapsto (x-1)^{3}+2(x-1)=x^{3}-3x^{2}+5x-3}$. This map is an automorphism of this space; the check is Problem 12.

This isomorphism of ${\displaystyle {\mathcal {P}}_{5}}$ with itself does more than just tell us that the space is "the same" as itself. It gives us some insight into the space's structure. For instance, below is shown a family of parabolas, graphs of members of ${\displaystyle {\mathcal {P}}_{5}}$. Each has a vertex at ${\displaystyle y=-1}$, and the left-most one has zeroes at ${\displaystyle -2.25}$ and ${\displaystyle -1.75}$, the next one has zeroes at ${\displaystyle -1.25}$ and ${\displaystyle -0.75}$, etc.

Geometrically, the substitution of ${\displaystyle x-1}$ for ${\displaystyle x}$ in any function's argument shifts its graph to the right by one. Thus, if ${\displaystyle p_{0}}$ is the left-most parabola and ${\displaystyle p_{1}}$ the next, then ${\displaystyle f(p_{0})=p_{1}}$ and ${\displaystyle f}$'s action is to shift all of the parabolas to the right by one. Notice that the picture before ${\displaystyle f}$ is applied is the same as the picture after, because while each parabola moves to the right, another comes in from the left to take its place. The same holds for cubics, etc. So the automorphism ${\displaystyle f}$ gives us the insight that ${\displaystyle {\mathcal {P}}_{5}}$ has a certain horizontal homogeneity: this space looks the same near ${\displaystyle x=1}$ as near ${\displaystyle x=0}$.
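On coefficient lists, Example 1.7's map is a concrete computation: substituting ${\displaystyle x-1}$ for ${\displaystyle x}$ expands each ${\displaystyle (x-1)^{k}}$ by the binomial theorem. The sketch below (our encoding of a member of ${\displaystyle {\mathcal {P}}_{5}}$ as its six coefficients) reproduces the two instances worked in the example.

```python
from math import comb

def shift_right(p):           # coefficients of p(x-1), where p = [a0, ..., a5]
    out = [0] * len(p)
    for k, a in enumerate(p):
        for j in range(k + 1):            # (x-1)^k = sum_j C(k,j) x^j (-1)^(k-j)
            out[j] += a * comb(k, j) * (-1) ** (k - j)
    return out

# x^2 |-> (x-1)^2 = x^2 - 2x + 1
assert shift_right([0, 0, 1, 0, 0, 0]) == [1, -2, 1, 0, 0, 0]
# x^3 + 2x |-> (x-1)^3 + 2(x-1) = x^3 - 3x^2 + 5x - 3
assert shift_right([0, 2, 0, 1, 0, 0]) == [-3, 5, -3, 1, 0, 0]
```

That the computation is invertible (substitute ${\displaystyle x+1}$ to undo it) reflects the claim that the map is an automorphism.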

As described in the preamble to this section, we will next produce some results supporting the contention that the definition of isomorphism above captures our intuition of vector spaces being the same.

Of course the definition itself is persuasive: a vector space consists of two components, a set and some structure, and the definition simply requires that the sets correspond and that the structures correspond also. Also persuasive are the examples above. In particular, Example 1.1, which gives an isomorphism between the space of two-wide row vectors and the space of two-tall column vectors, dramatizes our intuition that isomorphic spaces are the same in all relevant respects. Sometimes people say, where ${\displaystyle V\cong W}$, that "${\displaystyle W}$ is just ${\displaystyle V}$ painted green"—any differences are merely cosmetic.

Further support for the definition, in case it is needed, is provided by the following results that, taken together, suggest that all the things of interest in a vector space correspond under an isomorphism. Since we studied vector spaces to study linear combinations, "of interest" means "pertaining to linear combinations". Not of interest is the way that the vectors are presented typographically (or their color!).

As an example, although the definition of isomorphism doesn't explicitly say that the zero vectors must correspond, it is a consequence of that definition.

Lemma 1.8

An isomorphism maps a zero vector to a zero vector.

Proof

Where ${\displaystyle f:V\to W}$ is an isomorphism, fix any ${\displaystyle {\vec {v}}\in V}$. Then ${\displaystyle f({\vec {0}}_{V})=f(0\cdot {\vec {v}})=0\cdot f({\vec {v}})={\vec {0}}_{W}}$.

The definition of isomorphism requires that sums of two vectors correspond and that so do scalar multiples. We can extend that to say that all linear combinations correspond.

Lemma 1.9

For any map ${\displaystyle f:V\to W}$ between vector spaces these statements are equivalent.

1. ${\displaystyle f}$ preserves structure
${\displaystyle f({\vec {v}}_{1}+{\vec {v}}_{2})=f({\vec {v}}_{1})+f({\vec {v}}_{2})\quad {\text{and}}\quad f(c{\vec {v}})=c\,f({\vec {v}})}$
2. ${\displaystyle f}$ preserves linear combinations of two vectors
${\displaystyle f(c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2})=c_{1}f({\vec {v}}_{1})+c_{2}f({\vec {v}}_{2})}$
3. ${\displaystyle f}$ preserves linear combinations of any finite number of vectors
${\displaystyle f(c_{1}{\vec {v}}_{1}+\dots +c_{n}{\vec {v}}_{n})=c_{1}f({\vec {v}}_{1})+\dots +c_{n}f({\vec {v}}_{n})}$
Proof

Since the implications ${\displaystyle 3\!\implies \!2}$ and ${\displaystyle 2\!\implies \!1}$ are clear, we need only show that ${\displaystyle 1\!\implies \!3}$. Assume statement 1. We will prove statement 3 by induction on the number of summands ${\displaystyle n}$.

The one-summand base case, that ${\displaystyle f(c{\vec {v}}_{1})=c\,f({\vec {v}}_{1})}$, is covered by the assumption of statement 1.

For the inductive step assume that statement 3 holds whenever there are ${\displaystyle k}$ or fewer summands, that is, whenever ${\displaystyle n=1}$, or ${\displaystyle n=2}$, ..., or ${\displaystyle n=k}$. Consider the ${\displaystyle k+1}$-summand case. The first half of 1 gives

${\displaystyle f(c_{1}{\vec {v}}_{1}+\dots +c_{k}{\vec {v}}_{k}+c_{k+1}{\vec {v}}_{k+1})=f(c_{1}{\vec {v}}_{1}+\dots +c_{k}{\vec {v}}_{k})+f(c_{k+1}{\vec {v}}_{k+1})}$

by breaking the sum along the final "${\displaystyle +}$". Then the inductive hypothesis lets us break up the ${\displaystyle k}$-term sum.

${\displaystyle =f(c_{1}{\vec {v}}_{1})+\dots +f(c_{k}{\vec {v}}_{k})+f(c_{k+1}{\vec {v}}_{k+1})}$

Finally, the second half of statement 1 gives

${\displaystyle =c_{1}\,f({\vec {v}}_{1})+\dots +c_{k}\,f({\vec {v}}_{k})+c_{k+1}\,f({\vec {v}}_{k+1})}$

when applied ${\displaystyle k+1}$ times.

In addition to adding to the intuition that the definition of isomorphism does indeed preserve the things of interest in a vector space, that lemma's second item is an especially handy way of checking that a map preserves structure.
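Item 2 of the lemma also suggests a practical spot check, sketched below with names of our own choosing: for a candidate map on coefficient triples, test ${\displaystyle f(c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2})=c_{1}f({\vec {v}}_{1})+c_{2}f({\vec {v}}_{2})}$ on sample inputs. Passing the test is evidence, not proof; failing it is a disproof.

```python
def combo(c1, v1, c2, v2):
    return tuple(c1 * a + c2 * b for a, b in zip(v1, v2))

def preserves_pairs(f, samples):
    # item 2 of Lemma 1.9, checked on each sample (c1, v1, c2, v2)
    return all(f(combo(c1, v1, c2, v2)) == combo(c1, f(v1), c2, f(v2))
               for c1, v1, c2, v2 in samples)

samples = [(2, (1, 0, 4), -3, (5, 1, 1)), (0, (9, 9, 9), 7, (1, 2, 3))]
f2 = lambda c: (c[1], c[2], c[0])          # the map of Example 1.5 passes
assert preserves_pairs(f2, samples)
bad = lambda c: (c[0] * c[1], 0, 0)        # a nonlinear map fails the test
assert not preserves_pairs(bad, samples)
```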

We close with a summary. The material in this section augments the chapter on Vector Spaces. There, after giving the definition of a vector space, we informally looked at what different things can happen. Here, we defined the relation "${\displaystyle \cong }$" between vector spaces and we have argued that it is the right way to split the collection of vector spaces into cases because it preserves the features of interest in a vector space—in particular, it preserves linear combinations. That is, we have now said precisely what we mean by "the same", and by "different", and so we have precisely classified the vector spaces.

## Exercises

This exercise is recommended for all readers.
Problem 1

Verify, using Example 1.4 as a model, that the two correspondences given before the definition are isomorphisms.

This exercise is recommended for all readers.
Problem 2

For the map ${\displaystyle f:{\mathcal {P}}_{1}\to \mathbb {R} ^{2}}$ given by

${\displaystyle a+bx{\stackrel {f}{\longmapsto }}{\begin{pmatrix}a-b\\b\end{pmatrix}}}$

Find the image of each of these elements of the domain.

1. ${\displaystyle 3-2x}$
2. ${\displaystyle 2+2x}$
3. ${\displaystyle x}$

Show that this map is an isomorphism.

Problem 3

Show that the natural map ${\displaystyle f_{1}}$ from Example 1.5 is an isomorphism.

This exercise is recommended for all readers.
Problem 4

Decide whether each map is an isomorphism (if it is an isomorphism then prove it and if it isn't then state a condition that it fails to satisfy).

1. ${\displaystyle f:{\mathcal {M}}_{2\!\times \!2}\to \mathbb {R} }$ given by
${\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}\mapsto ad-bc}$
2. ${\displaystyle f:{\mathcal {M}}_{2\!\times \!2}\to \mathbb {R} ^{4}}$ given by
${\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}\mapsto {\begin{pmatrix}a+b+c+d\\a+b+c\\a+b\\a\end{pmatrix}}}$
3. ${\displaystyle f:{\mathcal {M}}_{2\!\times \!2}\to {\mathcal {P}}_{3}}$ given by
${\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}\mapsto c+(d+c)x+(b+a)x^{2}+ax^{3}}$
4. ${\displaystyle f:{\mathcal {M}}_{2\!\times \!2}\to {\mathcal {P}}_{3}}$ given by
${\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}\mapsto c+(d+c)x+(b+a+1)x^{2}+ax^{3}}$
Problem 5

Show that the map ${\displaystyle f:\mathbb {R} ^{1}\to \mathbb {R} ^{1}}$ given by ${\displaystyle f(x)=x^{3}}$ is one-to-one and onto. Is it an isomorphism?

This exercise is recommended for all readers.
Problem 6

Refer to Example 1.1. Produce two more isomorphisms (of course, that they satisfy the conditions in the definition of isomorphism must be verified).

Problem 7

Refer to Example 1.2. Produce two more isomorphisms (and verify that they satisfy the conditions).

This exercise is recommended for all readers.
Problem 8

Show that, although ${\displaystyle \mathbb {R} ^{2}}$ is not itself a subspace of ${\displaystyle \mathbb {R} ^{3}}$, it is isomorphic to the ${\displaystyle xy}$-plane subspace of ${\displaystyle \mathbb {R} ^{3}}$.

Problem 9

Find two isomorphisms between ${\displaystyle \mathbb {R} ^{16}}$ and ${\displaystyle {\mathcal {M}}_{4\!\times \!4}}$.

This exercise is recommended for all readers.
Problem 10

For what ${\displaystyle k}$ is ${\displaystyle {\mathcal {M}}_{m\!\times \!n}}$ isomorphic to ${\displaystyle \mathbb {R} ^{k}}$?

Problem 11

For what ${\displaystyle k}$ is ${\displaystyle {\mathcal {P}}_{k}}$ isomorphic to ${\displaystyle \mathbb {R} ^{n}}$?

Problem 12

Prove that the map in Example 1.7, from ${\displaystyle {\mathcal {P}}_{5}}$ to ${\displaystyle {\mathcal {P}}_{5}}$ given by ${\displaystyle p(x)\mapsto p(x-1)}$, is a vector space isomorphism.

Problem 13

Why, in Lemma 1.8, must there be a ${\displaystyle {\vec {v}}\in V}$? That is, why must ${\displaystyle V}$ be nonempty?

Problem 14

Are any two trivial spaces isomorphic?

Problem 15

In the proof of Lemma 1.9, what about the zero-summands case (that is, if ${\displaystyle n}$ is zero)?

Problem 16

Show that any isomorphism ${\displaystyle f:{\mathcal {P}}_{0}\to \mathbb {R} ^{1}}$ has the form ${\displaystyle a\mapsto ka}$ for some nonzero real number ${\displaystyle k}$.

This exercise is recommended for all readers.
Problem 17

These prove that isomorphism is an equivalence relation.

1. Show that the identity map ${\displaystyle {\mbox{id}}:V\to V}$ is an isomorphism. Thus, any vector space is isomorphic to itself.
2. Show that if ${\displaystyle f:V\to W}$ is an isomorphism then so is its inverse ${\displaystyle f^{-1}:W\to V}$. Thus, if ${\displaystyle V}$ is isomorphic to ${\displaystyle W}$ then also ${\displaystyle W}$ is isomorphic to ${\displaystyle V}$.
3. Show that a composition of isomorphisms is an isomorphism: if ${\displaystyle f:V\to W}$ is an isomorphism and ${\displaystyle g:W\to U}$ is an isomorphism then so also is ${\displaystyle g\circ f:V\to U}$. Thus, if ${\displaystyle V}$ is isomorphic to ${\displaystyle W}$ and ${\displaystyle W}$ is isomorphic to ${\displaystyle U}$, then also ${\displaystyle V}$ is isomorphic to ${\displaystyle U}$.
Problem 18

Suppose that ${\displaystyle f:V\to W}$ preserves structure. Show that ${\displaystyle f}$ is one-to-one if and only if the unique member of ${\displaystyle V}$ mapped by ${\displaystyle f}$ to ${\displaystyle {\vec {0}}_{W}}$ is ${\displaystyle {\vec {0}}_{V}}$.

Problem 19

Suppose that ${\displaystyle f:V\to W}$ is an isomorphism. Prove that the set ${\displaystyle \{{\vec {v}}_{1},\dots ,{\vec {v}}_{k}\}\subseteq V}$ is linearly dependent if and only if the set of images ${\displaystyle \{f({\vec {v}}_{1}),\dots ,f({\vec {v}}_{k})\}\subseteq W}$ is linearly dependent.

This exercise is recommended for all readers.
Problem 20

Show that each type of map from Example 1.6 is an automorphism.

1. Dilation ${\displaystyle d_{s}}$ by a nonzero scalar ${\displaystyle s}$.
2. Rotation ${\displaystyle t_{\theta }}$ through an angle ${\displaystyle \theta }$.
3. Reflection ${\displaystyle f_{\ell }}$ over a line through the origin.

Hint. For the second and third items, polar coordinates are useful.

Problem 21

Produce an automorphism of ${\displaystyle {\mathcal {P}}_{2}}$ other than the identity map, and other than a shift map ${\displaystyle p(x)\mapsto p(x-k)}$.

Problem 22
1. Show that a function ${\displaystyle f:\mathbb {R} ^{1}\to \mathbb {R} ^{1}}$ is an automorphism if and only if it has the form ${\displaystyle x\mapsto kx}$ for some ${\displaystyle k\neq 0}$.
2. Let ${\displaystyle f}$ be an automorphism of ${\displaystyle \mathbb {R} ^{1}}$ such that ${\displaystyle f(3)=7}$. Find ${\displaystyle f(-2)}$.
3. Show that a function ${\displaystyle f:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}$ is an automorphism if and only if it has the form
${\displaystyle {\begin{pmatrix}x\\y\end{pmatrix}}\mapsto {\begin{pmatrix}ax+by\\cx+dy\end{pmatrix}}}$
for some ${\displaystyle a,b,c,d\in \mathbb {R} }$ with ${\displaystyle ad-bc\neq 0}$. Hint. Exercises in prior subsections have shown that
${\displaystyle {\begin{pmatrix}b\\d\end{pmatrix}}{\text{ is not a multiple of }}{\begin{pmatrix}a\\c\end{pmatrix}}}$
if and only if ${\displaystyle ad-bc\neq 0}$.
4. Let ${\displaystyle f}$ be an automorphism of ${\displaystyle \mathbb {R} ^{2}}$ with
${\displaystyle f({\begin{pmatrix}1\\3\end{pmatrix}})={\begin{pmatrix}2\\-1\end{pmatrix}}\quad {\text{and}}\quad f({\begin{pmatrix}1\\4\end{pmatrix}})={\begin{pmatrix}0\\1\end{pmatrix}}.}$
Find
${\displaystyle f({\begin{pmatrix}0\\-1\end{pmatrix}}).}$
Problem 23

Refer to Lemma 1.8 and Lemma 1.9. Find two more things preserved by isomorphism.

Problem 24

We show that isomorphisms can be tailored to fit in that, sometimes, given vectors in the domain and in the range we can produce an isomorphism associating those vectors.

1. Let ${\displaystyle B=\langle {\vec {\beta }}_{1},{\vec {\beta }}_{2},{\vec {\beta }}_{3}\rangle }$ be a basis for ${\displaystyle {\mathcal {P}}_{2}}$ so that any ${\displaystyle {\vec {p}}\in {\mathcal {P}}_{2}}$ has a unique representation as ${\displaystyle {\vec {p}}=c_{1}{\vec {\beta }}_{1}+c_{2}{\vec {\beta }}_{2}+c_{3}{\vec {\beta }}_{3}}$, which we denote in this way.
${\displaystyle {\rm {Rep}}_{B}({\vec {p}})={\begin{pmatrix}c_{1}\\c_{2}\\c_{3}\end{pmatrix}}}$
Show that the ${\displaystyle {\rm {Rep}}_{B}(\cdot )}$ operation is a function from ${\displaystyle {\mathcal {P}}_{2}}$ to ${\displaystyle \mathbb {R} ^{3}}$ (this entails showing that with every domain vector ${\displaystyle {\vec {v}}\in {\mathcal {P}}_{2}}$ there is an associated image vector in ${\displaystyle \mathbb {R} ^{3}}$, and further, that with every domain vector ${\displaystyle {\vec {v}}\in {\mathcal {P}}_{2}}$ there is at most one associated image vector).
2. Show that this ${\displaystyle {\rm {Rep}}_{B}(\cdot )}$ function is one-to-one and onto.
3. Show that it preserves structure.
4. Produce an isomorphism from ${\displaystyle {\mathcal {P}}_{2}}$ to ${\displaystyle \mathbb {R} ^{3}}$ that fits these specifications.
${\displaystyle x+x^{2}\mapsto {\begin{pmatrix}1\\0\\0\end{pmatrix}}\quad {\text{and}}\quad 1-x\mapsto {\begin{pmatrix}0\\1\\0\end{pmatrix}}}$
Problem 25

Prove that a space is ${\displaystyle n}$-dimensional if and only if it is isomorphic to ${\displaystyle \mathbb {R} ^{n}}$. Hint. Fix a basis ${\displaystyle B}$ for the space and consider the map sending a vector over to its representation with respect to ${\displaystyle B}$.

Problem 26

(Requires the subsection on Combining Subspaces, which is optional.) Let ${\displaystyle U}$ and ${\displaystyle W}$ be vector spaces. Define a new vector space, consisting of the set ${\displaystyle U\times W=\{({\vec {u}},{\vec {w}})\,{\big |}\,{\vec {u}}\in U{\text{ and }}{\vec {w}}\in W\}}$ along with these operations.

${\displaystyle ({\vec {u}}_{1},{\vec {w}}_{1})+({\vec {u}}_{2},{\vec {w}}_{2})=({\vec {u}}_{1}+{\vec {u}}_{2},{\vec {w}}_{1}+{\vec {w}}_{2})\quad {\text{and}}\quad r\cdot ({\vec {u}},{\vec {w}})=(r{\vec {u}},r{\vec {w}})}$

This is a vector space, the external direct sum of ${\displaystyle U}$ and ${\displaystyle W}$.

1. Check that it is a vector space.
2. Find a basis for, and the dimension of, the external direct sum ${\displaystyle {\mathcal {P}}_{2}\times \mathbb {R} ^{2}}$.
3. What is the relationship among ${\displaystyle \dim(U)}$, ${\displaystyle \dim(W)}$, and ${\displaystyle \dim(U\times W)}$?
4. Suppose that ${\displaystyle U}$ and ${\displaystyle W}$ are subspaces of a vector space ${\displaystyle V}$ such that ${\displaystyle V=U\oplus W}$ (in this case we say that ${\displaystyle V}$ is the internal direct sum of ${\displaystyle U}$ and ${\displaystyle W}$). Show that the map ${\displaystyle f:U\times W\to V}$ given by
${\displaystyle ({\vec {u}},{\vec {w}}){\stackrel {f}{\longmapsto }}{\vec {u}}+{\vec {w}}}$
is an isomorphism. Thus if the internal direct sum is defined then the internal and external direct sums are isomorphic.

### 2 - Dimension Characterizes Isomorphism

In the prior subsection, after stating the definition of an isomorphism, we gave some results supporting the intuition that such a map describes spaces as "the same". Here we will formalize this intuition. While two spaces that are isomorphic are not equal, we think of them as almost equal, as equivalent. In this subsection we shall show that the relationship "is isomorphic to" is an equivalence relation.[2]

Theorem 2.1

Isomorphism is an equivalence relation between vector spaces.

Proof

We must prove that this relation has the three properties of being reflexive, symmetric, and transitive. For each of the three we will use item 2 of Lemma 1.9 and show that the map preserves structure by showing that it preserves linear combinations of two members of the domain.

To check reflexivity, that any space is isomorphic to itself, consider the identity map. It is clearly one-to-one and onto. The calculation showing that it preserves linear combinations is easy.

${\displaystyle {\mbox{id}}(c_{1}\cdot {\vec {v}}_{1}+c_{2}\cdot {\vec {v}}_{2})=c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2}=c_{1}\cdot {\mbox{id}}({\vec {v}}_{1})+c_{2}\cdot {\mbox{id}}({\vec {v}}_{2})}$

To check symmetry, that if ${\displaystyle V}$ is isomorphic to ${\displaystyle W}$ via some map ${\displaystyle f:V\to W}$ then there is an isomorphism going the other way, consider the inverse map ${\displaystyle f^{-1}:W\to V}$. As stated in the appendix, such an inverse function exists and it is also a correspondence. Thus we have reduced the symmetry issue to checking that, since ${\displaystyle f}$ preserves linear combinations, ${\displaystyle f^{-1}}$ does also. Assume that ${\displaystyle {\vec {w}}_{1}=f({\vec {v}}_{1})}$ and ${\displaystyle {\vec {w}}_{2}=f({\vec {v}}_{2})}$, i.e., that ${\displaystyle f^{-1}({\vec {w}}_{1})={\vec {v}}_{1}}$ and ${\displaystyle f^{-1}({\vec {w}}_{2})={\vec {v}}_{2}}$.

${\displaystyle {\begin{array}{rl}f^{-1}(c_{1}\cdot {\vec {w}}_{1}+c_{2}\cdot {\vec {w}}_{2})&=f^{-1}{\bigl (}\,c_{1}\cdot f({\vec {v}}_{1})+c_{2}\cdot f({\vec {v}}_{2})\,{\bigr )}\\&=f^{-1}{\bigl (}\,f(c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2})\,{\bigr )}\\&=c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2}\\&=c_{1}\cdot f^{-1}({\vec {w}}_{1})+c_{2}\cdot f^{-1}({\vec {w}}_{2})\end{array}}}$

Finally, we must check transitivity, that if ${\displaystyle V}$ is isomorphic to ${\displaystyle W}$ via some map ${\displaystyle f}$ and if ${\displaystyle W}$ is isomorphic to ${\displaystyle U}$ via some map ${\displaystyle g}$ then also ${\displaystyle V}$ is isomorphic to ${\displaystyle U}$. Consider the composition ${\displaystyle g\circ f:V\to U}$. The appendix notes that the composition of two correspondences is a correspondence, so we need only check that the composition preserves linear combinations.

${\displaystyle {\begin{array}{rl}g\circ f\,{\bigl (}c_{1}\cdot {\vec {v}}_{1}+c_{2}\cdot {\vec {v}}_{2}{\bigr )}&=g{\bigl (}\,f(c_{1}\cdot {\vec {v}}_{1}+c_{2}\cdot {\vec {v}}_{2})\,{\bigr )}\\&=g{\bigl (}\,c_{1}\cdot f({\vec {v}}_{1})+c_{2}\cdot f({\vec {v}}_{2})\,{\bigr )}\\&=c_{1}\cdot g{\bigl (}f({\vec {v}}_{1}){\bigr )}+c_{2}\cdot g{\bigl (}f({\vec {v}}_{2}){\bigr )}\\&=c_{1}\cdot (g\circ f)\,({\vec {v}}_{1})+c_{2}\cdot (g\circ f)\,({\vec {v}}_{2})\end{array}}}$

Thus ${\displaystyle g\circ f:V\to U}$ is an isomorphism.
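The symmetry and transitivity arguments can be sketched numerically. In this Python fragment (a sketch assuming NumPy is available), arbitrarily chosen invertible matrices stand in for the isomorphisms ${\displaystyle f}$ and ${\displaystyle g}$; the assertions check that the inverse map and the composition each preserve linear combinations.

```python
import numpy as np

# Invertible matrices stand in for the isomorphisms f: V -> W and g: W -> U.
# All spaces here are R^2 and the matrices are arbitrary invertible choices.
f = np.array([[1.0, 2.0], [3.0, 5.0]])   # det = -1
g = np.array([[2.0, 1.0], [1.0, 1.0]])   # det =  1
f_inv = np.linalg.inv(f)

v1, v2 = np.array([1.0, -1.0]), np.array([0.5, 2.0])
w1, w2 = f @ v1, f @ v2
c1, c2 = 3.0, -2.0

# Symmetry: the inverse map also preserves linear combinations.
assert np.allclose(f_inv @ (c1 * w1 + c2 * w2),
                   c1 * (f_inv @ w1) + c2 * (f_inv @ w2))

# Transitivity: the composition g o f preserves linear combinations.
assert np.allclose((g @ f) @ (c1 * v1 + c2 * v2),
                   c1 * (g @ (f @ v1)) + c2 * (g @ (f @ v2)))
```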

As a consequence of that result, we know that the universe of vector spaces is partitioned into classes: every space is in one and only one isomorphism class.

(Figure: the collection of all finite dimensional vector spaces, partitioned into isomorphism classes; spaces in the same class are isomorphic, ${\displaystyle V\cong W}$.)

Theorem 2.2

Vector spaces are isomorphic if and only if they have the same dimension.

This follows from the next two lemmas.

Lemma 2.3

If spaces are isomorphic then they have the same dimension.

Proof

We shall show that an isomorphism of two spaces gives a correspondence between their bases. That is, where ${\displaystyle f:V\to W}$ is an isomorphism and a basis for the domain ${\displaystyle V}$ is ${\displaystyle B=\langle {\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{n}\rangle }$, then the image set ${\displaystyle D=\langle f({\vec {\beta }}_{1}),\dots ,f({\vec {\beta }}_{n})\rangle }$ is a basis for the codomain ${\displaystyle W}$. (The other half of the correspondence— that for any basis of ${\displaystyle W}$ the inverse image is a basis for ${\displaystyle V}$— follows on recalling that if ${\displaystyle f}$ is an isomorphism then ${\displaystyle f^{-1}}$ is also an isomorphism, and applying the prior sentence to ${\displaystyle f^{-1}}$.)

To see that ${\displaystyle D}$ spans ${\displaystyle W}$, fix any ${\displaystyle {\vec {w}}\in W}$, note that ${\displaystyle f}$ is onto and so there is a ${\displaystyle {\vec {v}}\in V}$ with ${\displaystyle {\vec {w}}=f({\vec {v}})}$, and expand ${\displaystyle {\vec {v}}}$ as a combination of basis vectors.

${\displaystyle {\vec {w}}=f({\vec {v}})=f(v_{1}{\vec {\beta }}_{1}+\dots +v_{n}{\vec {\beta }}_{n})=v_{1}\cdot f({\vec {\beta }}_{1})+\dots +v_{n}\cdot f({\vec {\beta }}_{n})}$

For linear independence of ${\displaystyle D}$, if

${\displaystyle {\vec {0}}_{W}=c_{1}f({\vec {\beta }}_{1})+\dots +c_{n}f({\vec {\beta }}_{n})=f(c_{1}{\vec {\beta }}_{1}+\dots +c_{n}{\vec {\beta }}_{n})}$

then, since ${\displaystyle f}$ is one-to-one and so the only vector sent to ${\displaystyle {\vec {0}}_{W}}$ is ${\displaystyle {\vec {0}}_{V}}$, we have that ${\displaystyle {\vec {0}}_{V}=c_{1}{\vec {\beta }}_{1}+\dots +c_{n}{\vec {\beta }}_{n}}$, implying that all of the ${\displaystyle c}$'s are zero.
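The key step of this proof, that an isomorphism carries a basis to a basis, can be checked in a small numeric instance. This sketch (assuming NumPy; the matrix is an arbitrary invertible choice) verifies that the images of a basis are linearly independent and span, via a rank computation.

```python
import numpy as np

# An arbitrary invertible matrix acts as an isomorphism f: R^3 -> R^3.
f = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 3.0]])
assert abs(np.linalg.det(f)) > 1e-12          # f is a correspondence

B = np.eye(3)                                 # standard basis, as columns
D = f @ B                                     # images of the basis vectors
assert np.linalg.matrix_rank(D) == 3          # D is independent and spans,
                                              # so it is a basis for the codomain
```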

Lemma 2.4

If spaces have the same dimension then they are isomorphic.

Proof

To show that any two spaces of dimension ${\displaystyle n}$ are isomorphic, we can simply show that any one is isomorphic to ${\displaystyle \mathbb {R} ^{n}}$. Then we will have shown that they are isomorphic to each other, by the transitivity of isomorphism (which was established in Theorem 2.1).

Let ${\displaystyle V}$ be ${\displaystyle n}$-dimensional. Fix a basis ${\displaystyle B=\langle {\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{n}\rangle }$ for the domain ${\displaystyle V}$. Consider the representation of the members of that domain with respect to the basis as a function from ${\displaystyle V}$ to ${\displaystyle \mathbb {R} ^{n}}$

${\displaystyle {\vec {v}}=v_{1}{\vec {\beta }}_{1}+\dots +v_{n}{\vec {\beta }}_{n}\,{\stackrel {{\text{Rep}}_{B}}{\longmapsto }}\,{\begin{pmatrix}v_{1}\\\vdots \\v_{n}\end{pmatrix}}}$

(it is well-defined[3] since every ${\displaystyle {\vec {v}}}$ has one and only one such representation— see Remark 2.5 below).

This function is one-to-one because if

${\displaystyle {\text{Rep}}_{B}(u_{1}{\vec {\beta }}_{1}+\dots +u_{n}{\vec {\beta }}_{n})={\text{Rep}}_{B}(v_{1}{\vec {\beta }}_{1}+\dots +v_{n}{\vec {\beta }}_{n})}$

then

${\displaystyle {\begin{pmatrix}u_{1}\\\vdots \\u_{n}\end{pmatrix}}={\begin{pmatrix}v_{1}\\\vdots \\v_{n}\end{pmatrix}}}$

and so ${\displaystyle u_{1}=v_{1}}$, ..., ${\displaystyle u_{n}=v_{n}}$, and therefore the original arguments ${\displaystyle u_{1}{\vec {\beta }}_{1}+\dots +u_{n}{\vec {\beta }}_{n}}$ and ${\displaystyle v_{1}{\vec {\beta }}_{1}+\dots +v_{n}{\vec {\beta }}_{n}}$ are equal.

This function is onto; any ${\displaystyle n}$-tall vector

${\displaystyle {\vec {w}}={\begin{pmatrix}w_{1}\\\vdots \\w_{n}\end{pmatrix}}}$

is the image of some ${\displaystyle {\vec {v}}\in V}$, namely ${\displaystyle {\vec {w}}={\rm {Rep}}_{B}(w_{1}{\vec {\beta }}_{1}+\dots +w_{n}{\vec {\beta }}_{n})}$.

Finally, this function preserves structure.

${\displaystyle {\begin{array}{rl}{\rm {Rep}}_{B}(r\cdot {\vec {u}}+s\cdot {\vec {v}})&={\rm {Rep}}_{B}(\,(ru_{1}+sv_{1}){\vec {\beta }}_{1}+\dots +(ru_{n}+sv_{n}){\vec {\beta }}_{n}\,)\\&={\begin{pmatrix}ru_{1}+sv_{1}\\\vdots \\ru_{n}+sv_{n}\end{pmatrix}}\\&=r\cdot {\begin{pmatrix}u_{1}\\\vdots \\u_{n}\end{pmatrix}}+s\cdot {\begin{pmatrix}v_{1}\\\vdots \\v_{n}\end{pmatrix}}\\&=r\cdot {\rm {Rep}}_{B}({\vec {u}})+s\cdot {\rm {Rep}}_{B}({\vec {v}})\end{array}}}$

Thus the ${\displaystyle {\mbox{Rep}}_{B}}$ function is an isomorphism and thus any ${\displaystyle n}$-dimensional space is isomorphic to the ${\displaystyle n}$-dimensional space ${\displaystyle \mathbb {R} ^{n}}$. Consequently, any two spaces with the same dimension are isomorphic.
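The ${\displaystyle {\mbox{Rep}}_{B}}$ construction can be tried out concretely. In this sketch (assuming NumPy; the basis is an arbitrary choice), finding a vector's coordinates with respect to ${\displaystyle B}$ amounts to solving a linear system, and the assertions check structure preservation and invertibility.

```python
import numpy as np

# Rep_B for V = R^2 with the arbitrarily chosen basis B = <(1,1), (1,-1)>.
B = np.array([[1.0, 1.0],
              [1.0, -1.0]])        # basis vectors as the columns

def rep_B(v):
    # Coordinates of v with respect to B: solve B c = v.
    return np.linalg.solve(B, v)

u, v = np.array([3.0, 1.0]), np.array([0.0, 4.0])
r, s = 2.0, -1.0

# Structure preservation: Rep_B(r u + s v) = r Rep_B(u) + s Rep_B(v).
assert np.allclose(rep_B(r * u + s * v), r * rep_B(u) + s * rep_B(v))

# One-to-one and onto: B is invertible, so the representation inverts.
assert np.allclose(B @ rep_B(u), u)
```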

Remark 2.5

The parenthetical comment in that proof about the role played by the "one and only one representation" result requires some explanation. We need to show that (for a fixed ${\displaystyle B}$) each vector in the domain is associated by ${\displaystyle {\mbox{Rep}}_{B}}$ with one and only one vector in the codomain.

A contrasting example, where an association doesn't have this property, is illuminating. Consider this subset of ${\displaystyle {\mathcal {P}}_{2}}$, which is not a basis.

${\displaystyle A=\{1+0x+0x^{2},0+1x+0x^{2},0+0x+1x^{2},1+1x+2x^{2}\}}$

Call those four polynomials ${\displaystyle {\vec {\alpha }}_{1}}$, ..., ${\displaystyle {\vec {\alpha }}_{4}}$. If, mimicking the above proof, we try to write the members of ${\displaystyle {\mathcal {P}}_{2}}$ as ${\displaystyle {\vec {p}}=c_{1}{\vec {\alpha }}_{1}+c_{2}{\vec {\alpha }}_{2}+c_{3}{\vec {\alpha }}_{3}+c_{4}{\vec {\alpha }}_{4}}$, and associate ${\displaystyle {\vec {p}}}$ with the four-tall vector with components ${\displaystyle c_{1}}$, ..., ${\displaystyle c_{4}}$, then there is a problem. For, consider ${\displaystyle {\vec {p}}(x)=1+x+x^{2}}$. The set ${\displaystyle A}$ spans the space ${\displaystyle {\mathcal {P}}_{2}}$, so there is at least one four-tall vector associated with ${\displaystyle {\vec {p}}}$. But ${\displaystyle A}$ is not linearly independent and so vectors do not have unique decompositions. In this case, both

${\displaystyle {\vec {p}}(x)=1{\vec {\alpha }}_{1}+1{\vec {\alpha }}_{2}+1{\vec {\alpha }}_{3}+0{\vec {\alpha }}_{4}\quad {\text{and}}\quad {\vec {p}}(x)=0{\vec {\alpha }}_{1}+0{\vec {\alpha }}_{2}-1{\vec {\alpha }}_{3}+1{\vec {\alpha }}_{4}}$

hold, and so there is more than one four-tall vector associated with ${\displaystyle {\vec {p}}}$.

${\displaystyle {\begin{pmatrix}1\\1\\1\\0\end{pmatrix}}\quad {\text{and}}\quad {\begin{pmatrix}0\\0\\-1\\1\end{pmatrix}}}$

That is, with input ${\displaystyle {\vec {p}}}$ this association does not have a well-defined (i.e., single) output value.

Any map whose definition appears possibly ambiguous must be checked to see that it is well-defined. For ${\displaystyle {\mbox{Rep}}_{B}}$ in the above proof that check is Problem 11.
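The failure of well-definedness in that remark can be verified directly. This sketch (assuming NumPy) stores the polynomials of ${\displaystyle A}$ as coefficient rows and checks that the two decompositions of ${\displaystyle {\vec {p}}}$ given above rebuild the same polynomial while being different four-tuples.

```python
import numpy as np

# The four polynomials of A as coefficient rows (constant, x, x^2).
A = np.array([[1.0, 0.0, 0.0],    # 1
              [0.0, 1.0, 0.0],    # x
              [0.0, 0.0, 1.0],    # x^2
              [1.0, 1.0, 2.0]])   # 1 + x + 2x^2

p = np.array([1.0, 1.0, 1.0])     # p(x) = 1 + x + x^2

c = np.array([1.0, 1.0, 1.0, 0.0])    # first decomposition from the text
d = np.array([0.0, 0.0, -1.0, 1.0])   # second decomposition from the text

# Both coefficient four-tuples rebuild the same polynomial, so the
# association p -> coefficients is not single-valued: not well-defined.
assert np.allclose(c @ A, p)
assert np.allclose(d @ A, p)
assert not np.allclose(c, d)
```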

That ends the proof of Theorem 2.2. We say that the isomorphism classes are characterized by dimension because we can describe each class simply by giving the number that is the dimension of all of the spaces in that class.

This subsection's results give us a collection of representatives of the isomorphism classes.[4]

Corollary 2.6

A finite-dimensional vector space is isomorphic to one and only one of the ${\displaystyle \mathbb {R} ^{n}}$.

The proofs above pack many ideas into a small space. Through the rest of this chapter we'll consider these ideas again, and fill them out. For a taste of this, we will expand here on the proof of Lemma 2.4.

Example 2.7

The space ${\displaystyle {\mathcal {M}}_{2\!\times \!2}}$ of ${\displaystyle 2\!\times \!2}$ matrices is isomorphic to ${\displaystyle \mathbb {R} ^{4}}$. With this basis for the domain

${\displaystyle B=\langle {\begin{pmatrix}1&0\\0&0\end{pmatrix}},{\begin{pmatrix}0&1\\0&0\end{pmatrix}},{\begin{pmatrix}0&0\\1&0\end{pmatrix}},{\begin{pmatrix}0&0\\0&1\end{pmatrix}}\rangle }$

the isomorphism given in the lemma, the representation map ${\displaystyle f_{1}={\mbox{Rep}}_{B}}$, simply carries the entries over.

${\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}{\stackrel {f_{1}}{\longmapsto }}{\begin{pmatrix}a\\b\\c\\d\end{pmatrix}}}$

One way to think of the map ${\displaystyle f_{1}}$ is: fix the basis ${\displaystyle B}$ for the domain and the basis ${\displaystyle {\mathcal {E}}_{4}}$ for the codomain, and associate ${\displaystyle {\vec {\beta }}_{1}}$ with ${\displaystyle {\vec {e}}_{1}}$, and ${\displaystyle {\vec {\beta }}_{2}}$ with ${\displaystyle {\vec {e}}_{2}}$, etc. Then extend this association to all of the members of the two spaces.

${\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}=a{\vec {\beta }}_{1}+b{\vec {\beta }}_{2}+c{\vec {\beta }}_{3}+d{\vec {\beta }}_{4}\;\;{\stackrel {f_{1}}{\longmapsto }}\;\;a{\vec {e}}_{1}+b{\vec {e}}_{2}+c{\vec {e}}_{3}+d{\vec {e}}_{4}={\begin{pmatrix}a\\b\\c\\d\end{pmatrix}}}$

We say that the map has been extended linearly from the bases to the spaces.

We can do the same thing with different bases, for instance, taking this basis for the domain.

${\displaystyle A=\langle {\begin{pmatrix}2&0\\0&0\end{pmatrix}},{\begin{pmatrix}0&2\\0&0\end{pmatrix}},{\begin{pmatrix}0&0\\2&0\end{pmatrix}},{\begin{pmatrix}0&0\\0&2\end{pmatrix}}\rangle }$

Associating corresponding members of ${\displaystyle A}$ and ${\displaystyle {\mathcal {E}}_{4}}$ and extending linearly

${\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}=(a/2){\vec {\alpha }}_{1}+(b/2){\vec {\alpha }}_{2}+(c/2){\vec {\alpha }}_{3}+(d/2){\vec {\alpha }}_{4}}$
${\displaystyle {\stackrel {f_{2}}{\longmapsto }}\;\;(a/2){\vec {e}}_{1}+(b/2){\vec {e}}_{2}+(c/2){\vec {e}}_{3}+(d/2){\vec {e}}_{4}={\begin{pmatrix}a/2\\b/2\\c/2\\d/2\end{pmatrix}}}$

gives rise to an isomorphism that is different from ${\displaystyle f_{1}}$.

The prior map arose by changing the basis for the domain. We can also change the basis for the codomain. Starting with

${\displaystyle B\quad {\text{and}}\quad D=\langle {\begin{pmatrix}1\\0\\0\\0\end{pmatrix}},{\begin{pmatrix}0\\1\\0\\0\end{pmatrix}},{\begin{pmatrix}0\\0\\0\\1\end{pmatrix}},{\begin{pmatrix}0\\0\\1\\0\end{pmatrix}}\rangle }$

associating ${\displaystyle {\vec {\beta }}_{1}}$ with ${\displaystyle {\vec {\delta }}_{1}}$, etc., and then linearly extending that correspondence to the whole of the two spaces

${\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}=a{\vec {\beta }}_{1}+b{\vec {\beta }}_{2}+c{\vec {\beta }}_{3}+d{\vec {\beta }}_{4}\;\;{\stackrel {f_{3}}{\longmapsto }}\;\;a{\vec {\delta }}_{1}+b{\vec {\delta }}_{2}+c{\vec {\delta }}_{3}+d{\vec {\delta }}_{4}={\begin{pmatrix}a\\b\\d\\c\end{pmatrix}}}$

gives still another isomorphism.
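The three maps of this example can be compared numerically. In this sketch (assuming NumPy), each ${\displaystyle 2\!\times \!2}$ matrix is flattened to the four-tuple ${\displaystyle (a,b,c,d)}$ and the bases become ${\displaystyle 4\!\times \!4}$ matrices, so each representation is again a linear solve.

```python
import numpy as np

# A 2x2 matrix flattened to (a, b, c, d); bases B and A from the example.
m = np.array([1.0, 2.0, 3.0, 4.0])

B = np.eye(4)              # the four matrix units, flattened
A = 2 * np.eye(4)          # the same units with each entry doubled

f1 = np.linalg.solve(B, m)         # Rep_B: carries the entries over
f2 = np.linalg.solve(A, m)         # Rep_A: halves each entry
f3 = f1[[0, 1, 3, 2]]              # codomain basis D swaps the last two slots

assert np.allclose(f1, [1.0, 2.0, 3.0, 4.0])
assert np.allclose(f2, [0.5, 1.0, 1.5, 2.0])
assert np.allclose(f3, [1.0, 2.0, 4.0, 3.0])
```

All three are isomorphisms from ${\displaystyle {\mathcal {M}}_{2\!\times \!2}}$ to ${\displaystyle \mathbb {R} ^{4}}$, yet they are three different maps.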

So there is a connection between the maps between spaces and bases for those spaces. Later sections will explore that connection.

We will close this section with a summary.

Recall that in the first chapter we defined two matrices as row equivalent if they can be derived from each other by elementary row operations (this was the meaning of same-ness that was of interest there). We showed that it is an equivalence relation and so the collection of matrices is partitioned into classes, where all the matrices that are row equivalent fall together into a single class. Then, for insight into which matrices are in each class, we gave representatives for the classes, the reduced echelon form matrices.

In this section we have followed much the same outline, except that the appropriate notion of same-ness here is vector space isomorphism. First we defined isomorphism, saw some examples, and established some properties. Then we showed that it is an equivalence relation, and now we have a set of class representatives, the real vector spaces ${\displaystyle \mathbb {R} ^{1}}$, ${\displaystyle \mathbb {R} ^{2}}$, etc.

(Figure: all finite dimensional vector spaces, partitioned into isomorphism classes, with one representative per class: the real spaces ${\displaystyle \mathbb {R} ^{n}}$.)

As before, the list of representatives helps us to understand the partition. It is simply a classification of spaces by dimension.

In the second chapter, with the definition of vector spaces, we seemed to have opened up our studies to many examples of new structures besides the familiar ${\displaystyle \mathbb {R} ^{n}}$'s. We now know that isn't the case. Any finite-dimensional vector space is actually "the same" as a real space. We are thus considering exactly the structures that we need to consider.

The rest of the chapter fills out the work in this section. In particular, in the next section we will consider maps that preserve structure, but are not necessarily correspondences.

## Exercises

This exercise is recommended for all readers.
Problem 1

Decide if the spaces are isomorphic.

1. ${\displaystyle \mathbb {R} ^{2}}$, ${\displaystyle \mathbb {R} ^{4}}$
2. ${\displaystyle {\mathcal {P}}_{5}}$, ${\displaystyle \mathbb {R} ^{5}}$
3. ${\displaystyle {\mathcal {M}}_{2\!\times \!3}}$, ${\displaystyle \mathbb {R} ^{6}}$
4. ${\displaystyle {\mathcal {P}}_{5}}$, ${\displaystyle {\mathcal {M}}_{2\!\times \!3}}$
5. ${\displaystyle {\mathcal {M}}_{2\!\times \!k}}$, ${\displaystyle \mathbb {C} ^{k}}$

Each pair of spaces is isomorphic if and only if the two have the same dimension. We can, when there is an isomorphism, state a map, but it isn't strictly necessary.

1. No, they have different dimensions.
2. No, they have different dimensions.
3. Yes, they have the same dimension. One isomorphism is this.
${\displaystyle {\begin{pmatrix}a&b&c\\d&e&f\end{pmatrix}}\mapsto {\begin{pmatrix}a\\\vdots \\f\end{pmatrix}}}$
4. Yes, they have the same dimension. This is an isomorphism.
${\displaystyle a+bx+\cdots +fx^{5}\mapsto {\begin{pmatrix}a&b&c\\d&e&f\end{pmatrix}}}$
5. Yes, both have dimension ${\displaystyle 2k}$.
This exercise is recommended for all readers.
Problem 2

Consider the isomorphism ${\displaystyle {\rm {Rep}}_{B}(\cdot ):{\mathcal {P}}_{1}\to \mathbb {R} ^{2}}$ where ${\displaystyle B=\langle 1,1+x\rangle }$. Find the image of each of these elements of the domain.

1. ${\displaystyle 3-2x}$;
2. ${\displaystyle 2+2x}$;
3. ${\displaystyle x}$
1. ${\displaystyle {\rm {Rep}}_{B}(3-2x)={\begin{pmatrix}5\\-2\end{pmatrix}}}$
2. ${\displaystyle {\begin{pmatrix}0\\2\end{pmatrix}}}$
3. ${\displaystyle {\begin{pmatrix}-1\\1\end{pmatrix}}}$
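These answers can be checked by solving for the coordinates. A sketch assuming NumPy, with each member of ${\displaystyle {\mathcal {P}}_{1}}$ stored as a pair (constant term, ${\displaystyle x}$-coefficient):

```python
import numpy as np

# B = <1, 1+x>, with each basis polynomial stored as a column
# (constant term, x-coefficient).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

def rep_B(p):
    return np.linalg.solve(B, p)

assert np.allclose(rep_B(np.array([3.0, -2.0])), [5.0, -2.0])  # 3 - 2x
assert np.allclose(rep_B(np.array([2.0, 2.0])), [0.0, 2.0])    # 2 + 2x
assert np.allclose(rep_B(np.array([0.0, 1.0])), [-1.0, 1.0])   # x
```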
This exercise is recommended for all readers.
Problem 3

Show that if ${\displaystyle m\neq n}$ then ${\displaystyle \mathbb {R} ^{m}\not \cong \mathbb {R} ^{n}}$.

They have different dimensions.

This exercise is recommended for all readers.
Problem 4

Is ${\displaystyle {\mathcal {M}}_{m\!\times \!n}\cong {\mathcal {M}}_{n\!\times \!m}}$?

Yes, both are ${\displaystyle mn}$-dimensional.
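One explicit isomorphism here is transposition. This sketch (assuming NumPy; the matrices are arbitrary) checks that transposition is its own inverse and preserves linear combinations.

```python
import numpy as np

# Transposition M_{2x3} -> M_{3x2}: a correspondence that preserves
# linear combinations.  The matrices and scalars are arbitrary choices.
M = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
N = np.array([[0.0, 1.0, 0.0], [2.0, 0.0, 1.0]])
r, s = 3.0, -1.0

assert np.allclose((r * M + s * N).T, r * M.T + s * N.T)   # preserves structure
assert np.allclose(M.T.T, M)                               # its own inverse
```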

This exercise is recommended for all readers.
Problem 5

Are any two planes through the origin in ${\displaystyle \mathbb {R} ^{3}}$ isomorphic?

Yes, any two (nondegenerate) planes are both two-dimensional vector spaces.

Problem 6

Find a set of equivalence class representatives other than the set of ${\displaystyle \mathbb {R} ^{n}}$'s.

There are many answers, one is the set of ${\displaystyle {\mathcal {P}}_{k}}$ (taking ${\displaystyle {\mathcal {P}}_{-1}}$ to be the trivial vector space).

Problem 7

True or false: between any ${\displaystyle n}$-dimensional space and ${\displaystyle \mathbb {R} ^{n}}$ there is exactly one isomorphism.

False (except when ${\displaystyle n=0}$). For instance, if ${\displaystyle f:V\to \mathbb {R} ^{n}}$ is an isomorphism then multiplying it by any nonzero scalar gives another, different, isomorphism. (Between trivial spaces the isomorphism is unique; the only map possible is ${\displaystyle {\vec {0}}_{V}\mapsto {\vec {0}}_{W}}$.)

Problem 8

Can a vector space be isomorphic to one of its (proper) subspaces?

No. A proper subspace has a strictly lower dimension than its superspace; if ${\displaystyle U}$ is a proper subspace of ${\displaystyle V}$ then any linearly independent subset of ${\displaystyle U}$ must have fewer than ${\displaystyle \dim(V)}$ members, or else that set would be a basis for ${\displaystyle V}$ and ${\displaystyle U}$ wouldn't be proper.

This exercise is recommended for all readers.
Problem 9

This subsection shows that for any isomorphism, the inverse map is also an isomorphism. This subsection also shows that for a fixed basis ${\displaystyle B}$ of an ${\displaystyle n}$-dimensional vector space ${\displaystyle V}$, the map ${\displaystyle {\text{Rep}}_{B}:V\to \mathbb {R} ^{n}}$ is an isomorphism. Find the inverse of this map.

Where ${\displaystyle B=\langle {\vec {\beta }}_{1},\ldots ,{\vec {\beta }}_{n}\rangle }$, the inverse is this.

${\displaystyle {\begin{pmatrix}c_{1}\\\vdots \\c_{n}\end{pmatrix}}\mapsto c_{1}{\vec {\beta }}_{1}+\cdots +c_{n}{\vec {\beta }}_{n}}$
This exercise is recommended for all readers.
Problem 10

1. Show that the row space of a matrix is isomorphic to the column space of its transpose.
2. Show that the row space of a matrix is isomorphic to its column space.

All three spaces have dimension equal to the rank of the matrix.

Problem 11

Show that the function from Theorem 2.2 is well-defined.

We must show that if ${\displaystyle {\vec {a}}={\vec {b}}}$ then ${\displaystyle f({\vec {a}})=f({\vec {b}})}$. So suppose that ${\displaystyle a_{1}{\vec {\beta }}_{1}+\dots +a_{n}{\vec {\beta }}_{n}=b_{1}{\vec {\beta }}_{1}+\dots +b_{n}{\vec {\beta }}_{n}}$. Each vector in a vector space (here, the domain space) has a unique representation as a linear combination of basis vectors, so we can conclude that ${\displaystyle a_{1}=b_{1}}$, ..., ${\displaystyle a_{n}=b_{n}}$. Thus,

${\displaystyle f({\vec {a}})={\begin{pmatrix}a_{1}\\\vdots \\a_{n}\end{pmatrix}}={\begin{pmatrix}b_{1}\\\vdots \\b_{n}\end{pmatrix}}=f({\vec {b}})}$

and so the function is well-defined.

Problem 12

Is the proof of Theorem 2.2 valid when ${\displaystyle n=0}$?

Yes, because a zero-dimensional space is a trivial space.

Problem 13

For each, decide if it is a set of isomorphism class representatives.

1. ${\displaystyle \{\mathbb {C} ^{k}\,{\big |}\,k\in \mathbb {N} \}}$
2. ${\displaystyle \{{\mathcal {P}}_{k}\,{\big |}\,k\in \{-1,0,1,\ldots \}\}}$
3. ${\displaystyle \{{\mathcal {M}}_{m\!\times \!n}\,{\big |}\,m,n\in \mathbb {N} \}}$
1. No, this collection has no spaces of odd dimension.
2. Yes, because ${\displaystyle {\mathcal {P}}_{k}\cong \mathbb {R} ^{k+1}}$.
3. No, for instance, ${\displaystyle {\mathcal {M}}_{2\!\times \!3}\cong {\mathcal {M}}_{3\!\times \!2}}$.
Problem 14

Let ${\displaystyle f}$ be a correspondence between vector spaces ${\displaystyle V}$ and ${\displaystyle W}$ (that is, a map that is one-to-one and onto). Show that the spaces ${\displaystyle V}$ and ${\displaystyle W}$ are isomorphic via ${\displaystyle f}$ if and only if there are bases ${\displaystyle B\subset V}$ and ${\displaystyle D\subset W}$ such that corresponding vectors have the same coordinates: ${\displaystyle {\rm {Rep}}_{B}({\vec {v}})={\rm {Rep}}_{D}(f({\vec {v}}))}$.

One direction is easy: if the two are isomorphic via ${\displaystyle f}$ then for any basis ${\displaystyle B\subseteq V}$, the set ${\displaystyle D=f(B)}$ is also a basis (this is shown in Lemma 2.3). The check that corresponding vectors have the same coordinates: ${\displaystyle f(c_{1}{\vec {\beta }}_{1}+\dots +c_{n}{\vec {\beta }}_{n})=c_{1}f({\vec {\beta }}_{1})+\dots +c_{n}f({\vec {\beta }}_{n})=c_{1}{\vec {\delta }}_{1}+\dots +c_{n}{\vec {\delta }}_{n}}$ is routine.

For the other half, assume that there are bases such that corresponding vectors have the same coordinates with respect to those bases. Because ${\displaystyle f}$ is a correspondence, to show that it is an isomorphism we need only show that it preserves structure. Because ${\displaystyle {\rm {Rep}}_{B}({\vec {v}}\,)={\rm {Rep}}_{D}(f({\vec {v}}\,))}$, the map ${\displaystyle f}$ preserves structure if and only if representations preserve addition: ${\displaystyle {\rm {Rep}}_{B}({\vec {v}}_{1}+{\vec {v}}_{2})={\rm {Rep}}_{B}({\vec {v}}_{1})+{\rm {Rep}}_{B}({\vec {v}}_{2})}$ and scalar multiplication: ${\displaystyle {\rm {Rep}}_{B}(r\cdot {\vec {v}}\,)=r\cdot {\rm {Rep}}_{B}({\vec {v}}\,)}$. The addition calculation is this: ${\displaystyle (c_{1}+d_{1}){\vec {\beta }}_{1}+\dots +(c_{n}+d_{n}){\vec {\beta }}_{n}=c_{1}{\vec {\beta }}_{1}+\dots +c_{n}{\vec {\beta }}_{n}+d_{1}{\vec {\beta }}_{1}+\dots +d_{n}{\vec {\beta }}_{n}}$, and the scalar multiplication calculation is similar.

Problem 15

Consider the isomorphism ${\displaystyle {\text{Rep}}_{B}:{\mathcal {P}}_{3}\to \mathbb {R} ^{4}}$.

1. Vectors in a real space are orthogonal if and only if their dot product is zero. Give a definition of orthogonality for polynomials.
2. The derivative of a member of ${\displaystyle {\mathcal {P}}_{3}}$ is in ${\displaystyle {\mathcal {P}}_{3}}$. Give a definition of the derivative of a vector in ${\displaystyle \mathbb {R} ^{4}}$.
1. Pulling the definition back from ${\displaystyle \mathbb {R} ^{4}}$ to ${\displaystyle {\mathcal {P}}_{3}}$ gives that ${\displaystyle a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}}$ is orthogonal to ${\displaystyle b_{0}+b_{1}x+b_{2}x^{2}+b_{3}x^{3}}$ if and only if ${\displaystyle a_{0}b_{0}+a_{1}b_{1}+a_{2}b_{2}+a_{3}b_{3}=0}$.
2. A natural definition is this.
${\displaystyle D({\begin{pmatrix}a_{0}\\a_{1}\\a_{2}\\a_{3}\end{pmatrix}})={\begin{pmatrix}a_{1}\\2a_{2}\\3a_{3}\\0\end{pmatrix}}}$
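That definition of ${\displaystyle D}$ amounts to a matrix acting on coefficient vectors. A sketch (assuming NumPy, with members of ${\displaystyle {\mathcal {P}}_{3}}$ stored as ${\displaystyle (a_{0},a_{1},a_{2},a_{3})}$):

```python
import numpy as np

# The derivative on P_3, transported to R^4 via Rep_B with B = <1, x, x^2, x^3>:
# (a0, a1, a2, a3) -> (a1, 2 a2, 3 a3, 0), written as a matrix.
D = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0, 0.0]])

p = np.array([1.0, 1.0, 1.0, 1.0])                 # 1 + x + x^2 + x^3
assert np.allclose(D @ p, [1.0, 2.0, 3.0, 0.0])    # derivative 1 + 2x + 3x^2
```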
This exercise is recommended for all readers.
Problem 16

Does every correspondence between bases, when extended to the spaces, give an isomorphism?

Yes.

Assume that ${\displaystyle V}$ is a vector space with basis ${\displaystyle B=\langle {\vec {\beta }}_{1},\ldots ,{\vec {\beta }}_{n}\rangle }$ and that ${\displaystyle W}$ is another vector space such that the map ${\displaystyle f:B\to W}$ is a correspondence. Consider the extension ${\displaystyle {\hat {f}}:V\to W}$ of ${\displaystyle f}$.

${\displaystyle {\hat {f}}(c_{1}{\vec {\beta }}_{1}+\cdots +c_{n}{\vec {\beta }}_{n})=c_{1}f({\vec {\beta }}_{1})+\cdots +c_{n}f({\vec {\beta }}_{n}).}$

The map ${\displaystyle {\hat {f}}}$ is an isomorphism.

First, ${\displaystyle {\hat {f}}}$ is well-defined because every member of ${\displaystyle V}$ has one and only one representation as a linear combination of elements of ${\displaystyle B}$.

Second, ${\displaystyle {\hat {f}}}$ is one-to-one because every member of ${\displaystyle W}$ has only one representation as a linear combination of elements of ${\displaystyle \langle f({\vec {\beta }}_{1}),\dots ,f({\vec {\beta }}_{n})\rangle }$. The map ${\displaystyle {\hat {f}}}$ is onto because every member of ${\displaystyle W}$ has at least one representation as a linear combination of members of ${\displaystyle \langle f({\vec {\beta }}_{1}),\dots ,f({\vec {\beta }}_{n})\rangle }$.

Finally, preservation of structure is routine to check. For instance, here is the preservation of addition calculation.

${\displaystyle {\begin{array}{rl}{\hat {f}}(\,(c_{1}{\vec {\beta }}_{1}+\dots +c_{n}{\vec {\beta }}_{n})+(d_{1}{\vec {\beta }}_{1}+\dots +d_{n}{\vec {\beta }}_{n})\,)&={\hat {f}}(\,(c_{1}+d_{1}){\vec {\beta }}_{1}+\dots +(c_{n}+d_{n}){\vec {\beta }}_{n}\,)\\&=(c_{1}+d_{1})f({\vec {\beta }}_{1})+\dots +(c_{n}+d_{n})f({\vec {\beta }}_{n})\\&=c_{1}f({\vec {\beta }}_{1})+\dots +c_{n}f({\vec {\beta }}_{n})+d_{1}f({\vec {\beta }}_{1})+\dots +d_{n}f({\vec {\beta }}_{n})\\&={\hat {f}}(c_{1}{\vec {\beta }}_{1}+\dots +c_{n}{\vec {\beta }}_{n})+{\hat {f}}(d_{1}{\vec {\beta }}_{1}+\dots +d_{n}{\vec {\beta }}_{n}).\end{array}}}$

Preservation of scalar multiplication is similar.

Problem 17

(Requires the subsection on Combining Subspaces, which is optional.) Suppose that ${\displaystyle V=V_{1}\oplus V_{2}}$ and that ${\displaystyle V}$ is isomorphic to the space ${\displaystyle U}$ under the map ${\displaystyle f}$. Show that ${\displaystyle U=f(V_{1})\oplus f(V_{2})}$.

Because ${\displaystyle V_{1}\cap V_{2}=\{{\vec {0}}_{V}\}}$ and ${\displaystyle f}$ is one-to-one we have that ${\displaystyle f(V_{1})\cap f(V_{2})=\{{\vec {0}}_{U}\}}$. To finish, count the dimensions: ${\displaystyle \dim(U)=\dim(V)=\dim(V_{1})+\dim(V_{2})=\dim(f(V_{1}))+\dim(f(V_{2}))}$, as required.

Problem 18
Show that this is not a well-defined function from the rational numbers to the integers: with each fraction, associate the value of its numerator.

Rational numbers have many representations, e.g., ${\displaystyle 1/2=3/6}$, and the numerators can vary among representations.

## Footnotes

1. More information on one-to-one and onto maps is in the appendix.

## Section II - Homomorphisms

The definition of isomorphism has two conditions. In this section we will consider the second one, that the map must preserve the algebraic structure of the space. We will focus on this condition by studying maps that are required only to preserve structure; that is, maps that are not required to be correspondences.

Experience shows that this kind of map is tremendously useful in the study of vector spaces. For one thing, as we shall see in the second subsection below, while isomorphisms describe how spaces are the same, these maps describe how spaces can be thought of as alike.

### 1 - Definition

Definition 1.1

A function between vector spaces ${\displaystyle h:V\to W}$ that preserves the operations of addition

if ${\displaystyle {\vec {v}}_{1},{\vec {v}}_{2}\in V}$ then ${\displaystyle h({\vec {v}}_{1}+{\vec {v}}_{2})=h({\vec {v}}_{1})+h({\vec {v}}_{2})}$

and scalar multiplication

if ${\displaystyle {\vec {v}}\in V}$ and ${\displaystyle r\in \mathbb {R} }$ then ${\displaystyle h(r\cdot {\vec {v}})=r\cdot h({\vec {v}})}$

is a homomorphism or linear map.

Example 1.2

The projection map ${\displaystyle \pi :\mathbb {R} ^{3}\to \mathbb {R} ^{2}}$

${\displaystyle {\begin{pmatrix}x\\y\\z\end{pmatrix}}{\stackrel {\pi }{\longmapsto }}{\begin{pmatrix}x\\y\end{pmatrix}}}$

is a homomorphism. It preserves addition

${\displaystyle \pi ({\begin{pmatrix}x_{1}\\y_{1}\\z_{1}\end{pmatrix}}\!+\!{\begin{pmatrix}x_{2}\\y_{2}\\z_{2}\end{pmatrix}})=\pi ({\begin{pmatrix}x_{1}+x_{2}\\y_{1}+y_{2}\\z_{1}+z_{2}\end{pmatrix}})={\begin{pmatrix}x_{1}+x_{2}\\y_{1}+y_{2}\end{pmatrix}}=\pi ({\begin{pmatrix}x_{1}\\y_{1}\\z_{1}\end{pmatrix}})+\pi ({\begin{pmatrix}x_{2}\\y_{2}\\z_{2}\end{pmatrix}})}$

and scalar multiplication.

${\displaystyle \pi (r\cdot {\begin{pmatrix}x_{1}\\y_{1}\\z_{1}\end{pmatrix}})=\pi ({\begin{pmatrix}rx_{1}\\ry_{1}\\rz_{1}\end{pmatrix}})={\begin{pmatrix}rx_{1}\\ry_{1}\end{pmatrix}}=r\cdot \pi ({\begin{pmatrix}x_{1}\\y_{1}\\z_{1}\end{pmatrix}})}$

This map is not an isomorphism since it is not one-to-one. For instance, both ${\displaystyle {\vec {0}}}$ and ${\displaystyle {\vec {e}}_{3}}$ in ${\displaystyle \mathbb {R} ^{3}}$ are mapped to the zero vector in ${\displaystyle \mathbb {R} ^{2}}$.
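The projection's two defining properties, and its failure to be one-to-one, can be checked directly. A sketch assuming NumPy, with arbitrarily chosen vectors:

```python
import numpy as np

def pi(v):                       # the projection R^3 -> R^2
    return v[:2]

u, v = np.array([1.0, 2.0, 3.0]), np.array([-1.0, 0.0, 5.0])
r = 4.0

assert np.allclose(pi(u + v), pi(u) + pi(v))     # preserves addition
assert np.allclose(pi(r * u), r * pi(u))         # preserves scalar multiples

# Not one-to-one: the zero vector and e_3 have the same image.
e3 = np.array([0.0, 0.0, 1.0])
assert np.allclose(pi(e3), pi(np.zeros(3)))
```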

Example 1.3

Of course, the domain and codomain might be other than spaces of column vectors. Both of these are homomorphisms; the verifications are straightforward.

1. ${\displaystyle f_{1}:{\mathcal {P}}_{2}\to {\mathcal {P}}_{3}}$ given by
${\displaystyle a_{0}+a_{1}x+a_{2}x^{2}\;\mapsto \;a_{0}x+(a_{1}/2)x^{2}+(a_{2}/3)x^{3}}$
2. ${\displaystyle f_{2}:M_{2\!\times \!2}\to \mathbb {R} }$ given by
${\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}\mapsto a+d}$
Example 1.4

Between any two spaces there is a zero homomorphism, mapping every vector in the domain to the zero vector in the codomain.

Example 1.5

These two suggest why we use the term "linear map".

1. The map ${\displaystyle g:\mathbb {R} ^{3}\to \mathbb {R} }$ given by
${\displaystyle {\begin{pmatrix}x\\y\\z\end{pmatrix}}{\stackrel {g}{\longmapsto }}3x+2y-4.5z}$
is linear (i.e., is a homomorphism). In contrast, the map ${\displaystyle {\hat {g}}:\mathbb {R} ^{3}\to \mathbb {R} }$ given by
${\displaystyle {\begin{pmatrix}x\\y\\z\end{pmatrix}}{\stackrel {\hat {g}}{\longmapsto }}3x+2y-4.5z+1}$
is not; for instance,
${\displaystyle {\hat {g}}({\begin{pmatrix}0\\0\\0\end{pmatrix}}+{\begin{pmatrix}1\\0\\0\end{pmatrix}})=4\quad {\text{while}}\quad {\hat {g}}({\begin{pmatrix}0\\0\\0\end{pmatrix}})+{\hat {g}}({\begin{pmatrix}1\\0\\0\end{pmatrix}})=5}$
(to show that a map is not linear we need only produce one example of a linear combination that is not preserved).
2. The first of these two maps ${\displaystyle t_{1},t_{2}:\mathbb {R} ^{3}\to \mathbb {R} ^{2}}$ is linear while the second is not.
${\displaystyle {\begin{pmatrix}x\\y\\z\end{pmatrix}}{\stackrel {t_{1}}{\longmapsto }}{\begin{pmatrix}5x-2y\\x+y\end{pmatrix}}\quad {\text{and}}\quad {\begin{pmatrix}x\\y\\z\end{pmatrix}}{\stackrel {t_{2}}{\longmapsto }}{\begin{pmatrix}5x-2y\\xy\end{pmatrix}}}$
Finding an example showing that the second fails to preserve structure is easy.

What distinguishes the homomorphisms is that the coordinate functions are linear combinations of the arguments. See also Problem 7.
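Both failures can be exhibited with single counterexamples, as the text notes. A sketch assuming NumPy; the vector used for ${\displaystyle t_{2}}$ is an arbitrary choice:

```python
import numpy as np

def g_hat(v):                    # 3x + 2y - 4.5z + 1; the "+1" breaks linearity
    return 3 * v[0] + 2 * v[1] - 4.5 * v[2] + 1

def t2(v):                       # the coordinate xy is not a linear combination
    return np.array([5 * v[0] - 2 * v[1], v[0] * v[1]])

zero = np.zeros(3)
e1 = np.array([1.0, 0.0, 0.0])

# g_hat fails on the addition from the text: 4 on the left, 5 on the right.
assert g_hat(zero + e1) != g_hat(zero) + g_hat(e1)

# t2 fails to preserve scalar multiplication.
v = np.array([1.0, 1.0, 0.0])
assert not np.allclose(t2(2 * v), 2 * t2(v))
```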

Obviously, any isomorphism is a homomorphism, since an isomorphism is a homomorphism that is also a correspondence. So, one way to think of the "homomorphism" idea is that it is a generalization of "isomorphism", motivated by the observation that many of the properties of isomorphisms have only to do with the map's structure-preservation property and not with its being a correspondence. As examples, these two results from the prior section do not use one-to-one-ness or onto-ness in their proofs, and therefore apply to any homomorphism.

Lemma 1.6

A homomorphism sends a zero vector to a zero vector.

Lemma 1.7

Each of these is a necessary and sufficient condition for ${\displaystyle f:V\to W}$ to be a homomorphism.

1. ${\displaystyle f(c_{1}\cdot {\vec {v}}_{1}+c_{2}\cdot {\vec {v}}_{2})=c_{1}\cdot f({\vec {v}}_{1})+c_{2}\cdot f({\vec {v}}_{2})}$ for any ${\displaystyle c_{1},c_{2}\in \mathbb {R} }$ and ${\displaystyle {\vec {v}}_{1},{\vec {v}}_{2}\in V}$
2. ${\displaystyle f(c_{1}\cdot {\vec {v}}_{1}+\dots +c_{n}\cdot {\vec {v}}_{n})=c_{1}\cdot f({\vec {v}}_{1})+\dots +c_{n}\cdot f({\vec {v}}_{n})}$ for any ${\displaystyle c_{1},\dots ,c_{n}\in \mathbb {R} }$ and ${\displaystyle {\vec {v}}_{1},\ldots ,{\vec {v}}_{n}\in V}$

Part 1 is often used to check that a function is linear.

Example 1.8

The map ${\displaystyle f:\mathbb {R} ^{2}\to \mathbb {R} ^{4}}$ given by

${\displaystyle {\begin{pmatrix}x\\y\end{pmatrix}}{\stackrel {f}{\longmapsto }}{\begin{pmatrix}x/2\\0\\x+y\\3y\end{pmatrix}}}$

satisfies condition 1 of the prior result

${\displaystyle f(r_{1}\cdot {\begin{pmatrix}x_{1}\\y_{1}\end{pmatrix}}+r_{2}\cdot {\begin{pmatrix}x_{2}\\y_{2}\end{pmatrix}})={\begin{pmatrix}r_{1}(x_{1}/2)+r_{2}(x_{2}/2)\\0\\r_{1}(x_{1}+y_{1})+r_{2}(x_{2}+y_{2})\\r_{1}(3y_{1})+r_{2}(3y_{2})\end{pmatrix}}=r_{1}{\begin{pmatrix}x_{1}/2\\0\\x_{1}+y_{1}\\3y_{1}\end{pmatrix}}+r_{2}{\begin{pmatrix}x_{2}/2\\0\\x_{2}+y_{2}\\3y_{2}\end{pmatrix}}}$

and so it is a homomorphism.

However, some of the results that we have seen for isomorphisms fail to hold for homomorphisms in general. Consider the theorem that an isomorphism between spaces gives a correspondence between their bases. Homomorphisms do not give any such correspondence; Example 1.2 shows that there is no such correspondence, and another example is the zero map between any two nontrivial spaces. Instead, for homomorphisms a weaker but still very useful result holds.

Theorem 1.9

A homomorphism is determined by its action on a basis. That is, if ${\displaystyle \langle {\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{n}\rangle }$ is a basis of a vector space ${\displaystyle V}$ and ${\displaystyle {\vec {w}}_{1},\dots ,{\vec {w}}_{n}}$ are (perhaps not distinct) elements of a vector space ${\displaystyle W}$ then there exists a homomorphism from ${\displaystyle V}$ to ${\displaystyle W}$ sending ${\displaystyle {\vec {\beta }}_{1}}$ to ${\displaystyle {\vec {w}}_{1}}$, ..., and ${\displaystyle {\vec {\beta }}_{n}}$ to ${\displaystyle {\vec {w}}_{n}}$, and that homomorphism is unique.

Proof

We will define the map by associating ${\displaystyle {\vec {\beta }}_{1}}$ with ${\displaystyle {\vec {w}}_{1}}$, etc., and then extending linearly to all of the domain. That is, where ${\displaystyle {\vec {v}}=c_{1}{\vec {\beta }}_{1}+\dots +c_{n}{\vec {\beta }}_{n}}$, the map ${\displaystyle h:V\to W}$ is given by ${\displaystyle h({\vec {v}})=c_{1}{\vec {w}}_{1}+\dots +c_{n}{\vec {w}}_{n}}$. This is well-defined because, with respect to the basis, the representation of each domain vector ${\displaystyle {\vec {v}}}$ is unique.

This map is a homomorphism since it preserves linear combinations; where ${\displaystyle {\vec {v}}_{1}=c_{1}{\vec {\beta }}_{1}+\cdots +c_{n}{\vec {\beta }}_{n}}$ and ${\displaystyle {\vec {v}}_{2}=d_{1}{\vec {\beta }}_{1}+\cdots +d_{n}{\vec {\beta }}_{n}}$, we have this.

${\displaystyle {\begin{array}{rl}h(r_{1}{\vec {v}}_{1}+r_{2}{\vec {v}}_{2})&=h((r_{1}c_{1}+r_{2}d_{1}){\vec {\beta }}_{1}+\dots +(r_{1}c_{n}+r_{2}d_{n}){\vec {\beta }}_{n})\\&=(r_{1}c_{1}+r_{2}d_{1}){\vec {w}}_{1}+\dots +(r_{1}c_{n}+r_{2}d_{n}){\vec {w}}_{n}\\&=r_{1}h({\vec {v}}_{1})+r_{2}h({\vec {v}}_{2})\end{array}}}$

And, this map is unique since if ${\displaystyle {\hat {h}}:V\to W}$ is another homomorphism such that ${\displaystyle {\hat {h}}({\vec {\beta }}_{i})={\vec {w}}_{i}}$ for each ${\displaystyle i}$ then ${\displaystyle h}$ and ${\displaystyle {\hat {h}}}$ agree on all of the vectors in the domain.

${\displaystyle {\begin{array}{rl}{\hat {h}}({\vec {v}})&={\hat {h}}(c_{1}{\vec {\beta }}_{1}+\dots +c_{n}{\vec {\beta }}_{n})\\&=c_{1}{\hat {h}}({\vec {\beta }}_{1})+\dots +c_{n}{\hat {h}}({\vec {\beta }}_{n})\\&=c_{1}{\vec {w}}_{1}+\dots +c_{n}{\vec {w}}_{n}\\&=h({\vec {v}})\end{array}}}$

Thus, ${\displaystyle h}$ and ${\displaystyle {\hat {h}}}$ are the same map.
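The proof's construction, associating each basis vector with a chosen target and extending linearly, can be sketched in code for the concrete case ${\displaystyle V=\mathbb {R} ^{n}}$, ${\displaystyle W=\mathbb {R} ^{m}}$. This assumes NumPy; the helper name `make_homomorphism` and the particular basis and targets are ours, for illustration only.

```python
import numpy as np

def make_homomorphism(basis, targets):
    # Theorem 1.9's recipe: represent v with respect to the basis
    # (the coefficients are unique) and send it to the same
    # combination of the target vectors.
    B = np.column_stack(basis)      # basis vectors as columns
    W = np.column_stack(targets)    # target vectors as columns
    def h(v):
        c = np.linalg.solve(B, v)   # v = c1*beta1 + ... + cn*betan
        return W @ c                # h(v) = c1*w1 + ... + cn*wn
    return h

# Send beta1 = (1,0) to w1 = (2,0,1) and beta2 = (1,1) to w2 = (0,1,0)
h = make_homomorphism([np.array([1.0, 0.0]), np.array([1.0, 1.0])],
                      [np.array([2.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0])])
assert np.allclose(h(np.array([1.0, 1.0])), [0.0, 1.0, 0.0])
```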

Example 1.10

This result says that we can construct a homomorphism by fixing a basis for the domain and specifying where the map sends those basis vectors. For instance, if we specify a map ${\displaystyle h:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}$ that acts on the standard basis ${\displaystyle {\mathcal {E}}_{2}}$ in this way

${\displaystyle h({\begin{pmatrix}1\\0\end{pmatrix}})={\begin{pmatrix}-1\\1\end{pmatrix}}\quad {\text{and}}\quad h({\begin{pmatrix}0\\1\end{pmatrix}})={\begin{pmatrix}-4\\4\end{pmatrix}}}$

then the action of ${\displaystyle h}$ on any other member of the domain is also specified. For instance, the value of ${\displaystyle h}$ on this argument

${\displaystyle h({\begin{pmatrix}3\\-2\end{pmatrix}})=h(3\cdot {\begin{pmatrix}1\\0\end{pmatrix}}-2\cdot {\begin{pmatrix}0\\1\end{pmatrix}})=3\cdot h({\begin{pmatrix}1\\0\end{pmatrix}})-2\cdot h({\begin{pmatrix}0\\1\end{pmatrix}})={\begin{pmatrix}5\\-5\end{pmatrix}}}$

is a direct consequence of the value of ${\displaystyle h}$ on the basis vectors.

Later in this chapter we shall develop a scheme, using matrices, that is convenient for computations like this one.
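As a small numerical sketch (assuming NumPy, and anticipating that matrix scheme only informally), collecting the images of the basis vectors as the columns of an array reproduces the computation of Example 1.10.

```python
import numpy as np

# Columns are the images h(e1) and h(e2) from Example 1.10
H = np.array([[-1.0, -4.0],
              [ 1.0,  4.0]])
v = np.array([3.0, -2.0])
print(H @ v)   # -> [ 5. -5.], matching the computation above
```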

Just as the isomorphisms of a space with itself are useful and interesting, so too are the homomorphisms of a space with itself.

Definition 1.11

A linear map from a space into itself ${\displaystyle t:V\to V}$ is a linear transformation.

Remark 1.12

In this book we use "linear transformation" only in the case where the codomain equals the domain, but it is widely used in other texts as a general synonym for "homomorphism".

Example 1.13

The map on ${\displaystyle \mathbb {R} ^{2}}$ that projects all vectors down to the ${\displaystyle x}$-axis

${\displaystyle {\begin{pmatrix}x\\y\end{pmatrix}}\mapsto {\begin{pmatrix}x\\0\end{pmatrix}}}$

is a linear transformation.

Example 1.14

The derivative map ${\displaystyle d/dx:{\mathcal {P}}_{n}\to {\mathcal {P}}_{n}}$

${\displaystyle a_{0}+a_{1}x+\cdots +a_{n}x^{n}{\stackrel {d/dx}{\longmapsto }}a_{1}+2a_{2}x+3a_{3}x^{2}+\cdots +na_{n}x^{n-1}}$

is a linear transformation, by this familiar result from calculus: ${\displaystyle d(c_{1}f+c_{2}g)/dx=c_{1}\,(df/dx)+c_{2}\,(dg/dx)}$.
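Representing a member of ${\displaystyle {\mathcal {P}}_{n}}$ by its coefficient sequence ${\displaystyle (a_{0},\dots ,a_{n})}$, the derivative map can be sketched as below. This assumes NumPy; the function name `ddx` is ours, and the trailing zero keeps the output in the same space ${\displaystyle {\mathcal {P}}_{n}}$.

```python
import numpy as np

def ddx(p):
    # p holds coefficients (a_0, a_1, ..., a_n); the derivative has
    # coefficients (a_1, 2*a_2, ..., n*a_n), padded with a trailing
    # zero so that d/dx maps P_n into P_n.
    n = len(p)
    return np.array([k * p[k] for k in range(1, n)] + [0.0])

p = np.array([1.0, 2.0, 3.0])   # 1 + 2x + 3x^2
print(ddx(p))                   # -> [2. 6. 0.], i.e. 2 + 6x
```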

Example 1.15
The matrix transpose map
${\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}\;\mapsto \;{\begin{pmatrix}a&c\\b&d\end{pmatrix}}}$

is a linear transformation of ${\displaystyle {\mathcal {M}}_{2\!\times \!2}}$. Note that this transformation is one-to-one and onto, and so in fact it is an automorphism.
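Both claims about the transpose, that it preserves linear combinations and that it is invertible (indeed, it is its own inverse), can be spot-checked numerically. A sketch assuming NumPy, on random matrices rather than as a proof:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
c1, c2 = 5.0, -3.0
# Transpose preserves linear combinations ...
assert np.allclose((c1 * A + c2 * B).T, c1 * A.T + c2 * B.T)
# ... and is its own inverse, hence one-to-one and onto
assert np.allclose(A.T.T, A)
print("transpose checks pass")
```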

We finish this subsection about maps by recalling that we can linearly combine maps. For instance, for these maps from ${\displaystyle \mathbb {R} ^{2}}$ to itself

${\displaystyle {\begin{pmatrix}x\\y\end{pmatrix}}{\stackrel {f}{\longmapsto }}{\begin{pmatrix}2x\\3x-2y\end{pmatrix}}\quad {\text{and}}\quad {\begin{pmatrix}x\\y\end{pmatrix}}{\stackrel {g}{\longmapsto }}{\begin{pmatrix}0\\5x\end{pmatrix}}}$

the linear combination ${\displaystyle 5f-2g}$ is also a map from ${\displaystyle \mathbb {R} ^{2}}$ to itself.

${\displaystyle {\begin{pmatrix}x\\y\end{pmatrix}}{\stackrel {5f-2g}{\longmapsto }}{\begin{pmatrix}10x\\5x-10y\end{pmatrix}}}$
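The pointwise meaning of a linear combination of maps, ${\displaystyle (5f-2g)({\vec {v}})=5f({\vec {v}})-2g({\vec {v}})}$, can be sketched as follows. This assumes NumPy, and the helper name `lin_comb` is ours.

```python
import numpy as np

def f(v):
    x, y = v
    return np.array([2 * x, 3 * x - 2 * y])

def g(v):
    x, y = v
    return np.array([0.0, 5 * x])

def lin_comb(a, p, b, q):
    # Pointwise combination of maps: (a*p + b*q)(v) = a*p(v) + b*q(v)
    return lambda v: a * p(v) + b * q(v)

h = lin_comb(5, f, -2, g)
print(h(np.array([1.0, 1.0])))   # -> [10. -5.], agreeing with (10x, 5x-10y)
```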

Lemma 1.16

For vector spaces ${\displaystyle V}$ and ${\displaystyle W}$, the set of linear functions from ${\displaystyle V}$ to ${\displaystyle W}$ is itself a vector space, a subspace of the space of all functions from ${\displaystyle V}$ to ${\displaystyle W}$. It is denoted ${\displaystyle \mathop {\mathcal {L}} (V,W)}$.

Proof

This set is non-empty because it contains the zero homomorphism. So to show that it is a subspace we need only check that it is closed under linear combinations. Let ${\displaystyle f,g:V\to W}$ be linear. Then their sum is linear

${\displaystyle {\begin{array}{rl}(f+g)(c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2})&=f(c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2})+g(c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2})\\&=c_{1}f({\vec {v}}_{1})+c_{2}f({\vec {v}}_{2})+c_{1}g({\vec {v}}_{1})+c_{2}g({\vec {v}}_{2})\\&=c_{1}{\bigl (}f+g{\bigr )}({\vec {v}}_{1})+c_{2}{\bigl (}f+g{\bigr )}({\vec {v}}_{2})\end{array}}}$

and any scalar multiple is also linear.

${\displaystyle {\begin{array}{rl}(r\cdot f)(c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2})&=r\cdot f(c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2})=r(c_{1}f({\vec {v}}_{1})+c_{2}f({\vec {v}}_{2}))\\&=c_{1}(r\cdot f)({\vec {v}}_{1})+c_{2}(r\cdot f)({\vec {v}}_{2})\end{array}}}$

Hence ${\displaystyle \mathop {\mathcal {L}} (V,W)}$ is a subspace.

We started this section by isolating the structure preservation property of isomorphisms. That is, we defined homomorphisms as a generalization of isomorphisms. Some of the properties that we studied for isomorphisms carried over unchanged, while others were adapted to this more general setting.

It would be a mistake, though, to view this new notion of homomorphism as derived from, or somehow secondary to, that of isomorphism. In the rest of this chapter we shall work mostly with homomorphisms, partly because any statement made about homomorphisms is automatically true about isomorphisms, but more because, while the isomorphism concept is perhaps more natural, experience shows that the homomorphism concept is actually more fruitful and more central to further progress.

## Exercises

This exercise is recommended for all readers.
Problem 1

Decide if each ${\displaystyle h:\mathbb {R} ^{3}\to \mathbb {R} ^{2}}$ is linear.

1. ${\displaystyle h({\begin{pmatrix}x\\y\\z\end{pmatrix}})={\begin{pmatrix}x\\x+y+z\end{pmatrix}}}$
2. ${\displaystyle h({\begin{pmatrix}x\\y\\z\end{pmatrix}})={\begin{pmatrix}0\\0\end{pmatrix}}}$
3. ${\displaystyle h({\begin{pmatrix}x\\y\\z\end{pmatrix}})={\begin{pmatrix}1\\1\end{pmatrix}}}$