# Linear Algebra/Combining Subspaces/Solutions

## Solutions

This exercise is recommended for all readers.
Problem 1

Decide if ${\displaystyle \mathbb {R} ^{2}}$  is the direct sum of each ${\displaystyle W_{1}}$  and ${\displaystyle W_{2}}$ .

1. ${\displaystyle W_{1}=\{{\begin{pmatrix}x\\0\end{pmatrix}}\,{\big |}\,x\in \mathbb {R} \}}$ , ${\displaystyle W_{2}=\{{\begin{pmatrix}x\\x\end{pmatrix}}\,{\big |}\,x\in \mathbb {R} \}}$
2. ${\displaystyle W_{1}=\{{\begin{pmatrix}s\\s\end{pmatrix}}\,{\big |}\,s\in \mathbb {R} \}}$ , ${\displaystyle W_{2}=\{{\begin{pmatrix}s\\1.1s\end{pmatrix}}\,{\big |}\,s\in \mathbb {R} \}}$
3. ${\displaystyle W_{1}=\mathbb {R} ^{2}}$ , ${\displaystyle W_{2}=\{{\vec {0}}\}}$
4. ${\displaystyle W_{1}=W_{2}=\{{\begin{pmatrix}t\\t\end{pmatrix}}\,{\big |}\,t\in \mathbb {R} \}}$
5. ${\displaystyle W_{1}=\{{\begin{pmatrix}1\\0\end{pmatrix}}+{\begin{pmatrix}x\\0\end{pmatrix}}\,{\big |}\,x\in \mathbb {R} \}}$ , ${\displaystyle W_{2}=\{{\begin{pmatrix}-1\\0\end{pmatrix}}+{\begin{pmatrix}0\\y\end{pmatrix}}\,{\big |}\,y\in \mathbb {R} \}}$

With each of these we can apply Lemma 4.15.

1. Yes. The plane is the sum of this ${\displaystyle W_{1}}$  and ${\displaystyle W_{2}}$  because for any scalars ${\displaystyle a}$  and ${\displaystyle b}$
${\displaystyle {\begin{pmatrix}a\\b\end{pmatrix}}={\begin{pmatrix}a-b\\0\end{pmatrix}}+{\begin{pmatrix}b\\b\end{pmatrix}}}$
shows that the general vector is a sum of vectors from the two parts. And, these two subspaces are (different) lines through the origin, and so have a trivial intersection.
2. Yes. To see that any vector in the plane is a combination of vectors from these parts, consider this relationship.
${\displaystyle {\begin{pmatrix}a\\b\end{pmatrix}}=c_{1}{\begin{pmatrix}1\\1\end{pmatrix}}+c_{2}{\begin{pmatrix}1\\1.1\end{pmatrix}}}$
We could now simply note that the set
${\displaystyle \{{\begin{pmatrix}1\\1\end{pmatrix}},{\begin{pmatrix}1\\1.1\end{pmatrix}}\}}$
is a basis for the space (because it is clearly linearly independent, and has size two in ${\displaystyle \mathbb {R} ^{2}}$ ), and thus there is one and only one solution to the above equation, implying that all decompositions are unique. Alternatively, we can solve
${\displaystyle {\begin{array}{*{2}{rc}r}c_{1}&+&c_{2}&=&a\\c_{1}&+&1.1c_{2}&=&b\end{array}}\;{\xrightarrow[{}]{-\rho _{1}+\rho _{2}}}\;{\begin{array}{*{2}{rc}r}c_{1}&+&c_{2}&=&a\\&&0.1c_{2}&=&-a+b\end{array}}}$
to get that ${\displaystyle c_{2}=10(-a+b)}$  and ${\displaystyle c_{1}=11a-10b}$ , and so we have
${\displaystyle {\begin{pmatrix}a\\b\end{pmatrix}}={\begin{pmatrix}11a-10b\\11a-10b\end{pmatrix}}+{\begin{pmatrix}-10a+10b\\1.1\cdot (-10a+10b)\end{pmatrix}}}$
as required. As with the prior answer, each of the two subspaces is a line through the origin, and their intersection is trivial.
3. Yes. Each vector in the plane is a sum in this way
${\displaystyle {\begin{pmatrix}x\\y\end{pmatrix}}={\begin{pmatrix}x\\y\end{pmatrix}}+{\begin{pmatrix}0\\0\end{pmatrix}}}$
and the intersection of the two subspaces is trivial.
4. No. The intersection is not trivial.
5. No. The second set is not a subspace, since it does not contain the zero vector (each of its members has first component ${\displaystyle -1}$ ), and a direct sum is defined only for subspaces. (The first set, despite how it is written, is the ${\displaystyle x}$ -axis and so is a subspace.)
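As a quick numerical sanity check on the arithmetic in part 2, this sketch (the helper name is ours, not the book's) verifies that the coefficients ${\displaystyle c_{1}=11a-10b}$ and ${\displaystyle c_{2}=10(-a+b)}$ do reconstruct an arbitrary vector:

```python
# Check the part-2 decomposition of (a, b) into a piece on the line y = x (W1)
# and a piece on the line y = 1.1x (W2).
def decompose(a, b):
    c1 = 11 * a - 10 * b        # coefficient on the W1 direction (1, 1)
    c2 = 10 * (-a + b)          # coefficient on the W2 direction (1, 1.1)
    return (c1, c1), (c2, 1.1 * c2)

w1, w2 = decompose(3.0, 7.0)
assert abs(w1[0] + w2[0] - 3.0) < 1e-9   # the components sum back to a
assert abs(w1[1] + w2[1] - 7.0) < 1e-9   # and to b
```

Note that the first summand lies on ${\displaystyle y=x}$ and the second on ${\displaystyle y=1.1x}$ by construction, which is what makes the decomposition land in the two subspaces.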
This exercise is recommended for all readers.
Problem 2

Show that ${\displaystyle \mathbb {R} ^{3}}$  is the direct sum of the ${\displaystyle xy}$ -plane with each of these.

1. the ${\displaystyle z}$ -axis
2. the line
${\displaystyle \{{\begin{pmatrix}z\\z\\z\end{pmatrix}}\,{\big |}\,z\in \mathbb {R} \}}$

With each of these we can use Lemma 4.15.

1. Any vector in ${\displaystyle \mathbb {R} ^{3}}$  can be decomposed as this sum.
${\displaystyle {\begin{pmatrix}x\\y\\z\end{pmatrix}}={\begin{pmatrix}x\\y\\0\end{pmatrix}}+{\begin{pmatrix}0\\0\\z\end{pmatrix}}}$
And, the intersection of the ${\displaystyle xy}$ -plane and the ${\displaystyle z}$ -axis is the trivial subspace.
2. Any vector in ${\displaystyle \mathbb {R} ^{3}}$  can be decomposed as
${\displaystyle {\begin{pmatrix}x\\y\\z\end{pmatrix}}={\begin{pmatrix}x-z\\y-z\\0\end{pmatrix}}+{\begin{pmatrix}z\\z\\z\end{pmatrix}}}$
and the intersection of the two spaces is trivial.
Problem 3

Is ${\displaystyle {\mathcal {P}}_{2}}$  the direct sum of ${\displaystyle \{a+bx^{2}\,{\big |}\,a,b\in \mathbb {R} \}}$  and ${\displaystyle \{cx\,{\big |}\,c\in \mathbb {R} \}}$ ?

It is. Showing that these two are subspaces is routine. To see that the space is the direct sum of these two, just note that each member of ${\displaystyle {\mathcal {P}}_{2}}$  has the unique decomposition ${\displaystyle m+nx+px^{2}=(m+px^{2})+(nx)}$ .

This exercise is recommended for all readers.
Problem 4

In ${\displaystyle {\mathcal {P}}_{n}}$ , the even polynomials are the members of this set

${\displaystyle {\mathcal {E}}=\{p\in {\mathcal {P}}_{n}\,{\big |}\,p(-x)=p(x){\text{ for all }}x\}}$

and the odd polynomials are the members of this set.

${\displaystyle {\mathcal {O}}=\{p\in {\mathcal {P}}_{n}\,{\big |}\,p(-x)=-p(x){\text{ for all }}x\}}$

Show that these are complementary subspaces.

To show that they are subspaces is routine. We will argue they are complements with Lemma 4.15. The intersection ${\displaystyle {\mathcal {E}}\cap {\mathcal {O}}}$  is trivial because the only polynomial satisfying both conditions ${\displaystyle p(-x)=p(x)}$  and ${\displaystyle p(-x)=-p(x)}$  is the zero polynomial. To see that the entire space is the sum of the subspaces ${\displaystyle {\mathcal {E}}+{\mathcal {O}}={\mathcal {P}}_{n}}$ , note that the polynomials ${\displaystyle p_{0}(x)=1}$ , ${\displaystyle p_{2}(x)=x^{2}}$ , ${\displaystyle p_{4}(x)=x^{4}}$ , etc., are in ${\displaystyle {\mathcal {E}}}$  and also note that the polynomials ${\displaystyle p_{1}(x)=x}$ , ${\displaystyle p_{3}(x)=x^{3}}$ , etc., are in ${\displaystyle {\mathcal {O}}}$ . Hence any member of ${\displaystyle {\mathcal {P}}_{n}}$  is a combination of members of ${\displaystyle {\mathcal {E}}}$  and ${\displaystyle {\mathcal {O}}}$ .
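The decomposition behind this argument is the usual split ${\displaystyle p=p_{e}+p_{o}}$ with ${\displaystyle p_{e}(x)=(p(x)+p(-x))/2}$ ; on coefficient lists it simply separates the even-degree terms from the odd-degree ones. A minimal sketch (the function name is ours):

```python
# Split p, given as a coefficient list [a0, a1, ..., an], into its even part
# (a member of E) and its odd part (a member of O).
def even_odd_parts(coeffs):
    even = [c if i % 2 == 0 else 0 for i, c in enumerate(coeffs)]
    odd = [c if i % 2 == 1 else 0 for i, c in enumerate(coeffs)]
    return even, odd

even, odd = even_odd_parts([1, 2, 3, 4])   # 1 + 2x + 3x^2 + 4x^3
assert even == [1, 0, 3, 0]                # 1 + 3x^2, an even polynomial
assert odd == [0, 2, 0, 4]                 # 2x + 4x^3, an odd polynomial
assert [a + b for a, b in zip(even, odd)] == [1, 2, 3, 4]   # the sum recovers p
```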

Problem 5

Which of these subspaces of ${\displaystyle \mathbb {R} ^{3}}$

${\displaystyle W_{1}}$ : the ${\displaystyle x}$ -axis,      ${\displaystyle W_{2}}$ : the ${\displaystyle y}$ -axis,      ${\displaystyle W_{3}}$ : the ${\displaystyle z}$ -axis,
${\displaystyle W_{4}}$ : the plane ${\displaystyle x+y+z=0}$ ,      ${\displaystyle W_{5}}$ : the ${\displaystyle yz}$ -plane

can be combined to

1. sum to ${\displaystyle \mathbb {R} ^{3}}$ ?
2. direct sum to ${\displaystyle \mathbb {R} ^{3}}$ ?

Each of these is ${\displaystyle \mathbb {R} ^{3}}$ .

1. These are broken into lines for legibility.

${\displaystyle W_{1}+W_{2}+W_{3}}$ , ${\displaystyle W_{1}+W_{2}+W_{3}+W_{4}}$ , ${\displaystyle W_{1}+W_{2}+W_{3}+W_{5}}$ , ${\displaystyle W_{1}+W_{2}+W_{3}+W_{4}+W_{5}}$ ,
${\displaystyle W_{1}+W_{2}+W_{4}}$ , ${\displaystyle W_{1}+W_{2}+W_{4}+W_{5}}$ , ${\displaystyle W_{1}+W_{2}+W_{5}}$ ,
${\displaystyle W_{1}+W_{3}+W_{4}}$ , ${\displaystyle W_{1}+W_{3}+W_{5}}$ , ${\displaystyle W_{1}+W_{3}+W_{4}+W_{5}}$ ,
${\displaystyle W_{1}+W_{4}}$ , ${\displaystyle W_{1}+W_{4}+W_{5}}$ ,
${\displaystyle W_{1}+W_{5}}$ ,
${\displaystyle W_{2}+W_{3}+W_{4}}$ , ${\displaystyle W_{2}+W_{3}+W_{4}+W_{5}}$ ,
${\displaystyle W_{2}+W_{4}}$ , ${\displaystyle W_{2}+W_{4}+W_{5}}$ ,
${\displaystyle W_{3}+W_{4}}$ , ${\displaystyle W_{3}+W_{4}+W_{5}}$ ,
${\displaystyle W_{4}+W_{5}}$

2. ${\displaystyle W_{1}\oplus W_{2}\oplus W_{3}}$ , ${\displaystyle W_{1}\oplus W_{4}}$ , ${\displaystyle W_{1}\oplus W_{5}}$ , ${\displaystyle W_{2}\oplus W_{4}}$ , ${\displaystyle W_{3}\oplus W_{4}}$
This exercise is recommended for all readers.
Problem 6

Show that ${\displaystyle {\mathcal {P}}_{n}=\{a_{0}\,{\big |}\,a_{0}\in \mathbb {R} \}\oplus \dots \oplus \{a_{n}x^{n}\,{\big |}\,a_{n}\in \mathbb {R} \}}$ .

Clearly each is a subspace. The bases ${\displaystyle B_{i}=\langle x^{i}\rangle }$  for the subspaces, when concatenated, form a basis for the whole space.

Problem 7

What is ${\displaystyle W_{1}+W_{2}}$  if ${\displaystyle W_{1}\subseteq W_{2}}$ ?

It is ${\displaystyle W_{2}}$ .

Problem 8

Does Example 4.5 generalize? That is, is this true or false: if a vector space ${\displaystyle V}$  has a basis ${\displaystyle \langle {\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{n}\rangle }$  then it is the direct sum of the spans of the one-dimensional subspaces ${\displaystyle V=[\{{\vec {\beta }}_{1}\}]\oplus \dots \oplus [\{{\vec {\beta }}_{n}\}]}$ ?

True by Lemma 4.8.

Problem 9

Can ${\displaystyle \mathbb {R} ^{4}}$  be decomposed as a direct sum in two different ways? Can ${\displaystyle \mathbb {R} ^{1}}$ ?

Two distinct direct sum decompositions of ${\displaystyle \mathbb {R} ^{4}}$  are easy to find. Two such are ${\displaystyle W_{1}=[\{{\vec {e}}_{1},{\vec {e}}_{2}\}]}$  and ${\displaystyle W_{2}=[\{{\vec {e}}_{3},{\vec {e}}_{4}\}]}$ , and also ${\displaystyle U_{1}=[\{{\vec {e}}_{1}\}]}$  and ${\displaystyle U_{2}=[\{{\vec {e}}_{2},{\vec {e}}_{3},{\vec {e}}_{4}\}]}$ . (Many more are possible, for example ${\displaystyle \mathbb {R} ^{4}}$  and its trivial subspace.)

In contrast, any partition of ${\displaystyle \mathbb {R} ^{1}}$ 's single-vector basis will give one basis with no elements and another with a single element. Thus any decomposition involves ${\displaystyle \mathbb {R} ^{1}}$  and its trivial subspace.

Problem 10

This exercise makes the notation of writing "${\displaystyle +}$ " between sets more natural. Prove that, where ${\displaystyle W_{1},\dots ,W_{k}}$  are subspaces of a vector space,

${\displaystyle W_{1}+\dots +W_{k}=\{{\vec {w}}_{1}+{\vec {w}}_{2}+\dots +{\vec {w}}_{k}\,{\big |}\,{\vec {w}}_{1}\in W_{1},\dots ,{\vec {w}}_{k}\in W_{k}\},}$

and so the sum of subspaces is the subspace of all sums.

Set inclusion one way is easy: ${\displaystyle \{{\vec {w}}_{1}+\dots +{\vec {w}}_{k}\,{\big |}\,{\vec {w}}_{i}\in W_{i}\}}$  is a subset of ${\displaystyle [W_{1}\cup \dots \cup W_{k}]}$  because each ${\displaystyle {\vec {w}}_{1}+\dots +{\vec {w}}_{k}}$  is a sum of vectors from the union.

For the other inclusion, to any linear combination of vectors from the union apply commutativity of vector addition to put vectors from ${\displaystyle W_{1}}$  first, followed by vectors from ${\displaystyle W_{2}}$ , etc. Add the vectors from ${\displaystyle W_{1}}$  to get a ${\displaystyle {\vec {w}}_{1}\in W_{1}}$ , add the vectors from ${\displaystyle W_{2}}$  to get a ${\displaystyle {\vec {w}}_{2}\in W_{2}}$ , etc. The result has the desired form.

Problem 11

(Refer to Example 4.19. This exercise shows that the requirement that pairwise intersections be trivial is genuinely stronger than the requirement only that the intersection of all of the subspaces be trivial.) Give a vector space and three subspaces ${\displaystyle W_{1}}$ , ${\displaystyle W_{2}}$ , and ${\displaystyle W_{3}}$  such that the space is the sum of the subspaces, the intersection of all three subspaces ${\displaystyle W_{1}\cap W_{2}\cap W_{3}}$  is trivial, but the pairwise intersections ${\displaystyle W_{1}\cap W_{2}}$ , ${\displaystyle W_{1}\cap W_{3}}$ , and ${\displaystyle W_{2}\cap W_{3}}$  are nontrivial.

One example is to take the space to be ${\displaystyle \mathbb {R} ^{3}}$ , and to take the subspaces to be the ${\displaystyle xy}$ -plane, the ${\displaystyle xz}$ -plane, and the ${\displaystyle yz}$ -plane.

This exercise is recommended for all readers.
Problem 12

Prove that if ${\displaystyle V=W_{1}\oplus \dots \oplus W_{k}}$  then ${\displaystyle W_{i}\cap W_{j}}$  is trivial whenever ${\displaystyle i\neq j}$ . This shows that the first half of the proof of Lemma 4.15 extends to the case of more than two subspaces. (Example 4.19 shows that this implication does not reverse; the other half does not extend.)

Of course, the zero vector is in all of the subspaces, so the intersection contains at least that one vector. By the definition of direct sum the set ${\displaystyle \{W_{1},\dots ,W_{k}\}}$  is independent and so no nonzero vector of ${\displaystyle W_{i}}$  is a multiple of a member of ${\displaystyle W_{j}}$ , when ${\displaystyle i\neq j}$ . In particular, no nonzero vector from ${\displaystyle W_{i}}$  equals a member of ${\displaystyle W_{j}}$ .

Problem 13

Recall that no linearly independent set contains the zero vector. Can an independent set of subspaces contain the trivial subspace?

It can contain a trivial subspace; this set of subspaces of ${\displaystyle \mathbb {R} ^{3}}$  is independent: ${\displaystyle \{\{{\vec {0}}\},x{\text{-axis}}\}}$ . No nonzero vector from the trivial space ${\displaystyle \{{\vec {0}}\}}$  is a multiple of a vector from the ${\displaystyle x}$ -axis, simply because the trivial space has no nonzero vectors to be candidates for such a multiple (and also no nonzero vector from the ${\displaystyle x}$ -axis is a multiple of the zero vector from the trivial subspace).

This exercise is recommended for all readers.
Problem 14

Does every subspace have a complement?

Yes. For any subspace of a vector space we can take any basis ${\displaystyle \langle {\vec {\omega }}_{1},\dots ,{\vec {\omega }}_{k}\rangle }$  for that subspace and extend it to a basis ${\displaystyle \langle {\vec {\omega }}_{1},\dots ,{\vec {\omega }}_{k},{\vec {\beta }}_{k+1},\dots ,{\vec {\beta }}_{n}\rangle }$  for the whole space. Then the complement of the original subspace has this for a basis: ${\displaystyle \langle {\vec {\beta }}_{k+1},\dots ,{\vec {\beta }}_{n}\rangle }$ .

This exercise is recommended for all readers.
Problem 15

Let ${\displaystyle W_{1},W_{2}}$  be subspaces of a vector space.

1. Assume that the set ${\displaystyle S_{1}}$  spans ${\displaystyle W_{1}}$ , and that the set ${\displaystyle S_{2}}$  spans ${\displaystyle W_{2}}$ . Can ${\displaystyle S_{1}\cup S_{2}}$  span ${\displaystyle W_{1}+W_{2}}$ ? Must it?
2. Assume that ${\displaystyle S_{1}}$  is a linearly independent subset of ${\displaystyle W_{1}}$  and that ${\displaystyle S_{2}}$  is a linearly independent subset of ${\displaystyle W_{2}}$ . Can ${\displaystyle S_{1}\cup S_{2}}$  be a linearly independent subset of ${\displaystyle W_{1}+W_{2}}$ ? Must it?
1. It must. Any member of ${\displaystyle W_{1}+W_{2}}$  can be written ${\displaystyle {\vec {w}}_{1}+{\vec {w}}_{2}}$  where ${\displaystyle {\vec {w}}_{1}\in W_{1}}$  and ${\displaystyle {\vec {w}}_{2}\in W_{2}}$ . As ${\displaystyle S_{1}}$  spans ${\displaystyle W_{1}}$ , the vector ${\displaystyle {\vec {w}}_{1}}$  is a combination of members of ${\displaystyle S_{1}}$ . Similarly ${\displaystyle {\vec {w}}_{2}}$  is a combination of members of ${\displaystyle S_{2}}$ .
2. An easy way to see that it can be linearly independent is to take each to be the empty set. On the other hand, in the space ${\displaystyle \mathbb {R} ^{1}}$ , if ${\displaystyle W_{1}=\mathbb {R} ^{1}}$  and ${\displaystyle W_{2}=\mathbb {R} ^{1}}$  and ${\displaystyle S_{1}=\{1\}}$  and ${\displaystyle S_{2}=\{2\}}$ , then their union ${\displaystyle S_{1}\cup S_{2}}$  is not independent.
Problem 16

When a vector space is decomposed as a direct sum, the dimensions of the subspaces add to the dimension of the space. The situation with a space that is given as the sum of its subspaces is not as simple. This exercise considers the two-subspace special case.

1. For these subspaces of ${\displaystyle {\mathcal {M}}_{2\!\times \!2}}$  find ${\displaystyle W_{1}\cap W_{2}}$ , ${\displaystyle \dim(W_{1}\cap W_{2})}$ , ${\displaystyle W_{1}+W_{2}}$ , and ${\displaystyle \dim(W_{1}+W_{2})}$ .
${\displaystyle W_{1}=\{{\begin{pmatrix}0&0\\c&d\end{pmatrix}}\,{\big |}\,c,d\in \mathbb {R} \}\qquad W_{2}=\{{\begin{pmatrix}0&b\\c&0\end{pmatrix}}\,{\big |}\,b,c\in \mathbb {R} \}}$
2. Suppose that ${\displaystyle U}$  and ${\displaystyle W}$  are subspaces of a vector space. Suppose that the sequence ${\displaystyle \langle {\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{k}\rangle }$  is a basis for ${\displaystyle U\cap W}$ . Finally, suppose that the prior sequence has been expanded to give a sequence ${\displaystyle \langle {\vec {\mu }}_{1},\dots ,{\vec {\mu }}_{j},{\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{k}\rangle }$  that is a basis for ${\displaystyle U}$ , and a sequence ${\displaystyle \langle {\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{k},{\vec {\omega }}_{1},\dots ,{\vec {\omega }}_{p}\rangle }$  that is a basis for ${\displaystyle W}$ . Prove that this sequence
${\displaystyle \langle {\vec {\mu }}_{1},\dots ,{\vec {\mu }}_{j},{\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{k},{\vec {\omega }}_{1},\dots ,{\vec {\omega }}_{p}\rangle }$
is a basis for the sum ${\displaystyle U+W}$ .
3. Conclude that ${\displaystyle \dim(U+W)=\dim(U)+\dim(W)-\dim(U\cap W)}$ .
4. Let ${\displaystyle W_{1}}$  and ${\displaystyle W_{2}}$  be eight-dimensional subspaces of a ten-dimensional space. List all values possible for ${\displaystyle \dim(W_{1}\cap W_{2})}$ .
1. The intersection and sum are
${\displaystyle \{{\begin{pmatrix}0&0\\c&0\end{pmatrix}}\,{\big |}\,c\in \mathbb {R} \}\qquad \{{\begin{pmatrix}0&b\\c&d\end{pmatrix}}\,{\big |}\,b,c,d\in \mathbb {R} \}}$
which have dimensions one and three.
2. We write ${\displaystyle B_{U\cap W}}$  for the basis for ${\displaystyle U\cap W}$ , we write ${\displaystyle B_{U}}$  for the basis for ${\displaystyle U}$ , we write ${\displaystyle B_{W}}$  for the basis for ${\displaystyle W}$ , and we write ${\displaystyle B_{U+W}}$  for the basis under consideration. To see that ${\displaystyle B_{U+W}}$  spans ${\displaystyle U+W}$ , observe that any vector ${\displaystyle c{\vec {u}}+d{\vec {w}}}$  from ${\displaystyle U+W}$  can be written as a linear combination of the vectors in ${\displaystyle B_{U+W}}$ , simply by expressing ${\displaystyle {\vec {u}}}$  in terms of ${\displaystyle B_{U}}$  and expressing ${\displaystyle {\vec {w}}}$  in terms of ${\displaystyle B_{W}}$ . We finish by showing that ${\displaystyle B_{U+W}}$  is linearly independent. Any linear relationship
${\displaystyle c_{1}{\vec {\mu }}_{1}+\dots +c_{j+1}{\vec {\beta }}_{1}+\dots +c_{j+k+p}{\vec {\omega }}_{p}={\vec {0}}}$
can be rewritten in this way.
${\displaystyle c_{1}{\vec {\mu }}_{1}+\dots +c_{j}{\vec {\mu }}_{j}=-c_{j+1}{\vec {\beta }}_{1}-\dots -c_{j+k+p}{\vec {\omega }}_{p}}$
Note that the left side sums to a vector in ${\displaystyle U}$  while the right side sums to a vector in ${\displaystyle W}$ , and thus both sides sum to a member of ${\displaystyle U\cap W}$ . Since the left side is a member of ${\displaystyle U\cap W}$ , it is expressible in terms of the members of ${\displaystyle B_{U\cap W}}$ , which gives the combination of ${\displaystyle {\vec {\mu }}}$ 's from the left side above as equal to a combination of ${\displaystyle {\vec {\beta }}}$ 's. But the fact that the basis ${\displaystyle B_{U}}$  is linearly independent shows that any such combination is trivial, and in particular, the coefficients ${\displaystyle c_{1}}$ , ..., ${\displaystyle c_{j}}$  from the left side above are all zero. Similarly, the coefficients of the ${\displaystyle {\vec {\omega }}}$ 's are all zero. This leaves the above equation as a linear relationship among the ${\displaystyle {\vec {\beta }}}$ 's, but ${\displaystyle B_{U\cap W}}$  is linearly independent, and therefore all of the coefficients of the ${\displaystyle {\vec {\beta }}}$ 's are also zero.
3. Just count the basis vectors in the prior item: ${\displaystyle \dim(U+W)=j+k+p}$ , and ${\displaystyle \dim(U)=j+k}$ , and ${\displaystyle \dim(W)=k+p}$ , and ${\displaystyle \dim(U\cap W)=k}$ .
4. We know that ${\displaystyle \dim(W_{1}+W_{2})=\dim(W_{1})+\dim(W_{2})-\dim(W_{1}\cap W_{2})}$ . Because ${\displaystyle W_{1}\subseteq W_{1}+W_{2}}$ , we know that ${\displaystyle W_{1}+W_{2}}$  must have dimension at least that of ${\displaystyle W_{1}}$ , that is, must have dimension eight, nine, or ten. Substituting gives us three possibilities ${\displaystyle 8=8+8-\dim(W_{1}\cap W_{2})}$  or ${\displaystyle 9=8+8-\dim(W_{1}\cap W_{2})}$  or ${\displaystyle 10=8+8-\dim(W_{1}\cap W_{2})}$ . Thus ${\displaystyle \dim(W_{1}\cap W_{2})}$  must be either eight, seven, or six. (Giving examples to show that each of these three cases is possible is easy, for instance in ${\displaystyle \mathbb {R} ^{10}}$ .)
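A numerical sketch of parts 1 and 3 together, assuming nothing beyond flattening each ${\displaystyle 2\!\times \!2}$ matrix to a row vector in ${\displaystyle \mathbb {R} ^{4}}$ and computing dimensions as matrix ranks:

```python
import numpy as np

# Bases for W1 and W2 from part 1, each 2x2 matrix flattened row-major
# to (entry 1,1; entry 1,2; entry 2,1; entry 2,2).
B1 = np.array([[0, 0, 1, 0],    # the c entry of W1
               [0, 0, 0, 1]])   # the d entry of W1
B2 = np.array([[0, 1, 0, 0],    # the b entry of W2
               [0, 0, 1, 0]])   # the c entry of W2

dim_W1 = np.linalg.matrix_rank(B1)
dim_W2 = np.linalg.matrix_rank(B2)
dim_sum = np.linalg.matrix_rank(np.vstack([B1, B2]))   # dim(W1 + W2)
assert (dim_W1, dim_W2, dim_sum) == (2, 2, 3)
# part 3's formula then gives dim(W1 ∩ W2) = 2 + 2 - 3 = 1, matching part 1
assert dim_W1 + dim_W2 - dim_sum == 1
```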
Problem 17

Let ${\displaystyle V=W_{1}\oplus \dots \oplus W_{k}}$  and for each index ${\displaystyle i}$  suppose that ${\displaystyle S_{i}}$  is a linearly independent subset of ${\displaystyle W_{i}}$ . Prove that the union of the ${\displaystyle S_{i}}$ 's is linearly independent.

Expand each ${\displaystyle S_{i}}$  to a basis ${\displaystyle B_{i}}$  for ${\displaystyle W_{i}}$ . The concatenation of those bases ${\displaystyle B_{1}\!{\mathbin {{}^{\frown }}}\!\cdots \!{\mathbin {{}^{\frown }}}\!B_{k}}$  is a basis for ${\displaystyle V}$  and thus its members form a linearly independent set. But the union ${\displaystyle S_{1}\cup \cdots \cup S_{k}}$  is a subset of that linearly independent set, and thus is itself linearly independent.

Problem 18

A matrix is symmetric if for each pair of indices ${\displaystyle i}$  and ${\displaystyle j}$ , the ${\displaystyle i,j}$  entry equals the ${\displaystyle j,i}$  entry. A matrix is antisymmetric if each ${\displaystyle i,j}$  entry is the negative of the ${\displaystyle j,i}$  entry.

1. Give a symmetric ${\displaystyle 2\!\times \!2}$  matrix and an antisymmetric ${\displaystyle 2\!\times \!2}$  matrix. (Remark. For the second one, be careful about the entries on the diagonal.)
2. What is the relationship between a square symmetric matrix and its transpose? Between a square antisymmetric matrix and its transpose?
3. Show that ${\displaystyle {\mathcal {M}}_{n\!\times \!n}}$  is the direct sum of the space of symmetric matrices and the space of antisymmetric matrices.
1. Two such are these.
${\displaystyle {\begin{pmatrix}1&2\\2&3\end{pmatrix}}\qquad {\begin{pmatrix}0&1\\-1&0\end{pmatrix}}}$
For the antisymmetric one, entries on the diagonal must be zero.
2. A square symmetric matrix equals its transpose. A square antisymmetric matrix equals the negative of its transpose.
3. Showing that the two sets are subspaces is easy. Suppose that ${\displaystyle A\in {\mathcal {M}}_{n\!\times \!n}}$ . To express ${\displaystyle A}$  as a sum of a symmetric and an antisymmetric matrix, we observe that
${\displaystyle A=(1/2)(A+{{A}^{\rm {trans}}})+(1/2)(A-{{A}^{\rm {trans}}})}$
and note the first summand is symmetric while the second is antisymmetric. Thus ${\displaystyle {\mathcal {M}}_{n\!\times \!n}}$  is the sum of the two subspaces. To show that the sum is direct, assume a matrix ${\displaystyle A}$  is both symmetric ${\displaystyle A={{A}^{\rm {trans}}}}$  and antisymmetric ${\displaystyle A=-{{A}^{\rm {trans}}}}$ . Then ${\displaystyle A=-A}$  and so all of ${\displaystyle A}$ 's entries are zeroes.
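The decomposition in part 3 is easy to check numerically. A minimal sketch with a concrete matrix (the particular entries are arbitrary):

```python
import numpy as np

# Decompose a square matrix into symmetric + antisymmetric summands.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])

S = (A + A.T) / 2                  # the symmetric summand
K = (A - A.T) / 2                  # the antisymmetric summand
assert np.allclose(S, S.T)         # S equals its transpose
assert np.allclose(K, -K.T)        # K equals the negative of its transpose
assert np.allclose(S + K, A)       # the sum recovers A
assert np.allclose(np.diag(K), 0)  # as in part 1, K's diagonal is zero
```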
Problem 19

Let ${\displaystyle W_{1},W_{2},W_{3}}$  be subspaces of a vector space. Prove that ${\displaystyle (W_{1}\cap W_{2})+(W_{1}\cap W_{3})\subseteq W_{1}\cap (W_{2}+W_{3})}$ . Does the inclusion reverse?

Assume that ${\displaystyle {\vec {v}}\in (W_{1}\cap W_{2})+(W_{1}\cap W_{3})}$ . Then ${\displaystyle {\vec {v}}={\vec {w}}_{2}+{\vec {w}}_{3}}$  where ${\displaystyle {\vec {w}}_{2}\in W_{1}\cap W_{2}}$  and ${\displaystyle {\vec {w}}_{3}\in W_{1}\cap W_{3}}$ . Note that ${\displaystyle {\vec {w}}_{2},{\vec {w}}_{3}\in W_{1}}$  and, as a subspace is closed under addition, ${\displaystyle {\vec {w}}_{2}+{\vec {w}}_{3}\in W_{1}}$ . Thus ${\displaystyle {\vec {v}}={\vec {w}}_{2}+{\vec {w}}_{3}\in W_{1}\cap (W_{2}+W_{3})}$ .

This example proves that the inclusion may be strict: in ${\displaystyle \mathbb {R} ^{2}}$  take ${\displaystyle W_{1}}$  to be the ${\displaystyle x}$ -axis, take ${\displaystyle W_{2}}$  to be the ${\displaystyle y}$ -axis, and take ${\displaystyle W_{3}}$  to be the line ${\displaystyle y=x}$ . Then ${\displaystyle W_{1}\cap W_{2}}$  and ${\displaystyle W_{1}\cap W_{3}}$  are trivial and so their sum is trivial. But ${\displaystyle W_{2}+W_{3}}$  is all of ${\displaystyle \mathbb {R} ^{2}}$  so ${\displaystyle W_{1}\cap (W_{2}+W_{3})}$  is the ${\displaystyle x}$ -axis.
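The strictness example can be checked by computing dimensions as ranks, using the two-subspace formula ${\displaystyle \dim(U\cap W)=\dim(U)+\dim(W)-\dim(U+W)}$ from the earlier exercise. A sketch (helper names are ours):

```python
import numpy as np

def dim_span(*vectors):
    # dimension of the span of the given vectors
    return int(np.linalg.matrix_rank(np.array(vectors)))

w1 = (1.0, 0.0)   # spans W1, the x-axis
w2 = (0.0, 1.0)   # spans W2, the y-axis
w3 = (1.0, 1.0)   # spans W3, the line y = x

def dim_meet(u, v):
    # dim of the intersection of two spanned lines, via the dimension formula
    return dim_span(u) + dim_span(v) - dim_span(u, v)

assert dim_meet(w1, w2) == 0   # W1 ∩ W2 is trivial
assert dim_meet(w1, w3) == 0   # W1 ∩ W3 is trivial, so the left side is trivial
# but W2 + W3 is all of R^2, so W1 ∩ (W2 + W3) = W1 has dimension one
assert dim_span(w2, w3) == 2
assert dim_span(w1) + dim_span(w2, w3) - dim_span(w1, w2, w3) == 1
```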

Problem 20

The example of the ${\displaystyle x}$ -axis and the ${\displaystyle y}$ -axis in ${\displaystyle \mathbb {R} ^{2}}$  shows that ${\displaystyle W_{1}\oplus W_{2}=V}$  does not imply that ${\displaystyle W_{1}\cup W_{2}=V}$ . Can ${\displaystyle W_{1}\oplus W_{2}=V}$  and ${\displaystyle W_{1}\cup W_{2}=V}$  happen?

It happens when at least one of ${\displaystyle W_{1},W_{2}}$  is trivial. But that is the only way it can happen.

To prove this, assume that both are non-trivial, select nonzero vectors ${\displaystyle {\vec {w}}_{1},{\vec {w}}_{2}}$  from each, and consider ${\displaystyle {\vec {w}}_{1}+{\vec {w}}_{2}}$ . This sum is not in ${\displaystyle W_{1}}$  because ${\displaystyle {\vec {w}}_{1}+{\vec {w}}_{2}={\vec {v}}\in W_{1}}$  would imply that ${\displaystyle {\vec {w}}_{2}={\vec {v}}-{\vec {w}}_{1}}$  is in ${\displaystyle W_{1}}$ , which violates the assumption of the independence of the subspaces. Similarly, ${\displaystyle {\vec {w}}_{1}+{\vec {w}}_{2}}$  is not in ${\displaystyle W_{2}}$ . Thus there is an element of ${\displaystyle V}$  that is not in ${\displaystyle W_{1}\cup W_{2}}$ .

This exercise is recommended for all readers.
Problem 21

Our model for complementary subspaces, the ${\displaystyle x}$ -axis and the ${\displaystyle y}$ -axis in ${\displaystyle \mathbb {R} ^{2}}$ , has one property not used here. Where ${\displaystyle U}$  is a subspace of ${\displaystyle \mathbb {R} ^{n}}$  we define the orthogonal complement of ${\displaystyle U}$  to be

${\displaystyle U^{\perp }=\{{\vec {v}}\in \mathbb {R} ^{n}\,{\big |}\,{\vec {v}}\cdot {\vec {u}}=0{\text{ for all }}{\vec {u}}\in U\}}$

(read "${\displaystyle U}$  perp").

1. Find the orthocomplement of the ${\displaystyle x}$ -axis in ${\displaystyle \mathbb {R} ^{2}}$ .
2. Find the orthocomplement of the ${\displaystyle x}$ -axis in ${\displaystyle \mathbb {R} ^{3}}$ .
3. Find the orthocomplement of the ${\displaystyle xy}$ -plane in ${\displaystyle \mathbb {R} ^{3}}$ .
4. Show that the orthocomplement of a subspace is a subspace.
5. Show that if ${\displaystyle W}$  is the orthocomplement of ${\displaystyle U}$  then ${\displaystyle U}$  is the orthocomplement of ${\displaystyle W}$ .
6. Prove that a subspace and its orthocomplement have a trivial intersection.
7. Conclude that for any ${\displaystyle n}$  and subspace ${\displaystyle U\subseteq \mathbb {R} ^{n}}$  we have that ${\displaystyle \mathbb {R} ^{n}=U\oplus U^{\perp }}$ .
8. Show that ${\displaystyle \dim(U)+\dim(U^{\perp })}$  equals the dimension of the enclosing space.
1. The set
${\displaystyle \{{\begin{pmatrix}v_{1}\\v_{2}\end{pmatrix}}\,{\big |}\,{\begin{pmatrix}v_{1}\\v_{2}\end{pmatrix}}\cdot {\begin{pmatrix}x\\0\end{pmatrix}}=0{\text{ for all }}x\in \mathbb {R} \}}$
is easily seen to be the ${\displaystyle y}$ -axis.
2. The ${\displaystyle yz}$ -plane.
3. The ${\displaystyle z}$ -axis.
4. Assume that ${\displaystyle U}$  is a subspace of some ${\displaystyle \mathbb {R} ^{n}}$ . The orthocomplement ${\displaystyle U^{\perp }}$  contains the zero vector, since that vector is perpendicular to everything, so we need only show that the orthocomplement is closed under linear combinations of two elements. If ${\displaystyle {\vec {w}}_{1},{\vec {w}}_{2}\in U^{\perp }}$  then ${\displaystyle {\vec {w}}_{1}\cdot {\vec {u}}=0}$  and ${\displaystyle {\vec {w}}_{2}\cdot {\vec {u}}=0}$  for all ${\displaystyle {\vec {u}}\in U}$ . Thus ${\displaystyle (c_{1}{\vec {w}}_{1}+c_{2}{\vec {w}}_{2})\cdot {\vec {u}}=c_{1}({\vec {w}}_{1}\cdot {\vec {u}})+c_{2}({\vec {w}}_{2}\cdot {\vec {u}})=0}$  for all ${\displaystyle {\vec {u}}\in U}$  and so ${\displaystyle U^{\perp }}$  is closed under linear combinations.
5. The only vector orthogonal to itself is the zero vector.
6. This is immediate.
7. To prove that the dimensions add, it suffices by Corollary 4.13 and Lemma 4.15 to show that ${\displaystyle U\cap U^{\perp }}$  is the trivial subspace ${\displaystyle \{{\vec {0}}\}}$ . But this is one of the prior items in this problem.
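Orthocomplements like those in parts 1 through 3 can be computed as null spaces. A sketch, assuming only that the null space of the matrix whose rows span ${\displaystyle U}$ is exactly ${\displaystyle U^{\perp }}$ (the function name is ours):

```python
import numpy as np

def orthocomplement(spanning_rows):
    # Rows spanning the orthocomplement of span(spanning_rows) in R^n,
    # computed via SVD as the null space of the matrix with those rows.
    A = np.atleast_2d(np.asarray(spanning_rows, dtype=float))
    _, singular_values, Vt = np.linalg.svd(A)
    rank = int(np.sum(singular_values > 1e-10))
    return Vt[rank:]            # the remaining right-singular vectors

# part 2: the orthocomplement of the x-axis in R^3 is the yz-plane, and
# the dimensions add as in the last part: 1 + 2 = 3
perp = orthocomplement([[1.0, 0.0, 0.0]])
assert perp.shape == (2, 3)
assert np.allclose(perp @ np.array([1.0, 0.0, 0.0]), 0)
```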
This exercise is recommended for all readers.
Problem 22

Consider Corollary 4.13. Does it work both ways— that is, supposing that ${\displaystyle V=W_{1}+\dots +W_{k}}$ , is ${\displaystyle V=W_{1}\oplus \dots \oplus W_{k}}$  if and only if ${\displaystyle \dim(V)=\dim(W_{1})+\dots +\dim(W_{k})}$ ?

Yes. The left-to-right implication is Corollary 4.13. For the other direction, assume that ${\displaystyle \dim(V)=\dim(W_{1})+\dots +\dim(W_{k})}$ . Let ${\displaystyle B_{1},\dots ,B_{k}}$  be bases for ${\displaystyle W_{1},\dots ,W_{k}}$ . As ${\displaystyle V}$  is the sum of the subspaces, any ${\displaystyle {\vec {v}}\in V}$  can be written ${\displaystyle {\vec {v}}={\vec {w}}_{1}+\cdots +{\vec {w}}_{k}}$  and expressing each ${\displaystyle {\vec {w}}_{i}}$  as a combination of vectors from the associated basis ${\displaystyle B_{i}}$  shows that the concatenation ${\displaystyle B_{1}\!{\mathbin {{}^{\frown }}}\!\cdots \!{\mathbin {{}^{\frown }}}\!B_{k}}$  spans ${\displaystyle V}$ . Now, that concatenation has ${\displaystyle \dim(W_{1})+\dots +\dim(W_{k})}$  members, and so it is a spanning set of size ${\displaystyle \dim(V)}$ . The concatenation is therefore a basis for ${\displaystyle V}$ . Thus ${\displaystyle V}$  is the direct sum.

Problem 23

We know that if ${\displaystyle V=W_{1}\oplus W_{2}}$  then there is a basis for ${\displaystyle V}$  that splits into a basis for ${\displaystyle W_{1}}$  and a basis for ${\displaystyle W_{2}}$ . Can we make the stronger statement that every basis for ${\displaystyle V}$  splits into a basis for ${\displaystyle W_{1}}$  and a basis for ${\displaystyle W_{2}}$ ?

No. The standard basis for ${\displaystyle \mathbb {R} ^{2}}$  does not split into bases for the complementary subspaces the line ${\displaystyle x=y}$  and the line ${\displaystyle x=-y}$ .

Problem 24

We can ask about the algebra of the "${\displaystyle +}$ " operation.

1. Is it commutative; is ${\displaystyle W_{1}+W_{2}=W_{2}+W_{1}}$ ?
2. Is it associative; is ${\displaystyle (W_{1}+W_{2})+W_{3}=W_{1}+(W_{2}+W_{3})}$ ?
3. Let ${\displaystyle W}$  be a subspace of some vector space. Show that ${\displaystyle W+W=W}$ .
4. Must there be an identity element, a subspace ${\displaystyle I}$  such that ${\displaystyle I+W=W+I=W}$  for all subspaces ${\displaystyle W}$ ?
5. Does left-cancelation hold: if ${\displaystyle W_{1}+W_{2}=W_{1}+W_{3}}$  then ${\displaystyle W_{2}=W_{3}}$ ? Right cancelation?
1. Yes, ${\displaystyle W_{1}+W_{2}=W_{2}+W_{1}}$  for all subspaces ${\displaystyle W_{1},W_{2}}$  because each side is the span of ${\displaystyle W_{1}\cup W_{2}=W_{2}\cup W_{1}}$ .
2. This one is similar to the prior one— each side of that equation is the span of ${\displaystyle (W_{1}\cup W_{2})\cup W_{3}=W_{1}\cup (W_{2}\cup W_{3})}$ .
3. Because this is an equality between sets, we can show that it holds by mutual inclusion. Clearly ${\displaystyle W\subseteq W+W}$ . For ${\displaystyle W+W\subseteq W}$  just recall that every subspace is closed under addition so any sum of the form ${\displaystyle {\vec {w}}_{1}+{\vec {w}}_{2}}$  is in ${\displaystyle W}$ .
4. In each vector space, the identity element with respect to subspace addition is the trivial subspace.
5. Neither left nor right cancelation need hold. For an example, in ${\displaystyle \mathbb {R} ^{3}}$  take ${\displaystyle W_{1}}$  to be the ${\displaystyle xy}$ -plane, take ${\displaystyle W_{2}}$  to be the ${\displaystyle x}$ -axis, and take ${\displaystyle W_{3}}$  to be the ${\displaystyle y}$ -axis.
Problem 25

Consider the algebraic properties of the direct sum operation.

1. Does direct sum commute: does ${\displaystyle V=W_{1}\oplus W_{2}}$  imply that ${\displaystyle V=W_{2}\oplus W_{1}}$ ?
2. Prove that direct sum is associative: ${\displaystyle (W_{1}\oplus W_{2})\oplus W_{3}=W_{1}\oplus (W_{2}\oplus W_{3})}$ .
3. Show that ${\displaystyle \mathbb {R} ^{3}}$  is the direct sum of the three axes (the relevance here is that by the previous item, we needn't specify which two of the three axes are combined first).
4. Does the direct sum operation left-cancel: does ${\displaystyle W_{1}\oplus W_{2}=W_{1}\oplus W_{3}}$  imply ${\displaystyle W_{2}=W_{3}}$ ? Does it right-cancel?
5. There is an identity element with respect to this operation. Find it.
6. Do some, or all, subspaces have inverses with respect to this operation: is there a subspace ${\displaystyle W}$  of some vector space such that there is a subspace ${\displaystyle U}$  with the property that ${\displaystyle U\oplus W}$  equals the identity element from the prior item?
1. They are equal because for each, ${\displaystyle V}$  is the direct sum if and only if each ${\displaystyle {\vec {v}}\in V}$  can be written in a unique way as a sum ${\displaystyle {\vec {v}}={\vec {w}}_{1}+{\vec {w}}_{2}}$  and ${\displaystyle {\vec {v}}={\vec {w}}_{2}+{\vec {w}}_{1}}$ .
2. They are equal because for each, ${\displaystyle V}$  is the direct sum if and only if each ${\displaystyle {\vec {v}}\in V}$  can be written in a unique way as a sum of a vector from each ${\displaystyle {\vec {v}}=({\vec {w}}_{1}+{\vec {w}}_{2})+{\vec {w}}_{3}}$  and ${\displaystyle {\vec {v}}={\vec {w}}_{1}+({\vec {w}}_{2}+{\vec {w}}_{3})}$ .
3. Any vector in ${\displaystyle \mathbb {R} ^{3}}$  can be decomposed uniquely into the sum of a vector from each axis.
4. No. For an example, in ${\displaystyle \mathbb {R} ^{2}}$  take ${\displaystyle W_{1}}$  to be the ${\displaystyle x}$ -axis, take ${\displaystyle W_{2}}$  to be the ${\displaystyle y}$ -axis, and take ${\displaystyle W_{3}}$  to be the line ${\displaystyle y=x}$ .