# Linear Algebra/Combining Subspaces

This subsection is optional. It is required only for the last sections of Chapter Three and Chapter Five and for occasional exercises, and can be passed over without loss of continuity.

This chapter opened with the definition of a vector space, and the middle consisted of a first analysis of the idea. This subsection closes the chapter by finishing the analysis, in the sense that "analysis" means "method of determining the ... essential features of something by separating it into parts" (Halsey 1979).

A common way to understand things is to see how they can be built from component parts. For instance, we think of ${\displaystyle \mathbb {R} ^{3}}$ as put together, in some way, from the ${\displaystyle x}$-axis, the ${\displaystyle y}$-axis, and the ${\displaystyle z}$-axis. In this subsection we will make this precise; we will describe how to decompose a vector space into a combination of some of its subspaces. In developing this idea of subspace combination, we will keep the ${\displaystyle \mathbb {R} ^{3}}$ example in mind as a benchmark model.

Subspaces are subsets and sets combine via union. But taking the combination operation for subspaces to be the simple union operation isn't what we want. For one thing, the union of the ${\displaystyle x}$-axis, the ${\displaystyle y}$-axis, and the ${\displaystyle z}$-axis is not all of ${\displaystyle \mathbb {R} ^{3}}$, so the benchmark model would be left out. Besides, union is all wrong for this reason: a union of subspaces need not be a subspace (it need not be closed; for instance, this ${\displaystyle \mathbb {R} ^{3}}$ vector

${\displaystyle {\begin{pmatrix}1\\0\\0\end{pmatrix}}+{\begin{pmatrix}0\\1\\0\end{pmatrix}}+{\begin{pmatrix}0\\0\\1\end{pmatrix}}={\begin{pmatrix}1\\1\\1\end{pmatrix}}}$

is in none of the three axes and hence is not in the union). In addition to the members of the subspaces, we must at least also include all of the linear combinations.

Definition 4.1

Where ${\displaystyle W_{1},\dots ,W_{k}}$ are subspaces of a vector space, their sum is the span of their union ${\displaystyle W_{1}+W_{2}+\dots +W_{k}=[W_{1}\cup W_{2}\cup \dots \cup W_{k}]}$.

(The notation, writing the "${\displaystyle +}$" between sets in addition to using it between vectors, fits with the practice of using this symbol for any natural accumulation operation.)

Example 4.2

The ${\displaystyle \mathbb {R} ^{3}}$ model fits with this operation. Any vector ${\displaystyle {\vec {w}}\in \mathbb {R} ^{3}}$ can be written as a linear combination ${\displaystyle c_{1}{\vec {v}}_{1}+c_{2}{\vec {v}}_{2}+c_{3}{\vec {v}}_{3}}$ where ${\displaystyle {\vec {v}}_{1}}$ is a member of the ${\displaystyle x}$-axis, etc., in this way

${\displaystyle {\begin{pmatrix}w_{1}\\w_{2}\\w_{3}\end{pmatrix}}=1\cdot {\begin{pmatrix}w_{1}\\0\\0\end{pmatrix}}+1\cdot {\begin{pmatrix}0\\w_{2}\\0\end{pmatrix}}+1\cdot {\begin{pmatrix}0\\0\\w_{3}\end{pmatrix}}}$

and so ${\displaystyle \mathbb {R} ^{3}=x{\text{-axis}}+y{\text{-axis}}+z{\text{-axis}}}$.

Example 4.3

A sum of subspaces can be less than the entire space. Inside of ${\displaystyle {\mathcal {P}}_{4}}$, let ${\displaystyle L}$ be the subspace of linear polynomials ${\displaystyle \{a+bx\,{\big |}\,a,b\in \mathbb {R} \}}$ and let ${\displaystyle C}$ be the subspace of purely-cubic polynomials ${\displaystyle \{cx^{3}\,{\big |}\,c\in \mathbb {R} \}}$. Then ${\displaystyle L+C}$ is not all of ${\displaystyle {\mathcal {P}}_{4}}$. Instead, it is the subspace ${\displaystyle L+C=\{a+bx+cx^{3}\,{\big |}\,a,b,c\in \mathbb {R} \}}$.
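This can be checked with a short computation. Representing members of ${\displaystyle {\mathcal {P}}_{4}}$ by their coefficient vectors ${\displaystyle (a_{0},\dots ,a_{4})}$, the dimension of ${\displaystyle L+C}$ is the rank of the concatenated bases. In the sketch below the `rank` helper is my own, doing Gaussian elimination over exact fractions; it confirms that ${\displaystyle L+C}$ is three-dimensional, not five-dimensional.

```python
from fractions import Fraction

def rank(vectors):
    """Rank of a list of vectors, via Gaussian elimination over exact fractions."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

# P_4 as coefficient vectors (a0, a1, a2, a3, a4).
L_basis = [(1, 0, 0, 0, 0), (0, 1, 0, 0, 0)]   # linear polynomials a + bx
C_basis = [(0, 0, 0, 1, 0)]                     # purely-cubic polynomials cx^3

print(rank(L_basis + C_basis))  # 3: dim(L + C) is less than dim(P_4) = 5
```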

Example 4.4

A space can be described as a combination of subspaces in more than one way. Besides the decomposition ${\displaystyle \mathbb {R} ^{3}=x{\text{-axis}}+y{\text{-axis}}+z{\text{-axis}}}$, we can also write ${\displaystyle \mathbb {R} ^{3}=xy{\text{-plane}}+yz{\text{-plane}}}$. To check this, note that any ${\displaystyle {\vec {w}}\in \mathbb {R} ^{3}}$ can be written as a linear combination of a member of the ${\displaystyle xy}$-plane and a member of the ${\displaystyle yz}$-plane; here are two such combinations.

${\displaystyle {\begin{pmatrix}w_{1}\\w_{2}\\w_{3}\end{pmatrix}}=1\cdot {\begin{pmatrix}w_{1}\\w_{2}\\0\end{pmatrix}}+1\cdot {\begin{pmatrix}0\\0\\w_{3}\end{pmatrix}}\qquad {\begin{pmatrix}w_{1}\\w_{2}\\w_{3}\end{pmatrix}}=1\cdot {\begin{pmatrix}w_{1}\\w_{2}/2\\0\end{pmatrix}}+1\cdot {\begin{pmatrix}0\\w_{2}/2\\w_{3}\end{pmatrix}}}$

The above definition gives one way in which a space can be thought of as a combination of some of its parts. However, the prior example shows that there is at least one interesting property of our benchmark model that is not captured by the definition of the sum of subspaces. In the familiar decomposition of ${\displaystyle \mathbb {R} ^{3}}$, we often speak of a vector's "${\displaystyle x}$-part" or "${\displaystyle y}$-part" or "${\displaystyle z}$-part". That is, in this model, each vector has a unique decomposition into parts that come from the parts making up the whole space. But in the decomposition used in Example 4.4, we cannot refer to the "${\displaystyle xy}$-part" of a vector— these three sums

${\displaystyle {\begin{pmatrix}1\\2\\3\end{pmatrix}}={\begin{pmatrix}1\\2\\0\end{pmatrix}}+{\begin{pmatrix}0\\0\\3\end{pmatrix}}={\begin{pmatrix}1\\0\\0\end{pmatrix}}+{\begin{pmatrix}0\\2\\3\end{pmatrix}}={\begin{pmatrix}1\\1\\0\end{pmatrix}}+{\begin{pmatrix}0\\1\\3\end{pmatrix}}}$

all describe the vector as composed of something from the first plane plus something from the second plane, but the "${\displaystyle xy}$-part" is different in each.
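A quick check of the three sums above (plain Python tuples; the `in_xy_plane` and `in_yz_plane` helpers are my own naming) confirms that each pair of parts lies in the right planes and sums to the vector, while the ${\displaystyle xy}$-plane parts all differ:

```python
def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

# Membership in each plane just checks the coordinate that must be zero.
def in_xy_plane(v): return v[2] == 0
def in_yz_plane(v): return v[0] == 0

target = (1, 2, 3)
decompositions = [((1, 2, 0), (0, 0, 3)),
                  ((1, 0, 0), (0, 2, 3)),
                  ((1, 1, 0), (0, 1, 3))]

for p, q in decompositions:
    assert in_xy_plane(p) and in_yz_plane(q) and add(p, q) == target

# The xy-plane parts differ, so there is no well-defined "xy part".
print(len({p for p, q in decompositions}))  # 3 distinct xy-plane parts
```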

That is, when we consider how ${\displaystyle \mathbb {R} ^{3}}$ is put together from the three axes "in some way", we might mean "in such a way that every vector has at least one decomposition", and that leads to the definition above. But if we take it to mean "in such a way that every vector has one and only one decomposition" then we need another condition on combinations. To see what this condition is, recall that vectors are uniquely represented in terms of a basis. We can use this to break a space into a sum of subspaces such that any vector in the space breaks uniquely into a sum of members of those subspaces.

Example 4.5

The benchmark is ${\displaystyle \mathbb {R} ^{3}}$ with its standard basis ${\displaystyle {\mathcal {E}}_{3}=\langle {\vec {e}}_{1},{\vec {e}}_{2},{\vec {e}}_{3}\rangle }$. The subspace with the basis ${\displaystyle B_{1}=\langle {\vec {e}}_{1}\rangle }$ is the ${\displaystyle x}$-axis. The subspace with the basis ${\displaystyle B_{2}=\langle {\vec {e}}_{2}\rangle }$ is the ${\displaystyle y}$-axis. The subspace with the basis ${\displaystyle B_{3}=\langle {\vec {e}}_{3}\rangle }$ is the ${\displaystyle z}$-axis. The fact that any member of ${\displaystyle \mathbb {R} ^{3}}$ is expressible as a sum of vectors from these subspaces

${\displaystyle {\begin{pmatrix}x\\y\\z\end{pmatrix}}={\begin{pmatrix}x\\0\\0\end{pmatrix}}+{\begin{pmatrix}0\\y\\0\end{pmatrix}}+{\begin{pmatrix}0\\0\\z\end{pmatrix}}}$

is a reflection of the fact that ${\displaystyle {\mathcal {E}}_{3}}$ spans the space— this equation

${\displaystyle {\begin{pmatrix}x\\y\\z\end{pmatrix}}=c_{1}{\begin{pmatrix}1\\0\\0\end{pmatrix}}+c_{2}{\begin{pmatrix}0\\1\\0\end{pmatrix}}+c_{3}{\begin{pmatrix}0\\0\\1\end{pmatrix}}}$

has a solution for any ${\displaystyle x,y,z\in \mathbb {R} }$. And, the fact that each such expression is unique reflects the fact that ${\displaystyle {\mathcal {E}}_{3}}$ is linearly independent— any equation like the one above has a unique solution.

Example 4.6

We don't have to take the basis vectors one at a time; the same idea works if we conglomerate them into larger sequences. Consider again the space ${\displaystyle \mathbb {R} ^{3}}$ and the vectors from the standard basis ${\displaystyle {\mathcal {E}}_{3}}$. The subspace with the basis ${\displaystyle B_{1}=\langle {\vec {e}}_{1},{\vec {e}}_{3}\rangle }$ is the ${\displaystyle xz}$-plane. The subspace with the basis ${\displaystyle B_{2}=\langle {\vec {e}}_{2}\rangle }$ is the ${\displaystyle y}$-axis. As in the prior example, the fact that any member of the space is a sum of members of the two subspaces in one and only one way

${\displaystyle {\begin{pmatrix}x\\y\\z\end{pmatrix}}={\begin{pmatrix}x\\0\\z\end{pmatrix}}+{\begin{pmatrix}0\\y\\0\end{pmatrix}}}$

is a reflection of the fact that these vectors form a basis— this system

${\displaystyle {\begin{pmatrix}x\\y\\z\end{pmatrix}}=(c_{1}{\begin{pmatrix}1\\0\\0\end{pmatrix}}+c_{3}{\begin{pmatrix}0\\0\\1\end{pmatrix}})+c_{2}{\begin{pmatrix}0\\1\\0\end{pmatrix}}}$

has one and only one solution for any ${\displaystyle x,y,z\in \mathbb {R} }$.

These examples illustrate a natural way to decompose a space into a sum of subspaces in such a way that each vector decomposes uniquely into a sum of vectors from the parts. The next result says that this way is the only way.

Definition 4.7

The concatenation of the sequences ${\displaystyle B_{1}=\langle {\vec {\beta }}_{1,1},\dots ,{\vec {\beta }}_{1,n_{1}}\rangle }$, ..., ${\displaystyle B_{k}=\langle {\vec {\beta }}_{k,1},\dots ,{\vec {\beta }}_{k,n_{k}}\rangle }$ is their adjoinment.

${\displaystyle B_{1}\!{\mathbin {{}^{\frown }}}\!B_{2}\!{\mathbin {{}^{\frown }}}\!\cdots \!{\mathbin {{}^{\frown }}}\!B_{k}=\langle {\vec {\beta }}_{1,1},\dots ,{\vec {\beta }}_{1,n_{1}},{\vec {\beta }}_{2,1},\dots ,{\vec {\beta }}_{k,n_{k}}\rangle }$

Lemma 4.8

Let ${\displaystyle V}$ be a vector space that is the sum of some of its subspaces ${\displaystyle V=W_{1}+\dots +W_{k}}$. Let ${\displaystyle B_{1}}$, ..., ${\displaystyle B_{k}}$ be any bases for these subspaces. Then the following are equivalent.

1. For every ${\displaystyle {\vec {v}}\in V}$, the expression ${\displaystyle {\vec {v}}={\vec {w}}_{1}+\dots +{\vec {w}}_{k}}$ (with ${\displaystyle {\vec {w}}_{i}\in W_{i}}$) is unique.
2. The concatenation ${\displaystyle B_{1}\!{\mathbin {{}^{\frown }}}\!\cdots \!{\mathbin {{}^{\frown }}}\!B_{k}}$ is a basis for ${\displaystyle V}$.
3. The nonzero members of ${\displaystyle \{{\vec {w}}_{1},\dots ,{\vec {w}}_{k}\}}$ (with ${\displaystyle {\vec {w}}_{i}\in W_{i}}$) form a linearly independent set— among nonzero vectors from different ${\displaystyle W_{i}}$'s, every linear relationship is trivial.
Proof

We will show that ${\displaystyle {\text{(1)}}\implies {\text{(2)}}}$, that ${\displaystyle {\text{(2)}}\implies {\text{(3)}}}$, and finally that ${\displaystyle {\text{(3)}}\implies {\text{(1)}}}$. For these arguments, observe that we can pass from a combination of ${\displaystyle {\vec {w}}}$'s to a combination of ${\displaystyle {\vec {\beta }}}$'s

${\displaystyle (*)\qquad {\begin{array}{rl}d_{1}{\vec {w}}_{1}+\dots +d_{k}{\vec {w}}_{k}&=d_{1}(c_{1,1}{\vec {\beta }}_{1,1}+\dots +c_{1,n_{1}}{\vec {\beta }}_{1,n_{1}})+\dots +d_{k}(c_{k,1}{\vec {\beta }}_{k,1}+\dots +c_{k,n_{k}}{\vec {\beta }}_{k,n_{k}})\\&=d_{1}c_{1,1}\cdot {\vec {\beta }}_{1,1}+\dots +d_{k}c_{k,n_{k}}\cdot {\vec {\beta }}_{k,n_{k}}\end{array}}}$

and vice versa.

For ${\displaystyle {\text{(1)}}\implies {\text{(2)}}}$, assume that all decompositions are unique. We will show that ${\displaystyle B_{1}\!{\mathbin {{}^{\frown }}}\!\cdots \!{\mathbin {{}^{\frown }}}\!B_{k}}$ spans the space and is linearly independent. It spans the space because the assumption that ${\displaystyle V=W_{1}+\dots +W_{k}}$ means that every ${\displaystyle {\vec {v}}}$ can be expressed as ${\displaystyle {\vec {v}}={\vec {w}}_{1}+\dots +{\vec {w}}_{k}}$, which translates by equation (${\displaystyle *}$) to an expression of ${\displaystyle {\vec {v}}}$ as a linear combination of the ${\displaystyle {\vec {\beta }}}$'s from the concatenation. For linear independence, consider this linear relationship.

${\displaystyle {\vec {0}}=c_{1,1}{\vec {\beta }}_{1,1}+\dots +c_{k,n_{k}}{\vec {\beta }}_{k,n_{k}}}$

Regroup as in (${\displaystyle *}$) (that is, take ${\displaystyle d_{1}}$, ..., ${\displaystyle d_{k}}$ to be ${\displaystyle 1}$ and move from bottom to top) to get the decomposition ${\displaystyle {\vec {0}}={\vec {w}}_{1}+\dots +{\vec {w}}_{k}}$. Because of the assumption that decompositions are unique, and because the zero vector obviously has the decomposition ${\displaystyle {\vec {0}}={\vec {0}}+\dots +{\vec {0}}}$, we now have that each ${\displaystyle {\vec {w}}_{i}}$ is the zero vector. This means that ${\displaystyle c_{i,1}{\vec {\beta }}_{i,1}+\dots +c_{i,n_{i}}{\vec {\beta }}_{i,n_{i}}={\vec {0}}}$. Thus, since each ${\displaystyle B_{i}}$ is a basis, we have the desired conclusion that all of the ${\displaystyle c}$'s are zero.

For ${\displaystyle {\text{(2)}}\implies {\text{(3)}}}$, assume that ${\displaystyle B_{1}\!{\mathbin {{}^{\frown }}}\!\cdots \!{\mathbin {{}^{\frown }}}\!B_{k}}$ is a basis for the space. Consider a linear relationship among nonzero vectors from different ${\displaystyle W_{i}}$'s,

${\displaystyle {\vec {0}}=\dots +d_{i}{\vec {w}}_{i}+\cdots }$

in order to show that it is trivial. (The relationship is written in this way because we are considering a combination of nonzero vectors from only some of the ${\displaystyle W_{i}}$'s; for instance, there might not be a ${\displaystyle {\vec {w}}_{1}}$ in this combination.) As in (${\displaystyle *}$), ${\displaystyle {\vec {0}}=\dots +d_{i}(c_{i,1}{\vec {\beta }}_{i,1}+\dots +c_{i,n_{i}}{\vec {\beta }}_{i,n_{i}})+\cdots =\dots +d_{i}c_{i,1}\cdot {\vec {\beta }}_{i,1}+\dots +d_{i}c_{i,n_{i}}\cdot {\vec {\beta }}_{i,n_{i}}+\cdots }$ and the linear independence of ${\displaystyle B_{1}\!{\mathbin {{}^{\frown }}}\!\cdots \!{\mathbin {{}^{\frown }}}\!B_{k}}$ gives that each coefficient ${\displaystyle d_{i}c_{i,j}}$ is zero. Now, ${\displaystyle {\vec {w}}_{i}}$ is a nonzero vector, so at least one of the ${\displaystyle c_{i,j}}$'s is not zero, and thus ${\displaystyle d_{i}}$ is zero. This holds for each ${\displaystyle d_{i}}$, and therefore the linear relationship is trivial.

Finally, for ${\displaystyle {\text{(3)}}\implies {\text{(1)}}}$, assume that, among nonzero vectors from different ${\displaystyle W_{i}}$'s, any linear relationship is trivial. Consider two decompositions of a vector ${\displaystyle {\vec {v}}={\vec {w}}_{1}+\dots +{\vec {w}}_{k}}$ and ${\displaystyle {\vec {v}}={\vec {u}}_{1}+\dots +{\vec {u}}_{k}}$ in order to show that the two are the same. We have

${\displaystyle {\vec {0}}=({\vec {w}}_{1}+\dots +{\vec {w}}_{k})-({\vec {u}}_{1}+\dots +{\vec {u}}_{k})=({\vec {w}}_{1}-{\vec {u}}_{1})+\dots +({\vec {w}}_{k}-{\vec {u}}_{k})}$

which violates the assumption unless each ${\displaystyle {\vec {w}}_{i}-{\vec {u}}_{i}}$ is the zero vector. Hence, decompositions are unique.

Definition 4.9

A collection of subspaces ${\displaystyle \{W_{1},\ldots ,W_{k}\}}$ is independent if no nonzero vector from any ${\displaystyle W_{i}}$ is a linear combination of vectors from the other subspaces ${\displaystyle W_{1},\dots ,W_{i-1},W_{i+1},\dots ,W_{k}}$.

Definition 4.10

A vector space ${\displaystyle V}$ is the direct sum (or internal direct sum) of its subspaces ${\displaystyle W_{1},\dots ,W_{k}}$ if ${\displaystyle V=W_{1}+W_{2}+\dots +W_{k}}$ and the collection ${\displaystyle \{W_{1},\dots ,W_{k}\}}$ is independent. We write ${\displaystyle V=W_{1}\oplus W_{2}\oplus \dots \oplus W_{k}}$.

Example 4.11

The benchmark model fits: ${\displaystyle \mathbb {R} ^{3}=x{\text{-axis}}\oplus y{\text{-axis}}\oplus z{\text{-axis}}}$.

Example 4.12

The space of ${\displaystyle 2\!\times \!2}$ matrices is this direct sum.

${\displaystyle \{{\begin{pmatrix}a&0\\0&d\end{pmatrix}}\,{\big |}\,a,d\in \mathbb {R} \}\,\oplus \,\{{\begin{pmatrix}0&b\\0&0\end{pmatrix}}\,{\big |}\,b\in \mathbb {R} \}\,\oplus \,\{{\begin{pmatrix}0&0\\c&0\end{pmatrix}}\,{\big |}\,c\in \mathbb {R} \}}$

It is the direct sum of subspaces in many other ways as well; direct sum decompositions are not unique.
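The decomposition in Example 4.12 is easy to compute, since the three kinds of summand occupy disjoint entries of the matrix. A minimal sketch (matrices as nested lists; `split` and `madd` are my own names):

```python
def split(m):
    """Decompose a 2x2 matrix into diagonal, strictly-upper, strictly-lower parts."""
    a, b = m[0]
    c, d = m[1]
    diag  = [[a, 0], [0, d]]
    upper = [[0, b], [0, 0]]
    lower = [[0, 0], [c, 0]]
    return diag, upper, lower

def madd(*ms):
    """Entrywise sum of 2x2 matrices."""
    return [[sum(m[i][j] for m in ms) for j in range(2)] for i in range(2)]

m = [[1, 2], [3, 4]]
parts = split(m)
# The parts sum back to m; since they occupy disjoint entries, the
# decomposition is unique.
print(madd(*parts) == m)  # True
```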

Corollary 4.13

The dimension of a direct sum is the sum of the dimensions of its summands.

Proof

In Lemma 4.8, the number of basis vectors in the concatenation equals the sum of the number of vectors in the subbases that make up the concatenation.

The special case of two subspaces is worth mentioning separately.

Definition 4.14

When a vector space is the direct sum of two of its subspaces, then they are said to be complements.

Lemma 4.15

A vector space ${\displaystyle V}$ is the direct sum of two of its subspaces ${\displaystyle W_{1}}$ and ${\displaystyle W_{2}}$ if and only if it is the sum of the two ${\displaystyle V=W_{1}+W_{2}}$ and their intersection is trivial ${\displaystyle W_{1}\cap W_{2}=\{{\vec {0}}\,\}}$.

Proof

Suppose first that ${\displaystyle V=W_{1}\oplus W_{2}}$. By definition, ${\displaystyle V}$ is the sum of the two. To show that the two have a trivial intersection, let ${\displaystyle {\vec {v}}}$ be a vector from ${\displaystyle W_{1}\cap W_{2}}$ and consider the equation ${\displaystyle {\vec {v}}={\vec {v}}}$. On the left side of that equation is a member of ${\displaystyle W_{1}}$, and on the right side is a linear combination of members (actually, of only one member) of ${\displaystyle W_{2}}$. But the independence of the spaces then implies that ${\displaystyle {\vec {v}}={\vec {0}}}$, as desired.

For the other direction, suppose that ${\displaystyle V}$ is the sum of two spaces with a trivial intersection. To show that ${\displaystyle V}$ is a direct sum of the two, we need only show that the spaces are independent— no nonzero member of the first is expressible as a linear combination of members of the second, and vice versa. This is true because any relationship ${\displaystyle {\vec {w}}_{1}=c_{1}{\vec {w}}_{2,1}+\dots +c_{k}{\vec {w}}_{2,k}}$ (with ${\displaystyle {\vec {w}}_{1}\in W_{1}}$ and ${\displaystyle {\vec {w}}_{2,j}\in W_{2}}$ for all ${\displaystyle j}$) shows that the vector on the left is also in ${\displaystyle W_{2}}$, since the right side is a combination of members of ${\displaystyle W_{2}}$. The intersection of these two spaces is trivial, so ${\displaystyle {\vec {w}}_{1}={\vec {0}}}$. The same argument, with the roles of the two subspaces exchanged, shows that no nonzero member of ${\displaystyle W_{2}}$ is a combination of members of ${\displaystyle W_{1}}$.

Example 4.16

In the space ${\displaystyle \mathbb {R} ^{2}}$, the ${\displaystyle x}$-axis and the ${\displaystyle y}$-axis are complements, that is, ${\displaystyle \mathbb {R} ^{2}=x{\text{-axis}}\oplus y{\text{-axis}}}$. A space can have more than one pair of complementary subspaces; another pair here are the subspaces consisting of the lines ${\displaystyle y=x}$ and ${\displaystyle y=2x}$.
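Lemma 4.15 turns checking for complements into two rank computations. For the second pair in Example 4.16, the direction vectors of the lines ${\displaystyle y=x}$ and ${\displaystyle y=2x}$ together span ${\displaystyle \mathbb {R} ^{2}}$, and the standard dimension formula ${\displaystyle \dim(W_{1}\cap W_{2})=\dim W_{1}+\dim W_{2}-\dim(W_{1}+W_{2})}$ (see Problem 16) gives a trivial intersection. A sketch, with a hand-rolled exact-arithmetic `rank` helper of my own:

```python
from fractions import Fraction

def rank(vectors):
    """Rank of a list of vectors, via Gaussian elimination over exact fractions."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

W1 = [(1, 1)]   # a basis for the line y = x
W2 = [(1, 2)]   # a basis for the line y = 2x

print(rank(W1 + W2))                        # 2: the sum is all of R^2
print(rank(W1) + rank(W2) - rank(W1 + W2))  # 0: the intersection is trivial
```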

Example 4.17

In the space ${\displaystyle F=\{a\cos \theta +b\sin \theta \,{\big |}\,a,b\in \mathbb {R} \}}$, the subspaces ${\displaystyle W_{1}=\{a\cos \theta \,{\big |}\,a\in \mathbb {R} \}}$ and ${\displaystyle W_{2}=\{b\sin \theta \,{\big |}\,b\in \mathbb {R} \}}$ are complements. In addition to the fact that a space like ${\displaystyle F}$ can have more than one pair of complementary subspaces, inside of the space a single subspace like ${\displaystyle W_{1}}$ can have more than one complement— another complement of ${\displaystyle W_{1}}$ is ${\displaystyle W_{3}=\{b\sin \theta +b\cos \theta \,{\big |}\,b\in \mathbb {R} \}}$.

Example 4.18

In ${\displaystyle \mathbb {R} ^{3}}$, the ${\displaystyle xy}$-plane and the ${\displaystyle yz}$-plane are not complements, which is the point of the discussion following Example 4.4. One complement of the ${\displaystyle xy}$-plane is the ${\displaystyle z}$-axis. A complement of the ${\displaystyle yz}$-plane is the line through the origin and the point ${\displaystyle (1,1,1)}$.

Example 4.19

Following Lemma 4.15, here is a natural question: is the simple sum ${\displaystyle V=W_{1}+\dots +W_{k}}$ also a direct sum if and only if the intersection of the subspaces is trivial? The answer is that if there are more than two subspaces then having a trivial intersection is not enough to guarantee unique decomposition (i.e., is not enough to ensure that the spaces are independent). In ${\displaystyle \mathbb {R} ^{3}}$, let ${\displaystyle W_{1}}$ be the ${\displaystyle x}$-axis, let ${\displaystyle W_{2}}$ be the ${\displaystyle y}$-axis, and let ${\displaystyle W_{3}}$ be this subspace.

${\displaystyle W_{3}=\{{\begin{pmatrix}q\\q\\r\end{pmatrix}}\,{\big |}\,q,r\in \mathbb {R} \}}$

The check that ${\displaystyle \mathbb {R} ^{3}=W_{1}+W_{2}+W_{3}}$ is easy. The intersection ${\displaystyle W_{1}\cap W_{2}\cap W_{3}}$ is trivial, but decompositions aren't unique.

${\displaystyle {\begin{pmatrix}x\\y\\z\end{pmatrix}}={\begin{pmatrix}0\\0\\0\end{pmatrix}}+{\begin{pmatrix}0\\y-x\\0\end{pmatrix}}+{\begin{pmatrix}x\\x\\z\end{pmatrix}}={\begin{pmatrix}x-y\\0\\0\end{pmatrix}}+{\begin{pmatrix}0\\0\\0\end{pmatrix}}+{\begin{pmatrix}y\\y\\z\end{pmatrix}}}$

(This example also shows that the stronger requirement, that all pairwise intersections of the subspaces be trivial, is still not enough: here each pairwise intersection is trivial, and yet the decompositions are not unique. See Problem 11.)
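Example 4.19 can be verified directly: the two displayed formulas give decompositions of any ${\displaystyle (x,y,z)}$, and for most vectors the parts differ. A sketch (the `decompositions` function just transcribes the formulas from the example):

```python
def add3(u, v, w):
    return tuple(a + b + c for a, b, c in zip(u, v, w))

def decompositions(x, y, z):
    """The two decompositions displayed in Example 4.19."""
    first  = ((0, 0, 0), (0, y - x, 0), (x, x, z))
    second = ((x - y, 0, 0), (0, 0, 0), (y, y, z))
    return first, second

target = (1, 2, 3)
first, second = decompositions(*target)
for parts in (first, second):
    assert add3(*parts) == target  # both recover the vector...
assert first != second             # ...but the parts differ
print("decomposition is not unique")
```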

Example 4.20

This direct sum decomposition of a subspace of ${\displaystyle \mathbb {R} ^{3}}$,

${\displaystyle \{{\begin{pmatrix}x\\y\\0\end{pmatrix}}\,\mid \,x,y\in \mathbb {R} \}=\{{\begin{pmatrix}x\\0\\0\end{pmatrix}}\,\mid \,x\in \mathbb {R} \}\,\oplus \,\{{\begin{pmatrix}0\\y\\0\end{pmatrix}}\,\mid \,y\in \mathbb {R} \}}$

shows that a direct sum need not be all of the enclosing space.

Example 4.21

The direct sum ${\displaystyle \{a_{0}+a_{1}x+a_{2}x^{2}\,\mid \,a_{0},a_{1},a_{2}\in \mathbb {R} \}\,\oplus \,\{a_{3}x^{3}+a_{4}x^{4}\,\mid \,a_{3},a_{4}\in \mathbb {R} \}={\mathcal {P}}_{4}}$ gives the space of polynomials of degree at most ${\displaystyle 4}$.

In turn, the direct sum ${\displaystyle {\mathcal {P}}_{4}\,\oplus \,\{a_{5}x^{5}+a_{6}x^{6}+a_{7}x^{7}\,\mid \,a_{5},a_{6},a_{7}\in \mathbb {R} \}={\mathcal {P}}_{7}}$ gives the space of polynomials of degree at most ${\displaystyle 7}$.

This shows that a direct sum of two vector spaces can itself be a summand in a further direct sum, forming a larger vector space; at least for the finite-dimensional polynomial spaces ${\displaystyle {\mathcal {P}}_{n}}$, this can be repeated indefinitely.

Example 4.22

Summands of some direct sums can themselves be written as direct sums:
${\displaystyle \mathbb {R} ^{3}=x{\text{-axis}}\,\oplus \,yz{\text{-plane}}=x{\text{-axis}}\,\oplus \,(y{\text{-axis}}\,\oplus \,z{\text{-axis}})}$,
${\displaystyle {\mathcal {P}}_{2}=\{a_{2}x^{2}\mid a_{2}\in \mathbb {R} \}\oplus {\mathcal {P}}_{1}=\{a_{2}x^{2}\mid a_{2}\in \mathbb {R} \}\oplus (\,\{a_{1}x\mid a_{1}\in \mathbb {R} \}\oplus {\mathcal {P}}_{0}\,)}$.

In this subsection we have seen two ways to regard a space as built up from component parts. Both are useful; in particular, in this book the direct sum definition is needed to do the Jordan Form construction in the fifth chapter.

## Exercises

This exercise is recommended for all readers.
Problem 1

Decide if ${\displaystyle \mathbb {R} ^{2}}$  is the direct sum of each ${\displaystyle W_{1}}$  and ${\displaystyle W_{2}}$ .

1. ${\displaystyle W_{1}=\{{\begin{pmatrix}x\\0\end{pmatrix}}\,{\big |}\,x\in \mathbb {R} \}}$ , ${\displaystyle W_{2}=\{{\begin{pmatrix}x\\x\end{pmatrix}}\,{\big |}\,x\in \mathbb {R} \}}$
2. ${\displaystyle W_{1}=\{{\begin{pmatrix}s\\s\end{pmatrix}}\,{\big |}\,s\in \mathbb {R} \}}$ , ${\displaystyle W_{2}=\{{\begin{pmatrix}s\\1.1s\end{pmatrix}}\,{\big |}\,s\in \mathbb {R} \}}$
3. ${\displaystyle W_{1}=\mathbb {R} ^{2}}$ , ${\displaystyle W_{2}=\{{\vec {0}}\}}$
4. ${\displaystyle W_{1}=W_{2}=\{{\begin{pmatrix}t\\t\end{pmatrix}}\,{\big |}\,t\in \mathbb {R} \}}$
5. ${\displaystyle W_{1}=\{{\begin{pmatrix}1\\0\end{pmatrix}}+{\begin{pmatrix}x\\0\end{pmatrix}}\,{\big |}\,x\in \mathbb {R} \}}$ , ${\displaystyle W_{2}=\{{\begin{pmatrix}-1\\0\end{pmatrix}}+{\begin{pmatrix}0\\y\end{pmatrix}}\,{\big |}\,y\in \mathbb {R} \}}$
This exercise is recommended for all readers.
Problem 2

Show that ${\displaystyle \mathbb {R} ^{3}}$  is the direct sum of the ${\displaystyle xy}$ -plane with each of these.

1. the ${\displaystyle z}$ -axis
2. the line
${\displaystyle \{{\begin{pmatrix}z\\z\\z\end{pmatrix}}\,{\big |}\,z\in \mathbb {R} \}}$
Problem 3

Is ${\displaystyle {\mathcal {P}}_{2}}$  the direct sum of ${\displaystyle \{a+bx^{2}\,{\big |}\,a,b\in \mathbb {R} \}}$  and ${\displaystyle \{cx\,{\big |}\,c\in \mathbb {R} \}}$ ?

This exercise is recommended for all readers.
Problem 4

In ${\displaystyle {\mathcal {P}}_{n}}$ , the even polynomials are the members of this set

${\displaystyle {\mathcal {E}}=\{p\in {\mathcal {P}}_{n}\,{\big |}\,p(-x)=p(x){\text{ for all }}x\}}$

and the odd polynomials are the members of this set.

${\displaystyle {\mathcal {O}}=\{p\in {\mathcal {P}}_{n}\,{\big |}\,p(-x)=-p(x){\text{ for all }}x\}}$

Show that these are complementary subspaces.

Problem 5

Which of these subspaces of ${\displaystyle \mathbb {R} ^{3}}$

${\displaystyle W_{1}}$: the ${\displaystyle x}$-axis,      ${\displaystyle W_{2}}$: the ${\displaystyle y}$-axis,      ${\displaystyle W_{3}}$: the ${\displaystyle z}$-axis,
${\displaystyle W_{4}}$: the plane ${\displaystyle x+y+z=0}$,      ${\displaystyle W_{5}}$: the ${\displaystyle yz}$-plane

can be combined to

1. sum to ${\displaystyle \mathbb {R} ^{3}}$ ?
2. direct sum to ${\displaystyle \mathbb {R} ^{3}}$ ?
This exercise is recommended for all readers.
Problem 6

Show that ${\displaystyle {\mathcal {P}}_{n}=\{a_{0}\,{\big |}\,a_{0}\in \mathbb {R} \}\oplus \dots \oplus \{a_{n}x^{n}\,{\big |}\,a_{n}\in \mathbb {R} \}}$ .

Problem 7

What is ${\displaystyle W_{1}+W_{2}}$  if ${\displaystyle W_{1}\subseteq W_{2}}$ ?

Problem 8

Does Example 4.5 generalize? That is, is this true or false: if a vector space ${\displaystyle V}$  has a basis ${\displaystyle \langle {\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{n}\rangle }$  then it is the direct sum of the one-dimensional subspaces spanned by the basis vectors, ${\displaystyle V=[\{{\vec {\beta }}_{1}\}]\oplus \dots \oplus [\{{\vec {\beta }}_{n}\}]}$ ?

Problem 9

Can ${\displaystyle \mathbb {R} ^{4}}$  be decomposed as a direct sum in two different ways? Can ${\displaystyle \mathbb {R} ^{1}}$ ?

Problem 10

This exercise makes the notation of writing "${\displaystyle +}$ " between sets more natural. Prove that, where ${\displaystyle W_{1},\dots ,W_{k}}$  are subspaces of a vector space,

${\displaystyle W_{1}+\dots +W_{k}=\{{\vec {w}}_{1}+{\vec {w}}_{2}+\dots +{\vec {w}}_{k}\,{\big |}\,{\vec {w}}_{1}\in W_{1},\dots ,{\vec {w}}_{k}\in W_{k}\},}$

and so the sum of subspaces is the subspace of all sums.

Problem 11

(Refer to Example 4.19. This exercise shows that the requirement that pairwise intersections be trivial is genuinely stronger than the requirement only that the intersection of all of the subspaces be trivial.) Give a vector space and three subspaces ${\displaystyle W_{1}}$ , ${\displaystyle W_{2}}$ , and ${\displaystyle W_{3}}$  such that the space is the sum of the subspaces, the intersection of all three subspaces ${\displaystyle W_{1}\cap W_{2}\cap W_{3}}$  is trivial, but the pairwise intersections ${\displaystyle W_{1}\cap W_{2}}$ , ${\displaystyle W_{1}\cap W_{3}}$ , and ${\displaystyle W_{2}\cap W_{3}}$  are nontrivial.

This exercise is recommended for all readers.
Problem 12

Prove that if ${\displaystyle V=W_{1}\oplus \dots \oplus W_{k}}$  then ${\displaystyle W_{i}\cap W_{j}}$  is trivial whenever ${\displaystyle i\neq j}$ . This shows that the first half of the proof of Lemma 4.15 extends to the case of more than two subspaces. (Example 4.19 shows that this implication does not reverse; the other half does not extend.)

Problem 13

Recall that no linearly independent set contains the zero vector. Can an independent set of subspaces contain the trivial subspace?

This exercise is recommended for all readers.
Problem 14

Does every subspace have a complement?

This exercise is recommended for all readers.
Problem 15

Let ${\displaystyle W_{1},W_{2}}$  be subspaces of a vector space.

1. Assume that the set ${\displaystyle S_{1}}$  spans ${\displaystyle W_{1}}$ , and that the set ${\displaystyle S_{2}}$  spans ${\displaystyle W_{2}}$ . Can ${\displaystyle S_{1}\cup S_{2}}$  span ${\displaystyle W_{1}+W_{2}}$ ? Must it?
2. Assume that ${\displaystyle S_{1}}$  is a linearly independent subset of ${\displaystyle W_{1}}$  and that ${\displaystyle S_{2}}$  is a linearly independent subset of ${\displaystyle W_{2}}$ . Can ${\displaystyle S_{1}\cup S_{2}}$  be a linearly independent subset of ${\displaystyle W_{1}+W_{2}}$ ? Must it?
Problem 16

When a vector space is decomposed as a direct sum, the dimensions of the subspaces add to the dimension of the space. The situation with a space that is given as the sum of its subspaces is not as simple. This exercise considers the two-subspace special case.

1. For these subspaces of ${\displaystyle {\mathcal {M}}_{2\!\times \!2}}$  find ${\displaystyle W_{1}\cap W_{2}}$ , ${\displaystyle \dim(W_{1}\cap W_{2})}$ , ${\displaystyle W_{1}+W_{2}}$ , and ${\displaystyle \dim(W_{1}+W_{2})}$ .
${\displaystyle W_{1}=\{{\begin{pmatrix}0&0\\c&d\end{pmatrix}}\,{\big |}\,c,d\in \mathbb {R} \}\qquad W_{2}=\{{\begin{pmatrix}0&b\\c&0\end{pmatrix}}\,{\big |}\,b,c\in \mathbb {R} \}}$
2. Suppose that ${\displaystyle U}$  and ${\displaystyle W}$  are subspaces of a vector space. Suppose that the sequence ${\displaystyle \langle {\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{k}\rangle }$  is a basis for ${\displaystyle U\cap W}$ . Finally, suppose that the prior sequence has been expanded to give a sequence ${\displaystyle \langle {\vec {\mu }}_{1},\dots ,{\vec {\mu }}_{j},{\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{k}\rangle }$  that is a basis for ${\displaystyle U}$ , and a sequence ${\displaystyle \langle {\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{k},{\vec {\omega }}_{1},\dots ,{\vec {\omega }}_{p}\rangle }$  that is a basis for ${\displaystyle W}$ . Prove that this sequence
${\displaystyle \langle {\vec {\mu }}_{1},\dots ,{\vec {\mu }}_{j},{\vec {\beta }}_{1},\dots ,{\vec {\beta }}_{k},{\vec {\omega }}_{1},\dots ,{\vec {\omega }}_{p}\rangle }$
is a basis for the sum ${\displaystyle U+W}$ .
3. Conclude that ${\displaystyle \dim(U+W)=\dim(U)+\dim(W)-\dim(U\cap W)}$ .
4. Let ${\displaystyle W_{1}}$  and ${\displaystyle W_{2}}$  be eight-dimensional subspaces of a ten-dimensional space. List all values possible for ${\displaystyle \dim(W_{1}\cap W_{2})}$ .
Problem 17

Let ${\displaystyle V=W_{1}\oplus \dots \oplus W_{k}}$  and for each index ${\displaystyle i}$  suppose that ${\displaystyle S_{i}}$  is a linearly independent subset of ${\displaystyle W_{i}}$ . Prove that the union of the ${\displaystyle S_{i}}$ 's is linearly independent.

Problem 18

A matrix is symmetric if for each pair of indices ${\displaystyle i}$  and ${\displaystyle j}$ , the ${\displaystyle i,j}$  entry equals the ${\displaystyle j,i}$  entry. A matrix is antisymmetric if each ${\displaystyle i,j}$  entry is the negative of the ${\displaystyle j,i}$  entry.

1. Give a symmetric ${\displaystyle 2\!\times \!2}$  matrix and an antisymmetric ${\displaystyle 2\!\times \!2}$  matrix. (Remark. For the second one, be careful about the entries on the diagonal.)
2. What is the relationship between a square symmetric matrix and its transpose? Between a square antisymmetric matrix and its transpose?
3. Show that ${\displaystyle {\mathcal {M}}_{n\!\times \!n}}$  is the direct sum of the space of symmetric matrices and the space of antisymmetric matrices.
Problem 19

Let ${\displaystyle W_{1},W_{2},W_{3}}$  be subspaces of a vector space. Prove that ${\displaystyle (W_{1}\cap W_{2})+(W_{1}\cap W_{3})\subseteq W_{1}\cap (W_{2}+W_{3})}$ . Does the inclusion reverse?

Problem 20

The example of the ${\displaystyle x}$ -axis and the ${\displaystyle y}$ -axis in ${\displaystyle \mathbb {R} ^{2}}$  shows that ${\displaystyle W_{1}\oplus W_{2}=V}$  does not imply that ${\displaystyle W_{1}\cup W_{2}=V}$ . Can ${\displaystyle W_{1}\oplus W_{2}=V}$  and ${\displaystyle W_{1}\cup W_{2}=V}$  happen?

This exercise is recommended for all readers.
Problem 21

Our model for complementary subspaces, the ${\displaystyle x}$ -axis and the ${\displaystyle y}$ -axis in ${\displaystyle \mathbb {R} ^{2}}$ , has one property not used here. Where ${\displaystyle U}$  is a subspace of ${\displaystyle \mathbb {R} ^{n}}$  we define the orthogonal complement of ${\displaystyle U}$  to be

${\displaystyle U^{\perp }=\{{\vec {v}}\in \mathbb {R} ^{n}\,{\big |}\,{\vec {v}}\cdot {\vec {u}}=0{\text{ for all }}{\vec {u}}\in U\}}$

(read "${\displaystyle U}$  perp").

1. Find the orthocomplement of the ${\displaystyle x}$ -axis in ${\displaystyle \mathbb {R} ^{2}}$ .
2. Find the orthocomplement of the ${\displaystyle x}$ -axis in ${\displaystyle \mathbb {R} ^{3}}$ .
3. Find the orthocomplement of the ${\displaystyle xy}$ -plane in ${\displaystyle \mathbb {R} ^{3}}$ .
4. Show that the orthocomplement of a subspace is a subspace.
5. Show that if ${\displaystyle W}$  is the orthocomplement of ${\displaystyle U}$  then ${\displaystyle U}$  is the orthocomplement of ${\displaystyle W}$ .
6. Prove that a subspace and its orthocomplement have a trivial intersection.
7. Conclude that for any ${\displaystyle n}$  and subspace ${\displaystyle U\subseteq \mathbb {R} ^{n}}$  we have that ${\displaystyle \mathbb {R} ^{n}=U\oplus U^{\perp }}$ .
8. Show that ${\displaystyle \dim(U)+\dim(U^{\perp })}$  equals the dimension of the enclosing space.
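The definition gives a direct way to compute ${\displaystyle U^{\perp }}$: a vector is in the complement exactly when it is annihilated by every generator of ${\displaystyle U}$, i.e., when it lies in the null space of the matrix with those generators as rows. A sketch assuming NumPy (the helper name `orthogonal_complement` is ours, and the ${\displaystyle x}$-axis in ${\displaystyle \mathbb {R} ^{3}}$ from part 2 serves as the test case):

```python
import numpy as np

def orthogonal_complement(U_cols):
    """Basis (as columns) for the orthogonal complement of col-space(U_cols)."""
    # v is in the complement iff U_cols^T v = 0, so take the null space of U_cols^T,
    # read from the trailing right-singular vectors of its SVD.
    _, s, Vt = np.linalg.svd(U_cols.T)
    rank = int(np.sum(s > 1e-10))
    return Vt[rank:].T

# Part 2 of the exercise: the x-axis in R^3, whose complement is the yz-plane.
U = np.array([[1.0], [0.0], [0.0]])
W = orthogonal_complement(U)

n = U.shape[0]
dim_U = np.linalg.matrix_rank(U)
dim_W = W.shape[1]
```

The check that `dim_U + dim_W == n` is exactly the dimension statement of part 8, and `U.T @ W` being zero is the defining orthogonality condition.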
This exercise is recommended for all readers.
Problem 22

Consider Corollary 4.13. Does it work both ways— that is, supposing that ${\displaystyle V=W_{1}+\dots +W_{k}}$ , is ${\displaystyle V=W_{1}\oplus \dots \oplus W_{k}}$  if and only if ${\displaystyle \dim(V)=\dim(W_{1})+\dots +\dim(W_{k})}$ ?

Problem 23

We know that if ${\displaystyle V=W_{1}\oplus W_{2}}$  then there is a basis for ${\displaystyle V}$  that splits into a basis for ${\displaystyle W_{1}}$  and a basis for ${\displaystyle W_{2}}$ . Can we make the stronger statement that every basis for ${\displaystyle V}$  splits into a basis for ${\displaystyle W_{1}}$  and a basis for ${\displaystyle W_{2}}$ ?

Problem 24

We can ask about the algebra of the "${\displaystyle +}$ " operation.

1. Is it commutative; is ${\displaystyle W_{1}+W_{2}=W_{2}+W_{1}}$ ?
2. Is it associative; is ${\displaystyle (W_{1}+W_{2})+W_{3}=W_{1}+(W_{2}+W_{3})}$ ?
3. Let ${\displaystyle W}$  be a subspace of some vector space. Show that ${\displaystyle W+W=W}$ .
4. Must there be an identity element, a subspace ${\displaystyle I}$  such that ${\displaystyle I+W=W+I=W}$  for all subspaces ${\displaystyle W}$ ?
5. Does left-cancelation hold: if ${\displaystyle W_{1}+W_{2}=W_{1}+W_{3}}$  then ${\displaystyle W_{2}=W_{3}}$ ? Right cancelation?
Problem 25

Consider the algebraic properties of the direct sum operation.

1. Does direct sum commute: does ${\displaystyle V=W_{1}\oplus W_{2}}$  imply that ${\displaystyle V=W_{2}\oplus W_{1}}$ ?
2. Prove that direct sum is associative: ${\displaystyle (W_{1}\oplus W_{2})\oplus W_{3}=W_{1}\oplus (W_{2}\oplus W_{3})}$ .
3. Show that ${\displaystyle \mathbb {R} ^{3}}$  is the direct sum of the three axes (the relevance here is that by the previous item, we needn't specify which two of the three axes are combined first).
4. Does the direct sum operation left-cancel: does ${\displaystyle W_{1}\oplus W_{2}=W_{1}\oplus W_{3}}$  imply ${\displaystyle W_{2}=W_{3}}$ ? Does it right-cancel?
5. There is an identity element with respect to this operation. Find it.
6. Do some, or all, subspaces have inverses with respect to this operation: is there a subspace ${\displaystyle W}$  of some vector space such that there is a subspace ${\displaystyle U}$  with the property that ${\displaystyle U\oplus W}$  equals the identity element from the prior item?

## References

• Halsey, William D. (1979), Macmillan Dictionary, Macmillan.