Linear Algebra/Combining Subspaces


This subsection is optional. It is required only for the last sections of Chapter Three and Chapter Five and for occasional exercises, and can be passed over without loss of continuity.

This chapter opened with the definition of a vector space, and the middle consisted of a first analysis of the idea. This subsection closes the chapter by finishing the analysis, in the sense that "analysis" means "method of determining the ... essential features of something by separating it into parts" (Halsey 1979).

A common way to understand things is to see how they can be built from component parts. For instance, we think of $\mathbb{R}^3$ as put together, in some way, from the $x$-axis, the $y$-axis, and the $z$-axis. In this subsection we will make this precise; we will describe how to decompose a vector space into a combination of some of its subspaces. In developing this idea of subspace combination, we will keep the $\mathbb{R}^3$ example in mind as a benchmark model.

Subspaces are subsets and sets combine via union. But taking the combination operation for subspaces to be the simple union operation isn't what we want. For one thing, the union of the $x$-axis, the $y$-axis, and the $z$-axis is not all of $\mathbb{R}^3$, so the benchmark model would be left out. Besides, union is all wrong for this reason: a union of subspaces need not be a subspace (it need not be closed; for instance, this vector

$$\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$$

is in none of the three axes and hence is not in the union). In addition to the members of the subspaces, we must at least also include all of the linear combinations.

Definition 4.1

Where $W_1, \ldots, W_k$ are subspaces of a vector space, their sum is the span of their union: $W_1 + W_2 + \cdots + W_k = [W_1 \cup W_2 \cup \cdots \cup W_k]$.

(The notation, writing the "$+$" between sets in addition to using it between vectors, fits with the practice of using this symbol for any natural accumulation operation.)

Example 4.2

The $\mathbb{R}^3$ model fits with this operation. Any vector $\vec{w} \in \mathbb{R}^3$ can be written as a linear combination $c_1\vec{v}_1 + c_2\vec{v}_2 + c_3\vec{v}_3$ where $\vec{v}_1$ is a member of the $x$-axis, etc., in this way

$$\begin{pmatrix} w_1 \\ w_2 \\ w_3 \end{pmatrix} = 1 \cdot \begin{pmatrix} w_1 \\ 0 \\ 0 \end{pmatrix} + 1 \cdot \begin{pmatrix} 0 \\ w_2 \\ 0 \end{pmatrix} + 1 \cdot \begin{pmatrix} 0 \\ 0 \\ w_3 \end{pmatrix}$$

and so $\mathbb{R}^3 = x\text{-axis} + y\text{-axis} + z\text{-axis}$.

Example 4.3

A sum of subspaces can be less than the entire space. Inside of $\mathcal{P}_4$, let $L$ be the subspace of linear polynomials $L = \{a + bx \mid a, b \in \mathbb{R}\}$ and let $C$ be the subspace of purely-cubic polynomials $C = \{cx^3 \mid c \in \mathbb{R}\}$. Then $L + C$ is not all of $\mathcal{P}_4$. Instead, it is the subspace $L + C = \{a + bx + cx^3 \mid a, b, c \in \mathbb{R}\}$.

Example 4.4

A space can be described as a combination of subspaces in more than one way. Besides the decomposition $\mathbb{R}^3 = x\text{-axis} + y\text{-axis} + z\text{-axis}$, we can also write $\mathbb{R}^3 = xy\text{-plane} + yz\text{-plane}$. To check this, note that any $\vec{w} \in \mathbb{R}^3$ can be written as a linear combination of a member of the $xy$-plane and a member of the $yz$-plane; here are two such combinations.

$$\begin{pmatrix} w_1 \\ w_2 \\ w_3 \end{pmatrix} = 1 \cdot \begin{pmatrix} w_1 \\ w_2 \\ 0 \end{pmatrix} + 1 \cdot \begin{pmatrix} 0 \\ 0 \\ w_3 \end{pmatrix} \qquad \begin{pmatrix} w_1 \\ w_2 \\ w_3 \end{pmatrix} = 1 \cdot \begin{pmatrix} w_1 \\ 0 \\ 0 \end{pmatrix} + 1 \cdot \begin{pmatrix} 0 \\ w_2 \\ w_3 \end{pmatrix}$$

The above definition gives one way in which a space can be thought of as a combination of some of its parts. However, the prior example shows that there is at least one interesting property of our benchmark model that is not captured by the definition of the sum of subspaces. In the familiar decomposition of $\mathbb{R}^3$, we often speak of a vector's "$x$ part" or "$y$ part" or "$z$ part". That is, in this model, each vector has a unique decomposition into parts that come from the parts making up the whole space. But in the decomposition used in Example 4.4, we cannot refer to the "$xy$ part" of a vector; these three sums

$$\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ 3 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 2 \\ 3 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \\ 3 \end{pmatrix}$$

all describe the vector as comprised of something from the first plane plus something from the second plane, but the "$xy$ part" is different in each.

That is, when we consider how $\mathbb{R}^3$ is put together from the three axes "in some way", we might mean "in such a way that every vector has at least one decomposition", and that leads to the definition above. But if we take it to mean "in such a way that every vector has one and only one decomposition" then we need another condition on combinations. To see what this condition is, recall that vectors are uniquely represented in terms of a basis. We can use this to break a space into a sum of subspaces such that any vector in the space breaks uniquely into a sum of members of those subspaces.

Example 4.5

The benchmark is $\mathbb{R}^3$ with its standard basis $\mathcal{E}_3 = \langle \vec{e}_1, \vec{e}_2, \vec{e}_3 \rangle$. The subspace with the basis $B_1 = \langle \vec{e}_1 \rangle$ is the $x$-axis. The subspace with the basis $B_2 = \langle \vec{e}_2 \rangle$ is the $y$-axis. The subspace with the basis $B_3 = \langle \vec{e}_3 \rangle$ is the $z$-axis. The fact that any member of $\mathbb{R}^3$ is expressible as a sum of vectors from these subspaces

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} x \\ 0 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ y \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ z \end{pmatrix}$$

is a reflection of the fact that $\mathcal{E}_3$ spans the space: this equation

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = c_1 \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + c_2 \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + c_3 \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$$

has a solution for any $x, y, z \in \mathbb{R}$. And the fact that each such expression is unique reflects the fact that $\mathcal{E}_3$ is linearly independent: any equation like the one above has a unique solution.
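
To make the span and independence facts concrete, here is a minimal computational sketch, assuming NumPy: since the matrix whose columns are the members of $\mathcal{E}_3$ is invertible, the equation above has one and only one solution for any right-hand side.

```python
import numpy as np

# Columns are the standard basis vectors of R^3.
E3 = np.eye(3)

# An arbitrary vector to decompose into x-, y-, and z-axis parts.
w = np.array([4.0, -1.0, 2.0])

# Solve E3 @ c = w.  A solution exists (E3 spans R^3) and it is
# unique (E3 is linearly independent), so solve() succeeds.
c = np.linalg.solve(E3, w)
print(c)  # [ 4. -1.  2.]
```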

Example 4.6

We don't have to take the basis vectors one at a time; the same idea works if we conglomerate them into larger sequences. Consider again the space $\mathbb{R}^3$ and the vectors from the standard basis $\mathcal{E}_3$. The subspace with the basis $B_1 = \langle \vec{e}_1, \vec{e}_3 \rangle$ is the $xz$-plane. The subspace with the basis $B_2 = \langle \vec{e}_2 \rangle$ is the $y$-axis. As in the prior example, the fact that any member of the space is a sum of members of the two subspaces in one and only one way

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} x \\ 0 \\ z \end{pmatrix} + \begin{pmatrix} 0 \\ y \\ 0 \end{pmatrix}$$

is a reflection of the fact that these vectors form a basis: this system

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = c_1 \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + c_2 \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} + c_3 \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}$$

has one and only one solution for any $x, y, z$.

These examples illustrate a natural way to decompose a space into a sum of subspaces in such a way that each vector decomposes uniquely into a sum of vectors from the parts. The next result says that this way is the only way.

Definition 4.7

The concatenation of the sequences $B_1 = \langle \vec{\beta}_{1,1}, \ldots, \vec{\beta}_{1,n_1} \rangle$, ..., $B_k = \langle \vec{\beta}_{k,1}, \ldots, \vec{\beta}_{k,n_k} \rangle$ is their adjoinment.

$$B_1 \frown B_2 \frown \cdots \frown B_k = \langle \vec{\beta}_{1,1}, \ldots, \vec{\beta}_{1,n_1}, \vec{\beta}_{2,1}, \ldots, \vec{\beta}_{k,n_k} \rangle$$

Lemma 4.8

Let $V$ be a vector space that is the sum of some of its subspaces $V = W_1 + \cdots + W_k$. Let $B_1$, ..., $B_k$ be any bases for these subspaces. Then the following are equivalent.

  1. For every $\vec{v} \in V$, the expression $\vec{v} = \vec{w}_1 + \cdots + \vec{w}_k$ (with $\vec{w}_i \in W_i$) is unique.
  2. The concatenation $B_1 \frown B_2 \frown \cdots \frown B_k$ is a basis for $V$.
  3. The nonzero members of $\{\vec{w}_1, \ldots, \vec{w}_k\}$ (with $\vec{w}_i \in W_i$) form a linearly independent set; among nonzero vectors from different $W_i$'s, every linear relationship is trivial.
Proof

We will show that $(1) \Longrightarrow (2)$, that $(2) \Longrightarrow (3)$, and finally that $(3) \Longrightarrow (1)$. For these arguments, observe that we can pass from a combination of $\vec{w}$'s to a combination of $\vec{\beta}$'s

$$d_1\vec{w}_1 + \cdots + d_k\vec{w}_k = d_1(c_{1,1}\vec{\beta}_{1,1} + \cdots + c_{1,n_1}\vec{\beta}_{1,n_1}) + \cdots + d_k(c_{k,1}\vec{\beta}_{k,1} + \cdots + c_{k,n_k}\vec{\beta}_{k,n_k}) = d_1c_{1,1}\vec{\beta}_{1,1} + \cdots + d_kc_{k,n_k}\vec{\beta}_{k,n_k} \qquad (*)$$

and vice versa.

For $(1) \Longrightarrow (2)$, assume that all decompositions are unique. We will show that $B_1 \frown \cdots \frown B_k$ spans the space and is linearly independent. It spans the space because the assumption that $V = W_1 + \cdots + W_k$ means that every $\vec{v}$ can be expressed as $\vec{v} = \vec{w}_1 + \cdots + \vec{w}_k$, which translates by equation $(*)$ to an expression of $\vec{v}$ as a linear combination of the $\vec{\beta}$'s from the concatenation. For linear independence, consider this linear relationship.

$$\vec{0} = c_{1,1}\vec{\beta}_{1,1} + \cdots + c_{k,n_k}\vec{\beta}_{k,n_k}$$

Regroup as in $(*)$ (that is, take $d_1$, ..., $d_k$ to be $1$ and move from the bottom expression to the top one) to get the decomposition $\vec{0} = \vec{w}_1 + \cdots + \vec{w}_k$. Because of the assumption that decompositions are unique, and because the zero vector obviously has the decomposition $\vec{0} = \vec{0} + \cdots + \vec{0}$, we now have that each $\vec{w}_i$ is the zero vector. This means that $c_{i,1}\vec{\beta}_{i,1} + \cdots + c_{i,n_i}\vec{\beta}_{i,n_i} = \vec{0}$. Thus, since each $B_i$ is a basis, we have the desired conclusion that all of the $c$'s are zero.

For $(2) \Longrightarrow (3)$, assume that $B_1 \frown \cdots \frown B_k$ is a basis for the space. Consider a linear relationship among nonzero vectors from different $W_i$'s,

$$\vec{0} = \cdots + d_i\vec{w}_i + \cdots$$

in order to show that it is trivial. (The relationship is written in this way because we are considering a combination of nonzero vectors from only some of the $W_i$'s; for instance, there might not be a $\vec{w}_1$ in this combination.) As in $(*)$, expanding each $\vec{w}_i$ with respect to $B_i$ gives $\vec{0} = \cdots + d_ic_{i,1}\vec{\beta}_{i,1} + \cdots + d_ic_{i,n_i}\vec{\beta}_{i,n_i} + \cdots$, and the linear independence of $B_1 \frown \cdots \frown B_k$ gives that each coefficient $d_ic_{i,j}$ is zero. Now, $\vec{w}_i$ is a nonzero vector, so at least one of the $c_{i,j}$'s is not zero, and thus $d_i$ is zero. This holds for each $d_i$, and therefore the linear relationship is trivial.

Finally, for $(3) \Longrightarrow (1)$, assume that, among nonzero vectors from different $W_i$'s, any linear relationship is trivial. Consider two decompositions of a vector, $\vec{v} = \vec{w}_1 + \cdots + \vec{w}_k$ and $\vec{v} = \vec{u}_1 + \cdots + \vec{u}_k$ (with $\vec{w}_i, \vec{u}_i \in W_i$), in order to show that the two are the same. Subtracting gives

$$\vec{0} = (\vec{w}_1 - \vec{u}_1) + \cdots + (\vec{w}_k - \vec{u}_k)$$

which violates the assumption unless each $\vec{w}_i - \vec{u}_i$ is the zero vector. Hence, decompositions are unique.
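
Condition 2 of the lemma also gives a mechanical test. Here is a minimal sketch, assuming NumPy, that checks Example 4.6: the concatenation of a subbasis for the $xz$-plane with a subbasis for the $y$-axis has full rank, so it is a basis for $\mathbb{R}^3$ and decompositions are unique.

```python
import numpy as np

# Subbasis for the xz-plane and subbasis for the y-axis.
B1 = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])]
B2 = [np.array([0.0, 1.0, 0.0])]

# Form the concatenation as the columns of a matrix.
concat = np.column_stack(B1 + B2)

# Full rank means the concatenation is a basis for R^3, which by
# Lemma 4.8 is equivalent to unique decompositions.
print(np.linalg.matrix_rank(concat) == 3)  # True
```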

Definition 4.9

A collection of subspaces $\{W_1, \ldots, W_k\}$ is independent if no nonzero vector from any $W_i$ is a linear combination of vectors from the other subspaces $W_1, \ldots, W_{i-1}, W_{i+1}, \ldots, W_k$.

Definition 4.10

A vector space $V$ is the direct sum (or internal direct sum) of its subspaces $W_1, \ldots, W_k$ if $V = W_1 + W_2 + \cdots + W_k$ and the collection $\{W_1, \ldots, W_k\}$ is independent. We write $V = W_1 \oplus W_2 \oplus \cdots \oplus W_k$.

Example 4.11

The benchmark model fits: $\mathbb{R}^3 = x\text{-axis} \oplus y\text{-axis} \oplus z\text{-axis}$.

Example 4.12

The space of $2 \times 2$ matrices is this direct sum.

$$\left\{ \begin{pmatrix} a & 0 \\ 0 & d \end{pmatrix} \;\middle|\; a, d \in \mathbb{R} \right\} \oplus \left\{ \begin{pmatrix} 0 & b \\ 0 & 0 \end{pmatrix} \;\middle|\; b \in \mathbb{R} \right\} \oplus \left\{ \begin{pmatrix} 0 & 0 \\ c & 0 \end{pmatrix} \;\middle|\; c \in \mathbb{R} \right\}$$

It is the direct sum of subspaces in many other ways as well; direct sum decompositions are not unique.
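
For instance, the same space is also the direct sum of the upper-triangular matrices and the strictly lower-triangular matrices; one can check that these two subspaces sum to the whole space and are independent.

$$\mathcal{M}_{2\times 2} = \left\{ \begin{pmatrix} a & b \\ 0 & d \end{pmatrix} \;\middle|\; a, b, d \in \mathbb{R} \right\} \oplus \left\{ \begin{pmatrix} 0 & 0 \\ c & 0 \end{pmatrix} \;\middle|\; c \in \mathbb{R} \right\}$$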

Corollary 4.13

The dimension of a direct sum is the sum of the dimensions of its summands.

Proof

In Lemma 4.8, the number of basis vectors in the concatenation equals the sum of the number of vectors in the subbases that make up the concatenation.
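
As a quick arithmetic instance, the summands in Example 4.12 have dimensions $2$, $1$, and $1$, and indeed

$$\dim(\mathcal{M}_{2\times 2}) = 2 + 1 + 1 = 4$$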

The special case of two subspaces is worth mentioning separately.

Definition 4.14

When a vector space is the direct sum of two of its subspaces, then they are said to be complements.

Lemma 4.15

A vector space $V$ is the direct sum of two of its subspaces $W_1$ and $W_2$ if and only if it is the sum of the two, $V = W_1 + W_2$, and their intersection is trivial, $W_1 \cap W_2 = \{\vec{0}\}$.

Proof

Suppose first that $V = W_1 \oplus W_2$. By definition, $V$ is the sum of the two. To show that the two have a trivial intersection, let $\vec{v}$ be a vector from $W_1 \cap W_2$ and consider the equation $\vec{v} = \vec{v}$. On the left side of that equation is a member of $W_1$, and on the right side is a linear combination of members (actually, of only one member) of $W_2$. But the independence of the spaces then implies that $\vec{v} = \vec{0}$, as desired.

For the other direction, suppose that $V$ is the sum of two spaces with a trivial intersection. To show that $V$ is a direct sum of the two, we need only show that the spaces are independent: no nonzero member of the first is expressible as a linear combination of members of the second, and vice versa. This is true because any relationship $\vec{w}_1 = c_1\vec{w}_{2,1} + \cdots + c_k\vec{w}_{2,k}$ (with $\vec{w}_1 \in W_1$ and $\vec{w}_{2,j} \in W_2$ for all $j$) shows that the vector on the left is also in $W_2$, since the right side is a combination of members of $W_2$. The intersection of these two spaces is trivial, so $\vec{w}_1 = \vec{0}$. The same argument works for any nonzero member of $W_2$.
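
The lemma also suggests a computational test for a pair of subspaces given by spanning vectors. Here is a minimal sketch, assuming NumPy; the helper name `is_direct_sum_pair` is illustrative. It uses the fact that $W_1 \cap W_2$ is trivial exactly when $\dim(W_1 + W_2) = \dim(W_1) + \dim(W_2)$.

```python
import numpy as np

def is_direct_sum_pair(span1, span2):
    """True when the subspaces spanned by the columns of span1 and span2
    have a trivial intersection, i.e. dim(W1 + W2) = dim W1 + dim W2."""
    combined = np.column_stack([span1, span2])
    return (np.linalg.matrix_rank(combined)
            == np.linalg.matrix_rank(span1) + np.linalg.matrix_rank(span2))

# The x-axis and the line y = x are complements in R^2 ...
W1 = np.array([[1.0], [0.0]])
W2 = np.array([[1.0], [1.0]])
print(is_direct_sum_pair(W1, W2))  # True

# ... but the x-axis paired with itself is not independent.
print(is_direct_sum_pair(W1, W1))  # False
```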

Example 4.16

In the space $\mathbb{R}^2$, the $x$-axis and the $y$-axis are complements, that is, $\mathbb{R}^2 = x\text{-axis} \oplus y\text{-axis}$. A space can have more than one pair of complementary subspaces; another pair here are the subspaces consisting of the lines $y = x$ and $y = 2x$.

Example 4.17

In the space $F = \{c_1\cos\theta + c_2\sin\theta \mid c_1, c_2 \in \mathbb{R}\}$, the subspaces $W_1 = \{c_1\cos\theta \mid c_1 \in \mathbb{R}\}$ and $W_2 = \{c_2\sin\theta \mid c_2 \in \mathbb{R}\}$ are complements. In addition to the fact that a space like $F$ can have more than one pair of complementary subspaces, inside of the space a single subspace like $W_1$ can have more than one complement; another complement of $W_1$ is $W_3 = \{c_2(\sin\theta + \cos\theta) \mid c_2 \in \mathbb{R}\}$.

Example 4.18

In $\mathbb{R}^3$, the $xy$-plane and the $yz$-plane are not complements, which is the point of the discussion following Example 4.4. One complement of the $xy$-plane is the $z$-axis. A complement of the $yz$-plane is the line through $(1, 1, 1)$.

Example 4.19

Following Lemma 4.15, here is a natural question: is the simple sum also a direct sum if and only if the intersection of the subspaces is trivial? The answer is that if there are more than two subspaces then having a trivial intersection is not enough to guarantee unique decomposition (i.e., is not enough to ensure that the spaces are independent). In $\mathbb{R}^2$, let $W_1$ be the $x$-axis, let $W_2$ be the $y$-axis, and let $W_3$ be this.

$$W_3 = \left\{ \begin{pmatrix} q \\ q \end{pmatrix} \;\middle|\; q \in \mathbb{R} \right\}$$

The check that $\mathbb{R}^2 = W_1 + W_2 + W_3$ is easy. The intersection $W_1 \cap W_2 \cap W_3$ is trivial, but decompositions aren't unique.

$$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ y \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \end{pmatrix} = \begin{pmatrix} x - y \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \end{pmatrix} + \begin{pmatrix} y \\ y \end{pmatrix}$$
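
Numerically, both decompositions rebuild the same vector from the three subspaces; here is a minimal sketch, assuming NumPy, with $x = 3$ and $y = 5$.

```python
import numpy as np

target = np.array([3.0, 5.0])

# x-axis part + y-axis part + zero vector from the diagonal.
d1 = [np.array([3.0, 0.0]), np.array([0.0, 5.0]), np.array([0.0, 0.0])]
# A different split that uses a nonzero part from the diagonal.
d2 = [np.array([-2.0, 0.0]), np.array([0.0, 0.0]), np.array([5.0, 5.0])]

print(np.allclose(sum(d1), target))  # True
print(np.allclose(sum(d2), target))  # True: same vector, different parts
```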

(This example also shows that the stronger requirement, that all pairwise intersections of the subspaces be trivial, is not enough either. See Problem 11.)

Example 4.20

This subspace of $\mathcal{M}_{2\times 2}$, the diagonal matrices,

$$\left\{ \begin{pmatrix} a & 0 \\ 0 & d \end{pmatrix} \;\middle|\; a, d \in \mathbb{R} \right\} = \left\{ \begin{pmatrix} a & 0 \\ 0 & 0 \end{pmatrix} \;\middle|\; a \in \mathbb{R} \right\} \oplus \left\{ \begin{pmatrix} 0 & 0 \\ 0 & d \end{pmatrix} \;\middle|\; d \in \mathbb{R} \right\}$$

shows that a direct sum doesn't have to be a maximal space: a direct sum of subspaces can itself be a proper subspace of the enclosing space.

Example 4.21

The direct sum $\{a + bx \mid a, b \in \mathbb{R}\} \oplus \{cx^2 + dx^3 + ex^4 \mid c, d, e \in \mathbb{R}\}$ is $\mathcal{P}_4$, i.e., the space of polynomials of degree at most 4.

And the direct sum $\mathcal{P}_4 \oplus \{fx^5 + gx^6 + hx^7 \mid f, g, h \in \mathbb{R}\}$ is $\mathcal{P}_7$, i.e., the space of polynomials of degree at most 7.

This shows that the direct sum of two vector spaces can be directly summed again to form an even bigger vector space (at least in the case of finite-dimensional vector spaces of polynomials, this can be repeated indefinitely).

Example 4.22

Summands of some direct sums can be written as direct sums themselves. Continuing the prior example,

$$\mathcal{P}_4 = \{a + bx \mid a, b \in \mathbb{R}\} \oplus \{cx^2 + dx^3 + ex^4 \mid c, d, e \in \mathbb{R}\}$$

$$\{cx^2 + dx^3 + ex^4 \mid c, d, e \in \mathbb{R}\} = \{cx^2 \mid c \in \mathbb{R}\} \oplus \{dx^3 + ex^4 \mid d, e \in \mathbb{R}\}$$

In this subsection we have seen two ways to regard a space as built up from component parts. Both are useful; in particular, in this book the direct sum definition is needed to do the Jordan Form construction in the fifth chapter.

Exercises

This exercise is recommended for all readers.
Problem 1

Decide if $\mathbb{R}^2$ is the direct sum of each $W_1$ and $W_2$.

  1.  ,  
  2.  ,  
  3.  ,  
  4.  
  5.  ,  
This exercise is recommended for all readers.
Problem 2

Show that $\mathbb{R}^3$ is the direct sum of the $xy$-plane with each of these.

  1. the $z$-axis
  2. the line
     
Problem 3

Is   the direct sum of   and  ?

This exercise is recommended for all readers.
Problem 4

In $\mathcal{P}_n$, the even polynomials are the members of this set

$$\mathcal{E} = \{p \in \mathcal{P}_n \mid p(-x) = p(x) \text{ for all } x\}$$

and the odd polynomials are the members of this set.

$$\mathcal{O} = \{p \in \mathcal{P}_n \mid p(-x) = -p(x) \text{ for all } x\}$$

Show that these are complementary subspaces.

Problem 5

Which of these subspaces of $\mathbb{R}^3$

$W_1$: the $x$-axis,    $W_2$: the $y$-axis,    $W_3$: the $z$-axis,
$W_4$: the plane $x + y + z = 0$,    $W_5$: the $yz$-plane

can be combined to

  1. sum to $\mathbb{R}^3$?
  2. direct sum to $\mathbb{R}^3$?
This exercise is recommended for all readers.
Problem 6

Show that  .

Problem 7

What is $W_1 + W_2$ if $W_1 \subseteq W_2$?

Problem 8

Does Example 4.5 generalize? That is, is this true or false: if a vector space $V$ has a basis $\langle \vec{\beta}_1, \ldots, \vec{\beta}_n \rangle$ then it is the direct sum of the spans of the one-dimensional subspaces, $V = [\{\vec{\beta}_1\}] \oplus \cdots \oplus [\{\vec{\beta}_n\}]$?

Problem 9

Can $\mathbb{R}^4$ be decomposed as a direct sum in two different ways? Can $\mathbb{R}^1$?

Problem 10

This exercise makes the notation of writing "$+$" between sets more natural. Prove that, where $W_1, \ldots, W_k$ are subspaces of a vector space,

$$W_1 + W_2 + \cdots + W_k = \{\vec{w}_1 + \vec{w}_2 + \cdots + \vec{w}_k \mid \vec{w}_1 \in W_1, \ldots, \vec{w}_k \in W_k\}$$

and so the sum of subspaces is the subspace of all sums.

Problem 11

(Refer to Example 4.19. This exercise shows that the requirement that pairwise intersections be trivial is genuinely stronger than the requirement only that the intersection of all of the subspaces be trivial.) Give a vector space and three subspaces $W_1$, $W_2$, and $W_3$ such that the space is the sum of the subspaces, the intersection of all three subspaces $W_1 \cap W_2 \cap W_3$ is trivial, but the pairwise intersections $W_1 \cap W_2$, $W_1 \cap W_3$, and $W_2 \cap W_3$ are nontrivial.

This exercise is recommended for all readers.
Problem 12

Prove that if $V = W_1 \oplus \cdots \oplus W_k$ then $W_i \cap W_j$ is trivial whenever $i \neq j$. This shows that the first half of the proof of Lemma 4.15 extends to the case of more than two subspaces. (Example 4.19 shows that this implication does not reverse; the other half does not extend.)

Problem 13

Recall that no linearly independent set contains the zero vector. Can an independent set of subspaces contain the trivial subspace?

This exercise is recommended for all readers.
Problem 14

Does every subspace have a complement?

This exercise is recommended for all readers.
Problem 15

Let $W_1$ and $W_2$ be subspaces of a vector space.

  1. Assume that the set $S_1$ spans $W_1$, and that the set $S_2$ spans $W_2$. Can $S_1 \cup S_2$ span $W_1 + W_2$? Must it?
  2. Assume that $S_1$ is a linearly independent subset of $W_1$ and that $S_2$ is a linearly independent subset of $W_2$. Can $S_1 \cup S_2$ be a linearly independent subset of $W_1 + W_2$? Must it?
Problem 16

When a vector space is decomposed as a direct sum, the dimensions of the subspaces add to the dimension of the space. The situation with a space that is given as the sum of its subspaces is not as simple. This exercise considers the two-subspace special case.

  1. For these subspaces of $\mathcal{M}_{2\times 2}$ find $W_1 \cap W_2$, $\dim(W_1 \cap W_2)$, $W_1 + W_2$, and $\dim(W_1 + W_2)$.
     
  2. Suppose that $U$ and $W$ are subspaces of a vector space. Suppose that the sequence $\langle \vec{\beta}_1, \ldots, \vec{\beta}_k \rangle$ is a basis for $U \cap W$. Finally, suppose that the prior sequence has been expanded to give a sequence $\langle \vec{\mu}_1, \ldots, \vec{\mu}_j, \vec{\beta}_1, \ldots, \vec{\beta}_k \rangle$ that is a basis for $U$, and a sequence $\langle \vec{\beta}_1, \ldots, \vec{\beta}_k, \vec{\omega}_1, \ldots, \vec{\omega}_p \rangle$ that is a basis for $W$. Prove that this sequence
     $$\langle \vec{\mu}_1, \ldots, \vec{\mu}_j, \vec{\beta}_1, \ldots, \vec{\beta}_k, \vec{\omega}_1, \ldots, \vec{\omega}_p \rangle$$
     is a basis for the sum $U + W$.
  3. Conclude that $\dim(U + W) = \dim(U) + \dim(W) - \dim(U \cap W)$.
  4. Let $W_1$ and $W_2$ be eight-dimensional subspaces of a ten-dimensional space. List all values possible for $\dim(W_1 \cap W_2)$.
Problem 17

Let $V = W_1 \oplus \cdots \oplus W_k$ and for each index $i$ suppose that $S_i$ is a linearly independent subset of $W_i$. Prove that the union of the $S_i$'s is linearly independent.

Problem 18

A matrix is symmetric if for each pair of indices $i$ and $j$, the $i, j$ entry equals the $j, i$ entry. A matrix is antisymmetric if each $i, j$ entry is the negative of the $j, i$ entry.

  1. Give a symmetric $2 \times 2$ matrix and an antisymmetric $2 \times 2$ matrix. (Remark. For the second one, be careful about the entries on the diagonal.)
  2. What is the relationship between a square symmetric matrix and its transpose? Between a square antisymmetric matrix and its transpose?
  3. Show that $\mathcal{M}_{n\times n}$ is the direct sum of the space of symmetric matrices and the space of antisymmetric matrices.
Problem 19

Let $W_1$, $W_2$, and $W_3$ be subspaces of a vector space. Prove that $(W_1 \cap W_2) + (W_1 \cap W_3) \subseteq W_1 \cap (W_2 + W_3)$. Does the inclusion reverse?

Problem 20

The example of the $x$-axis and the $y$-axis in $\mathbb{R}^2$ shows that $W_1 \oplus W_2 = V$ does not imply that $W_1 \cup W_2 = V$. Can $W_1 \oplus W_2 = V$ and $W_1 \cup W_2 = V$ happen?

This exercise is recommended for all readers.
Problem 21

Our model for complementary subspaces, the $x$-axis and the $y$-axis in $\mathbb{R}^2$, has one property not used here. Where $U$ is a subspace of $\mathbb{R}^n$ we define the orthogonal complement of $U$ to be

$$U^\perp = \{\vec{v} \in \mathbb{R}^n \mid \vec{v} \cdot \vec{u} = 0 \text{ for all } \vec{u} \in U\}$$

(read "$U$ perp").

  1. Find the orthocomplement of the $x$-axis in $\mathbb{R}^2$.
  2. Find the orthocomplement of the $x$-axis in $\mathbb{R}^3$.
  3. Find the orthocomplement of the $xy$-plane in $\mathbb{R}^3$.
  4. Show that the orthocomplement of a subspace is a subspace.
  5. Show that if $W$ is the orthocomplement of $U$ then $U$ is the orthocomplement of $W$.
  6. Prove that a subspace and its orthocomplement have a trivial intersection.
  7. Conclude that for any $n$ and subspace $U \subseteq \mathbb{R}^n$ we have that $\mathbb{R}^n = U \oplus U^\perp$.
  8. Show that $\dim(U) + \dim(U^\perp)$ equals the dimension of the enclosing space.
This exercise is recommended for all readers.
Problem 22

Consider Corollary 4.13. Does it work both ways; that is, supposing that $V = W_1 + \cdots + W_k$, is $V = W_1 \oplus \cdots \oplus W_k$ if and only if $\dim(V) = \dim(W_1) + \cdots + \dim(W_k)$?

Problem 23

We know that if $V = W_1 \oplus W_2$ then there is a basis for $V$ that splits into a basis for $W_1$ and a basis for $W_2$. Can we make the stronger statement that every basis for $V$ splits into a basis for $W_1$ and a basis for $W_2$?

Problem 24

We can ask about the algebra of the "$+$" operation.

  1. Is it commutative; is $W_1 + W_2 = W_2 + W_1$?
  2. Is it associative; is $(W_1 + W_2) + W_3 = W_1 + (W_2 + W_3)$?
  3. Let $W$ be a subspace of some vector space. Show that $W + W = W$.
  4. Must there be an identity element, a subspace $I$ such that $I + W = W + I = W$ for all subspaces $W$?
  5. Does left-cancellation hold: if $W_1 + W_2 = W_1 + W_3$ then $W_2 = W_3$? Right-cancellation?
Problem 25

Consider the algebraic properties of the direct sum operation.

  1. Does direct sum commute: does $V = W_1 \oplus W_2$ imply that $V = W_2 \oplus W_1$?
  2. Prove that direct sum is associative: $(W_1 \oplus W_2) \oplus W_3 = W_1 \oplus (W_2 \oplus W_3)$.
  3. Show that $\mathbb{R}^3$ is the direct sum of the three axes (the relevance here is that by the previous item, we needn't specify which two of the three axes are combined first).
  4. Does the direct sum operation left-cancel: does $W_1 \oplus W_2 = W_1 \oplus W_3$ imply $W_2 = W_3$? Does it right-cancel?
  5. There is an identity element with respect to this operation. Find it.
  6. Do some, or all, subspaces have inverses with respect to this operation: is there a subspace $W$ of some vector space such that there is a subspace $U$ with the property that $U \oplus W$ equals the identity element from the prior item?

Solutions

References

  • Halsey, William D. (1979), Macmillan Dictionary, Macmillan.