Linear Algebra/Vector Spaces and Linear Systems
We will now reconsider linear systems and Gauss' method, aided by the tools and terms of this chapter. We will make three points.
For the first point, recall the first chapter's Linear Combination Lemma and its corollary: if two matrices are related by row operations then each row of the second is a linear combination of the rows of the first. That is, Gauss' method works by taking linear combinations of rows. Therefore, the right setting in which to study row operations in general, and Gauss' method in particular, is the following vector space.
- Definition 3.1
The row space of a matrix is the span of the set of its rows. The row rank is the dimension of the row space, the number of linearly independent rows.
- Example 3.2
If $A$ is a $2 \times 2$ matrix whose second row is a scalar multiple of its first then the row space of $A$, the span of its two rows, is a subspace of the space of two-component row vectors.
The linear dependence of the second row on the first is obvious and so we can simplify this description to the span of the first row alone.
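For readers who want to experiment, here is a minimal computational sketch of this idea, assuming the SymPy library and a matrix whose second row is twice its first; the entries are our own illustration, not taken from the text.

```python
from sympy import Matrix

# A 2x2 matrix whose second row is twice its first (illustrative entries).
A = Matrix([[2, 3],
            [4, 6]])

# rowspace() returns a basis for the row space; rank() is the row rank.
print(A.rowspace())   # a single basis vector, since the rows are dependent
print(A.rank())       # 1
```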
- Lemma 3.3
If the matrices $A$ and $B$ are related by a row operation
$$A \xrightarrow{\;\rho_i \leftrightarrow \rho_j\;} B
\quad\text{or}\quad
A \xrightarrow{\;k\rho_i\;} B
\quad\text{or}\quad
A \xrightarrow{\;k\rho_i + \rho_j\;} B$$
(for $k \neq 0$ and $i \neq j$) then their row spaces are equal. Hence, row-equivalent matrices have the same row space, and hence also, the same row rank.
- Proof
By the Linear Combination Lemma's corollary, each row of $B$ is in the row space of $A$. Further, the row space of $B$ is contained in the row space of $A$ because a member of the row space of $B$ is a linear combination of the rows of $B$, which means it is a combination of a combination of the rows of $A$, and hence, by the Linear Combination Lemma, is also a member of the row space of $A$.
For the other containment, recall that row operations are reversible: $A$ reduces to $B$ by a row operation if and only if $B$ reduces to $A$ by a row operation. With that, the containment of the row space of $A$ in the row space of $B$ also follows from the prior paragraph, and so the two sets are equal.
Thus, row operations leave the row space unchanged. But of course, Gauss' method performs the row operations systematically, with a specific goal in mind, echelon form.
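As a quick check of this invariance, the following sketch (again assuming SymPy, with made-up entries) applies one row operation and compares reduced echelon forms; row-equivalent matrices share a reduced echelon form and hence a row space.

```python
from sympy import Matrix

# Illustrative matrix (entries are our own).
A = Matrix([[1, 3, 1],
            [2, 1, 4]])

# One row operation: add -2 times the first row to the second.
B = A.copy()
B[1, :] = B[1, :] - 2 * B[0, :]

# Row-equivalent matrices share a reduced echelon form, hence a row space.
print(A.rref()[0] == B.rref()[0])   # True
print(A.rank(), B.rank())           # equal row ranks
```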
- Lemma 3.4
The nonzero rows of an echelon form matrix make up a linearly independent set.
- Proof
A result in the first chapter, Lemma One.III.2.5, states that in an echelon form matrix, no nonzero row is a linear combination of the other rows. This is a restatement of that result into new terminology.
Thus, in the language of this chapter, Gaussian reduction works by eliminating linear dependencies among rows, leaving the span unchanged, until no nontrivial linear relationships remain (among the nonzero rows). That is, Gauss' method produces a basis for the row space.
- Example 3.5
From any matrix, we can produce a basis for the row space by performing Gauss' method and taking the nonzero rows of the resulting echelon form matrix. For instance, reducing a matrix to echelon form and collecting its nonzero rows produces such a basis. It is a basis for the row space of both the starting and ending matrices, since the two row spaces are equal.
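The procedure of this example can be sketched computationally as follows, assuming SymPy; the matrix entries are illustrative only.

```python
from sympy import Matrix

# Illustrative 3x3 matrix (entries are our own, not from the text).
M = Matrix([[1, 3, 1],
            [1, 4, 1],
            [2, 0, 5]])

# echelon_form() performs the forward pass of Gauss' method.
E = M.echelon_form()

# The nonzero rows of the echelon form are a basis for the row space.
basis = [E.row(i) for i in range(E.rows) if not E.row(i).is_zero_matrix]
print(basis)
print(len(basis) == M.rank())   # the number of basis vectors is the row rank
```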
Using this technique, we can also find bases for spans not directly involving row vectors.
- Definition 3.6
The column space of a matrix is the span of the set of its columns. The column rank is the dimension of the column space, the number of linearly independent columns.
Our interest in column spaces stems from our study of linear systems. For example, a linear system has a solution if and only if its vector of constants is a linear combination of the columns of its matrix of coefficients; writing the system as a single vector equation among the columns shows that a solution exists exactly when the vector of constants is in the column space of the matrix of coefficients.
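A small sketch of this membership test, assuming SymPy and an illustrative system: the vector of constants is in the column space exactly when augmenting the coefficient matrix with it does not raise the rank.

```python
from sympy import Matrix

# Illustrative coefficient matrix and vector of constants (our own entries).
A = Matrix([[1, 3],
            [2, 6],
            [0, 1]])
d = Matrix([1, 2, 5])

# d is in the column space of A exactly when the system is consistent,
# that is, when augmenting A with d does not raise the rank.
print(A.rank() == A.row_join(d).rank())
```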
- Example 3.7
Given a matrix, to get a basis for its column space, temporarily turn the columns into rows and reduce. Then turn the nonzero rows of the resulting echelon form matrix back into columns. The result is a basis for the column space of the given matrix.
- Definition 3.8
The transpose of a matrix is the result of interchanging the rows and columns of that matrix. That is, column $j$ of a matrix is row $j$ of its transpose, and vice versa.
So the instructions for the prior example are "transpose, reduce, and transpose back".
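Those instructions translate directly into a computational sketch, assuming SymPy; the matrix here is illustrative, not the one from the example.

```python
from sympy import Matrix

# Illustrative matrix (not the one from the example).
M = Matrix([[1, 2, 0],
            [3, 3, 1],
            [7, 8, 2]])

# Transpose, reduce, and transpose back: the nonzero rows of the reduced
# transpose, turned back into columns, are a basis for the column space.
R, _ = M.T.rref()
basis = [R.row(i).T for i in range(R.rows) if not R.row(i).is_zero_matrix]
print(basis)
print(len(basis) == M.rank())
```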
We can even, at the price of tolerating the as-yet-vague idea of vector spaces being "the same", use Gauss' method to find bases for spans in other types of vector spaces.
- Example 3.9
To get a basis for the span of a set of polynomials in the space $\mathcal{P}_n$, think of the polynomials as "the same" as the row vectors of their coefficients, apply Gauss' method to those rows, and translate the nonzero rows of the resulting echelon form back into polynomials to get the basis. (As mentioned earlier, we will make the phrase "the same" precise at the start of the next chapter.)
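Here is a sketch of the same translation done by machine, assuming SymPy; the three polynomials are our own illustrative choices, not those of the example.

```python
from sympy import Matrix, symbols

x = symbols('x')

# Illustrative polynomials (our own choices); the third is the sum of the
# first two, so the span has dimension two.
polys = [1 + x + x**2, 2 + 2*x, 3 + 3*x + x**2]

# Treat each polynomial as "the same" as the row vector of its coefficients.
rows = Matrix([[p.coeff(x, k) for k in range(3)] for p in polys])

# Apply Gauss' method, then translate the nonzero rows back to polynomials.
R, _ = rows.rref()
basis = [sum(R[i, k] * x**k for k in range(3))
         for i in range(R.rows) if not R.row(i).is_zero_matrix]
print(basis)
```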
Thus, our first point in this subsection is that the tools of this chapter give us a more conceptual understanding of Gaussian reduction.
For the second point of this subsection, consider the effect of row reduction on the column space; take, say, a matrix whose second row is nonzero and a row operation that replaces that second row with a row of zeroes. The column space of the starting matrix contains vectors with a nonzero second component. But the column space of the resulting matrix is different because it contains only vectors whose second component is zero. It is the fact that row operations can change the column space that makes the next result surprising.
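The following sketch (SymPy assumed, entries illustrative) shows the phenomenon: one row operation changes the column space while leaving the column rank alone, as the next lemma asserts.

```python
from sympy import Matrix

# Illustrative matrix with a nonzero second row (our own entries).
A = Matrix([[1, 2],
            [2, 4]])

# One row operation that turns the second row into a zero row.
B = A.copy()
B[1, :] = B[1, :] - 2 * B[0, :]

# The column spaces differ, but the column ranks agree.
print(A.columnspace())        # basis vectors with nonzero second component
print(B.columnspace())        # basis vectors with second component zero
print(A.rank() == B.rank())   # True
```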
- Lemma 3.10
Row operations do not change the column rank.
- Proof
Restated, if $A$ reduces to $B$ then the column rank of $B$ equals the column rank of $A$.
We will be done if we can show that row operations do not affect linear relationships among columns (e.g., if the fifth column is twice the second plus the fourth before a row operation then that relationship still holds afterwards), because the column rank is just the size of the largest set of unrelated columns. But this is exactly the first theorem of this book: in a relationship among the columns

$$c_1\begin{pmatrix}a_{1,1}\\ \vdots\\ a_{m,1}\end{pmatrix} + \cdots + c_n\begin{pmatrix}a_{1,n}\\ \vdots\\ a_{m,n}\end{pmatrix} = \begin{pmatrix}0\\ \vdots\\ 0\end{pmatrix}$$

row operations leave unchanged the set of solutions $(c_1, \ldots, c_n)$.
Another way, besides the prior result, to state that Gauss' method has something to say about the column space as well as about the row space is to consider again Gauss-Jordan reduction, which ends with the reduced echelon form of a matrix.
Consider the row space and the column space of such a result. Our first point made above says that a basis for the row space is easy to get: simply collect together all of the rows with leading entries. Because this is a reduced echelon form matrix, a basis for the column space is just as easy: take the columns containing the leading entries. (Those columns are vectors from a standard basis, so their linear independence is obvious, and the other columns are in their span because each has zeros in every row without a leading entry.) Thus, for a reduced echelon form matrix, bases for the row and column spaces can be found in essentially the same way: by taking the parts of the matrix, the rows or columns, containing the leading entries.
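A computational sketch of this observation, assuming SymPy and an illustrative matrix: the rref routine reports both the reduced matrix and the indices of the columns with leading entries.

```python
from sympy import Matrix

# Illustrative matrix (our own entries).
M = Matrix([[1, 3, 1, 6],
            [2, 6, 3, 16],
            [1, 3, 1, 6]])

R, pivots = M.rref()

# Basis for the row space of R: its nonzero rows.
row_basis = [R.row(i) for i in range(R.rows) if not R.row(i).is_zero_matrix]

# Basis for the column space of R: the columns holding leading entries,
# which are standard basis vectors.
col_basis = [R.col(j) for j in pivots]

print(R)
print(row_basis)
print(col_basis)
```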
- Theorem 3.11
The row rank and column rank of a matrix are equal.
- Proof
First bring the matrix to reduced echelon form. At that point, the row rank equals the number of leading entries since that equals the number of nonzero rows. Also at that point, the number of leading entries equals the column rank because the set of columns containing leading entries consists of some of the $\vec{e}_i$'s from a standard basis, and that set is linearly independent and spans the column space. Hence, in the reduced echelon form matrix, the row rank equals the column rank, because each equals the number of leading entries.
But Lemma 3.3 and Lemma 3.10 show that the row rank and column rank are not changed by using row operations to get to reduced echelon form. Thus the row rank and the column rank of the original matrix are also equal.
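A quick numerical spot check of the theorem, assuming SymPy; the matrix is an arbitrary illustration.

```python
from sympy import Matrix

# An arbitrary rectangular matrix (illustrative entries only).
A = Matrix([[1, 0, 2, 4],
            [2, 1, 1, 0],
            [3, 1, 3, 4]])

# The row rank of A is the column rank of its transpose and vice versa,
# so the theorem predicts equal ranks for A and its transpose.
print(A.rank(), A.T.rank())   # equal
```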
- Definition 3.12
The rank of a matrix is its row rank or column rank.
So our second point in this subsection is that the column space and row space of a matrix have the same dimension. Our third and final point is that the concepts that we've seen arising naturally in the study of vector spaces are exactly the ones that we have studied with linear systems.
- Theorem 3.13
For linear systems with $n$ unknowns and with matrix of coefficients $A$, the statements
- the rank of $A$ is $r$
- the space of solutions of the associated homogeneous system has dimension $n - r$
are equivalent.
So if the system has at least one particular solution then for the set of solutions, the number of parameters equals $n - r$, the number of variables minus the rank of the matrix of coefficients.
- Proof
The rank of $A$ is $r$ if and only if Gaussian reduction on $A$ ends with $r$ nonzero rows. That's true if and only if echelon form matrices row equivalent to $A$ have $r$-many leading variables. That in turn holds if and only if there are $n - r$ free variables.
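The theorem can be spot-checked as follows, assuming SymPy and an illustrative coefficient matrix; nullspace returns a basis for the solution space of the homogeneous system.

```python
from sympy import Matrix

# Illustrative coefficient matrix with n = 4 unknowns (our own entries).
A = Matrix([[1, 2, 0, 3],
            [0, 0, 1, 4],
            [1, 2, 1, 7]])

n = A.cols
r = A.rank()

# nullspace() returns a basis for the solution space of the homogeneous
# system, so its length should be n - r.
print(len(A.nullspace()) == n - r)   # True
```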
- Remark 3.14
- (Munkres 1964)
Sometimes that result is mistakenly remembered to say that the general solution of a system of $m$ equations in $n$ unknowns uses $n - m$ parameters. The number of equations is not the relevant figure; rather, what matters is the number of independent equations (the number of equations in a maximal independent set). Where there are $r$ independent equations, the general solution involves $n - r$ parameters.
- Corollary 3.15
Where the matrix $A$ is $n \times n$, the statements
- the rank of $A$ is $n$
- $A$ is nonsingular
- the rows of $A$ form a linearly independent set
- the columns of $A$ form a linearly independent set
- any linear system whose matrix of coefficients is $A$ has one and only one solution
are equivalent.
- Proof
Clearly the first four statements are equivalent. The last statement is equivalent to these because a set of $n$ column vectors with $n$ components apiece is linearly independent if and only if it is a basis for $\mathbb{R}^n$, but the system

$$c_1\begin{pmatrix}a_{1,1}\\ \vdots\\ a_{n,1}\end{pmatrix} + \cdots + c_n\begin{pmatrix}a_{1,n}\\ \vdots\\ a_{n,n}\end{pmatrix} = \begin{pmatrix}d_1\\ \vdots\\ d_n\end{pmatrix}$$

has a unique solution for all choices of $d_1, \ldots, d_n \in \mathbb{R}$ if and only if the vectors of $a$'s (the columns of $A$) form a basis.
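Finally, a sketch illustrating the corollary on an assumed full-rank $3 \times 3$ matrix, using SymPy's linsolve to exhibit the unique solution; the entries and the right-hand side are our own.

```python
from sympy import Matrix, linsolve, symbols

# An assumed 3x3 matrix of rank 3 and an arbitrary right-hand side.
A = Matrix([[2, 1, 0],
            [0, 1, 1],
            [1, 0, 1]])
d = Matrix([1, 2, 3])

print(A.rank() == A.rows)   # full rank, so A is nonsingular

# A linear system with this matrix of coefficients has one and only one solution.
x, y, z = symbols('x y z')
print(linsolve((A, d), x, y, z))   # a set containing a single triple
```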
Exercises
- Problem 1
Transpose each.
- This exercise is recommended for all readers.
- Problem 2
Decide if the vector is in the row space of the matrix.
- ,
- ,
- This exercise is recommended for all readers.
- Problem 3
Decide if the vector is in the column space.
- ,
- ,
- This exercise is recommended for all readers.
- Problem 4
Find a basis for the row space of this matrix.
- This exercise is recommended for all readers.
- Problem 5
Find the rank of each matrix.
- This exercise is recommended for all readers.
- Problem 6
Find a basis for the span of each set.
- Problem 7
Which matrices have rank zero? Rank one?
- This exercise is recommended for all readers.
- Problem 8
Given , what choice of will cause this matrix to have rank one?
- Problem 9
Find the column rank of this matrix.
- Problem 10
Show that a linear system with at least one solution has at most one solution if and only if the matrix of coefficients has rank equal to the number of its columns.
- This exercise is recommended for all readers.
- Problem 11
If a matrix is , which set must be dependent, its set of rows or its set of columns?
- Problem 12
Give an example to show that, despite having the same dimension, the row space and column space of a matrix need not be equal. Are they ever equal?
- Problem 13
Show that the set does not have the same span as . What, by the way, is the vector space?
- This exercise is recommended for all readers.
- Problem 14
Show that this set of column vectors
is a subspace of . Find a basis.
- Problem 15
Show that the transpose operation is linear:
$$(rA + sB)^{\mathsf{T}} = rA^{\mathsf{T}} + sB^{\mathsf{T}}$$
for $r, s \in \mathbb{R}$ and $A, B \in \mathcal{M}_{m \times n}$.
- This exercise is recommended for all readers.
- Problem 16
In this subsection we have shown that Gaussian reduction finds a basis for the row space.
- Show that this basis is not unique— different reductions may yield different bases.
- Produce matrices with equal row spaces but unequal numbers of rows.
- Prove that two matrices have equal row spaces if and only if after Gauss-Jordan reduction they have the same nonzero rows.
- Problem 17
Why is there not a problem with Remark 3.14 in the case that is bigger than ?
- Problem 18
Show that the row rank of an $m \times n$ matrix is at most $m$. Is there a better bound?
- This exercise is recommended for all readers.
- Problem 19
Show that the rank of a matrix equals the rank of its transpose.
- Problem 20
True or false: the column space of a matrix equals the row space of its transpose.
- This exercise is recommended for all readers.
- Problem 21
We have seen that a row operation may change the column space. Must it?
- Problem 22
Prove that a linear system has a solution if and only if that system's matrix of coefficients has the same rank as its augmented matrix.
- Problem 23
An $m \times n$ matrix has full row rank if its row rank is $m$, and it has full column rank if its column rank is $n$.
- Show that a matrix can have both full row rank and full column rank only if it is square.
- Prove that the linear system with matrix of coefficients $A$ has a solution for any $d_1$, ..., $d_m$ on the right side if and only if $A$ has full row rank.
- Prove that a homogeneous system has a unique solution if and only if its matrix of coefficients has full column rank.
- Prove that the statement "if a system with matrix of coefficients $A$ has any solution then it has a unique solution" holds if and only if $A$ has full column rank.
- Problem 24
How would the conclusion of Lemma 3.3 change if Gauss' method is changed to allow multiplying a row by zero?
- This exercise is recommended for all readers.
- Problem 25
What is the relationship between $\operatorname{rank}(A)$ and $\operatorname{rank}(-A)$? Between $\operatorname{rank}(A)$ and $\operatorname{rank}(kA)$? What, if any, is the relationship between $\operatorname{rank}(A)$, $\operatorname{rank}(B)$, and $\operatorname{rank}(A+B)$?
References
- Munkres, James R. (1964), Elementary Linear Algebra, Addison-Wesley.