Linear Algebra/Vector Spaces and Linear Systems/Solutions

Solutions

Problem 1

Transpose each.

1. ${\begin{pmatrix}2&1\\3&1\end{pmatrix}}$
2. ${\begin{pmatrix}2&1\\1&3\end{pmatrix}}$
3. ${\begin{pmatrix}1&4&3\\6&7&8\end{pmatrix}}$
4. ${\begin{pmatrix}0\\0\\0\end{pmatrix}}$
5. ${\begin{pmatrix}-1&-2\end{pmatrix}}$
1. ${\begin{pmatrix}2&3\\1&1\end{pmatrix}}$
2. ${\begin{pmatrix}2&1\\1&3\end{pmatrix}}$
3. ${\begin{pmatrix}1&6\\4&7\\3&8\end{pmatrix}}$
4. ${\begin{pmatrix}0&0&0\end{pmatrix}}$
5. ${\begin{pmatrix}-1\\-2\end{pmatrix}}$
This exercise is recommended for all readers.
Problem 2

Decide if the vector is in the row space of the matrix.

1. ${\begin{pmatrix}2&1\\3&1\end{pmatrix}}$ , ${\begin{pmatrix}1&0\end{pmatrix}}$
2. ${\begin{pmatrix}0&1&3\\-1&0&1\\-1&2&7\end{pmatrix}}$ , ${\begin{pmatrix}1&1&1\end{pmatrix}}$
1. Yes. To see if there are $c_{1}$  and $c_{2}$  such that $c_{1}\cdot {\begin{pmatrix}2&1\end{pmatrix}}+c_{2}\cdot {\begin{pmatrix}3&1\end{pmatrix}}={\begin{pmatrix}1&0\end{pmatrix}}$  we solve
${\begin{array}{*{2}{rc}r}2c_{1}&+&3c_{2}&=&1\\c_{1}&+&c_{2}&=&0\end{array}}$
and get $c_{1}=-1$  and $c_{2}=1$ . Thus the vector is in the row space.
2. No. The equation $c_{1}{\begin{pmatrix}0&1&3\end{pmatrix}}+c_{2}{\begin{pmatrix}-1&0&1\end{pmatrix}}+c_{3}{\begin{pmatrix}-1&2&7\end{pmatrix}}={\begin{pmatrix}1&1&1\end{pmatrix}}$  has no solution.
$\left({\begin{array}{*{3}{c}|c}0&-1&-1&1\\1&0&2&1\\3&1&7&1\end{array}}\right){\xrightarrow[{}]{\rho _{1}\leftrightarrow \rho _{2}}}\;{\xrightarrow[{}]{-3\rho _{1}+\rho _{3}}}\;{\xrightarrow[{}]{\rho _{2}+\rho _{3}}}\left({\begin{array}{*{3}{c}|c}1&0&2&1\\0&-1&-1&1\\0&0&0&-1\end{array}}\right)$
Thus, the vector is not in the row space.
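As an informal numerical cross-check (not part of the text's argument), NumPy's `matrix_rank` can test row-space membership: a vector lies in the row space exactly when appending it as an extra row leaves the rank unchanged. The helper name here is our own.

```python
import numpy as np

def in_row_space(A, v):
    """Hypothetical helper: v lies in the row space of A exactly when
    appending it as an extra row leaves the rank unchanged."""
    A = np.asarray(A, dtype=float)
    stacked = np.vstack([A, np.asarray(v, dtype=float)])
    return np.linalg.matrix_rank(stacked) == np.linalg.matrix_rank(A)

part1 = in_row_space([[2, 1], [3, 1]], [1, 0])                        # part 1: in the row space
part2 = in_row_space([[0, 1, 3], [-1, 0, 1], [-1, 2, 7]], [1, 1, 1])  # part 2: not in it
```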
This exercise is recommended for all readers.
Problem 3

Decide if the vector is in the column space.

1. ${\begin{pmatrix}1&1\\1&1\end{pmatrix}}$ , ${\begin{pmatrix}1\\3\end{pmatrix}}$
2. ${\begin{pmatrix}1&3&1\\2&0&4\\1&-3&-3\end{pmatrix}}$ , ${\begin{pmatrix}1\\0\\0\end{pmatrix}}$
1. No. To see if there are $c_{1},c_{2}\in \mathbb {R}$  such that
$c_{1}{\begin{pmatrix}1\\1\end{pmatrix}}+c_{2}{\begin{pmatrix}1\\1\end{pmatrix}}={\begin{pmatrix}1\\3\end{pmatrix}}$
we can use Gauss' method on the resulting linear system.
${\begin{array}{*{2}{rc}r}c_{1}&+&c_{2}&=&1\\c_{1}&+&c_{2}&=&3\end{array}}{\xrightarrow[{}]{-\rho _{1}+\rho _{2}}}\;{\begin{array}{*{2}{rc}r}c_{1}&+&c_{2}&=&1\\&&0&=&2\end{array}}$
There is no solution and so the vector is not in the column space.
2. Yes. From this relationship
$c_{1}{\begin{pmatrix}1\\2\\1\end{pmatrix}}+c_{2}{\begin{pmatrix}3\\0\\-3\end{pmatrix}}+c_{3}{\begin{pmatrix}1\\4\\3\end{pmatrix}}={\begin{pmatrix}1\\0\\0\end{pmatrix}}$
we get a linear system that, when Gauss' method is applied,
$\left({\begin{array}{*{3}{c}|c}1&3&1&1\\2&0&4&0\\1&-3&-3&0\end{array}}\right){\xrightarrow[{-\rho _{1}+\rho _{3}}]{-2\rho _{1}+\rho _{2}}}\;{\xrightarrow[{}]{-\rho _{2}+\rho _{3}}}\left({\begin{array}{*{3}{c}|c}1&3&1&1\\0&-6&2&-2\\0&0&-6&1\end{array}}\right)$
yields a solution. Thus, the vector is in the column space.
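The same rank trick works for the column space (again a sketch for checking, not a substitute for the reductions above): adjoining the vector as an extra column raises the rank exactly when the vector lies outside the column space. The helper name is our own.

```python
import numpy as np

def in_column_space(A, v):
    """Hypothetical helper: v lies in the column space of A exactly when
    adjoining it as an extra column does not raise the rank."""
    A = np.asarray(A, dtype=float)
    col = np.asarray(v, dtype=float).reshape(-1, 1)
    return np.linalg.matrix_rank(np.hstack([A, col])) == np.linalg.matrix_rank(A)

part1 = in_column_space([[1, 1], [1, 1]], [1, 3])                        # part 1: not in it
part2 = in_column_space([[1, 3, 1], [2, 0, 4], [1, -3, -3]], [1, 0, 0])  # part 2: in it
```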
This exercise is recommended for all readers.
Problem 4

Find a basis for the row space of this matrix.

${\begin{pmatrix}2&0&3&4\\0&1&1&-1\\3&1&0&2\\1&0&-4&-1\end{pmatrix}}$

A routine Gaussian reduction

${\begin{pmatrix}2&0&3&4\\0&1&1&-1\\3&1&0&2\\1&0&-4&-1\end{pmatrix}}{\xrightarrow[{-(1/2)\rho _{1}+\rho _{4}}]{-(3/2)\rho _{1}+\rho _{3}}}\;{\xrightarrow[{}]{-\rho _{2}+\rho _{3}}}\;{\xrightarrow[{}]{-\rho _{3}+\rho _{4}}}{\begin{pmatrix}2&0&3&4\\0&1&1&-1\\0&0&-11/2&-3\\0&0&0&0\end{pmatrix}}$

suggests this basis $\langle {\begin{pmatrix}2&0&3&4\end{pmatrix}},{\begin{pmatrix}0&1&1&-1\end{pmatrix}},{\begin{pmatrix}0&0&-11/2&-3\end{pmatrix}}\rangle$ .

Another, perhaps more convenient, procedure is to swap rows first,

${\xrightarrow[{}]{\rho _{1}\leftrightarrow \rho _{4}}}\;{\xrightarrow[{-2\rho _{1}+\rho _{4}}]{-3\rho _{1}+\rho _{3}}}\;{\xrightarrow[{}]{-\rho _{2}+\rho _{3}}}\;{\xrightarrow[{}]{-\rho _{3}+\rho _{4}}}{\begin{pmatrix}1&0&-4&-1\\0&1&1&-1\\0&0&11&6\\0&0&0&0\end{pmatrix}}$

leading to the basis $\langle {\begin{pmatrix}1&0&-4&-1\end{pmatrix}},{\begin{pmatrix}0&1&1&-1\end{pmatrix}},{\begin{pmatrix}0&0&11&6\end{pmatrix}}\rangle$ .
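Both candidate bases can be verified numerically (an illustration, using the matrix from the problem statement): each set is independent when its rank is three, and it spans the row space when stacking it under the original matrix leaves the rank at three.

```python
import numpy as np

rank = np.linalg.matrix_rank
A = np.array([[2, 0, 3, 4],
              [0, 1, 1, -1],
              [3, 1, 0, 2],
              [1, 0, -4, -1]], dtype=float)
basis1 = np.array([[2, 0, 3, 4], [0, 1, 1, -1], [0, 0, -11/2, -3]], dtype=float)
basis2 = np.array([[1, 0, -4, -1], [0, 1, 1, -1], [0, 0, 11, 6]], dtype=float)

# Each candidate is independent (rank 3), and stacking it under A
# adds nothing to the row space (the rank stays at 3).
ok1 = rank(basis1) == 3 and rank(np.vstack([A, basis1])) == rank(A) == 3
ok2 = rank(basis2) == 3 and rank(np.vstack([A, basis2])) == rank(A) == 3
```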

This exercise is recommended for all readers.
Problem 5

Find the rank of each matrix.

1. ${\begin{pmatrix}2&1&3\\1&-1&2\\1&0&3\end{pmatrix}}$
2. ${\begin{pmatrix}1&-1&2\\3&-3&6\\-2&2&-4\end{pmatrix}}$
3. ${\begin{pmatrix}1&3&2\\5&1&1\\6&4&3\end{pmatrix}}$
4. ${\begin{pmatrix}0&0&0\\0&0&0\\0&0&0\end{pmatrix}}$
1. This reduction
${\xrightarrow[{-(1/2)\rho _{1}+\rho _{3}}]{-(1/2)\rho _{1}+\rho _{2}}}\;{\xrightarrow[{}]{-(1/3)\rho _{2}+\rho _{3}}}{\begin{pmatrix}2&1&3\\0&-3/2&1/2\\0&0&4/3\end{pmatrix}}$
shows that the row rank, and hence the rank, is three.
2. Inspection of the columns shows that the others are multiples of the first (inspection of the rows shows the same thing). Thus the rank is one. Alternatively, the reduction
${\begin{pmatrix}1&-1&2\\3&-3&6\\-2&2&-4\end{pmatrix}}{\xrightarrow[{2\rho _{1}+\rho _{3}}]{-3\rho _{1}+\rho _{2}}}{\begin{pmatrix}1&-1&2\\0&0&0\\0&0&0\end{pmatrix}}$
shows the same thing.
3. This calculation
${\begin{pmatrix}1&3&2\\5&1&1\\6&4&3\end{pmatrix}}{\xrightarrow[{-6\rho _{1}+\rho _{3}}]{-5\rho _{1}+\rho _{2}}}\;{\xrightarrow[{}]{-\rho _{2}+\rho _{3}}}{\begin{pmatrix}1&3&2\\0&-14&-9\\0&0&0\end{pmatrix}}$
shows that the rank is two.
4. The rank is zero.
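A quick numerical confirmation of the four ranks (an informal spot-check, not a replacement for the reductions above):

```python
import numpy as np

rank = np.linalg.matrix_rank
ranks = [
    int(rank(np.array([[2, 1, 3], [1, -1, 2], [1, 0, 3]], dtype=float))),
    int(rank(np.array([[1, -1, 2], [3, -3, 6], [-2, 2, -4]], dtype=float))),
    int(rank(np.array([[1, 3, 2], [5, 1, 1], [6, 4, 3]], dtype=float))),
    int(rank(np.zeros((3, 3)))),
]
# ranks should come out as [3, 1, 2, 0], matching parts 1 through 4.
```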
This exercise is recommended for all readers.
Problem 6

Find a basis for the span of each set.

1. $\{{\begin{pmatrix}1&3\end{pmatrix}},{\begin{pmatrix}-1&3\end{pmatrix}},{\begin{pmatrix}1&4\end{pmatrix}},{\begin{pmatrix}2&1\end{pmatrix}}\}\subseteq {\mathcal {M}}_{1\!\times \!2}$
2. $\{{\begin{pmatrix}1\\2\\1\end{pmatrix}},{\begin{pmatrix}3\\1\\-1\end{pmatrix}},{\begin{pmatrix}1\\-3\\-3\end{pmatrix}}\}\subseteq \mathbb {R} ^{3}$
3. $\{1+x,1-x^{2},3+2x-x^{2}\}\subseteq {\mathcal {P}}_{3}$
4. $\{{\begin{pmatrix}1&0&1\\3&1&-1\end{pmatrix}},{\begin{pmatrix}1&0&3\\2&1&4\end{pmatrix}},{\begin{pmatrix}-1&0&-5\\-1&-1&-9\end{pmatrix}}\}\subseteq {\mathcal {M}}_{2\!\times \!3}$
1. This reduction
${\begin{pmatrix}1&3\\-1&3\\1&4\\2&1\end{pmatrix}}{\xrightarrow[{\begin{array}{c}\\[-19pt]-\rho _{1}+\rho _{3}\\[-5pt]-2\rho _{1}+\rho _{4}\end{array}}]{\rho _{1}+\rho _{2}}}{\xrightarrow[{(5/6)\rho _{2}+\rho _{4}}]{-(1/6)\rho _{2}+\rho _{3}}}{\begin{pmatrix}1&3\\0&6\\0&0\\0&0\end{pmatrix}}$
gives $\langle {\begin{pmatrix}1&3\end{pmatrix}},{\begin{pmatrix}0&6\end{pmatrix}}\rangle$ .
2. Transposing and reducing
${\begin{pmatrix}1&2&1\\3&1&-1\\1&-3&-3\end{pmatrix}}{\xrightarrow[{-\rho _{1}+\rho _{3}}]{-3\rho _{1}+\rho _{2}}}{\begin{pmatrix}1&2&1\\0&-5&-4\\0&-5&-4\end{pmatrix}}{\xrightarrow[{}]{-\rho _{2}+\rho _{3}}}{\begin{pmatrix}1&2&1\\0&-5&-4\\0&0&0\end{pmatrix}}$
and then transposing back gives this basis.
$\langle {\begin{pmatrix}1\\2\\1\end{pmatrix}},{\begin{pmatrix}0\\-5\\-4\end{pmatrix}}\rangle$
3. Notice first that the surrounding space is given as ${\mathcal {P}}_{3}$ , not ${\mathcal {P}}_{2}$ . Then, taking the first polynomial $1+1\cdot x+0\cdot x^{2}+0\cdot x^{3}$  to be "the same" as the row vector ${\begin{pmatrix}1&1&0&0\end{pmatrix}}$ , etc., leads to
${\begin{pmatrix}1&1&0&0\\1&0&-1&0\\3&2&-1&0\end{pmatrix}}{\xrightarrow[{-3\rho _{1}+\rho _{3}}]{-\rho _{1}+\rho _{2}}}\;{\xrightarrow[{}]{-\rho _{2}+\rho _{3}}}{\begin{pmatrix}1&1&0&0\\0&-1&-1&0\\0&0&0&0\end{pmatrix}}$
which yields the basis $\langle 1+x,-x-x^{2}\rangle$ .
4. Here "the same" gives
${\begin{pmatrix}1&0&1&3&1&-1\\1&0&3&2&1&4\\-1&0&-5&-1&-1&-9\end{pmatrix}}{\xrightarrow[{\rho _{1}+\rho _{3}}]{-\rho _{1}+\rho _{2}}}\;{\xrightarrow[{}]{2\rho _{2}+\rho _{3}}}{\begin{pmatrix}1&0&1&3&1&-1\\0&0&2&-1&0&5\\0&0&0&0&0&0\end{pmatrix}}$
$\langle {\begin{pmatrix}1&0&1\\3&1&-1\end{pmatrix}},{\begin{pmatrix}0&0&2\\-1&0&5\end{pmatrix}}\rangle$
Problem 7

Which matrices have rank zero? Rank one?

Only the zero matrices have rank zero. The matrices of rank one are exactly those of the form

${\begin{pmatrix}k_{1}\cdot \rho \\\vdots \\k_{m}\cdot \rho \end{pmatrix}}$

where $\rho$  is some nonzero row vector, and not all of the $k_{i}$ 's are zero. (Remark. We can't simply say that all of the rows are multiples of the first because the first row might be the zero row. Another Remark. The above also applies with "column" replacing "row".)

This exercise is recommended for all readers.
Problem 8

Given $a,b,c\in \mathbb {R}$ , what choice of $d$  will cause this matrix to have the rank of one?

${\begin{pmatrix}a&b\\c&d\end{pmatrix}}$

If $a\neq 0$  then a choice of $d=(c/a)b$  will make the second row be a multiple of the first, specifically, $c/a$  times the first. If $a=0$  and $b=0$  then any non-$0$  choice for $d$  will ensure that the second row is nonzero. If $a=0$  and $b\neq 0$  and $c=0$  then any choice for $d$  will do, since the matrix will automatically have rank one (even with the choice of $d=0$ ). Finally, if $a=0$  and $b\neq 0$  and $c\neq 0$  then no choice for $d$  will suffice because the matrix is sure to have rank two.
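Two of these cases can be spot-checked numerically (sample values chosen for illustration; any values fitting each case would do):

```python
import numpy as np

rank = np.linalg.matrix_rank

# Case a != 0: the choice d = (c/a)*b makes the second row a multiple
# of the first, so the rank is one.
a, b, c = 2.0, 3.0, 4.0
case1 = int(rank(np.array([[a, b], [c, (c / a) * b]])))

# Case a = 0, b != 0, c != 0: the rank is two no matter what d is.
case2 = {int(rank(np.array([[0.0, 5.0], [7.0, d]]))) for d in (-1.0, 0.0, 2.0)}
```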

Problem 9

Find the column rank of this matrix.

${\begin{pmatrix}1&3&-1&5&0&4\\2&0&1&0&4&1\end{pmatrix}}$

The column rank is two. One way to see this is by inspection: the column space consists of two-tall columns, so it has dimension at most two, and we can easily find two columns that together form a linearly independent set (the fourth and fifth columns, for instance). Another way to see this is to recall that the column rank equals the row rank, and to perform Gauss' method, which leaves two nonzero rows.
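Since the column rank equals the rank, a one-line numerical check (an illustration only) confirms the answer:

```python
import numpy as np

M = np.array([[1, 3, -1, 5, 0, 4],
              [2, 0, 1, 0, 4, 1]], dtype=float)
col_rank = int(np.linalg.matrix_rank(M))  # the column rank equals the rank
```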

Problem 10

Show that a linear system with at least one solution has at most one solution if and only if the matrix of coefficients has rank equal to the number of its columns.

We apply Theorem 3.13. The number of columns of a matrix of coefficients $A$  of a linear system equals the number $n$  of unknowns. A linear system with at least one solution has at most one solution if and only if the space of solutions of the associated homogeneous system has dimension zero (recall: in the "${\text{General}}={\text{Particular}}+{\text{Homogeneous}}$ " equation ${\vec {v}}={\vec {p}}+{\vec {h}}$ , provided that such a ${\vec {p}}$  exists, the solution ${\vec {v}}$  is unique if and only if the vector ${\vec {h}}$  is unique, namely ${\vec {h}}={\vec {0}}$ ). But that means, by the theorem, that $n=r$ .

This exercise is recommended for all readers.
Problem 11

If a matrix is $5\!\times \!9$ , which set must be dependent, its set of rows or its set of columns?

The set of columns must be dependent because the rank of the matrix is at most five while there are nine columns.

Problem 12

Give an example to show that, despite having the same dimension, the row space and column space of a matrix need not be equal. Are they ever equal?

There is little danger of their being equal since the row space is a set of row vectors while the column space is a set of columns (unless the matrix is $1\!\times \!1$ , in which case the two spaces must be equal).

Remark. Consider

$A={\begin{pmatrix}1&3\\2&6\end{pmatrix}}$

and note that the row space is the set of all multiples of ${\begin{pmatrix}1&3\end{pmatrix}}$  while the column space consists of multiples of

${\begin{pmatrix}1\\2\end{pmatrix}}$

so we also cannot argue that the two spaces must be simply transposes of each other.

Problem 13

Show that the set $\{(1,-1,2,-3),(1,1,2,0),(3,-1,6,-6)\}$  does not have the same span as $\{(1,0,1,0),(0,2,0,3)\}$ . What, by the way, is the vector space?

First, the vector space is the set of four-tuples of real numbers, under the natural operations. Although this is not the set of four-wide row vectors, the difference is slight— it is "the same" as that set. So we will treat the four-tuples like four-wide vectors.

With that, one way to see that $(1,0,1,0)$  is not in the span of the first set is to note that this reduction

${\begin{pmatrix}1&-1&2&-3\\1&1&2&0\\3&-1&6&-6\end{pmatrix}}{\xrightarrow[{-3\rho _{1}+\rho _{3}}]{-\rho _{1}+\rho _{2}}}\;{\xrightarrow[{}]{-\rho _{2}+\rho _{3}}}{\begin{pmatrix}1&-1&2&-3\\0&2&0&3\\0&0&0&0\end{pmatrix}}$

and this one

${\begin{pmatrix}1&-1&2&-3\\1&1&2&0\\3&-1&6&-6\\1&0&1&0\end{pmatrix}}{\xrightarrow[{\begin{array}{c}\\[-19pt]-3\rho _{1}+\rho _{3}\\[-5pt]-\rho _{1}+\rho _{4}\end{array}}]{-\rho _{1}+\rho _{2}}}\;{\xrightarrow[{-(1/2)\rho _{2}+\rho _{4}}]{-\rho _{2}+\rho _{3}}}\;{\xrightarrow[{}]{\rho _{3}\leftrightarrow \rho _{4}}}{\begin{pmatrix}1&-1&2&-3\\0&2&0&3\\0&0&-1&3/2\\0&0&0&0\end{pmatrix}}$

yield matrices differing in rank. This means that adjoining $(1,0,1,0)$ to the first three four-tuples increases the rank, and hence enlarges the span, of that set. Therefore $(1,0,1,0)$ is not in the span of the first set, although it clearly is in the span of the second, so the two spans are unequal.
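The rank comparison can be checked numerically (a sketch for verification, using the vectors from the problem):

```python
import numpy as np

rank = np.linalg.matrix_rank
S = np.array([[1, -1, 2, -3], [1, 1, 2, 0], [3, -1, 6, -6]], dtype=float)
v = np.array([1.0, 0.0, 1.0, 0.0])  # first vector of the second set

# Adjoining v raises the rank from 2 to 3, so v is outside the span of S.
grew = rank(np.vstack([S, v])) > rank(S)
```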

This exercise is recommended for all readers.
Problem 14

Show that this set of column vectors

$\left\{{\begin{pmatrix}d_{1}\\d_{2}\\d_{3}\end{pmatrix}}\,{\big |}\,{\text{there are }}x,y,{\text{ and }}z{\text{ such that }}{\begin{array}{*{3}{rc}r}3x&+&2y&+&4z&=&d_{1}\\x&&&-&z&=&d_{2}\\2x&+&2y&+&5z&=&d_{3}\end{array}}\right\}$

is a subspace of $\mathbb {R} ^{3}$ . Find a basis.

It is a subspace because it is the column space of the matrix

${\begin{pmatrix}3&2&4\\1&0&-1\\2&2&5\end{pmatrix}}$

of coefficients. To find a basis for the column space,

$\{c_{1}{\begin{pmatrix}3\\1\\2\end{pmatrix}}+c_{2}{\begin{pmatrix}2\\0\\2\end{pmatrix}}+c_{3}{\begin{pmatrix}4\\-1\\5\end{pmatrix}}\,{\big |}\,c_{1},c_{2},c_{3}\in \mathbb {R} \}$

we take the three vectors from the spanning set, transpose, reduce,

${\begin{pmatrix}3&1&2\\2&0&2\\4&-1&5\end{pmatrix}}{\xrightarrow[{-(4/3)\rho _{1}+\rho _{3}}]{-(2/3)\rho _{1}+\rho _{2}}}\;{\xrightarrow[{}]{-(7/2)\rho _{2}+\rho _{3}}}{\begin{pmatrix}3&1&2\\0&-2/3&2/3\\0&0&0\end{pmatrix}}$

and transpose back to get this.

$\langle {\begin{pmatrix}3\\1\\2\end{pmatrix}},{\begin{pmatrix}0\\-2/3\\2/3\end{pmatrix}}\rangle$
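As a numerical aside (not part of the argument), one can confirm that these two vectors form a basis for the column space: they are independent, and adjoining them to the columns of the coefficient matrix does not raise the rank.

```python
import numpy as np

rank = np.linalg.matrix_rank
A = np.array([[3, 2, 4], [1, 0, -1], [2, 2, 5]], dtype=float)
basis = np.array([[3.0, 1.0, 2.0], [0.0, -2/3, 2/3]]).T  # basis vectors as columns

# Independent (rank 2), and adjoining them to A's columns leaves the
# rank at 2, so they span exactly the column space.
is_basis = rank(basis) == 2 and rank(np.hstack([A, basis])) == rank(A) == 2
```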
Problem 15

Show that the transpose operation is linear:

${{(rA+sB)}^{\rm {trans}}}=r{{A}^{\rm {trans}}}+s{{B}^{\rm {trans}}}$

for $r,s\in \mathbb {R}$  and $A,B\in {\mathcal {M}}_{m\!\times \!n}$ .

This can be done as a straightforward calculation.

${\begin{array}{rl}{{(rA+sB)}^{\rm {trans}}}&={{\begin{pmatrix}ra_{1,1}+sb_{1,1}&\ldots &ra_{1,n}+sb_{1,n}\\\vdots &&\vdots \\ra_{m,1}+sb_{m,1}&\ldots &ra_{m,n}+sb_{m,n}\end{pmatrix}}^{\rm {trans}}}\\&={\begin{pmatrix}ra_{1,1}+sb_{1,1}&\ldots &ra_{m,1}+sb_{m,1}\\\vdots &&\vdots \\ra_{1,n}+sb_{1,n}&\ldots &ra_{m,n}+sb_{m,n}\end{pmatrix}}\\&={\begin{pmatrix}ra_{1,1}&\ldots &ra_{m,1}\\\vdots &&\vdots \\ra_{1,n}&\ldots &ra_{m,n}\end{pmatrix}}+{\begin{pmatrix}sb_{1,1}&\ldots &sb_{m,1}\\\vdots &&\vdots \\sb_{1,n}&\ldots &sb_{m,n}\end{pmatrix}}\\&=r{{A}^{\rm {trans}}}+s{{B}^{\rm {trans}}}\end{array}}$
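A random spot-check of this identity (an illustration only; the entrywise calculation above is the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(3, 4)).astype(float)
B = rng.integers(-5, 6, size=(3, 4)).astype(float)
r, s = 2.0, -3.0

# Transposition merely re-indexes entries, so it commutes with the
# linear combination; the two arrays agree exactly.
agree = np.array_equal((r * A + s * B).T, r * A.T + s * B.T)
```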
This exercise is recommended for all readers.
Problem 16

In this subsection we have shown that Gaussian reduction finds a basis for the row space.

1. Show that this basis is not unique— different reductions may yield different bases.
2. Produce matrices with equal row spaces but unequal numbers of rows.
3. Prove that two matrices have equal row spaces if and only if after Gauss-Jordan reduction they have the same nonzero rows.
1. These reductions give different bases.
${\begin{pmatrix}1&2&0\\1&2&1\end{pmatrix}}{\xrightarrow[{}]{-\rho _{1}+\rho _{2}}}{\begin{pmatrix}1&2&0\\0&0&1\end{pmatrix}}\qquad {\begin{pmatrix}1&2&0\\1&2&1\end{pmatrix}}{\xrightarrow[{}]{-\rho _{1}+\rho _{2}}}\;{\xrightarrow[{}]{2\rho _{2}}}{\begin{pmatrix}1&2&0\\0&0&2\end{pmatrix}}$
2. An easy example is this.
${\begin{pmatrix}1&2&1\\3&1&4\end{pmatrix}}\qquad {\begin{pmatrix}1&2&1\\3&1&4\\0&0&0\end{pmatrix}}$
This is a less simplistic example.
${\begin{pmatrix}1&2&1\\3&1&4\end{pmatrix}}\qquad {\begin{pmatrix}1&2&1\\3&1&4\\2&4&2\\4&3&5\end{pmatrix}}$
3. Assume that $A$  and $B$  are matrices with equal row spaces. Construct a matrix $C$  with the rows of $A$  above the rows of $B$ , and another matrix $D$  with the rows of $B$  above the rows of $A$ .
$C={\begin{pmatrix}A\\B\end{pmatrix}}\qquad D={\begin{pmatrix}B\\A\end{pmatrix}}$
Observe that $C$  and $D$  are row-equivalent (via a sequence of row-swaps) and so Gauss-Jordan reduce to the same reduced echelon form matrix. Because the row spaces are equal, the rows of $B$  are linear combinations of the rows of $A$  so Gauss-Jordan reduction on $C$  simply turns the rows of $B$  to zero rows and thus the nonzero rows of $C$  are just the nonzero rows obtained by Gauss-Jordan reducing $A$ . The same can be said for the matrix $D$ — Gauss-Jordan reduction on $D$  gives the same non-zero rows as are produced by reduction on $B$  alone. Therefore, $A$  yields the same nonzero rows as $C$ , which yields the same nonzero rows as $D$ , which yields the same nonzero rows as $B$ .
Problem 17

Why is there not a problem with Remark 3.14 in the case that $r$  is bigger than $n$ ?

It cannot be bigger: the rank $r$ counts the rows with leading entries after reduction, and each leading entry occupies its own column, so $r\leq n$.

Problem 18

Show that the row rank of an $m\!\times \!n$  matrix is at most $m$ . Is there a better bound?

The number of rows in a maximal linearly independent set cannot exceed the number of rows. A better bound (the bound that is, in general, the best possible) is the minimum of $m$  and $n$ , because the row rank equals the column rank.

This exercise is recommended for all readers.
Problem 19

Show that the rank of a matrix equals the rank of its transpose.

Because the rows of a matrix $A$  are turned into the columns of ${{A}^{\rm {trans}}}$  the dimension of the row space of $A$  equals the dimension of the column space of ${{A}^{\rm {trans}}}$ . But the dimension of the row space of $A$  is the rank of $A$  and the dimension of the column space of ${{A}^{\rm {trans}}}$  is the rank of ${{A}^{\rm {trans}}}$ . Thus the two ranks are equal.

Problem 20

True or false: the column space of a matrix equals the row space of its transpose.

False. The first is a set of columns while the second is a set of rows.

This example, however,

$A={\begin{pmatrix}1&2&3\\4&5&6\end{pmatrix}},\qquad {{A}^{\rm {trans}}}={\begin{pmatrix}1&4\\2&5\\3&6\end{pmatrix}}$

indicates that as soon as we have a formal meaning for "the same", we can apply it here:

$\mathop {\text{Columnspace}} (A)=[\{{\begin{pmatrix}1\\4\end{pmatrix}},{\begin{pmatrix}2\\5\end{pmatrix}},{\begin{pmatrix}3\\6\end{pmatrix}}\}]$

while

$\mathop {\mbox{Rowspace}} ({{A}^{\rm {trans}}})=[\{{\begin{pmatrix}1&4\end{pmatrix}},{\begin{pmatrix}2&5\end{pmatrix}},{\begin{pmatrix}3&6\end{pmatrix}}\}]$

are "the same" as each other.

This exercise is recommended for all readers.
Problem 21

We have seen that a row operation may change the column space. Must it?

No. Here, Gauss' method does not change the column space.

${\begin{pmatrix}1&0\\3&1\end{pmatrix}}{\xrightarrow[{}]{-3\rho _{1}+\rho _{2}}}{\begin{pmatrix}1&0\\0&1\end{pmatrix}}$
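Numerically (a sketch confirming this example), both column spaces here are all of $\mathbb{R}^2$: each matrix has rank two, and putting all four columns side by side finds nothing new.

```python
import numpy as np

rank = np.linalg.matrix_rank
A = np.array([[1, 0], [3, 1]], dtype=float)
R = np.array([[1, 0], [0, 1]], dtype=float)  # result of -3*row1 + row2

# Each has rank 2, and stacking the columns side by side stays at rank 2,
# so the two column spaces coincide (both equal R^2).
same_colspace = rank(A) == rank(R) == rank(np.hstack([A, R])) == 2
```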
Problem 22

Prove that a linear system has a solution if and only if that system's matrix of coefficients has the same rank as its augmented matrix.

A linear system

$c_{1}{\vec {a}}_{1}+\dots +c_{n}{\vec {a}}_{n}={\vec {d}}$

has a solution if and only if ${\vec {d}}$  is in the span of the set $\{{\vec {a}}_{1},\dots ,{\vec {a}}_{n}\}$ . That's true if and only if the column rank of the augmented matrix equals the column rank of the matrix of coefficients. Since rank equals the column rank, the system has a solution if and only if the rank of its augmented matrix equals the rank of its matrix of coefficients.

Problem 23

An $m\!\times \!n$  matrix has full row rank if its row rank is $m$ , and it has full column rank if its column rank is $n$ .

1. Show that a matrix can have both full row rank and full column rank only if it is square.
2. Prove that the linear system with matrix of coefficients $A$  has a solution for any $d_{1}$ , ..., $d_{m}$ 's on the right side if and only if $A$  has full row rank.
3. Prove that a homogeneous system has a unique solution if and only if its matrix of coefficients $A$  has full column rank.
4. Prove that the statement "if a system with matrix of coefficients $A$  has any solution then it has a unique solution" holds if and only if $A$  has full column rank.
1. Row rank equals column rank so each is at most the minimum of the number of rows and columns. Hence both can be full only if the number of rows equals the number of columns. (Of course, the converse does not hold: a square matrix need not have full row rank or full column rank.)
2. If $A$  has full row rank then, no matter what the right-hand side, Gauss' method on the augmented matrix ends with a leading one in each row and none of those leading ones in the furthest right column (the "augmenting" column). Back substitution then gives a solution. On the other hand, if the linear system lacks a solution for some right-hand side, that can only be because Gauss' method leaves some row that is all zeroes to the left of the "augmenting" bar but has a nonzero entry on the right. The same reduction applied to the coefficient matrix alone turns that row entirely to zeroes, so in that case $A$  does not have full row rank.
3. The matrix $A$  has full column rank if and only if its columns form a linearly independent set. That's equivalent to the existence of only the trivial linear relationship.
4. The matrix $A$  has full column rank if and only if the set of its columns is linearly independent, and so forms a basis for its span. That's equivalent to the existence of a unique linear representation of all vectors in that span.
Problem 24

How would the conclusion of Lemma 3.3 change if Gauss' method is changed to allow multiplying a row by zero?

Instead of the row spaces being the same, the row space of $B$  would be a subspace of the row space of $A$  (and possibly equal to it).

This exercise is recommended for all readers.
Problem 25

What is the relationship between $\mathop {\mbox{rank}} (A)$  and $\mathop {\mbox{rank}} (-A)$ ? Between $\mathop {\mbox{rank}} (A)$  and $\mathop {\mbox{rank}} (kA)$ ? What, if any, is the relationship between $\mathop {\mbox{rank}} (A)$ , $\mathop {\mbox{rank}} (B)$ , and $\mathop {\mbox{rank}} (A+B)$ ?

Clearly $\mathop {\mbox{rank}} (A)=\mathop {\mbox{rank}} (-A)$  as Gauss' method allows us to multiply all rows of a matrix by $-1$ . In the same way, when $k\neq 0$  we have $\mathop {\mbox{rank}} (A)=\mathop {\mbox{rank}} (kA)$ .

Addition is more interesting. The rank of a sum can be smaller than the ranks of the summands.

${\begin{pmatrix}1&2\\3&4\end{pmatrix}}+{\begin{pmatrix}-1&-2\\-3&-4\end{pmatrix}}={\begin{pmatrix}0&0\\0&0\end{pmatrix}}$

The rank of a sum can be bigger than the ranks of the summands.

${\begin{pmatrix}1&2\\0&0\end{pmatrix}}+{\begin{pmatrix}0&0\\3&4\end{pmatrix}}={\begin{pmatrix}1&2\\3&4\end{pmatrix}}$

But there is an upper bound (other than the size of the matrices). In general, $\mathop {\mbox{rank}} (A+B)\leq \mathop {\mbox{rank}} (A)+\mathop {\mbox{rank}} (B)$ .

To prove this, note that Gaussian elimination can be performed on $A+B$  in either of two ways: we can first add $A$  to $B$  and then apply the appropriate sequence of reduction steps

$(A+B)\;{\xrightarrow[{}]{{\text{step}}_{1}}}\;\cdots \;{\xrightarrow[{}]{{\text{step}}_{k}}}\;{\text{echelon form}}$

or we can get the same results by performing ${\text{step}}_{1}$  through ${\text{step}}_{k}$  separately on $A$  and $B$ , and then adding. The largest rank that we can end with in the second case is clearly the sum of the ranks. (The matrices above give examples of both possibilities, $\mathop {\mbox{rank}} (A+B)<\mathop {\mbox{rank}} (A)+\mathop {\mbox{rank}} (B)$  and $\mathop {\mbox{rank}} (A+B)=\mathop {\mbox{rank}} (A)+\mathop {\mbox{rank}} (B)$ , happening.)
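Both behaviors, and the bound itself, can be confirmed on the example matrices above (a numerical illustration, not part of the proof):

```python
import numpy as np

rank = np.linalg.matrix_rank
A = np.array([[1, 2], [0, 0]], dtype=float)
B = np.array([[0, 0], [3, 4]], dtype=float)

# The sum's rank can hit the upper bound rank(A) + rank(B) ...
hits_bound = int(rank(A + B)) == int(rank(A)) + int(rank(B))
# ... or fall all the way to zero (A plus its own negative).
falls_to_zero = int(rank(A + (-A))) == 0
```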