# Introductory Linear Algebra/Vectors and subspaces

## Vectors

### Introduction

Without distinguishing row and column vectors, we can define vectors as follows:

Definition. (Vector) Let $n$  be a positive integer. A vector is an $n$ -tuple $\mathbf {v} =(v_{1},v_{2},\ldots ,v_{n})$  of real numbers. The set $\mathbb {R} ^{n}$  of all such vectors is the Euclidean space of dimension $n$ .

Remark.

• We will define dimension later.
• We use a boldface letter to denote a vector. In handwriting, we may write ${\vec {v}},{\underline {v}},{\overset {\rightharpoonup }{v}}$ .
• In particular, we use $\mathbf {0}$  to denote the zero vector, in which each entry is zero.
• The entries $v_{1},v_{2},\ldots ,v_{n}$  are called the coordinates or entries of $\mathbf {v}$ .

A special type of vector is the standard vector.

Definition. (Standard vector) The standard vectors in $\mathbb {R} ^{n}$  are $\mathbf {e} _{1},\ldots ,\mathbf {e} _{n}$ , where $\mathbf {e} _{j}$  is the vector whose $j$ th entry is $1$  and all other entries are $0$ , in which $j\in \{1,\ldots ,n\}$ .

Remark.

• In $\mathbb {R} ^{2}$ , $\mathbf {e} _{1}$  and $\mathbf {e} _{2}$  are usually denoted by $\mathbf {i}$  and $\mathbf {j}$  respectively.
• In $\mathbb {R} ^{3}$ , $\mathbf {e} _{1}$ , $\mathbf {e} _{2}$  and $\mathbf {e} _{3}$  are usually denoted by $\mathbf {i}$ , $\mathbf {j}$  and $\mathbf {k}$  respectively.

Example.

• In $\mathbb {R} ^{4}$ , $\mathbf {e} _{3}=(0,0,1,0)$ .
• In $\mathbb {R} ^{3}$ , $\mathbf {i} =(1,0,0)$ .
• In $\mathbb {R} ^{2}$ , $\mathbf {i} =(1,0)$ .

We can see that $\mathbf {i}$  in $\mathbb {R} ^{2}$  is different from $\mathbf {i}$  in $\mathbb {R} ^{3}$ .

However, in linear algebra, we sometimes need to distinguish row and column vectors, which are defined as follows:

Definition. (Row and column vector) A row vector is a $1\times n$  matrix, and a column vector is an $n\times 1$  matrix.

Remark.

• It is more common to use column vectors.
• Because of this, we can apply the definitions of addition and scalar multiplication of a matrix to the corresponding vector operations.

Example. (Row and column vectors) ${\begin{pmatrix}1&2&3\end{pmatrix}}$  is a row vector, and ${\begin{pmatrix}1\\2\\3\\\end{pmatrix}}$  is a column vector.

Remark.

• To save space, we may use ${\begin{pmatrix}v_{1}&\cdots &v_{n}\end{pmatrix}}^{T}$  to represent column vectors.
• To save more space, it is more common to denote this transpose by $(v_{1},\ldots ,v_{n})^{T}$ .
• On the other hand, we usually do not denote row vectors by $(v_{1},\ldots ,v_{n})$  to avoid confusion with the notation of vectors (without specifying row or column vectors).

The two basic vector operations are addition and scalar multiplication. Using these two operations only, we can combine multiple vectors as in the following definition.

Definition. (Linear combination) Let $\mathbf {v} _{1},\ldots ,\mathbf {v} _{k}\in \mathbb {R} ^{n}$ . A vector $\mathbf {v} \in \mathbb {R} ^{n}$  is a linear combination of $\mathbf {v} _{1},\ldots ,\mathbf {v} _{k}$  if

$\mathbf {v} =c_{1}\mathbf {v} _{1}+\cdots +c_{k}\mathbf {v} _{k}$

for some scalars (or real numbers) $c_{1},\ldots ,c_{k}$ .

Example. The vector $(3,4,5)^{T}$  is a linear combination of $(1,2,3)^{T}$  and $(2,3,4)^{T}$ , while the vector $(3,4,6)^{T}$  is not.

Proof. Since

$(3,4,5)^{T}=a(1,2,3)^{T}+b(2,3,4)^{T}\iff {\begin{cases}a+2b=3\\2a+3b=4\\3a+4b=5\\\end{cases}},$

we can transform the augmented matrix representing this SLE as follows:
${\begin{pmatrix}1&2&3\\2&3&4\\3&4&5\\\end{pmatrix}}{\overset {-3\mathbf {r} _{1}+\mathbf {r} _{3}\to \mathbf {r} _{3}}{\overset {-2\mathbf {r} _{1}+\mathbf {r} _{2}\to \mathbf {r} _{2}}{\to }}}{\begin{pmatrix}1&2&3\\0&-1&-2\\0&-2&-4\\\end{pmatrix}}{\overset {-\mathbf {r} _{2}\to \mathbf {r} _{2}}{\to }}{\begin{pmatrix}1&2&3\\0&1&2\\0&-2&-4\\\end{pmatrix}}{\overset {-2\mathbf {r} _{2}+\mathbf {r} _{1}\to \mathbf {r} _{1}}{\overset {2\mathbf {r} _{2}+\mathbf {r} _{3}\to \mathbf {r} _{3}}{\to }}}{\begin{pmatrix}1&0&-1\\0&1&2\\0&0&0\\\end{pmatrix}}.$

Then, we can directly read off that the unique solution is $(a,b)=(-1,2)$ . So, we can express $(3,4,5)^{T}$  as a linear combination of $(1,2,3)^{T}$  and $(2,3,4)^{T}$ .

On the other hand, since

$(3,4,6)^{T}=a(1,2,3)^{T}+b(2,3,4)^{T}\iff {\begin{cases}a+2b=3\\2a+3b=4\\3a+4b=6\\\end{cases}},$

we can transform the augmented matrix representing this SLE as follows:
${\begin{pmatrix}1&2&3\\2&3&4\\3&4&6\\\end{pmatrix}}{\overset {-3\mathbf {r} _{1}+\mathbf {r} _{3}\to \mathbf {r} _{3}}{\overset {-2\mathbf {r} _{1}+\mathbf {r} _{2}\to \mathbf {r} _{2}}{\to }}}{\begin{pmatrix}1&2&3\\0&-1&-2\\0&-2&-3\\\end{pmatrix}}{\overset {-\mathbf {r} _{2}\to \mathbf {r} _{2}}{\to }}{\begin{pmatrix}1&2&3\\0&1&2\\0&-2&-3\\\end{pmatrix}}{\overset {-2\mathbf {r} _{2}+\mathbf {r} _{1}\to \mathbf {r} _{1}}{\overset {2\mathbf {r} _{2}+\mathbf {r} _{3}\to \mathbf {r} _{3}}{\to }}}{\begin{pmatrix}1&0&-1\\0&1&2\\0&0&1\\\end{pmatrix}}.$

Since there is a leading one in the 3rd column, the SLE is inconsistent, and therefore we cannot express $(3,4,6)^{T}$  as a linear combination of $(1,2,3)^{T}$  and $(2,3,4)^{T}$ .

$\Box$
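Both parts of this proof can be cross-checked numerically via ranks: a vector $\mathbf {b}$  is a linear combination of the columns of a matrix $A$  exactly when appending $\mathbf {b}$  as an extra column does not increase the rank. A sketch, assuming NumPy is available (the helper name `is_linear_combination` is ours, not a library function):

```python
import numpy as np

# Columns are (1,2,3)^T and (2,3,4)^T.
A = np.array([[1, 2],
              [2, 3],
              [3, 4]])

def is_linear_combination(A, b):
    # b lies in the span of the columns of A iff appending b
    # as an extra column does not increase the rank.
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

print(is_linear_combination(A, np.array([3, 4, 5])))  # True
print(is_linear_combination(A, np.array([3, 4, 6])))  # False
```

This is the same consistency test as the row reduction above: a rank increase corresponds exactly to a leading one appearing in the last column of the augmented matrix.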

Exercise.

1. Choose the linear combination(s) of $(1,2)^{T},(0,0)^{T}$ .

• $(1,2)^{T}$
• $(0,0)^{T}$
• $(0,1)^{T}$
• $(1,2)$
• $(0,0)$

2. Select all correct statement(s).

• If a nonzero vector $\mathbf {v}$ is a linear combination of a vector $\mathbf {v} _{1}$ , and also a linear combination of a vector $\mathbf {v} _{2}$ , then $\mathbf {v} _{1}$ is a linear combination of $\mathbf {v} _{2}$ .
• A linear combination of vectors $\mathbf {v} _{1},\mathbf {v} _{2}$ and a linear combination of vectors $\mathbf {v} _{2},\mathbf {v} _{3}$ are both linear combinations of vectors $\mathbf {v} _{1},\mathbf {v} _{2},\mathbf {v} _{3}$ .
• The zero vector is a linear combination of arbitrary vector(s).
• There are infinitely many possible linear combinations for arbitrary vector(s).
• $\mathbf {v} =\pi ^{2}\mathbf {v} _{1}+3\mathbf {v} _{2}$ is not a linear combination of vectors $\mathbf {v} _{1},\mathbf {v} _{2}$ .

Another concept that is closely related to linear combinations is span.

Definition. (Span) Let $S=\{\mathbf {v} _{1},\ldots ,\mathbf {v} _{k}\}$  be a nonempty subset of $\mathbb {R} ^{n}$ . The span of $S$ , denoted by $\operatorname {span} (S)$ , is the set of all linear combinations of $\mathbf {v} _{1},\ldots ,\mathbf {v} _{k}$ .

(Figure: the cross-hatched plane is the span of $\mathbf {u}$  and $\mathbf {v}$  in $\mathbb {R} ^{3}$ .)

Remark.

• Unless $S=\{\mathbf {0} \}$ , $\operatorname {span} (S)$  contains infinitely many vectors, since there are infinitely many possible linear combinations of the vectors in $S$ .
• It follows that we can use the vectors belonging to the set $S$  to generate each vector in $\operatorname {span} (S)$ .
• Strictly speaking, we take the span of a set containing some vector(s), instead of the span of the vectors themselves.

Example. The span of the set $\{(1,2,3)^{T},(2,3,4)^{T}\}$  is

$\{(a+2b,2a+3b,3a+4b)^{T}:a,b\in \mathbb {R} \}$

since linear combinations of $(1,2,3)^{T}$  and $(2,3,4)^{T}$  are of the form
$a(1,2,3)^{T}+b(2,3,4)^{T}=(a+2b,2a+3b,3a+4b)^{T}.$

Geometrically, the span is a plane in $\mathbb {R} ^{3}$ .

The span of the set $\{(1,1)^{T}\}$  is

$\{(a,a)^{T}:a\in \mathbb {R} \}$

since linear combinations of $(1,1)^{T}$  are of the form
$a(1,1)^{T}=(a,a)^{T}.$

Geometrically, the span is a line in $\mathbb {R} ^{2}$ .

Exercise.

1. Select all correct expression(s) for $\operatorname {span} {(\{\mathbf {0} \})}$ , in which $\mathbf {0} \in \mathbb {R} ^{n}$ .

• $\mathbf {0}$
• The empty set, $\varnothing$
• $\{\mathbf {0} \}$
• $\{\{\mathbf {0} \}\}$
• $\mathbb {R} ^{n}$

2. Select all correct expression(s) for $\operatorname {span} {(\{\mathbf {i} ,\mathbf {j} ,\mathbf {k} \})}$ , in which $\mathbf {i} ,\mathbf {j} ,\mathbf {k}$  are column vectors.

• $\mathbb {R} ^{3}$
• $\{\mathbf {i} ,\mathbf {j} ,\mathbf {k} \}$
• $\{a\mathbf {i} ,b\mathbf {j} ,c\mathbf {k} :a,b,c\in \mathbb {R} \}$
• $\{(a,b,c)^{T}:a,b,c\in \mathbb {R} \}$

3. Let $S$  and $T$  be sets containing some vectors, which are nonempty subsets of $\mathbb {R} ^{n}$ . Select all correct statement(s).

• $\operatorname {span} {(S\cup T)}=\operatorname {span} (S)\cup \operatorname {span} (T)$
• If $\operatorname {span} (S)=\operatorname {span} (T)$ , then $S=T$ .
• If $\operatorname {span} (S)\neq \operatorname {span} (T)$ , then $S\neq T$ .
• $\operatorname {span} {(\operatorname {span} (S))}=\operatorname {span} (S)$
• $\operatorname {span} {(\{\mathbf {u} ,\mathbf {v} \})}=\mathbb {R} ^{2}$ for each $\mathbf {u} ,\mathbf {v} \in \mathbb {R} ^{2}$ .

### Linear independence

Definition. (Linear (in)dependence)

Let $S=\{\mathbf {v} _{1},\ldots ,\mathbf {v} _{k}\}$  be a subset of $\mathbb {R} ^{n}$ . The set $S$  is linearly dependent if there exist scalars $c_{1},\ldots ,c_{k}$  that are not all zero such that

$c_{1}\mathbf {v} _{1}+\cdots +c_{k}\mathbf {v} _{k}=\mathbf {0} .$

Otherwise, the set is linearly independent.

Remark.

• (terminology) We may also say that the vectors $\mathbf {v} _{1},\ldots ,\mathbf {v} _{k}$  are linearly independent, instead of saying the set containing these vectors is linearly (in)dependent.
• If the vectors are linearly dependent, it is possible that some of $c_{1},\ldots ,c_{k}$ , in the equation, are zero.
• Equivalently, if the vectors are linearly independent, then we have 'if $c_{1}\mathbf {v} _{1}+\cdots +c_{k}\mathbf {v} _{k}=\mathbf {0}$ , then the only solution is $c_{1}=\cdots =c_{k}=0$ '.
• This is a more common way to check linear independence.

Next, we will introduce an intuitive result about linear dependence, in the sense that it matches the name 'linear dependence'.

Proposition. (Equivalent condition for linear dependence) The vectors $\mathbf {v} _{1},\ldots ,\mathbf {v} _{k}$  are linearly dependent if and only if one of them is a linear combination of the others.

Proof.

• Only if part:
• without loss of generality, suppose $c_{1}\mathbf {v} _{1}+\cdots +c_{k}\mathbf {v} _{k}=\mathbf {0}$  in which $c_{1}\neq 0$  (we can replace $c_{1}$  by another scalar, and the result still holds by symmetry).
• Then, $\mathbf {v} _{1}=-{\frac {c_{2}}{c_{1}}}\mathbf {v} _{2}-\cdots -{\frac {c_{k}}{c_{1}}}\mathbf {v} _{k}$ , i.e. $\mathbf {v} _{1}$  is a linear combination of the others.
• If part:
• without loss of generality, suppose $\mathbf {v} _{1}=b_{2}\mathbf {v} _{2}+\cdots +b_{k}\mathbf {v} _{k}$  (we can similarly replace $\mathbf {v} _{1}$  by another vector, and the result still holds by symmetry).
• Then, $\mathbf {v} _{1}-b_{2}\mathbf {v} _{2}-\cdots -b_{k}\mathbf {v} _{k}=\mathbf {0}$ .
• Since the coefficient of $\mathbf {v} _{1}$  is nonzero (it is one), $\mathbf {v} _{1},\ldots ,\mathbf {v} _{k}$  are linearly dependent.

$\Box$

Remark.

• This does not mean that each of them is a linear combination of the others.
• If vectors are 'linearly dependent', we may intuitively think that they are related in a linear sense, and this is true, since one of them is a linear combination of the others, i.e. it has a relationship with all other vectors.

Example. The vectors $(1,2,3)^{T},(2,3,4)^{T},(3,4,5)^{T}$  are linearly dependent.

Proof.

• Consider the equation $a(1,2,3)^{T}+b(2,3,4)^{T}+c(3,4,5)^{T}=(0,0,0)^{T}$ .
• Since the determinant of the coefficient matrix

${\begin{vmatrix}1&2&3\\2&3&4\\3&4&5\\\end{vmatrix}}=1(3)(5)+2(4)(3)+3(2)(4)-3(3)(3)-2(2)(5)-4(4)(1)=0,$

the coefficient matrix is non-invertible.
• Thus, the homogeneous SLE has a non-trivial solution by the simplified invertible matrix theorem, i.e. there exist scalars that are not all zero satisfying the equation.

$\Box$
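The determinant computation above can be reproduced numerically, and we can also exhibit an explicit non-trivial solution, e.g. $(a,b,c)=(1,-2,1)$ . A sketch, assuming NumPy is available:

```python
import numpy as np

# Columns are the three vectors (1,2,3)^T, (2,3,4)^T, (3,4,5)^T.
V = np.array([[1, 2, 3],
              [2, 3, 4],
              [3, 4, 5]], dtype=float)

print(np.linalg.det(V))          # ~0: the columns are linearly dependent

# An explicit non-trivial dependence: 1*v1 - 2*v2 + 1*v3 = 0.
print(V @ np.array([1, -2, 1]))  # [0. 0. 0.]
```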

Then, we will introduce a proposition for linearly independent vectors.

Proposition. (Comparing coefficients for linearly independent vectors) Let $\mathbf {v} _{1},\ldots ,\mathbf {v} _{k}$  be linearly independent vectors. If

$a_{1}\mathbf {v} _{1}+\cdots +a_{k}\mathbf {v} _{k}=b_{1}\mathbf {v} _{1}+\cdots +b_{k}\mathbf {v} _{k},$

then $(a_{1},\ldots ,a_{k})=(b_{1},\ldots ,b_{k})$ .

Proof.

$a_{1}\mathbf {v} _{1}+\cdots +a_{k}\mathbf {v} _{k}=b_{1}\mathbf {v} _{1}+\cdots +b_{k}\mathbf {v} _{k}\Leftrightarrow (a_{1}-b_{1})\mathbf {v} _{1}+\cdots +(a_{k}-b_{k})\mathbf {v} _{k}=\mathbf {0} .$

By the linear independence of vectors,
$a_{1}-b_{1}=\cdots =a_{k}-b_{k}=0,$

and the result follows.

$\Box$

Example. (Finding unknown coefficients by comparison) Suppose $\mathbf {v} _{1},\mathbf {v} _{2},\mathbf {v} _{3}$  are linearly independent vectors, and

$2\mathbf {v} _{1}+a\mathbf {v} _{2}+\mathbf {v} _{3}=b\mathbf {v} _{1}+7\mathbf {v} _{2}+c\mathbf {v} _{3}.$

Then, by comparing coefficients, we have
$(a,b,c)=(7,2,1).$

Example. Consider three linearly dependent vectors $(1,2,4)^{T},(2,4,8)^{T},(1,2,3)^{T}$ . Even if

$a_{1}(1,2,4)^{T}+a_{2}(2,4,8)^{T}+a_{3}(1,2,3)^{T}=b_{1}(1,2,4)^{T}+b_{2}(2,4,8)^{T}+b_{3}(1,2,3)^{T},$

we may not have $(a_{1},a_{2},a_{3})=(b_{1},b_{2},b_{3})$ . E.g., we have
$4(1,2,4)^{T}+(2,4,8)^{T}+(1,2,3)^{T}=2(1,2,4)^{T}+2(2,4,8)^{T}+(1,2,3)^{T}.$
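We can verify this equality directly; a small sketch, assuming NumPy is available:

```python
import numpy as np

v1 = np.array([1, 2, 4])
v2 = np.array([2, 4, 8])   # note v2 = 2*v1, so the three vectors are dependent
v3 = np.array([1, 2, 3])

lhs = 4*v1 + 1*v2 + 1*v3
rhs = 2*v1 + 2*v2 + 1*v3
print(np.array_equal(lhs, rhs))  # True, although (4,1,1) != (2,2,1)
```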

Exercise. Let $\mathbf {u} ,\mathbf {v} ,\mathbf {w}$  be linearly independent vectors.

1. Select all correct statement(s).

• $\mathbf {u} ,\mathbf {v}$ are linearly independent.
• $\mathbf {u} ,\mathbf {v} ,\mathbf {w} ,\mathbf {0}$ are linearly independent.
• $\mathbf {v} ,\mathbf {w} ,\mathbf {u} +\mathbf {v}$ are linearly dependent.
• $a\mathbf {u} ,b\mathbf {v} ,c\mathbf {w}$ are linearly independent for each scalar $a,b,c$ .

2. Select all correct statement(s).

• A single arbitrary vector is linearly independent.
• If $\mathbf {u} ,\mathbf {v}$ are linearly independent, and $\mathbf {v} ,\mathbf {w}$ are linearly independent, then $\mathbf {u} ,\mathbf {v} ,\mathbf {w}$ are linearly independent.
• $a_{1}\mathbf {e} _{1}+\cdots +a_{k}\mathbf {e} _{k}=b_{1}\mathbf {e} _{1}+\cdots +b_{k}\mathbf {e} _{k}\Rightarrow (a_{1},\ldots ,a_{k})=(b_{1},\ldots ,b_{k})$
• $\mathbf {e} _{1},\ldots ,\mathbf {e} _{k}$ are linearly independent.

Then, we will discuss two results that relate linear independence to SLEs.

Proposition. (Relationship between linear independence and invertibility) Let $S=\{\mathbf {v} _{1},\ldots ,\mathbf {v} _{\color {green}n}\}$  be a set of vectors in $\mathbb {R} ^{\color {green}n}$  (the number of vectors must be $n$ , so that the following matrix is square). Let $A$  be the square matrix whose columns are $\mathbf {v} _{1},\ldots ,\mathbf {v} _{n}$  respectively. Then, $S$  is linearly independent if and only if $A$  is invertible.

Proof. Setting $\mathbf {x} =(c_{1},\ldots ,c_{n})^{T}$ ,

$c_{1}\mathbf {v} _{1}+\cdots +c_{n}\mathbf {v} _{n}=\mathbf {0} \iff A\mathbf {x} =\mathbf {0} ,$

a homogeneous SLE. By the definition of linear independence, $S$  being linearly independent is equivalent to $A\mathbf {x} =\mathbf {0}$  having only the trivial solution, which is in turn equivalent to $A$  being invertible, by the simplified invertible matrix theorem.

$\Box$

Remark. This gives a convenient way to check linear (in)dependence when the number of vectors involved matches the number of entries in each of them.

Example. The set $\{(1,1,1)^{T},(2,3,4)^{T},(7,3,6)^{T}\}$  is linearly independent, since

${\begin{vmatrix}1&2&7\\1&3&3\\1&4&6\\\end{vmatrix}}=7\neq 0$ .
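This determinant can be cross-checked numerically; a sketch, assuming NumPy is available:

```python
import numpy as np

# Columns are (1,1,1)^T, (2,3,4)^T, (7,3,6)^T.
A = np.array([[1, 2, 7],
              [1, 3, 3],
              [1, 4, 6]], dtype=float)

# A nonzero determinant means A is invertible, so its columns
# form a linearly independent set.
print(round(np.linalg.det(A)))  # 7
```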

Proposition. (Relationship between linear independence and number of solutions in SLE) Let $A\mathbf {x} =\mathbf {b}$  be a SLE ($A$  need not be a square matrix, and $\mathbf {b}$  can be nonzero). If the columns of $A$  are linearly independent, then the SLE has at most one solution.

Proof. Let $\mathbf {a} _{1},\ldots ,\mathbf {a} _{n}$  be the columns of $A$ , and let $\mathbf {x} =(x_{1},\ldots ,x_{n})^{T}$ .

Then, $A\mathbf {x} =\mathbf {b} \iff x_{1}\mathbf {a} _{1}+\cdots +x_{n}\mathbf {a} _{n}=\mathbf {b}$ . Assume there are two distinct solutions $(x_{1},\ldots ,x_{n})^{T}$  and $(y_{1},\ldots ,y_{n})^{T}$  for the SLE, i.e.,

$x_{1}\mathbf {a} _{1}+\cdots +x_{n}\mathbf {a} _{n}=\mathbf {b} =y_{1}\mathbf {a} _{1}+\cdots +y_{n}\mathbf {a} _{n}.$

But, by the proposition about comparing coefficients for linearly independent vectors,
$(x_{1},\ldots ,x_{n})=(y_{1},\ldots ,y_{n}),$

which contradicts our assumption that these two solutions are distinct.

$\Box$

Remark.

• A special case: if the columns of a square matrix $A$  are linearly independent, then $A\mathbf {x} =\mathbf {0}$  has only the trivial solution, so $A$  is invertible, which matches the previous proposition.
• 'The SLE has at most one solution' is equivalent to 'the SLE has either no solution or a unique solution'.

Example. The set $\{(1,1,1)^{T},(2,3,4)^{T}\}$  is linearly independent, since neither vector is a linear combination (i.e. a scalar multiple) of the other. Thus, the SLE

${\begin{cases}x+2y&=a\\x+3y&=b\\x+4y&=c\\\end{cases}}$

has at most one solution for each $a,b,c$ . E.g., the SLE
${\begin{cases}x+2y&=4\\x+3y&=3\\x+4y&=2\\\end{cases}}$

has no solutions, by considering the augmented matrix of this SLE
${\begin{pmatrix}1&2&4\\1&3&3\\1&4&2\\\end{pmatrix}}.$

Since ${\begin{vmatrix}1&2&4\\1&3&3\\1&4&2\\\end{vmatrix}}=-4\neq 0,$  the augmented matrix is invertible, and thus its RREF is $I_{3}$  by the simplified invertible matrix theorem. Then, since there is a leading one in the 3rd column of $I_{3}$ , the SLE is inconsistent.

Exercise. Let $A={\begin{pmatrix}2&0&2&2\\2&5&8&9\\3&1&2&5\\1&2&3&4\\\end{pmatrix}},\mathbf {x} =(x_{1},x_{2},x_{3},x_{4})^{T},\mathbf {b} =(b_{1},b_{2},b_{3},b_{4})^{T}$ . It is given that $\det A=-2$ .

Select all correct statement(s).

• The set $\{(2,0,2,2)^{T},(2,5,8,9)^{T},(3,1,2,5)^{T},(1,2,3,4)^{T}\}$ is linearly independent.
• The set $\{(2,2,3,1)^{T},(0,5,1,2)^{T},(2,8,2,3)^{T},(2,9,5,4)^{T}\}$ is linearly independent.
• The homogeneous SLE $A\mathbf {x} =\mathbf {0}$ only has the trivial solution.
• The SLE $A\mathbf {x} =\mathbf {b}$ may have infinitely many solutions.

## Subspaces

Then, we will discuss subspaces. Simply speaking, they are some subsets of $\mathbb {R} ^{n}$  that have some nice properties. To be more precise, we have the following definition.

Definition. (Subspace) A subset $V$  of $\mathbb {R} ^{n}$  is a subspace of $\mathbb {R} ^{n}$  if all of the following conditions hold.

1. $\mathbf {0} \in V$
2. (closure under addition) for each $\mathbf {u} ,\mathbf {v} \in V$ , $\mathbf {u} +\mathbf {v} \in V$
3. (closure under scalar multiplication) for each $\mathbf {v} \in V$  and scalar $c$ , $c\mathbf {v} \in V$

Remark.

• $V$  stands for vector space, since a subspace is a kind of vector space that is a subset of some larger vector space.
• The definition of vector space involves more conditions and is more complicated, and thus not included here.
• For subspaces, after these conditions are satisfied, the remaining conditions for vector spaces are automatically satisfied.
• If $V$  is nonempty, then the first condition is redundant.
• But, we may not know whether $V$  is empty or nonempty, so it may be more convenient to simply check the first condition.
• This is because for each $\mathbf {v} \in V$ , $-\mathbf {v} =(-1)\cdot \mathbf {v} \in V$  by closure under scalar multiplication, and thus $\mathbf {0} =\mathbf {v} +(-\mathbf {v} )\in V$  by closure under addition.

Example. (Zero space) The set containing only the zero vector, $\{\mathbf {0} \}$ , is a subspace of $\mathbb {R} ^{n}$ , and is called the zero space.

Proof.

• $\mathbf {0} \in \{\mathbf {0} \}$ ;
• $\mathbf {0} +\mathbf {0} =\mathbf {0} \in \{\mathbf {0} \}$ ;
• $c\mathbf {0} =\mathbf {0} \in \{\mathbf {0} \}$  for each scalar $c$ .

$\Box$

Example. The span of the set $\{(1,2,3)^{T},(3,4,5)^{T}\}$  is a subspace of $\mathbb {R} ^{3}$ .

Proof. Let $V=\operatorname {span} {(\{(1,2,3)^{T},(3,4,5)^{T}\})}$ .

• $\mathbf {0} \in V$  since $\mathbf {0} =0\cdot (1,2,3)^{T}+0\cdot (3,4,5)^{T}$ , is a linear combination of $(1,2,3)^{T}$  and $(3,4,5)^{T}$ ;
• $\mathbf {u} ,\mathbf {v} \in V\Rightarrow \mathbf {u} +\mathbf {v} \in V$ , since
• let $\mathbf {u} =a(1,2,3)^{T}+b(3,4,5)^{T}$  and $\mathbf {v} =c(1,2,3)^{T}+d(3,4,5)^{T}$ , then $\mathbf {u} +\mathbf {v} =(a+c)(1,2,3)^{T}+(b+d)(3,4,5)^{T}\in V$ .
• $\mathbf {v} \in V,c\in \mathbb {R} \Rightarrow c\mathbf {v} \in V$ , since
• let $\mathbf {v} =a(1,2,3)^{T}+b(3,4,5)^{T}$ , then $c\mathbf {v} =ca(1,2,3)^{T}+cb(3,4,5)^{T}\in V$ .

$\Box$
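The closure checks in this proof can also be illustrated numerically, using the rank test for span membership. A sketch, assuming NumPy is available (the helper name `in_span` is ours, not a library routine):

```python
import numpy as np

gens = np.column_stack([[1, 2, 3], [3, 4, 5]])  # generators as columns

def in_span(gens, w):
    # w lies in the span iff appending it does not increase the rank.
    return np.linalg.matrix_rank(np.column_stack([gens, w])) == np.linalg.matrix_rank(gens)

u = gens @ np.array([2.0, -1.0])   # two arbitrary elements of the span
v = gens @ np.array([0.5, 3.0])

# Closure under addition and scalar multiplication:
print(in_span(gens, u + v), in_span(gens, -7.0 * v))  # True True
```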

We can see from this example that the entries themselves do not really matter. Indeed, we have the following general result:

Proposition. (A span of a finite set is a subspace) For each nonempty finite subset $S$  of $\mathbb {R} ^{n}$ , $\operatorname {span} (S)$  is a subspace.

Proof. Let $V=\operatorname {span} (S)$ . The idea of the proof is shown in the above example:

• $\mathbf {0} \in V$  since the zero vector is a linear combination of the vectors belonging to $S$ ;
• $\mathbf {u} ,\mathbf {v} \in V\Rightarrow \mathbf {u} +\mathbf {v} \in V$  since $\mathbf {u} +\mathbf {v}$  is a linear combination of the vectors belonging to $S$ ;
• $\mathbf {v} \in V,c\in \mathbb {R} \Rightarrow c\mathbf {v} \in V$  since $c\mathbf {v}$  is a linear combination of the vectors belonging to $S$ .

$\Box$

Exercise.

Select all correct statement(s).

• The empty set, $\varnothing$ , is a subspace.
• $L=\{(1,0,0)^{T}+t(2,3,4)^{T}:t\in \mathbb {R} \}$ is a subspace.
• $\{(1,2,3)^{T}\}$ is a subspace.
• $\mathbb {R} ^{3}$ is a subspace.
• $S=\{(x,y):x\geq 0,y\leq 0\}$ is a subspace.

In particular, we have special names for some of the spans, as follows:

Definition. (Row, column, and null space) Let $A$  be a matrix. The row (column) space of $A$  is the span of the rows (columns) of $A$ , denoted by $\operatorname {Row} (A)$  ($\operatorname {Col} (A)$ ). The null space (or kernel) of $A$  is the solution set to the homogeneous SLE $A\mathbf {x} =\mathbf {0}$ , denoted by $\operatorname {Null} (A)$  (or $\operatorname {ker} (A)$  for the name 'kernel').

Remark.

• It follows from the proposition about the span of a finite set being a subspace, that row and column spaces are subspaces.
• Row and column spaces may belong to different Euclidean spaces,
• e.g. for an $m\times n$  matrix $A$ , $\operatorname {Row} (A)\subseteq \mathbb {R} ^{n}$ , while $\operatorname {Col} (A)\subseteq \mathbb {R} ^{m}$ .

Example. The null space of a matrix is a subspace.

Proof. Consider a homogeneous SLE $A\mathbf {x} =\mathbf {0}$ . Let $V$  be the solution set to $A\mathbf {x} =\mathbf {0}$ , i.e. $V=\operatorname {Null} (A)$ .

• $\mathbf {0} \in V$  since $A\mathbf {0} =\mathbf {0}$ ;
• $\mathbf {u} ,\mathbf {v} \in V\Rightarrow \mathbf {u} +\mathbf {v} \in V$ , because
• since $\mathbf {u} ,\mathbf {v} \in V$ , $A\mathbf {u} =\mathbf {0} ,A\mathbf {v} =\mathbf {0}$ ;
• since $A(\mathbf {u} +\mathbf {v} )=A\mathbf {u} +A\mathbf {v} =\mathbf {0} +\mathbf {0} =\mathbf {0}$ , $\mathbf {u} +\mathbf {v} \in V$ .
• $\mathbf {v} \in V,c\in \mathbb {R} \Rightarrow c\mathbf {v} \in V$ , because
• since $\mathbf {v} \in V$ , $A\mathbf {v} =\mathbf {0}$ ;
• since $A(c\mathbf {v} )=c(A\mathbf {v} )=c\mathbf {0} =\mathbf {0}$ , $c\mathbf {v} \in V$ .

$\Box$

Example. (Example of row, column and null spaces) Consider the matrix $A={\begin{pmatrix}1&2&3\\3&4&5\\\end{pmatrix}}$ .

• $\operatorname {Row} (A)=\operatorname {span} {(\{{\begin{pmatrix}1&2&3\end{pmatrix}},{\begin{pmatrix}3&4&5\end{pmatrix}}\})}$ ;
• $\operatorname {Col} (A)=\operatorname {span} {(\{(1,3)^{T},(2,4)^{T},(3,5)^{T}\})}$ ;
• $\operatorname {Null} (A)=\{(z,-2z,z)^{T}:z\in \mathbb {R} \}$  (one possible expression)
• since the solution set of ${\begin{pmatrix}1&2&3\\3&4&5\\\end{pmatrix}}{\begin{pmatrix}x\\y\\z\\\end{pmatrix}}={\begin{pmatrix}0\\0\\\end{pmatrix}}$  is $\{(z,-2z,z)^{T}:z\in \mathbb {R} \}$ .
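As a quick sanity check, every vector of the claimed form should be mapped to the zero vector by $A$ . A small sketch, assuming NumPy is available:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [3, 4, 5]])

# Every vector of the form (z, -2z, z)^T should satisfy A x = 0.
for z in (1.0, -2.5, 7.0):
    x = np.array([z, -2*z, z])
    print(A @ x)  # [0. 0.] each time
```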

Example. The set $\{(x,y,z)^{T}:x+y+z=0\}$  is a subspace of $\mathbb {R} ^{3}$ .

Proof. $x+y+z=0\iff {\begin{pmatrix}1&1&1\end{pmatrix}}{\begin{pmatrix}x\\y\\z\\\end{pmatrix}}=0$ , so the set is the null space of the $1\times 3$  matrix, ${\begin{pmatrix}1&1&1\end{pmatrix}}$ , which is a subspace.

$\Box$

Geometrically, the set is a plane passing through the origin in $\mathbb {R} ^{3}$ .

Exercise. Let $A={\begin{pmatrix}1&3&2\\3&2&1\\2&1&3\\\end{pmatrix}}$ .

Select all correct statement(s).

• $\operatorname {Row} (A)$ is a subspace.
• $\operatorname {Col} (A)$ is a subspace.
• $\operatorname {Row} (A)=\operatorname {Col} (A).$
• $\operatorname {Null} {((A|\mathbf {0} ))}$ is a subspace, in which $(A|\mathbf {0} )$ is an augmented matrix.
• $\operatorname {Row} (A^{T})=\operatorname {Row} (A)$

Then, we will introduce some more terminology related to subspaces.

Definition. (Generating set) Let $V$  be a subspace and let $S$  be a subset of $V$ . The set $S$  is a generating set (or spanning set) of $V$  if

$\operatorname {span} (S)=V.$

We may also say that $S$  generates (or spans) $V$  in this case.

Definition. (Basis) Let $V$  be a subspace. A basis (plural: bases) for $V$  is a linearly independent generating set of $V$ .

Remark.

• A basis is quite important, since it tells us the whole structure of $V$  with a minimal number of vectors (a generating set of $V$  can tell us the whole structure of $V$ ).
• The linear independence ensures that there are no 'redundant' vectors in the generating set ('redundant' vectors are the vectors that are linear combinations of the others).
• Given a linearly dependent generating set, we may remove some of the vectors to make it linearly independent (this is known as the reduction theorem, which we will discuss later).
• We usually use $\beta$  to denote a basis, since the initial syllables of 'beta' and 'basis' are similar.

The following theorem highlights the importance of basis.

Theorem. Let $V$  be a subspace of $\mathbb {R} ^{n}$  and let $\beta$  be a subset of $V$ . Then, $\beta$  is a basis for $V$  if and only if each vector in $V$  can be represented as a linear combination of vectors in $\beta$  in a unique way.

Proof.

• Only if part:
• $\beta$  is a generating set, so each vector in $V$  belongs to $\operatorname {span} {\beta }$ , i.e. each vector in $V$  is a linear combination of vectors in $\beta$ .
• Uniqueness follows from the proposition about comparing coefficients for linear independent vectors.
• If part:
• $\beta$  generates $V$  because:
1. by definition of subspace, $\operatorname {span} {\beta }\subseteq V$  (since linear combinations of vectors in $\beta$  (the vectors in $\beta$  are also in $V$ , by $\beta \subseteq V$ ) are belonging to $V$ );
2. on the other hand, since each vector in $V$  can be represented as a linear combination of vectors in $\beta$ , we have $V\subseteq \operatorname {span} {\beta }$ .
3. Thus, we have $\operatorname {span} {\beta }=V$ , so $\beta$  generates $V$  by definition.
• $\beta$  is linearly independent because
• let $\mathbf {v} _{1},\ldots ,\mathbf {v} _{n}$  be the vectors in $\beta$  and
• assume $a_{1}\mathbf {v} _{1}+\cdots +a_{n}\mathbf {v} _{n}=\mathbf {0}$  in which $a_{1},\ldots ,a_{n}$  are not all zero, (i.e. $\mathbf {v} _{1},\ldots ,\mathbf {v} _{n}$  are linearly dependent) then we can express the zero vector in $V$  in two different ways:

$a_{1}\mathbf {v} _{1}+\cdots +a_{n}\mathbf {v} _{n},{\text{ and }}0\mathbf {v} _{1}+\cdots +0\mathbf {v} _{n},$

which contradicts the uniqueness of representation.

$\Box$

Example. (Standard basis) A basis for $\mathbb {R} ^{n}$  is $\{\mathbf {e} _{1},\ldots ,\mathbf {e} _{n}\}$ , which is called the standard basis.

Proof. Let $S=\{\mathbf {e} _{1},\ldots ,\mathbf {e} _{n}\}$ .

• $S$  generates $\mathbb {R} ^{n}$  since

$\mathbb {R} ^{n}=\{(v_{1},\ldots ,v_{n})^{T}:v_{1},\ldots ,v_{n}\in \mathbb {R} \}=\{v_{1}\mathbf {e} _{1}+\cdots +v_{n}\mathbf {e} _{n}:v_{1},\ldots ,v_{n}\in \mathbb {R} \}=\operatorname {span} (S).$

• $S$  is linearly independent since

$v_{1}\mathbf {e} _{1}+\cdots +v_{n}\mathbf {e} _{n}=\mathbf {0} \implies (v_{1},\ldots ,v_{n})^{T}=(0,\ldots ,0)^{T}\implies v_{1}=\cdots =v_{n}=0.$

$\Box$

Example. A basis for the set $S=\{(x,y,z)^{T}:x+y+z=0\}$  is $\{(-1,1,0)^{T},(-1,0,1)^{T}\}$ .

Proof. The general solution to $x+y+z=0$  is

$(x,y,z)^{T}=(-s-t,s,t)^{T}=s(-1,1,0)^{T}+t(-1,0,1)^{T}$

by setting $y=s$  and $z=t$  as free unknowns. Since the general solution is a linear combination of $(-1,1,0)^{T}$  and $(-1,0,1)^{T}$ , $\beta =\{(-1,1,0)^{T},(-1,0,1)^{T}\}$  generates $S$ .  Also, $\beta$  is linearly independent, since
$a(-1,1,0)^{T}+b(-1,0,1)^{T}=(0,0,0)^{T}\implies (-a-b,a,b)^{T}=(0,0,0)^{T}\implies a=b=0.$

$\Box$
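Both defining properties of this basis are easy to verify numerically; a sketch, assuming NumPy is available:

```python
import numpy as np

b1 = np.array([-1, 1, 0])
b2 = np.array([-1, 0, 1])

# Both vectors satisfy x + y + z = 0, so they lie in S...
print(b1.sum(), b2.sum())  # 0 0

# ...and they are linearly independent: the 3x2 matrix
# with these columns has rank 2.
print(np.linalg.matrix_rank(np.column_stack([b1, b2])))  # 2
```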

There are infinitely many other bases, since we can multiply a vector in this basis by an arbitrary nonzero scalar.

Exercise.

Select all correct statement(s).

• $\mathbf {i} ,\mathbf {j}$ generate $\mathbb {R} ^{2}$ .
• $\{\mathbf {i} ,\mathbf {j} \}$ generates $\{\mathbf {i} +\mathbf {j} \}$ .
• The generating set of a subspace is unique.
• The basis of a subspace is unique.
• The basis for a column space is a set containing some column vectors.
• Let $S$ be a generating set of a subspace $V$ . Then, $V$ also generates $S$ .

Then, we will discuss some ways to construct a basis.

Theorem. (Extension and reduction theorem) Let $V$  be a subspace of $\mathbb {R} ^{n}$ . Then, the following hold.

• (Extension theorem) Every linearly independent subset of $V$  can be extended to a basis for $V$ ;
• (Reduction theorem) every finite generating set of $V$  contains a basis for $V$ , i.e. it can be reduced to a basis for $V$ .

Remark. By convention, a basis for the zero space $\{\mathbf {0} \}$  is the empty set $\varnothing$ .

Its proof is complicated, and is thus omitted here.

Corollary. (Existence of basis for subspace of $\mathbb {R} ^{n}$ ) Each subspace of $\mathbb {R} ^{n}$  has a basis.

Proof. We start with the empty set $\varnothing$ . It is linearly independent (since it is not linearly dependent by definition), and is a subset of every set. By the extension theorem, it can be extended to a basis for any given subspace of $\mathbb {R} ^{n}$ . Thus, each subspace of $\mathbb {R} ^{n}$  has a basis, found by this method.

$\Box$

Example. (Using reduction theorem to find basis) A basis for the subspace $V=\{(x+2y+3z,2x+3y+4z,5x+8y+11z)^{T}:x,y,z\in \mathbb {R} \}$  is $\{(1,2,3)^{T},(2,3,4)^{T}\}$ .

Proof. Observe that a generating set of $V$  is $S_{1}=\{(1,2,3)^{T},(2,3,4)^{T},(5,8,11)^{T}\}$ , since

$(x+2y+3z,2x+3y+4z,5x+8y+11z)^{T}=x(1,2,3)^{T}+y(2,3,4)^{T}+z(5,8,11)^{T}.$

By the reduction theorem, $S_{1}$  must contain a basis. Observe that $(5,8,11)^{T}=(1,2,3)^{T}+2(2,3,4)^{T}$  (if this is not observed, we may use the equation in the definition of linear (in)dependence to find it). Thus, the vectors in $V=\operatorname {span} {S_{1}}$  can also be generated by $S_{2}=\{(1,2,3)^{T},(2,3,4)^{T}\}$ . Therefore, $S_{2}$  is a smaller generating set. Since
$c_{1}(1,2,3)^{T}+c_{2}(2,3,4)^{T}=(0,0,0)^{T}\implies (c_{1}+2c_{2},2c_{1}+3c_{2},3c_{1}+4c_{2})=(0,0,0)^{T},$

and the RREF of the augmented matrix representing this SLE is
${\begin{pmatrix}1&0&0\\0&1&0\\0&0&0\\\end{pmatrix}},$

the only solution to this SLE is $c_{1}=c_{2}=0$ , i.e. $S_{2}$  is linearly independent. It follows that $S_{2}$  is a basis.

$\Box$
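The two observations in this proof, that $(5,8,11)^{T}$  is redundant and that removing it does not shrink the span, can be checked numerically; a sketch, assuming NumPy is available:

```python
import numpy as np

v1 = np.array([1, 2, 3])
v2 = np.array([2, 3, 4])
v3 = np.array([5, 8, 11])

print(np.array_equal(v3, v1 + 2*v2))  # True: v3 is redundant in S_1

S1 = np.column_stack([v1, v2, v3])
S2 = np.column_stack([v1, v2])
# Removing v3 does not shrink the span: both matrices have rank 2.
print(np.linalg.matrix_rank(S1), np.linalg.matrix_rank(S2))  # 2 2
```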

Exercise.

It is given that $S=\{(2,0,0)^{T},(0,3,0)^{T}\}$  is a linearly independent subset of the subspace $\mathbb {R} ^{3}$ . After adding a vector $\mathbf {v}$  to $S$ , $S$  becomes a basis for $\mathbb {R} ^{3}$ . Which of the following is (are) possible choice(s) of $\mathbf {v}$ ?

• $(0,0,1)^{T}$
• $(0,0,2)^{T}$
• $(1,0,0)^{T}$
• $(1,2,0)^{T}$

A term that is related to basis is dimension.

Definition. (Dimension) Let $\beta$  be a basis for a subspace $V$  of $\mathbb {R} ^{n}$ . The number of vectors in $\beta$ , denoted by $\operatorname {dim} (V)$ , is the dimension of $V$ .

Remark.

• By convention, the dimension of the zero space $\{\mathbf {0} \}$  is $0$ .
• That is, we say that the number of vectors in empty set $\varnothing$  is $0$ .
• When the subspace has a higher dimension, there is more 'flexibility', since there are more parameters that are changeable.

Recall that there are infinitely many bases for a subspace. Luckily, all bases have the same number of vectors, and so the dimension of a subspace is unique, as one would expect intuitively. This is assured by the following theorem.

Theorem. (Uniqueness of dimension) The dimension of a subspace is unique, i.e., if $\beta _{1}$  and $\beta _{2}$  are two finite bases for a subspace $V$  of $\mathbb {R} ^{n}$ , then the number of vectors in $\beta _{1}$  equals that of $\beta _{2}$ .

Proof. Let $\beta _{1}=\{\mathbf {u} _{1},\ldots ,\mathbf {u} _{k}\}$  and $\beta _{2}=\{\mathbf {w} _{1},\ldots ,\mathbf {w} _{\ell }\}$ . Also, let $U_{n\times k}$  and $W_{n\times \ell }$  be the matrices with $\mathbf {u} _{i}$ 's and $\mathbf {w} _{j}$ 's as columns.

By definition of basis, $\operatorname {span} {\beta _{1}}=V$ , and $\operatorname {span} {\beta _{2}}=V$ , so

$\mathbf {w} _{1}=a_{11}\mathbf {u} _{1}+a_{21}\mathbf {u} _{2}+\cdots +a_{k1}\mathbf {u} _{k}=U\mathbf {a} _{1}$

for some $\mathbf {a} _{1}=(a_{11},\ldots ,a_{k1})^{T}$ . By symmetry, $\mathbf {w} _{2}=U\mathbf {a} _{2},\ldots ,\mathbf {w} _{\ell }=U\mathbf {a} _{\ell }$ . Thus, $W=UA$ , where $A$  is the $k\times \ell$  matrix whose columns are $\mathbf {a} _{1},\ldots ,\mathbf {a} _{\ell }$ .

We claim that $A\mathbf {x} =\mathbf {0}$  has only the trivial solution, and this is true, since:

• If $A\mathbf {x} =\mathbf {0}$ , then $W\mathbf {x} =UA\mathbf {x} =U\mathbf {0} =\mathbf {0}$ , and thus $\mathbf {x} =\mathbf {0}$ , since the columns of $W$  are linearly independent, by the proposition about the relationship between linear independence and the number of solutions of an SLE.

Thus, the RREF of $(A|\mathbf {0} )$  (with $\ell +1$  columns) has a leading one in each of the first $\ell$  columns. Since there are $k$  rows in $(A|\mathbf {0} )$ , we have $k\geq \ell$  (if $k<\ell$ , we cannot have $\ell$  leading ones, since there are at most $k<\ell$  leading ones).

By symmetry (swapping the roles of $\beta _{1}$  and $\beta _{2}$ ), $k\leq \ell$ , and thus $k=\ell$ .

$\Box$

Example. (Dimension of Euclidean space) The dimension of $\mathbb {R} ^{n}$  is $n$ , since a basis for $\mathbb {R} ^{n}$  is the standard basis $\{\mathbf {e} _{1},\ldots ,\mathbf {e} _{n}\}$ , which contains $n$  vectors.

Example. (Dimension of plane) The dimension of the subspace $\{(x,y,z)^{T}:x+y+z=0\}$  is $2$ , since a basis for this subspace is $\{(-1,1,0)^{T},(-1,0,1)^{T}\}$  (from a previous example), which contains $2$  vectors. Geometrically, the subspace is a plane. In general, every plane through the origin has dimension $2$ .
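As a numerical check (a sketch assuming NumPy), the two basis vectors found for this plane indeed lie on it and are linearly independent:

```python
import numpy as np

# The claimed basis vectors of the plane x + y + z = 0, as columns.
B = np.array([[-1, -1],
              [ 1,  0],
              [ 0,  1]])

print(B.sum(axis=0))              # [0 0]: each column satisfies x + y + z = 0
print(np.linalg.matrix_rank(B))   # 2: the columns are linearly independent
```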

Exercise.

Select all correct statement(s).

• The number of vectors in each finite generating set of subspace $V$  is greater than or equal to the dimension of $V$ .
• The dimension of row space of a matrix is its number of rows.
• The dimension of column space of a matrix is its number of columns.

Next, we will discuss bases for the row, column and null spaces, and also their dimensions.

Proposition. (Basis for row space) Let $A$  be a matrix and $R$  be the RREF of $A$ . Then, a basis for $\operatorname {Row} (A)$  is the set of all nonzero rows of $R$ .

Proof. It can be proved that the row space is unchanged when an ERO is performed. E.g.,

• (type I ERO) $\operatorname {span} {(\{\mathbf {r} _{1},\mathbf {r} _{2},\mathbf {r} _{3}\})}=\operatorname {span} {(\{\mathbf {r} _{2},\mathbf {r} _{1},\mathbf {r} _{3}\})}$ ;
• (type II ERO) $\operatorname {span} {(\{\mathbf {r} _{1},\mathbf {r} _{2},\mathbf {r} _{3}\})}=\operatorname {span} {(\{k\mathbf {r} _{1},\mathbf {r} _{2},\mathbf {r} _{3}\})}$ ;
• (type III ERO) $\operatorname {span} {(\{\mathbf {r} _{1},\mathbf {r} _{2},\mathbf {r} _{3}\})}=\operatorname {span} {(\{\mathbf {r} _{1}+k\mathbf {r} _{2},\mathbf {r} _{2},\mathbf {r} _{3}\})}$ .

Assuming this is true, we have $\operatorname {Row} (A)=\operatorname {Row} (R)$ . It can be proved that the nonzero rows of $R$  generate $\operatorname {Row} (R)$ , and they are linearly independent, so the nonzero rows form a basis for $\operatorname {Row} (R)=\operatorname {Row} (A)$ .

$\Box$

Remark. Another basis for $\operatorname {Row} (A)$  is the set of the rows of $A$  corresponding to the nonzero rows of $R$ , i.e. the rows originally located at the positions of the nonzero rows of $R$ , since this set also generates the row space and is also linearly independent.

Example. Let $A={\begin{pmatrix}1&3&4\\2&5&3\\6&15&9\\\end{pmatrix}}$ . It can be proved that its RREF is ${\begin{pmatrix}1&0&-11\\0&1&5\\0&0&0\\\end{pmatrix}}$ , and therefore a basis for $\operatorname {Row} (A)$  is $\beta =\{{\begin{pmatrix}1&0&-11\end{pmatrix}},{\begin{pmatrix}0&1&5\end{pmatrix}}\}$  (it can be proved that $\beta$  is linearly independent), and its dimension is thus $2$ .

Another basis is $\{{\begin{pmatrix}1&3&4\end{pmatrix}},{\begin{pmatrix}2&5&3\end{pmatrix}}\}$ , the corresponding rows to the nonzero rows of the RREF.
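We can verify numerically that the two nonzero rows of the RREF span exactly $\operatorname {Row} (A)$  (a sketch assuming NumPy): the claimed basis rows are independent, and stacking them onto $A$  does not increase the rank, so each set of rows lies in the span of the other.

```python
import numpy as np

A = np.array([[1, 3, 4],
              [2, 5, 3],
              [6, 15, 9]])

# Nonzero rows of the RREF: the claimed basis for Row(A).
basis = np.array([[1, 0, -11],
                  [0, 1, 5]])

print(np.linalg.matrix_rank(basis))                  # 2: the basis rows are independent
# rank of A stacked with the basis = rank(A) = rank(basis) = 2,
# so neither set of rows adds anything to the other: span(basis) = Row(A).
print(np.linalg.matrix_rank(np.vstack([A, basis])))  # 2
print(np.linalg.matrix_rank(A))                      # 2
```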

Proposition. (Basis for column space) Let $A$  be a matrix with columns $\mathbf {a} _{1},\ldots ,\mathbf {a} _{n}$ , and let $R$  be the RREF of $A$ . Suppose columns $i_{1},\ldots ,i_{k}$  are the only columns of $R$  containing leading ones (i.e., $i_{1},\ldots ,i_{k}$  are the indices of the columns containing leading ones). Then, a basis for $\operatorname {Col} (A)$  is $\{\mathbf {a} _{i_{1}},\ldots ,\mathbf {a} _{i_{k}}\}$ .

Proof. Using Gauss-Jordan algorithm, $(A|\mathbf {0} )$  can be transformed to $(R|\mathbf {0} )$  via EROs, and they are row equivalent. Thus, $A\mathbf {x} =\mathbf {0}$  and $R\mathbf {x} =\mathbf {0}$  have the same solution set. Then, it can be proved that linearly (in)dependent columns of $A$  correspond to linearly (in)dependent columns of $R$ . It follows that $\{\mathbf {a} _{i_{1}},\ldots ,\mathbf {a} _{i_{k}}\}$  is linearly independent, and all other columns belong to the span of this set.

$\Box$

Example. Let $A={\begin{pmatrix}1&3&4\\2&5&3\\6&15&9\\\end{pmatrix}}$ . From the previous example, the RREF of $A$  is ${\begin{pmatrix}{\color {green}1}&0&-11\\0&{\color {green}1}&5\\0&0&0\\\end{pmatrix}}$ . So, the columns of $A$  corresponding to the columns of $R$  containing leading ones, namely the 1st and 2nd columns, form a basis. Thus, a basis for $\operatorname {Col} (A)$  is $\{(1,2,6)^{T},(3,5,15)^{T}\}$ .

If we let $\mathbf {a} _{1},\mathbf {a} _{2},\mathbf {a} _{3}$  be the 1st, 2nd, 3rd columns of $A$ , then $\mathbf {a} _{3}=-11\mathbf {a} _{1}+5\mathbf {a} _{2}$ . If we let the notations be the corresponding columns of $R$  instead, the same equation also holds.
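Both claims can be confirmed numerically (a sketch assuming NumPy): the pivot columns of $A$  are independent, and $\mathbf {a} _{3}=-11\mathbf {a} _{1}+5\mathbf {a} _{2}$ , with the coefficients read off from the third column of the RREF.

```python
import numpy as np

A = np.array([[1, 3, 4],
              [2, 5, 3],
              [6, 15, 9]])

a1, a2, a3 = A[:, 0], A[:, 1], A[:, 2]

print(np.linalg.matrix_rank(A[:, :2]))        # 2: the two pivot columns are independent
print(np.array_equal(a3, -11 * a1 + 5 * a2))  # True: a3 = -11*a1 + 5*a2
```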

Example. (Basis for null space) Let $A={\begin{pmatrix}1&3&4\\2&5&3\\6&15&9\\\end{pmatrix}}$ . From the previous example, the RREF of $A$  is ${\begin{pmatrix}{\color {green}1}&0&-11\\0&{\color {green}1}&5\\0&0&0\\\end{pmatrix}}$ . A basis for $\operatorname {Null} (A)$  is $\{(11,-5,1)^{T}\}$ , since the solution set of $A\mathbf {x} =\mathbf {0}$  is $\{(11t,-5t,t)^{T}:t\in \mathbb {R} \}$ , and its dimension is thus $1$ .
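The claimed basis vector for the null space can be verified directly (a sketch assuming NumPy): $A(11,-5,1)^{T}=\mathbf {0}$ , and the same holds for every scalar multiple.

```python
import numpy as np

A = np.array([[1, 3, 4],
              [2, 5, 3],
              [6, 15, 9]])

# Claimed basis vector for Null(A).
v = np.array([11, -5, 1])

print(A @ v)          # [0 0 0]: v solves Ax = 0
print(A @ (3 * v))    # [0 0 0]: every multiple (here t = 3) is also a solution
```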

Exercise.

Let $A$  be a matrix, and $R$  be its RREF. Select all correct statement(s).

• A basis for $\operatorname {Col} (A)$  is the set of all columns of $R$  containing leading ones.
• Each basis for $\operatorname {Col} (I_{3})$  and $\operatorname {Row} (I_{3})$  contains the same number of vectors.
• The dimension of $\operatorname {Row} (A)$  is the number of leading ones in $R$ .
• The dimension of the basis for $\operatorname {Row} (A)$  is smaller than or equal to the number of rows of $A$ .
• The dimension of the basis for $\operatorname {Col} (A)$  is smaller than or equal to the number of columns of $A$ .

Proposition. (Dimension of null space) Let $A$  be a matrix. The dimension of $\operatorname {Null} (A)$  is the number of independent unknowns in the solution set of $A\mathbf {x} =\mathbf {0}$ .

Proof. The idea of the proof is illustrated in the above example: if there are $k$  independent unknowns in the solution set, we need a set consisting of at least $k$  vectors to generate the solution set.

$\Box$

We have special names for the dimensions of row, column and null spaces, as follows:

Definition. (Row rank, column rank and nullity) Let $A$  be a matrix. The dimensions of $\operatorname {Row} (A),\operatorname {Col} (A)$  and $\operatorname {Null} (A)$  are called the row rank, column rank, and nullity of $A$ . They are denoted by $\operatorname {row\;rank} (A),\operatorname {column\;rank} (A)$  and $\operatorname {nullity} (A)$  respectively.

Indeed, the row rank and the column rank of each matrix are the same.

Proposition. (Row and column rank both equal number of leading ones of RREF) For each matrix $A$ , $\operatorname {row\;rank} (A)=\operatorname {column\;rank} (A)$  is the number of leading ones of the RREF of $A$ .

Proof. We can see this from the bases found by the proposition about basis for row space (number of nonzero rows is the number of leading ones of the RREF of $A$ ) and the proposition about basis for column space (there are $k$  column vectors, and $k$  is the number of leading ones by the assumption).

$\Box$

Because of this proposition, we have the following definition.

Definition. (Rank) Let $A$  be a matrix. The rank of $A$ , denoted by $\operatorname {rank} (A)$ , is the common value of the row rank and the column rank of $A$ .

Remark. We usually use this terminology and notation, instead of those for row rank and column rank.

Then, we will introduce an important theorem that relates rank and nullity.

Theorem. (Rank-nullity theorem) Let $A$  be an $m\times {\color {green}n}$  matrix. Then,

$\operatorname {rank} (A)+\operatorname {nullity} (A)={\color {green}n}$

Proof. Let $R$  be the RREF of $A$ . $\operatorname {rank} (A)$  is the number of leading ones of $R$ , and $\operatorname {nullity} (A)$  is the number of independent unknowns of $R\mathbf {x} =\mathbf {0}$ , which is ${\color {green}n}$  minus the number of leading ones of $R$ . The result follows.

$\Box$
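For the running example, the theorem can be checked numerically. This is a sketch assuming NumPy; the nullity $1$  comes from the null space basis $\{(11,-5,1)^{T}\}$  found earlier.

```python
import numpy as np

A = np.array([[1, 3, 4],
              [2, 5, 3],
              [6, 15, 9]])

rank = np.linalg.matrix_rank(A)   # the number of leading ones in the RREF
nullity = 1                       # Null(A) has basis {(11,-5,1)^T}, found earlier
n = A.shape[1]                    # the number of columns

print(rank, nullity, n)           # 2 1 3
print(rank + nullity == n)        # True
```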

Example. Let $A={\begin{pmatrix}1&3&4\\2&5&3\\6&15&9\\\end{pmatrix}}$ . From previous examples,

• A basis for $\operatorname {Row} (A)$  is $\{{\begin{pmatrix}1&0&-11\end{pmatrix}},{\begin{pmatrix}0&1&5\end{pmatrix}}\}$ ;
• a basis for $\operatorname {Col} (A)$  is $\{(1,2,6)^{T},(3,5,15)^{T}\}$