Introductory Linear Algebra/Vectors and subspaces

Vectors

Introduction

Without distinguishing row and column vectors, we can define vectors as follows:

Definition. (Vector) Let $n$ be a positive integer. A vector is an $n$-tuple $(x_1, x_2, \ldots, x_n)$ of real numbers. The set $\mathbb{R}^n$ of all such vectors is the Euclidean space of dimension $n$.

Remark.

  • We will define dimension later.
  • We use a boldface letter, e.g. $\mathbf{x}$, to denote a vector. In written (handwritten) form, we may write $\vec{x}$ instead.
  • In particular, we use $\mathbf{0}$ to denote the zero vector, in which each entry is zero.
  • The entries $x_1, x_2, \ldots, x_n$ are called the coordinates or entries of $\mathbf{x}$.

A special type of vector is the standard vector.

Definition. (Standard vector) The standard vectors in $\mathbb{R}^n$ are $\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n$, where the $i$th entry of $\mathbf{e}_i$ is $1$ and all other entries are $0$, for each $i \in \{1, 2, \ldots, n\}$.

Remark.

  • In $\mathbb{R}^2$, $\mathbf{e}_1$ and $\mathbf{e}_2$ are usually denoted by $\mathbf{i}$ and $\mathbf{j}$ respectively.
  • In $\mathbb{R}^3$, $\mathbf{e}_1$, $\mathbf{e}_2$ and $\mathbf{e}_3$ are usually denoted by $\mathbf{i}$, $\mathbf{j}$ and $\mathbf{k}$ respectively.

Example.

  • In $\mathbb{R}^2$, $\mathbf{e}_1 = (1, 0)$.
  • In $\mathbb{R}^3$, $\mathbf{e}_1 = (1, 0, 0)$.
  • In $\mathbb{R}^4$, $\mathbf{e}_1 = (1, 0, 0, 0)$.

We can see that $\mathbf{e}_1$ in $\mathbb{R}^2$ is different from $\mathbf{e}_1$ in $\mathbb{R}^3$.
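As a quick computational illustration (a minimal NumPy sketch; the dimensions printed below are our own choice, since the original example values are not shown above), the standard vectors of $\mathbb{R}^n$ are exactly the columns of the $n \times n$ identity matrix:

    import numpy as np

    # The standard vectors of R^n are the columns of the n x n identity matrix.
    n = 3
    e = [np.eye(n)[:, i] for i in range(n)]
    print(e[0])  # [1. 0. 0.] -- this is e_1 in R^3

    # e_1 in R^2 is a different object from e_1 in R^3:
    print(np.eye(2)[:, 0])  # [1. 0.] -- a 2-tuple, not a 3-tuple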

However, in linear algebra, we sometimes need to distinguish row and column vectors, which are defined as follows:

Definition. (Row and column vector) A row vector is a $1 \times n$ matrix, and a column vector is an $n \times 1$ matrix.

Remark.

  • It is more common to use column vectors.
  • Because of this, we can apply the definitions of addition and scalar multiplication of a matrix to the corresponding vector operations.

Example. (Row and column vectors) $\begin{pmatrix} 1 & 2 & 3 \end{pmatrix}$ is a row vector, and $\begin{pmatrix} 4 \\ 5 \\ 6 \end{pmatrix}$ is a column vector.

Remark.

  • To save space, we may use $(x_1, x_2, \ldots, x_n)^T$, the transpose of a row vector, to represent a column vector.
  • To save more space, it is more common to denote this transpose by $\mathbf{x}^T$.
  • On the other hand, we usually do not denote row vectors by $(x_1, x_2, \ldots, x_n)$, to avoid confusion with the notation for vectors (without specifying row or column vectors).

The two basic vector operations are addition and scalar multiplication. Using these two operations only, we can combine multiple vectors as in the following definition.

Definition. (Linear combination) Let $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_k \in \mathbb{R}^n$. A vector $\mathbf{v}$ is a linear combination of $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_k$ if

$\mathbf{v} = c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_k \mathbf{v}_k$

for some scalars (or real numbers) $c_1, c_2, \ldots, c_k$.

Example. The vector   is a linear combination of   and  , while the vector   is not.

Proof. We have

 
and we can transform the augmented matrix representing this SLE as follows:
 
Then, we can directly read off that the unique solution is  . So, we can express   as a linear combination of   and  .

On the other hand, we have

 
and we can transform the augmented matrix representing this SLE as follows:
 
Since there is a leading one in the 3rd column, the SLE is inconsistent, and therefore we cannot express   as a linear combination of   and  .
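The augmented-matrix computation above can be mirrored in code. Below is a minimal SymPy sketch; the vectors are hypothetical stand-ins, since the ones used in this example are not reproduced here. We row-reduce the augmented matrix and check whether its last column contains a leading one.

    from sympy import Matrix

    # Hypothetical vectors (stand-ins for the ones in the example).
    v1 = Matrix([1, 0, 1])
    v2 = Matrix([0, 1, 1])
    v = Matrix([2, 3, 5])  # equals 2*v1 + 3*v2, so it is a linear combination

    # Augmented matrix [v1 v2 | v] and its RREF.
    aug = v1.row_join(v2).row_join(v)
    rref, pivots = aug.rref()

    # v is a linear combination of v1 and v2 iff the last column
    # of the augmented matrix contains NO leading one.
    print((aug.cols - 1) not in pivots)  # True
    print(rref)  # the coefficients can be read off the last column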

 

 
Exercise.

1 Choose linear combination(s) of  .

 .
 .
 .
 .
 .

2 Select all correct statement(s).

If a nonzero vector   is a linear combination of a vector  , and also a linear combination of a vector  , then   is a linear combination of  .
A linear combination of vectors   and a linear combination of vectors   are both linear combinations of vectors  .
The zero vector is a linear combination of arbitrary vector(s).
There are infinitely many possible linear combinations of arbitrary vector(s).
  is not a linear combination of vectors  .

Another concept that is closely related to linear combinations is span.

Definition. (Span)

 
(Figure) The cross-hatched plane is the span of two vectors in $\mathbb{R}^3$.

Let $S$ be a nonempty subset of $\mathbb{R}^n$. The span of $S$, denoted by $\operatorname{span}(S)$, is the set of all linear combinations of the vectors in $S$.

Remark.

  • It follows that $\operatorname{span}(S)$ contains infinitely many vectors (provided that $S$ contains at least one nonzero vector), since there are infinitely many possible linear combinations of the vectors in $S$.
  • It follows that we can use the vectors belonging to the set $S$ to generate each vector in $\operatorname{span}(S)$.
  • Strictly speaking, we take the span of a set containing some vector(s), not the span of the vectors themselves.

Example. The span of the set   is

 
since linear combinations of   and   are of the form
 
Geometrically, the span is a plane in  .

The span of the set   is

 
since linear combinations of   are of the form
 
Geometrically, the span is a line in  .
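Membership in a span can also be tested numerically: a vector $\mathbf{w}$ lies in the span of some vectors exactly when appending $\mathbf{w}$ as an extra column does not increase the rank. A minimal NumPy sketch with hypothetical vectors:

    import numpy as np

    # Hypothetical spanning vectors and a test vector.
    v1 = np.array([1.0, 0.0, 2.0])
    v2 = np.array([0.0, 1.0, 1.0])
    w = np.array([3.0, 4.0, 10.0])  # equals 3*v1 + 4*v2

    A = np.column_stack([v1, v2])
    A_aug = np.column_stack([v1, v2, w])

    # w is in span{v1, v2} iff the rank is unchanged after appending w.
    print(np.linalg.matrix_rank(A_aug) == np.linalg.matrix_rank(A))  # True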
 
Exercise.

1 Select all correct expression(s) for  , in which  .

 .
Empty set,  .
 .
 .
 

2 Select all correct expression(s) for  , in which   are column vectors.

 .
 .
 .
 .

3 Let   and   be sets containing some vectors, which are nonempty subsets of  . Select all correct statement(s).

 
If  , then  .
If  , then  
 
  for each  


Linear independence

Definition. (Linear (in)dependence)

 
(Figure) Linearly independent vectors.
(Figure) Linearly dependent vectors.

Let $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_k\}$ be a subset of $\mathbb{R}^n$. The set is linearly dependent if there exist scalars $c_1, c_2, \ldots, c_k$ that are not all zero such that

$c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_k \mathbf{v}_k = \mathbf{0}.$

Otherwise, the set is linearly independent.

Remark.

  • (terminology) We may also say that the vectors $\mathbf{v}_1, \ldots, \mathbf{v}_k$ are linearly (in)dependent, instead of saying that the set containing these vectors is linearly (in)dependent.
  • If the vectors are linearly dependent, it is possible that some of $c_1, \ldots, c_k$ in the equation are zero (just not all of them).
  • Equivalently, the vectors are linearly independent if 'whenever $c_1 \mathbf{v}_1 + \cdots + c_k \mathbf{v}_k = \mathbf{0}$, the only solution is $c_1 = c_2 = \cdots = c_k = 0$'.
  • This is a more common way to check linear independence (see the sketch below).
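Here is a minimal SymPy sketch of that check (the vectors are hypothetical): the columns of a matrix are linearly independent exactly when the homogeneous system has only the trivial solution, i.e. when the null space is trivial.

    from sympy import Matrix

    # Hypothetical vectors placed as the columns of a matrix.
    A = Matrix([[1, 0, 1],
                [0, 1, 1],
                [0, 0, 1]])

    # Columns are linearly independent iff A x = 0 has only x = 0,
    # i.e. iff the null space contains no basis vectors.
    print(A.nullspace() == [])  # True: the columns are linearly independent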

Next, we introduce an intuitive result about linear dependence, in the sense that it matches the name 'linearly dependent'.

Proposition. (Equivalent condition for linear dependence) The vectors $\mathbf{v}_1, \ldots, \mathbf{v}_k$ are linearly dependent if and only if one of them is a linear combination of the others.

Proof.

  • Only if part:
  • without loss of generality, suppose $c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_k \mathbf{v}_k = \mathbf{0}$ in which $c_1 \neq 0$ (we can replace $c_1$ by another scalar, and the result still holds by symmetry).
  • Then, $\mathbf{v}_1 = -\frac{c_2}{c_1} \mathbf{v}_2 - \cdots - \frac{c_k}{c_1} \mathbf{v}_k$, i.e. $\mathbf{v}_1$ is a linear combination of the others.
  • If part:
  • without loss of generality, suppose $\mathbf{v}_1 = c_2 \mathbf{v}_2 + \cdots + c_k \mathbf{v}_k$ (we can similarly replace $\mathbf{v}_1$ by another vector, and the result still holds by symmetry).
  • Then, $\mathbf{v}_1 - c_2 \mathbf{v}_2 - \cdots - c_k \mathbf{v}_k = \mathbf{0}$.
  • Since the coefficient of $\mathbf{v}_1$ is nonzero (it is one), $\mathbf{v}_1, \ldots, \mathbf{v}_k$ are linearly dependent.

 

Remark.

  • This does not mean that each of them is a linear combination of the others.
  • If vectors are 'linearly dependent', we may intuitively think that they are related in a linear sense. This intuition is correct: one of them is a linear combination of the others, i.e. it has a linear relationship with the other vectors.

Example. The vectors   are linearly dependent.

Proof.

  • Consider the equation  .
  • Since the determinant of the coefficient matrix is
 
the coefficient matrix is non-invertible.
  • Thus, the homogeneous SLE has a non-trivial solution by the simplified invertible matrix theorem, i.e. there exist scalars that are not all zero satisfying the equation.
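The determinant test in this proof is easy to reproduce (a NumPy sketch; the matrix is a hypothetical one whose third column is the sum of the first two, so its columns are linearly dependent):

    import numpy as np

    # Hypothetical matrix: third column = first column + second column.
    A = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [2.0, 3.0, 5.0]])

    # det(A) = 0 means A is non-invertible, so A x = 0 has a non-trivial
    # solution and the columns of A are linearly dependent.
    print(np.isclose(np.linalg.det(A), 0.0))  # True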

 


Then, we will introduce a proposition for linearly independent vectors.

Proposition. (Comparing coefficients for linearly independent vectors) Let $\mathbf{v}_1, \ldots, \mathbf{v}_k$ be linearly independent vectors. If

$a_1 \mathbf{v}_1 + \cdots + a_k \mathbf{v}_k = b_1 \mathbf{v}_1 + \cdots + b_k \mathbf{v}_k,$

then $a_1 = b_1, a_2 = b_2, \ldots, a_k = b_k$.

Proof.

$(a_1 - b_1) \mathbf{v}_1 + \cdots + (a_k - b_k) \mathbf{v}_k = \mathbf{0}.$

By the linear independence of the vectors,

$a_1 - b_1 = \cdots = a_k - b_k = 0,$

and the result follows.

 

Example. (Finding unknown coefficients by comparison) Suppose   are linearly independent vectors, and

 
Then, by comparing coefficients, we have
 

Example. Consider three linearly dependent vectors  . Even if

 
we may not have  . E.g., we have
 
 

Exercise. Let   be linearly independent vectors.

1 Select all correct statement(s).

  are linearly independent.
  are linearly independent.
  are linearly dependent
  are linearly independent for each scalar  

2 Select all correct statement(s).

A single arbitrary vector is linearly independent.
If   are linearly independent, and   are linearly independent, then   are linearly independent.
 
  are linearly independent.


Then, we will discuss two results that relate linear independence to SLEs.

Proposition. (Relationship between linear independence and invertibility) Let $\{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$ be a set of vectors in $\mathbb{R}^n$ (the number of vectors must be $n$, so that the following matrix is square). Let $A$ be the square matrix whose columns are $\mathbf{v}_1, \ldots, \mathbf{v}_n$ respectively. Then, $\{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$ is linearly independent if and only if $A$ is invertible.

Proof. Setting $\mathbf{x} = (c_1, \ldots, c_n)^T$,

$c_1 \mathbf{v}_1 + \cdots + c_n \mathbf{v}_n = \mathbf{0} \iff A\mathbf{x} = \mathbf{0},$
a homogeneous SLE. By the definition of linear independence, '$\{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$ is linearly independent' is equivalent to '$A\mathbf{x} = \mathbf{0}$ only has the trivial solution', which is in turn equivalent to '$A$ is invertible' by the simplified invertible matrix theorem.

 

Remark. This gives a convenient way to check linear (in)dependence if the number of vectors involved matches the number of entries of each of them.

Example. The set   is linearly independent, since

 

Proposition. (Relationship between linear independence and number of solutions of an SLE) Let $A\mathbf{x} = \mathbf{b}$ be an SLE ($A$ may not be a square matrix, and $\mathbf{b}$ can be nonzero). If the columns of $A$ are linearly independent, then the SLE has at most one solution.

Proof. Let $\mathbf{a}_1, \ldots, \mathbf{a}_n$ be the columns of $A$, and let $\mathbf{x} = (x_1, \ldots, x_n)^T$.

Then, $A\mathbf{x} = x_1 \mathbf{a}_1 + \cdots + x_n \mathbf{a}_n$. Assume there are two distinct solutions $(s_1, \ldots, s_n)^T$ and $(t_1, \ldots, t_n)^T$ for the SLE, i.e.,

$s_1 \mathbf{a}_1 + \cdots + s_n \mathbf{a}_n = \mathbf{b} = t_1 \mathbf{a}_1 + \cdots + t_n \mathbf{a}_n.$
But, by the proposition about comparing coefficients for linearly independent vectors,
$s_1 = t_1, \ldots, s_n = t_n,$
which contradicts our assumption that these two solutions are distinct.

 

Remark.

  • A special case: if the columns of $A$ are linearly independent, then $A\mathbf{x} = \mathbf{0}$ only has the trivial solution, so $A$ is invertible (when $A$ is square), which matches the previous proposition.
  • 'The SLE has at most one solution' is equivalent to 'the SLE has either no solution or a unique solution'.

Example. The set   is linearly independent, since none of its vectors is a linear combination of the others. Thus, the SLE

 
has at most one solution for each  . E.g., the SLE
 
has no solutions, as can be seen by considering the augmented matrix of this SLE:
 
Since the determinant of the augmented matrix is nonzero, the augmented matrix is invertible, and thus its RREF is the identity matrix by the simplified invertible matrix theorem. Then, since there is a leading one in the 3rd column of the RREF, the SLE is inconsistent.
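The 'no solutions' case can be checked concretely. A SymPy sketch with a hypothetical non-square system whose columns are linearly independent but whose right-hand side lies outside their span:

    from sympy import Matrix, linsolve, symbols

    x1, x2 = symbols('x1 x2')

    # Hypothetical 3x2 system: the two columns are linearly independent,
    # but b is not in their span, so the SLE has no solutions.
    A = Matrix([[1, 0],
                [0, 1],
                [1, 1]])
    b = Matrix([0, 0, 1])

    print(A.nullspace() == [])       # True: columns are independent
    print(linsolve((A, b), x1, x2))  # EmptySet: no solutions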
 

Exercise. Let  . It is given that  .

Select all correct statement(s).

The set   is linearly independent.
The set   is linearly independent.
The homogeneous SLE   only has the trivial solution.
The SLE   may have infinitely many solutions


Subspaces

Then, we will discuss subspaces. Simply speaking, they are subsets of $\mathbb{R}^n$ that have some nice properties. To be more precise, we have the following definition.

Definition. (Subspace) A subset $V$ of $\mathbb{R}^n$ is a subspace of $\mathbb{R}^n$ if all of the following conditions hold.

  1. $\mathbf{0} \in V$;
  2. (closure under addition) for each $\mathbf{u}, \mathbf{v} \in V$, $\mathbf{u} + \mathbf{v} \in V$;
  3. (closure under scalar multiplication) for each $\mathbf{v} \in V$ and scalar $c$, $c\mathbf{v} \in V$.

Remark.

  • '$V$' stands for vector space, since a subspace is a kind of vector space that is a subset of some larger vector space.
  • The definition of vector space involves more conditions and is more complicated, and is thus not included here.
  • For subspaces, after these conditions are satisfied, the remaining conditions for vector spaces are automatically satisfied.
  • If $V$ is known to be nonempty, then the first condition is redundant.
  • But, we may not know whether $V$ is empty or nonempty, so it may be more convenient to simply check the first condition.
  • The redundancy is because for each $\mathbf{v} \in V$, $-\mathbf{v} = (-1)\mathbf{v} \in V$ by closure under scalar multiplication, and thus $\mathbf{0} = \mathbf{v} + (-\mathbf{v}) \in V$ by closure under addition.

Example. (Zero space) The set containing only the zero vector, $\{\mathbf{0}\}$, is a subspace of $\mathbb{R}^n$, and is called the zero space.

Proof.

  • $\mathbf{0} \in \{\mathbf{0}\}$; ✓
  • $\mathbf{0} + \mathbf{0} = \mathbf{0} \in \{\mathbf{0}\}$; ✓
  • $c\mathbf{0} = \mathbf{0} \in \{\mathbf{0}\}$ for each scalar $c$. ✓

 


Example. The span of the set   is a subspace of  .

Proof. Let $S$ denote the given set, and let $\mathbf{u}, \mathbf{v} \in \operatorname{span}(S)$.

  • $\mathbf{0} \in \operatorname{span}(S)$, since $\mathbf{0}$ (with every coefficient zero) is a linear combination of the vectors in $S$; ✓
  • $\mathbf{u} + \mathbf{v} \in \operatorname{span}(S)$, since
  • writing $\mathbf{u}$ and $\mathbf{v}$ as linear combinations of the vectors in $S$ and adding them, $\mathbf{u} + \mathbf{v}$ is again a linear combination of the vectors in $S$. ✓
  • $c\mathbf{u} \in \operatorname{span}(S)$, since
  • multiplying such a linear combination by the scalar $c$, $c\mathbf{u}$ is again a linear combination of the vectors in $S$. ✓

 


We can see from this example that the entries themselves do not really matter. Indeed, we have the following general result:

Proposition. (The span of a finite set is a subspace) For each finite nonempty set $S \subseteq \mathbb{R}^n$, $\operatorname{span}(S)$ is a subspace of $\mathbb{R}^n$.

Proof. The idea of the proof is shown in the above example:

  • $\mathbf{0} \in \operatorname{span}(S)$ since the zero vector is a linear combination of the vectors belonging to $S$; ✓
  • $\mathbf{u} + \mathbf{v} \in \operatorname{span}(S)$ for each $\mathbf{u}, \mathbf{v} \in \operatorname{span}(S)$, since $\mathbf{u} + \mathbf{v}$ is a linear combination of the vectors belonging to $S$; ✓
  • $c\mathbf{u} \in \operatorname{span}(S)$ for each $\mathbf{u} \in \operatorname{span}(S)$ and scalar $c$, since $c\mathbf{u}$ is a linear combination of the vectors belonging to $S$. ✓

 

 

Exercise.

Select all correct statement(s).

The empty set, $\varnothing$, is a subspace.
  is a subspace.
  is a subspace.
  is a subspace.
  is a subspace.



In particular, we have special names for some of the spans, as follows:

Definition. (Row, column, and null space) Let $A$ be a matrix. The row (column) space of $A$ is the span of the rows (columns) of $A$, denoted by $\operatorname{Row}(A)$ ($\operatorname{Col}(A)$). The null space (or kernel) of $A$ is the solution set of the homogeneous SLE $A\mathbf{x} = \mathbf{0}$, denoted by $\operatorname{Null}(A)$ (or $\ker(A)$, for the name 'kernel').

Remark.

  • It follows from the proposition about the span of a finite set being a subspace that row and column spaces are subspaces.
  • Row and column spaces may belong to different Euclidean spaces,
  • e.g. for an $m \times n$ matrix $A$, $\operatorname{Row}(A)$ is a subspace of $\mathbb{R}^n$, and $\operatorname{Col}(A)$ is a subspace of $\mathbb{R}^m$.
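SymPy can produce bases for all three spaces directly (a sketch with a hypothetical $2 \times 3$ matrix; `rowspace`, `columnspace` and `nullspace` return lists of basis vectors, and the spaces themselves are the spans of these lists):

    from sympy import Matrix

    # Hypothetical 2x3 matrix.
    A = Matrix([[1, 0, 1],
                [0, 1, 1]])

    print(A.rowspace())     # basis vectors of Row(A), living in R^3
    print(A.columnspace())  # basis vectors of Col(A), living in R^2
    print(A.nullspace())    # basis vectors of Null(A), living in R^3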

Example. The null space is a subspace.

Proof. Consider a homogeneous SLE $A\mathbf{x} = \mathbf{0}$. Let $V$ be the solution set of $A\mathbf{x} = \mathbf{0}$, i.e. $V = \{\mathbf{v} : A\mathbf{v} = \mathbf{0}\}$.

  • $\mathbf{0} \in V$ since $A\mathbf{0} = \mathbf{0}$; ✓
  • $\mathbf{u} + \mathbf{v} \in V$ for each $\mathbf{u}, \mathbf{v} \in V$, because
  • since $A\mathbf{u} = \mathbf{0}$ and $A\mathbf{v} = \mathbf{0}$, we have $A(\mathbf{u} + \mathbf{v}) = A\mathbf{u} + A\mathbf{v} = \mathbf{0}$; ✓
  • $c\mathbf{u} \in V$ for each $\mathbf{u} \in V$ and scalar $c$, because
  • since $A\mathbf{u} = \mathbf{0}$, we have $A(c\mathbf{u}) = c(A\mathbf{u}) = \mathbf{0}$. ✓

 


Example. (Example of row, column and null spaces) Consider the matrix  .

  •  ;
  •  ;
  •   (one possible expression)
  • since the solution set of   is  .

Example. The set   is a subspace of  .

Proof.  , so the set is the null space of the   matrix,  , which is a subspace.

 

Geometrically, the set is a plane passing through the origin in  .
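For instance, a plane through the origin can be handed to SymPy as the null space of a $1 \times 3$ matrix. The plane $x + 2y + 3z = 0$ below is hypothetical, since the actual equation of this example is not shown:

    from sympy import Matrix

    # The plane x + 2y + 3z = 0 as the null space of a 1x3 matrix.
    A = Matrix([[1, 2, 3]])

    basis = A.nullspace()
    print(basis)  # two basis vectors: the plane is 2-dimensional

    # Sanity check: every basis vector satisfies the plane equation.
    for v in basis:
        assert (A * v).is_zero_matrix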

 

Exercise. Let  .

Select all correct statement(s).

  is a subspace.
  is a subspace.
 
  is a subspace, in which   is an augmented matrix.
 


Then, we will introduce some more terminology related to subspaces.

Definition. (Generating set) Let $V$ be a subspace and let $S$ be a subset of $V$. The set $S$ is a generating set (or spanning set) of $V$ if

$\operatorname{span}(S) = V.$

We may also say that $S$ generates (or spans) $V$ in this case.

Definition. (Basis) Let $V$ be a subspace. A basis (plural: bases) for $V$ is a linearly independent generating set of $V$.

Remark.

  • A basis is quite important, since it tells us the whole structure of $V$ with a minimal number of vectors (a generating set of $V$ already tells us the whole structure of $V$).
  • The linear independence ensures that there are no 'redundant' vectors in the generating set ('redundant' vectors are the vectors that are linear combinations of the others).
  • Given a linearly dependent generating set, we may remove some of the vectors to make it linearly independent (this is known as the reduction theorem; we will discuss it later).
  • We usually use $\beta$ to denote a basis, since the initial syllable of 'beta' and 'basis' is similar.

The following theorem highlights the importance of basis.

Theorem. Let $V$ be a subspace of $\mathbb{R}^n$ and let $\beta$ be a subset of $V$. Then, $\beta$ is a basis for $V$ if and only if each vector in $V$ can be represented as a linear combination of the vectors in $\beta$ in a unique way.

Proof.

  • Only if part:
  • $\beta$ is a generating set, so each vector in $V$ belongs to $\operatorname{span}(\beta)$, i.e. each vector in $V$ is a linear combination of the vectors in $\beta$.
  • Uniqueness follows from the proposition about comparing coefficients for linearly independent vectors.
  • If part:
  • $\beta$ generates $V$ because:
  1. by the definition of subspace, $\operatorname{span}(\beta) \subseteq V$ (since linear combinations of the vectors in $\beta$, which are also in $V$ by $\beta \subseteq V$, belong to $V$);
  2. on the other hand, since each vector in $V$ can be represented as a linear combination of the vectors in $\beta$, we have $V \subseteq \operatorname{span}(\beta)$;
  3. thus, we have $\operatorname{span}(\beta) = V$, so $\beta$ generates $V$ by definition. ✓
  • $\beta$ is linearly independent because:
  • let $\mathbf{v}_1, \ldots, \mathbf{v}_k$ be the vectors in $\beta$, and
  • assume $c_1 \mathbf{v}_1 + \cdots + c_k \mathbf{v}_k = \mathbf{0}$ in which $c_1, \ldots, c_k$ are not all zero (i.e. $\mathbf{v}_1, \ldots, \mathbf{v}_k$ are linearly dependent); then we can express the zero vector in $V$ in two different ways:

$\mathbf{0} = c_1 \mathbf{v}_1 + \cdots + c_k \mathbf{v}_k = 0\mathbf{v}_1 + \cdots + 0\mathbf{v}_k,$

which contradicts the uniqueness of representation. ✓

 

Example. (Standard basis) A basis for $\mathbb{R}^n$ is $\{\mathbf{e}_1, \ldots, \mathbf{e}_n\}$, which is called the standard basis.

Proof. Let $\mathbf{x} = (x_1, \ldots, x_n) \in \mathbb{R}^n$.

  • $\{\mathbf{e}_1, \ldots, \mathbf{e}_n\}$ generates $\mathbb{R}^n$ since

$\mathbf{x} = x_1 \mathbf{e}_1 + x_2 \mathbf{e}_2 + \cdots + x_n \mathbf{e}_n.$ ✓

  • $\{\mathbf{e}_1, \ldots, \mathbf{e}_n\}$ is linearly independent since

$c_1 \mathbf{e}_1 + \cdots + c_n \mathbf{e}_n = (c_1, \ldots, c_n) = \mathbf{0}$ forces $c_1 = \cdots = c_n = 0.$ ✓

 


Example. A basis for the set   is  .

Proof. The general solution to   is

 
by setting   as the independent unknowns. Since the general solution is a linear combination of   and  ,   generates  . ✓ Also,   is linearly independent, since
 
✓

 

There are infinitely many other bases, since we can multiply a vector in this basis by an arbitrary nonzero scalar.
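This observation is easy to verify computationally. A SymPy sketch with hypothetical basis vectors: scaling them by nonzero scalars yields a different basis with the same span, which we confirm by a rank comparison.

    from sympy import Matrix

    # Hypothetical basis vectors of a plane through the origin.
    b1 = Matrix([-2, 1, 0])
    b2 = Matrix([-3, 0, 1])

    # Scaling by nonzero scalars gives another basis of the same subspace.
    c1, c2 = 5 * b1, -7 * b2

    B = b1.row_join(b2)
    C = c1.row_join(c2)

    # The two spans coincide iff stacking both bases does not raise the rank.
    print(B.rank() == C.rank() == B.row_join(C).rank())  # True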

 

Exercise.

Select all correct statement(s).

  generate  .
  generates  
The generating set of a subspace is unique.
The basis of a subspace is unique.
The basis for column space is a set containing some column vectors.
Let   be a generating set of subspace  . Then,   also generates  .


Then, we will discuss some ways to construct a basis.

Theorem. (Extension and reduction theorem) Let $V$ be a subspace of $\mathbb{R}^n$. Then, the following hold.

  • (Extension theorem) Every linearly independent subset of $V$ can be extended to a basis for $V$;
  • (Reduction theorem) every finite generating set of $V$ contains a basis for $V$.

Remark. By convention, a basis for the zero space   is the empty set  .

Its proof is complicated, so we omit it here.

Corollary. (Existence of basis for subspaces of $\mathbb{R}^n$) Each subspace of $\mathbb{R}^n$ has a basis.

Proof. We start with the empty set $\varnothing$. It is linearly independent (since it is not linearly dependent by definition), and it is a subset of every set, hence of every subspace. By the extension theorem, it can be extended to a basis for the subspace in question. Thus, each subspace of $\mathbb{R}^n$ has a basis, found by this method.

 

Example. (Using the reduction theorem to find a basis) A basis for the subspace   is  .

Proof. Observe that a generating set of   is  , since

 
By the reduction theorem, this generating set must contain a basis. Observe that   (if this is not observed, we may use the equation in the definition of linear (in)dependence to find such a relation). Thus, the vectors in   can also be generated by  . Therefore,   is a smaller generating set. Since
 
and the RREF of the augmented matrix representing this SLE is
 
the only solution to this SLE is the trivial one, i.e.   is linearly independent. It follows that   is a basis.
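The reduction step can be automated: the pivot columns reported by `rref` pick out a linearly independent subset of the generating set with the same span. A SymPy sketch with a hypothetical generating set containing one redundant vector:

    from sympy import Matrix

    # Hypothetical generating set: v3 = v1 + v2 is redundant.
    v1 = Matrix([1, 0, 2])
    v2 = Matrix([0, 1, 3])
    v3 = Matrix([1, 1, 5])

    A = v1.row_join(v2).row_join(v3)
    _, pivots = A.rref()

    # Keep only the pivot columns: a basis for span{v1, v2, v3}.
    basis = [A.col(j) for j in pivots]
    print(pivots)  # (0, 1): the first two vectors form a basis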

 


 

Exercise.

It is given that   is a linearly independent subset of subspace  . After adding a vector   into  ,   becomes a basis for  . Which of the following is (are) possible choice(s) of  ?

 
 
 
 


A terminology that is related to basis is dimension.

Definition. (Dimension) Let $\beta$ be a basis for a subspace $V$ of $\mathbb{R}^n$. The number of vectors in $\beta$, denoted by $\dim(V)$, is the dimension of $V$.

Remark.

  • By convention, the dimension of the zero space $\{\mathbf{0}\}$ is $0$.
  • That is, we say that the number of vectors in the empty set $\varnothing$ is $0$.
  • When the subspace has a higher dimension, there is more 'flexibility', since there are more parameters that can be varied.

Recall that there are infinitely many bases for a (nonzero) subspace. Luckily, all bases have the same number of vectors, so the dimension of a subspace is unique, as one would expect intuitively. This is assured by the following theorem.

Theorem. (Uniqueness of dimension) The dimension of an arbitrary subspace is unique, i.e., if we let $\beta$ and $\gamma$ be two finite bases for a subspace $V$ of $\mathbb{R}^n$, then the number of vectors in $\beta$ equals that of $\gamma$.

Proof. Let $\beta = \{\mathbf{u}_1, \ldots, \mathbf{u}_k\}$ and $\gamma = \{\mathbf{w}_1, \ldots, \mathbf{w}_m\}$. Also, let $U$ and $W$ be the matrices with the $\mathbf{u}_i$'s and the $\mathbf{w}_j$'s as columns.

By the definition of basis, $\operatorname{span}(\beta) = V$, and $\operatorname{span}(\gamma) = V$, so

$\mathbf{u}_j = q_{1j}\mathbf{w}_1 + \cdots + q_{mj}\mathbf{w}_m$

for some scalars $q_{1j}, \ldots, q_{mj}$ (for each $j$). By symmetry, each $\mathbf{w}_j$ is likewise a linear combination of the $\mathbf{u}_i$'s. Thus, $U = WQ$ in which $Q$ is the matrix whose columns are the coefficient vectors $(q_{1j}, \ldots, q_{mj})^T$, of size $m \times k$.

We claim that $Q\mathbf{x} = \mathbf{0}$ only has the trivial solution, and this is true, since:

  • If $Q\mathbf{x} = \mathbf{0}$, then $U\mathbf{x} = WQ\mathbf{x} = \mathbf{0}$, and thus $\mathbf{x} = \mathbf{0}$, since the columns of $U$ are linearly independent, by the proposition about the relationship between linear independence and the number of solutions of an SLE.

Thus, the RREF of $Q$ (with $k$ columns) has a leading one in each of its $k$ columns. Since there are $m$ rows in $Q$, we have $k \le m$ (if $k > m$, we cannot have $k$ leading ones, since there are at most $m$ leading ones).

By symmetry (swapping the roles of $\beta$ and $\gamma$), $m \le k$, and thus $k = m$.

 

Example. (Dimension of Euclidean space) The dimension of $\mathbb{R}^n$ is $n$, since a basis for $\mathbb{R}^n$ is the standard basis $\{\mathbf{e}_1, \ldots, \mathbf{e}_n\}$, which contains $n$ vectors.

Example. (Dimension of a plane) The dimension of the subspace   is $2$, since a basis for this subspace is   (from a previous example), which contains $2$ vectors. Geometrically, the subspace is a plane. In general, the dimension of each plane is $2$.
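Numerically, the dimension of a span is simply the rank of the matrix whose columns are the spanning vectors (a NumPy sketch with hypothetical vectors spanning a plane):

    import numpy as np

    # Hypothetical vectors spanning a plane in R^3, placed as columns.
    A = np.column_stack([[1.0, 0.0, 2.0],
                         [0.0, 1.0, 3.0]])

    # dim(span of the columns) = rank of the matrix.
    print(np.linalg.matrix_rank(A))  # 2, i.e. a plane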

 

Exercise.

Select all correct statement(s).

The number of vectors in each finite generating set of subspace   is greater than or equal to the dimension of  .
The dimension of row space of a matrix is its number of rows.
The dimension of column space of a matrix is its number of columns.


Then, we will discuss the bases of row, column and null spaces, and also their dimensions.

Proposition. (Basis for row space) Let $A$ be a matrix and $R$ be the RREF of $A$. Then, a basis for $\operatorname{Row}(A)$ is the set of all nonzero rows of $R$.

Proof. It can be proved that the row space is unchanged when an ERO is performed. E.g., for rows $\mathbf{r}_1, \mathbf{r}_2$:

  • (type I ERO) $\operatorname{span}\{\mathbf{r}_1, \mathbf{r}_2\} = \operatorname{span}\{\mathbf{r}_2, \mathbf{r}_1\}$;
  • (type II ERO) $\operatorname{span}\{\mathbf{r}_1, \mathbf{r}_2\} = \operatorname{span}\{c\mathbf{r}_1, \mathbf{r}_2\}$ for each nonzero scalar $c$;
  • (type III ERO) $\operatorname{span}\{\mathbf{r}_1, \mathbf{r}_2\} = \operatorname{span}\{\mathbf{r}_1 + c\mathbf{r}_2, \mathbf{r}_2\}$ for each scalar $c$.

Assuming this is true, we have $\operatorname{Row}(A) = \operatorname{Row}(R)$. It can be proved that the nonzero rows of $R$ generate $\operatorname{Row}(R)$, and they are linearly independent, so the nonzero rows form a basis for $\operatorname{Row}(A)$.

 

Remark. Another basis for $\operatorname{Row}(A)$ is the set of all rows of $A$ corresponding to the nonzero rows of $R$, i.e. the set of the rows that are originally located at the positions of the nonzero rows of $R$, since it also generates the row space and is also linearly independent.

Example. Let  . It can be proved that its RREF is  , and therefore a basis for $\operatorname{Row}(A)$ is   (it can be proved that this set is linearly independent), and the dimension of $\operatorname{Row}(A)$ is thus  .

Another basis is  , the set of rows of $A$ corresponding to the nonzero rows of the RREF.

Proposition. (Basis for column space) Let $A$ be a matrix with columns $\mathbf{a}_1, \ldots, \mathbf{a}_n$, and let $R$ be the RREF of $A$. Suppose columns $j_1, \ldots, j_k$ are the only columns of $R$ containing leading ones (they are the indexes of the columns containing leading ones). Then, a basis for $\operatorname{Col}(A)$ is $\{\mathbf{a}_{j_1}, \ldots, \mathbf{a}_{j_k}\}$.

Proof. Using the Gauss-Jordan algorithm, $A$ can be transformed to $R$ via EROs, and they are row equivalent. Thus, $A\mathbf{x} = \mathbf{0}$ and $R\mathbf{x} = \mathbf{0}$ have the same solution set. Then, it can be proved that linearly (in)dependent columns of $R$ correspond to linearly (in)dependent columns of $A$. It follows that $\{\mathbf{a}_{j_1}, \ldots, \mathbf{a}_{j_k}\}$ is linearly independent, and all other columns of $A$ belong to the span of this set.

 

Example. Let  . From the previous example, the RREF of $A$ is  . So, the columns of $A$ corresponding to the columns of the RREF containing leading ones, namely the 1st and 2nd columns, form a basis. Thus, a basis for $\operatorname{Col}(A)$ is  .

If we let   be the 1st, 2nd and 3rd columns of $A$, then  . If we let these notations denote the corresponding columns of the RREF instead, the same equation also holds.

Example. (Basis for null space) Let  . From the previous example, the RREF of $A$ is  . A basis for $\operatorname{Null}(A)$ is  , since the solution set of $A\mathbf{x} = \mathbf{0}$ is  , and the dimension of $\operatorname{Null}(A)$ is thus  .

 

Exercise.

Let   be a matrix, and   be its RREF. Select all correct statement(s).

A basis for   is the set of all columns of   containing leading ones.
Each basis for   and   contains the same number of vectors.
Dimension of   is the number of leading ones in  
The dimension of the basis for   is smaller than or equal to the number of rows of  .
The dimension of the basis for   is smaller than or equal to the number of columns of  .


Proposition. (Dimension of null space) Let $A$ be a matrix. The dimension of $\operatorname{Null}(A)$ is the number of independent unknowns in the solution set of $A\mathbf{x} = \mathbf{0}$.

Proof. The idea of the proof is illustrated in the above example: if there are $k$ independent unknowns in the solution set, we need a set consisting of at least $k$ vectors to generate the solution set.

 

We have special names for the dimensions of row, column and null spaces, as follows:

Definition. (Row rank, column rank and nullity) Let $A$ be a matrix. The dimensions of $\operatorname{Row}(A)$, $\operatorname{Col}(A)$ and $\operatorname{Null}(A)$ are called the row rank, column rank, and nullity of $A$. They are denoted by $\operatorname{rowrank}(A)$, $\operatorname{colrank}(A)$ and $\operatorname{nullity}(A)$ respectively.

Indeed, the row rank and the column rank of each matrix are the same.

Proposition. (Row and column ranks both equal the number of leading ones in the RREF) For each matrix $A$, the row rank and the column rank of $A$ are both equal to the number of leading ones in the RREF of $A$.

Proof. We can see this from the bases found by the proposition about the basis for the row space (the number of nonzero rows equals the number of leading ones in the RREF of $A$) and the proposition about the basis for the column space (that basis contains $k$ column vectors, and $k$ is the number of leading ones, by assumption).

 

Because of this proposition, we have the following definition.

Definition. (Rank) Let $A$ be a matrix. The rank of $A$, denoted by $\operatorname{rank}(A)$, is the common value of the row rank and the column rank of $A$.

Remark. We usually use this terminology and notation, instead of those for row rank and column rank.

Then, we will introduce an important theorem that relates rank and nullity.

Theorem. (Rank-nullity theorem) Let $A$ be an $m \times n$ matrix. Then,

$\operatorname{rank}(A) + \operatorname{nullity}(A) = n.$

Proof. Let $R$ be the RREF of $A$. $\operatorname{rank}(A)$ is the number of leading ones in $R$, and $\operatorname{nullity}(A)$ is the number of independent unknowns of $A\mathbf{x} = \mathbf{0}$, which is $n$ minus the number of leading ones in $R$. The result follows.

 

Example. Let  . From previous examples,

  • a basis for $\operatorname{Row}(A)$ is  ;
  • a basis for $\operatorname{Col}(A)$ is  ;
  • a basis for $\operatorname{Null}(A)$ is  .

Thus, $\operatorname{rank}(A)$ and $\operatorname{nullity}(A)$ can be read off as the sizes of these bases. Since the number of columns of $A$ is  , this verifies the rank-nullity theorem.
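The theorem is easy to spot-check in code (a SymPy sketch with a hypothetical $3 \times 4$ matrix, since the matrix of this example is not reproduced here):

    from sympy import Matrix

    # Hypothetical 3x4 matrix (its third row is the sum of the first two).
    A = Matrix([[1, 0, 1, 2],
                [0, 1, 1, 3],
                [1, 1, 2, 5]])

    rank = A.rank()
    nullity = len(A.nullspace())

    # rank + nullity equals the number of columns of A.
    print(rank, nullity, A.cols)     # 2 2 4
    print(rank + nullity == A.cols)  # True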

 

Exercise.

Select all correct statement(s).

 
 
 
 
  is invertible if and only if   is the number of columns of  .