Last modified on 23 August 2012, at 08:27

Linear Algebra/Null Spaces

Among the three important vector spaces associated with an m × n matrix is the null space. Null spaces arise naturally in the study of linear transformations.

Range

Let T be a linear transformation from an m-dimensional vector space X to an n-dimensional vector space Y, let x1, x2, x3, ..., xm be a basis for X, let y1, y2, y3, ..., yn be a basis for Y, and consider its corresponding n × m matrix,

M=
\begin{pmatrix}
a_{11} & a_{12} & a_{13} & \ldots & a_{1m} \\
a_{21} & a_{22} & a_{23} & \ldots & a_{2m} \\
a_{31} & a_{32} & a_{33} & \ldots & a_{3m} \\
\vdots & \vdots & \vdots & \vdots & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \ldots & a_{nm} \\
\end{pmatrix}.

The image of X under T, denoted T(X), is called the range of T. It is easy to check that T(X) is a subspace of Y.

Since any element x within X can be expressed as

x=\sum_{i=1}^m c_i x_i,

T(x)=T\left(\sum_{i=1}^m c_i x_i\right)=\sum_{i=1}^m c_i T(x_i)

implying that the range of T is the vector space spanned by the vectors T(xi), which are given by the columns of the matrix. By a theorem proven earlier, the dimension of the vector space spanned by those vectors is equal to the maximum number of them that are linearly independent. Since linear dependence among the columns of the matrix is the same as linear dependence among the vectors T(xi), the dimension is equal to the maximum number of linearly independent columns, which is the rank. We have the following important conclusion:

The dimension of the range of a linear transformation is equal to the rank of its corresponding matrix.

Null Space

The null space of a matrix A is the set of all vectors x satisfying Ax = 0, that is, \hbox{Null Space}(A) = \left\{ x : Ax = 0 \right\}.

For example, consider the matrix: A = \begin{pmatrix}
  1 & 2 \\
  2 & 4 
\end{pmatrix}
.

The null space of this matrix consists of the set:

\hbox{Null Space}(A) = \left\{\mathbf{\begin{pmatrix}
  -2r \\
  r  
\end{pmatrix}
} : r \in \mathbb{R} \right\}

It may not be immediately obvious how this set was found, but it can readily be checked that multiplying any element of it by A gives the zero vector. For instance,

\begin{pmatrix}
  -2 \\
  1  
\end{pmatrix} \in \hbox{Null Space}(A) as

\begin{pmatrix}
  1 & 2 \\
  2 & 4 
\end{pmatrix}\begin{pmatrix}
  -2 \\
  1  
\end{pmatrix} = \begin{pmatrix}
  0 \\
  0  
\end{pmatrix}.
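This check can be automated. Here is a minimal Python sketch (the helper `matvec` is ours, not part of any library) verifying that every vector of the form (-2r, r) is sent to the zero vector by A:

```python
# Check that (-2r, r) lies in the null space of A = [[1, 2], [2, 4]]
# for a few sample values of r (a numerical sanity check, not a proof).

def matvec(A, x):
    """Multiply matrix A (given as a list of rows) by column vector x."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[1, 2],
     [2, 4]]

for r in (-3, 0, 1, 2.5):
    v = [-2 * r, r]
    assert matvec(A, v) == [0, 0]

print(matvec(A, [-2, 1]))  # [0, 0]
```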

Null Space as a vector space

It is easy to show that the null space is in fact a vector space. If we identify an n × 1 column matrix with an element of the n-dimensional Euclidean space, then the null space becomes a subspace of that space with the usual operations. The null space may also be treated as a subspace of the vector space of all n × 1 column matrices, with matrix addition and scalar multiplication of a matrix as the two operations.

To show that the null space is indeed a vector space, it is sufficient (since it is a subset of a known vector space) to show that

 x_1,x_2 \in \hbox{Null Space}(A)\Rightarrow x_1 + x_2\in \hbox{Null Space}(A)

and

x \in \hbox{Null Space}(A) \Rightarrow \alpha x \in \hbox{Null Space}(A)


These are true due to the distributive law of matrix multiplication. The details of the proof are left to the reader as an exercise.

Properties

Null spaces of row equivalent matrices

If A and B are two row equivalent matrices then they share the same null space. This fact, a small theorem in its own right, can be proved as follows:

Suppose x is an element of the null space of A, so that Ax = 0. Since A is row equivalent to B, we can write B = E_1E_2\cdots E_kA where each E_i is an elementary matrix. (Recall that an elementary matrix is obtained from the identity matrix I by performing a single elementary row operation.) Now,

Bx = E_1E_2\cdots E_kAx = 0

and so x is in the null space of B as well. Thus the null space of A is contained in that of B. Since B is also row equivalent to A, the same argument shows that the null space of B is contained in that of A. Hence A and B have the same null space.
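This theorem can be illustrated numerically. In the sketch below (helper names are ours), B is obtained from A by two elementary row operations, and both matrices annihilate the same vector:

```python
# Illustrate that A and a row-equivalent B annihilate the same vectors.

def matvec(M, x):
    """Multiply matrix M (list of rows) by column vector x."""
    return [sum(a * b for a, b in zip(row, x)) for row in M]

A = [[1, 2],
     [2, 4]]
# B: multiply row 0 of A by 3, then add the new row 0 to row 1
# (two elementary row operations).
B = [[3, 6],
     [5, 10]]

v = [-2, 1]  # a vector in the null space of A
assert matvec(A, v) == [0, 0]
assert matvec(B, v) == [0, 0]
```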

Basis of Null Space

As the null space of a matrix is a vector space, it is natural to ask what its basis is. Of course, since the null space is a subspace of \mathbb{R}^n, its basis can have at most n elements. The number of elements in a basis of the null space is important and is called the nullity of A. To find a basis of the null space of A we follow these steps:

  1. First convert the given matrix into row echelon form say U.
  2. Next circle the first non zero entries in each row.
  3. Call the variable x_1 a basic variable if the first column has a circled entry, and call it a free variable if the first column doesn't have a circled entry. Similarly call the variable x_2 basic if the second column has a circled entry and free otherwise. In this way name n variables x_1,x_2,...,x_n.
  4. If x_i, for any i, is a free variable, then let v_1 be the solution obtained by solving the system Ux = 0 with every free variable set to 0 except x_i, which is set to 1. If x_i is not a free variable, do nothing.
  5. Repeat the above step for all the free variables getting vectors v_2,v_3 etc in the process.
  6. The set \{v_1,v_2...\} is the required basis.

The key point in the above algorithm is that A and U have the same null space. For a complete proof of why the algorithm works we refer the reader to the excellent textbook by Hoffman and Kunze given in the references.
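The steps above can be sketched in Python using exact fraction arithmetic. Note one deliberate variation: this sketch reduces the matrix all the way to reduced row echelon form, so step 4's back-substitution becomes a direct read-off of entries. All function names are ours:

```python
# A sketch of the basis-finding algorithm: row reduce, mark pivot (basic)
# and free columns, then solve Ux = 0 with one free variable set to 1
# at a time.

from fractions import Fraction

def null_space_basis(A):
    """Return a list of basis vectors for the null space of A."""
    m, n = len(A), len(A[0])
    U = [[Fraction(x) for x in row] for row in A]

    # Gauss-Jordan elimination to reduced row echelon form.
    pivot_cols = []
    r = 0
    for c in range(n):
        # Find a row at or below r with a nonzero entry in column c.
        pr = next((i for i in range(r, m) if U[i][c] != 0), None)
        if pr is None:
            continue  # no pivot in this column: x_c is a free variable
        U[r], U[pr] = U[pr], U[r]
        piv = U[r][c]
        U[r] = [x / piv for x in U[r]]
        for i in range(m):
            if i != r and U[i][c] != 0:
                f = U[i][c]
                U[i] = [a - f * b for a, b in zip(U[i], U[r])]
        pivot_cols.append(c)
        r += 1

    free_cols = [c for c in range(n) if c not in pivot_cols]

    # One basis vector per free variable: set it to 1, the other free
    # variables to 0, and read off the basic variables from U.
    basis = []
    for fc in free_cols:
        v = [Fraction(0)] * n
        v[fc] = Fraction(1)
        for row_idx, pc in enumerate(pivot_cols):
            v[pc] = -U[row_idx][fc]
        basis.append(v)
    return basis

A = [[1, 1, 0, 1, 5],
     [1, 0, 0, 2, 2],
     [0, 0, 1, 4, -1],
     [0, 0, 0, 0, 0]]

for v in null_space_basis(A):
    print([int(x) for x in v])  # integer display for this example
```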

Let us look at an example:

Suppose A = \begin{pmatrix}
  1 & 1 & 0 & 1 & 5 \\
  1 & 0 & 0 & 2 & 2 \\
  0 & 0 & 1 & 4 & -1 \\
  0 & 0 & 0 & 0 & 0 
\end{pmatrix}

The first step involves reducing A to its row echelon form U.

Now U = \begin{pmatrix}
  1 & 0 & 0 & 2 & 2 \\
  0 & 1 & 0 & -1 & 3 \\
  0 & 0 & 1 & 4 & -1 \\
  0 & 0 & 0 & 0 & 0 
\end{pmatrix}

We encircle the first non zero entries in each row by brackets:

U = \begin{pmatrix}
  (1) & 0 & 0 & 2 & 2 \\
  0 & (1) & 0 & -1 & 3 \\
  0 & 0 & (1) & 4 & -1 \\
  0 & 0 & 0 & 0 & 0 
\end{pmatrix}


Clearly the free variables are x_4 and x_5 and the rest x_1,x_2 and x_3 are basic variables. Now we shall solve the system Ux = 0 with x_4 = 1, x_5 = 0 to get the vector v_1. Thus we need to solve,

\begin{pmatrix}
  1 & 0 & 0 & 2 & 2 \\
  0 & 1 & 0 & -1 & 3 \\
  0 & 0 & 1 & 4 & -1 \\
  0 & 0 & 0 & 0 & 0 
\end{pmatrix}
\begin{pmatrix}
  x_1\\
  x_2\\
  x_3\\
  1\\
  0
\end{pmatrix}
=
\begin{pmatrix}
  0\\
  0\\
  0\\
  0
\end{pmatrix}

On multiplying out, this reduces to the following system:

\begin{pmatrix}
  x_1 + 2\\
  x_2 - 1\\
  x_3 + 4\\
  0
\end{pmatrix}
=
\begin{pmatrix}
  0\\
  0\\
  0\\
  0
\end{pmatrix}

It is clear from here that x_1 = -2,\ x_2 = 1,\ x_3  = -4,\ x_4 = 1,\ x_5 = 0 is the solution.

Thus v_1=(-2,\ 1,\ -4,\ 1,\ 0). Similarly v_2 is found to be (-2,\ -3,\ 1,\ 0,\ 1).

The set \{v_1,v_2\} is a basis of the null space, and the nullity of the matrix A is 2. In fact this method also gives us a way to describe the null space itself: \{\alpha v_1 + \beta v_2 : \alpha,\beta \in \mathbb{R} \} (Why?)
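As a quick sanity check, one can verify directly that v_1, v_2, and any combination of them lie in the null space of the original A (the helper `matvec` below is ours):

```python
# Check that v1 and v2 really lie in the null space of the original A
# (every entry of Av should be zero).

def matvec(A, x):
    """Multiply matrix A (list of rows) by column vector x."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[1, 1, 0, 1, 5],
     [1, 0, 0, 2, 2],
     [0, 0, 1, 4, -1],
     [0, 0, 0, 0, 0]]

v1 = [-2, 1, -4, 1, 0]
v2 = [-2, -3, 1, 0, 1]

assert matvec(A, v1) == [0, 0, 0, 0]
assert matvec(A, v2) == [0, 0, 0, 0]

# Any combination alpha*v1 + beta*v2 is in the null space too.
alpha, beta = 3, -5
w = [alpha * a + beta * b for a, b in zip(v1, v2)]
assert matvec(A, w) == [0, 0, 0, 0]
```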

Implications of nullity being zero

The example given above gives no hint as to what happens when there are no free variables in the row echelon form of A. All we said in step 4 of our algorithm was that if x_i is not a free variable then we do nothing. Following that logic, if no variable is free then we never produce any vectors, leading to the conclusion that if no variable is free then the basis of the null space is the empty set \emptyset. In that case we say that the nullity of A is 0. Note that the null space itself is not empty: it contains precisely one element, the zero vector.

Now suppose that A is any matrix of order m × n with columns c_1, c_2,...,c_n. Each c_i is a vector in the m-dimensional space. If the nullity of A is zero, then it follows that Ax=0 has only the zero vector as its solution.

More precisely,

Ax = 
\begin{pmatrix}
  c_1 & c_2 & \ldots & c_n
\end{pmatrix}
\begin{pmatrix}
  x_1\\
  x_2\\
  \vdots\\
  x_n
\end{pmatrix}
=
x_1c_1 + x_2c_2 + \ldots + x_nc_n = 0

has only the trivial solution. This implies that nullity being zero forces the columns of A to be linearly independent. By retracing our steps we can show that the converse is true as well.
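The identity Ax = x_1c_1 + x_2c_2 + \ldots + x_nc_n that drives this argument is easy to confirm numerically. The sketch below (helper names are ours) computes both sides for a sample matrix and vector:

```python
# Confirm that Ax equals the combination x1*c1 + ... + xn*cn of the
# columns of A, for a sample matrix with independent columns.

def matvec(A, x):
    """Multiply matrix A (list of rows) by column vector x."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def column_combination(A, x):
    """Compute x1*c1 + ... + xn*cn directly from the columns of A."""
    m, n = len(A), len(A[0])
    out = [0] * m
    for j in range(n):
        for i in range(m):
            out[i] += x[j] * A[i][j]
    return out

A = [[1, 2],
     [3, 5]]  # columns are linearly independent
x = [4, -1]

assert matvec(A, x) == column_combination(A, x)
print(matvec(A, x))  # [2, 7]
```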

Let us examine the special case of a square matrix, i.e. when m = n. If the nullity is zero then there is no free variable in the reduced row echelon form of the matrix A, say U. Hence each row contains a pivot, i.e. a leading nonzero entry. In that case U must be of the form,
\begin{bmatrix}
  1 & 0 & 0 & \cdots & 0\\
  0 & 1 & 0 & \cdots & 0\\
  \vdots & \vdots & \vdots & \vdots & \vdots\\ 
  0 & 0 & 0 & \cdots & 1
\end{bmatrix}

that is, U must be precisely the identity matrix I. Conversely, if A is row equivalent to I then Ax = 0 and Ix = 0 have the same solutions, since row equivalent matrices have the same null space. As Ix = 0 has only the trivial solution x = 0, so does Ax = 0. It follows that the null space of A is merely {0} and so the nullity of A is 0.

Thus nullity of A is 0 \iff A is row equivalent to I.

Now if A is row equivalent to I then A = E_1E_2\cdots E_k where each E_i is an elementary matrix. Since each E_i is invertible and a product of invertible matrices is invertible, A is invertible. Conversely, if A is invertible and U is its reduced row echelon form, then U = E_1\cdots E_kA, which is invertible by virtue of being a product of invertible matrices. Now a matrix containing a zero row can never be invertible (why?), so U has a pivot in each row. It follows that there are n pivots, all equal to 1, with zeros above and below them, and so U = I. Thus A is row equivalent to I.

In summary, A is row equivalent to I \iff A is invertible.

We can collect the entire argument in this section, to state the:

Theorem: For a square matrix A of order n, the following are equivalent:

  1. A is invertible.
  2. Nullity of A is 0.
  3. A is row equivalent to the identity matrix.
  4. Columns of A are linearly independent.
  5. The system Ax = 0 has only the trivial solution.
  6. A is a product of elementary matrices.

It will be a good exercise for the reader at this stage to try to rewrite the proof of the theorem in detail.

Exercises

  1. Find the null space, and a basis for it, of each of the following matrices:
    1. \begin{pmatrix}
  2 & 1 & -2 \\
  1 & -2 & 1 \\
  -3 & 1 & 1 \\ 
\end{pmatrix}
    2. \begin{pmatrix}
  -2 & -1 & 4 & 2 \\
  1 & -2 & 1 & 1 \\
  -3 & 3 & -5 & -3 
\end{pmatrix}
    3. \begin{pmatrix}
  1 & 1 & 1 \\
  1 & 0 & 1 \\
  0 & 1 & 1 
\end{pmatrix}
  2. Show that null space of a matrix is a vector space.
  3. Prove the theorem regarding invertibility of a square matrix. Also, by showing that A is invertible if and only if A^T is, show that the condition that the rows of A are linearly independent can be added to the list.
  4. Is the solution set of Ax = b, where b is a nonzero vector (i.e. has at least one nonzero component), a vector space? Give reasons.
  5. Let r be the number of basic variables associated with a square matrix A of order n (which is equal to the number associated with its row echelon form). Show that A is invertible if and only if r = n.

References

Hoffman, Kenneth; Kunze, Ray (1971). Linear Algebra (2nd ed.). Prentice-Hall.