Matrix inverses are analogous to the multiplicative inverse (or reciprocal) in the number system.
An $n \times n$ matrix $A$ is invertible (or non-singular) if there exists an $n \times n$ matrix $B$ such that $AB = BA = I_n$.
The matrix $B$ is the inverse of $A$, and is usually denoted by $A^{-1}$.
A matrix that has no inverse is non-invertible (or singular).
by the invertible matrix theorem (the proof of its complete version is complicated, and so is skipped), if one of $AB = I_n$ and $BA = I_n$ holds, then the other also holds
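The definition can be checked numerically. Below is a minimal Python sketch (the matrices $A$ and $B$ are hypothetical examples, not ones from the text): a candidate matrix is an inverse exactly when both products equal the identity matrix.

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

I2 = [[1, 0], [0, 1]]
A = [[2, 1], [1, 1]]
B = [[1, -1], [-1, 2]]  # candidate inverse of A

AB = matmul(A, B)
BA = matmul(B, A)
print(AB == I2 and BA == I2)  # True, so B = A^(-1)
```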
In the number system, the multiplicative inverse, if it exists, is unique. Similarly, the matrix inverse, if it exists, is also unique, as shown in the following proposition.
(Uniqueness of matrix inverse)
The matrix inverse, if it exists, is unique.
Suppose to the contrary that distinct matrices $B$ and $C$ are both inverses of the matrix $A$.
Then, $AB = BA = I$ and $AC = CA = I$ by the definition of the matrix inverse.
Since the matrix inverse of $A$ exists, we have
$B = BI = B(AC) = (BA)C = IC = C$,
which contradicts the assumption that $B$ and $C$ are distinct.
is invertible and its inverse is
(this implies that the matrix product in the other order is also $I$, by the invertible matrix theorem)
Suppose to the contrary that the matrix is invertible, i.e.
there exists a matrix such that their product is the identity matrix.
But this equality is equivalent to
a condition which is impossible, causing a contradiction.
(Properties of matrix inverse)
Let $A$ and $B$ be invertible matrices of the same size,
and let $k$ be a nonzero scalar. Then,
(self-invertibility) $A^{-1}$ is invertible and $(A^{-1})^{-1} = A$;
(scalar multiplicativity) $kA$ is invertible and $(kA)^{-1} = k^{-1}A^{-1}$;
('reverse multiplicativity') $AB$ is invertible and $(AB)^{-1} = B^{-1}A^{-1}$;
(interchangeability of inverse and transpose) $A^T$ is invertible and $(A^T)^{-1} = (A^{-1})^T$.
(self-invertibility) since $A$ is invertible, $AA^{-1} = A^{-1}A = I$, and thus $A^{-1}$ is invertible, and its inverse is $A$;
(scalar multiplicativity) $(kA)(k^{-1}A^{-1}) = (kk^{-1})(AA^{-1}) = I$, as desired;
('reverse multiplicativity') $(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AA^{-1} = I$, as desired;
(interchangeability of inverse and transpose) $A^T(A^{-1})^T = (A^{-1}A)^T = I^T = I$, as desired.
inductively, we have the general 'reverse multiplicativity': $A_1A_2 \cdots A_k$ is invertible and $(A_1A_2 \cdots A_k)^{-1} = A_k^{-1} \cdots A_2^{-1}A_1^{-1}$
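As a numerical sketch of 'reverse multiplicativity' (the matrices here are hypothetical examples whose inverses are taken as known):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

I2 = [[1, 0], [0, 1]]
A,  B  = [[2, 1], [1, 1]], [[1, 1], [0, 1]]
Ai, Bi = [[1, -1], [-1, 2]], [[1, -1], [0, 1]]  # their inverses

AB = matmul(A, B)
print(matmul(AB, matmul(Bi, Ai)) == I2)  # True: (AB)^(-1) = B^(-1) A^(-1)
print(matmul(AB, matmul(Ai, Bi)) == I2)  # False: the order matters
```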
The matrix inverse can be used to solve a SLE, as follows:
Let $A\mathbf{x} = \mathbf{b}$ be a SLE in which $A$ is an invertible matrix. Then, the SLE has a unique solution given by $\mathbf{x} = A^{-1}\mathbf{b}$.
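As a small numerical sketch of this proposition (the system below is a hypothetical example, with the inverse taken as known): the system $2x + y = 5$, $x + y = 3$ is solved by multiplying the right-hand side by $A^{-1}$.

```python
def matvec(A, v):
    """Multiply a matrix by a column vector."""
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

A_inv = [[1, -1], [-1, 2]]  # inverse of A = [[2, 1], [1, 1]]
b = [5, 3]
x = matvec(A_inv, b)
print(x)  # [2, 1], i.e. x = 2, y = 1
```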
Then, we will define the elementary matrix, which is closely related to EROs, and is important for the proof of results related to EROs.
Let $n$ be a positive integer.
There are three types of elementary matrices. An elementary matrix
of type I, II or III is a matrix obtained by performing a single type I, II or III ERO on the identity matrix $I_n$, respectively.
if a matrix can only be obtained by performing two or more EROs on the identity matrix, it is not an elementary matrix
is an elementary matrix of type I, since
it can be obtained by performing the ERO on ,
is an elementary matrix of type II, since
it can be obtained by performing the ERO on , and
is an elementary matrix of type III, since
it can be obtained by performing the ERO on .
is not an elementary matrix, since it needs to be obtained by performing at least two EROs on , e.g. , in this order
Let $A$ be an $m \times n$ matrix. If $B$ is obtained from $A$ by performing an ERO,
then there exists an elementary matrix $E$ such that $B = EA$, and $E$ can be obtained from $I_m$ by performing the same ERO.
Conversely, if $E$ is an elementary matrix, then $EA$ is the matrix obtained from $A$ by performing the corresponding ERO.
Outline: case: e.g.
type I ERO:
type II ERO:
type III ERO:
illustration of the proposition:
inductively, we have: if a sequence of EROs, with corresponding elementary matrices $E_1, E_2, \dots, E_k$ (in this order),
is performed on a matrix $A$, the EROs
correspond to the matrix multiplication $E_k \cdots E_2 E_1 A$.
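The correspondence between EROs and left-multiplication by elementary matrices can be checked numerically. A small Python sketch (the matrix and the chosen EROs are hypothetical examples):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4], [5, 6]]

# Type I elementary matrix: interchange rows 1 and 2 of I_3.
E1 = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]
R1 = matmul(E1, A)
print(R1)  # [[3, 4], [1, 2], [5, 6]]: rows 1 and 2 of A interchanged

# Type III elementary matrix: add 2 times row 1 to row 3 of I_3.
E3 = [[1, 0, 0], [0, 1, 0], [2, 0, 1]]
R3 = matmul(E3, A)
print(R3)  # [[1, 2], [3, 4], [7, 10]]: the same ERO performed on A
```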
(Invertibility of elementary matrix)
Elementary matrices are invertible. The inverse of an elementary matrix is an elementary matrix of the same type.
The reverse process of each ERO is an ERO of the same type.
Let $E$ and $E'$ be the elementary matrices corresponding to these two EROs (an ERO and its reverse process), which are of the same type.
Then, $EE' = E'E = I$, as desired (since $I$ can be obtained from $I$ by performing an ERO followed by its reverse process).
if $R$ is the RREF of $A$, then $R = E_k \cdots E_1 A$ for some elementary matrices $E_1, \dots, E_k$
since elementary matrices are invertible, the product $E_k \cdots E_1$ is invertible, and its inverse is equal to $E_1^{-1} \cdots E_k^{-1}$
in other words, $R = PA$ for some invertible matrix $P$
Since the reverse process of , and are
, and ,
the inverses of the elementary matrices
In particular, the inverse of a type I elementary matrix is itself.
It is given that matrix
is obtained from matrix by performing the EROs
, in this order, and
Then, we will state a simplified version of the invertible matrix theorem, in which some results from the complete version are removed.
(Simplified invertible matrix theorem)
Let $A$ be an $n \times n$ matrix. Then, the following are equivalent.
(i) $A$ is invertible
(ii) the homogeneous SLE $A\mathbf{x} = \mathbf{0}$ only has the trivial solution
(iii) the RREF of $A$ is $I_n$
(iv) $A$ is a product of elementary matrices
To prove this, we may establish a cycle of implications, i.e.
(i) $\Rightarrow$ (ii) $\Rightarrow$ (iii) $\Rightarrow$ (iv) $\Rightarrow$ (i); then, when we pick two arbitrary statements from the four statements, they are equivalent to each other, which means that the four statements are equivalent.
(i) $\Rightarrow$ (ii): it follows from the proposition about solving a SLE: the unique solution is $\mathbf{x} = A^{-1}\mathbf{0} = \mathbf{0}$;
(ii) $\Rightarrow$ (iii): since the SLE $A\mathbf{x} = \mathbf{0}$ has a unique solution, the RREF of the augmented matrix of the SLE has a leading one in each of the first $n$ columns, but not the $(n+1)$st column, i.e. it is $(I_n \mid \mathbf{0})$. It follows that the RREF of $A$ is $I_n$, since after arbitrary EROs, the rightmost zero column is still a zero column;
(iii) $\Rightarrow$ (iv): since the RREF of $A$ is $I_n$, and the RREF of $A$ equals $E_k \cdots E_1 A$ for some elementary matrices $E_1, \dots, E_k$, it follows that $E_k \cdots E_1 A = I_n$. By the definition and general 'reverse multiplicativity' of the matrix inverse, we have $A = (E_k \cdots E_1)^{-1} = E_1^{-1} \cdots E_k^{-1}$,
i.e. $A$ is a product of elementary matrices (the inverse of an elementary matrix is an elementary matrix);
(iv) $\Rightarrow$ (i): since $A$ is a product of elementary matrices and every elementary matrix is invertible, it follows that $A$ is invertible by the general 'reverse multiplicativity' of the matrix inverse.
this theorem provides us with multiple ways to prove the invertibility of a matrix: we can prove it by proving one of the equivalent statements,
which may make the proof easier
later, when we discuss some results about these equivalent statements, they can be linked to this theorem
Consider the matrix
We can find its RREF by the Gauss-Jordan algorithm, as follows:
Since its RREF is $I$,
by the simplified invertible matrix theorem, we also have the following results:
(i) is invertible
(ii) the homogeneous SLE only has the trivial solution
(iii) is a product of elementary matrices
Let's verify them one by one.
(ii): the SLE can be represented by the augmented matrix, and we can find its RREF by the Gauss-Jordan algorithm, as follows:
Then, we can directly read from the RREF of the augmented matrix that the SLE only has the trivial solution.
Consider the matrix , and the SLE .
The following provides us with a convenient and efficient way to find the inverse of a matrix.
(Finding matrix inverse using Gauss-Jordan algorithm)
Let $A$ be an invertible matrix.
Then, we can transform the (augmented) matrix $(A \mid I)$ to the (augmented) matrix $(I \mid B)$ ($B$ is of the same size as $A$), which is the RREF of $(A \mid I)$, using a finite number of EROs, and we have $B = A^{-1}$.
we can write
$E_k \cdots E_1 A = I$
for some elementary matrices $E_1, \dots, E_k$, since $I$ is the RREF of $A$
Then, it can be proved that $E_k \cdots E_1 (A \mid I) = (E_k \cdots E_1 A \mid E_k \cdots E_1 I) = (I \mid E_k \cdots E_1)$.
It follows that $B = E_k \cdots E_1$,
and thus $B = A^{-1}$ (since $E_k \cdots E_1 A = I$ implies $A^{-1} = E_k \cdots E_1$).
if $A$ is not invertible, we are not able to transform $(A \mid I)$ to $(I \mid B)$ (but the RREF of $(A \mid I)$ still exists; it is just not of the form $(I \mid B)$)
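The procedure in the theorem can be sketched as a short Python function. This is a minimal sketch using exact `fractions` arithmetic; it assumes the input is invertible, so the pivot search never fails:

```python
from fractions import Fraction

def inverse(A):
    """Invert A by row-reducing the augmented matrix (A | I) to (I | B)."""
    n = len(A)
    # Build the augmented matrix (A | I) with exact arithmetic.
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Type I ERO: interchange rows so that the pivot entry is nonzero.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # Type II ERO: scale the pivot row so that the leading entry is 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Type III EROs: clear all other entries in this column.
        for r in range(n):
            if r != col and M[r][col] != 0:
                k = M[r][col]
                M[r] = [x - k * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

A = [[2, 1], [1, 1]]  # a hypothetical invertible matrix
print(inverse(A) == [[1, -1], [-1, 2]])  # True
```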
After performing EROs as follows:
we have .
We have previously proved that is non-invertible.
Now, we verify that it is impossible to transform $(A \mid I)$ to $(I \mid B)$.
We perform EROs as follows:
The last matrix is in RREF.
We can see from the first ERO that, to make the required entry zero, we will also make another entry zero. Thus, it is impossible to have such a transformation.
Let $E_1, \dots, E_k$ be some elementary matrices of the same size.
Then, we will discuss the determinant, which allows us to characterize some properties of a square matrix.
Let $A = (a_{ij})$ be an $n \times n$ matrix. The determinant of $A$,
denoted by $\det A$ or $|A|$,
is defined recursively as follows:
when $n = 1$, we define $\det A = a_{11}$;
when $n \ge 2$, suppose we have defined the determinant of each $(n-1) \times (n-1)$ matrix. Let $M_{ij}$ be the (sub)matrix obtained by deleting the $i$th row and the $j$th column of $A$. We define the $(i,j)$-cofactor of $A$ by $C_{ij} = (-1)^{i+j} \det M_{ij}$. Then, we define $\det A = a_{11}C_{11} + a_{12}C_{12} + \cdots + a_{1n}C_{1n}$.
a minor is the determinant of a square submatrix; in particular, $\det M_{ij}$ is the $(i,j)$-minor of $A$
the matrix $(C_{ij})$ consisting of all cofactors is called the cofactor matrix
the definition when $n \ge 2$ is also called the cofactor expansion (or Laplace expansion) along the first row
another notation: , and there are similar notations for matrices with different sizes
the signs $(-1)^{i+j}$ of the cofactors are alternating; the sign of the cofactor at the position of each entry of a matrix is shown below:
$\begin{pmatrix} + & - & + & \cdots \\ - & + & - & \cdots \\ + & - & + & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$
which looks like a 'chessboard' pattern.
we can observe from the above pattern that the signs of the cofactors located on the main diagonal are always positive
this is the case since the row number equals the column number on the main diagonal, and so the sign is $(-1)^{i+i} = (-1)^{2i} = 1$
(some illustrations of deleting the row and column)
(Formulas of determinants of $2 \times 2$ and $3 \times 3$ matrices)
$\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$ and
$\det\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} = aei + bfg + cdh - ceg - bdi - afh$.
For the formula of determinants of $3 \times 3$ matrices, we have a useful mnemonic device, namely the Rule of Sarrus, as follows:
(Rule of Sarrus)
We can compute the determinant of a $3 \times 3$ matrix as shown in the following image,
in which red arrows correspond to positive terms, and blue arrows correspond to negative terms. To be more precise,
we can compute the determinant of the matrix in the image by adding the three products of entries along the red arrows and subtracting the three products of entries along the blue arrows.
It follows from the formula in the above example.
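The Rule of Sarrus translates directly into a formula in code; a minimal sketch with a hypothetical matrix:

```python
def det3(A):
    """Determinant of a 3x3 matrix by the Rule of Sarrus."""
    (a, b, c), (d, e, f), (g, h, i) = A
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

d = det3([[1, 2, 3], [4, 5, 6], [7, 8, 10]])
print(d)  # -3
```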
Then, we will give an example of computing the determinant of a larger matrix, which cannot be computed by the Rule of Sarrus directly.
(Determinant of zero matrix and identity matrix)
The determinant of the zero matrix is $0$, and the determinant of the identity matrix is $1$.
(for the identity matrix, this follows inductively, since the submatrix obtained after removing the 1st row and 1st column of $I_n$ is $I_{n-1}$)
Indeed, we can compute a determinant by the cofactor expansion along an arbitrary row or column, as in the following theorem.
(Cofactor expansion theorem)
Let $A = (a_{ij})$ be an $n \times n$ matrix with cofactors $C_{ij}$. Then,
$\det A = a_{i1}C_{i1} + a_{i2}C_{i2} + \cdots + a_{in}C_{in} = a_{1j}C_{1j} + a_{2j}C_{2j} + \cdots + a_{nj}C_{nj}$
for all positive integers $i, j \le n$.
the first formula is the cofactor expansion along the $i$th row, and the second formula is the cofactor expansion along the $j$th column
Its proof (for the general case) is complicated, and thus is skipped.
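Even without the proof, the theorem is easy to check numerically. A minimal recursive sketch in Python (the matrix is a hypothetical example): the expansion along the first row (the definition) agrees with expansions along the other rows.

```python
def det(A):
    """Determinant via cofactor expansion along the first row (the definition)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in A[1:]]  # delete row 1, column j+1
        total += (-1) ** j * A[0][j] * det(minor)
    return total

def det_along_row(A, i):
    """Cofactor expansion along the (i+1)th row; the theorem says this agrees with det."""
    n = len(A)
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]
        total += (-1) ** (i + j) * A[i][j] * det(minor)
    return total

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(det(A), det_along_row(A, 1), det_along_row(A, 2))  # all equal: -3 -3 -3
```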
(Illustration of cofactor expansion theorem)
We use cofactor expansion along the 2nd column here.
Then, we will discuss several properties of determinants that ease their computation.
(Effects on determinant when performing EROs)
Let be a square matrix.
(type I ERO) If we interchange two rows of $A$, the determinant is multiplied by $-1$;
(type II ERO) if we multiply a row of $A$ by a nonzero constant $k$, the determinant is multiplied by $k$;
(type III ERO) if we add a multiple of a row of $A$ to another row, the determinant remains unchanged.
(type I ERO) e.g.
(type II ERO) e.g.
(type III ERO) e.g.
for the property related to the type II ERO, $k$ can actually be zero, and the determinant is then multiplied by zero; but multiplying a row by $k$ is not a type II ERO if $k = 0$
the determinant of a matrix with two identical rows is zero because of the corollary below, which is based on the result about the type I ERO
in view of this proposition, we have some strategies to compute determinants more easily, as follows:
apply type II EROs to take out common multiples of a row to reduce the numerical values of entries, so that the computation is easier
apply type III EROs to create more zero entries
apply the cofactor expansion along a row or column with many zeros
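The effects of the three types of EROs on the determinant can be checked numerically; a small Python sketch with a hypothetical matrix:

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(len(A)))

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]                     # det A = -3
B = [A[1], A[0], A[2]]                                     # type I: interchange rows 1, 2
C = [[3 * x for x in A[0]], A[1], A[2]]                    # type II: multiply row 1 by 3
D = [A[0], [x - 4 * y for x, y in zip(A[1], A[0])], A[2]]  # type III: r2 -> r2 - 4*r1
print(det(A), det(B), det(C), det(D))  # -3 3 -9 -3
```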
apart from the EROs mentioned in the proposition, we can actually also apply elementary column operations (ECOs)
this is because the determinant of the matrix transpose equals that of the original matrix (it will be mentioned in the proposition about properties of determinants)
so, applying elementary column operations is essentially the same as applying EROs, by viewing the operations from different perspectives
we have similar notations for elementary column operations, with $r$ (standing for row) replaced by $c$ (standing for column)
Example. (Vandermonde matrix)
The determinant of a square matrix with two identical rows is zero.
Let be a square matrix with two identical rows.
If we interchange the two identical rows of $A$, the matrix is still the same,
but its determinant is multiplied by $-1$, i.e. $\det A = -\det A$; hence $2\det A = 0$, and so $\det A = 0$.
Alternatively, it can be proved by definition and induction.
Then, we will introduce a convenient way to determine invertibility of a matrix.
Before introducing the theorem, we have a lemma.
For each elementary matrix $E$ and each matrix $A$ of compatible size, $\det(EA) = \det E \det A$. More precisely,
(type I: interchanging two rows) $\det(EA) = -\det A$ and $\det E = -1$ (since we are interchanging rows);
(type II: multiplying a row by $k$) $\det(EA) = k \det A$ and $\det E = k$ (since we are multiplying a row by a nonzero constant);
(type III: adding a multiple of a row to another row) $\det(EA) = \det A$ and $\det E = 1$ (since we are adding a multiple of a row to another row).
(Determining invertibility by determinant)
A square matrix is invertible if and only if its determinant is nonzero.
only if part: by the simplified invertible matrix theorem, a matrix $A$ being invertible is equivalent to $A$ being a product of elementary matrices. So, if we denote the elementary matrices by $E_1, \dots, E_k$, then $\det A = \det E_1 \cdots \det E_k \ne 0$ by the lemma, since the determinant of each elementary matrix is nonzero.
if part: Let $E_k \cdots E_1 A = R$ in which $E_1, \dots, E_k$ are elementary matrices and $R$ is the RREF of $A$. This implies that $\det R = \det E_k \cdots \det E_1 \det A$ by the lemma.
Since $\det A \ne 0$, we have $\det R \ne 0$. Thus, $R$ has no zero row (its determinant would be zero otherwise). Since $R$ is in RREF, it follows that $R = I$ (since $R$ is a square matrix, if not all columns contain leading ones, then there is at least one zero row lying at its bottom, by the definition of RREF). By the simplified invertible matrix theorem, $A$ is invertible.
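A quick numerical sketch of the theorem (both matrices below are hypothetical examples):

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(len(A)))

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
S = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]  # row 3 = 2*(row 2) - row 1: rows are dependent
print(det(A) != 0)  # True: A is invertible
print(det(S))       # 0: S is non-invertible
```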
After introducing this result, we will give some properties of determinants which can ease the computation of determinants.
(Properties of determinants)
Let $A$ and $B$ be square matrices of the same size. Then, the following hold.
(multiplicativity) $\det(AB) = \det A \det B$
(invariance of determinant after transpose) $\det(A^T) = \det A$
(determinant of matrix inverse is inverse of matrix determinant) $\det(A^{-1}) = (\det A)^{-1}$
(multiplicativity) let $E_k \cdots E_1 A = R$ in which $E_1, \dots, E_k$ are elementary matrices and $R$ is the RREF of $A$. Then, $A = E_1^{-1} \cdots E_k^{-1} R$, and by the lemma (the inverse of an elementary matrix is an elementary matrix), $\det(AB) = \det E_1^{-1} \cdots \det E_k^{-1} \det(RB)$ and $\det A = \det E_1^{-1} \cdots \det E_k^{-1} \det R$;
then, it remains to prove that $\det(RB) = \det R \det B$:
if $R = I$, then $\det(RB) = \det B = \det R \det B$;
if $R \ne I$, then the last row of $R$ is a zero row, so $\det R = 0$;
the last row of $RB$ is also a zero row, so $\det(RB) = 0 = \det R \det B$;
the result follows.
(invariance of determinant after transpose) we may prove it by induction and the cofactor expansion theorem, e.g. the cofactor expansion along the first row of $A$ corresponds to the cofactor expansion along the first column of $A^T$
(determinant of matrix inverse is inverse of matrix determinant) using multiplicativity, $\det(A^{-1}) \det A = \det(A^{-1}A) = \det I = 1$, so $\det(A^{-1}) = (\det A)^{-1}$
($\det A \ne 0$ since $A$ is invertible)
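The multiplicativity and transpose-invariance properties can be checked numerically for $2 \times 2$ matrices; a small Python sketch with hypothetical matrices:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def det2(A):
    """Determinant of a 2x2 matrix."""
    (a, b), (c, d) = A
    return a * d - b * c

A = [[2, 1], [1, 1]]
B = [[1, 3], [2, 5]]
print(det2(matmul(A, B)) == det2(A) * det2(B))  # True (multiplicativity)

Bt = [[B[0][0], B[1][0]], [B[0][1], B[1][1]]]   # transpose of B
print(det2(Bt) == det2(B))                      # True (transpose invariance)
```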
By the simplified invertible matrix theorem, we also have the following results:
the homogeneous SLE does not only have the trivial solution (it also has nontrivial solutions)
the RREF of the matrix is not $I$
the matrix cannot be expressed as a product of elementary matrices
Then, we will introduce the adjugate of a matrix, which has a notable result related to the computation of the matrix inverse.
(Adjugate of matrix)
Let $A$ be an $n \times n$ matrix. The adjugate of $A$,
denoted by $\operatorname{adj} A$, is the $n \times n$ matrix whose $(i,j)$th entry is the cofactor $C_{ji}$.
it follows that $\operatorname{adj} A$ is the transpose of the cofactor matrix of $A$, i.e. $\operatorname{adj} A = (C_{ij})^T$
it is more common to use this way to compute the adjugate
(Relationship between adjugate and determinant)
Let $A$ be an $n \times n$ matrix. Then, $A \operatorname{adj} A = (\operatorname{adj} A) A = (\det A) I_n$.
The proof is complicated, and so is skipped.
(Formula of matrix inverse)
If $A$ is invertible, its inverse is given by $A^{-1} = \dfrac{1}{\det A} \operatorname{adj} A$.
(Formula of $2 \times 2$ matrix inverse)
Let $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ be invertible. Then, $A^{-1} = \dfrac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$.
That is, we can find the inverse of a $2 \times 2$ matrix by interchanging the $(1,1)$th and $(2,2)$th entries, multiplying the $(1,2)$th and $(2,1)$th entries by $-1$ (without interchanging them), and multiplying the matrix by the reciprocal of its determinant.
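This $2 \times 2$ formula translates directly into code; a minimal Python sketch using exact `fractions` arithmetic, with a hypothetical matrix:

```python
from fractions import Fraction

def inverse2(A):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = A
    det = a * d - b * c                 # assumed nonzero, i.e. A is invertible
    adj = [[d, -b], [-c, a]]           # swap the diagonal, negate the off-diagonal
    return [[Fraction(x, det) for x in row] for row in adj]

A = [[1, 2], [3, 4]]                   # det A = -2
A_inv = inverse2(A)
print(A_inv == [[-2, 1], [Fraction(3, 2), Fraction(-1, 2)]])  # True
```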
(Adjugate of non-invertible matrix)
Let . Then,