# Introductory Linear Algebra/Matrix inverses and determinants

## Matrix inverses

Matrix inverses are analogous to the multiplicative inverse (or reciprocal) in the number system.

Definition. (Matrix inverses) An $n\times n$  matrix $A$  is invertible (or non-singular) if there exists an $n\times n$  matrix $B$  such that

$AB=I_{n}=BA.$

The matrix $B$  is the inverse of $A$ , and is usually denoted by $A^{-1}$ . A matrix that has no inverse is non-invertible (or singular).

Remark.

• by the invertible matrix theorem (proof of its complete version is complicated, and so is skipped), if one of $AB=I_{n}$  and $BA=I_{n}$  holds, then the other also holds

In the number system, the multiplicative inverse, if it exists, is unique. The same is true of the matrix inverse, as shown in the following proposition.

Proposition. (Uniqueness of matrix inverse) The inverse of a matrix, if it exists, is unique.

Proof. Suppose to the contrary that distinct matrices $B$  and $C$  are both inverses of the matrix $A$ . Then, $AB=BA=I$  and $AC=CA=I$  by the definition of matrix inverse. Hence,

$B=BI=B(AC)=(BA)C=IC=C,$

contradicting the assumption that $B$  and $C$  are distinct.

$\Box$

Example. (Invertible matrix) The matrix

${\begin{pmatrix}1&2\\3&0\\\end{pmatrix}}$

is invertible and its inverse is
${\begin{pmatrix}0&{\frac {1}{3}}\\{\frac {1}{2}}&-{\frac {1}{6}}\end{pmatrix}},$

since
${\begin{pmatrix}1&2\\3&0\\\end{pmatrix}}{\begin{pmatrix}0&{\frac {1}{3}}\\{\frac {1}{2}}&-{\frac {1}{6}}\end{pmatrix}}={\begin{pmatrix}1&0\\0&1\end{pmatrix}}=I_{2}$

(by the invertible matrix theorem, this implies that the matrix product in the other order is also $I_{2}$ )
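Such a verification is mechanical, so it can be checked in a few lines of Python. The sketch below uses exact fractions to avoid floating-point error; the matrices are those of the example:

```python
from fractions import Fraction as F

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[F(1), F(2)], [F(3), F(0)]]
B = [[F(0), F(1, 3)], [F(1, 2), F(-1, 6)]]
I2 = [[F(1), F(0)], [F(0), F(1)]]

assert matmul(A, B) == I2  # AB = I_2
assert matmul(B, A) == I2  # BA = I_2, as the invertible matrix theorem predicts
```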

Exercise.

Is ${\begin{pmatrix}0&{\frac {1}{3}}\\{\frac {1}{2}}&-{\frac {1}{6}}\end{pmatrix}}$  invertible?

• yes
• no

Example. (Non-invertible matrix) The matrix

${\begin{pmatrix}1&3\\4&12\\\end{pmatrix}}$

is non-invertible.

Proof. Suppose to the contrary that the matrix is invertible, i.e. that there exists a matrix ${\begin{pmatrix}a&b\\c&d\\\end{pmatrix}}$  such that

${\begin{pmatrix}1&3\\4&12\\\end{pmatrix}}{\begin{pmatrix}a&b\\c&d\\\end{pmatrix}}={\begin{pmatrix}1&0\\0&1\\\end{pmatrix}}.$

But, this equality is equivalent to
${\begin{pmatrix}a+3c&b+3d\\4a+12c&4b+12d\end{pmatrix}}={\begin{pmatrix}1&0\\0&1\\\end{pmatrix}}\Leftrightarrow {\begin{cases}a+3c&=1\\4a+12c&=0\end{cases}}\Leftrightarrow {\begin{cases}a+3c&=1\\a+3c&=0\end{cases}},$

which is impossible, a contradiction.

$\Box$

Exercise.

1 Choose all correct statements.

• if matrices $A$ and $B$ are invertible, $A+B$ is also invertible
• if matrices $A$ and $B$ are non-invertible, $A+B$ is also non-invertible
• if matrices $A$ and $B$ are invertible, $AB$ and $BA$ are also invertible
• if matrices $A$ and $B$ are non-invertible, $AB$ is also non-invertible

2 Choose all correct statements.

• since ${\begin{pmatrix}1&0&0\\0&0&1\end{pmatrix}}{\begin{pmatrix}1&1\\0&1\\1&1\\\end{pmatrix}}={\begin{pmatrix}1&1\\1&1\\\end{pmatrix}}$ , the inverse of ${\begin{pmatrix}1&0&0\\0&0&1\end{pmatrix}}$ is ${\begin{pmatrix}1&1\\0&1\\1&1\\\end{pmatrix}}$
• let $A$ be a matrix. $(A^{-1})^{-1}=A$
• if matrix $A$ is invertible, $AB=BA$ for each matrix with the same size as $A$

Proposition. (Properties of matrix inverse) Let $A$  and $B$  be invertible matrices of the same size, and let $c$  be a nonzero scalar. Then,

• (self-invertibility) $A^{-1}$  is invertible and $(A^{-1})^{-1}=A$
• (scalar multiplicativity) $cA$  is invertible and $(cA)^{-1}=c^{-1}A^{-1}$
• ('reverse multiplicativity') $AB$  is invertible and $(AB)^{-1}=B^{-1}A^{-1}$
• (interchangeability of inverse and transpose) $A^{T}$  is invertible and $(A^{T})^{-1}=(A^{-1})^{T}$

Proof.

• (self-invertibility) since $A$  is invertible, $AA^{-1}=A^{-1}A=I$ , and thus $A^{-1}$  is invertible, and its inverse is $A$
• (scalar multiplicativity) $(cA)(c^{-1}A^{-1})=(cc^{-1})(AA^{-1})=1(I)=I$ , as desired
• ('reverse multiplicativity') $(AB)(B^{-1}A^{-1})=A(BB^{-1})A^{-1}=AIA^{-1}=AA^{-1}=I$ , as desired
• (interchangeability of inverse and transpose) $(A^{T})(A^{-1})^{T}=(A^{-1}A)^{T}=I^{T}=I$ , as desired

$\Box$

Remark.

• inductively, we can have general 'reverse multiplicativity': $A_{1}A_{2}\cdots A_{n}$  is invertible and

$(A_{1}A_{2}\cdots A_{n})^{-1}=A_{n}^{-1}\cdots A_{2}^{-1}A_{1}^{-1}$

Matrix inverses can be used to solve SLEs, as follows:

Proposition. Let $A\mathbf {x} =\mathbf {b}$  be an SLE in which $A$  is an invertible matrix. Then, the SLE has a unique solution, given by $\mathbf {x} =A^{-1}\mathbf {b}$ .

Proof.

{\begin{aligned}&&A\mathbf {x} &=\mathbf {b} \\&\Leftrightarrow &{\color {green}A^{-1}}A\mathbf {x} &={\color {green}A^{-1}}\mathbf {b} \\&\Leftrightarrow &I\mathbf {x} &={\color {green}A^{-1}}\mathbf {b} \\&\Leftrightarrow &\mathbf {x} &={\color {green}A^{-1}}\mathbf {b} \\\end{aligned}}

$\Box$
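As an illustration, the sketch below solves a small SLE with the invertible matrix from the first example; the right-hand side $\mathbf {b} =(4,6)^{T}$  is an arbitrary choice for illustration:

```python
from fractions import Fraction as F

# SLE: (1 2; 3 0) x = (4, 6)^T, using the inverse computed in the earlier example.
A = [[F(1), F(2)], [F(3), F(0)]]
A_inv = [[F(0), F(1, 3)], [F(1, 2), F(-1, 6)]]
b = [F(4), F(6)]

# x = A^{-1} b
x = [sum(A_inv[i][j] * b[j] for j in range(2)) for i in range(2)]

# Check the solution by substituting back: A x should give b.
assert [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)] == b
assert x == [F(2), F(1)]
```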

Then, we will define the elementary matrix, which is closely related to EROs, and is important for the proof of results related to EROs.

Definition. (Elementary matrix) Let $n$  be a positive integer. There are three types of $n\times n$  elementary matrices. An elementary matrix of type I, II, or III is a matrix obtained by performing a type I, II, or III ERO on the identity matrix $I_{n}$  respectively.

Remark.

• if a matrix needs to be obtained by performing two or more EROs on the identity matrix, it is not an elementary matrix

Example. The matrix

${\begin{pmatrix}1&0&0\\0&0&1\\0&1&0\\\end{pmatrix}}$

is an elementary matrix of type I, since it can be obtained by performing the ERO $\mathbf {r} _{2}\leftrightarrow \mathbf {r} _{3}$  on $I_{3}$ , the matrix
${\begin{pmatrix}1&0&0\\0&7&0\\0&0&1\\\end{pmatrix}}$

is an elementary matrix of type II, since it can be obtained by performing the ERO $7\mathbf {r} _{2}\to \mathbf {r} _{2}$  on $I_{3}$ , and the matrix
${\begin{pmatrix}1&-9&0\\0&1&0\\0&0&1\\\end{pmatrix}}$

is an elementary matrix of type III, since it can be obtained by performing the ERO $-9\mathbf {r} _{2}+\mathbf {r} _{1}\to \mathbf {r} _{1}$  on $I_{3}$ .

The matrix

${\begin{pmatrix}0&0&1\\1&0&0\\0&1&0\end{pmatrix}}$

is not an elementary matrix, since it needs to be obtained by performing at least two EROs on $I_{3}$ , e.g. $\mathbf {r} _{1}\leftrightarrow \mathbf {r} _{3},\mathbf {r} _{2}\leftrightarrow \mathbf {r} _{3}$ , in this order.
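The three ERO types, and the elementary matrices they produce from $I_{3}$ , can be sketched in Python as follows (rows are 0-indexed in the code, while the text writes $\mathbf {r} _{1},\mathbf {r} _{2},\ldots$ ):

```python
from fractions import Fraction as F

def identity(n):
    return [[F(i == j) for j in range(n)] for i in range(n)]

# The three ERO types, each acting in place on a list-of-rows matrix.
def swap(M, i, j):        M[i], M[j] = M[j], M[i]                         # type I
def scale(M, i, k):       M[i] = [k * x for x in M[i]]                    # type II
def add_mult(M, i, j, k): M[j] = [k * a + b for a, b in zip(M[i], M[j])]  # type III

E = identity(3); swap(E, 1, 2)             # r2 <-> r3
assert E == [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
E = identity(3); scale(E, 1, F(7))         # 7 r2 -> r2
assert E[1] == [0, 7, 0]
E = identity(3); add_mult(E, 1, 0, F(-9))  # -9 r2 + r1 -> r1
assert E[0] == [1, -9, 0]
```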

Exercise.

Choose correct statement(s)

• a product of elementary matrices is an elementary matrix
• $(AB)^{-1}=(BA)^{-1}$ if $B$ is the inverse of $A$
• if $A$ and $B$ are invertible matrices of the same size, the SLE $AB\mathbf {x} =\mathbf {b}$ has a unique solution
• a sum of elementary matrices is an elementary matrix

Proposition. Let $A$  be an $m\times n$  matrix. If $B$  is obtained from $A$  by performing an ERO, then there exists an $m\times m$  elementary matrix $E$  such that $B=EA$ , and $E$  can be obtained from $I_{m}$  by performing the same ERO.

Conversely, if $E$  is an $m\times m$  elementary matrix, then $EA$  is the matrix obtained from $A$  by performing the corresponding ERO.

Proof. Outline: $2\times 2$  case: e.g.

• type I ERO:
${\begin{pmatrix}a&b\\c&d\\\end{pmatrix}}{\overset {\mathbf {r} _{1}\leftrightarrow \mathbf {r} _{2}}{\to }}{\begin{pmatrix}c&d\\a&b\\\end{pmatrix}}={\begin{pmatrix}0&1\\1&0\\\end{pmatrix}}{\begin{pmatrix}a&b\\c&d\\\end{pmatrix}}={\begin{pmatrix}0\times a+1\times c&0\times b+1\times d\\1\times a+0\times c&1\times b+0\times d\\\end{pmatrix}}$

• type II ERO:

${\begin{pmatrix}a&b\\c&d\\\end{pmatrix}}{\overset {k\mathbf {r} _{1}\to \mathbf {r} _{1}}{\to }}{\begin{pmatrix}ka&kb\\c&d\\\end{pmatrix}}={\begin{pmatrix}k&0\\0&1\end{pmatrix}}{\begin{pmatrix}a&b\\c&d\\\end{pmatrix}}={\begin{pmatrix}k\times a+0\times c&k\times b+0\times d\\0\times a+1\times c&0\times b+1\times d\\\end{pmatrix}}$

• type III ERO:

${\begin{pmatrix}a&b\\c&d\\\end{pmatrix}}{\overset {k\mathbf {r} _{1}+\mathbf {r} _{2}\to \mathbf {r} _{2}}{\to }}{\begin{pmatrix}a&b\\c+ka&d+kb\\\end{pmatrix}}={\begin{pmatrix}1&0\\k&1\end{pmatrix}}{\begin{pmatrix}a&b\\c&d\\\end{pmatrix}}={\begin{pmatrix}1\times a+0\times c&1\times b+0\times d\\k\times a+1\times c&k\times b+1\times d\\\end{pmatrix}}$

$\Box$

Remark.

• illustration of the proposition:

{\begin{aligned}A&{\overset {\text{ERO}}{\to }}B={\color {green}E}A\\I_{m}&{\overset {\text{ERO}}{\to }}{\color {green}E}\end{aligned}}

• inductively, we have:

{\begin{aligned}A&{\overset {\color {green}{\text{ERO 1}}}{\to }}{\color {green}E_{1}}A{\overset {\color {blue}{\text{ERO 2}}}{\to }}{\color {blue}E_{2}}({\color {green}E_{1}}A)\cdots {\overset {\color {brown}{\text{ERO }}n}{\to }}B={\color {brown}E_{n}}\cdots {\color {blue}E_{2}}{\color {green}E_{1}}A\\I_{m}&{\overset {\color {green}{\text{ERO 1}}}{\to }}{\color {green}E_{1}}{\overset {\color {blue}{\text{ERO 2}}}{\to }}{\color {blue}E_{2}}{\color {green}E_{1}}\cdots {\overset {\color {brown}{\text{ERO }}n}{\to }}{\color {brown}E_{n}}\cdots {\color {blue}E_{2}}{\color {green}E_{1}}\end{aligned}}

Example. The following EROs

${\begin{pmatrix}1&2\\3&4\\\end{pmatrix}}{\overset {\color {green}\mathbf {r} _{1}\leftrightarrow \mathbf {r} _{2}}{\to }}{\begin{pmatrix}3&4\\1&2\\\end{pmatrix}}{\overset {\color {blue}-3\mathbf {r} _{1}\to \mathbf {r} _{1}}{\to }}{\begin{pmatrix}-9&-12\\1&2\\\end{pmatrix}}{\overset {\color {brown}4\mathbf {r} _{2}+\mathbf {r} _{1}\to \mathbf {r} _{1}}{\to }}{\begin{pmatrix}-5&-4\\1&2\\\end{pmatrix}}$

correspond to the matrix multiplication
${\begin{pmatrix}-5&-4\\1&2\\\end{pmatrix}}={\color {brown}{\begin{pmatrix}1&4\\0&1\\\end{pmatrix}}}{\color {blue}{\begin{pmatrix}-3&0\\0&1\\\end{pmatrix}}}{\color {green}{\begin{pmatrix}0&1\\1&0\\\end{pmatrix}}}{\begin{pmatrix}1&2\\3&4\\\end{pmatrix}}$
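This correspondence is easy to check numerically; a minimal Python sketch with exact fractions, using the matrices from the example:

```python
from fractions import Fraction as F

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A  = [[F(1), F(2)], [F(3), F(4)]]
E1 = [[F(0), F(1)], [F(1), F(0)]]   # r1 <-> r2
E2 = [[F(-3), F(0)], [F(0), F(1)]]  # -3 r1 -> r1
E3 = [[F(1), F(4)], [F(0), F(1)]]   # 4 r2 + r1 -> r1

# E3 E2 E1 A reproduces the final matrix of the ERO chain.
result = matmul(E3, matmul(E2, matmul(E1, A)))
assert result == [[F(-5), F(-4)], [F(1), F(2)]]
```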

Proposition. (Invertibility of elementary matrix) Elementary matrices are invertible. The inverse of an elementary matrix is an elementary matrix of the same type.

Proof. The reverse process of each ERO is an ERO of the same type. Let $E_{1}$  and $E_{2}$  be the elementary matrices corresponding to these two EROs (an ERO and its reverse process), which are of the same type. Then, $E_{2}E_{1}=I\Leftrightarrow E_{1}^{-1}=E_{2}$ , as desired (since $I$  can be obtained from $I$  by performing an ERO and its reverse process together).

$\Box$

Remark.

• if $R$  is the RREF of $A$ , then $R=E_{k}\cdots E_{2}E_{1}A$  for some elementary matrices $E_{1},E_{2},\ldots ,E_{k}$
• since elementary matrices are invertible, $E_{k}\cdots E_{2}E_{1}$  is invertible, and equal to $E_{1}^{-1}E_{2}^{-1}\cdots E_{k}^{-1}$
• in other words, $R=PA$  for some invertible matrix $P$

Example. Since the reverse process of $\mathbf {r} _{1}\leftrightarrow \mathbf {r} _{2}$ , $2\mathbf {r} _{1}\to \mathbf {r} _{1}$  and $2\mathbf {r} _{1}+\mathbf {r} _{2}\to \mathbf {r} _{2}$  are $\mathbf {r} _{1}\leftrightarrow \mathbf {r} _{2}$ , ${\frac {1}{2}}\mathbf {r} _{1}\to \mathbf {r} _{1}$  and $-2\mathbf {r} _{1}+\mathbf {r} _{2}\to \mathbf {r} _{2}$ , the inverses of the elementary matrices ${\begin{pmatrix}0&1\\1&0\\\end{pmatrix}}$ , ${\begin{pmatrix}2&0\\0&1\\\end{pmatrix}}$  and ${\begin{pmatrix}1&0\\2&1\\\end{pmatrix}}$  are ${\begin{pmatrix}0&1\\1&0\\\end{pmatrix}}$ , ${\begin{pmatrix}{\frac {1}{2}}&0\\0&1\\\end{pmatrix}}$  and ${\begin{pmatrix}1&0\\-2&1\\\end{pmatrix}}$  respectively.

In particular, the inverse of type I elementary matrix is itself.

Exercise. It is given that matrix $B={\begin{pmatrix}1&3&8\\3&7&2\\0&2&2\\\end{pmatrix}}$  is obtained from matrix $A$  by performing the EROs $\mathbf {r} _{1}\leftrightarrow \mathbf {r} _{3},3\mathbf {r} _{2}\to \mathbf {r} _{2},-\mathbf {r} _{1}+\mathbf {r} _{3}\to \mathbf {r} _{3}$ , in this order, and $B^{-1}={\frac {1}{20}}{\begin{pmatrix}5&5&-25\\-3&1&11\\3&-1&-1\\\end{pmatrix}}$ .

1 Find $A$ .

• ${\begin{pmatrix}1&5&10\\1&{\frac {7}{3}}&{\frac {2}{3}}\\1&3&8\\\end{pmatrix}}$
• ${\begin{pmatrix}0&2&2\\9&21&6\\1&1&6\\\end{pmatrix}}$
• ${\begin{pmatrix}-1&-1&-6\\9&21&6\\1&3&8\\\end{pmatrix}}$
• ${\begin{pmatrix}7&9&1\\-1&21&3\\2&6&0\\\end{pmatrix}}$

2 Find $A^{-1}$ .

• ${\frac {1}{20}}{\begin{pmatrix}-25&15&30\\11&3&-14\\-1&-3&4\end{pmatrix}}$
• ${\frac {1}{20}}{\begin{pmatrix}-30&15&5\\41&3&-3\\-4&-3&3\end{pmatrix}}$
• ${\frac {1}{20}}{\begin{pmatrix}8&4&-26\\-1&{\frac {1}{3}}&{\frac {11}{3}}\\5&5&-25\end{pmatrix}}$

Then, we will state a simplified version of the invertible matrix theorem, in which some results from the complete version are removed.

Theorem. (Simplified invertible matrix theorem) Let $A$  be an $n\times n$  matrix. Then, the following are equivalent.

(i) $A$  is invertible

(ii) the homogeneous SLE $A\mathbf {x} =\mathbf {0}$  only has the trivial solution $\mathbf {x} =\mathbf {0}$

(iii) the RREF of $A$  is ${\color {green}I_{n}}$

(iv) $A$  is a product of elementary matrices

Proof. To prove this, we may establish a cycle of implications, i.e. (i) $\Rightarrow$  (ii) $\Rightarrow$  (iii) $\Rightarrow$  (iv) $\Rightarrow$  (i). Then, any two statements picked from the four are equivalent to each other, which means that the four statements are equivalent.

(i) $\Rightarrow$  (ii): it follows from the proposition about solving SLE, and $\mathbf {x} =A^{-1}\mathbf {0} =\mathbf {0}$

(ii) $\Rightarrow$  (iii): since the SLE has a unique solution, the RREF of the augmented matrix of the SLE $(A|\mathbf {0} )$  has a leading one in each of the first $n$  columns, but not the $(n+1)$ st column, i.e. it is $(I_{n}|\mathbf {0} )$ . It follows that the RREF of $A$  is $I_{n}$ , since after arbitrary EROs, the rightmost zero column remains a zero column.

(iii) $\Rightarrow$  (iv): since RREF of $A$  is $I_{n}$ , and RREF of $A$  equals $E_{k}\cdots E_{2}E_{1}A$  for some elementary matrices $E_{1},E_{2},\ldots ,E_{k}$ , it follows that $E_{k}\cdots E_{2}E_{1}A=I_{n}$ . By definition and general 'reverse multiplicativity' of matrix inverse, we have

$A=(E_{k}\cdots E_{2}E_{1})^{-1}=E_{1}^{-1}E_{2}^{-1}\cdots E_{k}^{-1},$

i.e. $A$  is a product of elementary matrices

(iv) $\Rightarrow$  (i): since $A$  is a product of elementary matrices and an elementary matrix is invertible, it follows that $A$  is invertible by general 'reverse multiplicativity' of matrix inverse.

$\Box$

Remark.

• this theorem provides us with multiple ways to prove the invertibility of a matrix: we can prove any one of the equivalent statements
• this may make the proof easier
• later, when we discuss some results about these equivalent statements, they can be linked to this theorem

Example. Consider the matrix

${\begin{pmatrix}1&2\\2&1\\\end{pmatrix}}$

. We can find its RREF by the Gauss-Jordan algorithm, as follows:
${\begin{pmatrix}1&2\\2&1\\\end{pmatrix}}{\overset {-2\mathbf {r} _{1}+\mathbf {r} _{2}\to \mathbf {r} _{2}}{\to }}{\begin{pmatrix}1&2\\0&-3\\\end{pmatrix}}{\overset {-{\frac {1}{3}}\mathbf {r} _{2}\to \mathbf {r} _{2}}{\to }}{\begin{pmatrix}1&2\\0&1\\\end{pmatrix}}{\overset {-2\mathbf {r} _{2}+\mathbf {r} _{1}\to \mathbf {r} _{1}}{\to }}{\begin{pmatrix}1&0\\0&1\\\end{pmatrix}}.$

Since its RREF is ${\begin{pmatrix}1&0\\0&1\\\end{pmatrix}}=I_{2}$ , by the simplified invertible matrix theorem, we also have the following results:

(i) ${\begin{pmatrix}1&2\\2&1\\\end{pmatrix}}$  is invertible

(ii) the homogeneous SLE ${\begin{pmatrix}1&2\\2&1\\\end{pmatrix}}\mathbf {x} =\mathbf {0}$  only has the trivial solution $\mathbf {x} =\mathbf {0}$

(iii) ${\begin{pmatrix}1&2\\2&1\\\end{pmatrix}}$  is a product of elementary matrices

Let's verify them one by one.

(i):

${\begin{pmatrix}1&2\\2&1\\\end{pmatrix}}{\begin{pmatrix}-1/3&2/3\\2/3&-1/3\\\end{pmatrix}}=I_{2}$

(ii): the SLE can be represented by the augmented matrix ${\begin{pmatrix}1&2&0\\2&1&0\\\end{pmatrix}}$ , and we can find its RREF by the Gauss-Jordan algorithm, as follows:

${\begin{pmatrix}1&2&0\\2&1&0\\\end{pmatrix}}{\overset {-2\mathbf {r} _{1}+\mathbf {r} _{2}\to \mathbf {r} _{2}}{\to }}{\begin{pmatrix}1&2&0\\0&-3&0\\\end{pmatrix}}{\overset {-{\frac {1}{3}}\mathbf {r} _{2}\to \mathbf {r} _{2}}{\to }}{\begin{pmatrix}1&2&0\\0&1&0\\\end{pmatrix}}{\overset {-2\mathbf {r} _{2}+\mathbf {r} _{1}\to \mathbf {r} _{1}}{\to }}{\begin{pmatrix}1&0&0\\0&1&0\\\end{pmatrix}}.$

Then, we can directly read from the RREF of augmented matrix that the SLE only has the trivial solution.

(iii):

${\begin{pmatrix}1&2\\2&1\\\end{pmatrix}}={\begin{pmatrix}1&0\\2&1\\\end{pmatrix}}{\begin{pmatrix}1&0\\0&-3\\\end{pmatrix}}{\begin{pmatrix}1&2\\0&1\\\end{pmatrix}}$
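Factorization (iii) can be verified numerically; a minimal Python sketch (the three factors are the inverses of the elementary matrices used in the Gauss-Jordan reduction above):

```python
from fractions import Fraction as F

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

E = [[[F(1), F(0)], [F(2),  F(1)]],   # inverse of "-2 r1 + r2 -> r2"
     [[F(1), F(0)], [F(0), F(-3)]],   # inverse of "-1/3 r2 -> r2"
     [[F(1), F(2)], [F(0),  F(1)]]]   # inverse of "-2 r2 + r1 -> r1"

product = E[0]
for M in E[1:]:
    product = matmul(product, M)
assert product == [[F(1), F(2)], [F(2), F(1)]]
```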

Exercise. Consider the matrix $A={\begin{pmatrix}2&3&1\\3&6&1\\2&0&2\\\end{pmatrix}}$ , and the SLE $S={\begin{cases}3x+y&=-2z\\6x+y&=-3z\\y&=-z\end{cases}}$ .

Choose correct statement(s).

• $A$ is invertible
• $S$ has a unique solution
• $A$ is not a product of elementary matrices
• the RREF of $A$ is ${\begin{pmatrix}1&0&1\\0&1&-3\\0&0&0\\\end{pmatrix}}$

The following provides us with a convenient and efficient way to find the inverse of a matrix.

Theorem. (Finding matrix inverse using Gauss-Jordan algorithm) Let $A$  be an $n\times n$  invertible matrix. Then, we can transform the (augmented) matrix $(A|I_{n})$  to the (augmented) matrix $(I_{n}|B)$  ($B$  is of the same size as $A$ ), which is the RREF of $(A|I_{n})$ , using a finite number of EROs, and we have ${\color {green}B=A^{-1}}$ .

Proof. Outline: we can write $E_{k}\cdots E_{1}({\color {green}A}|{\color {blue}I_{n}})=({\color {green}I_{n}}|{\color {blue}B})$  for some elementary matrices $E_{1},\ldots ,E_{k}$ , since $({\color {green}I_{n}}|{\color {blue}B})$  is the RREF of $({\color {green}A}|{\color {blue}I_{n}})$ . Then, it can be proved that $E_{k}\cdots E_{1}{\color {green}A}={\color {green}I_{n}}$  and $E_{k}\cdots E_{1}{\color {blue}I_{n}}={\color {blue}B}$ . It follows that

${\color {blue}B}{\color {green}A}=(E_{k}\cdots \underbrace {E_{1}{\color {blue}I_{n}}} _{E_{1}}){\color {green}A}=I_{n},$

and thus $B=A^{-1}$ .

$\Box$

Remark.

• if $A$  is not invertible, we are not able to transform $(A|I_{n})$  to $(I_{n}|B)$  (the RREF of $(A|I_{n})$  still exists; it is just not of the form $(I_{n}|B)$ )

Example. Let $A={\begin{pmatrix}2&0\\2&2\end{pmatrix}}$ . After performing EROs as follows:

$\left({\begin{array}{cc|cc}2&0&1&0\\2&2&0&1\\\end{array}}\right){\overset {{\frac {1}{2}}\mathbf {r} _{1}\to \mathbf {r} _{1}}{\to }}\left({\begin{array}{cc|cc}1&0&1/2&0\\2&2&0&1\\\end{array}}\right){\overset {-2\mathbf {r} _{1}+\mathbf {r} _{2}\to \mathbf {r} _{2}}{\to }}\left({\begin{array}{cc|cc}1&0&1/2&0\\0&2&-1&1\\\end{array}}\right){\overset {{\frac {1}{2}}\mathbf {r} _{2}\to \mathbf {r} _{2}}{\to }}\left({\begin{array}{cc|cc}1&0&1/2&0\\0&1&-1/2&1/2\\\end{array}}\right),$

we have $A^{-1}={\begin{pmatrix}1/2&0\\-1/2&1/2\\\end{pmatrix}}$ .
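The algorithm of the theorem translates directly into code. Below is a minimal Python sketch (exact fractions, no numerical refinements), checked against this example:

```python
from fractions import Fraction as F

def inverse(A):
    """Invert a square matrix by row-reducing (A | I_n) to (I_n | A^{-1}).
    Returns None when A is singular. A minimal sketch using exact Fractions."""
    n = len(A)
    # Build the augmented matrix (A | I_n).
    M = [[F(A[i][j]) for j in range(n)] + [F(i == j) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        # Find a row with a nonzero entry in this column (type I ERO).
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None  # no pivot in this column: A is not invertible
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot becomes 1 (type II ERO).
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Eliminate the column in every other row (type III EROs).
        for r in range(n):
            if r != col and M[r][col] != 0:
                k = M[r][col]
                M[r] = [a - k * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

assert inverse([[2, 0], [2, 2]]) == [[F(1, 2), F(0)], [F(-1, 2), F(1, 2)]]
assert inverse([[1, 3], [4, 12]]) is None  # the singular matrix from earlier
```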

We have previously proved that $C={\begin{pmatrix}1&3\\4&12\\\end{pmatrix}}$  is non-invertible. Now, we verify that it is impossible to transform $(C|I_{2})$  to $(I_{2}|B)$  in which $B=C^{-1}$ . We perform EROs as follows:

$\left({\begin{array}{cc|cc}1&3&1&0\\4&12&0&1\\\end{array}}\right){\overset {-4\mathbf {r} _{1}+\mathbf {r} _{2}\to \mathbf {r} _{2}}{\to }}\left({\begin{array}{cc|cc}1&3&1&0\\0&0&-4&1\\\end{array}}\right){\overset {-{\frac {1}{4}}\mathbf {r} _{2}\to \mathbf {r} _{2}}{\to }}\left({\begin{array}{cc|cc}1&3&1&0\\0&0&1&-1/4\\\end{array}}\right){\overset {-\mathbf {r} _{2}+\mathbf {r} _{1}\to \mathbf {r} _{1}}{\to }}\left({\begin{array}{cc|cc}1&3&0&1/4\\0&0&1&-1/4\\\end{array}}\right)$

The last matrix is in RREF. We can see from the first ERO that, to make the $(2,1)$ -entry zero, we must also make the $(2,2)$ -entry zero. Thus, such a transformation is impossible.

Exercise. Let $E_{1},\ldots ,E_{k}$  be some elementary matrices of same size $n\times n$ .

Choose correct statement(s).

• we can transform $(E_{1}|I_{n})$ to $(I_{n}|B)$ in which $B$ is of same size as $E_{1}$
• we can transform $(E_{1}|I_{n})(E_{2}|I_{n})$ to $(I_{n}|B_{1})(I_{n}|B_{2})$ in which $B_{1},B_{2}$ are of same size as $E$
• we can transform $(E_{1}|I_{n})+(E_{2}|I_{n})$ to $(I_{n}|B_{1})+(I_{n}|B_{2})$ in which $B_{1},B_{2}$ are of same size as $E$
• we can transform $(A|I_{n})$ to $(C|B)$ in which $A,B,C$ are of same size $n\times n$

## Determinants

Then, we will discuss the determinant, which allows characterizing some properties of a square matrix.

Definition. (Determinant) Let $A=(a_{ij})$  be an $n\times n$  matrix. The determinant of $A$ , denoted by $\det A$  or $|A|$ , is defined recursively as follows:

• when $n=1$ , we define $\det A=a_{11}$
• when $n\geq 2$ , suppose we have defined the determinant of each $(n-1)\times (n-1)$  matrix. Let $A_{ij}$  be the $(n-1)\times (n-1)$  (sub)matrix obtained by deleting the $i$ th row and the $j$ th column of $A$ . We define the $(i,j)$ -cofactor of $A$  by $c_{ij}=(-1)^{i+j}\det(A_{ij})$ . Then, we define

$\det A=a_{11}c_{11}+a_{12}c_{12}+\cdots +a_{1n}c_{1n}.$

Remark.

• a minor is the determinant of a square submatrix; in particular, $\det(A_{ij})$  is the $(i,j)$ -minor of $A$
• the matrix $(c_{ij})$  consisting of all cofactors is called the cofactor matrix
• the definition when $n\geq 2$  is also called the cofactor expansion (or Laplace expansion) along the first row.
• another notation: ${\begin{vmatrix}a&b\\c&d\\\end{vmatrix}}=\det {\begin{pmatrix}a&b\\c&d\\\end{pmatrix}}$ , and there are similar notations for matrices with different sizes
• the signs of the cofactors are alternating. The sign of the cofactor at the position of each entry of a matrix is shown below:

${\begin{pmatrix}+&-&+&-&\cdots \\-&+&-&+&\cdots \\+&-&+&-&\cdots \\-&+&-&+&\cdots \\\vdots &\vdots &\vdots &\vdots &\ddots \\\end{pmatrix}},$

which looks like a 'chessboard' pattern.
• we can observe from the above pattern that the signs of the cofactors located on the main diagonal are always positive
• this is the case since the row number $i$  equals column number $j$  in main diagonal, and so $(-1)^{i+j}=(-1)^{2i}=1^{i}=1$
• (some illustrations of deleting the row and column)

{\begin{aligned}{\begin{pmatrix}{\color {red}{\cancel {a_{11}}}}&{\color {red}{\cancel {a_{12}}}}&{\color {red}{\cancel {a_{13}}}}\\{\color {red}{\cancel {a_{21}}}}&a_{22}&a_{23}\\{\color {red}{\cancel {a_{31}}}}&a_{32}&a_{33}\\\end{pmatrix}},{\begin{pmatrix}{\color {red}{\cancel {a_{11}}}}&{\color {red}{\cancel {a_{12}}}}&{\color {red}{\cancel {a_{13}}}}\\a_{21}&{\color {red}{\cancel {a_{22}}}}&a_{23}\\a_{31}&{\color {red}{\cancel {a_{32}}}}&a_{33}\\\end{pmatrix}},{\begin{pmatrix}{\color {red}{\cancel {a_{11}}}}&{\color {red}{\cancel {a_{12}}}}&{\color {red}{\cancel {a_{13}}}}\\a_{21}&a_{22}&{\color {red}{\cancel {a_{23}}}}\\a_{31}&a_{32}&{\color {red}{\cancel {a_{33}}}}\\\end{pmatrix}},\\{\begin{pmatrix}{\color {red}{\cancel {a_{11}}}}&a_{12}&a_{13}\\{\color {red}{\cancel {a_{21}}}}&{\color {red}{\cancel {a_{22}}}}&{\color {red}{\cancel {a_{23}}}}\\{\color {red}{\cancel {a_{31}}}}&a_{32}&a_{33}\\\end{pmatrix}},{\begin{pmatrix}a_{11}&{\color {red}{\cancel {a_{12}}}}&a_{13}\\{\color {red}{\cancel {a_{21}}}}&{\color {red}{\cancel {a_{22}}}}&{\color {red}{\cancel {a_{23}}}}\\a_{31}&{\color {red}{\cancel {a_{32}}}}&a_{33}\\\end{pmatrix}},{\begin{pmatrix}a_{11}&a_{12}&{\color {red}{\cancel {a_{13}}}}\\{\color {red}{\cancel {a_{21}}}}&{\color {red}{\cancel {a_{22}}}}&{\color {red}{\cancel {a_{23}}}}\\a_{31}&a_{32}&{\color {red}{\cancel {a_{33}}}}\\\end{pmatrix}},\\{\begin{pmatrix}{\color {red}{\cancel {a_{11}}}}&a_{12}&a_{13}\\{\color {red}{\cancel {a_{21}}}}&a_{22}&a_{23}\\{\color {red}{\cancel {a_{31}}}}&{\color {red}{\cancel {a_{32}}}}&{\color {red}{\cancel {a_{33}}}}\\\end{pmatrix}},{\begin{pmatrix}a_{11}&{\color {red}{\cancel {a_{12}}}}&a_{13}\\a_{21}&{\color {red}{\cancel {a_{22}}}}&a_{23}\\{\color {red}{\cancel {a_{31}}}}&{\color {red}{\cancel {a_{32}}}}&{\color {red}{\cancel {a_{33}}}}\\\end{pmatrix}},{\begin{pmatrix}a_{11}&a_{12}&{\color {red}{\cancel {a_{13}}}}\\a_{21}&a_{22}&{\color {red}{\cancel {a_{23}}}}\\{\color {red}{\cancel {a_{31}}}}&{\color {red}{\cancel {a_{32}}}}&{\color {red}{\cancel {a_{33}}}}\\\end{pmatrix}}\\\end{aligned}}

Example. (Formulas of determinants of $2\times 2$  and $3\times 3$  matrices)

${\begin{vmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\\end{vmatrix}}=a_{11}\underbrace {\det(a_{22})} _{a_{22}}-a_{12}\underbrace {\det(a_{21})} _{a_{21}}={\color {green}a_{11}a_{22}-a_{12}a_{21}}$

and
${\begin{vmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\\\end{vmatrix}}=+a_{11}{\begin{vmatrix}a_{22}&a_{23}\\a_{32}&a_{33}\end{vmatrix}}-a_{12}{\begin{vmatrix}a_{21}&a_{23}\\a_{31}&a_{33}\end{vmatrix}}+a_{13}{\begin{vmatrix}a_{21}&a_{22}\\a_{31}&a_{32}\end{vmatrix}}=a_{11}a_{22}a_{33}+a_{12}a_{23}a_{31}+a_{13}a_{21}a_{32}-a_{13}a_{22}a_{31}-a_{12}a_{21}a_{33}-a_{23}a_{32}a_{11}$

For the formula of determinants of $3\times 3$  matrices, we have a useful mnemonic device for it, namely the Rule of Sarrus, as follows:

Proposition. (Rule of Sarrus) To compute the determinant of a $3\times 3$  matrix, append the first two columns to the right of the matrix; the determinant is the sum of the products along the three descending diagonals minus the sum of the products along the three ascending diagonals. To be more precise, the determinant is $a_{11}a_{22}a_{33}+a_{12}a_{23}a_{31}+a_{13}a_{21}a_{32}-a_{13}a_{22}a_{31}-a_{12}a_{21}a_{33}-a_{23}a_{32}a_{11}$ .

Proof. It follows from the formula in the above example.

$\Box$

Then, we will give an example of computing the determinant of a $4\times 4$  matrix, which cannot be computed by the Rule of Sarrus directly.

Example.

${\begin{vmatrix}2&1&0&0\\3&2&5&6\\2&0&2&2\\9&8&7&6\\\end{vmatrix}}=2{\begin{vmatrix}2&5&6\\0&2&2\\8&7&6\\\end{vmatrix}}-1{\begin{vmatrix}3&5&6\\2&2&2\\9&7&6\\\end{vmatrix}}+0-0=2[2(2)(6)+5(2)(8)+0(7)(6)-6(2)(8)-5(0)(6)-2(7)(2)]-[3(2)(6)+5(2)(9)+2(7)(6)-6(2)(9)-5(2)(6)-2(7)(3)]=2(-20)-0=-40$
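The recursive definition translates directly into code; a minimal Python sketch, checked against the $4\times 4$  example above:

```python
def det(A):
    """Determinant by cofactor expansion along the first row,
    following the recursive definition above."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # A_{1j}: delete row 1 and column j (0-indexed in the code).
        sub = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(sub)
    return total

assert det([[2, 1, 0, 0],
            [3, 2, 5, 6],
            [2, 0, 2, 2],
            [9, 8, 7, 6]]) == -40
```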

Proposition. (Determinant of zero matrix and identity matrix) The determinant of the zero matrix is $0$ , and the determinant of the identity matrix is $1$ .

Proof.

• $\det O=0\cdot c_{11}+0\cdot c_{12}+\cdots +0\cdot c_{1n}=0$
• $\det I_{n}=1\cdot c_{11}+0\cdot c_{12}+\cdots +0\cdot c_{1n}=c_{11}=\det I_{n-1}$  (since the submatrix obtained after removing the 1st row and 1st column of $I_{n}$  is $I_{n-1}$ )
• so, inductively, $\det I_{n}=\det I_{n-1}=\cdots =\underbrace {\det I_{1}=1} _{\text{by definition}}$

$\Box$

Indeed, we can compute a determinant by the cofactor expansion along an arbitrary row, as in the following theorem.

Theorem. (Cofactor expansion theorem) Let $A=(a_{ij})$  be an $n\times n$  matrix with cofactors $c_{ij}$ . Then,

$\det A=a_{i1}c_{i1}+a_{i2}c_{i2}+\cdots +a_{in}c_{in}$

and
$\det A=a_{1j}c_{1j}+a_{2j}c_{2j}+\cdots +a_{nj}c_{nj}$

for each positive integer $i,j\leq n$ .

Remark.

• the first formula is cofactor expansion along the $i$ th row, and the second formula is cofactor expansion along the $j$ th column

Its proof (for the general case) is complicated, and thus is skipped.

Example. (Illustration of cofactor expansion theorem)

${\begin{vmatrix}2&{\color {green}0}&2&2\\1&{\color {green}0}&2&3\\3&{\color {green}4}&4&5\\8&{\color {green}0}&7&6\end{vmatrix}}=-4{\begin{vmatrix}2&2&2\\1&2&3\\8&7&6\\\end{vmatrix}}=-4[2(2)(6)+2(3)(8)+2(1)(7)-2(2)(8)-2(1)(6)-3(7)(2)]=0.$

We use cofactor expansion along the 2nd column here.
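The theorem can be spot-checked numerically: expansion along any row of the matrix above gives the same value as the definition. A minimal Python sketch:

```python
def minor(A, i, j):
    # A_{ij}: delete row i and column j (0-indexed in the code).
    return [r[:j] + r[j + 1:] for k, r in enumerate(A) if k != i]

def det(A):
    # Determinant via cofactor expansion along the first row (the definition).
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def expand_along_row(A, i):
    # Cofactor expansion along row i; by the theorem this equals det(A).
    return sum((-1) ** (i + j) * A[i][j] * det(minor(A, i, j))
               for j in range(len(A)))

M = [[2, 0, 2, 2], [1, 0, 2, 3], [3, 4, 4, 5], [8, 0, 7, 6]]
assert all(expand_along_row(M, i) == det(M) == 0 for i in range(4))
```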

Exercise. Let $A={\begin{pmatrix}2&0&0&0\\5&3&0&0\\9&6&4&0\\12&4&8&5\\\end{pmatrix}}$ .

1 Calculate $\det A$ .

 14 60 104 120 150

2 Calculate $\det A^{T}$ .

 14 60 104 120 150

3 Choose correct statement(s).

• $\det A-\det A^{T}=\det(A-A^{T})$ for each matrix $A$
• the determinant of each matrix has only one possible value
• if two matrices have the same determinant, then these two matrices are the same
• the determinant of a submatrix of a matrix $A$ must be smaller than the determinant of $A$

Then, we will discuss several properties of determinants that ease their computation.

Proposition. (Effects on determinant when performing EROs) Let $A$  be a square matrix.

• (type I ERO) If we interchange two rows of $A$ , the determinant is multiplied by $-1$ ;
• (type II ERO) if we multiply a row of $A$  by a nonzero constant $k$ , the determinant is multiplied by $k$ ;
• (type III ERO) if we add a multiple of a row of $A$  to another row, the determinant remains unchanged.

Proof. Outline:

• (type I ERO) e.g.

{\begin{aligned}{\begin{vmatrix}{\color {green}a_{11}}&{\color {green}a_{12}}&{\color {green}a_{13}}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\\\end{vmatrix}}&={\color {green}a_{11}}{\begin{vmatrix}a_{22}&a_{23}\\a_{32}&a_{33}\end{vmatrix}}-{\color {green}a_{12}}{\begin{vmatrix}a_{21}&a_{23}\\a_{31}&a_{33}\end{vmatrix}}+{\color {green}a_{13}}{\begin{vmatrix}a_{21}&a_{22}\\a_{31}&a_{32}\end{vmatrix}}\\{\begin{vmatrix}a_{21}&a_{22}&a_{23}\\{\color {green}a_{11}}&{\color {green}a_{12}}&{\color {green}a_{13}}\\a_{31}&a_{32}&a_{33}\\\end{vmatrix}}&=-{\color {green}a_{11}}{\begin{vmatrix}a_{22}&a_{23}\\a_{32}&a_{33}\end{vmatrix}}+{\color {green}a_{12}}{\begin{vmatrix}a_{21}&a_{23}\\a_{31}&a_{33}\end{vmatrix}}-{\color {green}a_{13}}{\begin{vmatrix}a_{21}&a_{22}\\a_{31}&a_{32}\end{vmatrix}}=-{\begin{vmatrix}{\color {green}a_{11}}&{\color {green}a_{12}}&{\color {green}a_{13}}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\\\end{vmatrix}}\end{aligned}}

• (type II ERO) e.g.

${\begin{vmatrix}{\color {green}ka_{11}}&{\color {green}ka_{12}}&{\color {green}ka_{13}}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\\\end{vmatrix}}={\color {green}ka_{11}}{\begin{vmatrix}a_{22}&a_{23}\\a_{32}&a_{33}\end{vmatrix}}-{\color {green}ka_{12}}{\begin{vmatrix}a_{21}&a_{23}\\a_{31}&a_{33}\end{vmatrix}}+{\color {green}ka_{13}}{\begin{vmatrix}a_{21}&a_{22}\\a_{31}&a_{32}\end{vmatrix}}=k\left({\color {green}a_{11}}{\begin{vmatrix}a_{22}&a_{23}\\a_{32}&a_{33}\end{vmatrix}}-{\color {green}a_{12}}{\begin{vmatrix}a_{21}&a_{23}\\a_{31}&a_{33}\end{vmatrix}}+{\color {green}a_{13}}{\begin{vmatrix}a_{21}&a_{22}\\a_{31}&a_{32}\end{vmatrix}}\right)=k{\begin{vmatrix}{\color {green}a_{11}}&{\color {green}a_{12}}&{\color {green}a_{13}}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\\\end{vmatrix}}$

• (type III ERO) e.g.

{\begin{aligned}{\begin{vmatrix}{\color {green}a_{11}}+{\color {blue}ka_{21}}&{\color {green}a_{12}}+{\color {blue}ka_{22}}&{\color {green}a_{13}}+{\color {blue}ka_{23}}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\\\end{vmatrix}}&=({\color {green}a_{11}}+{\color {blue}ka_{21}}){\begin{vmatrix}a_{22}&a_{23}\\a_{32}&a_{33}\end{vmatrix}}-({\color {green}a_{12}}+{\color {blue}ka_{22}}){\begin{vmatrix}a_{21}&a_{23}\\a_{31}&a_{33}\end{vmatrix}}+({\color {green}a_{13}}+{\color {blue}ka_{23}}){\begin{vmatrix}a_{21}&a_{22}\\a_{31}&a_{32}\end{vmatrix}}\\&=\underbrace {{\color {green}a_{11}}{\begin{vmatrix}a_{22}&a_{23}\\a_{32}&a_{33}\end{vmatrix}}-{\color {green}a_{12}}{\begin{vmatrix}a_{21}&a_{23}\\a_{31}&a_{33}\end{vmatrix}}+{\color {green}a_{13}}{\begin{vmatrix}a_{21}&a_{22}\\a_{31}&a_{32}\end{vmatrix}}} _{\begin{vmatrix}{\color {green}a_{11}}&{\color {green}a_{12}}&{\color {green}a_{13}}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\\\end{vmatrix}}+\underbrace {{\color {blue}ka_{21}}{\begin{vmatrix}a_{22}&a_{23}\\a_{32}&a_{33}\end{vmatrix}}-{\color {blue}ka_{22}}{\begin{vmatrix}a_{21}&a_{23}\\a_{31}&a_{33}\end{vmatrix}}+{\color {blue}ka_{23}}{\begin{vmatrix}a_{21}&a_{22}\\a_{31}&a_{32}\end{vmatrix}}} _{\begin{vmatrix}{\color {blue}ka_{21}}&{\color {blue}ka_{22}}&{\color {blue}ka_{23}}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\\\end{vmatrix}}\\&={\begin{vmatrix}{\color {green}a_{11}}&{\color {green}a_{12}}&{\color {green}a_{13}}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\\\end{vmatrix}}+{\color {blue}k}\underbrace {\begin{vmatrix}{\color {blue}a_{21}}&{\color {blue}a_{22}}&{\color {blue}a_{23}}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\\\end{vmatrix}} _{0}\\&={\begin{vmatrix}{\color {green}a_{11}}&{\color {green}a_{12}}&{\color {green}a_{13}}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\\\end{vmatrix}}\end{aligned}}

$\Box$
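The three effects can be spot-checked numerically on a sample matrix (chosen arbitrarily for illustration):

```python
def det(A):
    # Determinant via cofactor expansion along the first row.
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([r[:j] + r[j + 1:] for r in A[1:]]) for j in range(len(A)))

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]  # arbitrary sample with det(A) != 0
d = det(A)

# Type I: interchanging two rows multiplies the determinant by -1.
assert det([A[1], A[0], A[2]]) == -d
# Type II: multiplying a row by k = 5 multiplies the determinant by 5.
assert det([[5 * x for x in A[0]], A[1], A[2]]) == 5 * d
# Type III: adding 3 * r1 to r2 leaves the determinant unchanged.
assert det([A[0], [3 * a + b for a, b in zip(A[0], A[1])], A[2]]) == d
```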

Remark.

• for the property related to the type II ERO, the formula still holds when $k=0$  (the determinant is multiplied by zero), but multiplying a row by $k=0$  is not a type II ERO
• the determinant of a matrix with two identical rows is zero, as the following corollary (based on the result about the type I ERO) shows
• in view of this proposition, we have some strategies to compute determinant more easily, as follows:
• apply type II EROs to take out common multiples of a row to reduce the numerical value of entries, so that the computation is easier
• apply type III EROs to create more zeros in entries
• apply cofactor expansion along a row or column with many zeros
• apart from the EROs mentioned in the proposition, we can actually also apply elementary column operations (ECOs)
• this is because determinant of matrix transpose equals that of the original matrix (it will be mentioned in the proposition about properties of determinants)
• so, applying elementary column operations is essentially the same as applying EROs, by viewing the operations in different perspectives
• we have similar notations for elementary column operations, with $\mathbf {r}$  (stand for row) replaced by $\mathbf {c}$  (stand for column)

Example. (Vandermonde matrix)

${\begin{vmatrix}1&a&a^{2}\\1&b&b^{2}\\1&c&c^{2}\\\end{vmatrix}}{\overset {-\mathbf {r} _{1}+\mathbf {r} _{2}\to \mathbf {r} _{2}}{\overset {-\mathbf {r} _{1}+\mathbf {r} _{3}\to \mathbf {r} _{3}}{=}}}{\begin{vmatrix}1&a&a^{2}\\0&b-a&b^{2}-a^{2}\\0&c-a&c^{2}-a^{2}\\\end{vmatrix}}={\begin{vmatrix}b-a&(b-a)(b+a)\\c-a&(c-a)(c+a)\end{vmatrix}}=(b-a)(c-a){\begin{vmatrix}1&b+a\\1&c+a\\\end{vmatrix}}=(b-a)(c-a)(c+a-b-a)=(b-a)(c-a)(c-b)$
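As a quick numerical sanity check of the identity above (a NumPy sketch with sample values for $a,b,c$, not part of the text's development):

```python
import numpy as np

# Numerical spot check of the Vandermonde determinant identity
# |1 a a^2; 1 b b^2; 1 c c^2| = (b - a)(c - a)(c - b), with sample values.
a, b, c = 2.0, 3.0, 5.0
V = np.array([[1.0, a, a**2],
              [1.0, b, b**2],
              [1.0, c, c**2]])
lhs = np.linalg.det(V)
rhs = (b - a) * (c - a) * (c - b)
print(lhs, rhs)  # both equal 6 (up to rounding)
```

Here $(b-a)(c-a)(c-b)=(1)(3)(2)=6$, matching the numerically computed determinant.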

Corollary. The determinant of a square matrix with two identical rows is zero.

Proof. Let $A$  be a square matrix with two identical rows. If we interchange the two identical rows in $A$ , the matrix is still the same, but its determinant is multiplied by $-1$ , i.e.

$\det A=-\det A\Leftrightarrow 2\det A=0\Leftrightarrow \det A=0.$

Alternatively, it can be proved directly from the definition of the determinant, by induction.

$\Box$

Exercise.

Calculate ${\begin{vmatrix}1&2&3&4\\2&3&4&1\\3&4&1&2\\4&1&2&3\\\end{vmatrix}}$ . (Hint: apply type III EROs or ECOs multiple times, which do not affect the value of the determinant, to ease the computation)

 10 16 80 160 320

Then, we will introduce a convenient way to determine the invertibility of a matrix. Before introducing the theorem, we prove a lemma.

Lemma. For each elementary matrix $E$  and each matrix $A$  of the same size,

$\det(EA)=(\det E)(\det A).$

Proof.

• (type I: $\mathbf {r} _{i}\leftrightarrow \mathbf {r} _{j}$ ) $\det E=-1$  and $\det(EA)=-\det A=(\det E)(\det A)$  (since we are interchanging rows)
• (type II: $k\mathbf {r} _{i}\to \mathbf {r} _{i}$ ) $\det E=k$  and $\det(EA)=k\det A=(\det E)(\det A)$  (since we are multiplying a row by a nonzero constant $k$ )
• (type III: $k\mathbf {r} _{j}+\mathbf {r} _{i}\to \mathbf {r} _{i}$ ) $\det E=1$  and $\det(EA)=\det A=(\det E)(\det A)$  (since we are adding a multiple of a row to another row)

$\Box$
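The three cases can be checked numerically. The following NumPy sketch builds one elementary matrix of each type (the particular $3\times 3$ matrix and constants are hypothetical choices) and verifies the lemma for each:

```python
import numpy as np

# Spot check of det(EA) = det(E) det(A), one elementary matrix per type,
# applied to a hypothetical 3x3 matrix A.
A = np.array([[1., 2., 3.], [0., 4., 5.], [1., 0., 6.]])
I = np.eye(3)

E1 = I[[1, 0, 2]]              # type I:   r1 <-> r2,       det = -1
E2 = np.diag([1., 7., 1.])     # type II:  7 r2 -> r2,      det = 7
E3 = I.copy(); E3[2, 0] = 4.0  # type III: 4 r1 + r3 -> r3, det = 1

for E in (E1, E2, E3):
    assert np.isclose(np.linalg.det(E @ A),
                      np.linalg.det(E) * np.linalg.det(A))
print("lemma verified for all three types")
```

Note that each elementary matrix is obtained by applying the corresponding ERO to the identity matrix, matching how elementary matrices are defined.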

Theorem. (Determining invertibility by determinant) A square matrix is invertible if and only if its determinant is nonzero.

Proof.

• only if part: by the simplified invertible matrix theorem, a matrix $A$  is invertible if and only if $A$  is a product of elementary matrices. So, if we denote the elementary matrices by $E_{1},\ldots ,E_{k}$ , then

{\begin{aligned}&&A&=E_{1}E_{2}\cdots E_{k}\\&\Rightarrow &\det A&=(\det E_{1})\det(\underbrace {E_{2}E_{3}\cdots E_{k}} _{\text{a matrix}})\quad ({\text{not }}\Leftrightarrow {\text{ since determinant function is many-to-one}})\\&&&=(\det E_{1})\det(E_{2})\det(E_{3}\cdots E_{k})\\&&&=\cdots \\&&&=(\det E_{1})\det(E_{2})\cdots (\det E_{k})\end{aligned}}

Since the determinant of each elementary matrix is nonzero ($-1$  for type I, $k\neq 0$  for type II, and $1$  for type III), it follows that $\det A\neq 0$ .

• if part: Let $A=E_{1}\cdots E_{k}R$  in which $E_{1},\ldots ,E_{k}$  are elementary matrices and $R$  is the RREF of $A$ . This implies that

$\det A=(\det E_{1})\cdots (\det E_{k})(\det R).$

Since $\det A\neq 0$ , we have $\det R\neq 0$ . Thus, $R$  has no zero row (otherwise its determinant would be zero). Since $R$  is in RREF, it follows that $R=I$  (as $R$  is a square matrix, if not all columns contain leading ones, then there is at least one zero row at its bottom, by the definition of RREF). By the simplified invertible matrix theorem, $A$  is invertible.

$\Box$
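As a numerical illustration of this theorem (a sketch reusing the two $2\times 2$ matrices from the earlier examples):

```python
import numpy as np

# Numerical sketch: nonzero determinant <-> invertible.
A = np.array([[1., 2.], [3., 0.]])   # the invertible example, det = -6
B = np.array([[1., 3.], [4., 12.]])  # the non-invertible example, det = 0

A_inv = np.linalg.inv(A)  # succeeds, since det(A) != 0

try:
    np.linalg.inv(B)      # fails, since det(B) = 0
    B_invertible = True
except np.linalg.LinAlgError:
    B_invertible = False
print(B_invertible)  # False
```

NumPy raises `LinAlgError` when asked to invert an exactly singular matrix, which is the computational counterpart of the theorem.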

After introducing this result, we will give some properties of determinants which can ease the computation of determinants.

Proposition. (Properties of determinants) Let $A$  and $B$  be square matrices of the same size. Then, the following hold.

• (multiplicativity) $\det(AB)=(\det A)(\det B)$
• (invariance of determinant after transpose) $\det(A^{T})=\det A$
• (determinant of matrix inverse is inverse of matrix determinant) $\det(A^{-1})=(\det A)^{-1}$

Proof.

• (multiplicativity) let $A=E_{1}\cdots E_{k}R$  in which $E_{1},\ldots ,E_{k}$  are elementary matrices and $R$  is the RREF of $A$ . Then,

$\det(A{\color {green}B})=\det(E_{1}\cdots E_{k}R{\color {green}B})=(\det E_{1})\cdots (\det E_{k})\det(R{\color {green}B}),$

and

$(\det A)(\det {\color {green}B})=(\det E_{1})\cdots (\det E_{k})(\det R)(\det {\color {green}B}).$

• then, it remains to prove that $\det(RB)=(\det R)(\det B)$
• if $R=I$ , then $\det(RB)=\det B=(\det R)(\det B)$
• if $R\neq I$ , then the last row of $R$  is a zero row, so $\det R=0=(\underbrace {\det R} _{0})(\det B)$
• the last row of $RB$  is also a zero row, so $\det(RB)=0=(\det R)(\det B)$
• the result follows
• (invariance of determinant after transpose) we may prove it by induction and the cofactor expansion theorem; e.g. compare expanding ${\begin{vmatrix}{\color {green}1}&{\color {green}2}&{\color {green}3}\\4&5&6\\7&8&9\\\end{vmatrix}}$  along the first row with expanding ${\begin{vmatrix}{\color {green}1}&4&7\\{\color {green}2}&5&8\\{\color {green}3}&6&9\\\end{vmatrix}}$  along the first column
• (determinant of matrix inverse is inverse of matrix determinant) using multiplicativity,

$AA^{-1}=I\implies (\det A)(\det(A^{-1}))=\det I=1\implies \det(A^{-1})=(\det A)^{-1}$

($\det A\neq 0$  since $A$  is invertible)

$\Box$
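The three properties can be checked numerically. This NumPy sketch uses a randomly generated pair of matrices (a random matrix is invertible with probability 1, so the inverse property applies):

```python
import numpy as np

# Spot checks of the three properties of determinants on random matrices.
rng = np.random.default_rng(seed=1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# multiplicativity: det(AB) = det(A) det(B)
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
# invariance under transpose: det(A^T) = det(A)
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))
# determinant of the inverse: det(A^-1) = det(A)^-1
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / np.linalg.det(A))
print("all three properties verified")
```

Multiplicativity also explains why $\det(AB)=\det(BA)$ even though $AB\neq BA$ in general: both equal $(\det A)(\det B)$.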

Example. Let $A={\begin{pmatrix}1&2&3&4\\2&3&4&5\\3&4&5&6\\4&5&6&7\\\end{pmatrix}}$ . Since

$\det A{\overset {-2\mathbf {r} _{2}+\mathbf {r} _{3}+\mathbf {r} _{1}\to \mathbf {r} _{1}}{=}}{\begin{vmatrix}0&0&0&0\\2&3&4&5\\3&4&5&6\\4&5&6&7\\\end{vmatrix}}=0,$

$A$  is non-invertible. By the simplified invertible matrix theorem, we also have the following results:
• the homogeneous SLE $A\mathbf {x} =\mathbf {0}$  has nontrivial solutions, not only the trivial solution
• the RREF of $A$  is not $I$
• $A$  cannot be expressed as a product of elementary matrices

Exercise.

Choose correct statement(s).

 if $A$ and $B$ are non-invertible, then $AB$ is also non-invertible
 if $A$ and $B$ are invertible, then $AB$ is also invertible
 if $A$ and $B$ are non-invertible, then $A+B$ is also non-invertible
 if $A$ and $B$ are invertible, then $A+B$ is also invertible
 $\det(A+B)=\det A+\det B$ for each matrix $A,B$
 $\det(AB)=\det(BA)$ for each matrix $A,B$
 $\det(A^{n})=(\det A)^{n}$ for each matrix $A$ and for each integer $n\geq -1$

Then, we will introduce the adjugate of a matrix, which has a notable result related to the computation of matrix inverses.

Definition. (Adjugate of matrix) Let $A$  be an $n\times n$  matrix. The adjugate of $A$ , denoted by $\operatorname {adj} A$ , is the $n\times n$  matrix whose $(i,j)$ th entry is the cofactor $c_{ji}$ .

Remark.

• it follows that $\operatorname {adj} A$  is the transpose of the cofactor matrix of $A$ , i.e. $\operatorname {adj} A=(c_{ij})^{T}$
• in practice, it is more common to compute the adjugate this way: first build the cofactor matrix, then transpose it

Theorem. (Relationship between adjugate and determinant) Let $A$  be an $n\times n$  matrix. Then,

$A(\operatorname {adj} A)=(\operatorname {adj} A)A=(\det A)I_{n}$

Proof. The proof is complicated, and so is skipped.

$\Box$

Corollary. (Formula of matrix inverse) If $A$  is invertible, its inverse is given by

$A^{-1}={\frac {1}{\det A}}\operatorname {adj} A$

Proof.

$A(\operatorname {adj} A)=(\det A)I_{n}\Leftrightarrow A\left({\frac {1}{\det A}}\operatorname {adj} A\right)=I_{n}\Leftrightarrow A^{-1}={\frac {1}{\det A}}\operatorname {adj} A$

$\Box$

Example. (Formula of $2\times 2$  matrix inverse) Let $A={\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{pmatrix}}$ . Then,

$A^{-1}={\frac {1}{\det A}}\operatorname {adj} A={\frac {1}{a_{11}a_{22}-a_{12}a_{21}}}{\begin{pmatrix}(-1)^{1+1}a_{22}&(-1)^{1+2}a_{21}\\(-1)^{2+1}a_{12}&(-1)^{2+2}a_{11}\end{pmatrix}}^{\color {green}T}={\frac {1}{a_{11}a_{22}-a_{12}a_{21}}}{\begin{pmatrix}a_{22}&-a_{12}\\-a_{21}&a_{11}\end{pmatrix}}.$

That is, we can find the inverse of a $2\times 2$  matrix by interchanging the $(1,1)$ th and $(2,2)$ th entries, multiplying the $(1,2)$ th and $(2,1)$ th entries by $-1$  (without interchanging them), and multiplying the matrix by the reciprocal of its determinant.
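The shortcut above translates directly into code. The helper function `inv2` below is a hypothetical name for this sketch; it implements exactly the swap-negate-divide recipe:

```python
import numpy as np

# 2x2 inverse via the adjugate shortcut (inv2 is a hypothetical helper).
def inv2(M):
    a11, a12 = M[0]
    a21, a22 = M[1]
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("matrix is singular")
    # swap the diagonal, negate the off-diagonal, divide by the determinant
    return np.array([[a22, -a12], [-a21, a11]]) / det

M = np.array([[1., 2.], [3., 0.]])  # the invertible example from earlier
print(inv2(M))  # [[0, 1/3], [1/2, -1/6]]
```

This reproduces the inverse computed in the first example of this chapter.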

Example. (Adjugate of non-invertible matrix) Let $A={\begin{pmatrix}1&2&3\\1&2&3\\3&4&5\\\end{pmatrix}}$ . Then,

$\operatorname {adj} A={\begin{pmatrix}{\begin{vmatrix}2&3\\4&5\\\end{vmatrix}}&-{\begin{vmatrix}1&3\\3&5\\\end{vmatrix}}&{\begin{vmatrix}1&2\\3&4\end{vmatrix}}\\-{\begin{vmatrix}2&3\\4&5\\\end{vmatrix}}&{\begin{vmatrix}1&3\\3&5\\\end{vmatrix}}&-{\begin{vmatrix}1&2\\3&4\end{vmatrix}}\\{\begin{vmatrix}2&3\\2&3\\\end{vmatrix}}&-{\begin{vmatrix}1&3\\1&3\\\end{vmatrix}}&{\begin{vmatrix}1&2\\1&2\end{vmatrix}}\\\end{pmatrix}}^{\color {green}T}={\begin{pmatrix}-2&4&-2\\2&-4&2\\0&0&0\\\end{pmatrix}}^{T}={\begin{pmatrix}-2&2&0\\4&-4&0\\-2&2&0\\\end{pmatrix}}.$
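Even though $A$ is non-invertible, the theorem $A(\operatorname{adj} A)=(\det A)I_{n}$ still holds: since $\det A=0$ (two identical rows), the product must be the zero matrix. A NumPy check of this, using the adjugate just computed:

```python
import numpy as np

# Verify A(adj A) = (det A) I for the non-invertible example above:
# det A = 0 (two identical rows), so the product is the zero matrix.
A = np.array([[1., 2., 3.], [1., 2., 3.], [3., 4., 5.]])
adjA = np.array([[-2., 2., 0.], [4., -4., 0.], [-2., 2., 0.]])

assert np.allclose(A @ adjA, 0.0)
assert np.isclose(np.linalg.det(A), 0.0)
print("A (adj A) is the zero matrix, as predicted")
```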