# Topics in Abstract Algebra/Linear algebra

## The Moore-Penrose inverse

Inverse matrices play a key role in linear algebra, and particularly in computations. However, only square matrices can possibly be invertible. This leads us to introduce the Moore-Penrose inverse of a potentially non-square real- or complex-valued matrix, which satisfies some but not necessarily all of the properties of an inverse matrix.

Definition. Let ${\displaystyle A}$  be an m-by-n matrix over a field ${\displaystyle \mathbb {K} }$  and ${\displaystyle A^{+}}$  be an n-by-m matrix over ${\displaystyle \mathbb {K} }$ , where ${\displaystyle \mathbb {K} }$  is either ${\displaystyle \mathbb {R} }$ , the real numbers, or ${\displaystyle \mathbb {C} }$ , the complex numbers. Recall that ${\displaystyle A^{*}}$  refers to the conjugate transpose of ${\displaystyle A}$ . Then the following four criteria are called the Moore–Penrose conditions for ${\displaystyle A}$ :

1. ${\displaystyle AA^{+}A=A}$ ,
2. ${\displaystyle A^{+}AA^{+}=A^{+}}$ ,
3. ${\displaystyle \left(AA^{+}\right)^{*}=AA^{+}}$ ,
4. ${\displaystyle \left(A^{+}A\right)^{*}=A^{+}A}$ .

We will see below that given a matrix ${\displaystyle A}$ , there exists a unique matrix ${\displaystyle A^{+}}$  that satisfies all four of the Moore–Penrose conditions. They generalise the properties of the usual inverse.

Remark. If ${\displaystyle A}$  is an invertible square matrix, then the ordinary inverse ${\displaystyle A^{-1}}$  satisfies the Moore-Penrose conditions for ${\displaystyle A}$ . Observe also that if ${\displaystyle A^{+}}$  satisfies the Moore-Penrose conditions for ${\displaystyle A}$ , then ${\displaystyle A}$  satisfies the Moore-Penrose conditions for ${\displaystyle A^{+}}$ .
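The four conditions are easy to check numerically. The sketch below (assuming NumPy is available; `np.linalg.pinv` computes the Moore-Penrose inverse) verifies all of them for a non-square matrix, which has no ordinary inverse:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])          # 2-by-3, so not invertible in the usual sense
A_plus = np.linalg.pinv(A)               # 3-by-2 Moore-Penrose inverse

# The four Moore-Penrose conditions:
assert np.allclose(A @ A_plus @ A, A)                    # (1) A A+ A = A
assert np.allclose(A_plus @ A @ A_plus, A_plus)          # (2) A+ A A+ = A+
assert np.allclose((A @ A_plus).conj().T, A @ A_plus)    # (3) (A A+)* = A A+
assert np.allclose((A_plus @ A).conj().T, A_plus @ A)    # (4) (A+ A)* = A+ A
```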

### Basic properties of the Hermitian conjugate

We assemble some basic properties of the conjugate transpose for later use. In the following lemmas, ${\displaystyle A}$ is a matrix with complex elements and ${\displaystyle n}$ columns, and ${\displaystyle B}$ is a matrix with complex elements and ${\displaystyle n}$ rows.

Lemma (1). For any ${\displaystyle \mathbb {K} }$ -matrix ${\displaystyle A}$ , ${\displaystyle A^{*}A=0\Rightarrow A=0}$
Proof. The assumption says that all elements of A*A are zero. Therefore,

${\displaystyle 0=\operatorname {Tr} \left(A^{*}A\right)=\sum _{j=1}^{n}\left(A^{*}A\right)_{jj}=\sum _{j=1}^{n}\sum _{i=1}^{m}\left(A^{*}\right)_{ji}A_{ij}=\sum _{i=1}^{m}\sum _{j=1}^{n}\left|A_{ij}\right|^{2}.}$

Therefore, all ${\displaystyle A_{ij}}$  equal 0 i.e. ${\displaystyle A=0}$ . ${\displaystyle \square }$

Lemma (2). For any ${\displaystyle \mathbb {K} }$ -matrices ${\displaystyle A}$ and ${\displaystyle B}$ as above, ${\displaystyle A^{*}AB=0\Rightarrow AB=0}$
Proof. ${\displaystyle {\begin{aligned}0&=A^{*}AB&\\\Rightarrow 0&=B^{*}A^{*}AB&\\\Rightarrow 0&=(AB)^{*}(AB)&\\\Rightarrow 0&=AB&({\text{by Lemma 1}})\end{aligned}}}$ ${\displaystyle \square }$

Lemma (3). For any ${\displaystyle \mathbb {K} }$ -matrices ${\displaystyle A}$ and ${\displaystyle B}$ as above, ${\displaystyle ABB^{*}=0\Rightarrow AB=0}$
Proof. This is proved in a manner similar to the argument of Lemma 2 (or by simply taking the Hermitian conjugate). ${\displaystyle \square }$

### Existence and uniqueness

We establish existence and uniqueness of the Moore-Penrose inverse for every matrix.

Theorem. If ${\displaystyle A}$  is a ${\displaystyle \mathbb {K} }$ -matrix and ${\displaystyle A_{1}^{+}}$  and ${\displaystyle A_{2}^{+}}$  satisfy the Moore-Penrose conditions for ${\displaystyle A}$ , then ${\displaystyle A_{1}^{+}=A_{2}^{+}}$ .
Proof. Let ${\displaystyle A}$  be a matrix over ${\displaystyle \mathbb {R} }$  or ${\displaystyle \mathbb {C} }$ . Suppose that ${\displaystyle {A_{1}^{+}}}$  and ${\displaystyle {A_{2}^{+}}}$  are Moore–Penrose inverses of ${\displaystyle A}$ . Observe then that

${\displaystyle A{A_{1}^{+}}{\overset {(1)}{{}={}}}(A{A_{2}^{+}}A){A_{1}^{+}}=(A{A_{2}^{+}})(A{A_{1}^{+}}){\overset {(3)}{{}={}}}(A{A_{2}^{+}})^{*}(A{A_{1}^{+}})^{*}={A_{2}^{+}}^{*}(A{A_{1}^{+}}A)^{*}{\overset {(1)}{{}={}}}{A_{2}^{+}}^{*}A^{*}=(A{A_{2}^{+}})^{*}{\overset {(3)}{{}={}}}A{A_{2}^{+}}.}$

Analogously we conclude that ${\displaystyle {A_{1}^{+}}A={A_{2}^{+}}A}$ . The proof is completed by observing that then

${\displaystyle {A_{1}^{+}}{\overset {(2)}{{}={}}}{A_{1}^{+}}A{A_{1}^{+}}={A_{1}^{+}}A{A_{2}^{+}}=A_{2}^{+}A{A_{2}^{+}}{\overset {(2)}{{}={}}}{A_{2}^{+}}.}$  ${\displaystyle \square }$

Theorem. For every ${\displaystyle \mathbb {K} }$ -matrix ${\displaystyle A}$ there is a matrix ${\displaystyle A^{+}}$ satisfying the Moore-Penrose conditions for ${\displaystyle A}$ .
Proof. The proof proceeds in stages.

${\displaystyle A}$  is a 1-by-1 matrix

For any ${\displaystyle x\in \mathbb {K} }$ , we define:

${\displaystyle x^{+}:={\begin{cases}x^{-1},&{\mbox{if }}x\neq 0\\0,&{\mbox{if }}x=0\end{cases}}}$

It is easy to see that ${\displaystyle x^{+}}$  is a pseudoinverse of ${\displaystyle x}$  (interpreted as a 1-by-1 matrix).

${\displaystyle A}$  is a square diagonal matrix

Let ${\displaystyle D}$  be an n-by-n matrix over ${\displaystyle \mathbb {K} }$  with zeros off the diagonal. We define ${\displaystyle D^{+}}$  as an n-by-n matrix over ${\displaystyle \mathbb {K} }$  with ${\displaystyle \left(D^{+}\right)_{ij}:=\left(D_{ij}\right)^{+}}$  as defined above. We write simply ${\displaystyle D_{ij}^{+}}$  for ${\displaystyle \left(D^{+}\right)_{ij}=\left(D_{ij}\right)^{+}}$ .

Notice that ${\displaystyle D^{+}}$  is also a matrix with zeros off the diagonal.

We now show that ${\displaystyle D^{+}}$  is a pseudoinverse of ${\displaystyle D}$ :

1. ${\displaystyle \left(DD^{+}D\right)_{ij}=D_{ij}D_{ij}^{+}D_{ij}=D_{ij}\Rightarrow DD^{+}D=D}$
2. ${\displaystyle \left(D^{+}DD^{+}\right)_{ij}=D_{ij}^{+}D_{ij}D_{ij}^{+}=D_{ij}^{+}\Rightarrow D^{+}DD^{+}=D^{+}}$
3. ${\displaystyle \left(DD^{+}\right)_{ij}^{*}={\overline {\left(DD^{+}\right)_{ji}}}={\overline {D_{ji}D_{ji}^{+}}}=\left(D_{ji}D_{ji}^{+}\right)^{*}=D_{ji}D_{ji}^{+}=D_{ij}D_{ij}^{+}\Rightarrow \left(DD^{+}\right)^{*}=DD^{+}}$
4. ${\displaystyle \left(D^{+}D\right)_{ij}^{*}={\overline {\left(D^{+}D\right)_{ji}}}={\overline {D_{ji}^{+}D_{ji}}}=\left(D_{ji}^{+}D_{ji}\right)^{*}=D_{ji}^{+}D_{ji}=D_{ij}^{+}D_{ij}\Rightarrow \left(D^{+}D\right)^{*}=D^{+}D}$

${\displaystyle A}$  is a general diagonal matrix

Let ${\displaystyle D}$  be an m-by-n matrix over ${\displaystyle \mathbb {K} }$  with zeros off the main diagonal, where m and n are unequal. That is, ${\displaystyle D_{ij}=d_{i}}$  for some ${\displaystyle d_{i}\in \mathbb {K} }$  when ${\displaystyle i=j}$  and ${\displaystyle D_{ij}=0}$  otherwise.

Consider the case where ${\displaystyle n>m}$ . Then we can write ${\displaystyle D=\left[D_{0}\,\,\mathbf {0} _{m\times (n-m)}\right]}$ as a block matrix, where ${\displaystyle D_{0}}$  is a square diagonal m-by-m matrix, and ${\displaystyle \mathbf {0} _{m\times (n-m)}}$  is the m-by-(n-m) zero matrix. We define ${\displaystyle D^{+}\equiv {\begin{bmatrix}D_{0}^{+}\\\mathbf {0} _{(n-m)\times m}\end{bmatrix}}}$  as an n-by-m matrix over ${\displaystyle \mathbb {K} }$ , with ${\displaystyle D_{0}^{+}}$  the pseudoinverse of ${\displaystyle D_{0}}$  defined above, and ${\displaystyle \mathbf {0} _{(n-m)\times m}}$  the (n-m)-by-m zero matrix. We now show that ${\displaystyle D^{+}}$  is a pseudoinverse of ${\displaystyle D}$ :

1. By multiplication of block matrices, ${\displaystyle DD^{+}=D_{0}D_{0}^{+}+\mathbf {0} _{m\times (n-m)}\mathbf {0} _{(n-m)\times m}=D_{0}D_{0}^{+},}$  so by property 1 for square diagonal matrices ${\displaystyle D_{0}}$  proven in the previous section,${\displaystyle DD^{+}D=D_{0}D_{0}^{+}\left[D_{0}\,\,\mathbf {0} _{m\times (n-m)}\right]=\left[D_{0}D_{0}^{+}D_{0}\,\,\mathbf {0} _{m\times (n-m)}\right]=\left[D_{0}\,\,\mathbf {0} _{m\times (n-m)}\right]=D}$ .
2. Similarly, ${\displaystyle D^{+}D={\begin{bmatrix}D_{0}^{+}D_{0}&\mathbf {0} _{m\times (n-m)}\\\mathbf {0} _{(n-m)\times m}&\mathbf {0} _{(n-m)\times (n-m)}\end{bmatrix}}}$ , so ${\displaystyle D^{+}DD^{+}={\begin{bmatrix}D_{0}^{+}D_{0}&\mathbf {0} _{m\times (n-m)}\\\mathbf {0} _{(n-m)\times m}&\mathbf {0} _{(n-m)\times (n-m)}\end{bmatrix}}{\begin{bmatrix}D_{0}^{+}\\\mathbf {0} _{(n-m)\times m}\end{bmatrix}}={\begin{bmatrix}D_{0}^{+}D_{0}D_{0}^{+}\\\mathbf {0} _{(n-m)\times m}\end{bmatrix}}=D^{+}.}$
3. By 1 and property 3 for square diagonal matrices, ${\displaystyle \left(DD^{+}\right)^{*}=\left(D_{0}D_{0}^{+}\right)^{*}=D_{0}D_{0}^{+}=DD^{+}}$ .
4. By 2 and property 4 for square diagonal matrices, ${\displaystyle \left(D^{+}D\right)^{*}={\begin{bmatrix}\left(D_{0}^{+}D_{0}\right)^{*}&\mathbf {0} _{m\times (n-m)}\\\mathbf {0} _{(n-m)\times m}&\mathbf {0} _{(n-m)\times (n-m)}\end{bmatrix}}={\begin{bmatrix}D_{0}^{+}D_{0}&\mathbf {0} _{m\times (n-m)}\\\mathbf {0} _{(n-m)\times m}&\mathbf {0} _{(n-m)\times (n-m)}\end{bmatrix}}=D^{+}D.}$

Existence for ${\displaystyle D}$  such that ${\displaystyle m>n}$  follows by swapping the roles of ${\displaystyle D}$  and ${\displaystyle D^{+}}$  in the ${\displaystyle n>m}$  case and using the fact that ${\displaystyle \left(D^{+}\right)^{+}=D}$ .
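For readers who want to experiment, the rectangular diagonal construction can be mirrored in code. The NumPy sketch below (the helper name `diag_pinv` is ours, not part of the text) builds ${\displaystyle D^{+}}$ by transposing the shape and inverting the nonzero diagonal entries:

```python
import numpy as np

def diag_pinv(D):
    """Pseudoinverse of a (possibly rectangular) diagonal matrix:
    transpose the shape and invert each nonzero diagonal entry."""
    m, n = D.shape
    D_plus = np.zeros((n, m), dtype=D.dtype)
    for i in range(min(m, n)):
        if D[i, i] != 0:
            D_plus[i, i] = 1.0 / D[i, i]
    return D_plus

D = np.zeros((2, 4))
D[0, 0] = 3.0                            # a zero diagonal entry (d_2 = 0) is allowed
D_plus = diag_pinv(D)

assert np.allclose(D @ D_plus @ D, D)
assert np.allclose(D_plus @ D @ D_plus, D_plus)
assert np.allclose(D_plus, np.linalg.pinv(D))   # agrees with NumPy's pseudoinverse
```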

${\displaystyle A}$  is an arbitrary matrix

The singular value decomposition theorem states that there exists a factorization of the form

${\displaystyle A=U\Sigma V^{*}}$

where:

${\displaystyle U}$  is an m-by-m unitary matrix over ${\displaystyle \mathbb {K} }$ .
${\displaystyle \Sigma }$  is an m-by-n matrix over ${\displaystyle \mathbb {K} }$  with nonnegative real numbers on the diagonal and zeros off the diagonal.
${\displaystyle V}$  is an n-by-n unitary matrix over ${\displaystyle \mathbb {K} }$ .[1]

Define ${\displaystyle A^{+}}$  as ${\displaystyle V\Sigma ^{+}U^{*}}$ .

We now show that ${\displaystyle A^{+}}$  is a pseudoinverse of ${\displaystyle A}$ :

1. ${\displaystyle AA^{+}A=U\Sigma V^{*}V\Sigma ^{+}U^{*}U\Sigma V^{*}=U\Sigma \Sigma ^{+}\Sigma V^{*}=U\Sigma V^{*}=A}$
2. ${\displaystyle A^{+}AA^{+}=V\Sigma ^{+}U^{*}U\Sigma V^{*}V\Sigma ^{+}U^{*}=V\Sigma ^{+}\Sigma \Sigma ^{+}U^{*}=V\Sigma ^{+}U^{*}=A^{+}}$
3. ${\displaystyle \left(AA^{+}\right)^{*}=\left(U\Sigma V^{*}V\Sigma ^{+}U^{*}\right)^{*}=\left(U\Sigma \Sigma ^{+}U^{*}\right)^{*}=U\left(\Sigma \Sigma ^{+}\right)^{*}U^{*}=U\left(\Sigma \Sigma ^{+}\right)U^{*}=U\Sigma V^{*}V\Sigma ^{+}U^{*}=AA^{+}}$
4. ${\displaystyle \left(A^{+}A\right)^{*}=\left(V\Sigma ^{+}U^{*}U\Sigma V^{*}\right)^{*}=\left(V\Sigma ^{+}\Sigma V^{*}\right)^{*}=V\left(\Sigma ^{+}\Sigma \right)^{*}V^{*}=V\left(\Sigma ^{+}\Sigma \right)V^{*}=V\Sigma ^{+}U^{*}U\Sigma V^{*}=A^{+}A}$  ${\displaystyle \square }$

This leads us to the natural definition:

Definition (Moore-Penrose inverse). Let ${\displaystyle A}$  be a ${\displaystyle \mathbb {K} }$ -matrix. Then the unique ${\displaystyle \mathbb {K} }$ -matrix satisfying the Moore-Penrose conditions for ${\displaystyle A}$  is called the Moore-Penrose inverse ${\displaystyle A^{+}}$  of ${\displaystyle A}$ .
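The existence proof is constructive: given an SVD ${\displaystyle A=U\Sigma V^{*}}$ , the pseudoinverse is ${\displaystyle V\Sigma ^{+}U^{*}}$ . A NumPy sketch of this construction (with a small tolerance standing in for the exact zero test on singular values):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))

# Full SVD: U is 4-by-4, s holds the singular values, Vh = V* is 6-by-6.
U, s, Vh = np.linalg.svd(A, full_matrices=True)

# Sigma+ is n-by-m with the reciprocal nonzero singular values on the diagonal.
Sigma_plus = np.zeros((6, 4))
Sigma_plus[:len(s), :len(s)] = np.diag(np.where(s > 1e-12, 1.0 / s, 0.0))

A_plus = Vh.conj().T @ Sigma_plus @ U.conj().T   # A+ = V Sigma+ U*
assert np.allclose(A_plus, np.linalg.pinv(A))
```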

### Basic properties

We have already seen above that the Moore-Penrose inverse generalises the classical inverse to potentially non-square matrices. We will now list some basic properties of its interaction with the Hermitian conjugate, leaving most of the proofs as exercises to the reader.

Exercise. For any ${\displaystyle \mathbb {K} }$ -matrix ${\displaystyle A}$ , ${\displaystyle {A^{*}}^{+}={A^{+}}^{*}}$

The following identities hold:

1. ${\displaystyle A^{+}=A^{+}A^{+*}A^{*}}$
2. ${\displaystyle A^{+}=A^{*}A^{+*}A^{+}}$
3. ${\displaystyle A=A^{+*}A^{*}A}$
4. ${\displaystyle A=AA^{*}A^{+*}}$
5. ${\displaystyle A^{*}=A^{*}AA^{+}}$
6. ${\displaystyle A^{*}=A^{+}AA^{*}}$

Proof of the first one: ${\displaystyle A^{+}=A^{+}AA^{+}}$  and ${\displaystyle AA^{+}=\left(AA^{+}\right)^{*}}$  imply that ${\displaystyle A^{+}=A^{+}\left(AA^{+}\right)^{*}=A^{+}A^{+^{*}}A^{*}}$ . □

The remaining identities are left as exercises.
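All six identities can be spot-checked numerically. The sketch below does so for a random complex matrix (NumPy assumed; `Ah` and `Aph` are our shorthand for ${\displaystyle A^{*}}$ and ${\displaystyle A^{+*}}$ ):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))
Ap = np.linalg.pinv(A)
Ah = A.conj().T            # A*
Aph = Ap.conj().T          # A+*

assert np.allclose(Ap, Ap @ Aph @ Ah)      # 1. A+ = A+ A+* A*
assert np.allclose(Ap, Ah @ Aph @ Ap)      # 2. A+ = A* A+* A+
assert np.allclose(A, Aph @ Ah @ A)        # 3. A  = A+* A* A
assert np.allclose(A, A @ Ah @ Aph)        # 4. A  = A A* A+*
assert np.allclose(Ah, Ah @ A @ Ap)        # 5. A* = A* A A+
assert np.allclose(Ah, Ap @ A @ Ah)        # 6. A* = A+ A A*
```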

### Reduction to the Hermitian case

The results of this section show that the computation of the pseudoinverse is reducible to its construction in the Hermitian case. It suffices to show that the putative constructions satisfy the defining criteria.

Proposition. For every ${\displaystyle \mathbb {K} }$ -matrix ${\displaystyle A}$ , ${\displaystyle A^{+}=A^{*}(AA^{*})^{+}}$
Proof. Write ${\displaystyle D=A^{*}\left(AA^{*}\right)^{+}}$ . Observe that

${\displaystyle {\begin{aligned}&&AA^{*}&=AA^{*}\left(AA^{*}\right)^{+}AA^{*}&\\&\Leftrightarrow &AA^{*}&=ADAA^{*}&\\&\Leftrightarrow &0&=(AD-I)AA^{*}&\\&\Leftrightarrow &0&=ADA-A&({\text{by Lemma 3}})\\&\Leftrightarrow &A&=ADA&\end{aligned}}}$

Similarly, ${\displaystyle \left(AA^{*}\right)^{+}AA^{*}\left(AA^{*}\right)^{+}=\left(AA^{*}\right)^{+}}$  implies that ${\displaystyle A^{*}\left(AA^{*}\right)^{+}AA^{*}\left(AA^{*}\right)^{+}=A^{*}\left(AA^{*}\right)^{+}}$  i.e. ${\displaystyle DAD=D}$ .

Additionally, ${\displaystyle AD=AA^{*}\left(AA^{*}\right)^{+}}$ , which is Hermitian by Moore-Penrose condition 3 for ${\displaystyle AA^{*}}$ , so ${\displaystyle AD=(AD)^{*}}$ .

Finally, ${\displaystyle DA=A^{*}\left(AA^{*}\right)^{+}A}$  implies that ${\displaystyle (DA)^{*}=A^{*}\left(\left(AA^{*}\right)^{+}\right)^{*}A=A^{*}\left(\left(AA^{*}\right)^{+}\right)A=DA}$ , where we used that the pseudoinverse of the Hermitian matrix ${\displaystyle AA^{*}}$  is itself Hermitian (by the exercise ${\displaystyle {A^{*}}^{+}={A^{+}}^{*}}$  above).

Therefore, ${\displaystyle D=A^{+}}$ . ${\displaystyle \square }$

Exercise. For every ${\displaystyle \mathbb {K} }$ -matrix ${\displaystyle A}$ , ${\displaystyle A^{+}=(A^{*}A)^{+}A^{*}}$
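Both reduction formulas are easy to test numerically. The following sketch checks them for a random real matrix, where the conjugate transpose is just the transpose:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 6))

left = np.linalg.pinv(A)
right = A.T @ np.linalg.pinv(A @ A.T)     # A+ = A* (A A*)+
assert np.allclose(left, right)

# Dual formula from the exercise: A+ = (A* A)+ A*
assert np.allclose(left, np.linalg.pinv(A.T @ A) @ A.T)
```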

### Products

We now turn to calculating the Moore-Penrose inverse for a product of two matrices, ${\displaystyle C=AB.}$

Proposition. If ${\displaystyle A}$  has orthonormal columns i.e. ${\displaystyle A^{*}A=I}$ , then for any ${\displaystyle \mathbb {K} }$ -matrix ${\displaystyle B}$  of the right dimensions, ${\displaystyle (AB)^{+}=B^{+}A^{+}}$ .
Proof. Since ${\displaystyle A^{*}A=I}$ , one checks directly that ${\displaystyle A^{*}}$  satisfies the four Moore-Penrose conditions for ${\displaystyle A}$ , so ${\displaystyle A^{+}=A^{*}}$ . Write ${\displaystyle C=AB}$  and ${\displaystyle D=B^{+}A^{+}=B^{+}A^{*}}$ . We show that ${\displaystyle D}$  satisfies the Moore–Penrose criteria for ${\displaystyle C}$ .

${\displaystyle {\begin{aligned}CDC&=ABB^{+}A^{*}AB=ABB^{+}B=AB=C,\\[4pt]DCD&=B^{+}A^{*}ABB^{+}A^{*}=B^{+}BB^{+}A^{*}=B^{+}A^{*}=D,\\[4pt](CD)^{*}&=D^{*}B^{*}A^{*}=A\left(B^{+}\right)^{*}B^{*}A^{*}=A\left(BB^{+}\right)^{*}A^{*}=ABB^{+}A^{*}=CD,\\[4pt](DC)^{*}&=B^{*}A^{*}D^{*}=B^{*}A^{*}A\left(B^{+}\right)^{*}=\left(B^{+}B\right)^{*}=B^{+}B=B^{+}A^{*}AB=DC.\end{aligned}}}$

Therefore, ${\displaystyle D=C^{+}}$ . ${\displaystyle \square }$

Exercise. If ${\displaystyle B}$  has orthonormal rows, then for any ${\displaystyle \mathbb {K} }$ -matrix ${\displaystyle A}$  of the right dimensions, ${\displaystyle (AB)^{+}=B^{+}A^{+}}$ .

Another important special case which approximates closely that of invertible matrices is when ${\displaystyle A}$  has full column rank and ${\displaystyle B}$  has full row rank.

Proposition. If ${\displaystyle A}$  has full column rank and ${\displaystyle B}$  has full row rank, then ${\displaystyle (AB)^{+}=B^{+}A^{+}}$ .
Proof. Since ${\displaystyle A}$  has full column rank, ${\displaystyle A^{*}A}$  is invertible so ${\displaystyle \left(A^{*}A\right)^{+}=\left(A^{*}A\right)^{-1}}$ . Similarly, since ${\displaystyle B}$  has full row rank, ${\displaystyle BB^{*}}$  is invertible so ${\displaystyle \left(BB^{*}\right)^{+}=\left(BB^{*}\right)^{-1}}$ .

Write ${\displaystyle D=B^{+}A^{+}=B^{*}\left(BB^{*}\right)^{-1}\left(A^{*}A\right)^{-1}A^{*}}$  (using the reduction to the Hermitian case above). We show that ${\displaystyle D}$  satisfies the Moore–Penrose criteria for ${\displaystyle C=AB}$ .

${\displaystyle {\begin{aligned}CDC&=ABB^{*}\left(BB^{*}\right)^{-1}\left(A^{*}A\right)^{-1}A^{*}AB=AB=C,\\[4pt]DCD&=B^{*}\left(BB^{*}\right)^{-1}\left(A^{*}A\right)^{-1}A^{*}ABB^{*}\left(BB^{*}\right)^{-1}\left(A^{*}A\right)^{-1}A^{*}=B^{*}\left(BB^{*}\right)^{-1}\left(A^{*}A\right)^{-1}A^{*}=D,\\[4pt]CD&=ABB^{*}\left(BB^{*}\right)^{-1}\left(A^{*}A\right)^{-1}A^{*}=A\left(A^{*}A\right)^{-1}A^{*}=\left(A\left(A^{*}A\right)^{-1}A^{*}\right)^{*},\\\Rightarrow (CD)^{*}&=CD,\\[4pt]DC&=B^{*}\left(BB^{*}\right)^{-1}\left(A^{*}A\right)^{-1}A^{*}AB=B^{*}\left(BB^{*}\right)^{-1}B=\left(B^{*}\left(BB^{*}\right)^{-1}B\right)^{*},\\\Rightarrow (DC)^{*}&=DC.\end{aligned}}}$

Therefore, ${\displaystyle D=C^{+}}$ . ${\displaystyle \square }$
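A numerical spot-check of this product rule, using generic random matrices (which have full rank with probability 1):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))   # full column rank (generically)
B = rng.standard_normal((3, 6))   # full row rank (generically)

assert np.linalg.matrix_rank(A) == 3 and np.linalg.matrix_rank(B) == 3
assert np.allclose(np.linalg.pinv(A @ B),
                   np.linalg.pinv(B) @ np.linalg.pinv(A))   # (AB)+ = B+ A+
```

Note that ${\displaystyle (AB)^{+}=B^{+}A^{+}}$ fails for general products; the rank hypotheses matter.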

We finally derive a formula for calculating the Moore-Penrose inverse of ${\displaystyle AA^{*}}$ .

Proposition. For any ${\displaystyle \mathbb {K} }$ -matrix ${\displaystyle A}$ , ${\displaystyle \left(AA^{*}\right)^{+}=A^{+*}A^{+}}$ .
Proof. Take ${\displaystyle B=A^{*}}$  in the product ${\displaystyle C=AB}$ , so that ${\displaystyle C=AA^{*}}$ , and write ${\displaystyle D=A^{+*}A^{+}}$ . We show that indeed ${\displaystyle D}$  satisfies the four Moore–Penrose criteria for ${\displaystyle C}$ .

${\displaystyle {\begin{aligned}CDC&=AA^{*}A^{+*}A^{+}AA^{*}=A\left(A^{+}A\right)^{*}A^{+}AA^{*}=AA^{+}AA^{+}AA^{*}=AA^{+}AA^{*}=AA^{*}=C\\[4pt]DCD&=A^{+*}A^{+}AA^{*}A^{+*}A^{+}=A^{+*}A^{+}A\left(A^{+}A\right)^{*}A^{+}=A^{+*}A^{+}AA^{+}AA^{+}=A^{+*}A^{+}AA^{+}=A^{+*}A^{+}=D\\[4pt](CD)^{*}&=\left(AA^{*}A^{+*}A^{+}\right)^{*}=A^{+*}A^{+}AA^{*}=A^{+*}\left(A^{+}A\right)^{*}A^{*}=A^{+*}A^{*}A^{+*}A^{*}\\&=\left(AA^{+}\right)^{*}\left(AA^{+}\right)^{*}=AA^{+}AA^{+}=A\left(A^{+}A\right)^{*}A^{+}=AA^{*}A^{+*}A^{+}=CD\\[4pt](DC)^{*}&=\left(A^{+*}A^{+}AA^{*}\right)^{*}=AA^{*}A^{+*}A^{+}=A\left(A^{+}A\right)^{*}A^{+}=AA^{+}AA^{+}\\&=\left(AA^{+}\right)^{*}\left(AA^{+}\right)^{*}=A^{+*}A^{*}A^{+*}A^{*}=A^{+*}\left(A^{+}A\right)^{*}A^{*}=A^{+*}A^{+}AA^{*}=DC\end{aligned}}}$

Therefore, ${\displaystyle D=C^{+}}$ . In other words:

${\displaystyle \left(AA^{*}\right)^{+}=A^{+*}A^{+}}$

and, since ${\displaystyle \left(A^{*}\right)^{*}=A}$

${\displaystyle \left(A^{*}A\right)^{+}=A^{+}A^{+*}}$  ${\displaystyle \square }$
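Both closing formulas can be spot-checked in a couple of lines (real matrix, so ${\displaystyle A^{*}}$ is just the transpose):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 2))
Ap = np.linalg.pinv(A)

assert np.allclose(np.linalg.pinv(A @ A.T), Ap.T @ Ap)   # (A A*)+ = A+* A+
assert np.allclose(np.linalg.pinv(A.T @ A), Ap @ Ap.T)   # (A* A)+ = A+ A+*
```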

### Projectors and subspaces

The defining feature of classical inverses is that ${\displaystyle AA^{-1}=A^{-1}A=I.}$  What can we say about ${\displaystyle AA^{+}}$  and ${\displaystyle A^{+}A}$ ?

We can derive some properties easily from the more basic properties above:

Exercise. Let ${\displaystyle A}$  be a ${\displaystyle \mathbb {K} }$ -matrix. Then ${\displaystyle (AA^{+})^{2}=AA^{+}}$ , ${\displaystyle (A^{+}A)^{2}=A^{+}A}$ , ${\displaystyle (AA^{+})^{*}=AA^{+}}$  and ${\displaystyle (A^{+}A)^{*}=A^{+}A}$

We can conclude that ${\displaystyle P=AA^{+}}$  and ${\displaystyle Q=A^{+}A}$  are orthogonal projections.

Proposition. Let ${\displaystyle A}$  be a ${\displaystyle \mathbb {K} }$ -matrix. Then ${\displaystyle P=AA^{+}}$  and ${\displaystyle Q=A^{+}A}$  are orthogonal projections
Proof. Indeed, consider the operator ${\displaystyle P}$ : any vector decomposes as

${\displaystyle x=Px+(I-P)x}$

and for all vectors ${\displaystyle x}$  and ${\displaystyle y}$  satisfying ${\displaystyle Px=x}$  and ${\displaystyle (I-P)y=y}$ , we have

${\displaystyle x^{*}y=(Px)^{*}(I-P)y=x^{*}P^{*}(I-P)y=x^{*}P(I-P)y=x^{*}\left(P-P^{2}\right)y=0}$ , since ${\displaystyle P}$  is Hermitian and idempotent by the exercise above.

It follows that ${\displaystyle PA=AA^{+}A=A}$  and ${\displaystyle A^{+}P=A^{+}AA^{+}=A^{+}}$ . Similarly, ${\displaystyle QA^{+}=A^{+}}$  and ${\displaystyle AQ=A}$ . These relations identify ${\displaystyle P}$  and ${\displaystyle Q}$  as orthogonal projections onto specific subspaces, which we determine explicitly in the next proposition. ${\displaystyle \square }$

We finish our analysis by determining image and kernel of the mappings encoded by the Moore-Penrose inverse.

Proposition. Let ${\displaystyle A}$  be a ${\displaystyle \mathbb {K} }$ -matrix. Then ${\displaystyle \operatorname {Ker} \left(A^{+}\right)=\operatorname {Ker} \left(A^{*}\right)}$  and ${\displaystyle \operatorname {Im} \left(A^{+}\right)=\operatorname {Im} \left(A^{*}\right)}$ .
Proof. If ${\displaystyle y}$  belongs to the range of ${\displaystyle A}$  then for some ${\displaystyle x}$ , ${\displaystyle y=Ax}$  and ${\displaystyle Py=PAx=Ax=y}$ . Conversely, if ${\displaystyle Py=y}$  then ${\displaystyle y=AA^{+}y}$  so that ${\displaystyle y}$  belongs to the range of ${\displaystyle A}$ . It follows that ${\displaystyle P}$  is the orthogonal projector onto the range of ${\displaystyle A}$ . ${\displaystyle I-P}$  is then the orthogonal projector onto the orthogonal complement of the range of ${\displaystyle A}$ , which equals the kernel of ${\displaystyle A^{*}}$ .

A similar argument using the relation ${\displaystyle QA^{*}=A^{*}}$  establishes that ${\displaystyle Q}$  is the orthogonal projector onto the range of ${\displaystyle A^{*}}$  and ${\displaystyle (I-Q)}$  is the orthogonal projector onto the kernel of ${\displaystyle A}$ .

Using the relations ${\displaystyle P\left(A^{+}\right)^{*}=P^{*}\left(A^{+}\right)^{*}=\left(A^{+}P\right)^{*}=\left(A^{+}\right)^{*}}$  and ${\displaystyle P=P^{*}=\left(A^{+}\right)^{*}A^{*}}$  it follows that the range of P equals the range of ${\displaystyle \left(A^{+}\right)^{*}}$ , which in turn implies that the range of ${\displaystyle I-P}$  equals the kernel of ${\displaystyle A^{+}}$ . Similarly ${\displaystyle QA^{+}=A^{+}}$  implies that the range of ${\displaystyle Q}$  equals the range of ${\displaystyle A^{+}}$ . Therefore, we find,

${\displaystyle {\begin{aligned}\operatorname {Ker} \left(A^{+}\right)&=\operatorname {Ker} \left(A^{*}\right).\\\operatorname {Im} \left(A^{+}\right)&=\operatorname {Im} \left(A^{*}\right).\\\end{aligned}}}$  ${\displaystyle \square }$
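These projection properties can be observed concretely. The sketch below builds a deliberately rank-deficient matrix, so that both the range and the kernel are proper subspaces (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))  # rank-2, 5-by-4
Ap = np.linalg.pinv(A)
P, Q = A @ Ap, Ap @ A

# Both are idempotent and Hermitian, hence orthogonal projections.
assert np.allclose(P @ P, P) and np.allclose(P.T, P)
assert np.allclose(Q @ Q, Q) and np.allclose(Q.T, Q)

# P fixes every vector in the range of A ...
x = rng.standard_normal(4)
assert np.allclose(P @ (A @ x), A @ x)

# ... and I - Q maps into the kernel of A.
v = (np.eye(4) - Q) @ rng.standard_normal(4)
assert np.allclose(A @ v, 0)
```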

### Applications

We present two applications of the Moore-Penrose inverse in solving linear systems of equations.

#### Least-squares minimization

Moore-Penrose inverses can be used for least-squares minimisation of a system of equations that might not necessarily have an exact solution.

Proposition. For any ${\displaystyle m\times n}$  matrix ${\displaystyle A}$  and any vectors ${\displaystyle b\in \mathbb {K} ^{m}}$  and ${\displaystyle x\in \mathbb {K} ^{n}}$ , we have ${\displaystyle \|Ax-b\|_{2}\geq \|Az-b\|_{2}}$ , where ${\displaystyle z=A^{+}b}$ .
Proof. We first note that (stating the complex case), using the fact that ${\displaystyle P=AA^{+}}$  satisfies ${\displaystyle PA=A}$  and ${\displaystyle P=P^{*}}$ , we have

${\displaystyle {\begin{alignedat}{2}A^{*}(Az-b)&=A^{*}(AA^{+}b-b)\\&=A^{*}(Pb-b)\\&=A^{*}P^{*}b-A^{*}b\\&=(PA)^{*}b-A^{*}b\\&=0\end{alignedat}}}$

so that (${\displaystyle {\text{c.c.}}}$  stands for the Hermitian conjugate of the previous term in the following)

${\displaystyle {\begin{alignedat}{2}\|Ax-b\|_{2}^{2}&=\|Az-b\|_{2}^{2}+(A(x-z))^{*}(Az-b)+{\text{c.c.}}+\|A(x-z)\|_{2}^{2}\\&=\|Az-b\|_{2}^{2}+(x-z)^{*}A^{*}(Az-b)+{\text{c.c.}}+\|A(x-z)\|_{2}^{2}\\&=\|Az-b\|_{2}^{2}+\|A(x-z)\|_{2}^{2}\\&\geq \|Az-b\|_{2}^{2}\end{alignedat}}}$

as claimed. ${\displaystyle \square }$

Remark. This lower bound need not be zero, as the system ${\displaystyle Ax=b}$  may have no solution at all; this happens precisely when ${\displaystyle b}$  does not lie in the range of ${\displaystyle A}$ , as is typical for overdetermined or rank-deficient systems. If ${\displaystyle A}$  is injective, i.e. one-to-one (which implies ${\displaystyle m\geq n}$ ), then the bound is attained uniquely at ${\displaystyle z}$ .
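A short NumPy illustration of least-squares via the pseudoinverse, compared against `np.linalg.lstsq`, which solves the same minimisation:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((8, 3))          # overdetermined: 8 equations, 3 unknowns
b = rng.standard_normal(8)               # generically not in the range of A

z = np.linalg.pinv(A) @ b                # least-squares solution z = A+ b
x = rng.standard_normal(3)               # any other candidate

# z achieves a residual no larger than that of any other x.
assert np.linalg.norm(A @ z - b) <= np.linalg.norm(A @ x - b) + 1e-12
# Agrees with NumPy's dedicated least-squares solver.
assert np.allclose(z, np.linalg.lstsq(A, b, rcond=None)[0])
```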

#### Minimum-norm solution to a linear system

The proof above also shows that if the system ${\displaystyle Ax=b}$  is satisfiable i.e. has a solution, then necessarily ${\displaystyle z=A^{+}b}$  is a solution (not necessarily unique). We can say more:

Proposition. If the system ${\displaystyle Ax=b}$  is satisfiable, then ${\displaystyle z=A^{+}b}$  is the unique solution with smallest Euclidean norm.
Proof. Note first, with ${\displaystyle Q=A^{+}A}$ , that ${\displaystyle Qz=A^{+}AA^{+}b=A^{+}b=z}$  and that ${\displaystyle Q^{*}=Q}$ . Therefore, assuming that ${\displaystyle Ax=b}$ , we have

${\displaystyle {\begin{aligned}z^{*}(x-z)&=(Qz)^{*}(x-z)\\&=z^{*}Q(x-z)\\&=z^{*}\left(A^{+}Ax-z\right)\\&=z^{*}\left(A^{+}b-z\right)\\&=0.\end{aligned}}}$

Thus

${\displaystyle {\begin{alignedat}{2}\|x\|_{2}^{2}&=\|z\|_{2}^{2}+z^{*}(x-z)+{\text{c.c.}}+\|x-z\|_{2}^{2}\\&=\|z\|_{2}^{2}+\|x-z\|_{2}^{2}\\&\geq \|z\|_{2}^{2}\end{alignedat}}}$

with equality if and only if ${\displaystyle x=z}$ , as was to be shown. ${\displaystyle \square }$

An immediate consequence of this result is that ${\displaystyle z=A^{+}b}$  is also the smallest-norm minimiser of the least-squares problem for arbitrary ${\displaystyle A}$  and ${\displaystyle b}$ , including when ${\displaystyle A}$  is neither injective nor surjective. Indeed, the least-squares approximation ${\displaystyle Az=y\approx b}$  can be shown to be unique, so ${\displaystyle x}$  minimises the least-squares error if and only if ${\displaystyle Ax=y=Az=AA^{+}b}$ . This system always has a solution (not necessarily unique), as ${\displaystyle Az}$  lies in the column space of ${\displaystyle A}$ . By the above result, the smallest solution of this system is ${\displaystyle A^{+}\left(AA^{+}b\right)=A^{+}b=z}$ .
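Finally, a sketch of the minimum-norm property for an underdetermined system; here the kernel basis is read off from the SVD (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((3, 6))          # underdetermined: many exact solutions
b = rng.standard_normal(3)

z = np.linalg.pinv(A) @ b
assert np.allclose(A @ z, b)             # z solves the system exactly

# Any other solution z + v, with v in Ker(A), is at least as long.
null_basis = np.linalg.svd(A)[2][3:].T   # rows of V* beyond the rank span Ker(A)
v = null_basis @ rng.standard_normal(3)
other = z + v
assert np.allclose(A @ other, b)
assert np.linalg.norm(z) <= np.linalg.norm(other) + 1e-12
```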

## Notes

1. Some authors use slightly different dimensions for the factors. The two definitions are equivalent.