Shifted Inverse Power Method
Let
$$\tilde{A} = (A - \sigma I)^{-1}.$$
Then the eigenvalues of $\tilde{A}$ are
$$\tilde{\lambda}_i = \frac{1}{\lambda_i - \sigma},$$
so if the shift $\sigma$ is chosen closer to $\lambda_j$ than to any other eigenvalue of $A$,
$$|\tilde{\lambda}_j| > |\tilde{\lambda}_i| \quad \text{for all } i \neq j.$$
Since shifting the eigenvalues and inverting the matrix do not change the eigenvectors,
$$\tilde{A} v_i = \tilde{\lambda}_i v_i, \qquad i = 1, 2, \ldots, n.$$
Assume $\|v_i\| = 1$ for all $i$. Generate a sequence $w_0, w_1, w_2, \ldots$ to find $v_j$. Start with an arbitrary $w_0$ such that $\|w_0\| = 1$.
For $k = 0, 1, 2, \ldots$
$$\hat{w}_{k+1} = \tilde{A} w_k$$
$$w_{k+1} = \frac{\hat{w}_{k+1}}{\|\hat{w}_{k+1}\|}$$
$$\lambda_j^{(k+1)} = \frac{(\hat{w}_{k+1}, w_k)}{(w_k, w_k)} = \frac{(\tilde{A} w_k, w_k)}{(w_k, w_k)} \quad \text{(Rayleigh quotient)}$$
End

Note that the Rayleigh quotient approximates the dominant eigenvalue $\tilde{\lambda}_j = 1/(\lambda_j - \sigma)$ of $\tilde{A}$; the eigenvalue of $A$ closest to the shift is then recovered as $\lambda_j \approx \sigma + 1/\lambda_j^{(k+1)}$.
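The loop above translates directly into code. The following is a minimal sketch in Python/NumPy; the function name, starting vector, tolerance, and the commented example matrix are illustrative choices, not part of the original statement. Rather than forming $\tilde{A} = (A - \sigma I)^{-1}$ explicitly, each step solves a linear system with $A - \sigma I$, which applies the same operator; in practice one would factor $A - \sigma I$ once and reuse the factorization.

```python
import numpy as np

def shifted_inverse_power(A, sigma, tol=1e-10, max_iter=500):
    """Sketch of the shifted inverse power iteration described above."""
    n = A.shape[0]
    M = A - sigma * np.eye(n)               # apply A_tilde = M^{-1} via solves
    rng = np.random.default_rng(0)
    w = rng.standard_normal(n)
    w /= np.linalg.norm(w)                  # ||w_0|| = 1

    lam_tilde = 0.0
    for _ in range(max_iter):
        w_hat = np.linalg.solve(M, w)       # w_hat_{k+1} = A_tilde w_k
        lam_tilde = w_hat @ w               # Rayleigh quotient (w_k has unit norm)
        w_new = w_hat / np.linalg.norm(w_hat)
        # Stop when the direction of w stops changing (up to sign).
        if np.linalg.norm(w_new - np.sign(w_new @ w) * w) < tol:
            w = w_new
            break
        w = w_new

    # lam_tilde approximates 1/(lambda_j - sigma); recover lambda_j.
    return sigma + 1.0 / lam_tilde, w

# Illustrative usage:
# A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
# lam, v = shifted_inverse_power(A, sigma=2.1)
```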
Convergence of Power Method
If $|\lambda_1| > |\lambda_i|$ for all $i \neq 1$, then $A^k w_0$ will be dominated by $v_1$.
Since $v_1, v_2, \ldots, v_n$ are linearly independent, they form a basis of $\mathbb{R}^n$. Hence,
$$w_0 = \alpha_1 v_1 + \alpha_2 v_2 + \ldots + \alpha_n v_n.$$
From the definition of eigenvectors,
$$\begin{aligned}
A w_0 &= \alpha_1 \lambda_1 v_1 + \alpha_2 \lambda_2 v_2 + \ldots + \alpha_n \lambda_n v_n \\
A^2 w_0 &= \alpha_1 \lambda_1^2 v_1 + \alpha_2 \lambda_2^2 v_2 + \ldots + \alpha_n \lambda_n^2 v_n \\
&\vdots \\
A^k w_0 &= \alpha_1 \lambda_1^k v_1 + \alpha_2 \lambda_2^k v_2 + \ldots + \alpha_n \lambda_n^k v_n
\end{aligned}$$
To find a general form of $w_k$, the approximate eigenvector at the $k$th step, examine a few steps of the algorithm:
$$\begin{aligned}
\hat{w}_1 &= A w_0 \\
w_1 &= \frac{\hat{w}_1}{\|\hat{w}_1\|} = \frac{A w_0}{\|A w_0\|} \\
\hat{w}_2 &= A w_1 = \frac{A^2 w_0}{\|A w_0\|} \\
w_2 &= \frac{\hat{w}_2}{\|\hat{w}_2\|} = \frac{A^2 w_0 / \|A w_0\|}{\|A^2 w_0\| / \|A w_0\|} = \frac{A^2 w_0}{\|A^2 w_0\|} \\
\hat{w}_3 &= \frac{A^3 w_0}{\|A^2 w_0\|} \\
w_3 &= \frac{A^3 w_0 / \|A^2 w_0\|}{\|A^3 w_0\| / \|A^2 w_0\|} = \frac{A^3 w_0}{\|A^3 w_0\|}
\end{aligned}$$
By induction,
$$w_k = \frac{A^k w_0}{\|A^k w_0\|}.$$
Hence,
$$\begin{aligned}
w_k &= \frac{A^k w_0}{\|A^k w_0\|} \\
&= \frac{\alpha_1 \lambda_1^k v_1 + \alpha_2 \lambda_2^k v_2 + \ldots + \alpha_n \lambda_n^k v_n}{\|A^k w_0\|} \\
&= \frac{\lambda_1^k \left(\alpha_1 v_1 + \alpha_2 \left(\frac{\lambda_2}{\lambda_1}\right)^k v_2 + \ldots + \alpha_n \left(\frac{\lambda_n}{\lambda_1}\right)^k v_n\right)}{\|A^k w_0\|}
\end{aligned}$$
Comparing a weighted $w_k$ and $v_1$,
$$\begin{aligned}
\left\| \frac{\|A^k w_0\|}{\lambda_1^k} w_k - \alpha_1 v_1 \right\|
&= \left\| \alpha_2 \left(\frac{\lambda_2}{\lambda_1}\right)^k v_2 + \ldots + \alpha_n \left(\frac{\lambda_n}{\lambda_1}\right)^k v_n \right\| \\
&\leq |\alpha_2| \left|\frac{\lambda_2}{\lambda_1}\right|^k + \ldots + |\alpha_n| \left|\frac{\lambda_n}{\lambda_1}\right|^k
\end{aligned}$$
since $\|v_i\| = 1$ by assumption.
The right-hand side goes to $0$ as $k \rightarrow \infty$ since $|\lambda_1| > |\lambda_i|$ for all $i \neq 1$. Hence, provided $\alpha_1 \neq 0$, as $k$ grows $w_k$ becomes parallel to $v_1$. Because $\|w_k\| = 1$, it must be that $w_k \rightarrow \pm v_1$.
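The behaviour suggested by this bound can be observed numerically. Below is a small illustrative check (the matrix and starting vector are made up for the example): plain power iteration on a symmetric matrix, printing how far $w_k$ is from $\pm v_1$ at each step.

```python
import numpy as np

# Small symmetric test matrix with a well-separated dominant eigenvalue.
A = np.array([[5.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 1.0]])

# Reference dominant eigenvector v1 from a direct eigendecomposition.
vals, vecs = np.linalg.eigh(A)
v1 = vecs[:, np.argmax(np.abs(vals))]

w = np.array([1.0, 1.0, 1.0])
w /= np.linalg.norm(w)                      # ||w_0|| = 1

for k in range(1, 21):
    w_hat = A @ w                           # w_hat_{k+1} = A w_k
    w = w_hat / np.linalg.norm(w_hat)       # w_{k+1} = A^k w_0 / ||A^k w_0||
    # Distance of w_k from +/- v1 (sign-insensitive).
    err = np.linalg.norm(w - np.sign(w @ v1) * v1)
    print(f"k = {k:2d}   distance from ±v1 = {err:.2e}")
```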
Derivation of Iterations
Let
$$A = D + L + U$$
where $D$ is a diagonal matrix, $L$ is a lower triangular matrix with a zero diagonal, and $U$ is an upper triangular matrix also with a zero diagonal.
The Jacobi iteration can be found by substituting into $Ax = b$, grouping $L + U$, and solving for $x$, i.e.
$$\begin{aligned}
Ax &= b \\
(D + L + U)x &= b \\
Dx + (L + U)x &= b \\
Dx &= b - (L + U)x \\
x &= D^{-1}(b - (L + U)x)
\end{aligned}$$
Since $A$ is upper triangular by hypothesis, $L = 0$ and the iteration is
$$x^{(i+1)} = D^{-1}(b - U x^{(i)}).$$
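As a concrete illustration of the derivation above, here is a short Jacobi sketch in Python/NumPy (the function name and iteration count are illustrative). It implements the general update $x^{(i+1)} = D^{-1}(b - (L+U)x^{(i)})$; when $A$ is upper triangular, $L + U$ reduces to $U$ and this is exactly the formula just given.

```python
import numpy as np

def jacobi(A, b, x0, num_iters=50):
    """Sketch of the Jacobi iteration x^(i+1) = D^{-1}(b - (L+U) x^(i))."""
    D = np.diag(np.diag(A))          # diagonal part of A
    LU = A - D                       # L + U (everything off the diagonal)
    D_inv = np.diag(1.0 / np.diag(A))
    x = x0.copy()
    for _ in range(num_iters):
        x = D_inv @ (b - LU @ x)
    return x

# When A is upper triangular, L = 0 and LU above is just U,
# so this is exactly x^(i+1) = D^{-1}(b - U x^(i)).
```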
Similarly, the Gauss-Seidel iteration can be found by substituting into $Ax = b$, grouping $D + L$, and solving for $x$, i.e.
$$\begin{aligned}
Ax &= b \\
(D + L + U)x &= b \\
(D + L)x + Ux &= b \\
(D + L)x &= b - Ux \\
x &= (D + L)^{-1}(b - Ux)
\end{aligned}$$
Since $L = 0$, the iteration has the same form as the Jacobi iteration:
$$x^{(i+1)} = D^{-1}(b - U x^{(i)}).$$
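The analogous Gauss-Seidel sketch is below (again with illustrative names); when $A$ is upper triangular, $D + L = D$, so it produces the same iterates as the Jacobi sketch above.

```python
import numpy as np

def gauss_seidel(A, b, x0, num_iters=50):
    """Sketch of the Gauss-Seidel iteration x^(i+1) = (D+L)^{-1}(b - U x^(i))."""
    DL = np.tril(A)                  # D + L (lower triangle, diagonal included)
    U = np.triu(A, k=1)              # strictly upper triangular part
    x = x0.copy()
    for _ in range(num_iters):
        # Solve (D + L) x^(i+1) = b - U x^(i); in practice, forward substitution.
        x = np.linalg.solve(DL, b - U @ x)
    return x
```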
Convergence in Finite Number of Steps
Jacobi and Gauss-Seidel are iterative methods that split the matrix $A \in \mathbb{R}^{n \times n}$ into $D$, $U$, and $L$: the diagonal, upper (everything above the diagonal), and lower (everything below the diagonal) triangular parts, respectively. Their iterations are
$$x^{(i+1)} = (D + L)^{-1}(b - U x^{(i)}) \quad \text{(Gauss-Seidel)}$$
$$x^{(i+1)} = D^{-1}(b - (U + L) x^{(i)}) \quad \text{(Jacobi)}$$
In our case $A$ is upper triangular, so $L$ is the zero matrix. As a result, the Gauss-Seidel and Jacobi methods take on the following identical form:
$$x^{(i+1)} = D^{-1}(b - U x^{(i)})$$
Additionally, the exact solution $x$ satisfies
$$x = D^{-1}(b - U x).$$
Subtracting $x$ from $x^{(i+1)}$ we get
$$\begin{aligned}
e^{(i+1)} &= x^{(i+1)} - x \\
&= D^{-1}(b - U x^{(i)}) - D^{-1}(b - U x) \\
&= -D^{-1} U (x^{(i)} - x) \\
&= -D^{-1} U e^{(i)} \\
&= -D^{-1} U (-D^{-1} U e^{(i-1)}) = (D^{-1} U)^2 e^{(i-1)} \\
&\;\;\vdots \\
&= (-1)^{i+1} (D^{-1} U)^{i+1} e^{(0)}
\end{aligned}$$
In our problem, $D^{-1}$ is diagonal and $U$ is upper triangular with zeros along the diagonal. Notice that the product $D^{-1}U$ will also be upper triangular with zeros along the diagonal.
Let $R = D^{-1}U$, $R \in \mathbb{R}^{n \times n}$:
$$R = \begin{pmatrix}
0 & * & * & \cdots & * \\
0 & 0 & * & \cdots & * \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & \cdots & \cdots & 0 & * \\
0 & \cdots & \cdots & 0 & 0
\end{pmatrix}$$
Also, let $\tilde{R} \in \mathbb{R}^{n \times n}$ be the related matrix
$$\begin{aligned}
\tilde{R} &= \left(\begin{array}{ccc|cccc}
0 & \cdots & 0 & * & * & \cdots & * \\
0 & \cdots & 0 & 0 & * & \cdots & * \\
\vdots & \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\
0 & \cdots & 0 & 0 & \cdots & 0 & * \\
\hline
0 & \cdots & 0 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \cdots & \vdots & \vdots \\
0 & \cdots & 0 & 0 & \cdots & 0 & 0
\end{array}\right) \\
&= \left(\begin{array}{c|c}
\mathbf{a} & T \\
\hline
\mathbf{0} & \mathbf{a}^T
\end{array}\right)
\end{aligned}$$
where $\mathbf{a}$ is $k \times (n-k)$, $T$ is $k \times k$, and $\mathbf{0}$ is $(n-k) \times (n-k)$; the blocks $\mathbf{a}$, $\mathbf{a}^T$, and $\mathbf{0}$ consist entirely of zeros, while $T$ is upper triangular.
Finally, the product $R\tilde{R}$ (call it (1)) is
$$\begin{aligned}
R\tilde{R} &= \begin{pmatrix}
0 & * & * & \cdots & * \\
0 & 0 & * & \cdots & * \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & \cdots & \cdots & 0 & * \\
0 & \cdots & \cdots & 0 & 0
\end{pmatrix}
\left(\begin{array}{ccc|cccc}
0 & \cdots & 0 & * & * & \cdots & * \\
0 & \cdots & 0 & 0 & * & \cdots & * \\
\vdots & \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\
0 & \cdots & 0 & 0 & \cdots & 0 & * \\
\hline
0 & \cdots & 0 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \cdots & \vdots & \vdots \\
0 & \cdots & 0 & 0 & \cdots & 0 & 0
\end{array}\right) \\
&= \left(\begin{array}{ccc|ccccc}
0 & \cdots & 0 & 0 & * & \cdots & \cdots & * \\
0 & \cdots & 0 & 0 & 0 & * & \cdots & * \\
\vdots & \vdots & \vdots & \vdots & \cdots & \cdots & \ddots & \vdots \\
0 & \cdots & 0 & 0 & \cdots & \cdots & 0 & * \\
\hline
0 & \cdots & 0 & 0 & \cdots & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \cdots & \cdots & \vdots & \vdots \\
0 & \cdots & 0 & 0 & \cdots & \cdots & 0 & 0
\end{array}\right) \\
&= \left(\begin{array}{c|c}
\mathbf{a} & \tilde{T} \\
\hline
\mathbf{0} & \mathbf{a}^T
\end{array}\right)
\end{aligned}$$
Here $\tilde{T}$ is almost identical in structure to $T$, except that its diagonal elements are zeros.
At this point the convergence in $n$ steps (the size of the starting matrix) should be apparent: $R$ is just $\tilde{R}$ with $k = 0$, and each additional multiplication by $R$ zeroes out the next super-diagonal (where $k = 0$ denotes the diagonal itself). After $n - 1$ such multiplications, i.e. once $R^n$ has been formed, the result is the zero matrix of size $n$.
In brief, $R^n$ is the zero matrix of size $n$.
Therefore
$$e^{(n)} = (-1)^n (D^{-1}U)^n e^{(0)} \equiv 0,$$
i.e. the Jacobi and Gauss-Seidel methods used to solve $Ax = b$ converge in at most $n$ steps when $A \in \mathbb{R}^{n \times n}$ is upper triangular.
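As a quick numerical check of this conclusion (the $4 \times 4$ system below is made up for the example), the common Jacobi/Gauss-Seidel update is applied to an upper triangular system; the error reaches machine precision after $n = 4$ steps.

```python
import numpy as np

# Hypothetical 4x4 upper triangular system Ax = b.
A = np.array([[2.0, 1.0, -1.0,  3.0],
              [0.0, 3.0,  2.0, -1.0],
              [0.0, 0.0,  4.0,  1.0],
              [0.0, 0.0,  0.0,  5.0]])
b = np.array([1.0, 2.0, 3.0, 4.0])
x_exact = np.linalg.solve(A, b)

D_inv = np.diag(1.0 / np.diag(A))
U = np.triu(A, k=1)                 # L = 0 since A is upper triangular

x = np.zeros(4)                     # x^(0)
for i in range(1, 5):
    x = D_inv @ (b - U @ x)         # x^(i+1) = D^{-1}(b - U x^(i))
    print(f"i = {i}   ||e^(i)|| = {np.linalg.norm(x - x_exact):.3e}")
# The error is zero (to machine precision) once i = n = 4.
```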
For $n = 3$:
$$R = \begin{pmatrix} 0 & * & * \\ 0 & 0 & * \\ 0 & 0 & 0 \end{pmatrix}$$
$$R^2 = \begin{pmatrix} 0 & * & * \\ 0 & 0 & * \\ 0 & 0 & 0 \end{pmatrix}
\begin{pmatrix} 0 & * & * \\ 0 & 0 & * \\ 0 & 0 & 0 \end{pmatrix}
= \begin{pmatrix} 0 & 0 & * \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
$$R^3 = \begin{pmatrix} 0 & * & * \\ 0 & 0 & * \\ 0 & 0 & 0 \end{pmatrix}
\begin{pmatrix} 0 & 0 & * \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
= \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
For $n = 4$:
$$R = \begin{pmatrix} 0 & * & * & * \\ 0 & 0 & * & * \\ 0 & 0 & 0 & * \\ 0 & 0 & 0 & 0 \end{pmatrix}$$
$$R^2 = \begin{pmatrix} 0 & * & * & * \\ 0 & 0 & * & * \\ 0 & 0 & 0 & * \\ 0 & 0 & 0 & 0 \end{pmatrix}
\begin{pmatrix} 0 & * & * & * \\ 0 & 0 & * & * \\ 0 & 0 & 0 & * \\ 0 & 0 & 0 & 0 \end{pmatrix}
= \begin{pmatrix} 0 & 0 & * & * \\ 0 & 0 & 0 & * \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$
$$R^3 = \begin{pmatrix} 0 & * & * & * \\ 0 & 0 & * & * \\ 0 & 0 & 0 & * \\ 0 & 0 & 0 & 0 \end{pmatrix}
\begin{pmatrix} 0 & 0 & * & * \\ 0 & 0 & 0 & * \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}
= \begin{pmatrix} 0 & 0 & 0 & * \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$
$$R^4 = \begin{pmatrix} 0 & * & * & * \\ 0 & 0 & * & * \\ 0 & 0 & 0 & * \\ 0 & 0 & 0 & 0 \end{pmatrix}
\begin{pmatrix} 0 & 0 & 0 & * \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}
= \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$
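The $n = 3$ and $n = 4$ patterns above are easy to confirm numerically; the sketch below (with arbitrary made-up entries in the starred positions) shows the zero band moving up one super-diagonal per power, with $R^4 = 0$.

```python
import numpy as np

# Strictly upper triangular 4x4 matrix; the nonzero entries are arbitrary.
R = np.triu(np.arange(1.0, 17.0).reshape(4, 4), k=1)

for p in range(2, 5):
    Rp = np.linalg.matrix_power(R, p)
    print(f"R^{p} =\n{Rp}")
# R^2 keeps only the 2nd and 3rd super-diagonals, R^3 only the corner entry,
# and R^4 is the zero matrix.
```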