Definition
Lemma 1.1
The set of solutions to an LMI is convex. That is, the set $\mathcal{S} = \{x \in \mathbb{R}^m \mid F(x) \leq 0\}$ is a convex set, where $F : \mathbb{R}^m \longrightarrow \mathbb{S}^n$ is an LMI.
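The convexity in Lemma 1.1 can be illustrated numerically: since $F$ is affine in $x$, any convex combination of two feasible points is again feasible. The following sketch uses arbitrary symmetric example matrices (not from the text) and checks feasibility via eigenvalues; it is an illustration, not a proof.

```python
import numpy as np

# Illustration of Lemma 1.1: for F(x) = F0 + x1*F1 + x2*F2 <= 0,
# convex combinations of feasible points remain feasible.
# F0, F1, F2 are arbitrary symmetric matrices chosen for illustration.
rng = np.random.default_rng(0)

def sym(n):
    A = rng.standard_normal((n, n))
    return (A + A.T) / 2

n = 4
F0 = -np.eye(n)              # makes x = 0 strictly feasible
F1, F2 = sym(n), sym(n)

def F(x):
    return F0 + x[0] * F1 + x[1] * F2

def feasible(x, tol=1e-9):
    # F(x) <= 0 (negative semi-definite) iff its largest eigenvalue <= 0
    return np.max(np.linalg.eigvalsh(F(x))) <= tol

def feasible_point():
    # scale a random direction toward the strictly feasible origin
    x = rng.standard_normal(2)
    while not feasible(x):
        x *= 0.5
    return x

xa, xb = feasible_point(), feasible_point()
for t in np.linspace(0.0, 1.0, 11):
    assert feasible(t * xa + (1 - t) * xb)
print("all convex combinations feasible")
```

The key step is that $F(t x_a + (1-t) x_b) = t F(x_a) + (1-t) F(x_b)$, and a convex combination of negative semi-definite matrices is negative semi-definite.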
Lemma 1.2
Convexity of LMI
From Lemma 1.1, it is known that an optimization problem with a convex objective function and LMI constraints is convex.
The following is a non-exhaustive list of scalar convex objective functions involving matrix variables that can be minimized in conjunction with LMI constraints to yield a semi-definite programming (SDP) problem.
$\mathcal{J}(x) = \frac{1}{2}x^T P x + q^T x + r$, where $x, q \in \mathbb{R}^n$, $P \in \mathbb{S}^n$, $P > 0$, and $r \in \mathbb{R}$.
Special case when $q = 0$ and $r = 0$: $\mathcal{J}(x) = \frac{1}{2}x^T P x$, where $x \in \mathbb{R}^n$, $P \in \mathbb{S}^n$, and $P > 0$.
Special case when $P = 2 \cdot 1$, $q = 0$, and $r = 0$: $\mathcal{J}(x) = x^T x = \|x\|_2^2$, where $x \in \mathbb{R}^n$.
$\mathcal{J}(X) = \mathrm{tr}(X^T P X + Q^T X + X^T R + S)$, where $X, Q, R \in \mathbb{R}^{n \times m}$, $P \in \mathbb{S}^n$, $S \in \mathbb{R}^{n \times n}$, and $P \geq 0$.
Special case when $Q = R = 0$ and $S = 0$: $\mathcal{J}(X) = \mathrm{tr}(X^T P X)$, where $X \in \mathbb{R}^{n \times m}$, $P \in \mathbb{S}^n$, and $P \geq 0$.
Special case when $P = 1$, $Q = R = 0$, and $S = 0$: $\mathcal{J}(X) = \mathrm{tr}(X^T X) = \|X\|_F^2$, where $X \in \mathbb{R}^{n \times m}$.
Special case when $P = 0$, $R = 0$, and $S = 0$: $\mathcal{J}(X) = \mathrm{tr}(Q^T X)$, where $X, Q \in \mathbb{R}^{n \times m}$.
Special case when $P = 1$, $Q = R = 0$, $S = 0$, and $X \in \mathbb{S}^n$: $\mathcal{J}(X) = \mathrm{tr}(X^2)$, where $X \in \mathbb{S}^n$.
$\mathcal{J}(X) = \log(\det(X^{-1})) = -\log(\det(X))$, where $X \in \mathbb{S}^n$ and $X > 0$.
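Two of the identities in the list above can be checked numerically. The sketch below uses arbitrary example matrices (my own choices, not from the text): it verifies that $\mathrm{tr}(X^T X)$ equals the squared Frobenius norm, and that $\log(\det(X^{-1})) = -\log(\det(X))$ for a symmetric positive-definite $X$.

```python
import numpy as np

# Arbitrary example matrices for illustration.
rng = np.random.default_rng(1)
n, m = 3, 2
X = rng.standard_normal((n, m))

# tr(X^T X) equals the squared Frobenius norm of X.
assert np.isclose(np.trace(X.T @ X), np.linalg.norm(X, "fro") ** 2)

# For symmetric positive definite X, log det(X^{-1}) = -log det(X).
A = rng.standard_normal((n, n))
Xpd = A @ A.T + n * np.eye(n)        # symmetric, positive definite
_, logdet_inv = np.linalg.slogdet(np.linalg.inv(Xpd))
_, logdet = np.linalg.slogdet(Xpd)
assert np.isclose(logdet_inv, -logdet)
print("identities verified")
```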
Relative Definiteness of Matrices
The definiteness of a matrix can be found relative to another matrix.
For example, consider the matrices $A \in \mathbb{S}^n$ and $B \in \mathbb{S}^n$. The matrix inequality $A < B$ is equivalent to $A - B < 0$, or equivalently $B - A > 0$.
Knowing the relative definiteness of matrices can be useful. For example, if in the previous example we have $A < B$ and also know that $A > 0$, then we know that $B > 0$. This follows from $0 < A < B$.
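Relative definiteness can be checked numerically through the eigenvalues of the difference matrix. The sketch below uses small arbitrary example matrices (my own choices) to confirm that $A < B$ holds, that $A > 0$, and that $B > 0$ follows.

```python
import numpy as np

# Arbitrary symmetric example matrices for illustration.
A = np.array([[1.0, 0.2],
              [0.2, 1.0]])
B = np.array([[3.0, 0.0],
              [0.0, 4.0]])

def is_pd(M, tol=1e-12):
    # M > 0 iff the smallest eigenvalue of M is positive
    return np.min(np.linalg.eigvalsh(M)) > tol

assert is_pd(B - A)          # A < B, i.e. B - A > 0
assert is_pd(A)              # A > 0
assert is_pd(B)              # hence B > 0, since 0 < A < B
print("relative definiteness confirmed")
```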
Strict and Non-strict Matrix Inequalities
A strict matrix inequality can be converted to a non-strict matrix inequality. For example, $A > 0$ is implied by $A \geq \epsilon 1$, where $\epsilon \in \mathbb{R}_{>0}$. Similarly, $B < 0$ is implied by $B \leq -\epsilon 1$, where $\epsilon \in \mathbb{R}_{>0}$. Converting a strict matrix inequality into a non-strict matrix inequality is useful when working with LMI solvers that cannot handle strict constraints.
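The implication $A \geq \epsilon 1 \Rightarrow A > 0$ amounts to checking that the smallest eigenvalue of $A$ is at least $\epsilon > 0$. A minimal sketch, with an arbitrary example matrix and margin $\epsilon$ of my own choosing:

```python
import numpy as np

# Arbitrary margin and symmetric example matrix for illustration.
eps = 1e-6
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# A >= eps*1 holds iff the smallest eigenvalue of A is at least eps.
lam_min = np.min(np.linalg.eigvalsh(A))
assert lam_min >= eps        # non-strict constraint A >= eps*1
assert lam_min > 0           # therefore the strict A > 0 also holds
print("strict inequality recovered from non-strict one")
```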
Concatenation of LMIs
A useful property of LMIs is that multiple LMIs can be concatenated together to form a single LMI. For example, satisfying the LMIs $A < 0$ and $B < 0$ is equivalent to satisfying the concatenated LMI

$\begin{bmatrix} A & 0 \\ 0 & B \end{bmatrix} < 0.$

More generally, satisfying the LMIs $A_i < 0$, $i = 1, \ldots, n$, is equivalent to satisfying the concatenated LMI $\mathrm{diag}\{A_1, \ldots, A_n\} < 0$.
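The equivalence holds because the eigenvalues of a block-diagonal matrix are the eigenvalues of its blocks taken together, so the block matrix is negative definite exactly when every block is. A small numerical check, using arbitrary example matrices of my own choosing:

```python
import numpy as np

# Arbitrary negative definite example blocks for illustration.
A = np.array([[-2.0, 0.3],
              [0.3, -1.0]])
B = np.array([[-4.0]])

# Build diag{A, B} as a block matrix.
Z = np.zeros((A.shape[0], B.shape[1]))
M = np.block([[A, Z],
              [Z.T, B]])

def is_nd(X):
    # X < 0 iff the largest eigenvalue of X is negative
    return np.max(np.linalg.eigvalsh(X)) < 0

assert is_nd(A) and is_nd(B)
assert is_nd(M)              # the concatenated LMI holds as well
print("concatenated LMI satisfied")
```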
External Links