Definition-1

A Matrix Inequality, $G:\mathbb{R}^m \to \mathbb{S}^n$, in the variable $x \in \mathbb{R}^m$, is an expression of the form

$$G(x) = G_0 + \sum_{i=1}^{p} f_i(x) G_i \leq 0,$$

where $x^T = [x_1 \cdots x_m]$, $G_0 \in \mathbb{S}^n$, and $G_i \in \mathbb{R}^{n \times n}$, $i = 1, \ldots, p$.
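Whether a matrix inequality holds at a given $x$ can be checked numerically: $G(x) \leq 0$ in the semidefinite sense exactly when the largest eigenvalue of the symmetric matrix $G(x)$ is nonpositive. The following sketch illustrates this; the particular matrices $G_0, G_1, G_2$ and the (possibly nonlinear) functions $f_i$ are illustrative choices, not taken from the text.

```python
import numpy as np

# Illustrative data for G(x) = G0 + f1(x) G1 + f2(x) G2 with n = 2, p = 2.
G0 = np.array([[-2.0, 0.0],
               [0.0, -2.0]])
G1 = np.array([[1.0, 0.0],
               [0.0, 0.0]])
G2 = np.array([[0.0, 1.0],
               [1.0, 0.0]])
fs = [lambda x: x[0], lambda x: x[0] * x[1]]  # possibly nonlinear f_i

def G(x):
    return G0 + fs[0](x) * G1 + fs[1](x) * G2

x = np.array([1.0, 0.5])
# G(x) <= 0 holds iff the largest eigenvalue of G(x) is <= 0.
print(np.linalg.eigvalsh(G(x)).max() <= 0)  # → True at this x
```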
Linear Matrix Inequality

Definition-2

A Linear Matrix Inequality (LMI), $F:\mathbb{R}^m \to \mathbb{S}^n$, in the variable $x \in \mathbb{R}^m$, is an expression of the form

$$F(x) = F_0 + \sum_{i=1}^{m} x_i F_i \leq 0,$$

where $x^T = [x_1 \cdots x_m]$ and $F_i \in \mathbb{S}^n$, $i = 0, \ldots, m$. An LMI is the special case of Definition-1 in which each coefficient function is linear, $f_i(x) = x_i$.

Bilinear Matrix Inequality
Definition-3

A Bilinear Matrix Inequality (BMI), $H:\mathbb{R}^m \to \mathbb{S}^n$, in the variable $x \in \mathbb{R}^m$, is an expression of the form

$$H(x) = H_0 + \sum_{i=1}^{m} x_i H_i + \sum_{i=1}^{m} \sum_{j=1}^{m} x_i x_j H_{i,j} \leq 0,$$

where $x^T = [x_1 \cdots x_m]$, and $H_i, H_{i,j} \in \mathbb{S}^n$, $i = 0, \ldots, m$, $j = 0, \ldots, m$.
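A BMI differs from an LMI only in the extra quadratic terms $x_i x_j H_{i,j}$. The sketch below evaluates a small BMI at a point; the matrices are randomly generated symmetric matrices, an assumption for illustration only.

```python
import numpy as np

# Evaluating a BMI H(x) = H0 + sum_i x_i H_i + sum_{i,j} x_i x_j H_{i,j}
# with m = 2 variables and n = 2. All data are illustrative.
m, n = 2, 2
rng = np.random.default_rng(0)

def random_sym(n):
    M = rng.standard_normal((n, n))
    return (M + M.T) / 2  # symmetrize so every term lies in S^n

H0 = random_sym(n)
H = {i: random_sym(n) for i in range(1, m + 1)}
Hij = {(i, j): random_sym(n) for i in range(1, m + 1) for j in range(1, m + 1)}

def bmi(x):
    val = H0.copy()
    for i in range(1, m + 1):
        val += x[i - 1] * H[i]
        for j in range(1, m + 1):
            val += x[i - 1] * x[j - 1] * Hij[(i, j)]
    return val

x = np.array([0.3, -0.7])
# The BMI holds at x iff all eigenvalues of bmi(x) are <= 0.
print(np.linalg.eigvalsh(bmi(x)))
```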
Consider the matrices $A \in \mathbb{R}^{n \times n}$ and $Q \in \mathbb{S}^n$, where $Q > 0$. It is desired to find a symmetric matrix $P \in \mathbb{S}^n$ satisfying the inequality

$$PA + A^T P + Q < 0, \qquad (1)$$

where $P > 0$. The elements of $P$ are the design variables in this problem, and although equation $(1)$ is indeed an LMI in the matrix $P$, it does not look like the LMI of Definition-2. For simplicity, consider the case $n = 2$, so that each matrix is of dimension $2 \times 2$, and let $x = [p_1 \quad p_2 \quad p_3]^T$.
Writing the matrix $P$ in terms of the basis $E_i \in \mathbb{S}^2$, $i = 1, 2, 3$, yields

$$P = \begin{bmatrix} p_1 & p_2 \\ p_2 & p_3 \end{bmatrix} = p_1 \underbrace{\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}}_{E_1} + p_2 \underbrace{\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}}_{E_2} + p_3 \underbrace{\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}}_{E_3}.$$
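This decomposition is easy to confirm numerically: for any choice of coordinates $(p_1, p_2, p_3)$ (the values below are arbitrary), the weighted sum of the basis matrices reproduces $P$ entry by entry.

```python
import numpy as np

# Sanity check of the basis expansion: a symmetric 2x2 matrix P is
# recovered from its coordinates (p1, p2, p3) in the basis E1, E2, E3.
E1 = np.array([[1.0, 0.0], [0.0, 0.0]])
E2 = np.array([[0.0, 1.0], [1.0, 0.0]])
E3 = np.array([[0.0, 0.0], [0.0, 1.0]])

p1, p2, p3 = 2.0, -0.5, 1.5          # arbitrary illustrative values
P = np.array([[p1, p2], [p2, p3]])

print(np.allclose(P, p1 * E1 + p2 * E2 + p3 * E3))  # → True
```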
Note that the matrices $E_i$ are linearly independent and symmetric, and thus form a basis for $\mathbb{S}^2$, the space of $2 \times 2$ symmetric matrices. The matrix inequality in equation $(1)$ can then be written as

$$Q + p_1 (E_1 A + A^T E_1) + p_2 (E_2 A + A^T E_2) + p_3 (E_3 A + A^T E_3) < 0.$$
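Since $P \mapsto PA + A^T P$ is linear in $P$, the expansion above must agree term by term with the matrix form of equation $(1)$. A quick numerical check (with arbitrary illustrative choices of $A$, $Q$, and the $p_i$):

```python
import numpy as np

# Check that the scalar expansion in the p_i agrees with the matrix
# form PA + A^T P + Q of equation (1). All data are illustrative.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Q = np.eye(2)
E = [np.array([[1.0, 0.0], [0.0, 0.0]]),
     np.array([[0.0, 1.0], [1.0, 0.0]]),
     np.array([[0.0, 0.0], [0.0, 1.0]])]
p = [2.0, -0.5, 1.5]

P = np.array([[p[0], p[1]], [p[1], p[2]]])
lhs = P @ A + A.T @ P + Q
rhs = Q + sum(pi * (Ei @ A + A.T @ Ei) for pi, Ei in zip(p, E))

print(np.allclose(lhs, rhs))  # → True, by linearity in P
```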
Defining $F_0 = Q$ and $F_i = E_i A + A^T E_i$, $i = 1, 2, 3$, yields

$$F_0 + \sum_{i=1}^{3} p_i F_i < 0,$$
which now matches the definition of an LMI given in Definition-2. Throughout this wikibook, LMIs are typically written in the matrix form of equation $(1)$ rather than the scalar form of Definition-2.
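For this particular LMI, a feasible $P$ can even be produced without a semidefinite-programming solver: when $A$ is Hurwitz, solving the Lyapunov equation $A^T P + P A = -(Q + I)$ exactly gives $PA + A^T P + Q = -I < 0$ with $P > 0$. The sketch below does this in plain NumPy via the vectorization identity $\operatorname{vec}(MXN) = (N^T \otimes M)\operatorname{vec}(X)$; the matrices $A$ and $Q$ are illustrative choices, and general LMIs would still require an SDP solver.

```python
import numpy as np

# Solve A^T P + P A = -(Q + I) for a Hurwitz A, so that
# P A + A^T P + Q = -I < 0 and P > 0 (standard Lyapunov theory).
A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # eigenvalues -1, -3 (Hurwitz)
Q = np.eye(2)
n = A.shape[0]
I = np.eye(n)

C = -(Q + I)                               # negative-definite right-hand side
# vec(A^T P) = (I kron A^T) vec(P), vec(P A) = (A^T kron I) vec(P),
# using column-major (order="F") vectorization throughout.
M = np.kron(I, A.T) + np.kron(A.T, I)
P = np.linalg.solve(M, C.flatten(order="F")).reshape((n, n), order="F")

print(np.linalg.eigvalsh(P))                    # all positive: P > 0
print(np.linalg.eigvalsh(P @ A + A.T @ P + Q))  # all negative: (1) holds
```

For this $A$ and $Q$ the solution works out to $P = \begin{bmatrix} 1 & 1/2 \\ 1/2 & 2/3 \end{bmatrix}$, which is positive definite and satisfies $PA + A^T P + Q = -I$.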