Quadratic Polytopic Full State Feedback Optimal $H_{\infty}$ Control
For a system with polytopic uncertainties, full state feedback is a control technique that attempts to place the system's closed-loop poles at specified locations based on given performance specifications. $H_{\infty}$ methods formulate this task as an optimization problem and attempt to minimize the $H_{\infty}$ norm of the system.
Consider a system with the following state-space representation:

$${\begin{aligned}{\dot {x}}(t)&=Ax(t)+B_{1}q(t)+B_{2}w(t)\\p(t)&=C_{1}x(t)+D_{11}q(t)+D_{12}w(t)\\z(t)&=C_{2}x(t)+D_{21}q(t)+D_{22}w(t)\end{aligned}}$$
where $x\in \mathbb {R} ^{m}$, $q\in \mathbb {R} ^{n}$, $w\in \mathbb {R} ^{g}$, $A\in \mathbb {R} ^{m\times m}$, $B_{1}\in \mathbb {R} ^{m\times n}$, $B_{2}\in \mathbb {R} ^{m\times g}$, $p\in \mathbb {R} ^{p}$, $C_{1}\in \mathbb {R} ^{p\times m}$, $D_{11}\in \mathbb {R} ^{p\times n}$, $D_{12}\in \mathbb {R} ^{p\times g}$, $z\in \mathbb {R} ^{s}$, $C_{2}\in \mathbb {R} ^{s\times m}$, $D_{21}\in \mathbb {R} ^{s\times n}$, $D_{22}\in \mathbb {R} ^{s\times g}$, for any $t\in \mathbb {R}$.
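As a quick dimension check, a minimal NumPy sketch instantiates these signals and matrices and evaluates the three equations. The concrete values of $m$, $n$, $g$, $p$, $s$ and the random matrices below are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical dimensions: m states, n- and g-dimensional inputs q and w,
# p- and s-dimensional outputs (names follow the text above).
m, n, g, p, s = 2, 1, 1, 1, 1

rng = np.random.default_rng(0)
A   = rng.standard_normal((m, m))
B1  = rng.standard_normal((m, n))
B2  = rng.standard_normal((m, g))
C1  = rng.standard_normal((p, m))
D11 = rng.standard_normal((p, n))
D12 = rng.standard_normal((p, g))
C2  = rng.standard_normal((s, m))
D21 = rng.standard_normal((s, n))
D22 = rng.standard_normal((s, g))

x, q, w = np.ones(m), np.ones(n), np.ones(g)

xdot = A @ x + B1 @ q + B2 @ w     # state equation
pout = C1 @ x + D11 @ q + D12 @ w  # first output equation
z    = C2 @ x + D21 @ q + D22 @ w  # second output equation

print(xdot.shape, pout.shape, z.shape)  # (2,) (1,) (1,)
```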
Add uncertainty to the system matrices $A,B_{1},B_{2},C_{1},C_{2},D_{11},D_{12}$.
The new state-space representation is:

$${\begin{aligned}{\dot {x}}(t)&=(A+A_{i})x(t)+(B_{1}+B_{1,i})q(t)+(B_{2}+B_{2,i})w(t)\\p(t)&=(C_{1}+C_{1,i})x(t)+(D_{11}+D_{11,i})q(t)+(D_{12}+D_{12,i})w(t)\\z(t)&=C_{2}x(t)+D_{21}q(t)+D_{22}w(t)\end{aligned}}$$

where the subscript $i$ denotes the $i$-th vertex of the uncertainty polytope.
The Optimization Problem:
Recall that the closed-loop system under state feedback is:

$$S(P,K)={\begin{bmatrix}A+B_{2}F&B_{1}\\C_{1}+D_{12}F&D_{11}\end{bmatrix}}$$
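This closed-loop matrix can be assembled directly. A minimal NumPy sketch (the helper name `closed_loop` and the example matrices are assumptions for illustration, not from the text):

```python
import numpy as np

def closed_loop(A, B1, B2, C1, D11, D12, F):
    """Form S(P, K) = [[A + B2 F, B1], [C1 + D12 F, D11]] as one block matrix."""
    return np.block([[A + B2 @ F, B1],
                     [C1 + D12 @ F, D11]])

# Hypothetical 2-state, single-input example data.
A   = np.array([[0.0, 1.0], [-2.0, -3.0]])
B1  = np.array([[0.0], [1.0]])
B2  = np.array([[0.0], [1.0]])
C1  = np.array([[1.0, 0.0]])
D11 = np.array([[0.0]])
D12 = np.array([[0.1]])
F   = np.array([[-1.0, -1.0]])  # some state-feedback gain

S = closed_loop(A, B1, B2, C1, D11, D12, F)
print(S.shape)  # (3, 3)
```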
This problem can be formulated as $H_{\infty}$ optimal state feedback, where $K$ is a controller with gain matrix $F$.

An LMI for Quadratic Polytopic $H_{\infty}$ Optimal State-Feedback Control:
$$\|S(P(\Delta ),K(0,0,0,F))\|_{H_{\infty }}\leq \gamma$$
$$Y>0$$
$${\begin{bmatrix}Y(A+A_{i})^{T}+(A+A_{i})Y+Z^{T}(B_{2}+B_{2,i})^{T}+(B_{2}+B_{2,i})Z&*^{T}&*^{T}\\(B_{1}+B_{1,i})^{T}&-\gamma I&*^{T}\\(C_{1}+C_{1,i})Y+(D_{12}+D_{12,i})Z&D_{11}+D_{11,i}&-\gamma I\end{bmatrix}}<0$$

Here $*^{T}$ denotes the transpose of the corresponding block of the symmetric matrix, $Z=FY$, and the state-feedback gain is recovered as $F=ZY^{-1}$.