# Control Systems/State Feedback

## State Observation

The state-space model of a system is the model of a single plant, not a true feedback system. The feedback mechanism that relates *x'* to *x* is a representation of the mechanism internal to the plant, where the state of the plant is related to its derivative. As such, we do not have an *A* "component" in the sense that we could swap one *A* "chip" for another: the matrices *A*, *B*, *C*, and *D* are all part of one device. Frequently, these matrices are immutable, that is, they cannot be altered by the engineer, because they are intrinsic parts of the plant. However, they can change if the plant itself is altered, for example by thermal effects or RF interference.

If the system can be treated as basically immutable (except for effects outside the engineer's control), then we need a way to modify the system *externally*. From our studies in classical controls, we know that the best mechanism for such modification is a feedback loop. What we would like to do, ultimately, is to add an additional feedback element, *K*, that can be used to move the poles of the system to any desired location. Using a technique called "state feedback" on a controllable system, we can do just that.

## State Feedback

In **state feedback**, the value of the state vector is fed back to the input of the system. We define a new input, *r*, and define the following relationship:

u(t) = Kx(t) + r(t)

*K* is a constant matrix that is external to the system, and therefore can be modified to adjust the locations of the poles of the system. This technique can only work if the system is controllable.

### Closed-Loop System

If we have an external feedback element *K*, the system is said to be a **closed-loop system**. Without this feedback element, the system is said to be an **open-loop system**. Using the relationship we've outlined above between *r* and *u*, we can write the equations for the closed-loop system:

x'(t) = Ax(t) + B(Kx(t) + r(t)) = (A + BK)x(t) + Br(t)

y(t) = Cx(t) + D(Kx(t) + r(t))

Now, our closed-loop state equation has the same form as our open-loop state equation, except that the sum *(A + BK)* replaces the matrix *A*. We can define the closed-loop state matrix as:

A_{cl} = A_{ol} + BK

where *A*_{cl} is the closed-loop state matrix, and *A*_{ol} is the open-loop state matrix. By altering *K*, we can change the eigenvalues of this matrix, and therefore change the locations of the poles of the system. If the system is controllable, we can find the characteristic equation of this system as:

|sI - (A + BK)| = 0

Computing this determinant is not a trivial task: it can become very complicated, especially for larger systems. However, if we transform the system into **controllable canonical form**, the calculations become much easier. Another alternative for computing *K* is **Ackermann's Formula**.
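The effect of *K* on the closed-loop poles can be checked numerically. The sketch below uses an illustrative unstable plant (the matrices and the gain are assumptions chosen for this example, not taken from the text) and shows that the eigenvalues of *(A + BK)* are the closed-loop poles:

```python
import numpy as np

# Illustrative open-loop plant (assumed matrices, not from the text).
# Characteristic polynomial is s^2 - 2, so the open-loop poles are
# +sqrt(2) and -sqrt(2): the plant is unstable.
A = np.array([[0.0, 1.0],
              [2.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
print(np.linalg.eigvals(A))

# Choosing a gain K changes eig(A + B K), i.e. the closed-loop poles.
K = np.array([[-4.0, -3.0]])
A_cl = A + B @ K               # = [[0, 1], [-2, -3]]
print(np.linalg.eigvals(A_cl)) # poles moved to -1 and -2
```

Here the closed-loop characteristic polynomial is s² + 3s + 2, so the formerly unstable plant now has both poles in the left half-plane.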

### Controllable Canonical Form

### Ackermann's Formula

Consider a linear feedback system with no reference input:

u(t) = Kx(t)

where *K* is a vector of gain elements. Systems of this form are typically referred to as **regulators**. Notice that this system is a simplified version of the one we introduced above, except that we are ignoring the reference input. Substituting this into the state equation gives us:

x'(t) = (A + BK)x(t)

**Ackermann's Formula** (by Jürgen Ackermann) gives us a way to select the gain values *K* in order to control the locations of the system poles. Using Ackermann's formula, if the system is controllable, we can select arbitrary poles for our regulator system.

K = -[0 0 ⋯ 0 1]ζ^{-1}a(A)

where *a(s)* is the desired characteristic polynomial of the system, *a(A)* is that polynomial evaluated at the matrix *A*, and ζ is the controllability matrix of the original system.

The gain *K* can be computed in MATLAB using Ackermann's formula with the following command:

    K = acker(A, B, p);

where *p* is a vector of the desired closed-loop pole locations. Note that MATLAB's `acker` assumes the feedback law *u* = -*Kx*, so it returns the gain that places the eigenvalues of *(A - BK)*; with the sign convention used above, the gain is the negative of this result. The goal of this type of regulator is to drive the state vector to zero. By adding a reference input to the state feedback, we can use the same idea to drive the state vector to an arbitrary state, while still giving the system arbitrary poles.
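For readers without MATLAB, Ackermann's formula can be sketched in a few lines of numpy. This version follows the *u* = -*Kx* sign convention used by `acker`; the plant matrices are illustrative assumptions, and `acker_gain` is a hypothetical helper name:

```python
import numpy as np

def acker_gain(A, B, poles):
    """Ackermann's formula (u = -Kx convention): K = [0 ... 0 1] zeta^{-1} a(A)."""
    n = A.shape[0]
    # Controllability matrix zeta = [B, AB, ..., A^{n-1}B]
    zeta = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    a = np.poly(poles)        # desired characteristic polynomial, highest power first
    aA = np.zeros_like(A)     # evaluate a(A) by Horner's method
    for c in a:
        aA = aA @ A + c * np.eye(n)
    e_n = np.zeros((1, n))
    e_n[0, -1] = 1.0          # selects the last row of zeta^{-1} a(A)
    return e_n @ np.linalg.inv(zeta) @ aA

# Illustrative unstable plant (assumed for this example)
A = np.array([[0.0, 1.0],
              [2.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
K = acker_gain(A, B, [-1.0, -2.0])    # gives K = [[4, 3]]
print(np.linalg.eigvals(A - B @ K))   # poles placed at -1 and -2
```

As with `acker`, the returned gain places the eigenvalues of *(A - BK)* at the requested locations, which only succeeds when the controllability matrix ζ is invertible.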

### Reference Inputs

The idea of the system above, with linear feedback and no reference input, is to drive the state vector to zero. If we have a system reference input *r*, we can define a vector *N* such that *rN* is the desired value for our state. The combined input is equal to:

u(t) = K(x(t) - x_{r})

where *x*_{r} is the reference state we want our state *x* to reach. Here is a block diagram of a system that uses this kind of state reference:

We have our gain matrix, *K*, and our reference input *rN*. Mathematically, we can show that:

x'(t) = (A + BK)x(t) - BKx_{r}

In this system, assuming the system is type 1 or higher, we can prove that:

lim_{t→∞} x(t) = x_{r}

That is, the state will approach the reference state as time approaches infinity.

The reference input is calculated in the continuous domain from the steady-state conditions:

0 = Ax_{r} + Bu_{ss}

and

r = Cx_{r} + Du_{ss}

where *u*_{ss} is the steady-state input that holds the state at *x*_{r}.
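The claim that the state approaches the reference can be checked numerically. Below is a minimal simulation sketch under stated assumptions: the matrices and gain are illustrative, the feedback law *u* = *K*(*x* - *x*_{r}) is applied to a double-integrator plant, and the plant's free integrators make *x*_{r} an equilibrium (the "type 1 or higher" condition above):

```python
import numpy as np

# Illustrative double-integrator plant (assumed, not from the text)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[-2.0, -3.0]])   # places eig(A + B K) at -1 and -2

x_r = np.array([1.0, 0.0])     # reference state; note A @ x_r = 0
A_cl = A + B @ K

# Euler-integrate x' = A_cl x - B K x_r from x(0) = 0
x = np.zeros(2)
dt = 1e-3
for _ in range(20000):         # simulate 20 seconds
    x = x + dt * (A_cl @ x - (B @ K @ x_r))

print(x)                       # approaches x_r = [1, 0]
```

With both closed-loop poles in the left half-plane, the transient decays and the simulated state settles at the reference state rather than at zero.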
