Control Systems/State-Space Equations

Time-Domain Approach

The "Classical" method of controls (what we have been studying so far) is based mostly in the transform domain. To represent and control a system in general, we use the Laplace transform (the Z-transform for digital systems), and to examine the frequency characteristics of a system, we use the Fourier transform. The question arises: why do we do this?

Let's look at a basic second-order Laplace Transform transfer function:

${\frac {Y(s)}{X(s)}}=G(s)={\frac {1+s}{1+2s+5s^{2}}}$

And we can decompose this equation in terms of the system inputs and outputs:

$(1+2s+5s^{2})Y(s)=(1+s)X(s)$

Now, when we take the inverse Laplace transform of our equation, we can see that:

$y(t)+2{\frac {dy(t)}{dt}}+5{\frac {d^{2}y(t)}{dt^{2}}}=x(t)+{\frac {dx(t)}{dt}}$

The Laplace transform obscures the fact that we are actually dealing with second-order differential equations. It moves a system out of the time domain and into the complex frequency domain, so that we can study and manipulate our systems as algebraic polynomials instead of linear ODEs. Given the complexity of differential equations, why would we ever want to work in the time domain?

It turns out that if we decompose our higher-order differential equations into multiple first-order equations, we can manipulate the system easily without having to use integral transforms. The key to this approach is the use of state variables. By taking our multiple first-order differential equations and analyzing them in vector form, we can not only do the same things we were doing in the transform domain using simple matrix algebra, but we can also easily account for systems with multiple inputs and multiple outputs, without adding much unnecessary complexity. This is why the "modern" state-space approach to controls has become popular.

State-Space

In a state space system, the internal state of the system is explicitly accounted for by an equation known as the state equation. The system output is given in terms of a combination of the current system state, and the current system input, through the output equation. These two equations form a system of equations known collectively as state-space equations. The state-space is the vector space that consists of all the possible internal states of the system.

For a system to be modeled using the state-space method, the system must meet this requirement:

1. The system must be "lumped"

"Lumped" in this context means that we can find a finite-dimensional state vector that fully characterizes all the internal states of the system.

This text mostly considers linear state-space systems, where the state and output equations satisfy the superposition principle and the state space is linear. However, the state-space approach is equally valid for nonlinear systems, although some specific methods are not applicable to them.

State

Central to the state-space notation is the idea of a state. A state of a system is the current value of the internal elements of the system, which change separately from (but not independently of) the output of the system. In essence, the state of a system is an explicit account of the values of the internal system components. Here are some examples:

Consider an electric circuit with both an input and an output terminal. This circuit may contain any number of inductors and capacitors. The state variables may represent the magnetic and electric fields of the inductors and capacitors, respectively.

Consider a spring-mass-dashpot system. The state variables may represent the compression of the spring and the velocity of the mass at the dashpot.

Consider a chemical reaction where certain reagents are poured into a mixing container, and the output is the amount of the chemical product produced over time. The state variables may represent the amounts of un-reacted chemicals in the container, or other properties such as the quantity of thermal energy in the container (that can serve to facilitate the reaction).

State Variables

When modeling a system using a state-space equation, we first need to define three vectors:

Input variables
A SISO (Single Input Single Output) system will only have a single input value, but a MIMO system may have multiple inputs. We need to define all the inputs to the system, and we need to arrange them into a vector.
Output variables
This is the system output value, and in the case of MIMO systems, we may have several. Output variables should be independent of one another, and only dependent on a linear combination of the input vector and the state vector.
State Variables
The state variables represent values from inside the system, that can change over time. In an electric circuit, for instance, the node voltages or the mesh currents can be state variables. In a mechanical system, the forces applied by springs, gravity, and dashpots can be state variables.

We denote the input variables with u, the output variables with y, and the state variables with x. In essence, we have the following relationship:

$y=f(x,u)$

Where f(x, u) is our system. Also, the state variables can change with respect to the current state and the system input:

$x'=g(x,u)$

Where x' is the rate of change of the state variables. We will define f(x, u) and g(x, u) in the next chapter.
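To make these relationships concrete, we can pick particular functions f and g and step the state forward numerically. The following sketch is hypothetical: the scalar system, the forward-Euler integration scheme, and all numerical values are illustrative choices, not part of the text above.

```python
# A sketch of the relationships y = f(x, u) and x' = g(x, u), simulated with
# a forward-Euler step. The particular scalar functions chosen here are
# hypothetical illustrations, not a system from the text.

def g(x, u):
    return -2.0 * x + u   # state change depends on the current state and input

def f(x, u):
    return 3.0 * x        # output depends on the current state

def simulate(x0, u, T, steps):
    """Step x' = g(x, u) forward in time, recording the output at each step."""
    x = x0
    ys = []
    for _ in range(steps):
        ys.append(f(x, u))
        x = x + T * g(x, u)   # x(t + T) ~ x(t) + T * x'(t)
    return ys

ys = simulate(x0=0.0, u=1.0, T=0.01, steps=1000)
# The state settles where g(x, u) = 0, i.e. x = 0.5, so the output approaches 1.5.
```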

Multi-Input, Multi-Output

In the Laplace domain, if we want to account for systems with multiple inputs and multiple outputs, we are going to need to rely on the principle of superposition to create a system of simultaneous Laplace equations for each output and each input. For such systems, the classical approach not only doesn't simplify the situation, but because the systems of equations need to be transformed into the frequency domain first, manipulated, and then transformed back into the time domain, they can actually be more difficult to work with. However, the Laplace domain technique can be combined with the State-Space techniques discussed in the next few chapters to bring out the best features of both techniques. We will discuss MIMO systems in the MIMO Systems Chapter.

State-Space Equations

In a state-space system representation, we have a system of two equations: an equation for determining the state of the system, and another equation for determining the output of the system. We will use the variable y(t) as the output of the system, x(t) as the state of the system, and u(t) as the input of the system. We use the notation x'(t) (note the prime) for the first derivative of the state vector of the system, as dependent on the current state of the system and the current input. Symbolically, we say that there are two functions g and h that describe these relationships:

$x'(t)=g[t_{0},t,x(t),x(0),u(t)]$
$y(t)=h[t,x(t),u(t)]$
Note:
If x'(t) and y(t) are not linear combinations of x(t) and u(t), the system is said to be nonlinear. We will attempt to discuss non-linear systems in a later chapter.

The first equation shows that the system state change is dependent on the previous system state, the initial state of the system, the time, and the system inputs. The second equation shows that the system output is dependent on the current system state, the system input, and the current time.

If the system state change x'(t) and the system output y(t) are linear combinations of the system state and input vectors, then we can say the systems are linear systems, and we can rewrite them in matrix form:

[State Equation]

$x'=A(t)x(t)+B(t)u(t)$

[Output Equation]

$y(t)=C(t)x(t)+D(t)u(t)$

If the systems themselves are time-invariant, we can re-write this as follows:

$x'=Ax(t)+Bu(t)$
$y(t)=Cx(t)+Du(t)$

The State Equation shows the relationship between the system's current state and its input, and the future state of the system. The Output Equation shows the relationship between the system state and its input, and the output. These equations show that in a given system, the current output is dependent on the current input and the current state. The future state is also dependent on the current state and the current input.
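For linear systems, the matrix equations above can be evaluated directly. In this hypothetical sketch, plain Python lists stand in for the matrices, and the particular A, B, C, D values and the state and input vectors are illustrative:

```python
# Evaluating x' = Ax + Bu and y = Cx + Du at one time instant, using plain
# Python lists as matrices. All of the numerical values here are hypothetical.

def matvec(M, v):
    """Multiply a matrix (list of rows) by a column vector (flat list)."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def vecadd(a, b):
    return [p + q for p, q in zip(a, b)]

A = [[0.0, 1.0],
     [-2.0, -3.0]]   # system matrix (p x p, with p = 2 states)
B = [[0.0],
     [1.0]]          # control matrix (p x q, with q = 1 input)
C = [[1.0, 0.0]]     # output matrix (r x p, with r = 1 output)
D = [[0.0]]          # feed-forward matrix (r x q)

x = [1.0, 0.5]       # current state
u = [2.0]            # current input

x_dot = vecadd(matvec(A, x), matvec(B, u))   # x' = Ax + Bu -> [0.5, -1.5]
y     = vecadd(matvec(C, x), matvec(D, u))   # y  = Cx + Du -> [1.0]
```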

It is important to note at this point that the state-space equations of a particular system are not unique: there are an infinite number of equivalent representations, obtained by transforming the A, B, C and D matrices with a change of state basis (a similarity transformation). There are a number of "standard forms" for these matrices, however, that make certain computations easier. Converting between these forms requires knowledge of linear algebra.

State-Space Basis Theorem
Any system that can be described by a finite number of nth order differential equations or nth order difference equations, or any system that can be approximated by them, can be described using state-space equations. The general solutions to the state-space equations, therefore, are solutions to all such sets of equations.

Matrices: A B C D

Our system has the form:

$\mathbf {x} '(t)=\mathbf {g} [t_{0},t,\mathbf {x} (t),x(0),\mathbf {u} (t)]$
$\mathbf {y} (t)=\mathbf {h} [t,\mathbf {x} (t),\mathbf {u} (t)]$

We've bolded several quantities to try and reinforce the fact that they can be vectors, not just scalar quantities. If these systems are time-invariant, we can simplify them by removing the time variables:

$\mathbf {x} '(t)=\mathbf {g} [\mathbf {x} (t),x(0),\mathbf {u} (t)]$
$\mathbf {y} (t)=\mathbf {h} [\mathbf {x} (t),\mathbf {u} (t)]$

Now, if we take the partial derivatives of these functions with respect to the input and the state vector at time t0, we get our system matrices:

$A=\mathbf {g} _{x}[x(0),x(0),u(0)]$
$B=\mathbf {g} _{u}[x(0),x(0),u(0)]$
$C=\mathbf {h} _{x}[x(0),u(0)]$
$D=\mathbf {h} _{u}[x(0),u(0)]$

In our time-invariant state space equations, we write these matrices and their relationships as:

$x'(t)=Ax(t)+Bu(t)$
$y(t)=Cx(t)+Du(t)$

We have four constant matrices: A, B, C, and D. We will explain these matrices below:

Matrix A
Matrix A is the system matrix, and relates how the current state affects the state change x'. If the state change is not dependent on the current state, A will be the zero matrix. The matrix exponential of the state matrix, $e^{At}$, is called the state transition matrix, and is an important function that we will describe below.
Matrix B
Matrix B is the control matrix, and determines how the system input affects the state change. If the state change is not dependent on the system input, then B will be the zero matrix.
Matrix C
Matrix C is the output matrix, and determines the relationship between the system state and the system output.
Matrix D
Matrix D is the feed-forward matrix, and allows for the system input to affect the system output directly. Basic feedback systems like those we have previously considered do not have a feed-forward element, and therefore for most of the systems we have already considered, the D matrix is the zero matrix.

Matrix Dimensions

Because we are adding and multiplying multiple matrices and vectors together, we need to be absolutely certain that the matrices have compatible dimensions, or else the equations will be undefined. For integer values p, q, and r, the dimensions of the system matrices and vectors are defined as follows:

Vectors:
• $x:p\times 1$
• $x':p\times 1$
• $u:q\times 1$
• $y:r\times 1$

Matrices:
• $A:p\times p$
• $B:p\times q$
• $C:r\times p$
• $D:r\times q$


If the matrix and vector dimensions do not agree with one another, the equations are invalid and the results will be meaningless. Matrices and vectors must have compatible dimensions or they cannot be combined using matrix operations.
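A simple helper can verify these dimension rules before any computation is attempted. This is a hypothetical sketch, and the example system at the bottom (3 states, 2 inputs, 1 output) is illustrative:

```python
# A small helper that checks the dimension rules above before the state-space
# equations are evaluated. The example system at the bottom is hypothetical.

def check_dimensions(A, B, C, D):
    """Verify A: p x p, B: p x q, C: r x p, D: r x q for lists-of-rows matrices."""
    p = len(A)
    q = len(B[0])
    r = len(C)
    assert all(len(row) == p for row in A), "A must be p x p"
    assert len(B) == p, "B must have p rows"
    assert all(len(row) == q for row in B), "B must be p x q"
    assert all(len(row) == p for row in C), "C must be r x p"
    assert len(D) == r and all(len(row) == q for row in D), "D must be r x q"
    return p, q, r

# A system with 3 states, 2 inputs, and 1 output:
A = [[0, 1, 0], [0, 0, 1], [-1, -2, -3]]
B = [[0, 0], [0, 0], [1, 1]]
C = [[1, 0, 0]]
D = [[0, 0]]
print(check_dimensions(A, B, C, D))   # -> (3, 2, 1)
```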

For the rest of the book, we will use this set of dimensions as a reminder, so that we can keep a constant notation throughout.

Notational Shorthand

The state equations and the output equations of systems can be expressed in terms of matrices A, B, C, and D. Because the form of these equations is always the same, we can use an ordered quadruplet to denote a system. We can use the shorthand (A, B, C, D) to denote a complete state-space representation. Also, because the state equation is very important for our later analysis, we can write an ordered pair (A, B) to refer to the state equation:

$(A,B)\to x'=Ax+Bu$
$(A,B,C,D)\to \left\{{\begin{matrix}x'=Ax+Bu\\y=Cx+Du\end{matrix}}\right.$

Obtaining the State-Space Equations

The beauty of state equations is that they can be used to transparently describe systems that are both continuous and discrete in nature. Some texts will differentiate notation between discrete and continuous cases, but this text will not make such a distinction; instead we will opt to use the generic coefficient matrices A, B, C and D for both continuous and discrete systems. Occasionally this book may employ the subscript C to denote a continuous-time version of a matrix, and the subscript D to denote the discrete-time version of the same matrix. Other texts may use the letters F, H, and G for continuous systems and Γ and Θ for discrete systems. However, if we keep track of our time-domain system, we don't need to worry about such notations.

From Differential Equations

Let's say that we have a general 3rd order differential equation in terms of input u(t) and output y(t):

${\frac {d^{3}y(t)}{dt^{3}}}+a_{2}{\frac {d^{2}y(t)}{dt^{2}}}+a_{1}{\frac {dy(t)}{dt}}+a_{0}y(t)=u(t)$

We can create the state variable vector x in the following manner:

$x_{1}=y(t)$
$x_{2}={\frac {dy(t)}{dt}}$
$x_{3}={\frac {d^{2}y(t)}{dt^{2}}}$

Which now leaves us with the following 3 first-order equations:

$x_{1}'=x_{2}$
$x_{2}'=x_{3}$
$x_{3}'={\frac {d^{3}y(t)}{dt^{3}}}=-a_{0}x_{1}-a_{1}x_{2}-a_{2}x_{3}+u(t)$

Now, we can define the state vector x in terms of the individual x components, and we can create the future state vector as well:

$x={\begin{bmatrix}x_{1}\\x_{2}\\x_{3}\end{bmatrix}}$ , $x'={\begin{bmatrix}x_{1}'\\x_{2}'\\x_{3}'\end{bmatrix}}$

And with that, we can assemble the state-space equations for the system:

$x'={\begin{bmatrix}0&1&0\\0&0&1\\-a_{0}&-a_{1}&-a_{2}\end{bmatrix}}x(t)+{\begin{bmatrix}0\\0\\1\end{bmatrix}}u(t)$
$y(t)={\begin{bmatrix}1&0&0\end{bmatrix}}x(t)$

Granted, this is only a simple example, but the method should become apparent to most readers.
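The construction generalizes to any order: given the coefficients a_0 through a_{n-1} of a monic nth-order differential equation driven by u(t), the matrices can be assembled mechanically. A hypothetical sketch:

```python
# Build the state-space matrices for the monic nth-order equation
#   y^(n) + a_{n-1} y^(n-1) + ... + a_1 y' + a_0 y = u(t),
# following the same construction used for the 3rd-order example above.

def ode_to_state_space(a):
    """a = [a_0, a_1, ..., a_{n-1}]; returns (A, B, C) with y = x_1."""
    n = len(a)
    # Each state is the derivative of the previous one: x_i' = x_{i+1}.
    A = [[1.0 if j == i + 1 else 0.0 for j in range(n)] for i in range(n - 1)]
    A.append([-c for c in a])                     # last row: x_n' = -a_0 x_1 - ... + u
    B = [[0.0] for _ in range(n - 1)] + [[1.0]]   # input enters the last equation only
    C = [[1.0] + [0.0] * (n - 1)]                 # output is the first state, y = x_1
    return A, B, C

# The 3rd-order case from the text, with hypothetical values a_0=5, a_1=2, a_2=7:
A, B, C = ode_to_state_space([5.0, 2.0, 7.0])
# A == [[0, 1, 0], [0, 0, 1], [-5, -2, -7]]
```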

From Transfer Functions

The method of obtaining the state-space equations from a Laplace-domain transfer function is very similar to the method of obtaining them from the time-domain differential equations. We call the process of converting a system description from the Laplace domain to the state-space domain realization. We will discuss realization in more detail in a later chapter. In general, let's say that we have a transfer function of the form:

$T(s)={\frac {s^{m}+a_{m-1}s^{m-1}+\cdots +a_{0}}{s^{n}+b_{n-1}s^{n-1}+\cdots +b_{0}}}$

We can write our A, B, C, and D matrices as follows:

$A={\begin{bmatrix}0&1&0&\cdots &0\\0&0&1&\cdots &0\\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\cdots &1\\-b_{0}&-b_{1}&-b_{2}&\cdots &-b_{n-1}\end{bmatrix}}$
$B={\begin{bmatrix}0\\0\\\vdots \\1\end{bmatrix}}$
$C={\begin{bmatrix}a_{0}&a_{1}&\cdots &a_{m-1}\end{bmatrix}}$
$D=0$

This form of the equations is known as the controllable canonical form of the system matrices, and we will discuss it later.

Notice that to perform this method, the denominator and numerator polynomials must be monic: the coefficient of the highest-order term must be 1. If the coefficient of the highest-order term is not 1, you must divide the polynomial by that coefficient to make it 1.
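As a sketch of this construction (not MATLAB's tf2ss, just the controllable canonical form described above), the following hypothetical Python function assembles the matrices from the ascending-order numerator and denominator coefficients. Note that because the numerator is monic, its implicit leading coefficient 1 also appears in C when m &lt; n:

```python
# Controllable-canonical-form realization of
#   T(s) = (s^m + a_{m-1} s^{m-1} + ... + a_0) / (s^n + b_{n-1} s^{n-1} + ... + b_0)
# with m < n (so D = 0) and both polynomials monic. This is a hypothetical
# sketch of the construction, not a drop-in replacement for MATLAB's tf2ss.

def tf_to_controllable_canonical(a, b):
    """a = [a_0, ..., a_{m-1}] (numerator), b = [b_0, ..., b_{n-1}] (denominator)."""
    n = len(b)
    A = [[1.0 if j == i + 1 else 0.0 for j in range(n)] for i in range(n - 1)]
    A.append([-c for c in b])     # bottom row: negated denominator coefficients
    B = [[0.0] for _ in range(n - 1)] + [[1.0]]
    # The monic numerator's implicit leading 1 also appears in C, zero-padded:
    C = [list(a) + [1.0] + [0.0] * (n - len(a) - 1)]
    D = 0.0
    return A, B, C, D

# Hypothetical example: T(s) = (s + 3) / (s^2 + 4s + 8), so a = [3], b = [8, 4]
A, B, C, D = tf_to_controllable_canonical([3.0], [8.0, 4.0])
# A == [[0, 1], [-8, -4]], B == [[0], [1]], C == [[3, 1]], D == 0
```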

State-Space Representation

As an important note, remember that the state variables x are user-defined and therefore arbitrary. There are any number of ways to define x for a particular problem, each of which will lead to a different set of state-space equations.

Note: There are an infinite number of equivalent ways to represent a system using state-space equations. Some ways are better than others. Once these state-space equations are obtained, they can be manipulated to take a particular form if needed.

Consider the previous continuous-time example. We can rewrite the equation in the form

${\frac {d}{dt}}\left[{\frac {d^{2}y(t)}{dt^{2}}}+a_{2}{\frac {dy(t)}{dt}}+a_{1}y(t)\right]+a_{0}y(t)=u(t)$ .

We now define the state variables

$x_{1}=y(t)$
$x_{2}={\frac {dy(t)}{dt}}$
$x_{3}={\frac {d^{2}y(t)}{dt^{2}}}+a_{2}{\frac {dy(t)}{dt}}+a_{1}y(t)$

with first-order derivatives

$x_{1}'={\frac {dy(t)}{dt}}=x_{2}$
$x_{2}'={\frac {d^{2}y(t)}{dt^{2}}}=-a_{1}x_{1}-a_{2}x_{2}+x_{3}$
$x_{3}'=-a_{0}y(t)+u(t)$

The state-space equations for the system will then be given by

$x'={\begin{bmatrix}0&1&0\\-a_{1}&-a_{2}&1\\-a_{0}&0&0\end{bmatrix}}x(t)+{\begin{bmatrix}0\\0\\1\end{bmatrix}}u(t)$
$y(t)={\begin{bmatrix}1&0&0\end{bmatrix}}x(t)$

x may also be used in any number of variable transformations, as a matter of mathematical convenience. However, the variables y and u correspond to physical signals, and may not be arbitrarily selected, redefined, or transformed as x can be.
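We can check numerically that the two realizations above describe the same input-output behavior: simulating both with the same input should produce (up to floating-point roundoff) identical outputs. This sketch uses a forward-Euler step, and the coefficient values, input, and step size are hypothetical:

```python
# Forward-Euler simulation of two realizations of the same third-order ODE,
#   y''' + a2*y'' + a1*y' + a0*y = u,
# to check that they produce the same output. The coefficient values, input,
# and step size are hypothetical.

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def euler_output(A, T, steps, u=1.0):
    """Simulate x' = Ax + Bu with B = [0, 0, 1]', y = x_1, starting from rest."""
    x = [0.0, 0.0, 0.0]
    ys = []
    for _ in range(steps):
        ys.append(x[0])                      # C = [1, 0, 0], so y = x_1
        dx = matvec(A, x)
        x = [xi + T * (di + (u if i == 2 else 0.0))   # add Bu to the last row
             for i, (xi, di) in enumerate(zip(x, dx))]
    return ys

a0, a1, a2 = 1.0, 2.0, 3.0
A_first  = [[0, 1, 0], [0, 0, 1], [-a0, -a1, -a2]]   # companion form
A_second = [[0, 1, 0], [-a1, -a2, 1], [-a0, 0, 0]]   # alternative state choice
y1 = euler_output(A_first, T=0.01, steps=500)
y2 = euler_output(A_second, T=0.01, steps=500)
# The two output sequences agree to floating-point precision.
```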

Example: Dummy Variables

The attitude control of a particular manned aircraft can be given by:

$\theta ''(t)=\alpha +\delta$

Where α is the direction the aircraft is traveling in, θ is the direction the aircraft is facing (the attitude), and δ is the angle of the ailerons (the control input from the pilot). This equation is not in the proper state-space form, so we need to introduce some dummy variables:

$\theta _{1}=\theta$
$\theta _{1}'=\theta _{2}$
$\theta _{2}'=\alpha +\delta$

This in turn will provide us with our state equation:

${\begin{bmatrix}\theta _{1}\\\theta _{2}\end{bmatrix}}'={\begin{bmatrix}0&1\\0&0\end{bmatrix}}{\begin{bmatrix}\theta _{1}\\\theta _{2}\end{bmatrix}}+{\begin{bmatrix}0&0\\1&1\end{bmatrix}}{\begin{bmatrix}\alpha \\\delta \end{bmatrix}}$

As we can see from this equation, even though we have a valid state-equation, the variables θ1 and θ2 don't necessarily correspond to any measurable physical event, but are instead dummy variables constructed by the user to help define the system. Note, however, that the variables α and δ do correspond to physical values, and cannot be changed.
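Stepping this model forward numerically shows the dummy variables at work: with a constant combined input α + δ = 1, the attitude θ behaves like a double integrator and grows as t²/2. The input value and step size in this sketch are hypothetical:

```python
# Forward-Euler simulation of the attitude model theta'' = alpha + delta,
# using the dummy-variable states theta_1 = theta and theta_2 = theta'.
# The constant input alpha + delta = 1 is a hypothetical test signal.

T, steps = 0.001, 1000            # simulate one second in millisecond steps
theta1, theta2 = 0.0, 0.0         # attitude and attitude rate, starting at rest
for _ in range(steps):
    theta1 += T * theta2          # theta_1' = theta_2
    theta2 += T * 1.0             # theta_2' = alpha + delta = 1
# With a constant unit input, theta behaves like t^2/2, so theta1 is near 0.5
# and theta2 (the attitude rate) is near 1.0 after one second.
```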

Discretization

If we have a system (A, B, C, D) that is defined in continuous time, we can discretize the system so that an equivalent process can be performed using a digital computer. We can use the definition of the derivative, as such:

$x'(t)=\lim _{T\to 0}{\frac {x(t+T)-x(t)}{T}}$

And substituting this into the state equation with some approximation (and ignoring the limit for now) gives us:

$\lim _{T\to 0}{\frac {x(t+T)-x(t)}{T}}=Ax(t)+Bu(t)$
$x(t+T)=x(t)+Ax(t)T+Bu(t)T$
$x(t+T)=(I+AT)x(t)+(BT)u(t)$

We are able to remove that limit because in a discrete system, the time interval between samples is positive and non-negligible. By definition, a discrete system is only defined at certain time points, and not at all time points as the limit would have indicated. In a discrete system, we are interested only in the value of the system at discrete points. If those points are evenly spaced by every T seconds (the sampling time), then the samples of the system occur at t = kT, where k is an integer. Substituting kT for t into our equation above gives us:

$x(kT+T)=(I+AT)x(kT)+TBu(kT)$

Or, using the square-bracket shorthand that we've developed earlier, we can write:

$x[k+1]=(I+AT)x[k]+TBu[k]$

In this form, the state-space system can be implemented quite easily into a digital computer system using software, not complicated analog hardware. We will discuss this relationship and digital systems more specifically in a later chapter.
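The discretization above amounts to A_d = I + AT and B_d = TB, where I is the identity matrix. A hypothetical sketch (note that this forward-Euler discretization is only an approximation, accurate for small T):

```python
# Forward-Euler discretization of x' = Ax + Bu:
#   A_d = I + A*T,  B_d = B*T
# The continuous-time matrices here are hypothetical examples.

def discretize_euler(A, B, T):
    """Return (Ad, Bd) so that x[k+1] = Ad x[k] + Bd u[k] approximates the system."""
    n = len(A)
    Ad = [[(1.0 if i == j else 0.0) + T * A[i][j] for j in range(n)]
          for i in range(n)]                 # I + A*T
    Bd = [[T * b for b in row] for row in B] # B*T
    return Ad, Bd

A = [[0.0, 1.0], [-2.0, -3.0]]
B = [[0.0], [1.0]]
Ad, Bd = discretize_euler(A, B, T=0.1)
# Ad is approximately [[1.0, 0.1], [-0.2, 0.7]] and Bd approximately [[0.0], [0.1]]
```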

We will write out the discrete-time state-space equations as:

$x[n+1]=A_{d}x[n]+B_{d}u[n]$
$y[n]=C_{d}x[n]+D_{d}u[n]$

Note on Notations

The variable T is a common variable in control systems, especially when talking about the beginning and end points of a continuous-time system, or when discussing the sampling time of a digital system. However, another common use of the letter T is to signify the transpose operation on a matrix. To alleviate this ambiguity, we will denote the transpose of a matrix with a prime:

$A^{T}\to A'$

Where A' is the transpose of matrix A.

The prime notation is also frequently used to denote the time-derivative. Most of the matrices that we will be talking about are time-invariant; there is no ambiguity because we will never take the time derivative of a time-invariant matrix. However, for a time-variant matrix we will use the following notations to distinguish between the time-derivative and the transpose:

$A(t)'$  the transpose.
$A'(t)$  the time-derivative.

Note that certain variables which are time-variant are not written with the (t) postscript, such as the variables x, y, and u. For these variables, the default behavior of the prime is the time-derivative, such as in the state equation. If the transpose needs to be taken of one of these vectors, the (t)' postfix will be added explicitly to correspond to our notation above.

For instances where we need to use the Hermitian transpose, we will use the notation:

$A^{H}$

This notation is common in other literature, and raises no obvious ambiguities here.

MATLAB Representation


State-space systems can be represented in MATLAB using the 4 system matrices, A, B, C, and D. We can create a system data structure using the ss function:

sys = ss(A, B, C, D);

Systems created in this way can be manipulated in the same way that the transfer function descriptions (described earlier) can be manipulated. To convert a transfer function to a state-space representation, we can use the tf2ss function:

[A, B, C, D] = tf2ss(num, den);

And to perform the opposite operation, we can use the ss2tf function:

[num, den] = ss2tf(A, B, C, D);