Control Systems/Linear System Solutions
State Equation Solutions
The state equation is a first-order linear differential equation, or (more precisely) a system of linear differential equations. Because this is a first-order equation, we can use results from Ordinary Differential Equations to find a general solution to the equation in terms of the state-variable x. Once the state equation has been solved for x, that solution can be plugged into the output equation. The resulting equation will show the direct relationship between the system input and the system output, without the need to account explicitly for the internal state of the system. The sections in this chapter will discuss the solutions to the state-space equations, starting with the easiest case (time-invariant, no input) and ending with the most difficult case (time-variant systems).
Solving for x(t) With Zero Input
Looking again at the state equation:
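\[ \dot{x}(t) = A x(t) + B u(t) \]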
We can see that this equation is a first-order differential equation, except that the variables are vectors, and the coefficients are matrices. However, because of the rules of matrix calculus, these distinctions don't matter. We can ignore the input term (for now), and rewrite this equation in the following form:
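\[ \dot{x}(t) = A x(t) \]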
And we can separate out the variables as such:
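\[ \frac{dx(t)}{x(t)} = A\, dt \]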
Integrating both sides, and raising e to the power of both sides, we obtain the result:
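\[ x(t) = e^{At + C} = e^{At} e^{C} \]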
Where C is a constant of integration. We can assign D = e^C to make the equation easier, but we also know that D will then be the initial condition of the system. This becomes obvious if we plug the value zero in for the variable t. The final solution to this equation is then given as:
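\[ x(t) = e^{At} D = e^{At} x(0) \]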
We call the matrix exponential e^{At} the state-transition matrix, and calculating it, while difficult at times, is crucial to analyzing and manipulating systems. We will talk more about calculating the matrix exponential below.
Solving for x(t) With Non-Zero Input
If, however, our input is non-zero (as is generally the case with any interesting system), our solution is a little bit more complicated. Notice that now that we have our input term in the equation, we will no longer be able to separate the variables and integrate both sides easily:
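\[ \dot{x}(t) = A x(t) + B u(t) \]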
We subtract Ax(t) from both sides to get the state terms on the left side, and then we do something curious; we premultiply both sides by the inverse state-transition matrix, e^{-At}:
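\[ e^{-At}\dot{x}(t) - e^{-At} A x(t) = e^{-At} B u(t) \]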
The rationale for this last step may seem fuzzy at best, so we will illustrate the point with an example:
Example
Take the derivative of the following with respect to time:
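\[ e^{-At} x(t) \]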
The product rule from differentiation reminds us that if we have two functions multiplied together:
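\[ f(t)\, g(t) \]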
and we differentiate with respect to t, then the result is:
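\[ \frac{d}{dt}\bigl[f(t)\, g(t)\bigr] = f'(t)\, g(t) + f(t)\, g'(t) \]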
If we set our functions accordingly:
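\[ f(t) = e^{-At}, \qquad f'(t) = -A e^{-At} \]
\[ g(t) = x(t), \qquad g'(t) = \dot{x}(t) \]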
Then the output result is:
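\[ \frac{d}{dt}\left[ e^{-At} x(t) \right] = -A e^{-At} x(t) + e^{-At} \dot{x}(t) = e^{-At} \dot{x}(t) - e^{-At} A x(t) \]

(The last step uses the fact that A and e^{-At} commute.)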
If we look at this result, it is the same as the left-hand side of our equation above.
Using the result from our example, we can condense the left side of our equation into a derivative:
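\[ \frac{d}{dt}\left[ e^{-At} x(t) \right] = e^{-At} B u(t) \]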
Now we can integrate both sides, from the initial time (t_0) to the current time (t), using a dummy variable of integration τ:
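\[ e^{-At} x(t) - e^{-A t_0} x(t_0) = \int_{t_0}^{t} e^{-A\tau} B u(\tau)\, d\tau \]

Finally, if we premultiply both sides by e^{At}, we get our final result: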
[General State Equation Solution]
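\[ x(t) = e^{A(t - t_0)} x(t_0) + \int_{t_0}^{t} e^{A(t - \tau)} B u(\tau)\, d\tau \]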
If we plug this solution into the output equation, y(t) = C x(t) + D u(t), we get:
[General Output Equation Solution]
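\[ y(t) = C e^{A(t - t_0)} x(t_0) + C \int_{t_0}^{t} e^{A(t - \tau)} B u(\tau)\, d\tau + D u(t) \]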
This is the general Time-Invariant solution to the state space equations, with non-zero input. These equations are important results, and students who are interested in a further study of control systems would do well to memorize these equations.
State-Transition Matrix
The state transition matrix, e^{At}, is an important part of the general state-space solutions for the time-invariant cases listed above. Calculating this matrix exponential function is one of the very first things that should be done when analyzing a new system, and the results of that calculation will tell important information about the system in question.
The matrix exponential can be calculated directly by using a Taylor-Series expansion:
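\[ e^{At} = \sum_{n=0}^{\infty} \frac{(At)^n}{n!} = I + At + \frac{(At)^2}{2!} + \frac{(At)^3}{3!} + \cdots \]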
Also, we can attempt to transform the matrix A into a diagonal matrix or a Jordan canonical matrix. The exponential of a diagonal matrix is found by exponentiating each diagonal element individually. The exponential of a Jordan canonical matrix is slightly more complicated, but there is a useful pattern that can be exploited to find the solution quickly. Interested readers should read the relevant passages in Engineering Analysis.
The state-transition matrix, and matrix exponentials in general, are very important tools in control engineering.
Diagonal Matrices
If a matrix A is diagonal, the state-transition matrix can be calculated by exponentiating each diagonal entry of the matrix individually:
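\[ A = \begin{bmatrix} a_1 & & \\ & \ddots & \\ & & a_n \end{bmatrix} \quad\Longrightarrow\quad e^{At} = \begin{bmatrix} e^{a_1 t} & & \\ & \ddots & \\ & & e^{a_n t} \end{bmatrix} \]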
Jordan Canonical Form
If the A matrix is in the Jordan Canonical form, then the matrix exponential can be generated quickly using the following formula. For a single Jordan block J of size n:
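\[ e^{Jt} = e^{\lambda t} \begin{bmatrix} 1 & t & \frac{t^2}{2!} & \cdots & \frac{t^{n-1}}{(n-1)!} \\ 0 & 1 & t & \cdots & \frac{t^{n-2}}{(n-2)!} \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 & t \\ 0 & 0 & \cdots & 0 & 1 \end{bmatrix} \]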
Where λ is the eigenvalue (the value on the diagonal) of the Jordan block. For a matrix made up of several Jordan blocks, e^{At} is block diagonal, with one such block for each Jordan block.
Inverse Laplace Method
We can calculate the state-transition matrix (or any matrix exponential function) by taking the following inverse Laplace transform:
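\[ e^{At} = \mathcal{L}^{-1}\!\left[ (sI - A)^{-1} \right] \]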
If A is a high-order matrix, this inverse can be difficult to solve.
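As an illustration of this method, here is a minimal SymPy sketch (this snippet is not one of the MATLAB examples later in this chapter; it assumes the same 2×2 matrix A that appears in those examples). It forms the resolvent (sI − A)^{-1} and then takes the inverse Laplace transform of each entry:

# Minimal sketch of the inverse Laplace method, assuming the example matrix below
from sympy import Matrix, eye, symbols, inverse_laplace_transform

s = symbols('s')
t = symbols('t', positive=True)
A = Matrix([[0, 1], [-1, 0]])

# Form the resolvent (s*I - A)^(-1), then invert the Laplace transform entry by entry
resolvent = (s * eye(2) - A).inv()
eAt = resolvent.applyfunc(lambda F: inverse_laplace_transform(F, s, t))
print(eAt)   # expect Matrix([[cos(t), sin(t)], [-sin(t), cos(t)]])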
Spectral Decomposition
If we know all the eigenvalues of A, we can create our transition matrix T and our inverse transition matrix T^{-1}. These matrices will be the matrices of the right and left eigenvectors, respectively. If we have both the left and the right eigenvectors, we can calculate the state-transition matrix as:
[Spectral Decomposition]
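\[ e^{At} = \sum_{i=1}^{n} e^{\lambda_i t}\, v_i\, w_i' \]

where λ_i is the i-th eigenvalue of A, v_i is the corresponding right eigenvector, and w_i is the corresponding left eigenvector (assuming A has n linearly independent eigenvectors).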
Note that w_i' is the transpose of the i-th left-eigenvector, not the derivative of it. We will discuss the concepts of "eigenvalues", "eigenvectors", and the technique of spectral decomposition in more detail in a later chapter.
Cayley-Hamilton Theorem
The Cayley-Hamilton Theorem can also be used to find a solution for a matrix exponential. It allows us to write e^{At} as a finite polynomial in A, and for any eigenvalue λ of the system matrix A, we can show that the same coefficients must also satisfy the corresponding scalar equation:
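\[ e^{\lambda t} = a_0 + a_1 \lambda + a_2 \lambda^2 + \cdots + a_{n-1} \lambda^{n-1} \]

Here n is the size of A, and the coefficients a_0, …, a_{n-1} are functions of time.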
Once we solve for the coefficients a_0, …, a_{n-1} (using one such scalar equation for each eigenvalue), we can then plug those coefficients into the following equation:
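\[ e^{At} = a_0 I + a_1 A + a_2 A^2 + \cdots + a_{n-1} A^{n-1} \]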
Example: Off-Diagonal Matrix
Given the following matrix A, find the state-transition matrix:
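\[ A = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \]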
We can find the eigenvalues of this matrix as λ = i, -i. If we plug these values into our eigenvector equation, we get:
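\[ (A - iI)\, v_1 = \begin{bmatrix} -i & 1 \\ -1 & -i \end{bmatrix} v_1 = 0, \qquad (A + iI)\, v_2 = \begin{bmatrix} i & 1 \\ -1 & i \end{bmatrix} v_2 = 0 \]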
And we can solve for our eigenvectors:
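One convenient choice (any nonzero scaling of an eigenvector is still an eigenvector) is:

\[ v_1 = \begin{bmatrix} 1 \\ i \end{bmatrix}, \qquad v_2 = \begin{bmatrix} 1 \\ -i \end{bmatrix} \]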
With our eigenvectors, we can solve for our left-eigenvectors:
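The left-eigenvectors are the rows of T^{-1}, where T = [v_1 \;\; v_2]:

\[ T = \begin{bmatrix} 1 & 1 \\ i & -i \end{bmatrix}, \qquad T^{-1} = \begin{bmatrix} \tfrac{1}{2} & -\tfrac{i}{2} \\ \tfrac{1}{2} & \tfrac{i}{2} \end{bmatrix}, \qquad w_1' = \begin{bmatrix} \tfrac{1}{2} & -\tfrac{i}{2} \end{bmatrix}, \quad w_2' = \begin{bmatrix} \tfrac{1}{2} & \tfrac{i}{2} \end{bmatrix} \]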
Now, using spectral decomposition, we can construct the state-transition matrix:
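\[ e^{At} = e^{it} \begin{bmatrix} 1 \\ i \end{bmatrix} \begin{bmatrix} \tfrac{1}{2} & -\tfrac{i}{2} \end{bmatrix} + e^{-it} \begin{bmatrix} 1 \\ -i \end{bmatrix} \begin{bmatrix} \tfrac{1}{2} & \tfrac{i}{2} \end{bmatrix} \]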
If we remember Euler's formula, we can decompose the complex exponentials into sinusoids. Performing the vector multiplications, all the imaginary terms cancel out, and we are left with our result:
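\[ e^{At} = \begin{bmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{bmatrix} \]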
The reader is encouraged to perform the multiplications, and attempt to derive this result.
Example: Sympy Calculation
With the freely available Python library SymPy, we can very easily calculate the state-transition matrix automatically:
>>> from sympy import *
>>> t = symbols('t', positive=True)
>>> A = Matrix([[0, 1], [-1, 0]])
>>> exp(A*t).expand(complex=True)
⎡cos(t)   sin(t)⎤
⎢               ⎥
⎣-sin(t)  cos(t)⎦
Example: MATLAB Calculation
Using the Symbolic Toolbox in MATLAB, we can write MATLAB code to automatically generate the state-transition matrix for a given input matrix A. Here is an example of MATLAB code that can perform this task:
function [phi] = statetrans(A)
    t = sym('t');
    phi = expm(A * t);
end
Use this MATLAB function to find the state-transition matrix for the following matrices (warning, calculation may take some time):
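\[ A_1 = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}, \qquad A_2 = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \qquad A_3 = \begin{bmatrix} 2 & 1 \\ 0 & 2 \end{bmatrix} \]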
Matrix 1 is a diagonal matrix, Matrix 2 has complex eigenvalues, and Matrix 3 is in Jordan canonical form. These three matrices should be representative of some of the common forms of system matrices. The following code snippets are the input commands into MATLAB to produce these matrices, and the output results:
- Matrix A1
>> A1 = [2 0 ; 0 2];
>> statetrans(A1)

ans =

[ exp(2*t),        0]
[        0, exp(2*t)]
- Matrix A2
>> A2 = [0 1 ; -1 0];
>> statetrans(A2)

ans =

[  cos(t),  sin(t)]
[ -sin(t),  cos(t)]
- Matrix A3
>> A3 = [2 1 ; 0 2];
>> statetrans(A3)

ans =

[ exp(2*t), t*exp(2*t)]
[        0,   exp(2*t)]
Example: Multiple Methods in MATLAB
There are multiple methods in MATLAB to compute the state-transition matrix from a constant (time-invariant) matrix A. The following methods all rely on the Symbolic Toolbox to perform the equation manipulations. At the end of each code snippet, the variable eAt contains the state-transition matrix of matrix A.
- Direct Method
t = sym('t');
eAt = expm(A * t);
- Laplace Transform Method
s = sym('s');
n = size(A, 1);
in = inv(s*eye(n) - A);
eAt = ilaplace(in);
- Spectral Decomposition
t = sym('t');
n = size(A, 1);
[V, e] = eig(A);        % columns of V are the right eigenvectors, e is diagonal
W = inv(V);             % rows of W are the left eigenvectors
eAt = sym(zeros(n));    % accumulate the sum of exp(lambda_i*t) * v_i * w_i'
for I = 1:n
    eAt = eAt + exp(e(I,I)*t) * V(:,I) * W(I,:);
end
All three of these methods should produce the same answers. The student is encouraged to verify this.