Control Systems/Transforms
Transforms
There are a number of transforms that we will be discussing throughout this book, and the reader is assumed to have at least some prior knowledge of them. It is not the intention of this book to teach the topic of transforms to an audience that has had no previous exposure to them. However, we include a brief refresher here for readers whose memory of the topic has faded. If you do not yet know what the Laplace Transform or the Fourier Transform are, it is highly recommended that you use this page as a simple guide, and look the information up in other sources. Specifically, Wikipedia has lots of information on these subjects.
Transform Basics
A transform is a mathematical tool that converts an equation from one variable (or one set of variables) into a new variable (or a new set of variables). To do this, the transform must remove all instances of the first variable, the "Domain Variable", and add a new "Range Variable". Integrals are excellent choices for transforms, because evaluating a definite integral over the domain variable removes every instance of that variable from the result. An integral transform that converts from a domain variable a to a range variable b will typically be formatted as such:

F(b) = \int f(a)\, g(a, b)\, da
Where the function f(a) is the function being transformed, and g(a,b) is known as the kernel of the transform. Typically, the only difference between the various integral transforms is the kernel.
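For example, choosing the kernel g(t, s) = e^{-st} and integrating over t from 0 to infinity produces the Laplace transform discussed next; this is shown here only to illustrate the general form above:

g(t, s) = e^{-st} \quad\Rightarrow\quad F(s) = \int_0^\infty f(t) e^{-st}\, dt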
Laplace Transform
The Laplace Transform converts an equation from the time domain into the so-called "S-domain", or the Laplace domain, or even the "Complex domain". These are all different names for the same mathematical space, and they may be used interchangeably in this book and in other texts on the subject. The transform can only be applied under the following conditions:
- The system or signal in question is analog.
- The system or signal in question is linear.
- The system or signal in question is time-invariant.
- The system or signal in question is causal.
The transform is defined as such:
[Laplace Transform]
F(s) = \mathcal{L}\{f(t)\} = \int_0^\infty f(t) e^{-st}\, dt
Laplace transform results have been tabulated extensively. More information on the Laplace transform, including a transform table, can be found in the Appendix.
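As a quick worked example of applying the definition directly (this particular pair also appears in most tables), the transform of the decaying exponential e^{-at}, for a > 0, is:

\mathcal{L}\{e^{-at}\} = \int_0^\infty e^{-at} e^{-st}\, dt = \left[\frac{-e^{-(s+a)t}}{s+a}\right]_0^\infty = \frac{1}{s+a}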
If we have a linear differential equation in the time domain:
With zero initial conditions, we can take the Laplace transform of the equation as such:
And separating, we get:
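As a simple illustration of this procedure (using an arbitrary first-order equation chosen here only for demonstration), consider an input u(t) and an output y(t) related by:

\frac{dy(t)}{dt} + a\, y(t) = u(t)

Taking the Laplace transform term by term, with zero initial conditions so that the derivative transforms to sY(s), gives:

sY(s) + a\, Y(s) = U(s)

And separating the output from the input:

\frac{Y(s)}{U(s)} = \frac{1}{s + a}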
Inverse Laplace Transform
The inverse Laplace Transform is defined as such:
[Inverse Laplace Transform]
f(t) = \mathcal{L}^{-1}\{F(s)\} = \frac{1}{2\pi j} \int_{c - j\infty}^{c + j\infty} F(s) e^{st}\, ds
The inverse transform converts a function from the Laplace domain back into the time domain.
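If the MATLAB symbolic toolbox described at the end of this chapter is available, a table look-up can be double-checked in software. A minimal sketch, assuming the toolbox is installed:

syms s
ilaplace(1/(s + 2))    % returns exp(-2*t), the time-domain function e^(-2t)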
Matrices and Vectors
The Laplace Transform can be used on systems of linear equations in an intuitive way. Let's say that we have a system of linear equations:
We can arrange these equations into matrix form, as shown:
And write this symbolically as:
We can take the Laplace transform of both sides:
Which is the same as taking the transform of each individual equation in the system of equations.
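For instance, if the system consists of first-order differential equations in a state vector x(t) (the particular system below is only an illustration, chosen because this form recurs later in the book):

\dot{x}(t) = A x(t) + B u(t)

Taking the Laplace transform of both sides, with zero initial conditions, transforms every row at once:

s X(s) = A X(s) + B U(s)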
Example: RL Circuit
See also: Circuit Theory
Here, we are going to show a common example of a first-order system, an RL Circuit. In an inductor, the relationship between the current, I, and the voltage, V, in the time domain is expressed as a derivative:

V(t) = L \frac{dI(t)}{dt}
Where L is a special quantity called the "Inductance" that is a property of inductors.
Let's say that we have a 1st order RL series electric circuit. The resistor has resistance R, the inductor has inductance L, and the voltage source has input voltage Vin. The system output of our circuit is the voltage over the inductor, Vout. In the time domain, we have the following first-order differential equations to describe the circuit:
However, since the circuit is essentially acting as a voltage divider, we can put the output in terms of the input as follows:
This is a very complicated equation, and will be difficult to solve unless we employ the Laplace transform:
We can divide top and bottom by L, and move Vin to the other side:
And using a simple table look-up, we can solve this for the time-domain relationship between the circuit input and the circuit output:
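A sketch of that algebra, assuming the series RL voltage divider with the output taken across the inductor as described above:

\frac{V_{out}(s)}{V_{in}(s)} = \frac{Ls}{R + Ls} = \frac{s}{s + R/L}

So, for example, a unit-step input V_{in}(s) = 1/s gives V_{out}(s) = 1/(s + R/L), and the table look-up yields v_{out}(t) = e^{-(R/L)t}.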
Partial Fraction Expansion
See also: Calculus
Laplace transform pairs are extensively tabulated, but frequently we have transfer functions and other equations that do not have a tabulated inverse transform. If our equation is a fraction, we can often utilize Partial Fraction Expansion (PFE) to create a set of simpler terms that will have readily available inverse transforms. This section is going to give a brief reminder about PFE, for those who have already learned the topic. This refresher will be in the form of several examples of the process, as it relates to the Laplace Transform. People who are unfamiliar with PFE are encouraged to read more about it in Calculus.
Example: Second-Order System
If we have a given equation in the S-domain:
We can expand it into several smaller fractions as such:
This looks impossible, because we have a single equation with 3 unknowns (s, A, B), but in reality s can take any arbitrary value, and we can "plug in" values for s to solve for A and B, without needing other equations. For instance, in the above equation, we can multiply through by the denominator, and cancel terms:
Now, when we set s → -2, the A term disappears, and we are left with B → 3. When we set s → -1, we can solve for A → -1. Putting these values back into our original equation, we have:
Remember, since the Laplace transform is a linear operator, the following relationship holds true:
Finding the inverse transform of these smaller terms should be an easier process than finding the inverse transform of the whole function. Partial fraction expansion is a useful, and oftentimes necessary, tool for finding the inverse of an S-domain equation.
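MATLAB's residue function performs this expansion numerically from the polynomial coefficients. A minimal sketch, assuming the fraction in the example above is (2s + 1)/((s + 1)(s + 2)), which is consistent with the values A = -1 and B = 3 found there:

num = [2 1];                  % numerator 2s + 1
den = [1 3 2];                % denominator s^2 + 3s + 2 = (s + 1)(s + 2)
[r, p] = residue(num, den)    % r holds the residues, p the corresponding poles
% The pairing gives 3 at the pole s = -2 and -1 at the pole s = -1,
% i.e. F(s) = -1/(s + 1) + 3/(s + 2).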
Example: Fourth-Order System
If we have a given equation in the S-domain:
We can expand it into several smaller fractions as such:
Canceling terms wouldn't be enough here, so we expand the products instead (separated onto multiple lines):
Let's compare coefficients:
- A + D = 0
- 30A + C + 20D = 79
- 300A + B + 10C + 100D = 916
- 1000A = 1000
And solving gives us:
- A = 1
- B = 26
- C = 69
- D = -1
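These values follow directly by back-substitution through the four coefficient equations above:

1000A = 1000  =>  A = 1
A + D = 0  =>  D = -A = -1
30A + C + 20D = 79  =>  C = 79 - 30 + 20 = 69
300A + B + 10C + 100D = 916  =>  B = 916 - 300 - 690 + 100 = 26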
We know from the Laplace Transform table that the following relation holds:
We can plug in our values for A, B, C, and D into our expansion, and try to convert it into the form above.
Example: Complex Roots
Given the following transfer function:
When the roots of the denominator are complex, we keep the quadratic factor intact and use a numerator of the form As + B, as opposed to the single constant (e.g. D) used for a real root:
- As + B = 7s + 26
- A = 7
- B = 26
We will need to reform it into two fractions that look like this (without changing its value), since these standard forms have known inverse transforms:
- (s - a)/((s - a)² + ω²) → e^{at} cos(ωt)
- ω/((s - a)² + ω²) → e^{at} sin(ωt)
Let's start with the denominator (for both fractions):
The roots of s² - 80s + 1681 are 40 + j9 and 40 - j9.
- s² - 80s + 1681 → (s - 40)² + 9²
And now the numerators:
Inverse Laplace Transform:
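A sketch of the fitting and the resulting inverse transform, assuming the transfer function is (7s + 26)/(s² - 80s + 1681), which is consistent with the numerator and the roots identified above:

7s + 26 = 7(s - 40) + 306 = 7(s - 40) + 34 \cdot 9

F(s) = 7\,\frac{s - 40}{(s - 40)^2 + 9^2} + 34\,\frac{9}{(s - 40)^2 + 9^2}

f(t) = 7 e^{40t} \cos(9t) + 34 e^{40t} \sin(9t)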
Example: Sixth-Order System
Given the following transfer function:
We multiply through by the denominators to clear the fractions:
And then we combine terms:
Comparing coefficients:
- A + B + C = 0
- -15A - 12B - 3C + D = 90
- 73A + 37B - 3D = 0
- -111A = -1110
Now, we can solve for A, B, C and D:
- A = 10
- B = -10
- C = 0
- D = 120
And now for the "fitting":
The roots of s² - 12s + 37 are 6 + j and 6 - j.
No need to fit the fraction of D, because it is complete; no need to bother fitting the fraction of C, because C is equal to zero.
Final Value Theorem
The Final Value Theorem allows us to determine the value of the time-domain equation, as time approaches infinity, from the S-domain equation. In Control Engineering, the Final Value Theorem is used most frequently to determine the steady-state value of a system. For the theorem to apply, all poles of sF(s) must have negative real parts.
[Final Value Theorem (Laplace)]
\lim_{t \to \infty} f(t) = \lim_{s \to 0} s F(s)
From our chapter on system metrics, you may recognize the value of the system at time infinity as the steady-state value of the system. The difference between the steady-state value and the expected output value is what we call the steady-state error of the system. Using the Final Value Theorem, we can find the steady-state value and the steady-state error of the system in the complex S domain.
Example: Final Value Theorem
Find the final value of the following polynomial:
We can apply the Final Value Theorem:
We obtain the value:
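As an illustration of the procedure, using a hypothetical function chosen here for demonstration rather than the one in the original example: for F(s) = 10/(s(s + 5)), all poles of sF(s) lie in the left half-plane, so

\lim_{t \to \infty} f(t) = \lim_{s \to 0} s \cdot \frac{10}{s(s + 5)} = \frac{10}{5} = 2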
Initial Value Theorem
Akin to the Final Value Theorem, the Initial Value Theorem allows us to determine the initial value of the system (the value at time zero) from the S-domain equation. The Initial Value Theorem is used most frequently to determine the starting conditions, or the "initial conditions", of a system.
[Initial Value Theorem (Laplace)]
f(0) = \lim_{s \to \infty} s F(s)
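For example, for the decaying exponential e^{-at}, whose transform 1/(s + a) was computed earlier in this chapter:

f(0) = \lim_{s \to \infty} s \cdot \frac{1}{s + a} = 1

which matches the time-domain value e^{-a \cdot 0} = 1.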
Common Transforms
We will now show you the transforms of the three functions we have already learned about: the unit step, the unit ramp, and the unit parabola. The transform of the unit step function is given by:

\mathcal{L}\{u(t)\} = \frac{1}{s}
And since the unit ramp is the integral of the unit step, we can multiply the above result by 1/s to get the transform of the unit ramp:

\mathcal{L}\{t \cdot u(t)\} = \frac{1}{s^2}
Again, we can multiply by 1/s to get the transform of the unit parabola:

\mathcal{L}\left\{\frac{t^2}{2}\, u(t)\right\} = \frac{1}{s^3}
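These three results can be checked with the symbolic toolbox functions described at the end of this chapter. A minimal sketch, assuming the toolbox is installed and the unit parabola is defined as t²/2:

syms t
laplace(sym(1))    % unit step for t >= 0: returns 1/s
laplace(t)         % unit ramp: returns 1/s^2
laplace(t^2/2)     % unit parabola: returns 1/s^3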
Fourier Transform
The Fourier Transform is very similar to the Laplace transform. The Fourier transform uses the assumption that any finite time-domain signal can be broken into an infinite sum of sinusoidal (sine and cosine) signals. Under this assumption, the Fourier Transform converts a time-domain signal into its frequency-domain representation, as a function of the radial frequency, ω. The Fourier Transform is defined as such:
[Fourier Transform]
F(j\omega) = \mathcal{F}\{f(t)\} = \int_0^\infty f(t) e^{-j\omega t}\, dt
We can now show that the Fourier Transform is equivalent to the Laplace transform when the following condition is true:

s = j\omega
Because the Laplace and Fourier Transforms are so closely related, it does not make much sense to use both transforms for all problems. This book, therefore, will concentrate on the Laplace transform for nearly all subjects, except those problems that deal directly with frequency values. For frequency problems, it makes life much easier to use the Fourier Transform representation.
Like the Laplace Transform, the Fourier Transform has been extensively tabulated. Properties of the Fourier transform, in addition to a table of common transforms, are available in the Appendix.
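As a quick check of this equivalence, consider the causal decaying exponential f(t) = e^{-at} (with a > 0):

\mathcal{F}\{f(t)\} = \int_0^\infty e^{-at} e^{-j\omega t}\, dt = \frac{1}{j\omega + a}

which is exactly the Laplace-domain result 1/(s + a) evaluated at s = jω.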
Inverse Fourier Transform
The inverse Fourier Transform is defined as follows:
[Inverse Fourier Transform]
f(t) = \mathcal{F}^{-1}\{F(j\omega)\} = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(j\omega) e^{j\omega t}\, d\omega
This transform is nearly identical to the Fourier Transform.
Complex Plane
Using the above equivalence, we can show that the Laplace transform is always equal to the Fourier Transform if the variable s is an imaginary number. However, the Laplace transform is different if s is a real or a complex variable. As such, we generally define s to have both a real part and an imaginary part, as such:

s = \sigma + j\omega
And we can show that s = jω if σ = 0.
Since the variable s can be broken down into 2 independent values, it is often useful to graph the variable s on its own special "S-plane". The S-plane graphs the variable σ on the horizontal axis, and the value of jω on the vertical axis.
Euler's Formula
There is an important result from calculus that is known as Euler's Formula, or "Euler's Relation". This important formula relates the important values of e, j, π, 1 and 0:

e^{j\pi} + 1 = 0
This result is derived from the following equation, setting ω to π:
[Euler's Formula]
e^{j\omega} = \cos(\omega) + j\sin(\omega)
This formula will be used extensively in some of the chapters of this book, so it is important to become familiar with it now.
MATLAB
The MATLAB symbolic toolbox contains functions to compute the Laplace and Fourier transforms automatically. The functions laplace and fourier can be used to calculate the Laplace and Fourier transforms of the input functions, respectively. For instance, the code:
t = sym('t');
fx = 30*t^2 + 20*t;
laplace(fx)
produces the output:
ans = 60/s^3+20/s^2
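A similar call computes Fourier transforms; a minimal sketch, again assuming the symbolic toolbox (MATLAB expresses the result in the frequency variable w):

syms t w
fourier(exp(-t)*heaviside(t), t, w)    % returns 1/(1 + w*1i), i.e. 1/(jw + 1),
                                       % matching the Laplace result 1/(s + 1) at s = jw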
We will discuss these functions more in the Appendix.