# Control Systems/Transforms

## Transforms

There are a number of transforms that we will be discussing throughout this book, and the reader is assumed to have at least some prior exposure to them. It is not the intention of this book to teach the topic of transforms to an audience that has had no previous exposure to them. However, we will include a brief refresher here for readers whose memory of the topic is imperfect. If you do not yet know what the Laplace Transform or the Fourier Transform are, it is highly recommended that you use this page as a simple guide, and look the information up in other sources. Specifically, Wikipedia has lots of information on these subjects.

### Transform Basics

A transform is a mathematical tool that converts an equation from one variable (or one set of variables) into a new variable (or a new set of variables). To do this, the transform must remove all instances of the first variable, the "Domain Variable", and add a new "Range Variable". Integrals are excellent choices for transforms, because the limits of the definite integral will be substituted into the domain variable, and all instances of that variable will be removed from the equation. An integral transform that converts from a domain variable a to a range variable b will typically be formatted as such:

${\displaystyle {\mathcal {T}}[f(a)]=F(b)=\int _{C}f(a)g(a,b)da}$

Where the function f(a) is the function being transformed, and g(a,b) is known as the kernel of the transform. Typically, the only difference between the various integral transforms is the kernel.
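
The general pattern above can be tried out directly. The book's computational examples use MATLAB, but as an illustrative alternative the sketch below uses Python's SymPy (an assumption, not the book's tooling) to apply the Laplace kernel to a sample function f(a) = e^(-2a):

```python
import sympy as sp

a, b = sp.symbols('a b', positive=True)

f = sp.exp(-2*a)    # the function being transformed
g = sp.exp(-b*a)    # kernel: this choice gives the Laplace transform

# T[f(a)] = F(b) = integral of f(a)*g(a,b) da over the contour C
F = sp.integrate(f * g, (a, 0, sp.oo))
print(F)  # 1/(b + 2)
```

Note how every instance of the domain variable a has been removed, leaving a function of the range variable b alone.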

## Laplace Transform

This operation can be performed using the MATLAB command laplace.

The Laplace Transform converts an equation from the time-domain into the so-called "S-domain", or the Laplace domain, or even the "Complex domain". These are all different names for the same mathematical space and they all may be used interchangeably in this book and in other texts on the subject. The Transform can only be applied under the following conditions:

1. The system or signal in question is analog.
2. The system or signal in question is Linear.
3. The system or signal in question is Time-Invariant.
4. The system or signal in question is causal.

The transform is defined as such:

[Laplace Transform]

${\displaystyle {\begin{matrix}F(s)={\mathcal {L}}[f(t)]=\int _{0}^{\infty }f(t)e^{-st}dt\end{matrix}}}$

Laplace transform results have been tabulated extensively. More information on the Laplace transform, including a transform table can be found in the Appendix.

If we have a linear differential equation in the time domain:

${\displaystyle {\begin{matrix}y(t)=ax(t)+bx'(t)+cx''(t)\end{matrix}}}$

With zero initial conditions, we can take the Laplace transform of the equation as such:

${\displaystyle {\begin{matrix}Y(s)=aX(s)+bsX(s)+cs^{2}X(s)\end{matrix}}}$

And separating, we get:

${\displaystyle {\begin{matrix}Y(s)=X(s)[a+bs+cs^{2}]\end{matrix}}}$
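
This derivative-to-multiplication property can be spot-checked symbolically. The sketch below uses Python's SymPy (an alternative to the book's MATLAB tooling) with a sample signal x(t) = t²e^(−t), chosen arbitrarily so that x(0) = x′(0) = 0:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, b, c = sp.symbols('a b c')

# Sample input chosen so that x(0) = x'(0) = 0 (zero initial conditions)
x = t**2 * sp.exp(-t)
y = a*x + b*sp.diff(x, t) + c*sp.diff(x, t, 2)

X = sp.laplace_transform(x, t, s, noconds=True)
Y = sp.laplace_transform(y, t, s, noconds=True)

# Y(s) should equal X(s)*(a + b*s + c*s^2)
print(sp.simplify(Y - (a + b*s + c*s**2)*X))  # 0
```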

### Inverse Laplace Transform

This operation can be performed using the MATLAB command ilaplace.

The inverse Laplace Transform is defined as such:

[Inverse Laplace Transform]

${\displaystyle {\begin{matrix}f(t)={\mathcal {L}}^{-1}\left\{F(s)\right\}={1 \over {2\pi i}}\int _{c-i\infty }^{c+i\infty }e^{st}F(s)\,ds\end{matrix}}}$

The inverse transform converts a function from the Laplace domain back into the time domain.

### Matrices and Vectors

The Laplace Transform can be used on systems of linear equations in an intuitive way. Let's say that we have a system of linear equations:

${\displaystyle {\begin{matrix}y_{1}(t)=a_{1}x_{1}(t)\end{matrix}}}$
${\displaystyle {\begin{matrix}y_{2}(t)=a_{2}x_{2}(t)\end{matrix}}}$

We can arrange these equations into matrix form, as shown:

${\displaystyle {\begin{bmatrix}y_{1}(t)\\y_{2}(t)\end{bmatrix}}={\begin{bmatrix}a_{1}&0\\0&a_{2}\end{bmatrix}}{\begin{bmatrix}x_{1}(t)\\x_{2}(t)\end{bmatrix}}}$

And write this symbolically as:

${\displaystyle \mathbf {y} (t)=A\mathbf {x} (t)}$

We can take the Laplace transform of both sides:

${\displaystyle {\mathcal {L}}[\mathbf {y} (t)]=\mathbf {Y} (s)={\mathcal {L}}[A\mathbf {x} (t)]=A{\mathcal {L}}[\mathbf {x} (t)]=A\mathbf {X} (s)}$

Which is the same as taking the transform of each individual equation in the system of equations.

### Example: RL Circuit


Here, we are going to show a common example of a first-order system, an RL Circuit. In an inductor, the relationship between the current, I, and the voltage, V, in the time domain is expressed as a derivative:

${\displaystyle V(t)=L{\frac {dI(t)}{dt}}}$

Where L is a special quantity called the "Inductance" that is a property of inductors.

[Figure: Circuit diagram for the RL circuit example problem. V_L is the voltage over the inductor, and is the quantity we are trying to find.]

Let's say that we have a 1st order RL series electric circuit. The resistor has resistance R, the inductor has inductance L, and the voltage source has input voltage Vin. The system output of our circuit is the voltage over the inductor, Vout. In the time domain, we have the following first-order differential equations to describe the circuit:

${\displaystyle V_{out}(t)=V_{L}(t)=L{\frac {dI(t)}{dt}}}$
${\displaystyle V_{in}(t)=RI(t)+L{\frac {dI(t)}{dt}}}$

However, since the circuit is essentially acting as a voltage divider, we can put the output in terms of the input as follows:

${\displaystyle V_{out}(t)={\frac {L{\frac {dI(t)}{dt}}}{RI(t)+L{\frac {dI(t)}{dt}}}}V_{in}(t)}$

In the time domain this ratio of derivatives is very difficult to work with directly, but applying the Laplace transform reduces it to simple algebra:

${\displaystyle V_{out}(s)={\frac {Ls}{R+Ls}}V_{in}(s)}$

We can divide top and bottom by L, and move Vin to the other side:

${\displaystyle {\frac {V_{out}}{V_{in}}}={\frac {s}{{\frac {R}{L}}+s}}}$

And using a simple table look-up, we can solve this for the time-domain relationship between the circuit input and the circuit output:

${\displaystyle {\frac {V_{out}}{V_{in}}}={\frac {d}{dt}}e^{\left({\frac {-Rt}{L}}\right)}u(t)}$
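
The S-domain algebra above can be checked symbolically. The following sketch (using Python's SymPy rather than the book's MATLAB) solves the transformed circuit equations for the transfer function:

```python
import sympy as sp

s, R, L = sp.symbols('s R L', positive=True)
I, Vin = sp.symbols('I V_in')

# S-domain circuit equations with zero initial conditions:
#   V_out = L*s*I    and    V_in = R*I + L*s*I
I_sol = sp.solve(sp.Eq(Vin, R*I + L*s*I), I)[0]
H = sp.simplify(L*s*I_sol / Vin)   # V_out / V_in
print(H)  # L*s/(L*s + R)
```

Dividing numerator and denominator by L recovers the form s/(s + R/L) above.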

### Partial Fraction Expansion


Laplace transform pairs are extensively tabulated, but frequently we have transfer functions and other equations that do not have a tabulated inverse transform. If our equation is a fraction, we can often utilize Partial Fraction Expansion (PFE) to create a set of simpler terms that will have readily available inverse transforms. This section is going to give a brief reminder about PFE, for those who have already learned the topic. This refresher will be in the form of several examples of the process, as it relates to the Laplace Transform. People who are unfamiliar with PFE are encouraged to read more about it in Calculus.

### Example: Second-Order System

If we have a given equation in the S-domain:

${\displaystyle F(s)={\frac {2s+1}{s^{2}+3s+2}}}$

We can expand it into several smaller fractions as such:

${\displaystyle F(s)={\frac {2s+1}{(s+1)(s+2)}}={\frac {A}{(s+1)}}+{\frac {B}{(s+2)}}={\frac {A(s+2)+B(s+1)}{(s+1)(s+2)}}}$

This looks impossible, because we have a single equation with 3 unknowns (s, A, B), but in reality s can take any arbitrary value, and we can "plug in" values for s to solve for A and B, without needing other equations. For instance, in the above equation, we can multiply through by the denominator, and cancel terms:

${\displaystyle (2s+1)=A(s+2)+B(s+1)}$

Now, when we set s → -2, the A term disappears, and we are left with B → 3. When we set s → -1, we can solve for A → -1. Putting these values back into our original equation, we have:

${\displaystyle F(s)={\frac {-1}{(s+1)}}+{\frac {3}{(s+2)}}}$

Remember, since the Laplace transform is a linear operator, the following relationship holds true:

${\displaystyle {\mathcal {L}}^{-1}[F(s)]={\mathcal {L}}^{-1}\left[{\frac {-1}{(s+1)}}+{\frac {3}{(s+2)}}\right]={\mathcal {L}}^{-1}\left[{\frac {-1}{s+1}}\right]+{\mathcal {L}}^{-1}\left[{\frac {3}{(s+2)}}\right]}$

Finding the inverse transform of these smaller terms should be an easier process than finding the inverse transform of the whole function. Partial fraction expansion is a useful, and oftentimes necessary, tool for finding the inverse of an S-domain equation.
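
SymPy's apart function performs the same expansion automatically; the sketch below (a Python alternative to the book's MATLAB workflow) reproduces the example above:

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
F = (2*s + 1) / (s**2 + 3*s + 2)

# Partial fraction expansion: -1/(s+1) + 3/(s+2)
print(sp.apart(F, s))

# Inverse transform of the whole function, term by term:
# 3*exp(-2*t) - exp(-t), multiplied by the unit step
print(sp.inverse_laplace_transform(F, s, t))
```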

### Example: Fourth-Order System

If we have a given equation in the S-domain:

${\displaystyle F(s)={\frac {79s^{2}+916s+1000}{s(s+10)^{3}}}}$

We can expand it into several smaller fractions as such:

${\displaystyle F(s)={\frac {A}{s}}+{\frac {B}{(s+10)^{3}}}+{\frac {C}{(s+10)^{2}}}+{\frac {D}{s+10}}}$
${\displaystyle F(s)={\frac {A(s+10)^{3}+Bs+Cs(s+10)+Ds(s+10)^{2}}{s(s+10)^{3}}}}$
${\displaystyle A(s+10)^{3}+Bs+Cs(s+10)+Ds(s+10)^{2}=79s^{2}+916s+1000}$

Canceling terms wouldn't be enough here; instead, we open the brackets (the result is separated onto multiple lines):

${\displaystyle As^{3}+30As^{2}+300As+1000A+Bs+}$
${\displaystyle Cs^{2}+10Cs+Ds^{3}+20Ds^{2}+100Ds}$
${\displaystyle =79s^{2}+916s+1000}$

Let's compare coefficients:

A + D = 0
30A + C + 20D = 79
300A + B + 10C + 100D = 916
1000A = 1000

And solving gives us:

A = 1
B = 26
C = 69
D = -1

We know from the Laplace Transform table that the following relation holds:

${\displaystyle {\frac {1}{(s+\alpha )^{n+1}}}\to {\frac {t^{n}}{n!}}e^{-\alpha t}\cdot u(t)}$

We can plug in our values for A, B, C, and D into our expansion, and try to convert it into the form above.

${\displaystyle F(s)={\frac {A}{s}}+{\frac {B}{(s+10)^{3}}}+{\frac {C}{(s+10)^{2}}}+{\frac {D}{s+10}}}$
${\displaystyle F(s)=A{\frac {1}{s}}+B{\frac {1}{(s+10)^{3}}}+C{\frac {1}{(s+10)^{2}}}+D{\frac {1}{s+10}}}$
${\displaystyle F(s)=1{\frac {1}{s}}+26{\frac {1}{(s+10)^{3}}}+69{\frac {1}{(s+10)^{2}}}-1{\frac {1}{s+10}}}$
${\displaystyle f(t)=u(t)+13t^{2}e^{-10t}+69te^{-10t}-e^{-10t}}$
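
The repeated-root expansion can likewise be verified with SymPy's apart (again a Python sketch rather than the book's MATLAB):

```python
import sympy as sp

s = sp.symbols('s')
F = (79*s**2 + 916*s + 1000) / (s*(s + 10)**3)

expanded = sp.apart(F, s)
print(expanded)
# Should match A = 1, B = 26, C = 69, D = -1 found above:
#   1/s + 26/(s+10)^3 + 69/(s+10)^2 - 1/(s+10)
```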

### Example: Complex Roots

Given the following transfer function:

${\displaystyle F(s)={\frac {7s+26}{s^{2}-80s+1681}}={\frac {As+B}{s^{2}-80s+1681}}}$

When the denominator has complex roots, the numerator of the corresponding term takes the linear form As + B (as opposed to the single real constant, e.g. D, used for a real root). Matching the numerators gives:

As + B = 7s + 26
A = 7
B = 26

We will need to rewrite the fraction (without changing its value) to match the following transform pairs:

${\displaystyle {\omega \over (s+\alpha )^{2}+\omega ^{2}}\to e^{-\alpha t}\sin(\omega t)\cdot u(t)}$
${\displaystyle {s+\alpha \over (s+\alpha )^{2}+\omega ^{2}}\to e^{-\alpha t}\cos(\omega t)\cdot u(t)}$

The roots of ${\displaystyle s^{2}-80s+1681}$ are ${\displaystyle 40+j9}$ and ${\displaystyle 40-j9}$.

${\displaystyle (s+\alpha )^{2}+\omega ^{2}=(s-40)^{2}+9^{2}}$, so here ${\displaystyle \alpha =-40}$ and ${\displaystyle \omega =9}$:

${\displaystyle {\frac {As+B}{(s-40)^{2}+9^{2}}}}$

And now the numerators:

${\displaystyle {\frac {As+40A-40A+B}{(s-40)^{2}+9^{2}}}}$
${\displaystyle {\frac {As-40A}{(s-40)^{2}+9^{2}}}+{\frac {B+40A}{(s-40)^{2}+9^{2}}}}$
${\displaystyle A{\frac {(s-40)}{(s-40)^{2}+9^{2}}}+{\frac {B+40A}{9}}{\frac {9}{(s-40)^{2}+9^{2}}}}$

Inverse Laplace Transform:

${\displaystyle f(t)=7e^{40t}\cos(9t)+34e^{40t}\sin(9t)}$
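
We can confirm this result by transforming f(t) forward again; the sketch below uses SymPy's laplace_transform (a Python stand-in for MATLAB's laplace):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = 7*sp.exp(40*t)*sp.cos(9*t) + 34*sp.exp(40*t)*sp.sin(9*t)

F = sp.laplace_transform(f, t, s, noconds=True)
# The difference from the original transfer function should vanish
print(sp.simplify(F - (7*s + 26)/(s**2 - 80*s + 1681)))  # 0
```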

### Example: Fourth-Order System with Complex Roots

Given the following transfer function:

${\displaystyle F(s)={\frac {90s^{2}-1110}{s(s-3)(s^{2}-12s+37)}}={\frac {A}{s}}+{\frac {B}{s-3}}+{\frac {Cs+D}{s^{2}-12s+37}}}$

We multiply through by the common denominator to clear the fractions:

${\displaystyle A(s-3)(s^{2}-12s+37)+Bs(s^{2}-12s+37)+(Cs+D)s(s-3)}$
${\displaystyle =90s^{2}-1110}$

And then we combine terms:

${\displaystyle As^{3}-15As^{2}+73As-111A+Bs^{3}-12Bs^{2}+37Bs+Cs^{3}-3Cs^{2}+Ds^{2}-3Ds}$
${\displaystyle =90s^{2}-1110}$

Comparing coefficients:

A + B + C = 0
-15A - 12B - 3C + D = 90
73A + 37B - 3D = 0
-111A = -1110

Now, we can solve for A, B, C and D:

A = 10
B = -10
C = 0
D = 120
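
The coefficient-matching equations above form a small linear system, which can be solved mechanically; here is a sketch using SymPy (Python) instead of solving by hand:

```python
import sympy as sp

A, B, C, D = sp.symbols('A B C D')

# One equation per power of s, taken from the comparison above
eqs = [
    sp.Eq(A + B + C, 0),                 # s^3
    sp.Eq(-15*A - 12*B - 3*C + D, 90),   # s^2
    sp.Eq(73*A + 37*B - 3*D, 0),         # s^1
    sp.Eq(-111*A, -1110),                # constant
]
sol = sp.solve(eqs, [A, B, C, D])
print(sol)  # {A: 10, B: -10, C: 0, D: 120}
```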

And now for the "fitting":

The roots of ${\displaystyle s^{2}-12s+37}$ are ${\displaystyle 6+j}$ and ${\displaystyle 6-j}$.

${\displaystyle A{\frac {1}{s}}+B{\frac {1}{s-3}}+C{\frac {s}{(s-6)^{2}+1^{2}}}+D{\frac {1}{(s-6)^{2}+1^{2}}}}$

There is no need to fit the D fraction, because it already matches the table form, and no need to fit the C fraction, because C is equal to zero.

${\displaystyle 10{\frac {1}{s}}-10{\frac {1}{s-3}}+0{\frac {s}{(s-6)^{2}+1^{2}}}+120{\frac {1}{(s-6)^{2}+1^{2}}}}$
${\displaystyle f(t)=10u(t)-10e^{3t}+120e^{6t}\sin(t)}$

### Final Value Theorem

The Final Value Theorem allows us to determine the value of the time-domain equation, as time approaches infinity, from the S-domain equation. In Control Engineering, the Final Value Theorem is used most frequently to determine the steady-state value of a system. The theorem is valid only when all the poles of ${\displaystyle sX(s)}$ have negative real parts.

[Final Value Theorem (Laplace)]

${\displaystyle \lim _{t\to \infty }x(t)=\lim _{s\to 0}sX(s)}$

From our chapter on system metrics, you may recognize the value of the system at time infinity as the steady-state value of the system. The difference between the steady-state value and the expected output value is the steady-state error of the system. Using the Final Value Theorem, we can find the steady-state value and the steady-state error of a system directly in the complex S domain.

### Example: Final Value Theorem

Find the final value of the system described by the following transfer function:

${\displaystyle T(s)={\frac {1+s}{1+2s+s^{2}}}}$

We can apply the Final Value Theorem:

${\displaystyle \lim _{s\to 0}s{\frac {1+s}{1+2s+s^{2}}}}$

We obtain the value:

${\displaystyle \lim _{s\to 0}s{\frac {1+s}{1+2s+s^{2}}}=0\cdot {\frac {1+0}{1+2\cdot 0+0^{2}}}=0\cdot 1=0}$
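
The same limit can be evaluated with a one-liner in SymPy (a Python alternative to working it by hand):

```python
import sympy as sp

s = sp.symbols('s')
T = (1 + s) / (1 + 2*s + s**2)

# Final Value Theorem: limit of s*T(s) as s -> 0
final_value = sp.limit(s*T, s, 0)
print(final_value)  # 0
```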

### Initial Value Theorem

Akin to the final value theorem, the Initial Value Theorem allows us to determine the initial value of the system (the value at time zero) from the S-Domain Equation. The initial value theorem is used most frequently to determine the starting conditions, or the "initial conditions" of a system.

[Initial Value Theorem (Laplace)]

${\displaystyle x(0)=\lim _{s\to \infty }sX(s)}$
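
As a quick illustration (with a hypothetical example signal, not one from the text), take X(s) = 1/(s + 1), the transform of x(t) = e^(−t); the theorem should recover x(0) = 1. Sketched in SymPy:

```python
import sympy as sp

s = sp.symbols('s')
X = 1/(s + 1)   # hypothetical example: transform of x(t) = e^(-t)

# Initial Value Theorem: limit of s*X(s) as s -> infinity
x0 = sp.limit(s*X, s, sp.oo)
print(x0)  # 1
```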

### Common Transforms

We will now show you the transforms of the three functions we have already learned about: The unit step, the unit ramp, and the unit parabola. The transform of the unit step function is given by:

${\displaystyle {\mathcal {L}}[u(t)]={\frac {1}{s}}}$

And since the unit ramp is the integral of the unit step, we can multiply the above result by 1/s to get the transform of the unit ramp:

${\displaystyle {\mathcal {L}}[r(t)]={\frac {1}{s^{2}}}}$

Again, we can multiply by 1/s to get the transform of the unit parabola:

${\displaystyle {\mathcal {L}}[p(t)]={\frac {1}{s^{3}}}}$
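
All three of these transforms can be checked at once with SymPy (Python, as an alternative to the book's MATLAB); note that the unit parabola is taken here as t²/2, consistent with it being the integral of the unit ramp:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Unit step, unit ramp, unit parabola:
# transforms should be 1/s, 1/s^2, 1/s^3 respectively
for f in [sp.Heaviside(t), t, t**2/2]:
    print(sp.laplace_transform(f, t, s, noconds=True))
```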

## Fourier Transform

The Fourier Transform is very similar to the Laplace Transform. The Fourier Transform uses the assumption that any finite time-domain signal can be broken into an infinite sum of sinusoidal (sine and cosine wave) signals. Under this assumption, the Fourier Transform converts a time-domain signal into its frequency-domain representation, as a function of the radial frequency, ω. The Fourier Transform is defined as such:

[Fourier Transform]

${\displaystyle F(j\omega )={\mathcal {F}}[f(t)]=\int _{0}^{\infty }f(t)e^{-j\omega t}dt}$

This operation can be performed using the MATLAB command fourier.

We can now show that the Fourier Transform is equivalent to the Laplace transform, when the following condition is true:

${\displaystyle {\begin{matrix}s=j\omega \end{matrix}}}$
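
This equivalence can be checked numerically for a sample causal signal. The sketch below (Python with NumPy/SciPy, an assumption rather than the book's MATLAB) compares the Fourier integral of f(t) = e^(−t)u(t) against its known Laplace transform 1/(s + 1) evaluated at s = jω:

```python
import numpy as np
from scipy.integrate import quad

f = lambda t: np.exp(-t)   # f(t) = e^(-t) for t >= 0 (causal)
w = 3.0                    # an arbitrary test frequency

# Fourier integral over [0, inf), split into real and imaginary parts
re, _ = quad(lambda t: f(t)*np.cos(w*t), 0, np.inf)
im, _ = quad(lambda t: -f(t)*np.sin(w*t), 0, np.inf)
fourier = re + 1j*im

laplace_at_jw = 1/(1j*w + 1)   # Laplace transform 1/(s+1) at s = jw
print(abs(fourier - laplace_at_jw) < 1e-8)  # True
```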

Because the Laplace and Fourier Transforms are so closely related, it does not make much sense to use both transforms for all problems. This book, therefore, will concentrate on the Laplace transform for nearly all subjects, except those problems that deal directly with frequency values. For frequency problems, it makes life much easier to use the Fourier Transform representation.

Like the Laplace Transform, the Fourier Transform has been extensively tabulated. Properties of the Fourier transform, in addition to a table of common transforms is available in the Appendix.

### Inverse Fourier Transform

This operation can be performed using the MATLAB command ifourier.

The inverse Fourier Transform is defined as follows:

[Inverse Fourier Transform]

${\displaystyle f(t)={\mathcal {F}}^{-1}\left\{F(j\omega )\right\}={\frac {1}{2\pi }}\int _{-\infty }^{\infty }F(j\omega )e^{j\omega t}d\omega }$

Apart from the sign of the exponent and the scaling factor, this transform is nearly identical to the forward Fourier Transform; it converts a function from the frequency domain back into the time domain.

## Complex Plane

Using the above equivalence, we can show that the Laplace transform is equal to the Fourier Transform whenever the variable s is purely imaginary. The two transforms differ when s has a nonzero real part. As such, we generally define s to have both a real part and an imaginary part, as such:

${\displaystyle {\begin{matrix}s=\sigma +j\omega \end{matrix}}}$

And we can show that s = jω if σ = 0.

Since the variable s can be broken down into 2 independent values, it is frequently of some value to graph the variable s on its own special "S-plane". The S-plane graphs the variable σ on the horizontal axis, and the value of jω on the vertical axis. This axis arrangement is shown at right.

## Euler's Formula

There is an important result from calculus that is known as Euler's Formula, or "Euler's Relation". This important formula relates the important values of e, j, π, 1 and 0:

${\displaystyle {\begin{matrix}e^{j\pi }+1=0\end{matrix}}}$

However, this result is derived from the following equation, setting ω to π:

[Euler's Formula]

${\displaystyle {\begin{matrix}e^{j\omega }=\cos(\omega )+j\sin(\omega )\end{matrix}}}$
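
Euler's Formula is easy to check numerically with Python's standard library (an illustrative sketch, not from the text):

```python
import cmath
import math

w = 0.7  # an arbitrary angle in radians
lhs = cmath.exp(1j*w)
rhs = math.cos(w) + 1j*math.sin(w)
print(abs(lhs - rhs) < 1e-12)  # True

# Setting w = pi recovers Euler's identity: e^(j*pi) + 1 = 0
print(abs(cmath.exp(1j*math.pi) + 1) < 1e-12)  # True
```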

This formula will be used extensively in some of the chapters of this book, so it is important to become familiar with it now.

## MATLAB

The MATLAB symbolic toolbox contains functions to compute the Laplace and Fourier transforms automatically. The function laplace, and the function fourier can be used to calculate the Laplace and Fourier transforms of the input functions, respectively. For instance, the code:

```matlab
t = sym('t');
fx = 30*t^2 + 20*t;
laplace(fx)
```

produces the output:

```
ans =

60/s^3+20/s^2
```


We will discuss these functions more in The Appendix.