Control Systems/Print version





The Wikibook of automatic

Control Systems

And Control Systems Engineering
With
Classical and Modern Techniques
And
Advanced Concepts


Preface

This book will discuss the topic of Control Systems, which is an interdisciplinary engineering topic. Methods considered here will consist of both "Classical" control methods, and "Modern" control methods. Also, discretely sampled systems (digital/computer systems) will be considered in parallel with the more common analog methods. This book will not focus on any single engineering discipline (electrical, mechanical, chemical, etc.), although readers should have a solid foundation in the fundamentals of at least one discipline.

This book will require prior knowledge of linear algebra, integral and differential calculus, and at least some exposure to ordinary differential equations. In addition, a prior knowledge of integral transforms, specifically the Laplace and Z transforms will be very beneficial. Also, prior knowledge of the Fourier Transform will shed more light on certain subjects. Wikibooks with information on calculus topics or transformation topics required for this book will be listed below:



Table of Contents


Table of Contents

Special Pages

Print Version: Full Print version
Warning: Print version is over 230 pages long as of 10 Feb, 2014.
PDF Version: PDF Version Warning: PDF version is over 5.4MB, as of 21 Jan, 2014.
Cover Page: Cover Page Cover Image
All Pages: Page Listing All Versions
Book Policy: Policy Local Manual of Style

Controls Introduction

Classical Control Methods

Modern Control Methods

System Representation

Stability

Controllers and Compensators

Adaptive Control

Nonlinear Systems

Noisy Systems

Introduction to Digital Controls

Linear Matrix Inequalities in Control

Examples

Appendices

Resources, Glossary, and License



Introduction to Control Systems

What are control systems? Why do we study them? How do we identify them? The chapters in this section should answer these questions and more.


Introduction

This Wikibook

This book was written at Wikibooks, a free online community where people write open-content textbooks. Any person with internet access is welcome to participate in the creation and improvement of this book. Because this book is continuously evolving, there are no finite "versions" or "editions" of this book. Permanent links to known good versions of the pages may be provided.

What are Control Systems?

The study and design of automatic Control Systems, a field known as control engineering, has become important in modern technical society. From devices as simple as a toaster or a toilet, to complex machines like space shuttles and power steering, control engineering is a part of our everyday life. This book introduces the field of control engineering and explores some of the more advanced topics in the field. Note, however, that control engineering is a very large field and this book serves only as a foundation of control engineering and an introduction to selected advanced topics in the field. Topics in this book are added at the discretion of the authors and represent the available expertise of our contributors.

Control systems are components that are added to other components to increase functionality or meet a set of design criteria. For example:

We have a particular electric motor that is supposed to turn at a rate of 40 RPM. To achieve this speed, we must supply 10 Volts to the motor terminals. However, with 10 volts supplied to the motor at rest, it takes 30 seconds for our motor to get up to speed. This is valuable time lost.

This simple example can be complex to both users and designers of the motor system. It may seem obvious that the motor should start at a higher voltage so that it accelerates faster. Then we can reduce the supply back down to 10 volts once it reaches ideal speed.

This is clearly a simplistic example but it illustrates an important point: we can add special "Controller units" to preexisting systems to improve performance and meet new system specifications.

Here are some formal definitions of terms used throughout this book:

Control System
A Control System is a device, or a collection of devices, that manages the behavior of other devices or systems (some devices are not controllable). More formally, a control system is an interconnection of components related in such a manner as to command, direct, or regulate itself or another system.

A Control System can also be viewed as a conceptual framework for designing systems with capabilities of regulation and/or tracking, to give a desired performance. For this there must be a set of measurable signals that indicate the performance, another set of signals that can be manipulated to influence the evolution of the system in time, and a third set of signals which cannot be measured but which disturb that evolution.

Controller
A controller is a control system that manages the behavior of another device or system (using actuators). The controller is usually fed with some input signal from outside the system which commands the system to provide the desired output. In a closed-loop system, this command signal is combined with the sensor's signal from inside the system.
Actuator
An actuator is a device that takes in a signal from the controller and carries out some action to affect the system accordingly.
Compensator
A compensator is a control system that regulates another system, usually by conditioning the input or the output to that system. Compensators are typically employed to correct a single design flaw with the intention of minimizing effects on other aspects of the design.

There are essentially two methods to approach the problem of designing a new control system: the Classical Approach and the Modern Approach.

Classical and Modern

The names "Classical" and "Modern" are somewhat misleading, because both groups of techniques are now decades old: the "Classical" frequency-domain methods were developed first, and the state-space techniques labeled "Modern" followed later. In terms of developing control systems today, Modern methods have been used to great effect more recently, while the Classical methods have been gradually falling out of favor. Most recently, it has been shown that Classical and Modern methods can be combined to highlight their respective strengths and weaknesses.

Classical Methods, which this book will consider first, are methods involving the Laplace Transform domain. Physical systems are modeled in the so-called "time domain", where the response of a given system is a function of the various inputs, the previous system values, and time. As time progresses, the state of the system and its response change. However, time-domain models for systems are frequently modeled using high-order differential equations which can become impossibly difficult for humans to solve and some of which can even become impossible for modern computer systems to solve efficiently. To counteract this problem, integral transforms, such as the Laplace Transform and the Fourier Transform, can be employed to change an Ordinary Differential Equation (ODE) in the time domain into a regular algebraic polynomial in the transform domain. Once a given system has been converted into the transform domain it can be manipulated with greater ease and analyzed quickly by humans and computers alike.

Modern Control Methods, instead of changing domains to avoid the complexities of time-domain ODE mathematics, convert the differential equations into a system of lower-order time-domain equations called State Equations, which can then be manipulated using techniques from linear algebra. This book will consider Modern Methods second.

A third distinction that is frequently made in the realm of control systems is to divide analog methods (classical and modern, described above) from digital methods. Digital Control Methods were designed to try and incorporate the emerging power of computer systems into previous control methodologies. A special transform, known as the Z-Transform, was developed that can adequately describe digital systems, but at the same time can be converted (with some effort) into the Laplace domain. Once in the Laplace domain, the digital system can be manipulated and analyzed in a very similar manner to Classical analog systems. For this reason, this book will not make a hard and fast distinction between Analog and Digital systems, and instead will attempt to study both paradigms in parallel.

Who is This Book For?

This book is intended to accompany a course of study in undergraduate and graduate engineering. As has been mentioned previously, this book is not focused on any particular discipline within engineering; however, any person who wants to make use of this material should have some basic background in the Laplace transform (if not other transforms), calculus, etc. The material in this book may be used to accompany several semesters of study, depending on the program of your particular college or university. The study of control systems is generally reserved for students in their 3rd or 4th year of a four-year undergraduate program, because it requires so much prior information. Some of the more advanced topics may not be covered until later in a graduate program.

Many colleges and universities only offer one or two classes specifically about control systems at the undergraduate level. Some universities, however, do offer more than that, depending on how the material is broken up and how much depth is to be covered. Also, many institutions will offer a handful of graduate-level courses on the subject. This book will attempt to cover the topic of control systems from both a graduate and undergraduate level, with the advanced topics built on the basic topics in a way that is intuitive. As such, students should be able to begin reading this book at any point that seems an appropriate starting point, and should be able to stop reading where further information is no longer needed.

What are the Prerequisites?

Understanding of the material in this book will require a solid mathematical foundation. This book does not currently explain, nor will it ever try to fully explain most of the necessary mathematical tools used in this text. For that reason, the reader is expected to have read the following wikibooks, or have background knowledge comparable to them:

Algebra
Calculus
The reader should have a good understanding of differentiation and integration. Partial differentiation, multiple integration, and functions of multiple variables will be used occasionally, but the students are not necessarily required to know those subjects well. These advanced calculus topics could better be treated as a co-requisite instead of a pre-requisite.
Linear Algebra
State-space system representation draws heavily on linear algebra techniques. Students should know how to operate on matrices. Students should understand basic matrix operations (addition, multiplication, determinant, inverse, transpose). Students would also benefit from a prior understanding of Eigenvalues and Eigenvectors, but those subjects are covered in this text.
Ordinary Differential Equations
All linear systems can be described by a linear ordinary differential equation. It is beneficial, therefore, for students to understand these equations. Much of this book describes methods to analyze these equations. Students should know what a differential equation is, and they should also know how to find the general solutions of first and second order ODEs.
Engineering Analysis
This book reinforces many of the advanced mathematical concepts used in the Engineering Analysis book, and we will refer to the relevant sections in the aforementioned text for further information on some subjects. This is essentially a math book, but with a focus on various engineering applications. It relies on a previous knowledge of the other math books in this list.
Signals and Systems
The Signals and Systems book will provide a basis in the field of systems theory, of which control systems is a subset. Readers who have not read the Signals and Systems book will be at a severe disadvantage when reading this book.

How is this Book Organized?

This book will be organized following a particular progression. First this book will discuss the basics of system theory, and it will offer a brief refresher on integral transforms. Section 2 will contain a brief primer on digital information, for students who are not necessarily familiar with it. This is done so that digital and analog signals can be considered in parallel throughout the rest of the book. Next, this book will introduce the state-space method of system description and control. After section 3, topics in the book will use state-space and transform methods interchangeably (and occasionally simultaneously). It is important, therefore, that these three chapters be well read and understood before venturing into the later parts of the book.

After the "basic" sections of the book, we will delve into specific methods of analyzing and designing control systems. First we will discuss Laplace-domain stability analysis techniques (Routh-Hurwitz, root locus), and then frequency-domain methods (the Nyquist Criterion, Bode plots). After the classical methods are discussed, this book will then discuss Modern methods of stability analysis. Finally, a number of advanced topics will be touched upon, depending on the knowledge level of the various contributors.

As the subject matter of this book expands, so too will the prerequisites. For instance, when this book is expanded to cover nonlinear systems, a basic background knowledge of nonlinear mathematics will be required.

Versions

This wikibook has been expanded to include multiple versions of its text, differentiated by the material covered, and the order in which the material is presented. Each different version is composed of the chapters of this book, included in a different order. This book covers a wide range of information, so if you don't need all the information that this book has to offer, perhaps one of the other versions would be right for you and your educational needs.

Each separate version has a table of contents outlining the different chapters that are included in that version. Also, each separate version comes complete with a printable version, and some even come with PDF versions as well.

Take a look at the All Versions Listing Page to find the version of the book that is right for you and your needs.

Differential Equations Review

Implicit in the study of control systems is the underlying use of differential equations. Even if they aren't visible on the surface, all of the continuous-time systems that we will be looking at are described in the time domain by ordinary differential equations (ODE), some of which are relatively high-order.

Let's review some differential equation basics by considering the interest earned on a bank account. The amount of interest accrued on a given principal balance P (the amount of money you put into the bank) is given by:

dP/dt = rP

Where dP/dt is the interest (the rate of change of the principal), and r is the interest rate. Notice in this case that P is a function of time (t), and can be rewritten to reflect that:

dP(t)/dt = rP(t)

To solve this basic, first-order equation, we can use a technique called "separation of variables", where we move all instances of the letter P to one side, and all instances of t to the other:

dP/P = r dt

And integrating both sides gives us:

ln(P) = rt + C

This is all fine and good, but generally we like to get rid of the logarithm by raising both sides to a power of e:

P(t) = e^(rt + C)

Where we can separate out the constant as such:

P(t) = D e^(rt),  where D = e^C

D is a constant that represents the initial conditions of the system, in this case the starting principal.
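
For readers with access to MATLAB's Symbolic Math Toolbox, this solution can be checked symbolically. This is just an illustrative sketch; the names P0 and r are our own choices:

syms P(t) r P0                 % symbolic function P(t), rate r, initial principal P0
ode = diff(P, t) == r*P;       % the differential equation dP/dt = r*P
sol = dsolve(ode, P(0) == P0)  % solve with the initial condition P(0) = P0
% sol = P0*exp(r*t), matching the form D*e^(r*t) derived above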

Differential equations are particularly difficult to manipulate, especially once we get to higher-order equations. Luckily, several methods of abstraction have been created that allow us to work with ODEs without having to worry about their complexities. The classical method, as described above, uses the Laplace, Fourier, and Z Transforms to convert ODEs in the time domain into polynomials in a complex domain. These complex polynomials are significantly easier to solve than their ODE counterparts. The Modern method instead breaks differential equations into systems of low-order equations, and expresses this system in terms of matrices. It is a common precept in ODE theory that an ODE of order N can be broken down into N equations of order 1.

Readers who are unfamiliar with differential equations might be able to read and understand the material in this book reasonably well. However, all readers are encouraged to read the related sections in Calculus.

History

The field of control systems started essentially in the ancient world. Early civilizations, notably the Greeks and the Arabs were heavily preoccupied with the accurate measurement of time, the result of which were several "water clocks" that were designed and implemented.

However, there was very little in the way of actual progress made in the field of engineering until the beginning of the Renaissance in Europe. Leonhard Euler (for whom Euler's Formula is named) discovered a powerful integral transform, but it was Pierre-Simon Laplace who used the transform (later called the Laplace Transform) to solve complex problems in probability theory.

Joseph Fourier was a court mathematician in France under Napoleon I. He created a special function decomposition called the Fourier Series, that was later generalized into an integral transform, and named in his honor (the Fourier Transform).

Pierre-Simon Laplace

1749-1827

Joseph Fourier

1768-1830

The "golden age" of control engineering occurred between 1910 and 1945, when mass communication methods were being created and two world wars were being fought. During this period, some of the most famous names in controls engineering were doing their work: Nyquist and Bode.

Hendrik Wade Bode and Harry Nyquist, especially in the 1930s while working at Bell Laboratories, created the bulk of what we now call "Classical Control Methods". These methods were based on the results of the Laplace and Fourier Transforms, which had been previously known, but were made popular by Oliver Heaviside around the turn of the century. Previous to Heaviside, the transforms were not widely used, nor respected as mathematical tools.

Bode is credited with the "discovery" of the closed-loop feedback system, and the logarithmic plotting technique that still bears his name (Bode plots). Harry Nyquist did extensive research in the field of system stability and information theory. He created a powerful stability criterion that has been named for him (the Nyquist Criterion).

Modern control methods were introduced in the early 1950s, as a way to bypass some of the shortcomings of the classical methods. Rudolf Kalman is famous for his work in modern control theory, and the optimal state estimator called the Kalman Filter is named in his honor. Modern control methods became increasingly popular after 1957, with the growing availability of digital computers and the start of the space program. Computers created the need for digital control methodologies, and the space program required the creation of some "advanced" control techniques, such as "optimal control", "robust control", and "nonlinear control". These last subjects, and several more, are still active areas of study among research engineers.

Branches of Control Engineering

Here we are going to give a brief listing of the various different methodologies within the sphere of control engineering. Oftentimes, the lines between these methodologies are blurred, or even erased completely.

Classical Controls
Control methodologies where the ODEs that describe a system are transformed using the Laplace, Fourier, or Z Transforms, and manipulated in the transform domain.
Modern Controls
Methods where high-order differential equations are broken into a system of first-order equations. The input, output, and internal states of the system are described by vectors called "state variables".
Robust Control
Control methodologies where arbitrary outside noise and disturbances are accounted for, as well as internal inaccuracies caused by modeling errors and by variations in the system itself and its environment.
Optimal Control
In a system, performance metrics are identified, and arranged into a "cost function". The cost function is minimized to create an operational system with the lowest cost.
Adaptive Control
In adaptive control, the controller changes its response characteristics over time to better control the system.
Nonlinear Control
The youngest branch of control engineering, nonlinear control encompasses systems that cannot be described by linear equations or ODEs, and for which there is often very little supporting theory available.
Game Theory
Game Theory is a close relative of control theory, and especially robust control and optimal control theories. In game theory, the external disturbances are not considered to be random noise processes, but instead are considered to be "opponents". Each player has a cost function that they attempt to minimize, and that their opponents attempt to maximize.

This book will definitely cover the first two branches, and will hopefully be expanded to cover some of the later branches, if time allows.

MATLAB

Information about using MATLAB for control systems can be found in
the Appendix

MATLAB ® is a programming tool that is commonly used in the field of control engineering. We will discuss MATLAB in specific sections of this book devoted to that purpose. MATLAB will not appear in discussions outside these specific sections, although MATLAB may be used in some example problems. An overview of the use of MATLAB in control engineering can be found in the appendix at: Control Systems/MATLAB.

For more information on MATLAB in general, see: MATLAB Programming.

For more information about properly referencing MATLAB, see:
Resources

Nearly all textbooks on the subject of control systems, linear systems, and system analysis will use MATLAB as an integral part of the text. Students who are learning this subject at an accredited university will certainly have seen this material in their textbooks, and are likely to have had MATLAB work as part of their classes. It is from this perspective that the MATLAB appendix is written.

In the future, this book may be expanded to include information on Simulink ®, as well as MATLAB.

There are a number of other software tools that are useful in the analysis and design of control systems. Additional information can be added in the appendix of this book, depending on the experience and prior knowledge of contributors.

About Formatting

This book will use some simple conventions throughout.

Mathematical Conventions

Mathematical equations will be labeled with the {{eqn}} template, to give them names. Equations that are labeled in such a manner are important, and should be taken special note of. For instance, notice the label to the right of this equation:

[Inverse Laplace Transform]

Equations that are named in this manner will also be copied into the List of Equations Glossary in the end of the book, for an easy reference.

Italics will be used for English variables, functions, and equations that appear in the main text. For example e, j, f(t) and X(s) are all italicized. Wikibooks contains a LaTeX mathematics formatting engine, although an attempt will be made not to employ formatted mathematical equations inline with other text because of the difference in size and font. Greek letters, and other non-English characters will not be italicized in the text unless they appear in the midst of multiple variables which are italicized (as a convenience to the editor).

Scalar time-domain functions and variables will be denoted with lower-case letters, along with a t in parenthesis, such as: x(t), y(t), and h(t). Discrete-time functions will be written in a similar manner, except with an [n] instead of a (t).

Fourier, Laplace, Z, and Star transformed functions will be denoted with capital letters followed by the appropriate variable in parenthesis. For example: F(s), X(jω), Y(z), and F*(s).

Matrices will be denoted with capital letters. Matrices which are functions of time will be denoted with a capital letter followed by a t in parenthesis. For example: A(t) is a matrix, a(t) is a scalar function of time.

Transforms of time-variant matrices will be displayed in uppercase bold letters, such as H(s).

Math equations rendered using LaTeX will appear on separate lines, and will be indented from the rest of the text.

Text Conventions

Information which is tangent or auxiliary to the main text will be placed in these "sidebox" templates.

Examples will appear in TextBox templates, which show up as large grey boxes filled with text and equations.

Important Definitions
Will appear in TextBox templates as well, except we will use this formatting to show that it is a definition.



System Identification

Control Systems/System Identification

Digital and Analog

Control Systems/Digital and Analog

System Metrics

When a system is being designed and analyzed, it doesn't make any sense to test the system with all manner of strange input functions, or to measure all sorts of arbitrary performance metrics. Instead, it is in everybody's best interest to test the system with a set of standard, simple reference functions. Once the system is tested with the reference functions, there are a number of different metrics that we can use to determine the system performance.

It is worth noting that the metrics presented in this chapter represent only a small number of possible metrics that can be used to evaluate a given system. This wikibook will present other useful metrics along the way, as their need becomes apparent.

Standard Inputs

Note:
All of the standard inputs are zero before time zero. All the standard inputs are causal.

There are a number of standard inputs that are considered simple enough and universal enough that they are considered when designing a system. These inputs are known as a unit step, a ramp, and a parabolic input.

Unit Step
A unit step function is defined piecewise as such:

u(t) = 0 for t < 0, and u(t) = 1 for t ≥ 0

[Unit Step Function]

The unit step function is a highly important function, not only in control systems engineering, but also in signal processing, systems analysis, and all branches of engineering. If the unit step function is input to a system, the output of the system is known as the step response. The step response of a system is an important tool, and we will study step responses in detail in later chapters.
Ramp
A unit ramp is defined in terms of the unit step function, as such:

r(t) = t u(t)

[Unit Ramp Function]

It is important to note that the unit step function is simply the derivative of the unit ramp function:

u(t) = d r(t) / dt

This definition will come in handy when we learn about the Laplace Transform.
Parabolic
A unit parabolic input is similar to a ramp input:

p(t) = (1/2) t^2 u(t)

[Unit Parabolic Function]

Notice also that the unit parabolic input is equal to the integral of the ramp function:

p(t) = ∫ r(τ) dτ   (integrated from τ = 0 to t)

Again, this result will become important when we learn about the Laplace Transform.

Sinusoidal and exponential functions are also considered basic inputs, but they are generally too complicated to use in the initial analysis of a system.
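
As a quick illustration, the three standard inputs can be generated and plotted numerically. The following MATLAB sketch is only an example; the time vector and step size are arbitrary choices:

t = 0:0.01:5;          % time vector, 0 to 5 seconds
u = ones(size(t));     % unit step (1 for all t >= 0)
r = t;                 % unit ramp, r(t) = t
p = t.^2 / 2;          % unit parabola, p(t) = t^2/2
plot(t, u, t, r, t, p)
legend('unit step', 'unit ramp', 'unit parabola')
xlabel('Time (s)')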

Steady State

Note:
To be more precise, we should have taken the limit as t approaches infinity. However, as a shorthand notation, we will typically say "t equals infinity", and assume the reader understands the shortcut that is being used.

When a unit-step function is input to a system, the steady-state value of that system is the output value as time approaches infinity. Since it is impractical (if not completely impossible) to wait until infinity to observe the system, approximations and mathematical calculations are used to determine the steady-state value of the system. Most system responses are asymptotic, that is, the response approaches a particular value. Systems that are asymptotic are typically obvious from viewing the graph of that response.

Step Response

The step response of a system is most frequently used to analyze systems, and there is a large amount of terminology involved with step responses. When exposed to the step input, the system will initially have an undesirable output period known as the transient response. The transient response occurs because a system is approaching its final output value. The steady-state response of the system is the response after the transient response has ended.

The amount of time it takes for the system output to reach the desired value (before the transient response has ended, typically) is known as the rise time. The amount of time it takes for the transient response to end and the steady-state response to begin is known as the settling time.

It is common for a systems engineer to try and improve the step response of a system. In general, it is desired for the transient response to be reduced, the rise and settling times to be shorter, and the steady-state to approach a particular desired "reference" output.

[Figure: an arbitrary step input, and the step response of a made-up system to that input x(t)]


Target Value

The target output value is the value that our system attempts to obtain for a given input. This is not the same as the steady-state value, which is the actual value that the system does obtain. The target value is frequently referred to as the reference value, or the "reference function" of the system. In essence, this is the value that we want the system to produce. When we input a "5" into an elevator, we want the output (the final position of the elevator) to be the fifth floor. Pressing the "5" button is the reference input, and is the expected value that we want to obtain. If we press the "5" button, and the elevator goes to the third floor, then our elevator is poorly designed.

Rise Time

Rise time is the amount of time that it takes for the system response to reach the target value from an initial state of zero. Many texts on the subject define the rise time as the time it takes the response to rise from 10% to 90% of the target value. This is because some systems never rise to 100% of the expected target value, and therefore they would have an infinite rise time. This book will specify which convention to use for each individual problem. Rise time is typically denoted tr, or trise.

Percent Overshoot

Underdamped systems frequently overshoot their target value initially. This initial surge is known as the "overshoot value". The ratio of the amount of overshoot to the target steady-state value of the system is known as the percent overshoot. Percent overshoot represents an overcompensation of the system, and can produce dangerously large output signals that can damage a system. Percent overshoot is typically denoted with the term PO.

Example: Refrigerator

Consider an ordinary household refrigerator. The refrigerator has cycles where it is on and when it is off. When the refrigerator is on, the coolant pump is running, and the temperature inside the refrigerator decreases. The temperature decreases to a much lower level than is required, and then the pump turns off.

When the pump is off, the temperature slowly increases again as heat is absorbed into the refrigerator. When the temperature gets high enough, the pump turns back on. Because the pump cools down the refrigerator more than it needs to initially, we can say that it "overshoots" the target value by a certain specified amount.

Example: Refrigerator

Another example concerning a refrigerator is the electrical demand of the heat pump when it first turns on. The pump is an inductive mechanical motor, and when the motor first activates, the counter-acting voltage known as "back EMF" has not yet built up, so the pump draws a large current until the motor reaches its final speed. During the startup time for the pump, lights on the same electrical circuit as the refrigerator may dim slightly, as electricity is drawn away from the lamps and into the pump. This initial draw of electricity is a good example of overshoot.

Steady-State Error

Usually, the letter e or E will be used to denote error values.

Sometimes a system might never achieve the desired steady-state value, but instead will settle on an output value that is not desired. The difference between the steady-state output value and the reference input value at steady state is called the steady-state error of the system. We will use the variable ess to denote the steady-state error of the system.

Settling Time

After the initial rise time of the system, some systems will oscillate and vibrate for an amount of time before the system output settles on the final value. The amount of time it takes to reach steady state after the initial rise time is known as the settling time. Notice that damped oscillating systems may never settle completely, so we will define settling time as being the amount of time for the system to reach, and stay in, a certain acceptable range around the target value. The acceptable range is typically determined on a per-problem basis, although common values are 5% or 2% of the target value. The settling time will be denoted as ts.
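
The rise time, percent overshoot, and settling time of a system can be computed numerically from a model of the system. The following MATLAB sketch uses the Control System Toolbox on a made-up second-order system; the transfer function chosen here is purely illustrative:

G = tf(25, [1 4 25]);                            % a made-up underdamped second-order system
S = stepinfo(G, 'SettlingTimeThreshold', 0.05)   % step-response metrics, using a 5% settling band
% S.RiseTime, S.Overshoot, and S.SettlingTime hold the rise time,
% percent overshoot, and settling time of the step response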

System Order

The order of the system is defined by the number of independent energy storage elements in the system, and intuitively by the highest order of the linear differential equation that describes the system. In a transfer function representation, the order is the highest exponent in the transfer function. In a proper system, the system order is defined as the degree of the denominator polynomial. In a state-space equation, the system order is the number of state-variables used in the system. The order of a system will frequently be denoted with an n or N, although these variables are also used for other purposes. This book will make clear distinction on the use of these variables.

Proper Systems

A proper system is a system where the degree of the denominator is larger than or equal to the degree of the numerator polynomial. A strictly proper system is a system where the degree of the denominator polynomial is larger than (but never equal to) the degree of the numerator polynomial. A biproper system is a system where the degree of the denominator polynomial equals the degree of the numerator polynomial.

It is important to note that only proper systems can be physically realized. In other words, a system that is not proper cannot be built. It makes no sense to spend a lot of time designing and analyzing imaginary systems.

Example: System Order

Find the order of this system:

The highest exponent of s in the denominator is s^2, so the system is order 2. Also, since the denominator is of higher degree than the numerator, this system is strictly proper.

In the above example, G(s) is a second-order transfer function because the highest power of s in the denominator is 2. Second-order functions are among the easiest to work with.
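
In MATLAB's Control System Toolbox, the order of a system model can be queried directly. The transfer function below is a made-up stand-in for the (omitted) function in the example above:

G = tf([1 3], [1 3 2]);   % made-up example: G(s) = (s + 3)/(s^2 + 3s + 2)
n = order(G)              % returns 2, the degree of the denominator
% The denominator degree (2) exceeds the numerator degree (1), so G is strictly proper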

System Type

Let's say that we have a process transfer function (or combination of functions, such as a controller feeding into a process), all in the forward branch of a unity feedback loop. Say that the overall forward branch transfer function is in the following generalized form (known as pole-zero form):

G(s) = K (s + z1)(s + z2)...(s + zn) / [ s^M (s + p1)(s + p2)...(s + pm) ]

[Pole-Zero Form]

Poles at the origin are called integrators, because they have the effect of performing integration on the input signal.

We call the parameter M the system type. Note that an increased system type number corresponds to a larger number of poles at s = 0. More poles at the origin generally have a beneficial effect on the steady-state error of the system, but they increase the order of the system, and make it increasingly difficult to implement physically. System type will generally be denoted with a letter like N, M, or m. Because these variables are typically reused for other purposes, this book will make a clear distinction when they are employed.

Now, we will define a few terms that are commonly used when discussing system type. These new terms are Position Error, Velocity Error, and Acceleration Error. These names are throwbacks to physics terms where acceleration is the derivative of velocity, and velocity is the derivative of position. Note that none of these terms are meant to deal with movement, however.

Position Error
The position error is related to the position error constant Kp. This constant determines the amount of steady-state error of the system when stimulated by a unit step input. We define the position error constant as follows:

Kp = lim (s → 0) G(s)

[Position Error Constant]

Where G(s) is the transfer function of our system.
Velocity Error
The velocity error is the amount of steady-state error when the system is stimulated with a ramp input. We define the velocity error constant as such:

Kv = lim (s → 0) s G(s)

[Velocity Error Constant]

Acceleration Error
The acceleration error is the amount of steady-state error when the system is stimulated with a parabolic input. We define the acceleration error constant to be:

Ka = lim (s → 0) s^2 G(s)

[Acceleration Error Constant]

Now, this table shows briefly the relationship between the system type, the kind of input (step, ramp, parabolic), and the steady-state error of the system:

Type, M     Step input A u(t)    Ramp input A r(t)    Parabolic input A p(t)
0           ess = A/(1 + Kp)     infinite             infinite
1           0                    ess = A/Kv           infinite
2           0                    0                    ess = A/Ka
> 2         0                    0                    0
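
The error constants defined above can be evaluated symbolically. The following MATLAB sketch (Symbolic Math Toolbox assumed) uses a made-up type 1 forward transfer function:

syms s
G = 10 / (s*(s + 2));          % made-up type 1 system (one pole at the origin)
Kp = limit(G, s, 0, 'right')   % position constant: Inf, so zero steady-state step error
Kv = limit(s*G, s, 0)          % velocity constant: 5, so ramp error = A/Kv = A/5
Ka = limit(s^2*G, s, 0)        % acceleration constant: 0, so infinite parabolic error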

Z-Domain Type

Likewise, we can show that the system type can be found from the following generalized transfer function in the Z domain:

G(z) = K (z + z1)(z + z2)...(z + zn) / [ (z - 1)^M (z + p1)(z + p2)...(z + pm) ]

Where the constant M is the type of the digital system. Now, we will show how to find the various error constants in the Z-Domain:

[Z-Domain Error Constants]

Error Constant    Equation
Kp                Kp = lim (z → 1) G(z)
Kv                Kv = (1/T) lim (z → 1) (z - 1) G(z)
Ka                Ka = (1/T^2) lim (z → 1) (z - 1)^2 G(z)

Where T is the sampling period of the digital system.

Visually

Here is an image of the various system metrics, acting on a system in response to a step input:

The target value is the value of the input step response. The rise time is the time at which the waveform first reaches the target value. The overshoot is the amount by which the waveform exceeds the target value. The settling time is the time it takes for the system to settle into a particular bounded region. This bounded region is denoted with two short dotted lines above and below the target value.



System Modeling

The Control Process

It is the job of a control engineer to analyze existing systems, and to design new systems to meet specific needs. Sometimes new systems need to be designed, but more frequently a controller unit needs to be designed to improve the performance of existing systems. When designing a system, or implementing a controller to augment an existing system, we need to follow some basic steps:

  1. Model the system mathematically
  2. Analyze the mathematical model
  3. Design system/controller
  4. Implement system/controller and test

The vast majority of this book is going to be focused on (2), the analysis of the mathematical systems. This chapter alone will be devoted to a discussion of the mathematical modeling of the systems.

External Description

An external description of a system relates the system input to the system output without explicitly taking into account the internal workings of the system. The external description of a system is sometimes also referred to as the Input-Output Description of the system, because it only deals with the inputs and the outputs to the system.

Suppose the system can be represented by a mathematical function h(t, r), where t is the time that the output is observed, and r is the time that the input is applied. We can relate the system function h(t, r) to the input x and the output y through the use of an integral:

y(t) = ∫ h(t, r) x(r) dr   (integrated over all r, from -∞ to +∞)

[General System Description]

This integral form holds for all linear systems, and every linear system can be described by such an equation.

If a system is causal (i.e. an input at t = r affects system behaviour only for t ≥ r) and there is no input to the system before t = 0, we can change the limits of the integration:

y(t) = ∫ h(t, r) x(r) dr   (integrated from r = 0 to r = t)

Time-Invariant Systems

If furthermore a system is time-invariant, we can rewrite the system description equation as follows:

y(t) = ∫ h(t - r) x(r) dr   (integrated from r = 0 to r = t)

This equation is known as the convolution integral, and we will discuss it more in the next chapter.

Every Linear Time-Invariant (LTI) system can be analyzed using the Laplace Transform, a powerful tool that allows us to convert an equation from the time domain into the S-Domain, where many calculations are easier. Time-variant systems cannot be analyzed with the Laplace Transform in this way.
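
To make the convolution integral concrete, the sketch below numerically convolves a made-up impulse response with a unit step input and compares the result against the same system simulated directly. Everything here (the impulse response, step size, and time span) is an arbitrary illustration:

dt = 0.01; t = 0:dt:5;
h = exp(-t);                        % made-up impulse response h(t) = e^(-t)
x = ones(size(t));                  % unit step input
y_conv = conv(h, x) * dt;           % numerical approximation of the convolution integral
y_conv = y_conv(1:length(t));       % keep only the valid time span
y_lsim = lsim(tf(1, [1 1]), x, t);  % same system, H(s) = 1/(s + 1), simulated directly
plot(t, y_conv, t, y_lsim, '--')    % the two curves should (approximately) coincide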

Internal Description

If a system is linear and lumped, it can also be described using a system of equations known as state-space equations. In state-space equations, we use the variable x to represent the internal state of the system. We then use u as the system input, and we continue to use y as the system output. We can write the state-space equations as such:

x'(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)

We will discuss the state-space equations more when we get to the section on modern controls.
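
As a brief preview, a state-space model can be entered directly in MATLAB's Control System Toolbox with the ss command. The matrices below are made up purely for illustration:

A = [0 1; -2 -3];     % made-up state matrix
B = [0; 1];           % input matrix
C = [1 0];            % output matrix
D = 0;                % feedthrough term
sys = ss(A, B, C, D); % builds the model x' = Ax + Bu, y = Cx + Du
step(sys)             % the model can then be analyzed like any other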

Complex Descriptions

Systems which are LTI and Lumped can also be described using a combination of the state-space equations, and the Laplace Transform. If we take the Laplace Transform of the state equations that we listed above, we can get a set of functions known as the Transfer Matrix Functions. We will discuss these functions in a later chapter.

Representations

To recap, we will prepare a table with the various system properties, and the available methods for describing the system:

Properties                             State-Space Equations   Laplace Transform   Transfer Matrix
Linear, Time-Variant, Distributed      no                      no                  no
Linear, Time-Variant, Lumped           yes                     no                  no
Linear, Time-Invariant, Distributed    no                      yes                 no
Linear, Time-Invariant, Lumped         yes                     yes                 yes

We will discuss all these different types of system representation later in the book.

Analysis

Once a system is modeled using one of the representations listed above, the system needs to be analyzed. We can determine the system metrics and then we can compare those metrics to our specification. If our system meets the specifications we are finished with the design process. However if the system does not meet the specifications (as is typically the case), then suitable controllers and compensators need to be designed and added to the system.

Once the controllers and compensators have been designed, the job isn't finished: we need to analyze the new composite system to ensure that the controllers work properly. Also, we need to ensure that the systems are stable: unstable systems can be dangerous.

Frequency Domain

For proposals, early-stage designs, and quick turn-around analyses, a frequency-domain model is often superior to a time-domain model. Frequency-domain models take disturbance PSDs (Power Spectral Densities) directly, use transfer functions directly, and produce output or residual PSDs directly. The answer is a steady-state response. Oftentimes the controller is aiming for 0, so the steady-state response is also the residual error, which becomes the analysis output or the metric reported.

Table 1: Frequency Domain Model Inputs and Outputs

Input   Model               Output
PSD     Transfer Function   PSD

Brief Overview of the Math

Frequency domain modeling is a matter of determining the response of a system to a random process.

Figure 1: Frequency Domain System

Sy(ω) = |H(ω)|^2 Sx(ω)   [1]

where

Sx(ω) is the one-sided input PSD,
H(ω) is the frequency response function of the system, and
Sy(ω) is the one-sided output PSD or auto power spectral density function.

The frequency response function, H(ω), is related to the impulse response function h(t) by the Fourier transform:

H(ω) = ∫ h(t) e^(-jωt) dt   (integrated from t = -∞ to +∞)

Note that some texts will state that this is only valid for random processes which are stationary. Other texts suggest stationary and ergodic, while still others state weakly stationary processes. Some texts do not distinguish between strictly stationary and weakly stationary. In practice, the rule of thumb is: if the PSD of the input process is the same from hour to hour and day to day, then the input PSD can be used and the above equation is valid.
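
A minimal numerical sketch of the PSD relationship above: given an input PSD and a frequency response function, the output PSD follows point by point. The flat (white) input PSD and the transfer function below are made-up values for illustration only:

w = logspace(-1, 2, 200);                      % frequency vector (rad/s)
H = squeeze(freqresp(tf(10, [1 2 10]), w));    % frequency response of a made-up system
Sxx = ones(size(w));                           % made-up flat (white) input PSD
Syy = abs(H).^2 .* Sxx(:);                     % output PSD: Syy(w) = |H(w)|^2 * Sxx(w)
loglog(w, Syy)                                 % the steady-state (residual) response PSD
xlabel('Frequency (rad/s)'), ylabel('Output PSD')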

Notes

  1. Sun, Jian-Qiao (2006). Stochastic Dynamics and Control, Volume 4. Amsterdam: Elsevier Science. ISBN 0444522301.

See a full explanation with example at ControlTheoryPro.com

Modeling Examples

Modeling in Control Systems is oftentimes a matter of judgement. This judgement is developed by creating models and learning from other people's models. ControlTheoryPro.com is a site with a number of worked modeling examples.

Manufacture

Once the system has been properly designed we can prototype our system and test it. Assuming our analysis was correct and our design is good, the prototype should work as expected. Now we can move on to manufacture and distribute our completed systems.



Classical Controls

The classical method of controls involves analysis and manipulation of systems in the complex frequency domain. This domain, entered into by applying the Laplace or Fourier Transforms, is useful in examining the characteristics of the system, and determining the system response.

Transforms

There are a number of transforms that we will be discussing throughout this book, and the reader is assumed to have at least a small prior knowledge of them. It is not the intention of this book to teach the topic of transforms to an audience that has had no previous exposure to them. However, we will include a brief refresher here to refamiliarize readers who may not remember the topic perfectly. If you do not know what the Laplace Transform or the Fourier Transform are yet, it is highly recommended that you use this page as a simple guide, and look the information up in other sources. Specifically, Wikipedia has lots of information on these subjects.

Transform Basics

A transform is a mathematical tool that converts an equation from one variable (or one set of variables) into a new variable (or a new set of variables). To do this, the transform must remove all instances of the first variable, the "Domain Variable", and add a new "Range Variable". Integrals are excellent choices for transforms, because the limits of the definite integral will be substituted into the domain variable, and all instances of that variable will be removed from the equation. An integral transform that converts from a domain variable a to a range variable b will typically be formatted as such:

T[f(a)] = F(b) = ∫ f(a) g(a, b) da

Where the function f(a) is the function being transformed, and g(a,b) is known as the kernel of the transform. Typically, the only difference between the various integral transforms is the kernel.

Laplace Transform

This operation can be performed using this MATLAB command: laplace

The Laplace Transform converts an equation from the time-domain into the so-called "S-domain", or the Laplace domain, or even the "Complex domain". These are all different names for the same mathematical space and they all may be used interchangeably in this book and in other texts on the subject. The Transform can only be applied under the following conditions:

  1. The system or signal in question is analog.
  2. The system or signal in question is Linear.
  3. The system or signal in question is Time-Invariant.
  4. The system or signal in question is causal.

The transform is defined as such:

F(s) = L[f(t)] = ∫ f(t) e^(-st) dt   (integrated from t = 0 to ∞)

[Laplace Transform]

Laplace transform results have been tabulated extensively. More information on the Laplace transform, including a transform table can be found in the Appendix.

If we have a linear differential equation in the time domain, for instance:

a y''(t) + b y'(t) + c y(t) = x(t)

With zero initial conditions, we can take the Laplace transform of the equation as such:

a s^2 Y(s) + b s Y(s) + c Y(s) = X(s)

And separating, we get:

Y(s) [a s^2 + b s + c] = X(s)

Inverse Laplace Transform

This operation can be performed using this MATLAB command: ilaplace

The inverse Laplace Transform is defined as such:

f(t) = (1 / 2πj) ∫ F(s) e^(st) ds   (a contour integral along the vertical line s = c + jω, from c - j∞ to c + j∞)

[Inverse Laplace Transform]

The inverse transform converts a function from the Laplace domain back into the time domain.

Matrices and Vectors

The Laplace Transform can be used on systems of linear equations in an intuitive way. Let's say that we have a system of linear equations:

We can arrange these equations into matrix form, as shown:

And write this symbolically as:

We can take the Laplace transform of both sides:

Which is the same as taking the transform of each individual equation in the system of equations.

Example: RL Circuit

For more information about electric circuits, see:
Circuit Theory

Here, we are going to show a common example of a first-order system, an RL Circuit. In an inductor, the relationship between the current, i(t), and the voltage, v(t), in the time domain is expressed as a derivative:

v(t) = L di(t)/dt

Where L is a special quantity called the "Inductance" that is a property of inductors.

Circuit diagram for the RL circuit example problem. VL is the voltage over the inductor, and is the quantity we are trying to find.

Let's say that we have a 1st order RL series electric circuit. The resistor has resistance R, the inductor has inductance L, and the voltage source has input voltage Vin. The system output of our circuit is the voltage over the inductor, Vout. In the time domain, we have the following first-order differential equations to describe the circuit:

However, since the circuit is essentially acting as a voltage divider, we can put the output in terms of the input as follows:

This is a very complicated equation, and will be difficult to solve unless we employ the Laplace transform:

We can divide top and bottom by L, and move Vin to the other side:

And using a simple table look-up, we can solve this for the time-domain relationship between the circuit input and the circuit output:
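
For readers who want to experiment, the RL circuit above can also be entered into MATLAB's Control System Toolbox. Since the output is the voltage across the inductor, the voltage-divider argument gives Vout(s)/Vin(s) = Ls/(Ls + R); the component values below are arbitrary:

R = 100; L = 0.5;        % arbitrary component values (ohms, henries)
H = tf([L 0], [L R]);    % H(s) = L*s / (L*s + R), the inductor-voltage transfer function
step(H)                  % inductor voltage in response to a unit step in Vin
% The response starts at 1 and decays exponentially with time constant L/R = 0.005 s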

Partial Fraction Expansion

For more information about Partial Fraction Expansion, see:
Calculus

Laplace transform pairs are extensively tabulated, but frequently we have transfer functions and other equations that do not have a tabulated inverse transform. If our equation is a fraction, we can often utilize Partial Fraction Expansion (PFE) to create a set of simpler terms that will have readily available inverse transforms. This section is going to give a brief reminder about PFE, for those who have already learned the topic. This refresher will be in the form of several examples of the process, as it relates to the Laplace Transform. People who are unfamiliar with PFE are encouraged to read more about it in Calculus.

Example: Second-Order System

If we have a given equation in the S-domain:

We can expand it into several smaller fractions as such:

This looks impossible, because we have a single equation with 3 unknowns (s, A, B), but in reality s can take any arbitrary value, and we can "plug in" values for s to solve for A and B, without needing other equations. For instance, in the above equation, we can multiply through by the denominator, and cancel terms:

Now, when we set s → -2, the A term disappears, and we are left with B → 3. When we set s → -1, we can solve for A → -1. Putting these values back into our original equation, we have:

Remember, since the Laplace transform is a linear operator, the following relationship holds true:

Finding the inverse transform of these smaller terms should be an easier process than finding the inverse transform of the whole function. Partial fraction expansion is a useful, and oftentimes necessary, tool for finding the inverse of an S-domain equation.
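
MATLAB can also perform partial fraction expansion numerically with the residue command. The fraction below is a made-up example, not the (omitted) function from the problem above:

b = [1 3];                 % numerator of a made-up fraction: (s + 3)
a = [1 3 2];               % denominator: (s + 1)(s + 2) = s^2 + 3s + 2
[r, p, k] = residue(b, a)  % residues r, poles p, and direct term k
% Expected result: r = [-1; 2], p = [-2; -1], k = [], i.e.
% (s + 3)/((s + 1)(s + 2)) = -1/(s + 2) + 2/(s + 1)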

Example: Fourth-Order System

If we have a given equation in the S-domain:

We can expand it into several smaller fractions as such:

Canceling terms wouldn't be enough here; we will open the brackets (separated onto multiple lines):

Let's compare coefficients:

A + D = 0
30A + C + 20D = 79
300A + B + 10C + 100D = 916
1000A = 1000

And solving gives us:

A = 1
B = 26
C = 69
D = -1

We know from the Laplace Transform table that the following relation holds:

We can plug in our values for A, B, C, and D into our expansion, and try to convert it into the form above.

Example: Complex Roots

Given the following transfer function:

When the roots of the denominator are complex numbers, we use a numerator of the form As + B over the quadratic factor, as opposed to a single constant (e.g. D), which is used when the root is real:

As + B = 7s + 26
A = 7
B = 26

We will need to reform it into two fractions that look like this (without changing its value):

Let's start with the denominator (for both fractions):

The roots of s^2 - 80s + 1681 are 40 + j9 and 40 - j9.

And now the numerators:

Inverse Laplace Transform:

Example: Sixth-Order System

Given the following transfer function:

We multiply through by the denominators to make the equation rational:

And then we combine terms:

Comparing coefficients:

A + B + C = 0
-15A - 12B - 3C + D = 90
73A + 37B - 3D = 0
-111A = -1110

Now, we can solve for A, B, C and D:

A = 10
B = -10
C = 0
D = 120

And now for the "fitting":

The roots of s^2 - 12s + 37 are 6 + j and 6 - j.

No need to fit the fraction of D, because it is complete; no need to bother fitting the fraction of C, because C is equal to zero.

Final Value Theorem

The Final Value Theorem allows us to determine the value of the time-domain equation, as time approaches infinity, from the S-domain equation. In Control Engineering, the Final Value Theorem is used most frequently to determine the steady-state value of a system. The theorem is only valid if the real parts of all the poles of sX(s) are negative.

lim (t → ∞) x(t) = lim (s → 0) s X(s)

[Final Value Theorem (Laplace)]

From our chapter on system metrics, you may recognize the value of the system at time infinity as the steady-state value of the system. The difference between the steady-state value and the expected output value is what we defined as the steady-state error of the system. Using the Final Value Theorem, we can find the steady-state value and the steady-state error of the system in the complex S domain.

Example: Final Value Theorem

Find the final value of the following polynomial:

We can apply the Final Value Theorem:

We obtain the value:
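
The Final Value Theorem is easy to apply with MATLAB's Symbolic Math Toolbox. The function below is a made-up stand-in for the (omitted) polynomial in the example above:

syms s
X = 10 / (s*(s + 5));           % made-up example: X(s) = 10 / (s(s + 5))
final_value = limit(s*X, s, 0)  % Final Value Theorem: lim s->0 of s*X(s), which gives 2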

Initial Value Theorem

Akin to the final value theorem, the Initial Value Theorem allows us to determine the initial value of the system (the value at time zero) from the S-Domain equation. The initial value theorem is used most frequently to determine the starting conditions, or the "initial conditions", of a system.

x(0) = lim (s → ∞) s X(s)

[Initial Value Theorem (Laplace)]

Common Transforms

We will now show you the transforms of the three functions we have already learned about: the unit step, the unit ramp, and the unit parabola. The transform of the unit step function is given by:

L[u(t)] = 1/s

And since the unit ramp is the integral of the unit step, we can multiply the above result by 1/s to get the transform of the unit ramp:

L[r(t)] = 1/s^2

Again, we can multiply by 1/s to get the transform of the unit parabola:

L[p(t)] = 1/s^3
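
These three transform pairs can be checked with the symbolic laplace command, assuming the Symbolic Math Toolbox is available:

syms t s
laplace(heaviside(t), t, s)   % unit step:     returns 1/s
laplace(t, t, s)              % unit ramp:     returns 1/s^2
laplace(t^2/2, t, s)          % unit parabola: returns 1/s^3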

Fourier Transform

The Fourier Transform is very similar to the Laplace transform. The Fourier transform uses the assumption that any finite time-domain signal can be broken into an infinite sum of sinusoidal (sine and cosine wave) signals. Under this assumption, the Fourier Transform converts a time-domain signal into its frequency-domain representation, as a function of the radial frequency, ω. The Fourier Transform is defined as such:

F(jω) = ∫ f(t) e^(-jωt) dt   (integrated from t = -∞ to +∞)

[Fourier Transform]

This operation can be performed using this MATLAB command: fourier

We can now show that the Fourier Transform of a causal signal is equivalent to the Laplace transform, when the following condition is true:

s = jω

Because the Laplace and Fourier Transforms are so closely related, it does not make much sense to use both transforms for all problems. This book, therefore, will concentrate on the Laplace transform for nearly all subjects, except those problems that deal directly with frequency values. For frequency problems, it makes life much easier to use the Fourier Transform representation.

Like the Laplace Transform, the Fourier Transform has been extensively tabulated. Properties of the Fourier transform, in addition to a table of common transforms is available in the Appendix.

Inverse Fourier Transform

This operation can be performed using this MATLAB command: ifourier

The inverse Fourier Transform is defined as follows:

f(t) = (1 / 2π) ∫ F(jω) e^(jωt) dω   (integrated from ω = -∞ to +∞)

[Inverse Fourier Transform]

This transform is nearly identical to the Fourier Transform.

Complex Plane

Using the above equivalence, we can show that the Laplace transform is always equal to the Fourier Transform if the variable s is purely imaginary. However, the Laplace transform is different if s is a real or a complex variable. As such, we generally define s to have both a real part and an imaginary part, as such:

s = σ + jω

And we can show that s = jω if σ = 0.

Since the variable s can be broken down into 2 independent values, it is frequently of some value to graph the variable s on its own special "S-plane". The S-plane graphs the variable σ on the horizontal axis, and the value of jω on the vertical axis. This axis arrangement is shown at right.


Euler's Formula

There is an important result from calculus that is known as Euler's Formula, or "Euler's Relation". This important formula relates the important values of e, j, π, 1 and 0:

However, this result is derived from the following equation, setting ω to π:


[Euler's Formula]
e^(jω) = cos(ω) + j sin(ω)

This formula will be used extensively in some of the chapters of this book, so it is important to become familiar with it now.

MATLAB

The MATLAB symbolic toolbox contains functions to compute the Laplace and Fourier transforms automatically. The functions laplace and fourier can be used to calculate the Laplace and Fourier transforms of the input functions, respectively. For instance, the code:

t = sym('t');
fx = 30*t^2 + 20*t;
laplace(fx)

produces the output:

ans =

60/s^3+20/s^2

We will discuss these functions more in The Appendix.
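
The transform can also be inverted with ilaplace as a quick sanity check that we recover the original time-domain function:

syms s t
ilaplace(60/s^3 + 20/s^2, s, t)   % returns 30*t^2 + 20*t, the original function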

Further reading



Transfer Functions

Transfer Functions

This operation can be performed using the MATLAB command tf.

A Transfer Function is the ratio of the output of a system to the input of a system, in the Laplace domain, considering its initial conditions and equilibrium point to be zero. This assumption is relaxed for systems with transient behavior. If we have an input function X(s) and an output function Y(s), we define the transfer function H(s) to be:


[Transfer Function]
H(s) = Y(s) / X(s)

Readers who have read the Circuit Theory book will recognize the transfer function as being the impedance, the admittance, the impedance ratio of a voltage divider, or the admittance ratio of a current divider.

Impulse Response

Note:
Time domain variables are generally written with lower-case letters. Laplace-Domain, and other transform domain variables are generally written using upper-case letters.

For comparison, we will consider the time-domain equivalent to the above input/output relationship. In the time domain, we generally denote the input to a system as x(t), and the output of the system as y(t). The relationship between the input and the output is denoted as the impulse response, h(t).

We define the impulse response as the relationship between the system output and its input. We can use the following equation to define the impulse response:

Impulse Function

It would be handy at this point to define precisely what an "impulse" is. The Impulse Function, denoted with δ(t), is a special function defined piece-wise as follows:


[Impulse Function]

The impulse function is also known as the delta function because it's denoted with the Greek lower-case letter δ. The delta function is typically graphed as an arrow towards infinity, as shown below:

It is drawn as an arrow because it is difficult to show a single point at infinity in any other graphing method. Notice how the arrow only exists at location 0, and does not exist for any other time t. The delta function works with regular time shifts just like any other function. For instance, we can graph the function δ(t - N) by shifting the function δ(t) to the right, as such:

An examination of the impulse function will show that it is related to the unit-step function as follows:

and

The impulse function is not defined at the point t = 0, but the impulse must always satisfy the following condition, or else it is not a true impulse function: its integral over all time must equal 1.

The response of a system to an impulse input is called the impulse response. Now, to get the Laplace Transform of the impulse function, we take the derivative of the unit step function, which means we multiply the transform of the unit step function (1/s) by s, which gives a transform of 1.

This result can be verified in the transform tables in The Appendix.
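
It can also be checked symbolically; dirac is MATLAB's name for the impulse function (Symbolic Math Toolbox assumed):

syms t
laplace(dirac(t))   % returns 1, the Laplace transform of the unit impulse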

Step Response

This operation can be performed using the MATLAB command step.

Similar to the impulse response, the step response of a system is the output of the system when a unit step function is used as the input. The step response is a common analysis tool used to determine certain metrics about a system. Typically, when a new system is designed, the step response of the system is the first characteristic of the system to be analyzed.

Convolution

This operation can be performed using the MATLAB command conv.

However, the impulse response cannot be used to find the system output from the system input in the same manner as the transfer function. If we have the system input and the impulse response of the system, we can calculate the system output using the convolution operation as such:

Remember: an asterisk means convolution, not multiplication!

Where " * " (asterisk) denotes the convolution operation. Convolution is a complicated combination of multiplication, integration and time-shifting. We can define the convolution between two functions, a(t) and b(t) as the following:


[Convolution]

(The variable τ (Greek tau) is a dummy variable for integration). This operation can be difficult to perform. Therefore, many people prefer to use the Laplace Transform (or another transform) to convert the convolution operation into a multiplication operation, through the Convolution Theorem.
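
For sampled signals, the convolution integral can be approximated numerically with MATLAB's conv function. The sketch below uses two arbitrary example signals and scales the result by the sample spacing to approximate the continuous-time integral:

dt = 0.01;                    % sample spacing
t  = 0:dt:5;
a  = exp(-t);                 % example signal a(t)
b  = double(t <= 1);          % example signal b(t): a rectangular pulse
y  = conv(a, b) * dt;         % approximate the continuous convolution integral
ty = (0:length(y)-1) * dt;    % time vector for the convolution result
plot(ty, y);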

Time-Invariant System Response

If the system in question is time-invariant, then the general description of the system can be replaced by a convolution integral of the system's impulse response and the system input. We can call this the convolution description of a system, and define it below:


[Convolution Description]

Convolution Theorem

This method of solving for the output of a system is quite tedious, and in fact it can waste a large amount of time if you want to solve a system for a variety of input signals. Luckily, the Laplace transform has a special property, called the Convolution Theorem, that makes the operation of convolution easier:

Convolution Theorem
Convolution in the time domain becomes multiplication in the complex Laplace domain. Multiplication in the time domain becomes convolution in the complex Laplace domain.

The Convolution Theorem can be expressed using the following equations:


[Convolution Theorem]

This also serves as a good example of the property of Duality.

Using the Transfer Function

The Transfer Function fully describes a control system. The Order, Type and Frequency response can all be taken from this specific function. Nyquist and Bode plots can be drawn from the open loop Transfer Function. These plots show the stability of the system when the loop is closed. Setting the denominator of the transfer function to zero gives the characteristic equation, from which the roots (poles) of the system can be derived.

For all these reasons and more, the Transfer function is an important aspect of classical control systems. Let's start out with the definition:

Transfer Function
The Transfer function of a system is the relationship of the system's output to its input, represented in the complex Laplace domain.

If the complex Laplace variable is s, then we generally denote the transfer function of a system as either G(s) or H(s). If the system input is X(s), and the system output is Y(s), then the transfer function can be defined as such: H(s) = Y(s)/X(s).

If we know the input to a given system, and we have the transfer function of the system, we can solve for the system output by multiplying:


[Transfer Function Description]
Y(s) = H(s) X(s)

Example: Impulse Response

From a Laplace transform table, we know that the Laplace transform of the impulse function, δ(t), is 1.

So, when we plug this result into our relationship between the input, output, and transfer function, we get Y(s) = H(s).

In other words, the "impulse response" is the output of the system when we input an impulse function.
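
In MATLAB (assuming the Control System Toolbox is available), the impulse response of a transfer function can be computed and plotted directly with the impulse command; the numerator and denominator below are arbitrary example values:

num = [1 2];         % example numerator polynomial
den = [1 3 2];       % example denominator polynomial
sys = tf(num, den);
impulse(sys);        % plots h(t), the response of the system to a unit impulse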

Example: Step Response

From the Laplace Transform table, we can also see that the transform of the unit step function, u(t), is given by 1/s.

Plugging that result into our relation for the transfer function gives us Y(s) = H(s)/s.

And we can see that the step response is simply the impulse response divided by s.

Example: MATLAB Step Response

Use MATLAB to find the step response of the following transfer function: (79s^2 + 916s + 1000) / (s^4 + 30s^3 + 300s^2 + 1000s).

We can separate out our numerator and denominator polynomials as such:

num = [79 916 1000];
den = [1 30 300 1000 0];
sys = tf(num, den);
% if you are using the System Identification Toolbox instead of the Control System Toolbox:
sys = idtf(num, den);

Now, we can get our step response from the step function, and plot it for time from 1 to 10 seconds:

T = 1:0.001:10;
step(sys, T);

Frequency Response

The Frequency Response is similar to the Transfer function, except that it is the relationship between the system output and input in the complex Fourier Domain, not the Laplace domain. We can obtain the frequency response from the transfer function by using the following change of variables: s = jω.

Frequency Response
The frequency response of a system is the relationship of the system's output to its input, represented in the Fourier Domain.

Because the frequency response and the transfer function are so closely related, typically only one is ever calculated, and the other is gained by simple variable substitution. However, despite the close relationship between the two representations, they are both useful individually, and are each used for different purposes.
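
As a short sketch (Control System Toolbox assumed; the first-order system below is an arbitrary example), the frequency response can be evaluated numerically by substituting s = jω, or plotted directly with the bode command:

sys = tf(1, [1 1]);               % example first-order system, 1/(s + 1)
w   = logspace(-2, 2, 200);       % frequency points, in rad/s
H   = squeeze(freqresp(sys, w));  % complex frequency response H(jw) at those frequencies
bode(sys);                        % magnitude and phase plots of the same response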



Sampled Data Systems

Ideal Sampler

In this chapter, we are going to introduce the ideal sampler and the Star Transform. First, we need to introduce (or review) the Geometric Series infinite sum. The results of this sum will be very useful in calculating the Star Transform, later.

Consider a sampler device that operates as follows: every T seconds, the sampler reads the current value of the input signal at that exact moment. The sampler then holds that value on the output for T seconds, before taking the next sample. We have a generic input to this system, f(t), and our sampled output will be denoted f*(t). We can then show the following relationship between the two signals:

Note that the value of f * at time t = 1.5 T is the same as at time t = T. This relationship works for any fractional value.

Taking the Laplace Transform of this infinite sequence will yield a special result called the Star Transform. The Star Transform is also occasionally called the "Starred Transform" in some texts.

Geometric Series

Before we talk about the Star Transform or even the Z-Transform, it is useful for us to review the mathematical background behind solving infinite series. Specifically, because of the nature of these transforms, we are going to look at methods to solve for the sum of a geometric series.

A geometric series is a sum of values with increasing exponents, as such:

In the equation above, notice that each term in the series has a coefficient value, a. We can optionally factor out this coefficient, if the resulting equation is easier to work with:

Once we have an infinite series in either of these formats, we can conveniently solve for the total sum of this series using the following equation:

Let's say that we start our series off at a number that isn't zero, for instance at n = 1 or n = 100. Let's see:

We can generalize the sum to this series as follows:


[Geometric Series]

With that result out of the way, now we need to worry about making this series converge. In the above sum, we know that n is approaching infinity (because this is an infinite sum). Therefore, any term that contains the variable n is a cause for concern when we are trying to make this series converge. If we examine the above equation, we see that there is one term in the entire result with an n in it, and from that, we can set a fundamental inequality to govern the geometric series.

To satisfy this equation, we must satisfy the following condition:


[Geometric convergence condition]
|r| < 1


Therefore, we come to the final result: The geometric series converges if and only if the magnitude of r is less than one.
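
A quick numeric check of this convergence result, using an arbitrary ratio r = 0.5:

r = 0.5;                      % |r| < 1, so the series should converge
n = 0:100;
partial_sum = sum(r.^n)       % approximately 2
closed_form = 1/(1 - r)       % exactly 2, from the geometric series formula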

The Star Transform

The Star Transform is defined as such:


[Star Transform]

The Star Transform depends on the sampling time T and is different for a single signal depending on the frequency at which the signal is sampled. Since the Star Transform is defined as an infinite series, it is important to note that some inputs to the Star Transform will not converge, and therefore some functions do not have a valid Star Transform. Also, it is important to note that the Star Transform may only be valid under a particular region of convergence. We will cover this topic more when we discuss the Z-transform.

Star ↔ Laplace

For more information about residues, see:
Complex Analysis/Residue Theory

The Laplace Transform and the Star Transform are clearly related, because we obtained the Star Transform by using the Laplace Transform on a time-domain signal. However, the method to convert between the two results can be a slightly difficult one. To find the Star Transform of a Laplace function, we must take the residues of the Laplace equation, as such:

This math is advanced for most readers, so we can also use an alternate method, as follows:

Neither of these methods is particularly easy, however, and therefore we will not discuss the relationship between the Laplace transform and the Star Transform any more than is absolutely necessary in this book. Suffice it to say, however, that the Laplace transform and the Star Transform are related mathematically.

Star + Laplace

In some systems, we may have components that are both continuous and discrete in nature. For instance, our feedback loop might consist of an Analog-To-Digital converter, followed by a computer (for processing), and then a Digital-To-Analog converter. In this case, the computer is acting on a digital signal, but the rest of the system is acting on continuous signals. Star transforms can interact with Laplace transforms in some of the following ways:

Given:

Then:

Given:

Then:

Where the starred term denotes the Star Transform of the product X(s)H(s).

Convergence of the Star Transform

The Star Transform is defined as being an infinite series, so it is critically important that the series converge (not reach infinity), or else the result will be nonsensical. Since the Star Transform is a geometric series (for many input signals), we can use geometric series analysis to show whether the series converges, and even under what particular conditions the series converges. The restrictions on the star transform that allow it to converge are known as the region of convergence (ROC) of the transform. Typically a transform must be accompanied by the explicit mention of the ROC.

The Z-Transform

Let us say now that we have a discrete data set that is sampled at regular intervals. We can call this set x[n]:

x[n] = [ x[0] x[1] x[2] x[3] x[4] ... ]
Note:
The definition given below is the Bilateral Z-Transform. We will only discuss this version of the transform in this book.

We can utilize a special transform, called the Z-transform, to make dealing with this set easier:


[Z Transform]
X(z) = Σ (from n = -infinity to +infinity) x[n] z^(-n)

Z-Transform properties, and a table of common transforms can be found in:
the Appendix.

Like the Star Transform, the Z Transform is defined as an infinite series, and therefore we need to worry about convergence. In fact, there are a number of instances where different signals have identical Z-Transforms but different regions of convergence (ROC). Therefore, when talking about the Z transform, you must include the ROC, or you are missing valuable information.


Z Transfer Functions

Like the Laplace Transform, in the Z-domain we can use the input-output relationship of the system to define a transfer function.

The transfer function in the Z domain operates exactly the same as the transfer function in the S Domain:

Similarly, the value h[n], which represents the response of the digital system, is known as the impulse response of the system. It is important to note, however, that the definition of an "impulse" is different in the analog and digital domains.

Inverse Z Transform

The inverse Z Transform is defined by the following path integral:


[Inverse Z Transform]

Where C is a counterclockwise closed path encircling the origin and entirely in the region of convergence (ROC). The contour or path, C, must encircle all of the poles of X(z).

There is more information about complex integrals in the book Engineering Analysis.

This math is relatively advanced compared to some other material in this book, and therefore little or no further attention will be paid to solving the inverse Z-Transform in this manner. Z transform pairs are heavily tabulated in reference texts, so many readers can consider that to be the primary method of solving for inverse Z transforms. There are a number of Z-transform pairs available in table form in The Appendix.
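
For rational Z-domain functions, MATLAB's symbolic iztrans command can perform this table lookup for us (Symbolic Math Toolbox assumed; the function below is just an example):

syms z n
x = iztrans(z/(z - 1/2), z, n)   % returns (1/2)^n, the inverse Z transform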

Final Value Theorem

Like the Laplace Transform, the Z Transform also has an associated final value theorem:


[Final Value Theorem (Z)]
x[infinity] = lim (z -> 1) (z - 1) X(z)

This equation can be used to find the steady-state response of a system, and also to calculate the steady-state error of the system.

Star ↔ Z

The Z transform is related to the Star transform through the following change of variables: z = e^(sT).

Notice that in the Z domain, we don't maintain any information on the sampling period, so converting to the Z domain from a Star Transformed signal loses that information. When converting back to the star domain, however, the value for T can be re-inserted into the equation, if it is still available.

Also of some importance is the fact that the Z transform (as defined here) is bilateral, while the Star Transform is unilateral. This means that we can only convert between the two transforms if the sampled signal is zero for all values of n < 0.

Because the two transforms are so closely related, it can be said that the Z transform is simply a notational convenience for the Star Transform. With that said, this book could easily use the Star Transform for all problems, and ignore the added burden of Z transform notation entirely. A common example of this is Richard Hamming's book "Numerical Methods for Scientists and Engineers", which uses the Fourier Transform for all problems, considering the Laplace, Star, and Z-Transforms to be merely notational conveniences. However, the Control Systems wikibook is under the impression that the correct utilization of different transforms can make problems easier to solve, and we will therefore use a multi-transform approach.

Z plane

Note:
The lower-case z is the name of the variable, and the upper-case Z is the name of the Transform and the plane.

z is a complex variable with a real part and an imaginary part. In other words, we can define z as such:

Since z can be broken down into two independent components, it often makes sense to graph the variable z on the Z-plane. In the Z-plane, the horizontal axis represents the real part of z, and the vertical axis represents the magnitude of the imaginary part of z.

Notice also that if we define z in terms of the star-transform relation:

we can separate out s into real and imaginary parts:

We can plug this into our equation for z:

Through Euler's formula, we can separate out the complex exponential as such:

If we define two new variables, M and φ:

We can write z in terms of M and φ. Notice that it is Euler's equation:

Which is clearly a polar representation of z, with the magnitude of the polar function (M) based on the real part of s, and the angle of the polar function (φ) based on the imaginary part of s.

Region of Convergence

To best teach the region of convergence (ROC) for the Z-transform, we will do a quick example.

We have the following discrete series, a decaying exponential:

Now, we can plug this function into the Z transform equation:

Note that we can remove the unit step function, and change the limits of the sum:

This is because the series is 0 for all n < 0. If we try to combine the n terms, we get the following result:

Once we have our series in this form, we can break this down to look like our geometric series:

And finally, we can find our final value, using the geometric series formula:

Again, we know that to make this series converge, we need to make the r value less than 1:

And finally we obtain the region of convergence for this Z-transform:

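A result of this form can be confirmed with MATLAB's symbolic ztrans command (Symbolic Math Toolbox assumed; 1/2 is chosen here as an example decay constant):

syms n z
x = (1/2)^n;            % decaying exponential; ztrans treats it as defined for n >= 0
X = ztrans(x, n, z)     % returns z/(z - 1/2); the series converges for |z| > 1/2
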
Laplace ↔ Z

There is no easy, direct way to convert between the Laplace transform and the Z transform. Nearly all conversion methods reproduce some aspects of the original equation faithfully and distort other aspects. For some of the main mapping techniques between the two, see the Z Transform Mappings Appendix.

However, there are some topics that we need to discuss. First and foremost, conversions between the Laplace domain and the Z domain are not linear; this leads to some of the following problems:

This means that when we combine two functions in one domain multiplicatively, we must find a combined transform in the other domain. Here is how we denote this combined transform:

Notice that we use a horizontal bar over the multiplied functions to denote that we took the transform of the product, not of the individual pieces. However, if we have a system that incorporates a sampler, we can show a simple result. If we have the following format:

Then we can put everything in terms of the Star Transform:

and once we are in the star domain, we can do a direct change of variables to reach the Z domain:

Note that we can only make this equivalence relationship if the system incorporates an ideal sampler, and therefore one of the multiplicative terms is in the star domain.

Example

Let's say that we have the following equation in the Laplace domain:

And because we have a discrete sampler in the system, we want to analyze it in the Z domain. We can break up this equation into two separate terms, and transform each:

And

And when we add them together, we get our result:

Z ↔ Fourier

By substituting variables, we can relate the Star transform to the Fourier Transform as well:

If we assume that T = 1, we can relate the two equations together by setting the real part of s to zero. Notice that the relationship between the Laplace and Fourier transforms is mirrored here, where the Fourier transform is the Laplace transform with no real-part to the transform variable.

There are a number of discrete-time variants to the Fourier transform as well, which are not discussed in this book. For more information about these variants, see Digital Signal Processing.

Reconstruction

Some of the easiest reconstruction circuits are called "Holding circuits". Once a signal has been transformed using the Star Transform (passed through an ideal sampler), the signal must be "reconstructed" using one of these hold systems (or an equivalent) before it can be analyzed in a Laplace-domain system.

If we have a sampled signal denoted by the Star Transform, we want to reconstruct that signal into a continuous-time waveform, so that we can manipulate it using Laplace-transform techniques.

Let's say that we have the sampled input signal, a reconstruction circuit denoted G(s), and an output denoted with the Laplace-transform variable Y(s). We can show the relationship as follows:

Reconstruction circuits, then, are physical devices that we can use to convert a digital, sampled signal into a continuous-time signal, so that we can take the Laplace transform of the output signal.

Zero order Hold

Zero-Order Hold impulse response

A zero-order hold circuit is a circuit that essentially inverts the sampling process: The value of the sampled signal at time t is held on the output for T time. The output waveform of a zero-order hold circuit therefore looks like a staircase approximation to the original waveform.

The transfer function for a zero-order hold circuit, in the Laplace domain, is written as such:


[Zero Order Hold]
G_ZOH(s) = (1 - e^(-Ts)) / s

The Zero-order hold is the simplest reconstruction circuit, and (like the rest of the circuits on this page) assumes zero processing delay in converting from digital to analog.

A continuous input signal (gray) and the sampled signal with a zero-order hold (red)

First Order Hold

Impulse response of a first-order hold.

The zero-order hold creates a step output waveform, but this isn't always the best way to reconstruct the signal. Instead, the First-Order Hold circuit takes the derivative of the waveform at the time t, and uses that derivative to make a guess as to where the output waveform is going to be at time (t + T). The first-order hold circuit then "draws a line" from the current position to the expected future position, as the output of the waveform.


[First Order Hold]

Keep in mind, however, that the next value of the signal will probably not be the same as the expected value of the next data point, and therefore the first-order hold may have a number of discontinuities.

An input signal (grey) and the first-order hold circuit output (red)
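
In MATLAB (Control System Toolbox assumed), the c2d command discretizes a continuous-time model with a specified hold; the 'zoh' and 'foh' methods correspond to the zero-order and first-order holds described above. The system and sampling time below are arbitrary examples:

sys = tf(1, [1 1 1]);           % example continuous-time system
T   = 0.1;                      % example sampling time, in seconds
sys_zoh = c2d(sys, T, 'zoh');   % discrete equivalent assuming a zero-order hold
sys_foh = c2d(sys, T, 'foh');   % discrete equivalent assuming a first-order (triangle) hold
step(sys, sys_zoh, sys_foh);    % compare the step responses of all three models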

Fractional Order Hold

The Zero-Order hold outputs the current value onto the output, and keeps it level throughout the entire sampling period. The first-order hold uses the function derivative to predict the next value, and produces a series of ramp outputs to produce a fluctuating waveform. Sometimes, however, neither of these solutions is desired, and therefore we have a compromise: Fractional-Order Hold. The fractional-order hold acts like a mixture of the other two holding circuits, and takes a fractional number k as an argument. Notice that k must be between 0 and 1 for this circuit to work correctly.


[Fractional Order Hold]

This circuit is more complicated than either of the other hold circuits, but sometimes added complexity is worth it if we get better performance from our reconstruction circuit.

Other Reconstruction Circuits

Impulse response to a linear-approximation circuit.

Another type of circuit that can be used is a linear approximation circuit.

An input signal (grey) and the output signal through a linear approximation circuit

Further reading


System Delays

Delays

A system can be built with an inherent delay. Delays are units that cause a time-shift in the input signal, but that don't affect the signal characteristics. An ideal delay is a delay system that doesn't affect the signal characteristics at all, and that delays the signal for an exact amount of time. Some delays, like processing delays or transmission delays, are unintentional. Other delays however, such as synchronization delays, are an integral part of a system. This chapter will talk about how delays are utilized and represented in the Laplace Domain. Once we represent a delay in the Laplace domain, it is an easy matter, through change of variables, to express delays in other domains.

Ideal Delays

An ideal delay causes the input function to be shifted forward in time by a certain specified amount of time. Systems with an ideal delay cause the system output to be delayed by a finite, predetermined amount of time.

Time Shifts

Let's say that we have a function in time that is time-shifted by a certain constant time period T. For convenience, we will denote this function as x(t - T). Now, we can show that the Laplace transform of x(t - T) is e^(-Ts)X(s), where X(s) is the Laplace transform of x(t).

What this demonstrates is that time-shifts in the time-domain become exponentials in the complex Laplace domain.

Shifts in the Z-Domain

Since we know the following general relationship between the Z Transform and the Star Transform:

We can show what a time shift in a discrete time domain becomes in the Z domain:

Delays and Stability

A time-shift in the time domain becomes an exponential increase in the Laplace domain. This would seem to show that a time shift can have an effect on the stability of a system, and occasionally can cause a system to become unstable. We define a new parameter called the time margin as the amount of time that we can shift an input function before the system becomes unstable. If the system can survive any arbitrary time shift without going unstable, we say that the time margin of the system is infinite.

Delay Margin

When speaking of sinusoidal signals, it doesn't make sense to talk about "time shifts", so instead we talk about "phase shifts". Therefore, it is also common to refer to the time margin as the phase margin of the system. The phase margin denotes the amount of phase shift that we can apply to the system input before the system goes unstable.

We denote the phase margin for a system with a lowercase Greek letter φ (phi). Phase margin is defined as such for a second-order system:


[Delay Margin]

Oftentimes, the phase margin is approximated by the following relationship:


[Delay Margin (approx)]

The Greek letter zeta (ζ) is a quantity called the damping ratio, and we discuss this quantity in more detail in the next chapter.

Transform-Domain Delays

The ordinary Z-Transform does not account for a system which experiences an arbitrary time delay, or a processing delay. The Z-Transform can, however, be modified to account for an arbitrary delay. This new version of the Z-transform is frequently called the Modified Z-Transform, although in some literature (notably in Wikipedia), it is known as the Advanced Z-Transform.

Delayed Star Transform

To demonstrate the concept of an ideal delay, we will show how the star transform responds to a time-shifted input with a specified delay of time T. The delayed star transform, written with a delay parameter Δ, is defined in terms of the star transform as such:


[Delayed Star Transform]

As we can see, in the star transform, a time-delayed signal is multiplied by a decaying exponential value in the transform domain.

Delayed Z-Transform

Since we know that the Star Transform is related to the Z Transform through the following change of variables:

We can interpret the above result to show how the Z Transform responds to a delay:

This result is expected.

Now that we know how the Z transform responds to time shifts, it is often useful to generalize this behavior into a form known as the Delayed Z-Transform. The Delayed Z-Transform is a function of two variables, z and Δ, and is defined as such:

And finally:


[Delayed Z Transform]

Modified Z-Transform

The Delayed Z-Transform has some uses, but mathematicians and engineers have decided that a more useful version of the transform was needed. The new version of the Z-Transform, which is similar to the Delayed Z-transform with a change of variables, is known as the Modified Z-Transform. The Modified Z-Transform is defined in terms of the delayed Z transform as follows:

And it is defined explicitly:


[Modified Z Transform]



Poles and Zeros


Poles and Zeros

Poles and Zeros of a transfer function are the values of the complex frequency variable ( s ) for which the transfer function becomes infinite (poles) or zero (zeros), respectively. Specifically, zeros are the values of ( s ) that make the numerator of the transfer function zero, and poles are the values of ( s ) that make the denominator of the transfer function zero. The values of the poles and the zeros of a system determine whether the system is stable, and how well the system performs. Control systems, in the most simple sense, can be designed simply by assigning specific values to the poles and zeros of the system.

Physically realizable control systems must have a number of poles greater than or equal to the number of zeros. Systems that satisfy this relationship are called Proper. We will elaborate on this below.

Time-Domain Relationships

Let's say that we have a transfer function with 3 poles:

The poles are located at s = l, m, n. Now, we can use partial fraction expansion to separate out the transfer function:

Using the inverse transform on each of these component fractions (looking up the transforms in our table), we get the following:

But, since s is a complex variable, l, m and n can all potentially be complex numbers, with a real part (σ) and an imaginary part (jω). If we just look at the first term:

Using Euler's Equation on the imaginary exponent, we get:

If a complex pole is present, it is always accompanied by another pole that is its complex conjugate. The imaginary parts of their time-domain representations thus cancel, and we are left with two of the same real parts. Assuming that the complex conjugate pole of the first term is present, we can take 2 times the real part of this equation, and we are left with our final result:

We can see from this equation that every pole will have an exponential part, and a sinusoidal part to its response. We can also go about constructing some rules:

  1. if σl = 0, the response of the pole is a perfect sinusoid (an oscillator)
  2. if ωl = 0, the response of the pole is a perfect exponential.
  3. if σl < 0, the exponential part of the response will decay towards zero.
  4. if σl > 0, the exponential part of the response will rise towards infinity.

From the last two rules, we can see that all poles of the system must have negative real parts, and therefore they must all have the form (s + l) for the system to be stable. We will discuss stability in later chapters.

What are Poles and Zeros

Let's say we have a transfer function defined as a ratio of two polynomials:

Where N(s) and D(s) are simple polynomials. Zeros are the roots of N(s) (the numerator of the transfer function) obtained by setting N(s) = 0 and solving for s.

The polynomial order of a function is the value of the highest exponent in the polynomial.

Poles are the roots of D(s) (the denominator of the transfer function), obtained by setting D(s) = 0 and solving for s. Because of our restriction above, that a transfer function must not have more zeros than poles, we can state that the polynomial order of D(s) must be greater than or equal to the polynomial order of N(s).


Example

Consider the transfer function:

We define N(s) and D(s) to be the numerator and denominator polynomials, as such:

We set N(s) to zero, and solve for s:

So we have a zero at s = -2. Now, we set D(s) to zero, and solve for s to obtain the poles of the equation:

And simplifying this gives us poles at: -j/2, +j/2. Remember, s is a complex variable, and it can therefore take imaginary and real values.
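
Numerically, poles and zeros are simply polynomial roots, so they can be found with MATLAB's roots command (or with the pole and zero commands from the Control System Toolbox). The polynomials below assume that this example's transfer function is (s + 2)/(s^2 + 1/4); that form is consistent with the stated results, but it is a reconstruction, not the book's original expression:

N = [1 2];                 % N(s) = s + 2
D = [1 0 0.25];            % assumed D(s) = s^2 + 1/4
zeros_of_sys = roots(N)    % returns -2
poles_of_sys = roots(D)    % returns +0.5i and -0.5i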

Effects of Poles and Zeros

As s approaches a zero, the numerator of the transfer function (and therefore the transfer function itself) approaches the value 0. When s approaches a pole, the denominator of the transfer function approaches zero, and the value of the transfer function approaches infinity. An output value of infinity should raise an alarm bell for people who are familiar with BIBO stability. We will discuss this later.

As we have seen above, the locations of the poles, and the values of the real and imaginary parts of the pole determine the response of the system. Real parts correspond to exponentials, and imaginary parts correspond to sinusoidal values. Addition of poles to the transfer function has the effect of pulling the root locus to the right, making the system less stable. Addition of zeros to the transfer function has the effect of pulling the root locus to the left, making the system more stable.

Second-Order Systems

The canonical form for a second order system is as follows:


[Second-order transfer function]
G(s) = Kω^2 / (s^2 + 2ζωs + ω^2)

Where K is the system gain, ζ is called the damping ratio of the function, and ω is called the natural frequency of the system. If ζ and ω are exactly known for a second-order system, the time responses can be easily plotted and stability can easily be checked. More information on second order systems can be found here.

Damping Ratio

The damping ratio of a second-order system, denoted with the Greek letter zeta (ζ), is a real number that defines the damping properties of the system. More damping reduces the percent overshoot but generally slows the response. Damping is the inherent ability of the system to oppose the oscillatory nature of the system's transient response. Larger values of the damping coefficient or damping factor produce transient responses that are less oscillatory.
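
The effect of the damping ratio can be seen by plotting the step response of the canonical second-order system for a few values of ζ (Control System Toolbox assumed; the gain and natural frequency are arbitrary example values):

K  = 1;                                      % example system gain
wn = 1;                                      % example natural frequency, rad/s
hold on
for zeta = [0.1 0.3 0.7 1.0]
    sys = tf(K*wn^2, [1 2*zeta*wn wn^2]);    % canonical second-order system
    step(sys);                               % overshoot decreases as zeta increases
end
hold off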

Natural Frequency

The natural frequency is occasionally written with a subscript:

We will omit the subscript when it is clear that we are talking about the natural frequency, but we will include the subscript when we are using other values for the variable ω.

Higher-Order Systems


Modern Controls

The modern method of controls uses systems of special state-space equations to model and manipulate systems. The state variable model is broad enough to be useful in describing a wide range of systems, including systems that cannot be adequately described using the Laplace Transform. These chapters will require the reader to have a solid background in linear algebra, and multi-variable calculus.


State-Space Equations

Time-Domain Approach

The "Classical" method of controls (what we have been studying so far) has been based mostly in the transform domain. When we want to control the system in general, we represent it using the Laplace transform (Z-Transform for digital systems) and when we want to examine the frequency characteristics of a system we use the Fourier Transform. The question arises, why do we do this?

Let's look at a basic second-order Laplace Transform transfer function:

We can decompose this equation in terms of the system inputs and outputs:

Now, when we take the inverse Laplace transform of our equation, we can see that:

The Laplace transform is masking the fact that we are actually dealing with second-order differential equations. The Laplace transform moves a system out of the time-domain into the complex frequency domain, so we can study and manipulate our systems as algebraic polynomials instead of linear ODEs. Given the complexity of differential equations, why would we ever want to work in the time domain?

It turns out that if we decompose our higher-order differential equations into multiple first-order equations, we can find a new method for easily manipulating the system without having to use integral transforms. The solution to this problem is state variables. By taking our multiple first-order differential equations and analyzing them in vector form, we can not only do the same things we were doing in the time domain using simple matrix algebra, but we can also easily account for systems with multiple inputs and outputs without adding much unnecessary complexity. This demonstrates why the "modern" state-space approach to controls has become popular.

State-Space

In a state-space system, the internal state of the system is explicitly accounted for by an equation known as the state equation. The system output is given in terms of a combination of the current system state, and the current system input, through the output equation. These two equations form a system of equations known collectively as state-space equations. The state-space is the vector space that consists of all the possible internal states of the system.

For a system to be modeled using the state-space method, the system must meet this requirement:

  1. The system must be "lumped"

"Lumped" in this context, means that we can find a finite-dimensional state-space vector which fully characterises all such internal states of the system.

This text mostly considers linear state-space systems, where the state and output equations satisfy the superposition principle. However, the state-space approach is equally valid for nonlinear systems, although some of the specific methods presented here are not applicable to them.

State

Central to the state-space notation is the idea of a state. A state of a system is the current value of the internal elements of the system, which change separately from (but are not completely unrelated to) the output of the system. In essence, the state of a system is an explicit account of the values of the internal system components. Here are some examples:

Consider an electric circuit with both an input and an output terminal. This circuit may contain any number of inductors and capacitors. The state variables may represent the magnetic and electric fields of the inductors and capacitors, respectively.

Consider a spring-mass-dashpot system. The state variables may represent the compression of the spring, or the acceleration at the dashpot.

Consider a chemical reaction where certain reagents are poured into a mixing container, and the output is the amount of the chemical product produced over time. The state variables may represent the amounts of un-reacted chemicals in the container, or other properties such as the quantity of thermal energy in the container that can serve to facilitate the reaction.

State Variables

When modeling a system using a state-space equation, we first need to define three vectors:

Input variables
A SISO (Single-Input Single-Output) system will only have one input value, but a MIMO (Multiple-Input Multiple-Output) system may have multiple inputs. We need to define all the inputs to the system and arrange them into a vector.
Output variables
This is the system output value, and in the case of MIMO systems we may have several. Output variables should be independent of one another, and only dependent on a linear combination of the input vector and the state vector.
State Variables
The state variables represent values from inside the system that can change over time. In an electric circuit for instance, the node voltages or the mesh currents can be state variables. In a mechanical system, the forces applied by springs, gravity, and dashpots can be state variables.

We denote the input variables with u, the output variables with y, and the state variables with x. In essence, we have the following relationship:

Where f(x, u) is our system. Also, the state variables can change with respect to the current state and the system input:

Where x' is the rate of change of the state variables. We will define f(u, x) and g(u, x) in the next chapter.

Multi-Input, Multi-Output

In the Laplace domain, if we want to account for systems with multiple inputs and multiple outputs, we are going to need to rely on the principle of superposition to create a system of simultaneous Laplace equations for each input and output. For such systems, the classical approach not only doesn't simplify the situation, but because the systems of equations need to be transformed into the frequency domain first, manipulated, and then transformed back into the time domain, they can actually be more difficult to work with. However, the Laplace domain technique can be combined with the State-Space techniques discussed in the next few chapters to bring out the best features of both techniques. We will discuss MIMO systems in the MIMO Systems Chapter.

State-Space Equations

In a state-space system representation, we have a system of two equations: an equation for determining the state of the system, and another equation for determining the output of the system. We will use the variable y(t) as the output of the system, x(t) as the state of the system, and u(t) as the input of the system. We use the notation x'(t) (note the prime) for the first derivative of the state vector of the system, as dependent on the current state of the system and the current input. Symbolically, we say that there are transforms g and h, that display this relationship:

Note:
If x'(t) and y(t) are not linear combinations of x(t) and u(t), the system is said to be nonlinear. We will attempt to discuss non-linear systems in a later chapter.

The first equation shows that the system state change is dependent on the previous system state, the initial state of the system, the time, and the system inputs. The second equation shows that the system output is dependent on the current system state, the system input, and the current time.

If the system state change x'(t) and the system output y(t) are linear combinations of the system state and input vectors, then we can say the systems are linear systems, and we can rewrite them in matrix form:


[State Equation]
x'(t) = A(t) x(t) + B(t) u(t)


[Output Equation]
y(t) = C(t) x(t) + D(t) u(t)

If the systems themselves are time-invariant, we can re-write this as follows:

The State Equation shows the relationship between the system's current state and its input, and the future state of the system. The Output Equation shows the relationship between the system state and its input, and the output. These equations show that in a given system, the current output is dependent on the current input and the current state. The future state is also dependent on the current state and the current input.

It is important to note at this point that the state space equations of a particular system are not unique, and there are an infinite number of ways to represent these equations by manipulating the A, B, C and D matrices using row operations. There are a number of "standard forms" for these matrices, however, that make certain computations easier. Converting between these forms will require knowledge of linear algebra.

State-Space Basis Theorem
Any system that can be described by a finite number of nth order differential equations or nth order difference equations, or any system that can be approximated by them, can be described using state-space equations. The general solutions to the state-space equations, therefore, are solutions to all such sets of equations.

Matrices: A B C D

Our system has the form:

We've bolded several quantities to try and reinforce the fact that they can be vectors, not just scalar quantities. If these systems are time-invariant, we can simplify them by removing the time variables:

Now, if we take the partial derivatives of these functions with respect to the input and the state vector at time t0, we get our system matrices:

In our time-invariant state space equations, we write these matrices and their relationships as:

We have four constant matrices: A, B, C, and D. We will explain these matrices below:

Matrix A
Matrix A is the system matrix, and relates how the current state affects the state change x'. If the state change is not dependent on the current state, A will be the zero matrix. The exponential of the state matrix, e^(At), is called the state transition matrix, and is an important function that we will describe below.
Matrix B
Matrix B is the control matrix, and determines how the system input affects the state change. If the state change is not dependent on the system input, then B will be the zero matrix.
Matrix C
Matrix C is the output matrix, and determines the relationship between the system state and the system output.
Matrix D
Matrix D is the feed-forward matrix, and allows for the system input to affect the system output directly. A basic feedback system like those we have previously considered does not have a feed-forward element, and therefore for most of the systems we have already considered, the D matrix is the zero matrix.

Matrix Dimensions

Because we are adding and multiplying multiple matrices and vectors together, we need to be absolutely certain that the matrices have compatible dimensions, or else the equations will be undefined. For integer values p, q, and r, the dimensions of the system matrices and vectors are defined as follows:

Matrix Dimensions:
A: p × p
B: p × q
C: r × p
D: r × q

Vector Dimensions:
x: p × 1
u: q × 1
y: r × 1

If the matrix and vector dimensions do not agree with one another, the equations are invalid and the results will be meaningless. Matrices and vectors must have compatible dimensions or they cannot be combined using matrix operations.

For the rest of the book, we will be using the small template on the right as a reminder about the matrix dimensions, so that we can keep a constant notation throughout the book.

Notational Shorthand

The state equations and the output equations of systems can be expressed in terms of matrices A, B, C, and D. Because the form of these equations is always the same, we can use an ordered quadruplet to denote a system. We can use the shorthand (A, B, C, D) to denote a complete state-space representation. Also, because the state equation is very important for our later analysis, we can write an ordered pair (A, B) to refer to the state equation:

Obtaining the State-Space Equations

The beauty of state equations is that they can be used to transparently describe systems that are both continuous and discrete in nature. Some texts will differentiate notation between discrete and continuous cases, but this text will not make such a distinction. Instead we will opt to use the generic coefficient matrices A, B, C and D for both continuous and discrete systems. Occasionally this book may employ the subscript C to denote a continuous-time version of the matrix, and the subscript D to denote the discrete-time version of the same matrix. Other texts may use the letters F, H, and G for continuous systems, and Γ and Θ for discrete systems. However, if we keep track of our time-domain system, we don't need to worry about such notations.

From Differential Equations

Let's say that we have a general 3rd order differential equation in terms of input u(t) and output y(t):

We can create the state variable vector x in the following manner:

Which now leaves us with the following 3 first-order equations:

Now, we can define the state vector x in terms of the individual x components, and we can create the future state vector as well:

,

And with that, we can assemble the state-space equations for the system:

Granted, this is only a simple example, but the method should become apparent to most readers.
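
As a concrete sketch of this procedure (using a made-up third-order equation, y''' + 6y'' + 11y' + 6y = u, rather than the one in the text), choosing x1 = y, x2 = y', and x3 = y'' gives the matrices below, which can be entered directly in MATLAB:

A = [ 0   1   0;
      0   0   1;
     -6 -11  -6];       % state matrix built from the assumed coefficients
B = [0; 0; 1];          % the input u enters through the highest derivative
C = [1 0 0];            % the output is y = x1
D = 0;                  % no direct feed-forward term
sys = ss(A, B, C, D);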

From Transfer Functions

The method of obtaining the state-space equations from the Laplace-domain transfer functions is very similar to the method of obtaining them from the time-domain differential equations. We call the process of converting a system description from the Laplace domain to the state-space domain realization. We will discuss realization in more detail in a later chapter. In general, let's say that we have a transfer function of the form:

We can write our A, B, C, and D matrices as follows:

This form of the equations is known as the controllable canonical form of the system matrices, and we will discuss this later.

Notice that to perform this method, the denominator and numerator polynomials must be monic; that is, the coefficient of the highest-order term must be 1. If the coefficient of the highest-order term is not 1, you must divide your equation by that coefficient to make it 1.

State-Space Representation

As an important note, remember that the state variables x are user-defined and therefore are arbitrary. There are any number of ways to define x for a particular problem, each of which will lead to a different set of state-space equations.

Note: There are an infinite number of equivalent ways to represent a system using state-space equations. Some ways are better than others. Once these state-space equations are obtained, they can be manipulated to take a particular form if needed.

Consider the previous continuous-time example. We can rewrite the equation in the form

.

We now define the state variables

with first-order derivatives


The state-space equations for the system will then be given by

x may also be used in any number of variable transformations, as a matter of mathematical convenience. However, the variables y and u correspond to physical signals, and may not be arbitrarily selected, redefined, or transformed as x can be.

Example: Dummy Variables

The altitude control of a particular manned aircraft can be given by:

Where α is the direction the aircraft is traveling in, θ is the direction the aircraft is facing (the attitude), and δ is the angle of the ailerons (the control input from the pilot). This equation is not in a proper format, so we need to produce some dummy-variables:

This in turn will provide us with our state equation:

As we can see from this equation, even though we have a valid state-equation, the variables θ1 and θ2 don't necessarily correspond to any measurable physical event, but are instead dummy variables constructed by the user to help define the system. Note, however, that the variables α and δ do correspond to physical values, and cannot be changed.

Discretization

If we have a system (A, B, C, D) that is defined in continuous time, we can discretize the system so that an equivalent process can be performed using a digital computer. We can use the definition of the derivative, as such:

And substituting this into the state equation with some approximation (and ignoring the limit for now) gives us:

We are able to remove that limit because in a discrete system, the time interval between samples is positive and non-negligible. By definition, a discrete system is only defined at certain time points, and not at all time points as the limit would have indicated. In a discrete system, we are interested only in the value of the system at discrete points. If those points are evenly spaced by every T seconds (the sampling time), then the samples of the system occur at t = kT, where k is an integer. Substituting kT for t into our equation above gives us:

Or, using the square-bracket shorthand that we've developed earlier, we can write:

In this form, the state-space system can be implemented quite easily into a digital computer system using software, not complicated analog hardware. We will discuss this relationship and digital systems more specifically in a later chapter.

We will write out the discrete-time state-space equations as:

x[n + 1] = A x[n] + B u[n]
y[n] = C x[n] + D u[n]
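
Rather than applying the derivative approximation by hand, a continuous-time state-space model can be discretized in MATLAB with the c2d command (Control System Toolbox assumed; the matrices and sampling time below are arbitrary example values):

A = [0 1; -2 -3];  B = [0; 1];
C = [1 0];         D = 0;
T = 0.1;                        % example sampling time, in seconds
sysc = ss(A, B, C, D);          % continuous-time model
sysd = c2d(sysc, T, 'zoh');     % discrete-time model; sysd.A, sysd.B, etc. hold the new matrices
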
Note on Notations

The variable T is a common variable in control systems, especially when talking about the beginning and end points of a continuous-time system, or when discussing the sampling time of a digital system. However, another common use of the letter T is to signify the transpose operation on a matrix. To alleviate this ambiguity, we will denote the transpose of a matrix with a prime:

Where A' is the transpose of matrix A.

The prime notation is also frequently used to denote the time-derivative. Most of the matrices that we will be talking about are time-invariant; there is no ambiguity because we will never take the time derivative of a time-invariant matrix. However, for a time-variant matrix we will use the following notations to distinguish between the time-derivative and the transpose:

A(t)' denotes the transpose.
A'(t) denotes the time-derivative.

Note that certain variables which are time-variant are not written with the (t) postscript, such as the variables x, y, and u. For these variables, the default behavior of the prime is the time-derivative, such as in the state equation. If the transpose needs to be taken of one of these vectors, the (t)' postfix will be added explicitly to correspond to our notation above.

For instances where we need to use the Hermitian transpose, we will use the notation:

This notation is common in other literature, and raises no obvious ambiguities here.

MATLAB Representation

This operation can be performed using the MATLAB command ss.

State-space systems can be represented in MATLAB using the 4 system matrices, A, B, C, and D. We can create a system data structure using the ss function:

sys = ss(A, B, C, D);

Systems created in this way can be manipulated in the same way that the transfer function descriptions (described earlier) can be manipulated. To convert a transfer function to a state-space representation, we can use the tf2ss function:

[A, B, C, D] = tf2ss(num, den);

And to perform the opposite operation, we can use the ss2tf function:

[num, den] = ss2tf(A, B, C, D);
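
As a short usage sketch (with arbitrary example polynomials), the two conversions can be chained to show that they are inverses of one another:

num = [1 2];                        % example numerator, s + 2
den = [1 3 2];                      % example denominator, s^2 + 3s + 2
[A, B, C, D] = tf2ss(num, den);     % realize the transfer function in state-space form
[num2, den2] = ss2tf(A, B, C, D);   % convert back; recovers the original polynomials (possibly with leading zeros)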


Solutions for Linear Systems

State Equation Solutions

The solutions in this chapter are heavily rooted in prior knowledge of Ordinary Differential Equations. Readers should have a prior knowledge of that subject before reading this chapter.

The state equation is a first-order linear differential equation, or (more precisely) a system of linear differential equations. Because this is a first-order equation, we can use results from Ordinary Differential Equations to find a general solution to the equation in terms of the state-variable x. Once the state equation has been solved for x, that solution can be plugged into the output equation. The resulting equation will show the direct relationship between the system input and the system output, without the need to account explicitly for the internal state of the system. The sections in this chapter will discuss the solutions to the state-space equations, starting with the easiest case (Time-invariant, no input), and ending with the most difficult case (Time-variant systems).

Solving for x(t) With Zero Input

Looking again at the state equation:

We can see that this equation is a first-order differential equation, except that the variables are vectors, and the coefficients are matrices. However, because of the rules of matrix calculus, these distinctions don't matter. We can ignore the input term (for now), and rewrite this equation in the following form:

And we can separate out the variables as such:

Integrating both sides, and raising both sides to a power of e, we obtain the result:

Where C is a constant of integration. We can define D = e^C to make the equation easier, and we recognize that D is then the vector of initial conditions of the system. This becomes obvious if we plug the value zero into the variable t. The final solution to this equation is then given as:

x(t) = e^(At) x(0)

We call the matrix exponential e^(At) the state-transition matrix, and calculating it, while difficult at times, is crucial to analyzing and manipulating systems. We will talk more about calculating the matrix exponential below.
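
As a quick numerical illustration, the matrix exponential can be evaluated with scipy's expm function and multiplied by the initial state to produce x(t) = e^(At) x(0). This is a minimal sketch; the system matrix and initial condition below are hypothetical:

import numpy as np
from scipy.linalg import expm

# Hypothetical system matrix and initial state, with zero input
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])

# x(t) = e^(At) x(0), evaluated at a few time points
for t in [0.0, 0.5, 1.0, 2.0]:
    x_t = expm(A * t) @ x0
    print(t, x_t)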

Solving for x(t) With Non-Zero Input

If, however, our input is non-zero (as is generally the case with any interesting system), our solution is a little bit more complicated. Notice that now that we have our input term in the equation, we will no longer be able to separate the variables and integrate both sides easily.

We subtract the Ax(t) term from both sides, moving it to the left side, and then we do something curious; we premultiply both sides by the inverse state transition matrix, e^(-At):

The rationale for this last step may seem fuzzy at best, so we will illustrate the point with an example:

Example

Take the derivative of the following with respect to time:

The product rule from differentiation reminds us that if we have two functions multiplied together:

and we differentiate with respect to t, then the result is:

If we set our functions accordingly:

Then the output result is:

If we look at this result, it is the same as the left-hand side of our equation above.

Using the result from our example, we can condense the left side of our equation into a derivative:

Now we can integrate both sides from the initial time (t0) to the current time (t), using a dummy variable of integration τ. Finally, premultiplying by e^(At), we get our final result:


x(t) = e^(A(t - t0)) x(t0) + ∫_{t0}^{t} e^(A(t - τ)) B u(τ) dτ
[General State Equation Solution]

If we plug this solution into the output equation, we get:


y(t) = C e^(A(t - t0)) x(t0) + C ∫_{t0}^{t} e^(A(t - τ)) B u(τ) dτ + D u(t)
[General Output Equation Solution]

This is the general Time-Invariant solution to the state space equations, with non-zero input. These equations are important results, and students who are interested in a further study of control systems would do well to memorize these equations.
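
As a sanity check, the general solution can be evaluated numerically and compared against a standard simulation routine. The sketch below assumes scipy is available and uses a hypothetical system with a unit-step input; the convolution integral is approximated with the trapezoidal rule and compared with scipy.signal.lsim:

import numpy as np
from scipy.linalg import expm
from scipy.signal import lsim

# Hypothetical system matrices, initial state, and a unit-step input
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
x0 = np.array([1.0, 0.0])

t = np.linspace(0.0, 5.0, 201)
y_analytic = np.zeros(len(t))

# y(t) = C e^(At) x(0) + C * integral of e^(A(t - tau)) B u(tau) dtau + D u(t), with u = 1
for k, tk in enumerate(t):
    taus = np.linspace(0.0, tk, 101)
    integrand = np.array([(expm(A * (tk - tau)) @ B).ravel() for tau in taus])
    integral = np.trapz(integrand, taus, axis=0)
    y_analytic[k] = (C @ (expm(A * tk) @ x0 + integral) + D @ [1.0]).item()

# lsim simulates the same system numerically; the two results should agree closely
_, y_lsim, _ = lsim((A, B, C, D), U=np.ones(len(t)), T=t, X0=x0)
print(np.max(np.abs(y_analytic - y_lsim)))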

State-Transition Matrix

More information about matrix exponentials can be found in:
Engineering Analysis

The state transition matrix, e^(At), is an important part of the general state-space solutions for the time-invariant cases listed above. Calculating this matrix exponential function is one of the very first things that should be done when analyzing a new system, and the results of that calculation will reveal important information about the system in question.

The matrix exponential can be calculated directly by using a Taylor-Series expansion:

e^(At) = I + At + (At)^2/2! + (At)^3/3! + ... = Σ_{k=0}^{∞} (At)^k / k!
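
The sketch below, assuming numpy and scipy are available, builds the truncated series term by term for a hypothetical matrix and compares it against scipy's expm:

import numpy as np
from scipy.linalg import expm

def expm_taylor(A, t, terms=20):
    # Truncated Taylor series: e^(At) ~ sum over k of (At)^k / k!
    M = A * t
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ M / k        # builds (At)^k / k! incrementally
        result = result + term
    return result

# Hypothetical system matrix
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(np.max(np.abs(expm_taylor(A, 1.5) - expm(A * 1.5))))   # small truncation error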

More information about diagonal matrices and Jordan-form matrices can be found in:
Engineering Analysis

Also, we can attempt to diagonalize the matrix A into a diagonal matrix or a Jordan Canonical matrix. The exponential of a diagonal matrix is computed by exponentiating each of the diagonal elements individually. The exponential of a Jordan canonical matrix is slightly more complicated, but there is a useful pattern that can be exploited to find the solution quickly. Interested readers should read the relevant passages in Engineering Analysis.

The state transition matrix, and matrix exponentials in general are very important tools in control engineering.

Diagonal Matrices

If a matrix is diagonal, the state transition matrix can be calculated by exponentiating each diagonal entry individually: each diagonal entry a_ii of A becomes e^(a_ii t) in e^(At).

Jordan Canonical Form

If the A matrix is in the Jordan Canonical form, then the matrix exponential can be generated quickly using the following formula:

Where λ is the eigenvalue (the value on the diagonal) of the Jordan canonical matrix.

Inverse Laplace Method

We can calculate the state-transition matrix (or any matrix exponential function) by taking the following inverse Laplace transform:

If A is a high-order matrix, this inverse can be difficult to solve.
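
This calculation can also be automated with the sympy library. The following is a minimal sketch that forms the resolvent (sI - A)^-1 and applies the inverse Laplace transform to each entry, using the same 2 × 2 matrix that appears in the examples later in this chapter (sympy may attach a Heaviside(t) factor, which equals 1 for t > 0):

import sympy as sp

t, s = sp.symbols('t s', positive=True)
A = sp.Matrix([[0, 1], [-1, 0]])

# Resolvent (sI - A)^-1, then an entry-by-entry inverse Laplace transform
resolvent = (s * sp.eye(2) - A).inv()
Phi = resolvent.applyfunc(lambda entry: sp.inverse_laplace_transform(entry, s, t))
print(sp.simplify(Phi))   # expected: [[cos(t), sin(t)], [-sin(t), cos(t)]]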

Spectral Decomposition

If we know all the eigenvalues of A, we can create our transition matrix T and our inverse transition matrix T^-1. These matrices will be the matrices of the right and left eigenvectors, respectively. If we have both the left and the right eigenvectors, we can calculate the state-transition matrix as:


[Spectral Decomposition]

Note that w_i' is the transpose of the i-th left-eigenvector, not the derivative of it. We will discuss the concepts of "eigenvalues", "eigenvectors", and the technique of spectral decomposition in more detail in a later chapter.
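
Numerically, the spectral decomposition e^(At) = Σ_i e^(λ_i t) v_i w_i' can be checked with numpy. This is a minimal sketch for a hypothetical matrix with distinct eigenvalues; the rows of the inverse eigenvector matrix serve as the left eigenvectors:

import numpy as np
from scipy.linalg import expm

# Hypothetical system matrix with distinct eigenvalues
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t = 0.7

lam, V = np.linalg.eig(A)    # eigenvalues, right eigenvectors (columns of V)
W = np.linalg.inv(V)         # rows of W are the corresponding left eigenvectors

# e^(At) = sum of e^(lambda_i t) * v_i * w_i'
eAt = sum(np.exp(lam[i] * t) * np.outer(V[:, i], W[i, :]) for i in range(len(lam)))
print(np.max(np.abs(eAt - expm(A * t))))   # should be numerically tiny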

Cayley-Hamilton Theorem

For more information on the Cayley-Hamilton Theorem, see:
Engineering Analysis

The Cayley-Hamilton Theorem can also be used to find a solution for a matrix exponential. For any eigenvalue of the system matrix A, λ, we can show that the two equations are equivalent:

Once we solve for the coefficients a_i of this equation, we can then plug those coefficients into the following equation:
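
For an n × n matrix with n distinct eigenvalues, the Cayley-Hamilton approach writes e^(At) = a_0 I + a_1 A + ... + a_(n-1) A^(n-1), where the coefficients satisfy e^(λ_i t) = a_0 + a_1 λ_i + ... + a_(n-1) λ_i^(n-1) for each eigenvalue. The sketch below, assuming sympy is available, applies this to the same 2 × 2 matrix used in the next example:

import sympy as sp

t = sp.symbols('t', positive=True)
A = sp.Matrix([[0, 1], [-1, 0]])
n = A.shape[0]

lams = list(A.eigenvals().keys())        # eigenvalues: I and -I
a = sp.symbols('a0:%d' % n)              # unknown coefficients a0, a1

# One equation e^(lambda t) = a0 + a1*lambda per eigenvalue
eqs = [sum(a[k] * lam**k for k in range(n)) - sp.exp(lam * t) for lam in lams]
coeffs = sp.solve(eqs, a)

# e^(At) = a0*I + a1*A, then rewritten in terms of sines and cosines
eAt = sum((coeffs[a[k]] * A**k for k in range(n)), sp.zeros(n, n))
print(sp.simplify(eAt.expand(complex=True)))   # expected: [[cos(t), sin(t)], [-sin(t), cos(t)]]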

Example: Off-Diagonal Matrix

Given the following matrix A, find the state-transition matrix:

A =
[  0,  1]
[ -1,  0]

We can find the eigenvalues of this matrix as λ = i, -i. If we plug these values into our eigenvector equation, we get:

And we can solve for our eigenvectors:

With our eigenvectors, we can solve for our left-eigenvectors:

Now, using spectral decomposition, we can construct the state-transition matrix:

If we remember Euler's formula, we can decompose the complex exponentials into sinusoids. Performing the vector multiplications, all the imaginary terms cancel out, and we are left with our result:

The reader is encouraged to perform the multiplications, and attempt to derive this result.

Example: Sympy Calculation

With the freely available python library 'sympy' we can very easily calculate the state-transition matrix automatically:

>>> from sympy import *
>>> t = symbols('t', positive=True)
>>> A = Matrix([[0,1],[-1,0]])
>>> exp(A*t).expand(complex=True)

⎡cos(t)   sin(t)⎤
⎢               ⎥
⎣-sin(t)  cos(t)⎦

You can also try it out yourself on this website:

sympy live

Example: MATLAB Calculation

Using the symbolic toolbox in MATLAB, we can write MATLAB code to automatically generate the state-transition matrix for a given input matrix A. Here is an example of MATLAB code that can perform this task:

function [phi] = statetrans(A)
   t = sym('t');
   phi = expm(A * t);
end

Use this MATLAB function to find the state-transition matrix for the following matrices (warning, calculation may take some time):

Matrix 1 is a diagonal matrix, Matrix 2 has complex eigenvalues, and Matrix 3 is in Jordan canonical form. These three matrices should be representative of some of the common forms of system matrices. The following code snippets are the input commands into MATLAB to produce these matrices, along with the output results:

Matrix A1
>> A1 = [2 0 ; 0 2];
>> statetrans(A1)
 
ans =
 
[ exp(2*t),        0]
[        0, exp(2*t)]
Matrix A2
>> A2 = [0 1 ; -1 0];
>> statetrans(A2)
 
ans =
 
[  cos(t),  sin(t)]
[ -sin(t),  cos(t)]
Matrix A3
>> A3 = [2 1 ; 0 2];
>> statetrans(A3)
 
ans =
 
[   exp(2*t), t*exp(2*t)]
[          0,   exp(2*t)]

Example: Multiple Methods in MATLAB

There are multiple methods in MATLAB to compute the state transition matrix from a constant (time-invariant) matrix A. The following methods all rely on the Symbolic Toolbox to perform the equation manipulations. At the end of each code snippet, the variable eAt contains the state-transition matrix of matrix A.

Direct Method
t = sym('t');
eAt = expm(A * t);
Laplace Transform Method
s = sym('s');
[n,n] = size(A);
in = inv(s*eye(n) - A);
eAt = ilaplace(in);
Spectral Decomposition
t = sym('t');
n = size(A, 1);
[V, e] = eig(A);
W = inv(V);
eAt = sym(zeros(n));
for I = 1:n
   eAt = eAt + exp(e(I,I)*t)*V(:,I)*W(I,:);
end

All three of these methods should produce the same answers. The student is encouraged to verify this.


Time-Variant System Solutions

General Time Variant Solution

The state-space equations can be solved for time-variant systems, but the solution is significantly more complicated than the time-invariant case. Our time-variant state equation is given as follows:

x'(t) = A(t) x(t) + B(t) u(t)

We can say that the general solution to time-variant state-equation is defined as:


x(t) = φ(t, t0) x(t0) + ∫_{t0}^{t} φ(t, τ) B(τ) u(τ) dτ
[Time-Variant General Solution]

Matrix Dimensions:
A: p × p
B: p × q
C: r × p
D: r × q

The function φ(t, t0) is called the state-transition matrix, because (like the matrix exponential from the time-invariant case) it governs how the state changes in the state equation. However, unlike the time-invariant case, we cannot define it as a simple exponential. In fact, φ cannot be written down in a single general form, because it will actually be a different function for every system. However, the state-transition matrix does follow some basic properties that we can use to determine it.

In a time-variant system, the general solution is obtained when the state-transition matrix is determined. For that reason, the first thing (and the most important thing) that we need to do here is find that matrix. We will discuss the solution to that matrix below.

State Transition Matrix

Note:
The state transition matrix is a matrix function of two variables (we will say t and τ). Once the form of the matrix is solved, we will plug in the initial time, t0 in place of the variable τ. Because of the nature of this matrix, and the properties that it must satisfy, this matrix typically is composed of exponential or sinusoidal functions. The exact form of the state-transition matrix is dependent on the system itself, and the form of the system's differential equation. There is no single "template solution" for this matrix.

The state transition matrix is not completely unknown; it must always satisfy the following relationships:

And also must have the following properties:

1.
2.
3.
4.

If the system is time-invariant, we can define φ(t, t0) as:

φ(t, t0) = e^(A(t - t0))

The reader can verify that this solution for a time-invariant system satisfies all the properties listed above. However, in the time-variant case, there are many different functions that may satisfy these requirements, and the solution is dependent on the structure of the system. The state-transition matrix must be determined before analysis on the time-varying solution can continue. We will discuss some of the methods for determining this matrix below.

Time-Variant, Zero Input

As the most basic case, we will consider a system with zero input. If the system has no input, then the state equation is given as:

x'(t) = A(t) x(t)

And we are interested in the response of this system in the time interval T = (a, b). The first thing we want to do in this case is find a fundamental matrix of the above equation. The fundamental matrix is defined below.

Fundamental Matrix

Here, x is an n × 1 vector, and A is an n × n matrix.

Given the equation:

The solutions to this equation form an n-dimensional vector space in the interval T = (a, b). Any set of n linearly-independent solutions {x1, x2, ..., xn} to the equation above is called a fundamental set of solutions.

Readers who have a background in Linear Algebra may recognize that the fundamental set is a basis set for the solution space. Any basis set that spans the entire solution space is a valid fundamental set.

A fundamental matrix is formed by using the n fundamental solution vectors as its columns. We will denote the fundamental matrix with a script capital X:

The fundamental matrix will satisfy the state equation:

Also, any matrix that solves this equation can be a fundamental matrix if and only if the determinant of the matrix is non-zero for all time t in the interval T. The determinant must be non-zero, because we are going to use the inverse of the fundamental matrix to solve for the state-transition matrix.

State Transition Matrix

Once we have the fundamental matrix of a system, we can use it to find the state transition matrix of the system:

φ(t, τ) = X(t) X(τ)^-1

The inverse of the fundamental matrix exists, because we specify in the definition above that it must have a non-zero determinant, and therefore must be non-singular. The reader should note that this is only one possible method for determining the state transition matrix, and we will discuss other methods below.
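
The recipe φ(t, τ) = X(t) X(τ)^-1 is easy to carry out symbolically. The sketch below, assuming sympy is available, uses a hypothetical fundamental matrix whose columns are two independent solutions of x'(t) = Ax(t) with A = [0 1; 1 0], and then verifies the basic properties of the resulting state-transition matrix:

import sympy as sp

t, tau = sp.symbols('t tau', real=True)

# Hypothetical fundamental matrix: columns are two independent solutions
# of x'(t) = A x(t) with A = [[0, 1], [1, 0]]
X = lambda v: sp.Matrix([[sp.exp(v),  sp.exp(-v)],
                         [sp.exp(v), -sp.exp(-v)]])

# State-transition matrix: phi(t, tau) = X(t) * X(tau)^-1
phi = sp.simplify(X(t) * X(tau).inv())
print(phi)   # cosh/sinh of (t - tau)

# Sanity checks: phi(tau, tau) = I, and d/dt phi(t, tau) = A phi(t, tau)
A = sp.Matrix([[0, 1], [1, 0]])
print(sp.simplify(phi.subs(t, tau)))
print(sp.simplify(sp.diff(phi, t) - A * phi))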

Example: 2-Dimensional System

Given the following fundamental matrix, find the state-transition matrix.

The first task is to find the inverse of the fundamental matrix. Because the fundamental matrix is a 2 × 2 matrix, the inverse can be found easily through a common formula:

The state-transition matrix is given by:

Other Methods

There are other methods for finding the state transition matrix besides having to find the fundamental matrix.

Method 1
If A(t) is triangular (upper or lower triangular), the state transition matrix can be determined by sequentially integrating the individual rows of the state equation.
Method 2
If for every τ and t, the state matrix commutes as follows:
Then the state-transition matrix can be given as:
The state transition matrix will commute as described above if any of the following conditions are true:
  1. A is a constant matrix (time-invariant)
  2. A is a diagonal matrix
  3. If A(t) = M f(t), where M is a constant matrix and f(t) is a scalar-valued function (not a matrix).
If none of the above conditions are true, then you must use method 3.
Method 3
If A(t) can be decomposed as the following sum:
Where each M_i is a constant matrix such that M_i M_j = M_j M_i for all i and j, and each f_i is a scalar-valued function. If A(t) can be decomposed in this way, then the state-transition matrix can be given as:

It will be left as an exercise for the reader to prove that if A(t) is time-invariant, the equation in method 2 above reduces to the time-invariant state-transition matrix e^(A(t - t0)).
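
For the commuting case of method 2, the state-transition matrix is the matrix exponential of the integral of A(τ) from t0 to t. The sketch below, assuming sympy is available, uses a hypothetical diagonal (and therefore commuting) A(t) and checks that the result satisfies the state equation:

import sympy as sp

t, tau, t0 = sp.symbols('t tau t0', real=True)

# Hypothetical time-variant system matrix; it is diagonal, so A(t1)A(t2) = A(t2)A(t1)
A = lambda v: sp.Matrix([[v, 0], [0, 1]])

# phi(t, t0) = exp( integral of A(tau) from t0 to t )
M = A(tau).applyfunc(lambda entry: sp.integrate(entry, (tau, t0, t)))
phi = sp.simplify(M.exp())
print(phi)   # diag( exp((t^2 - t0^2)/2), exp(t - t0) )

# Check that d/dt phi(t, t0) = A(t) phi(t, t0)
print(sp.simplify(sp.diff(phi, t) - A(t) * phi))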

Example: Using Method 3

Use method 3, above, to compute the state-transition matrix for the system if the system matrix A is given by:

We can decompose this matrix as follows:

Where f1(t) = t, and f2(t) = 1. Using the formula described above gives us:

Solving the two integrations gives us:

The first term is a diagonal matrix, and its matrix exponential is found by exponentiating each of the diagonal elements individually. The second term can be decomposed as:

The final solution is given as:

Time-Variant, Non-zero Input

If the input to the system is not zero, it turns out that all the analysis that we performed above still holds. We can still construct the fundamental matrix, and we can still represent the system solution in terms of the state transition matrix φ(t, τ).

We can show that the general solution to the state-space equations is then:

x(t) = φ(t, t0) x(t0) + ∫_{t0}^{t} φ(t, τ) B(τ) u(τ) dτ



Digital State-Space

Digital Systems

Digital systems, expressed previously as difference equations or Z-Transform transfer functions, can also be used with the state-space representation. All the same techniques for dealing with analog systems can be applied to digital systems with only minor changes.

Digital Systems

For digital systems, we can write similar equations using discrete data sets:

x[k + 1] = A x[k] + B u[k]
y[k] = C x[k] + D u[k]

Zero-Order Hold Derivation

If we have a continuous-time state equation:

We can derive the digital version of this equation that we discussed above. We take the Laplace transform of our equation:

Now, taking the inverse Laplace transform gives us our time-domain system, keeping in mind that the inverse Laplace transform of the (sI - A)^-1 term is our state-transition matrix, Φ:

Now, we apply a zero-order hold on our input, to make the system digital. Notice that we set our start time t0 = kT, because we are only interested in the behavior of our system during a single sample period:

We were able to remove u(kT) from the integral because it did not rely on τ. We now define a new function, Γ, as follows:

Inserting this new expression into our equation, and setting t = (k + 1)T gives us:

Now Φ(T) and Γ(T) are constant matrices, and we can give them new names. The d subscript denotes that they are digital versions of the coefficient matrices:

We can use these values in our state equation, converting to our bracket notation instead:
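
The resulting update x[k + 1] = A_d x[k] + B_d u[k], with A_d = Φ(T) = e^(AT) and B_d = Γ(T), can be computed numerically. The sketch below assumes scipy is available and uses a hypothetical continuous-time system; because this A happens to be nonsingular, B_d is computed with the shortcut B_d = A^-1 (A_d - I) B discussed later in this chapter, and the result is cross-checked against scipy's zero-order-hold discretization:

import numpy as np
from scipy.linalg import expm
from scipy.signal import cont2discrete

# Hypothetical continuous-time system and sampling period
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
T = 0.1

Ad = expm(A * T)                                  # Ad = e^(AT)
Bd = np.linalg.inv(A) @ (Ad - np.eye(2)) @ B      # Bd = A^-1 (Ad - I) B, valid since A is nonsingular

# Cross-check with scipy's zero-order-hold discretization
Ad2, Bd2, Cd2, Dd2, _ = cont2discrete((A, B, C, D), T, method='zoh')
print(np.max(np.abs(Ad - Ad2)), np.max(np.abs(Bd - Bd2)))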

Relating Continuous and Discrete Systems

Continuous and discrete systems that perform similarly can be related together through a set of relationships. It should come as no surprise that a discrete system and a continuous system will have different characteristics and different coefficient matrices. If we consider that a discrete system is the same as a continuous system, except that it is sampled with a sampling time T, then the relationships below will hold. The process of converting an analog system for use with digital hardware is called discretization. We've given a basic introduction to discretization already, but we will discuss it in more detail here.

Discrete Coefficient Matrices

Of primary importance in discretization is the computation of the associated coefficient matrices from the continuous-time counterparts. If we have the continuous system (A, B, C, D), we can use the relationship t = kT to transform the state-space solution into a sampled system:

Now, if we want to analyze the k+1 term, we can solve the equation again:

Separating out the variables, and breaking the integral into two parts gives us:

If we substitute in a new variable β = (k + 1)T - τ, and note the following relationship:

We get our final result:

Comparing this equation to our regular solution gives us a set of relationships for converting the continuous-time system into a discrete-time system. Here, we will use "d" subscripts to denote the system matrices of a discrete system, and we will use a "c" subscript to denote the system matrices of a continuous system.

Matrix Dimensions:
A: p × p
B: p × q
C: r × p
D: r × q

This operation can be performed using this MATLAB command:
c2d

If the A_c matrix is nonsingular, then we can find its inverse and instead define B_d as:

B_d = A_c^-1 (A_d - I) B_c

The differences in the discrete and continuous matrices are due to the fact that the underlying equations that describe our systems are different. Continuous-time systems are represented by linear differential equations, while the digital systems are described by difference equations. High order terms in a difference equation are delayed copies of the signals, while high order terms in the differential equations are derivatives of the analog signal.

If we have a complicated analog system, and we would like to implement that system in a digital computer, we can use the above transformations to make our matrices conform to the new paradigm.

Notation

Because the coefficient matrices for the discrete systems are computed differently from the continuous-time coefficient matrices, and because the matrices technically represent different things, it is not uncommon in the literature to denote these matrices with different variables. For instance, the following variables are used in place of A and B frequently:

These substitutions would give us a system defined by the ordered quadruple (Ω, R, C, D) for representing our equations.

As a matter of notational convenience, we will use the letters A and B to represent these matrices throughout the rest of this book.

Converting Difference Equations

Now, let's say that we have a 3rd-order difference equation that describes a discrete-time system:

From here, we can define a set of discrete state variables x in the following manner:

Which in turn gives us 3 first-order difference equations:

Again, we say that x is a column vector of the 3 state variables we have defined, and we can write our state equation in the same form as if it were a continuous-time system:

Solving for x[n]

We can find a general time-invariant solution for the discrete time difference equations. Let us start working up a pattern. We know the discrete state equation:

Starting from time n = 0, we can start to create a pattern:

With a little algebraic trickery, we can reduce this pattern to a single equation:


x[n] = A^n x[0] + Σ_{i=0}^{n-1} A^(n-1-i) B u[i]
[General State Equation Solution]

Substituting this result into the output equation gives us:


y[n] = C A^n x[0] + Σ_{i=0}^{n-1} C A^(n-1-i) B u[i] + D u[n]
[General Output Equation Solution]
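
A quick numerical check of this closed form, assuming numpy is available and using a hypothetical discrete-time system and input sequence, is to compare it against a direct recursion of the state equation:

import numpy as np

# Hypothetical discrete-time system, initial state, and input sequence
A = np.array([[0.5, 1.0], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
x0 = np.array([[1.0], [0.0]])
u = [1.0, 0.0, -1.0, 2.0, 0.5]

# Direct recursion of x[n + 1] = A x[n] + B u[n]
x = x0
for un in u:
    x = A @ x + B * un

# Closed form: x[N] = A^N x[0] + sum over i of A^(N-1-i) B u[i]
N = len(u)
x_closed = np.linalg.matrix_power(A, N) @ x0
for i, ui in enumerate(u):
    x_closed = x_closed + np.linalg.matrix_power(A, N - 1 - i) @ B * ui

print(np.allclose(x, x_closed))   # True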

Time Variant Solutions

If the system is time-variant, we have a general solution that is similar to the continuous-time case:

Where φ, the state transition matrix, is defined in a similar manner to the state-transition matrix in the continuous case. However, some of the properties in the discrete time are different. For instance, the inverse of the state-transition matrix does not need to exist, and in many systems it does not exist.

State Transition Matrix

The discrete time state transition matrix is the unique solution of the equation:

Where the following restriction must hold:

From this definition, an obvious way to calculate this state transition matrix presents itself:

Or,

MATLAB Calculations

MATLAB is a computer program, and therefore calculates all systems using digital methods. The MATLAB function lsim is used to simulate a continuous system with a specified input. This function works by calling c2d, which converts a system (A, B, C, D) into the equivalent discrete system. Once the system model is discretized, the function passes control to the dlsim function, which is used to simulate discrete-time systems with the specified input.

Because of this, simulation programs like MATLAB are subject to the round-off errors associated with the discretization process.



Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors cannot be calculated from time-variant matrices. If the system is time-variant, the methods described in this chapter will not produce valid results.

The eigenvalues and eigenvectors of the system matrix play a key role in determining the response of the system. It is important to note that only square matrices have eigenvalues and eigenvectors associated with them. Non-square matrices cannot be analyzed using the methods below.

The word "eigen" comes from German and means "own" as in "characteristic", so this chapter could also be called "Characteristic values and characteristic vectors". The terms "Eigenvalues" and "Eigenvectors" are most commonly used. Eigenvalues and Eigenvectors have a number of properties that make them valuable tools in analysis, and they also have a number of valuable relationships with the matrix from which they are derived. Computing the eigenvalues and the eigenvectors of the system matrix is one of the most important things that should be done when beginning to analyze a system matrix, second only to calculating the matrix exponential of the system matrix.

The eigenvalues and eigenvectors of the system determine the relationship between the individual system state variables (the members of the x vector), the response of the system to inputs, and the stability of the system. Also, the eigenvalues and eigenvectors can be used to calculate the matrix exponential of the system matrix through spectral decomposition. The remainder of this chapter will discuss eigenvalues, eigenvectors, and the ways that they affect their respective systems.

Characteristic Equation

The characteristic equation of the system matrix A is given as:

Av = λv
[Matrix Characteristic Equation]

Where λ are scalar values called the eigenvalues, and v are the corresponding eigenvectors. To solve for the eigenvalues of a matrix, we can solve the following determinant equation:

|λI - A| = 0

To solve for the eigenvectors, we can then substitute each eigenvalue back in and solve for v:

(λI - A)v = 0

Another value worth finding are the left eigenvectors of a system, defined as w in the modified characteristic equation:

wA = λw
[Left-Eigenvector Equation]

For more information about eigenvalues, eigenvectors, and left eigenvectors, read the appropriate sections in the Engineering Analysis wikibook.
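
Numerically, both right and left eigenvectors can be obtained from scipy. The following is a minimal sketch for a hypothetical matrix with real, distinct eigenvalues; note that scipy returns left eigenvectors as columns of vl, normalized so that vl[:, i]' A = λ_i vl[:, i]':

import numpy as np
from scipy.linalg import eig

# Hypothetical system matrix with real, distinct eigenvalues
A = np.array([[0.0, 1.0], [-2.0, -3.0]])

# w: eigenvalues, vl: left eigenvectors (columns), vr: right eigenvectors (columns)
w, vl, vr = eig(A, left=True, right=True)

for i in range(len(w)):
    # Right eigenvector equation: A v = lambda v
    print(np.allclose(A @ vr[:, i], w[i] * vr[:, i]))
    # Left eigenvector equation: w' A = lambda w'
    print(np.allclose(vl[:, i].conj() @ A, w[i] * vl[:, i].conj()))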

Diagonalization

Note:
The transition matrix T should not be confused with the sampling time of a discrete system. If needed, we will use subscripts to differentiate between the two.

If the matrix A has a complete set of distinct eigenvalues, the matrix can be diagonalized. A diagonal matrix is a matrix that only has entries on the diagonal, and all the rest of the entries in the matrix are zero. We can define a transformation matrix, T, that satisfies the diagonalization transformation:

Which in turn will satisfy the relationship:

The right-hand side of the equation may look more complicated, but because D is a diagonal matrix here (not to be confused with the feed-forward matrix from the output equation), the calculations are much easier.

We can define the transition matrix, and the inverse transition matrix in terms of the eigenvectors and the left eigenvectors:

We will further discuss the concept of diagonalization later in this chapter.
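
A minimal numerical check of the diagonalization, assuming numpy is available and using a hypothetical matrix with distinct eigenvalues, is to build T from the right eigenvectors and confirm that T^-1 A T is diagonal:

import numpy as np

# Hypothetical system matrix with distinct eigenvalues
A = np.array([[0.0, 1.0], [-2.0, -3.0]])

lam, T = np.linalg.eig(A)        # columns of T are the right eigenvectors
D = np.linalg.inv(T) @ A @ T     # T^-1 A T should be diagonal (eigenvalues on the diagonal)

print(np.round(D, 10))
print(np.allclose(D, np.diag(lam)))   # True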

Exponential Matrix Decomposition

For more information about spectral decomposition, see:
Spectral Decomposition

A matrix exponential can be decomposed into a sum of the eigenvectors, eigenvalues, and left eigenvectors, as follows:

Notice that this equation only holds in this form if the matrix A has a complete set of n distinct eigenvalues. Since w_i' is a row vector, and x(0) is a column vector of the initial system states, we can combine those two into a scalar coefficient α: