Control Systems/Digital Systems/Print version
The Wikibook of Automatic
Control Systems
And Control Systems Engineering
With
Classical and Modern Techniques
And
Advanced Concepts
Preface
This book will discuss the topic of Control Systems, which is an interdisciplinary engineering topic. Methods considered here will consist of both "Classical" control methods, and "Modern" control methods. Also, discretely sampled systems (digital/computer systems) will be considered in parallel with the more common analog methods. This book will not focus on any single engineering discipline (electrical, mechanical, chemical, etc.), although readers should have a solid foundation in the fundamentals of at least one discipline.
This book will require prior knowledge of linear algebra, integral and differential calculus, and at least some exposure to ordinary differential equations. In addition, a prior knowledge of integral transforms, specifically the Laplace and Z transforms, will be very beneficial. Also, prior knowledge of the Fourier Transform will shed more light on certain subjects. Wikibooks covering the calculus and transform topics required for this book are listed in the What are the Prerequisites? section of this introduction.
Introduction to Control Systems
What are control systems? Why do we study them? How do we identify them? The chapters in this section should answer these questions and more.
Introduction
This Wikibook
This book was written at Wikibooks, a free online community where people write open-content textbooks. Any person with internet access is welcome to participate in the creation and improvement of this book. Because this book is continuously evolving, there are no finite "versions" or "editions" of this book. Permanent links to known good versions of the pages may be provided.
What are Control Systems?
The study and design of automatic Control Systems, a field known as control engineering, has become important in modern technical society. From devices as simple as a toaster or a toilet, to complex machines like space shuttles and power steering, control engineering is a part of our everyday life. This book introduces the field of control engineering and explores some of the more advanced topics in the field. Note, however, that control engineering is a very large field and this book serves only as a foundation of control engineering and an introduction to selected advanced topics in the field. Topics in this book are added at the discretion of the authors and represent the available expertise of our contributors.
Control systems are components that are added to other components to increase functionality or meet a set of design criteria. For example:
We have a particular electric motor that is supposed to turn at a rate of 40 RPM. To achieve this speed, we must supply 10 Volts to the motor terminals. However, with 10 volts supplied to the motor at rest, it takes 30 seconds for our motor to get up to speed. This is valuable time lost.
This simple example can be complex to both users and designers of the motor system. It may seem obvious that the motor should start at a higher voltage so that it accelerates faster. Then we can reduce the supply back down to 10 volts once it reaches ideal speed.
This is clearly a simplistic example but it illustrates an important point: we can add special "Controller units" to preexisting systems to improve performance and meet new system specifications.
Here are some formal definitions of terms used throughout this book:
- Control System
- A Control System is a device, or a collection of devices that manage the behavior of other devices. Some devices are not controllable. A control system is an interconnection of components connected or related in such a manner as to command, direct, or regulate itself or another system.
A control system is a conceptual framework for designing systems with capabilities of regulation and/or tracking to achieve a desired performance. For this there must be a set of measurable signals that indicate the performance, another set of measurable signals that influence the evolution of the system in time, and a third set of signals, not measurable, that disturb that evolution.
- Controller
- A controller is a control system that manages the behavior of another device or system (using actuators). The controller is usually fed with some input signal from outside the system, which commands the system to provide a desired output. In a closed-loop system, this command signal is combined with the sensor's feedback signal from inside the system.
- Actuator
- An actuator is a device that takes in a signal from the controller and carries out some action to affect the system accordingly.
- Compensator
- A compensator is a control system that regulates another system, usually by conditioning the input or the output to that system. Compensators are typically employed to correct a single design flaw with the intention of minimizing effects on other aspects of the design.
There are essentially two methods to approach the problem of designing a new control system: the Classical Approach and the Modern Approach.
Classical and Modern
Classical and Modern control methodologies are named in a misleading way, because the group of techniques called "Classical" were actually developed later than the techniques labeled "Modern". However, in terms of developing control systems, Modern methods have been used to great effect more recently, while the Classical methods have been gradually falling out of favor. Most recently, it has been shown that Classical and Modern methods can be combined to highlight their respective strengths and weaknesses.
Classical Methods, which this book will consider first, are methods involving the Laplace Transform domain. Physical systems are modeled in the so-called "time domain", where the response of a given system is a function of the various inputs, the previous system values, and time. As time progresses, the state of the system and its response change. However, time-domain models for systems are frequently modeled using high-order differential equations which can become impossibly difficult for humans to solve and some of which can even become impossible for modern computer systems to solve efficiently. To counteract this problem, integral transforms, such as the Laplace Transform and the Fourier Transform, can be employed to change an Ordinary Differential Equation (ODE) in the time domain into a regular algebraic polynomial in the transform domain. Once a given system has been converted into the transform domain it can be manipulated with greater ease and analyzed quickly by humans and computers alike.
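For instance (an illustration of our own, with arbitrarily chosen coefficients), the time-domain ODE

    \frac{d^2 y(t)}{dt^2} + 3\frac{dy(t)}{dt} + 2y(t) = x(t)

becomes, under the Laplace Transform with zero initial conditions,

    s^2 Y(s) + 3sY(s) + 2Y(s) = X(s)

and solving for the output is now ordinary algebra: Y(s) = X(s)/(s^2 + 3s + 2).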
Modern Control Methods, instead of changing domains to avoid the complexities of time-domain ODE mathematics, convert the differential equations into a system of lower-order time-domain equations called State Equations, which can then be manipulated using techniques from linear algebra. This book will consider Modern Methods second.
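To preview the idea with the same illustrative ODE (not an example from the original text), defining the state variables x_1 = y and x_2 = dy/dt converts the second-order equation above into two first-order state equations:

    \dot{x}_1(t) = x_2(t)
    \dot{x}_2(t) = -2x_1(t) - 3x_2(t) + x(t)

which later chapters will write compactly in the matrix form \dot{x} = Ax + Bu.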
A third distinction that is frequently made in the realm of control systems is to divide analog methods (classical and modern, described above) from digital methods. Digital Control Methods were designed to try and incorporate the emerging power of computer systems into previous control methodologies. A special transform, known as the Z-Transform, was developed that can adequately describe digital systems, but at the same time can be converted (with some effort) into the Laplace domain. Once in the Laplace domain, the digital system can be manipulated and analyzed in a very similar manner to Classical analog systems. For this reason, this book will not make a hard and fast distinction between Analog and Digital systems, and instead will attempt to study both paradigms in parallel.
Who is This Book For?
This book is intended to accompany a course of study in undergraduate and graduate engineering. As has been mentioned previously, this book is not focused on any particular discipline within engineering; however, any person who wants to make use of this material should have some basic background in the Laplace transform (if not other transforms), calculus, etc. The material in this book may be used to accompany several semesters of study, depending on the program of your particular college or university. The study of control systems is generally a topic that is reserved for students in their 3rd or 4th year of a four-year undergraduate program, because it requires so much prior material. Some of the more advanced topics may not be covered until later in a graduate program.
Many colleges and universities only offer one or two classes specifically about control systems at the undergraduate level. Some universities, however, do offer more than that, depending on how the material is broken up, and how much depth is to be covered. Also, many institutions will offer a handful of graduate-level courses on the subject. This book will attempt to cover the topic of control systems from both a graduate and undergraduate level, with the advanced topics built on the basic topics in a way that is intuitive. As such, students should be able to begin reading this book in any place that seems an appropriate starting point, and should be able to finish reading where further information is no longer needed.
What are the Prerequisites?
Understanding of the material in this book will require a solid mathematical foundation. This book does not currently explain, nor will it ever try to fully explain most of the necessary mathematical tools used in this text. For that reason, the reader is expected to have read the following wikibooks, or have background knowledge comparable to them:
- Algebra
- Calculus
- The reader should have a good understanding of differentiation and integration. Partial differentiation, multiple integration, and functions of multiple variables will be used occasionally, but the students are not necessarily required to know those subjects well. These advanced calculus topics could better be treated as a co-requisite instead of a pre-requisite.
- Linear Algebra
- State-space system representation draws heavily on linear algebra techniques. Students should know how to operate on matrices. Students should understand basic matrix operations (addition, multiplication, determinant, inverse, transpose). Students would also benefit from a prior understanding of Eigenvalues and Eigenvectors, but those subjects are covered in this text.
- Ordinary Differential Equations
- All linear systems can be described by a linear ordinary differential equation. It is beneficial, therefore, for students to understand these equations. Much of this book describes methods to analyze these equations. Students should know what a differential equation is, and they should also know how to find the general solutions of first and second order ODEs.
- Engineering Analysis
- This book reinforces many of the advanced mathematical concepts used in the Engineering Analysis book, and we will refer to the relevant sections in the aforementioned text for further information on some subjects. This is essentially a math book, but with a focus on various engineering applications. It relies on a previous knowledge of the other math books in this list.
- Signals and Systems
- The Signals and Systems book will provide a basis in the field of systems theory, of which control systems is a subset. Readers who have not read the Signals and Systems book will be at a severe disadvantage when reading this book.
How is this Book Organized?
This book will be organized following a particular progression. First this book will discuss the basics of system theory, and it will offer a brief refresher on integral transforms. Section 2 will contain a brief primer on digital information, for students who are not necessarily familiar with them. This is done so that digital and analog signals can be considered in parallel throughout the rest of the book. Next, this book will introduce the state-space method of system description and control. After section 3, topics in the book will use state-space and transform methods interchangeably (and occasionally simultaneously). It is important, therefore, that these three chapters be well read and understood before venturing into the later parts of the book.
After the "basic" sections of the book, we will delve into specific methods of analyzing and designing control systems. First we will discuss Laplace-domain stability analysis techniques (Routh-Hurwitz, root-locus), and then frequency methods (Nyquist Criteria, Bode Plots). After the classical methods are discussed, this book will then discuss Modern methods of stability analysis. Finally, a number of advanced topics will be touched upon, depending on the knowledge level of the various contributors.
As the subject matter of this book expands, so too will the prerequisites. For instance, when this book is expanded to cover nonlinear systems, a basic background knowledge of nonlinear mathematics will be required.
Versions
This wikibook has been expanded to include multiple versions of its text, differentiated by the material covered, and the order in which the material is presented. Each different version is composed of the chapters of this book, included in a different order. This book covers a wide range of information, so if you don't need all the information that this book has to offer, perhaps one of the other versions would be right for you and your educational needs.
Each separate version has a table of contents outlining the different chapters that are included in that version. Also, each separate version comes complete with a printable version, and some even come with PDF versions as well.
Take a look at the All Versions Listing Page to find the version of the book that is right for you and your needs.
Differential Equations Review
Implicit in the study of control systems is the underlying use of differential equations. Even if they aren't visible on the surface, all of the continuous-time systems that we will be looking at are described in the time domain by ordinary differential equations (ODE), some of which are relatively high-order.
Let's review some differential equation basics. Consider the topic of interest from a bank. The amount of interest accrued on a given principal balance (the amount of money you put into the bank) P, is given by:

    \frac{dP}{dt} = rP

Where dP/dt is the interest (rate of change of the principal), and r is the interest rate. Notice in this case that P is a function of time (t), and can be rewritten to reflect that:

    \frac{dP(t)}{dt} = rP(t)

To solve this basic, first-order equation, we can use a technique called "separation of variables", where we move all instances of the letter P to one side, and all instances of t to the other:

    \frac{dP(t)}{P(t)} = r\,dt

And integrating both sides gives us:

    \ln|P(t)| = rt + C

This is all fine and good, but generally, we like to get rid of the logarithm, by raising both sides to a power of e:

    P(t) = e^{rt + C}

Where we can separate out the constant as such:

    P(t) = De^{rt}, \qquad D = e^C

D is a constant that represents the initial conditions of the system, in this case the starting principal.
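As a quick sanity check of this solution, one can integrate the equation numerically and compare against the closed form. The following sketch is purely illustrative (the values of r and D are arbitrary, and Euler's method is simply the easiest scheme to write down):

    import numpy as np

    # Numerically integrate dP/dt = r*P with Euler's method and compare
    # against the closed-form solution P(t) = D*exp(r*t).
    r, D = 0.05, 1000.0          # interest rate and starting principal
    dt, t_end = 0.01, 10.0       # step size and time horizon

    P = D
    for _ in range(int(t_end / dt)):
        P += r * P * dt          # Euler step: P(t+dt) = P(t) + (dP/dt)*dt

    exact = D * np.exp(r * t_end)
    print(P, exact)              # the two agree to within the Euler error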
Differential equations are particularly difficult to manipulate, especially once we get to higher-orders of equations. Luckily, several methods of abstraction have been created that allow us to work with ODEs, but at the same time, not have to worry about the complexities of them. The classical method, as described above, uses the Laplace, Fourier, and Z Transforms to convert ODEs in the time domain into polynomials in a complex domain. These complex polynomials are significantly easier to solve than the ODE counterparts. The Modern method instead breaks differential equations into systems of low-order equations, and expresses this system in terms of matrices. It is a common precept in ODE theory that an ODE of order N can be broken down into N equations of order 1.
Readers who are unfamiliar with differential equations might be able to read and understand the material in this book reasonably well. However, all readers are encouraged to read the related sections in Calculus.
History
The field of control systems started essentially in the ancient world. Early civilizations, notably the Greeks and the Arabs were heavily preoccupied with the accurate measurement of time, the result of which were several "water clocks" that were designed and implemented.
However, there was very little in the way of actual progress made in the field of engineering until the beginning of the renaissance in Europe. Leonhard Euler (for whom Euler's Formula is named) discovered a powerful integral transform, but Pierre-Simon Laplace used the transform (later called the Laplace Transform) to solve complex problems in probability theory.
Joseph Fourier was a court mathematician in France under Napoleon I. He created a special function decomposition called the Fourier Series, that was later generalized into an integral transform, and named in his honor (the Fourier Transform).
Pierre-Simon Laplace (1749-1827)
Joseph Fourier (1768-1830)
The "golden age" of control engineering occurred between 1910-1945, where mass communication methods were being created and two world wars were being fought. During this period, some of the most famous names in controls engineering were doing their work: Nyquist and Bode.
Hendrik Wade Bode and Harry Nyquist, especially in the 1930s while working at Bell Laboratories, created the bulk of what we now call "Classical Control Methods". These methods were based on the results of the Laplace and Fourier Transforms, which had been previously known, but were made popular by Oliver Heaviside around the turn of the century. Before Heaviside, the transforms were neither widely used nor widely respected as mathematical tools.
Bode is credited with the "discovery" of the closed-loop feedback system, and the logarithmic plotting technique that still bears his name (Bode plots). Harry Nyquist did extensive research in the field of system stability and information theory. He created a powerful stability criterion that has been named for him (the Nyquist Criterion).
Modern control methods were introduced in the early 1950s, as a way to bypass some of the shortcomings of the classical methods. Rudolf Kalman is famous for his work in modern control theory, and the optimal state estimator called the Kalman Filter was named in his honor. Modern control methods became increasingly popular after 1957 with the rise of the digital computer and the start of the space program. Computers created the need for digital control methodologies, and the space program required the creation of some "advanced" control techniques, such as "optimal control", "robust control", and "nonlinear control". These last subjects, and several more, are still active areas of study among research engineers.
Branches of Control Engineering
Here we are going to give a brief listing of the various different methodologies within the sphere of control engineering. Oftentimes, the lines between these methodologies are blurred, or even erased completely.
- Classical Controls
- Control methodologies where the ODEs that describe a system are transformed using the Laplace, Fourier, or Z Transforms, and manipulated in the transform domain.
- Modern Controls
- Methods where high-order differential equations are broken into a system of first-order equations. The input, output, and internal states of the system are described by vectors called "state variables".
- Robust Control
- Control methodologies where arbitrary outside noise/disturbances are accounted for, as well as internal inaccuracies caused by the heat of the system itself, and the environment.
- Optimal Control
- In a system, performance metrics are identified, and arranged into a "cost function". The cost function is minimized to create an operational system with the lowest cost.
- Adaptive Control
- In adaptive control, the control changes its response characteristics over time to better control the system.
- Nonlinear Control
- The youngest branch of control engineering, nonlinear control encompasses systems that cannot be described by linear equations or ODEs, and for which there is often very little supporting theory available.
- Game Theory
- Game Theory is a close relative of control theory, and especially robust control and optimal control theories. In game theory, the external disturbances are not considered to be random noise processes, but instead are considered to be "opponents". Each player has a cost function that they attempt to minimize, and that their opponents attempt to maximize.
This book will definitely cover the first two branches, and will hopefully be expanded to cover some of the later branches, if time allows.
MATLAB
the Appendix
MATLAB ® is a programming tool that is commonly used in the field of control engineering. We will discuss MATLAB in specific sections of this book devoted to that purpose. MATLAB will not appear in discussions outside these specific sections, although MATLAB may be used in some example problems. An overview of the use of MATLAB in control engineering can be found in the appendix at: Control Systems/MATLAB.
For more information on MATLAB in general, see: MATLAB Programming.
Resources
Nearly all textbooks on the subject of control systems, linear systems, and system analysis will use MATLAB as an integral part of the text. Students who are learning this subject at an accredited university will certainly have seen this material in their textbooks, and are likely to have had MATLAB work as part of their classes. It is from this perspective that the MATLAB appendix is written.
In the future, this book may be expanded to include information on Simulink ®, as well as MATLAB.
There are a number of other software tools that are useful in the analysis and design of control systems. Additional information can be added in the appendix of this book, depending on the experience and prior knowledge of contributors.
About Formatting
This book will use some simple conventions throughout.
Mathematical Conventions
Mathematical equations will be labeled with the {{eqn}} template, to give them names. Equations that are labeled in such a manner are important, and should be taken special note of. For instance, notice the label to the right of this equation:
[Inverse Laplace Transform]

    f(t) = \mathcal{L}^{-1}\{F(s)\} = \frac{1}{2\pi i}\int_{c - i\infty}^{c + i\infty} F(s)e^{st}\,ds
Equations that are named in this manner will also be copied into the List of Equations Glossary in the end of the book, for an easy reference.
Italics will be used for English variables, functions, and equations that appear in the main text. For example e, j, f(t) and X(s) are all italicized. Wikibooks contains a LaTeX mathematics formatting engine, although an attempt will be made not to employ formatted mathematical equations inline with other text because of the difference in size and font. Greek letters, and other non-English characters will not be italicized in the text unless they appear in the midst of multiple variables which are italicized (as a convenience to the editor).
Scalar time-domain functions and variables will be denoted with lower-case letters, along with a t in parentheses, such as: x(t), y(t), and h(t). Discrete-time functions will be written in a similar manner, except with an [n] instead of a (t).
Fourier, Laplace, Z, and Star transformed functions will be denoted with capital letters followed by the appropriate variable in parentheses. For example: F(s), X(jω), Y(z), and F*(s).
Matrices will be denoted with capital letters. Matrices which are functions of time will be denoted with a capital letter followed by a t in parentheses. For example: A(t) is a matrix, a(t) is a scalar function of time.
Transforms of time-variant matrices will be displayed in uppercase bold letters, such as H(s).
Math equations rendered using LaTeX will appear on separate lines, and will be indented from the rest of the text.
Text Conventions
Examples will appear in TextBox templates, which show up as large grey boxes filled with text and equations.
- Important Definitions
- Will appear in TextBox templates as well, except we will use this formatting to show that it is a definition.
Notes of interest will appear in "infobox" templates. These notes will often be used to explain some nuances of a mathematical derivation or proof.
Warnings will appear in these "warning" boxes. These boxes will point out common mistakes, or other items to be careful of.
System Identification
Systems
Systems, in one sense, are devices that take input and produce an output. A system can be thought to operate on the input to produce the output. The output is related to the input by a certain relationship known as the system response. The system response usually can be modeled with a mathematical relationship between the system input and the system output.
System Properties
Physical systems can be divided up into a number of different categories, depending on particular properties that the system exhibits. Some of these system classifications are very easy to work with and have a large theory base for analysis. Some system classifications are very complex and have still not been investigated with any degree of success. By properly identifying the properties of a system, certain analysis and design tools can be selected for use with the system.
The early sections of this book will focus primarily on linear time-invariant (LTI) systems. LTI systems are the easiest class of system to work with, and have a number of properties that make them ideal to study. This chapter discusses some properties of systems.
Later chapters in this book will look at time variant systems and nonlinear systems. Both time variant and nonlinear systems are very complex areas of current research, and both can be difficult to analyze properly. Unfortunately, most physical real-world systems are time-variant, nonlinear, or both.
Introductions to system identification, least squares techniques, and parameter identification techniques can be found in related wikibooks.
Initial Time
The initial time of a system is the time before which there is no input. Typically, the initial time of a system is defined to be zero, which will simplify the analysis significantly. Some techniques, such as the Laplace Transform, require that the initial time of the system be zero. The initial time of a system is typically denoted by t0.
The value of any variable at the initial time t0 will be denoted with a 0 subscript. For instance, the value of variable x at time t0 is given by:

    x_0 = x(t_0)

Likewise, any time t with a positive subscript is a point in time after t0, in ascending order:

    t_0 \le t_1 \le t_2 \le \cdots \le t_n

So t1 occurs after t0, and t2 occurs after both points. In a similar fashion, a variable with a positive subscript (unless specifying an index into a vector) also occurs at that point in time:

    x_1 = x(t_1)
    x_2 = x(t_2)
This is valid for all points in time t.
Additivity
A system satisfies the property of additivity if a sum of inputs results in a sum of outputs. By definition: an input of x_1 + x_2 results in an output of y_1 + y_2. To determine whether a system is additive, use the following test:
Given a system f that takes an input x and outputs a value y, assume two inputs (x1 and x2) produce two outputs:

    y_1 = f(x_1)
    y_2 = f(x_2)

Now, create a composite input that is the sum of the previous inputs:

    x = x_1 + x_2

Then the system is additive if the following equation is true:

    y = f(x) = f(x_1 + x_2) = y_1 + y_2
Systems that satisfy this property are called additive. Additive systems are useful because a sum of simple inputs can be used to analyze the system response to a more complex input.
Example: Sinusoids
Given the following equation:

    y(t) = \sin(3x(t))

Create a sum of inputs as:

    x(t) = x_1(t) + x_2(t)

and construct the expected sum of outputs:

    y(t) = y_1(t) + y_2(t)

Now, substituting these values into our equation, test for equality:

    y_1(t) + y_2(t) \ne \sin(3[x_1(t) + x_2(t)])

The equality is not satisfied, and therefore the sine operation is not additive.
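This conclusion is easy to spot-check numerically. The sketch below is our own illustration (the test points are arbitrary); a single counterexample suffices to show non-additivity:

    import numpy as np

    # Test additivity of f(x) = sin(3x) at a pair of sample points:
    # compare the response to the summed input against the summed responses.
    f = lambda x: np.sin(3 * x)

    x1, x2 = 0.4, 1.1                      # arbitrary test inputs
    lhs = f(x1 + x2)                       # response to the summed input
    rhs = f(x1) + f(x2)                    # sum of the individual responses
    print(lhs, rhs, np.isclose(lhs, rhs))  # prints False: not additive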
Homogeneity
A system satisfies the condition of homogeneity if an input scaled by a certain factor produces an output scaled by that same factor. By definition: an input of Cx results in an output of Cy. In other words, to see if function f() is homogeneous, perform the following test:
Stimulate the system f with an arbitrary input x to produce an output y:

    y = f(x)

Now, create a second input x1, scale it by a multiplicative factor C (C is an arbitrary constant value), and produce a corresponding output y1:

    y_1 = f(Cx_1)

Now, assign x to be equal to x1:

    x_1 = x

Then, for the system to be homogeneous, the following equation must be true:

    y_1 = f(Cx) = Cf(x) = Cy
Systems that are homogeneous are useful in many applications, especially applications with gain or amplification.
Example: Straight-Line
Given the equation for a straight line:

    y = f(x) = 2x + 3

Stimulate the system with a scaled input, and separately scale the original output by the same factor C:

    y_1 = f(Cx) = 2(Cx) + 3 = 2Cx + 3
    Cy = C(2x + 3) = 2Cx + 3C

Comparing the two results, it is easy to see they are not equal:

    2Cx + 3 \ne 2Cx + 3C

Therefore, the equation is not homogeneous.
Exercise:
Prove that additivity implies homogeneity, but that homogeneity does not imply additivity.
Linearity
A system is considered linear if it satisfies the conditions of Additivity and Homogeneity. In short, a system is linear if the following is true:
Take two arbitrary inputs, and produce two arbitrary outputs:

    y_1 = f(x_1)
    y_2 = f(x_2)

Now, a linear combination of the inputs should produce a linear combination of the outputs:

    f(c_1 x_1 + c_2 x_2) = c_1 f(x_1) + c_2 f(x_2) = c_1 y_1 + c_2 y_2
This condition of additivity and homogeneity is called superposition. A system is linear if it satisfies the condition of superposition.
Example: Linear Differential Equations
Is the following equation linear:

    \frac{dy(t)}{dt} + y(t) = x(t)

To determine whether this system is linear, construct a new composite input:

    x(t) = x_1(t) + x_2(t)

Now, create the expected composite output:

    y(t) = y_1(t) + y_2(t)

Substituting the two into our original equation:

    \frac{d}{dt}\left[y_1(t) + y_2(t)\right] + \left[y_1(t) + y_2(t)\right] = x_1(t) + x_2(t)

Factor out the derivative operator, as such:

    \frac{dy_1(t)}{dt} + \frac{dy_2(t)}{dt} + y_1(t) + y_2(t) = x_1(t) + x_2(t)

Finally, group the composite terms back into the respective variables, to prove that this system is linear:

    \left[\frac{dy_1(t)}{dt} + y_1(t)\right] + \left[\frac{dy_2(t)}{dt} + y_2(t)\right] = x_1(t) + x_2(t)

Each bracketed term is one copy of the original equation, so the sum of inputs produces the sum of outputs.
For the record, derivatives and integrals are linear operators, and ordinary differential equations typically are linear equations.
Memory
A system is said to have memory if the output from the system is dependent on past inputs (or future inputs!) to the system. A system is called memoryless if the output is only dependent on the current input. Memoryless systems are easier to work with, but systems with memory are more common in digital signal processing applications.
Systems that have memory are called dynamic systems, and systems that do not have memory are static systems.
Causality
Causality is a property that is very similar to memory. A system is called causal if it is only dependent on past and/or current inputs. A system is called anti-causal if the output of the system is dependent only on future inputs. A system is called non-causal if the output depends on past and/or current and future inputs.
A system design that is not causal cannot be physically implemented (to operate in real time). If the system can't be built, the design is generally worthless. However, there are applications of non-causal systems, e.g. when a system does not need to operate in real time and already has the signals stored in its memory (sound and image compression).
Time-Invariance
A system is called time-invariant if the system relationship between the input and output signals is not dependent on the passage of time. If the input signal x(t) produces an output y(t), then any time-shifted input, x(t + δ), results in a time-shifted output y(t + δ). This property can be satisfied if the transfer function of the system is not a function of time except as expressed by the input and output. If a system is time-invariant, then the system block commutes with an arbitrary delay. This facet of time-invariant systems will be discussed later.
To determine if a system f is time-invariant, perform the following test:
Apply an arbitrary input x to a system and produce an arbitrary output y:

    y(t) = f(x(t))

Apply a second input x1 to the system, and produce a second output:

    y_1(t) = f(x_1(t))

Now, assign x1 to be equal to the first input x, time-shifted by a given constant value δ:

    x_1(t) = x(t + \delta)

Finally, a system is time-invariant if y1 is equal to y shifted by the same value δ:

    y_1(t) = y(t + \delta)
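As a quick worked example (our own, not from the original text): the pure delay y(t) = x(t - 1) passes this test, because shifting the input by δ shifts the output by exactly δ. The amplitude modulator y(t) = t·x(t) fails it, since

    f(x(t + \delta)) = t\,x(t + \delta) \ne (t + \delta)\,x(t + \delta) = y(t + \delta)

so the modulator's behavior depends explicitly on when the input is applied.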
LTI Systems
A system is considered to be a Linear Time-Invariant (LTI) system if it satisfies the requirements of time-invariance and linearity. LTI systems are one of the most important types of systems, and they will be considered almost exclusively in the beginning chapters of this book.
Systems which are not LTI are more common in practice, but are much more difficult to analyze.
Lumpedness
A system is said to be lumped if one of the two following conditions are satisfied:
- There are a finite number of states that the system can be in.
- There are a finite number of state variables.
The concept of "states" and "state variables" are relatively advanced, and they will be discussed in more detail in the discussion about modern controls.
Systems which are not lumped are called distributed. A simple example of a distributed system is a system with delay, that is, one whose transfer function contains the term e^{-sτ}, which requires an infinite number of state variables (here we use s to denote the Laplace variable). However, although distributed systems are quite common, they are very difficult to analyze in practice, and there are few tools available to work with such systems. Fortunately, in most cases, a delay can be sufficiently modeled with the Pade approximation. This book will not discuss distributed systems much.
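For reference, the first-order Pade approximation replaces the delay's transcendental transfer function with a rational (and therefore lumped) one; this form is standard, though the original text does not state it explicitly:

    e^{-s\tau} \approx \frac{1 - \frac{\tau}{2}s}{1 + \frac{\tau}{2}s}

Higher-order Pade approximations trade additional states for better accuracy.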
Relaxed
A system is said to be relaxed if the system is causal and at the initial time t0 the output of the system is zero, i.e., there is no stored energy in the system. The output is excited solely and uniquely by input applied thereafter.
In terms of differential equations, a relaxed system is said to have "zero initial states". Systems without an initial state are easier to work with, but systems that are not relaxed can frequently be modified to approximate relaxed systems.
Stability
Stability is a very important concept in systems, but it is also one of the hardest function properties to prove. There are several different criteria for system stability, but the most common requirement is that the system must produce a finite output when subjected to a finite input. For instance, if 5 volts is applied to the input terminals of a given circuit, it would be best if the circuit output didn't approach infinity, and the circuit itself didn't melt or explode. This type of stability is often known as "Bounded Input, Bounded Output" stability, or BIBO.
There are a number of other types of stability, most of which are based on the concept of BIBO stability. Because stability is such an important and complicated topic, an entire section of this text is devoted to its study.
Inputs and Outputs
Systems can also be categorized by the number of inputs and the number of outputs the system has. Consider a television as a system, for instance. The system has two inputs: the power wire and the signal cable. It has one output: the video display. A system with one input and one output is called single-input, single-output, or SISO. A system with multiple inputs and multiple outputs is called multi-input, multi-output, or MIMO.
These systems will be discussed in more detail later.
Exercise:
Based on the definitions of SISO and MIMO, above, determine what the acronyms SIMO and MISO mean.
Digital and Analog
Digital and Analog
There is a significant distinction between an analog system and a digital system, in the same way that there is a significant difference between analog and digital data. This book is going to consider both analog and digital topics, so it is worth taking some time to discuss the differences, and to display the different notations that will be used with each.
Continuous Time
A signal is called continuous-time if it is defined at every time t.
A system is a continuous-time system if it takes a continuous-time input signal, and outputs a continuous-time output signal. Here is an example of an analog waveform:
Discrete Time
A signal is called discrete-time if it is only defined for particular points in time. A discrete-time system takes discrete-time input signals, and produces discrete-time output signals. The following image shows the difference between an analog waveform and the sampled discrete time equivalent:
Quantized
A signal is called Quantized if it can take only certain discrete values, and cannot take the values in between. This concept is best illustrated with examples:
- Students with a strong background in physics will recognize this concept as being the root word in "Quantum Mechanics". In quantum mechanics, it is known that energy comes only in discrete packets. An electron bound to an atom, for example, may occupy one of several discrete energy levels, but not intermediate levels.
- Another common example is population statistics. For instance, a common statistic is that a household in a particular country may have an average of "3.5 children", or some other fractional number. Actual households may have 3 children, or they may have 4 children, but no household has 3.5 children.
- People with a computer science background will recognize that integer variables are quantized because they can only hold certain integer values, not fractions or decimal points.
The last example concerning computers is the most relevant, because quantized systems are frequently computer-based. Systems that are implemented with computer software and hardware will typically be quantized.
Here is an example waveform of a quantized signal. Notice how the magnitude of the wave can only take certain values, and that creates a step-like appearance. This image is discrete in magnitude, but is continuous in time:
Analog
By definition:
- Analog
- A signal is considered analog if it is defined for all points in time and if it can take any real magnitude value within its range.
An analog system is a system that represents data using a direct conversion from one form to another. In other words, an analog system is a system that is continuous in both time and magnitude.
Example: Motor
If we have a given motor, we can show that the output of the motor (rotation in units of radians per second, for instance) is a function of the voltage that is input to the motor. We can show the relationship as such:

    \omega(v) = f(v)

Where ω is the output in terms of rad/sec, and f is the motor's conversion function between the input voltage (v) and the output. For any value of v we can calculate out specifically what the rotational speed of the motor should be.
Example: Analog Clock
Consider a standard analog clock, which represents the passage of time through the angular position of the clock hands. We can denote the angular position of the hands of the clock with the system of equations:

    \phi_h = f_h(t)
    \phi_m = f_m(t)
    \phi_s = f_s(t)

Where φ_h is the angular position of the hour hand, φ_m is the angular position of the minute hand, and φ_s is the angular position of the second hand. The positions of all the different hands of the clock are dependent on functions of time.
Different positions on a clock face correspond directly to different times of the day.
Digital
Digital data is represented by discrete number values. By definition:
- Digital
- A signal or system is considered digital if it is both discrete-time and quantized.
Digital data always have a certain granularity, and therefore there will almost always be an error associated with using such data, especially if we want to account for all real numbers. The tradeoff, of course, to using a digital system is that our powerful computers, with their powerful Moore's-law microprocessor units, can be instructed to operate on digital data only. This benefit more than makes up for the shortcomings of a digital representation system.
Discrete systems will be denoted inside square brackets, as is a common notation in texts that deal with discrete values. For instance, we can denote a discrete data set of ascending numbers, starting at 1, with the following notation:
- x[n] = [1 2 3 4 5 6 ...]
n, or other letters from the central area of the alphabet (m, i, j, k, l, for instance) are commonly used to denote discrete time values. Analog, or "non-discrete", values are denoted, as usual, with parentheses. Here is an example of an analog waveform and the digital equivalent. Notice that the digital waveform is discrete in both time and magnitude:
Example: Digital Clock
As a common example, let's consider a digital clock: The digital clock represents time with binary electrical data signals of 1 and 0. The 1's are usually represented by a positive voltage, and a 0 is generally represented by zero voltage. Counting in binary, we can show that any given time can be represented by a base-2 numbering system:
Minute    Binary Representation
1         1
10        1010
30        11110
59        111011
But what happens if we want to display a fraction of a minute, or a fraction of a second? A typical digital clock has a certain amount of precision, and it cannot express fractional values smaller than that precision.
Hybrid Systems
Hybrid Systems are systems that have both analog and digital components. Devices called samplers are used to convert analog signals into digital signals, and devices called reconstructors are used to convert digital signals into analog signals. Because of the use of samplers, hybrid systems are frequently called sampled-data systems.
Example: Automobile Computer
Most modern automobiles today have integrated computer systems that monitor certain aspects of the car, and actually help to control the performance of the car. The speed of the car, and the rotational speed of the transmission are analog values, but a sampler converts them into digital values so the car computer can monitor them. The digital computer will then output control signals to other parts of the car, to alter analog systems such as the engine timing, the suspension, the brakes, and other parts. Because the car has both digital and analog components, it is a hybrid system.
Continuous and Discrete
We are not using the word "continuous" here in the sense of continuously differentiable, as is common in math texts.
A system is considered continuous-time if the signal exists for all time. Frequently, the terms "analog" and "continuous" will be used interchangeably, although they are not strictly the same.
Discrete systems can come in three flavors:
- Discrete time (sampled)
- Discrete magnitude (quantized)
- Discrete time and magnitude (digital)
Discrete magnitude systems are systems where the signal value can only take certain values. Discrete time systems are systems where signals are only available (or valid) at particular times. Computer systems are discrete in the third sense: data is only read at specific discrete time intervals, and the data can have only a limited number of discrete values.
A discrete-time system has a sampling time value associated with it, such that each discrete value occurs at multiples of the given sampling time. We will denote the sampling time of a system as T. We can equate the square-brackets notation of a system with the continuous definition of the system as follows:

    x[n] = x(nT)
Notice that the two notations show the same thing, but the first one is typically easier to write, and it shows that the system in question is a discrete system. This book will use the square brackets to denote discrete systems by the sample number n, and parenthesis to denote continuous time functions.
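To make the notation concrete, here is a small illustrative sketch (the signal, sampling time, and level count are our own choices, not from the text) that samples a continuous-time sine wave at interval T and then quantizes the samples to a fixed set of levels:

    import numpy as np

    # Sample x(t) = sin(2*pi*t) at interval T to get x[n] = x(n*T),
    # then quantize each sample to the nearest of 8 uniform levels.
    T = 0.05                           # sampling time (arbitrary choice)
    n = np.arange(0, 40)               # sample indices
    x_n = np.sin(2 * np.pi * n * T)    # discrete-time signal x[n] = x(nT)

    levels = 8                         # number of quantization levels
    step = 2.0 / (levels - 1)          # signal range [-1, 1] split uniformly
    x_q = np.round(x_n / step) * step  # quantized (digital) signal

    print(np.column_stack((n[:5], x_n[:5], x_q[:5])))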
Sampling and Reconstruction
The process of converting analog information into digital data is called "Sampling". The process of converting digital data into an analog signal is called "Reconstruction". We will talk about both processes in a later chapter. For more information on the topic than is available in this book, see the Analog and Digital Conversion wikibook. Here is an example of a reconstructed waveform. Notice that the reconstructed waveform here is quantized because it is constructed from a digital signal:
System Metrics
System Metrics
When a system is being designed and analyzed, it doesn't make any sense to test the system with all manner of strange input functions, or to measure all sorts of arbitrary performance metrics. Instead, it is in everybody's best interest to test the system with a set of standard, simple reference functions. Once the system is tested with the reference functions, there are a number of different metrics that we can use to determine the system performance.
It is worth noting that the metrics presented in this chapter represent only a small number of possible metrics that can be used to evaluate a given system. This wikibook will present other useful metrics along the way, as their need becomes apparent.
Standard Inputs
All of the standard inputs are zero before time zero. All the standard inputs are causal.
There are a number of standard inputs that are considered simple enough and universal enough that they are considered when designing a system. These inputs are known as a unit step, a ramp, and a parabolic input.
- Unit Step
- A unit step function is defined piecewise as such:
[Unit Step Function]

    u(t) = \begin{cases} 0, & t < 0 \\ 1, & t \ge 0 \end{cases}
- The unit step function is a highly important function, not only in control systems engineering, but also in signal processing, systems analysis, and all branches of engineering. If the unit step function is input to a system, the output of the system is known as the step response. The step response of a system is an important tool, and we will study step responses in detail in later chapters.
- Ramp
- A unit ramp is defined in terms of the unit step function, as such:
[Unit Ramp Function]

    r(t) = t\,u(t)

- It is important to note that the unit step function is simply the derivative of the unit ramp function:

    u(t) = \frac{d}{dt}r(t)
- This definition will come in handy when we learn about the Laplace Transform.
- Parabolic
- A unit parabolic input is similar to a ramp input:
[Unit Parabolic Function]

    p(t) = \frac{1}{2}t^2 u(t)

- Notice also that the unit parabolic input is equal to the integral of the ramp function:

    p(t) = \int_0^t r(\tau)\,d\tau
- Again, this result will become important when we learn about the Laplace Transform.
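The three standard inputs are easy to generate numerically; the following sketch (our own illustration, with unit amplitude) builds each one from the unit step:

    import numpy as np

    # Build the unit step, ramp, and parabolic inputs on a shared time axis.
    t = np.linspace(-1.0, 5.0, 601)   # include t < 0 to show causality

    u = (t >= 0).astype(float)        # unit step u(t)
    r = t * u                         # unit ramp r(t) = t*u(t)
    p = 0.5 * t**2 * u                # unit parabola p(t) = t^2/2 * u(t)

    # All three are zero before time zero, i.e., they are causal inputs.
    assert np.all(u[t < 0] == 0) and np.all(r[t < 0] == 0) and np.all(p[t < 0] == 0)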
Also, sinusoidal and exponential functions are considered basic, but they are too difficult to use in initial analysis of a system.
Steady State
To be more precise, we should have taken the limit as t approaches infinity. However, as a shorthand notation, we will typically say "t equals infinity", and assume the reader understands the shortcut that is being used.
When a unit-step function is input to a system, the steady-state value of that system is the output value at time t = ∞. Since it is impractical (if not completely impossible) to wait until infinity to observe the system, approximations and mathematical calculations are used to determine the steady-state value of the system. Most system responses are asymptotic, that is, the response approaches a particular value. Systems that are asymptotic are typically obvious from viewing the graph of that response.
Step Response
The step response of a system is most frequently used to analyze systems, and there is a large amount of terminology involved with step responses. When exposed to the step input, the system will initially have an undesirable output period known as the transient response. The transient response occurs because a system is approaching its final output value. The steady-state response of the system is the response after the transient response has ended.
The amount of time it takes for the system output to reach the desired value (before the transient response has ended, typically) is known as the rise time. The amount of time it takes for the transient response to end and the steady-state response to begin is known as the settling time.
It is common for a systems engineer to try and improve the step response of a system. In general, it is desired for the transient response to be reduced, the rise and settling times to be shorter, and the steady-state to approach a particular desired "reference" output.
Target Value
The target output value is the value that our system attempts to obtain for a given input. This is not the same as the steady-state value, which is the actual value that the system does obtain. The target value is frequently referred to as the reference value, or the "reference function" of the system. In essence, this is the value that we want the system to produce. When we input a "5" into an elevator, we want the output (the final position of the elevator) to be the fifth floor. Pressing the "5" button is the reference input, and is the expected value that we want to obtain. If we press the "5" button, and the elevator goes to the third floor, then our elevator is poorly designed.
Rise Time
Rise time is the amount of time that it takes for the system response to reach the target value from an initial state of zero. Many texts on the subject define the rise time as being the time it takes to rise between the initial position and 80% of the target value. This is because some systems never rise to 100% of the expected, target value, and therefore they would have an infinite rise-time. This book will specify which convention to use for each individual problem. Rise time is typically denoted tr, or trise.
Rise time is not the amount of time it takes to achieve steady-state, only the amount of time it takes to reach the desired target value for the first time.
Percent Overshoot
Underdamped systems frequently overshoot their target value initially. This initial surge is known as the "overshoot value". The ratio of the amount of overshoot to the target steady-state value of the system is known as the percent overshoot. Percent overshoot represents an overcompensation of the system, and can output dangerously large output signals that can damage a system. Percent overshoot is typically denoted with the term PO.
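Expressed as a formula (a standard definition, stated here for concreteness since the text does not give it explicitly), if y_max is the peak value of the response and y_ss is the target steady-state value:

    PO = \frac{y_{max} - y_{ss}}{y_{ss}} \times 100\%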
Example: Refrigerator
Consider an ordinary household refrigerator. The refrigerator has cycles where it is on and when it is off. When the refrigerator is on, the coolant pump is running, and the temperature inside the refrigerator decreases. The temperature decreases to a much lower level than is required, and then the pump turns off.
When the pump is off, the temperature slowly increases again as heat is absorbed into the refrigerator. When the temperature gets high enough, the pump turns back on. Because the pump cools down the refrigerator more than it needs to initially, we can say that it "overshoots" the target value by a certain specified amount.
Example: Refrigerator
Another example concerning a refrigerator concerns the electrical demand of the heat pump when it first turns on. The pump is an inductive mechanical motor, and when the motor first activates, a special counter-acting force known as "back EMF" resists the motion of the motor, and causes the pump to draw more electricity until the motor reaches its final speed. During the startup time for the pump, lights on the same electrical circuit as the refrigerator may dim slightly, as electricity is drawn away from the lamps, and into the pump. This initial draw of electricity is a good example of overshoot.
Steady-State Error
Sometimes a system might never achieve the desired steady-state value, but instead will settle on an output value that is not desired. The difference between the steady-state output value and the reference input value at steady state is called the steady-state error of the system. We will use the variable ess to denote the steady-state error of the system.
Settling Time
After the initial rise time of the system, some systems will oscillate and vibrate for an amount of time before the system output settles on the final value. The amount of time it takes to reach steady state after the initial rise time is known as the settling time. Notice that damped oscillating systems may never settle completely, so we will define settling time as being the amount of time for the system to reach, and stay in, a certain acceptable range. The acceptable range for settling time is typically determined on a per-problem basis, although common values are 20%, 10%, or 5% of the target value. The settling time will be denoted as ts.
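The following sketch (an illustration of these definitions, not code from the text) estimates rise time, percent overshoot, and 5% settling time from an array of sampled step-response values:

    import numpy as np

    def step_metrics(t, y, target, band=0.05):
        # Rise time: the first instant the response reaches the target value.
        t_rise = t[np.argmax(y >= target)]
        # Percent overshoot: peak excursion beyond the target, as a percentage.
        po = max(y.max() - target, 0.0) / target * 100.0
        # Settling time: the last time the response is outside the +/- band;
        # after this point it stays within the acceptable range for good.
        outside = np.nonzero(np.abs(y - target) > band * target)[0]
        t_settle = t[outside[-1]] if outside.size else t[0]
        return t_rise, po, t_settle

    # Example: a lightly damped second-order-style response (made-up numbers).
    t = np.linspace(0, 10, 2001)
    y = 1 - np.exp(-0.5 * t) * np.cos(2 * t)   # rises toward target = 1
    print(step_metrics(t, y, target=1.0))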
System Order
The order of the system is defined by the number of independent energy storage elements in the system, and intuitively by the highest order of the linear differential equation that describes the system. In a transfer function representation, the order is the highest exponent in the transfer function. In a proper system, the system order is defined as the degree of the denominator polynomial. In a state-space equation, the system order is the number of state-variables used in the system. The order of a system will frequently be denoted with an n or N, although these variables are also used for other purposes. This book will make clear distinction on the use of these variables.
Proper Systems
A proper system is a system where the degree of the denominator is larger than or equal to the degree of the numerator polynomial. A strictly proper system is a system where the degree of the denominator polynomial is larger than (but never equal to) the degree of the numerator polynomial. A biproper system is a system where the degree of the denominator polynomial equals the degree of the numerator polynomial.
It is important to note that only proper systems can be physically realized. In other words, a system that is not proper cannot be built. It makes no sense to spend a lot of time designing and analyzing imaginary systems.
Example: System Order
Find the order of this system:

    G(s) = \frac{1 + s}{1 + s + s^2}

The highest exponent in the denominator is s², so the system is order 2. Also, since the denominator is of higher degree than the numerator, this system is strictly proper.
In the above example, G(s) is a second-order transfer function because in the denominator one of the s variables has an exponent of 2. Second-order functions are the easiest to work with.
System Type
Let's say that we have a process transfer function (or combination of functions, such as a controller feeding in to a process), all in the forward branch of a unity feedback loop. Say that the overall forward branch transfer function is in the following generalized form (known as pole-zero form):
[Pole-Zero Form]

    G(s) = \frac{K \prod_i (s - s_i)}{s^M \prod_j (s - s_j)}

we call the parameter M the system type. Note that an increased system type number corresponds to a larger number of poles at s = 0. More poles at the origin generally have a beneficial effect on the system, but they increase the order of the system, and make it increasingly difficult to implement physically. System type will generally be denoted with a letter like N, M, or m. Because these variables are typically reused for other purposes, this book will make clear distinction when they are employed.
Now, we will define a few terms that are commonly used when discussing system type. These new terms are Position Error, Velocity Error, and Acceleration Error. These names are throwbacks to physics terms where acceleration is the derivative of velocity, and velocity is the derivative of position. Note that none of these terms are meant to deal with movement, however.
- Position Error
- The position error, denoted by the position error constant K_p, is the amount of steady-state error of the system when stimulated by a unit step input. We define the position error constant as follows:

[Position Error Constant]

    K_p = \lim_{s \to 0} G(s)

- Where G(s) is the transfer function of our system.
- Velocity Error
- The velocity error is the amount of steady-state error when the system is stimulated with a ramp input. We define the velocity error constant as such:
[Velocity Error Constant]

    K_v = \lim_{s \to 0} s\,G(s)
- Acceleration Error
- The acceleration error is the amount of steady-state error when the system is stimulated with a parabolic input. We define the acceleration error constant to be:
[Acceleration Error Constant]

    K_a = \lim_{s \to 0} s^2 G(s)
Now, this table will show briefly the relationship between the system type, the kind of input (step, ramp, parabolic), and the steady-state error of the system:
Type, M    Au(t)              Ar(t)          Ap(t)
0          A/(1 + Kp)         ∞              ∞
1          0                  A/Kv           ∞
2          0                  0              A/Ka
> 2        0                  0              0
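As a short worked example (our own, for illustration): take the type-1 forward-branch transfer function

    G(s) = \frac{10}{s(s + 2)}

Then K_p = lim_{s→0} G(s) = ∞, so the steady-state error to a unit step is 1/(1 + K_p) = 0, while K_v = lim_{s→0} sG(s) = 5, giving a steady-state error of 1/K_v = 0.2 to a unit ramp.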
Z-Domain Type
Likewise, we can show that the system type can be found from the following generalized transfer function in the Z domain:

    G(z) = \frac{K \prod_i (z - z_i)}{(z - 1)^M \prod_j (z - p_j)}

Where the constant M is the type of the digital system. Now, we will show how to find the various error constants in the Z-Domain:

[Z-Domain Error Constants]

Error Constant    Equation
Kp                K_p = \lim_{z \to 1} G(z)
Kv                K_v = \frac{1}{T} \lim_{z \to 1} (z - 1) G(z)
Ka                K_a = \frac{1}{T^2} \lim_{z \to 1} (z - 1)^2 G(z)

(T is the sampling time of the digital system.)
Visually
Here is an image of the various system metrics, acting on a system in response to a step input:
The target value is the value of the input step response. The rise time is the time at which the waveform first reaches the target value. The overshoot is the amount by which the waveform exceeds the target value. The settling time is the time it takes for the system to settle into a particular bounded region. This bounded region is denoted with two short dotted lines above and below the target value.
System Modeling
The Control Process
It is the job of a control engineer to analyze existing systems, and to design new systems to meet specific needs. Sometimes new systems need to be designed, but more frequently a controller unit needs to be designed to improve the performance of existing systems. When designing a system, or implementing a controller to augment an existing system, we need to follow some basic steps:
1. Model the system mathematically
2. Analyze the mathematical model
3. Design a system/controller
4. Implement the system/controller and test
The vast majority of this book is going to be focused on step 2, the analysis of the mathematical models. This chapter alone will be devoted to a discussion of step 1, the mathematical modeling of systems.
External Description
An external description of a system relates the system input to the system output without explicitly taking into account the internal workings of the system. The external description of a system is sometimes also referred to as the Input-Output Description of the system, because it only deals with the inputs and the outputs to the system.
Suppose the system can be represented by a mathematical function h(t, r), where t is the time that the output is observed, and r is the time that the input is applied. We can relate the system function h(t, r) to the input x and the output y through the use of an integral:
[General System Description]

y(t) = \int_{-\infty}^{\infty} h(t, r) x(r) \, dr
This integral form holds for all linear systems, and every linear system can be described by such an equation.
If a system is causal (i.e. an input at t = r affects system behaviour only for t \ge r) and there is no input to the system before t = 0, we can change the limits of the integration:

y(t) = \int_{0}^{t} h(t, r) x(r) \, dr
Time-Invariant Systems
If, furthermore, the system is time-invariant, so that h depends only on the difference between the observation time and the application time, we can rewrite the system description equation as follows:

y(t) = \int_{0}^{t} h(t - r) x(r) \, dr
This equation is known as the convolution integral, and we will discuss it more in the next chapter.
Every Linear Time-Invariant (LTI) system can be used with the Laplace Transform, a powerful tool that allows us to convert an equation from the time domain into the S-Domain, where many calculations are easier. Time-variant systems cannot be used with the Laplace Transform.
Internal Description
If a system is linear and lumped, it can also be described using a system of equations known as state-space equations. In state-space equations, we use the variable x to represent the internal state of the system. We then use u as the system input, and we continue to use y as the system output. We can write the state-space equations as such:

x'(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)
We will discuss the state-space equations more when we get to the section on modern controls.
Complex Descriptions
Systems which are LTI and Lumped can also be described using a combination of the state-space equations, and the Laplace Transform. If we take the Laplace Transform of the state equations that we listed above, we can get a set of functions known as the Transfer Matrix Functions. We will discuss these functions in a later chapter.
Representations
To recap, we will prepare a table with the various system properties, and the available methods for describing the system:
Properties | State-Space Equations | Laplace Transform | Transfer Matrix
---|---|---|---
Linear, Time-Variant, Distributed | no | no | no
Linear, Time-Variant, Lumped | yes | no | no
Linear, Time-Invariant, Distributed | no | yes | no
Linear, Time-Invariant, Lumped | yes | yes | yes
We will discuss all these different types of system representation later in the book.
Analysis
Once a system is modeled using one of the representations listed above, the system needs to be analyzed. We can determine the system metrics and then compare those metrics to our specification. If our system meets the specifications, we are finished with the design process. However, if the system does not meet the specifications (as is typically the case), then suitable controllers and compensators need to be designed and added to the system.
Once the controllers and compensators have been designed, the job isn't finished: we need to analyze the new composite system to ensure that the controllers work properly. Also, we need to ensure that the systems are stable: unstable systems can be dangerous.
Frequency Domain
For proposals, early-stage designs, and quick-turnaround analyses, a frequency-domain model is often superior to a time-domain model. Frequency-domain models take disturbance PSDs (Power Spectral Densities) directly, use transfer functions directly, and produce output or residual PSDs directly. The answer is a steady-state response. Often the controller is driving the output toward 0, so the steady-state response is also the residual error that becomes the analysis output, or the metric for a report.
Input | Model | Output
---|---|---
PSD | Transfer Function | PSD
Brief Overview of the Math
Frequency domain modeling is a matter of determining the response of a system to a random process. The output PSD is related to the input PSD by

S_y(f) = |H(f)|^2 S_x(f)

where
- S_x(f) is the one-sided input PSD,
- H(f) is the frequency response function of the system, and
- S_y(f) is the one-sided output PSD, or auto power spectral density function.

The frequency response function, H(f), is related to the impulse response function (transfer function) h(t) by

H(\omega) = \int_{-\infty}^{\infty} e^{-j \omega t} h(t) \, dt
Note some texts will state that this is only valid for random processes which are stationary. Other texts suggest stationary and ergodic while still others state weakly stationary processes. Some texts do not distinguish between strictly stationary and weakly stationary. From practice, the rule of thumb is if the PSD of the input process is the same from hour to hour and day to day then the input PSD can be used and the above equation is valid.
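As a rough numerical sketch of this relationship, the following Python fragment pushes a flat input PSD through a hypothetical first-order low-pass plant using scipy; the corner frequency and PSD level are arbitrary assumptions, not values from this text:

    import numpy as np
    from scipy import signal

    f_c = 10.0                                   # assumed corner frequency, Hz
    sys = signal.TransferFunction([1.0], [1.0, 2 * np.pi * f_c])

    f = np.linspace(0.1, 100.0, 1000)            # analysis frequencies, Hz
    _, H = signal.freqresp(sys, 2 * np.pi * f)   # frequency response H(j*omega)

    Sx = np.full_like(f, 1e-3)                   # flat (white) input PSD
    Sy = np.abs(H)**2 * Sx                       # output PSD: Sy = |H|^2 * Sx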
Notes
1. Sun, Jian-Qiao (2006). Stochastic Dynamics and Control, Volume 4. Amsterdam: Elsevier Science. ISBN 0444522301.
See a full explanation with example at ControlTheoryPro.com
Modeling Examples
Modeling in Control Systems is oftentimes a matter of judgement. This judgement is developed by creating models and learning from other people's models. ControlTheoryPro.com is a site with a lot of examples. Here are links to a few of them
- Hovering Helicopter Example
- Reaction Torque Cancellation Example
- List of all examples at ControlTheoryPro.com
Manufacture
Once the system has been properly designed we can prototype our system and test it. Assuming our analysis was correct and our design is good, the prototype should work as expected. Now we can move on to manufacture and distribute our completed systems.
Classical Controls
The classical method of controls involves analysis and manipulation of systems in the complex frequency domain. This domain, entered into by applying the Laplace or Fourier Transforms, is useful in examining the characteristics of the system, and determining the system response.
Sampled Data Systems
Ideal Sampler
In this chapter, we are going to introduce the ideal sampler and the Star Transform. First, we need to introduce (or review) the Geometric Series infinite sum. The results of this sum will be very useful in calculating the Star Transform, later.
Consider a sampler device that operates as follows: every T seconds, the sampler reads the current value of the input signal at that exact moment. The sampler then holds that value on the output for T seconds, before taking the next sample. We have a generic input to this system, f(t), and our sampled output will be denoted f*(t). We can then show the following relationship between the two signals:

f^*(t) = f(kT) \quad \text{for } kT \le t < (k+1)T

Note that the value of f* at time t = 1.5T is the same as at time t = T. This relationship holds for any fractional value.

Taking the Laplace Transform of this infinite sequence will yield a special result called the Star Transform. The Star Transform is also occasionally called the "Starred Transform" in some texts.
Geometric Series
Before we talk about the Star Transform or even the Z-Transform, it is useful for us to review the mathematical background behind solving infinite series. Specifically, because of the nature of these transforms, we are going to look at methods to solve for the sum of a geometric series.
A geometric series is a sum of values with increasing exponents, as such:

\sum_{n=0}^{\infty} a r^n = a + a r + a r^2 + a r^3 + \cdots

In the equation above, notice that each term in the series has a coefficient value, a. We can optionally factor out this coefficient, if the resulting equation is easier to work with:

\sum_{n=0}^{\infty} a r^n = a \sum_{n=0}^{\infty} r^n

Once we have an infinite series in either of these formats, we can conveniently solve for the total sum of this series using the following equation:

\sum_{n=0}^{\infty} a r^n = \frac{a}{1 - r}
Let's say that we start our series off at a number that isn't zero. Let's say for instance that we start our series off at n = 1, or n = 100. Let's see:

\sum_{k=1}^{n} a r^k = \frac{a (r - r^{n+1})}{1 - r}, \qquad \sum_{k=100}^{n} a r^k = \frac{a (r^{100} - r^{n+1})}{1 - r}

We can generalize the sum of this series, starting at an arbitrary index m, as follows:

[Geometric Series]

\sum_{k=m}^{n} a r^k = \frac{a (r^m - r^{n+1})}{1 - r}
With that result out of the way, now we need to worry about making this series converge. In the above sum, we let n approach infinity (because this is an infinite sum). Therefore, any term that contains the variable n is cause for concern when we are trying to make this series converge. If we examine the above equation, we see that there is exactly one term in the entire result with an n in it, r^{n+1}, and from that, we can set a fundamental inequality to govern the geometric series:

r^{n+1} \to 0 \quad \text{as} \quad n \to \infty

To satisfy this, we must satisfy the following condition:

[Geometric convergence condition]

|r| < 1

Therefore, we come to the final result: The geometric series converges if and only if the magnitude of r is less than one.
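A quick numerical check of the closed-form sum, in Python; any a and any |r| < 1 will do:

    # Partial sums of a geometric series versus the closed form a/(1 - r)
    a, r = 2.0, 0.5
    total, term = 0.0, a
    for n in range(60):
        total += term                # accumulate a * r^n
        term *= r
    print(total, a / (1 - r))        # both print 4.0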
The Star Transform
The Star Transform is defined as such:
[Star Transform]

F^*(s) = \sum_{k=0}^{\infty} f(kT) e^{-s k T}
The Star Transform depends on the sampling time T and is different for a single signal depending on the frequency at which the signal is sampled. Since the Star Transform is defined as an infinite series, it is important to note that some inputs to the Star Transform will not converge, and therefore some functions do not have a valid Star Transform. Also, it is important to note that the Star Transform may only be valid under a particular region of convergence. We will cover this topic more when we discuss the Z-transform.
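For a unit step input, f(kT) = 1 for all k, the Star Transform is itself a geometric series with r = e^{-sT}, so it converges to 1/(1 - e^{-sT}) whenever Re(s) > 0. A short Python check (the values of T and s are arbitrary choices, not taken from this chapter):

    import numpy as np

    T = 0.1                       # assumed sampling period
    s = 1.0 + 2.0j                # evaluation point with Re(s) > 0
    F_star = sum(np.exp(-k * T * s) for k in range(500))   # truncated series
    print(F_star, 1.0 / (1.0 - np.exp(-s * T)))            # nearly identical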
Star ↔ Laplace
The Laplace Transform and the Star Transform are clearly related, because we obtained the Star Transform by using the Laplace Transform on a time-domain signal. However, the method to convert between the two results can be a slightly difficult one. To find the Star Transform of a Laplace function, we must take the residues of the Laplace equation, as such:
This math is advanced for most readers, so we can also use an alternate method, as follows:
Neither of these methods is particularly easy, however, and therefore we will not discuss the relationship between the Laplace transform and the Star Transform any more than is absolutely necessary in this book. Suffice it to say, however, that the two transforms are related mathematically.
Star + Laplace
In some systems, we may have components that are both continuous and discrete in nature. For instance, our feedback loop might consist of an Analog-To-Digital converter, followed by a computer (for processing), and then a Digital-To-Analog converter. In this case, the computer is acting on a digital signal, but the rest of the system is acting on continuous signals. Star transforms can interact with Laplace transforms in some of the following ways:
Given:

Y(s) = X^*(s) H(s)

Then:

Y^*(s) = X^*(s) H^*(s)

Given:

Y(s) = X(s) H(s)

Then:

Y^*(s) = \overline{XH}^*(s)

Where \overline{XH}^*(s) is the Star Transform of the product X(s)H(s).
Convergence of the Star Transform
The Star Transform is defined as being an infinite series, so it is critically important that the series converge (not reach infinity), or else the result will be nonsensical. Since the Star Transform is a geometric series (for many input signals), we can use geometric series analysis to show whether the series converges, and even under what particular conditions the series converges. The restrictions on the star transform that allow it to converge are known as the region of convergence (ROC) of the transform. Typically a transform must be accompanied by the explicit mention of the ROC.
The Z-Transform
Let us say now that we have a discrete data set that is sampled at regular intervals. We can call this set x[n]:
x[n] = [ x[0] x[1] x[2] x[3] x[4] ... ]
We can utilize a special transform, called the Z-transform, to make dealing with this set easier:

[Z Transform]

X(z) = Z\{x[n]\} = \sum_{n=0}^{\infty} x[n] z^{-n}

Z-transform pairs for many common signals are tabulated in the Appendix.

Like the Star Transform, the Z Transform is defined as an infinite series, and therefore we need to worry about convergence. In fact, a number of different signals have identical Z-Transforms but different regions of convergence (ROC). Therefore, when talking about the Z transform, you must include the ROC, or you are missing valuable information.
Z Transfer Functions
Like the Laplace Transform, in the Z-domain we can use the input-output relationship of the system to define a transfer function.
The transfer function in the Z domain operates in exactly the same way as the transfer function in the S domain:

H(z) = \frac{Y(z)}{X(z)}
Similarly, the value h[n] which represents the response of the digital system is known as the impulse response of the system. It is important to note, however, that the definition of an "impulse" is different in the analog and digital domains.
Inverse Z Transform
The inverse Z Transform is defined by the following path integral:
[Inverse Z Transform]

x[n] = \frac{1}{2 \pi j} \oint_{C} X(z) z^{n-1} \, dz
Where C is a counterclockwise closed path encircling the origin and entirely in the region of convergence (ROC). The contour or path, C, must encircle all of the poles of X(z).
This math is relatively advanced compared to some other material in this book, and therefore little or no further attention will be paid to solving the inverse Z-Transform in this manner. Z transform pairs are heavily tabulated in reference texts, so many readers can consider that to be the primary method of solving for inverse Z transforms. There are a number of Z-transform pairs available in table form in The Appendix.
Final Value Theorem
Like the Laplace Transform, the Z Transform also has an associated final value theorem:
[Final Value Theorem (Z)]

x[\infty] = \lim_{n \to \infty} x[n] = \lim_{z \to 1} (z - 1) X(z)
This equation can be used to find the steady-state response of a system, and also to calculate the steady-state error of the system.
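As a sanity check, the theorem can be exercised symbolically. In this Python sketch the input is a unit step, X_u(z) = z/(z - 1), filtered through a hypothetical stable system H(z) = 0.5/(z - 0.5); neither function comes from this chapter:

    import sympy as sp

    z = sp.symbols('z')
    X = (z / (z - 1)) * (sp.Rational(1, 2) / (z - sp.Rational(1, 2)))
    print(sp.limit((z - 1) * X, z, 1))   # 1, the steady-state output

Iterating the matching difference equation y[n+1] = 0.5 y[n] + 0.5 u[n] confirms that the output settles at 1.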
Star ↔ Z
The Z transform is related to the Star transform through the following change of variables:

z = e^{s T}

Notice that in the Z domain, we don't maintain any information on the sampling period, so converting to the Z domain from a Star Transformed signal loses that information. When converting back to the star domain, however, the value for T can be re-inserted into the equation, if it is still available.

Also of some importance is the fact that the Z transform is two-sided (bilateral), while the Star Transform is one-sided (unilateral). This means that we can only convert between the two transforms if the sampled signal is zero for all values of n < 0.

Because the two transforms are so closely related, it can be said that the Z transform is simply a notational convenience for the Star Transform. With that said, this book could easily use the Star Transform for all problems, and ignore the added burden of Z transform notation entirely. A common example of this is Richard Hamming's book "Numerical Methods for Scientists and Engineers", which uses the Fourier Transform for all problems, considering the Laplace, Star, and Z-Transforms to be merely notational conveniences. However, the Control Systems wikibook is of the opinion that the correct utilization of different transforms can make problems easier to solve, and we will therefore use a multi-transform approach.
Z plane
The lower-case z is the name of the variable, and the upper-case Z is the name of the Transform and the plane.
z is a complex variable with a real part and an imaginary part. In other words, we can define z as such:

z = \operatorname{Re}(z) + j \operatorname{Im}(z)

Since z can be broken down into two independent components, it often makes sense to graph the variable z on the Z-plane. In the Z-plane, the horizontal axis represents the real part of z, and the vertical axis represents the imaginary part of z.
Notice also that if we define z in terms of the star-transform relation:

z = e^{s T}

we can separate out s into real and imaginary parts:

s = \sigma + j \omega

We can plug this into our equation for z:

z = e^{(\sigma + j \omega) T} = e^{\sigma T} e^{j \omega T}

Through Euler's formula, we can separate out the complex exponential as such:

z = e^{\sigma T} (\cos(\omega T) + j \sin(\omega T))

If we define two new variables, M and φ:

M = e^{\sigma T}, \qquad \varphi = \omega T

We can write z in terms of M and φ. Notice that this is Euler's equation:

z = M \cos(\varphi) + j M \sin(\varphi) = M e^{j \varphi}

This is clearly a polar representation of z, with the magnitude of the polar function (M) based on the real part of s, and the angle of the polar function (φ) based on the imaginary part of s.
Region of Convergence
To best teach the region of convergence (ROC) for the Z-transform, we will do a quick example.

We have the following discrete series, a decaying exponential:

x[n] = e^{-a n} u[n]

Now, we can plug this function into the Z transform equation:

X(z) = \sum_{n=-\infty}^{\infty} e^{-a n} u[n] z^{-n}

Note that we can remove the unit step function, and change the limits of the sum:

X(z) = \sum_{n=0}^{\infty} e^{-a n} z^{-n}

This is because the series is 0 for all n < 0. If we combine the n terms, we get the following result:

X(z) = \sum_{n=0}^{\infty} (e^{-a} z^{-1})^n

Once we have our series in this form, we can see that it is a geometric series with r = e^{-a} z^{-1}. And finally, we can find our final value, using the geometric series formula:

X(z) = \frac{1}{1 - e^{-a} z^{-1}}

Again, we know that to make this series converge, we need to make the magnitude of r less than 1:

|e^{-a} z^{-1}| < 1

And finally, we obtain the region of convergence for this Z-transform:

|z| > e^{-a}
z and s are complex variables, and therefore we need to take the magnitude in our ROC calculations. The "absolute value" symbols above actually denote the magnitude of a complex quantity, which is defined as such:

|z| = \sqrt{\operatorname{Re}(z)^2 + \operatorname{Im}(z)^2}
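The ROC condition can be observed numerically: the truncated series matches the closed form for points inside the ROC, and blows up outside it. A small Python check (the decay rate a and the test point are arbitrary choices):

    import numpy as np

    a = 0.5                                    # x[n] = exp(-a*n) u[n]
    z = 0.9 * np.exp(0.3j)                     # |z| = 0.9 > e^{-a} ~ 0.607
    ratio = np.exp(-a) / z
    partial = sum(ratio**n for n in range(2000))
    print(partial, 1 / (1 - ratio))            # agree inside the ROC
    # For |z| < e^{-a}, |ratio| > 1 and the partial sums diverge.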
Laplace ↔ Z
There is no easy, direct way to convert between the Laplace transform and the Z transform. Nearly all methods of conversion reproduce some aspects of the original equation faithfully, and reproduce other aspects incorrectly. For some of the main mapping techniques between the two, see the Z Transform Mappings Appendix.

However, there are some topics that we need to discuss. First and foremost, conversions between the Laplace domain and the Z domain are not linear; this leads to problems such as the following:

Z[G(s) H(s)] \ne G(z) H(z)

This means that when we combine two functions in one domain multiplicatively, we must find a combined transform in the other domain. Here is how we denote this combined transform:

Z[G(s) H(s)] = \overline{GH}(z)

Notice that we use a horizontal bar over the top of the multiplied functions, to denote that we took the transform of the product, not of the individual pieces. However, if we have a system that incorporates a sampler, we can show a simple result. If we have the following format:

Y(s) = X^*(s) H(s)

Then we can put everything in terms of the Star Transform:

Y^*(s) = X^*(s) H^*(s)

and once we are in the star domain, we can do a direct change of variables to reach the Z domain:

Y(z) = X(z) H(z)
Note that we can only make this equivalence relationship if the system incorporates an ideal sampler, and therefore one of the multiplicative terms is in the star domain.
Example
Let's say that we have the following equation in the Laplace domain:
And because we have a discrete sampler in the system, we want to analyze it in the Z domain. We can break up this equation into two separate terms, and transform each:
And
And when we add them together, we get our result:
Z ↔ Fourier
By substituting variables, we can relate the Star transform to the Fourier Transform as well:
If we assume that T = 1, we can relate the two equations together by setting the real part of s to zero. Notice that the relationship between the Laplace and Fourier transforms is mirrored here, where the Fourier transform is the Laplace transform with no real-part to the transform variable.
There are a number of discrete-time variants to the Fourier transform as well, which are not discussed in this book. For more information about these variants, see Digital Signal Processing.
Reconstruction
Some of the easiest reconstruction circuits are called "Holding circuits". Once a signal has been transformed using the Star Transform (passed through an ideal sampler), the signal must be "reconstructed" using one of these hold systems (or an equivalent) before it can be analyzed in a Laplace-domain system.
If we have a sampled signal denoted by the Star Transform X*(s), we want to reconstruct that signal into a continuous-time waveform, so that we can manipulate it using Laplace-transform techniques.

Let's say that we have the sampled input signal X*(s), a reconstruction circuit denoted G(s), and an output denoted with the Laplace-transform variable Y(s). We can show the relationship as follows:

Y(s) = X^*(s) G(s)
Reconstruction circuits then, are physical devices that we can use to convert a digital, sampled signal into a continuous-time domain, so that we can take the Laplace transform of the output signal.
Zero order Hold
A zero-order hold circuit is a circuit that essentially inverts the sampling process: The value of the sampled signal at time t is held on the output for T time. The output waveform of a zero-order hold circuit therefore looks like a staircase approximation to the original waveform.
The transfer function for a zero-order hold circuit, in the Laplace domain, is written as such:
[Zero Order Hold]

G_{h0}(s) = \frac{1 - e^{-T s}}{s}
The Zero-order hold is the simplest reconstruction circuit, and (like the rest of the circuits on this page) assumes zero processing delay in converting between digital to analog.
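In discrete simulation, a zero-order hold is nothing more than repeating each sample for a full period. A minimal Python sketch (the sample values are arbitrary, chosen only for illustration):

    import numpy as np

    T = 0.5
    samples = np.array([0.0, 0.8, 1.0, 0.7, 0.2])   # f(kT)
    upsample = 10                                   # points drawn per period
    t = np.arange(len(samples) * upsample) * (T / upsample)
    staircase = np.repeat(samples, upsample)        # held (staircase) output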
First Order Hold
The zero-order hold creates a step output waveform, but this isn't always the best way to reconstruct the circuit. Instead, the First-Order Hold circuit takes the derivative of the waveform at the time t, and uses that derivative to make a guess as to where the output waveform is going to be at time (t + T). The first-order hold circuit then "draws a line" from the current position to the expected future position, as the output of the waveform.
[First Order Hold]

G_{h1}(s) = \frac{1 + T s}{T} \left[ \frac{1 - e^{-T s}}{s} \right]^2
Keep in mind, however, that the next value of the signal will probably not be the same as the expected value of the next data point, and therefore the first-order hold may have a number of discontinuities.
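A minimal Python sketch of the extrapolation rule, assuming the same arbitrary sample set as the zero-order-hold example above; the slope of the last two samples predicts the waveform across the current period:

    import numpy as np

    T = 0.5
    x = np.array([0.0, 0.8, 1.0, 0.7, 0.2])          # samples x(kT)

    def foh(t):
        k = int(t // T)
        slope = (x[k] - x[k - 1]) / T if k > 0 else 0.0
        return x[k] + slope * (t - k * T)            # extrapolated line segment

    print([round(foh(tau), 3) for tau in (0.25, 0.75, 1.25)])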
Fractional Order Hold
The zero-order hold outputs the current value onto the output, and keeps it level throughout the entire sampling period. The first-order hold uses the derivative of the waveform to predict the next value, and produces a series of ramp outputs to produce a fluctuating waveform. Sometimes, however, neither of these solutions is desired, and therefore we have a compromise: the Fractional-Order Hold. A fractional-order hold acts like a mixture of the other two holding circuits, and takes a fractional number k as an argument. Notice that k must be between 0 and 1 for this circuit to work correctly.
[Fractional Order Hold]
This circuit is more complicated than either of the other hold circuits, but sometimes added complexity is worth it if we get better performance from our reconstruction circuit.
Other Reconstruction Circuits
Another type of circuit that can be used is a linear approximation circuit.
Further reading
- Hamming, Richard. "Numerical Methods for Scientists and Engineers" ISBN 0486652416
- Digital Signal Processing/Z Transform
- Complex Analysis/Residue Theory
- Analog and Digital Conversion
System Delays
Delays
A system can be built with an inherent delay. Delays are units that cause a time-shift in the input signal, but that don't affect the signal characteristics. An ideal delay is a delay system that doesn't affect the signal characteristics at all, and that delays the signal for an exact amount of time. Some delays, like processing delays or transmission delays, are unintentional. Other delays however, such as synchronization delays, are an integral part of a system. This chapter will talk about how delays are utilized and represented in the Laplace Domain. Once we represent a delay in the Laplace domain, it is an easy matter, through change of variables, to express delays in other domains.
Ideal Delays
An ideal delay causes the input function to be shifted forward in time by a certain specified amount of time. Systems with an ideal delay cause the system output to be delayed by a finite, predetermined amount of time.
Time Shifts
Let's say that we have a function in time that is time-shifted by a certain constant time period T. For convenience, we will denote this function as x(t - T). Now, we can show that the Laplace transform of x(t - T) is the following:

\mathcal{L}\{x(t - T)\} = e^{-T s} X(s)
What this demonstrates is that time-shifts in the time-domain become exponentials in the complex Laplace domain.
Shifts in the Z-Domain
Since we know the following general relationship between the Z Transform and the Star Transform:

z = e^{s T}

We can show what a time shift in a discrete time domain becomes in the Z domain:

Z\{x[n - n_0]\} = z^{-n_0} X(z)
Delays and Stability
A time shift in the time domain becomes multiplication by a complex exponential in the Laplace domain. This would seem to show that a time shift can have an effect on the stability of a system, and occasionally can cause a system to become unstable. We define a new parameter called the time margin as the amount of time that we can shift an input function before the system becomes unstable. If the system can survive any arbitrary time shift without going unstable, we say that the time margin of the system is infinite.
Delay Margin
When speaking of sinusoidal signals, it doesn't make sense to talk about "time shifts", so instead we talk about "phase shifts". Therefore, it is also common to refer to the time margin as the phase margin of the system. The phase margin denotes the amount of phase shift that we can apply to the system input before the system goes unstable.
We denote the phase margin for a system with a lowercase Greek letter φ (phi). Phase margin is defined as such for a second-order system:
[Delay Margin]

\varphi_m = \tan^{-1} \left[ \frac{2 \zeta}{\sqrt{\sqrt{4 \zeta^4 + 1} - 2 \zeta^2}} \right]
Oftentimes, the phase margin is approximated by the following relationship:
[Delay Margin (approx)]

\varphi_m \approx 100 \zeta \quad \text{(in degrees)}
The Greek letter zeta (ζ) is a quantity called the damping ratio, and we discuss this quantity in more detail in the next chapter.
Transform-Domain Delays
The ordinary Z-Transform does not account for a system which experiences an arbitrary time delay, or a processing delay. The Z-Transform can, however, be modified to account for an arbitrary delay. This new version of the Z-transform is frequently called the Modified Z-Transform, although in some literature (notably in Wikipedia), it is known as the Advanced Z-Transform.
Delayed Star Transform
To demonstrate the concept of an ideal delay, we will show how the star transform responds to a time-shifted input with a specified delay of time T. The function X*(s, Δ) is the delayed star transform with a delay parameter Δ. The delayed star transform is defined in terms of the star transform as such:
[Delayed Star Transform]
As we can see, in the star transform, a time-delayed signal is multiplied by a decaying exponential value in the transform domain.
Delayed Z-Transform
Since we know that the Star Transform is related to the Z Transform through the following change of variables:

z = e^{s T}
We can interpret the above result to show how the Z Transform responds to a delay:
This result is expected.
Now that we know how the Z transform responds to time shifts, it is often useful to generalize this behavior into a form known as the Delayed Z-Transform. The Delayed Z-Transform is a function of two variables, z and Δ, and is defined as such:
And finally:
[Delayed Z Transform]
Modified Z-Transform
The Delayed Z-Transform has some uses, but mathematicians and engineers have decided that a more useful version of the transform was needed. The new version of the Z-Transform, which is similar to the Delayed Z-transform with a change of variables, is known as the Modified Z-Transform. The Modified Z-Transform is defined in terms of the delayed Z transform as follows:
And it is defined explicitly:
[Modified Z Transform]
Poles and Zeros
Poles and Zeros
Poles and zeros of a transfer function are the frequencies at which the transfer function becomes infinite or zero, respectively: the denominator goes to zero at a pole, and the numerator goes to zero at a zero. The values of the poles and the zeros of a system determine whether the system is stable, and how well the system performs. Control systems, in the simplest sense, can be designed simply by assigning specific values to the poles and zeros of the system.

Physically realizable control systems must have a number of poles greater than or equal to the number of zeros. Systems that satisfy this relationship are called Proper. We will elaborate on this below.
Time-Domain Relationships
Let's say that we have a transfer function with 3 poles:

G(s) = \frac{a}{(s - l)(s - m)(s - n)}

The poles are located at s = l, m, n. Now, we can use partial fraction expansion to separate out the transfer function:

G(s) = \frac{A}{s - l} + \frac{B}{s - m} + \frac{C}{s - n}

Using the inverse transform on each of these component fractions (looking up the transforms in our table), we get the following:

g(t) = A e^{l t} + B e^{m t} + C e^{n t}

But, since s is a complex variable, l, m and n can all potentially be complex numbers, with a real part (σ) and an imaginary part (jω). If we just look at the first term:

A e^{l t} = A e^{(\sigma_l + j \omega_l) t} = A e^{\sigma_l t} e^{j \omega_l t}

Using Euler's Equation on the imaginary exponent, we get:

A e^{\sigma_l t} [\cos(\omega_l t) + j \sin(\omega_l t)]

If a complex pole is present, it is always accompanied by another pole that is its complex conjugate. The imaginary parts of their time-domain representations thus cancel, and we are left with twice the real part. Assuming that the complex conjugate pole of the first term is present, we can take 2 times the real part of this equation, and we are left with our final result:

2 |A| e^{\sigma_l t} \cos(\omega_l t + \angle A)
We can see from this equation that every pole will have an exponential part, and a sinusoidal part to its response. We can also go about constructing some rules:
- if σl = 0, the response of the pole is a perfect sinusoidal (an oscillator)
- if ωl = 0, the response of the pole is a perfect exponential.
- if σl < 0, the exponential part of the response will decay towards zero.
- if σl > 0, the exponential part of the response will rise towards infinity.
From the last two rules, we can see that all poles of the system must have negative real parts for the system to be stable; equivalently, every denominator factor must have the form (s + l) with the real part of l positive. We will discuss stability in later chapters.
What are Poles and Zeros
Let's say we have a transfer function defined as a ratio of two polynomials:

H(s) = \frac{N(s)}{D(s)}
Where N(s) and D(s) are simple polynomials. Zeros are the roots of N(s) (the numerator of the transfer function) obtained by setting N(s) = 0 and solving for s.
Poles are the roots of D(s) (the denominator of the transfer function), obtained by setting D(s) = 0 and solving for s. Because of our restriction above, that a transfer function must not have more zeros than poles, we can state that the polynomial order of D(s) must be greater than or equal to the polynomial order of N(s).
Example
Consider the transfer function:

H(s) = \frac{s + 2}{s^2 + \frac{1}{4}}

We define N(s) and D(s) to be the numerator and denominator polynomials, as such:

N(s) = s + 2, \qquad D(s) = s^2 + \frac{1}{4}

We set N(s) to zero, and solve for s:

s + 2 = 0 \quad \Rightarrow \quad s = -2

So we have a zero at s = -2. Now, we set D(s) to zero, and solve for s to obtain the poles of the equation:

s^2 + \frac{1}{4} = 0
And simplifying this gives us poles at: -i/2 , +i/2. Remember, s is a complex variable, and it can therefore take imaginary and real values.
Effects of Poles and Zeros
As s approaches a zero, the numerator of the transfer function (and therefore the transfer function itself) approaches the value 0. When s approaches a pole, the denominator of the transfer function approaches zero, and the value of the transfer function approaches infinity. An output value of infinity should raise an alarm bell for people who are familiar with BIBO stability. We will discuss this later.
As we have seen above, the locations of the poles, and the values of the real and imaginary parts of the pole determine the response of the system. Real parts correspond to exponentials, and imaginary parts correspond to sinusoidal values. Addition of poles to the transfer function has the effect of pulling the root locus to the right, making the system less stable. Addition of zeros to the transfer function has the effect of pulling the root locus to the left, making the system more stable.
Second-Order Systems
The canonical form for a second order system is as follows:
[Second-order transfer function]

G(s) = \frac{K \omega_n^2}{s^2 + 2 \zeta \omega_n s + \omega_n^2}

Where K is the system gain, ζ is called the damping ratio of the function, and ω_n is called the natural frequency of the system. If ζ and ω_n are known exactly for a second-order system, the time response can be plotted easily and stability can be checked easily.
Damping Ratio
The damping ratio of a second-order system, denoted with the Greek letter zeta (ζ), is a real number that defines the damping properties of the system. More damping has the effect of less percent overshoot and slower settling time. Damping is the inherent ability of the system to oppose the oscillatory nature of the system's transient response. Larger values of the damping ratio produce transient responses that are less oscillatory.
Natural Frequency
The natural frequency is occasionally written with a subscript, as ω_n. We will omit the subscript when it is clear that we are talking about the natural frequency, but we will include the subscript when we are using other values for the variable ω.
Higher-Order Systems
Modern Controls
The modern method of controls uses systems of special state-space equations to model and manipulate systems. The state variable model is broad enough to be useful in describing a wide range of systems, including systems that cannot be adequately described using the Laplace Transform. These chapters will require the reader to have a solid background in linear algebra, and multi-variable calculus.
Digital Systems
Digital systems, expressed previously as difference equations or Z-Transform transfer functions, can also be used with the state-space representation. All the same techniques for dealing with analog systems can be applied to digital systems with only minor changes.
Digital Systems
For digital systems, we can write similar equations using discrete data sets:

x[k + 1] = A x[k] + B u[k]
y[k] = C x[k] + D u[k]
Zero-Order Hold Derivation
If we have a continuous-time state equation:

x'(t) = A x(t) + B u(t)

We can derive the digital version of this equation that we discussed above. We take the Laplace transform of our equation:

X(s) = (s I - A)^{-1} x(0) + (s I - A)^{-1} B U(s)

Now, taking the inverse Laplace transform gives us our time-domain system, keeping in mind that the inverse Laplace transform of the (sI - A)^{-1} term is our state-transition matrix, Φ:

x(t) = \Phi(t - t_0) x(t_0) + \int_{t_0}^{t} \Phi(t - \tau) B u(\tau) \, d\tau

Now, we apply a zero-order hold on our input, to make the system digital. Notice that we set our start time t_0 = kT, because we are only interested in the behavior of our system during a single sample period:

u(t) = u(kT) \quad \text{for } kT \le t < (k+1)T

x(t) = \Phi(t - kT) x(kT) + \left[ \int_{kT}^{t} \Phi(t - \tau) \, d\tau \right] B u(kT)

We were able to remove u(kT) from the integral because it does not depend on τ. We now define a new function, Γ, as follows:

\Gamma(t) = \left[ \int_{0}^{t} \Phi(\tau) \, d\tau \right] B

Inserting this new expression into our equation, and setting t = (k + 1)T gives us:

x((k + 1)T) = \Phi(T) x(kT) + \Gamma(T) u(kT)

Now Φ(T) and Γ(T) are constant matrices, and we can give them new names. The d subscript denotes that they are digital versions of the coefficient matrices:

A_d = \Phi(T), \qquad B_d = \Gamma(T)

We can use these values in our state equation, converting to our bracket notation instead:

x[k + 1] = A_d x[k] + B_d u[k]
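These formulas translate directly into code. The following Python sketch computes A_d = Φ(T) and B_d = Γ(T) for a hypothetical two-state plant (the matrices are arbitrary, and A is assumed invertible so the shortcut B_d = A⁻¹(e^{AT} − I)B applies), then cross-checks the result against scipy's own ZOH discretization:

    import numpy as np
    from scipy.linalg import expm
    from scipy.signal import cont2discrete

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])          # assumed continuous plant
    B = np.array([[0.0], [1.0]])
    T = 0.1

    Ad = expm(A * T)                                  # Phi(T) = e^{AT}
    Bd = np.linalg.solve(A, Ad - np.eye(2)) @ B       # Gamma(T), A invertible

    Ad2, Bd2, *_ = cont2discrete((A, B, np.eye(2), np.zeros((2, 1))), T, method='zoh')
    print(np.allclose(Ad, Ad2), np.allclose(Bd, Bd2)) # True True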
Relating Continuous and Discrete Systems
Continuous and discrete systems that perform similarly can be related together through a set of relationships. It should come as no surprise that a discrete system and a continuous system will have different characteristics and different coefficient matrices. If we consider that a discrete system is the same as a continuous system, except that it is sampled with a sampling time T, then the relationships below will hold. The process of converting an analog system for use with digital hardware is called discretization. We've given a basic introduction to discretization already, but we will discuss it in more detail here.
Discrete Coefficient Matrices
Of primary importance in discretization is the computation of the associated coefficient matrices from the continuous-time counterparts. If we have the continuous system (A, B, C, D), we can use the relationship t = kT to transform the state-space solution into a sampled system:
Now, if we want to analyze the k+1 term, we can solve the equation again:
Separating out the variables, and breaking the integral into two parts gives us:
If we substitute in a new variable β = (k + 1)T + τ, and if we see the following relationship:
We get our final result:
Comparing this equation to our regular solution gives us a set of relationships for converting the continuous-time system into a discrete-time system. Here, we will use "d" subscripts to denote the system matrices of a discrete system, and we will use a "c" subscript to denote the system matrices of a continuous system.
Matrix Dimensions:
A: p × p
B: p × q
C: r × p
D: r × q
If the A_c matrix is nonsingular, then we can find its inverse and instead define B_d as:

B_d = A_c^{-1} (A_d - I) B_c
The differences in the discrete and continuous matrices are due to the fact that the underlying equations that describe our systems are different. Continuous-time systems are represented by linear differential equations, while the digital systems are described by difference equations. High order terms in a difference equation are delayed copies of the signals, while high order terms in the differential equations are derivatives of the analog signal.
If we have a complicated analog system, and we would like to implement that system in a digital computer, we can use the above transformations to make our matrices conform to the new paradigm.
Notation
Because the coefficient matrices for the discrete systems are computed differently from the continuous-time coefficient matrices, and because the matrices technically represent different things, it is not uncommon in the literature to denote these matrices with different variables. For instance, the following variables are frequently used in place of A and B:

A_d \to \Omega, \qquad B_d \to R
These substitutions would give us a system defined by the ordered quadruple (Ω, R, C, D) for representing our equations.
As a matter of notational convenience, we will use the letters A and B to represent these matrices throughout the rest of this book.
Converting Difference Equations
Now, let's say that we have a 3rd order difference equation, that describes a discrete-time system:
From here, we can define a set of discrete state variables x in the following manner:
Which in turn gives us 3 first-order difference equations:
Again, we say that matrix x is a vertical vector of the 3 state variables we have defined, and we can write our state equation in the same form as if it were a continuous-time system:
Solving for x[n]
We can find a general time-invariant solution for the discrete-time difference equations. Let us start working up a pattern. We know the discrete state equation:

x[n + 1] = A x[n] + B u[n]

Starting from time n = 0, we can start to create a pattern:

x[1] = A x[0] + B u[0]
x[2] = A x[1] + B u[1] = A^2 x[0] + A B u[0] + B u[1]
x[3] = A x[2] + B u[2] = A^3 x[0] + A^2 B u[0] + A B u[1] + B u[2]

With a little algebraic trickery, we can reduce this pattern to a single equation:

[General State Equation Solution]

x[n] = A^n x[0] + \sum_{m=0}^{n-1} A^{n-1-m} B u[m]

Substituting this result into the output equation gives us:

[General Output Equation Solution]

y[n] = C A^n x[0] + C \sum_{m=0}^{n-1} A^{n-1-m} B u[m] + D u[n]
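The closed-form solution can be verified against a direct iteration of the recursion. A short Python sketch with arbitrary matrices and inputs (none of these values come from the text):

    import numpy as np

    A = np.array([[0.5, 0.1], [0.0, 0.8]])
    B = np.array([[1.0], [0.5]])
    x0 = np.array([[1.0], [0.0]])
    u = [1.0, -1.0, 0.5, 0.0, 2.0]

    x = x0                                    # iterate x[n+1] = A x[n] + B u[n]
    for n in range(len(u)):
        x = A @ x + B * u[n]

    N = len(u)                                # closed form
    xc = np.linalg.matrix_power(A, N) @ x0
    for m in range(N):
        xc = xc + np.linalg.matrix_power(A, N - 1 - m) @ B * u[m]
    print(np.allclose(x, xc))                 # True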
Time Variant Solutions
If the system is time-variant, we have a general solution that is similar to the continuous-time case:

x[n] = \varphi[n, n_0] x[n_0] + \sum_{m=n_0}^{n-1} \varphi[n, m + 1] B[m] u[m]
Where φ, the state transition matrix, is defined in a similar manner to the state-transition matrix in the continuous case. However, some of the properties in the discrete time are different. For instance, the inverse of the state-transition matrix does not need to exist, and in many systems it does not exist.
State Transition Matrix
The discrete-time state transition matrix is the unique solution of the equation:

\varphi[n + 1, n_0] = A[n] \varphi[n, n_0]

Where the following restriction must hold:

\varphi[n_0, n_0] = I

From this definition, an obvious way to calculate this state transition matrix presents itself:

\varphi[n, n_0] = A[n - 1] A[n - 2] \cdots A[n_0]

Or, for a time-invariant system,

\varphi[n, n_0] = A^{n - n_0}
MATLAB Calculations
MATLAB is a computer program, and therefore calculates all systems using digital methods. The MATLAB function lsim is used to simulate a continuous system with a specified input. This function works by calling c2d, which converts the system (A, B, C, D) into an equivalent discrete system. Once the system model is discretized, the function passes control to the dlsim function, which is used to simulate discrete-time systems with the specified input.
Because of this, simulation programs like MATLAB are subject to the round-off errors associated with the discretization process.
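Readers without MATLAB can reproduce the same workflow with Python's scipy.signal module, which offers an analogous lsim/dlsim pair. This sketch simulates a hypothetical plant 1/(s + 1) both ways; the plant and sampling period are arbitrary choices:

    import numpy as np
    from scipy import signal

    sys_c = signal.TransferFunction([1.0], [1.0, 1.0])   # assumed plant 1/(s+1)
    T = 0.05
    t = np.arange(0, 5, T)
    u = np.ones_like(t)                                  # unit step input

    _, y_c, _ = signal.lsim(sys_c, u, t)                 # continuous simulation
    sys_d = sys_c.to_discrete(T, method='zoh')           # discretize (like c2d)
    _, y_d = signal.dlsim(sys_d, u, t=t)                 # discrete simulation (like dlsim)
    print(np.max(np.abs(y_c - y_d.ravel())))             # tiny discretization error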
Stability
System stability is an important topic, because unstable systems may not perform correctly, and may actually be harmful to people. There are a number of different methods and tools that can be used to determine system stability, depending on whether you are in the state-space, or the complex domain.
Stability
Stability
When a system is unstable, the output of the system may be infinite even though the input to the system was finite. This causes a number of practical problems. For instance, a robot arm controller that is unstable may cause the robot to move dangerously. Also, systems that are unstable often incur a certain amount of physical damage, which can become costly. Nonetheless, many systems are inherently unstable - a fighter jet, for instance, or a rocket at liftoff, are examples of naturally unstable systems. Although we can design controllers that stabilize the system, it is first important to understand what stability is, how it is determined, and why it matters.
The chapters in this section are heavily mathematical and many require a background in linear differential equations. Readers without a strong mathematical background might want to review the necessary chapters in the Calculus and Ordinary Differential Equations books (or equivalent) before reading this material.
For most of this chapter we will be assuming that the system is linear and can be represented either by a set of transfer functions or in state space. Linear systems have an associated characteristic polynomial which tells us a great deal about the stability of the system. If any coefficient of the characteristic polynomial is zero or negative then the system is either unstable or at most marginally stable. It is important to note that even if all of the coefficients of the characteristic polynomial are positive the system may still be unstable. We will look into this in more detail below.
BIBO Stability
A system is defined to be BIBO Stable if every bounded input to the system results in a bounded output over the time interval [t_0, \infty). This must hold for all initial times t_0. So long as we don't input infinity to our system, we won't get infinity out.

A system is defined to be uniformly BIBO Stable if there exists a positive constant k that is independent of t_0 such that, for all t_0,

\|u(t)\| \le 1 \quad \text{for } t \ge t_0

implies that

\|y(t)\| \le k \quad \text{for } t \ge t_0
There are a number of different types of stability, and keywords that are used with the topic of stability. Some of the important words that we are going to be discussing in this chapter, and the next few chapters are: BIBO Stable, Marginally Stable, Conditionally Stable, Uniformly Stable, Asymptotically Stable, and Unstable. All of these words mean slightly different things.
Determining BIBO Stability
We can prove mathematically that a system f is BIBO stable if an arbitrary input x is bounded by two finite but large arbitrary constants M and -M:

-M < x < M

We apply the input x, and the arbitrary boundaries M and -M, to the system to produce three outputs:

y_x = f(x), \qquad y_M = f(M), \qquad y_{-M} = f(-M)

Now, all three outputs should be finite for all possible values of M and x, and they should satisfy the following relationship:

y_{-M} \le y_x \le y_M
If this condition is satisfied, then the system is BIBO stable.
A SISO linear time-invariant (LTI) system is BIBO stable if and only if its impulse response g(t) is absolutely integrable over [0, \infty):

\int_{0}^{\infty} |g(t)| \, dt \le M < \infty
Example
Consider the system:
We can apply our test, selecting an arbitrarily large finite constant M, and an arbitrary input x such that -M < x < M.
As M approaches infinity (but does not reach infinity), we can show that:
And:
So now, we can write out our inequality:
And this inequality should be satisfied for all possible values of x. However, we can see that when x is zero, we have the following:
Which means that x is between -M and M, but the value y_x is not between y_{-M} and y_M. Therefore, this system is not BIBO stable.
Poles and Stability
When the poles of the closed-loop transfer function of a given system are located in the right-half of the S-plane (RHP), the system becomes unstable. When the poles of the system are located in the left-half plane (LHP) and the system is not improper, the system is shown to be stable. A number of tests deal with this particular facet of stability: The Routh-Hurwitz Criteria, the Root-Locus, and the Nyquist Stability Criteria all test whether there are poles of the transfer function in the RHP. We will learn about all these tests in the upcoming chapters.
If the system is a multivariable, or a MIMO system, then the system is stable if and only if every pole of every transfer function in the transfer function matrix has a negative real part and every transfer function in the transfer function matrix is not improper. For these systems, it is possible to use the Routh-Hurwitz, Root Locus, and Nyquist methods described later, but these methods must be performed once for each individual transfer function in the transfer function matrix.
Poles and Eigenvalues
Every pole of G(s) is an eigenvalue of the system matrix A. However, not every eigenvalue of A is a pole of G(s).
The poles of the transfer function, and the eigenvalues of the system matrix A are related. In fact, we can say that the eigenvalues of the system matrix A are the poles of the transfer function of the system. In this way, if we have the eigenvalues of a system in the state-space domain, we can use the Routh-Hurwitz, and Root Locus methods as if we had our system represented by a transfer function instead.
On a related note, eigenvalues and all methods and mathematical techniques that use eigenvalues to determine system stability only work with time-invariant systems. In systems which are time-variant, the methods using eigenvalues to determine system stability fail.
Transfer Functions Revisited
We are going to have a brief refresher here about transfer functions, because several of the later chapters will use transfer functions for analyzing system stability.
Let us remember our generalized feedback-loop transfer function, with a gain element of K, a forward path Gp(s), and a feedback of Gb(s). We write the transfer function for this system as:

H_{cl}(s) = \frac{K G_p(s)}{1 + K G_p(s) G_b(s)}

Where H_{cl}(s) is the closed-loop transfer function, and H_{ol}(s) is the open-loop transfer function. Again, we define the open-loop transfer function as the product of the forward path and the feedback elements, as such:

H_{ol}(s) = K G_p(s) G_b(s)
Now, we can define F(s) to be the characteristic equation. F(s) is simply the denominator of the closed-loop transfer function, and can be defined as such:

[Characteristic Equation]

F(s) = 1 + H_{ol}(s) = 1 + K G_p(s) G_b(s)
We can say conclusively that the roots of the characteristic equation are the poles of the transfer function. Now, we know a few simple facts:
- The locations of the poles of the closed-loop transfer function determine if the system is stable or not
- The zeros of the characteristic equation are the poles of the closed-loop transfer function.
- The characteristic equation is always a simpler equation than the closed-loop transfer function.
These functions combined show us that we can focus our attention on the characteristic equation, and find the roots of that equation.
State-Space and Stability
As we have discussed earlier, the system is stable if the eigenvalues of the system matrix A have negative real parts. However, there are other stability issues that we can analyze, such as whether a system is uniformly stable, asymptotically stable, or otherwise. We will discuss all these topics in a later chapter.
Marginal Stability
When the poles of the system in the complex s-domain exist on the imaginary axis (the vertical axis), or when the eigenvalues of the system matrix are imaginary (no real part), the system exhibits oscillatory characteristics, and is said to be marginally stable. A marginally stable system may become unstable under certain circumstances, and may be perfectly stable under other circumstances. It is impossible to tell by inspection whether a marginally stable system will become unstable or not.
We will discuss marginal stability more in the following chapters.
Discrete Time Stability
Discrete-Time Stability
The stability analysis of a discrete-time or digital system is similar to the analysis for a continuous time system. However, there are enough differences that it warrants a separate chapter.
Input-Output Stability
Uniform Stability
An LTI causal system is uniformly BIBO stable if there exists a positive constant L such that

\|u[n]\| \le 1 \quad \text{for } n \ge n_0

implies that

\|y[n]\| \le L \quad \text{for } n \ge n_0
Impulse Response Matrix
We can define the impulse response matrix of a discrete-time system as:

[Impulse Response Matrix]

G[n] = \begin{cases} C A^{n-1} B & n > 0 \\ D & n = 0 \end{cases}
Or, in the general time-varying case:

G[n, m] = C[n] \varphi[n, m + 1] B[m] \quad \text{for } n > m
A digital system is BIBO stable if and only if there exists a positive constant L such that, summing over all non-negative k:

\sum_{k=0}^{\infty} \|G[k]\| \le L
Stability of Transfer Function
A MIMO discrete-time system is BIBO stable if and only if every pole of every transfer function in the transfer function matrix has a magnitude less than 1. All poles of all transfer functions must exist inside the unit circle on the Z plane.
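Checking this condition numerically takes two lines of Python; the characteristic polynomial below is an arbitrary stable example, not one from the text:

    import numpy as np

    poles = np.roots([1.0, -1.3, 0.4])            # D(z) = z^2 - 1.3 z + 0.4
    print(poles, np.all(np.abs(poles) < 1))       # [0.8, 0.5] -> stable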
Lyapunov Stability
There is a discrete version of the Lyapunov stability theorem that applies to digital systems. Given the discrete Lyapunov equation:

[Digital Lyapunov Equation]

A^T M A - M = -N
We can use this version of the Lyapunov equation to define a condition for stability in discrete-time systems:
- Lyapunov Stability Theorem (Digital Systems)
- A digital system with the system matrix A is asymptotically stable if and only if there exists a unique matrix M that satisfies the Lyapunov Equation for every positive definite matrix N.
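scipy exposes a solver for exactly this equation. In the sketch below (the system matrix is an arbitrary stable example), we pick N as the identity and test whether the solution M is positive definite:

    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    A = np.array([[0.5, 0.2], [0.0, 0.7]])        # eigenvalues 0.5, 0.7: stable
    N = np.eye(2)                                 # any positive definite N
    M = solve_discrete_lyapunov(A.T, N)           # solves A^T M A - M = -N
    print(np.all(np.linalg.eigvals(M) > 0))       # True: M is positive definite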
Poles and Eigenvalues
Every pole of G(z) is an eigenvalue of the system matrix A. Not every eigenvalue of A is a pole of G(z). Like the poles of the transfer function, all the eigenvalues of the system matrix must have magnitudes less than 1. Mathematically:

|\lambda_i| < 1 \quad \text{for every eigenvalue } \lambda_i \text{ of } A
If the magnitude of the eigenvalues of the system matrix A, or the poles of the transfer functions are greater than 1, the system is unstable.
Finite Wordlengths
Digital computer systems have an inherent problem because implementable computer systems have finite wordlengths to deal with. Some of the issues are:
- Real numbers can only be represented with a finite precision. Typically, a computer system can only accurately represent a number to a finite number of decimal points.
- Because of the fact above, computer systems with feedback can compound errors with each program iteration. Small errors in one step of an algorithm can lead to large errors later in the program.
- Integer numbers in computer systems have finite lengths. Because of this, integer numbers will either roll-over, or saturate, depending on the design of the computer system. Both situations can create inaccurate results.
Jury's Test
Routh-Hurwitz in Digital Systems
Because of the differences in the Z and S domains, the Routh-Hurwitz criterion cannot be used directly with digital systems. This is because digital systems and continuous-time systems have different regions of stability. However, there are some methods that we can use to analyze the stability of digital systems. Our first option (and arguably not a very good option) is to convert the digital system into a continuous-time representation using the bilinear transform. The bilinear transform converts an equation in the Z domain into an equation in the W domain, which has properties similar to the S domain. Another possibility is to use Jury's Stability Test. Jury's test is a procedure similar to the RH test, except it has been modified to analyze digital systems in the Z domain directly.
Bilinear Transform
One common, but time-consuming, method of analyzing the stability of a digital system in the z-domain is to use the bilinear transform to convert the transfer function from the z-domain to the w-domain. The w-domain is similar to the s-domain in the following ways:
- Poles in the right-half plane are unstable
- Poles in the left-half plane are stable
- Poles on the imaginary axis are partially stable
The w-domain is warped with respect to the s domain, however, and except for the relative position of poles to the imaginary axis, they are not in the same places as they would be in the s-domain.
Remember, however, that the Routh-Hurwitz criterion can tell us whether a pole is unstable or not, and nothing else. Therefore, it doesn't matter where exactly the pole is, so long as it is in the correct half-plane. Since we know that stable poles are in the left-half of the w-plane and the s-plane, and that unstable poles are on the right-hand side of both planes, we can use the Routh-Hurwitz test on functions in the w domain exactly like we can use it on functions in the s-domain.
Other Mappings
There are other methods for mapping an equation in the Z domain into an equation in the S domain, or a similar domain. We will discuss these different methods in the Appendix.
Jury's Test
Jury's test is a test that is similar to the Routh-Hurwitz criterion, except that it can be used to analyze the stability of an LTI digital system in the Z domain. To use Jury's test to determine if a digital system is stable, we must check our z-domain characteristic equation against a number of specific rules and requirements. If the function fails any requirement, it is not stable. If the function passes all the requirements, it is stable. Jury's test is a necessary and sufficient test for stability in digital systems.
Again, we call D(z) the characteristic polynomial of the system. It is the denominator polynomial of the Z-domain transfer function. Jury's test will focus exclusively on the Characteristic polynomial. To perform Jury's test, we must perform a number of smaller tests on the system. If the system fails any test, it is unstable.
Jury Tests
Given a characteristic equation in the form:

D(z) = a_N z^N + a_{N-1} z^{N-1} + \cdots + a_1 z + a_0
The following tests determine whether this system has any poles outside the unit circle (the instability region). These tests will use the value N as being the degree of the characteristic polynomial.
The system must pass all of these tests to be considered stable. If the system fails any test, you may stop immediately: you do not need to try any further tests.
- Rule 1
- If z is 1, the system output must be positive:

D(1) > 0
- Rule 2
- If z is -1, then the following relationship must hold:

(-1)^N D(-1) > 0
- Rule 3
- The absolute value of the constant term (a_0) must be less than the value of the highest coefficient (a_N):

|a_0| < a_N
If Rule 1, Rule 2, and Rule 3 are all satisfied, construct the Jury Array (discussed below).
- Rule 4
- Once the Jury Array has been formed, all the following relationships must be satisfied until the end of the array:

|b_0| > |b_{N-1}|, \qquad |c_0| > |c_{N-2}|, \qquad |d_0| > |d_{N-3}|

- And so on until the last row of the array. If all these conditions are satisfied, the system is stable.
While you are constructing the Jury Array, you can be making the tests of Rule 4. If the Array fails Rule 4 at any point, you can stop calculating the array: your system is unstable. We will discuss the construction of the Jury Array below.
The Jury Array
The Jury Array is constructed by first writing out a row of coefficients, and then writing out another row with the same coefficients in reverse order. For instance, if your polynomial is a third-order system, we can write the first two rows of the Jury Array as follows:

a_0 \quad a_1 \quad a_2 \quad a_3
a_3 \quad a_2 \quad a_1 \quad a_0

Now, once we have the first rows of our coefficients written out, we add more rows of coefficients (we will use b for the next computed row, and c for the one after, as per our previous convention), and we will calculate the values of the lower rows from the values of the upper rows. Each new computed row has one fewer coefficient than the row before it:

a_0 \quad a_1 \quad \cdots \quad a_N
a_N \quad a_{N-1} \quad \cdots \quad a_0
b_0 \quad b_1 \quad \cdots \quad b_{N-1}
b_{N-1} \quad b_{N-2} \quad \cdots \quad b_0
\vdots

Note: the last row is row 2N - 3, and it always has 3 elements. The test is meaningless if N = 1, but in that case you can read the single pole off directly!
Once we get to a row with 3 members, we can stop constructing the array.
To calculate the values of the odd-numbered rows, we can use the following formulae. The even-numbered rows are equal to the previous row in reverse order. We will use k as an arbitrary subscript value. These formulae are reusable for all elements in the array:

b_k = \begin{vmatrix} a_0 & a_{N-k} \\ a_N & a_k \end{vmatrix}, \qquad c_k = \begin{vmatrix} b_0 & b_{N-1-k} \\ b_{N-1} & b_k \end{vmatrix}, \qquad d_k = \begin{vmatrix} c_0 & c_{N-2-k} \\ c_{N-2} & c_k \end{vmatrix}
This pattern can be carried on to all lower rows of the array, if needed.
Example: Calculating e5
Give the equation for member e5 of the jury array (assuming the original polynomial is sufficiently large to require an e5 member).
Going off the pattern we set above, we can have this equation for a member e:

e_k = \begin{vmatrix} d_0 & d_{N-R-k} \\ d_{N-R} & d_k \end{vmatrix}

Where we are using R as the subtractive element from the above equations. Since row c had R = 1, and row d had R = 2, we can follow the pattern and for row e set R = 3. Plugging this value of R into our equation above gives us:

e_k = \begin{vmatrix} d_0 & d_{N-3-k} \\ d_{N-3} & d_k \end{vmatrix}

And since we want e_5, we know that k is 5, so we can substitute that into the equation:

e_5 = \begin{vmatrix} d_0 & d_{N-8} \\ d_{N-3} & d_5 \end{vmatrix}

When we take the determinant, we get the following equation:

e_5 = d_0 d_5 - d_{N-3} d_{N-8}
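The whole procedure mechanizes nicely. Below is a minimal Python sketch of Jury's test, assuming the coefficients are supplied in ascending order of powers of z with a_N > 0; the third-order polynomial used as the example is arbitrary:

    import numpy as np

    def jury_stable(a):
        """Jury test for D(z) = a[0] + a[1] z + ... + a[N] z^N, with a[N] > 0."""
        a = np.asarray(a, dtype=float)
        N = len(a) - 1
        if not a.sum() > 0:                                  # Rule 1: D(1) > 0
            return False
        if not (-1)**N * sum(c * (-1)**k for k, c in enumerate(a)) > 0:
            return False                                     # Rule 2: (-1)^N D(-1) > 0
        if not abs(a[0]) < a[N]:                             # Rule 3: |a0| < aN
            return False
        row = a
        while len(row) >= 4:                                 # build rows down to 3 elements
            row = np.array([row[0] * row[k] - row[-1] * row[len(row) - 1 - k]
                            for k in range(len(row) - 1)])
            if not abs(row[0]) > abs(row[-1]):               # Rule 4 checks
                return False
        return True

    # D(z) = z^3 - 0.3 z^2 - 0.18 z + 0.04 has roots 0.5, 0.2, -0.4: stable
    print(jury_stable([0.04, -0.18, -0.3, 1.0]))             # True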
Further reading
We will discuss the bilinear transform, and other methods to convert between the Laplace domain and the Z domain, in the Z Transform Mappings Appendix.
Root Locus
The Problem
Consider a system like a radio. The radio has a "volume" knob that controls the amount of gain of the system. High volume means more power going to the speakers; low volume means less power to the speakers. As the volume value increases, the poles of the transfer function of the radio change, and they might potentially become unstable. We would like to find out whether the radio becomes unstable, and if so, we would like to find out what values of the volume cause it to become unstable. Our current methods would require us to plug in each new value for the volume (gain, K), and solve the open-loop transfer function for the roots. This process can be a long one. Luckily, there is a method called the root-locus method that allows us to graph the locations of all the poles of the system for all values of gain, K.
Root-Locus
As we change gain, we notice that the system poles and zeros actually move around in the S-plane. This fact can make life particularly difficult when we need to solve higher-order equations repeatedly, for each new gain value. The solution to this problem is a technique known as Root-Locus graphs. Root-Locus allows you to graph the locations of the poles and zeros for every value of gain, by following several simple rules.
Let's say we have a closed-loop transfer function for a particular system:

T(s) = N(s) / D(s)

Where N is the numerator polynomial and D is the denominator polynomial of the transfer function, respectively. Now, we know that to find the poles of the equation, we must set the denominator to 0 and solve the characteristic equation. In other words, the locations of the poles of a specific equation must satisfy the following relationship:

D(s) = 1 + KG(s)H(s) = 0

From this same equation, we can manipulate the equation as such:

KG(s)H(s) = -1

And finally by converting to polar coordinates, we obtain a magnitude condition and an angle condition. Now we have 2 equations that govern the locations of the poles of a system for all gain values:

[The Magnitude Equation]
|KG(s)H(s)| = 1

[The Angle Equation]
∠KG(s)H(s) = 180° (or any odd multiple of 180°)
Digital Systems
The same basic method can be used for considering digital systems in the Z-domain:

T(z) = N(z) / D(z)

Where N is the numerator polynomial in z, D is the denominator polynomial in z, and KG(z)H(z) is the open-loop transfer function of the system in the Z domain.

The denominator D(z), by the definition of the characteristic equation, is equal to:

D(z) = 1 + KG(z)H(z) = 0

We can manipulate this as follows:

KG(z)H(z) = -1

We can now convert this to polar coordinates, and take the magnitude and angle of the polynomial. We are now left with two important equations:

[The Magnitude Equation]
|KG(z)H(z)| = 1

[The Angle Equation]
∠KG(z)H(z) = 180° (or any odd multiple of 180°)
If you compare the two, the Z-domain equations are nearly identical to the S-domain equations, and act exactly the same. For the remainder of the chapter, we will only consider the S-domain equations, with the understanding that digital systems operate in nearly the same manner.
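Both conditions can be checked numerically at any trial point. Here is a minimal Octave/MATLAB sketch; the open-loop function GH and the test point s0 are assumed examples, not taken from the text:

% Test whether a trial point satisfies the angle condition, and if so,
% recover the gain there from the magnitude condition.
GH = @(s) 1 ./ (s .* (s + 1) .* (s + 2));   % example open-loop function
s0 = -0.5;                                  % trial point on the real axis
ang = angle(GH(s0)) * 180/pi                % 180 degrees: s0 is on the locus
K = 1 / abs(GH(s0))                         % gain that places a pole at s0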
The Root-Locus Procedure
In this section, the rules for the S-Plane and the Z-plane are the same, so we won't refer to the differences between them.
In the transform domain (see note at right), when the gain is small, the poles start at the poles of the open-loop transfer function. As the gain approaches infinity, the poles move to overlap the zeros of the system. This means that on a root-locus graph, every pole moves towards a zero. Only one pole may move towards any one zero, so there must be the same number of poles as zeros.
If there are fewer zeros than poles in the transfer function, there are a number of implicit zeros located at infinity, that the poles will approach.
First, we need to convert the magnitude equation into a slightly more convenient form:

|G(s)H(s)| = 1/K
We generally use capital letters for functions in the frequency domain, but a(s) and b(s) are unimportant enough to be lower-case.
Now, we can assume that G(s)H(s) is a fraction of some sort, with a numerator and a denominator that are both polynomials. We can express this equation using arbitrary functions a(s) and b(s), as such:

G(s)H(s) = a(s) / b(s)
We will refer to these functions a(s) and b(s) later in the procedure.
We can start drawing the root-locus by first placing the roots of b(s) on the graph with an 'X'. Next, we place the roots of a(s) on the graph, and mark them with an 'O'.
Poles are marked on the graph with an 'X' and zeros are marked with an 'O' by common convention. These letters have no particular meaning.
Next, we examine the real axis. Starting from the right-hand side of the graph and traveling to the left, we draw a root-locus line on the real axis at every point to the left of an odd number of poles or zeros on the real axis. This may sound tricky at first, but it becomes easier with practice.
Double poles or double zeros count as two.
Now, a root-locus line starts at every pole. Therefore, any place that two poles appear to be connected by a root locus line on the real-axis, the two poles actually move towards each other, and then they "break away", and move off the axis. The point where the poles break off the axis is called the breakaway point. From here, the root locus lines travel towards the nearest zero.
It is important to note that the s-plane is symmetrical about the real axis, so whatever is drawn on the top-half of the S-plane, must be drawn in mirror-image on the bottom-half plane.
Once poles break away from the real axis, they can either travel out towards infinity (to meet an implicit zero), travel to meet an explicit zero, or re-join the real axis to meet a zero that is located on the real axis. If a pole is traveling towards infinity, it always follows an asymptote. The number of asymptotes is equal to the number of implicit zeros at infinity.
Root Locus Rules
Here is the complete set of rules for drawing the root-locus graph. We will use p and z to denote the number of poles and the number of zeros of the open-loop transfer function, respectively. We will use Pi and Zi to denote the location of the ith pole and the ith zero, respectively. Likewise, we will use ψi and ρi to denote the angle from a given point to the ith pole and zero, respectively. All angles are given in radians (π denotes π radians).
There are 11 rules that, if followed correctly, will allow you to create a correct root-locus graph.
- Rule 1
- There is one branch of the root-locus for every root of b(s).
- Rule 2
- The roots of b(s) are the poles of the open-loop transfer function. Mark the roots of b(s) on the graph with an X.
- Rule 3
- The roots of a(s) are the zeros of the open-loop transfer function. Mark the roots of a(s) on the graph with an O. There should be no more O's than X's. There are p - z zeros located at infinity. These zeros at infinity are called "implicit zeros". All branches of the root-locus will move from a pole to a zero (some branches, therefore, may travel towards infinity).
- Rule 4
- A point on the real axis is a part of the root-locus if it is to the left of an odd number of poles and zeros.
- Rule 5
- The gain at any point on the root locus can be determined by the inverse of the absolute value of the magnitude equation: K = 1 / |G(s)H(s)| at that point.
- Rule 6
- The root-locus diagram is symmetric about the real-axis. All complex roots are conjugates.
- Rule 7
- Two roots that meet on the real-axis will break away from the axis at certain break-away points. If we set s → σ (no imaginary part), we can use the following equation:

K = -1 / (G(σ)H(σ))

- And differentiate to find the local maximum:

dK/dσ = 0
- Rule 8
- The breakaway lines of the root locus are separated by angles of π/α, where α is the number of poles intersecting at the breakaway point.
- Rule 9
- The breakaway root-loci follow asymptotes that intersect the real axis at angles φk given by:

φk = (2k + 1)π / (p - z),  for k = 0, 1, ..., p - z - 1

- The origin of these asymptotes, OA, is given as the sum of the pole locations, minus the sum of the zero locations, divided by the difference between the number of poles and zeros:

OA = (ΣPi - ΣZi) / (p - z)
- The OA point should lie on the real axis.
- Rule 10
- The branches of the root locus cross the imaginary axis at points where the angle equation value is π (i.e., 180°).
- Rule 11
- The angle that a root locus branch makes with a complex-conjugate pole or zero is determined by analyzing the angle equation at a point infinitesimally close to the pole or zero. The angle of departure, φd, from a pole is given by the following equation:

φd = 180° + Σ ρi - Σ ψi

- The angle of arrival, φa, at a zero is given by:

φa = 180° + Σ ψi - Σ ρi

(the sums run over the angles measured from all the other poles and zeros to the pole or zero in question)
We will explain these rules in the rest of the chapter.
Root Locus Equations
Here are the two major equations:
[Root Locus Equations]

S-Domain Equations: 1 + KG(s)H(s) = 0
Z-Domain Equations: 1 + KG(z)H(z) = 0

Note that the sum of the angles from all the poles and zeros to a point on the locus must equal 180°.
Number of Asymptotes
If the number of explicit zeros of the system is denoted by Z (uppercase Z), and the number of poles of the system is given by P, then the number of asymptotes (Na) is given by:

[Number of Asymptotes]
Na = P - Z

The angles of the asymptotes are given by:

[Angle of Asymptotes]
φk = (2k + 1)π / (P - Z)

for values of k = 0, 1, ..., Na - 1.
The angles for the asymptotes are measured from the positive real axis.
Asymptote Intersection Point
The asymptotes intersect the real axis at the point:

[Origin of Asymptotes]
OA = (ΣPi - ΣZi) / (P - Z)

Where ΣPi is the sum of all the locations of the poles, and ΣZi is the sum of all the locations of the explicit zeros.
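The three asymptote formulas are easy to evaluate numerically. A small Octave/MATLAB sketch, using an assumed set of pole and zero locations:

% Number, angles, and origin of the asymptotes for an example system
% with poles at 0, -1, -2 and no explicit zeros (assumed values).
p = [0, -1, -2];                     % pole locations
z = [];                              % explicit zero locations
Na  = numel(p) - numel(z)            % number of asymptotes: 3
k   = 0:Na-1;
phi = (2*k + 1)*pi / Na              % angles: pi/3, pi, 5*pi/3
OA  = (sum(p) - sum(z)) / Na         % origin of asymptotes: -1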
Breakaway Points
The breakaway points are located at the roots of the following equation:
[Breakaway Point Locations]

d(G(s)H(s))/ds = 0, or d(G(z)H(z))/dz = 0

Once you solve for s (or z), the real roots give you the breakaway/reentry points. Complex roots correspond to a lack of breakaway/reentry.
The breakaway point equation can be difficult to solve, so many times the actual location is approximated.
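When an exact answer is wanted, the equation can also be handled numerically. A sketch, using an assumed open-loop function G(s)H(s) = a(s)/b(s) with no finite zeros:

% Candidate breakaway points: real roots of d/ds [G(s)H(s)] = 0,
% which for G H = a/b reduces to b'(s)a(s) - b(s)a'(s) = 0.
a = 1;                               % numerator a(s) = 1 (no finite zeros)
b = poly([0, -1, -2]);               % denominator with poles at 0, -1, -2
p1 = conv(polyder(b), a);
p2 = conv(b, polyder(a));
L = max(numel(p1), numel(p2));       % pad before subtracting
p1 = [zeros(1, L - numel(p1)), p1];
p2 = [zeros(1, L - numel(p2)), p2];
cand = roots(p1 - p2)                % -1.5774 and -0.4226
% only -0.4226 lies on a locus segment (between 0 and -1), so it is
% the breakaway point; -1.5774 is not on the root locus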
Root Locus and Stability
The root locus procedure should produce a graph of where the poles of the system are for all values of gain K. When any or all of the roots of D are in the unstable region, the system is unstable. When any of the roots are in the marginally stable region, the system is marginally stable (oscillatory). When all of the roots of D are in the stable region, then the system is stable.
It is important to note that a system that is stable for gain K1 may become unstable for a different gain K2. Some systems may have poles that cross over from stable to unstable multiple times, giving multiple gain values for which the system is unstable.
Here is a quick refresher:
Region                     S-Domain              Z-Domain
Stable Region              Left-Hand S Plane     Inside the Unit Circle
Marginally Stable Region   The vertical axis     The Unit Circle
Unstable Region            Right-Hand S Plane    Outside the Unit Circle
Examples
Example 1: First-Order System
Find the root-locus of the open-loop system:
If we look at the characteristic equation, we can quickly solve for the single pole of the system:
We plot that point on our root-locus graph, and everything on the real axis to the left of that single point is on the root locus (from the rules, above). Therefore, the root locus of our system looks like this:
From this image, we can see that for all values of gain this system is stable.
Example 2: Third Order System
We are given a system with three real poles, shown by the transfer function:
Is this system stable?
To answer this question, we can plot the root-locus. First, we draw the poles on the graph at locations -1, -2, and -3. The real axis between the first and second poles is on the root-locus, as well as the real axis to the left of the third pole. We know also that there is going to be a breakaway from the real axis at some point. The origin of asymptotes is located at:

OA = ((-1) + (-2) + (-3)) / (3 - 0) = -2,

and the angle of the asymptotes is given by:

φk = (2k + 1)π / 3 for k = 0, 1, 2, giving angles of π/3, π, and 5π/3.
We know that the breakaway occurs between the first and second poles, so we can estimate its location (a quick numeric check follows below). Drawing the root-locus gives us the graph below.
We can see that for low values of gain the system is stable, but for higher values of gain, the system becomes unstable.
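A quick numeric check of the breakaway estimate is possible here, since with no finite zeros the candidates are just the roots of the derivative of the denominator polynomial (a sketch using the breakaway rule above):

% Breakaway candidates for poles at -1, -2, -3
b = poly([-1, -2, -3]);              % (s+1)(s+2)(s+3) = s^3 + 6s^2 + 11s + 6
cand = roots(polyder(b))             % -2.5774 and -1.4226
% only -1.4226 lies between -1 and -2, so the locus breaks away there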
Example: Complex-Conjugate Zeros
Find the root-locus graph for the following system transfer function:
If we look at the denominator, we have poles at the origin, -1, and -2. Following Rule 4, we know that the real-axis between the first two poles, and the real axis after the third pole are all on the root-locus. We also know that there is going to be a breakaway point between the first two poles, so that they can approach the complex conjugate zeros. If we use the quadratic equation on the numerator, we can find that the zeros are located at:
If we draw our graph, we get the following:
We can see from this graph that the system is stable for all values of K.
Example: Root-Locus Using MATLAB/Octave
Use MATLAB, Octave, or another piece of mathematical simulation software to produce the root-locus graph for the following system:
First, we must multiply through in the denominator:

D(s) = (s + 1)(s + 2) = s^2 + 3s + 2
Now, we can generate the coefficient vectors from the numerator and denominator:
num = [1 7 12];   % numerator coefficients: s^2 + 7s + 12
den = [1 3 2];    % denominator coefficients: s^2 + 3s + 2
Next, we can feed these vectors into the rlocus command:
rlocus(num, den);
Note: In Octave, we need to create a system structure first, by typing:
sys = tf(num, den); rlocus(sys);
Either way, we generate the following graph:
Nyquist Criterion
Nyquist Stability Criteria
The Nyquist Stability Criterion is a test for system stability, just like the Routh-Hurwitz test or the Root-Locus methodology. However, the Nyquist Criterion can also give us additional information about a system. Routh-Hurwitz and Root-Locus can tell us where the poles of the system are for particular values of gain. By altering the gain of the system, we can determine if any of the poles move into the RHP, and therefore become unstable. The Nyquist Criterion, however, can tell us things about the frequency characteristics of the system. For instance, some systems with constant gain might be stable for low-frequency inputs, but become unstable for high-frequency inputs.
Here is an example of a system responding differently to different frequency input values: Consider an ordinary glass of water. If the water is exposed to ordinary sunlight, it is unlikely to heat up too much. However, if the water is exposed to microwave radiation (from inside your microwave oven, for instance), the water will quickly heat up to a boil.
Also, the Nyquist Criterion can tell us things about the phase of the input signals, the time-shift of the system, and other important information.
Contours
A contour is a complicated mathematical construct, but luckily we only need to concern ourselves with a few points about them. We will denote contours with the Greek letter Γ (gamma). Contours are lines, drawn on a graph, that follow certain rules:
- The contour must close (it must form a complete loop)
- The contour may not cross directly through a pole of the system.
- Contours must have a direction (clockwise or counterclockwise, generally).
- A contour is called "simple" if it has no self-intersections. We only consider simple contours here.
Once we have such a contour, we can develop some important theorems about them, and finally use these theorems to derive the Nyquist stability criterion.
Argument Principle
Here is the argument principle, which we will use to derive the stability criterion. Do not worry if you do not understand all the terminology, we will walk through it:
- The Argument Principle
- If we have a contour, Γ, drawn in one plane (say the complex Laplace plane, for instance), we can map that contour into another plane, the F(s) plane, by transforming the contour with the function F(s). The resultant contour will circle the origin point of the F(s) plane N times, where N is equal to the difference between Z and P, the number of zeros and poles of the function F(s) enclosed by the contour, respectively.
When we have our contour, Γ, we transform it into the contour ΓF(s) by plugging every point of the contour into the function F(s), and taking the resultant value to be a point on the transformed contour.
Example: First Order System
Let's say, for instance, that Γ is a unit square contour in the complex s plane. The vertices of the square are located at points I,J,K,L, as follows:
We must also specify the direction of our contour, and we will say (arbitrarily) that it is a clockwise contour (travels from I to J to K to L). We will also define our transform function, F(s), to be the following:

F(s) = 2s + 1

Looking at F(s), we can show that there is one zero, at s → -0.5, and no poles. Plotting this root on the same graph as our contour, we see clearly that it lies within the contour. Since s is a complex variable, defined with real and imaginary parts as:

s = σ + jω

We know that F(s) must also be complex. We will say, for reasons of simplicity, that the axes in the F(s) plane are u and v, and are related as such:

F(s) = u + jv

From this relationship, we can define u and v in terms of σ and ω:

u = 2σ + 1
v = 2ω
Now, to transform Γ, we will plug every point of the contour into F(s), and the resultant values will be the points of . We will solve for complex values u and v, and we will start with the vertices, because they are the simplest examples:
We can take the lines in between the vertices as a function of s, and plug the entire function into the transform. Luckily, because we are using straight lines, we can simplify very much:
- Line from I to J:
- Line from J to K:
- Line from K to L:
- Line from L to I:
And when we graph these functions, from vertex to vertex, we see that the resultant contour in the F(s) plane is a square, but not centered at the origin, and larger in size. Notice how the contour encircles the origin of the F(s) plane one time. This will be important later on.
Example: Second-Order System
Let's say that we have a slightly more complicated mapping function:

F(s) = (2s + 1) / (2s^2 + 2s + 1)

We can see clearly that F(s) has a zero at s → -0.5, and a complex conjugate set of poles at s → -0.5 + 0.5j and s → -0.5 - 0.5j. We will use the same unit square contour, Γ, from above:
We can see clearly that the poles and the zero of F(s) lie within Γ. Setting F(s) to u + vj and solving, we get the following relationships:
This is a little difficult now, because we need to simplify this whole expression, and separate it out into real and imaginary parts. There are two methods of doing this, neither of which is short or easy enough to demonstrate here in its entirety:
- We convert the numerator and denominator polynomials into a polar representation in terms of r and θ, then perform the division, and then convert back into rectangular format.
- We plug each segment of our contour into this equation, and simplify numerically (a sketch of this approach follows below).
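Here is a brief Octave/MATLAB sketch of the second, numerical approach. The corner locations (±1 ± 1j) stand in for the vertices I, J, K, L, which are an assumption here, and F(s) matches the mapping function defined above:

% Sample the square contour and map every sample through F(s).
F = @(s) (2*s + 1) ./ (2*s.^2 + 2*s + 1);
corners = [1+1j, 1-1j, -1-1j, -1+1j, 1+1j];  % assumed clockwise square
gam = [];
for m = 1:4
  gam = [gam, linspace(corners(m), corners(m+1), 200)];
end
w = F(gam);                          % the transformed contour
plot(real(w), imag(w)); axis equal; grid on;
% counting encirclements of the origin on this plot gives N = Z - P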
The Nyquist Contour
The Nyquist contour, the contour that makes the entire Nyquist criterion work, must encircle the entire unstable region of the complex plane. For analog systems, this is the right half of the complex s plane. For digital systems, this is the entire plane outside the unit circle. Remember that if a pole of the closed-loop transfer function (or equivalently a zero of the characteristic equation) lies in the unstable region of the complex plane, the system is unstable.
- Analog Systems
- The Nyquist contour for analog systems is an infinite semi-circle that encircles the entire right-half of the s plane. The semicircle travels up the imaginary axis from negative infinity to positive infinity. From positive infinity, the contour breaks away from the imaginary axis, in the clock-wise direction, and forms a giant semicircle.
- Digital Systems
- The Nyquist contour in digital systems is a counter-clockwise encirclement of the unit circle.
Nyquist Criteria
Let us first introduce the most important equation when dealing with the Nyquist criterion:

N = Z - P
Where:
- N is the number of encirclements of the (-1, 0) point.
- Z is the number of zeros of the characteristic equation.
- P is the number of poles of the open-loop characteristic equation.
With this equation stated, we can now state the Nyquist Stability Criterion:
- Nyquist Stability Criterion
- A feedback control system is stable, if and only if the contour in the F(s) plane does not encircle the (-1, 0) point when P is 0.
- A feedback control system is stable, if and only if the contour in the F(s) plane encircles the (-1, 0) point a number of times equal to the number of poles of F(s) enclosed by Γ.
In other words, if P is zero then N must equal zero. Otherwise, N must equal P. Essentially, we are saying that Z must always equal zero, because Z is the number of zeros of the characteristic equation (and therefore the number of poles of the closed-loop transfer function) that are in the right-half of the s plane.
Keep in mind that we don't necessarily know the locations of all the zeros of the characteristic equation. So if we find, using the Nyquist criterion, that N is not equal to P, then we know that there must be a zero of the characteristic equation in the right-half plane, and that therefore the system is unstable.
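Control packages can draw the mapped contour directly. A sketch using the Octave control package (loaded with pkg load control) or MATLAB's Control System Toolbox; the open-loop function is an arbitrary example:

% Draw the Nyquist plot of an example open-loop transfer function and
% inspect the encirclements of the (-1, 0) point.
sys = tf(10, conv([1 1], [1 2]));    % example: 10 / ((s+1)(s+2)), P = 0
nyquist(sys);                        % no encirclement of (-1, 0): stable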
Nyquist ↔ Bode
A careful inspection of the Nyquist plot will reveal a surprising relationship to the Bode plots of the system. If we use the Bode phase plot as the angle θ, and the Bode magnitude plot as the distance r, then it becomes apparent that the Nyquist plot of a system is simply the polar representation of the Bode plots.
To obtain the Nyquist plot from the Bode plots, we take the phase angle and the magnitude value at each frequency ω. We convert the magnitude value from decibels back into gain ratios. Then, we plot the ordered pairs (r, θ) on a polar graph.
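This conversion is simple to carry out numerically. A sketch, again assuming the Octave control package or MATLAB's toolbox, with an arbitrary example system:

% Build a polar (Nyquist-style) plot from Bode data.
sys = tf(1, [1 1 1]);                % example system
[mag, pha, w] = bode(sys);           % with output arguments, magnitude is
mag = squeeze(mag);                  % returned as a gain ratio (not dB),
pha = squeeze(pha);                  % so no conversion is needed here
polar(pha * pi/180, mag);            % plot the (r, theta) pairs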
Nyquist in the Z Domain
The Nyquist Criterion can be utilized in the digital domain in a similar manner as it is used with analog systems. The primary difference in using the criterion is that the shape of the Nyquist contour must change to encompass the unstable region of the Z plane. Therefore, instead of an infinite semicircle, the Nyquist contour for digital systems is a counter-clockwise unit circle. By changing the shape of the contour, the same N = Z - P equation holds true, and the resulting Nyquist graph will typically look identical to one from an analog system, and can be interpreted in the same way.
State-Space Stability
State-Space Stability
If a system is represented in the state-space domain, it doesn't make sense to convert that system to a transfer function representation (or even a transfer matrix representation) in an attempt to use any of the previous stability methods. Luckily, there are other analysis methods that can be used with the state-space representation to determine if a system is stable or not. First, let us introduce the notion of instability:
- Unstable
- A system is said to be unstable if the system response approaches infinity as time approaches infinity. If our system response is G(t), then we can say a system is unstable if:

lim (t → ∞) ||G(t)|| = ∞
Also, a key concept when we are talking about stability of systems is the concept of an equilibrium point:
- Equilibrium Point
- Given a system f such that:

x'(t) = f(x(t), t)

A particular state xe is called an equilibrium point if

f(xe, t) = 0

for all time t in the interval [t0, ∞), where t0 is the starting time of the system.
The definitions below typically require that the equilibrium point be zero. If we have an equilibrium point xe = a, then we can use the following change of variables to make the equilibrium point zero:

x̄ = x - a
We will also see below that a system's stability is defined in terms of an equilibrium point. Related to the concept of an equilibrium point is the notion of a zero point:
- Zero State
- A state xz is a zero state if xz = 0. A zero state may or may not be an equilibrium point.
Stability Definitions
The equilibrium x = 0 of the system is stable if and only if the solutions of the zero-input state equation are bounded. Equivalently, x = 0 is a stable equilibrium if and only if for every initial time t0, there exists an associated finite constant k(t0) such that:

sup (t ≥ t0) ||x(t)|| = k(t0) < ∞

Where sup is the supremum, or "maximum" value of the expression. This maximum value must never exceed the finite constant k(t0) (and therefore it may not be infinite at any point).
- Uniform Stability
- The system is defined to be uniformly stable if it is stable for all initial values of t0, that is, if the bound can be chosen independently of t0:

sup (over all t0) k(t0) = k0 < ∞
Uniform stability is a more general, and more powerful form of stability than was previously provided.
- Asymptotic Stability
- A system is defined to be asymptotically stable if it is stable and, in addition:

lim (t → ∞) x(t) = 0
A time-invariant system is asymptotically stable if all the eigenvalues of the system matrix A have negative real parts. If a system is asymptotically stable, it is also BIBO stable. However, the converse is not true: a system that is BIBO stable might not be asymptotically stable.
- Uniform Asymptotic Stability
- A system is defined to be uniformly asymptotically stable if the system is asymptotically stable for all values of t0.
- Exponential Stability
- A system is defined to be exponentially stable if the system response decays exponentially towards zero as time approaches infinity.
For linear systems, uniform asymptotic stability is the same as exponential stability. This is not the case with non-linear systems.
Marginal Stability
Here we will discuss some rules concerning systems that are marginally stable. Because we are discussing eigenvalues and eigenvectors, these theorems only apply to time-invariant systems.
- A time-invariant system is marginally stable if and only if all the eigenvalues of the system matrix A are zero or have negative real parts, and those with zero real parts are simple roots of the minimal polynomial of A.
- The equilibrium x = 0 of the state equation is uniformly stable if all eigenvalues of A have non-positive real parts, and there is a complete set of distinct eigenvectors associated with the eigenvalues with zero real parts.
- The equilibrium x = 0 of the state equation is exponentially stable if and only if all eigenvalues of the system matrix A have negative real parts.
Eigenvalues and Poles
A Linear Time-Invariant (LTI) system is stable (asymptotically stable, see above) if all the eigenvalues of A have negative real parts. Consider the following state equation:

x'(t) = Ax(t) + Bu(t)

We can take the Laplace Transform of both sides of this equation, using initial conditions of x0 = 0:

sX(s) = AX(s) + BU(s)

Subtract AX(s) from both sides:

(sI - A)X(s) = BU(s)

Assuming (sI - A) is nonsingular, we can multiply both sides by the inverse:

X(s) = (sI - A)^(-1) BU(s)

Now, if we remember our formula for finding the matrix inverse from the adjoint matrix:

(sI - A)^(-1) = adj(sI - A) / det(sI - A)

We can use that definition here:

X(s) = [adj(sI - A) / det(sI - A)] BU(s)

Let's look at the denominator (which we will now call D(s)) more closely. To be stable, the following condition must be true:

D(s) = det(sI - A) ≠ 0 for all s in the right-half plane

And if we substitute λ for s, we see that det(λI - A) = 0 is actually the characteristic equation of matrix A! This means that the values for s that satisfy the equation (the poles of our transfer function) are precisely the eigenvalues of matrix A. In the S domain, it is required that all the poles of the system be located in the left-half plane, and therefore all the eigenvalues of A must have negative real parts.
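In practice this is the easiest stability test to run. A sketch with an arbitrary example matrix:

% Asymptotic stability check: all eigenvalues of A in the left-half plane.
A = [0 1; -2 -3];                    % example system matrix
lambda = eig(A)                      % eigenvalues: -1 and -2
is_stable = all(real(lambda) < 0)    % true, so the system is stable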
Impulse Response Matrix
We can define the impulse response matrix, G(t, τ), in order to define further tests for stability:

[Impulse Response Matrix]
G(t, τ) = C(t)φ(t, τ)B(τ) for t ≥ τ, and 0 otherwise

The system is uniformly stable if and only if there exists a finite positive constant L such that for all time t and all initial times t0 the following integral is satisfied:

∫ (from t0 to t) ||G(t, τ)|| dτ ≤ L
In other words, the above integral must have a finite value, or the system is not uniformly stable.
In the time-invariant case, the impulse response matrix reduces to:

G(t) = C e^(At) B for t ≥ 0

In a time-invariant system, we can use the impulse response matrix to determine if the system is uniformly BIBO stable by taking a similar integral:

∫ (from 0 to ∞) ||G(t)|| dt ≤ L

Where L is a finite constant.
Positive Definiteness
These terms are important, and will be used in further discussions on this topic.
- f(x) is positive definite if f(x) > 0 for all nonzero x, and f(0) = 0.
- f(x) is positive semi-definite if f(x) ≥ 0 for all x; f(x) may equal zero for some nonzero x.
- f(x) is negative definite if f(x) < 0 for all nonzero x, and f(0) = 0.
- f(x) is negative semi-definite if f(x) ≤ 0 for all x; f(x) may equal zero for some nonzero x.
A Hermitian matrix X is positive definite if all its principal minors are positive. Also, a Hermitian matrix X is positive definite if all its eigenvalues are positive. These two tests may be used interchangeably.
Positive definiteness is a very important concept. So much so that the Lyapunov stability test depends on it. The other categorizations are not as important, but are included here for completeness.
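Numerically, positive definiteness of a Hermitian matrix is usually checked through its eigenvalues or a Cholesky factorization. A small sketch with an example matrix:

% Two equivalent positive-definiteness tests for a Hermitian matrix X.
X = [2 -1; -1 2];                    % example symmetric matrix
pd_by_eig = all(eig(X) > 0)          % true: eigenvalues are 1 and 3
[~, flag] = chol(X);                 % chol succeeds (flag == 0) exactly
pd_by_chol = (flag == 0)             % when X is positive definite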
Lyapunov Stability
Lyapunov's Equation
For linear systems, we can use the Lyapunov Equation, below, to determine if a system is stable. We will state the Lyapunov Equation first, and then state the Lyapunov Stability Theorem.
[Lyapunov Equation]

MA + A'M = -N
Where A is the system matrix, and M and N are p × p square matrices.
- Lyapunov Stability Theorem
- An LTI system is stable if there exists a matrix M that satisfies the Lyapunov Equation where N is an arbitrary positive definite matrix, and M is a unique positive definite matrix.
Notice that for the Lyapunov Equation to be satisfied, the matrices must be compatible sizes. In fact, matrices A, M, and N must all be square matrices of equal size. Alternatively, we can write:
- Lyapunov Stability Theorem (alternate)
- If all the eigenvalues of the system matrix A have negative real parts, then the Lyapunov Equation has a unique solution M for every positive definite matrix N, and the solution can be calculated by:

M = ∫ (from 0 to ∞) e^(A't) N e^(At) dt
If the matrix M can be calculated in this manner, the system is asymptotically stable.
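Control packages provide a direct solver for this equation. A sketch, noting that lyap(A, Q) solves AX + XA' + Q = 0, so the transpose of A is passed to match the form MA + A'M = -N used here; the matrices are arbitrary examples:

% Solve the Lyapunov equation M A + A' M = -N and test M.
A = [0 1; -2 -3];                    % example system matrix (stable)
N = eye(2);                          % arbitrary positive definite N
M = lyap(A', N);                     % unique solution M
all(eig(M) > 0)                      % true: M is positive definite, so
                                     % the system is asymptotically stable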
Controllers and Compensators
There are a number of preexisting devices for use in system control, such as lead and lag compensators, and powerful PID controllers. PID controllers are so powerful that many control engineers may use no other method of system control! The chapters in this section will discuss some of the common types of system compensators and controllers.
Controllability and Observability
System Interaction
In the world of control engineering, there are a slew of systems available that need to be controlled. The task of a control engineer is to design controller and compensator units to interact with these pre-existing systems. However, some systems simply cannot be controlled (or, more often, cannot be controlled in specific ways). The concept of controllability refers to the ability of a controller to arbitrarily alter the functionality of the system plant.
The state-variable of a system, x, represents the internal workings of the system, which can be separate from the regular input-output relationship of the system. The internal state often needs to be measured, or observed. The term observability describes whether the internal state variables of the system can be externally measured.
Controllability
Complete state controllability (or simply controllability if no other context is given) describes the ability of an external input to move the internal state of a system from any initial state to any other final state in a finite time interval.
We will start off with the definitions of the term controllability, and the related terms reachability and stabilizability.
- Controllability
- A system with internal state vector x is called controllable if and only if the system states can be changed by changing the system input.
- Reachability
- A particular state x1 is called reachable if there exists an input that transfers the state of the system from the initial state x0 to x1 in some finite time interval [t0, t).
- Stabilizability
- A system is Stabilizable if all states that cannot be reached decay to zero asymptotically.
We can also write out the definition of reachability more precisely:
A state x1 is called reachable at time t1 if for some finite initial time t0 there exists an input u(t) that transfers the state x(t) from the origin at t0 to x1.
A system is reachable at time t1 if every state x1 in the state-space is reachable at time t1.
Similarly, we can more precisely define the concept of controllability:
A state x0 is controllable at time t0 if for some finite time t1 there exists an input u(t) that transfers the state x(t) from x0 to the origin at time t1.
A system is called controllable at time t0 if every state x0 in the state-space is controllable.
Controllability Matrix
For LTI (linear time-invariant) systems, a system is reachable if and only if its controllability matrix, ζ, has a full row rank of p, where p is the dimension of the matrix A, and p × q is the dimension of matrix B.
[Controllability Matrix]

ζ = [B, AB, A^2 B, ..., A^(p-1) B]
A system is controllable or "Controllable to the origin" when any state x1 can be driven to the zero state x = 0 in a finite number of steps.
A system is controllable when the rank of the system matrix A is p, and the rank of the controllability matrix is equal to:

rank(ζ) = rank(A) = p

If the second equation is not satisfied, the system is not controllable.
MATLAB allows one to easily create the controllability matrix with the ctrb command. To create the controllability matrix, simply type

zeta = ctrb(A, B);

where A and B are as mentioned above. Then, to determine if the system is controllable, one can use the rank command to check whether zeta has full rank.
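Putting the two commands together, a minimal sketch with example matrices (in Octave, the control package provides ctrb):

% Build the controllability matrix and check its rank.
A = [0 1; -2 -3];
B = [0; 1];
zeta = ctrb(A, B)                    % [B, A*B] for this two-state system
is_controllable = (rank(zeta) == size(A, 1))   % true: rank is 2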
If the state transition matrix is singular, then controllability does not imply reachability:
- Reachability always implies controllability.
- Controllability only implies reachability when the state transition matrix is nonsingular.
Determining Reachability
There are four methods that can be used to determine if a system is reachable or not:
- If the p rows of φ(t, τ)B(τ) are linearly independent over the field of complex numbers. That is, if the rank of the product of those two matrices is equal to p for all values of t and τ.
- If the rank of the controllability matrix is the same as the rank of the system matrix A.
- If the rank of [λI - A, B] is equal to p for all eigenvalues λ of the matrix A.
- If the rank of the reachability gramian (described below) is equal to the rank of the system matrix A.
Each one of these conditions is both necessary and sufficient. If any one test fails, all the tests will fail, and the system is not reachable. If any test is positive, then all the tests will be positive, and the system is reachable.
Gramians
Gramians are complicated mathematical functions that can be used to determine specific things about a system. For instance, we can use gramians to determine whether a system is controllable or reachable. Gramians, because they are more complicated than other methods, are typically only used when other methods of analyzing a system fail (or are too difficult).
All the gramians presented on this page are matrices with dimension p × p (the same size as the system matrix A).
All the gramians presented here will be described using the general case of linear time-variant systems. To change these into LTI (time-invariant) equations, the following substitutions can be used:

φ(t, τ) → e^(A(t-τ))
B(τ) → B
Where we are using the notation X' to denote the transpose of a matrix X (as opposed to the traditional notation X^T).
Reachability Gramian
We can define the reachability gramian as the following integral:
[Reachability Gramian]

Wr(t0, t1) = ∫ (from t0 to t1) φ(t1, τ)B(τ)B'(τ)φ'(t1, τ) dτ
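For a stable LTI system the integral above converges as t1 → ∞, and the resulting gramian solves a Lyapunov equation, so it can be computed without integrating directly. A sketch under that LTI assumption, with example matrices:

% Infinite-horizon reachability gramian of a stable LTI system:
% it satisfies A*Wr + Wr*A' + B*B' = 0, a Lyapunov equation.
A = [0 1; -2 -3];  B = [0; 1];
Wr = lyap(A, B*B')                   % reachability gramian
rank(Wr)                             % 2 = full rank: the system is reachable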