Control Systems/Noise Driven Systems

Noise-Driven Systems

Systems frequently have to deal with not only the control input u, but also a random noise input v. In some disciplines, such as the study of electrical communication systems, the noise and the data signal can be added together into a composite input r = u + v. However, in studying control systems, we cannot combine these inputs, for several reasons:

  1. The control input works to stabilize the system, and the noise input works to destabilize the system.
  2. The two inputs are independent random variables.
  3. The two inputs may act on the system in completely different ways.

As we will show in the next example, it is frequently a good idea to consider the noise and the control inputs separately:

Example: Consider a moving automobile. The control signals for the automobile consist of acceleration (gas pedal) and deceleration (brake pedal) inputs acting on the wheels of the vehicle, and working to create forward motion. The noise inputs to the system can consist of wind pushing against the vertical faces of the automobile, rough pavement (or even dirt) under the tires, bugs and debris hitting the front windshield, etc. As we can see, the control inputs act on the wheels of the vehicle, while the noise inputs can act on multiple sides of the vehicle, in different ways.

Probability Refresher

We are going to have a brief refresher here on calculus-based probability, specifically focusing on the topics that we will use in the rest of this chapter.

Expectation

The expectation operator, E, is used to find the expected, or mean, value of a given random variable. The expectation operator is defined as:

E[x] = \int_{-\infty}^{\infty} x f(x)\, dx

If we have two variables that are independent of one another, the expectation of their product is the product of their expectations; in particular, if either variable is zero-mean, the expectation of their product is zero.
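As a quick illustration of these two facts, the sketch below estimates the expectations numerically with a Monte Carlo average. The particular distributions and the sample size are arbitrary choices made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

x = rng.normal(0.0, 1.0, size=N)   # zero-mean gaussian samples
y = rng.normal(0.0, 2.0, size=N)   # independent, zero-mean samples

print(np.mean(x))      # estimate of E[x]; should be near 0
print(np.mean(x * y))  # estimate of E[xy]; near 0 since x and y are independent and zero-mean
```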

Covariance

The covariance matrix, Q, of a zero-mean random vector is the expectation of the vector times its transpose:

Q = E\left[x x^T\right]

If we take the transpose of x at a different point in time, we can calculate the covariance as:

E\left[x(t) x^T(s)\right] = Q\, \delta(t - s)

Where δ is the impulse function.
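As an illustration, the following sketch estimates the covariance matrix of a zero-mean random vector from samples and checks that it approaches the true Q. The 2×2 covariance used here is an arbitrary example choice.

```python
import numpy as np

rng = np.random.default_rng(1)
Q_true = np.array([[2.0, 0.5],
                   [0.5, 1.0]])
N = 200_000

# N independent samples of a zero-mean random vector with covariance Q_true
x = rng.multivariate_normal(mean=np.zeros(2), cov=Q_true, size=N)

# sample estimate of E[x x^T]; approaches Q_true as N grows
Q_est = (x.T @ x) / N
print(Q_est)
```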

Noise-Driven System Description

We can define the state equation for a system incorporating a noise vector v:

\dot{x}(t) = A(t) x(t) + B(t) u(t) + B(t) v(t)

For generality, we will discuss the case of a time-variant system. Time-invariant system results will then be a simplification of the time-variant case. Also, we will assume that v is a gaussian random process. We do this because physical systems frequently approximate gaussian processes, and because there is a large body of mathematical tools that we can use to work with these processes. We will assume our gaussian process has zero mean.
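To make this description concrete, the sketch below simulates one realization of a noise-driven system of this form using a simple Euler-Maruyama step. The matrices A and B, the noise intensity Q, the step size, and the zero control input are all assumptions made only for this example.

```python
import numpy as np

# illustrative system matrices (assumed, not taken from the text)
A = np.array([[0.0, 1.0],
              [-2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.array([[0.5]])          # intensity of the gaussian white noise v
dt, T = 1e-3, 5.0

rng = np.random.default_rng(2)
x = np.zeros((2, 1))           # initial state

for _ in range(int(T / dt)):
    u = np.zeros((1, 1))                                    # zero control input
    noise = rng.multivariate_normal(np.zeros(1), Q * dt)    # white-noise increment over dt
    x = x + (A @ x + B @ u) * dt + B @ noise.reshape(1, 1)  # Euler-Maruyama step

print(x.ravel())               # one noisy realization of x(T)
```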

Mean System Response

We would like to find out how our system will respond to the new noisy input. Every realization of the system will have a different response that varies with the noise input, but the average of all these realizations should converge to a single value.

For the system with zero control input, we have:

\dot{x}(t) = A(t) x(t) + B(t) v(t)

For which we know our general solution is given as:

x(t) = \phi(t, t_0) x_0 + \int_{t_0}^{t} \phi(t, \tau) B(\tau) v(\tau)\, d\tau

If we take the expected value of this function, it should give us the expected value of the output of the system. In other words, we would like to determine what the expected output of our system will be when the noise input is added.

E[x(t)] = E\left[\phi(t, t_0) x_0\right] + E\left[\int_{t_0}^{t} \phi(t, \tau) B(\tau) v(\tau)\, d\tau\right]

In the second term of this equation, neither φ nor B is a random variable, and therefore they can come outside of the expectation operation. Since v is zero-mean, its expectation is zero, and therefore the second term is zero. In the first term, φ is not a random variable, but x0 does create a dependency on the output x(t), and we need to take the expectation of it. This means that:

E[x(t)] = \phi(t, t_0) E[x_0]

In other words, the expected output of the system is the value that the output would have if there were no noise. Notice that if our noise vector v were not zero-mean, this result would not hold.
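The following sketch illustrates this result numerically: it averages many noisy realizations of an example system and compares the ensemble mean at the final time against the noise-free response φ(T, 0)x0, computed here as a matrix exponential since the example system is time-invariant. All numerical values are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

# illustrative system (assumed, not taken from the text)
A = np.array([[0.0, 1.0],
              [-2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.array([[0.5]])
x0 = np.array([[1.0], [0.0]])
dt, T, runs = 1e-2, 2.0, 2000

rng = np.random.default_rng(3)
total = np.zeros_like(x0)
for _ in range(runs):
    x = x0.copy()
    for _ in range(int(T / dt)):
        dv = rng.normal(0.0, np.sqrt(Q[0, 0] * dt), size=(1, 1))  # noise increment over dt
        x = x + (A @ x) * dt + B @ dv                             # zero control input
    total += x

print((total / runs).ravel())      # ensemble average of x(T)
print((expm(A * T) @ x0).ravel())  # noise-free response phi(T, 0) x0
```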

System Covariance

We are now going to analyze the covariance of the system with a noisy input. We multiply our system solution by its transpose, and take the expectation:

E\left[x(t) x^T(t)\right] = E\left[\left(\phi(t, t_0) x_0 + \int_{t_0}^{t} \phi(t, \tau) B(\tau) v(\tau)\, d\tau\right) \left(\phi(t, t_0) x_0 + \int_{t_0}^{t} \phi(t, \tau) B(\tau) v(\tau)\, d\tau\right)^T\right]

If we multiply this out term by term, and cancel out the expectations that have a zero value (the cross terms vanish because x_0 and v are independent and v is zero-mean, and the remaining double integral collapses using E[v(τ)v^T(s)] = Qδ(τ − s)), we get the following result:

E\left[x(t) x^T(t)\right] = \phi(t, t_0) E\left[x_0 x_0^T\right] \phi^T(t, t_0) + \int_{t_0}^{t} \phi(t, \tau) B(\tau) Q(\tau) B^T(\tau) \phi^T(t, \tau)\, d\tau

We call this result P(t), and we can find the first derivative of P by using the chain rule, together with the fact that ∂φ(t, τ)/∂t = A(t)φ(t, τ):

\dot{P}(t) = A(t)\phi(t, t_0) P(t_0) \phi^T(t, t_0) + \phi(t, t_0) P(t_0) \phi^T(t, t_0) A^T(t) + A(t)\int_{t_0}^{t} \phi(t, \tau) B(\tau) Q(\tau) B^T(\tau) \phi^T(t, \tau)\, d\tau + \left[\int_{t_0}^{t} \phi(t, \tau) B(\tau) Q(\tau) B^T(\tau) \phi^T(t, \tau)\, d\tau\right] A^T(t) + B(t) Q(t) B^T(t)

Where

P(t_0) = E\left[x_0 x_0^T\right]

We can reduce this to:

\dot{P}(t) = A(t) P(t) + P(t) A^T(t) + B(t) Q(t) B^T(t)

In other words, we can analyze the system without needing to calculate the state-transition matrix. This is a good thing, because it can often be very difficult to calculate the state-transition matrix.
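As a numerical illustration, the sketch below propagates P(t) directly from the differential equation above, with no state-transition matrix involved, and compares the long-run result against the corresponding algebraic Lyapunov equation (solved here with a SciPy routine). The matrices and the initial covariance are assumptions chosen only for this example.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# illustrative system (assumed, not taken from the text)
A = np.array([[0.0, 1.0],
              [-2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.array([[0.5]])
P = np.zeros((2, 2))        # P(0) = E[x0 x0^T], here taken as zero
dt, T = 1e-3, 20.0

for _ in range(int(T / dt)):
    P = P + (A @ P + P @ A.T + B @ Q @ B.T) * dt   # simple Euler step of the Lyapunov equation

# For a stable, time-invariant A, the propagated P should approach the solution
# of the algebraic Lyapunov equation A P + P A^T + B Q B^T = 0.
P_ss = solve_continuous_lyapunov(A, -B @ Q @ B.T)
print(P)
print(P_ss)
```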

Alternate Analysis

Let us look again at our general solution:

x(t) = \phi(t, t_0) x_0 + \int_{t_0}^{t} \phi(t, \tau) B(\tau) v(\tau)\, d\tau

We can run into a problem here: a gaussian white-noise process has unbounded sample values (formally, it has infinite variance), so the value of v can momentarily become arbitrarily large, which causes the value of x, written this way, to become ill-defined at certain points. This is unacceptable, and makes further analysis of this problem difficult. Let us look again at our original equation, with zero control input:

\dot{x}(t) = A(t) x(t) + B(t) v(t)

We can multiply both sides by dt, and get the following result:

dx = A(t) x(t)\, dt + B(t) v(t)\, dt

We can then define a new differential, dw(t), which is an infinitesimal function of time:

dw(t) = v(t)\, dt

This new term, dw, is a random process known as a Wiener process, which is the result of transforming a gaussian process in this manner.
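As an aside, a Wiener process is easy to approximate numerically: the minimal sketch below, which assumes a scalar process and an arbitrary step size, simply accumulates independent gaussian increments whose variance is proportional to dt.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, T = 1e-3, 1.0
n = int(T / dt)

dw = rng.normal(0.0, np.sqrt(dt), size=n)  # increments dw ~ N(0, dt)
w = np.cumsum(dw)                          # approximate Wiener process path w(t)

print(w[-1])            # value of w at time T
print(np.var(dw) / dt)  # should be close to 1: the variance of dw grows linearly with dt
```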

Now, we can integrate both sides of this equation:

x(t) = x(t_0) + \int_{t_0}^{t} A(\tau) x(\tau)\, d\tau + \int_{t_0}^{t} B(\tau)\, dw(\tau)

However, this leads us to an unusual place, and one for which we are (probably) not prepared to continue further: in the third term on the right-hand side, we are attempting to integrate with respect to a function, not a variable. In this instance, the standard Riemann integrals that we are all familiar with cannot handle this equation. There are advanced techniques, known collectively as Itō calculus, that can handle this type of integral, but these methods are currently outside the scope of this book.
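In practice, integrals of this kind are usually handled numerically rather than analytically. The sketch below, under assumed values, uses the Itō-style left-endpoint sum that underlies the Euler-Maruyama method: the integral of B(τ) with respect to dw(τ) is approximated by summing B(t_k)Δw_k over small steps. The time-varying B(t) used here is a hypothetical example.

```python
import numpy as np

def B(t):
    # hypothetical time-varying input matrix, chosen only for illustration
    return np.array([[0.0],
                     [1.0 + 0.1 * t]])

rng = np.random.default_rng(5)
dt, T = 1e-3, 1.0

integral = np.zeros((2, 1))
for t in np.arange(0.0, T, dt):
    dw = rng.normal(0.0, np.sqrt(dt), size=(1, 1))  # scalar Wiener increment over [t, t + dt)
    integral += B(t) @ dw                           # left-endpoint (Ito) sum

print(integral.ravel())   # one realization of the stochastic integral
```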