Signal Processing/Wiener Filters

Signal Prediction


The idea of signal prediction is a very important one, not only in signal processing but throughout engineering. Stated formally, the prediction problem is:

Given a series of input samples u(n), u(n - 1), u(n - 2) ..., predict with the smallest error possible the value of the next future input u(n + 1).

In short, the idea of a predictor is to use what we already know about the previous values of a signal to determine what its next value will be. Because many signals are correlated with their own past (autocorrelated), there is typically a strong relationship between the previous values and the next value.
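
As a concrete illustration, a one-step linear predictor forms its estimate as a weighted sum of the most recent samples. The sketch below is illustrative only; the function name and the coefficient values are hypothetical assumptions, not part of any standard library:

    import numpy as np

    def predict_next(u, a):
        """Predict the next sample u[n+1] as a weighted sum of past samples.

        u : 1-D array of samples, most recent last
        a : predictor coefficients; a[i] multiplies u[n - i]
        """
        M = len(a)
        recent = u[-M:][::-1]        # u[n], u[n-1], ..., u[n-M+1]
        return np.dot(a, recent)

    # A smooth signal is well predicted by linear extrapolation from
    # its last two samples: u[n+1] is approximately 2*u[n] - u[n-1].
    u = np.sin(0.1 * np.arange(100))
    print(predict_next(u, np.array([2.0, -1.0])))   # close to sin(0.1 * 100)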

Wiener Filter

[Figure (right): block diagram of the prediction problem]

Shown in the figure is a brief block diagram that represents the prediction problem. The input signal, w[n], is fed into a predictor with tap weights a_i. The output of the predictor is x[n].

Now, we want to compare x[n] to our desired signal, denoted here as s[n]. The difference between these two signals is our error signal, e[n] = s[n] - x[n]. The idea behind the Wiener filter is that we can select the optimum tap weights a_i such that this error signal is minimized.
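
In code, the predictor output and the error signal from the diagram can be computed directly. This is a minimal sketch of that structure, assuming an M-tap FIR predictor; the function name is an illustrative choice:

    import numpy as np

    def predictor_error(w, s, a):
        """Run an FIR predictor with tap weights a over the input w,
        and form the error against the desired signal s.

        Returns the predictor output x and the error e = s - x.
        """
        M = len(a)
        N = len(w)
        x = np.zeros(N, dtype=complex)
        for n in range(M - 1, N):
            # x[n] = sum_i a[i] * w[n - i]
            x[n] = np.dot(a, w[n - np.arange(M)])
        e = s - x                    # error signal e[n] = s[n] - x[n]
        return x, e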

Types of Minimization


It is worth asking how, exactly, we are minimizing the error. Put another way, what quantity associated with the error are we attempting to minimize? The most common, and most mathematically tractable, answer is that we minimize the mean-square error of the signal: we want to minimize the expected value of the squared error magnitude, not any individual error value. In other words, we can define a cost function J as the expectation of our squared error:

J = E[e[n] \, e^*[n]] = E[|e[n]|^2]

where e^*[n] is the complex conjugate of our error signal. We treat complex signals as the most general case; the reader can easily reduce the results to real-valued signals. This generality matters because in many applications, such as communications channels, the signals and filter coefficients are genuinely complex-valued and must be represented as such.
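
Because the expectation is not available directly in practice, J is usually estimated by averaging over a finite record of error samples. A minimal sketch (the function name is illustrative):

    import numpy as np

    def mean_square_error(e):
        """Estimate J = E[e[n] e*[n]] = E[|e[n]|^2] by a sample average.

        Works for real or complex e; the result is real and
        non-negative, since e * conj(e) = |e|^2.
        """
        return np.mean(e * np.conj(e)).real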

To minimize the cost function, we take the gradient of J with respect to the tap weights and find the point where that gradient is identically zero:

\nabla J = 0

We use the vector gradient here, rather than a single-dimensional derivative, because the tap weights form a vector and the cost must be minimized with respect to every weight simultaneously.
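
Carrying the gradient through explicitly is instructive. A brief sketch, assuming an M-tap predictor in the notation of the block diagram above, and differentiating with respect to the conjugated weights (the usual approach for complex signals):

e[n] = s[n] - \sum_{i=0}^{M-1} a_i \, w[n-i]

\frac{\partial J}{\partial a_k^*} = -E[e[n] \, w^*[n-k]] = 0, \qquad k = 0, 1, \ldots, M-1

This is the orthogonality principle: at the optimum, the error signal is uncorrelated with every input sample the predictor uses.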

Wiener-Hopf Equations

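Expanding the orthogonality condition of the previous section gives a set of simultaneous linear equations in the tap weights, known as the Wiener-Hopf equations. A sketch in the same notation, again assuming an M-tap predictor:

\sum_{i=0}^{M-1} a_i \, r(k - i) = p(k), \qquad k = 0, 1, \ldots, M-1

where r(m) = E[w[n] \, w^*[n-m]] is the autocorrelation of the input and p(k) = E[s[n] \, w^*[n-k]] is the cross-correlation between the desired signal and the input. In matrix form this reads R a = p, so the optimal tap-weight vector is a_o = R^{-1} p whenever the autocorrelation matrix R is invertible.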

The Big Picture


By now, the casual reader is probably asking exactly what the purpose of a Wiener filter is, and how it could possibly be used. While the initial formulation does not appear especially useful, it is the application of adaptive algorithms, such as the steepest-descent algorithm, to the Wiener filter that exposes the true power of the system.

Instead of selecting a single fixed vector of tap weights as the optimal solution, we can apply the Wiener-Hopf equations repeatedly, updating the filter coefficients as new samples arrive. Consider a data signal transmitted over a communications channel. The signal r(t) that arrives at the receiver is the sum of two terms:

r(t) = u(t) + n(t)

where u(t) is the transmitted data signal and n(t) is a random noise signal. A filter such as the Wiener filter is valuable here because we know the characteristics of the transmitted data, but the characteristics of the received signal change as the noise varies over time. Using an adaptive implementation of the Wiener filter, we can continue to minimize the error due to the noise as the noise changes over time.
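
As a sketch of how such an adaptive filter can operate, the least-mean-squares (LMS) algorithm applies steepest descent using an instantaneous estimate of the gradient, updating the tap weights one sample at a time. The function name, the default tap count, and the step size mu below are illustrative assumptions, not prescribed values:

    import numpy as np

    def lms_filter(w, s, M=8, mu=0.01):
        """Adapt M tap weights toward the Wiener solution, sample by sample.

        w  : received (noisy) input samples
        s  : desired signal, e.g. a known training sequence
        mu : step size, assumed small enough for stable convergence
        """
        a = np.zeros(M, dtype=complex)
        e = np.zeros(len(w), dtype=complex)
        for n in range(M - 1, len(w)):
            u_n = w[n - np.arange(M)]          # M most recent input samples
            x_n = np.dot(a, u_n)               # filter output x[n]
            e[n] = s[n] - x_n                  # error e[n] = s[n] - x[n]
            a += mu * e[n] * np.conj(u_n)      # instantaneous-gradient step
        return a, e

Each update nudges the weights in the direction that reduces the instantaneous squared error, so as the noise statistics drift, the weights track the current minimum of the cost function.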