Signal Processing/Steepest Descent Algorithm

§Steepest Descent

The steepest descent algorithm is a classical mathematical tool for numerically finding the minimum of a function, based on the gradient of that function. Steepest descent uses the gradient (or the scalar derivative, if the function has a single variable) to determine the direction in which the function is increasing or decreasing most rapidly. Each iteration of the algorithm moves along this direction by a specified step size, and then recomputes the gradient to determine the new direction of travel.

The method of steepest descent is a useful tool for signal processing because it can be applied iteratively.

We can apply the steepest descent algorithm to the Wiener filter in such a way that at each new step we calculate a new set of filter coefficients. Using the steepest descent method, we can approach the minimum error value in relatively few iterations, and we can track a signal that changes in time, applying updated minimum coefficients to each new version of the signal. It is important to use steepest descent, or another fast minimization algorithm, so that the filter coefficients can be updated more quickly than the next input sample is received.

§Steepest Descent Algorithm

The steepest descent algorithm can be formulated as follows:

[Steepest Descent Algorithm]

$\bold{a}(n + 1) = \bold{a}(n) - \mu[\nabla J(\bold{a}(n))]$

For mathematical convenience, it is often useful to include an additional factor of 1/2:

$\bold{a}(n + 1) = \bold{a}(n) - \frac{1}{2}\mu[\nabla J(\bold{a}(n))]$
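The update rule above can be sketched on a simple scalar example. Everything here (the cost function J, its gradient, the initial guess, and the step size) is a hypothetical illustration, not part of the filter derivation:

```python
# Minimal sketch of steepest descent on a hypothetical scalar cost
# J(a) = (a - 3)^2, whose gradient is dJ/da = 2(a - 3).
def grad_J(a):
    return 2.0 * (a - 3.0)

a = 0.0      # initial guess a(0)
mu = 0.1     # step size
for n in range(100):
    # a(n+1) = a(n) - (1/2) * mu * grad J(a(n))
    a = a - 0.5 * mu * grad_J(a)

print(abs(a - 3.0) < 1e-3)  # True: a has approached the minimizer a = 3
```

Each iteration steps against the gradient, so the error shrinks by a constant factor per step and the iterate settles at the minimum.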

§Steepest Descent Wiener Filter

We can apply the steepest descent algorithm to the Wiener filter so that at each iteration the tap-weight vector $\bold{a}(n)$ will approach the optimum Wiener tap-weight vector $\bold{a}_o$. If p is the cross-correlation vector between the filter input x(n) and the desired response d(n), and if R is the autocorrelation matrix of the filter input x(n), then applying the steepest descent algorithm we obtain:

$\bold{a}(n + 1) = \bold{a}(n) + \mu[\bold{p} - \bold{R} \bold{a}(n)]$

This update algorithm is especially convenient since it is possible to parallelize large portions of the correlation calculations.
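This update can be sketched numerically. The autocorrelation matrix R and cross-correlation vector p below are hypothetical values chosen for illustration; the Wiener solution solving R a_o = p is computed directly only to check that the iteration approaches it:

```python
import numpy as np

# Hypothetical second-order statistics of the filter input and desired response.
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])        # autocorrelation matrix of x(n)
p = np.array([0.7, 0.3])          # cross-correlation between x(n) and d(n)
a_opt = np.linalg.solve(R, p)     # Wiener solution R a_o = p, for comparison

a = np.zeros(2)                   # initial tap weights a(0)
mu = 0.5                          # step size, inside 0 < mu < 2/lambda_max
for n in range(200):
    # a(n+1) = a(n) + mu * [p - R a(n)]
    a = a + mu * (p - R @ a)

print(np.allclose(a, a_opt, atol=1e-6))  # True: a(n) approaches a_o
```

Note that no matrix inversion is needed: each iteration uses only a matrix-vector product, which is what makes the method attractive for real-time filtering.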

§Stability

The steepest descent algorithm has the potential to diverge or become unstable unless:

$0 < \mu$

and

$-1 < 1 - \mu \lambda_k < 1$

where $\lambda_k$ are the eigenvalues of $\bold{R}$. The second condition is equivalent to requiring $\mu < 2 / \lambda_{max}$, where $\lambda_{max}$ is the largest eigenvalue of $\bold{R}$. If these two conditions are satisfied, the algorithm converges towards the optimal tap-weight vector.
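The stability bound can be checked empirically. Using the same kind of hypothetical R and p as above, a step size just below 2/λ_max converges while one just above it diverges:

```python
import numpy as np

# Hypothetical statistics; lambda_max is the largest eigenvalue of R.
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
p = np.array([0.7, 0.3])
lam_max = np.linalg.eigvalsh(R).max()

def run(mu, iters=100):
    """Run the steepest descent Wiener update and return the final residual."""
    a = np.zeros(2)
    for _ in range(iters):
        a = a + mu * (p - R @ a)
    return np.linalg.norm(p - R @ a)   # small if converging, huge if diverging

print(run(0.9 * 2 / lam_max) < 1e-3)   # True: mu inside the bound converges
print(run(1.1 * 2 / lam_max) > 1e3)    # True: mu outside the bound diverges
```

In the eigenbasis of R each error component is multiplied by (1 − μλ_k) per iteration, so any factor with magnitude above one grows without bound.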

§Performance

A higher value of μ will cause the algorithm to converge more quickly, so long as μ remains within the stability bound. If μ is too large, the algorithm overshoots the minimum and may oscillate or diverge, so in practice μ is chosen as a compromise between convergence speed and stability.
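This trade-off can be illustrated with a small experiment, again using hypothetical R and p: both step sizes below are inside the stability bound, and the larger one needs fewer iterations to reach the same accuracy.

```python
import numpy as np

# Hypothetical statistics; for this R, 2/lambda_max = 4/3, so both step
# sizes tested below (0.1 and 0.8) are safely inside the stable range.
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
p = np.array([0.7, 0.3])
a_opt = np.linalg.solve(R, p)

def iters_to_converge(mu, tol=1e-6, max_iter=10_000):
    """Count steepest descent iterations until the error falls below tol."""
    a = np.zeros(2)
    for n in range(max_iter):
        if np.linalg.norm(a - a_opt) < tol:
            return n
        a = a + mu * (p - R @ a)
    return max_iter

print(iters_to_converge(0.1) > iters_to_converge(0.8))  # True: larger mu, fewer steps
```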