Last modified on 13 April 2013, at 08:56

Calculus/Newton's Method

← Extrema and Points of Inflection Calculus Related Rates →
Newton's Method

Newton's Method (also called the Newton-Raphson method) is a recursive algorithm for approximating the root of a differentiable function. We know simple formulas for finding the roots of linear and quadratic equations, and there are also more complicated formulas for cubic and quartic equations. At one time it was hoped that similar formulas would be found for quintic and higher-degree equations, but Niels Henrik Abel later showed that no such general formulas exist. The Newton-Raphson method approximates the roots of polynomial equations of any degree. In fact the method works for any equation, polynomial or not, as long as the function is differentiable on the desired interval.

Newton's Method

Let f\left(x\right) be a differentiable function. Select a point x_0 as a first approximation to the root, ideally close to the actual root. To approximate the root, recursively calculate using:

x_{n+1} = x_n - \frac{f\left(x_n\right)}{f'\left(x_n\right)}

As you recursively calculate, the x_{n+1}'s often become increasingly better approximations of the function's root.

To explain Newton's method, imagine that x_0 is already very close to a zero of f\left(x\right). We know that if we only look at points very close to x_0, then f\left(x\right) looks like its tangent line there. If x_0 is already close to the place where f\left(x\right) is zero, and near x_0 the function looks like its tangent line, then we hope the zero of the tangent line at x_0 is a better approximation than x_0 itself.

The equation for the tangent line to f\left(x\right) at x_0 is given by

y=f'\left(x_0\right)\left(x-x_0\right)+f\left(x_0\right).

Now we set y=0 and solve for x.

0=f'\left(x_0\right)\left(x-x_0\right)+f\left(x_0\right)
-f\left(x_0\right)=f'\left(x_0\right)\left(x-x_0\right)
\frac{-f\left(x_0\right)}{f'\left(x_0\right)}=\left(x-x_0\right)
x=\frac{-f\left(x_0\right)}{f'\left(x_0\right)}+x_0

This value of x should be a better guess for the value of x where f\left(x\right)=0. We call this value x_1, and after a little algebra we have

x_1=x_0-\frac{f\left(x_0\right)}{f'\left(x_0\right)}.

If our intuition was correct and x_1 is in fact a better approximation for the root of f\left(x\right), then our logic should apply equally well at x_1. We look for the place where the tangent line at x_1 is zero. We call this point x_2, and following the algebra above we arrive at the formula

x_2=x_1-\frac{f\left(x_1\right)}{f'\left(x_1\right)}.

And we can continue in this way as long as we wish. At each step, if your current approximation is x_n our new approximation will be x_{n+1}=x_n-\frac{f\left(x_n\right)}{f'\left(x_n\right)}.
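The iteration described above can be sketched in a few lines of code. This is a minimal illustration, not a production root-finder; the function and parameter names (`newton`, `f_prime`, `tolerance`) are ours, and the stopping rule (quit once successive approximations barely change) is one common choice among several.

```python
def newton(f, f_prime, x0, tolerance=1e-10, max_iterations=100):
    """Approximate a root of f by Newton's method, starting from x0.

    f_prime must be the derivative of f. Iteration stops once the
    Newton step becomes smaller than the given tolerance.
    """
    x = x0
    for _ in range(max_iterations):
        step = f(x) / f_prime(x)  # the Newton update f(x_n)/f'(x_n)
        x = x - step
        if abs(step) < tolerance:
            return x
    return x  # may not have converged; see the Notes section below
```

For example, `newton(lambda x: x**2 - 2, lambda x: 2*x, 1.0)` approximates \sqrt{2}.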


Examples

Find the root of the function f\left(x\right)=x^2.
Figure 1: A few iterations of Newton's method applied to y=x^2 starting with x_0=4. The blue curve is f\left(x\right). The other solid lines are the tangents at the various iteration points.

\begin{align}x_{0}&=4\\
x_{1}&=x_{0}-\frac{f\left(x_{0}\right)}{f'\left(x_{0}\right)}=4-\frac{16}{8}=2\\
x_{2}&=x_{1}-\frac{f\left(x_{1}\right)}{f'\left(x_{1}\right)}=2-\frac{4}{4}=1\\
x_{3}&=x_{2}-\frac{f\left(x_{2}\right)}{f'\left(x_{2}\right)}=1-\frac{1}{2}=\frac{1}{2}\\
x_{4}&=x_{3}-\frac{f\left(x_{3}\right)}{f'\left(x_{3}\right)}=\frac{1}{2}-\frac{1/4}{1}=\frac{1}{4}\\
x_{5}&=x_{4}-\frac{f\left(x_{4}\right)}{f'\left(x_{4}\right)}=\frac{1}{4}-\frac{1/16}{1/2}=\frac{1}{8}\\
x_{6}&=x_{5}-\frac{f\left(x_{5}\right)}{f'\left(x_{5}\right)}=\frac{1}{8}-\frac{1/64}{1/4}=\frac{1}{16}\\
x_{7}&=x_{6}-\frac{f\left(x_{6}\right)}{f'\left(x_{6}\right)}=\frac{1}{16}-\frac{1/256}{1/8}=\frac{1}{32}\end{align}

As you can see, x_n gradually approaches zero, which we know is the root of f\left(x\right). In fact each iteration exactly halves the previous estimate, since x_{n+1}=x_n-\frac{x_n^2}{2x_n}=\frac{x_n}{2}, so we can approach the function's root with arbitrary accuracy.

Answer: f\left(x\right)=x^2 has a root at x=0.
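The table of iterates above is easy to reproduce in code. This is a small sketch; the helper name `newton_step` is ours, not standard.

```python
def newton_step(f, f_prime, x):
    # one Newton-Raphson update: subtract f(x)/f'(x) from x
    return x - f(x) / f_prime(x)

x = 4.0
iterates = [x]
for _ in range(7):
    x = newton_step(lambda t: t * t, lambda t: 2 * t, x)
    iterates.append(x)
# iterates is [4.0, 2.0, 1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125]
```

Note that every step halves the current value, matching the hand computation.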

Notes

Figure 2: Newton's method applied to the function
f\left(x\right) = \begin{cases} \sqrt{x-4}, & \mbox{for}\,\,x \ge 4 \\ -\sqrt{4-x}, & \mbox{for}\,\,x < 4 \end{cases}
starting with x_0=2.

This method fails when f'\left(x\right) = 0. In that case, one should choose a new starting point. Occasionally it may happen that f\left(x\right) = 0 and f'\left(x\right) = 0 have a common root. To detect whether this is true, we should first find the solutions of f'\left(x\right) = 0, and then check the value of f\left(x\right) at these points.
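In code, the failure at f'\left(x\right)=0 shows up as a division by zero, so a careful implementation might guard the step. This is a sketch; the name `safe_newton_step` and the choice to raise an exception are ours.

```python
def safe_newton_step(f, f_prime, x):
    derivative = f_prime(x)
    if derivative == 0:
        # the tangent line is horizontal and never crosses the x-axis,
        # so Newton's method cannot proceed from this point
        raise ValueError("f'(x) = 0 at x = %r; choose a new starting point" % x)
    return x - f(x) / derivative
```

For f\left(x\right)=x^2-4, starting at x=0 (the vertex of the parabola, where f'=0) triggers the guard, while any nonzero start works normally.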

Newton's method may also fail to converge for some functions. Take as an example:

f\left(x\right) = \begin{cases} \sqrt{x-r}, & \mbox{for}\,\,x \ge r \\ -\sqrt{r-x}, & \mbox{for}\,\,x < r \end{cases}

For this function, choosing any x_0 = r - h gives x_1 = r + h, so successive approximations alternate back and forth, and no amount of iteration gets us any closer to the root than our first guess.
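We can watch this oscillation numerically, using r = 4 and x_0 = 2 to match Figure 2. This is a sketch: the derivative formula 1/(2\sqrt{|x-r|}) follows from differentiating each branch of the function above.

```python
import math

r = 4.0  # location of the root, matching Figure 2

def f(x):
    # the signed-square-root function from the text
    return math.sqrt(x - r) if x >= r else -math.sqrt(r - x)

def f_prime(x):
    # derivative of f on either side of the root
    return 1.0 / (2.0 * math.sqrt(abs(x - r)))

x = 2.0  # x_0 = r - h with h = 2
sequence = [x]
for _ in range(6):
    x = x - f(x) / f_prime(x)
    sequence.append(x)
# sequence alternates between (approximately) 2 and 6, never nearing 4
```

Algebraically each Newton step here reduces to x \mapsto 8 - x, a reflection about the root, which is why the iterates bounce forever.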

Figure 3: Newton's method, when applied to the function f\left(x\right)=x^5-x+1 with initial guess x_0=0, eventually iterates between the three points shown above.

Newton's method may also fail to converge on a root if the function has a local maximum or minimum that does not cross the x-axis. As an example, consider f\left(x\right)=x^5-x+1 with initial guess x_0=0. In this case, Newton's method will be fooled by the function, which dips toward the x-axis but never crosses it in the vicinity of the initial guess.
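This cycling behavior can be observed numerically. A sketch follows; the value -1.1673 used below for the actual root of x^5-x+1 is an approximation we supply, not a figure from the text.

```python
def f(x):
    return x**5 - x + 1

def f_prime(x):
    return 5 * x**4 - 1

x = 0.0  # the initial guess from the text
history = []
for _ in range(60):
    x = x - f(x) / f_prime(x)
    history.append(x)
# the iterates settle into a repeating cycle of three values near
# 1, 0.75 and -0.09, never approaching the actual root near -1.1673
```

After a few dozen iterations the sequence repeats with period three, staying far from the root no matter how long we iterate.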

