Calculus/Taylor series


Taylor Series

Figure: sin(x) and its Taylor approximations, polynomials of degree 1, 3, 5, 7, 9, 11 and 13. As the degree of the Taylor polynomial rises, it approaches the function.

The Taylor series of an infinitely often differentiable real (or complex) function f defined on an open interval (a-r, a+r) is the power series


\sum_{n=0}^{\infin} \frac{f^{(n)}(a)}{n!} (x-a)^{n}

Here, n! is the factorial of n and f^{(n)}(a) denotes the nth derivative of f at the point a. If this series converges for every x in the interval (a-r, a+r) and the sum is equal to f(x), then the function f(x) is called analytic. To check whether the series converges towards f(x), one normally uses estimates for the remainder term of Taylor's theorem. A function is analytic if and only if it can be represented as a power series; the coefficients in that power series are then necessarily the ones given in the above Taylor series formula.

If a = 0, the series is also called a Maclaurin series.

The importance of such a power series representation is threefold. First, differentiation and integration of power series can be performed term by term and is hence particularly easy. Second, an analytic function can be uniquely extended to a holomorphic function defined on an open disk in the complex plane, which makes the whole machinery of complex analysis available. Third, the (truncated) series can be used to approximate values of the function near the point of expansion.
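As an illustration of the third point, here is a minimal numerical sketch in Python (a tooling assumption; sin_taylor is an illustrative helper, not part of any standard library). It evaluates truncated Maclaurin polynomials of sin(x) of increasing degree, the same approximations shown in the figure above, and prints how the error shrinks.

import math

def sin_taylor(x, degree):
    # partial sum of the Maclaurin series of sin up to the given odd degree
    total = 0.0
    for n in range(degree // 2 + 1):   # terms x, x^3, ..., x^degree
        total += (-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
    return total

x = 2.0
for degree in (1, 3, 5, 7, 9, 11, 13):
    approx = sin_taylor(x, degree)
    print(degree, approx, abs(approx - math.sin(x)))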


Figure: the function exp(-1/x²) is not analytic. Around zero it looks very flat; its Taylor series at 0 is identically 0, although the function is not.

Note that there are examples of infinitely often differentiable functions f(x) whose Taylor series converge but are not equal to f(x). For instance, for the function defined piecewise by f(x) = exp(−1/x²) if x ≠ 0 and f(0) = 0, all the derivatives are zero at x = 0, so the Taylor series of f(x) is identically zero with an infinite radius of convergence, even though the function most definitely is not zero. This particular pathology does not afflict complex-valued functions of a complex variable: notice that exp(−1/z²) does not approach 0 as z approaches 0 along the imaginary axis (in fact, it is unbounded there).
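A small numerical sketch in Python (a tooling assumption) makes the pathology plausible: near 0 the function exp(−1/x²) is smaller than any fixed power of x, which is consistent with every Taylor coefficient at 0 being zero even though the function is not identically zero.

import math

def f(x):
    # the piecewise function from the text: exp(-1/x^2) for x != 0, and 0 at x = 0
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

for x in (0.5, 0.3, 0.2, 0.1):
    # even after dividing by x^10 the values still collapse towards 0
    print(x, f(x), f(x) / x**10)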

Some functions cannot be written as Taylor series because they have a singularity; in these cases, one can often still achieve a series expansion if one allows also negative powers of the variable x; see Laurent series. For example, f(x) = exp(−1/x²) can be written as a Laurent series.

The Parker-Sochacki theorem is a recent advance in finding Taylor series which are solutions to differential equations. This theorem is an extension of the Picard iteration.

Derivation

Suppose we want to represent a function as an infinite power series, or in other words as a "polynomial" with infinitely many terms. Each of these terms is assumed to have its own coefficient, just as the terms of a finite polynomial do. We can represent this as an infinite sum like so:


f(x)={c_0}(x-a)^0+c_1(x-a)^1+c_2(x-a)^2+c_3(x-a)^3+\cdots+c_n(x-a)^n+\cdots

where a is the center of the series (the point about which we expand) and c_0,c_1,c_2,c_3,...,c_n,... are coefficients. Next, with summation notation, we can represent this series more compactly as

\sum_{n=0}^{\infty}c_n(x-a)^n

which will become more useful later. As of now, we have no systematic way of finding the coefficients other than working each one out by hand, which would not be particularly useful. However, if we substitute a for x, every term after the first vanishes and we get

f(a)=c_0

This gives us c_0. This is useful, but we would still like a general equation to find any coefficient in the series. We can try differentiating the series with respect to x to get


{f}'(x)=c_1(x-a)^{0}+2c_2(x-a)^{1}+3c_3(x-a)^{2}+4c_4(x-a)^3+\cdots+nc_n(x-a)^{(n-1)}+\cdots

Note that the coefficients c_n and the number a are constants, so they are unaffected by the differentiation. This proves useful, because if we again substitute a for x we get

{f}'(a)=c_1

Noting that the first derivative has one constant term, c_1(x-a)^0=c_1, we can take the second derivative to isolate c_2. It is


{f}''(x)=2c_2+(2\times3)c_3(x-a)^1+(3\times4)c_4(x-a)^2+\cdots+(n)(n-1)c_n(x-a)^{(n-2)}+\cdots

If we again substitute a for x:

{f}''(a)=2c_2

Note that c_2 started with exponent 2 and picked up a factor of 2, just as c_1 started with exponent 1 and picked up a factor of 1. This is slightly more enlightening, but the general pattern is still not obvious. Continuing as before, if we differentiate again we get


f'''(x)=(2\times3)c_3(x-a)^0+(2\times3\times4)c_4(x-a)^1+(3\times4\times5)c_5(x-a)^2+\cdots+(n)(n-1)(n-2)c_n(x-a)^{n-3}+\cdots

If we again substitute x=a, we find that

f'''(a)=(2\times3)c_3

By now the pattern should be becoming clear: the factor (n)(n-1)(n-2) looks suspiciously like the start of n!, and indeed it is. If we carry this out n times, by taking the nth derivative, the term containing c_n becomes the constant term and the factor multiplying it is n(n-1)(n-2)\cdots(2)(1)=n!. So for any coefficient c_n, with n an integer and n\geq0,

n!\times c_n=f^{(n)}(a)

Or, with some simple manipulation, more usefully,

c_n=\frac{f^{(n)}(a)}{n!}

where f^{(0)}(x)=f(x), f^{(1)}(x)=f'(x), and so on. With this, we can find any coefficient of the "infinite polynomial." Using the summation form of our "polynomial" given earlier,

\sum_{n=0}^{\infty}c_n(x-a)^n

we can substitute for c_n to get


f(x)=\sum_{n=0}^{\infty}\frac{f^{(n)}(a)}{n!}(x-a)^n
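As a sanity check of this formula, here is a short sketch using the SymPy computer algebra system (a tooling assumption, not something the derivation requires). It computes c_n = f^{(n)}(a)/n! directly for the example f(x) = e^x at a = 0 and compares the result with SymPy's built-in series expansion.

import sympy as sp

x = sp.symbols('x')
f = sp.exp(x)       # example function; expansion point a = 0
coeffs = [sp.diff(f, x, n).subs(x, 0) / sp.factorial(n) for n in range(6)]
print(coeffs)                      # [1, 1, 1/2, 1/6, 1/24, 1/120]
print(sp.series(f, x, 0, 6))       # 1 + x + x**2/2 + x**3/6 + x**4/24 + x**5/120 + O(x**6)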


List of Taylor series

Several important Taylor series expansions follow. All these expansions are also valid for complex arguments x.

Exponential function and natural logarithm:

e^{x} = \sum^{\infin}_{n=0} \frac{x^n}{n!}\quad\mbox{ for all } x
\ln(1+x) = \sum^{\infin}_{n=1} \frac{(-1)^{n+1}}n x^n\quad\mbox{ for } \left| x \right| < 1

Geometric series:

\frac{1}{1-x} = \sum^{\infin}_{n=0} x^n\quad\mbox{ for } \left| x \right| < 1

Binomial series:

(1+x)^\alpha = \sum^{\infin}_{n=0} C(\alpha,n) x^n\quad\mbox{ for all } \left| x \right| < 1\quad\mbox{ and all complex } \alpha

Trigonometric functions:

\sin x = \sum^{\infin}_{n=0} \frac{(-1)^n}{(2n+1)!} x^{2n+1}\quad\mbox{ for all } x
\cos x = \sum^{\infin}_{n=0} \frac{(-1)^n}{(2n)!} x^{2n}\quad\mbox{ for all } x
\tan x = \sum^{\infin}_{n=1} \frac{B_{2n} (-4)^n (1-4^n)}{(2n)!} x^{2n-1}\quad\mbox{ for } \left| x \right| < \frac{\pi}{2}
\sec x = \sum^{\infin}_{n=0} \frac{(-1)^n E_{2n}}{(2n)!} x^{2n}\quad\mbox{ for } \left| x \right| < \frac{\pi}{2}
\arcsin x = \sum^{\infin}_{n=0} \frac{(2n)!}{4^n (n!)^2 (2n+1)} x^{2n+1}\quad\mbox{ for } \left| x \right| < 1
\arctan x = \sum^{\infin}_{n=0} \frac{(-1)^n}{2n+1} x^{2n+1}\quad\mbox{ for } \left| x \right| < 1

Hyperbolic functions:

\sinh x = \sum^{\infin}_{n=0} \frac{1}{(2n+1)!} x^{2n+1}\quad\mbox{ for all } x
\cosh x = \sum^{\infin}_{n=0} \frac{1}{(2n)!} x^{2n}\quad\mbox{ for all } x
\tanh x = \sum^{\infin}_{n=1} \frac{B_{2n} 4^n (4^n-1)}{(2n)!} x^{2n-1}\quad\mbox{ for } \left| x \right| < \frac{\pi}{2}
\sinh^{-1} x = \sum^{\infin}_{n=0} \frac{(-1)^n (2n)!}{4^n (n!)^2 (2n+1)} x^{2n+1}\quad\mbox{ for } \left| x \right| < 1
\tanh^{-1} x = \sum^{\infin}_{n=0} \frac{1}{2n+1} x^{2n+1}\quad\mbox{ for } \left| x \right| < 1

Lambert's W function:

W_0(x) = \sum^{\infin}_{n=1} \frac{(-n)^{n-1}}{n!} x^n\quad\mbox{ for } \left| x \right| < \frac{1}{e}

The numbers B_k appearing in the expansions of tan(x) and tanh(x) are the Bernoulli numbers. The C(α,n) in the binomial expansion are the binomial coefficients. The E_k in the expansion of sec(x) are the Euler numbers.
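As a quick numerical spot check of two of the expansions above, here is a plain-Python sketch (a tooling assumption): the partial sums of ln(1+x) and arctan(x) at a point inside the interval of convergence agree with the library values.

import math

x = 0.5
log_sum = sum((-1) ** (n + 1) / n * x ** n for n in range(1, 40))
arctan_sum = sum((-1) ** n / (2 * n + 1) * x ** (2 * n + 1) for n in range(40))
print(log_sum, math.log(1 + x))      # both approximately 0.405465
print(arctan_sum, math.atan(x))      # both approximately 0.463648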

Multiple dimensions

The Taylor series may be generalized to functions of more than one variable with


\sum_{n_1=0}^{\infin} \cdots \sum_{n_d=0}^{\infin}
\frac{1}{n_1!\cdots n_d!}
\frac{\partial^{n_1+\cdots+n_d} f}{\partial x_{1}^{n_1}\cdots\partial x_{d}^{n_d}}(a_1,\ldots,a_d)\,
(x_1-a_1)^{n_1}\cdots (x_d-a_d)^{n_d}
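A sketch of the two-variable case using SymPy (a tooling assumption; the function exp(x)·sin(y), the expansion point (0, 0) and the truncation order are arbitrary illustrative choices). It builds the double sum above term by term and compares the truncated expansion with the function at a nearby point.

import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x) * sp.sin(y)          # illustrative choice; expansion point (0, 0)
order = 4                          # keep n1 and n2 up to 4

approx = sum(
    sp.diff(sp.diff(f, x, n1), y, n2).subs({x: 0, y: 0})
    / (sp.factorial(n1) * sp.factorial(n2)) * x**n1 * y**n2
    for n1 in range(order + 1) for n2 in range(order + 1)
)
print(approx.subs({x: 0.1, y: 0.2}).evalf(), f.subs({x: 0.1, y: 0.2}).evalf())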

History

The Taylor series is named for mathematician Brook Taylor, who first published the power series formula in 1715.

Constructing a Taylor Series

Several methods exist for calculating the Taylor series of a large number of functions. One can attempt to use the definition of the Taylor series directly and generalize the form of the coefficients, or one can use manipulations such as substitution, multiplication, division, addition or subtraction of standard Taylor series (such as those above) to construct the Taylor series of a function, since Taylor series are power series. In some cases, one can also derive the Taylor series by repeatedly applying integration by parts. The use of computer algebra systems to calculate Taylor series is common, since it eliminates tedious substitution and manipulation.
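For instance, with the SymPy computer algebra system (a tooling assumption; any system with a series command would serve), a single call produces the expansion of tan(x), whose coefficients otherwise involve Bernoulli numbers.

import sympy as sp

x = sp.symbols('x')
print(sp.series(sp.tan(x), x, 0, 8))   # x + x**3/3 + 2*x**5/15 + 17*x**7/315 + O(x**8)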

Example 1

Consider the function

f(x)=\ln{(1+\cos{x})} \,,

for which we want a Taylor series at 0.

We have for the natural logarithm

\ln(1+x) = \sum^{\infin}_{n=1} \frac{(-1)^{n+1}}{n} x^n = x - {x^2\over 2}+{x^3 \over 3} - {x^4 \over 4} + \cdots \quad\mbox{ for } \left| x \right| < 1

and for the cosine function

\cos x = \sum^{\infin}_{n=0} \frac{(-1)^n}{(2n)!} x^{2n} = 1 -{x^2\over 2!}+{x^4\over 4!}- \cdots \quad\mbox{ for all } x\in\mathbb{C}.

We can simply substitute the second series into the first. Doing so gives

\left(1 -{x^2\over 2!}+{x^4\over 4!}-\cdots\right)-{1\over 2}\left(1 -{x^2\over 2!}+{x^4\over 4!}-\cdots\right)^2 +{1\over 3}\left(1 -{x^2\over 2!}+{x^4\over 4!}-\cdots\right)^3-\cdots

Expanding by using multinomial coefficients gives the required Taylor series. Note that cosine and therefore f are even functions, meaning that f(x)=f(-x), hence the coefficients of the odd powers x, x^3, x^5, x^7 and so on have to be zero and don't need to be calculated. The first few terms of the series are

\ln(1+\cos x)=\ln 2-{x^2\over 4}-{x^4\over 96}-{x^6\over 1440} -{17x^8\over 322560}-{31x^{10}\over 7257600}-\cdots

The general coefficient can be represented using Faà di Bruno's formula. However, this representation does not seem to be particularly illuminating and is therefore omitted here.
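The first few coefficients can also be checked mechanically; a short SymPy sketch (a tooling assumption) expands the function directly and reproduces the terms listed above.

import sympy as sp

x = sp.symbols('x')
print(sp.series(sp.log(1 + sp.cos(x)), x, 0, 11))
# log(2) - x**2/4 - x**4/96 - x**6/1440 - 17*x**8/322560 - 31*x**10/7257600 + O(x**11)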

Example 2

Suppose we want the Taylor series at 0 of the function

g(x)=\frac{e^x}{\cos x}\,.

We have for the exponential function

e^x = \sum^\infty_{n=0} {x^n\over n!} =1 + x + {x^2 \over 2!} + {x^3 \over 3!} + {x^4 \over 4!} +\cdots

and, as in the first example,

\cos x = 1 - {x^2 \over 2!} + {x^4 \over 4!} - \cdots

Assume the power series is

{e^x \over \cos x} = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + \cdots

Then multiplication with the denominator and substitution of the series of the cosine yields

\begin{align} e^x &= (c_0 + c_1 x + c_2 x^2 + c_3 x^3 + \cdots)\cos x\\
&=\left(c_0 + c_1 x + c_2 x^2 + c_3 x^3 + c_4x^4 + \cdots\right)\left(1 - {x^2 \over 2!} + {x^4 \over 4!} - \cdots\right)\\
&=c_0 - {c_0 \over 2}x^2 + {c_0 \over 4!}x^4 + c_1x - {c_1 \over 2}x^3 + {c_1 \over 4!}x^5 + c_2x^2 - {c_2 \over 2}x^4 + {c_2 \over 4!}x^6 + c_3x^3 - {c_3 \over 2}x^5 + {c_3 \over 4!}x^7 +\cdots \end{align}

Collecting the terms up to fourth order yields

=c_0 + c_1x + \left(c_2 - {c_0 \over 2}\right)x^2 + \left(c_3 - {c_1 \over 2}\right)x^3+\left(c_4+{c_0 \over 4!}-{c_2\over 2}\right)x^4 + \cdots

Comparing coefficients with the above series of the exponential function yields the desired Taylor series

\frac{e^x}{\cos x}=1 + x + x^2 + {2x^3 \over 3} + {x^4 \over 2} + \cdots
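The undetermined-coefficients step above can also be carried out mechanically; a short SymPy sketch (a tooling assumption) equates e^x and (c_0 + c_1 x + ... + c_4 x^4)cos x order by order and solves for the unknowns.

import sympy as sp

x = sp.symbols('x')
c = sp.symbols('c0:5')                                   # the unknowns c0 .. c4
ansatz = sum(c[n] * x**n for n in range(5))
difference = sp.expand(sp.series(sp.exp(x) - ansatz * sp.cos(x), x, 0, 5).removeO())
print(sp.solve([difference.coeff(x, n) for n in range(5)], c))
# {c0: 1, c1: 1, c2: 1, c3: 2/3, c4: 1/2}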

Convergence

Generalized Mean Value Theorem