# Using High Order Finite Differences/Definitions and Basics

## Definitions and Basics

### Vector Norms

#### Definition of a Vector Norm

The most ordinary kind of vectors are those consisting of ordered n-tuples of real or complex numbers. They may be written in row      ${\displaystyle <\;x\quad y\quad z\;>,}$       ${\displaystyle {\begin{bmatrix}x&y&z\\\end{bmatrix}},}$        ${\displaystyle (\;x,\quad y,\quad z\;),}$

or column      ${\displaystyle {\begin{bmatrix}x\\y\\z\\\end{bmatrix}}}$

forms. Commas or other separators of components or coordinates may or may not be used. When a vector has many elements, notations like

${\displaystyle {\begin{bmatrix}x_{1}&x_{2}&\cdots &x_{n}\\\end{bmatrix}}}$    or   ${\displaystyle {\begin{bmatrix}x_{1}&x_{2}&\cdots &x_{n}\\\end{bmatrix}}^{T}}$    are often used.

A popular notation to indicate a vector is  ${\displaystyle {\vec {v}}\;=\;<v_{1}\;\;v_{2}\;\cdots \;v_{n}>}$ .

Vectors are usually added component-wise, for

${\displaystyle {\vec {v}}\;=\;<v_{1}\;\;v_{2}\;\cdots \;v_{n}>}$   and   ${\displaystyle {\vec {w}}\;=\;<w_{1}\;\;w_{2}\;\cdots \;w_{n}>}$ ,

${\displaystyle {\vec {v}}+{\vec {w}}\;=\;<(v_{1}+w_{1})\;\;(v_{2}+w_{2})\;\cdots \;(v_{n}+w_{n})>}$ .

Scalar multiplication is defined by   ${\displaystyle \alpha \,{\vec {v}}\;=\;<\alpha \,v_{1}\;\,\alpha \,v_{2}\;\cdots \;\alpha \,v_{n}>}$ .

A vector norm is a generalization of ordinary absolute value  ${\displaystyle \left\vert x\right\vert }$   of a real or complex number.

For  ${\displaystyle {\vec {v}}}$   and  ${\displaystyle {\vec {w}}}$   vectors, and  ${\displaystyle \alpha }$   a scalar, a vector norm is a real valued function  ${\displaystyle \lVert \cdot \rVert }$   of a vector for which the following properties hold.

${\displaystyle {\begin{aligned}(i)\;\;\;\quad &\lVert \,{\vec {v}}\,\rVert \;\geq \;0\\(ii)\;\;\quad &\lVert \,{\vec {v}}\,\rVert \;=\;0\iff \;{\vec {v}}\;=\;0\\(iii)\;\quad &\lVert \,\alpha \,{\vec {v}}\,\rVert \;=\;\left\vert \alpha \right\vert \,\lVert \,{\vec {v}}\,\rVert \\(iv)\;\;\quad &\lVert \,{\vec {v}}+{\vec {w}}\,\rVert \;\;\leq \;\;\lVert \,{\vec {v}}\,\rVert \;+\;\lVert \,{\vec {w}}\,\rVert \;\end{aligned}}}$ .

#### Common Vector Norms

The most commonly used norms are:

${\displaystyle {\begin{aligned}\quad &\lVert \,{\vec {v}}\,\rVert _{2}\;=\;{\sqrt {{\left\vert v_{1}\right\vert }^{2}+{\left\vert v_{2}\right\vert }^{2}+\ldots +{\left\vert v_{n}\right\vert }^{2}}}\\\quad &\lVert \,{\vec {v}}\,\rVert _{1}\;=\;\left\vert v_{1}\right\vert +\left\vert v_{2}\right\vert +\ldots +\left\vert v_{n}\right\vert \\\quad &\lVert \,{\vec {v}}\,\rVert _{\infty }\;=\;{\underset {1\,\leq \,i\,\leq \,n}{\max \left\vert v_{i}\right\vert }}\\\quad &\lVert \,{\vec {v}}\,\rVert _{p}\;=\;(\left\vert v_{1}\right\vert ^{p}+\left\vert v_{2}\right\vert ^{p}+\ldots +\left\vert v_{n}\right\vert ^{p})^{\frac {1}{p}}\\\end{aligned}}}$ .
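
These formulas translate directly into code. A minimal sketch using NumPy (the vector `v` and the exponent `p` are arbitrary illustrative choices), checking the explicit sums against `numpy.linalg.norm`:

```python
import numpy as np

v = np.array([3.0, -4.0, 12.0])

# 2-norm: square root of the sum of squared magnitudes
norm2 = np.sqrt(np.sum(np.abs(v) ** 2))
# 1-norm: sum of magnitudes
norm1 = np.sum(np.abs(v))
# infinity-norm: largest magnitude
norm_inf = np.max(np.abs(v))
# general p-norm, here with p = 3
p = 3
norm_p = np.sum(np.abs(v) ** p) ** (1.0 / p)

# NumPy's built-in norms agree with the explicit formulas
assert np.isclose(norm2, np.linalg.norm(v, 2))
assert np.isclose(norm1, np.linalg.norm(v, 1))
assert np.isclose(norm_inf, np.linalg.norm(v, np.inf))
assert np.isclose(norm_p, np.linalg.norm(v, p))
```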

Any two norms on ${\displaystyle n}$  dimensional vectors of complex numbers are topologically equivalent in the sense that, if  ${\displaystyle \lVert \cdot \rVert _{a}}$   and  ${\displaystyle \lVert \cdot \rVert _{b}}$   are two different norms, then there exist positive constants  ${\displaystyle c_{1}}$   and  ${\displaystyle c_{2}}$   such that  ${\displaystyle c_{1}\,\lVert \cdot \rVert _{a}\;\leq \;\lVert \cdot \rVert _{b}\;\leq \;c_{2}\,\lVert \cdot \rVert _{a}}$ .

#### Inner Product of Vectors

The inner product (or dot product), of two vectors

${\displaystyle {\vec {v}}\;=\;<v_{1}\;\;v_{2}\;\cdots \;v_{n}>}$   and   ${\displaystyle {\vec {w}}\;=\;<w_{1}\;\;w_{2}\;\cdots \;w_{n}>}$ ,

is defined by

${\displaystyle {\vec {v}}\,\cdot \,{\vec {w}}\;=\;v_{1}\,w_{1}+v_{2}\,w_{2}+\;\cdots \;+v_{n}\,w_{n}}$ ,

or when  ${\displaystyle {\vec {v}}}$   and  ${\displaystyle {\vec {w}}}$   are complex valued, by

${\displaystyle {\vec {v}}\,\cdot \,{\vec {w}}\;=\;v_{1}\,{\overline {w}}_{1}+v_{2}\,{\overline {w}}_{2}+\;\cdots \;+v_{n}\,{\overline {w}}_{n}}$ .     (1.3.3.0)

It is often indicated by any one of several notations:

${\displaystyle {\vec {v}}\,\cdot \,{\vec {w}}\,,\quad {\vec {v}}^{\,T}{\vec {w}}\,,\quad <{\vec {v}}\,,\;{\vec {w}}>,}$     or    ${\displaystyle ({\vec {v}}\,,\;{\vec {w}})}$ .

Besides the dot product, other inner products are defined to be a rule that assigns to each pair of vectors  ${\displaystyle {\vec {v}}\,,\;{\vec {w}}}$ ,  a complex number with the following properties.

${\displaystyle <\alpha \,{\vec {v}}\,,\;{\vec {w}}>\;=\;\alpha \,<{\vec {v}}\,,\;{\vec {w}}>}$

${\displaystyle <{\vec {v}}\,,\;{\vec {w}}>\;=\;{\overline {<{\vec {w}}\,,\;{\vec {v}}>}}}$

${\displaystyle <{\vec {v_{1}}}+{\vec {v_{2}}}\,,\;{\vec {w}}>\;=\;<{\vec {v_{1}}}\,,\;{\vec {w}}>+<{\vec {v_{2}}}\,,\;{\vec {w}}>}$

for  ${\displaystyle {\vec {v}}\;\neq \;0}$ ,  ${\displaystyle <{\vec {v}}\,,\;{\vec {v}}>}$   is real valued and positive

${\displaystyle <{\vec {v}}\,,\;{\vec {v}}>\;>\;0}$

and

${\displaystyle \left\vert <{\vec {v}}\,,\;{\vec {w}}>\right\vert ^{2}\;\leq \;<{\vec {v}}\,,\;{\vec {v}}><{\vec {w}}\,,\;{\vec {w}}>}$

An inner product defines a norm by

${\displaystyle \lVert \,{\vec {v}}\,\rVert \;=\;{\sqrt {<{\vec {v}}\,,\;{\vec {v}}>}}}$ .     (1.3.3.1)

#### Inequalities Involving Norms

The Cauchy-Schwarz and Hölder inequalities are commonly employed.

${\displaystyle \left\vert \,{\vec {v}}\,\cdot \,{\vec {w}}\,\right\vert \;\leq \;\lVert \,{\vec {v}}\,\rVert _{2}\,\lVert \,{\vec {w}}\,\rVert _{2}}$

${\displaystyle \left\vert \,{\vec {v}}\,\cdot \,{\vec {w}}\,\right\vert \;\leq \;\lVert \,{\vec {v}}\,\rVert _{p}\,\lVert \,{\vec {w}}\,\rVert _{q}}$     for  ${\displaystyle {\tfrac {1}{p}}+{\tfrac {1}{q}}\;=\;1}$
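
Both inequalities can be spot-checked numerically; a sketch with randomly drawn real vectors and the conjugate pair p = 3, q = 3/2:

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal(5)
w = rng.standard_normal(5)

# Cauchy-Schwarz: |v . w| <= ||v||_2 ||w||_2
cs_gap = np.linalg.norm(v, 2) * np.linalg.norm(w, 2) - abs(v @ w)

# Hoelder with 1/p + 1/q = 1
p, q = 3.0, 1.5
holder_gap = np.linalg.norm(v, p) * np.linalg.norm(w, q) - abs(v @ w)

# both gaps are nonnegative (up to round-off)
assert cs_gap >= -1e-12 and holder_gap >= -1e-12
```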

### Matrix Norms

#### Definition of a Matrix Norm

The most ordinary kind of matrices are those consisting of rectangular arrays of real or complex numbers. They may be written in element form      ${\displaystyle A\;=\;(a_{i\,j}),\;1\;\leq \;i\;\leq \;m\,,\;\;1\;\leq \;j\;\leq \;n}$       and be considered as collections of column  or  row  vectors.

Matrices are usually added element-wise, for

${\displaystyle A\;=\;(a_{i\,j}),\;B\;=\;(b_{i\,j}),\quad \;A+B\;=\;(a_{i\,j}+b_{i\,j})}$

Scalar multiplication is defined by   ${\displaystyle \alpha \,A\;=\;(\alpha \,a_{i\,j})}$ .

The notation  ${\displaystyle A\;=\;0}$   means that all elements of  ${\displaystyle A}$   are identically zero.

A matrix norm is a generalization of ordinary absolute value  ${\displaystyle \left\vert x\right\vert }$   of a real or complex number, and can be considered a type of vector norm.

For  ${\displaystyle A}$   and  ${\displaystyle B}$   matrices, and  ${\displaystyle \alpha }$   a scalar, a matrix norm is a real valued function  ${\displaystyle \lVert \cdot \rVert }$   of a matrix for which the following properties hold.

${\displaystyle {\begin{aligned}(i)\;\;\;\quad &\lVert \,A\,\rVert \;\geq \;0\\(ii)\;\;\quad &\lVert \,A\,\rVert \;=\;0\iff \;A\;=\;0\\(iii)\;\quad &\lVert \,\alpha \,A\,\rVert \;=\;\left\vert \alpha \right\vert \,\lVert \,A\,\rVert \\(iv)\;\;\quad &\lVert \,A+B\,\rVert \;\;\leq \;\;\lVert \,A\,\rVert \;+\;\lVert \,B\,\rVert \;\end{aligned}}}$ .

#### Common Matrix Norms

The most commonly used norms are:

${\displaystyle {\begin{aligned}\quad &\lVert \,A\,\rVert _{F}\;=\;{\sqrt {\textstyle \sum _{i\,=\,1}^{m}\textstyle \sum _{j\,=\,1}^{n}{\left\vert a_{i\,j}\right\vert }^{2}}}\\\quad &\lVert \,A\,\rVert _{\infty }\;=\;{\underset {1\;\leq \;i\;\leq \;m}{\max \;}}(\left\vert a_{i\,1}\right\vert +\left\vert a_{i\,2}\right\vert +\ldots +\left\vert a_{i\,n}\right\vert )\\\quad &\lVert \,A\,\rVert _{1}\;=\;{\underset {1\;\leq \;j\;\leq \;n}{\max \;}}(\left\vert a_{1\,j}\right\vert +\left\vert a_{2\,j}\right\vert +\ldots +\left\vert a_{m\,j}\right\vert )\\\quad &\lVert \,A\,\rVert _{\max }\;=\;{\underset {1\,\leq \,i\,\leq \,m,\;1\,\leq \,j\,\leq \,n}{\max {\{\left\vert a_{i\,j}\right\vert \}}}}\\\quad &\lVert \,A\,\rVert _{p}\;=\;(\textstyle \sum _{i\,=\,1}^{m}\textstyle \sum _{j\,=\,1}^{n}\left\vert a_{i\,j}\right\vert ^{p})^{\frac {1}{p}}\\\quad &\lVert \,A\,\rVert _{2}\;=\;{\underset {\lVert \,x\,\rVert _{2}\;=\;1}{\max }}\;\lVert \,A\,x\,\rVert _{2}\\\end{aligned}}}$ .
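
The row-sum, column-sum, and Frobenius formulas are easy to verify directly; the matrix `A` is an arbitrary example, and the 2-norm is computed as the largest singular value, which equals the maximum of  ${\displaystyle \lVert \,A\,x\,\rVert _{2}}$   over unit vectors:

```python
import numpy as np

A = np.array([[1.0, -2.0], [3.0, 4.0]])

# Frobenius norm: square root of the sum of squared entries
frob = np.sqrt(np.sum(np.abs(A) ** 2))
# infinity-norm: maximum absolute row sum
row_sum = np.max(np.sum(np.abs(A), axis=1))
# 1-norm: maximum absolute column sum
col_sum = np.max(np.sum(np.abs(A), axis=0))
# 2-norm: largest singular value
two = np.max(np.linalg.svd(A, compute_uv=False))

assert np.isclose(frob, np.linalg.norm(A, 'fro'))
assert np.isclose(row_sum, np.linalg.norm(A, np.inf))
assert np.isclose(col_sum, np.linalg.norm(A, 1))
assert np.isclose(two, np.linalg.norm(A, 2))
```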

Like vector norms, any two matrix norms on ${\displaystyle m\times n}$  matrices of complex numbers are topologically equivalent in the sense that, if  ${\displaystyle \lVert \cdot \rVert _{a}}$   and  ${\displaystyle \lVert \cdot \rVert _{b}}$   are two different norms, then there exist positive constants  ${\displaystyle c_{1}}$   and  ${\displaystyle c_{2}}$   such that  ${\displaystyle c_{1}\,\lVert \cdot \rVert _{a}\;\leq \;\lVert \cdot \rVert _{b}\;\leq \;c_{2}\,\lVert \cdot \rVert _{a}}$ .

The norm  ${\displaystyle \lVert \,A\,\rVert _{2}}$   on matrices  ${\displaystyle A}$   is an example of an induced norm. The norm induced by a vector norm ${\displaystyle \lVert \,x\,\rVert _{b}}$   is defined by

${\displaystyle \lVert \,A\,\rVert _{b}\;=\;{\underset {\lVert \,x\,\rVert _{b}\;=\;1}{\max }}\;\lVert \,A\,x\,\rVert _{b}}$ .

As a result, the same subscript notation is sometimes used for two different norms, one on matrices and one on vectors.

#### Positive definite matrices

An  ${\displaystyle n\times n}$   matrix  ${\displaystyle A}$   is said to be positive definite if for any vector  ${\displaystyle x}$

${\displaystyle x^{T}A\,x\;\geq \;\alpha \,x^{T}x}$

for some positive constant  ${\displaystyle \alpha }$  , not depending on  ${\displaystyle x}$  .

The property of being positive definite ensures the numerical stability of a variety of common numerical techniques used to solve the equation  ${\displaystyle A\,x\;=\;y}$  .

Taking  ${\displaystyle x\;=\;A^{-1}y}$   gives

${\displaystyle (A^{-1}y)^{T}A\,(A^{-1}y)\;\geq \;\alpha \,(A^{-1}y)^{T}A^{-1}y}$  .

so that

${\displaystyle (A^{-1}y)^{T}y\;\geq \;\alpha \,\lVert A^{-1}y\rVert _{2}^{2}}$      and, by the Cauchy-Schwarz inequality,     ${\displaystyle \alpha \,\lVert A^{-1}y\rVert _{2}^{2}\;\leq \;\lVert A^{-1}y\rVert _{2}\,\lVert y\rVert _{2}}$   .

This gives

${\displaystyle \alpha \,\lVert A^{-1}y\rVert _{2}\;\leq \;\lVert y\rVert _{2}}$

and

${\displaystyle \lVert A^{-1}\rVert _{2}\;\leq \;\alpha ^{-1}}$  .
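
For a symmetric positive definite matrix the largest admissible  ${\displaystyle \alpha }$   is the smallest eigenvalue, and the bound on  ${\displaystyle \lVert A^{-1}\rVert _{2}}$   can be checked directly; the matrix below is an arbitrary example:

```python
import numpy as np

# a symmetric positive definite matrix (example)
A = np.array([[4.0, 1.0], [1.0, 3.0]])

# for symmetric A, the largest alpha with x^T A x >= alpha x^T x
# is the smallest eigenvalue of A
alpha = np.min(np.linalg.eigvalsh(A))
assert alpha > 0  # confirms A is positive definite

# the bound ||A^{-1}||_2 <= 1/alpha derived above
inv_norm = np.linalg.norm(np.linalg.inv(A), 2)
assert inv_norm <= 1.0 / alpha + 1e-12
```

For symmetric positive definite matrices the bound is attained, since  ${\displaystyle \lVert A^{-1}\rVert _{2}}$   equals the reciprocal of the smallest eigenvalue.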

#### Consistency of Norms

A matrix norm  ${\displaystyle \lVert \,\cdot \,\rVert _{M}}$   and a vector norm  ${\displaystyle \lVert \,\cdot \,\rVert _{V}}$   are said to be consistent when

${\displaystyle \lVert \,A\,x\,\rVert _{M}\;\leq \;\lVert \,A\,\rVert _{M}\,\lVert \,x\,\rVert _{V}}$  .

When  ${\displaystyle \lVert \,\cdot \,\rVert _{M}}$   is the matrix norm induced by the vector norm  ${\displaystyle \lVert \,\cdot \,\rVert _{V}}$   then the two norms will be consistent.

When the two norms are not consistent there will still be a positive constant  ${\displaystyle \alpha }$   such that

${\displaystyle \lVert \,A\,x\,\rVert _{M}\;\leq \;\alpha \,\lVert \,A\,\rVert _{M}\,\lVert \,x\,\rVert _{V}}$  .
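
A quick numerical illustration of consistency for the induced infinity-norm, with randomly drawn `A` and `x`:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

# the matrix infinity-norm is induced by the vector infinity-norm,
# so the two are consistent: ||A x|| <= ||A|| ||x||
lhs = np.linalg.norm(A @ x, np.inf)
rhs = np.linalg.norm(A, np.inf) * np.linalg.norm(x, np.inf)
assert lhs <= rhs + 1e-12
```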

## Finite Difference Operators

### Approximation of a First Derivative

The definition of the derivative of a function gives the first and simplest finite difference.

${\displaystyle f^{\prime }(x_{0})\;=\;\lim _{x_{1}\to x_{0}}{\frac {f(x_{1})-f(x_{0})}{x_{1}-x_{0}}}\quad {\text{or}}\quad f^{\prime }(x_{0})\;=\;\lim _{h\to 0}{\frac {f(x_{0}+h)-f(x_{0})}{h}}}$ .

So the finite difference

${\displaystyle {\frac {d_{h}}{d_{h}x}}\,f(x)\;=\;{\frac {f(x+h)-f(x)}{h}}\;=\;h^{-1}\,(f(x+h)-f(x))}$

can be defined. It is an approximation to  ${\displaystyle f^{\prime }(x)}$   when  ${\displaystyle h}$  is near ${\displaystyle 0}$ .

The finite difference approximation  ${\displaystyle {\frac {d_{h}^{\,n}}{d_{h}x^{n}}}\,f(x)}$   for  ${\displaystyle f^{(\,n)}(x)}$   is said to be of order  ${\displaystyle k}$ ,  if there exists  ${\displaystyle M\;>\;0}$   such that

${\displaystyle \left\vert {\frac {d_{h}^{\,n}}{d_{h}x^{n}}}\,f(x)-f^{(\,n)}(x)\right\vert \;\leq \;M\,h^{k}}$ ,

when  ${\displaystyle h}$  is near ${\displaystyle 0}$ .

For practical reasons the order of a finite difference will be described under the assumption that  ${\displaystyle f(x)}$   is sufficiently smooth so that its Taylor expansion up to some order exists. For example, if

${\displaystyle f(x+h)\;=\;f(x)+f^{\prime }(x)\,h+{\tfrac {1}{2}}\,f^{\prime \prime }(z_{h})\,h^{2}}$

then

${\displaystyle {\frac {f(x+h)-f(x)}{h}}\;=\;f^{\prime }(x)+{\tfrac {1}{2}}\,f^{\prime \prime }(z_{h})\,h}$

so that

${\displaystyle {\frac {d_{h}}{d_{h}x}}\,f(x)-f^{\prime }(x)\;=\;{\tfrac {1}{2}}\,f^{\prime \prime }(z_{h})\,h}$ ,

meaning that the order of the approximation of  ${\displaystyle f^{\prime }(x)}$   by  ${\displaystyle {\frac {d_{h}}{d_{h}x}}\,f(x)}$   is  ${\displaystyle 1}$ .

The finite difference so far defined is a 2-point operator, since it requires 2 evaluations of ${\displaystyle f(x)}$ . If

${\displaystyle f(x+h)\;=\;f(x)+f^{\prime }(x)\,h+{\tfrac {1}{2}}\,f^{\prime \prime }(x)\,h^{2}+{\tfrac {1}{6}}\,f^{\prime \prime \prime }(z_{h})\,h^{3}}$

then another 2-point operator

${\displaystyle {\frac {d_{h}}{d_{h}x}}\,f(x)\;=\;{\frac {f(x+h)-f(x-h)}{2\,h}}\;=\;(2\,h)^{-1}\,(f(x+h)-f(x-h))}$

can be defined. Since

${\displaystyle {\frac {f(x+h)-f(x-h)}{2\,h}}\;=\;f^{\prime }(x)+{\tfrac {1}{2}}\,({\tfrac {1}{6}}\,f^{\prime \prime \prime }(z_{h})+{\tfrac {1}{6}}\,f^{\prime \prime \prime }(z_{-h}))\,h^{2}}$ ,

this  ${\displaystyle {\frac {d_{h}}{d_{h}x}}\,f(x)}$   is of order 2, and is referred to as a centered difference operator. Centered difference operators are usually one order of accuracy higher than uncentered operators for the same number of points.
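
The two orders of accuracy show up clearly in practice. A sketch comparing the one-sided and centered differences on f = sin at x = 1 (an arbitrary test point): halving h roughly halves the one-sided error but quarters the centered one.

```python
import numpy as np

f, fprime = np.sin, np.cos
x = 1.0

def forward(h):
    # one-sided difference, order 1
    return (f(x + h) - f(x)) / h

def centered(h):
    # centered difference, order 2
    return (f(x + h) - f(x - h)) / (2.0 * h)

h1, h2 = 0.1, 0.05
ratio_fwd = abs(forward(h1) - fprime(x)) / abs(forward(h2) - fprime(x))
ratio_cen = abs(centered(h1) - fprime(x)) / abs(centered(h2) - fprime(x))

# error ratios when h is halved: about 2 for order 1, about 4 for order 2
assert 1.8 < ratio_fwd < 2.2
assert 3.8 < ratio_cen < 4.2
```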

More generally, for  ${\displaystyle m}$   points  ${\displaystyle x+\alpha _{1}\,h\,,\;x+\alpha _{2}\,h\,,\;\ldots \;,\;x+\alpha _{m}\,h}$   a finite difference operator

${\displaystyle {\frac {d_{h}}{d_{h}x}}\,f(x)\;=\;h^{-1}\,(a_{1}\,f(x+\alpha _{1}\,h)+a_{2}\,f(x+\alpha _{2}\,h)+\;\ldots \;+a_{m}\,f(x+\alpha _{m}\,h))}$

is usually defined by choosing the coefficients  ${\displaystyle a_{1}\,,\;a_{2}\,,\;\ldots ,\;a_{m}}$   so that  ${\displaystyle {\frac {d_{h}}{d_{h}x}}\,f(x)}$   has as high an order of accuracy as possible. Consider the Taylor expansion

${\displaystyle {\begin{aligned}f(x+h)\;&=\;f(x)+f^{\prime }(x)\,h+{\tfrac {1}{2}}\,f^{\prime \prime }(x)\,h^{2}+{\tfrac {1}{6}}\,f^{\prime \prime \prime }(x)\,h^{3}\\&+\;\ldots \;+{\tfrac {1}{(m-1)!}}\,f^{(m-1)}(x)\,h^{m-1}+{\tfrac {1}{m!}}\,f^{(m)}(z_{h})\,h^{m}\\\end{aligned}}}$ .

Then

${\displaystyle {\begin{aligned}{\frac {d_{h}}{d_{h}x}}\,f(x)\;&=\;h^{-1}\,(c_{1}\,f(x)+c_{2}\,f^{\prime }(x)\,h+{\tfrac {1}{2}}\,c_{3}\,f^{\prime \prime }(x)\,h^{2}+{\tfrac {1}{6}}\,c_{4}\,f^{\prime \prime \prime }(x)\,h^{3}\\&+\;\ldots \;+{\tfrac {1}{(m-1)!}}\,c_{m}\,f^{(m-1)}(x)\,h^{m-1}+{\tfrac {1}{m!}}\,R_{m}\,h^{m})\\\end{aligned}}}$ .

where the  ${\displaystyle c_{1}\,,\;c_{2}\,,\;\ldots ,\;c_{m}}$   are the right hand side of the Vandermonde system

${\displaystyle {\begin{bmatrix}1&1&\cdots &1&1\\\alpha _{1}&\alpha _{2}&\cdots &\alpha _{m-1}&\alpha _{m}\\\alpha _{1}^{2}&\alpha _{2}^{2}&\cdots &\alpha _{m-1}^{2}&\alpha _{m}^{2}\\\vdots &\vdots &\cdots &\vdots &\vdots \\\alpha _{1}^{m-2}&\alpha _{2}^{m-2}&\cdots &\alpha _{m-1}^{m-2}&\alpha _{m}^{m-2}\\\alpha _{1}^{m-1}&\alpha _{2}^{m-1}&\cdots &\alpha _{m-1}^{m-1}&\alpha _{m}^{m-1}\\\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\\a_{3}\\\vdots \\a_{m-1}\\a_{m}\\\end{bmatrix}}\quad =\quad {\begin{bmatrix}c_{1}\\c_{2}\\c_{3}\\\vdots \\c_{m-1}\\c_{m}\\\end{bmatrix}}}$

and

${\displaystyle {\begin{aligned}R_{m}\;=&\;a_{1}\,\alpha _{1}^{m}\,f^{(m)}(z_{\,\alpha _{1}\,h})+a_{2}\,\alpha _{2}^{m}\,f^{(m)}(z_{\,\alpha _{2}\,h})+a_{3}\,\alpha _{3}^{m}\,f^{(m)}(z_{\,\alpha _{3}\,h})\\&+\;\ldots \;+a_{m-1}\,\alpha _{m-1}^{m}\,f^{(m)}(z_{\,\alpha _{m-1}\,h})+a_{m}\,\alpha _{m}^{m}\,f^{(m)}(z_{\,\alpha _{m}\,h})\\\end{aligned}}}$ .

When the  ${\displaystyle a_{i}\,{\text{'s}}}$   are chosen so that

${\displaystyle c_{1}\;=\;0\,,\;\;c_{2}\;=\;1\,,\;\;c_{3}\;=\;\ldots \;=\;c_{m}\;=\;0}$

then

${\displaystyle {\frac {d_{h}}{d_{h}x}}\,f(x)\;=\;f^{\prime }(x)+{\tfrac {1}{m!}}\,R_{m}\,h^{m-1}}$ .

So the operator is of order  ${\displaystyle m-1}$ .
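
The Vandermonde system is straightforward to solve numerically. A sketch for the offsets α = 0, 1, 2 (a one-sided 3-point stencil, chosen as an example), which recovers the well-known coefficients −3/2, 2, −1/2:

```python
import numpy as np

# offsets alpha_i for a one-sided 3-point stencil
alpha = np.array([0.0, 1.0, 2.0])
m = len(alpha)

# Vandermonde system: rows are powers 0 .. m-1 of the alpha_i
V = np.vander(alpha, m, increasing=True).T
# right hand side: c_1 = 0, c_2 = 1, remaining c_i = 0
c = np.zeros(m)
c[1] = 1.0
a = np.linalg.solve(V, c)

# known one-sided stencil: f'(x) ~ (-3/2 f(x) + 2 f(x+h) - 1/2 f(x+2h)) / h
assert np.allclose(a, [-1.5, 2.0, -0.5])
```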

At the end of the section a table for the first several values of  ${\displaystyle m}$ ,  the number of points, will be provided. The discussion will move on to the approximation of the second derivative.

### Approximation of a Second Derivative

The definition of the second derivative of a function

${\displaystyle f^{\prime \prime }(x)\;=\;\lim _{h\to 0}{\frac {f^{\prime }(x+h)-f^{\prime }(x)}{h}}}$ .

used together with the finite difference approximation for the first derivative

${\displaystyle {\frac {d_{h}}{d_{h}x}}\,f(x)\;=\;{\frac {f(x+h)-f(x)}{h}}\;=\;h^{-1}\,(f(x+h)-f(x))}$

gives the finite difference

${\displaystyle {\begin{aligned}{\frac {d_{h}^{2}}{d_{h}x^{2}}}\,f(x)\;=&\;h^{-1}\,{\big (}h^{-1}\,(f(x+2\,h)-f(x+h))-h^{-1}\,(f(x+h)-f(x)){\big )}\\=&\;h^{-2}\,(f(x+2\,h)-2\,f(x+h)+f(x))\\\end{aligned}}}$

In view of

${\displaystyle f(x+h)\;=\;f(x)+f^{\prime }(x)\,h+{\tfrac {1}{2}}\,f^{\prime \prime }(x)\,h^{2}+{\tfrac {1}{6}}\,f^{\prime \prime \prime }(z_{h})\,h^{3}}$

for the operator just defined

${\displaystyle {\frac {d_{h}^{2}}{d_{h}x^{2}}}\,f(x)\;=\;f^{\prime \prime }(x)+({\tfrac {4}{3}}\,f^{\prime \prime \prime }(z_{\,2\,h})-{\tfrac {1}{3}}\,f^{\prime \prime \prime }(z_{h}))\,h}$ .

If instead, the difference operator

${\displaystyle {\frac {d_{h}}{d_{h}x}}\,f(x)\;=\;(2\,h)^{-1}\,(f(x+h)-f(x-h))}$

is used

${\displaystyle {\begin{aligned}{\frac {d_{h}^{2}}{d_{h}x^{2}}}\,f(x)\;=&\;h^{-1}\,{\big (}(2\,h)^{-1}\,(f(x+2\,h)-f(x))-(2\,h)^{-1}\,(f(x+h)-f(x-h)){\big )}\\=&\;{\tfrac {1}{2}}\,h^{-2}\,(f(x+2\,h)-f(x+h)-f(x)+f(x-h))\\&\;\\=&\;f^{\prime \prime }(x)+({\tfrac {2}{3}}\,f^{\prime \prime \prime }(z_{\,2\,h})-{\tfrac {1}{12}}\,f^{\prime \prime \prime }(z_{h})-{\tfrac {1}{12}}\,f^{\prime \prime \prime }(z_{-h}))\,h\\\end{aligned}}}$

If the other obvious possibility is tried

${\displaystyle {\begin{aligned}{\frac {d_{h}^{2}}{d_{h}x^{2}}}\,f(x)\;=&\;h^{-1}\,{\big (}h^{-1}\,(f(x+h)-f(x))-h^{-1}\,(f(x)-f(x-h)){\big )}\\=&\;h^{-2}\,(f(x+h)-2\,f(x)+f(x-h))\\\end{aligned}}}$

In view of

${\displaystyle f(x+h)\;=\;f(x)+f^{\prime }(x)\,h+{\tfrac {1}{2}}\,f^{\prime \prime }(x)\,h^{2}+{\tfrac {1}{6}}\,f^{\prime \prime \prime }(x)\,h^{3}+{\tfrac {1}{24}}\,f^{(iv)}(z_{h})\,h^{4}}$ ,

${\displaystyle {\frac {d_{h}^{2}}{d_{h}x^{2}}}\,f(x)\;=\;f^{\prime \prime }(x)+({\tfrac {1}{24}}\,f^{(iv)}(z_{h})+{\tfrac {1}{24}}\,f^{(iv)}(z_{-h}))\,h^{2}}$ .

So   ${\displaystyle {\frac {d_{h}^{2}}{d_{h}x^{2}}}\,f(x)\;=\;h^{-2}\,(f(x+h)-2\,f(x)+f(x-h))}$

is a second order centered finite difference approximation for  ${\displaystyle f^{\prime \prime }(x)}$ .

The reasoning applied to the approximation of a first derivative can be used for the second derivative with only a few modifications.

For  ${\displaystyle m}$   points  ${\displaystyle x+\alpha _{1}\,h\,,\;x+\alpha _{2}\,h\,,\;\ldots \;,\;x+\alpha _{m}\,h}$   a finite difference operator

${\displaystyle {\frac {d_{h}^{2}}{d_{h}x^{2}}}\,f(x)\;=\;h^{-2}\,(a_{1}\,f(x+\alpha _{1}\,h)+a_{2}\,f(x+\alpha _{2}\,h)+\;\ldots \;+a_{m}\,f(x+\alpha _{m}\,h))}$

is usually defined by choosing the coefficients  ${\displaystyle a_{1}\,,\;a_{2}\,,\;\ldots ,\;a_{m}}$   so that  ${\displaystyle {\frac {d_{h}^{2}}{d_{h}x^{2}}}\,f(x)}$   has as high an order of accuracy as possible. Consider the Taylor expansion

${\displaystyle {\begin{aligned}f(x+h)\;&=\;f(x)+f^{\prime }(x)\,h+{\tfrac {1}{2}}\,f^{\prime \prime }(x)\,h^{2}+{\tfrac {1}{6}}\,f^{\prime \prime \prime }(x)\,h^{3}\\&+\;\ldots \;+{\tfrac {1}{(m-1)!}}\,f^{(m-1)}(x)\,h^{m-1}+{\tfrac {1}{m!}}\,f^{(m)}(z_{h})\,h^{m}\\\end{aligned}}}$ .

Then

${\displaystyle {\begin{aligned}{\frac {d_{h}^{2}}{d_{h}x^{2}}}\,f(x)\;&=\;h^{-2}\,(c_{1}\,f(x)+c_{2}\,f^{\prime }(x)\,h+{\tfrac {1}{2}}\,c_{3}\,f^{\prime \prime }(x)\,h^{2}+{\tfrac {1}{6}}\,c_{4}\,f^{\prime \prime \prime }(x)\,h^{3}\\&+\;\ldots \;+{\tfrac {1}{(m-1)!}}\,c_{m}\,f^{(m-1)}(x)\,h^{m-1}+{\tfrac {1}{m!}}\,R_{m}\,h^{m})\\\end{aligned}}}$ .

where the  ${\displaystyle c_{1}\,,\;c_{2}\,,\;\ldots ,\;c_{m}}$   are the right hand side of the Vandermonde system

${\displaystyle {\begin{bmatrix}1&1&\cdots &1&1\\\alpha _{1}&\alpha _{2}&\cdots &\alpha _{m-1}&\alpha _{m}\\\alpha _{1}^{2}&\alpha _{2}^{2}&\cdots &\alpha _{m-1}^{2}&\alpha _{m}^{2}\\\vdots &\vdots &\cdots &\vdots &\vdots \\\alpha _{1}^{m-2}&\alpha _{2}^{m-2}&\cdots &\alpha _{m-1}^{m-2}&\alpha _{m}^{m-2}\\\alpha _{1}^{m-1}&\alpha _{2}^{m-1}&\cdots &\alpha _{m-1}^{m-1}&\alpha _{m}^{m-1}\\\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\\a_{3}\\\vdots \\a_{m-1}\\a_{m}\\\end{bmatrix}}\quad =\quad {\begin{bmatrix}c_{1}\\c_{2}\\c_{3}\\\vdots \\c_{m-1}\\c_{m}\\\end{bmatrix}}}$

and

${\displaystyle {\begin{aligned}R_{m}\;=&\;a_{1}\,\alpha _{1}^{m}\,f^{(m)}(z_{\,\alpha _{1}\,h})+a_{2}\,\alpha _{2}^{m}\,f^{(m)}(z_{\,\alpha _{2}\,h})+a_{3}\,\alpha _{3}^{m}\,f^{(m)}(z_{\,\alpha _{3}\,h})\\&+\;\ldots \;+a_{m-1}\,\alpha _{m-1}^{m}\,f^{(m)}(z_{\,\alpha _{m-1}\,h})+a_{m}\,\alpha _{m}^{m}\,f^{(m)}(z_{\,\alpha _{m}\,h})\\\end{aligned}}}$ .

When the  ${\displaystyle a_{i}\,{\text{'s}}}$   are chosen so that

${\displaystyle c_{1}\;=\;0\,,\;\;c_{2}\;=\;0\,,\;\;c_{3}\;=\;2\,,\;\;c_{4}\;=\;\ldots \;=\;c_{m}\;=\;0}$

then

${\displaystyle {\frac {d_{h}^{2}}{d_{h}x^{2}}}\,f(x)\;=\;f^{\prime \prime }(x)+{\tfrac {1}{m!}}\,R_{m}\,h^{m-2}}$ .

So the operator is of order  ${\displaystyle m-2}$ .
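
Solving the same system with the second-derivative right hand side c₃ = 2, here for the centered offsets α = −1, 0, 1, reproduces the classical 3-point stencil derived above:

```python
import numpy as np

# centered offsets for the second derivative
alpha = np.array([-1.0, 0.0, 1.0])
m = len(alpha)

# Vandermonde system: rows are powers 0 .. m-1 of the alpha_i
V = np.vander(alpha, m, increasing=True).T
# right hand side: c_1 = 0, c_2 = 0, c_3 = 2
c = np.zeros(m)
c[2] = 2.0
a = np.linalg.solve(V, c)

# the classical stencil f''(x) ~ (f(x+h) - 2 f(x) + f(x-h)) / h^2
assert np.allclose(a, [1.0, -2.0, 1.0])
```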

The effect of the placement of the points will be covered below.

### Approximation of Higher Derivatives

Approximations to higher derivatives can be defined recursively from those for derivatives of lower order, starting from

${\displaystyle f^{(\,n)}(x)\;=\;\lim _{h\to 0}{\frac {f^{\,(n-1)}(x+h)-f^{\,(n-1)}(x)}{h}}}$ ,

but the end result is the same finite difference operators, so the Vandermonde type system will be used again for this purpose.

The number of points needed to approximate  ${\displaystyle f^{(\,n)}(x)}$   by finite differences is at least  ${\displaystyle n+1}$ .

For  ${\displaystyle m}$   points  ${\displaystyle x+\alpha _{1}\,h\,,\;x+\alpha _{2}\,h\,,\;\ldots ,\;x+\alpha _{m}\,h}$   a finite difference operator

${\displaystyle {\frac {d_{h}^{\,n}}{d_{h}x^{n}}}\,f(x)\;=\;h^{-n}\,(a_{1}\,f(x+\alpha _{1}\,h)+a_{2}\,f(x+\alpha _{2}\,h)+\;\ldots \;+a_{m}\,f(x+\alpha _{m}\,h))}$

is usually defined by choosing the coefficients  ${\displaystyle a_{1}\,,\;a_{2}\,,\;\ldots ,\;a_{m}}$   so that  ${\displaystyle {\frac {d_{h}^{\,n}}{d_{h}x^{n}}}\,f(x)}$  approximates  ${\displaystyle f^{(\,n)}(x)}$   to as high an order of accuracy as possible. Consider the Taylor expansion

${\displaystyle {\begin{aligned}f(x+h)\;=&\;f(x)+f^{\prime }(x)\,h+{\tfrac {1}{2}}\,f^{\prime \prime }(x)\,h^{2}+{\tfrac {1}{6}}\,f^{\prime \prime \prime }(x)\,h^{3}\\&+\;\ldots \;+{\tfrac {1}{(m-1)!}}\,f^{(m-1)}(x)\,h^{m-1}+{\tfrac {1}{m!}}\,f^{(m)}(z_{h})\,h^{m}\\\end{aligned}}}$ .

Then

${\displaystyle {\begin{aligned}&{\frac {d_{h}^{\,n}}{d_{h}x^{n}}}\,f(x)\;=\;h^{-n}\,(c_{1}\,f(x)+c_{2}\,f^{\prime }(x)\,h+{\tfrac {1}{2}}\,c_{3}\,f^{\prime \prime }(x)\,h^{2}+{\tfrac {1}{6}}\,c_{4}\,f^{\prime \prime \prime }(x)\,h^{3}+\\&\ldots +{\tfrac {1}{n!}}\,c_{n+1}\,f^{(\,n)}(x)\,h^{n}+\;\ldots \;+{\tfrac {1}{(m-1)!}}\,c_{m}\,f^{(m-1)}(x)\,h^{m-1}+{\tfrac {1}{m!}}\,R_{m}\,h^{m})\\\end{aligned}}}$ .

where the  ${\displaystyle c_{1}\,,\;c_{2}\,,\;\ldots ,\;c_{m}}$   are the right hand side of the Vandermonde system

${\displaystyle {\begin{bmatrix}1&1&\cdots &1&1\\\alpha _{1}&\alpha _{2}&\cdots &\alpha _{m-1}&\alpha _{m}\\\alpha _{1}^{2}&\alpha _{2}^{2}&\cdots &\alpha _{m-1}^{2}&\alpha _{m}^{2}\\\vdots &\vdots &\cdots &\vdots &\vdots \\\alpha _{1}^{m-2}&\alpha _{2}^{m-2}&\cdots &\alpha _{m-1}^{m-2}&\alpha _{m}^{m-2}\\\alpha _{1}^{m-1}&\alpha _{2}^{m-1}&\cdots &\alpha _{m-1}^{m-1}&\alpha _{m}^{m-1}\\\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\\a_{3}\\\vdots \\a_{m-1}\\a_{m}\\\end{bmatrix}}\quad =\quad {\begin{bmatrix}c_{1}\\c_{2}\\c_{3}\\\vdots \\c_{m-1}\\c_{m}\\\end{bmatrix}}}$

and

${\displaystyle {\begin{aligned}R_{m}\;=&\;a_{1}\,\alpha _{1}^{m}\,f^{(m)}(z_{\,\alpha _{1}\,h})+a_{2}\,\alpha _{2}^{m}\,f^{(m)}(z_{\,\alpha _{2}\,h})+a_{3}\,\alpha _{3}^{m}\,f^{(m)}(z_{\,\alpha _{3}\,h})\\&+\;\ldots \;+a_{m-1}\,\alpha _{m-1}^{m}\,f^{(m)}(z_{\,\alpha _{m-1}\,h})+a_{m}\,\alpha _{m}^{m}\,f^{(m)}(z_{\,\alpha _{m}\,h})\\\end{aligned}}}$ .

When the  ${\displaystyle a_{i}\,{\text{'s}}}$   are chosen so that

${\displaystyle c_{n+1}\;=\;n!\,,\;\;{\text{and}}\;\;c_{i}\;=\;0\,,\quad {\text{for}}\;\;i\;\neq \;n+1}$

then

${\displaystyle {\frac {d_{h}^{\,n}}{d_{h}x^{n}}}\,f(x)\;=\;f^{(\,n)}(x)+{\tfrac {1}{m!}}\,R_{m}\,h^{m-n}}$ .

So the operator is of order  ${\displaystyle m-n}$ .
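
The construction works for any derivative order n. A small helper (the function name `fd_coefficients` is an arbitrary choice) that builds and solves the system; for n = 3 on the centered points −2, …, 2 it yields the standard five-point third-derivative stencil:

```python
import math
import numpy as np

def fd_coefficients(alpha, n):
    """Stencil weights a_i so that sum_i a_i f(x + alpha_i h) / h^n
    approximates the n-th derivative of f at x."""
    alpha = np.asarray(alpha, dtype=float)
    m = len(alpha)
    # Vandermonde system: rows are powers 0 .. m-1 of the alpha_i
    V = np.vander(alpha, m, increasing=True).T
    c = np.zeros(m)
    c[n] = math.factorial(n)  # c_{n+1} = n!, all other c_i = 0
    return np.linalg.solve(V, c)

# third derivative on the centered points -2, -1, 0, 1, 2
a = fd_coefficients([-2, -1, 0, 1, 2], 3)
assert np.allclose(a, [-0.5, 1.0, 0.0, -1.0, 0.5])
```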

An alternative analysis is to require that the finite difference operator differentiates powers of  ${\displaystyle x}$   exactly, up to the highest power possible.

### Effect of the Placement of Points

Usually the  ${\displaystyle \alpha _{i}{\text{'s}}}$   are taken to be integer valued, since the points are intended to coincide with those of some division of an interval or of a 2 or 3 dimensional domain. Even when the points, and hence the  ${\displaystyle \alpha _{i}{\text{'s}}}$ ,  are chosen with only accuracy in mind, at most one additional order of accuracy can be gained.

Start by seeing to how high an order  ${\displaystyle f^{\prime }(x)}$   can be approximated with three points.

${\displaystyle {\frac {d_{h}}{d_{h}x}}\,f(x)\;=\;h^{-1}\,(a_{1}\,f(x+\alpha _{1}\,h)+a_{2}\,f(x+\alpha _{2}\,h)+a_{3}\,f(x+\alpha _{3}\,h))}$

Then accuracy of order 4 cannot be achieved, because it would require the solution of

${\displaystyle {\begin{bmatrix}1&1&1\\\alpha _{1}&\alpha _{2}&\alpha _{3}\\\alpha _{1}^{2}&\alpha _{2}^{2}&\alpha _{3}^{2}\\\alpha _{1}^{3}&\alpha _{2}^{3}&\alpha _{3}^{3}\\\alpha _{1}^{4}&\alpha _{2}^{4}&\alpha _{3}^{4}\\\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\\a_{3}\\\end{bmatrix}}\quad =\quad {\begin{bmatrix}0\\1\\0\\0\\0\\\end{bmatrix}}}$

which cannot be solved, since the matrix

${\displaystyle {\begin{bmatrix}\alpha _{1}^{2}&\alpha _{2}^{2}&\alpha _{3}^{2}\\\alpha _{1}^{3}&\alpha _{2}^{3}&\alpha _{3}^{3}\\\alpha _{1}^{4}&\alpha _{2}^{4}&\alpha _{3}^{4}\\\end{bmatrix}}}$

is non-singular when the  ${\displaystyle \alpha _{i}{\text{'s}}}$   are distinct and nonzero. The possibility of an  ${\displaystyle \alpha _{i}}$   being ${\displaystyle 0}$   can be ruled out separately.

For accuracy of order 3

${\displaystyle {\begin{bmatrix}1&1&1\\\alpha _{1}&\alpha _{2}&\alpha _{3}\\\alpha _{1}^{2}&\alpha _{2}^{2}&\alpha _{3}^{2}\\\alpha _{1}^{3}&\alpha _{2}^{3}&\alpha _{3}^{3}\\\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\\a_{3}\\\end{bmatrix}}\quad =\quad {\begin{bmatrix}0\\1\\0\\0\\\end{bmatrix}}}$

So the matrix

${\displaystyle {\begin{bmatrix}1&1&1\\\alpha _{1}^{2}&\alpha _{2}^{2}&\alpha _{3}^{2}\\\alpha _{1}^{3}&\alpha _{2}^{3}&\alpha _{3}^{3}\\\end{bmatrix}}}$

is singular and  ${\displaystyle \alpha _{1}\,,\;\alpha _{2}\,,\;\alpha _{3}}$   are the roots of some polynomial

${\displaystyle p(\alpha )\;=\;\alpha ^{3}+b_{1}\,\alpha ^{2}+b_{0}}$  .

Two examples are next.

${\displaystyle {\frac {d_{h}}{d_{h}x}}\,f(x)\;=\;h^{-1}\,(-({\tfrac {\sqrt {3}}{3}}+{\tfrac {1}{2}})\,f(x+({\tfrac {\sqrt {3}}{3}}-1)\,h)+2\,{\tfrac {\sqrt {3}}{3}}\,f(x+{\tfrac {\sqrt {3}}{3}}\,h)-({\tfrac {\sqrt {3}}{3}}-{\tfrac {1}{2}})\,f(x+({\tfrac {\sqrt {3}}{3}}+1)\,h))}$

${\displaystyle {\frac {d_{h}}{d_{h}x}}\,f(x)\;=\;h^{-1}\,(-{\tfrac {9}{10}}\,f(x-{\tfrac {1}{2}}\,h)+{\tfrac {16}{15}}\,f(x+{\tfrac {3}{4}}\,h)-{\tfrac {1}{6}}\,f(x+{\tfrac {3}{2}}\,h))}$
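
The first of these examples can be checked numerically; a sketch with f = sin at x = 1 (arbitrary test choices), where halving h divides the error by roughly 2³ = 8, confirming order 3:

```python
import numpy as np

f, fprime = np.sin, np.cos
x = 1.0
s = np.sqrt(3.0) / 3.0

# the first example above: offsets s - 1, s, s + 1 with its coefficients
def d(h):
    return (-(s + 0.5) * f(x + (s - 1) * h)
            + 2 * s * f(x + s * h)
            - (s - 0.5) * f(x + (s + 1) * h)) / h

errs = [abs(d(h) - fprime(x)) for h in (0.1, 0.05)]
# third order: halving h divides the error by about 8
ratio = errs[0] / errs[1]
assert 7.0 < ratio < 9.0
```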

Next, see to how high an order  ${\displaystyle f^{\prime \prime }(x)}$   can be approximated with three points.

${\displaystyle {\frac {d_{h}^{2}}{d_{h}x^{2}}}\,f(x)\;=\;h^{-2}\,(a_{1}\,f(x+\alpha _{1}\,h)+a_{2}\,f(x+\alpha _{2}\,h)+a_{3}\,f(x+\alpha _{3}\,h))}$

Then accuracy of order 3 cannot be achieved, because it would require the solution of

${\displaystyle {\begin{bmatrix}1&1&1\\\alpha _{1}&\alpha _{2}&\alpha _{3}\\\alpha _{1}^{2}&\alpha _{2}^{2}&\alpha _{3}^{2}\\\alpha _{1}^{3}&\alpha _{2}^{3}&\alpha _{3}^{3}\\\alpha _{1}^{4}&\alpha _{2}^{4}&\alpha _{3}^{4}\\\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\\a_{3}\\\end{bmatrix}}\quad =\quad {\begin{bmatrix}0\\0\\2\\0\\0\\\end{bmatrix}}}$

which cannot be solved, since the matrices

${\displaystyle {\begin{bmatrix}1&1&1\\\alpha _{1}&\alpha _{2}&\alpha _{3}\\\alpha _{1}^{3}&\alpha _{2}^{3}&\alpha _{3}^{3}\\\end{bmatrix}}}$       and      ${\displaystyle {\begin{bmatrix}1&1&1\\\alpha _{1}&\alpha _{2}&\alpha _{3}\\\alpha _{1}^{4}&\alpha _{2}^{4}&\alpha _{3}^{4}\\\end{bmatrix}}}$

would both need to be singular.

If the matrix

${\displaystyle {\begin{bmatrix}1&1&1\\\alpha _{1}&\alpha _{2}&\alpha _{3}\\\alpha _{1}^{3}&\alpha _{2}^{3}&\alpha _{3}^{3}\\\end{bmatrix}}}$

is singular, then  ${\displaystyle \alpha _{1}\,,\;\alpha _{2}\,,\;\alpha _{3}}$   are the roots of some polynomial

${\displaystyle p(\alpha )\;=\;\alpha ^{3}+b_{1}\,\alpha +b_{0}}$  ,

implying

${\displaystyle {\begin{bmatrix}\alpha _{1}^{4}&\alpha _{2}^{4}&\alpha _{3}^{4}\\\end{bmatrix}}\;\;=\;\;-b_{1}\,{\begin{bmatrix}\alpha _{1}^{2}&\alpha _{2}^{2}&\alpha _{3}^{2}\\\end{bmatrix}}\;-\;b_{0}\,{\begin{bmatrix}\alpha _{1}&\alpha _{2}&\alpha _{3}\\\end{bmatrix}}}$

meaning that elementary row operations can transform

${\displaystyle {\begin{bmatrix}1&1&1\\\alpha _{1}&\alpha _{2}&\alpha _{3}\\\alpha _{1}^{4}&\alpha _{2}^{4}&\alpha _{3}^{4}\\\end{bmatrix}}}$

to

${\displaystyle {\begin{bmatrix}1&1&1\\\alpha _{1}&\alpha _{2}&\alpha _{3}\\\alpha _{1}^{2}&\alpha _{2}^{2}&\alpha _{3}^{2}\\\end{bmatrix}}}$

which is non-singular.

Conversely, if  ${\displaystyle \alpha _{1}\,,\;\alpha _{2}\,,\;\alpha _{3}}$   are the roots of some polynomial

${\displaystyle p(\alpha )\;=\;\alpha ^{3}+b_{1}\,\alpha +b_{0}}$  , then

${\displaystyle {\begin{bmatrix}1&1&1\\\alpha _{1}&\alpha _{2}&\alpha _{3}\\\alpha _{1}^{2}&\alpha _{2}^{2}&\alpha _{3}^{2}\\\alpha _{1}^{3}&\alpha _{2}^{3}&\alpha _{3}^{3}\\\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\\a_{3}\\\end{bmatrix}}\quad =\quad {\begin{bmatrix}0\\0\\2\\0\\\end{bmatrix}}}$

can be solved and  ${\displaystyle f^{\prime \prime }(x)}$   approximated with order 2 accuracy.

Now see to how high an order  ${\displaystyle f^{\prime }(x)}$   can be approximated with  ${\displaystyle m}$   points.

${\displaystyle {\frac {d_{h}}{d_{h}x}}\,f(x)\;=\;h^{-1}\,(a_{1}\,f(x+\alpha _{1}\,h)+a_{2}\,f(x+\alpha _{2}\,h)+\;\ldots \;+a_{m}\,f(x+\alpha _{m}\,h))}$

Then accuracy of order  ${\displaystyle m+1}$   cannot be achieved, because it would require the solution of

${\displaystyle {\begin{bmatrix}1&1&1&\cdots &1\\\alpha _{1}&\alpha _{2}&\alpha _{3}&\cdots &\alpha _{m}\\\alpha _{1}^{2}&\alpha _{2}^{2}&\alpha _{3}^{2}&\cdots &\alpha _{m}^{2}\\\alpha _{1}^{3}&\alpha _{2}^{3}&\alpha _{3}^{3}&\cdots &\alpha _{m}^{3}\\\vdots &\vdots &\vdots &\cdots &\vdots \\\alpha _{1}^{m}&\alpha _{2}^{m}&\alpha _{3}^{m}&\cdots &\alpha _{m}^{m}\\\alpha _{1}^{m+1}&\alpha _{2}^{m+1}&\alpha _{3}^{m+1}&\cdots &\alpha _{m}^{m+1}\\\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\\a_{3}\\\vdots \\a_{m}\\\end{bmatrix}}\quad =\quad {\begin{bmatrix}0\\1\\0\\\vdots \\0\\\end{bmatrix}}}$

which cannot be solved, since the matrix

${\displaystyle {\begin{bmatrix}\alpha _{1}^{2}&\alpha _{2}^{2}&\alpha _{3}^{2}&\cdots &\alpha _{m}^{2}\\\alpha _{1}^{3}&\alpha _{2}^{3}&\alpha _{3}^{3}&\cdots &\alpha _{m}^{3}\\\vdots &\vdots &\vdots &\cdots &\vdots \\\alpha _{1}^{m}&\alpha _{2}^{m}&\alpha _{3}^{m}&\cdots &\alpha _{m}^{m}\\\alpha _{1}^{m+1}&\alpha _{2}^{m+1}&\alpha _{3}^{m+1}&\cdots &\alpha _{m}^{m+1}\\\end{bmatrix}}}$

is non-singular. The possibility of an  ${\displaystyle \alpha _{i}}$   being ${\displaystyle 0}$   can be ruled out separately, because, for example, if  ${\displaystyle \alpha _{1}\;=\;0}$ ,  then the non-singularity of the block

${\displaystyle {\begin{bmatrix}\alpha _{2}^{2}&\alpha _{3}^{2}&\cdots &\alpha _{m}^{2}\\\alpha _{2}^{3}&\alpha _{3}^{3}&\cdots &\alpha _{m}^{3}\\\vdots &\vdots &\cdots &\vdots \\\alpha _{2}^{m}&\alpha _{3}^{m}&\cdots &\alpha _{m}^{m}\\\end{bmatrix}}}$

would force  ${\displaystyle a_{2}\;=\;a_{3}\;=\;\ldots \;=\;a_{m}\;=\;0}$ .

For accuracy of order  ${\displaystyle m}$   the system to be solved is

${\displaystyle {\begin{bmatrix}1&1&1&\cdots &1\\\alpha _{1}&\alpha _{2}&\alpha _{3}&\cdots &\alpha _{m}\\\alpha _{1}^{2}&\alpha _{2}^{2}&\alpha _{3}^{2}&\cdots &\alpha _{m}^{2}\\\alpha _{1}^{3}&\alpha _{2}^{3}&\alpha _{3}^{3}&\cdots &\alpha _{m}^{3}\\\vdots &\vdots &\vdots &\cdots &\vdots \\\alpha _{1}^{m}&\alpha _{2}^{m}&\alpha _{3}^{m}&\cdots &\alpha _{m}^{m}\\\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\\a_{3}\\\vdots \\a_{m}\\\end{bmatrix}}\quad =\quad {\begin{bmatrix}0\\1\\0\\\vdots \\0\\\end{bmatrix}}}$

If this system is to have a solution, then the matrix obtained by deleting its second row,

${\displaystyle {\begin{bmatrix}1&1&1&\cdots &1\\\alpha _{1}^{2}&\alpha _{2}^{2}&\alpha _{3}^{2}&\cdots &\alpha _{m}^{2}\\\alpha _{1}^{3}&\alpha _{2}^{3}&\alpha _{3}^{3}&\cdots &\alpha _{m}^{3}\\\vdots &\vdots &\vdots &\cdots &\vdots \\\alpha _{1}^{m}&\alpha _{2}^{m}&\alpha _{3}^{m}&\cdots &\alpha _{m}^{m}\\\end{bmatrix}}}$

must be singular, for otherwise its  ${\displaystyle m}$   equations (all with right hand side  ${\displaystyle 0}$ ) would force every  ${\displaystyle a_{i}}$   to be  ${\displaystyle 0}$ .  Its rows being linearly dependent,  ${\displaystyle \alpha _{1}\,,\;\alpha _{2}\,,\;\alpha _{3}\,,\;\ldots ,\;\alpha _{m}}$   are the roots of some polynomial with no linear term

${\displaystyle p(\alpha )\;=\;\alpha ^{m}+b_{m-2}\,\alpha ^{m-1}+\ldots +b_{2}\,\alpha ^{3}+b_{1}\,\alpha ^{2}+b_{0}}$  .
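This can be made concrete. In the sketch below (assuming NumPy), the points  ${\displaystyle \alpha \;=\;1,\,2,\,-2/3}$   are an illustrative choice, not from the text: they are the roots of  ${\displaystyle p(\alpha )\;=\;\alpha ^{3}-{\tfrac {7}{3}}\,\alpha ^{2}+{\tfrac {4}{3}}}$ ,  a cubic with no linear term, so three points reach order 3.

```python
import numpy as np

# alpha = 1, 2, -2/3 are the roots of p(alpha) = alpha^3 - (7/3) alpha^2 + 4/3,
# a cubic with no linear term (an illustrative choice, not from the text).
alpha = np.array([1.0, 2.0, -2.0 / 3.0])
m = alpha.size

# Solve the square system for the powers 0 .. m-1 ...
V = np.vander(alpha, m, increasing=True).T
rhs = np.zeros(m)
rhs[1] = 1.0
a = np.linalg.solve(V, rhs)

# ... and the remaining condition sum_i a_i alpha_i^m = 0 holds automatically,
# because the alpha_i are roots of a polynomial with no linear term.
extra_moment = a @ alpha**m

def d_h(f, x, h):
    """h^{-1} (a_1 f(x + alpha_1 h) + ... + a_m f(x + alpha_m h))"""
    return (a @ f(x + alpha * h)) / h

# The observed order of accuracy is close to m = 3, one better than generic.
x0 = 0.3
e1 = abs(d_h(np.sin, x0, 0.05) - np.cos(x0))
e2 = abs(d_h(np.sin, x0, 0.025) - np.cos(x0))
observed_order = float(np.log2(e1 / e2))
```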

The progression for the second, third, ... derivatives goes as follows.

If  ${\displaystyle \alpha _{1}\,,\;\alpha _{2}\,,\;\alpha _{3}\,,\;\ldots ,\;\alpha _{m}}$   are the roots of some polynomial

${\displaystyle p(\alpha )\;=\;\alpha ^{m}+b_{m-2}\,\alpha ^{m-1}+\ldots +b_{2}\,\alpha ^{3}+b_{1}\,\alpha +b_{0}}$

then the system

${\displaystyle {\begin{bmatrix}1&1&1&\cdots &1\\\alpha _{1}&\alpha _{2}&\alpha _{3}&\cdots &\alpha _{m}\\\alpha _{1}^{2}&\alpha _{2}^{2}&\alpha _{3}^{2}&\cdots &\alpha _{m}^{2}\\\alpha _{1}^{3}&\alpha _{2}^{3}&\alpha _{3}^{3}&\cdots &\alpha _{m}^{3}\\\vdots &\vdots &\vdots &\cdots &\vdots \\\alpha _{1}^{m}&\alpha _{2}^{m}&\alpha _{3}^{m}&\cdots &\alpha _{m}^{m}\\\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\\a_{3}\\a_{4}\\\vdots \\a_{m}\\\end{bmatrix}}\quad =\quad {\begin{bmatrix}0\\0\\2\\0\\\vdots \\0\\\end{bmatrix}}}$

can be solved, and

${\displaystyle {\frac {d_{h}^{2}}{d_{h}x^{2}}}\,f(x)\;=\;h^{-2}\,(a_{1}\,f(x+\alpha _{1}\,h)+a_{2}\,f(x+\alpha _{2}\,h)+\;\ldots \;+a_{m}\,f(x+\alpha _{m}\,h))}$

approximates  ${\displaystyle f^{\prime \prime }(x)}$   to an order of accuracy of  ${\displaystyle m-1}$ .
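For example (a sketch assuming NumPy), the familiar three-point second difference fits this description:  ${\displaystyle \alpha \;=\;-1,\,0,\,1}$   are the roots of  ${\displaystyle \alpha ^{3}-\alpha }$ ,  which has no  ${\displaystyle \alpha ^{2}}$   term, so the over-determined system is consistent.

```python
import numpy as np

# alpha = -1, 0, 1 are the roots of p(alpha) = alpha^3 - alpha, which has
# no alpha^2 term, so the (m+1) x m system for f'' is consistent.
alpha = np.array([-1.0, 0.0, 1.0])
m = alpha.size

A = np.vander(alpha, m + 1, increasing=True).T   # rows are powers 0 .. m
rhs = np.zeros(m + 1)
rhs[2] = 2.0                                     # right hand side (0, 0, 2, 0)

a, *_ = np.linalg.lstsq(A, rhs, rcond=None)
residual = np.linalg.norm(A @ a - rhs)           # ~ 0: the system has a solution

# The weights are the familiar stencil f''(x) ~ (f(x-h) - 2 f(x) + f(x+h)) / h^2.
```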

If  ${\displaystyle \alpha _{1}\,,\;\alpha _{2}\,,\;\alpha _{3}\,,\;\ldots ,\;\alpha _{m}}$   are the roots of some polynomial

${\displaystyle p(\alpha )\;=\;\alpha ^{m}+b_{m-2}\,\alpha ^{m-1}+\ldots +b_{3}\,\alpha ^{4}+b_{2}\,\alpha ^{2}+b_{1}\,\alpha +b_{0}}$

then the system

${\displaystyle {\begin{bmatrix}1&1&1&\cdots &1\\\alpha _{1}&\alpha _{2}&\alpha _{3}&\cdots &\alpha _{m}\\\alpha _{1}^{2}&\alpha _{2}^{2}&\alpha _{3}^{2}&\cdots &\alpha _{m}^{2}\\\alpha _{1}^{3}&\alpha _{2}^{3}&\alpha _{3}^{3}&\cdots &\alpha _{m}^{3}\\\alpha _{1}^{4}&\alpha _{2}^{4}&\alpha _{3}^{4}&\cdots &\alpha _{m}^{4}\\\vdots &\vdots &\vdots &\cdots &\vdots \\\alpha _{1}^{m}&\alpha _{2}^{m}&\alpha _{3}^{m}&\cdots &\alpha _{m}^{m}\\\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\\a_{3}\\a_{4}\\a_{5}\\\vdots \\a_{m}\\\end{bmatrix}}\quad =\quad {\begin{bmatrix}0\\0\\0\\6\\0\\\vdots \\0\\\end{bmatrix}}}$

can be solved, and

${\displaystyle {\frac {d_{h}^{3}}{d_{h}x^{3}}}\,f(x)\;=\;h^{-3}\,(a_{1}\,f(x+\alpha _{1}\,h)+a_{2}\,f(x+\alpha _{2}\,h)+\;\ldots \;+a_{m}\,f(x+\alpha _{m}\,h))}$

approximates  ${\displaystyle f^{\prime \prime \prime }(x)}$   to an order of accuracy of  ${\displaystyle m-2}$ .
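A concrete instance (a sketch assuming NumPy): the centered points  ${\displaystyle \alpha \;=\;-2,\,-1,\,1,\,2}$   are the roots of  ${\displaystyle (\alpha ^{2}-1)(\alpha ^{2}-4)\;=\;\alpha ^{4}-5\,\alpha ^{2}+4}$ ,  which has no  ${\displaystyle \alpha ^{3}}$   term, so the system for  ${\displaystyle f^{\prime \prime \prime }}$   is consistent with  ${\displaystyle m\;=\;4}$   points.

```python
import numpy as np

# alpha = -2, -1, 1, 2 are the roots of p(alpha) = (alpha^2 - 1)(alpha^2 - 4),
# i.e. alpha^4 - 5 alpha^2 + 4, which has no alpha^3 term.
alpha = np.array([-2.0, -1.0, 1.0, 2.0])
m = alpha.size

A = np.vander(alpha, m + 1, increasing=True).T   # rows are powers 0 .. m
rhs = np.zeros(m + 1)
rhs[3] = 6.0                                     # right hand side (0, 0, 0, 6, 0)

a, *_ = np.linalg.lstsq(A, rhs, rcond=None)
residual = np.linalg.norm(A @ a - rhs)           # ~ 0: the system has a solution

# The weights a = (-1/2, 1, -1, 1/2) give the standard centered f''' stencil.
```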

Now, the analysis is not quite done yet. Returning to the approximation of  ${\displaystyle f^{\prime \prime }(x)}$ :  if for the polynomial

${\displaystyle p(\alpha )\;=\;\alpha ^{m}+b_{m-2}\,\alpha ^{m-1}+\ldots +b_{2}\,\alpha ^{3}+b_{1}\,\alpha +b_{0}}$

it were that  ${\displaystyle b_{1}\;=\;0}$ ,  then the system could be solved for one more order of accuracy. So the question arises as to whether polynomials of the form

${\displaystyle p(\alpha )\;=\;\alpha ^{m}+b_{m-2}\,\alpha ^{m-1}+\ldots +b_{2}\,\alpha ^{3}+b_{0}}$

exist that have  ${\displaystyle m}$   distinct real roots. When  ${\displaystyle m\;=\;3}$   there are none, since the polynomial reduces to  ${\displaystyle \alpha ^{3}+b_{0}}$ ,  which has only one real root. So consider  ${\displaystyle m\;=\;4}$ .

${\displaystyle p(\alpha )\;=\;\alpha ^{4}+b_{2}\,\alpha ^{3}+b_{0}}$

If  ${\displaystyle p(\alpha )}$   had 4 distinct real roots, then by Rolle's theorem

${\displaystyle p^{\prime }(\alpha )\;=\;4\,\alpha ^{3}+3\,b_{2}\,\alpha ^{2}}$

would have 3 distinct real roots, which it does not:  ${\displaystyle p^{\prime }(\alpha )\;=\;\alpha ^{2}\,(4\,\alpha +3\,b_{2})}$   has  ${\displaystyle 0}$   as a double root, hence at most two distinct real roots. So the order of approximation cannot be improved. This is generally the case.

Returning to the approximation of  ${\displaystyle f^{\prime \prime \prime }(x)}$ :  if for the polynomial

${\displaystyle p(\alpha )\;=\;\alpha ^{m}+b_{m-2}\,\alpha ^{m-1}+\ldots +b_{3}\,\alpha ^{4}+b_{2}\,\alpha ^{2}+b_{1}\,\alpha +b_{0}}$

it were that  ${\displaystyle b_{2}\;=\;0}$ ,  then the system could be solved for one more order of accuracy. So the question arises as to whether polynomials of the form

${\displaystyle p(\alpha )\;=\;\alpha ^{m}+b_{m-2}\,\alpha ^{m-1}+\ldots +b_{3}\,\alpha ^{4}+b_{1}\,\alpha +b_{0}}$

exist that have  ${\displaystyle m}$   distinct real roots.

If  ${\displaystyle p(\alpha )}$   had  ${\displaystyle m}$   distinct real roots, then by Rolle's theorem (applied twice)

${\displaystyle p^{\prime \prime }(\alpha )\;=\;m(m-1)\,\alpha ^{m-2}+(m-1)(m-2)\,b_{m-2}\,\alpha ^{m-3}+\ldots +12\,b_{3}\,\alpha ^{2}}$

would have  ${\displaystyle m-2}$   distinct real roots, which it does not:  ${\displaystyle \alpha ^{2}}$   divides  ${\displaystyle p^{\prime \prime }(\alpha )}$ ,  so  ${\displaystyle 0}$   is a double root and  ${\displaystyle p^{\prime \prime }}$   has at most  ${\displaystyle m-3}$   distinct real roots. So the order of approximation cannot be improved.

For functions of a complex variable, choosing the  ${\displaystyle \alpha _{i}}$   to be roots of unity, for example, may yield higher orders of approximation, since complex roots are allowed.

### Centered difference operators

For  ${\displaystyle m}$   points  ${\displaystyle x+\alpha _{1}\,h\,,\;x+\alpha _{2}\,h\,,\;\ldots ,\;x+\alpha _{m}\,h}$   a finite difference operator

${\displaystyle {\frac {d_{h}^{\,n}}{d_{h}x^{n}}}\,f(x)\;=\;h^{-n}\,(a_{1}\,f(x+\alpha _{1}\,h)+a_{2}\,f(x+\alpha _{2}\,h)+\;\ldots \;+a_{m}\,f(x+\alpha _{m}\,h))}$

is said to be centered when the points are symmetrically placed about  ${\displaystyle x}$ .

${\displaystyle \alpha _{i}\;=\;-\alpha _{m-i+1}\quad {\text{for}}\;\;i\;=\;1\,,\;2\,,\;\ldots ,\;\left[m\,/\,2\right]}$

When  ${\displaystyle m}$   is odd  ${\displaystyle \alpha _{\left[m\,/\,2\right]+1}\;=\;0}$ .

To find the centered difference operators, consider

{\displaystyle {\begin{aligned}f(x+h)\;&=\;f(x)+f^{\prime }(x)\,h+{\tfrac {1}{2}}\,f^{\prime \prime }(x)\,h^{2}+{\tfrac {1}{6}}\,f^{\prime \prime \prime }(x)\,h^{3}\\&+\;\ldots \;+{\tfrac {1}{m!}}\,f^{(m)}(x)\,h^{m}+{\tfrac {1}{(m+1)!}}\,f^{(m+1)}(z_{h})\,h^{m+1}\\\end{aligned}}} .

Then

{\displaystyle {\begin{aligned}&{\frac {d_{h}^{n}}{d_{h}x^{n}}}\,f(x)\;=\;h^{-n}\,(c_{1}\,f(x)+c_{2}\,f^{\prime }(x)\,h+{\tfrac {1}{2}}\,c_{3}\,f^{\prime \prime }(x)\,h^{2}+{\tfrac {1}{6}}\,c_{4}\,f^{\prime \prime \prime }(x)\,h^{3}\\&+\;\ldots \;+{\tfrac {1}{(m-1)!}}\,c_{m}\,f^{(m-1)}(x)\,h^{m-1}+{\tfrac {1}{m!}}\,c_{m+1}f^{(m)}(x)\,h^{m}+{\tfrac {1}{(m+1)!}}\,R_{m+1}\,h^{m+1})\\\end{aligned}}} .

where the  ${\displaystyle c_{1}\,,\;c_{2}\,,\;\ldots ,\;c_{m}\,,\;c_{m+1}}$   are given in terms of the  ${\displaystyle a_{i}\,{\text{'s}}}$   by the over-determined system

${\displaystyle {\begin{bmatrix}1&1&\cdots &1&1\\\alpha _{1}&\alpha _{2}&\cdots &\alpha _{m-1}&\alpha _{m}\\\alpha _{1}^{2}&\alpha _{2}^{2}&\cdots &\alpha _{m-1}^{2}&\alpha _{m}^{2}\\\vdots &\vdots &\cdots &\vdots &\vdots \\\alpha _{1}^{m-2}&\alpha _{2}^{m-2}&\cdots &\alpha _{m-1}^{m-2}&\alpha _{m}^{m-2}\\\alpha _{1}^{m-1}&\alpha _{2}^{m-1}&\cdots &\alpha _{m-1}^{m-1}&\alpha _{m}^{m-1}\\\alpha _{1}^{m}&\alpha _{2}^{m}&\cdots &\alpha _{m-1}^{m}&\alpha _{m}^{m}\\\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\\a_{3}\\\vdots \\a_{m-1}\\a_{m}\\\end{bmatrix}}\quad =\quad {\begin{bmatrix}c_{1}\\c_{2}\\c_{3}\\\vdots \\c_{m-1}\\c_{m}\\c_{m+1}\\\end{bmatrix}}}$

and

{\displaystyle {\begin{aligned}R_{m+1}\;=&\;a_{1}\,\alpha _{1}^{m+1}\,f^{(m+1)}(z_{\,\alpha _{1}\,h})+a_{2}\,\alpha _{2}^{m+1}\,f^{(m+1)}(z_{\,\alpha _{2}\,h})+a_{3}\,\alpha _{3}^{m+1}\,f^{(m+1)}(z_{\,\alpha _{3}\,h})\\&+\;\ldots \;+a_{m-1}\,\alpha _{m-1}^{m+1}\,f^{(m+1)}(z_{\,\alpha _{m-1}\,h})+a_{m}\,\alpha _{m}^{m+1}\,f^{(m+1)}(z_{\,\alpha _{m}\,h})\\\end{aligned}}} .

When the  ${\displaystyle a_{i}\,{\text{'s}}}$   are chosen so that

${\displaystyle c_{n+1}\;=\;n!\,,\;\;c_{i}\;=\;0\,,\;\;{\text{for}}\;\;i\;\neq \;n+1}$

then

${\displaystyle {\frac {d_{h}^{n}}{d_{h}x^{n}}}\,f(x)\;=\;f^{(n)}(x)+{\tfrac {1}{(m+1)!}}\,R_{m+1}\,h^{m-n+1}}$ .

So the operator is of order  ${\displaystyle m-n+1}$ .

Since, for the centered case, the system is over-determined, some restriction is needed for the system to have a solution. A solution occurs when the  ${\displaystyle \alpha _{1},\;\alpha _{2},\ldots ,\alpha _{m}}$   are the roots of a polynomial

${\displaystyle p(\alpha )\;=\;\alpha ^{m}+b_{m-1}\,\alpha ^{m-1}+\ldots +b_{2}\,\alpha ^{2}+b_{1}\,\alpha +b_{0}}$

with

${\displaystyle b_{n}\;=\;0}$ .

Observe that when  ${\displaystyle m}$   is even

${\displaystyle p(\alpha )\;=\;\prod _{i=1}^{\tfrac {m}{2}}(\alpha ^{2}-\alpha _{i}^{2})}$

and when  ${\displaystyle m}$   is odd

${\displaystyle p(\alpha )\;=\;\alpha \,\prod _{i=1}^{\left[{\tfrac {m}{2}}\right]}(\alpha ^{2}-\alpha _{i}^{2})}$ .

So when  ${\displaystyle m}$   is even,  ${\displaystyle p(\alpha )}$   has  ${\displaystyle b_{n}\;=\;0}$   for all odd  ${\displaystyle n}$ ,  and when  ${\displaystyle m}$   is odd,  ${\displaystyle p(\alpha )}$   has  ${\displaystyle b_{n}\;=\;0}$   for all even  ${\displaystyle n}$ .

So a centered difference operator achieves the one extra order of accuracy when the number of points  ${\displaystyle m}$   is even and the order of the derivative  ${\displaystyle n}$   is odd, or when the number of points  ${\displaystyle m}$   is odd and the order of the derivative  ${\displaystyle n}$   is even.
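The parity rule can be checked numerically. A sketch assuming NumPy, with  ${\displaystyle m\;=\;4}$   (even) centered points and the first derivative ( ${\displaystyle n\;=\;1}$ ,  odd), so the expected order is  ${\displaystyle m-n+1\;=\;4}$ ;  the test function  ${\displaystyle \sin x}$   is an illustrative choice.

```python
import numpy as np

# m = 4 (even) centered points, first derivative (n = 1, odd): the extra
# order applies, so the operator should be of order m - n + 1 = 4.
alpha = np.array([-2.0, -1.0, 1.0, 2.0])
m = alpha.size

V = np.vander(alpha, m, increasing=True).T
rhs = np.zeros(m)
rhs[1] = 1.0
a = np.linalg.solve(V, rhs)      # weights (1/12, -2/3, 2/3, -1/12)

bonus_moment = a @ alpha**m      # vanishes by the symmetry of the points

def d_h(f, x, h):
    return (a @ f(x + alpha * h)) / h

x0 = 0.3
e1 = abs(d_h(np.sin, x0, 0.1) - np.cos(x0))
e2 = abs(d_h(np.sin, x0, 0.05) - np.cos(x0))
observed_order = float(np.log2(e1 / e2))   # close to 4
```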

## Shift Operators

### Trigonometric polynomials

Let

${\displaystyle p(x)\;=\;a_{0}+\textstyle \sum _{\,k\,=\,1}^{m}(a_{k}\,\cos(k\,\pi \,x)+b_{k}\,\sin(k\,\pi \,x))}$
 (3.2.0)

be a trigonometric polynomial defined on  ${\displaystyle -1\;\leq \;x\;\leq \;1}$ .

Define the inner product on such trigonometric polynomials by

${\displaystyle <p_{1}\,,\;p_{2}>\;=\;\int _{-1}^{1}p_{1}(x)\,{\overline {p_{2}(x)}}\,dx}$ .

 (3.2.1)

In light of the orthogonality relations

${\displaystyle <\sin(k\,\pi \,x)\,,\;\sin(r\,\pi \,x)>\;=\;0\quad {\text{and}}\quad <\cos(k\,\pi \,x)\,,\;\cos(r\,\pi \,x)>\;=\;0}$       when  ${\displaystyle k\;\neq \;r}$ ,

and  ${\displaystyle <\sin(k\,\pi \,x)\,,\;\cos(r\,\pi \,x)>\;=\;0}$       for all  ${\displaystyle k}$   and  ${\displaystyle r}$ ,

inner products can be calculated easily.

Since  ${\displaystyle <1\,,\;1>\;=\;2}$   while  ${\displaystyle <\cos(k\,\pi \,x)\,,\;\cos(k\,\pi \,x)>\;=\;<\sin(k\,\pi \,x)\,,\;\sin(k\,\pi \,x)>\;=\;1}$ ,

${\displaystyle \lVert \,p(x)\,\rVert ^{2}\;=\;<p\,,\;p>\;=\;2\left\vert a_{0}\right\vert ^{2}+\textstyle \sum _{\,k\,=\,1}^{m}(\left\vert a_{k}\right\vert ^{2}+\left\vert b_{k}\right\vert ^{2})}$ .

 (3.2.2)

and for

${\displaystyle p_{1}(x)\;=\;a_{0\,,\,1}+\textstyle \sum _{\,k\,=\,1}^{m}(a_{k\,,\,1}\,\cos(k\,\pi \,x)+b_{k\,,\,1}\,\sin(k\,\pi \,x))}$

${\displaystyle p_{2}(x)\;=\;a_{0\,,\,2}+\textstyle \sum _{\,k\,=\,1}^{m}(a_{k\,,\,2}\,\cos(k\,\pi \,x)+b_{k\,,\,2}\,\sin(k\,\pi \,x))}$

the inner product is given by

${\displaystyle <p_{1}\,,\;p_{2}>\;=\;2\,a_{0\,,\,1}\,{\overline {a_{0\,,\,2}}}+\textstyle \sum _{\,k\,=\,1}^{m}(a_{k\,,\,1}\,{\overline {a_{k\,,\,2}}}+b_{k\,,\,1}\,{\overline {b_{k\,,\,2}}})}$ .

 (3.2.3)
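A quick numerical check of the norm formula (a sketch assuming NumPy; the coefficient values are illustrative choices, not from the text). Note that the constant term contributes  ${\displaystyle 2\left\vert a_{0}\right\vert ^{2}}$ ,  since the interval has length 2.

```python
import numpy as np

# Illustrative coefficients (not from the text); note the constant term
# contributes 2 |a_0|^2 because <1, 1> = 2 on the interval [-1, 1].
a0 = 0.5
ak = np.array([1.0, -0.7])
bk = np.array([0.3, 0.2])
m = ak.size
k = np.arange(1, m + 1)

def p(x):
    ang = np.pi * np.multiply.outer(x, k)
    return a0 + np.cos(ang) @ ak + np.sin(ang) @ bk

# The trapezoid rule on a uniform periodic grid integrates trigonometric
# polynomials of this bandwidth exactly.
N = 64
x = -1.0 + 2.0 * np.arange(N) / N
norm_sq_integral = (2.0 / N) * np.sum(p(x) ** 2)
norm_sq_coeffs = 2 * a0**2 + np.sum(ak**2) + np.sum(bk**2)
```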

### Definition of a shift operator

Define the shift operator  ${\displaystyle (sft)_{h}}$   on  ${\displaystyle p(x)}$   by

${\displaystyle (sft)_{h}\,p(x)\;=\;p(x-h)\;=\;a_{0}+\textstyle \sum _{\,k\,=\,1}^{m}(a_{k}\,\cos(k\,\pi \,(x-h))+b_{k}\,\sin(k\,\pi \,(x-h)))}$ .

 (3.3.0)

Since

${\displaystyle \sin(k\,\pi (x-h))\;=\;\cos(k\,\pi \,h)\sin(k\,\pi \,x)-\sin(k\,\pi \,h)\cos(k\,\pi \,x)}$

and

${\displaystyle \cos(k\,\pi (x-h))\;=\;\cos(k\,\pi \,h)\cos(k\,\pi \,x)+\sin(k\,\pi \,h)\sin(k\,\pi \,x)}$ ,

it follows that

{\displaystyle {\begin{aligned}(sft)_{h}p(x)\;=\;a_{0}+\textstyle \sum _{\,k\,=\,1}^{m}&{\big (}(a_{k}\,\cos(k\,\pi \,h)-b_{k}\,\sin(k\,\pi \,h))\,\cos(k\,\pi \,x)\\&+(a_{k}\,\sin(k\,\pi \,h)+b_{k}\,\cos(k\,\pi \,h))\,\sin(k\,\pi \,x){\big )}\\\end{aligned}}} .

 (3.3.1)
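The coefficient update in (3.3.1) can be verified directly (a sketch assuming NumPy; the coefficients and the shift  ${\displaystyle h\;=\;0.37}$   are illustrative choices): shifting by updating coefficients agrees with evaluating  ${\displaystyle p(x-h)}$   pointwise.

```python
import numpy as np

# Illustrative coefficients and shift (not from the text).
a0 = 0.5
ak = np.array([1.0, -0.7])
bk = np.array([0.3, 0.2])
m = ak.size
k = np.arange(1, m + 1)
h = 0.37

def p(x):
    ang = np.pi * np.multiply.outer(x, k)
    return a0 + np.cos(ang) @ ak + np.sin(ang) @ bk

# Coefficient update read off from (3.3.1).
c = np.cos(np.pi * k * h)
s = np.sin(np.pi * k * h)
ak_shift = ak * c - bk * s
bk_shift = ak * s + bk * c

def p_shift(x):
    ang = np.pi * np.multiply.outer(x, k)
    return a0 + np.cos(ang) @ ak_shift + np.sin(ang) @ bk_shift

xs = np.linspace(-1.0, 1.0, 201)
max_diff = float(np.max(np.abs(p_shift(xs) - p(xs - h))))   # ~ machine epsilon
```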

### Approximation by trigonometric polynomials

Let  ${\displaystyle f(x)}$   be a function defined on the interval  ${\displaystyle -1\;\leq \;x\;\leq \;1}$   and periodic with period 2; that is,  ${\displaystyle f(x+2)\;=\;f(x)}$ .

The  ${\displaystyle m\,{\text{th}}}$   degree trigonometric polynomial approximation to  ${\displaystyle f(x)}$   is given by

${\displaystyle p(x)\;=\;a_{0}+\textstyle \sum _{\,k\,=\,1}^{m}(a_{k}\,\cos(k\,\pi \,x)+b_{k}\,\sin(k\,\pi \,x))}$

where

${\displaystyle a_{0}\,=\;{\tfrac {1}{2}}\int _{-1}^{1}\,f(x)\,dx\,,\quad a_{k}\,=\;\int _{-1}^{1}\,f(x)\,\cos(k\,\pi \,x)\,dx\,,\quad {\text{and}}\quad b_{k}\,=\;\int _{-1}^{1}\,f(x)\,\sin(k\,\pi \,x)\,dx\quad {\text{for}}\;\;k\;\geq \;1}$ .

 (3.4.0)

${\displaystyle p(x)}$   approximates  ${\displaystyle f(x)}$   in the sense that

${\displaystyle \int _{-1}^{1}\left\vert f(x)-p(x)\right\vert ^{2}\,dx}$

is minimized over all trigonometric polynomials, of degree  ${\displaystyle m}$   or less, by  ${\displaystyle p(x)}$ .

In fact

${\displaystyle \int _{-1}^{1}\left\vert f(x)-p(x)\right\vert ^{2}\,dx\;=\;\int _{-1}^{1}(f(x)-p(x))({\overline {f(x)}}-{\overline {p(x)}})\,dx}$

${\displaystyle =\;\int _{-1}^{1}(\left\vert f(x)\right\vert ^{2}-2\,\Re (f(x){\overline {p(x)}})+\left\vert p(x)\right\vert ^{2})\,dx}$ .

The middle term satisfies

${\displaystyle \int _{-1}^{1}\Re (f(x)\,{\overline {p(x)}})\,dx\;=\;\Re {\Big (}{\overline {a_{0}}}\int _{-1}^{1}f(x)\,dx+\textstyle \sum _{\,k\,=\,1}^{m}{\big (}{\overline {a_{k}}}\int _{-1}^{1}f(x)\,\cos(k\,\pi \,x)\,dx+{\overline {b_{k}}}\int _{-1}^{1}f(x)\,\sin(k\,\pi \,x)\,dx{\big )}{\Big )}}$

${\displaystyle =\;2\left\vert a_{0}\right\vert ^{2}+\textstyle \sum _{\,k\,=\,1}^{m}(\left\vert a_{k}\right\vert ^{2}+\left\vert b_{k}\right\vert ^{2})\;=\;\int _{-1}^{1}\left\vert p(x)\right\vert ^{2}\,dx}$ ,

so that

${\displaystyle \int _{-1}^{1}\left\vert f(x)-p(x)\right\vert ^{2}\,dx\;=\;\int _{-1}^{1}\left\vert f(x)\right\vert ^{2}\,dx-\int _{-1}^{1}\left\vert p(x)\right\vert ^{2}\,dx}$

${\displaystyle =\;\lVert \,f(x)\,\rVert ^{2}-\lVert \,p(x)\,\rVert ^{2}}$ .

 (3.4.1)
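Identity (3.4.1) can be checked numerically (a sketch assuming NumPy; the test function  ${\displaystyle f(x)\;=\;e^{\sin \pi x}}$ ,  the degree  ${\displaystyle m\;=\;8}$ ,  and the periodic trapezoid quadrature are illustrative choices). The coefficients are computed with  ${\displaystyle a_{0}\;=\;{\tfrac {1}{2}}\int f}$ ,  so that  ${\displaystyle p}$   is the orthogonal projection of  ${\displaystyle f}$ .

```python
import numpy as np

# f(x) = exp(sin(pi x)) is a smooth 2-periodic test function (an illustrative
# choice).  Integrals are approximated by the periodic trapezoid rule.
N, m = 256, 8
x = -1.0 + 2.0 * np.arange(N) / N
w = 2.0 / N                              # quadrature weight on [-1, 1]
f = np.exp(np.sin(np.pi * x))

k = np.arange(1, m + 1)
C = np.cos(np.pi * np.multiply.outer(x, k))   # N x m samples of cos(k pi x)
S = np.sin(np.pi * np.multiply.outer(x, k))

a0 = 0.5 * w * f.sum()                   # the 1/2 reflects <1, 1> = 2
ak = w * (f @ C)
bk = w * (f @ S)
p = a0 + C @ ak + S @ bk                 # m-th degree approximation of f

lhs = w * np.sum((f - p) ** 2)           # || f - p ||^2
rhs = w * np.sum(f ** 2) - (2 * a0**2 + np.sum(ak**2) + np.sum(bk**2))
```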

Linearity property

 (3.4.2)

If  ${\displaystyle p(x)}$   and  ${\displaystyle q(x)}$   are the  ${\displaystyle m\,{\text{th}}}$