Using High Order Finite Differences/Definitions and Basics


Definitions and Basics

Vector Norms

Definition of a Vector Norm

The most ordinary kind of vectors are those consisting of ordered n-tuples of real or complex numbers. They may be written in row

$(x_1, x_2, \ldots, x_n)$

or column

$\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$

forms. Commas or other separators of components or coordinates may or may not be used. When a vector has many elements, notations like

$x = (x_i)$ or $x = (x_i)_{i=1}^{n}$ are often used.

A most popular notation to indicate a vector is $\mathbf{x}$.

Vectors are usually added component-wise: for

$x = (x_1, \ldots, x_n)$ and $y = (y_1, \ldots, y_n)$,

$x + y = (x_1 + y_1, \ldots, x_n + y_n)$.

Scalar multiplication is defined by $\alpha x = (\alpha x_1, \ldots, \alpha x_n)$.

A vector norm is a generalization of the ordinary absolute value $|x|$ of a real or complex number.

For vectors $x$ and $y$, and $\alpha$ a scalar, a vector norm is a real value $\|x\|$ associated with a vector for which the following properties hold.

$\|x\| \geq 0$, and $\|x\| = 0$ if and only if $x = 0$; $\quad \|\alpha x\| = |\alpha| \, \|x\|$; $\quad \|x + y\| \leq \|x\| + \|y\|$.

Common Vector Norms

The most commonly used norms are:

$\|x\|_1 = \sum_{i=1}^{n} |x_i|, \qquad \|x\|_2 = \left( \sum_{i=1}^{n} |x_i|^2 \right)^{1/2}, \qquad \|x\|_\infty = \max_{1 \leq i \leq n} |x_i|$.

Any two norms on $n$ dimensional vectors of complex numbers are topologically equivalent in the sense that, if $\|\cdot\|_a$ and $\|\cdot\|_b$ are two different norms, then there exist positive constants $c_1$ and $c_2$ such that $c_1 \|x\|_a \leq \|x\|_b \leq c_2 \|x\|_a$ for all $x$.
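As a quick numerical illustration, the three common norms and the equivalence chain $\|x\|_\infty \leq \|x\|_2 \leq \|x\|_1 \leq n \|x\|_\infty$ can be checked directly; the sketch below is plain Python, with helper names chosen only for this example.

```python
import math

def norm1(x):
    # ||x||_1: sum of absolute values
    return sum(abs(v) for v in x)

def norm2(x):
    # ||x||_2: Euclidean norm
    return math.sqrt(sum(abs(v) ** 2 for v in x))

def norminf(x):
    # ||x||_inf: largest absolute component
    return max(abs(v) for v in x)

x = [3, -4, 0]
# one instance of the topological equivalence of the three norms
assert norminf(x) <= norm2(x) <= norm1(x) <= len(x) * norminf(x)
```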

Inner Product of Vectors

The inner product (or dot product) of two vectors

$x = (x_1, \ldots, x_n)$ and $y = (y_1, \ldots, y_n)$

is defined by

$x \cdot y = \sum_{i=1}^{n} x_i \, y_i$,

or when $x$ and $y$ are complex valued, by

$x \cdot y = \sum_{i=1}^{n} x_i \, \overline{y_i}$.

(1.3.3.0)


It is often indicated by any one of several notations:

$x \cdot y$, $\langle x, y \rangle$, or $(x, y)$.

Besides the dot product, other inner products are defined to be a rule that assigns to each pair of vectors $x, y$ a complex number $\langle x, y \rangle$ with the following properties.

$\langle x, y \rangle = \overline{\langle y, x \rangle}$

$\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$

$\langle \alpha x, y \rangle = \alpha \, \langle x, y \rangle$

for $x \neq 0$, $\langle x, x \rangle$ is real valued and positive

$\langle x, \alpha y \rangle = \overline{\alpha} \, \langle x, y \rangle$

and

$\langle x, y + z \rangle = \langle x, y \rangle + \langle x, z \rangle$

An inner product defines a norm by


$\|x\| = \langle x, x \rangle^{1/2}$.

(1.3.3.1)
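A minimal sketch of the complex inner product (1.3.3.0) and the norm (1.3.3.1) it defines; the function names are illustrative only.

```python
def inner(x, y):
    # complex inner product: sum of x_i * conjugate(y_i), as in (1.3.3.0)
    return sum(a * complex(b).conjugate() for a, b in zip(x, y))

def norm(x):
    # the norm defined by the inner product, as in (1.3.3.1)
    return inner(x, x).real ** 0.5

x = [1 + 1j, 2]
# <x, x> is real and positive: (1+1j)(1-1j) + 2*2 = 2 + 4 = 6
assert inner(x, x) == 6 + 0j
```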


Inequalities Involving Norms

The Cauchy–Schwarz and Hölder inequalities are commonly employed.

$|\langle x, y \rangle| \leq \|x\|_2 \, \|y\|_2$

$|\langle x, y \rangle| \leq \|x\|_p \, \|y\|_q$ for $\dfrac{1}{p} + \dfrac{1}{q} = 1$
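Both inequalities are easy to spot-check numerically; the sketch below uses an illustrative `pnorm` helper for the $p$-norms.

```python
def pnorm(x, p):
    # ||x||_p for finite p
    return sum(abs(v) ** p for v in x) ** (1.0 / p)

x, y = [1.0, -2.0, 3.0], [4.0, 0.0, -1.0]
dot = sum(a * b for a, b in zip(x, y))

# Cauchy-Schwarz: |<x,y>| <= ||x||_2 ||y||_2
assert abs(dot) <= pnorm(x, 2) * pnorm(y, 2)
# Hoelder with p = 3, q = 3/2, so that 1/p + 1/q = 1
assert abs(dot) <= pnorm(x, 3) * pnorm(y, 1.5)
```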

Matrix Norms

Definition of a Matrix Norm

The most ordinary kind of matrices are those consisting of rectangular arrays of real or complex numbers. They may be written in element form $A = (a_{ij})$ and be considered as collections of column or row vectors.

Matrices are usually added element-wise: for $A = (a_{ij})$ and $B = (b_{ij})$, $A + B = (a_{ij} + b_{ij})$.

Scalar multiplication is defined by $\alpha A = (\alpha a_{ij})$.

The notation $A = 0$ means that all elements of $A$ are identically zero.

A matrix norm is a generalization of the ordinary absolute value $|x|$ of a real or complex number, and can be considered a type of vector norm.

For matrices $A$ and $B$, and $\alpha$ a scalar, a matrix norm is a real value $\|A\|$ associated with a matrix for which the following properties hold.

$\|A\| \geq 0$, and $\|A\| = 0$ if and only if $A = 0$; $\quad \|\alpha A\| = |\alpha| \, \|A\|$; $\quad \|A + B\| \leq \|A\| + \|B\|$; $\quad \|AB\| \leq \|A\| \, \|B\|$ whenever the product is defined.

Common Matrix Norms

The most commonly used norms are:

$\|A\|_1 = \max_{j} \sum_{i} |a_{ij}|, \qquad \|A\|_\infty = \max_{i} \sum_{j} |a_{ij}|, \qquad \|A\|_F = \left( \sum_{i,j} |a_{ij}|^2 \right)^{1/2}$.

Like vector norms, any two matrix norms on $m \times n$ matrices of complex numbers are topologically equivalent in the sense that, if $\|\cdot\|_a$ and $\|\cdot\|_b$ are two different norms, then there exist positive constants $c_1$ and $c_2$ such that $c_1 \|A\|_a \leq \|A\|_b \leq c_2 \|A\|_a$.

The norm $\|A\|_2$ on matrices $A$ is an example of an induced norm. An induced norm is defined for a vector norm $\|\cdot\|$ by

$\|A\| = \max_{x \neq 0} \dfrac{\|Ax\|}{\|x\|}$.

This sometimes causes the same subscript notation to be used for two different norms.
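The defining property of an induced norm, $\|Ax\| \leq \|A\| \, \|x\|$, can be illustrated with the infinity norm, whose induced matrix norm is the maximum absolute row sum (a standard fact, used here without proof). The helper names below are for illustration only.

```python
def matvec(A, x):
    # matrix-vector product for a matrix stored as a list of rows
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def vec_inf(x):
    # vector infinity norm
    return max(abs(v) for v in x)

def mat_inf(A):
    # the matrix norm induced by the infinity vector norm:
    # the maximum absolute row sum
    return max(sum(abs(a) for a in row) for row in A)

A = [[1, -2], [3, 4]]
x = [1.0, -1.0]
# the induced-norm definition implies ||Ax|| <= ||A|| ||x||
assert vec_inf(matvec(A, x)) <= mat_inf(A) * vec_inf(x)
```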

Positive definite matrices

An $n \times n$ matrix $A$ is said to be positive definite if for any vector $x$

$\langle Ax, x \rangle \geq c \, \|x\|^2$

for some positive constant $c$, not depending on $x$.

The property of being positive definite ensures the numerical stability of a variety of common numerical techniques used to solve the equation $Ax = b$.

Taking norms in the Cauchy–Schwarz inequality, then

$c \, \|x\|^2 \leq \langle Ax, x \rangle \leq \|Ax\| \, \|x\|$.

so that

$\|Ax\| \geq c \, \|x\|$ and $Ax = 0$ only when $x = 0$.

This gives

that $A$ is non-singular

and

$\|A^{-1}\| \leq \dfrac{1}{c}$.
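The defining inequality $\langle Ax, x \rangle \geq c \, \|x\|^2$ can be spot-checked on sample vectors; the sketch below uses a small symmetric matrix for which $c = 1$ works (its eigenvalues are 1 and 3 — an assumption of this example, not a computation the code performs).

```python
def quad_form(A, x):
    # <Ax, x> = sum_i (Ax)_i x_i for a real matrix and vector
    Ax = [sum(a * v for a, v in zip(row, x)) for row in A]
    return sum(p * q for p, q in zip(Ax, x))

# a simple positive definite matrix: <Ax, x> = 2x1^2 - 2x1x2 + 2x2^2
A = [[2, -1], [-1, 2]]
for x in ([1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0], [0.3, -0.7]):
    n2 = sum(v * v for v in x)          # ||x||^2
    # c = 1 suffices for this A (smallest eigenvalue is 1)
    assert quad_form(A, x) >= 1.0 * n2
```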

Consistency of Norms

A matrix norm $\|\cdot\|_M$ and a vector norm $\|\cdot\|_V$ are said to be consistent when

$\|Ax\|_V \leq \|A\|_M \, \|x\|_V$.

When $\|\cdot\|_M$ is the matrix norm induced by the vector norm $\|\cdot\|_V$, then the two norms will be consistent.

When the two norms are not consistent, there will still be a positive constant $c$ such that

$\|Ax\|_V \leq c \, \|A\|_M \, \|x\|_V$.

Finite Difference Operators

approximation of a first derivative

The definition of the derivative of a function gives the first and simplest finite difference.

$f'(x) = \lim_{h \to 0} \dfrac{f(x+h) - f(x)}{h}$.

So the finite difference

$D_h f(x) = \dfrac{f(x+h) - f(x)}{h}$

can be defined. It is an approximation to $f'(x)$ when $h$ is near $0$.

The finite difference approximation $D_h f(x)$ for $f'(x)$ is said to be of order $p$, if there exists a constant $C$ such that

$|D_h f(x) - f'(x)| \leq C \, h^p$,

when $h$ is near $0$.

For practical reasons the order of a finite difference will be described under the assumption that $f$ is sufficiently smooth so that its Taylor expansion up to some order exists. For example, if

$f(x+h) = f(x) + f'(x) \, h + \dfrac{f''(x)}{2} \, h^2 + O(h^3)$

then

$D_h f(x) = f'(x) + \dfrac{f''(x)}{2} \, h + O(h^2)$

so that

$|D_h f(x) - f'(x)| \leq C \, h$,

meaning that the order of the approximation of $f'(x)$ by $D_h f(x)$ is $1$.
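The first-order behavior is easy to observe numerically: shrinking $h$ by a factor of 10 should shrink the error by about a factor of 10. A minimal sketch:

```python
import math

def forward_diff(f, x, h):
    # D_h f(x) = (f(x+h) - f(x)) / h
    return (f(x + h) - f(x)) / h

# for f = exp at x = 1 the error behaves like (f''(x)/2) h, i.e. order 1
x = 1.0
e1 = abs(forward_diff(math.exp, x, 1e-2) - math.exp(x))
e2 = abs(forward_diff(math.exp, x, 1e-3) - math.exp(x))
assert 8 < e1 / e2 < 12   # error ratio near 10 confirms order 1
```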

The finite difference so far defined is a 2-point operator, since it requires 2 evaluations of $f$. If

$f(x \pm h) = f(x) \pm f'(x) \, h + \dfrac{f''(x)}{2} \, h^2 \pm \dfrac{f'''(x)}{6} \, h^3 + O(h^4)$

then another 2-point operator

$D_h^0 f(x) = \dfrac{f(x+h) - f(x-h)}{2h}$

can be defined. Since

$D_h^0 f(x) - f'(x) = \dfrac{f'''(x)}{6} \, h^2 + O(h^4)$,

this $D_h^0$ is of order 2, and is referred to as a centered difference operator. Centered difference operators are usually one order of accuracy higher than un-centered operators for the same number of points.
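The same convergence test as before now shows second-order behavior: shrinking $h$ by 10 shrinks the error by about 100.

```python
import math

def centered_diff(f, x, h):
    # D0_h f(x) = (f(x+h) - f(x-h)) / (2h)
    return (f(x + h) - f(x - h)) / (2 * h)

# the error behaves like (f'''(x)/6) h^2, i.e. order 2
x = 1.0
e1 = abs(centered_diff(math.exp, x, 1e-2) - math.exp(x))
e2 = abs(centered_diff(math.exp, x, 1e-3) - math.exp(x))
assert 80 < e1 / e2 < 120   # error ratio near 100 confirms order 2
```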


More generally, for $n$ points $x + s_1 h, \, x + s_2 h, \, \ldots, \, x + s_n h$ a finite difference operator

$D_h f(x) = \dfrac{1}{h} \sum_{i=1}^{n} a_i \, f(x + s_i h)$

is usually defined by choosing the coefficients $a_i$ so that $D_h$ has as high an order of accuracy as possible. Considering

$f(x + s_i h) = \sum_{j=0}^{n-1} \dfrac{f^{(j)}(x)}{j!} \, (s_i h)^j + O(h^n)$.

Then

$D_h f(x) = \dfrac{1}{h} \sum_{j=0}^{n-1} \dfrac{f^{(j)}(x)}{j!} \, b_j \, h^j + O(h^{n-1})$.

where the $b_j$ are the right hand side of the Vandermonde system

$\begin{pmatrix} 1 & 1 & \cdots & 1 \\ s_1 & s_2 & \cdots & s_n \\ \vdots & \vdots & & \vdots \\ s_1^{n-1} & s_2^{n-1} & \cdots & s_n^{n-1} \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix} = \begin{pmatrix} b_0 \\ b_1 \\ \vdots \\ b_{n-1} \end{pmatrix}$

and

$b_j = \sum_{i=1}^{n} a_i \, s_i^{\,j}$.

When the $a_i$ are chosen so that

$b_1 = 1 \quad \text{and} \quad b_j = 0 \ \text{for} \ j \neq 1$

then

$D_h f(x) - f'(x) = O(h^{n-1})$.

so that the operator is of order $n-1$.

At the end of the section a table for the first several values of $n$, the number of points, will be provided. The discussion will move on to the approximation of the second derivative.
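The Vandermonde system above can be solved directly to recover familiar stencils. A minimal sketch, with illustrative helper names (`solve` is a bare Gaussian elimination, `fd_weights` builds and solves the system for the $m$-th derivative):

```python
import math

def solve(A, b):
    # Gaussian elimination with partial pivoting (minimal sketch)
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            r = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= r * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fd_weights(points, m):
    # solve the Vandermonde system: sum_i a_i s_i^j = m! if j == m else 0
    n = len(points)
    V = [[float(s) ** j for s in points] for j in range(n)]
    b = [float(math.factorial(m)) if j == m else 0.0 for j in range(n)]
    return solve(V, b)

# the centered 3-point weights for f' are -1/2, 0, 1/2
w = fd_weights([-1, 0, 1], 1)
```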

approximation of a second derivative

The definition of the second derivative of a function

$f''(x) = \lim_{h \to 0} \dfrac{f'(x+h) - f'(x)}{h}$.

used together with the finite difference approximation for the first derivative

$f'(x) \approx \dfrac{f(x+h) - f(x)}{h}$

gives the finite difference

$D_h^2 f(x) = \dfrac{f(x+2h) - 2f(x+h) + f(x)}{h^2}$

In view of

$f(x+h) = f(x) + f'(x) \, h + \dfrac{f''(x)}{2} \, h^2 + \dfrac{f'''(x)}{6} \, h^3 + O(h^4)$

for the operator just defined

$D_h^2 f(x) - f''(x) = f'''(x) \, h + O(h^2)$.

If instead, the difference operator

$D_h^2 f(x) = \dfrac{f(x) - 2f(x-h) + f(x-2h)}{h^2}$

is used

$D_h^2 f(x) - f''(x) = -f'''(x) \, h + O(h^2)$

If the other obvious possibility is tried

$D_h^2 f(x) = \dfrac{f(x+h) - 2f(x) + f(x-h)}{h^2}$

In view of

$f(x \pm h) = f(x) \pm f'(x) \, h + \dfrac{f''(x)}{2} \, h^2 \pm \dfrac{f'''(x)}{6} \, h^3 + \dfrac{f^{(4)}(x)}{24} \, h^4 + O(h^5)$,

$D_h^2 f(x) - f''(x) = \dfrac{f^{(4)}(x)}{12} \, h^2 + O(h^4)$.

So

$D_h^2 f(x) = \dfrac{f(x+h) - 2f(x) + f(x-h)}{h^2}$

is a second order centered finite difference approximation for $f''(x)$.
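The predicted $\frac{f^{(4)}(x)}{12} h^2$ error of the centered stencil can be checked numerically:

```python
import math

def second_diff(f, x, h):
    # centered second difference (f(x+h) - 2 f(x) + f(x-h)) / h^2
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

# for sin at x = 0.5 with h = 0.01 the leading error term predicts
# roughly sin(0.5)/12 * 1e-4, about 4e-6
err = abs(second_diff(math.sin, 0.5, 1e-2) - (-math.sin(0.5)))
assert err < 1e-5
```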


The reasoning applied to the approximation of a first derivative can be used for the second derivative with only a few modifications.

For $n$ points $x + s_1 h, \, \ldots, \, x + s_n h$ a finite difference operator

$D_h f(x) = \dfrac{1}{h^2} \sum_{i=1}^{n} a_i \, f(x + s_i h)$

is usually defined by choosing the coefficients $a_i$ so that $D_h$ has as high an order of accuracy as possible. Considering

$f(x + s_i h) = \sum_{j=0}^{n-1} \dfrac{f^{(j)}(x)}{j!} \, (s_i h)^j + O(h^n)$.

Then

$D_h f(x) = \dfrac{1}{h^2} \sum_{j=0}^{n-1} \dfrac{f^{(j)}(x)}{j!} \, b_j \, h^j + O(h^{n-2})$.

where the $b_j$ are the right hand side of the Vandermonde system

$\begin{pmatrix} 1 & 1 & \cdots & 1 \\ s_1 & s_2 & \cdots & s_n \\ \vdots & \vdots & & \vdots \\ s_1^{n-1} & s_2^{n-1} & \cdots & s_n^{n-1} \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix} = \begin{pmatrix} b_0 \\ b_1 \\ \vdots \\ b_{n-1} \end{pmatrix}$

and

$b_j = \sum_{i=1}^{n} a_i \, s_i^{\,j}$.

When the $a_i$ are chosen so that

$b_2 = 2 \quad \text{and} \quad b_j = 0 \ \text{for} \ j \neq 2$

then

$D_h f(x) - f''(x) = O(h^{n-2})$.

so that the operator is of order $n-2$.

The effect of centering the points will be covered below.

approximation of higher derivatives

Although approximations to higher derivatives can be defined recursively from those for derivatives of lower order, the end result is the same finite difference operators. The Vandermonde type system will be used again for this purpose.

$D_h f(x) = \dfrac{1}{h^m} \sum_{i=1}^{n} a_i \, f(x + s_i h) \approx f^{(m)}(x)$.

The number of points needed to approximate $f^{(m)}(x)$ by finite differences is at least $m + 1$.


For $n$ points $x + s_1 h, \, \ldots, \, x + s_n h$ a finite difference operator

$D_h f(x) = \dfrac{1}{h^m} \sum_{i=1}^{n} a_i \, f(x + s_i h)$

is usually defined by choosing the coefficients $a_i$ so that $D_h f(x)$ approximates $f^{(m)}(x)$ to as high an order of accuracy as possible. Considering

$f(x + s_i h) = \sum_{j=0}^{n-1} \dfrac{f^{(j)}(x)}{j!} \, (s_i h)^j + O(h^n)$.

Then

$D_h f(x) = \dfrac{1}{h^m} \sum_{j=0}^{n-1} \dfrac{f^{(j)}(x)}{j!} \, b_j \, h^j + O(h^{n-m})$.

where the $b_j$ are the right hand side of the Vandermonde system

$\begin{pmatrix} 1 & 1 & \cdots & 1 \\ s_1 & s_2 & \cdots & s_n \\ \vdots & \vdots & & \vdots \\ s_1^{n-1} & s_2^{n-1} & \cdots & s_n^{n-1} \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix} = \begin{pmatrix} b_0 \\ b_1 \\ \vdots \\ b_{n-1} \end{pmatrix}$

and

$b_j = \sum_{i=1}^{n} a_i \, s_i^{\,j}$.

When the $a_i$ are chosen so that

$b_m = m! \quad \text{and} \quad b_j = 0 \ \text{for} \ j \neq m$

then

$D_h f(x) - f^{(m)}(x) = O(h^{n-m})$.

so that the operator is of order $n - m$.

An alternative analysis is to require that the finite difference operator differentiates powers of $x$ exactly, up to the highest power possible.
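This exactness view can be checked directly: the 2-point centered operator reproduces derivatives of $1$, $t$, and $t^2$ exactly, but fails on $t^3$ by exactly $h^2$, since $\frac{(x+h)^3 - (x-h)^3}{2h} = 3x^2 + h^2$. (The step sizes below are powers of two so the floating point arithmetic is exact.)

```python
def centered_diff(f, x, h):
    # 2-point centered operator (f(x+h) - f(x-h)) / (2h)
    return (f(x + h) - f(x - h)) / (2 * h)

x, h = 3.0, 0.5
# exact on t^2: derivative is 2x
assert centered_diff(lambda t: t * t, x, h) == 2 * x
# off by exactly h^2 on t^3
assert centered_diff(lambda t: t ** 3, x, h) == 3 * x * x + h * h
```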

effect of the placement of points

Usually the $s_i$ are taken to be integer valued, since the points are intended to coincide with those of some division of an interval or of a 2 or 3 dimensional domain. If these points, and hence the $s_i$, are chosen with only accuracy in mind, then a higher accuracy of only one order can be achieved.

So start by seeing how high the accuracy is to which $f'(x)$ can be approximated with three points.

$D_h f(x) = \dfrac{1}{h} \left( a_1 f(x + s_1 h) + a_2 f(x + s_2 h) + a_3 f(x + s_3 h) \right)$

Then accuracy of order 4 can not be achieved, because it would require the solution of

$\begin{pmatrix} 1 & 1 & 1 \\ s_1 & s_2 & s_3 \\ s_1^2 & s_2^2 & s_3^2 \\ s_1^3 & s_2^3 & s_3^3 \\ s_1^4 & s_2^4 & s_3^4 \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}$

which can not be solved since the matrix

$\begin{pmatrix} s_1^2 & s_2^2 & s_3^2 \\ s_1^3 & s_2^3 & s_3^3 \\ s_1^4 & s_2^4 & s_3^4 \end{pmatrix}$

is non-singular for distinct non-zero $s_i$, forcing $a_1 = a_2 = a_3 = 0$, which contradicts $a_1 s_1 + a_2 s_2 + a_3 s_3 = 1$. The possibility of an $s_i$ being $0$ can be ruled out otherwise.

For accuracy of order 3

$\begin{pmatrix} 1 & 1 & 1 \\ s_1 & s_2 & s_3 \\ s_1^2 & s_2^2 & s_3^2 \\ s_1^3 & s_2^3 & s_3^3 \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}$

So the matrix

$\begin{pmatrix} 1 & 1 & 1 \\ s_1^2 & s_2^2 & s_3^2 \\ s_1^3 & s_2^3 & s_3^3 \end{pmatrix}$

is singular and $s_1, s_2, s_3$ are the roots of some polynomial

$p(s) = s^3 + c_2 s^2 + c_0$.
Two examples are next.

$s_1, s_2, s_3 = -1, \tfrac{3}{2}, 3$: here $p(s) = (s+1)\left(s - \tfrac{3}{2}\right)(s - 3) = s^3 - \tfrac{7}{2} s^2 + \tfrac{9}{2}$ has no linear term, so order 3 is attained.

$s_1, s_2, s_3 = -1, 0, 1$: here $p(s) = s^3 - s$ has a linear term, so the centered three-point operator attains only the generic order 2.


To see to what accuracy $f''(x)$ can be approximated with three points, consider

$D_h f(x) = \dfrac{1}{h^2} \left( a_1 f(x + s_1 h) + a_2 f(x + s_2 h) + a_3 f(x + s_3 h) \right)$

Then accuracy of order 3 can not be achieved, because it would require the solution of

$\begin{pmatrix} 1 & 1 & 1 \\ s_1 & s_2 & s_3 \\ s_1^2 & s_2^2 & s_3^2 \\ s_1^3 & s_2^3 & s_3^3 \\ s_1^4 & s_2^4 & s_3^4 \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 2 \\ 0 \\ 0 \end{pmatrix}$

which can not be solved since the matrices

$\begin{pmatrix} 1 & 1 & 1 \\ s_1 & s_2 & s_3 \\ s_1^3 & s_2^3 & s_3^3 \end{pmatrix}$ and $\begin{pmatrix} 1 & 1 & 1 \\ s_1 & s_2 & s_3 \\ s_1^4 & s_2^4 & s_3^4 \end{pmatrix}$

would both need to be singular.

If the matrix

$\begin{pmatrix} 1 & 1 & 1 \\ s_1 & s_2 & s_3 \\ s_1^3 & s_2^3 & s_3^3 \end{pmatrix}$

is singular, then $s_1, s_2, s_3$ are the roots of some polynomial

$p(s) = s^3 + c_1 s + c_0$,

implying

$s_i^4 = -c_1 s_i^2 - c_0 s_i$

meaning that elementary row operations can transform

$\begin{pmatrix} 1 & 1 & 1 \\ s_1 & s_2 & s_3 \\ s_1^4 & s_2^4 & s_3^4 \end{pmatrix}$

to

$\begin{pmatrix} 1 & 1 & 1 \\ s_1 & s_2 & s_3 \\ s_1^2 & s_2^2 & s_3^2 \end{pmatrix}$

which is non-singular. (Here $c_1 \neq 0$, since $s^3 + c_0$ has only one real root and so can not have three distinct real roots.)

Conversely, if $s_1, s_2, s_3$ are the roots of some polynomial

$p(s) = s^3 + c_1 s + c_0$, then

$\begin{pmatrix} 1 & 1 & 1 \\ s_1 & s_2 & s_3 \\ s_1^2 & s_2^2 & s_3^2 \\ s_1^3 & s_2^3 & s_3^3 \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 2 \\ 0 \end{pmatrix}$

can be solved and $f''(x)$ approximated to an order of 2 accuracy.

See how high the accuracy is to which $f'(x)$ can be approximated with $n$ points.

$D_h f(x) = \dfrac{1}{h} \sum_{i=1}^{n} a_i \, f(x + s_i h)$

Then accuracy of order $n + 1$ can not be achieved, because it would require the solution of

$\sum_{i=1}^{n} a_i \, s_i^{\,j} = \begin{cases} 1 & j = 1 \\ 0 & j = 0, 2, 3, \ldots, n+1 \end{cases}$

which can not be solved since the matrix

$\begin{pmatrix} s_1^2 & \cdots & s_n^2 \\ \vdots & & \vdots \\ s_1^{n+1} & \cdots & s_n^{n+1} \end{pmatrix}$

is non-singular for distinct non-zero $s_i$, forcing all $a_i = 0$. The possibility of an $s_i$ being $0$ can be ruled out otherwise, because, for example, if $s_1 = 0$, then the non-singularity of the block

$\begin{pmatrix} s_2^2 & \cdots & s_n^2 \\ \vdots & & \vdots \\ s_2^{n} & \cdots & s_n^{n} \end{pmatrix}$

would force $a_2 = \cdots = a_n = 0$.

For accuracy of order $n$

$\sum_{i=1}^{n} a_i \, s_i^{\,j} = \begin{cases} 1 & j = 1 \\ 0 & j = 0, 2, 3, \ldots, n \end{cases}$

So the matrix

$\begin{pmatrix} 1 & \cdots & 1 \\ s_1^2 & \cdots & s_n^2 \\ \vdots & & \vdots \\ s_1^{n} & \cdots & s_n^{n} \end{pmatrix}$

is singular and $s_1, \ldots, s_n$ are the roots of some polynomial

$p(s) = s^n + c_{n-1} s^{n-1} + \cdots + c_2 s^2 + c_0$.

The progression for the second, third, ... derivatives goes as follows.

If $s_1, \ldots, s_n$ are the roots of some polynomial

$p(s) = s^n + c_{n-1} s^{n-1} + \cdots + c_3 s^3 + c_1 s + c_0$ (no $s^2$ term)

then the system

$\sum_{i=1}^{n} a_i \, s_i^{\,j} = \begin{cases} 2 & j = 2 \\ 0 & j = 0, 1, 3, \ldots, n \end{cases}$

can be solved, and

$D_h f(x) = \dfrac{1}{h^2} \sum_{i=1}^{n} a_i \, f(x + s_i h)$

approximates $f''(x)$ to an order of accuracy of $n - 1$.

If $s_1, \ldots, s_n$ are the roots of some polynomial

$p(s) = s^n + c_{n-1} s^{n-1} + \cdots + c_4 s^4 + c_2 s^2 + c_1 s + c_0$ (no $s^3$ term)

then the system

$\sum_{i=1}^{n} a_i \, s_i^{\,j} = \begin{cases} 3! & j = 3 \\ 0 & j = 0, 1, 2, 4, \ldots, n \end{cases}$

can be solved, and

$D_h f(x) = \dfrac{1}{h^3} \sum_{i=1}^{n} a_i \, f(x + s_i h)$

approximates $f'''(x)$ to an order of accuracy of $n - 2$.

Now, the analysis is not quite done yet. Returning to the approximation of $f''(x)$. If for the polynomial

$p(s) = s^n + c_{n-1} s^{n-1} + \cdots + c_3 s^3 + c_1 s + c_0$

it were that $c_1 = 0$, then the system can be solved for one more order of accuracy. So the question arises as to whether polynomials of the form

$p(s) = s^n + c_{n-1} s^{n-1} + \cdots + c_3 s^3 + c_0$

exist that have $n$ distinct real roots. When $n = 3$ there is not. So consider $n = 4$.

$p(s) = s^4 + c_3 s^3 + c_0$

If $p$ has 4 distinct real roots, then

$p'(s) = 4 s^3 + 3 c_3 s^2 = s^2 \, (4s + 3c_3)$

has 3 distinct real roots, which it does not. So the order of approximation can not be improved. This is generally the case.

Returning to the approximation of $f^{(m)}(x)$. If for the polynomial

$p(s) = s^n + c_{n-1} s^{n-1} + \cdots + c_{m+1} s^{m+1} + c_{m-1} s^{m-1} + \cdots + c_0$

it were that $c_{m-1} = 0$, then the system can be solved for one more order of accuracy. So the question arises as to whether polynomials of the form

$p(s) = s^n + c_{n-1} s^{n-1} + \cdots + c_{m+1} s^{m+1} + c_{m-2} s^{m-2} + \cdots + c_0$

exist that have $n$ distinct real roots.

If $p$ has $n$ distinct real roots, then

$p^{(m-1)}(s)$, which has a double root at $s = 0$,

has $n - m + 1$ distinct real roots, which it does not. So the order of approximation can not be improved.

For functions of a complex variable, higher orders of approximation may be obtained, for example by using roots of unity, since complex roots are allowed.

centered difference operators

For $n$ points $x + s_1 h, \, \ldots, \, x + s_n h$ a finite difference operator

$D_h f(x) = \dfrac{1}{h^m} \sum_{i=1}^{n} a_i \, f(x + s_i h)$

is said to be centered when the points are symmetrically placed about $x$.

$s_{n+1-i} = -s_i, \quad i = 1, \ldots, n$

When $n$ is odd $s_{(n+1)/2} = 0$.

To find the centered difference operators, consider

$f(x + s_i h) = \sum_{j=0}^{n} \dfrac{f^{(j)}(x)}{j!} \, (s_i h)^j + O(h^{n+1})$.

Then

$D_h f(x) = \dfrac{1}{h^m} \sum_{j=0}^{n} \dfrac{f^{(j)}(x)}{j!} \, b_j \, h^j + O(h^{n-m+1})$.

where the $b_j$ are the right hand side of the over-determined system

$\begin{pmatrix} 1 & 1 & \cdots & 1 \\ s_1 & s_2 & \cdots & s_n \\ \vdots & \vdots & & \vdots \\ s_1^{n} & s_2^{n} & \cdots & s_n^{n} \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix} = \begin{pmatrix} b_0 \\ b_1 \\ \vdots \\ b_{n} \end{pmatrix}$

and

$b_j = \sum_{i=1}^{n} a_i \, s_i^{\,j}$.

When the $a_i$ are chosen so that

$b_m = m! \quad \text{and} \quad b_j = 0 \ \text{for} \ j \neq m$

then

$D_h f(x) - f^{(m)}(x) = O(h^{n-m+1})$.

so that the operator is of order $n - m + 1$.

Since, for the centered case, the system is over-determined, some restriction is needed for the system to have a solution. A solution occurs when the $s_i$ are the roots of a polynomial

$p(s) = s^n + c_{n-1} s^{n-1} + \cdots + c_1 s + c_0$

with

$c_m = 0$.

Observing that when $n$ is even

$p(s) = \prod_{i=1}^{n/2} \left( s^2 - s_i^2 \right)$


and when $n$ is odd

$p(s) = s \prod_{i=1}^{(n-1)/2} \left( s^2 - s_i^2 \right)$.

So when $n$ is even $p$ has $c_j = 0$ for all odd $j$, and when $n$ is odd $p$ has $c_j = 0$ for all even $j$.

So a centered difference operator will achieve the one order extra of accuracy when the number of points $n$ is even and the order of the derivative $m$ is odd, or when the number of points $n$ is odd and the order of the derivative $m$ is even.
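The parity argument can be seen in the moment conditions themselves. For the centered 3-point second-derivative stencil ($n = 3$ odd, $m = 2$ even), the $j = 3$ moment vanishes automatically by symmetry, which is exactly the extra order:

```python
# centered 3-point second-derivative stencil: a = (1, -2, 1) at s = (-1, 0, 1)
a = [1, -2, 1]
s = [-1, 0, 1]

# moment conditions sum_i a_i s_i^j for j = 0..3: should be 0, 0, 2!, 0
moments = [sum(ai * si ** j for ai, si in zip(a, s)) for j in range(4)]
# the vanishing j = 3 moment is the extra order granted by symmetry
assert moments == [0, 0, 2, 0]
```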


Shift Operators

Shift of a Function

Trigonometric polynomials

Let

$p(\theta) = \sum_{k=-N}^{N} c_k \, e^{i k \theta}$

(3.2.0)

be a trigonometric polynomial defined on $[-\pi, \pi]$.

Define the inner product on such trigonometric polynomials by


$\langle p, q \rangle = \dfrac{1}{2\pi} \int_{-\pi}^{\pi} p(\theta) \, \overline{q(\theta)} \, d\theta$.

(3.2.1)


In light of the orthogonalities

$\langle e^{ik\theta}, e^{ik\theta} \rangle = 1$,

and $\langle e^{ik\theta}, e^{ij\theta} \rangle = 0$ when $k \neq j$,

inner products can be calculated easily.


$\langle p, p \rangle = \sum_{k=-N}^{N} |c_k|^2$.

(3.2.2)


and for

$p(\theta) = \sum_{k=-N}^{N} c_k \, e^{i k \theta}$

$q(\theta) = \sum_{k=-N}^{N} d_k \, e^{i k \theta}$

the inner product is given by


$\langle p, q \rangle = \sum_{k=-N}^{N} c_k \, \overline{d_k}$.

(3.2.3)
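The orthogonalities can be illustrated with a discrete analogue of the integral: on an equally spaced $M$-point grid the averaged sum reproduces $\langle e^{ik\theta}, e^{ij\theta} \rangle$ exactly whenever $|k - j| < M$. A minimal sketch, with an illustrative helper name:

```python
import cmath
import math

def ip(k, j, M=16):
    # discrete analogue of (1/2 pi) * integral of e^{ik t} conj(e^{ij t}) dt
    return sum(cmath.exp(1j * (k - j) * 2 * math.pi * m / M)
               for m in range(M)) / M

assert abs(ip(3, 3) - 1) < 1e-12   # <e^{ik t}, e^{ik t}> = 1
assert abs(ip(3, 1)) < 1e-12       # <e^{ik t}, e^{ij t}> = 0 for k != j
```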


Definition of a shift operator

Define the shift operator $E_h$ on $p(\theta)$ by

$(E_h \, p)(\theta) = p(\theta + h)$.

(3.3.0)

Since

$(E_h \, p)(\theta) = \sum_{k=-N}^{N} c_k \, e^{i k (\theta + h)} = \sum_{k=-N}^{N} \left( c_k \, e^{i k h} \right) e^{i k \theta}$

and

$\left| c_k \, e^{i k h} \right| = |c_k|$,

it follows that


$\|E_h \, p\| = \|p\|$.

(3.3.1)
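In coefficient form the shift operator just multiplies $c_k$ by $e^{ikh}$, which leaves each $|c_k|$, and hence the norm, unchanged; the sketch below (illustrative helper names) checks (3.3.1) on a sample coefficient set.

```python
import cmath

def shift(c, h):
    # E_h sends the coefficient c_k of e^{ik t} to c_k e^{ikh}
    return {k: ck * cmath.exp(1j * k * h) for k, ck in c.items()}

def norm(c):
    # by orthogonality, ||p||^2 = sum_k |c_k|^2  (eq. 3.2.2)
    return sum(abs(ck) ** 2 for ck in c.values()) ** 0.5

c = {-1: 0.5, 0: 2.0, 2: 1j}   # coefficients of a sample trigonometric polynomial
assert abs(norm(shift(c, 0.7)) - norm(c)) < 1e-12
```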


Approximation by trigonometric polynomials

Let $f(\theta)$ be a function defined on and periodic with respect to the interval $[-\pi, \pi]$. That is, $f(\theta + 2\pi) = f(\theta)$.

The $N$th degree trigonometric polynomial approximation to $f$ is given by

$p_N(\theta) = \sum_{k=-N}^{N} c_k \, e^{i k \theta}$

where


$c_k = \langle f, e^{ik\theta} \rangle$ and $c_k = \dfrac{1}{2\pi} \int_{-\pi}^{\pi} f(\theta) \, e^{-i k \theta} \, d\theta$.

(3.4.0)


$p_N$ approximates $f$ in the sense that

$\|f - p_N\|$

is minimized over all trigonometric polynomials, of degree $N$ or less, by $p_N$.
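The coefficients (3.4.0) can be computed numerically; for a periodic integrand the trapezoid rule on an equally spaced grid is very accurate, and for a function that is itself a trigonometric polynomial it recovers the coefficients essentially exactly. A minimal sketch with an illustrative helper name:

```python
import cmath
import math

def coeff(f, k, M=256):
    # trapezoid rule for c_k = (1/2 pi) * integral of f(t) e^{-ik t} dt
    return sum(f(2 * math.pi * m / M) * cmath.exp(-1j * k * 2 * math.pi * m / M)
               for m in range(M)) / M

f = lambda t: 3 * math.cos(2 * t)   # equals (3/2) e^{2it} + (3/2) e^{-2it}
assert abs(coeff(f, 2) - 1.5) < 1e-10
assert abs(coeff(f, 0)) < 1e-10
```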

In fact