Advanced Mathematics for Engineers and Scientists/Introduction and Classifications

Introduction and Classifications

The intent of the prior chapters was to provide a shallow introduction to PDEs and their solution without scaring anyone away. A lot of fundamentals and very important details were left out. After this point, we are going to proceed with a little more rigor; however, nothing beyond an undergraduate ODE course, a little set theory, and countless hours on Wikipedia should be needed.

Some Definitions and Results

An equation of the form

$$L(u) = f$$

is called a partial differential equation if $u$ is unknown and the function $L$ involves partial differentiation. More concisely, $L$ is an operator or a map which results in (among other things) the partial differentiation of $u$. $u$ is called the dependent variable; the choice of this letter is common in this context. Examples of partial differential equations (referring to the definition above):

$$\frac{\partial u}{\partial t} - \frac{\partial^2 u}{\partial x^2} = 0$$
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = g(x, y)$$
$$\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} - \frac{\partial^2 u}{\partial x^2} = 0$$

Note that what exactly $u$ is made of is unspecified; it could be a function, several functions bundled into a vector, or something else. But if $u$ satisfies the partial differential equation, it is called a solution.

Another thing to observe is the seeming redundancy of $f$; its utility comes from the study of linear equations. If $f = 0$, the equation is called homogeneous; otherwise it is nonhomogeneous or inhomogeneous.

It's worth mentioning now that the terms "function", "operator", and "map" are loosely interchangeable, and that functions can involve differentiation or any other operation. This text will favor the term "function", though not exclusively.

The order of a PDE is the order of the highest derivative appearing, but a distinction is often made between variables. For example, the equation

$$\frac{\partial^2 u}{\partial t^2} = \frac{\partial^2}{\partial x^2}\left(c(x)\,\frac{\partial^2 u}{\partial x^2}\right)$$

is second order in $t$ and fourth order in $x$ (fourth derivatives of $u$ will result regardless of the form of $c(x)$).

Linear Partial Differential Equations

Suppose that $L(u) = f$, and that $L$ satisfies the following properties:

  • $L(u + v) = L(u) + L(v)$
  • $L(c\,u) = c\,L(u)$

for any scalar $c$. The first property is called additivity, and the second one is called homogeneity. If $L$ is additive and homogeneous, it is called a linear function. If, additionally, $L$ involves partial differentiation and in

$$L(u) = f$$

the right-hand side $f$ is not a function of $u$, then the equation above is a linear partial differential equation. This is where the importance of $f$ shows up. Consider the equation

$$\frac{\partial u}{\partial x} + \frac{\partial u}{\partial y} = g(x, y)$$

where $g$ is not a function of $u$. Now, if we represent the equation through

$$L(u) = \frac{\partial u}{\partial x} + \frac{\partial u}{\partial y} - g, \qquad f = 0,$$

then $L$ fails both additivity and homogeneity and so is nonlinear (note: with this choice the equation is formally 'homogeneous' since $f = 0$, but that is a distinct usage of the term from the homogeneity property above). If instead

$$L(u) = \frac{\partial u}{\partial x} + \frac{\partial u}{\partial y}, \qquad f = g,$$

then $L$ is now linear. Note then that the choice of $L$ and $f$ is generally not unique, but if an equation can be written in a linear form it is called a linear equation.
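The two properties are easy to test mechanically. Below is a minimal sympy sketch checking additivity and homogeneity for the two ways of splitting the equation above into $L$ and $f$; the names L_bad and L_good and the symbolic setup are illustrative choices, not anything from the text.

    import sympy as sp

    x, y, c = sp.symbols('x y c')
    u = sp.Function('u')(x, y)
    v = sp.Function('v')(x, y)
    g = sp.Function('g')(x, y)   # source term, independent of u

    # Two ways to split the same equation u_x + u_y = g into L(u) = f.
    L_bad  = lambda w: sp.diff(w, x) + sp.diff(w, y) - g   # folds g into L, f = 0
    L_good = lambda w: sp.diff(w, x) + sp.diff(w, y)       # keeps L bare, f = g

    for name, L in [('L_bad', L_bad), ('L_good', L_good)]:
        additive    = sp.simplify(L(u + v) - (L(u) + L(v))) == 0
        homogeneous = sp.simplify(L(c*u) - c*L(u)) == 0
        print(name, 'additive:', additive, 'homogeneous:', homogeneous)

Running it reports that L_bad fails both tests while L_good passes both, matching the discussion above.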

Linear equations are very popular. One of the reasons for this popularity is a little piece of magic called the superposition principle. Suppose that both $u_1$ and $u_2$ are solutions of a linear, homogeneous equation (from here onwards, $L$ will denote a linear function), i.e.

$$L(u_1) = 0, \qquad L(u_2) = 0$$

for the same $L$. We can feed a combination of $u_1$ and $u_2$ into the PDE and, recalling the definition of a linear function, see that

$$L(c_1 u_1 + c_2 u_2) = L(c_1 u_1) + L(c_2 u_2)$$
$$= c_1 L(u_1) + c_2 L(u_2)$$

for some constants $c_1$ and $c_2$. As stated previously, both $u_1$ and $u_2$ are solutions, which means that

$$c_1 L(u_1) + c_2 L(u_2) = c_1 \cdot 0 + c_2 \cdot 0$$
$$= 0$$

What all this means is that if both $u_1$ and $u_2$ solve the linear, homogeneous equation $L(u) = 0$, then the quantity $c_1 u_1 + c_2 u_2$ is also a solution of the partial differential equation. The quantity $c_1 u_1 + c_2 u_2$ is called a linear combination of $u_1$ and $u_2$. The result would hold for larger combinations as well, and generally,

    The Superposition Principle

Suppose that in the equation

$$L(u) = 0$$

the function $L$ is linear. If some sequence $u_1, u_2, u_3, \ldots$ satisfies the equation, that is, if

$$L(u_n) = 0, \qquad n = 1, 2, 3, \ldots$$

then any linear combination of the sequence also satisfies the equation:

$$L\left(\sum_n c_n u_n\right) = 0$$

where $c_n$ is a sequence of constants and the sum is arbitrary.
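As a quick illustration (a sketch, not part of the original text): take $L$ to be the two-dimensional Laplacian and two well-known solutions of Laplace's equation; sympy confirms that an arbitrary linear combination still satisfies $L(u) = 0$.

    import sympy as sp

    x, y, c1, c2 = sp.symbols('x y c1 c2')
    L = lambda w: sp.diff(w, x, 2) + sp.diff(w, y, 2)   # Laplacian: a linear operator

    u1 = x**2 - y**2          # harmonic: L(u1) = 0
    u2 = sp.exp(x)*sp.sin(y)  # harmonic: L(u2) = 0

    print(sp.simplify(L(u1)), sp.simplify(L(u2)))   # 0 0
    print(sp.simplify(L(c1*u1 + c2*u2)))            # 0 for any constants c1, c2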

Note that there is no mention of partial differentiation. Indeed, the principle holds for any linear equation, algebraic or integro-partial-differential-whatever. Concerning nonhomogeneous equations, the rule can be extended easily. Consider the nonhomogeneous equation

$$L(u) = f$$

Let's say that this equation is solved by some $u_p$ and that a sequence $u_n$ solves the "associated homogeneous problem",

$$L(u) = 0$$
$$L(u_n) = 0, \qquad n = 1, 2, 3, \ldots$$

where $L$ is the same between the two. An extension of superposition is observed by taking, say, the specific combination $u_p + c_1 u_1 + c_2 u_2$:

$$L(u_p + c_1 u_1 + c_2 u_2) = L(u_p) + L(c_1 u_1) + L(c_2 u_2)$$
$$= L(u_p) + c_1 L(u_1) + c_2 L(u_2)$$
$$= f + c_1 \cdot 0 + c_2 \cdot 0$$
$$= f$$

More generally,

    The Extended Superposition Principle

Suppose that in the nonhomogeneous equation

$$L(u) = f$$

the function $L$ is linear. Suppose that this equation is solved by some $u_p$, and that the associated homogeneous problem

$$L(u) = 0$$

is solved by a sequence $u_n$. That is,

$$L(u_n) = 0, \qquad n = 1, 2, 3, \ldots$$

Then $u_p$ plus any linear combination of the sequence $u_n$ satisfies the original (nonhomogeneous) equation:

$$L\left(u_p + \sum_n c_n u_n\right) = f$$

where $c_n$ is a sequence of constants and the sum is arbitrary.

The possibility of combining solutions in an arbitrary linear combination is precious, as it allows the solutions of complicated problems to be expressed in terms of the solutions of much simpler problems.
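A small sympy check of the extended principle (again an illustrative sketch: the operator, right-hand side, and particular solution below are chosen purely for convenience): with $L$ the Laplacian, $f = 2$, a particular solution $u_p = x^2$, and two solutions of the homogeneous problem, any combination $u_p + c_1 u_1 + c_2 u_2$ still satisfies $L(u) = f$.

    import sympy as sp

    x, y, c1, c2 = sp.symbols('x y c1 c2')
    L = lambda w: sp.diff(w, x, 2) + sp.diff(w, y, 2)   # Laplacian, as before

    f  = 2            # nonhomogeneous right-hand side
    up = x**2         # particular solution: L(up) = 2 = f
    u1 = x**2 - y**2  # solutions of the associated homogeneous problem L(u) = 0
    u2 = x*y

    combo = up + c1*u1 + c2*u2
    print(sp.simplify(L(combo) - f))   # 0: the combination solves L(u) = f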

This is part of why even modestly nonlinear equations pose such difficulties: in almost no case is there anything like a superposition principle.

Classification of Linear Equations

A linear second order PDE in two variables has the general form

$$A\frac{\partial^2 u}{\partial x^2} + B\frac{\partial^2 u}{\partial x\,\partial y} + C\frac{\partial^2 u}{\partial y^2} + D\frac{\partial u}{\partial x} + E\frac{\partial u}{\partial y} + F u = G$$

If the capital letter coefficients are constants, the equation is called linear with constant coefficients, otherwise linear with variable coefficients; and again, if $G = 0$ the equation is homogeneous. The letters $x$ and $y$ are used as generic independent variables; they need not represent space. Equations are further classified by their coefficients; the quantity

$$B^2 - 4AC$$

is called the discriminant. Equations are classified as follows:

$$B^2 - 4AC < 0: \quad \text{elliptic}$$
$$B^2 - 4AC = 0: \quad \text{parabolic}$$
$$B^2 - 4AC > 0: \quad \text{hyperbolic}$$

Note that if coefficients vary, an equation can belong to one classification in one domain and another classification in another domain. Note also that all first order equations are parabolic.
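The discriminant test is mechanical enough to write down directly; the helper below (a sketch, with the coefficient names matching the general form above) classifies an equation from its constant coefficients A, B, C.

    def classify(A, B, C):
        """Classify A*u_xx + B*u_xy + C*u_yy + (lower order terms) = G."""
        d = B**2 - 4*A*C
        if d < 0:
            return 'elliptic'
        if d == 0:
            return 'parabolic'
        return 'hyperbolic'

    print(classify(1, 0, 1))    # u_xx + u_yy = 0 (Laplace)     -> elliptic
    print(classify(1, 0, 0))    # u_y = u_xx (diffusion-like)   -> parabolic
    print(classify(1, 0, -1))   # u_xx - u_yy = 0 (wave-like)   -> hyperbolic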

Smoothness of solutions is affected by equation type in interesting ways: elliptic equations produce solutions that are smooth (up to the smoothness of the coefficients) even if the boundary values aren't; parabolic equations cause the smoothness of solutions to increase along the low-order variable; and hyperbolic equations preserve any lack of smoothness.

Generalizing classifications to more variables, especially when one is always treated temporally (i.e. associated with initial conditions, but we haven't discussed such conditions yet), is not too obvious, and the definitions can vary from context to context and source to source. A common way to classify is with what's called an elliptic operator.

    Definition: Elliptic Operator

A second order operator $L$ of the form

$$L(u) = \sum_{j=1}^{n} \sum_{k=1}^{n} a_{jk}\, D_j D_k\, u + \text{(lower order terms)}, \qquad D_j = -i\,\frac{\partial}{\partial x_j},$$

is called elliptic if $[a_{jk}]$, the array of coefficients for the highest order derivatives, is a positive definite symmetric matrix; here $i$ is the imaginary unit. More generally, an $m$th order operator

$$L(u) = \sum_{|\alpha| \le m} a_{\alpha}\, D^{\alpha} u, \qquad D^{\alpha} = D_1^{\alpha_1} D_2^{\alpha_2} \cdots D_n^{\alpha_n},$$

is elliptic if the $m$-dimensional array of coefficients of the highest ($m$th order) derivatives is analogous to a positive definite symmetric matrix.

The definition is sometimes extended to include negative definite matrices.

The negative of the Laplacian, $-\nabla^2$, is elliptic with $a_{jk} = \delta_{jk}$ (the identity matrix). The definition for the second order case is provided separately because second order operators are by a large margin the most common.
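Checking ellipticity of a second order operator then reduces to a linear algebra test on the coefficient matrix. A small numpy sketch (the function name and the examples are illustrative, and the sign convention for $D_j$ follows the definition above):

    import numpy as np

    def is_elliptic(a):
        """True if the symmetric coefficient matrix a (of the highest order
        terms sum_jk a_jk D_j D_k) is positive definite."""
        a = np.asarray(a, dtype=float)
        return bool(np.allclose(a, a.T) and np.all(np.linalg.eigvalsh(a) > 0))

    print(is_elliptic(np.eye(3)))            # coefficients of -Laplacian: True
    print(is_elliptic([[1, 0], [0, -1]]))    # indefinite (wave-like): False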

Classifications for the equations are then given as

$$\text{elliptic:} \qquad L(u) = f$$
$$\text{parabolic:} \qquad \frac{\partial u}{\partial t} + k\,L(u) = f$$
$$\text{hyperbolic:} \qquad \frac{\partial^2 u}{\partial t^2} + k\,L(u) = f$$

for some constant $k$. The most classic examples of these equations are obtained when the elliptic operator is (the negative of) the Laplacian: Laplace's equation, linear diffusion, and the wave equation are respectively elliptic, parabolic, and hyperbolic, and all are defined in an arbitrary number of spatial dimensions.
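To make the templates concrete, take $L = -\nabla^2$ in one spatial variable; a sympy sketch (the particular solutions below, a heat kernel and a travelling sine wave, are standard examples rather than anything from the text) confirms that the parabolic and hyperbolic forms reproduce the diffusion and wave equations:

    import sympy as sp

    x, t = sp.symbols('x t')
    k = sp.symbols('k', positive=True)
    L = lambda w: -sp.diff(w, x, 2)   # negative Laplacian in one spatial variable

    # Parabolic form u_t + k*L(u) = 0 is the diffusion equation; the heat
    # kernel is a classical solution.
    u_heat = sp.exp(-x**2/(4*k*t))/sp.sqrt(4*sp.pi*k*t)
    print(sp.simplify(sp.diff(u_heat, t) + k*L(u_heat)))     # 0

    # Hyperbolic form u_tt + k*L(u) = 0 is the wave equation; any profile
    # travelling at speed sqrt(k) solves it.
    u_wave = sp.sin(x - sp.sqrt(k)*t)
    print(sp.simplify(sp.diff(u_wave, t, 2) + k*L(u_wave)))  # 0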

Other classifications

Quasilinear

The linear form

$$A\frac{\partial^2 u}{\partial x^2} + B\frac{\partial^2 u}{\partial x\,\partial y} + C\frac{\partial^2 u}{\partial y^2} + D\frac{\partial u}{\partial x} + E\frac{\partial u}{\partial y} + F u = G$$

was considered previously, with the possibility of the capital letter coefficients being functions of the independent variables. If these coefficients are additionally functions of $u$ in a way that does not produce or otherwise involve derivatives, the equation is called quasilinear. It must be emphasized that quasilinear equations are not linear; there is no superposition or other such blessing. However, these equations receive special attention: they are better understood and are easier to examine analytically, qualitatively, and numerically than general nonlinear equations.

A common quasilinear equation that'll probably be studied for eternity is the advection equation

$$\frac{\partial u}{\partial t} + \nabla \cdot \left(\vec{v}\, u\right) = 0$$

which describes the conservative transport (advection) of the quantity $u$ in a velocity field $\vec{v}$. The equation is quasilinear when the velocity field depends on $u$, as it usually does. A specific example would be a traffic flow formulation, with a density-dependent velocity such as $v = v_{\max}\left(1 - u/u_{\max}\right)$, which would result in

$$\frac{\partial u}{\partial t} + \frac{\partial}{\partial x}\!\left[v_{\max}\left(1 - \frac{u}{u_{\max}}\right) u\right] = 0$$

Despite the resemblance, this equation is not parabolic since it is not linear. Unlike its parabolic counterparts, this equation can produce discontinuities even with continuous initial conditions.
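The steepening behaviour is easy to see numerically. Below is a rough Lax-Friedrichs sketch for a conservation law $u_t + \partial_x q(u) = 0$ with an illustrative traffic-style flux $q(u) = u(1 - u)$ (the flux, grid sizes, and initial profile are arbitrary choices, not a specific model from the text); a smooth initial profile develops a near-discontinuity.

    import numpy as np

    # Lax-Friedrichs scheme for u_t + d/dx[q(u)] = 0 with periodic boundaries.
    nx, nt = 200, 150
    dx, dt = 1.0/nx, 0.5/nx                 # dt/dx = 0.5 satisfies the CFL limit here
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    u = 0.4 + 0.3*np.sin(2*np.pi*x)         # smooth initial condition

    q = lambda w: w*(1.0 - w)               # traffic-style flux (illustrative)

    for _ in range(nt):
        up, um = np.roll(u, -1), np.roll(u, 1)          # periodic neighbours
        u = 0.5*(up + um) - 0.5*(dt/dx)*(q(up) - q(um))

    # The largest slope grows sharply as the profile steepens toward a shock.
    print('max |du/dx| ~', np.abs(np.diff(u)).max()/dx)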

General Nonlinear

Some equations defy classification because they're too abnormal. A good example is the equation that defines a minimal surface expressible as $z = h(x, y)$:

$$\left(1 + \left(\frac{\partial h}{\partial y}\right)^{2}\right)\frac{\partial^2 h}{\partial x^2} - 2\,\frac{\partial h}{\partial x}\,\frac{\partial h}{\partial y}\,\frac{\partial^2 h}{\partial x\,\partial y} + \left(1 + \left(\frac{\partial h}{\partial x}\right)^{2}\right)\frac{\partial^2 h}{\partial y^2} = 0$$

where $h$ is the height of the surface.
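Nonlinear as it is, the equation can still be verified against known minimal surfaces. A sympy sketch (using Scherk's classical surface $h = \ln(\cos y / \cos x)$ as the test case; the choice of surface is illustrative) checks that the equation above is satisfied:

    import sympy as sp

    x, y = sp.symbols('x y')
    h = sp.log(sp.cos(y)/sp.cos(x))   # Scherk's minimal surface

    hx, hy = sp.diff(h, x), sp.diff(h, y)
    hxx, hyy, hxy = sp.diff(h, x, 2), sp.diff(h, y, 2), sp.diff(h, x, y)

    residual = (1 + hy**2)*hxx - 2*hx*hy*hxy + (1 + hx**2)*hyy
    print(sp.simplify(residual))      # 0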