Parallel Spectral Numerical Methods/Separation of Variables

Separation of variables is a technique that can be used to solve both ODEs and PDEs. The basic idea for an equation in two variables is to rewrite the equation so that each variable appears on a different side of the equality sign; since the two sides then depend on different variables, they must both equal a constant. We introduce this idea with the simple first-order linear ODE

{dy \over dt} = y.  (1)

As long as y(t) \ne 0 for all t, we can formally separate variables and rewrite eq. (1) as

{dy \over y} = dt.  (2)

Now we can solve for y(t) by integrating both sides

\int {dy \over y} = \int dt  (3)

\ln y + a = t + b  (4)

e^{\ln y + a} = e^{t+b}  (5)

e^{\ln y}e^a = e^t e^b  (6)

y = {e^b \over e^a}e^t  (7)

y(t) = ce^t.  (8)

where a and b are arbitrary constants of integration and c = e^b/e^a is itself an arbitrary constant.
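As a quick sanity check, the closed-form solution can be verified numerically. The sketch below (with an arbitrary illustrative choice of c) confirms by a central finite difference that y(t) = ce^t satisfies dy/dt = y:

```python
import math

# Sketch: numerically verify that y(t) = c*exp(t) solves dy/dt = y.
# The value of c below is an arbitrary choice for illustration.
c = 0.8

def y(t):
    return c * math.exp(t)

# Central-difference approximation of dy/dt at a few sample times.
h = 1e-6
for t in [0.0, 0.5, 1.0]:
    dydt = (y(t + h) - y(t - h)) / (2 * h)
    assert abs(dydt - y(t)) < 1e-6 * max(1.0, abs(y(t)))
print("dy/dt = y holds at the sampled times")
```

The same check works for any c, since the ODE is linear and homogeneous.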

We now perform a similar calculation for a linear partial differential equation. The heat equation is

u_{t} = u_{xx}.  (9)

We suppose that u = X(x)T(t), so that we obtain

X(x) {dT \over dt}(t) = {d^2X \over dx^2}(x)T(t).  (10)

We can rewrite this as

{{dT \over dt}(t) \over T(t)} = {{d^2X \over dx^2}(x) \over X(x)} = -C,  (11)

where C is a constant independent of x and t: the left-hand side depends only on t and the right-hand side only on x, so both must equal the same constant. The two resulting ODEs can be solved separately to get T(t) = \exp(-Ct) and either X(x) = \sin(\sqrt{C}x) or X(x) = \cos(\sqrt{C}x). Since the heat equation is linear, sums of solutions of the heat equation are again solutions. Hence solutions of the heat equation can be found of the form

\sum_{n} \alpha_{n}\exp(-C_{n}t)\sin(\sqrt{C_n}x)+\beta_{n}\exp(-C_{n}t)\cos(\sqrt{C_n}x)  (12)

where the constants \alpha_{n}, \beta_{n} and C_{n} are appropriately chosen. Convergence of such series to an actual solution is studied in mathematics courses on analysis (see, for example, Evans[1] or Renardy and Rogers[2]); however, the main ideas needed to choose the constants \alpha_{n}, \beta_{n} and C_{n}, and hence construct such solutions, are typically encountered towards the end of a calculus course or at the beginning of a differential equations course (see, for example, Courant and John[3][4] or Boyce and DiPrima[5]). Here we consider the case where x \in [0, 2\pi] with periodic boundary conditions. In this case \sqrt{C_n} must be an integer, which we choose to be non-negative to avoid redundancies. At time t = 0, we shall suppose that the initial condition is given by

u(x, t = 0) = f(x).  (13)
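Before finding the coefficients, it is worth checking that each separated mode really solves the PDE. The sketch below (with the arbitrary illustrative choices n = 3 and a single sample point) verifies by finite differences that u(x, t) = \exp(-n^2 t)\sin(nx) satisfies u_t = u_{xx}:

```python
import math

# Sketch: check that a single separated mode u(x, t) = exp(-n^2 t) * sin(n x)
# satisfies the heat equation u_t = u_xx. Here C_n = n^2 with integer n,
# matching the periodic case above; n = 3 is an arbitrary illustration.
n = 3

def u(x, t):
    return math.exp(-n * n * t) * math.sin(n * x)

h = 1e-4
x, t = 1.2, 0.1  # arbitrary sample point

# Finite-difference approximations of u_t and u_xx.
u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / (h * h)
assert abs(u_t - u_xx) < 1e-4
print("u_t matches u_xx for the separated mode")
```

The same test passes for any integer n, and for the cosine mode as well.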


To determine the coefficients in such an expansion, we use the orthogonality of the trigonometric functions: for positive integers m and n,

\int_{0}^{2\pi}\sin(nx)\sin(mx)\,dx = \begin{cases} \pi & m = n \\ 0 & m \ne n, \end{cases}  (14)

\int_{0}^{2\pi}\cos(nx)\cos(mx)\,dx = \begin{cases} \pi & m = n \\ 0 & m \ne n, \end{cases}  (15)


\int_{0}^{2\pi}\cos(nx)\sin(mx)\,dx = 0.  (16)
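These orthogonality relations are easy to confirm numerically. The sketch below checks them by composite trapezoidal quadrature on [0, 2\pi], for the arbitrary illustrative choices m = 4 and n = 6:

```python
import math

# Sketch: verify the orthogonality relations numerically by composite
# trapezoidal quadrature on [0, 2*pi]; m and n below are arbitrary choices.
def integrate(f, a=0.0, b=2 * math.pi, N=2000):
    h = (b - a) / N
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, N))
    return h * s

m, n = 4, 6
assert abs(integrate(lambda x: math.sin(n*x) * math.sin(m*x))) < 1e-8      # m != n
assert abs(integrate(lambda x: math.sin(n*x) * math.sin(n*x)) - math.pi) < 1e-8
assert abs(integrate(lambda x: math.cos(n*x) * math.cos(n*x)) - math.pi) < 1e-8
assert abs(integrate(lambda x: math.cos(n*x) * math.sin(m*x))) < 1e-8
print("orthogonality relations confirmed numerically")
```

The trapezoidal rule is exceptionally accurate here because the integrands are smooth and periodic, a fact that underlies spectral methods more generally.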

Thus we can consider these trigonometric functions as orthogonal vectors. It can be shown that a sum of such trigonometric polynomials can be used to approximate a wide class of periodic functions on the interval [0, 2\pi]; for well-behaved functions, only the first few terms in such a sum are required to obtain highly accurate approximations. Thus, we can expand the initial condition in a sum of trigonometric functions,

f(x) = \sum_n \alpha_n \sin(\sqrt{C_n}x)+\beta_n \cos(\sqrt{C_n}x).  (17)

Multiplying the above equation by either \sin(\sqrt{C_n}x) or \cos(\sqrt{C_n}x), integrating over [0, 2\pi], and using the orthogonality of the functions, we deduce that

\alpha_n = \frac{\int_{0}^{2\pi} f(x)\sin(\sqrt{C_n}x)\,dx}{\int_{0}^{2\pi}\sin^{2}(\sqrt{C_n}x)\,dx}  (18)


\beta_{n} = \frac{\int_0^{2\pi} f(x)\cos(\sqrt{C_n}x)\,dx}{\int_0^{2\pi}\cos^2(\sqrt{C_n}x)\,dx}.  (19)
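For a concrete check of these formulas, the sketch below recovers the coefficients of the (illustrative) trigonometric polynomial f(x) = \sin(6x) + \cos(4x) by trapezoidal quadrature, using the fact that \int_0^{2\pi}\sin^2(nx)\,dx = \int_0^{2\pi}\cos^2(nx)\,dx = \pi for n \ge 1:

```python
import math

# Sketch: recover alpha_n and beta_n via the quotient formulas above for the
# illustrative choice f(x) = sin(6x) + cos(4x), so we expect alpha_6 = 1,
# beta_4 = 1, and all other coefficients to vanish.
def integrate(f, N=4000):
    # Composite trapezoidal rule on [0, 2*pi].
    h = 2 * math.pi / N
    return h * (0.5 * (f(0.0) + f(2 * math.pi)) +
                sum(f(i * h) for i in range(1, N)))

def f(x):
    return math.sin(6 * x) + math.cos(4 * x)

# The denominators in the formulas equal pi for n >= 1.
def alpha(n):
    return integrate(lambda x: f(x) * math.sin(n * x)) / math.pi

def beta(n):
    return integrate(lambda x: f(x) * math.cos(n * x)) / math.pi

assert abs(alpha(6) - 1.0) < 1e-8 and abs(beta(4) - 1.0) < 1e-8
assert abs(alpha(4)) < 1e-8 and abs(beta(6)) < 1e-8
print("recovered alpha_6 = beta_4 = 1; other sampled coefficients vanish")
```
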

Most ODEs and PDEs of practical interest are not separable. However, the ideas behind separation of variables can be used to find series solutions to a wide class of PDEs. These series solutions can also be computed numerically and are what we will use to find approximate solutions to PDEs, so the ideas behind these simple examples are quite useful.


1) Solve the ordinary differential equation

u_{t} = u(u - 1)

u(t = 0) = 0.8

using separation of variables.


2) a) Use separation of variables to solve the partial differential equation

u_{tt} = u_{xx}


u(x = 0, t) = u(x = 2\pi, t),

u(x, t = 0) = \sin(6x) + \cos(4x)


u_{t}(x, t = 0) = 0.

b) Create plots of your solution at several different times and/or create an animation of the solution you have found.
c) The procedure required to find the coefficients in the Fourier series expansion for the initial condition can become quite tedious/intractable. Consider the initial condition u(x, t = 0) = exp(sin(x)). Explain why it would be difficult to compute the Fourier coefficients for this by hand. Also explain why it would be nice to have an algorithm or computer program that does this for you.
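As a preview of what such a program might look like, the sketch below approximates the Fourier coefficients of \exp(\sin(x)) by trapezoidal quadrature on the periodic interval (which is exactly the discrete Fourier transform that an FFT computes efficiently):

```python
import math

# Sketch for exercise (c): the Fourier coefficients of f(x) = exp(sin(x))
# have no simple closed form, but a short program approximates them easily.
def integrate(f, N=1024):
    # Trapezoidal rule on a periodic interval: the endpoint samples merge.
    h = 2 * math.pi / N
    return h * sum(f(i * h) for i in range(N))

def f(x):
    return math.exp(math.sin(x))

a0 = integrate(f) / (2 * math.pi)  # mean value of f
alpha = [integrate(lambda x, n=n: f(x) * math.sin(n * x)) / math.pi
         for n in range(1, 6)]
beta = [integrate(lambda x, n=n: f(x) * math.cos(n * x)) / math.pi
        for n in range(1, 6)]

# The coefficients decay rapidly, so a few terms already approximate f well.
print("a0 =", round(a0, 6))
print("alpha_1..5 =", [round(a, 6) for a in alpha])
print("beta_1..5 =", [round(b, 6) for b in beta])
```

For this particular f, the exact coefficients can be expressed through modified Bessel functions, which illustrates why hand computation quickly becomes impractical and why an automated transform is so convenient.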


  1. Evans (2010)
  2. Renardy and Rogers (2004)
  3. Courant and John (1998)
  4. Courant and John (1999)
  5. Boyce and DiPrima (2010)


Boyce, W.E.; DiPrima, R.C. (2010). Elementary Differential Equations and Boundary Value Problems. Wiley. 

Courant, R.; John, F. (1998). Introduction to Calculus and Analysis. I. Springer. 

Courant, R.; John, F. (1999). Introduction to Calculus and Analysis. II. Springer. 

Evans, L.C. (2010). Partial Differential Equations. American Mathematical Society. 

Renardy, M.; Rogers, R.C. (2004). An Introduction to Partial Differential Equations. Springer. 
