Ordinary Differential Equations/One-dimensional first-order linear equations

Definition


One-dimensional first-order inhomogeneous linear ODEs are ODEs of the form

$y'(x) = a(x) y(x) + b(x)$

for suitable (that is, mostly, continuous) functions $a, b : \mathbb{R} \to \mathbb{R}$; note that when $b \equiv 0$, we have a homogeneous equation instead.
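
For a concrete instance, here is a minimal SymPy sketch; the choices $a(x) = 2x$ and $b(x) = x$ are ours, purely for illustration:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Illustrative choices (not from the text): a(x) = 2*x, b(x) = x.
a = 2*x
b = x

# The inhomogeneous equation y'(x) = a(x)*y(x) + b(x); setting b = 0
# would give the associated homogeneous equation.
ode = sp.Eq(y(x).diff(x), a*y(x) + b)
print(sp.dsolve(ode))   # Eq(y(x), C1*exp(x**2) - 1/2)
```

We will reuse this sample equation in the sketches below.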

General solution


First we note that we have the following superposition principle: If we have a solution $y_h$ ("$h$" standing for "homogeneous") of the problem

$y_h'(x) = a(x) y_h(x)$

(which is nothing but the homogeneous problem associated to the above ODE) and a solution to the actual problem $y'(x) = a(x) y(x) + b(x)$; that is, a function $y_p$ such that

$y_p'(x) = a(x) y_p(x) + b(x)$

("$p$" standing for "particular solution", indicating that this is only one of the many possible solutions), then the function

$y(x) = y_p(x) + c \, y_h(x)$   ($c \in \mathbb{R}$ arbitrary)

still solves $y'(x) = a(x) y(x) + b(x)$, just like the particular solution $y_p$ does. This is proved by computing the derivative of $y_p + c \, y_h$ directly:

$(y_p + c \, y_h)'(x) = y_p'(x) + c \, y_h'(x) = a(x) y_p(x) + b(x) + c \, a(x) y_h(x) = a(x) \big( y_p(x) + c \, y_h(x) \big) + b(x).$
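
A quick SymPy check of the superposition principle for the sample equation from above (again, the concrete functions are our illustration, not part of the text):

```python
import sympy as sp

x, c = sp.symbols('x c')

# Sample equation y' = 2*x*y + x:
yh = sp.exp(x**2)          # solves the homogeneous problem y' = 2*x*y
yp = -sp.Rational(1, 2)    # a particular solution of y' = 2*x*y + x
y = yp + c*yh              # superposition with an arbitrary constant c

# The residual vanishes identically, so y solves the ODE for every c.
print(sp.simplify(y.diff(x) - (2*x*y + x)))   # 0
```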

In order to obtain the solutions to the ODE under consideration, we first solve the related homogeneous problem; that is, first we look for $y_h$ such that

$y_h'(x) = a(x) y_h(x)$.

It may seem surprising, but this actually gives a very quick path to the general solution, which goes as follows. Separation of variables (and the fundamental theorem of calculus) gives

$y_h(x) = e^{A(x)}$,

since the function

$A(x) := \int_{x_0}^x a(t) \, dt$

is an antiderivative of $a$. Thus we have found a solution to the related homogeneous problem.
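
In SymPy, for the sample coefficient $a(x) = 2x$ and base point $x_0 = 0$ (both our illustrative choices):

```python
import sympy as sp

x, t = sp.symbols('x t')

# A(x) = integral from 0 to x of a(t) dt with a(t) = 2*t, i.e. A(x) = x**2.
A = sp.integrate(2*t, (t, 0, x))
yh = sp.exp(A)

# y_h' - a*y_h vanishes, so y_h = exp(A) solves the homogeneous problem.
print(sp.simplify(yh.diff(x) - 2*x*yh))   # 0
```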

For the determination of a solution $y_p$ to the actual equation, we now use an Ansatz: namely, we assume

$y_p(x) = c(x) e^{A(x)}$,

where $c$ is a function. This Ansatz is called variation of the constant and is due to Leonhard Euler. If this equation holds for $y_p$, let's see what condition on $c$ we get for $y_p$ to be a solution. We want

$y_p'(x) = a(x) y_p(x) + b(x)$, that is (by the product rule and inserting $y_p$):
$c'(x) e^{A(x)} + c(x) a(x) e^{A(x)} = a(x) c(x) e^{A(x)} + b(x)$.

Cancelling the term $c(x) a(x) e^{A(x)}$ on both sides and putting the exponential on the other side, that is

$c'(x) = b(x) e^{-A(x)}$

or, upon integrating,

$c(x) = \int_{x_0}^x b(t) e^{-A(t)} \, dt$.
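
Carrying this out in SymPy for the sample data $a(x) = 2x$, $b(x) = x$, $x_0 = 0$ (our illustration):

```python
import sympy as sp

x, t = sp.symbols('x t')

# Sample data: a(x) = 2*x, b(x) = x, x0 = 0, hence A(x) = x**2.
A = x**2
c = sp.integrate(t*sp.exp(-t**2), (t, 0, x))   # c(x) = 1/2 - exp(-x**2)/2
yp = c*sp.exp(A)                               # variation-of-the-constant Ansatz

# y_p solves the inhomogeneous equation y' = 2*x*y + x.
print(sp.simplify(yp.diff(x) - (2*x*yp + x)))  # 0
```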

Since all the manipulations we did are reversible (and since an arbitrary constant of integration $c$ may be added to $c(x)$), all functions of the form

$y(x) = e^{A(x)} \left( c + \int_{x_0}^x b(t) e^{-A(t)} \, dt \right)$   ($c \in \mathbb{R}$ arbitrary)

are solutions. If we set

$y_p(x) := e^{A(x)} \int_{x_0}^x b(t) e^{-A(t)} \, dt$,

we get the general solution form

$y(x) = y_p(x) + c \, e^{A(x)}$.

We now want to prove that these constitute all the solutions to the equation under consideration. Thus, set

$y_p(x) := e^{A(x)} \int_{x_0}^x b(t) e^{-A(t)} \, dt$

and let $\tilde y$ be any other solution to the inhomogeneous problem under consideration. Then $\tilde y - y_p$ solves the homogeneous problem, for

$(\tilde y - y_p)'(x) = \tilde y'(x) - y_p'(x) = \big( a(x) \tilde y(x) + b(x) \big) - \big( a(x) y_p(x) + b(x) \big) = a(x) \big( \tilde y(x) - y_p(x) \big)$.

Thus, if we prove that all the homogeneous solutions (and in particular the difference $\tilde y - y_p$) are of the form

$c \, e^{A(x)}$,

then we may subtract

$c \, e^{A(x)}$

from $\tilde y - y_p$ for an appropriate $c$ to obtain zero, which is why $\tilde y$ is then of the desired form.

Thus, let $y_h$ be any solution to the homogeneous problem. Consider the function

$y_h(x) e^{-A(x)}$.

We differentiate this function and obtain by the product rule

$\big( y_h(x) e^{-A(x)} \big)' = y_h'(x) e^{-A(x)} - y_h(x) a(x) e^{-A(x)} = a(x) y_h(x) e^{-A(x)} - y_h(x) a(x) e^{-A(x)} = 0$,

since $y_h$ is a solution to the homogeneous problem. Hence, the function is constant (that is, equal to a constant $c$), and solving

$y_h(x) e^{-A(x)} = c$

for $y_h$ gives the claim.
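
This key step can be replayed symbolically; the snippet below keeps $a$ and $y_h$ generic and only imposes the homogeneous ODE (the base point $x_0 = 0$ is our choice):

```python
import sympy as sp

x, t = sp.symbols('x t')
a = sp.Function('a')
yh = sp.Function('y_h')

# A(x) = integral from 0 to x of a(t) dt, kept unevaluated.
A = sp.Integral(a(t), (t, 0, x))

# Differentiate y_h(x)*exp(-A(x)) and impose y_h' = a*y_h; the derivative
# collapses to 0, so the product is constant.
deriv = (yh(x)*sp.exp(-A)).diff(x)
print(deriv.subs(yh(x).diff(x), a(x)*yh(x)))   # 0
```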

We have thus arrived at:

Theorem 3.1:

For continuous $a, b : \mathbb{R} \to \mathbb{R}$, the solutions to the ODE

$y'(x) = a(x) y(x) + b(x)$

are precisely the functions

$y(x) = e^{A(x)} \left( c + \int_{x_0}^x b(t) e^{-A(t)} \, dt \right)$   ($c \in \mathbb{R}$ arbitrary),

where $A(x) = \int_{x_0}^x a(t) \, dt$.

Note that imposing a condition $y(x_0) = y_0$ for some $y_0 \in \mathbb{R}$ enforces $c = y_0$ (both $A$ and the integral vanish at $x_0$), whence we get a unique solution for each initial condition.
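
As a sanity check of Theorem 3.1 together with the initial-condition remark, again for the sample data $a(x) = 2x$, $b(x) = x$, $x_0 = 0$ (our illustration, not part of the text):

```python
import sympy as sp

x, t, y0 = sp.symbols('x t y0')
y = sp.Function('y')

# Sample data: a(x) = 2*x, b(x) = x, x0 = 0, hence A(x) = x**2.
A = x**2
formula = sp.exp(A)*(y0 + sp.integrate(t*sp.exp(-t**2), (t, 0, x)))

# dsolve with the initial condition y(0) = y0 reproduces the formula
# of the theorem with c = y0.
sol = sp.dsolve(sp.Eq(y(x).diff(x), 2*x*y(x) + x), ics={y(0): y0})
print(sp.simplify(sol.rhs - formula))   # 0
```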

Exercises

  • Exercise 3.2.1: First prove that  . Then solve the ODE   for a function defined on   such that   for   arbitrary. Use that a similar version of Theorem 3.1 holds when $a$ and $b$ are only defined on a proper subset of $\mathbb{R}$; this is because the proof carries over.

Clever Ansatz for polynomial RHS


First note that RHS means "Right Hand Side". Let's consider the special case of a 1-dim. first-order linear ODE

$y'(x) = \lambda y(x) + a_n x^n$   ($\lambda \in \mathbb{R}$ arbitrary),

where we used Einstein summation convention; that is, $a_n x^n$ stands for $\sum_{n=0}^N a_n x^n$ for some $N \in \mathbb{N}$. In the notation from above, we have $a(x) = \lambda$ and $b(x) = a_n x^n$.

Using separation of variables, the solution to the corresponding homogeneous problem $y_h'(x) = \lambda y_h(x)$ is easily seen to equal $y_h(x) = C e^{\lambda x}$ for some constant $C$.
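
In SymPy this is a one-line check (nothing here depends on our earlier sample data):

```python
import sympy as sp

x, lam = sp.symbols('x lambda')
y = sp.Function('y')

# Constant coefficient: y' = lambda*y has general solution C*exp(lambda*x).
print(sp.dsolve(sp.Eq(y(x).diff(x), lam*y(x))))   # Eq(y(x), C1*exp(lambda*x))
```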

To find a particular solution $y_p$, we proceed as follows. We pick the Ansatz to assume that $y_p$ is simply a polynomial; that is,

$y_p(x) = b_n x^n$ (again in Einstein summation convention)

for certain coefficients $b_0, \ldots, b_N$, which are determined by inserting the Ansatz into the ODE and comparing the coefficients of each power of $x$; a worked instance follows below.
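
Here is a small SymPy sketch of this coefficient comparison; the instance $y' = -y + x^2$ (i.e. $\lambda = -1$ with a degree-two right-hand side) is our own illustrative choice:

```python
import sympy as sp

x, b0, b1, b2 = sp.symbols('x b0 b1 b2')

# Polynomial Ansatz of the same degree as the right-hand side x**2.
yp = b2*x**2 + b1*x + b0

# Insert the Ansatz into y' = -y + x**2 and compare powers of x.
residual = sp.expand(yp.diff(x) - (-yp + x**2))
coeffs = sp.solve([residual.coeff(x, k) for k in range(3)], [b0, b1, b2])
print(yp.subs(coeffs))   # x**2 - 2*x + 2
```

One can verify directly that $y_p(x) = x^2 - 2x + 2$ satisfies $y_p'(x) = -y_p(x) + x^2$.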

Exercises

  • Exercise 3.3.1: Find all solutions to the ODE  . (Hint: What does Theorem 3.1 say about the number of solutions to that problem with a given fixed initial condition?)

Examples


Example 3.2: