# Linear Algebra with Differential Equations/Printable version

Linear Algebra with Differential Equations

The current, editable version of this book is available in Wikibooks, the open-content textbooks collection, at
https://en.wikibooks.org/wiki/Linear_Algebra_with_Differential_Equations

Permission is granted to copy, distribute, and/or modify this document under the terms of the Creative Commons Attribution-ShareAlike 3.0 License.

# Introduction

We call a system of the form

${\displaystyle \mathbf {X} '=\mathbf {A} \mathbf {X} +\mathbf {G} (t)}$

homogeneous if ${\displaystyle \mathbf {G} (t)\equiv 0}$. In earlier methods for differential equations, the solution turned out to involve exponentials of the transcendental number e. So, once a uniqueness theorem is available, we can propose a trial solution of this exponential form, substitute it into the equation, and determine whether it works and, if so, how to obtain the solution and its corresponding exponents.

# Results

So, because the exponential function appeared many times in simpler differential equations, we will guess that the solution is ${\displaystyle \mathbf {X} =\mathbf {u} e^{\lambda t}}$, where ${\displaystyle \mathbf {u} }$ is a constant coefficient vector.

Thus:

${\displaystyle \lambda \mathbf {u} e^{\lambda t}=\mathbf {A} \mathbf {u} e^{\lambda t}}$

${\displaystyle \lambda \mathbf {u} =\mathbf {A} \mathbf {u} }$

${\displaystyle (\mathbf {A} -\lambda \mathbf {I} )\mathbf {u} =0}$

There is one more assumption hiding here: that ${\displaystyle \mathbf {A} }$ is a constant matrix. But this last equation is precisely the definition of an eigenvalue-eigenvector pair! Thus for a two-by-two matrix there are two linearly independent solutions, and by the principle of superposition, multiplying a constant vector by the augmented matrix of these two solutions produces the fundamental set of solutions we are looking for.

However, due to the property of these eigenvalues (and that we want real-solutions to help analysis in physical models utilizing these differential techniques), there are different ways of creating the fundamental set of solutions to the three possible cases that the pair of eigenvalues could fall under:

# Homogeneous Linear Differential Equations/Real, Distinct Eigenvalues Method

If the eigenvalues of the characteristic equation are real and distinct, mathematically nothing goes wrong. Thus, by our guess and the existence and uniqueness theorem, for an ${\displaystyle n\times n}$ matrix the solution set is determined by:

${\displaystyle \mathbf {X} =\{\mathbf {u} _{1}e^{\lambda _{1}t};\mathbf {u} _{2}e^{\lambda _{2}t};\ldots ;\mathbf {u} _{n-1}e^{\lambda _{n-1}t};\mathbf {u} _{n}e^{\lambda _{n}t}\}}$

where ${\displaystyle \mathbf {u} _{i}}$ is the eigenvector corresponding to ${\displaystyle \lambda _{i}}$.

Then, since a linear combination of solutions is also a solution (which can be verified directly from the structure of the problem), we can form the general solution:

${\displaystyle \mathbf {X} =c_{1}\mathbf {u} _{1}e^{\lambda _{1}t}+c_{2}\mathbf {u} _{2}e^{\lambda _{2}t}+\cdots +c_{n-1}\mathbf {u} _{n-1}e^{\lambda _{n-1}t}+c_{n}\mathbf {u} _{n}e^{\lambda _{n}t}}$
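This can be checked numerically. The sketch below (using NumPy, with an illustrative symmetric matrix chosen purely for the example) verifies that each eigenpair yields a solution of ${\displaystyle \mathbf {X} '=\mathbf {A} \mathbf {X} }$, and that a linear combination of the eigenpair solutions still satisfies the system:

```python
import numpy as np

# A hypothetical 2x2 constant matrix (an illustrative choice, not from the text).
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# Eigenvalue/eigenvector pairs give candidate solutions X_i = u_i * exp(lambda_i * t).
lams, U = np.linalg.eig(A)

# Each candidate satisfies X' = AX exactly when A u = lambda u.
for lam, u in zip(lams, U.T):
    assert np.allclose(A @ u, lam * u)

def general_solution(c, t):
    """General solution X(t) = sum_i c_i * u_i * exp(lambda_i * t)."""
    return sum(ci * ui * np.exp(li * t) for ci, ui, li in zip(c, U.T, lams))

# Verify X'(t) = A X(t) at one point via a central finite difference.
t, h, c = 0.7, 1e-6, [1.0, 2.0]
Xdot = (general_solution(c, t + h) - general_solution(c, t - h)) / (2 * h)
assert np.allclose(Xdot, A @ general_solution(c, t), atol=1e-4)
```

The finite-difference check is a quick stand-in for differentiating the formula by hand; any diagonalizable constant matrix would work in place of the one chosen here.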

What's interesting is when the eigenvalues are not so simple.

# Homogeneous Linear Differential Equations/Imaginary Eigenvalues Method

When eigenvalues become complex, mathematically there still isn't much wrong. However, in certain physical applications (like oscillations without damping) there is a problem of interpretation: what exactly does an imaginary answer mean? Thus there is a concerted effort to "mathematically hide" the complex quantities in order to achieve a more approachable answer for physicists and engineers. Essentially, for an eigenvalue ${\displaystyle r+i\lambda _{1}}$ with eigenvector ${\displaystyle \alpha +\beta i}$, we have a solution that in part looks like this:

${\displaystyle (\alpha +\beta i)e^{(r+i\lambda _{1})t}}$

But by Euler's formula:

${\displaystyle (\alpha +\beta i)e^{rt}(\cos(\lambda _{1}t)+i\sin(\lambda _{1}t))}$

Now we distribute the terms:

${\displaystyle e^{rt}(\alpha \cos(\lambda _{1}t)-\beta \sin(\lambda _{1}t))+ie^{rt}(\beta \cos(\lambda _{1}t)+\alpha \sin(\lambda _{1}t))}$

Since this is a linear combination of two terms and ${\displaystyle i}$ is a constant (complex, but still a constant), each part is itself a solution, and the general solution can be constructed from the two real-valued pieces.
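The claim that the real and imaginary parts are each solutions can be verified numerically. This sketch uses a hypothetical rotation-like matrix (chosen because its eigenvalues are purely imaginary) and checks both parts against ${\displaystyle \mathbf {X} '=\mathbf {A} \mathbf {X} }$:

```python
import numpy as np

# Hypothetical matrix with purely imaginary eigenvalues +/- i (illustrative choice).
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
lams, U = np.linalg.eig(A)
lam = lams[0]          # one member of the conjugate pair
u = U[:, 0]            # its complex eigenvector

def X(t):
    """Complex solution X(t) = u * exp(lam * t)."""
    return u * np.exp(lam * t)

# The real part and the imaginary part are each real-valued solutions of X' = AX.
t, h = 0.3, 1e-6
for part in (np.real, np.imag):
    Xdot = (part(X(t + h)) - part(X(t - h))) / (2 * h)
    assert np.allclose(Xdot, A @ part(X(t)), atol=1e-4)
```

Splitting one complex solution into two real ones this way is exactly what the Euler-formula manipulation above accomplishes symbolically.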

# Homogeneous Linear Differential Equations/Repeated Eigenvalue Method

When an eigenvalue is repeated, we have a problem similar to that of a repeated root in ordinary differential equations: we get the same solution twice, which is not linearly independent and which suggests that a different second solution exists. Because the situation is so similar to ordinary differential equations, let us try ${\displaystyle \mathbf {X} =\mathbf {u} te^{\lambda t}}$ in ${\displaystyle \mathbf {X} '=\mathbf {A} \mathbf {X} }$; this does not work. However, ${\displaystyle \mathbf {X} =(\mathbf {B} t+\mathbf {C} )e^{\lambda t}}$ DOES work. (For the observant reader, this hints at the changes to the Method of Undetermined Coefficients compared with differential equations without linear algebra.)

In fact, substituting this form shows that ${\displaystyle \mathbf {B} =\mathbf {u} }$, where ${\displaystyle \mathbf {u} }$ is an ordinary eigenvector, and that ${\displaystyle \mathbf {C} =\mathbf {n} }$, where ${\displaystyle \mathbf {n} }$ is a generalized eigenvector defined by ${\displaystyle (\mathbf {A} -\lambda \mathbf {I} )\mathbf {C} =\mathbf {B} .}$

Thus our fundamental set of solutions is: ${\displaystyle \{\mathbf {u} te^{\lambda t}+\mathbf {n} e^{\lambda t};\mathbf {u} e^{\lambda t}\}}$

Using the same process of derivation, higher-order problems can be solved similarly.
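The generalized-eigenvector construction can be checked numerically. This sketch uses a hypothetical defective matrix with the repeated eigenvalue ${\displaystyle \lambda =1}$, solves the singular system ${\displaystyle (\mathbf {A} -\lambda \mathbf {I} )\mathbf {n} =\mathbf {u} }$ by least squares, and verifies that ${\displaystyle (\mathbf {u} t+\mathbf {n} )e^{\lambda t}}$ really solves the system:

```python
import numpy as np

# Hypothetical matrix with the repeated eigenvalue lambda = 1 (illustration only).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lam = 1.0
u = np.array([1.0, 0.0])              # ordinary eigenvector: (A - I) u = 0
assert np.allclose((A - lam * np.eye(2)) @ u, 0.0)

# The generalized eigenvector n solves the singular system (A - lam*I) n = u;
# lstsq handles the singular matrix and returns one particular solution.
n, *_ = np.linalg.lstsq(A - lam * np.eye(2), u, rcond=None)
assert np.allclose((A - lam * np.eye(2)) @ n, u)

def X(t):
    """Second solution X(t) = (u*t + n) * exp(lam * t)."""
    return (u * t + n) * np.exp(lam * t)

# Check X' = AX numerically via a central finite difference.
t, h = 0.5, 1e-6
Xdot = (X(t + h) - X(t - h)) / (2 * h)
assert np.allclose(Xdot, A @ X(t), atol=1e-4)
```

Together with ${\displaystyle \mathbf {u} e^{\lambda t}}$, this gives the two independent solutions of the fundamental set above.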

# Introduction

We now tackle the problem of ${\displaystyle \mathbf {G} (t)}$  being nonzero, so that we have the following problem:

${\displaystyle \mathbf {X} '=\mathbf {AX} +\mathbf {G} (t)}$

There are four reasonable ways to solve this.

# Heterogeneous Linear Differential Equations/Diagonalization

First of all (as the title suggests), ${\displaystyle \mathbf {A} }$ must be diagonalizable. Second, the eigenvalues and eigenvectors of ${\displaystyle \mathbf {A} }$ are found; they form the matrix ${\displaystyle \mathbf {T} }$, an augmented matrix of eigenvectors, and the matrix ${\displaystyle \mathbf {D} }$, which has the corresponding eigenvalues on its main diagonal, each in the same column as its eigenvector. Then, starting from our central problem:

${\displaystyle \mathbf {X} '=\mathbf {AX} +\mathbf {G} (t)}$

We substitute ${\displaystyle \mathbf {X} =\mathbf {TY} }$ (with ${\displaystyle \mathbf {T} }$ constant):

${\displaystyle \mathbf {TY} '=\mathbf {ATY} +\mathbf {G} (t)}$

Then left-multiply by ${\displaystyle \mathbf {T} ^{-1}}$:

${\displaystyle \mathbf {Y} '=\mathbf {T} ^{-1}\mathbf {ATY} +\mathbf {T} ^{-1}\mathbf {G} (t)}$

From linear algebra we have the identity:

${\displaystyle \mathbf {D} =\mathbf {T} ^{-1}\mathbf {AT} }$

Thus:

${\displaystyle \mathbf {Y} '=\mathbf {DY} +\mathbf {T} ^{-1}\mathbf {G} (t)}$

And because ${\displaystyle \mathbf {D} }$ is diagonal, the problem separates into a series of one-dimensional ordinary differential equations, which can be solved for ${\displaystyle \mathbf {Y} }$ and then used to recover ${\displaystyle \mathbf {X} =\mathbf {TY} }$.
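The whole procedure can be sketched numerically. Here a hypothetical symmetric (hence diagonalizable) matrix and a constant forcing vector are assumed for illustration; each decoupled scalar equation ${\displaystyle y'=\lambda y+g}$ has the elementary solution ${\displaystyle y(t)=(y_{0}+g/\lambda )e^{\lambda t}-g/\lambda }$:

```python
import numpy as np

# Hypothetical symmetric matrix and constant forcing G (illustrative choices).
A = np.array([[4.0, 1.0],
              [1.0, 4.0]])
G = np.array([1.0, -1.0])

lams, T = np.linalg.eig(A)            # columns of T are eigenvectors
Tinv = np.linalg.inv(T)
assert np.allclose(Tinv @ A @ T, np.diag(lams))   # the identity D = T^{-1} A T

# Transformed forcing g = T^{-1} G and transformed initial condition.
g = Tinv @ G
y0 = Tinv @ np.array([1.0, 0.0])      # corresponds to X(0) = (1, 0)

def X(t):
    """Solve each scalar equation y' = lam*y + g, then map back via X = T Y."""
    y = (y0 + g / lams) * np.exp(lams * t) - g / lams
    return T @ y

# Verify that X' = A X + G holds, using a central finite difference.
t, h = 0.2, 1e-6
Xdot = (X(t + h) - X(t - h)) / (2 * h)
assert np.allclose(Xdot, A @ X(t) + G, atol=1e-3)
```

The scalar-solution formula used inside `X` is the standard first-order linear result; any diagonalizable `A` with nonzero eigenvalues would serve equally well here.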

# Heterogeneous Linear Differential Equations/Method of Undetermined Coefficients

This is very similar to the Method of Undetermined Coefficients encountered in ordinary differential equations, with slight changes to the "rules" of guessing. Actually, there is only one extra rule: in the ordinary method, when the particular-solution guess conflicted with the characteristic equation, we multiplied by the independent variable; here we instead multiply by ${\displaystyle \mathbf {A} t+\mathbf {B} }$ to include more possible solutions. Also, while working through the problem, keep in mind the significance of obtaining a trivial solution when finding eigenvalues. Other than that, it is much as it was, and it is a very powerful method (although it can be quite tedious).
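In the simplest case of a constant forcing vector, the method amounts to one line of linear algebra: guessing a constant particular solution ${\displaystyle \mathbf {X} _{p}=\mathbf {a} }$ and substituting gives ${\displaystyle 0=\mathbf {A} \mathbf {a} +\mathbf {g} }$, so ${\displaystyle \mathbf {a} =-\mathbf {A} ^{-1}\mathbf {g} }$ when ${\displaystyle \mathbf {A} }$ is invertible. A sketch with an illustrative matrix and forcing:

```python
import numpy as np

# Illustrative invertible matrix and constant forcing (assumptions for the sketch).
A = np.array([[-3.0, 1.0],
              [1.0, -3.0]])
g = np.array([2.0, 0.0])

# Undetermined-coefficients guess X_p = a (constant), so X_p' = 0 = A a + g.
a = -np.linalg.solve(A, g)
assert np.allclose(A @ a + g, 0.0)
```

When the guess conflicts with a homogeneous solution, the vector-polynomial correction described above replaces the constant guess.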

# Heterogeneous Linear Differential Equations/Variation of Parameters

As with variation of parameters in ordinary differential equations (there are many similarities here!), we take a fundamental solution and, multiplying it by a to-be-determined vector, see whether we can obtain another independent solution. In other words, since the general homogeneous solution can be expressed as ${\displaystyle \mathbf {\psi } \mathbf {c} }$, where ${\displaystyle \mathbf {c} }$ is a constant vector and ${\displaystyle \mathbf {\psi } }$ is the augmented matrix of independent solutions to the homogeneous equation, we try a form like so:

${\displaystyle \mathbf {X} =\mathbf {\psi } \mathbf {u} }$

And determine ${\displaystyle \mathbf {u} }$ to find a particular solution. The math is fairly straightforward and left as an exercise for the reader; it leaves us with:

${\displaystyle \mathbf {X} =\mathbf {\psi } (t)\mathbf {\psi } ^{-1}(t_{0})\mathbf {X} ^{0}+\mathbf {\psi } (t)\int _{t_{0}}^{t}\mathbf {\psi } ^{-1}(s)\mathbf {g} (s)ds}$

... which is a powerful and straightforward, yet admittedly complicated, formula.
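The formula can be tested numerically. The sketch below assumes an illustrative diagonal system (so the fundamental matrix ${\displaystyle \mathbf {\psi } (t)}$ can be written down explicitly) and a constant forcing, evaluates the integral by the trapezoidal rule, and checks that the result solves ${\displaystyle \mathbf {X} '=\mathbf {A} \mathbf {X} +\mathbf {g} }$:

```python
import numpy as np

# Illustrative diagonal system; matrix, forcing, and initial data are assumptions.
A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
g = np.array([1.0, 2.0])               # constant forcing vector
X0, t0 = np.array([3.0, -1.0]), 0.0

def psi(t):
    """Fundamental matrix of the homogeneous system X' = AX."""
    return np.diag([np.exp(-t), np.exp(-2.0 * t)])

def X(t, n=2000):
    """Variation-of-parameters formula; integral done by the trapezoidal rule."""
    s = np.linspace(t0, t, n)
    vals = np.array([np.linalg.inv(psi(si)) @ g for si in s])
    integral = ((vals[:-1] + vals[1:]) / 2 * np.diff(s)[:, None]).sum(axis=0)
    return psi(t) @ np.linalg.inv(psi(t0)) @ X0 + psi(t) @ integral

# The formula should produce a solution of X' = AX + g.
t, h = 0.5, 1e-4
Xdot = (X(t + h) - X(t - h)) / (2 * h)
assert np.allclose(Xdot, A @ X(t) + g, atol=1e-3)
```

Note that at ${\displaystyle t=t_{0}}$ the integral vanishes and the formula returns exactly ${\displaystyle \mathbf {X} ^{0}}$, as it should.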

# Heterogeneous Linear Differential Equations/Laplace Transforms

Yet AGAIN, this is very similar to the standard technique. The only nuance is how to take the Laplace transform of a matrix; since the transform is by definition an integral, it is applied to each entry of the matrix. The transform then reduces the problem to an exercise in linear algebra, and the inverse Laplace transform works the same way: entry by entry. It is nearly identical to how the method worked in ordinary differential equations.
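Concretely, transforming ${\displaystyle \mathbf {X} '=\mathbf {A} \mathbf {X} }$ entry by entry gives ${\displaystyle s{\hat {\mathbf {X} }}(s)-\mathbf {X} (0)=\mathbf {A} {\hat {\mathbf {X} }}(s)}$, so ${\displaystyle {\hat {\mathbf {X} }}(s)=(s\mathbf {I} -\mathbf {A} )^{-1}\mathbf {X} (0)}$ — pure linear algebra. The sketch below (with a hypothetical stable matrix chosen so the transform integral converges) checks this against an entrywise numerical Laplace transform of the time-domain solution:

```python
import numpy as np

# Hypothetical stable matrix and initial condition (illustrative assumptions).
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
x0 = np.array([1.0, 1.0])

lams, U = np.linalg.eig(A)
c = np.linalg.solve(U, x0)

def X(t):
    """Time-domain solution e^{At} x0 for an array of times t; shape (len(t), 2)."""
    return (np.exp(np.outer(t, lams)) * c) @ U.T

# Entrywise numerical Laplace transform: integral of e^{-st} X(t) dt (trapezoid).
s = 1.5
t = np.linspace(0.0, 30.0, 20001)
vals = np.exp(-s * t)[:, None] * X(t)
numeric = ((vals[:-1] + vals[1:]) / 2 * np.diff(t)[:, None]).sum(axis=0)

# The algebraic answer from the transformed system agrees.
algebraic = np.linalg.solve(s * np.eye(2) - A, x0)
assert np.allclose(numeric, algebraic, atol=1e-3)
```

The truncation of the integral at a finite upper limit is harmless here because the chosen eigenvalues are negative, so the integrand decays rapidly.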

# Some Graphical Analysis

So far we've dealt with ${\displaystyle \mathbf {A} }$ being a constant matrix, and other niceties; but when that fails, and the differential equation is non-linear, the best way to study solutions is often graphical. By plotting the dependent variables against one another (the phase plane), we can note several types of behavior that suggest the form of a solution.

So without further ado, here are the main types of behavior and their causes:

- A nodal source (trajectories tend away from a point): real, distinct, positive eigenvalues.
- A nodal sink (trajectories approach a point): real, distinct, negative eigenvalues.
- A saddle point (trajectories approach along one direction and deviate away along another): real, distinct eigenvalues of opposite sign.
- A spiral point (trajectories spiral in toward or out from a point): complex eigenvalues with nonzero real part.
- A series of ellipses around a point: purely imaginary eigenvalues.
- A star point (straight lines moving toward or away from a point): repeated eigenvalues.
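For the linear case ${\displaystyle \mathbf {X} '=\mathbf {A} \mathbf {X} }$, these cases can be read straight off the eigenvalues. The sketch below is a simple classifier following the list above (the label "center" is used for the series-of-ellipses case, and the repeated-eigenvalue label is a simplification: a true star point additionally requires a diagonalizable matrix):

```python
import numpy as np

def classify(A, tol=1e-12):
    """Classify the origin of X' = AX from the eigenvalues of a 2x2 matrix A."""
    l1, l2 = np.linalg.eigvals(A)
    if abs(l1.imag) > tol:                     # complex conjugate pair
        return "center" if abs(l1.real) <= tol else "spiral point"
    l1, l2 = l1.real, l2.real
    if abs(l1 - l2) <= tol:                    # repeated (star point if diagonalizable)
        return "star point"
    if l1 > 0 and l2 > 0:
        return "nodal source"
    if l1 < 0 and l2 < 0:
        return "nodal sink"
    return "saddle point"                      # real, distinct, opposite signs

# Illustrative matrices for each behavior:
assert classify(np.array([[2.0, 0.0], [0.0, 1.0]])) == "nodal source"
assert classify(np.array([[-2.0, 0.0], [0.0, -1.0]])) == "nodal sink"
assert classify(np.array([[2.0, 0.0], [0.0, -1.0]])) == "saddle point"
assert classify(np.array([[0.0, -1.0], [1.0, 0.0]])) == "center"
assert classify(np.array([[-1.0, -2.0], [2.0, -1.0]])) == "spiral point"
```

Near an equilibrium of a non-linear system, the same classification is commonly applied to the linearization (the Jacobian matrix), which is what makes this table useful for graphical analysis.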