# Introduction to Mathematical Physics/Some mathematical problems and their solution/Linear evolution problems, spectral method

## Spectral point of view

The spectral method is used to solve linear evolution problems of the type of problem probevollin. Quantum mechanics (see chapters chapmq and chapproncorps) supplies beautiful spectral problems *via* the Schrödinger equation: the eigenvalues of the linear operator considered (the Hamiltonian) are interpreted as the energies associated with states (the eigenfunctions of the Hamiltonian). Electromagnetism also leads to spectral problems (cavity modes).

The spectral method consists in first defining the space on which the operator $L$ of problem probevollin acts, and in providing it with a Hilbert space structure. Functions that satisfy:

$Lu_{k}(x)=\lambda _{k}u_{k}(x)$

are then sought. Once the eigenfunctions $u_{k}(x)$ are found, the problem is reduced to the integration of a diagonal system of ordinary differential equations.
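As a minimal numerical sketch of this reduction (the operator $L$ and initial condition below are invented for illustration): expanding the solution of $du/dt = Lu$ on the eigenvectors of $L$ decouples the system, so each eigen-coordinate obeys $dc_{k}/dt = \lambda_{k} c_{k}$ and evolves as $c_{k}(0)e^{\lambda_{k}t}$.

```python
import numpy as np

# Toy diagonalizable operator and initial condition (assumed example)
L = np.array([[-2.0, 1.0],
              [1.0, -2.0]])
u0 = np.array([1.0, 0.0])

lam, S = np.linalg.eig(L)        # L S = S diag(lam); columns of S are eigenvectors
c0 = np.linalg.solve(S, u0)      # coordinates of u0 in the eigenbasis

def u(t):
    # each mode evolves independently: c_k(t) = c_k(0) exp(lam_k t)
    return S @ (c0 * np.exp(lam * t))

# check that u(t) satisfies du/dt = L u (central finite difference)
t, h = 0.5, 1e-6
du_dt = (u(t + h) - u(t - h)) / (2 * h)
assert np.allclose(du_dt, L @ u(t), atol=1e-6)
```

The same decoupling underlies the spectral treatment of the linear evolution problems discussed in this chapter.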

The following problem is a particular case of linear evolution problem (one speaks of a linear response problem):

Problem:

Find $\phi \in V$  such that:

${\frac {d\phi }{dt}}=(H_{0}+H(t))\phi$

where $H_{0}$  is a linear diagonalisable operator and $H(t)$  is a linear operator "small" with respect to $H_{0}$ .

This problem can be tackled by using a spectral method. Section secreplinmq presents an example of linear response in quantum mechanics.

## Some spectral analysis theorems

In this section, some results on the spectral analysis of a linear operator $L$ are presented. Proofs are given for the case where $L$ is a linear operator acting from a finite-dimensional space $E$ to itself. The infinite-dimensional case is treated in specialized books (see for instance [ma:equad:Dautray5]). Let $L$ be an operator acting on $E$. The spectral problem associated with $L$ is:

Problem:

Find nonzero vectors $u\in E$ (called eigenvectors) and numbers $\lambda$ (called eigenvalues) such that:

$Lu=\lambda u$

Here is a fundamental theorem:

Theorem:

The following conditions are equivalent:

1. $\exists u\neq 0$ such that $Lu=\lambda u$
2. the matrix $L-\lambda I$ is singular
3. $\det(L-\lambda I)=0$

A matrix is said to be diagonalisable if there exists a basis in which it has a diagonal form ([ma:algeb:Strang76]).

Theorem:

If a square matrix $L$ of dimension $n$ has $n$ linearly independent eigenvectors, then $L$ is diagonalisable. Moreover, if those vectors are chosen as the columns of a matrix $S$, then:

$\Lambda =S^{-1}LS{\mbox{ with }}\Lambda {\mbox{ diagonal }}$

Proof:

Let us write the vectors $u_{i}$ as the columns of matrix $S$ and calculate $LS$:

$LS=L\left({\begin{array}{cccc}\vdots &\vdots &&\vdots \\u_{1}&u_{2}&\ldots &u_{n}\\\vdots &\vdots &&\vdots \\\end{array}}\right)$

$=\left({\begin{array}{cccc}\vdots &\vdots &&\vdots \\\lambda _{1}u_{1}&\lambda _{2}u_{2}&\ldots &\lambda _{n}u_{n}\\\vdots &\vdots &&\vdots \\\end{array}}\right)$

$=\left({\begin{array}{cccc}\vdots &\vdots &&\vdots \\u_{1}&u_{2}&\ldots &u_{n}\\\vdots &\vdots &&\vdots \\\end{array}}\right)\left({\begin{array}{cccc}\lambda _{1}&&&\\&\lambda _{2}&&\\&&\ddots &\\&&&\lambda _{n}\end{array}}\right)$

$LS=S\Lambda$

Matrix $S$  is invertible since vectors $u_{i}$  are supposed linearly independent, thus:

$\Lambda =S^{-1}LS$
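The factorization can be checked numerically; a minimal sketch with an invented $2\times 2$ matrix whose eigenvalues are distinct (hence whose eigenvectors are independent):

```python
import numpy as np

# Assumed toy matrix with distinct eigenvalues 2 and 3
L = np.array([[2.0, 1.0],
              [0.0, 3.0]])

lam, S = np.linalg.eig(L)            # columns of S are the eigenvectors u_i
Lambda = np.linalg.inv(S) @ L @ S    # Lambda = S^{-1} L S

# Lambda is diagonal, with the eigenvalues on its diagonal
assert np.allclose(Lambda, np.diag(lam))
```

Conversely, $L = S\Lambda S^{-1}$ recovers the original matrix, which is the content of the theorem above.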

Remark: If a matrix $L$ has $n$ distinct eigenvalues, then its eigenvectors are linearly independent.

Let us assume that the space $E$ is a Hilbert space equipped with the scalar product $\langle \cdot |\cdot \rangle$.

Definition:

The adjoint $L^{*}$ of an operator $L$ is defined by:

$\forall u,v,\quad \langle L^{*}u|v\rangle =\langle u|Lv\rangle$

Definition:

A self-adjoint operator is an operator $L$ such that $L=L^{*}$.

Theorem:

For every Hermitian operator $L$, there exists at least one basis of orthonormal eigenvectors. $L$ is diagonal in this basis, and the diagonal elements are its eigenvalues.

Proof:

Consider a space $E_{n}$ of dimension $n$. Let $|u_{1}\rangle$ be an eigenvector associated with an eigenvalue $\lambda _{1}$ of $L$. Consider a basis of the space formed by $|u_{1}\rangle$ together with any basis of its orthogonal complement $E_{n-1}^{\perp }$. In this basis:

$L=\left({\begin{array}{cccc}\lambda _{1}&&v&\\0&&&\\\vdots &&B&\\0&&&\\\end{array}}\right)$

The first column of $L$ is the image of $u_{1}$. Since $L$ is Hermitian:

$L=\left({\begin{array}{cccc}\lambda _{1}&0&\ldots &0\\0&&&\\\vdots &&B&\\0&&&\\\end{array}}\right)$

The property is proved by induction on the dimension.

Theorem:

The eigenvalues of a Hermitian operator $L$ are real.

Proof:

Consider the spectral equation:

$L|u\rangle =\lambda |u\rangle$

Multiplying it by $\langle u|$, one obtains:

$\langle u|Lu\rangle =\lambda \langle u|u\rangle$

The complex conjugate of this equation is:

$\langle u|L^{*}u\rangle =\lambda ^{*}\langle u|u\rangle$

Since $\langle u|u\rangle$ is real and $L^{*}=L$, one has $\lambda =\lambda ^{*}$.

Theorem:

Two eigenvectors $|u_{1}\rangle$ and $|u_{2}\rangle$ associated with two distinct eigenvalues $\lambda _{1}$ and $\lambda _{2}$ of a Hermitian operator are orthogonal.

Proof:

By definition:

$L|u_{1}\rangle =\lambda _{1}|u_{1}\rangle$

$L|u_{2}\rangle =\lambda _{2}|u_{2}\rangle$

Thus:

$\langle u_{2}|Lu_{1}\rangle =\lambda _{1}\langle u_{2}|u_{1}\rangle$

$\langle u_{1}|Lu_{2}\rangle =\lambda _{2}\langle u_{1}|u_{2}\rangle$

Taking the complex conjugate of the second equation (the eigenvalues of a Hermitian operator being real) and subtracting it from the first yields:

$0=(\lambda _{1}-\lambda _{2})\langle u_{2}|u_{1}\rangle$

which implies the result.
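The two theorems above can be illustrated numerically; a minimal sketch with a randomly generated Hermitian matrix (the matrix is invented for illustration):

```python
import numpy as np

# Build a random complex Hermitian matrix H = H*
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (A + A.conj().T) / 2

lam, U = np.linalg.eigh(H)   # eigh exploits hermiticity

# eigenvalues are real (eigh returns a real array)
assert np.isrealobj(lam)
# eigenvectors form an orthonormal basis: U* U = I
assert np.allclose(U.conj().T @ U, np.eye(4))
```

In that orthonormal eigenbasis, $U^{*}HU$ is diagonal with the (real) eigenvalues on its diagonal, as stated above.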

Let us now present some methods and tips for solving spectral problems.


## Solving spectral problems

The fundamental step in solving linear evolution problems by the spectral method is the spectral analysis of the linear operator involved. It can be done numerically, but two cases lend themselves to spectral analysis by hand: the case where there are symmetries, and the case where a perturbative approach is possible.

### Using symmetries

The use of symmetries relies on the following fundamental theorem:

Theorem:

If an operator $L$ commutes with an operator $T$, then the eigenvectors of $T$ are also eigenvectors of $L$.

The proof is given in appendix chapgroupes. Applications of rotation invariance are presented in section secpotcent. Bloch's theorem deals with translation invariance (see theorem theobloch in section sectheobloch).
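A minimal numerical sketch of the theorem (the operators below are invented, with $T$ chosen with nondegenerate eigenvalues so its eigenvectors are fixed up to phase):

```python
import numpy as np

# Two commuting operators: start from simultaneously diagonal L and T,
# then rotate both into a less trivial basis, preserving [L, T] = 0.
T = np.diag([1.0, 2.0, 3.0])     # "symmetry" operator, distinct eigenvalues
L = np.diag([5.0, -1.0, 4.0])
Q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((3, 3)))
Lr, Tr = Q @ L @ Q.T, Q @ T @ Q.T

assert np.allclose(Lr @ Tr, Tr @ Lr)   # they commute

# the eigenvectors of T diagonalize L as well
_, V = np.linalg.eigh(Tr)
D = V.T @ Lr @ V
assert np.allclose(D, np.diag(np.diag(D)), atol=1e-10)
```

This is why knowing a symmetry operator $T$ that commutes with $L$ reduces the spectral analysis of $L$ to that of the (often simpler) operator $T$.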

### Perturbative approximation

A perturbative approach can be considered whenever the operator $U$ to be diagonalized can be written as the sum of an operator $U_{0}$ whose spectral analysis is known and an operator $U_{1}$ that is small with respect to $U_{0}$. The problem to be solved is then the following:


$U|\phi \rangle =\lambda |\phi \rangle$

Introducing the parameter $\epsilon$ , it is assumed that $U$  can be expanded as:

$U=U_{0}+\epsilon U_{1}+\epsilon ^{2}U_{2}+...$

Let us assume that the eigenvectors can be expanded in powers of $\epsilon$; for the $i$-th eigenvector:


$|\phi ^{i}\rangle =|\phi _{0}^{i}\rangle +\epsilon |\phi _{1}^{i}\rangle +\epsilon ^{2}|\phi _{2}^{i}\rangle +\dots$

The spectral equation defines each eigenvector only up to a factor: if $|\phi ^{i}\rangle$ is a solution, then $a\,e^{i\theta }|\phi ^{i}\rangle$ is also a solution. Let us fix the norm of the eigenvectors to $1$. The phase can also be chosen: we impose that the phase of $|\phi ^{i}\rangle$ be the phase of $|\phi _{0}^{i}\rangle$. The approximated vectors $|\phi ^{i}\rangle$ and $|\phi ^{j}\rangle$ are required to be exactly orthogonal:

$\langle \phi ^{i}|\phi ^{j}\rangle =0$

Equating the coefficients of $\epsilon ^{k}$, one gets:


$\langle \phi _{0}^{i}|\phi _{k}^{j}\rangle +\langle \phi _{1}^{i}|\phi _{k-1}^{j}\rangle +\ldots +\langle \phi _{k}^{i}|\phi _{0}^{j}\rangle =0$

The approximated eigenvectors are imposed to be exactly normed, with $\langle \phi _{0}^{i}|\phi _{j}^{i}\rangle$ real; at order zero:

$\langle \phi _{0}^{i}|\phi _{0}^{i}\rangle =1$

Equating the coefficients of $\epsilon ^{k}$ with $k\geq 1$ in the product $\langle \phi ^{i}|\phi ^{i}\rangle =1$, one gets:

$\langle \phi _{0}^{i}|\phi _{k}^{i}\rangle +\langle \phi _{1}^{i}|\phi _{k-1}^{i}\rangle +\ldots +\langle \phi _{k}^{i}|\phi _{0}^{i}\rangle =0.$

Substituting these expansions into the spectral equation and equating the coefficients of successive powers of $\epsilon$ yields:


$U_{0}|\phi _{j}^{i}\rangle +U_{1}|\phi _{j-1}^{i}\rangle +\dots +U_{j}|\phi _{0}^{i}\rangle =\lambda _{0}^{i}|\phi _{j}^{i}\rangle +\lambda _{1}^{i}|\phi _{j-1}^{i}\rangle +\dots +\lambda _{j}^{i}|\phi _{0}^{i}\rangle$

Projecting the previous equations onto the zero-order eigenvectors and using the orthogonality conditions above, successive corrections to the eigenvectors and eigenvalues are obtained.
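At first order, projecting onto $|\phi _{0}^{i}\rangle$ gives the standard correction $\lambda _{1}^{i}=\langle \phi _{0}^{i}|U_{1}\phi _{0}^{i}\rangle$. A minimal numerical sketch (the operators $U_{0}$, $U_{1}$ and the value of $\epsilon$ below are invented for illustration), comparing the first-order eigenvalues against exact diagonalization:

```python
import numpy as np

U0 = np.diag([1.0, 2.0, 4.0])        # unperturbed operator, known spectrum
rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))
U1 = (B + B.T) / 2                   # symmetric perturbation
eps = 1e-3                           # small parameter

lam0, phi0 = np.linalg.eigh(U0)      # zero-order eigenvalues and eigenvectors
# first-order corrections: lambda_1^i = <phi_0^i | U_1 | phi_0^i>
lam1 = np.array([phi0[:, i] @ U1 @ phi0[:, i] for i in range(3)])

exact = np.linalg.eigvalsh(U0 + eps * U1)
# the first-order estimate matches the exact spectrum up to O(eps^2)
assert np.allclose(exact, lam0 + eps * lam1, atol=1e-5)
```

The residual error is of order $\epsilon ^{2}$, consistent with having truncated the expansion at first order.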

### Variational approximation

Just as the problem

Problem:

Find $u$  such that:

1. $Lu=f,u\in E,x\in \Omega$

2. $u$  satisfies boundary conditions on the border $\partial \Omega$  of $\Omega$ .

can be solved by a variational method, the spectral problem:

Problem:

Find $u$  and $\lambda$  such that:

1. $Lu-\lambda u=f,u\in E,x\in \Omega$

2. $u$  satisfies boundary conditions on the border $\partial \Omega$  of $\Omega$ .

can also be solved by variational methods. In the case where $L$ is self-adjoint and $f$ is zero (the quantum mechanics case), the problem can be reduced to a minimization problem. In particular, one can show that:

Theorem:

The eigenvector $\phi$ with lowest energy $E_{0}$ of a self-adjoint operator $H$ is the solution of the problem: find $\phi$ of norm $1$ such that:

$J(\phi )=\min _{\psi \in V}J(\psi )$

where $J(\psi )=\langle \psi |H\psi \rangle$.

Eigenvalue associated to $\phi$  is $J(\phi )$ .

A proof is given in ([ph:mecaq:Cohen73], [ph:mecaq:Pauling60]). In practice, a family of vectors $v_{i}$ of $V$ is chosen, and one hopes that the eigenvector $\phi$ is well approximated by some linear combination of those vectors:

$\phi =\sum c_{i}v_{i}$

Solving the minimization problem is equivalent to finding the coefficients $c_{i}$. In chapter chapproncorps, we will see several examples of good choices of the family $v_{i}$.
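A minimal sketch of this Rayleigh-Ritz procedure (the operator $H$ and the trial family below are invented for illustration): minimizing $J$ over the span of the $v_{i}$ reduces to diagonalizing the small matrix $\langle v_{i}|Hv_{j}\rangle$, and the resulting estimate is an upper bound on the true ground energy.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 8))
H = (A + A.T) / 2                 # a self-adjoint operator on R^8
V = rng.standard_normal((8, 3))   # three trial vectors v_i as columns
Q, _ = np.linalg.qr(V)            # orthonormalize the family

Hred = Q.T @ H @ Q                # H projected on span{v_i}
mu, C = np.linalg.eigh(Hred)
E_var = mu[0]                     # variational estimate of the ground energy
phi = Q @ C[:, 0]                 # approximate ground state, sum_i c_i v_i

E0 = np.linalg.eigvalsh(H)[0]     # exact lowest eigenvalue
assert E_var >= E0 - 1e-12        # variational estimates bound E0 from above
```

Enlarging the trial family can only lower $E_{\mathrm{var}}$, which is why good choices of the $v_{i}$ (guided by symmetries and physical intuition) are essential.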

Remark: In variational calculations, as well as in perturbative calculations, symmetries should be exploited whenever they occur to simplify the solution of spectral problems (see chapter chapproncorps).

1. This is not obvious from a mathematical point of view (see [ma:equad:Kato66])