# Partial Differential Equations/The Malgrange-Ehrenpreis theorem

## Vandermonde's matrix

Definition 10.1:

Let ${\displaystyle n\in \mathbb {N} }$  and let ${\displaystyle x_{1},\ldots ,x_{n}\in \mathbb {R} }$ . Then the Vandermonde matrix associated to ${\displaystyle x_{1},\ldots ,x_{n}}$  is defined to be the matrix

${\displaystyle {\begin{pmatrix}x_{1}&\cdots &x_{n}\\x_{1}^{2}&\cdots &x_{n}^{2}\\\vdots &\ddots &\vdots \\x_{1}^{n}&\cdots &x_{n}^{n}\end{pmatrix}}}$ .

For ${\displaystyle x_{1},\ldots ,x_{n}}$  pairwise different (i.e. ${\displaystyle x_{k}\neq x_{m}}$  for ${\displaystyle k\neq m}$ ) and nonzero, this matrix is invertible, as the following theorem shows:
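
As a quick numerical sanity check (not part of the proof; the sample points below are arbitrary example values), one can build this matrix and verify that its determinant factors as ${\displaystyle x_{1}\cdots x_{n}\prod _{k<m}(x_{m}-x_{k})}$ , which is nonzero exactly when the points are pairwise different and nonzero:

```python
import numpy as np

# Hypothetical sample points: pairwise different and nonzero
x = np.array([1.0, 2.0, -0.5, 3.0])
n = len(x)

# The matrix from Definition 10.1: entry in row j, column m is x_m^j, j = 1, ..., n
A = np.array([[xm ** j for xm in x] for j in range(1, n + 1)])

# The determinant factors as x_1 * ... * x_n times prod_{k<m} (x_m - x_k)
expected = np.prod(x) * np.prod([x[m] - x[k] for k in range(n) for m in range(k + 1, n)])
print(np.isclose(np.linalg.det(A), expected))  # True
```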

Theorem 10.2:

Let ${\displaystyle \mathbf {A} }$  be the Vandermonde matrix associated to the pairwise different, nonzero points ${\displaystyle x_{1},\ldots ,x_{n}}$ . Then the matrix ${\displaystyle \mathbf {B} }$  whose ${\displaystyle k,m}$ -th entry is given by

${\displaystyle \mathbf {b} _{k,m}:={\begin{cases}{\frac {(-1)^{n-m}\sum _{1\leq l_{1}<\cdots <l_{n-m}\leq n \atop l_{1},\ldots ,l_{n-m}\neq k}x_{l_{1}}\cdots x_{l_{n-m}}}{x_{k}\prod _{1\leq l\leq n \atop l\neq k}(x_{k}-x_{l})}}&m\neq n\\{\frac {1}{x_{k}\prod _{1\leq l\leq n \atop l\neq k}(x_{k}-x_{l})}}&m=n\end{cases}}}$

is the inverse matrix of ${\displaystyle \mathbf {A} }$ .

Proof:

We prove that ${\displaystyle \mathbf {B} \mathbf {A} =\mathbf {I} _{n}}$ , where ${\displaystyle \mathbf {I} _{n}}$  is the ${\displaystyle n\times n}$  identity matrix. Since ${\displaystyle \mathbf {A} }$  is a square matrix, this already implies ${\displaystyle \mathbf {A} \mathbf {B} =\mathbf {I} _{n}}$  as well.

Let ${\displaystyle 1\leq k,m\leq n}$ . We first note that, by direct multiplication,

${\displaystyle x_{m}\prod _{1\leq l\leq n \atop l\neq k}(x_{m}-x_{l})=\sum _{j=1}^{n}x_{m}^{j}{\begin{cases}(-1)^{n-j}\sum _{1\leq l_{1}<\cdots <l_{n-j}\leq n \atop l_{1},\ldots ,l_{n-j}\neq k}x_{l_{1}}\cdots x_{l_{n-j}}&j\neq n\\1&j=n\end{cases}}}$ ,

which is just the expansion of the polynomial ${\displaystyle x\prod _{1\leq l\leq n \atop l\neq k}(x-x_{l})}$  into powers of ${\displaystyle x}$ , evaluated at ${\displaystyle x=x_{m}}$ .

Therefore, if ${\displaystyle \mathbf {c} _{k,m}}$  is the ${\displaystyle k,m}$ -th entry of the matrix ${\displaystyle \mathbf {B} \mathbf {A} }$ , then by the definition of matrix multiplication

${\displaystyle \mathbf {c} _{k,m}=\sum _{j=1}^{n}{\frac {x_{m}^{j}{\begin{cases}(-1)^{n-j}\sum _{1\leq l_{1}<\cdots <l_{n-j}\leq n \atop l_{1},\ldots ,l_{n-j}\neq k}x_{l_{1}}\cdots x_{l_{n-j}}&j\neq n\\1&j=n\end{cases}}}{x_{k}\prod _{1\leq l\leq n \atop l\neq k}(x_{k}-x_{l})}}={\frac {x_{m}\prod _{1\leq l\leq n \atop l\neq k}(x_{m}-x_{l})}{x_{k}\prod _{1\leq l\leq n \atop l\neq k}(x_{k}-x_{l})}}={\begin{cases}1&k=m\\0&k\neq m\end{cases}}}$ ,

since for ${\displaystyle m\neq k}$  the product in the numerator contains the factor ${\displaystyle x_{m}-x_{m}=0}$ , while for ${\displaystyle m=k}$  numerator and denominator coincide.${\displaystyle \Box }$
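
The formula of Theorem 10.2 can also be checked numerically. The sketch below (sample points are arbitrary hypothetical values; the elementary symmetric sums in the numerator are computed with `itertools.combinations`) builds ${\displaystyle \mathbf {B} }$  entry by entry and confirms ${\displaystyle \mathbf {B} \mathbf {A} =\mathbf {I} _{n}}$ :

```python
import numpy as np
from itertools import combinations

# Hypothetical sample points: pairwise different and nonzero
x = [1.0, 2.0, -0.5, 3.0]
n = len(x)

# Vandermonde matrix of Definition 10.1: row j, column m holds x_m^j, j = 1, ..., n
A = np.array([[xm ** j for xm in x] for j in range(1, n + 1)])

def b(k, m):
    """Entry b_{k,m} from Theorem 10.2 (1-based indices, as in the text)."""
    others = [x[l] for l in range(n) if l != k - 1]
    denom = x[k - 1] * np.prod([x[k - 1] - xl for xl in others])
    if m == n:
        return 1.0 / denom
    # elementary symmetric sum of degree n - m in the points x_l, l != k
    e = sum(np.prod(c) for c in combinations(others, n - m))
    return (-1) ** (n - m) * e / denom

B = np.array([[b(k, m) for m in range(1, n + 1)] for k in range(1, n + 1)])
print(np.allclose(B @ A, np.eye(n)))  # True
```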

## The Malgrange-Ehrenpreis theorem

Lemma 10.3:

Let ${\displaystyle x_{1},\ldots ,x_{n}\in \mathbb {R} }$  be pairwise different and nonzero. The solution to the equation

${\displaystyle {\begin{pmatrix}x_{1}&\cdots &x_{n}\\x_{1}^{2}&\cdots &x_{n}^{2}\\\vdots &\ddots &\vdots \\x_{1}^{n}&\cdots &x_{n}^{n}\end{pmatrix}}{\begin{pmatrix}y_{1}\\\vdots \\y_{n}\end{pmatrix}}={\begin{pmatrix}0\\\vdots \\0\\1\end{pmatrix}}}$

is given by

${\displaystyle y_{k}={\frac {1}{x_{k}\prod _{1\leq l\leq n \atop l\neq k}(x_{k}-x_{l})}}}$ , ${\displaystyle k\in \{1,\ldots ,n\}}$ .

Proof:

We multiply both sides of the equation by ${\displaystyle \mathbf {B} }$  from the left, where ${\displaystyle \mathbf {B} }$  is as in Theorem 10.2. Since ${\displaystyle \mathbf {B} }$  is the inverse of

${\displaystyle {\begin{pmatrix}x_{1}&\cdots &x_{n}\\x_{1}^{2}&\cdots &x_{n}^{2}\\\vdots &\ddots &\vdots \\x_{1}^{n}&\cdots &x_{n}^{n}\end{pmatrix}}}$ ,

we end up with the equation

${\displaystyle {\begin{pmatrix}y_{1}\\\vdots \\y_{n}\end{pmatrix}}=\mathbf {B} {\begin{pmatrix}0\\\vdots \\0\\1\end{pmatrix}}}$ .

Calculating the last expression directly shows that ${\displaystyle y_{k}}$  equals the ${\displaystyle k,n}$ -th entry of ${\displaystyle \mathbf {B} }$ , which by Theorem 10.2 is exactly the claimed formula.${\displaystyle \Box }$
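
A short numerical check of Lemma 10.3 (again with arbitrary hypothetical sample points): the vector built from the lemma's formula should solve the system with right-hand side ${\displaystyle (0,\ldots ,0,1)^{T}}$ .

```python
import numpy as np

# Hypothetical sample points: pairwise different and nonzero
x = np.array([1.0, 2.0, -0.5, 3.0])
n = len(x)

# Vandermonde matrix of Definition 10.1
A = np.array([[xm ** j for xm in x] for j in range(1, n + 1)])

# y_k = 1 / (x_k * prod_{l != k} (x_k - x_l)), the formula of Lemma 10.3
y = np.array([1.0 / (x[k] * np.prod([x[k] - x[l] for l in range(n) if l != k]))
              for k in range(n)])

rhs = np.zeros(n)
rhs[-1] = 1.0
print(np.allclose(A @ y, rhs))  # True
```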