# Correlated Gaussian method in Quantum Mechanics

Explicitly correlated Gaussian functions have been used extensively in quantum-mechanical variational calculations in atomic, molecular, and nuclear physics. This book is an attempt to collect the relevant information about this established tool in computational quantum mechanics.

## Introduction

Bound-state and scattering problems in nuclear and atomic physics are often described through the Schrödinger equation. Unfortunately, modern quantum physics offers many problems that cannot be solved analytically. Luckily, the availability of powerful computers is shifting the emphasis from analytical solutions toward numerical analysis. During the last century numerous methods were developed to approximate solutions numerically, e.g. Monte Carlo simulation, the hyperspherical expansion, and variational methods with different trial wave functions.

In this section we discuss the variational method with trial functions in the form of correlated Gaussians, which is widely used in modern physics. Mathematically it is based on the Ritz theorem, which states that for an arbitrary function Ψ from the state space the expectation value of the Hamiltonian, ⟨Ψ|H|Ψ⟩/⟨Ψ|Ψ⟩, is larger than or equal to the ground state energy. Choosing different trial wave functions and calculating the expectation value of the Hamiltonian for these functions therefore gives an upper bound for the ground state energy.

#### Example

To illustrate the idea of the method we consider two particles in one dimension interacting through the oscillator potential,

$\bigg[-\frac{1}{2}\frac{\partial^2}{\partial{x}_1^2}-\frac{1}{2}\frac{\partial^2}{\partial{x}_2^2}+\frac{1}{2}(x_1-x_2)^2\bigg]\phi=E\phi \; .$

This is a simple textbook problem with the ground state solution

$\phi_0=(\frac{\sqrt{2}}{\pi})^{1/4}\exp(-\frac{1}{2\sqrt{2}}(x_1-x_2)^2) \; , E_0=\frac{1}{\sqrt{2}} \; ,$

where we assumed for simplicity that the total momentum is equal to zero.

To show how the method works we choose the trial wave function in the form of a single Gaussian,

$f_{tr}=(\frac{4\alpha}{\pi})^{1/4}\exp(-\alpha(x_1-x_2)^2) \; ,$

which contains only one real positive parameter, which has to be chosen to minimize the energy. The idea of the method is to pick this parameter stochastically using just a random-number generator. We find that, independently of the seed, after 50 attempts we obtain a value of α that reproduces the ground state energy to 5 significant digits.
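The random search just described can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions: for the trial function $\exp(-\alpha(x_1-x_2)^2)$ and the relative-coordinate Hamiltonian $-\frac{d^2}{dx^2}+\frac{x^2}{2}$ of this example, the expectation value works out in closed form to $E(\alpha)=\alpha+\frac{1}{8\alpha}$; the search interval for α is our choice.

```python
import random

def energy(alpha):
    # <H> for the trial function exp(-alpha*(x1 - x2)^2):
    # in the relative coordinate x = x1 - x2 the Hamiltonian is
    # -d^2/dx^2 + x^2/2, giving <T> = alpha and <V> = 1/(8*alpha).
    return alpha + 1.0 / (8.0 * alpha)

def random_search(attempts, seed):
    """Pick alpha stochastically, keeping the lowest energy found."""
    rng = random.Random(seed)
    best_alpha, best_e = None, float("inf")
    for _ in range(attempts):
        a = rng.uniform(1e-3, 5.0)   # assumed search interval
        e = energy(a)
        if e < best_e:
            best_alpha, best_e = a, e
    return best_alpha, best_e
```

The minimum $E(\tfrac{1}{2\sqrt{2}})=\tfrac{1}{\sqrt{2}}$ is approached from above, exactly as the Ritz theorem guarantees.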

Convergence of the expectation value of the Hamiltonian (different curves correspond to different starting seeds of the random-number generator).

Of course this is a very simple example, and we can establish the energy with high precision only because the variational space contains the exact ground state wave function of the Hamiltonian.

To obtain excited states it is not enough to use just one Gaussian, so we pick the trial wave function in a more general form,

$f_{tr}=\sum_{i=1}^N c_i\exp(-\alpha_i(x_1-x_2-s_i)^2) \; .$

As above, we pick the parameters $\alpha_i,s_i$ stochastically and then determine the linear parameters $c_i$ by demanding a minimal expectation value of the Hamiltonian. Using N=25 and one random set of the parameters $\alpha_i,s_i$ (we assume that $0<\alpha_i<5$ and $-1<s_i<1$) we get the first 6 eigenstates with 5 significant digits.
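This procedure can be sketched as follows. The sketch is illustrative, not from the original text: the matrix elements are derived analytically for the shifted Gaussians $e^{-\alpha(x-s)^2}$ and the relative-coordinate Hamiltonian $-\frac{d^2}{dx^2}+\frac{x^2}{2}$ of this example, and the linear parameters $c_i$ come from the generalized eigenvalue problem $Hc=ESc$ (solved here by projecting out near-linear-dependent directions of the overlap matrix). The lower bound 0.05 on $\alpha_i$ is our choice, made to keep the overlap matrix well conditioned.

```python
import numpy as np

def matrix_elements(a, sa, b, sb):
    """Analytic <g_a|g_b> and <g_a|H|g_b> for g(x) = exp(-a*(x-s)^2)
    with H = -d^2/dx^2 + x^2/2 in the relative coordinate."""
    B = a + b
    c = (a * sa + b * sb) / B               # centre of the product Gaussian
    S = np.sqrt(np.pi / B) * np.exp(-a * b * (sa - sb) ** 2 / B)
    x2 = c ** 2 + 1.0 / (2.0 * B)           # <x^2> under the product Gaussian
    T = 4 * a * b * (x2 - (sa + sb) * c + sa * sb) * S
    V = 0.5 * x2 * S
    return S, T + V

def spectrum(alphas, shifts):
    n = len(alphas)
    S = np.empty((n, n)); H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            S[i, j], H[i, j] = matrix_elements(alphas[i], shifts[i],
                                               alphas[j], shifts[j])
    # generalized eigenproblem H c = E S c: whiten with the overlap matrix,
    # discarding numerically linear-dependent directions
    w, U = np.linalg.eigh(S)
    keep = w > 1e-10 * w.max()
    X = U[:, keep] / np.sqrt(w[keep])
    return np.linalg.eigvalsh(X.T @ H @ X)

rng = np.random.default_rng(0)
E = spectrum(rng.uniform(0.05, 5.0, 25), rng.uniform(-1.0, 1.0, 25))
```

The exact spectrum of this problem is $E_n=\sqrt{2}\,(n+\tfrac12)$, so the quality of the low-lying states can be checked directly against `E`.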

From this simple example we learn that we can approximate solutions of the Schrödinger equation without any preliminary knowledge about the system, using only a random search. The main problem is to estimate how good this approximation is.


## The system under consideration

We consider a non-relativistic quantum-mechanical $N$-body system in $D$ dimensions, generally described by the Hamiltonian

$H = \sum_{i=1}^{N} -\frac{\hbar^2}{2m_i}\frac{\partial^2}{\partial\mathbf{r}_i^2} + \sum_{i=1}^{N}\frac{m_i\omega^2}{2}r_i^2 + \sum_{i<j}^{N} V_{ij}(\mathbf{r}_i-\mathbf{r}_j) \;,$

where $m_i$ and $\mathbf{r}_i$ are the mass and the coordinate of particle number $i$. The first term is the kinetic energy operator; the second term is an external field, for example a magneto-optical trap; and the third term contains the inter-particle interactions.

The goal is to find an $N$-body wave-function $\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N)$ which satisfies the Schrödinger equation

$H\Psi=E\Psi$.

In practice, finding the solution analytically is usually not possible, so we have to find approximate solutions to the equation.


## Mathematical formulation for the ground state problem

Let us consider a time-independent physical system whose Hamiltonian $H$ is Hermitian and bounded from below. We want to approximate the discrete eigenvalues of $H$ and the corresponding wave functions,

$H \Psi_n = E_n \Psi_n \;,$

where we order the eigenvalues such that $E_0 \le E_1 \le E_2 \le \dots$

It means that we would like to find square-integrable functions $f_i$ such that $\langle (H-E_i) f_i| (H-E_i) f_i\rangle \ll \mathit{eps} \times E_i^2 \langle f_i| f_i\rangle$, for some small $\mathit{eps}\in \mathbf{R}$. Unfortunately, in practice we do not know the exact eigenvalues of the Hamiltonian, so first we have to find an approximation to the energy $E_i$. The following theorem gives us the recipe. Here we restrict ourselves to the ground state, but using the min-max theorem one can extend the result to the whole discrete spectrum of the Hamiltonian.

Theorem

The expectation value of the Hamiltonian for any $\phi$ from the state space is equal to or larger than the ground state energy $E_0$.

Proof

The function $\phi$ can be decomposed in the orthogonal basis $\{\Psi_n\}$: $\phi= \sum_i a_i \Psi_i$. With this decomposition we write the expectation value of the Hamiltonian, $E[\phi]=\frac{\langle\phi|H|\phi\rangle}{\langle\phi|\phi\rangle} =\frac{\sum_i |a_i|^2 E_i}{\sum_i |a_i|^2}$, from which it follows that $E[\phi]=E_0+\frac{\sum_i |a_i|^2 (E_i-E_0)}{\sum_i |a_i|^2} \geq E_0 \; \Box$.

This statement is often called the Ritz theorem and can be seen as a corollary of the min-max theorem.

This result allows us to compute an upper bound for the ground state energy.

The following theorem, due to Weinstein, allows us to rewrite our initial demand that $\langle (H-E_i) f_i| (H-E_i) f_i\rangle \ll \mathit{eps} \times E_i^2 \langle f_i| f_i\rangle$ in terms of the variance $\sigma^2[\phi]=\frac{\langle\phi|(H-E)^2|\phi\rangle}{\langle\phi|\phi\rangle}$.

Theorem

There exists at least one eigenvalue in the interval $\big[E[\phi]-\sigma[\phi],E[\phi]+\sigma[\phi]\big]$.

Proof

We write $\phi$ in the $\{\Psi_n\}$ basis and get $\sigma^2[\phi]=\frac{\sum_i |a_i|^2 (E_i-E)^2}{\sum_i |a_i|^2}$. There exists an integer $k$ such that $(E_k-E)^2\le(E_i-E)^2$ for all $i \neq k$. With this we rewrite the variance as $\sigma^2[\phi]=(E_k-E)^2+\frac{\sum_i |a_i|^2 \big((E_i-E)^2-(E_k-E)^2\big)}{\sum_i |a_i|^2} \geq (E_k-E)^2 \; \Box .$

This result is useful in practice only if the variance can be made small, so that the interval tightly brackets the eigenvalue.

With these theorems we see the way to proceed:

1. Choose a convenient basis in the state space of the Hamiltonian.

2. Truncate the basis to some finite size.

3. Minimise the expectation value of the Hamiltonian in this basis.

4. Enlarge the basis and repeat step 3.

5. Repeat steps 3 and 4 as long as needed to ensure convergence of the ground state energy.

6. Calculate the variance.

7. If the variance is larger than the desired precision, then enlarge the basis and repeat steps 3-6; otherwise we are done.
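For the one-dimensional toy problem from the introduction, the loop of steps 1-5 might look as follows. This is a sketch under the same assumptions as before (shifted Gaussians, relative-coordinate Hamiltonian $-\frac{d^2}{dx^2}+\frac{x^2}{2}$); the variance steps 6-7 are omitted because they require $\langle H^2\rangle$ matrix elements, and each new basis function is chosen competitively from several random candidates.

```python
import numpy as np

# Toy problem from the introduction: H = -d^2/dx^2 + x^2/2 in the relative
# coordinate, with a basis of shifted Gaussians exp(-a*(x-s)^2).
def me(a, sa, b, sb):
    B = a + b
    c = (a * sa + b * sb) / B                    # centre of product Gaussian
    S = np.sqrt(np.pi / B) * np.exp(-a * b * (sa - sb) ** 2 / B)
    x2 = c ** 2 + 1.0 / (2.0 * B)                # <x^2>
    T = 4 * a * b * (x2 - (sa + sb) * c + sa * sb) * S
    return S, T + 0.5 * x2 * S

def ground_state(params):
    S = np.array([[me(a, s, b, t)[0] for b, t in params] for a, s in params])
    H = np.array([[me(a, s, b, t)[1] for b, t in params] for a, s in params])
    w, U = np.linalg.eigh(S)                     # steps 1-2: finite basis
    keep = w > 1e-10 * w.max()
    X = U[:, keep] / np.sqrt(w[keep])
    return np.linalg.eigvalsh(X.T @ H @ X)[0]    # step 3: minimise in basis

rng = np.random.default_rng(2)
basis = [(1.0, 0.0)]
E = ground_state(basis)
while len(basis) < 40:                           # steps 4-5: enlarge, repeat
    trials = [basis + [(rng.uniform(0.05, 5), rng.uniform(-1, 1))]
              for _ in range(10)]                # competitive random candidates
    best = min(trials, key=ground_state)
    E_new = ground_state(best)
    if E - E_new < 1e-9 and len(basis) > 3:      # convergence of the energy
        break
    basis, E = best, E_new
```

The energy `E` decreases monotonically (each step only enlarges the variational subspace) and converges toward the exact value $1/\sqrt{2}$.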

In practice, steps 3-5 alone can give an accurate value of the energy; steps 6 and 7 are needed for an accurate approximation of the wave function. This is due to the following theorem.

Theorem

The expectation value of the Hamiltonian is stationary in the neighbourhood of the discrete eigenstates.

Proof

Let $\phi=\Psi_n+\delta\phi$ for an eigenstate $\Psi_n$. Then $E[\phi]=\frac{\langle\phi|H|\phi\rangle}{\langle\phi|\phi\rangle}=E_n+O(\|\delta\phi\|^2)$, because the first-order terms $\langle\delta\phi|(H-E_n)|\Psi_n\rangle$ and its complex conjugate vanish $\; \Box .$

So in general it is easier to get an accurate approximation to the energy than to other observables.


## Basis

We start with the first step: take some convenient basis. Let us define what convenient means for our problem:

1. Simple transformation from one system of coordinates to another.

2. The possibility to eliminate the centre of mass.

3. Easy computation of the overlap and kinetic-energy matrix elements.


## Coordinates

It is of advantage to introduce rescaled coordinates,

$\mathbf{q}_i = \sqrt{\frac{m_i}{m}} \mathbf{r}_i \;,$

where $m$ is a conveniently chosen mass scale. Indeed the kinetic energy $T$ and the harmonic trap $V_h$ have a more symmetric form in the rescaled coordinates,

$T=-\frac{\hbar^2}{2m}\sum_i\frac{\partial^2}{\partial\mathbf{q}_i^2} \;, \quad V_h = \frac{m\omega^2}{2} \sum_{i=1}^{N}\mathbf{q}_i^2 \;.$

The Jacobian of the transformation from $\mathbf{r}$ to $\mathbf{q}$ is

$\frac{\partial(\mathbf{q}_1,\dots,\mathbf{q}_N)}{\partial(\mathbf{r}_1,\dots,\mathbf{r}_N)} = \prod_{i=1}^{N} \left(\frac{m_i}{m}\right)^{D/2} \;.$

A further suitable linear transformation to a new set of coordinates is possible,

$\mathbf{x}_i = \sum_{j=1}^N U_{ij} \mathbf{q}_j \;,$

or, in matrix notation,

$\mathbf{x} = U \mathbf{q} \;,$

where $U$ is the transformation matrix.

If the transformation matrix is orthogonal, $U^TU=1$, the diagonal form of the kinetic energy and the harmonic trap is preserved in the new coordinates,

$T=-\frac{\hbar^2}{2m}\sum_i\frac{\partial^2}{\partial\mathbf{x}_i^2} \;, \quad V_h = \frac{m\omega^2}{2} \sum_{i=1}^{N}\mathbf{x}_i^2 \;.$

The latter transformation is of particular use if the new set of coordinates contains the coordinate

$\mathbf{x}_N\sim \sum_i m_i\,\mathbf{r}_i\;,$

which can be seen as a centre-of-mass coordinate. It allows us to work with a wave function of the form

$\Psi(\mathbf{r_1},\mathbf{r_2},...,\mathbf{r_N})=\Phi(\mathbf{x_1},\mathbf{x_2},...,\mathbf{x_{N-1}})\phi(\mathbf{x_N})\;,$

where $\phi(\mathbf{x_N})$ is the ground state wave function for the oscillator potential.
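One possible construction of such an orthogonal matrix $U$ can be sketched as follows (an illustration, assuming equal masses so that the centre-of-mass row is simply $(1,\dots,1)/\sqrt{N}$ in the rescaled coordinates; the helper name is ours):

```python
import numpy as np

def cm_transform(N):
    """Orthogonal U (U U^T = 1) whose last row is (1,...,1)/sqrt(N),
    so that x_N is proportional to the centre-of-mass coordinate of the
    rescaled coordinates (equal masses assumed for simplicity)."""
    e = np.full(N, 1.0 / np.sqrt(N))
    # complete e to an orthonormal basis via QR of [e, e_1, ..., e_{N-1}]
    M = np.column_stack([e, np.eye(N)[:, :-1]])
    Q, _ = np.linalg.qr(M)
    U = Q.T[::-1].copy()      # put the centre-of-mass row last
    if U[-1, 0] < 0:          # fix the sign convention of the QR step
        U[-1] *= -1
    return U
```

Because $U^TU=1$, the kinetic energy and the harmonic trap keep their diagonal form in the new coordinates, which is exactly the property we wanted.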


## Correlated Gaussians General Case

First we consider the trial wave function in a basis of completely general shifted Gaussians, which can be used to describe a system in an external field with anisotropic inter-particle interactions,

$\left|\phi\right\rangle=\sum_{k=1}^K C_k \left|A^{(k)},s^{(k)};x\right\rangle$

where

$\left|A,s;x\right\rangle = \exp\left(-\sum_{i,j=1}^{Dn} A_{ij} (x_i-s_i)(x_j-s_j)\right) \equiv e^{-(x-s)^TA(x-s)}\;,$

where $A$, a symmetric positive definite matrix, and $s$, a shift vector, are the non-linear parameters of the Gaussian, and $n=N-1$. With this definition we have $K(\frac{1}{2}Dn(Dn+1)+Dn)$ non-linear variational parameters. To find them one can use deterministic methods (e.g. Powell's method) or methods based on a stochastic search. We use the latter approach, while the $K$ linear variational parameters are found through a full minimization with respect to a given set of non-linear parameters.
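A stochastic candidate can be generated, for example, by drawing $A=GG^T$ with a random matrix $G$, which guarantees positive definiteness. The sketch below (helper names are illustrative, not from the text) also counts the non-linear parameters:

```python
import numpy as np

def n_params(K, D, n):
    """Non-linear parameter count for K shifted Gaussians: each has a
    symmetric (Dn x Dn) matrix A plus a Dn-dimensional shift s."""
    Dn = D * n
    return K * (Dn * (Dn + 1) // 2 + Dn)

def random_gaussian(Dn, rng, scale=1.0):
    """One stochastic candidate: A = G G^T (positive definite) and a shift."""
    G = scale * rng.normal(size=(Dn, Dn))
    A = G @ G.T + 1e-6 * np.eye(Dn)   # small ridge keeps A strictly definite
    s = rng.uniform(-1.0, 1.0, Dn)
    return A, s
```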


## Matrix elements

### Overlap

Correlated Gaussians are generally non-orthogonal and the overlap is therefore non-diagonal,

$N\equiv\langle A',s';x | A,s;x \rangle = \int d^Dx_1\dots\int d^Dx_n e^{-(x-s)^TA(x-s)-(x-s')^TA'(x-s')} = \frac{\pi^{\frac{Dn}{2}}}{\sqrt{\det B}}e^{-s^TAs-s'^TA's'+\frac{1}{4}v^TB^{-1}v}\;,$

where we defined $B=A+A'$ and $v=2As+2A's'$.
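The overlap formula is straightforward to implement and to test. The NumPy sketch below (an illustration, not part of the original text) checks it against the elementary one-dimensional closed form $\sqrt{\pi/(a+a')}\,e^{-aa'(s-s')^2/(a+a')}$:

```python
import numpy as np

def overlap(A, s, Ap, sp):
    """<A',s';x|A,s;x> for general correlated Gaussians;
    A, Ap are symmetric positive definite (Dn x Dn) matrices."""
    B = A + Ap
    v = 2 * A @ s + 2 * Ap @ sp
    dim = len(s)
    expo = -s @ A @ s - sp @ Ap @ sp + 0.25 * v @ np.linalg.solve(B, v)
    return np.pi ** (dim / 2) / np.sqrt(np.linalg.det(B)) * np.exp(expo)

# 1D cross-check against the elementary two-Gaussian formula
a, s1, ap, s2 = 0.7, 0.3, 1.2, -0.4
lhs = overlap(np.array([[a]]), np.array([s1]), np.array([[ap]]), np.array([s2]))
rhs = np.sqrt(np.pi / (a + ap)) * np.exp(-a * ap * (s1 - s2) ** 2 / (a + ap))
```

A useful second check: for $A'=A$, $s'=s$ the exponent vanishes and the overlap reduces to $\pi^{d/2}/\sqrt{\det 2A}$.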

### Kinetic energy

Here we calculate the kinetic-energy matrix element,

$-\langle A',s';x| \frac{\partial}{\partial x}^T\Lambda \frac{\partial}{\partial x} |A,s;x \rangle = N\times \left( 2 {\rm tr} (A' \Lambda A B^{- 1}) + 4 u^T A' \Lambda A u - 4 u^T (A' \Lambda A s + A \Lambda A's') + 4 s'^T A' \Lambda A s \right)$

where we defined $u=\frac{1}{2} B^{-1}v$. For $\Lambda=\frac{1}{2} I$, with $I$ the identity matrix, one can get a simpler expression after noticing that

$-\frac{1}{2}\frac{\partial}{\partial x}^T \frac{\partial}{\partial x} |A,s;x \rangle = ({\rm tr}A-2(x-s)^TAA(x-s)) |A,s;x \rangle$

To proceed one has to derive the following identities:

$\langle A',s';x| \left( a^T x \right) |A,s;x \rangle = N\times (a^T u) \;,$
$\langle A',s';x| \left( x^T F x \right)|A,s;x \rangle = N\times \left( u^T F u +\frac{1}{2} {\rm tr} (F B^{- 1}) \right) \;.$

To calculate

$\langle A',s';x| (-\frac{1}{2}\frac{\partial}{\partial x}^T \frac{\partial}{\partial x})(-\frac{1}{2}\frac{\partial}{\partial x}^T \frac{\partial}{\partial x} )|A,s;x \rangle$

which is needed for the variance, we have to calculate the following matrix element:

\begin{align} \langle A',s';x| (x^T D x)(x^T D_1 x)|A,s;x \rangle = N \times \bigg( &[u^T D u][u^TD_1u] +\frac{1}{2} [u^T D_1 u ]{\rm tr} (D B^{-1})+ \frac{1}{2} [u^T D u ]{\rm tr} (D_1 B^{-1})+\\ &2u^TDB^{-1}D_1u+\frac{1}{4}{\rm tr}(DB^{-1}){\rm tr}(D_1B^{-1})+\frac{1}{2}{\rm tr}(DB^{-1}D_1B^{-1})\bigg) \;. \end{align}
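As a sanity check, the kinetic-energy formula can be compared against direct numerical integration in one dimension. The sketch below (illustrative; valid for symmetric $A$, $A'$, $\Lambda$) uses $\Lambda=\frac12 I$, which corresponds to the operator $-\frac12\,\partial^2/\partial x^2$:

```python
import numpy as np

def kinetic_me(A, s, Ap, sp, Lam):
    """-<A',s';x| d/dx^T Lam d/dx |A,s;x> from the formula above,
    in the simplified form 2 tr(A' Lam A B^-1) + 4 (u-s')^T A' Lam A (u-s)."""
    B = A + Ap
    v = 2 * A @ s + 2 * Ap @ sp
    u = 0.5 * np.linalg.solve(B, v)
    dim = len(s)
    N = (np.pi ** (dim / 2) / np.sqrt(np.linalg.det(B))
         * np.exp(-s @ A @ s - sp @ Ap @ sp + 0.25 * v @ np.linalg.solve(B, v)))
    M = Ap @ Lam @ A
    return N * (2 * np.trace(M @ np.linalg.inv(B)) + 4 * (u - sp) @ M @ (u - s))

# numerical cross-check in 1D: integrate bra * (-1/2 * ket'') on a fine grid
a, s1, ap, s2 = 0.8, 0.2, 1.5, -0.3
x = np.linspace(-12.0, 12.0, 48001)
ket2 = (4 * a**2 * (x - s1)**2 - 2 * a) * np.exp(-a * (x - s1)**2)   # g''(x)
numeric = -0.5 * np.sum(np.exp(-ap * (x - s2)**2) * ket2) * (x[1] - x[0])
analytic = kinetic_me(np.array([[a]]), np.array([s1]),
                      np.array([[ap]]), np.array([s2]), np.array([[0.5]]))
```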

### Potential energy

Here we calculate the matrix element of the potential energy,

$\langle A',s';x |V|A,s;x \rangle = \sum_{i<j}^{N}\langle A',s';x |V_{ij}(\mathbf{r}_i-\mathbf{r}_j)|A,s;x \rangle \;.$

In general we cannot write an analytical expression for this integral, but we can reduce it to a $D$-dimensional integral. For example, consider just one term from the sum,

$I=\int d^Dx_1\dots\int d^Dx_n V_{ij}e^{-(x-s)^TA(x-s)-(x-s')^TA'(x-s')} \; ,$

To simplify this integral we have to transform from the Jacobi set $(\mathbf{x_1},\mathbf{x_2},...,\mathbf{x_n})$, where the matrices $A,A'$ and the shifts $s,s'$ are defined, to the Jacobi set $(\mathbf{y_1},\mathbf{y_2},...,\mathbf{y_n})$, where $\mathbf{y_1}=\mathbf{r_i}-\mathbf{r_j}$. The transformation between these sets is given by an orthogonal matrix $U$: $x=Uy$. With this we write

$I=\int d^Dy_1\dots\int d^Dy_n V_{ij}(\mathbf{y_1})e^{-(y-U^T s)^TU^TAU(y-U^Ts)-(y-U^Ts')^TU^TA'U(y-U^Ts')}=\int d^Dy_1V_{ij}(\mathbf{y_1})f(\mathbf{y_1})\;,$

where

$f(\mathbf{y_1})=\int d^Dy_2\dots\int d^Dy_n e^{-(y-U^T s)^TU^TAU(y-U^Ts)-(y-U^Ts')^TU^TA'U(y-U^Ts')} \;,$

can be found analytically.

If we can write the potential as a sum of Gaussians, $V_{ij}(\mathbf{y_1})=\sum_k c_k e^{-\sum_{a,b=1}^{D}(y_a-u_a^{k})F_{ab}^{k}(y_b-u_b^{k})}$, then the integral $I$ can be found analytically in the same way as the overlap.
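In one dimension this reduction is particularly transparent: a Gaussian potential factor simply joins the two basis Gaussians, and the integral is again of overlap type. A sketch (illustrative names, checked against direct quadrature):

```python
import numpy as np

def gauss_pot_me(a, s, ap, sp, f, u):
    """<exp(-ap*(x-sp)^2)| exp(-f*(x-u)^2) |exp(-a*(x-s)^2)> in 1D,
    by completing the square for the product of three Gaussians."""
    P = a + ap + f                         # combined width
    q = 2 * (a * s + ap * sp + f * u)      # combined linear term
    r = a * s**2 + ap * sp**2 + f * u**2   # combined constant term
    return np.sqrt(np.pi / P) * np.exp(q * q / (4 * P) - r)

# cross-check by direct quadrature on a fine grid
x = np.linspace(-12.0, 12.0, 48001)
integrand = (np.exp(-1.1 * (x + 0.2)**2) * np.exp(-0.6 * (x - 0.5)**2)
             * np.exp(-0.9 * (x - 0.1)**2))
numeric = np.sum(integrand) * (x[1] - x[0])
analytic = gauss_pot_me(0.9, 0.1, 1.1, -0.2, 0.6, 0.5)
```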

#### Coulomb


## Particles with spin

To consider particles with spin we add spin part to the trial wave function

$\left|\phi\right\rangle=\sum_{k=1}^K C_k \left|A^{(k)},s^{(k)};x\right\rangle \chi_k \;$

where for spin-1/2 particles the function $\chi_k$ is just an array of $N$ elements, each an eigenfunction of the spin projection on a predefined axis. For example, $\chi_k=|\frac{1}{2},\frac{1}{2},...,\frac{1}{2}\rangle$ defines the system with all particles having spin in the same direction. Next we define the spin operator $\mathbf{s^{(i)}}$ that acts on particle number $i$ in the following way:

$\langle\pm \frac{1}{2}| s_z^{(i)} |\pm \frac{1}{2}\rangle=\pm \frac{1}{2}$
$\langle\pm \frac{1}{2}| s_x^{(i)} |\mp \frac{1}{2}\rangle= \frac{1}{2}$
$\langle\pm \frac{1}{2}| s_y^{(i)} |\mp \frac{1}{2}\rangle=\mp \frac{\mathbf{i}}{2}$

and zero otherwise.
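These matrix elements are just the Pauli matrices divided by two. A small NumPy sketch (the helper name `one_body` is ours) builds $\mathbf{s}^{(i)}$ for an $N$-particle product spin basis via Kronecker products and lets one check the commutation relations:

```python
import numpy as np

# single-particle spin-1/2 operators in the basis {|+1/2>, |-1/2>}
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

def one_body(op, i, N):
    """Embed a single-particle operator as s^(i), acting on particle i
    of an N-particle product spin basis (Kronecker products with 1)."""
    out = np.eye(1)
    for k in range(N):
        out = np.kron(out, op if k == i else np.eye(2))
    return out
```

For example, the matrix element $\langle+\frac12|s_y|-\frac12\rangle$ is `sy[0, 1] = -i/2`, matching the table above, and $[s_x^{(i)},s_y^{(i)}]=\mathbf{i}\,s_z^{(i)}$ holds in the embedded space.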

#### Spin-orbit

Here we discuss a spin-orbit potential of the form $V_{ij,LS}=V(|\mathbf{r_i}-\mathbf{r_j}|)\mathbf{L^{(ij)}}\cdot\mathbf{S^{(ij)}}$, where $\mathbf{S^{(ij)}}=\mathbf{s^{(i)}}+\mathbf{s^{(j)}}$ is the total spin of the pair, $L^{(ij)}_k=-\mathbf{i}\sum_{l,m=1}^{D}e_{klm}y_l\frac{\partial}{\partial y_m}$ is the relative angular momentum, $e_{klm}$ is the Levi-Civita symbol, and $\mathbf{y}=\mathbf{r_i}-\mathbf{r_j}$. We have to calculate the following matrix element:

$I_k=\left \langle A',s';x|V(y)e_{klm}y_l\frac{\partial}{\partial y_m}|A,s;x\right\rangle$

Again we make the transformation from the Jacobi set $(\mathbf{x_1},...,\mathbf{x_n})$ to the Jacobi set $(\mathbf{y_1}=\mathbf{y},...,\mathbf{y_n})$ using an orthogonal matrix $U$: $x=Uy$,

\begin{align} I_k=&\int d^Dy_1\dots\int d^Dy_n V(\mathbf{y_1})e_{klm}y_l\bigg(-2(U^TAU)_{jm}(y-U^Ts)_j\bigg) \\ & \times e^{-(y-U^T s)^TU^TAU(y-U^Ts)-(y-U^Ts')^TU^TA'U(y-U^Ts')} \;. \end{align}

## Correlated Gaussians with Super Vectors

In the previous sections we considered a completely general setup, which is suitable for arbitrary inter-particle potentials and external fields. This approach is far from optimal if, for example, we are interested in the ground state of $N$ bosons with isotropic pairwise interactions: in this case we know that the ground state must have zero orbital angular momentum. With this in mind we write the trial wave function in a smaller variational basis:

$\left|\phi\right\rangle=\sum_{k=1}^K C_k \left|A^{(k)},\mathbf{s^{(k)}};x\right\rangle$

where

$\left|A,\mathbf{s};x\right\rangle = \exp\left(-\sum_{i,j=1}^{n} A_{ij} (\mathbf{x_i}-\mathbf{s_i})\cdot(\mathbf{x_j}-\mathbf{s_j})\right) \equiv e^{-(\mathbf{x}-\mathbf{s})^TA(\mathbf{x}-\mathbf{s})}\;.$

If we set the shift vectors to zero, $\mathbf{s_i}=0$, then the trial wave function treats the Cartesian components of the vectors $\mathbf{x_i}$ equivalently, which leads to zero angular momentum; otherwise the wave function contains all possible angular momenta and we need an efficient procedure to build an eigenstate of a given angular momentum. The matrix elements for this trial wave function can be obtained from the general case, but we write them out explicitly.


## Matrix Elements

### Overlap

$N\equiv\langle A',\mathbf{s'};x | A,\mathbf{s};x \rangle = \int d^Dx_1\dots\int d^Dx_n e^{-(\mathbf{x}-\mathbf{s})^TA(\mathbf{x}-\mathbf{s})-(\mathbf{x}-\mathbf{s'})^TA'(\mathbf{x}-\mathbf{s'})} = \bigg(\frac{\pi^n}{\det B}\bigg)^{\frac{D}{2}}e^{(-\mathbf{s}^TA\mathbf{s}-\mathbf{s'}^TA'\mathbf{s'}+\frac{1}{4}\mathbf{v}^TB^{-1}\mathbf{v})}\;.$

where we defined $B=A+A'$ and $\mathbf{v}=2A\mathbf{s}+2A'\mathbf{s'}$.
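This overlap can be verified against the general-case formula, because a super-vector Gaussian with an $(n\times n)$ matrix $A$ is exactly a general Gaussian with the $(Dn\times Dn)$ matrix $A\otimes 1_D$. A NumPy sketch (illustrative names, random positive definite matrices):

```python
import numpy as np

def overlap_sv(A, s, Ap, sp, D):
    """Super-vector overlap; A, Ap are (n x n), s, sp are (n x D) shifts."""
    B = A + Ap
    v = 2 * A @ s + 2 * Ap @ sp
    expo = (-np.sum(s * (A @ s)) - np.sum(sp * (Ap @ sp))
            + 0.25 * np.sum(v * np.linalg.solve(B, v)))
    return (np.pi ** A.shape[0] / np.linalg.det(B)) ** (D / 2) * np.exp(expo)

def overlap_general(A, s, Ap, sp):
    """General-case overlap for flattened (D*n)-dimensional vectors."""
    B = A + Ap
    v = 2 * A @ s + 2 * Ap @ sp
    expo = -s @ A @ s - sp @ Ap @ sp + 0.25 * v @ np.linalg.solve(B, v)
    return np.pi ** (len(s) / 2) / np.sqrt(np.linalg.det(B)) * np.exp(expo)

rng = np.random.default_rng(1)
G, Gp = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
A, Ap = G @ G.T + np.eye(2), Gp @ Gp.T + np.eye(2)   # positive definite
s, sp = rng.normal(size=(2, 3)), rng.normal(size=(2, 3))
val_sv = overlap_sv(A, s, Ap, sp, D=3)
val_gen = overlap_general(np.kron(A, np.eye(3)), s.reshape(-1),
                          np.kron(Ap, np.eye(3)), sp.reshape(-1))
```

Note that $\det(B\otimes 1_D)=(\det B)^D$, which is how the prefactor $(\pi^n/\det B)^{D/2}$ arises from the general formula.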

### Kinetic energy

\begin{align} -\langle A',\mathbf{s'};x| \frac{\partial}{\partial x}^T\Lambda \frac{\partial}{\partial x} |A,\mathbf{s};x \rangle = & N\times \left( 2 D {\rm tr} (A' \Lambda A B^{- 1}) + 4 \mathbf{u}^T A' \Lambda A \mathbf{u} - 4 \mathbf{u}^T (A' \Lambda A \mathbf{s} + A \Lambda A'\mathbf{s'}) + 4 \mathbf{s'}^T A' \Lambda A \mathbf{s} \right) = \\ & N\times \left( 2 D {\rm tr} (A' \Lambda A B^{- 1}) + 4 (\mathbf{u}-\mathbf{s'})^T A' \Lambda A (\mathbf{u} -\mathbf{s}) \right) \end{align}

where we defined $\mathbf{u}=\frac{1}{2} B^{-1}\mathbf{v}$.

### Angular momentum

We consider the matrix element of the operator $O=\sum_{i,j=1}^{n} a_i b_j [\mathbf{x_i}\times\frac{\partial}{\partial \mathbf{x_j}}]\equiv [a\mathbf{x}\times b \frac{\partial}{\partial \mathbf{x}}]$; the choice of $a,b$ can give the total angular momentum or the angular momentum of an appropriate relative coordinate.

First we calculate matrix elements of the form

$\langle A',\mathbf{s'};x |a\mathbf{x}|A,\mathbf{s};x \rangle = N(a \mathbf{u})$
$\langle A',\mathbf{s'};x |[a\mathbf{x}\times b\mathbf{x}]|A,\mathbf{s};x \rangle = N (a \mathbf{u}\times b\mathbf{u})$

and now we can calculate the matrix element of the operator $O$,

$\langle A',\mathbf{s'};x |[a\mathbf{x}\times b \frac{\partial}{\partial \mathbf{x}}]|A,\mathbf{s};x \rangle = -2 \langle A',\mathbf{s'};x |[a\mathbf{x}\times b A \mathbf{x}]| A,\mathbf{s};x \rangle+2\langle A',\mathbf{s'};x |[a\mathbf{x}\times b A \mathbf{s}]| A,\mathbf{s};x \rangle = 2N[a\mathbf{u}\times bA(\mathbf{s}-\mathbf{u})] \;.$

We define the total angular momentum as $\mathbf{L} \equiv \sum_i^N (\mathbf{r}_i\times \mathbf{p}_i)$. If we transform to the Jacobi set, we obtain $\mathbf{L} = \sum_i^N (\mathbf{x}_i\times \mathbf{P}_i)$, where $\mathbf{P}_i$ is the linear momentum corresponding to the $i$-th coordinate. So if we assume that the system as a whole is at rest, such that $\mathbf{P}_N=0$, then the following matrix element gives the total angular momentum squared:

$\langle A',\mathbf{s'};x|\mathbf{L}^2|A,\mathbf{s};x\rangle=-\hbar^2 \sum_{i=1}^{n}\sum_{k=1}^{n}\langle A',\mathbf{s'};x |(\mathbf {x}_i\times\frac{\partial}{\partial \mathbf{x}_i})\cdot(\mathbf {x}_k\times\frac{\partial}{\partial \mathbf{x}_k})|A,\mathbf{s};x \rangle =\hbar^2 \sum_{i=1}^{n}\sum_{k=1}^{n}\langle A',\mathbf{s'};x |\overleftarrow{(\mathbf {x}_i\times\frac{\partial}{\partial \mathbf{x}_i})}\cdot(\mathbf {x}_k\times\frac{\partial}{\partial \mathbf{x}_k})|A,\mathbf{s};x \rangle \;,$

where the arrow indicates that the first operator acts on the bra.

After simplification (first we rotate to the set of coordinates where the matrix $A$ takes a diagonal form and rotate back; then we rotate to the set where the matrix $A'$ takes a diagonal form and again rotate back) we obtain

$=4\hbar^2 \sum_{i,m,k,l=1}^{n}\langle A',\mathbf{s'};x |A_{im}'(\mathbf {x}_i\times \mathbf{s'}_m)\cdot A_{kl}(\mathbf {x}_k\times \mathbf{s}_l)|A,\mathbf{s};x \rangle \;.$

Here we need the following integral,

$\langle A',\mathbf{s'};x |x_m^l x_k^f|A,\mathbf{s};x \rangle = N\bigg(\frac{1}{2}(B^{-1})_{km}\delta^{fl}+\frac{1}{4}(B^{-1})_{kc}v_c^f(B^{-1})_{mn}v_n^l\bigg) \;, \quad l,f=1,\dots,D;\; m,k = 1,\dots,n \;,$

where $\delta^{fl}$ is the Kronecker delta.

With this we write the matrix element of the total angular momentum squared,

$=4\hbar^2 N \bigg(\mathbf{s'}^T A'B^{-1}A\mathbf{s}+\frac{1}{4}\sum_{k,c}(A'B^{-1})_{kc}[\mathbf{v}_k\times\mathbf{s'}_c] \cdot \sum_{a,b}(A B^{-1})_{ab}[\mathbf{v}_a\times\mathbf{s}_b]\bigg) \;.$

#### Tensor


## Bound state variational calculations
