# Electronic Properties of Materials/Quantum Mechanics for Engineers/Particle in a Box


This is the fourth chapter of the first section of the book Electronic Properties of Materials.

<ROUGH DRAFT>

So far we've gotten a feel for how the quantum world works and we've walked through the mathematical formalism, but for a theory to be any good, it must be possible to calculate meaningful values. The goal of this course is to show how the properties of solids come from quantum mechanics and the properties of atoms. Before we look at the properties of solids, we need to study how electrons and atoms interact in a quantum picture. Over the next several chapters, we will study this, but first we need to consider atoms in isolation.

So we want to solve the time-independent Schrodinger equation with ${\textstyle {\hat {H}}={\hat {T}}+{\hat {V}}}$ . As it happens, finding ${\textstyle {\hat {V}}=V(r)}$ for most problems is non-trivial. In an atom, the electron-nucleus potential goes as ${\textstyle \propto {-Zq^{2} \over r}}$ , but the electron-electron interactions are difficult to handle, as we will see later. The way to approach this is through simplifications and approximations, so we're going to start with the simplest calculation and build up from there.

## Time-Dependent Schrodinger Equation

Let's look at a particle in a one-dimensional box with infinite boundaries.

{\begin{aligned}&{\hat {H}}={\hat {T}}+{\hat {V}}\\&{\hat {T}}={-\hbar ^{2} \over 2m}{\partial ^{2} \over \partial x^{2}}\\&{\hat {V}}=V(x)={\begin{cases}0\quad &if\ 0<x<L\\\infty \quad &otherwise\end{cases}}\end{aligned}}

The fact that $V(x)$  jumps from zero to infinity at the walls means that we can essentially throw out everything outside the defined barriers of our box. Note that here we will solve for a function of $x$  alone rather than of $(x,t)$ , which implies a separation of variables. Let's check this idea by guessing the solution:

$\Psi (x,t)=X(x)T(t)$

Here the solution is a product of two functions, $X(x)$  and $T(t)$ . To solve, we substitute it into the time-dependent S.E. and rearrange.

{\begin{aligned}i\hbar \ {\partial \over \partial t}\ \Psi (x,t)&={\hat {H}}\ \Psi (x,t)\\i\hbar \ {\partial \over \partial t}\ X(x)\ T(t)&={\hat {H}}\ X(x)\ T(t)\\i\hbar \ X(x)\ {\partial \over \partial t}\ T(t)&=T(t)\ {\hat {H}}\ X(x)\\i\hbar \ {1 \over X\ T}\ X\ {\partial \over \partial t}\ T&={1 \over T\ X}\ T\ {\hat {H}}\ X\\i\hbar \ {1 \over T}\ {\partial \over \partial t}\ T&={1 \over X}\ {\hat {H}}\ X\end{aligned}}

Both the pure-$t$  side, ${\textstyle i\hbar \ {1 \over T}{\partial \over \partial t}\ T}$ , and the pure-$x$  side, ${\textstyle {1 \over X}{\hat {H}}X}$ , must be equal to some shared constant, $\alpha$ .

Thus:

{\begin{aligned}&\alpha =i\hbar {1 \over T}{\partial \over \partial t}T\\&\alpha ={1 \over X}{\hat {H}}X\\\end{aligned}}\longrightarrow {\hat {H}}X(x)=\alpha X(x)


Look! It's the time-independent Schrodinger equation! This is exactly what we want to solve. As the Hamiltonian is the operator of energy, we're going to get eigenvalues, $\alpha$ , which are measurable values of energy, and eigenfunctions, $X(x)$ , which are the functions corresponding to those energies. It is common to rewrite this as:

{\begin{aligned}\alpha &=E_{n}\\X(x)&=\phi _{n}(x)\\\therefore H\phi _{n}(x)&=E_{n}\phi _{n}(x)\end{aligned}}

Returning to the time-dependent part and rewriting it as:

$i\hbar \ {\partial \over \partial t}\ T(t)=E\ T(t)$


Taking ${\textstyle T(t)=Ae^{kt}}$  as our guess, one solution is:

{\begin{aligned}i\hbar \ {\partial \over \partial t}\ (Ae^{kt})&=E\ Ae^{kt}\\i\hbar \ k\ Ae^{kt}&=E\ Ae^{kt}\\i\hbar k=E\quad &\Rightarrow \quad k={-i \over \hbar }E\\T(t)&=Ae^{{-i \over \hbar }Et}\end{aligned}}

So whenever ${\hat {H}}\phi _{n}(x)=E_{n}\phi _{n}(x)$ , a solution is:

$\Psi (x,t)=A_{n}e^{{-i \over \hbar }E_{n}t}\phi _{n}(x)$
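As a quick finite-difference check (not in the original; $E$ , $\hbar$ , and $A$  below are arbitrary values in natural units), one can verify numerically that the time factor really satisfies $i\hbar \ {\partial \over \partial t}T=E\ T$ :

```python
import cmath

hbar, E, A = 1.0, 2.5, 1.0  # arbitrary illustrative values, natural units

def T(t):
    """Separated time factor T(t) = A exp(-i E t / hbar)."""
    return A * cmath.exp(-1j * E * t / hbar)

# Central finite difference for dT/dt at an arbitrary time t0
t0, h = 0.3, 1e-6
dT = (T(t0 + h) - T(t0 - h)) / (2 * h)

residual = abs(1j * hbar * dT - E * T(t0))
print(residual)  # ~ 0, limited only by the finite-difference step
```

The residual shrinks as $h^{2}$ , which is the expected behavior of a central difference applied to an exact solution.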

## The Time-Independent Solution, ${\textstyle \phi _{n}(x)}$ .

The general method to solve this type of problem is to break the space into regions separated by boundaries, with each region having its own solution. Then, since the boundaries are what give us the quantization, we use the conditions at the region interfaces to solve.

<FIGURE> "Title" (Description)

${\begin{array}{lcl}\phi _{I}(0)=\phi _{II}(0)&{\partial \over \partial x}\phi _{I}(0)={\partial \over \partial x}\phi _{II}(0)\\\phi _{II}(L)=\phi _{III}(L)\quad &{\partial \over \partial x}\phi _{II}(L)={\partial \over \partial x}\phi _{III}(L)\end{array}}$

Regions I and III have a fairly simple solution here:

{\begin{aligned}\infty \phi &=E\phi \\\therefore \ \phi &=0\end{aligned}}

Region II has:

${-\hbar ^{2} \over 2m}\ {\partial ^{2} \over \partial x^{2}}\ \phi (x)=E\ \phi (x)$

What is a good solution? Let's try planewaves! The general planewave solution, $Ae^{ikx}+Be^{-ikx}$ , is not very easy to lug around, and wave functions in quantum mechanics are in general complex, so let's expand it with Euler's formula:

{\begin{aligned}A\ [cos(kx)+isin(kx)]&+B[cos(kx)-isin(kx)]\\=(A+B)cos(kx)&+i(A-B)sin(kx)\\=\alpha cos(kx)&+\beta sin(kx)\end{aligned}}

Now apply some boundary conditions...

{\begin{aligned}\phi (0)=\underbrace {\alpha cos(0)} _{=1}+\underbrace {\beta sin(0)} _{=0}&=0\\\therefore \alpha &=0\end{aligned}}

{\begin{aligned}\phi (L)=\beta sin(kL)&=0\\sin(z)&=0,\ where\ z=0,\ \pm \pi ,\ \pm 2\pi ,\ \dots \\kL&=n\pi \rightarrow k={n\pi \over L},\ where\ n=0,\ \pm 1,\ \pm 2,\ \dots \end{aligned}}

$\therefore \phi _{n}(x)=\beta sin({n\pi \over L}x);\ where\ n=0,\ \pm 1,\ \pm 2,\ \dots$

{\begin{aligned}E\ \beta sin({n\pi \over L}x)&={-\hbar ^{2} \over 2m}{\partial ^{2} \over \partial x^{2}}\ \beta sin({n\pi \over L}x)\\&={-\hbar ^{2} \over 2m}{\partial \over \partial x}\ ({n\pi \over L})\ \beta cos({n\pi \over L}x)\\&={+\hbar ^{2} \over 2m}\ ({n\pi \over L})^{2}\ \beta sin({n\pi \over L}x)\\\therefore \ E_{n}&={\hbar ^{2} \over 2m}\ ({n\pi \over L})^{2}\end{aligned}}

Thus we have the equation for quantized energy. Here $n$  is limited to the counting numbers ($n=1,2,3,\dots$ ): $n=0$  gives $\phi =0$ , and negative $n$  just flips the sign of $\phi _{n}$ , which is the same state to within a phase factor. We still need to solve for $\beta$ . Given:

$\Psi (x,t)=A\exp[{-i \over \hbar }\ E_{n}t]\ sin({n\pi \over L}x)$

Pick a constant to fix the normalization. In this case we choose $A$ , which absorbs $\beta$ .

$\int _{-\infty }^{\infty }\Psi ^{*}\ \Psi \ dx=1\longrightarrow |A|^{2}\ \int _{0}^{L}sin({n\pi \over L}x)^{2}\ dx=1$

Substitute and solve...

{\begin{aligned}1&=|A|^{2}\ \int _{0}^{L}sin({n\pi \over L}x)^{2}\ dx;\qquad let\ q={n\pi \over L}\\&=|A|^{2}({1 \over 2i})^{2}\int _{0}^{L}(e^{iqx}-e^{-iqx})^{2}\ dx\\&=|A|^{2}({1 \over 2i})^{2}\int _{0}^{L}e^{2iqx}+e^{-2iqx}-2\ dx\\&=|A|^{2}({1 \over 2i})^{2}[{1 \over 2iq}(e^{2iqL}-1)+{-1 \over 2iq}(e^{-2iqL}-1)-2L]\\&=|A|^{2}({1 \over 2i})^{2}[{1 \over 2iq}(e^{2iqL}-e^{-2iqL})-2L]\\&=|A|^{2}({1 \over 2i})^{2}[{1 \over q}\ sin(2qL)-2L]\\&=|A|^{2}({-1 \over 4})[{L \over n\pi }sin(2n\pi )-2L]\\&=|A|^{2}({-1 \over 4})(-2L)\\&=|A|^{2}{L \over 2}\\&\qquad \qquad \qquad \qquad \therefore \ A={\sqrt {2 \over L}}\end{aligned}}

At the end of the day, we have:

$\Psi _{n}(x,t)={\sqrt {2 \over L}}\ \exp[{-i \over \hbar }\ E_{n}t]\ sin({n\pi \over L}x)$
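A quick numerical check of this result (a sketch, not part of the original; the box width, energy, and $\hbar$  below are arbitrary stand-in values, since they drop out of $\Psi ^{*}\Psi$ ): the phase factor cancels in $\Psi ^{*}\Psi$ , and the integral over the box is 1 for every $n$ .

```python
import numpy as np

L = 2.0  # arbitrary box width for the check
x = np.linspace(0, L, 20001)
dx = x[1] - x[0]

def psi(n, x, t, L=L, En=1.0, hbar=1.0):
    """Psi_n(x,t) = sqrt(2/L) * exp(-i*En*t/hbar) * sin(n*pi*x/L)."""
    return np.sqrt(2 / L) * np.exp(-1j * En * t / hbar) * np.sin(n * np.pi * x / L)

for n in (1, 2, 5):
    p = psi(n, x, t=0.7)                     # any t gives the same |Psi|^2
    norm = (np.conj(p) * p).real.sum() * dx  # integral of Psi* Psi over the box
    print(n, round(norm, 6))                 # -> 1.0 for each n
```

Changing `t` changes the complex phase of every sample of `p` but leaves `norm` untouched, which is the "arbitrary phase" point made in the next section.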

### Complex Numbers

As a point of honesty, while this solution is true, there are other solutions. Not only can you put in different values for $n$ , but we can also change the phase of our solution. In quantum mechanics, you will often hear that you're solving something "to within a phase factor," and when we say that, we're talking about the phase in complex number space. $\Psi$  is a complex number, but we don't pay attention to its overall phase. In other words, we can put an arbitrary factor $e^{i\theta }$  in front of $\Psi$  without consequence.


Why? Because we can only measure the magnitude of $\Psi$ , as $|\Psi |^{2}=\Psi ^{*}\Psi$ . However, in certain situations where we are comparing two $\Psi$ , we can measure the difference in their phases. In this course, and most of the time, we just ignore the arbitrary phase factor, $e^{i\theta }$ , and say that we know $\Psi$  to within an arbitrary phase factor.

So now we have a solution, $\Psi (x,t)$ , but the Schrodinger equation is a linear PDE. What does this mean? If $\phi _{1}$  and $\phi _{2}$  are both solutions to a linear PDE, then $\phi _{3}=\phi _{1}+\phi _{2}$  is also a solution. Since in our case we have an infinite number of solutions, one for each $n=1,2,3,\dots$ , we really need to say that the general solution is:

$\Psi (x,t)=\sum _{n=1}^{\infty }a_{n}\ \Psi _{n}(x,t)$ , where $\Psi _{n}(x,t)$  are our solutions and $a_{n}$  are coefficients.

In addition, the solutions are orthogonal to one another, which is yet another property of this linear PDE. This means that:

$\int \phi _{i}^{*}\ \phi _{j}\ dx=\delta _{ij}={\begin{cases}1\quad if\ i=j\\0\quad if\ i\neq j\end{cases}}$ , where $\delta _{ij}$  is the Kronecker delta.

The orthogonality of the eigenfunctions is physically important, and mathematically useful, as will be seen.
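One way to see the orthogonality concretely (a numerical sketch, not from the text; the box width is an arbitrary choice) is to evaluate the overlap integrals and confirm they approximate the Kronecker delta:

```python
import numpy as np

L = 1.0
x = np.linspace(0, L, 20001)
dx = x[1] - x[0]

def phi(n, x, L=L):
    """Normalized eigenfunction phi_n(x) = sqrt(2/L) sin(n*pi*x/L)."""
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

# The matrix of overlap integrals should approximate the identity.
for i in (1, 2, 3):
    row = [round((phi(i, x) * phi(j, x)).sum() * dx, 6) for j in (1, 2, 3)]
    print(i, row)  # -> 1.0 when i == j, 0.0 otherwise
```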

### Finding the Coefficients

Returning to the problem at hand, how do we determine the coefficients $a_{n}$ ? By solving an initial value problem. Say that at time $t=0$  we make some measurement that gives us $\Psi (x,0)$ ; we then project $\Psi$  onto the individual eigenfunctions. So...

{\begin{aligned}\Psi (x,0)&=\sum _{n=1}^{\infty }a_{n}\Psi _{n}(x,0)\\&=\sum _{n=1}^{\infty }a_{n}\exp {[\ 0\ ]}\ sin(n{x\pi \over L})\\&=\sum _{n=1}^{\infty }a_{n}\ sin(n{x\pi \over L})\\&=\sum _{n=1}^{\infty }a_{n}\phi _{n}\end{aligned}}

Where $\phi _{n}$  is the eigenfunction of energy. Now we take:

{\begin{aligned}&\int _{0}^{L}\phi _{n}^{*}\ \Psi (x,0)\ dx\\&\qquad =\int _{0}^{L}\phi _{n}^{*}\ [a_{1}\phi _{1}+a_{2}\phi _{2}+\dots +a_{n}\phi _{n}+\dots ]\ dx\\&\qquad =\underbrace {\int _{0}^{L}a_{1}\phi _{n}^{*}\phi _{1}} _{=0}+\underbrace {\int _{0}^{L}a_{2}\phi _{n}^{*}\phi _{2}} _{=0}+\underbrace {\dots } _{=0}+\underbrace {\int _{0}^{L}a_{n}\phi _{n}^{*}\phi _{n}} _{\neq 0}+\underbrace {\dots } _{=0}\\&\qquad =\int _{0}^{L}a_{n}\phi _{n}^{*}\phi _{n}=a_{n}\end{aligned}}

So for each $n$ , one can find $a_{n}$  by integrating $\phi _{n}^{*}\ \Psi (x,0)$  over the box, using the orthogonality of the $\phi _{n}$ .
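To make this concrete, here is a numerical sketch (the initial state is a hypothetical example, not from the text): take a flat, normalized $\Psi (x,0)=1/{\sqrt {L}}$  inside the box and project it onto the $\phi _{n}$ ; only odd $n$  contribute, and the partial sums of $|a_{n}|^{2}$  approach 1 as more terms are kept.

```python
import numpy as np

L = 1.0
x = np.linspace(0, L, 20001)
dx = x[1] - x[0]

def phi(n, x, L=L):
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

psi0 = np.full_like(x, 1 / np.sqrt(L))  # hypothetical flat initial state

# a_n = integral of phi_n* Psi(x,0) dx  (the phi_n are real here)
a = {n: (phi(n, x) * psi0).sum() * dx for n in range(1, 100)}
total = sum(v ** 2 for v in a.values())

print(round(a[1], 4))   # 2*sqrt(2)/pi, about 0.9003; even n give ~0
print(round(total, 3))  # partial sum of |a_n|^2, close to 1
```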

What if I measure the energy? The wave function collapses to an eigenfunction of energy.

What does this mean? We can only measure quantized values. ($E_{1},E_{2},E_{3},\dots ,some\ E_{N}$ )

If I measure $E_{5}$ , then

{\begin{aligned}a_{n}&={\begin{cases}1\quad if\ n=5\\0\quad else\end{cases}}\\\Psi &={\sqrt {2 \over L}}\ \exp[{-i \over \hbar }\ E_{5}t]\ sin({5\pi \over L}x)\\\end{aligned}}

$P(x)=\Psi ^{*}\ \Psi$  (The Probability Distribution of Position)

<FIGURE> "Title" (Description)
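A small sketch of what that figure would show (illustrative, with $L=1$  in arbitrary units): for $n=5$ , $P(x)=\Psi ^{*}\Psi ={2 \over L}sin^{2}({5\pi \over L}x)$  has five equal humps, with nodes at $x=mL/5$  where the particle is never found.

```python
import numpy as np

L = 1.0
x = np.linspace(0, L, 1001)
P = (2 / L) * np.sin(5 * np.pi * x / L) ** 2  # P(x) for n = 5; time-independent

nodes = x[np.isclose(P, 0.0, atol=1e-12)]  # zeros of the distribution
print(nodes)  # -> 0, L/5, 2L/5, 3L/5, 4L/5, L
```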

Where is the particle? Somewhere given by the $P(x)$  equation. Remember, ${\hat {H}}$  and ${\hat {x}}$  do not commute.

If I measured $x$  instead of $E$ , I would find a distribution of $a_{n}$ . What is the value of energy after measuring $x$ ? We don't know! A measurement of $x$  causes us to lose our knowledge of $E$ . When $\Psi$  is written as a summation of multiple eigenfunctions we say that $\Psi$  is a "superposition" of states. We don't know which state it is in, but we know it has a probability of being in one of the states in the expansion.

Imagine we know that the system is in a state:

$\Psi =a_{1}\phi _{1}+a_{3}\phi _{3}$ , where the $\phi _{n}$  are eigenfunctions of energy.

What is the expectation of energy? Remember that $\langle c\rangle =\int \Psi ^{*}\ {\hat {c}}\ \Psi \ dx$ .

{\begin{aligned}\langle E\rangle &=\int (a_{1}^{*}\phi _{1}^{*}+a_{3}^{*}\phi _{3}^{*})\ {\hat {H}}\ (a_{1}\phi _{1}+a_{3}\phi _{3})\\&=\int a_{1}^{*}\phi _{1}^{*}\ H\ a_{1}\phi _{1}+\int a_{1}^{*}\phi _{1}^{*}\ H\ a_{3}\phi _{3}+\int a_{3}^{*}\phi _{3}^{*}\ H\ a_{1}\phi _{1}+\int a_{3}^{*}\phi _{3}^{*}\ H\ a_{3}\phi _{3}\end{aligned}}

Simplifying each term:
{\begin{aligned}\int a_{j}^{*}\phi _{j}^{*}\ {\hat {H}}\ a_{k}\phi _{k}&=a_{j}^{*}a_{k}\int \phi _{j}^{*}\underbrace {{\hat {H}}\phi _{k}} _{=E_{k}\phi _{k}}\\&=a_{j}^{*}a_{k}\int \phi _{j}^{*}E_{k}\phi _{k}\\&=a_{j}^{*}a_{k}E_{k}\int \phi _{j}^{*}\phi _{k}\\&=a_{j}^{*}a_{k}E_{k}\ \delta _{jk}\\\therefore \ \langle E\rangle &=|a_{1}|^{2}E_{1}+|a_{3}|^{2}E_{3}\end{aligned}}

But remember, we also talk about expectation values:

$\langle c\rangle ={\bar {c}}=\sum _{i=1}^{N}c_{i}P(c_{i})$

So...

{\begin{aligned}\langle E\rangle &=|a_{1}|^{2}E_{1}+|a_{3}|^{2}E_{3}\\&=P(E_{1})E_{1}+P(E_{3})E_{3}\end{aligned}}

This means that if we know $\Psi$ , we can determine the probability of measuring any $E_{n}$  by projecting $\Psi$  onto the corresponding eigenfunction, $\phi _{n}$ . When we have uncertainty, for example if we don't know whether the system is in energy state one or energy state three, we have a superposition, which is to say a sum of eigenfunctions.
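A tiny numerical illustration of the two-state case above (the amplitudes are made-up values, not from the text): with $|a_{1}|^{2}=0.75$  and $|a_{3}|^{2}=0.25$ , and energies in units of $E_{1}$  so that $E_{3}=9E_{1}$ :

```python
import math

# Made-up superposition amplitudes, normalized so |a1|^2 + |a3|^2 = 1
a1, a3 = math.sqrt(0.75), math.sqrt(0.25)

E1, E3 = 1.0, 9.0  # box energies scale as n^2 (units of E_1)

expectation = abs(a1) ** 2 * E1 + abs(a3) ** 2 * E3
print(expectation)  # 0.75*1 + 0.25*9, i.e. ~3.0
```

Each individual measurement still returns either $E_{1}$  or $E_{3}$ ; the expectation value 3.0 is only the average over many measurements on identically prepared systems.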

An interesting experiment is to input this problem into Excel, Python, or any number of computational programs, and make the given well smaller and smaller. As the well gets smaller, the energies diverge and the sum becomes absolutely huge. Conversely, as the well gets wider, you will see convergence to a value with a relatively small sum. You lose information about energy as you increase confinement.
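A minimal version of that experiment (a sketch, not from the text; taking the particle to be an electron and picking illustrative well widths): tabulate $E_{1}$  from $E_{n}={\hbar ^{2} \over 2m}({n\pi \over L})^{2}$  for shrinking wells and watch the ground-state energy blow up as $1/L^{2}$ .

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
m = 9.1093837015e-31    # electron mass, kg (illustrative choice)
eV = 1.602176634e-19    # J per eV

def E(n, L):
    """Particle-in-a-box energy E_n = (hbar^2 / 2m) (n*pi/L)^2, in joules."""
    return (hbar ** 2 / (2 * m)) * (n * math.pi / L) ** 2

# Halving the width quadruples every energy: E_n ~ 1/L^2.
for L in (4e-9, 2e-9, 1e-9):  # well widths in meters
    print(f"L = {L:.0e} m:  E_1 = {E(1, L) / eV:.4f} eV")  # ~0.376 eV at 1 nm
```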


## A Note on Hilbert Space

The way I'm talking about $\Psi$  and $\phi _{j}$  sounds very much like vector-type language. In truth, $\Psi$  lives in Hilbert space. This is an infinite-dimensional function space where each direction is some function $\phi _{n}$ , and we can talk about representing $\Psi$  as a linear sum of the $\phi _{n}$ , with the coefficient for each $\phi _{n}$  being the projection of $\Psi$  on $\phi _{n}$ .

<FIGURE> "Title" (Description)

Then in Hilbert space $\int \phi _{n}^{*}\Psi$  must be the dot-product-like inner product that gives the projection. Measuring must move $\Psi$  to lie directly on some $\phi _{n}$ . There are other, incompatible functions, $\{X_{n}\}$ , in Hilbert space such that both $\{\phi _{n}\}$  and $\{X_{n}\}$  are complete orthogonal sets, and I can express $\Psi$  in terms of either.

$b_{1}X_{1}+b_{2}X_{2}=\Psi =a_{1}\phi _{1}+a_{2}\phi _{2}$

Measuring in the $\phi$  basis projects $\Psi$  directly onto one of the $\phi _{n}$ , which means losing information about $X$ ; likewise, measuring in the $X$  basis loses information about $\phi$ .