# Electronic Properties of Materials/Printable version

Electronic Properties of Materials

The current, editable version of this book is available in Wikibooks, the open-content textbooks collection, at
https://en.wikibooks.org/wiki/Electronic_Properties_of_Materials

Permission is granted to copy, distribute, and/or modify this document under the terms of the Creative Commons Attribution-ShareAlike 3.0 License.

# Quantum Mechanics for Engineers

This is a section in the book Electronic Properties of Materials

Within this section there are 11 chapters planned.

# Quantum Mechanics for Engineers/Quantum Mechanics Overview

This is the first chapter of the first section of the textbook Electronic Properties of Materials.

## Quantum Mechanics Overview

The origins of quantum mechanics lie in the quantum revolution of roughly 1890 to 1930, during which several new discoveries forced a transition away from classical physics.

1. Light has particle nature in addition to wave nature.
2. Light (photons) and matter are found to interact, leading to a theory of atomic structure.
3. Matter has wave nature in addition to particle nature.

These discoveries led to the birth of modern quantum mechanics.

## Light Has A Particle Nature

As early as 1877, Boltzmann proposed that energy was not continuous, but rather discretized. In 1905, Rayleigh and Jeans applied this to black bodies, perfect radiators in which radiation is emitted from vibrating atoms acting as little dipoles, to create the Rayleigh-Jeans theory. This theory takes ${\textstyle \langle E\rangle }$ as the expectation value of the energy, giving:

${\displaystyle \langle E\rangle ={\int _{0}^{\infty }Ee^{-\beta E}\operatorname {d} \!E \over \int _{0}^{\infty }e^{-\beta E}\operatorname {d} \!E}}$

This is the energy weighted by the Boltzmann factor, normalized by the partition function, and it evaluates to ${\displaystyle \langle E\rangle =kT}$. While this generally follows experimental results at long wavelengths, at shorter wavelengths the prediction diverges from experiment.
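As a sanity check, the average above can be evaluated symbolically. This sketch (my own, not from the text) uses sympy to confirm that the Boltzmann-weighted average energy is ${\displaystyle 1/\beta =kT}$.

```python
# Sanity check (mine, not from the text): evaluate
#   <E> = Int E e^(-beta E) dE / Int e^(-beta E) dE
# symbolically and confirm it equals 1/beta, i.e. kT with beta = 1/(kT).
import sympy as sp

E, beta = sp.symbols('E beta', positive=True)

numerator = sp.integrate(E * sp.exp(-beta * E), (E, 0, sp.oo))
denominator = sp.integrate(sp.exp(-beta * E), (E, 0, sp.oo))

avg_E = sp.simplify(numerator / denominator)
print(avg_E)  # 1/beta
```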

In 1901, Planck modified the existing theory, replacing the continuous energy with discrete energies:

${\displaystyle E=nh\nu =nh{c \over \lambda }}$

where ${\displaystyle n}$ is an integer. This 'fix' for the UV catastrophe was completed by incorporating Wien's Law (1893).
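To get a feel for the scale of these quanta, here is an illustrative calculation (the wavelength is my choice, not from the text) of a single photon's energy from Planck's relation.

```python
# Illustrative numbers (my choice of wavelength, not from the text):
# the energy of one quantum of green light from E = h*c/lambda.
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
eV = 1.602e-19       # joules per electron-volt

wavelength = 500e-9  # 500 nm
E_photon = h * c / wavelength
print(E_photon)       # ~3.97e-19 J
print(E_photon / eV)  # ~2.48 eV
```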

### Discrete Energies

At everyday scales energies look continuous, but they are actually discretized. Furthermore, in 1905 Einstein proposed that not only is energy quantized, but so is electromagnetic radiation itself. This explained the photoelectric effect, which Hertz had first observed in 1887: whether electrons are ejected from a metal depends on the light's frequency, not its intensity.

# Quantum Mechanics for Engineers/The Stern-Gerlach Experiment


We discussed in the first chapter a list of historical experiments that highlight the origins of quantum mechanics. In this lecture, I want to present one final experiment. The experiment itself demonstrated the quantization of spin and orbital angular momentum, but we're going to take it a step further and discuss a thought experiment that demonstrates the fundamental workings of quantum mechanics.

## The Experiment

As it happens, for reasons we will discuss during the second half of this class, the Silver (Ag) atom has a very simple magnetic nature. Each atom can be treated as a little dipole with magnetic moment ${\displaystyle \mu }$ .

<EXPLANATION OF EXPERIMENT>

The force on a magnetic moment is:

${\displaystyle F=\nabla (\mu \cdot B)}$

In the z-direction:

${\displaystyle F_{z}={d \over dz}(\mu \cdot B)=\mu _{z}{dB_{z} \over dz}}$

The deflection of the Ag atom is proportional to the z-component of ${\displaystyle \mu }$ .

### Expected Results

Based on this, we expect atoms with randomly oriented magnetic moments ${\displaystyle \mu }$ to spread out into a single continuous distribution.

<FIGURE> "Classic Theoretical Results of the Stern-Gerlach Experiment" (Atoms have all different orientations of ${\displaystyle \mu }$, producing a single distribution across the screen, centered on the main axis.)

But this is not what we see...

### Actual Results

Rather, we see two separate distributions on either side of the main beam.

<FIGURE> "Actual Results of the Stern-Gerlach Experiment" (Two separate distributions, not on the main axis, are seen instead of the single, classically predicted, distribution.)

As it happens, in quantum mechanics, magnetization is tied to angular momentum. (Think of electrons zipping about in a circular orbit.) In Silver we are only looking at the spin of a single electron. The directional component of ${\textstyle S}$, say ${\textstyle S_{z}}$, can only take two values, "up" ${\textstyle \left({\hbar \over 2}\right)}$ or "down" ${\textstyle \left({-\hbar \over 2}\right)}$. What we just did was measure ${\textstyle S_{z}}$ of the Silver atoms, separating them into two beams, one with spin-up and the other with spin-down. Is this shocking? Yes. We just took a randomly oriented vector, ${\textstyle S}$, measured its projection, ${\textstyle S_{z}}$, and found it could only take two values.

## Explaining Quantum Mechanics

Let's keep going. Now that (in principle) we can make a simple measurement we can make a series of thought experiments. Let's pass a beam through a filter, and see what happens...

<FIGURE> "Explaining Quantum Mechanics: The ${\displaystyle SG_{\hat {z}}}$ Box" (A beam of Ag atoms enters the box, ${\displaystyle SG_{\hat {z}}}$, and is separated based on up and down spin.)

Let's take a beam of Ag atoms and have it enter the ${\displaystyle SG_{\hat {z}}}$ box, which separates the beam based on up and down spin. If we take the output from the ${\displaystyle SG_{\hat {z}}}$ measurement, discard the up elements, and remeasure the down beam, the resulting beam will still be "down". This is good; no surprise here, as this follows classical logic.

One hypothesis: the box works like polarized sunglasses, simply discarding one component of each atom's magnetic moment. If that were the whole story, a beam that has already been filtered would not split 50/50 in a second, rotated box. So let's try rotating the box...

Now let's try rotating the ${\textstyle SG_{\hat {z}}}$ box into an ${\textstyle SG_{\hat {y}}}$ box. The Ag beam is still split into up and down spin by the first ${\displaystyle SG_{\hat {z}}}$ box, but now the down group is filtered by an ${\textstyle SG_{\hat {y}}}$ box, which is simply an ${\displaystyle SG_{\hat {z}}}$ box rotated 90°.

<FIGURE> "Explaining Quantum Mechanics: The ${\displaystyle SG_{\hat {y}}}$  Component" (Note that the ${\displaystyle SG_{\hat {y}}}$  box is the same as the ${\displaystyle SG_{\hat {z}}}$  box, just rotated 90° to measure the y-component of the vector ${\displaystyle S}$ .)

It looks like both boxes have a base probability of 50/50 for up or down spin. Does this make sense? Maybe?

<FIGURE> "Title" (Description)

Now when we filter along ${\displaystyle {\hat {y}}}$, the beam again splits into up and down with 50/50 probability?

Something seems wrong with this picture...

Let's run one more experiment. This is the same as <FIGURE>, but now the up group coming out of the ${\textstyle SG_{\hat {y}}}$ box is filtered through a second ${\textstyle SG_{\hat {z}}}$ box. Classically, this should result in 100% down spin, as these elements measured 100% down spin before they entered the ${\textstyle SG_{\hat {y}}}$ box, but this is not what we see. Instead, the elements coming out of the second ${\textstyle SG_{\hat {z}}}$ box are 50/50 up and down spin.
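The filter chain above can be sketched numerically. Assuming the standard two-component spinor representation of a spin-1/2 system (the state names below are mine), the Born rule reproduces both 50/50 splits:

```python
# A numerical sketch of the filter chain (state names are mine), using
# the standard two-component spinor representation of a spin-1/2 atom.
import numpy as np

z_up   = np.array([1, 0], dtype=complex)
z_down = np.array([0, 1], dtype=complex)
# Eigenstates of S_y, written in the z basis:
y_up   = (z_up + 1j * z_down) / np.sqrt(2)
y_down = (z_up - 1j * z_down) / np.sqrt(2)

def prob(outcome, state):
    """Born rule: probability that `state` collapses onto `outcome`."""
    return abs(np.vdot(outcome, state)) ** 2

# A beam filtered to z-down, then sent into the SG_y box:
print(prob(y_up, z_down))   # ~0.5

# Collapse onto y-up, then measure S_z again:
print(prob(z_down, y_up))   # ~0.5 -- the earlier z result is erased
```

The second result is the surprise: once the state has collapsed onto a y eigenstate, the earlier z measurement tells us nothing.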

<FIGURE> "Explaining Quantum Mechanics: The second ${\displaystyle SG_{\hat {z}}}$  box." (Now the ${\displaystyle SG_{\hat {y}}}$  up beam is filtered through a second ${\displaystyle SG_{\hat {z}}}$  box.)

This is definitely weird. ${\displaystyle S}$ is just some vector. If you measure the sign of ${\displaystyle S_{z}}$, you can measure it again and again and again; it doesn't change. BUT after you go and measure ${\displaystyle S_{y}}$, if you look back at ${\displaystyle S_{z}}$ it has once again randomized. Classically, this is like taking a bunch of marbles and sorting them into red and blue. You then sort the blue marbles into large and small, but when you look back at the pile, half of the blue marbles have changed into red!

### Why does this happen?

The components of ${\displaystyle S}$ are "incompatible", as we can only know one component at a time. Before we measure ${\displaystyle S_{z}}$ we can say that the atom's wave function is in a "superposition" of being up and down. By Born's probabilistic interpretation of the wave function, ${\displaystyle \psi }$, we know that the odds of measuring up or down are 50/50. We measure ${\displaystyle S_{z}}$ and the wave function "collapses" to ${\displaystyle \phi _{S_{z}up}}$ or ${\displaystyle \phi _{S_{z}down}}$, depending on the measurement. Subsequent measurements have a 100% chance to repeat the initial measurement, according to the probabilistic interpretation of ${\displaystyle \psi =\phi _{S_{z}down}}$. In ${\displaystyle \psi =\phi _{S_{z}down}}$, the system is still in a superposition of ${\displaystyle S_{y,up}}$ and ${\displaystyle S_{y,down}}$. If we measure ${\displaystyle S_{y}}$ and find ${\displaystyle S_{y,up}}$, then we cause the wave function to collapse to ${\displaystyle \psi =\phi _{S_{y}up}}$. In this state we have no information about ${\displaystyle S_{z}}$. We lost the information we had measured earlier when ${\displaystyle \psi }$ collapsed into ${\displaystyle \phi _{S_{y}up}}$.

In the next section we will go over the formalism of quantum mechanics, and will readdress the Stern-Gerlach experiment mathematically.

# Quantum Mechanics for Engineers/The Fundamental Postulates


There are four basic postulates that underlie quantum mechanics.

Postulate I: Observables and Operators are Related

Postulate II: Measurement collapses the Wave Function

Postulate III: There exists a state function that allows expectation values to be calculated.

Postulate IV: The wave function evolves according to the time-dependent Schrodinger equation.

## Postulate I

Each self-consistent, well-defined observable has a linear operator that satisfies the eigenvalue equation, ${\displaystyle {\hat {A}}\phi =a\phi }$, where ${\displaystyle A}$ is the observable, ${\displaystyle {\hat {A}}}$ is the operator, ${\displaystyle a}$ is the measured eigenvalue, and ${\displaystyle \phi }$ is the corresponding eigenfunction. In a given system there is a different eigenfunction for each eigenvalue, so you will often see ${\displaystyle \phi _{a}}$, which specifies that ${\displaystyle \phi }$ is the eigenfunction belonging to ${\displaystyle a}$. Thus, this postulate links an observable to a mathematical operator.

### What are Mathematical Operators?

An "operator" is a mathematical object that acts on a function and transforms it into another function. For example, let ${\displaystyle {\hat {D}}_{x}}$ be the operator defined as the derivative with respect to ${\displaystyle x}$:

${\displaystyle {\hat {D}}_{x}f(x)={d \over dx}f(x)}$

Operators can be applied in sequence: each one acts on the result of the last while still following its own rule. For example, let's apply an operator, ${\displaystyle R_{z90}}$, which rotates the function 90° about the z-axis.

Other operators work the same way, such as a "divide by three" operator, or an identity operator, which leaves the function unchanged:

${\displaystyle {\begin{array}{lcl}{\hat {B}}=\ {\text{divide by three}}&\Longrightarrow &{\hat {B}}\phi ={\phi \over 3}\\{\hat {I}}=\ {\text{identity operator}}&\Longrightarrow &{\hat {I}}g=g\end{array}}}$

#### Physically Significant Operator Observables:

Physically meaningful observables all have operators. These come about in a variety of ways, but you can start to think of them as the corresponding classical quantities, quantized with the addition of ${\displaystyle \hbar }$ and ${\displaystyle i}$. If you look at these cases long enough, you'll eventually start seeing that there's a pattern to it.

Let's take the example of linear momentum, ${\displaystyle p}$. Its operator, ${\textstyle {\hat {p}}}$, is the vector operator ${\textstyle -i\hbar \nabla }$. While we could work with the full vector in three dimensions, the gradient treats each component independently, so let's simplify the problem and look only at the x-component.

${\displaystyle {\hat {p}}_{x}=-i\hbar \ {\partial \over \partial x}}$

Applying this operator to some function, ${\displaystyle \phi }$, gives:
${\displaystyle {\hat {p}}_{x}\phi (x)=-i\hbar \ {\partial \over \partial x}\ \phi (x)=p_{x}\phi }$

One solution to this differential equation is a planewave:

${\displaystyle \phi =Ae^{ikx}=A[\cos(kx)+i\sin(kx)]}$

The solution is just a planewave with wave number, ${\displaystyle k}$ . ${\textstyle \left(k={2\pi \over \lambda }\right)}$

${\displaystyle \underbrace {-i\hbar {\partial \over \partial x}} _{{\hat {p}}_{x}}*\ \underbrace {\left(Ae^{ikx}\right)} _{\phi }\ =\ \underbrace {-i\hbar (ik)} _{p_{x}}*\ \underbrace {Ae^{ikx}} _{\phi }}$

${\displaystyle p_{x}=\hbar k;\qquad \phi =Ae^{ikx}}$

This isn't very exciting on its own, as ${\displaystyle k}$ and ${\displaystyle p_{x}}$ can take any value, so nothing looks "quantized". Physically, this represents a free particle (i.e. a particle alone in an infinite vacuum), and the quantization comes from the boundary conditions we apply.
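The eigenvalue relation can be checked symbolically; a minimal sympy sketch (my own, not from the text):

```python
# Symbolic check (mine) that the planewave is an eigenfunction of
# p_x = -i*hbar*d/dx with eigenvalue hbar*k.
import sympy as sp

x, k, hbar, A = sp.symbols('x k hbar A', positive=True)
phi = A * sp.exp(sp.I * k * x)

p_phi = -sp.I * hbar * sp.diff(phi, x)    # apply the momentum operator
eigenvalue = sp.simplify(p_phi / phi)
print(eigenvalue)  # hbar*k
```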

### Application of Boundary Conditions

<FIGURE> "Born-von Karman Boundary Conditions" (These boundary conditions could be pictured as a box or as a ring.)

Let's apply periodic boundary conditions (PBC) called "Born-von Karman Boundary Conditions". <FIGURE> With this we are essentially putting the particle in a one-dimensional box where it is free to move within the box, but once it leaves the box it loops back around in space and reenters the box from the other side. The box has some size, ${\displaystyle L}$ , which gives us the quantization. This concept can also be pictured as a ring with radius ${\textstyle R={L \over 2\pi }}$ .

These boundary conditions restrict the solutions, because the solutions must match at these boundaries. Thus:

{\displaystyle {\begin{aligned}\phi (0)&=\phi (L)\\Ae^{ik\cdot 0}&=Ae^{ikL}\end{aligned}}}

This isn't obviously solvable, so we substitute in the sine and cosine forms from the planewave equation, which gives:
${\displaystyle \underbrace {\cos(k\cdot 0)} _{=1}+\underbrace {i\sin(k\cdot 0)} _{=0}=\underbrace {\cos(kL)} _{must\ be\ 1}+\underbrace {i\sin(kL)} _{must\ be\ 0}}$

Since the right-hand side must equal the left, we can conclude that ${\textstyle kL=0,2\pi ,4\pi ,...}$, i.e. ${\textstyle kL=2\pi n}$ for integer ${\textstyle n}$. Following this logic:
${\displaystyle k={n2\pi \over L};\qquad \phi (x)=Ae^{i{2\pi n \over L}x}}$

Now we have a quantized solution. Going back to the idea of the ring boundary condition, we come upon the de Broglie hypothesis from Chapter 1 (${\textstyle p=\hbar k}$), showing us that when Planck initially quantized particles he was thinking of a periodic situation. Additionally, we can recover the Bohr model of the atom by combining these two concepts.

${\displaystyle k={2\pi \over \lambda }={n2\pi \over L}\quad \longrightarrow \quad n\lambda =L=2\pi R}$

<FIGURE> "Bohr Atom Model from de Broglie Equations" (Description)
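To make the quantization concrete, here are illustrative numbers (the 1 nm ring size is my choice, not from the text) for the first few allowed wavenumbers ${\displaystyle k=2\pi n/L}$ and momenta ${\displaystyle p=\hbar k}$:

```python
# Illustrative numbers (the 1 nm ring is my choice, not from the text):
# the first few allowed wavenumbers k = 2*pi*n/L and momenta p = hbar*k
# under Born-von Karman boundary conditions.
import math

hbar = 1.055e-34   # reduced Planck constant, J*s
L = 1e-9           # box (ring) length, m

allowed = [(n, 2 * math.pi * n / L, hbar * 2 * math.pi * n / L)
           for n in range(1, 4)]
for n, k, p in allowed:
    print(f"n={n}: k = {k:.3e} 1/m, p = {p:.3e} kg*m/s")
```

Note that the spacing between allowed momenta grows as ${\displaystyle L}$ shrinks, which is why quantization only becomes noticeable at small scales.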

#### Effect of Boundary Conditions

This is what makes nanoscience interesting! When the dimensions of a structure are small enough they affect the quantization. If we can control the dimensionality at a nanoscale, we can control the quantum nature of electrons.

Another well-defined observable is energy. In classical mechanics there are several ways to formulate the equations of motion (Newtonian, Lagrangian, Hamiltonian). I'm not going to talk about these, but you should know that the formalism of quantum mechanics matches the classical Hamiltonian formalism. For systems where the kinetic energy depends on momentum and the potential energy depends on position, the Hamiltonian operator takes the simple form:

${\displaystyle {\hat {H}}={\hat {T}}+{\hat {V}}}$ , where ${\displaystyle {\hat {T}}}$  is the kinetic energy and ${\displaystyle {\hat {V}}}$  is the potential energy.

For now we are going to talk about particles in a vacuum, which sets the potential energy (${\displaystyle {\hat {V}}}$) to zero, leaving only the kinetic energy (${\displaystyle {\hat {T}}}$). We can take the equation for kinetic energy, ${\textstyle {{\hat {p}}^{2} \over 2m}}$, from classical mechanics and substitute in our momentum operator, ${\displaystyle -i\hbar \nabla }$, to get a simplified expression for ${\displaystyle {\hat {T}}}$ in terms of ${\displaystyle \nabla ^{2}}$, the Laplacian operator.

{\displaystyle {\begin{aligned}\\{\hat {V}}&=V(r)=0\quad (for\ now)\\{\hat {T}}&={{\hat {p}}^{2} \over 2m}={1 \over 2m}(-i\hbar \nabla )^{2}={-\hbar ^{2} \over 2m}\nabla ^{2}\end{aligned}}}

Expanding ${\displaystyle \nabla ^{2}}$:

{\displaystyle {\begin{aligned}\nabla ^{2}&=\nabla \cdot \nabla \\&=\left\langle {\partial \over \partial x},{\partial \over \partial y},{\partial \over \partial z}\right\rangle \cdot \left\langle {\partial \over \partial x},{\partial \over \partial y},{\partial \over \partial z}\right\rangle \\&={\partial ^{2} \over \partial x^{2}}+{\partial ^{2} \over \partial y^{2}}+{\partial ^{2} \over \partial z^{2}}\end{aligned}}}

Once again, we can reduce this to a one-dimensional problem by keeping only the x-term of the expanded ${\displaystyle \nabla ^{2}}$.
${\displaystyle {\hat {H}}={-\hbar ^{2} \over 2m}{\partial ^{2} \over \partial x^{2}}}$

The operator takes a second derivative, so as it operates it returns the curvature of the function: the kinetic energy operator is proportional to a function's curvature. Thus, solutions with tighter curves will have higher energies than slowly varying functions.

Ideally, we want to solve: ${\displaystyle {\hat {H}}\phi =E\phi }$  (Time-Independent Schrodinger Equation)

${\displaystyle E\phi ={-\hbar ^{2} \over 2m}{\partial ^{2} \over \partial x^{2}}\phi }$

What solves this? Planewaves! ${\textstyle \left(\phi =Ae^{ikx}+Be^{-ikx}\right)}$  As it turns out, planewaves are a common solution in quantum mechanics!

{\displaystyle {\begin{aligned}E\phi &={-\hbar ^{2} \over 2m}{\partial ^{2} \over \partial x^{2}}\left[Ae^{ikx}+Be^{-ikx}\right]\\&={-\hbar ^{2} \over 2m}{\partial \over \partial x}\left[Aike^{ikx}+B(-ik)e^{-ikx}\right]\\&={-\hbar ^{2} \over 2m}\left[A(ik)^{2}e^{ikx}+B(ik)^{2}e^{-ikx}\right]\\&=\underbrace {{-\hbar ^{2} \over 2m}\left(-k^{2}\right)} _{E}\ \underbrace {\left[Ae^{ikx}+Be^{-ikx}\right]} _{\phi }\end{aligned}}}

Here we can see that the eigenvalue is ${\displaystyle E={\hbar ^{2}k^{2} \over 2m}}$; breaking up the equation gives us:

${\displaystyle E={-\hbar ^{2} \over 2m}\left(-k^{2}\right)={\hbar ^{2}k^{2} \over 2m};\qquad \phi =Ae^{ikx}+Be^{-ikx}}$

These variables are consistent with our earlier finding that:

${\displaystyle p_{x}=\hbar k;\qquad E={1 \over 2}mv^{2}={p^{2} \over 2m}={(\hbar k)^{2} \over 2m}}$
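This energy eigenvalue can be verified symbolically; a short sympy sketch (mine) applying ${\displaystyle {\hat {H}}}$ to the two-component planewave:

```python
# Symbolic check (mine) that the two-component planewave satisfies
# H phi = E phi with E = hbar^2 k^2 / (2m).
import sympy as sp

x, k, hbar, m, A, B = sp.symbols('x k hbar m A B', positive=True)
phi = A * sp.exp(sp.I * k * x) + B * sp.exp(-sp.I * k * x)

H_phi = -hbar**2 / (2 * m) * sp.diff(phi, x, 2)   # free-particle H
E = sp.simplify(H_phi / phi)
print(E)  # hbar**2*k**2/(2*m)
```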

Note: Our earlier solution ${\textstyle \left(\phi =Ae^{ikx}\right)}$ had one component due to the single derivative in its parent equation, while our current solution has two components due to the second derivative in its parent equation.

Here, the momentum eigenvalue tells us the magnitude, and the ${\displaystyle A}$ and ${\displaystyle B}$ coefficients tell us whether the wave travels to the left or to the right. As you may have guessed, energy and momentum are compatible with each other: we can know them both at the same time. In quantum mechanics, if operators "commute" then they share eigenfunctions. Notice that if either coefficient ${\displaystyle A}$ or ${\displaystyle B}$ is zero, then the eigenfunctions of energy are also eigenfunctions of momentum. Generally, two operators ${\displaystyle {\hat {A}}}$ and ${\displaystyle {\hat {B}}}$ commute if:

${\displaystyle \left[{\hat {A}},{\hat {B}}\right]={\hat {A}}{\hat {B}}-{\hat {B}}{\hat {A}}=0}$

For example, let's look at momentum and energy, when ${\displaystyle f(x)}$  is some test function:
{\displaystyle {\begin{aligned}\left[{\hat {p}}_{x},{\hat {H}}\right]f(x)&=\left[-i\hbar {\partial \over \partial x},{-\hbar ^{2} \over 2m}{\partial ^{2} \over \partial x^{2}}\right]f(x)\\&=\left[\left(-i\hbar {\partial \over \partial x}\right)\left({-\hbar ^{2} \over 2m}{\partial ^{2} \over \partial x^{2}}\right)-\left({-\hbar ^{2} \over 2m}{\partial ^{2} \over \partial x^{2}}\right)\left(-i\hbar {\partial \over \partial x}\right)\right]f(x)\\&=(-i\hbar ){-\hbar ^{2} \over 2m}{\partial ^{3} \over \partial x^{3}}f(x)-{-\hbar ^{2} \over 2m}(-i\hbar ){\partial ^{3} \over \partial x^{3}}f(x)\\&=i{\hbar ^{3} \over 2m}f'''(x)-i{\hbar ^{3} \over 2m}f'''(x)=0\end{aligned}}}

Since ${\displaystyle [{\hat {p_{x}}},{\hat {H}}]=0}$ , ${\displaystyle {\hat {p_{x}}}}$  and ${\displaystyle {\hat {H}}}$  commute.

Let's try a different operator. This time, let's compare position and momentum.

{\displaystyle {\begin{aligned}\left[{\hat {p}}_{x},{\hat {x}}\right]f(x)&=\left[-i\hbar {\partial \over \partial x},x\right]f(x)\\&=-i\hbar {\partial \over \partial x}xf(x)-x(-i\hbar ){\partial \over \partial x}f(x)\\&=(-i\hbar )\left[{\partial \over \partial x}xf(x)-x{\partial \over \partial x}f(x)\right]\\&=(-i\hbar )\left[x{\partial \over \partial x}f(x)+f(x){\partial \over \partial x}x-x{\partial \over \partial x}f(x)\right]\\&=-i\hbar f(x)\end{aligned}}}

Here, ${\displaystyle [{\hat {p_{x}}},{\hat {x}}]=-i\hbar \neq 0}$, meaning that ${\displaystyle {\hat {p_{x}}}}$ and ${\displaystyle {\hat {x}}}$ do not commute. Momentum and position therefore do not share eigenfunctions. As it so happens, this is all tied to observation and the fundamental uncertainty in our knowledge.
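Both commutators can be checked symbolically with a test function, mirroring the hand derivations above; a sympy sketch (mine):

```python
# Symbolic check (mine) of both commutators via a test function f(x):
# [p_x, H] = 0 but [p_x, x] = -i*hbar.
import sympy as sp

x, hbar, m = sp.symbols('x hbar m', positive=True)
f = sp.Function('f')(x)

def p(g):   # momentum operator, -i*hbar d/dx
    return -sp.I * hbar * sp.diff(g, x)

def H(g):   # free-particle Hamiltonian (V = 0)
    return -hbar**2 / (2 * m) * sp.diff(g, x, 2)

comm_pH = sp.simplify(p(H(f)) - H(p(f)))
comm_px = sp.simplify(p(x * f) - x * p(f))
print(comm_pH)  # 0
print(comm_px)  # -I*hbar*f(x)
```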

Recall the Heisenberg Uncertainty Principle:

${\displaystyle \Delta x\Delta p\geq {\hbar \over 2}}$

When operators commute, we say that the observables associated with them are "compatible", meaning they can be measured simultaneously to arbitrary precision. (This is related to the Schwarz inequality.) Without proof, I will tell you that:

If ${\displaystyle \left[{\hat {A}},{\hat {B}}\right]={\hat {C}}\neq 0}$, then ${\displaystyle \Delta A\Delta B\geq {1 \over 2}|\langle c\rangle |}$, where ${\displaystyle \langle c\rangle }$ is the expectation value of ${\displaystyle {\hat {C}}}$.

So, for ${\displaystyle \left[{\hat {x}},{\hat {p_{x}}}\right]=-i\hbar }$ , ${\displaystyle \Delta x\Delta p_{x}\geq {1 \over 2}\hbar }$  (working with ${\displaystyle \left|\left({\hat {x}},{\hat {p_{x}}}\right)\right|^{2}}$ ) *see B&J p.215

This is a BIG DEAL! It means that it is impossible to simultaneously know certain things. (Remember our thought experiment from Chapter 2?) What's more, this is purely a quantum effect. Consider again, momentum. What if we precisely measure the momentum to be ${\displaystyle \hbar k}$ , then the particle's wave function is ${\displaystyle \phi _{k}}$ .

Remember in the probabilistic interpretation:

{\displaystyle {\begin{aligned}\psi ^{*}\psi &=P(x)\\A^{*}e^{-ikx}Ae^{ikx}&=|A|^{2}=P(x)\end{aligned}}}

<FIGURE> "Incompatible Observables" (Constant value ${\displaystyle |A|^{2}}$ )

But ${\displaystyle A}$ is just the normalization constant, so the probability distribution appears as in <FIGURE>. If we know ${\displaystyle p_{x}}$ precisely, then we know nothing about ${\displaystyle x}$! There is equal probability anywhere in the range ${\displaystyle -\infty <x<\infty }$.

Thus, ${\displaystyle x}$  and ${\displaystyle p_{x}}$  are incompatible observables.

## Postulate II

A measurement of observable ${\displaystyle A}$  that yields value ${\displaystyle a}$  leaves the system in state ${\displaystyle \phi _{a}}$ .

${\displaystyle {\hat {A}}\phi _{a}=a\phi _{a}}$

We say that the measurement "collapses the wave function" to ${\displaystyle \phi _{a}}$, where ${\displaystyle \phi _{a}}$ is the eigenfunction of the particular value measured. Immediate subsequent measurements will thus yield the value ${\displaystyle a}$, as the wave function remains collapsed to that eigenfunction until another property is measured, as seen in Chapter 2.

What is important here? Before the initial measurement, the outcome can only be predicted statistically from ${\displaystyle \psi }$, a superposition of possible states. The act of measuring leaves one particular state, ${\displaystyle \phi _{a}}$, for subsequent measurements. Note that this is very similar to solving partial differential equations: the general solution is a linear superposition of all possible solutions, which is analogous to what we see here.

## Postulate III

There exists a state function, called the "wave function" that represents the state of the system at any given instant, and all the information we could know about the system is contained in this state function, ${\displaystyle \Psi }$ , which is continuous and differentiable.

For any observable, ${\displaystyle C}$ , we can find the expectation value, for measuring ${\displaystyle C}$  from ${\displaystyle \Psi }$ .

${\displaystyle \langle c\rangle =\int \Psi ^{*}{\hat {C}}\ \Psi \ dr}$

Here ${\displaystyle \Psi ^{*}}$  is the complex conjugate of ${\displaystyle \Psi \rightarrow (a+bi)^{*}=a-bi}$ , and ${\displaystyle \int dr}$  is an abbreviation for ${\displaystyle \int \int \int dx\ dy\ dz}$

#### Review of Statistics (and the meaning of the "expectation value", ${\displaystyle \langle c\rangle }$ )

In statistics, ${\displaystyle {\bar {c}}}$ is the expectation value ${\displaystyle \langle c\rangle }$, and when all goes well in sampling theory:

${\displaystyle {\bar {c}}={1 \over N}\sum _{i=1}^{N}c_{i}}$

Within this framework, if you know all the possible outcomes and their probabilities, you can write the expectation value directly. Let's say I have a bag with 5 pennies, 3 dimes, and 2 quarters. The expected value of a coin pulled from the bag is:

{\displaystyle {\begin{aligned}{\bar {c}}&={1 \over N}\sum _{i=1}^{N}ci={1 \over 10}(1+1+1+1+1+10+10+10+25+25)\\&={85 \over 10}={8.5}\\&=\left(1\times {5 \over 10}\right)+\left(10\times {3 \over 10}\right)+\left(25\times {2 \over 10}\right)\end{aligned}}}

${\displaystyle {\bar {c}}=\sum _{all\ c}c_{i}P(c_{i})}$

For a continuous probability distribution:

${\displaystyle {\bar {c}}=\int cP(c)\ dc}$
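The coin-bag arithmetic can be checked directly; a quick sketch (mine) computing the expectation value both ways:

```python
# The coin-bag example (5 pennies, 3 dimes, 2 quarters) computed both
# ways: sample mean over all coins, and probability-weighted sum.
coins = [1] * 5 + [10] * 3 + [25] * 2

sample_mean = sum(coins) / len(coins)

probabilities = {1: 5 / 10, 10: 3 / 10, 25: 2 / 10}
weighted_sum = sum(c * p for c, p in probabilities.items())

print(sample_mean, weighted_sum)  # 8.5 8.5
```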

### State Functions in Quantum Mechanics

Applying this statistical expectation value to our quantum state function gives us:

${\displaystyle \langle c\rangle =\int \Psi ^{*}\underbrace {{\hat {c}}\Psi } _{=c\Psi }\ dr=\int cP(c)dc}$

Here, since ${\displaystyle c}$ is just a number, ${\displaystyle \Psi ^{*}{\hat {c}}\ \Psi }$ simplifies to ${\displaystyle c\Psi ^{*}\Psi =cP(c)}$.

## Postulate IV

The state function, ${\displaystyle \Psi }$ , develops according to the equation:

${\displaystyle i\hbar {\partial \over \partial t}\Psi (r,t)={\hat {H}}\Psi (r,t)}$

This is the time-dependent Schrodinger equation, valid in the non-relativistic limit. (Note that this equation is a postulate; there is no proof for it.) As it happens, to account for relativity we either correct our solutions by perturbation methods or instead solve the Dirac equation:

${\displaystyle \left(\beta mc^{2}+\sum _{k=1}^{3}\alpha _{k}p_{k}c\right)\psi (r,t)=i\hbar \ {\partial \psi (r,t) \over \partial t}}$

These four postulates give us the basis for everything we do in Quantum Mechanics, and the reason they work out is tied to linear Hermitian operators. The solution to the eigenvalue equation has special properties, wherein the eigenfunctions are orthonormal. For an arbitrary system with bound states:

${\displaystyle {\hat {O}}\psi _{n}=o_{n}\psi _{n}}$; where ${\displaystyle n=0,1,2,...}$, and ${\displaystyle o_{n}}$ is the ${\displaystyle n^{th}}$ eigenvalue, which corresponds to the ${\displaystyle n^{th}}$ eigenfunction ${\displaystyle \psi _{n}}$.

### Orthonormality

An orthonormal set of functions satisfies:

${\displaystyle \int \psi _{n}^{*}\psi _{m}\ dr=\delta _{nm}\quad {\begin{cases}=1\quad (if\ \ n=m)\\=0\quad (otherwise)\end{cases}}}$

Here, ${\displaystyle \delta _{nm}}$ is the Kronecker delta. This property is a consequence of Sturm-Liouville theory: the set of eigenfunctions, ${\displaystyle \{\psi _{n}\}}$, spans Hilbert space (sometimes only a sub-space), the function-space where ${\displaystyle \Psi }$ lives. Hilbert space can be thought of as analogous to Euclidean space, where vectors live, with some set of vectors ${\displaystyle \{q_{i}\}}$. If that set of vectors is orthonormal and spans the space, then it can act as a basis for all other vectors in that space, and we can write any arbitrary vector ${\displaystyle v}$ as a sum over ${\displaystyle \{q_{i}\}}$.

${\displaystyle v=\sum _{i}c_{i}\ q_{i}}$

Those who have taken linear algebra might remember a set of rules about eigenvalues, eigenvectors, and so on. They all apply to what you see here; in fact, there is a matrix notation that directly maps all of quantum mechanics onto sets of matrices and vectors.

### Hilbert Space

With this orthogonal property, we can express ${\displaystyle \Psi }$  using ${\displaystyle \psi _{n}}$  as a basis.

${\displaystyle \Psi =\sum _{n}c_{n}\psi _{n}}$

Just as with Euclidean space, the ${\displaystyle c_{n}}$ are the projections of ${\displaystyle \Psi }$ onto ${\displaystyle \psi _{n}}$. The value of this is that we can solve for ${\displaystyle c_{n}}$ by taking the equivalent of an inner product (dot product).

{\displaystyle {\begin{aligned}c_{i}&=\int dr\ \psi _{i}^{*}\ \Psi \\&=\int dr\ \psi _{i}^{*}\ \sum _{n}c_{n}\psi _{n}\\&=\int dr\ (\psi _{i}^{*}c_{1}\psi _{1}+\psi _{i}^{*}c_{2}\psi _{2}+\cdots +\psi _{i}^{*}c_{i}\psi _{i}+\cdots )=c_{i}\end{aligned}}}

By orthonormality, every term except ${\displaystyle \psi _{i}^{*}c_{i}\psi _{i}}$ integrates to zero.
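The projection idea can be illustrated in ordinary Euclidean space, the discrete analogue of ${\displaystyle c_{i}=\int \psi _{i}^{*}\Psi \ dr}$; a small numpy sketch (the basis and angle are my choices):

```python
# A Euclidean-space sketch (basis and angle are my choices) of
# coefficients as projections: c_i = q_i . v recovers the expansion.
import numpy as np

theta = 0.3
basis = [np.array([np.cos(theta),  np.sin(theta)]),
         np.array([-np.sin(theta), np.cos(theta)])]  # orthonormal in R^2

Psi = 0.6 * basis[0] + 0.8 * basis[1]   # a normalized "state"

c = [np.dot(q, Psi) for q in basis]      # projections onto the basis
print(c)                                 # ~[0.6, 0.8]
print(sum(ci ** 2 for ci in c))          # ~1.0
```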

The fact that we can have a basis which is orthonormal, spans space, allows us to write the wave function, gives us a way to describe it in Hilbert space, and allows us to describe the coefficients as the projection of the wave function onto that particular eigenfunction, is very important!

Think back to expectation values, where ${\displaystyle {\hat {O}}\psi _{n}=o_{n}\psi _{n}}$ . Solving for each term:

{\displaystyle {\begin{aligned}\langle {\hat {o}}\rangle =\int \Psi ^{*}{\hat {o}}\ \Psi &=\int (c_{1}^{*}\psi _{1}^{*}+c_{2}^{*}\psi _{2}^{*}+\cdots )\ {\hat {o}}\ (c_{1}\psi _{1}+c_{2}\psi _{2}+\cdots )\\&=\sum _{i,j}\int c_{i}^{*}\psi _{i}^{*}\ {\hat {o}}\ c_{j}\psi _{j}\\&=\sum _{i,j}c_{i}^{*}c_{j}\int \psi _{i}^{*}\underbrace {{\hat {o}}\psi _{j}} _{o_{j}\psi _{j}}\\&=\sum _{i,j}c_{i}^{*}c_{j}o_{j}\int \psi _{i}^{*}\psi _{j}\\&=\sum _{i,j}c_{i}^{*}c_{j}o_{j}\ \delta _{ij}=\sum _{i}c_{i}^{*}c_{i}o_{i}\end{aligned}}}

Thus, ${\displaystyle \langle o\rangle ={\bar {o}}=\sum _{all\ o}o_{i}p(o_{i})}$

Therefore the probability of measuring a particular value is ${\displaystyle p(o_{i})=c_{i}^{*}c_{i}}$ , given by the coefficient which is the projection of the wave function onto that particular eigenfunction. If you think about this physically in vector space, it kind of makes sense! We're saying that if I have a vector that's mostly in the 1 direction, then it's going to have a behavior that's also "mostly" in the 1 direction. There is still a probability of measuring it in the other directions as well. So, when we talk about superposition, it's as a linear sum of eigenfunctions. Remembering that with each eigenfunction there is a coefficient which is the projection of the wave function onto that eigenfunction, this tells us the probability of measuring any particular value.

We have some operator, ${\textstyle {\hat {S}}_{z}}$ , which operates on some function, ${\textstyle \chi }$ , and returns the value ${\displaystyle s_{z}\chi }$ . This system has only two solutions (in the case of the silver atom):

{\displaystyle {\begin{aligned}&s_{z}={\hbar \over 2},\ \chi _{\uparrow }\\&s_{z}={-\hbar \over 2},\ \chi _{\downarrow }\\\end{aligned}}}

When we had that initial beam of atoms, passing through vacuum, initially we didn't know anything about the state; it was randomized.

{\displaystyle {\begin{aligned}\Psi =&{1 \over {\sqrt {2}}}\chi _{\uparrow }+{i \over {\sqrt {2}}}\chi _{\downarrow }\\&P({\hbar \over 2})={1 \over {\sqrt {2}}}{1 \over {\sqrt {2}}}=0.50\\&P({-\hbar \over 2})={-i \over {\sqrt {2}}}{i \over {\sqrt {2}}}=0.50\end{aligned}}}

This says that the probability of measuring each outcome is 50/50! Note that the wave function is normalized: the probabilities sum to one. If this were not true, we would have to rescale the state until it is normalized. Now let's say we make the measurement and find an "up" spin, meaning that ${\displaystyle \Psi }$  has collapsed to ${\displaystyle \chi _{\uparrow }}$ . Having made this measurement, the probability of a subsequent "up" result is now one and the probability of a "down" result is now zero.
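This Born-rule bookkeeping is easy to mirror numerically. The following is a minimal Python sketch (my own illustration, not from the text), where the two array slots simply stand in for the up/down basis states:

```python
import numpy as np

# State |Psi> = (1/sqrt(2))|up> + (i/sqrt(2))|down> in the S_z basis.
psi = np.array([1/np.sqrt(2), 1j/np.sqrt(2)])

# Born rule: probability of outcome i is |c_i|^2 = c_i* c_i.
probs = np.abs(psi)**2

# The state is normalized, so the probabilities sum to one.
total = probs.sum()

# After measuring "up", the state collapses onto that basis vector,
# and a repeated measurement gives "up" with certainty.
collapsed = np.array([1.0, 0.0])
prob_up_again = np.abs(collapsed[0])**2
```

Note that the imaginary coefficient on the "down" slot contributes the same probability as a real one: only the magnitude matters.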

What about ${\displaystyle S_{y}}$ ?

${\displaystyle {\hat {S_{y}}}\zeta =s_{y}\zeta }$ ; ${\displaystyle \left\{{s_{y}=+{\hbar \over 2},\quad \zeta _{\uparrow } \atop s_{y}=-{\hbar \over 2},\quad \zeta _{\downarrow }}\right\}}$

This system has two possible results, analogous to the ones shown with ${\textstyle {\hat {S}}_{z}}$ . We can write both systems together as:

${\displaystyle \alpha _{1}\chi _{\uparrow }+\alpha _{2}\chi _{\downarrow }=\Psi =\beta _{1}\zeta _{\uparrow }+\beta _{2}\zeta _{\downarrow }}$

The sets ${\displaystyle \{\chi \}}$  and ${\displaystyle \{\zeta \}}$  are incompatible. When we measure one observable, the state vector snaps to one of that observable's basis states; measuring the other observable then snaps it to one of the other basis states.

Most importantly, we can collapse ${\displaystyle \Psi }$  onto either ${\displaystyle \{\chi \}}$  or ${\displaystyle \{\zeta \}}$ , but not both. These two operators are incompatible: they do not commute, and operators that do not commute define different basis sets within Hilbert space. We can write the two expansions side by side, as each is still equal to the wave function, but information about one basis tells us nothing about the other.

The collapse of ${\displaystyle \Psi }$  to ${\displaystyle \Psi =\zeta _{\uparrow \downarrow }}$  or ${\displaystyle \Psi =\chi _{\uparrow \downarrow }}$  is unique to quantum mechanics and is why we can't simultaneously know these two observables!

# Quantum Mechanics for Engineers/Particle in a Box

This is the fourth chapter of the first section of the book Electronic Properties of Materials.


So far we've gotten a feel for how the quantum world works and we've walked through the mathematical formalism, but for a theory to be any good, it must be possible to calculate meaningful values. The goal of this course is to show how the properties of solids come from quantum mechanics and the properties of atoms. Before we look at the properties of solids, we need to study how electrons and atoms interact in a quantum picture. Over the next several chapters, we will study this, but first we need to consider atoms in isolation.

So we want to solve the time-independent Schrodinger equation with ${\textstyle {\hat {H}}={\hat {T}}+{\hat {V}}}$ . As it happens, finding ${\textstyle {\hat {V}}=V(r)}$  for most problems is non-trivial. In atoms, the electron-nucleus potential goes as ${\textstyle {-Zq^{2} \over r}}$ , but the electron-electron interactions are difficult to treat, as we will see later. The way to approach this is through simplifications and approximations, so we're going to start with the simplest calculations and build up from there.

## Time-Dependent Schrodinger Equation

Let's look at a particle in a one-dimensional box with infinite boundaries.

{\displaystyle {\begin{aligned}&{\hat {H}}={\hat {T}}+{\hat {V}}\\&{\hat {T}}={-\hbar ^{2} \over 2m}{\partial ^{2} \over \partial x^{2}}\\&{\hat {V}}=V(x)={\begin{cases}0&{\text{if }}0<x<L\\\infty &{\text{otherwise}}\end{cases}}\end{aligned}}}

The fact that ${\displaystyle V(x)}$  jumps to infinity at the walls means that we can essentially throw out anything past the defined barriers of our box. Note that the Hamiltonian here is ${\displaystyle H(x)}$  only, not ${\displaystyle H(x,t)}$ , which suggests a separation of variables. Let's check this idea by guessing the solution:

${\displaystyle \Psi (x,t)=X(x)T(t)}$

Here the solution is a product of two functions, ${\displaystyle X(x)}$  and ${\displaystyle T(t)}$ . To solve, we substitute it into the time-dependent S.E. and rearrange.

{\displaystyle {\begin{aligned}i\hbar \ {\partial \over \partial t}\ \Psi (x,t)&={\hat {H}}\ \Psi (x,t)\\i\hbar \ {\partial \over \partial t}\ X(x)\ T(t)&={\hat {H}}\ X(x)\ T(t)\\i\hbar \ X{\partial \over \partial t}\ T&=T\ {\hat {H}}X\\{1 \over XT}\left(i\hbar \ X{\partial \over \partial t}\ T\right)&={1 \over XT}\left(T\ {\hat {H}}X\right)\\i\hbar {1 \over T}\ {\partial \over \partial t}\ T&={1 \over X}\ {\hat {H}}X\end{aligned}}}

Both the pure-${\displaystyle t}$  side, ${\textstyle i\hbar \ {1 \over T}{\partial \over \partial t}\ T}$ , and the pure-${\displaystyle x}$  side, ${\textstyle {1 \over X}{\hat {H}}X}$ , must be equal to some shared constant, ${\displaystyle \alpha }$ .

Thus:

{\displaystyle {\begin{aligned}&\alpha =i\hbar {1 \over T}{\partial \over \partial t}T\\&\alpha ={1 \over X}{\hat {H}}X\\\end{aligned}}\longrightarrow {\hat {H}}X(x)=\alpha X(x)}


Look! It's the Time-Independent Schrodinger Equation! This is exactly what we want to solve. As the Hamiltonian operator is the operator of energy, we're going to get eigenvalues, ${\displaystyle \alpha }$ , which are measurable values of energy, and eigenfunctions, ${\displaystyle X(x)}$ , which are the functions corresponding to those energies. It is common for people to rewrite this as:

{\displaystyle {\begin{aligned}\alpha &=E_{n}\\X(x)&=\phi _{n}(x)\\\therefore H\phi _{n}(x)&=E_{n}\phi _{n}(x)\end{aligned}}}

Returning to the time-dependent part, and rewriting as:

${\displaystyle i\hbar \ {\partial \over \partial t}\ T(t)=E\ T(t)}$


Taking ${\textstyle T(t)=Ae^{kt}}$  as our guess, one solution is:

{\displaystyle {\begin{aligned}i\hbar \ {\partial \over \partial t}\ (Ae^{kt})&=i\hbar k\ Ae^{kt}=E\ Ae^{kt}\\i\hbar k=E&\implies k={-i \over \hbar }E\\T(t)&=Ae^{{-i \over \hbar }Et}\end{aligned}}}

Thus, whenever the Hamiltonian is time-independent, ${\displaystyle H(x)}$ , a solution is:

${\displaystyle \Psi (x,t)=A_{n}e^{{-i \over \hbar }E_{n}t}\phi _{n}(x)}$

## The Time-Independent Solution, ${\textstyle \phi _{n}(x)}$ .

The general method to solve this type of problem is to break the space into parts with boundary conditions; each region having its own solution. Then, since the boundaries are what give us the quantization, we use the region interfaces to solve.

<FIGURE> "Title" (Description)

The matching conditions at the region interfaces are:

${\displaystyle {\begin{array}{lcl}\phi _{I}(0)=\phi _{II}(0)&{\partial \over \partial x}\phi _{I}(0)={\partial \over \partial x}\phi _{II}(0)\\\phi _{II}(L)=\phi _{III}(L)\quad &{\partial \over \partial x}\phi _{II}(L)={\partial \over \partial x}\phi _{III}(L)\end{array}}}$

(At an infinite potential step the derivative conditions cannot actually be enforced; only continuity of ${\displaystyle \phi }$  survives, which is what gives ${\displaystyle \phi (0)=\phi (L)=0}$  below.)

Regions I and III have a fairly simple solution here:

{\displaystyle {\begin{aligned}\infty \phi &=E\phi \\\therefore \ \phi &=0\end{aligned}}}

Region II has:

${\displaystyle {-\hbar ^{2} \over 2m}\ {\partial ^{2} \over \partial x^{2}}\ \phi (x)=E\ \phi (x)}$

What is a good solution? Let's try planewaves! The general planewave solution, ${\displaystyle Ae^{ikx}+Be^{-ikx}}$ , is not very easy to lug around, so we expand it with Euler's formula (wave functions in quantum mechanics are in general complex):

{\displaystyle {\begin{aligned}A\ [\cos(kx)+i\sin(kx)]&+B[\cos(kx)-i\sin(kx)]\\(A+B)\cos(kx)&+i(A-B)\sin(kx)\\\alpha \cos(kx)&+\beta \sin(kx)\end{aligned}}}

Now apply some boundary conditions...

{\displaystyle {\begin{aligned}\phi (0)=\alpha \underbrace {\cos(0)} _{=1}+\beta \underbrace {\sin(0)} _{=0}&=0\\\therefore \alpha &=0\end{aligned}}}

{\displaystyle {\begin{aligned}\phi (L)=\beta \sin(kL)&=0\\\sin(z)&=0,\ {\text{where}}\ z=n\pi ,\ n=0,\ \pm 1,\ \pm 2,\ \dots \\kL&=n\pi \rightarrow k={n\pi \over L},\ {\text{where}}\ n=0,\ \pm 1,\ \pm 2,\ \dots \end{aligned}}}

${\displaystyle \therefore \phi _{n}(x)=\beta \sin({n\pi \over L}x),\ {\text{where}}\ n=0,\ \pm 1,\ \pm 2,\ \dots }$

{\displaystyle {\begin{aligned}E\ \beta \sin({n\pi \over L}x)&={-\hbar ^{2} \over 2m}{\partial ^{2} \over \partial x^{2}}\ \beta \sin({n\pi \over L}x)\\&={-\hbar ^{2} \over 2m}{\partial \over \partial x}\ ({n\pi \over L})\ \beta \cos({n\pi \over L}x)\\&={+\hbar ^{2} \over 2m}\ ({n\pi \over L})^{2}\ \beta \sin({n\pi \over L}x)\\E_{n}&={\hbar ^{2} \over 2m}\ ({n\pi \over L})^{2}\end{aligned}}}

Thus we have the equation for the quantized energies, where ${\displaystyle n}$  is limited to counting numbers (${\displaystyle n=1,2,3,\dots }$ ): ${\displaystyle n=0}$  gives a vanishing wave function, and ${\displaystyle -n}$  gives the same state up to a sign. We still need to solve for the amplitude. Given:

${\displaystyle \Psi _{n}(x,t)=A\exp[{-i \over \hbar }\ E_{n}t]\ \sin({n\pi \over L}x)}$

Pick a constant to fix normalization. In this case we choose ${\displaystyle A}$ .

${\displaystyle \int _{-\infty }^{\infty }\Psi ^{*}\ \Psi \ dx=\int _{0}^{L}\Psi ^{*}\ \Psi \ dx=1\longrightarrow |A|^{2}\ \int _{0}^{L}\sin ^{2}({n\pi \over L}x)\ dx=1}$

Substitute and solve...

{\displaystyle {\begin{aligned}1&=|A|^{2}\ \int _{0}^{L}\sin ^{2}({n\pi \over L}x)\ dx;\qquad {\text{let}}\ q={n\pi \over L}\\&=|A|^{2}({1 \over 2i})^{2}\int _{0}^{L}(e^{iqx}-e^{-iqx})^{2}\ dx\\&=|A|^{2}({-1 \over 4})\int _{0}^{L}e^{2iqx}+e^{-2iqx}-2\ dx\\&=|A|^{2}({-1 \over 4})[{1 \over 2iq}(e^{2iqL}-1)-{1 \over 2iq}(e^{-2iqL}-1)-2L]\\&=|A|^{2}({-1 \over 4})[{1 \over 2iq}(e^{2iqL}-e^{-2iqL})-2L]\\&=|A|^{2}({-1 \over 4})[{1 \over q}\sin(2qL)-2L]\\&=|A|^{2}({-1 \over 4})[{L \over n\pi }\sin(2n\pi )-2L]\\&=|A|^{2}({-1 \over 4})(-2L)\\&=|A|^{2}{L \over 2}\\&\qquad \qquad \qquad \qquad \therefore \ A={\sqrt {2 \over L}}\end{aligned}}}

At the end of the day, we have:

${\displaystyle \Psi _{n}(x,t)={\sqrt {2 \over L}}\ \exp[{-i \over \hbar }\ E_{n}t]\ \sin({n\pi \over L}x)}$
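As a quick numerical sanity check on this result, here is a Python sketch (my own illustration; the 1 nm box and electron mass are illustrative choices, not from the text) that evaluates the quantized energies and confirms that ${\textstyle {\sqrt {2/L}}}$  normalizes the spatial part:

```python
import numpy as np

hbar = 1.054571817e-34    # J*s
m_e  = 9.1093837015e-31   # kg (electron)
eV   = 1.602176634e-19    # J per eV
L    = 1e-9               # a 1 nm box (illustrative)

def E_n(n):
    """Quantized well energies E_n = (hbar^2 / 2m) (n*pi/L)^2."""
    return (hbar**2/(2*m_e)) * (n*np.pi/L)**2

E1_eV = E_n(1)/eV    # roughly a few tenths of an eV for a 1 nm box

# Check the normalization A = sqrt(2/L) by midpoint-rule integration.
N = 4000
dx = L/N
x = (np.arange(N) + 0.5)*dx
phi1 = np.sqrt(2/L)*np.sin(np.pi*x/L)
norm = np.sum(phi1**2)*dx    # should come out to 1
```

The ${\displaystyle n^{2}}$  scaling is visible immediately: doubling ${\displaystyle n}$  quadruples the energy.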

### Complex Numbers

As a point of honesty, while this solution is true, there are other solutions. Not only can you put in different values for ${\displaystyle n}$ , but we can also change the phase of our solution. In quantum mechanics, you will often hear that you're solving something to "within a factor of the phase," and when we say that, we're talking about the phase in complex number space. ${\displaystyle \Psi }$  is a complex number, but we don't pay attention to its overall phase. In other words, we can put an arbitrary factor ${\displaystyle e^{i\theta }}$  in front of ${\displaystyle \Psi }$  without consequence.


Why? Because we can only measure the magnitude of ${\displaystyle \Psi }$  as ${\displaystyle |\Psi |^{2}}$ . However, in certain situations where we are comparing two ${\displaystyle \Psi }$ , we can measure the difference in their phase. In this course, and most of the time, we just ignore the arbitrary phase factor, ${\displaystyle e^{i\theta }}$ , and say that we know ${\displaystyle \Psi }$  to within an arbitrary phase factor.

So now we have a solution, ${\displaystyle \Psi (x,t)}$ , but the Schrodinger Equation is a linear PDE. What does this mean? If ${\displaystyle \phi _{1}}$  and ${\displaystyle \phi _{2}}$  are both solutions to a linear PDE, then so is any linear combination ${\displaystyle \phi _{3}=a\phi _{1}+b\phi _{2}}$ . Since in our case we have an infinite number of solutions, one for each ${\displaystyle n=1,2,3,\dots }$ , we really need to say that the general solution is:

${\displaystyle \Psi (x,t)=\sum _{n=1}^{\infty }a_{n}\ \Psi _{n}(x,t)}$ , where ${\displaystyle \Psi _{n}(x,t)}$  is our solution and ${\displaystyle a_{n}}$  are coefficients.

In addition, the solutions are orthogonal to one another, which is yet another property of this type of linear PDE. This means that:

${\displaystyle \int \phi _{i}^{*}\ \phi _{j}\ dx=\delta _{ij}={\begin{cases}1\quad {\text{if }}i=j\\0\quad {\text{if }}i\neq j\end{cases}}}$ , where ${\displaystyle \delta _{ij}}$  is the Kronecker delta.

The orthogonality of the eigenfunctions is physically important, and mathematically useful, as will be seen.
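This orthonormality is easy to check numerically. A minimal Python sketch (my own illustration; ${\displaystyle L=1}$  is an arbitrary choice) builds the Gram matrix of the first few eigenfunctions and compares it to the identity:

```python
import numpy as np

L = 1.0
N = 4000
dx = L / N
x = (np.arange(N) + 0.5) * dx          # midpoint grid over the box

def phi(n):
    """Normalized box eigenfunction sqrt(2/L) sin(n pi x / L)."""
    return np.sqrt(2/L) * np.sin(n*np.pi*x/L)

# Gram matrix G_ij = integral phi_i phi_j dx should be delta_ij (identity).
G = np.array([[np.sum(phi(i)*phi(j))*dx for j in range(1, 5)]
              for i in range(1, 5)])
```

Every off-diagonal entry integrates to zero, and every diagonal entry to one, which is exactly the Kronecker delta statement above.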

### Finding the Coefficients

Returning to the problem at hand, how do we determine the coefficients ${\displaystyle a_{n}}$ ? By treating this as an initial value problem. Say that at time ${\displaystyle t=0}$  we make some measurement that gives us ${\displaystyle \Psi (x,0)}$ ; we then project ${\displaystyle \Psi }$  onto the individual eigenfunctions. So...

{\displaystyle {\begin{aligned}\Psi (x,0)&=\sum _{n=1}^{\infty }a_{n}\Psi _{n}(x,0)\\&=\sum _{n=1}^{\infty }a_{n}\exp {[\ 0\ ]}\ \sin({n\pi \over L}x)\\&=\sum _{n=1}^{\infty }a_{n}\ \sin({n\pi \over L}x)\\&=\sum _{n=1}^{\infty }a_{n}\phi _{n}\end{aligned}}}

Where ${\displaystyle \phi _{n}}$  is the eigenfunction of energy. Now we take:

{\displaystyle {\begin{aligned}&\int _{0}^{L}\phi _{n}^{*}\ \Psi (x,0)\ dx\\&\qquad =\int _{0}^{L}\phi _{n}^{*}\ [a_{1}\phi _{1}+a_{2}\phi _{2}+\dots +a_{n}\phi _{n}+\dots ]\ dx\\&\qquad =\underbrace {\int _{0}^{L}a_{1}\phi _{n}^{*}\phi _{1}} _{=0}+\underbrace {\int _{0}^{L}a_{2}\phi _{n}^{*}\phi _{2}} _{=0}+\underbrace {\dots } _{=0}+\underbrace {\int _{0}^{L}a_{n}\phi _{n}^{*}\phi _{n}} _{\neq 0}+\underbrace {\dots } _{=0}\\&\qquad =\int _{0}^{L}a_{n}\phi _{n}^{*}\phi _{n}=a_{n}\end{aligned}}}

So for each ${\displaystyle n}$ , one can find ${\displaystyle a_{n}}$  by projecting ${\displaystyle \Psi (x,0)}$  onto ${\displaystyle \phi _{n}}$  and integrating, using the orthogonality of the ${\displaystyle \phi _{n}}$ .
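This projection trick can be demonstrated numerically. The sketch below (Python; the particular superposition is my own choice, not from the text) starts from a known mixture of ${\displaystyle \phi _{1}}$  and ${\displaystyle \phi _{3}}$  and recovers the coefficients by integration:

```python
import numpy as np

L, N = 1.0, 4000
dx = L / N
x = (np.arange(N) + 0.5) * dx

def phi(n):
    """Normalized box eigenfunction sqrt(2/L) sin(n pi x / L)."""
    return np.sqrt(2/L) * np.sin(n*np.pi*x/L)

# A known superposition: Psi(x,0) = (phi_1 + phi_3)/sqrt(2).
Psi0 = (phi(1) + phi(3)) / np.sqrt(2)

# a_n = integral phi_n* Psi(x,0) dx  (projection via orthogonality).
a = np.array([np.sum(phi(n)*Psi0)*dx for n in range(1, 6)])
```

Only ${\displaystyle a_{1}}$  and ${\displaystyle a_{3}}$  come back nonzero (each ${\textstyle 1/{\sqrt {2}}}$ ); every other projection integrates to zero, exactly as the orthogonality argument predicts.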

What if I measure the energy? The wave function collapses to an eigenfunction of energy.

What does this mean? We can only measure quantized values. (${\displaystyle E_{1},E_{2},E_{3},\dots ,some\ E_{N}}$ )

If I measure ${\displaystyle E_{5}}$ , then

{\displaystyle {\begin{aligned}a_{n}&={\begin{cases}1\quad {\text{if }}n=5\\0\quad {\text{else}}\end{cases}}\\\Psi &={\sqrt {2 \over L}}\ \exp[{-i \over \hbar }\ E_{5}t]\ \sin({5\pi \over L}x)\\\end{aligned}}}

${\displaystyle P(x)=\Psi ^{*}\ \Psi }$  (The Probability Distribution of Position)

<FIGURE> "Title" (Description)

Where is the particle? Somewhere given by the ${\displaystyle P(x)}$  equation. Remember ${\displaystyle {\hat {H}}}$ , and ${\displaystyle {\hat {x}}}$ , do not commute.

If I measured ${\displaystyle x}$  instead of ${\displaystyle E}$ , I would find a position drawn from the distribution ${\displaystyle P(x)}$ . What is the value of energy after measuring ${\displaystyle x}$ ? We don't know! A measurement of ${\displaystyle x}$  causes us to lose our knowledge of ${\displaystyle E}$ . When ${\displaystyle \Psi }$  is written as a summation of multiple eigenfunctions, we say that ${\displaystyle \Psi }$  is a "superposition" of states. We don't know which state it is in, but we know it has a probability of being in one of the states in the expansion.

Imagine we know that the system is in a state:

${\displaystyle \Psi =a_{1}\phi _{1}+a_{3}\phi _{3}}$ , where ${\displaystyle \phi _{n}}$  are eigenfunctions of energy.

What is the expectation of energy? Remember that ${\displaystyle \langle {\hat {c}}\rangle =\int \Psi ^{*}{\hat {c}}\Psi \ dx}$ .

{\displaystyle {\begin{aligned}\langle E\rangle &=\int (a_{1}^{*}\phi _{1}^{*}+a_{3}^{*}\phi _{3}^{*})\ {\hat {H}}\ (a_{1}\phi _{1}+a_{3}\phi _{3})\\&=\int a_{1}^{*}\phi _{1}^{*}\ H\ a_{1}\phi _{1}+\int a_{1}^{*}\phi _{1}^{*}\ H\ a_{3}\phi _{3}+\int a_{3}^{*}\phi _{3}^{*}\ H\ a_{1}\phi _{1}+\int a_{3}^{*}\phi _{3}^{*}\ H\ a_{3}\phi _{3}\end{aligned}}}

Simplifying each term:
{\displaystyle {\begin{aligned}\int a_{j}^{*}\phi _{j}^{*}Ha_{k}\phi _{k}&=a_{j}^{*}a_{k}\int \phi _{j}^{*}\underbrace {H\phi _{k}} _{=E_{k}\phi _{k}}\\&=a_{j}^{*}a_{k}\int \phi _{j}^{*}E_{k}\phi _{k}\\&=a_{j}^{*}a_{k}E_{k}\int \phi _{j}^{*}\phi _{k}\\&=a_{j}^{*}a_{k}E_{k}\ \delta _{jk}\\\therefore \ \langle E\rangle &=|a_{1}|^{2}E_{1}+|a_{3}|^{2}E_{3}\end{aligned}}}

But remember, we also talk about expectation values:

${\displaystyle \langle c\rangle ={\bar {c}}=\sum _{i=1}^{N}c_{i}P(c_{i})}$

So...

{\displaystyle {\begin{aligned}\langle E\rangle &=|a_{1}|^{2}E_{1}+|a_{3}|^{2}E_{3}\\&=P(E_{1})E_{1}+P(E_{3})E_{3}\end{aligned}}}
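We can verify this identity numerically by applying ${\displaystyle {\hat {H}}}$  with finite differences and integrating. The following Python sketch is my own illustration, in natural units ${\displaystyle \hbar =m=1}$  with ${\displaystyle L=1}$  (both assumptions, not from the text):

```python
import numpy as np

L, N = 1.0, 4000
x = np.linspace(0, L, N+1)
dx = x[1] - x[0]

def phi(n):
    return np.sqrt(2/L) * np.sin(n*np.pi*x/L)

E = lambda n: 0.5*(n*np.pi/L)**2     # box energies with hbar = m = 1

a1 = a3 = 1/np.sqrt(2)
Psi = a1*phi(1) + a3*phi(3)

# <E> = integral Psi* (H Psi) dx, with H Psi = -(1/2) Psi'' by
# second-order central differences on the interior points.
HPsi = -0.5*(Psi[:-2] - 2*Psi[1:-1] + Psi[2:]) / dx**2
E_numeric = np.sum(Psi[1:-1]*HPsi) * dx

E_formula = abs(a1)**2*E(1) + abs(a3)**2*E(3)
```

The directly integrated expectation value matches the weighted sum ${\displaystyle |a_{1}|^{2}E_{1}+|a_{3}|^{2}E_{3}}$  to within the discretization error.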

This means that if we know ${\displaystyle \Psi }$ , we can determine the probability of measuring any ${\displaystyle E_{n}}$  by projecting ${\displaystyle \Psi }$  onto the corresponding eigenfunction ${\displaystyle \phi _{n}}$ . When we have uncertainty, for example if we don't know whether the system is in energy state one or energy state three, we have a superposition, which is to say a sum of eigenfunctions.

An interesting experiment is to put this problem into Excel, Python, or any other computational program, and make the given well smaller and smaller. As the well gets smaller, the energies diverge and the expansion requires an enormous number of terms. Conversely, as the well gets wider, you will see convergence to a value with a relatively small number of terms. You lose information about energy as you increase confinement.
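One version of this experiment can be sketched in Python (my own illustration, with ${\displaystyle \hbar =m=1}$  and a Gaussian test state whose width scales with the well; both are assumptions, not the text's prescription):

```python
import numpy as np

def box_energy(L, n_max=200, N=4000):
    """<E> of a Gaussian packet (width L/10, centered in the well)
    expanded in the box eigenstates; hbar = m = 1."""
    dx = L/N
    x = (np.arange(N) + 0.5) * dx
    g = np.exp(-(x - L/2)**2 / (2*(L/10)**2))
    g /= np.sqrt(np.sum(g**2)*dx)                 # normalize the state
    E_tot = 0.0
    for n in range(1, n_max+1):
        phi_n = np.sqrt(2/L)*np.sin(n*np.pi*x/L)
        a_n = np.sum(phi_n*g)*dx                  # projection coefficient
        E_tot += abs(a_n)**2 * 0.5*(n*np.pi/L)**2
    return E_tot

# Halving the box quadruples the energy scale: <E> ~ 1/L^2.
ratio = box_energy(0.5) / box_energy(1.0)
```

Shrinking the well drags the whole spectrum upward as ${\displaystyle 1/L^{2}}$ , which is the divergence described above.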


## A Note on Hilbert Space

The way I'm talking about ${\displaystyle \Psi }$  and ${\displaystyle \phi _{j}}$  sounds very much like some vector-type language. In truth, ${\displaystyle \Psi }$  lives in Hilbert space. This is an infinite dimensional function space where each direction is some function ${\displaystyle \phi _{n}}$ , and we can talk about representing ${\displaystyle \Psi }$  as a linear sum of ${\displaystyle \phi _{n}}$  with the coefficient for each ${\displaystyle \phi _{n}}$  being the projection of ${\displaystyle \Psi }$  on ${\displaystyle \phi _{n}}$ .

<FIGURE> "Title" (Description)

Then, in Hilbert space, ${\displaystyle \int \phi _{n}^{*}\Psi }$  must be the inner product (the analogue of the dot product) that gives the projection. Measuring moves ${\displaystyle \Psi }$  to lie directly on some ${\displaystyle \phi _{n}}$ . There are other, incompatible functions, ${\displaystyle \{X_{n}\}}$ , in Hilbert space such that both ${\displaystyle \{\phi _{n}\}}$  and ${\displaystyle \{X_{n}\}}$  are complete orthogonal sets, and I can express ${\displaystyle \Psi }$  in terms of either.

${\displaystyle b_{1}X_{1}+b_{2}X_{2}=\Psi =a_{1}\phi _{1}+a_{2}\phi _{2}}$

Measuring in the ${\displaystyle \{\phi \}}$  basis means losing information about ${\displaystyle X}$ , projecting ${\displaystyle \Psi }$  onto one of the ${\displaystyle \phi _{n}}$  directly, and measuring in the ${\displaystyle \{X\}}$  basis loses information about ${\displaystyle \phi }$ .

# Quantum Mechanics for Engineers/Momentum Velocity and Position



Next, we are going to talk about momentum and position. What makes this discussion particularly useful is that it provides the basis for later parts of this course where we start talking about the velocity of electrons moving in a material, which is relevant to the conductivity of a material. First we must define velocity in quantum mechanics in terms of position and momentum.

## A Closer Look at the Free Particle

Looking back at our free particle from <CHAPTER>, we solved the free particle already, and our resulting Hamiltonian was time-independent, ${\displaystyle {\hat {H}}=H(x)}$ . (Note that this is for a 1D particle. The solutions generalize to 2D and 3D, but for the purpose of this exercise we will constrain ourselves to 1D.)

Additionally, our wave function is separable, ${\textstyle \Psi (x,t)=X(x)T(t)}$ , where ${\textstyle T(t)}$  gives the time evolution of the system. You can prove to yourselves that substituting this in yields a Schrodinger equation that separates into a time-dependent and a position-dependent part. Furthermore, the time-dependent part looks like ${\textstyle T(t)=\exp[-i\ {E \over \hbar }\ t]}$ . Here ${\textstyle -i\ {E \over \hbar }\ t}$  must be dimensionless, which means that ${\textstyle {E \over \hbar }}$  is a frequency (${\textstyle \omega }$ ) with units of ${\textstyle 1 \over t}$ , and ${\textstyle E=\hbar \omega }$ .

Similarly, in blackbody radiation we used ${\textstyle E=nh\nu }$ , where ${\textstyle h=2\pi \hbar }$  and ${\textstyle \hbar }$  is the reduced Planck constant. Using the relationship between frequency and angular frequency, ${\textstyle \nu ={\omega \over 2\pi }}$ , this becomes ${\textstyle E=n\hbar \omega }$ , or ${\textstyle E=\hbar \omega }$  per quantum.

Going back to the position dependent part of our original equation, we know that this is just another planewave, as proved in <CHAPTER>. Once again our planewave function was: ${\textstyle X(x)=Ae^{ikx}+Be^{-ikx}}$ , and our solution was: ${\textstyle E={\hbar ^{2}k^{2} \over 2m}}$ . In this case, because it is a free particle, ${\textstyle k}$  is a continuous variable; we haven't quantized this at all.


We know also that momentum and energy commute: ${\textstyle [{{\hat {p}},\ {\hat {H}}}]=0}$

In fact, we solved momentum in 1D, which gave us the solution ${\displaystyle X(x)=Ae^{ikx}}$ , where ${\displaystyle k}$  is still a continuous value; it can range anywhere from ${\displaystyle -\infty }$  to ${\displaystyle +\infty }$ . Just remember that ${\displaystyle k}$ , in the case of a planewave, is the wave vector, and it tells you the direction and the wavelength of the wave.

Finally, we also found that ${\textstyle p=\hbar k}$ , for the free particle, which means that if we measure a particular value of momentum, for instance, we will get a particular value for ${\displaystyle k}$  (${\displaystyle k'}$ ). Once we measure this particular value, the wave function collapses and we can write the solution as:

{\displaystyle {\begin{aligned}\Psi (x,t)&=Ae^{ikx}e^{-i{E \over \hbar }t}\\&=Ae^{i(kx-\omega t)}\end{aligned}}}

Having the commutation of energy and momentum equal zero means that we can simultaneously measure these two properties.
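The commutator statement can be illustrated with small matrices. This is only a toy finite-dimensional stand-in (my own sketch, not the actual operators): any function of ${\displaystyle {\hat {p}}}$ , such as ${\displaystyle {\hat {H}}={\hat {p}}^{2}/2m}$ , commutes with ${\displaystyle {\hat {p}}}$ , while a generic Hermitian "position-like" matrix does not:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
m = 1.0

# Represent p as a diagonal matrix in its own eigenbasis (toy model),
# so H = p^2/2m is diagonal in the same basis.
p = np.diag(rng.normal(size=N))
H = p @ p / (2*m)

comm_pH = p @ H - H @ p          # exactly zero: [p, H] = 0

# A generic Hermitian matrix (an "x-like" stand-in) does not commute with H.
x = rng.normal(size=(N, N))
x = (x + x.T) / 2
comm_xH = x @ H - H @ x          # nonzero: [x, H] != 0
```

Shared eigenbases are the point: ${\displaystyle {\hat {p}}}$  and ${\displaystyle {\hat {H}}}$  are diagonal together, so measuring one does not disturb the other, unlike the position stand-in.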


### Velocity

<FIGURE> "Classic Particle Movement" (Description)

So let's say we've got a particular value we'll call ${\displaystyle k'}$ , and this free particle is going to have some sinusoid, ${\displaystyle \sin(x)}$ . <FIGURE> If we wait some small amount of time and look at it again, the wave will have propagated. This is, after all, what planewaves do. Now let's say that after that "certain amount of time" the planewave has propagated toward ${\displaystyle +x}$  by some ${\displaystyle \Delta x}$ , shown in <FIGURE>, which means that this new sinusoid is now ${\displaystyle \sin(x-\epsilon )}$ , where ${\displaystyle \epsilon >0}$ . Essentially, if there is some profile ${\displaystyle f(x,t)}$  propagating toward ${\displaystyle +x}$ , it can be rewritten as ${\displaystyle f(x-\nu _{p}t)}$ .


<FIGURE> "Sinewave Translation" (Propagates towards +x)

Now, if this wave is propagating, then we can talk about the velocity which also propagates. From wave mechanics, we have that the velocity is equal to the angular frequency divided by the wave vector (${\textstyle \omega \over k}$ ). Multiplying both the top and the bottom by ${\displaystyle \hbar }$ , and substituting variables from our eigenfunction solution gives us:

${\displaystyle \nu _{p}={\omega \over k}={\hbar \omega \over \hbar k}={E \over \hbar k}={{\hbar ^{2}k^{2} \over 2m} \over \hbar k}={\hbar k \over 2m}={p \over 2m}={v_{c}\ m \over 2m}={v_{c} \over 2}}$

Equivalently, the planewave phase can be grouped to show the phase velocity directly:

{\displaystyle {\begin{aligned}e^{i(kx-\omega t)}&=e^{ik(x-{\omega \over k}t)}\\&=e^{ik(x-\nu _{p}t)}\end{aligned}}}

## Particles vs. Planewaves

In this instance we solved for a delocalized particle, and found the phase velocity. Notice how this equation describes the relationship between the classical velocity (${\textstyle v_{c}}$ ) of a particle and the velocity of the propagation of a particular planewave, referred to as the phase velocity (${\textstyle \nu _{p}}$ ). Most importantly, these two velocities are NOT the same. As it turns out, a real particle will be localized.

The particles we are interested in, which have a classical velocity, simply don't travel as single planewaves; they travel as wave packets. <FIGURE> Inside these wave packets there are many waves with different ${\textstyle k}$  values, and the packet as a whole moves with a group velocity ${\displaystyle \nu _{g}}$  equivalent to our classical velocity.

Particles are not single planewaves. They are a superposition of planewaves, and tend to group themselves together in these wave packets which have a group velocity of the entire group of waves in superposition. Additionally, they are within some sort of envelope function which also travels at the group velocity, equivalent to the classical velocity.

### Superposition


Imagine a superposition of plane waves. In our first example the states in superposition were discrete. They were a summation of states where ${\displaystyle \Psi =a_{1}\phi _{1}+a_{2}\phi _{2}+...}$ . This form has our wave function as a linear superposition of wave states (${\displaystyle \phi }$ ). Each state is a particular solution to our Schrödinger equation where each coefficient provides us with a projection of the wavefunction into these particular basis states. (Thinking back to our eigenfunctions as a basis in Hilbert space.)

This equation is equal to the infinite sum ${\textstyle \sum _{j=1}^{\infty }a_{j}\phi _{j}}$ . Note that most of the time, when dealing with practical matters, the energy is finite, so only a finite number of terms contribute appreciably.


Alternatively, instead of thinking about energy, which here is discretized, we can talk about a continuous distribution. For example, instead of energies we can express this in terms of momentum. As we saw already, a particle in free space can take any value of momentum, giving us the continuous distribution:

${\displaystyle \Psi (x,t)={1 \over {\sqrt {2\pi \hbar }}}\ \int _{-\infty }^{+\infty }\underbrace {\exp \left[\ i\left[p_{x}x-E(p_{x})t\right]\ {1 \over \hbar }\right]} _{Basis\ Function}\ \phi (p_{x})\ \operatorname {d} \!p_{x}}$

Here we simply replaced the sum from the infinite energy equation with an integral and integrated over all the allowed values of momentum. The resulting equation is that of the wave packet. Here ${\textstyle \phi (p_{x})}$  is our coefficient. This is a direct analogy to the summation from earlier as now instead of summing over all these coefficients, we are integrating over them instead, but what are these coefficients?

<FIGURE>

This coefficient is simply a function of ${\displaystyle p}$ , representing the probabilities of finding the particle in one of these particular states. It can be thought of as ${\displaystyle P(p_{x})=|\phi (p_{x})|^{2}}$ . Physically this describes the distribution shown in <FIGURE>, where the probability of measuring the particle at a particular momentum is related to the value of our coefficient in front of our basis states.

Now let's simplify our equation and say that ${\displaystyle \beta (p_{x})=p_{x}x-E(p_{x})t}$ , providing us with:

${\displaystyle \psi (x,t)={1 \over {\sqrt {2\pi \hbar }}}\ \int _{-\infty }^{+\infty }e^{i\beta (p_{x})/\hbar }\ \phi (p_{x})\ \operatorname {d} \!p_{x}}$

Looking at this solution, we know that the whole wave function, and the coefficients ${\textstyle \phi (p_{x})}$ , must be well-behaved. The coefficients are well-behaved, as they are just some statistical distribution, going to zero on either end and integrating to one. On the other hand, ${\textstyle e^{i\beta (p_{x})/\hbar }}$  oscillates rapidly, so the only way the wave function can be well-behaved on the whole is if the phase is stationary where ${\textstyle \phi (p_{x})}$  is peaked:

${\displaystyle \left.{\partial \beta (p_{x}) \over \partial p_{x}}\right\vert _{\underbrace {p_{x}=p_{0}} _{centered\ on\ \phi _{max}}}=0}$

Solving this relationship looks like:
{\displaystyle {\begin{aligned}{\partial \beta \over \partial p_{x}}&={\partial \over \partial p_{x}}(p_{x}x-E(p_{x})\ t)\\&=x-t\ {\partial E(p_{x}) \over \partial p_{x}}=0\\x&=t\ {\partial \over \partial p_{x}}E(p_{x})\end{aligned}}}

Looking simply at the units in the final equation, we have ${\textstyle length=time*\left({length \over time}\right)}$ , meaning that ${\textstyle {\partial \over \partial p_{x}}E(p_{x})}$  has the units ${\textstyle \left({length \over time}\right)}$  or ${\displaystyle velocity}$  (${\textstyle \nu _{g}}$ ). Going back to our definitions of energy and momentum we can further transform ${\textstyle \nu _{g}}$ :

{\displaystyle {\begin{aligned}\nu _{g}&={\partial E(p_{x}) \over \partial p_{x}}\quad {\begin{cases}E=\hbar \omega \\p_{x}=\hbar k\end{cases}}\\&={\hbar \partial \omega (k) \over \hbar \partial k}\\&={\partial \omega (k) \over \partial k}\end{aligned}}}

Here, ${\displaystyle \omega (k)}$  and ${\displaystyle E(k)}$  are called "dispersion relations". They are essentially the energy (or frequency) of a particle vs. the wave number, ${\displaystyle k}$ . They are important, and researchers spend huge amounts of time, money, and resources to determine them for various material systems. For example, the band structure of a material is a dispersion relation. <CHAPTER REF> The group velocity, ${\displaystyle \nu _{g}}$ , is the slope of the dispersion. When we talk about electrons moving in a crystal we talk about the group velocity, the magnitude of which generally depends on ${\displaystyle k}$ .
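For the free-particle dispersion ${\textstyle E(p)=p^{2}/2m}$ , the slope can be checked numerically. A small Python sketch (my own illustration, in units with ${\displaystyle \hbar =m=1}$ ):

```python
import numpy as np

m = 1.0
E = lambda p: p**2 / (2*m)     # free-particle dispersion, hbar = m = 1

p0 = 3.0
dp = 1e-5
# Group velocity: slope of the dispersion, dE/dp (central difference).
v_group = (E(p0 + dp) - E(p0 - dp)) / (2*dp)
# Phase velocity: omega/k = E/(hbar k) = E/p in these units.
v_phase = E(p0) / p0
```

The numbers confirm the earlier result: ${\displaystyle \nu _{g}=p/m}$  is the classical velocity, while ${\displaystyle \nu _{p}}$  is half of it.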

<FIGURE> "Title" (Description)

## The Momentum Space Representation

Looking closer at these wave packets, let's begin by rewriting our planewave equation, absorbing the time dependence ${\textstyle \exp[-i\,E(p_{x})t/\hbar ]}$  into a general coefficient function ${\textstyle \phi (p_{x},t)}$ . This results in:

${\displaystyle \psi (x,t)={1 \over {\sqrt {2\pi \hbar }}}\ \int _{-\infty }^{+\infty }\operatorname {d} p_{x}\ \phi (p_{x},t)\ \exp[i\ p_{x}\ x{1 \over \hbar }]}$

Now let's apply a Fourier Transform to our equation: {\displaystyle {\begin{aligned}{\mathfrak {F}}[\psi (x,t)]&=\phi (p_{x},t)\\{\mathfrak {F}}^{-1}[\phi (p_{x},t)]&=\psi (x,t)\end{aligned}}}

Putting this transformation into the above planewave equation results in:

${\displaystyle \phi (p_{x},t)={1 \over {\sqrt {2\pi \hbar }}}\int _{-\infty }^{+\infty }\operatorname {d} x\ \psi (x,t)\ \exp[-i\ p_{x}\ x{1 \over \hbar }]}$

If the set ${\displaystyle \left\{\psi _{n}(x,t)\right\}}$  are orthogonal to one another and normalized, then so are their transforms ${\displaystyle \left\{\phi _{n}(p_{x},t)\right\}={\mathfrak {F}}\left\{\psi _{n}(x,t)\right\}}$ . We refer to this as the momentum-space representation of the wavefunction, and Fourier space has certain properties which make this representation extremely useful. Truthfully, there is only one wavefunction (it is a state function!), but here it is projected onto the momentum representation, whereas ${\displaystyle \psi (x,t)}$  is projected onto the position representation.
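The transform pair can be sketched with a discrete Fourier transform. The following Python illustration (my own, in units with ${\displaystyle \hbar =1}$ ; the grid and Gaussian width are arbitrary choices) shows that the transform preserves the norm, which is the unitarity behind the claim that orthonormal sets stay orthonormal:

```python
import numpy as np

N = 1024
x = np.linspace(-20, 20, N, endpoint=False)
dx = x[1] - x[0]

sigma = 1.0
psi_x = (np.pi*sigma**2)**-0.25 * np.exp(-x**2/(2*sigma**2))  # Gaussian in x

# Centered DFT playing the role of the projection integral (hbar = 1).
psi_p = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(psi_x))) * dx / np.sqrt(2*np.pi)
p = np.fft.fftshift(np.fft.fftfreq(N, d=dx)) * 2*np.pi

norm_x = np.sum(np.abs(psi_x)**2) * dx            # 1 (normalized)
norm_p = np.sum(np.abs(psi_p)**2) * (p[1] - p[0]) # also 1 (Parseval)
```

A Gaussian in position transforms into a Gaussian in momentum, with the momentum width inversely proportional to the position width; both representations carry the full state.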

Let's consider a physically meaningful distribution. In this case, a Gaussian momentum distribution:

${\displaystyle \phi (p_{x})=c\exp \left[-{(p_{x}-p_{o})^{2} \over 2(\Delta p_{x})^{2}}\right]}$

<FIGURE> "Gaussian Momentum" (Description)


To define ${\displaystyle c}$ , let's use a well-known relationship: ${\displaystyle \int _{-\infty }^{+\infty }\phi ^{*}(p_{x})\ \phi (p_{x})\ \operatorname {d} \!p_{x}=1}$

${\displaystyle \int _{-\infty }^{+\infty }|c|^{2}\exp \left[{-1 \over (\Delta p_{x})^{2}}(p_{x}-p_{o})^{2}\right]\operatorname {d} \!p_{x}=1}$

Evaluating this with the "well-known" Gaussian integral:

${\displaystyle \int _{-\infty }^{+\infty }e^{-\alpha u^{2}}e^{-\beta u}\operatorname {d} \!u=\left({\pi \over \alpha }\right)^{(1/2)}\exp \left[{\beta ^{2} \over 4\alpha }\right]}$

{\displaystyle {\begin{aligned}|c|^{2}(\Delta {p_{x}}^{2}\pi )^{1 \over 2}&=1\\c&=(\Delta {p_{x}}^{2}\pi )^{-1 \over 4}\end{aligned}}}

thus...

${\displaystyle \phi (p_{x})=(\Delta {p_{x}}^{2}\pi )^{-1 \over 4}\exp \left[{-(p_{x}-p_{o})^{2} \over 2(\Delta p_{x})^{2}}\right]}$
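This normalization constant is easy to sanity-check numerically. The sketch below (Python with NumPy; the values of ${\displaystyle \Delta p_{x}}$  and ${\displaystyle p_{o}}$  are arbitrary test numbers) integrates ${\displaystyle |\phi (p_{x})|^{2}}$  on a grid:

```python
import numpy as np

# Numerically confirm that c = (dp^2 * pi)^(-1/4) normalizes the Gaussian
# phi(p) = c * exp(-(p - p0)^2 / (2 dp^2)).  dp and p0 are arbitrary test values.
dp = 0.7          # width parameter, Delta p_x
p0 = 2.0          # center momentum, p_o
c = (dp**2 * np.pi) ** -0.25

p = np.linspace(p0 - 10 * dp, p0 + 10 * dp, 100001)
phi = c * np.exp(-((p - p0) ** 2) / (2 * dp**2))

# Riemann-sum approximation of the normalization integral
norm = np.sum(np.abs(phi) ** 2) * (p[1] - p[0])
print(norm)  # ~1.0
```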

Substituting this into ${\displaystyle \psi (x)}$  (taking ${\displaystyle t=0}$ ) and solving...

{\displaystyle {\begin{aligned}\psi (x)&={1 \over {\sqrt {2\pi \hbar }}}\int _{-\infty }^{+\infty }\operatorname {d} \!p_{x}(\Delta {p_{x}}^{2}\pi )^{-{1 \over 4}}\exp \left[{-(p_{x}-p_{o})^{2} \over 2(\Delta p_{x})^{2}}\right]\exp \left[i\ p_{x}\ x{1 \over \hbar }\right]\\&={1 \over {\sqrt {2\pi \hbar \Delta p_{x}{\sqrt {\pi }}}}}\int _{-\infty }^{+\infty }\operatorname {d} \!p_{x}\exp \left[{-(p_{x}-p_{o})^{2} \over 2(\Delta p_{x})^{2}}\right]\exp \left[i\ p_{x}\ x{1 \over \hbar }\right]\underbrace {\exp \left[i\ x{1 \over \hbar }(p_{o}-p_{o})\right]} _{=1}\\&={\exp \left[{ixp_{o} \over \hbar }\right] \over {\sqrt {2\pi \hbar \Delta p_{x}{\sqrt {\pi }}}}}\int _{-\infty }^{+\infty }\operatorname {d} \!p_{x}\exp \left[{-1 \over 2\Delta {p_{x}}^{2}}{(p_{x}-p_{o})^{2}}\right]\exp \left[{ix \over \hbar }{(p_{x}-p_{o})}\right]\\&={\exp \left[{ixp_{o} \over \hbar }\right] \over {\sqrt {2\pi \hbar \Delta p_{x}{\sqrt {\pi }}}}}\left(2\pi \Delta {p_{x}}^{2}\right)^{1 \over 2}\exp \left[{-x^{2} \over 2\left({\hbar \over \Delta p_{x}}\right)^{2}}\right]\\&=\left({\Delta p_{x} \over \hbar {\sqrt {\pi }}}\right)^{1 \over 2}\exp \left[{ixp_{o} \over \hbar }\right]\exp \left[{-x^{2} \over 2\left({\hbar \over \Delta p_{x}}\right)^{2}}\right]\end{aligned}}}

${\displaystyle \psi (x)}$  is itself a Gaussian, centered at ${\displaystyle x=0}$ .

Width of a Gaussian: ${\displaystyle {\hbar \over \Delta p_{x}}}$

${\displaystyle \Delta x\Delta p={\hbar \over \Delta p}\Delta p=\hbar }$

As ${\displaystyle \Delta p}$  becomes large, ${\displaystyle \Delta x}$  becomes small and vice versa.
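This inverse relationship can be checked numerically. The sketch below (Python with NumPy; ${\displaystyle \hbar =1}$  and the value of ${\displaystyle \Delta p_{x}}$  are illustrative assumptions) builds ${\displaystyle \psi (x)}$  from ${\displaystyle \phi (p_{x})}$  by direct numerical integration and confirms that the position-space width comes out as ${\displaystyle \hbar /\Delta p_{x}}$ :

```python
import numpy as np

# Build psi(x) from the Gaussian phi(p) by numerical integration and check
# that the position-space width equals hbar / Delta p.  hbar = 1 and
# dp = 0.8 are assumptions made for this sketch.
hbar = 1.0
dp = 0.8                                  # Delta p_x, arbitrary test value
c = (dp**2 * np.pi) ** -0.25
p = np.linspace(-10 * dp, 10 * dp, 4001)  # momentum grid (p_o = 0 here)
phi = c * np.exp(-p**2 / (2 * dp**2))
dP = p[1] - p[0]

x = np.linspace(0, 5 * hbar / dp, 2001)
# psi(x) = (2 pi hbar)^(-1/2) * integral phi(p) exp(i p x / hbar) dp
psi = np.array([np.sum(phi * np.exp(1j * p * xi / hbar)) * dP for xi in x])
psi /= np.sqrt(2 * np.pi * hbar)

# The packet amplitude falls to e^(-1/2) of its peak at x = hbar / dp.
amp = np.abs(psi) / np.abs(psi[0])
width = x[np.argmin(np.abs(amp - np.exp(-0.5)))]
print(width, hbar / dp)  # both ~1.25
```

Rerunning with a larger ${\displaystyle \Delta p_{x}}$  shrinks the measured width proportionally, exactly as the relation ${\displaystyle \Delta x\Delta p=\hbar }$  requires.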

In the limit ${\displaystyle \Delta p\longrightarrow 0}$ , the momentum distribution collapses onto the single momentum ${\displaystyle p_{o}}$  and the wavefunction becomes a pure plane wave:

{\displaystyle {\begin{aligned}\phi (p_{x})&\rightarrow \delta (p_{x}-p_{o})\\\psi (x)&\rightarrow \exp \left[{ixp_{o} \over \hbar }\right]\end{aligned}}}

Now watch the evolution of ${\displaystyle \psi (x,t)}$  over time...

Substitute ${\displaystyle \phi (p_{x})=(\Delta {p_{x}}^{2}\pi )^{-{1 \over 4}}\exp \left[{-(p_{x}-p_{o})^{2} \over 2(\Delta p_{x})^{2}}\right]}$  into ${\displaystyle \Psi (x,t)=(2\pi \hbar )^{-{1 \over 2}}\int _{-\infty }^{+\infty }\exp \left[{i(p_{x}x-E(p_{x})t) \over \hbar }\right]\phi (p_{x})\ \operatorname {d} \!p_{x}}$

If you remember, ${\displaystyle E(p_{x})={{p_{x}}^{2} \over 2m}}$  is the free-particle dispersion relation from <LINK>.

Solving the integral gives us:

${\displaystyle \Psi (x,t)={\pi }^{-{1 \over 4}}\left[{{\Delta p_{x} \over \hbar } \over 1+{i\Delta {p_{x}}^{2}t \over m\hbar }}\right]^{1 \over 2}\exp \left[{{{ip_{o}x \over \hbar }-\left({\Delta p_{x} \over \hbar }\right)^{2}{x^{2} \over 2}}-{i\ {p_{o}}^{2}t \over 2m\hbar } \over 1+{i\Delta {p_{x}}^{2}t \over m\hbar }}\right]}$

Now plot the probability density ${\displaystyle P(x,t)=\Psi ^{*}\Psi }$ .

Just because you're a theorist doesn't mean you shouldn't learn by experimentation. Let's put some numbers in and see how this wave function behaves.

<FIGURE> "Example Graph 1" (t=0)

<FIGURE> "Example Graph 2" (t=5000)

<FIGURE> "Example Graph 3" (t=10000)
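In the same spirit, here is a short numerical experiment with the closed-form ${\displaystyle \Psi (x,t)}$  (Python with NumPy; ${\displaystyle \hbar =m=1}$  and the sample values of ${\displaystyle \Delta p_{x}}$  and ${\displaystyle p_{o}}$  are assumptions for illustration). It confirms that the norm stays 1 at every time while the packet drifts at ${\displaystyle p_{o}/m}$  and spreads:

```python
import numpy as np

# Evaluate the closed-form free-particle wave packet Psi(x, t) and check two
# physical facts: the norm stays 1, and the packet center moves at p0 / m.
# hbar = m = 1 and the dp, p0 values are illustrative assumptions.
hbar = m = 1.0
dp = 0.1        # Delta p_x
p0 = 1.0        # p_o

def psi(x, t):
    tau = 1 + 1j * dp**2 * t / (m * hbar)
    pref = np.pi ** -0.25 * np.sqrt((dp / hbar) / tau)
    num = (1j * p0 * x / hbar
           - (dp / hbar) ** 2 * x**2 / 2
           - 1j * p0**2 * t / (2 * m * hbar))
    return pref * np.exp(num / tau)

x = np.linspace(-200, 400, 20001)
dx = x[1] - x[0]
for t in [0.0, 100.0, 200.0]:
    P = np.abs(psi(x, t)) ** 2
    norm = np.sum(P) * dx
    center = np.sum(x * P) * dx
    print(t, norm, center)  # norm ~1 at every t; center ~ p0 * t / m
```

Plotting ${\displaystyle P(x,t)}$  from this function at a few times reproduces the qualitative behavior in the figures above: a Gaussian that translates and broadens but always encloses unit probability.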

# Quantum Mechanics for Engineers/Degeneracy

Degeneracy is often discussed in electronics and quantum mechanics in reference to electrons that share the same energy level. Since energy is an eigenvalue, you can end up with two electrons with different eigenfunctions that still share the same eigenvalue. We call a quantum state "degenerate" if two or more eigenfunctions have the same eigenvalue, as in the case of these electrons. But how does this happen? There are three separate ways: symmetry, exchange, and accidental degeneracy.

## Degeneracy by Symmetry

This is the form of degeneracy associated with the hybridization of orbitals. An atom's behavior in the x, y, and z directions is the same assuming a spherical potential, which generally applies to atoms in isolation.

<FIGURE> "Particle in a 2D Box" (Description)

Imagine a particle in another box, once again with infinite potential at the walls, but this time in two dimensions, giving us the Hamiltonian:

${\displaystyle {\hat {H}}={{{\hat {p}}_{x}}^{2} \over 2m}+{{{\hat {p}}_{y}}^{2} \over 2m}+V(x)+V(y)}$

This problem easily breaks into component parts: ${\displaystyle {\hat {H}}={\hat {H}}_{x}+{\hat {H}}_{y}}$

Substituting in the Schrödinger equation and working through the math finds that:

{\displaystyle {\begin{aligned}\phi _{n_{x}n_{y}}&=\beta \sin \left({n_{x}\pi \over L_{x}}x\right)\sin \left({n_{y}\pi \over L_{y}}y\right)\\E_{n_{x}n_{y}}&={\hbar ^{2} \over 2m}\left({n_{x}\pi \over L_{x}}\right)^{2}+{\hbar ^{2} \over 2m}\left({n_{y}\pi \over L_{y}}\right)^{2}\end{aligned}}}

where ${\displaystyle \beta =2/{\sqrt {L_{x}L_{y}}}}$  is the normalization constant.

Looking at these time-independent eigenfunctions, when ${\displaystyle L_{x}=L_{y}}$  we find that ${\displaystyle E_{\alpha \beta }=E_{\beta \alpha }}$  even though ${\displaystyle \phi _{\alpha \beta }\neq \phi _{\beta \alpha }}$ : the two states are degenerate by symmetry.
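A small numerical sketch makes this concrete (Python with NumPy; ${\displaystyle \hbar =m=L_{x}=L_{y}=1}$  are illustrative assumptions): swapping ${\displaystyle n_{x}}$  and ${\displaystyle n_{y}}$  leaves the energy unchanged but gives a different eigenfunction.

```python
import numpy as np
from itertools import product

# Energies E_{nx,ny} of a particle in a 2D square box.  With Lx = Ly the
# swapped pairs (nx, ny) and (ny, nx) share one energy even though the
# eigenfunctions differ.  hbar = m = Lx = Ly = 1 are sketch assumptions.
hbar = m = 1.0
Lx = Ly = 1.0

def E(nx, ny):
    return (hbar**2 / (2 * m)) * ((nx * np.pi / Lx) ** 2
                                  + (ny * np.pi / Ly) ** 2)

for nx, ny in product(range(1, 4), repeat=2):
    print(nx, ny, E(nx, ny))

print(E(1, 2) == E(2, 1))  # True: degenerate by symmetry

# ...but the eigenfunctions differ, e.g. at the sample point (0.3, 0.7):
x, y = 0.3, 0.7
phi_12 = np.sin(1 * np.pi * x / Lx) * np.sin(2 * np.pi * y / Ly)
phi_21 = np.sin(2 * np.pi * x / Lx) * np.sin(1 * np.pi * y / Ly)
print(phi_12, phi_21)  # different values at the same point
```

Making the box rectangular (${\displaystyle L_{x}\neq L_{y}}$ ) in this sketch splits the pair, which is exactly the symmetry-breaking picture described above.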