Electronic Properties of Materials/Printable version


Electronic Properties of Materials

The current, editable version of this book is available in Wikibooks, the open-content textbooks collection, at
https://en.wikibooks.org/wiki/Electronic_Properties_of_Materials

Permission is granted to copy, distribute, and/or modify this document under the terms of the Creative Commons Attribution-ShareAlike 3.0 License.

Quantum Mechanics for Engineers

This is a section in the book Electronic Properties of Materials

Within this section there are 11 chapters planned.


Quantum Mechanics for Engineers/Quantum Mechanics Overview

This is the first chapter of the first section of the textbook Electronic Properties of Materials.


Quantum Mechanics Overview

The origins of quantum mechanics lie in the quantum revolution, roughly 1890 to 1930. During this time several new discoveries facilitated the transition.

  1. Light has particle nature in addition to wave nature.
  2. Light (photons) and matter are found to interact, leading to the theory of atomic structure.
  3. Matter has wave nature in addition to particle nature.

These discoveries led to the birth of modern quantum mechanics.

Light Has A Particle Nature

 

As early as 1877, Boltzmann proposed that energy was not continuous, but rather discretized. In 1905, Rayleigh and Jeans applied this to black bodies, perfect radiators where radiation is emitted from vibrating atoms that act as little dipoles, to create the Rayleigh-Jeans theory. This theory takes the classical equipartition result

$\langle E \rangle = k_B T$

as the expectation value of the energy, giving:

$\rho(\lambda, T) = \frac{8\pi k_B T}{\lambda^4}$

This expectation value is the energy times the Boltzmann distribution, normalized by the partition function, which produces $\langle E \rangle = k_B T$. While this generally follows experimental results at long wavelengths, at shorter wavelengths the prediction diverges from experimental results.

 

In 1901, Planck took the existing theory and modified it to replace the continuous energy with discrete energies, $E_n = nh\nu$, giving:

$\rho(\lambda, T) = \frac{8\pi h c}{\lambda^5}\,\frac{1}{e^{hc/\lambda k_B T} - 1}$

This 'fix' for the UV catastrophe was completed by incorporating Wien's law (1893).
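To see the divergence numerically, here is a minimal sketch comparing the two spectral energy densities above. It assumes SI units and the formulas as written; the temperature is an arbitrary example value.

```python
import numpy as np

h = 6.626e-34   # Planck constant (J s)
c = 2.998e8     # speed of light (m/s)
kB = 1.381e-23  # Boltzmann constant (J/K)

def rayleigh_jeans(lam, T):
    """Classical spectral energy density; diverges as lam -> 0 (UV catastrophe)."""
    return 8 * np.pi * kB * T / lam**4

def planck(lam, T):
    """Planck's law with discrete energy quanta; stays finite at short wavelengths."""
    return (8 * np.pi * h * c / lam**5) / np.expm1(h * c / (lam * kB * T))

T = 5000.0  # temperature in K (arbitrary example)
for lam in (100e-9, 500e-9, 2000e-9):
    print(f"{lam*1e9:6.0f} nm   RJ: {rayleigh_jeans(lam, T):.3e}   Planck: {planck(lam, T):.3e}")
```

At long wavelengths the two agree; at 100 nm the Rayleigh-Jeans value keeps growing while the Planck value is exponentially suppressed.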

Discrete Energies

When we think about energies they may look continuous, but they are actually discretized. Furthermore, in 1905 Einstein argued that not only is energy quantized, but so is electromagnetic radiation itself. This built on the photoelectric effect, which Hertz had first observed experimentally in 1887.


Quantum Mechanics for Engineers/The Stern-Gerlach Experiment


We discussed in the first chapter a list of historical experiments that highlight the origins of quantum mechanics. In this lecture, I want to present one final experiment. The experiment itself just showed the origin of spin and orbital quantum numbers, but we're going to take it a step further and discuss a thought experiment that will demonstrate the fundamental workings of quantum mechanics.

The Experiment

 
Stern–Gerlach experiment: Silver atoms travelling through an inhomogeneous magnetic field, and being deflected up or down depending on their spin; (1) furnace, (2) beam of silver atoms, (3) inhomogeneous magnetic field, (4) classically expected result, (5) observed result

As it happens, for reasons we will discuss during the second half of this class, the Silver (Ag) atom has a very simple magnetic nature. Each atom can be treated as a little dipole with magnetic moment $\vec{\mu}$.

<EXPLANATION OF EXPERIMENT>

The force on a magnetic moment is:

$\vec{F} = \nabla\left(\vec{\mu}\cdot\vec{B}\right)$

In the z-direction:

$F_z = \mu_z\,\frac{\partial B_z}{\partial z}$

The deflection of the Ag atom is proportional to the z-component of $\vec{\mu}$.

Expected Results

Based on this, we expect to see atoms with all different orientations of $\vec{\mu}$, and hence random magnetic moments, spread out in a single distribution.


<FIGURE> "Classic Theoretical Results of the Stern-Gerlach Experiment" (Atoms are of all different orientations of u, and there is a single distribution across the screen, centered on the main axis.

But this is not what we see...

Actual Results

Rather, we see two separate distributions on either side of the main beam.


<FIGURE> "Actual Results of the Stern-Gerlach Experiment" (Two separate distributions, not on the main axis, are seen instead of the single, classically predicted, distribution.)

As it happens, in quantum mechanics, magnetization is tied to angular momentum. (Think of electrons zipping about in a circular orbit.) In Silver we are only looking at the spin of a single electron. The directional component of the spin $\vec{S}$, say $S_z$, can only take two values, "up" ($+\frac{\hbar}{2}$), or "down" ($-\frac{\hbar}{2}$). What we just did was measure $S_z$ of the Silver atoms (electrons?), and separated them into two beams, one with spin-up and the other with spin-down. Is this shocking? Yes. We just took a randomly oriented vector, $\vec{S}$, and measured its projection, $S_z$, and found it could only take two values.

Explaining Quantum Mechanics

Let's keep going. Now that (in principle) we can make a simple measurement we can make a series of thought experiments. Let's pass a beam through a filter, and see what happens...

<FIGURE> "Explaining Quantum Mechanics: The   Box" (Some beam,  , enters the box,  , and is separated based on up and down spin.)


Let's take some beam, $\Psi$, and have it enter the $S_z$ box, which separates the beam based on up and down spin. If we take the output from the $S_z$ measurement, discard the up elements, and remeasure the down beam, the resulting beam will still be "down". This is good; no surprise here, as this follows classical logic.


<WHAT IS THIS>

Hypothesis - Polarized sunglasses: all y-components are discarded.

  1. Not 50/50 in polarized light.
  2. Try rotating the box...


Now let's try rotating the $S_z$ box into an $S_y$ box. The $\Psi$ beam is still being split into up and down spin by the first $S_z$ box, but now that down group is being filtered based on an $S_y$ box, which is an $S_z$ box that has been rotated 90°.


<FIGURE> "Explaining Quantum Mechanics: The   Component" (Note that the   box is the same as the   box, just rotated 90° to measure the y-component of the vector  .)


It looks like both boxes have a base probability of 50/50 for up or down spin. Does this make sense? Maybe?

<FIGURE> "Title" (Description)

Now we filter $S_y$ to be either up or down, each with 50/50 probability?

Something seems wrong with this picture...


Let's run one more experiment. This is the same as <FIGURE>, but now the up group coming out of the $S_y$ box is again filtered through a second $S_z$ box. Looking at the problem, this should result in 100% down spin, as the elements were tested to be 100% down spin before they entered the $S_y$ box, but this is not what we see. Instead the elements coming out of the second $S_z$ box are 50/50 up and down spin.


<FIGURE> "Explaining Quantum Mechanics: The second   box." (Now the   up beam is filtered through a second   box.)

This is definitely weird. $\vec{S}$ is just some vector. If you measure the sign of $S_z$, you can measure it again and again and again; it doesn't change. BUT after you go and measure $S_y$, if you look back at $S_z$ it has once again randomized. Classically, this is like taking a bunch of marbles and splitting them into red and blue marbles. You then split the blue marbles into large and small, but when you look back at the pile, half of the blue marbles have changed into red!

Why does this happen?

The components of $\vec{S}$ are "incompatible", as we can only know one component at a time. Before we measure $S_z$ we can say that the atom's wave function, $\psi$, is in a "superposition" of being up and down. By using Born's probabilistic interpretation of the wave function, we know that the odds of measuring up or down are 50/50. We measure $S_z$ and $\psi$ "collapses" to the $S_z$ up state or the $S_z$ down state, depending on the measurement. Subsequent measurements have a 100% chance to repeat the initial measurement, according to the probabilistic interpretation of $\psi$. In $S_y$, the system is still in a superposition of being up and down. If we measure $S_y$ and find "up", then we cause the wave function to collapse to the $S_y$ up state. In this state we have no information about $S_z$. We lost the information we had measured earlier when $\psi$ collapsed into the $S_y$ eigenstate.

In the next section we will go over the formalism of quantum mechanics, and will readdress the Stern-Gerlach experiment mathematically.


Quantum Mechanics for Engineers/The Fundamental Postulates


There are four basic postulates that underlie quantum mechanics.

Postulate I: Observables and Operators are Related

Postulate II: Measurement collapses the Wave Function

Postulate III: There exists a state function that allows expectation values to be calculated.

Postulate IV: The wave function evolves according to the time-dependent Schrödinger equation.

Postulate I

Each self-consistent, well-defined observable has a linear operator that satisfies the eigenvalue equation, $\hat{A}\psi = a\psi$, where $A$ is the observable, $\hat{A}$ is the operator, $a$ is the measured eigenvalue, and $\psi$ is the eigenfunction of $\hat{A}$. In a given system you have a different eigenfunction for every eigenvalue, so oftentimes you will see $\hat{A}\psi_n = a_n\psi_n$, which specifies that $\psi_n$ is the eigenfunction of $a_n$. Thus, this postulate links an observable to a mathematical operator.

What are Mathematical Operators?

An "operator" is thing or mathematical expression which operates on a function and makes it different. For example:

 

In this function,   is the mathematical operator defined as the derivative with respect to  . This means that if we later have   operating on some function of  , we can then apply additional operators to the function which change the result, but still follow the same rule. For example, let's apply an operator,  , which rotates the function 90° about the z-axis.

 


Furthermore, applying a "divide by three" operator, or an identity operator (which leaves the function unchanged), yields similar results.

 
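As a concrete sketch of operators acting on functions, the snippet below uses sympy to apply the derivative operator to a sample function. The function $e^{i2x}$ is an arbitrary illustrative choice; it happens to be an eigenfunction of the derivative operator.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(sp.I * 2 * x)   # a sample function, e^{i 2 x}

# The derivative operator D = d/dx acting on f:
Df = sp.diff(f, x)
print(Df)  # 2*I*exp(2*I*x): f is an eigenfunction of D with eigenvalue 2i
```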

Physically Significant Operator Observables:

Physically meaningful observables all have operators, which come about in a variety of ways, but the way that you can start to think about them is as operators from the classical world which are further quantized with the addition of $\hbar$ and $i$. If you look at these cases long enough, you'll eventually start seeing that there's a pattern to it.

Let's take the example of linear momentum, $p$. I will give it the operator, $\hat{p}$, a vector which is equal to $-i\hbar\nabla$. While you can look at the whole in three dimensions, the gradient allows us to look at it equally in parts, so let's simplify this problem and look only at the x-component of this vector.

$\hat{p}_x = -i\hbar\frac{\partial}{\partial x}$

Applying this operator to some function, $\phi(x)$, gives:

$-i\hbar\frac{\partial\phi(x)}{\partial x} = p_x\,\phi(x)$

Solving this differential equation, the planewave equations provide one solution:

$\phi(x) = A\,e^{ikx}$

The solution is just a planewave with wave number $k$.

$\hat{p}_x\,\phi(x) = -i\hbar(ik)\,A\,e^{ikx} = \hbar k\,\phi(x)$

$p_x = \hbar k$

This isn't very exciting on its own, as $k$ (and thus $p_x$) can take any value, so it doesn't look "quantized". Physically, this represents a free particle (i.e. a particle alone in an infinite vacuum), and the quantization comes from the boundary conditions we apply.

Application of Boundary Conditions

<FIGURE> "Born-von Karman Boundary Conditions" (These boundary conditions could be pictured as a box or as a ring.)

Let's apply periodic boundary conditions (PBC) called "Born-von Karman Boundary Conditions". <FIGURE> With this we are essentially putting the particle in a one-dimensional box where it is free to move within the box, but once it leaves the box it loops back around in space and reenters the box from the other side. The box has some size, $L$, which gives us the quantization. This concept can also be pictured as a ring with radius $r = L/2\pi$.

These boundary conditions restrict the solutions, because the solutions must match at the boundaries. Thus:

$\phi(x) = \phi(x + L)$

This isn't obviously solvable, so we substitute sine and cosine as described in the planewave equations, which gives:

$e^{ikx} = e^{ik(x+L)} \quad\Rightarrow\quad \cos(kL) + i\sin(kL) = 1$

Since the right-hand side of the equation must be equal to a known value, we can conclude that $kL = 2\pi n$. Following this logic:

$k_n = \frac{2\pi n}{L}, \qquad p_n = \hbar k_n = \frac{2\pi\hbar n}{L}, \qquad n = 0, \pm 1, \pm 2, \dots$

Now we have a quantized solution. Going back to the idea of the ring boundary condition, we come upon the de Broglie hypothesis from Chapter 1 ($\lambda = h/p$), showing us that when Planck initially quantized particles he was thinking of a periodic situation. Additionally, we can develop the Bohr model of the atom by combining these two concepts:

$n\lambda = 2\pi r \quad\Rightarrow\quad L = mvr = n\hbar$

<FIGURE> "Bohr Atom Model from de Broglie Equations" (Description)

Effect of Boundary Conditions

This is what makes nanoscience interesting! When the dimensions of a structure are small enough they affect the quantization. If we can control the dimensionality at a nanoscale, we can control the quantum nature of electrons.

Another well-defined observable is energy. In classical mechanics there are several ways to formulate the equations of motion (Newtonian, Lagrangian, Hamiltonian). I'm not going to talk about these, but you should know that in quantum mechanics the formalism matches the classical Hamiltonian formalism. For systems where the kinetic energy depends on momentum and the potential energy depends on position, the Hamiltonian operator takes the simple form:

$\hat{H} = \hat{T} + \hat{V}$, where $\hat{T}$ is the kinetic energy and $\hat{V}$ is the potential energy.

For now we are going to talk about particles in a vacuum, which sets the potential energy ($\hat{V}$) to zero, leaving simply the kinetic energy ($\hat{T}$). We can take the equation for kinetic energy, $T = \frac{p^2}{2m}$, from classical mechanics and substitute in our momentum operator, $\hat{p} = -i\hbar\nabla$, to get a simplified equation for $\hat{T}$ in terms of the Laplacian operator, $\nabla^2$.

$\hat{T} = \frac{\hat{p}^2}{2m} = -\frac{\hbar^2}{2m}\nabla^2$

Expanding $\nabla^2$:

$\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}$

Once again, we can simplify this to a one-dimensional problem by taking only the x-term of the expanded form of $\nabla^2$:

$\hat{T}_x = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}$

The operator takes second derivatives, so as it operates it returns the curvature of the function; the kinetic energy operator is proportional to a function's curvature. Thus, solutions with tighter curves will have higher energies than slowly varying functions.

Ideally, we want to solve the time-independent Schrödinger equation:

$\hat{H}\phi = E\phi$

$-\frac{\hbar^2}{2m}\frac{d^2\phi(x)}{dx^2} = E\,\phi(x)$

What solves this? Planewaves! As it turns out, planewaves are a common solution in quantum mechanics!

$\phi(x) = A\,e^{ikx} + B\,e^{-ikx}$

$-\frac{\hbar^2}{2m}\frac{d^2\phi}{dx^2} = \frac{\hbar^2k^2}{2m}\,\phi$

Here we can see that our eigenvalues are $E = \frac{\hbar^2k^2}{2m}$, thus breaking up the equation gives us:

$\hat{H}\,e^{\pm ikx} = \frac{\hbar^2k^2}{2m}\,e^{\pm ikx}$

These values are consistent with our earlier finding that:

$p = \hbar k, \qquad E = \frac{p^2}{2m}$
Note: Our earlier momentum equation had one component ($e^{ikx}$) due to the single derivative present in the parent equation, while our current solution has two components ($e^{ikx}$ and $e^{-ikx}$) due to the double derivative present in the parent equation.

Here, the momentum is telling us what the value is, and the $A$ and $B$ coefficients are telling us if it travels to the left or to the right. As you may have guessed, the energy and the momentum are commensurate with each other; we can know them both at the same time. In quantum mechanics, if operators "commute" then they share eigenfunctions. We should notice that if $A$ or $B$ is zero, then the eigenfunctions of energy are also eigenfunctions of momentum. Generally, $\hat{A}$ and $\hat{B}$ commute if:

$[\hat{A}, \hat{B}] = \hat{A}\hat{B} - \hat{B}\hat{A} = 0$
For example, let's look at momentum and energy, where $f(x)$ is some test function:

$[\hat{p}, \hat{H}]f = -i\hbar\frac{\partial}{\partial x}\left(-\frac{\hbar^2}{2m}\frac{\partial^2 f}{\partial x^2}\right) - \left(-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}\right)\left(-i\hbar\frac{\partial f}{\partial x}\right) = 0$

Since $[\hat{p}, \hat{H}] = 0$, $\hat{p}$ and $\hat{H}$ commute.

Let's try a different operator. This time, let's compare position and momentum.

$[\hat{x}, \hat{p}]f = x\left(-i\hbar\frac{\partial f}{\partial x}\right) + i\hbar\frac{\partial}{\partial x}\left(xf\right) = i\hbar f$

Here, $[\hat{x}, \hat{p}] = i\hbar \neq 0$, meaning that $\hat{x}$ and $\hat{p}$ do not commute. This means that momentum and position do not commute and thus do not share eigenfunctions. As it so happens, this is all tied to observation and the fundamental uncertainty in our knowledge.
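Both commutators above can be checked symbolically. Here is a minimal sympy sketch, using the free-particle Hamiltonian and an arbitrary test function:

```python
import sympy as sp

x, hbar, m = sp.symbols('x hbar m', positive=True)
f = sp.Function('f')(x)

p = lambda g: -sp.I * hbar * sp.diff(g, x)           # momentum operator
H = lambda g: -hbar**2 / (2 * m) * sp.diff(g, x, 2)  # free-particle Hamiltonian

print(sp.simplify(p(H(f)) - H(p(f))))   # 0: p and H commute
print(sp.simplify(x * p(f) - p(x * f))) # I*hbar*f(x): x and p do not commute
```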

Recall the Heisenberg Uncertainty Principle:

$\Delta x\,\Delta p \geq \frac{\hbar}{2}$

When operators commute, we say that the observables associated with the operators are "compatible", meaning that they can be measured simultaneously to arbitrary precision. (This is related to the Schwarz inequality...) Without proof, I will tell you that:

If $[\hat{A}, \hat{B}] \neq 0$, then $\Delta A\,\Delta B \geq \frac{1}{2}\left|\langle[\hat{A}, \hat{B}]\rangle\right|$, where $\langle\ \rangle$ refers to "expectation value".

So, for $[\hat{x}, \hat{p}] = i\hbar$, $\Delta x\,\Delta p \geq \frac{\hbar}{2}$ (working with $\Delta A = \sqrt{\langle A^2\rangle - \langle A\rangle^2}$) *see B&J p.215

This is a BIG DEAL! It means that it is impossible to simultaneously know certain things. (Remember our thought experiment from Chapter 2?) What's more, this is purely a quantum effect. Consider again, momentum. What if we precisely measure the momentum to be $p_0$? Then the particle's wave function is $\phi(x) = A\,e^{ip_0x/\hbar}$.

Remember in the probabilistic interpretation:

$P(x) = \phi^*(x)\,\phi(x) = |A|^2$

<FIGURE> "Incompatible Observables" (Constant value $|A|^2$)

But $A$ is just the normalization constant, so the probability distribution appears as a constant (FIGURE). If we know $p$ precisely, then we know nothing about $x$! There is equal probability of finding the particle anywhere in the range $(-\infty, \infty)$.

Thus, $x$ and $p$ are incompatible observables.

Postulate II

A measurement of observable $A$ that yields value $a_n$ leaves the system in state $\psi_n$.

$\Psi \xrightarrow{\ \text{measure } A = a_n\ } \psi_n$

We say that the measurement "collapses the wave function" to $\psi_n$, where $\psi_n$ is the eigenfunction of the particular value measured. Immediate subsequent measurements will thus yield the value $a_n$, as the wave function will remain collapsed about that value until another property is measured, as seen in Chapter 2.

What is important here? Before the initial measurement, the expectation of the measurement is given statistically from $\Psi$, a superposition of possible states. The act of measuring leaves $\psi_n$, one particular state, for subsequent measurements. Note that this is very similar to solving partial differential equations. When solving a partial differential equation you get a linear superposition of all possible solutions, which is analogous to what we see here.

Postulate III

There exists a state function, called the "wave function", that represents the state of the system at any given instant, and all the information we could know about the system is contained in this state function, $\Psi$, which is continuous and differentiable.

For any observable, $A$, we can find the expectation value for measuring $A$ from $\Psi$.

$\langle A \rangle = \int \Psi^*\,\hat{A}\,\Psi\,d\tau$

Here $\Psi^*$ is the complex conjugate of $\Psi$, and $d\tau$ is an abbreviation for the volume element $dx\,dy\,dz$.

Review of Statistics (and the meaning of the "expectation value", $\langle A \rangle$)

In statistics, $\langle x \rangle$ is the expectation value of $x$, and when all goes well in sampling theory:

$\langle x \rangle = \sum_i x_i\,P(x_i)$

Within this framework, if you know all the possibilities then you can essentially write the state function for the system. Let's say I have a bag with 5 pennies, 3 dimes, and 2 quarters. The probability of me pulling any given coin type out of the bag is:

$P(\text{penny}) = \frac{5}{10}, \qquad P(\text{dime}) = \frac{3}{10}, \qquad P(\text{quarter}) = \frac{2}{10}$

$\langle \text{value} \rangle = (0.01)\tfrac{5}{10} + (0.10)\tfrac{3}{10} + (0.25)\tfrac{2}{10} = \$0.085$

For a continuous probability distribution:

$\langle x \rangle = \int x\,P(x)\,dx$

State Functions in Quantum Mechanics

Applying this statistical expectation value to our quantum state function gives us:

$\langle A \rangle = \int \psi_n^*\,\hat{A}\,\psi_n\,d\tau = \int \psi_n^*\,a_n\,\psi_n\,d\tau = a_n\int \psi_n^*\,\psi_n\,d\tau = a_n$

where, since $a_n$ is just a number, we can pull it out of the integral and simplify $\int\psi_n^*\psi_n\,d\tau$ to 1 by normalization.

Postulate IV

The state function, $\Psi$, develops according to the equation:

$i\hbar\,\frac{\partial\Psi}{\partial t} = \hat{H}\,\Psi$

This is the time-dependent Schrödinger equation, and it is true for non-relativistic systems. (Note that this equation is a postulate; there is no proof for this.) As it happens, to account for relativity we either fix our solutions by perturbation methods or instead solve using the Dirac equation:

$i\hbar\,\frac{\partial\Psi}{\partial t} = \left(c\,\vec{\alpha}\cdot\hat{\vec{p}} + \beta mc^2\right)\Psi$

These four postulates give us the basis for everything we do in quantum mechanics, and the reason they work out is tied to linear Hermitian operators. The solution to the eigenvalue equation has special properties, wherein the eigenfunctions are orthonormal. For an arbitrary system with bound states:

$\hat{A}\psi_n = a_n\psi_n$; where $n = 1, 2, 3, \dots$, and $a_n$ is the $n^{\text{th}}$ eigenvalue which corresponds to the $n^{\text{th}}$ eigenfunction $\psi_n$.

Orthonormality

An orthonormal set of functions satisfies:

$\int \psi_m^*\,\psi_n\,d\tau = \delta_{mn}$

Here, $\delta_{mn}$ is the Kronecker delta. This property is a consequence of the Sturm-Liouville theory, where the set of eigenfunctions, $\{\psi_n\}$, spans Hilbert space (sometimes only a sub-space), the function-space where $\Psi$ lives. Hilbert space can be thought of as an equivalent space to Euclidean space, where vectors live, which will have some set of vectors $\{\hat{e}_i\}$. If that set of vectors is orthonormal and spans the space, then they can act as a basis for all other vectors in that space, and we can write any arbitrary vector $\vec{v}$ as a sum of these vectors:

$\vec{v} = \sum_i c_i\,\hat{e}_i$

Those who have taken linear algebra might also remember a bunch of rules about eigenvalues, eigenvectors, etc... Well, they all apply to what you're going to see here, and in fact, there is a matrix notation that allows one to directly map all of quantum mechanics to sets of matrices and vectors.

Hilbert Space

With this orthogonality property, we can express $\Psi$ using $\{\psi_n\}$ as a basis.

$\Psi = \sum_n c_n\,\psi_n$

Just as with Euclidean space, the $c_n$ are the projection of $\Psi$ onto $\psi_n$. The value of this is that we can solve for $c_n$ by taking the equivalent of an inner product (dot product):

$c_n = \int \psi_n^*\,\Psi\,d\tau$

The fact that we can have a basis which is orthonormal, spans space, allows us to write the wave function, gives us a way to describe it in Hilbert space, and allows us to describe the coefficients as the projection of the wave function onto that particular eigenfunction, is very important!

Think back to expectation values, where $\langle A \rangle = \int \Psi^*\hat{A}\Psi\,d\tau$. Solving for each term:

$\langle A \rangle = \int\left(\sum_m c_m\psi_m\right)^{\!*}\hat{A}\left(\sum_n c_n\psi_n\right)d\tau = \sum_{m,n} c_m^*\,c_n\,a_n\int\psi_m^*\psi_n\,d\tau$

Thus, $\langle A \rangle = \sum_n |c_n|^2\,a_n$.

Therefore the probability of measuring a particular value $a_n$ is $|c_n|^2$, given by the coefficient which is the projection of the wave function onto that particular eigenfunction. If you think about this physically in vector space, it kind of makes sense! We're saying that if I have a vector that's mostly in the 1 direction, then it's going to have a behavior that's also "mostly" in the 1 direction. There is still a probability of measuring it in the other directions as well. So, when we talk about superposition, it's as a linear sum of eigenfunctions. Remembering that with each eigenfunction there is a coefficient which is the projection of the wave function onto that eigenfunction, this tells us the probability of measuring any particular value.
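A minimal numerical sketch of this rule, with made-up eigenvalues and coefficients just for illustration:

```python
import numpy as np

# Hypothetical eigenvalues and expansion coefficients (example values only)
a = np.array([1.0, 2.0, 3.0])          # eigenvalues a_n
c = np.array([0.8, 0.4, 0.2 + 0.4j])   # projections c_n = <psi_n|Psi>

c = c / np.linalg.norm(c)              # normalize so probabilities sum to 1
P = np.abs(c)**2                       # probability of measuring each a_n
print(P, P.sum())                      # probabilities; sum is 1.0
print((P * a).sum())                   # expectation value <A> = sum |c_n|^2 a_n
```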

Return to Stern-Gerlach

We have some operator, $\hat{S}_z$, which operates on some function, $\psi$, and returns the value $S_z$. This system has only two solutions (in the case of the silver atom):

$\hat{S}_z\,\psi_{\uparrow} = +\frac{\hbar}{2}\,\psi_{\uparrow}, \qquad \hat{S}_z\,\psi_{\downarrow} = -\frac{\hbar}{2}\,\psi_{\downarrow}$

When we had that initial beam of atoms, passing through vacuum, initially we didn't know anything about the state; it was randomized.

$\Psi = \frac{1}{\sqrt{2}}\left(\psi_{\uparrow} + \psi_{\downarrow}\right), \qquad P(\uparrow) = P(\downarrow) = \frac{1}{2}$

This says that the probability of measuring each outcome is 50/50 odds! Furthermore, the wave function is normalized and the sum of the probabilities is equal to one. If this were not true we would have to go through and scale the vector until it is normalized. Now let's say we measure the case and find an "up" spin, meaning that $\Psi$ has collapsed to $\psi_{\uparrow}$. Now that we have measured the case, the probability of further finding an "up" case is one and the probability of finding a "down" case is zero.

What about $\hat{S}_y$?

$\hat{S}_y\,\chi_{\uparrow} = +\frac{\hbar}{2}\,\chi_{\uparrow}$; $\hat{S}_y\,\chi_{\downarrow} = -\frac{\hbar}{2}\,\chi_{\downarrow}$

This system has two possible results, analogous to the ones shown with $\hat{S}_z$. We can write both systems together as:

$\Psi = \frac{1}{\sqrt{2}}\left(\psi_{\uparrow} + \psi_{\downarrow}\right) = \frac{1}{\sqrt{2}}\left(\chi_{\uparrow} + \chi_{\downarrow}\right)$

The sets $\{\psi\}$ and $\{\chi\}$ are incompatible. When we measure one, the wave function snaps to one of those basis states, then again with the other.

Most importantly, we can collapse $\Psi$ into either the $\psi$ basis or the $\chi$ basis, but not both. These two operators are incommensurate: they don't commute, and if they don't commute they must form different basis sets within Hilbert space. We can write the two expansions side by side, as each set is still equal to the wave function, but information about one set does not tell us anything about the other set.

The collapse of $\Psi$ to $\psi$ or $\chi$ is unique to quantum mechanics and is why we can't simultaneously know these two observables!


Quantum Mechanics for Engineers/Particle in a Box

This is the fourth chapter of the first section of the book Electronic Properties of Materials.

<ROUGH DRAFT>

So far we've gotten a feel for how the quantum world works and we've walked through the mathematical formalism, but for a theory to be any good, it must be possible to calculate meaningful values. The goal of this course is to show how the properties of solids come from quantum mechanics and the properties of atoms. Before we look at the properties of solids, we need to study how electrons and atoms interact in a quantum picture. Over the next several chapters, we will study this, but first we need to consider atoms in isolation.

So we want to solve the time-independent Schrödinger equation, $\hat{H}\psi = E\psi$. As it happens, finding solutions for most problems is non-trivial. In atoms, the electron-nucleus potential goes as $1/r$, but the electron-electron interactions are difficult, as we will see later. As it happens, the way to approach this is through simplifications and approximations. We're going to start with the simplest calculations and build up from there.

Time-Dependent Schrödinger Equation

 
Particle in a 1D Box

Let's look at a particle in a one-dimensional box with infinite boundaries.

$V(x) = \begin{cases} 0 & 0 \leq x \leq L \\ \infty & \text{otherwise} \end{cases}$

The fact that $V$ jumps to infinity at the walls means that we can essentially throw out anything past the defined barriers of our box. Note that here we will solve for $\psi(x)$ only, not $\Psi(x,t)$, which implies a separation of variables. Let's check this idea by guessing the solution:

$\Psi(x, t) = \psi(x)\,f(t)$

Here the solution is a product of two functions, $\psi(x)$ and $f(t)$. To solve, we substitute it into the time-dependent S.E. and rearrange:

$i\hbar\,\frac{1}{f(t)}\frac{df(t)}{dt} = \frac{1}{\psi(x)}\,\hat{H}\,\psi(x)$


Both the pure-$t$ side and the pure-$x$ side must be equal to some shared constant, $E$.

Thus:

$i\hbar\,\frac{df}{dt} = E\,f(t) \qquad\text{and}\qquad \hat{H}\,\psi(x) = E\,\psi(x)$


< ???>

Look! It's the time-independent Schrödinger equation! This is exactly what we want to solve. As the Hamiltonian operator is the operator of energy, we're going to be getting eigenvalues, $E_n$, which are measurable values of energy, and eigenfunctions, $\psi_n$, which are the functions corresponding to those energies. It is common for people to rewrite this as:

$\hat{H}\,\psi_n = E_n\,\psi_n$


Returning to the time-dependent part, and rewriting as:

$\frac{df}{dt} = -\frac{iE}{\hbar}\,f(t)$


<CHECK MATH - VIDEO 13:10>

Taking an exponential as our guess, one solution is:

$f(t) = e^{-iEt/\hbar}$

Always, when $\hat{H}$ does not depend on time, a solution is:

$\Psi(x, t) = \psi(x)\,e^{-iEt/\hbar}$

The Time-Independent Solution, $\psi(x)$

The general method to solve this type of problem is to break the space into parts with boundary conditions; each region having its own solution. Then, since the boundaries are what give us the quantization, we use the region interfaces to solve.

<FIGURE> "Title" (Description)

Equations
   Region I: $V = \infty$, $x < 0$
   Region II: $V = 0$, $0 \leq x \leq L$
   Region III: $V = \infty$, $x > L$

<PICK ONE^^^>


Regions I and III have a fairly simple solution here:

$\psi_{I} = \psi_{III} = 0$

Region II has:

$-\frac{\hbar^2}{2m}\frac{d^2\psi_{II}}{dx^2} = E\,\psi_{II}$

What is a good solution? Let's try planewaves! The general real solution, $\psi = A'\sin(kx) + B'\cos(kx)$, is not very easy to lug around, and wave functions in quantum mechanics are in general complex, so:

$\psi_{II}(x) = A\,e^{ikx} + B\,e^{-ikx}$

Now apply some boundary conditions...

$\psi(0) = 0 \quad\Rightarrow\quad A + B = 0 \quad\Rightarrow\quad \psi_{II}(x) = A\left(e^{ikx} - e^{-ikx}\right) = 2iA\sin(kx)$

$\psi(L) = 0 \quad\Rightarrow\quad \sin(kL) = 0$

$k_n = \frac{n\pi}{L}$

$E_n = \frac{\hbar^2k_n^2}{2m} = \frac{n^2\pi^2\hbar^2}{2mL^2}$

Thus we have the equation for quantized energy, where $n$ is limited to counting numbers ($n = 1, 2, 3, \dots$), but we still need to solve for $A$. Given:

$\int_0^L \psi^*\psi\,dx = 1$

Pick a constant to fix normalization. In this case we choose $C = 2iA$:

$\psi(x) = C\,\sin\!\left(\frac{n\pi x}{L}\right)$

Substitute and solve...

$\int_0^L |C|^2\sin^2\!\left(\frac{n\pi x}{L}\right)dx = |C|^2\,\frac{L}{2} = 1 \quad\Rightarrow\quad C = \sqrt{\frac{2}{L}}$

At the end of the day, we have:

$\psi_n(x) = \sqrt{\frac{2}{L}}\,\sin\!\left(\frac{n\pi x}{L}\right), \qquad E_n = \frac{n^2\pi^2\hbar^2}{2mL^2}$
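A minimal numerical sketch of this result, for an electron in a 1 nm box (the box size is an arbitrary example):

```python
import numpy as np

hbar = 1.0545718e-34  # J s
m = 9.109e-31         # electron mass (kg)
L = 1e-9              # 1 nm box (example)

def E(n):
    """Particle-in-a-box energy E_n = n^2 pi^2 hbar^2 / (2 m L^2), in eV."""
    return (n * np.pi * hbar / L)**2 / (2 * m) / 1.602e-19

def psi(n, x):
    """Normalized eigenfunction sqrt(2/L) sin(n pi x / L)."""
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

for n in (1, 2, 3):
    print(f"E_{n} = {E(n):.3f} eV")   # energies grow as n^2

x = np.linspace(0, L, 1000)
print(np.trapz(psi(1, x)**2, x))      # ~1.0: normalization check
```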

Complex Numbers

As a point of honesty, while this solution is true, there are other solutions. Not only can you put in different values for $n$, but we can also change the phase of our solution. In quantum mechanics, you will often hear that you're solving something to "within a factor of the phase," and when we say that, we're talking about the phase within complex number space. $C$ is a complex number, but we don't pay attention to the phase of the number. In other words, we can add an arbitrary phase factor in front of $\psi$ without consequence.

$\psi' = e^{i\theta}\,\psi$

<Phi* Phi vs Phi^2>

Why? Because we can only measure the magnitude of $\psi$ as $\psi^*\psi = |\psi|^2$. However, in certain situations where we are comparing two $\psi$'s, we can measure the difference in their phases. In this course, and most of the time, we just ignore the arbitrary phase factor, $e^{i\theta}$, and say that we know $\psi$ to within an arbitrary phase factor.

So now we have a solution, $\psi_n(x)$, but the Schrödinger equation is a linear PDE. What does this mean? If $\psi_1$ and $\psi_2$ are both solutions to a linear PDE, then $a\psi_1 + b\psi_2$ is also a solution. Also, in our case we have an infinite number of solutions since $n = 1, 2, 3, \dots$, so really we need to say that the general solution is:

$\Psi = \sum_n c_n\,\psi_n$, where the $\psi_n$ are our solutions and the $c_n$ are coefficients.

In addition, the solutions are orthogonal to one another, which is yet another property of linear PDEs. This means that:

$\int_0^L \psi_m^*\,\psi_n\,dx = \delta_{mn}$, where $\delta_{mn}$ is again the Kronecker delta.

The orthogonality of the eigenfunctions is physically important, and mathematically useful, as will be seen.

Finding the Coefficients

Returning to the problem at hand, how do we determine the coefficients $c_n$? By solving as an initial value problem. Say that at time $t = 0$ we make some measurement that gives us $\Psi(x, 0)$; we then project $\Psi(x, 0)$ onto the individual eigenfunctions. So...

$\Psi(x, 0) = \sum_n c_n\,\psi_n(x)$

Where $\psi_n$ is the eigenfunction of energy. Now we take:

$c_n = \int_0^L \psi_n^*(x)\,\Psi(x, 0)\,dx$

So for each $n$, one can find $c_n$ by integrating $\psi_n^*\,\Psi(x, 0)$, using the orthogonality of the $\psi_n$.
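A minimal numerical sketch of this projection, using an arbitrary (triangle-shaped) initial wave function in natural units:

```python
import numpy as np

L = 1.0
x = np.linspace(0, L, 2000)

def psi(n):
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

# Example initial state: a normalized triangle-shaped wave function
Psi0 = np.minimum(x, L - x)
Psi0 /= np.sqrt(np.trapz(Psi0**2, x))

# Project onto the box eigenfunctions: c_n = <psi_n | Psi0>
c = np.array([np.trapz(psi(n) * Psi0, x) for n in range(1, 20)])
print(np.round(c[:5], 4))   # coefficients fall off quickly (only odd n survive)
print(np.sum(c**2))         # ~1.0: probabilities sum to one
```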


What if I measure the energy? The wave function collapses to an eigenfunction of energy.

What does this mean? We can only measure quantized values ($E_n$).

If I measure $E_n$, then

$\Psi(x, t) = \psi_n(x)\,e^{-iE_nt/\hbar}$

$P(x) = \Psi^*\Psi = |\psi_n(x)|^2$ (The Probability Distribution of Position)

<FIGURE> "Title" (Description)

Where is the particle? Somewhere given by the probability distribution $|\psi_n(x)|^2$. Remember, $\hat{x}$ and $\hat{H}$ do not commute.

If I measured $x$ instead of $E$, I would find a distribution of positions. What is the value of energy after measuring $x$? We don't know! A measurement of $x$ causes us to lose our knowledge of $E$. When $\Psi$ is written as a summation of multiple eigenfunctions, we say that $\Psi$ is a "superposition" of states. We don't know which state it is in, but we know it has a probability of being in one of the states in the expansion.

Imagine we know that the system is in a state:

$\Psi = \frac{1}{\sqrt{2}}\left(\psi_1 + \psi_3\right)$, where $\psi_1$ and $\psi_3$ are eigenfunctions of energy.

What is the expectation of energy? Remember that $\langle E \rangle = \int \Psi^*\hat{H}\Psi\,dx$.

$\langle E \rangle = \frac{1}{2}\int\left(\psi_1 + \psi_3\right)^*\hat{H}\left(\psi_1 + \psi_3\right)dx$

Simplifying each term:

$\langle E \rangle = \frac{1}{2}\left(E_1 + E_3\right)$

But remember, we also talk about expectation values:

$\langle E \rangle = \sum_n |c_n|^2\,E_n$

So...

$P(E_1) = P(E_3) = \frac{1}{2}$

This means that if we know $\Psi$, we can determine the probability of measuring any $E_n$ by projecting $\Psi$ onto the eigenfunctions of $\hat{H}$: $P(E_n) = |c_n|^2$. When we have uncertainty, for example if we don't know whether it's energy state one or energy state three, we have a superposition, which is to say a sum over eigenfunctions.

An interesting experiment is to input this problem into Excel, Python, or any number of computational programs, and make the given well smaller and smaller. As the well gets smaller, the energies will diverge and the sum becomes absolutely huge. Conversely, as the well gets wider, you will see convergence to a value with a relatively small sum. You lose information about energy as you increase confinement.

<gif?^^^>
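A minimal sketch of that experiment, in natural units ($\hbar = m = 1$), showing how shrinking the well spreads the energy spectrum:

```python
import numpy as np

hbar, m = 1.0, 1.0  # natural units

def E(n, L):
    """Particle-in-a-box level n for well width L."""
    return (n * np.pi * hbar / L)**2 / (2 * m)

for L in (10.0, 1.0, 0.1):
    print(f"L={L:5.1f}   E_1={E(1, L):10.3f}   E_10={E(10, L):12.1f}")
# Shrinking the well by 10x raises every level by 100x: tighter confinement
# spreads the spectrum, costing information about the energy.
```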

A Note on Hilbert Space

The way I'm talking about $\Psi$ and $\psi_n$ sounds very much like some vector-type language. In truth, $\Psi$ lives in Hilbert space. This is an infinite-dimensional function space where each direction is some function $\psi_n$, and we can talk about representing $\Psi$ as a linear sum of $\psi_n$, with the coefficient for each $\psi_n$ being the projection of $\Psi$ on $\psi_n$.

<FIGURE> "Title" (Description)

Then in Hilbert space, $\langle\psi_n|\Psi\rangle$ must be the dot-product-like inner product that gives the projection. Measuring must move $\Psi$ to lie directly on a $\psi_n$. There are other, incompatible functions, $\phi_n$, in Hilbert space such that both $\{\psi_n\}$ and $\{\phi_n\}$ are complete orthogonal sets, and I can express $\Psi$ in terms of either.

$\Psi = \sum_n c_n\,\psi_n = \sum_n b_n\,\phi_n$

Measuring the observable associated with $\{\psi_n\}$ means projecting $\Psi$ onto one of the $\psi_n$ directly and losing information about the $\phi_n$; measuring the other observable loses information about the first.


Quantum Mechanics for Engineers/Momentum Velocity and Position



** ROUGH DRAFT**

Next, we are going to talk about momentum, velocity, and position. What makes this discussion particularly useful is that it provides the basis for later parts of this course, where we start talking about the velocity of electrons moving in a material, relevant to the conductivity of a material. First we must define velocity in quantum mechanics in terms of position and momentum.

A Closer Look at the Free Particle

Looking back at our free particle from <CHAPTER>: we solved the free particle already, and our resulting Hamiltonian was $\hat{H} = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}$. (Note that this is for a 1D particle. The solutions are valid for 2D and 3D, but for the purpose of this exercise we will constrain ourselves to 1D.)

Additionally, our wave function, $\Psi(x, t) = \psi(x)\,f(t)$, where $f(t)$ gives the time evolution of the system, is separable. You can prove to yourselves that substituting this in will indeed give a Schrödinger equation that separates into a time-dependent and a position-dependent part. Furthermore, our time-dependent part looks like: $f(t) = e^{-iEt/\hbar}$. Here the exponent must be dimensionless, which means that $E/\hbar$ is an angular frequency with units of $1/s$, and $E = \hbar\omega$.

Similarly, in blackbody radiation we use $E = h\nu$, where $h$ is the Planck constant. By applying the relationship between frequency and angular frequency, $\omega = 2\pi\nu$, we get $E = \hbar\omega$, where $\hbar = h/2\pi$ is the reduced Planck constant.


Going back to the position-dependent part of our original equation, we know that this is just another planewave, as shown in <CHAPTER>. Once again our planewave function was $\psi(x) = A\,e^{ikx}$, and our solution was $E = \frac{\hbar^2k^2}{2m}$. In this case, because it is a free particle, $k$ is a continuous variable; we haven't quantized this at all.


<MATH CHECK>

We know also that momentum and energy commute: $[\hat{p}, \hat{H}] = 0$

In fact, we solved momentum in 1D, which provided us the solution $\hat{p}\,\psi = \hbar k\,\psi$, where $k$ is still a continuous value; it can run anywhere from minus infinity to positive infinity. Just remember that $k$ in the case of a planewave is the wave vector, and it's telling you the direction and the wavelength of the wave.

Finally, we also found that $E = \frac{\hbar^2k^2}{2m}$ for the free particle, which means that if we measure a particular value of momentum, $p_0 = \hbar k_0$, we will get a particular value for $E$ ($E_0 = \hbar^2k_0^2/2m$). Once we measure this particular value, the wave function collapses and we can write the solution as:

$\Psi(x, t) = A\,e^{i(k_0x - \omega_0 t)}$
Having the commutation of energy and momentum equal zero means that we can simultaneously measure these two properties.

<MATH CHECK^^^>

Velocity

<FIGURE> "Classic Particle Movement" (Description)

So let's say we've got a particular value we'll call $k_0$, and this free particle is going to have some sinusoid, $\cos(k_0x)$. <FIGURE> If we wait some small amount of time, $\Delta t$, and look at it again, the wave will have propagated. This is, after all, what planewaves do. Now let's say that after that "certain amount of time" the planewave propagated by some $\Delta x$, shown in <FIGURE>, which means that this new sinusoid is now $\cos(k_0(x - \Delta x))$. Essentially, if the wave moves at some velocity $v$, the sinusoid can be rewritten as $\cos(k_0(x - vt))$.

<MATH CHECK> sin wave propagation (+) or (-)

<FIGURE> "Sinewave Translation" (Propagates towards +x)

Now, if this wave is propagating, then we can talk about the velocity at which it propagates. From wave mechanics, we have that the velocity is equal to the angular frequency divided by the wave vector ($v_p = \omega/k$). Multiplying both the top and the bottom by $\hbar$, and substituting variables from our eigenfunction solution, gives us:

$v_p = \frac{\omega}{k} = \frac{\hbar\omega}{\hbar k} = \frac{E}{p} = \frac{p^2/2m}{p} = \frac{p}{2m} = \frac{v_{classical}}{2}$

<ASIDE> Extra math from notes with no home:

 
<END ASIDE>

Particles vs. Planewaves

In this instance we solved for a delocalized particle, and found the phase velocity. Notice how this equation describes the relationship between the classical velocity ($v$) of a particle and the velocity of propagation of a particular planewave, referred to as the phase velocity ($v_p$). Most importantly, these two velocities are NOT the same. As it turns out, a real particle will be localized.

 
Schrödinger equation wave packet

When talking about the particles that we are interested in, which have a classical velocity, they simply don't travel as single planewaves; they travel as wave packets. <FIGURE> Inside these wave packets there are lots of waves with different $k$ values, and the packet as a whole moves with a group velocity, $v_g$, equivalent to our classical velocity.

Particles are not single planewaves. They are a superposition of planewaves, and tend to group themselves together in these wave packets which have a group velocity of the entire group of waves in superposition. Additionally, they are within some sort of envelope function which also travels at the group velocity, equivalent to the classical velocity.

Superposition

<MATH CHECK> phi or psi in linear wave state equation?

Imagine a superposition of planewaves. In our first example the states in superposition were discrete. They were a summation of states: $\Psi = \sum_n c_n\psi_n$. This form has our wave function as a linear superposition of wave states. Each state is a particular solution to our Schrödinger equation, where each coefficient provides us with the projection of the wavefunction onto that particular basis state. (Think back to our eigenfunctions as a basis in Hilbert space.)

This equation is equal to the infinite sum $\Psi = c_1\psi_1 + c_2\psi_2 + c_3\psi_3 + \dots$. Note that most of the time, when dealing in practical matters, energy is considered finite, so infinite distributions are rare.

Alternatively, instead of thinking about energy, which is discretized, we can talk about a continuous distribution. For example, instead of talking about energies, we can express this in terms of momentum. As we saw already, a particle in free space can take any value for momentum, giving us the continuous distribution:

$\Psi(x, t) = \int_{-\infty}^{\infty} g(k)\,e^{i(kx - \omega t)}\,dk$

Here we simply replaced the sum from the discrete energy expansion with an integral, integrating over all the allowed values of momentum. The resulting equation is that of the wave packet. Here $g(k)$ is our coefficient. This is a direct analogy to the summation from earlier: instead of summing over discrete coefficients, we integrate over them. But what are these coefficients?

<FIGURE>

This coefficient is simply a function of $k$, representing the probability amplitude for finding the particle in one of these particular states; it can be thought of as the continuous analogue of $c_n$. Physically this describes the distribution shown in <FIGURE>, where the probability of measuring the particle at a particular momentum is related to the value of our coefficient in front of our basis states.

Now let's simplify our equation and say that $t = 0$, providing us with:

$\Psi(x, 0) = \int_{-\infty}^{\infty} g(k)\,e^{ikx}\,dk$

Looking at this solution, we know that the whole wave function, and the coefficients $g(k)$, must be well-behaved. The coefficients are well-behaved, as they are just some statistical distribution, going to zero on either end and integrating to one. On the other hand, $e^{i(kx - \omega t)}$ will oscillate rapidly, so the only way that our wave function can be well-behaved on the whole is if the phase is stationary at the center of the packet:

$\frac{d}{dk}\left(kx - \omega t\right) = 0$

Solving this relationship looks like:

$x = \frac{d\omega}{dk}\,t \quad\Rightarrow\quad v_g = \frac{d\omega}{dk}$

Looking simply at the units in the final equation, we have $\omega/k \sim (1/s)/(1/m)$, meaning that $v_g$ has the units $m/s$ (a velocity). Going back to our definitions of energy and momentum, we can further transform $v_g$:

$v_g = \frac{d\omega}{dk} = \frac{1}{\hbar}\frac{dE}{dk} = \frac{\hbar k}{m} = \frac{p}{m} = v_{classical}$

Here, $\omega(k)$ and $E(k)$ are called "dispersion relations". They are essentially the energy of a particle vs. the wave number, $k$. They are important, and researchers spend huge amounts of time, money, and resources to determine them for various material systems. For example, the band structure of a material is a dispersion relation. <CHAPTER REF> The group velocity, $v_g = d\omega/dk$, is the slope of the dispersion. When we talk about electrons moving in a crystal we talk about the group velocity, the magnitude of which generally depends on $k$.

<FIGURE> "Title" (Description)

The Momentum Space Representation

Looking closer at these wave packets, let's begin by rewriting our planewave equation, putting the time dependence into the general coefficient function and setting $t = 0$ to get rid of the energy variable. This results in the equation:

$\Psi(x, 0) = \int_{-\infty}^{\infty} g(k)\,e^{ikx}\,dk$

Now let's apply a Fourier transform to our equation:

$g(k) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \Psi(x, 0)\,e^{-ikx}\,dx$

Putting this transformation into the above planewave equation results in:

$\Psi(x, 0) = \int_{-\infty}^{\infty}\left[\frac{1}{2\pi}\int_{-\infty}^{\infty}\Psi(x', 0)\,e^{-ikx'}\,dx'\right]e^{ikx}\,dk$

If the set $\{e^{ikx}\}$ is orthogonal and normalized, then the $g(k)$ are determined uniquely as well. We refer to $g(k)$ as the momentum-space representation of the wavefunction, and Fourier space has certain properties which make this representation extremely useful. Truthfully, there is only one wavefunction (it is a state function!!), but here it is projected onto the momentum representation, whereas $\Psi(x)$ is projected onto the position representation.
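A minimal numerical sketch of switching representations with a discrete Fourier transform. The Gaussian wave function and grid are arbitrary example choices; the check is that the state is normalized in both representations.

```python
import numpy as np

# Position-space wave function: a normalized Gaussian (example)
x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 4) / (2 * np.pi)**0.25

# Momentum-space representation via FFT (continuous-FT normalization)
k = np.fft.fftshift(np.fft.fftfreq(x.size, d=dx)) * 2 * np.pi
phi = np.fft.fftshift(np.fft.fft(psi)) * dx / np.sqrt(2 * np.pi)

print(np.trapz(np.abs(psi)**2, x))   # ~1.0 in position space
print(np.trapz(np.abs(phi)**2, k))   # ~1.0 in momentum space too (Parseval)
```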

Let's consider a physically meaningful distribution. In this case, a gaussian distribution of momenta:

$g(k) = A\,e^{-(k - k_0)^2/2\sigma_k^2}$

<FIGURE> "Gaussian Momentum" (Description)

<MATH CHECK> Everything below...

To define $A$, let's use a well-known relationship:

$\int_{-\infty}^{\infty} e^{-ax^2}\,dx = \sqrt{\frac{\pi}{a}}$

Using this "well-known" relationship to find $A$ by normalizing $g(k)$:

$\int_{-\infty}^{\infty}|g(k)|^2\,dk = A^2\int_{-\infty}^{\infty}e^{-(k - k_0)^2/\sigma_k^2}\,dk = A^2\,\sigma_k\sqrt{\pi} = 1$

thus...

$A = \frac{1}{\sqrt{\sigma_k\sqrt{\pi}}}$

Substituting this into $\Psi(x, 0) = \int g(k)\,e^{ikx}\,dk$ and solving...

$\Psi(x, 0) \propto e^{ik_0x}\,e^{-\sigma_k^2x^2/2}$

$\Psi(x, 0)$ is itself a gaussian (times the planewave $e^{ik_0x}$), centered on $x = 0$.

Width of a gaussian: $\Delta x\,\Delta k \approx 1$

$\Delta x \approx \frac{1}{\sigma_k}$

As $\Delta k$ becomes large, $\Delta x$ becomes small, and vice versa.

In the limit $\Delta k \to 0$:

$g(k) \to \delta(k - k_0) \quad\Rightarrow\quad \Psi(x) \to e^{ik_0x}$, a single planewave spread over all space.


Watch the evolution of the wave packet over time...

Substitute $\omega = \frac{\hbar k^2}{2m}$ into $\Psi(x, t) = \int g(k)\,e^{i(kx - \omega t)}\,dk$

If you remember, $e^{i(kx - \omega t)}$ is our planewave solution from <LINK>.

Solving the integral gives us a gaussian packet whose center moves at $v_g = \hbar k_0/m$ and whose width grows with time:

$|\Psi(x, t)|^2 \propto \exp\!\left[-\frac{(x - v_gt)^2}{2\,\Delta x(t)^2}\right], \qquad \Delta x(t) = \Delta x(0)\,\sqrt{1 + \left(\frac{\hbar\sigma_k^2t}{m}\right)^2}$

Plot $|\Psi(x, t)|^2$

Just because you're a theorist doesn't mean you shouldn't learn by experimentation. Let's put some numbers in and see how this wave function behaves.

<FIGURE> "Example Graph 1" (t=0)

<FIGURE> "Example Graph 2" (t=5000)

<FIGURE> "Example Graph 3" (t=10000)


Quantum Mechanics for Engineers/Degeneracy

Degeneracy is often talked about in electronics and quantum mechanics in reference to electrons which have the same energy level. In this case, since energy is an eigenvalue, you end up with two electrons with different eigenfunctions which still share the same eigenvalue. We call a quantum state "degenerate" if two or more eigenfunctions have the same eigenvalue, as in the case of the electrons. But how does this happen? There are three separate ways: symmetry, exchange, and accidental.

Degeneracy by Symmetry

This is the form of degeneracy associated with the hybridization of orbitals. An atom's behavior in the x, y, and z-directions is the same, assuming a spherical potential, which generally applies to atoms in isolation.

<FIGURE> "Particle in a 2D Box" (Description)

Imagine a particle in another box, once again with infinite potential on all sides, but this time in 2D, giving us the Hamiltonian:

$\hat{H} = -\frac{\hbar^2}{2m}\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right)$

This problem easily breaks into component parts: $\psi(x, y) = \psi_x(x)\,\psi_y(y)$, with $E = E_x + E_y$.

Substituting into the Schrödinger equation and working through the math finds that:

$E_{n_xn_y} = \frac{\pi^2\hbar^2}{2mL^2}\left(n_x^2 + n_y^2\right)$

Looking at these time-independent eigenfunctions, when the box is square ($L_x = L_y$) we find that $E_{1,2} = E_{2,1}$, but $\psi_{1,2} \neq \psi_{2,1}$. These are different planewave eigenfunctions with the same energy, or eigenvalue, which completes the definition of degeneracy by symmetry.
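A minimal sketch enumerating these degeneracies for a square 2D box, with energies in units of $\pi^2\hbar^2/2mL^2$:

```python
from collections import defaultdict

# Group quantum-number pairs by their energy nx^2 + ny^2
levels = defaultdict(list)
for nx in range(1, 5):
    for ny in range(1, 5):
        levels[nx**2 + ny**2].append((nx, ny))

for E, states in sorted(levels.items()):
    print(E, states)  # e.g. 5 -> [(1, 2), (2, 1)]: degenerate by symmetry
```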

<MATH> The below math has no home :(

When $V(x, y) = V_x(x) + V_y(y)$, the solution to the eigenvalue problem is separable. Thus:

$\psi(x, y) = \psi_x(x)\,\psi_y(y), \qquad E = E_x + E_y$

Accidental Degeneracy

Occasionally, two eigenfunctions will just happen to have the same eigenvalue for energy, making them "accidentally" degenerate. Accidental degeneracy simply refers to when two eigenfunctions share the same eigenvalue by accident, and not due to symmetry or exchange.

Degeneracy by Exchange

<FIGURE> "Two Particles in a 1D Box" (Description)

Now let's look at a one dimensional box with two non-interacting particles. This box will once again have infinite potentials outside of the box, as seen in <FIGURE>. As these particles are non-interacting, their potentials never see each other.

Our separable Hamiltonian for this situation will look like:

$\hat{H} = \hat{H}_1 + \hat{H}_2 = -\frac{\hbar^2}{2m_1}\frac{\partial^2}{\partial x_1^2} - \frac{\hbar^2}{2m_2}\frac{\partial^2}{\partial x_2^2}$

The Hamiltonian solution for each particle solves to:

$\psi_{n_i}(x_i) = \sqrt{\frac{2}{L}}\,\sin\!\left(\frac{n_i\pi x_i}{L}\right), \qquad E_{n_i} = \frac{n_i^2\pi^2\hbar^2}{2m_iL^2}$

Combining these equations with our two-particle Hamiltonian gives us:

$\Psi_{n_1n_2}(x_1, x_2) = \psi_{n_1}(x_1)\,\psi_{n_2}(x_2), \qquad E = E_{n_1} + E_{n_2}$
<IS x AN EIGENVALUE???>


With this notation, $\psi_{n_1}(x_1)$ refers to particle one with quantum number $n_1$, while $\psi_{n_2}(x_2)$ refers to particle two with quantum number $n_2$. Similarly, $x_1$ and $m_1$ refer to particle one's position and mass, while $x_2$ and $m_2$ refer to particle two's position and mass. Explicitly, these particles can be at different positions and have different masses, but when the two particles are identical, swapping their labels leaves the eigenvalue unchanged: there is degeneracy due to exchange.

<FIGURE> "Title" (Snapshots of classical mechanics)

This has to do with the loss of determinism. In a classical picture, if we take a snap-shot of the system, then wait a second and take another, we can tell which particle is which, as the system is deterministic. <FIGURE> Quantum mechanics is not this way. In the quantum world there is uncertainty, which means I can't really tell you that there are "two particles in this box located here, at $x_1$, and there, at $x_2$." Even if it is known that the wavefunctions are centered on those points, in reality that is just the center of a probabilistic distribution of where the particle might be. <FIGURE>

<FIGURE> "Title" (Note that these are really probability distributions as opposed to positions.)

Applying our classical method to this case: even if by some capacity we could look at the box and identify each particle, after looking away and looking back we can't tell which particle is which anymore, because the wave functions overlap. At any point, both particles one and two are present in superposition.

Going back to our equations, this means that $\Psi_{n_1n_2}$ and $\Psi_{n_2n_1}$ have the exact same energy regardless of whether you switch $n_1$ and $n_2$. This is how exchange leads to degeneracy. We have lost the deterministic world view, making it impossible to definitively tell the particles apart when they have the same eigenvalues; if they had different masses to begin with, we would be able to tell them apart regardless.

Mathematically speaking, $E_{n_1n_2} = E_{n_2n_1}$, and switching the $n$ values is the same as switching the particles.

Implications of Exchange

Compared to degeneracy by symmetry and accidental degeneracy, degeneracy by exchange has several long-reaching implications which should be considered. Imagine we have a system of $N$ non-interacting identical particles, described by a wave function:

$\Psi = \Psi(q_1, q_2, \dots, q_N)$

Each of these $q_i$ represents the collection of variables and quantum numbers that represent a given particle. The written order of these variables further corresponds to each particle, precisely representing them and the state that these particles are in. Given this expression we can identify an interchange operator ($\hat{P}_{ij}$). This interchange operator takes two particles and switches them, essentially switching the $q_i$ for one particle with the $q_j$ of another particle. Applying an interchange operator to the wavefunction looks like:

$\hat{P}_{ij}\,\Psi(q_1, \dots, q_i, \dots, q_j, \dots, q_N) = \Psi(q_1, \dots, q_j, \dots, q_i, \dots, q_N)$

The interchange operator doesn't change much, as it simply moves around parameters which were already in the system. Furthermore, the interchange operator doesn't change the energy of the system, which means that our interchange operator and the Hamiltonian operator commute: $[\hat{P}_{ij}, \hat{H}] = 0$. While this was already implied by the non-interacting nature of our particles, this further shows that we can solve for each of these elements independently; thus the wavefunctions $\Psi$ are eigenfunctions of both $\hat{H}$ and $\hat{P}_{ij}$.

As such, when the interchange operator operates on some wavefunction, we know that it has to return an eigenvalue ($\hat{P}_{ij}\Psi = \lambda\Psi$). All good operators in Hilbert space will obey this relationship.

As it stands to reason, applying the same interchange operator to the same wavefunction twice will result in the original wave function. The operator switches all the particle parameters, and then switches them back.

What are the values of $\lambda$? Swapping $q_i$ and $q_j$ twice returns the initial state.

$\hat{P}_{ij}^2\,\Psi = \lambda^2\,\Psi = \Psi \quad\Rightarrow\quad \lambda = \pm 1$

In this case, we give these eigenvalues names: when $\lambda = +1$ we say that the wavefunction is "symmetric under exchange," and when $\lambda = -1$ we say that the wavefunction is "antisymmetric under exchange." It's also worth noting that even though each version of the interchange operator commutes with the Hamiltonian, they don't necessarily commute with each other. It's not universally true that the two orderings of any exchange are equivalent. This is important because it limits the way that we can express the wavefunctions.

Let's define another operator, $\hat{P}$, as a permutation operator: a series of interchange operators, $\hat{P}_{ij}$, that rearrange the variables in the wavefunction. This gives us:

$\hat{P}\,\Psi(q_1, q_2, \dots, q_N) = \Psi(q_{P(1)}, q_{P(2)}, \dots, q_{P(N)})$

When $\hat{P}$ operates, it returns a new wave function with the variables, $q_i$, in a different order. If $\hat{P}$ contains an even number of swaps, we call it an even permutation, and if it has an odd number of swaps, we call it an odd permutation. It's worth noting that there exist $N!$ total permutations, and these permutation operators will not commute with each other, just as the interchange operators don't commute with each other.

Unfortunately, this complicates things. In general, $\hat{P}$ operators of different arrangements do not commute, because $\hat{P}_{ij}\hat{P}_{jk} \neq \hat{P}_{jk}\hat{P}_{ij}$. This means that the wavefunction may be an eigenstate of the Hamiltonian and of some permutation, $\hat{P}_1$, but not be an eigenstate of another permutation, $\hat{P}_2$. This limits the nature of our wavefunction: we must express our wavefunction in such a way that it is either totally symmetric or totally antisymmetric. These are two special wavefunctions that commute with the Hamiltonian AND with all $N!$ possible $\hat{P}$.

$\hat{P}\,\Psi_S = \Psi_S \qquad\text{and}\qquad \hat{P}\,\Psi_A = \begin{cases} +\Psi_A & \hat{P}\ \text{even} \\ -\Psi_A & \hat{P}\ \text{odd} \end{cases}$

In the case of the symmetric wavefunction, the permutation operator acting on it returns the symmetric wavefunction for all permutations. Similarly, the permutation operator acting on the antisymmetric wavefunction returns the antisymmetric wavefunction for even permutations and the negative of the wavefunction for odd permutations. This also serves as our definition of symmetric and antisymmetric wavefunctions.

Postulate Zero

From what we know of the world, $\Psi_S$ and $\Psi_A$ are sufficient to describe all systems of identical particles. Particles that are totally symmetric are called Bosons and obey Bose-Einstein statistics (photons, phonons, Cooper pairs), while particles that are totally antisymmetric are called Fermions and obey Fermi-Dirac statistics (electrons, neutrinos, quarks). This is called "Postulate Zero" since we can't actually prove it as fact; it is simply a pattern that every known particle follows, and it happens to work out through the relationships of the commutators and our operators.

This means that when we write our solution, we have to make certain that it is either totally symmetric or totally antisymmetric. If, for example, we wrote a solution to two non-interacting particles in a 1D box that wasn't either totally symmetric or totally antisymmetric, we would need to add different variations of our solution together to achieve either total symmetry or total antisymmetry. This might look like:

$\Psi_A(x_1, x_2) = \frac{1}{\sqrt{2}}\left[\psi_{n_1}(x_1)\,\psi_{n_2}(x_2) - \psi_{n_1}(x_2)\,\psi_{n_2}(x_1)\right]$

As you can see, interchanging the particles returns the negative of the wavefunction. You can do the same with a symmetric wavefunction (using a plus sign). If I tell you that the particles are Fermions or Bosons, then you know right away whether they reside in $\Psi_A$ or $\Psi_S$. That said, these particles are supposed to be non-interacting, and thus the Hamiltonians can't "see" each other, yet they become entangled.

These are NOT a simple product of single-particle states. In quantum mechanics we say that the states are "entangled", and even though the particles do not interact, their wavefunctions are entangled. This means that a measurement of one particle has consequences for the other. By looking at the expansion of $\Psi_A$ you can see that there is a superposition.

Expressing Wave Functions

Imagine $\Psi_A = \frac{1}{\sqrt{2}}\left[\psi_1(x_1)\psi_2(x_2) - \psi_1(x_2)\psi_2(x_1)\right]$ with energies $E_1$ and $E_2$. Which particle has energy $E_1$? We don't know. Each particle is in a superposition of being in state $\psi_1$ and $\psi_2$. (Many-body physics is neat stuff, but also extremely complicated.)

How do we express a general symmetric or antisymmetric wavefunction for any system? Symmetric wavefunctions are simple; it's just a sum over all interchanges. On the other hand, antisymmetric wavefunctions are a little more complicated. To express an antisymmetric wavefunction we use a Slater determinant:

Say $N = 2$, so $\Psi_A = \frac{1}{\sqrt{2}}\left[\psi_a(q_1)\psi_b(q_2) - \psi_a(q_2)\psi_b(q_1)\right]$.

$\Psi_A = \frac{1}{\sqrt{2!}}\begin{vmatrix} \psi_a(q_1) & \psi_b(q_1) \\ \psi_a(q_2) & \psi_b(q_2) \end{vmatrix}$

This is an N x N determinant. For an example, look at some three-particle system of Fermions:

$\Psi_A = \frac{1}{\sqrt{3!}}\begin{vmatrix} \psi_a(q_1) & \psi_b(q_1) & \psi_c(q_1) \\ \psi_a(q_2) & \psi_b(q_2) & \psi_c(q_2) \\ \psi_a(q_3) & \psi_b(q_3) & \psi_c(q_3) \end{vmatrix}$

A symmetric wavefunction would have the same terms, but with all the terms added together in the summation (a permanent rather than a determinant).

Pauli Exclusion Principle

In the above example, each particle is in a unique state: $\psi_a$, $\psi_b$, or $\psi_c$. What would happen if $\psi_a = \psi_b$? Then the terms would cancel out and the wavefunction would equal zero. This is the crux of the Pauli exclusion principle: Fermions must have unique sets of quantum numbers, because if they share quantum numbers their antisymmetric wavefunction vanishes. This comes from the fact that electrons are indistinguishable particles.
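A minimal numerical sketch of the Slater determinant, using particle-in-a-box states as an arbitrary illustrative basis. Swapping two particles flips the sign; a repeated state makes the amplitude vanish (Pauli exclusion).

```python
import math
import numpy as np

L = 1.0

def phi(n, x):
    """Single-particle box eigenfunction (illustrative choice of states)."""
    return math.sqrt(2 / L) * math.sin(n * math.pi * x / L)

def slater(ns, xs):
    """Antisymmetric N-particle amplitude from a Slater determinant."""
    M = np.array([[phi(n, x) for n in ns] for x in xs])
    return np.linalg.det(M) / math.sqrt(math.factorial(len(ns)))

x = (0.2, 0.5, 0.7)
print(slater((1, 2, 3), x))                # generic amplitude
print(slater((1, 2, 3), (0.5, 0.2, 0.7)))  # swapping two particles flips the sign
print(slater((1, 1, 3), x))                # repeated state -> 0 (Pauli exclusion)
```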

<CITATION> "Quantum Teleportation Between Distant Matter Qubits"



Quantum Mechanics for Engineers/Hydrogen



This brings us to looking at atoms in materials. This section provides the complete derivation of the hydrogen atom, with relevant insights. Currently, we can solve the hydrogen atom and the ionized helium atom as single-electron systems. Higher-order atoms, or many-body problems, can be approached through perturbation methods, as discussed in the next chapter.

In this chapter, you should begin to see where the mathematics of atoms comes from, as much of the conversation around atoms is in terms of atomic orbitals which are initially derived here.

** INCOMPLETE **

Simple Hydrogen Atom

 
A simple hydrogen atom.

Imagine a "simple" Hydrogen atom. This simple Hydrogen atom has a nucleus with some momentum, the proton has some mass, and both have some position relative to the origin. The electron also has some position relative to the origin, some momentum and some mass.

If we want to talk about this we have to write the Hamiltonian. Given that $\frac{p_p^2}{2m_p}$ is the kinetic energy of the proton, $\frac{p_e^2}{2m_e}$ is the kinetic energy of the electron, and the Coulomb potential acts between them, the Hamiltonian classically looks like:

$H = \frac{p_p^2}{2m_p} + \frac{p_e^2}{2m_e} - \frac{e^2}{4\pi\epsilon_0\left|\vec{r}_e - \vec{r}_p\right|}$

The problem with this equation is that it's complicated. Here we are dealing with Cartesian space and two particles, each with its own position, momentum, and mass, which makes this a many-body problem. That said, part of the complication accounts for the translation of the hydrogen atom as a whole, which isn't that relevant here. We want to change to a more natural coordinate system which will allow us to distinguish hydrogen-atom translation from proton-electron interaction, and which focuses on the movement of the electron around the proton. To accomplish this we will change from our static coordinate system, with the particle system moving in space, to a center-of-mass system. Ideally, we also want a simplified version of the potential term.

<FIGURE> "Center of Mass System" (The first step to shifting the coordinate system is to identify the center of mass.)

Looking at <FIGURE>, we can define a new coordinate system, where we have some center of mass, which we want to use as our new origin, and a distance from the original origin to this new origin, called $\vec{R}$. Additionally, $\vec{r}$ is the distance between the nucleus and the electron, and this combined system has some momentum for the center of mass, $\vec{P}$, and some momentum for the electron, $\vec{p}$. In doing this transform, we have identified the center of mass as:

$\vec{R} = \frac{m_p\vec{r}_p + m_e\vec{r}_e}{m_p + m_e}$

There are two approaches to this transformation from $(\vec{r}_p, \vec{r}_e)$ to $(\vec{R}, \vec{r})$.

The Physical Approach

<FIGURE> "Relative Velocity in a Two Body System" (The relative velocity of these two particles is  .)

The first approach is to appeal to the physics of the situation utilizing relative and total momentum. As such,   would be the total momentum of the system, and the relative velocity of the system is  . Similarly, the relative momentum of the system,  , where   is the reduced mass ( ):

 

Applying this to our momentum equation and simplifying gets us:

 

This equation for relative momentum is then combined with our earlier equations for the classical Hamiltonian, and for total momentum ( ) to find the momentum of the electron ( ) and the momentum of the proton ( ), in terms of the relative and absolute momentums.

 

Now, this is a classical Hamiltonian. Let's turn it into a quantum Hamiltonian! Remembering that the quantum version of momentum is  , we get:

 

While this first approach is physically intuitive, it's not purely quantum. Physics people tend to do a lot of really bad math which happens to work out, and this solution has issues as it is not generalizable.

The General Approach

The second, better, approach is to use our already derived quantum Hamiltonian. This approach has the benefit of being generalizable and can be used to solve other many-body systems.

$\hat{H} = -\frac{\hbar^2}{2m_p}\nabla_{r_p}^2 - \frac{\hbar^2}{2m_e}\nabla_{r_e}^2 - \frac{e^2}{4\pi\epsilon_0\left|\vec{r}_e - \vec{r}_p\right|}$

Remember that the Laplacian operator is: $\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}$

Remember the chain rule for the transformation of differential operators:

$\frac{\partial}{\partial x_e} = \frac{\partial X}{\partial x_e}\frac{\partial}{\partial X} + \frac{\partial x}{\partial x_e}\frac{\partial}{\partial x}$

Using the relations $\vec{R} = \frac{m_p\vec{r}_p + m_e\vec{r}_e}{m_p + m_e}$ and $\vec{r} = \vec{r}_e - \vec{r}_p$ to perform the coordinate transformation from $(\vec{r}_p, \vec{r}_e)$ to $(\vec{R}, \vec{r})$, with some algebra, results in the same Hamiltonian as the earlier equation:

$\hat{H} = -\frac{\hbar^2}{2M}\nabla_R^2 - \frac{\hbar^2}{2\mu}\nabla_r^2 - \frac{e^2}{4\pi\epsilon_0 r}$

We now have $\hat{H} = \hat{H}_R + \hat{H}_r$, and since each piece depends only on one coordinate, our solution is separable as $\psi = \psi_R(\vec{R})\,\psi_r(\vec{r})$. Looking specifically at $\hat{H}_R$, this is just a free particle, $\hat{H}_R = -\frac{\hbar^2}{2M}\nabla_R^2$. We've already solved this problem and found that $\psi_R$ is planewaves. Alternatively, looking at $\hat{H}_r$ shows us that this is an electron, with a mass correction in the kinetic energy term, moving around a central potential, $V(r) = -\frac{e^2}{4\pi\epsilon_0 r}$:

$E = E_R + E_r$

The total energy is just the sum of the energy due to the translation of the atom as a whole, plus the energy associated with interactions between the proton and electron.

<VIDEO> 24:38

How big is the reduced mass correction? Tiny: \mu = \frac{m_e m_p}{m_e + m_p} = \frac{m_e}{1 + m_e/m_p} \approx 0.9995\,m_e.
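To put a number on it, here is a quick numerical check (a minimal Python sketch; the masses are rounded CODATA values):

import numpy as np

m_e = 9.10938e-31   # electron mass (kg)
m_p = 1.67262e-27   # proton mass (kg)
m_d = 3.34358e-27   # deuteron mass (kg)

def reduced_mass(m1, m2):
    """mu = m1*m2 / (m1 + m2)"""
    return m1 * m2 / (m1 + m2)

mu_H = reduced_mass(m_e, m_p)
mu_D = reduced_mass(m_e, m_d)

# Hydrogen-like energies scale linearly with mu, so the fractional correction
# relative to the infinite-nucleus value is mu/m_e - 1.
print(f"mu_H/m_e = {mu_H / m_e:.6f}")   # ~0.999456, i.e. a ~0.05% correction
print(f"mu_D/m_e = {mu_D / m_e:.6f}")   # ~0.999728
print(f"H/D fractional line shift ~ {(mu_D - mu_H) / mu_H:.2e}")  # ~2.7e-4

That roughly 3-parts-in-10^4 shift between isotopes is exactly what makes the next observation possible.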

Experimentally, it is possible to distinguish hydrogen from deuterium by these spectral shifts. As more particles are added this method can be extended. For three particles:

<FIGURE> "Title" (Description)

<MATHY STUFF> (related to figure)

The problem we want to solve is: \left[-\frac{\hbar^2}{2\mu}\nabla^2 + V(r)\right]\psi(\mathbf r) = E\,\psi(\mathbf r)


But this is a spherically symmetric potential. Cartesian coordinates aren't the best choice. Switch to spherical coordinates.

<FIGURE> "Spherical Axis" (Description)

Spherical coordinates are highly relevant to the following mathematical calculations; if needed, now is a good time to pause and familiarize yourself with them.

In Spherical Coordinates:

-\frac{\hbar^2}{2\mu}\left[\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial\psi}{\partial r}\right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial\psi}{\partial\theta}\right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2\psi}{\partial\varphi^2}\right] + V(r)\,\psi = E\,\psi

Wow! What a mess! How do we solve this? #SeparationOfVariables

Let \psi(r,\theta,\varphi) = R(r)\,Y(\theta,\varphi)

 

Multiply Left By:  

 

So both sides equal a constant, \lambda. Thus:

\frac{1}{R}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right) - \frac{2\mu r^2}{\hbar^2}\left[V(r) - E\right] = \lambda, \qquad -\frac{1}{Y}\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial Y}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{\partial^2 Y}{\partial\varphi^2}\right] = \lambda

Let's look closer at the second one:

<MATH>

As it happens, this is an operator. First I'll tell you the answer, and then I'll show you where it originates.

The operator is \hat L^2, where \hat{\mathbf L} is angular momentum. The eigenequation is:

\hat L^2\,Y_l^m(\theta,\varphi) = l(l+1)\hbar^2\,Y_l^m(\theta,\varphi), where l is an integer and \lambda = l(l+1).

Angular Momentum

Before we can proceed with studying hydrogen, we need to learn a little about angular momentum. In quantum mechanics there are two types of angular momentum: "orbital" angular momentum, which is analogous to the classically understood angular momentum where \mathbf L = \mathbf r \times \mathbf p, and "spin" angular momentum, which has no classical equivalent. We will talk about spin later; for now we will focus on orbital angular momentum.

Orbital Angular Momentum

Classically, \mathbf L = \mathbf r \times \mathbf p is a vector. Thus:

L_x = y\,p_z - z\,p_y, \qquad L_y = z\,p_x - x\,p_z, \qquad L_z = x\,p_y - y\,p_x

Consider the commutation of the \hat{\mathbf L} operator:

[\hat L_x, \hat L_y] = i\hbar\,\hat L_z, \qquad [\hat L_y, \hat L_z] = i\hbar\,\hat L_x, \qquad [\hat L_z, \hat L_x] = i\hbar\,\hat L_y

Which means that we cannot simultaneously measure all components of  . Very odd properties for a vector!

What about the magnitude, \hat L^2 = \hat L_x^2 + \hat L_y^2 + \hat L_z^2?

[\hat L^2, \hat L_x] = [\hat L^2, \hat L_y] = [\hat L^2, \hat L_z] = 0


Therefore, simultaneous eigenfunctions of \hat L^2 and any one component of \hat{\mathbf L} can be known. For this discussion to be useful we need to switch from Cartesian coordinates to spherical. This requires a bunch of algebra, simple yet tedious; here I will skip it and just provide the equations:

\hat L_x = i\hbar\left(\sin\varphi\,\frac{\partial}{\partial\theta} + \cot\theta\cos\varphi\,\frac{\partial}{\partial\varphi}\right), \qquad \hat L_y = i\hbar\left(-\cos\varphi\,\frac{\partial}{\partial\theta} + \cot\theta\sin\varphi\,\frac{\partial}{\partial\varphi}\right), \qquad \hat L_z = -i\hbar\,\frac{\partial}{\partial\varphi}

... and substituting into \hat L^2 provides:

\hat L^2 = -\hbar^2\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\varphi^2}\right]

It is noteworthy that in many problems the solution is invariant to rotation, so whichever direction we point we can define as z and use the simple operators \hat L_z and \hat L^2. Let's start by looking at the eigensolutions for \hat L_z:

-i\hbar\,\frac{d\Phi}{d\varphi} = \ell_z\,\Phi(\varphi)

What are good solutions for \Phi(\varphi)?

\Phi_m(\varphi) = A\,e^{im\varphi}, \qquad \ell_z = m\hbar

Boundary Conditions: \Phi(\varphi) = \Phi(\varphi + 2\pi); implying that m must be an integer.

\hat L_z\,\Phi_m = m\hbar\,\Phi_m, \qquad m = 0, \pm 1, \pm 2, \ldots

Since [\hat L^2, \hat L_z] = 0, they share eigenfunctions.

\hat L^2\,Y(\theta,\varphi) = \lambda\hbar^2\,Y(\theta,\varphi) \;\rightarrow\; What solution?

The same:

\Phi_m(\varphi) = e^{im\varphi}

\hat L_z depends only on \varphi, so the separable solution is:

Y(\theta,\varphi) = \Theta(\theta)\,\Phi_m(\varphi)

which results in the same eigenvalues, with \hat L_z\,Y = m\hbar\,Y

Returning to \hat L^2...

-\hbar^2\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial Y}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{\partial^2 Y}{\partial\varphi^2}\right] = \lambda\hbar^2\,Y


Separating \Theta and \Phi and substituting \Phi_m(\varphi) = e^{im\varphi}:

\frac{1}{\sin\theta}\frac{d}{d\theta}\left(\sin\theta\frac{d\Theta}{d\theta}\right) + \left[\lambda - \frac{m^2}{\sin^2\theta}\right]\Theta = 0

This is what we have to solve. With the appropriate substitutions and a fair amount of algebra, this can be transformed into the Legendre equation, and with more algebra still, we can get a solution in terms of the associated Legendre functions.

\Theta(\theta) = A\,P_l^m(\cos\theta)
Where \lambda = l(l+1), and |m| \le l.


P_l^m(x) are the associated Legendre functions:

P_l^m(x) = (1 - x^2)^{|m|/2}\,\frac{d^{|m|}}{dx^{|m|}}\,P_l(x)

Most mathematical software packages (Mathematica, Maple, MATLAB, etc.) have these built in.
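For instance, here is a minimal sketch using SciPy's built-in routine. Note SciPy's argument convention: sph_harm takes the azimuthal angle first and the polar angle second, the reverse of the physics convention used in this chapter.

import numpy as np
from scipy.special import sph_harm   # newer SciPy also offers sph_harm_y

l, m = 2, 1
theta_pol = np.pi / 3            # polar angle (physics theta)
phi_az = np.pi / 4               # azimuthal angle (physics phi)

# SciPy signature: sph_harm(m, l, azimuthal, polar)
Y = sph_harm(m, l, phi_az, theta_pol)
print(f"Y_2^1(theta=pi/3, phi=pi/4) = {Y:.5f}")

# Spot-check orthonormality on the unit sphere with a simple quadrature grid.
th = np.linspace(0, np.pi, 400)
ph = np.linspace(0, 2 * np.pi, 400)
TH, PH = np.meshgrid(th, ph, indexing="ij")
Y1 = sph_harm(1, 2, PH, TH)
Y2 = sph_harm(0, 3, PH, TH)
dOmega = np.sin(TH)                       # the sin(theta) measure
inner = np.trapz(np.trapz(np.conj(Y1) * Y2 * dOmega, ph, axis=1), th)
print(f"<Y_2^1 | Y_3^0> ~ {abs(inner):.2e}")   # ~0, as orthonormality demands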

Y_l^m(\theta,\varphi) = \sqrt{\frac{2l+1}{4\pi}\,\frac{(l-m)!}{(l+m)!}}\;P_l^m(\cos\theta)\,e^{im\varphi}
Where l = 0, 1, 2, \ldots, and m = -l, \ldots, +l.

These are called "Spherical Harmonics"; they are normalized on the sphere of unit radius, that is, orthonormal:

\oint Y_{l'}^{m'*}(\theta,\varphi)\,Y_l^m(\theta,\varphi)\,d\Omega = \delta_{l l'}\,\delta_{m m'}


... where \oint d\Omega means "to integrate over a sphere", and d\Omega = \sin\theta\,d\theta\,d\varphi. Note that the Y_l^m are a "complete set", meaning that any function f(\theta,\varphi) can be written as:

f(\theta,\varphi) = \sum_{l=0}^{\infty}\sum_{m=-l}^{l} c_{lm}\,Y_l^m(\theta,\varphi), \qquad c_{lm} = \oint Y_l^{m*}\,f\,d\Omega

This is analogous to plane waves in Cartesian space and, all around, a good, useful set of functions.

As a matter of notation we designate states with: |l, m\rangle

For many-body systems we denote the total orbital angular momentum with a capital L, where \mathbf L = \sum_i \mathbf L_i.

Here, l is called the "orbital angular momentum quantum number", and m is called the "magnetic quantum number". Furthermore, the operators \hat L_x, \hat L_y, \hat L_z, and \hat L^2 all lack creative names.

<Liboff, Chapter 9>

Back to the Hydrogen Atom...

Since \lambda = l(l+1), if we let u(r) = r\,R(r) and plug our previous equation in, we get:

-\frac{\hbar^2}{2\mu}\,\frac{d^2u}{dr^2} + \left[V(r) + \frac{\hbar^2\,l(l+1)}{2\mu r^2}\right]u = E\,u

Here, V(r) is the true Coulomb potential, and \hbar^2 l(l+1)/(2\mu r^2) is called "the angular momentum barrier."

<FIGURE> "Title" (Description)

<FIGURE> "Title" (Descripiton)

Simplifying the radial equation further...

Looking at the first term...

\frac{1}{r^2}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right)

Substituting R(r) = u(r)/r in, we get...

\frac{1}{r^2}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right) = \frac{1}{r}\,\frac{d^2u}{dr^2}

Which combines with the initial equation to get:  

This equation is nice and compact, but not solvable. To get something solvable we substitute the Coulomb potential back in for V(r)...

 

Introduce two dimensionless variables to substitute into the equation.

 

This gets us:

 


Consider solutions for u(\rho). In the limit of large \rho (large r), the equation simplifies to:

\frac{d^2u}{d\rho^2} = \frac{u}{4}


Here, our solution is u(\rho) = A\,e^{-\rho/2} + B\,e^{+\rho/2}, but as \rho approaches infinity, u must approach zero. Therefore, B equals zero, which means that:

u(\rho) \sim e^{-\rho/2} \quad \text{as} \quad \rho \to \infty


Consider the other limit, where \rho approaches zero. The problem becomes:

\frac{d^2u}{d\rho^2} = \frac{l(l+1)}{\rho^2}\,u

Guessing the solution u = C\rho^{s} gives us:

s(s-1) = l(l+1) \quad\Rightarrow\quad s = l+1 \ \text{or}\ s = -l

Which means that u \sim \rho^{l+1} as \rho \to 0 (the s = -l solution diverges at the origin), and we're going to want a solution that looks like this:

u(\rho) = \rho^{l+1}\,e^{-\rho/2}\,f(\rho)

<FIGURE> "Title" description

Search for solutions as a polynomial expansion: f(\rho) = \sum_k c_k\,\rho^k

Substituting this in results in:

 
This is the Laguerre differential equation, and its solution is known; the polynomial expansion is found to be finite. As it turns out, the way we tend to find exact solutions in quantum mechanics is to manipulate the problem until it becomes a known differential equation with an existing solution.

The Solution: E_n = -\frac{\mu e^4}{2\hbar^2(4\pi\epsilon_0)^2}\,\frac{1}{n^2} \equiv -\frac{R_y}{n^2}

When \mu \approx m_e, R_y equals the Rydberg constant (~13.6 eV), and a_0 is the Bohr radius (~0.529 Å). When these are substituted:

E_n = -\frac{13.6\ \mathrm{eV}}{n^2}


Exactly the energy from Bohr's atom (Lecture 1). Note that Bohr's idea of quantized angular momentum is important, since it is the angular momentum barrier that prevents the electron from spiraling into the nucleus.

The Radial Wave Function

R_{nl}(r) = \sqrt{\left(\frac{2}{n a_0}\right)^3 \frac{(n-l-1)!}{2n\,(n+l)!}}\;e^{-r/n a_0}\left(\frac{2r}{n a_0}\right)^{l} L_{n-l-1}^{2l+1}\!\left(\frac{2r}{n a_0}\right)

Where L_{n-l-1}^{2l+1} are the associated Laguerre polynomials, and the overall sign is fixed to within an arbitrary phase factor (a selected form of the solution).
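As a concrete check, these radial functions can be built directly from the associated Laguerre polynomials. The following is a minimal sketch in atomic units (a_0 = 1, Z = 1) using the normalization above, which matches SciPy's Laguerre convention; it numerically verifies that each R_{nl} is normalized:

import numpy as np
from math import factorial
from scipy.special import genlaguerre   # genlaguerre(k, alpha) returns L_k^alpha

def R_nl(n, l, r):
    """Normalized hydrogenic radial function (Z = 1, atomic units)."""
    rho = 2.0 * r / n
    norm = np.sqrt((2.0 / n) ** 3 * factorial(n - l - 1)
                   / (2.0 * n * factorial(n + l)))
    return norm * np.exp(-rho / 2) * rho ** l * genlaguerre(n - l - 1, 2 * l + 1)(rho)

# Numerical check: the integral of |R_nl|^2 r^2 dr should equal 1.
r = np.linspace(1e-6, 60, 20000)
for n, l in [(1, 0), (2, 0), (2, 1), (3, 2)]:
    I = np.trapz(R_nl(n, l, r) ** 2 * r ** 2, r)
    print(f"n={n}, l={l}: normalization integral = {I:.6f}")   # ~1.000000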

The Associated Laguerre Polynomials

If we assume that the polynomial expansion terminates, so that n = n_r + l + 1 is an integer, we get the quantum numbers of hydrogen:

n = 1, 2, 3, \ldots \qquad l = 0, 1, \ldots, n-1 \qquad m = -l, \ldots, 0, \ldots, +l
<SOURCE> Bransden & Joachain 1983

R_{nl}^* = R_{nl} (since our R_{nl} is real)

<FIGURE> "The eigenfunctions of the bound states." (Description)

The number of radial nodes in R_{nl} is n - l - 1. As n increases, R_{nl} gets pushed out from the origin.

<FIGURE> "Radial Nodes" (Description)

The Solution so Far....

\psi_{nlm}(r,\theta,\varphi) = R_{nl}(r)\,Y_l^m(\theta,\varphi)

Where: E_n = -\frac{R_y}{n^2}

Also: [\hat H, \hat L^2] = [\hat H, \hat L_z] = [\hat L^2, \hat L_z] = 0

which provides: \hat H\,\psi_{nlm} = E_n\,\psi_{nlm}, \qquad \hat L^2\,\psi_{nlm} = l(l+1)\hbar^2\,\psi_{nlm}, \qquad \hat L_z\,\psi_{nlm} = m\hbar\,\psi_{nlm}

These operators share eigenfunctions, so the corresponding observables can be simultaneously measured!

<FIGURE> "Title" (Description)

Highly degenerate energy levels: n^2 eigenfunctions per level.

<FIGURE> "Title" (Description)

We've solved for the states of the given \hat H. Now what? We want to think about an atom. The nucleus is sitting at the origin. We then want to put electrons in. Hydrogen has one electron, but we could also make an H⁻ or even an H²⁻ ion. This is a very simplistic view, but we'll use it for our thought experiment.

Electrons are fermions, so no two can have the same quantum numbers.

<FIGURE> "Title" (Comparing a standard Hydrogen atom with an   ion. Notice how both   electrons have  ,  , and  .)

Notice how both 1s electrons in FIGURE have the same n, l, and m. This is seemingly contradictory to what we've already learned, but if we apply a magnetic field to the ion we get something akin to FIGURE. The electron energies split in what is called the Zeeman Effect.

<FIGURE> "Zeeman Effect" (Description)

Magnetic fields interact with an intrinsic angular momentum called "spin". There is no physical meaning to the word "spin"; rather, it is just an intrinsic angular momentum of purely quantum nature. Electrons have spin quantum number s = 1/2, with m_s = \pm 1/2. We'll call m_s = +1/2 spin up (\uparrow), and m_s = -1/2 spin down (\downarrow). Including spin, there are now four quantum numbers (n, l, m, m_s). And the levels of the system fill according to FIGURE.

<FIGURE> "Title" (Description)

Including spin, the degeneracy is now 2n^2. Relativistic corrections called "fine structure" in part lift this degeneracy. These fine-structure corrections include the relativistic correction to the kinetic energy, spin-orbit (\mathbf L\cdot\mathbf S) coupling, and the Darwin term.
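The 2n^2 count is easy to verify by brute force (a two-line Python check):

# Counting hydrogenic states per shell: sum over l of (2l+1) m-values,
# then double for the two spin states.
for n in range(1, 5):
    orbital = sum(2 * l + 1 for l in range(n))
    print(f"n={n}: {orbital} orbital states, {2 * orbital} including spin")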

Table of hydrogenic wave functions from Bransden and Joachain 2000

The complete normalized hydrogenic wave functions corresponding to the first three shells (Bransden & Joachain 2000). Here a_0 is the Bohr radius and Z the nuclear charge.

Shell | n | l | m  | Notation | Wave Function \psi_{nlm}
K     | 1 | 0 | 0  | 1s       | \frac{1}{\sqrt{\pi}}\left(\frac{Z}{a_0}\right)^{3/2} e^{-Zr/a_0}
L     | 2 | 0 | 0  | 2s       | \frac{1}{4\sqrt{2\pi}}\left(\frac{Z}{a_0}\right)^{3/2}\left(2 - \frac{Zr}{a_0}\right) e^{-Zr/2a_0}
L     | 2 | 1 | 0  | 2p_0     | \frac{1}{4\sqrt{2\pi}}\left(\frac{Z}{a_0}\right)^{3/2}\frac{Zr}{a_0}\, e^{-Zr/2a_0}\cos\theta
L     | 2 | 1 | ±1 | 2p_{\pm1}| \mp\frac{1}{8\sqrt{\pi}}\left(\frac{Z}{a_0}\right)^{3/2}\frac{Zr}{a_0}\, e^{-Zr/2a_0}\sin\theta\, e^{\pm i\varphi}
M     | 3 | 0 | 0  | 3s       | \frac{1}{81\sqrt{3\pi}}\left(\frac{Z}{a_0}\right)^{3/2}\left(27 - 18\frac{Zr}{a_0} + 2\frac{Z^2r^2}{a_0^2}\right) e^{-Zr/3a_0}
M     | 3 | 1 | 0  | 3p_0     | \frac{\sqrt{2}}{81\sqrt{\pi}}\left(\frac{Z}{a_0}\right)^{3/2}\left(6 - \frac{Zr}{a_0}\right)\frac{Zr}{a_0}\, e^{-Zr/3a_0}\cos\theta
M     | 3 | 1 | ±1 | 3p_{\pm1}| \mp\frac{1}{81\sqrt{\pi}}\left(\frac{Z}{a_0}\right)^{3/2}\left(6 - \frac{Zr}{a_0}\right)\frac{Zr}{a_0}\, e^{-Zr/3a_0}\sin\theta\, e^{\pm i\varphi}
M     | 3 | 2 | 0  | 3d_0     | \frac{1}{81\sqrt{6\pi}}\left(\frac{Z}{a_0}\right)^{3/2}\frac{Z^2r^2}{a_0^2}\, e^{-Zr/3a_0}\left(3\cos^2\theta - 1\right)
M     | 3 | 2 | ±1 | 3d_{\pm1}| \mp\frac{1}{81\sqrt{\pi}}\left(\frac{Z}{a_0}\right)^{3/2}\frac{Z^2r^2}{a_0^2}\, e^{-Zr/3a_0}\sin\theta\cos\theta\, e^{\pm i\varphi}
M     | 3 | 2 | ±2 | 3d_{\pm2}| \frac{1}{162\sqrt{\pi}}\left(\frac{Z}{a_0}\right)^{3/2}\frac{Z^2r^2}{a_0^2}\, e^{-Zr/3a_0}\sin^2\theta\, e^{\pm 2i\varphi}


Quantum Mechanics for Engineers/Variational Methods

This is the eighth chapter of the first section of the book Electronic Properties of Materials.

**INCOMPLETE**

An extremely useful fact is that the time-independent Schrödinger equation is equivalent to a variational principle. The energy is a functional, or a function of functions, of the wavefunction:

E[\psi] = \frac{\langle\psi|\hat H|\psi\rangle}{\langle\psi|\psi\rangle}

The energy E[\psi] is minimized when \psi is the ground state wavefunction. This can be proven by calculus-of-variations methods or by the method of Lagrange multipliers. Here, we're going to show this by a practical example.

A Practical Example

Say \{\phi_n\} are a complete set of orthonormal eigenfunctions of \hat H:

\hat H\,\phi_n = E_n\,\phi_n, \qquad \langle\phi_m|\phi_n\rangle = \delta_{mn}

\psi is an arbitrary square-integrable function, in that the integral of |\psi|^2 exists without singularity.

We can write \psi as:

\psi = \sum_n c_n\,\phi_n \quad\Rightarrow\quad E[\psi] = \frac{\sum_n |c_n|^2\,E_n}{\sum_n |c_n|^2}

Further, subtracting the lowest possible energy, the ground state energy (E_0), from both sides gets us:

E[\psi] - E_0 = \frac{\sum_n |c_n|^2\,(E_n - E_0)}{\sum_n |c_n|^2}

Since E_n is always greater than or equal to E_0 for all n, the right side of this equation must always be greater than or equal to zero.

This inequality has a very practical importance. It means that if \psi is not the ground state wave function, the energy will be larger than E_0. As it also happens, if E[\psi] = E_0, then \psi is the ground state \phi_0 (for a non-degenerate ground state).

So... say you have a difficult \hat H and can't solve it, but you have a "good" guess for \psi, say \tilde\psi. If you can find some way to tweak \tilde\psi to minimize E[\tilde\psi], then E[\tilde\psi] approaches E_0 from above. This is the basis of the Rayleigh-Ritz variational method.

Rayleigh-Ritz Variational Method

  1. Guess: \tilde\psi(\lambda_1, \lambda_2, \ldots), which has a good form, where the \lambda_i are a set of variational parameters.
  2. Calculate E[\tilde\psi].
  3. Solve \partial E/\partial\lambda_i = 0 for each \lambda_i.

This finds the set of \lambda_i that minimizes E[\tilde\psi] and returns the best \tilde\psi given the chosen form of \tilde\psi.

Rayleigh-Ritz Example

Take, as an example, the hydrogen atom. What if we couldn't solve for it? We can try making a good guess. Let's see how close a reasonable guess gets.

<INSERT MATH>

Excellent guesses will get you close to the true ground state. Good guesses will still do "ok", but not great. The shortcoming of this method is that you can't know whether your guess for \tilde\psi is close unless you already know the general solution. The best approach is to make several educated guesses based on the asymptotic behavior of \psi in the extreme limits. A sketch of one such guess follows.
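As one concrete illustration (an assumed trial form for this sketch, not necessarily the guess worked in the elided math above), take a Gaussian trial function \tilde\psi(r) = e^{-\alpha r^2} for the hydrogen ground state. In atomic units the energy functional reduces analytically to E(\alpha) = \frac{3}{2}\alpha - 2\sqrt{2\alpha/\pi}, which we can minimize numerically:

import numpy as np
from scipy.optimize import minimize_scalar

def E(alpha):
    """<H>(alpha) for psi ~ exp(-alpha r^2), hydrogen, atomic units (hartree)."""
    return 1.5 * alpha - 2.0 * np.sqrt(2.0 * alpha / np.pi)

res = minimize_scalar(E, bounds=(1e-4, 10.0), method="bounded")
print(f"alpha_min = {res.x:.4f}")       # analytic answer: 8/(9*pi) ~ 0.2829
print(f"E_min     = {res.fun:.4f} Ha")  # analytic answer: -4/(3*pi) ~ -0.4244
print("exact ground state: -0.5 Ha")

The optimal Gaussian lands about 15% above the true ground state: a "good" but not "excellent" guess, exactly the behavior described above, because a Gaussian has the wrong asymptotic behavior at both small and large r.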

<LENNARD-JONES>



Quantum Mechanics for Engineers/Perturbation Methods

This is the ninth chapter of the first section of the book Electronic Properties of Materials.

**INCOMPLETE**


Most operators (Hamiltonians) are not simple. Fortunately, with a bit of effort, we can sometimes rewrite the operator as \hat H = \hat H_0 + \lambda\hat H', where \hat H_0 is a Hamiltonian for which we know the solution:

\hat H_0\,\psi_n^{(0)} = E_n^{(0)}\,\psi_n^{(0)}

Here, the \psi_n^{(0)} are non-degenerate orthogonal eigenfunctions, and \hat H' is a small perturbation to \hat H_0. Additionally, \lambda is a real arbitrary parameter, and when \lambda = 0, we have:

\hat H = \hat H_0

The problem we want to solve is: \hat H\,\psi_n = (\hat H_0 + \lambda\hat H')\,\psi_n = E_n\,\psi_n

The perturbation is small, and in the limit \lambda \to 0 it goes to zero:

\lim_{\lambda\to 0} E_n = E_n^{(0)}, \qquad \lim_{\lambda\to 0} \psi_n = \psi_n^{(0)}

We will assume that E_n and \psi_n can be written as powers of \lambda:

E_n = E_n^{(0)} + \lambda E_n^{(1)} + \lambda^2 E_n^{(2)} + \cdots \qquad \psi_n = \psi_n^{(0)} + \lambda\psi_n^{(1)} + \lambda^2\psi_n^{(2)} + \cdots

Substituting:

(\hat H_0 + \lambda\hat H')(\psi_n^{(0)} + \lambda\psi_n^{(1)} + \cdots) = (E_n^{(0)} + \lambda E_n^{(1)} + \cdots)(\psi_n^{(0)} + \lambda\psi_n^{(1)} + \cdots)

Multiply through and collect common powers of \lambda to form equations for each order:

\lambda^0:\quad \hat H_0\,\psi_n^{(0)} = E_n^{(0)}\,\psi_n^{(0)}
\lambda^1:\quad \hat H_0\,\psi_n^{(1)} + \hat H'\,\psi_n^{(0)} = E_n^{(0)}\,\psi_n^{(1)} + E_n^{(1)}\,\psi_n^{(0)}
\lambda^2:\quad \hat H_0\,\psi_n^{(2)} + \hat H'\,\psi_n^{(1)} = E_n^{(0)}\,\psi_n^{(2)} + E_n^{(1)}\,\psi_n^{(1)} + E_n^{(2)}\,\psi_n^{(0)}

The \lambda^0 equation is just our unperturbed problem. We will begin by looking at the powers of \lambda^1.

Rearrange:

(\hat H_0 - E_n^{(0)})\,\psi_n^{(1)} = (E_n^{(1)} - \hat H')\,\psi_n^{(0)}

Multiply on the left by \psi_n^{(0)*} and integrate:

\langle\psi_n^{(0)}|\hat H_0 - E_n^{(0)}|\psi_n^{(1)}\rangle = \langle\psi_n^{(0)}|E_n^{(1)} - \hat H'|\psi_n^{(0)}\rangle

Begin with the left term. These operators are Hermitian. They have special properties, namely that they obey the postulates of quantum mechanics, including a few relations that are useful for proofs. One such property is:

\langle\hat H\phi|\chi\rangle = \langle\phi|\hat H\chi\rangle

Which we will use here:

\langle\psi_n^{(0)}|(\hat H_0 - E_n^{(0)})\,\psi_n^{(1)}\rangle = \langle(\hat H_0 - E_n^{(0)})\,\psi_n^{(0)}|\psi_n^{(1)}\rangle = 0

Thus, our entire left term equals zero. As a result:

0 = E_n^{(1)}\,\langle\psi_n^{(0)}|\psi_n^{(0)}\rangle - \langle\psi_n^{(0)}|\hat H'|\psi_n^{(0)}\rangle

Therefore, the first-order correction to the eigenvalue is:

E_n^{(1)} = \langle\psi_n^{(0)}|\hat H'|\psi_n^{(0)}\rangle

Following the same steps we can find the higher-order perturbations; for example, at second order:

E_n^{(2)} = \sum_{k\neq n} \frac{|\langle\psi_k^{(0)}|\hat H'|\psi_n^{(0)}\rangle|^2}{E_n^{(0)} - E_k^{(0)}}

Most simple theories do not require these higher-order corrections, but how do we get the corrected wavefunctions in the first place? Let's assume that \psi_n^{(1)} = \sum_k c_{nk}\,\psi_k^{(0)}, where the coefficient c_{nk} is the projection of \psi_n^{(1)} onto \psi_k^{(0)}. Returning to our original \lambda^1 term:

 

Rearranging and substituting gives us:

(\hat H_0 - E_n^{(0)})\sum_k c_{nk}\,\psi_k^{(0)} = (E_n^{(1)} - \hat H')\,\psi_n^{(0)}

Multiplying on the left by \psi_m^{(0)*} and integrating gets us:

\sum_k c_{nk}\,(E_k^{(0)} - E_n^{(0)})\,\delta_{mk} = E_n^{(1)}\,\delta_{mn} - \langle\psi_m^{(0)}|\hat H'|\psi_n^{(0)}\rangle

When m = n, we lose all c_{nk} terms, giving us: E_n^{(1)} = \langle\psi_n^{(0)}|\hat H'|\psi_n^{(0)}\rangle, as before.

However, when m \neq n we get:

c_{nm} = \frac{\langle\psi_m^{(0)}|\hat H'|\psi_n^{(0)}\rangle}{E_n^{(0)} - E_m^{(0)}}

Since c_{nn} does not seem to be determined from these equations, there is an uncomfortable degree of arbitrariness in selecting it. Require normalization:

\langle\psi_n|\psi_n\rangle = 1

Where: \langle\psi_n|\psi_n\rangle = 1 + \lambda\,(c_{nn} + c_{nn}^*) + \mathcal{O}(\lambda^2)

Thus:

c_{nn} + c_{nn}^* = 2\,\mathrm{Re}(c_{nn}) = 0

... where c_{nn} = \langle\psi_n^{(0)}|\psi_n^{(1)}\rangle is the projection of \psi_n^{(1)} onto \psi_n^{(0)}.

c_{nn} is a complex number.

Complex number formula: c_{nn} = |c_{nn}|\,e^{i\gamma}

What is \gamma? Here, we choose \gamma = 0.

<FIGURE> "Title" (Description)

In quantum mechanics usually, but not always, \psi can have an arbitrary phase e^{i\gamma}, so long as the magnitude of \psi is correct. Here we choose \gamma = 0. Therefore:

c_{nn} = 0

This dictates that all of \psi_n^{(1)} is orthogonal to \psi_n^{(0)}.

As an example, consider adding a correction to the hydrogen atom, which is actually a fairly common occurrence.

\hat H' = -\frac{G\,m_e m_p}{r}

This last equation is the influence of the gravitational attraction between the positively and negatively charged particles. This is a first-order energy correction.

 

When working with degenerate wavefunctions, the problem becomes slightly more complicated, because the interactions amongst the degenerate wavefunctions must be carefully accounted for. That said, this is just bookkeeping. The general procedure for Rayleigh-Schrödinger perturbation theory is as outlined here.
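A quick way to see the machinery working is to compare the first-order energies against exact diagonalization for a small matrix Hamiltonian (a minimal numerical sketch; the diagonal H_0 and random Hermitian H' below are arbitrary illustrative choices):

import numpy as np

rng = np.random.default_rng(0)
N = 6
E0 = np.diag(np.arange(N, dtype=float))   # non-degenerate unperturbed levels
V = rng.normal(size=(N, N))
V = (V + V.T) / 2                         # make H' Hermitian (real symmetric)

lam = 1e-3                                # small perturbation strength
H = E0 + lam * V

exact = np.linalg.eigvalsh(H)                  # exact eigenvalues, ascending
first_order = np.diag(E0) + lam * np.diag(V)   # E_n ~ E_n(0) + lam <n|H'|n>

for n in range(N):
    print(f"n={n}: exact={exact[n]:.8f}  first-order={first_order[n]:.8f}")
# The two agree to O(lam^2), just as the perturbation expansion predicts.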



Quantum Mechanics for Engineers/Many Electron Atoms

This is the tenth chapter of the first section of the book Electronic Properties of Materials.


We now have a solution for hydrogen.

\psi_{nlm}(r,\theta,\varphi) = R_{nl}(r)\,Y_l^m(\theta,\varphi), \qquad E_n = -\frac{13.6\ \mathrm{eV}}{n^2}


In undergraduate chemistry class, we were shown this solution and told "... and so it goes for the rest of the periodic table..." However, the truth is, it isn't so simple.

Consider Lithium:

<FIGURE> "Diagram of Lithium Ion" (Description)

We can apply center-of-mass corrections, but for simplicity, assume that the nucleus has M \to \infty.

What does lithium look like?

\hat H = \sum_{i=1}^{3}\left[-\frac{\hbar^2}{2m_e}\nabla_i^2 - \frac{Ze^2}{4\pi\epsilon_0\,r_i}\right] + \sum_{i<j}\frac{e^2}{4\pi\epsilon_0\,|\mathbf r_i - \mathbf r_j|}


Without the electron-electron interaction the problem would be much simpler.

\hat H_0 = \sum_{i}\left[-\frac{\hbar^2}{2m_e}\nabla_i^2 - \frac{Ze^2}{4\pi\epsilon_0\,r_i}\right]

This makes for a separable solution:

\psi(\mathbf r_1, \mathbf r_2, \mathbf r_3) = \phi_1(\mathbf r_1)\,\phi_2(\mathbf r_2)\,\phi_3(\mathbf r_3)

Here, each \phi_i term is just the hydrogen wave function with a slight modification for Z.

 

Here Z is the atomic number.

Since electrons are fermions, \psi must be totally anti-symmetric, as seen in 2 Particles in a Box in Chapter 4.

\psi(\mathbf x_1, \ldots, \mathbf x_N) = \frac{1}{\sqrt{N!}}
\begin{vmatrix}
\phi_1(\mathbf x_1) & \phi_2(\mathbf x_1) & \cdots & \phi_N(\mathbf x_1)\\
\phi_1(\mathbf x_2) & \phi_2(\mathbf x_2) & \cdots & \phi_N(\mathbf x_2)\\
\vdots & \vdots & \ddots & \vdots\\
\phi_1(\mathbf x_N) & \phi_2(\mathbf x_N) & \cdots & \phi_N(\mathbf x_N)
\end{vmatrix}

This is known as the Slater Determinant.
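For instance, for two electrons (the standard special case) the determinant reduces to the familiar antisymmetrized product:

\psi(\mathbf x_1, \mathbf x_2) = \frac{1}{\sqrt{2}}\begin{vmatrix}\phi_a(\mathbf x_1) & \phi_b(\mathbf x_1)\\ \phi_a(\mathbf x_2) & \phi_b(\mathbf x_2)\end{vmatrix} = \frac{1}{\sqrt{2}}\left[\phi_a(\mathbf x_1)\phi_b(\mathbf x_2) - \phi_b(\mathbf x_1)\phi_a(\mathbf x_2)\right]

Swapping \mathbf x_1 and \mathbf x_2 flips the sign, and setting \phi_a = \phi_b gives zero; that is the Pauli exclusion principle built directly into the wavefunction.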

Approximating Lithium

Unfortunately, we can't just discard electron-electron interactions: the magnitudes of the electron-nucleus and electron-electron terms are comparable. So, what do we do? Solve the problem as is? Well, if you solve the many-electron problem exactly, you could quite possibly earn a Nobel Prize and/or a Fields Medal, which makes this a wholly impractical method for this course. Instead we will use approximate methods for quantum mechanical problems more complicated than our earlier hydrogen solution, and we currently have two primary methods for doing this.

Thomas-Fermi Model

The Thomas-Fermi Model is a semi-classical, statistical method in which we replace the exact electron-nucleus and electron-electron potentials with an effective potential: the screened Coulomb potential.

Consider an electron gas with so many electrons that you don't have to count them one at a time, but can instead just consider a continuous charge distribution. If we put a positive charge in the electron cloud, the positive charge attracts the electrons, so the electron distribution is no longer uniform. The negative charges nearest the positive charge "screen" it from the electrons further out, such that the fringe electrons don't feel as strong an attraction.

Working out the details of this method requires a discussion of electron gases, provided later in this text. Meanwhile, you should know that this hypothetical situation turns into a spherically distributed electron gas where the charge density, \rho(r), is slowly varying. Although \rho does change with r, at each r we can treat the gas as locally uniform. Additionally, a point charge q placed in a uniform electron gas has a screened potential of:

V_{scr}(r) = \frac{q}{4\pi\epsilon_0\,r}\,e^{-r/\lambda_{TF}}

This is sometimes called a Yukawa potential. Here, \lambda_{TF} is the Thomas-Fermi screening length, which is related to the density of quantum states of the highest filled state: the density of states at the Fermi level.
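To see what screening does, compare the bare and screened potentials numerically (a minimal sketch in atomic units; the screening length used here is an arbitrary illustrative value, not a computed Thomas-Fermi length):

import numpy as np

k_TF = 1.0   # inverse screening length, 1/lambda_TF (illustrative value)
q = 1.0      # point charge

def bare(r):
    return q / r

def screened(r):
    return q * np.exp(-k_TF * r) / r

for r in [0.1, 0.5, 1.0, 2.0, 5.0]:
    print(f"r={r:4.1f}: bare={bare(r):8.3f}  screened={screened(r):8.4f}")
# At short range the two agree; beyond roughly one screening length the
# screened potential decays exponentially -- that is the "screening" above.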

Finally, for charge-neutral atoms, where the number of electrons equals the atomic number, with a slowly varying electron gas in a spherically symmetric central potential:

 

<FIGURE> "Electron Gas Potential" (Description)

The truth of the matter is that the Thomas-Fermi model isn't a great solution for the multi-electron atom. Back in the 1920s it was quite successful, and it is still useful for simple approximations and as the initial input to more advanced methods.

Hartree-Fock Method

A better method is the iterative Hartree-Fock Method, or the Self-Consistent Field Method. For this method, we start by assuming that we can write:

\psi(\mathbf r_1, \ldots, \mathbf r_N) = \phi_1(\mathbf r_1)\,\phi_2(\mathbf r_2)\cdots\phi_N(\mathbf r_N)

This assumption further implies that we can say that E \approx \sum_i E_i. So, what is a reasonable \phi_i?

 

Now, this gives us:

 

V_{ion} is all the potential terms not involving electron-electron interaction. For the single atom, V_{ion}(r) = -Ze^2/(4\pi\epsilon_0 r), but this method is generally applicable to more complex systems such as molecules. Hartree-Fock is the intellectual forebear of several modern quantum chemical techniques.

<Source> "Primer on Calculus of Variation and Lagrange Multipliers"

Energy Function

The functional we are interested in is the energy, E = \langle\psi|\hat H|\psi\rangle. We want to find the extrema subject to the constraints \langle\phi_i|\phi_i\rangle = 1 for all i. We will do this by using Lagrange multipliers \epsilon_i, such that the integral we're interested in is:

I = \langle\psi|\hat H|\psi\rangle - \sum_i \epsilon_i\left(\langle\phi_i|\phi_i\rangle - 1\right)

Applying Calculus of Variation

We want to use the calculus of variations to find the functions that make this functional stationary (an extremum). Consider this functional:

I[f] = \int_a^b F(f, f', x)\,dx

We want to vary the function to look for the stationarity condition. In other words, we want to change f(x) by a small, arbitrary amount:

f(x) \to f(x) + \epsilon\,\eta(x)

Here \epsilon is a small constant (we take the limit as \epsilon goes to zero), and \eta is an arbitrary function that deviates f (it must be continuous over the limits of integration). This gives us:

I(\epsilon) = \int_a^b F(f + \epsilon\eta,\; f' + \epsilon\eta',\; x)\,dx

The Maclaurin expansion in powers of \epsilon is:

I(\epsilon) = I(0) + \epsilon\left.\frac{dI}{d\epsilon}\right|_{\epsilon=0} + \frac{\epsilon^2}{2!}\left.\frac{d^2I}{d\epsilon^2}\right|_{\epsilon=0} + \cdots

Stationarity requires \left.\frac{dI}{d\epsilon}\right|_{\epsilon=0} = 0 in the limit as \epsilon goes to zero.


Let's return to our problem:

 

Writing this function out in detail gives us:

 

Note that you could alternatively carry through the \eta_i^* variations in this step, but in the end you don't need them. I chose to utilize \eta_i instead of \eta_i^* for purely aesthetic reasons. This gives us:

 

Looking at the i-th term in the summation, we can expand to:

 

For the j and k terms, there exist two conditions:

  1. When j \neq k, the terms inside the summation involve products of different orbitals, but integrating these goes to zero because \phi_j and \phi_k are orthogonal.
  2. The only non-zero terms come about for j = k.

Thus, dividing through by \epsilon gives us the first-order variation. Here, the cross term is again zero for j \neq k, because the orthogonality comes out of the integral. The resulting term is:

 

Because the electron-electron interaction depends on both \mathbf r and \mathbf r', we retain the wavefunctions \phi_j inside the summation.

The functional variation now looks like:

 

Which can be solved as a set of N single-particle equations.

\left[-\frac{\hbar^2}{2m}\nabla^2 + V_{ion}(\mathbf r) + V_H(\mathbf r)\right]\phi_i(\mathbf r) = \epsilon_i\,\phi_i(\mathbf r)

Where V_H is the Hartree term, which is: V_H(\mathbf r) = \sum_{j\neq i}\int\frac{e^2\,|\phi_j(\mathbf r')|^2}{4\pi\epsilon_0\,|\mathbf r - \mathbf r'|}\,d^3r'

This looks like a Schrödinger equation, but is it? Yes, sort of... the \phi_i are Hartree wave functions. Remember that these are single-electron functions, since we started with a product ansatz. The \epsilon_i are Lagrange multipliers to enforce normalization. However, we can use these \phi_i and \epsilon_i to study the system.

Hartree Equation

The Hartree equation has a physically intuitive form (if anything in quantum mechanics is physically intuitive). The first term is kinetic energy, the second term is the electron-ion or any external potential energy (such as an applied E or B field), and the third term is the electron-electron potential energy defined by the sum:

V_H(\mathbf r) = \sum_{j\neq i}\int\frac{e^2\,|\phi_j(\mathbf r')|^2}{4\pi\epsilon_0\,|\mathbf r - \mathbf r'|}\,d^3r'

Here e\,|\phi_j(\mathbf r')|^2 is the charge density of the j-th electron. So this sum can be thought of as an electron at \mathbf r interacting with some charge at \mathbf r'.

What's wrong with this picture? Well, setting aside our single-particle assumption, we're putting several electrons, which are fermions, into this system, but our wavefunction is not totally anti-symmetric. What we need to do is write \psi using a Slater determinant.

 

Put this back into our original energy function and apply calculus of variation methods to the whole expression to find our answer. This was done by Fock (and Slater) in 1930. The resulting set of expressions is exactly the same except for one additional term.

\left(\hat K\phi_i\right)(\mathbf r) = -\sum_j \phi_j(\mathbf r)\int\frac{e^2\,\phi_j^*(\mathbf r')\,\phi_i(\mathbf r')}{4\pi\epsilon_0\,|\mathbf r - \mathbf r'|}\,d^3r'

The evaluation of this exchange term is quite tricky. The expression for its expectation value is:

 

In many cases it is acceptable to approximate it using the so-called "free electron exchange".

Self-Consistent Field Loop

These equations are not trivial to solve. Notice that the operator depends on the \phi_j, but to get the \phi_j you need the operator, which requires solving the set of equations. This is where we solve by iterative, self-consistent methods. This technique is the basis for all of modern quantum mechanical methods.

Write \hat H = \hat h_0 + \hat V_{scf}, where \hat V_{scf} collects all the solution-dependent terms; it will depend on the \phi_j, the density, or both.

  1. Guess initial \phi_j. You can use known solutions, random guesses, the Thomas-Fermi solution, or any other method of guessing you wish.
  2. Solve the system of equations to find new \phi_j.
  3. Calculate a new \hat V_{scf} from the new \phi_j.
  4. If the solutions have stopped changing, end. Otherwise return to step 2 and keep looping until input and output \phi_j agree to within a given tolerance.

This is called a self-consistent field loop (S.C.F.).
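The following is a minimal sketch of such a loop: a toy one-dimensional mean-field model, not full Hartree-Fock. The grid, the harmonic external potential, the coupling constant g, and the density-mixing factor are all illustrative assumptions.

import numpy as np

# 1D grid and a fixed "external" potential (harmonic, for illustration).
x = np.linspace(-5, 5, 200)
dx = x[1] - x[0]
v_ext = 0.5 * x**2

def hamiltonian(v):
    """Finite-difference H = -1/2 d^2/dx^2 + v(x) on the grid."""
    main = 1.0 / dx**2 + v
    off = -0.5 / dx**2 * np.ones(len(x) - 1)
    return np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

g = 1.0                        # mean-field coupling (illustrative)
n = np.zeros_like(x)           # step 1: initial guess for the density
for it in range(100):
    H = hamiltonian(v_ext + g * n)        # effective potential from density
    E, psi = np.linalg.eigh(H)            # step 2: solve the eigenproblem
    psi0 = psi[:, 0] / np.sqrt(dx)        # grid-normalized ground state
    n_new = psi0**2                       # step 3: new density from psi
    if np.max(np.abs(n_new - n)) < 1e-8:  # step 4: self-consistency check
        print(f"converged after {it} iterations, E0 = {E[0]:.6f}")
        break
    n = 0.5 * n + 0.5 * n_new             # mix old/new densities for stability

The mixing in the last line is a standard practical trick: feeding the raw new density straight back in can make the loop oscillate instead of converge.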

These types of SCF methods can be applied to all sorts of systems, but since this chapter is about many-electron problems, we will look at the Herman-Skillman atomic data, calculated from the Hartree-Fock method in 1963.

<SOURCE> "Herman-Skillman" atomic data



Quantum Mechanics for Engineers/Density Functional Theory

This is the eleventh chapter of the first section of the book Electronic Properties of Materials.