Electronic Properties of Materials/Quantum Mechanics for Engineers/Particle in a Box



This is the fourth chapter of the first section of the book Electronic Properties of Materials.

<ROUGH DRAFT>

So far we've gotten a feel for how the quantum world works and we've walked through the mathematical formalism, but for a theory to be any good, it must be possible to calculate meaningful values. The goal of this course is to show how the properties of solids arise from quantum mechanics and the properties of atoms. Before we look at the properties of solids, we need to study how electrons and atoms interact in a quantum picture. We will study this over the next several chapters, but first we need to consider atoms in isolation.

So we want to solve the time-independent Schrödinger Equation, $\hat{H}\psi = E\psi$. As it happens, finding $\psi$ for most problems is non-trivial. In atoms, the nuclear potential goes as $1/r$, but the electron-electron interactions are difficult to treat, as we will see later. The way to approach this is through simplifications and approximations, so we're going to start with the simplest calculations and build up from there.

Time-Dependent Schrödinger Equation

 
<FIGURE> "Particle in a 1D Box" (An infinite square well: V = 0 for 0 ≤ x ≤ a, V = ∞ outside.)

Let's look at a particle in a one-dimensional box with infinitely high walls:

$V(x) = \begin{cases} 0 & 0 \le x \le a \\ \infty & \text{otherwise} \end{cases}$

The fact that $V(x)$ only takes the values zero and infinity means that we can essentially throw out anything past the defined barriers of our box. Note that here we will solve for $\psi(x)$ only, not $\Psi(x,t)$, which implies a separation of variables. Let's check this idea by guessing the solution:

$\Psi(x,t) = \psi(x)\,\phi(t)$

Here the solution is a product of two functions, $\psi(x)$ and $\phi(t)$. To solve, we substitute it into the time-dependent Schrödinger Equation and rearrange:

$i\hbar\,\psi(x)\frac{d\phi}{dt} = -\frac{\hbar^2}{2m}\,\phi(t)\frac{d^2\psi}{dx^2} + V(x)\,\psi(x)\,\phi(t)$

Dividing through by $\psi(x)\phi(t)$:

$i\hbar\,\frac{1}{\phi(t)}\frac{d\phi}{dt} = -\frac{\hbar^2}{2m}\,\frac{1}{\psi(x)}\frac{d^2\psi}{dx^2} + V(x)$


Both the pure-$t$ side, $i\hbar\frac{1}{\phi}\frac{d\phi}{dt}$, and the pure-$x$ side, $-\frac{\hbar^2}{2m}\frac{1}{\psi}\frac{d^2\psi}{dx^2} + V(x)$, must be equal to some shared constant, $E$.

Thus:

$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + V(x)\,\psi(x) = E\,\psi(x)$



Look! It's the Time-Independent Schrödinger Equation! This is exactly what we want to solve. As the Hamiltonian operator is the operator of energy, we're going to be getting eigenvalues, $E_n$, which are measurable values of energy, and eigenfunctions, $\psi_n$, which are the functions corresponding to those energies. It is common for people to rewrite this as:

$\hat{H}\,\psi_n(x) = E_n\,\psi_n(x)$


Returning to the time-dependent part, we can rewrite it as:

$\frac{d\phi}{dt} = -\frac{iE}{\hbar}\,\phi(t)$



Taking $\phi(t) = e^{-i\omega t}$ as our guess, substitution gives $\omega = E/\hbar$, so one solution is:

$\phi(t) = e^{-iEt/\hbar}$

Always, when the potential $V(x)$ is independent of time, a solution is:

$\Psi(x,t) = \psi(x)\,e^{-iEt/\hbar}$
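As a quick sanity check, we can verify the separable solution symbolically. Here is a minimal sketch using sympy, taking the $V = 0$ region so the potential term drops out (the variable names are my own):

```python
# Check that Psi(x,t) = psi(x) * exp(-i E t / hbar) solves the
# time-dependent Schrodinger Equation in a region where V = 0.
import sympy as sp

x, t, m, hbar, E = sp.symbols('x t m hbar E', positive=True)
k = sp.sqrt(2*m*E)/hbar                       # from E = hbar^2 k^2 / (2m)
Psi = sp.sin(k*x) * sp.exp(-sp.I*E*t/hbar)    # psi(x) * phi(t)

lhs = sp.I*hbar*sp.diff(Psi, t)               # i*hbar * dPsi/dt
rhs = -hbar**2/(2*m) * sp.diff(Psi, x, 2)     # -(hbar^2/2m) * d2Psi/dx2
print(sp.simplify(lhs - rhs))                 # prints 0: both sides agree
```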

The Time-Independent Solution, $\psi(x)$

The general method to solve this type of problem is to break the space into regions separated by boundaries, each region having its own solution. Then, since the boundaries are what give us the quantization, we use the conditions at the region interfaces to solve.

<FIGURE> "Title" (Description)

Equations:

Region I ($x < 0$): $V = \infty$
Region II ($0 \le x \le a$): $V = 0$
Region III ($x > a$): $V = \infty$

In each region we solve $-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + V\,\psi = E\,\psi$ with that region's value of $V$.


Regions I and III have a fairly simple solution here:

$\psi_I(x) = \psi_{III}(x) = 0$

Region II has:

$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} = E\,\psi(x)$

What is a good solution? Let's try planewaves! The general planewave solution, $\psi(x) = C e^{ikx} + D e^{-ikx}$, is not very easy to lug around, and although wave functions in quantum mechanics are in general complex, here it is equivalent and more convenient to write it in terms of real sines and cosines:

$\psi_{II}(x) = A\sin(kx) + B\cos(kx), \qquad k = \frac{\sqrt{2mE}}{\hbar}$

Now apply some boundary conditions...

At $x = 0$: $\psi_{II}(0) = A\sin(0) + B\cos(0) = B = 0$

At $x = a$: $\psi_{II}(a) = A\sin(ka) = 0 \;\Rightarrow\; ka = n\pi$

$k_n = \frac{n\pi}{a}$

$E_n = \frac{\hbar^2 k_n^2}{2m} = \frac{n^2\pi^2\hbar^2}{2ma^2}$

Thus we have the equation for quantized energy, where $n$ is limited to counting numbers ($n = 1, 2, 3, \ldots$; $n = 0$ would make $\psi$ vanish everywhere, and negative $n$ only flips the sign). We still need to solve for $A$. Given:

$\int_{-\infty}^{\infty} \psi_n^*\,\psi_n\,dx = 1$

Pick a constant to fix normalization; in this case we choose $A$ real and positive.

$\int_0^a A^2 \sin^2\!\left(\frac{n\pi x}{a}\right) dx = 1$

Substitute and solve...

$A^2\,\frac{a}{2} = 1 \quad\Rightarrow\quad A = \sqrt{\frac{2}{a}}$


At the end of the day, we have:

$\psi_n(x) = \sqrt{\frac{2}{a}}\,\sin\!\left(\frac{n\pi x}{a}\right), \qquad E_n = \frac{n^2\pi^2\hbar^2}{2ma^2}, \qquad n = 1, 2, 3, \ldots$
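To get a feel for the scale, here is a minimal numeric sketch of the first few levels (Python with SI constants; the electron in a 1 nm well is an illustrative choice, not from the text):

```python
# First few energy levels of an electron in a 1 nm infinite well.
import math

hbar = 1.054571817e-34    # J*s
m_e  = 9.1093837015e-31   # kg
a    = 1e-9               # m (assumed well width)
eV   = 1.602176634e-19    # J per eV

for n in (1, 2, 3):
    E_n = (n*math.pi*hbar)**2 / (2*m_e*a**2)
    print(f"E_{n} = {E_n/eV:.3f} eV")          # ~0.376, 1.504, 3.384 eV
```

Note the $n^2$ spacing: the gaps between levels grow as you go up.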

Complex Numbers

As a point of honesty, while this solution is true, there are other solutions. Not only can you put in different values for $n$, but we can also change the phase of our solution. In quantum mechanics, you will often hear that you're solving something to "within a phase factor," and when we say that, we're talking about the phase in complex number space. $\psi$ is a complex number, but we don't pay attention to the phase of the number. In other words, we can add an arbitrary $e^{i\theta}$ in front of $\psi$ without consequence:

$\left(e^{i\theta}\psi\right)^*\left(e^{i\theta}\psi\right) = \psi^*\,e^{-i\theta}\,e^{i\theta}\,\psi = \psi^*\psi$


Why? Because we can only measure the magnitude of $\psi$ as $\psi^*\psi$. However, in certain situations where we are comparing two wave functions, we can measure the difference in their phases. In this course, and most of the time, we just ignore the arbitrary phase factor, $e^{i\theta}$, and say that we know $\psi$ to within an arbitrary phase factor.

So now we have a solution, $\psi_n(x)$, but the Schrödinger Equation is a linear PDE. What does this mean? If $\psi_1$ and $\psi_2$ are both solutions to a linear PDE, then so is any linear combination $c_1\psi_1 + c_2\psi_2$. Also, in our case we have an infinite number of solutions, since $n = 1, 2, 3, \ldots$, so really we need to say that the general solution is:

$\Psi(x,t) = \sum_{n=1}^{\infty} c_n\,\psi_n(x)\,e^{-iE_n t/\hbar}$, where the $\psi_n$ are our solutions and the $c_n$ are coefficients.

In addition, the solutions are orthogonal to one another, which is yet another property of linear PDEs. This means that:

$\int_{-\infty}^{\infty} \psi_m^*\,\psi_n\,dx = \delta_{mn}$, where $\delta_{mn}$ is the Kronecker delta.

The orthogonality of the eigenfunctions is physically important, and mathematically useful, as will be seen.
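The orthogonality relation is easy to check numerically. A minimal sketch (numpy, with an arbitrary grid resolution and $a = 1$):

```python
# Integrate psi_m * psi_n over the well; the result approximates delta_mn.
import numpy as np

a = 1.0
x, dx = np.linspace(0.0, a, 2001, retstep=True)
psi = lambda n: np.sqrt(2/a) * np.sin(n*np.pi*x/a)

for m in (1, 2, 3):
    for n in (1, 2, 3):
        overlap = np.sum(psi(m)*psi(n)) * dx   # Riemann-sum integral
        print(f"<{m}|{n}> = {overlap:6.3f}")   # ~1 when m == n, else ~0
```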

Finding the Coefficients

Returning to the problem at hand, how do we determine the coefficients $c_n$? By solving an initial value problem. Say that at time $t = 0$ we make some measurement that gives us $\Psi(x,0)$; we then project $\Psi(x,0)$ onto the individual eigenfunctions. So...

$\Psi(x,0) = \sum_n c_n\,\psi_n(x)$


Where $\psi_n$ is the $n$-th eigenfunction of energy. Now we take:

$c_n = \int_{-\infty}^{\infty} \psi_n^*(x)\,\Psi(x,0)\,dx$

So for each $n$, one can find $c_n$ by integrating $\psi_n^*\,\Psi(x,0)$ and using the orthogonality of the $\psi_n$.
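As a concrete illustration, here is a sketch that projects an example initial state onto the eigenfunctions; the parabolic $\Psi(x,0) \propto x(a-x)$ is my own choice, not from the text:

```python
# Compute c_n = <psi_n|Psi(0)> for Psi(x,0) ~ x(a-x) by numeric integration.
import numpy as np

a = 1.0
x, dx = np.linspace(0.0, a, 4001, retstep=True)
Psi0 = x*(a - x)
Psi0 /= np.sqrt(np.sum(Psi0**2)*dx)            # normalize Psi(x,0)

for n in range(1, 6):
    psi_n = np.sqrt(2/a)*np.sin(n*np.pi*x/a)
    c_n = np.sum(psi_n*Psi0)*dx                # projection onto psi_n
    print(f"c_{n} = {c_n:+.4f}")               # even-n terms vanish by symmetry
```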


What if I measure the energy? The wave function collapses to an eigenfunction of energy.

What does this mean? We can only measure quantized values ($E_n$).

If I measure $E$ and obtain the value $E_n$, then

$\Psi(x,t) = \psi_n(x)\,e^{-iE_n t/\hbar}$

$\Psi^*\Psi = |\psi_n(x)|^2$ (The Probability Distribution of Position)

<FIGURE> "Title" (Description)

Where is the particle? Somewhere given by the $|\psi_n(x)|^2$ distribution. Remember, $\hat{H}$ and $\hat{x}$ do not commute.

If I measured $x$ instead of $E$, I would find a distribution of $x$. What is the value of energy after measuring $x$? We don't know! A measurement of $x$ causes us to lose our knowledge of $E$. When $\Psi$ is written as a summation of multiple eigenfunctions, we say that $\Psi$ is a "superposition" of states. We don't know which state it is in, but we know it has some probability of being in each of the states in the expansion.

Imagine we know that the system is in a state:

$\Psi(x) = \sum_n c_n\,\psi_n(x)$, where the $\psi_n$ are eigenfunctions of energy.

What is the expectation value of energy? Remember that $\hat{H}\psi_n = E_n\psi_n$.

$\langle E \rangle = \int \Psi^*\,\hat{H}\,\Psi\,dx = \int \left(\sum_m c_m^*\,\psi_m^*\right)\hat{H}\left(\sum_n c_n\,\psi_n\right)dx$

Simplifying each term using $\hat{H}\psi_n = E_n\psi_n$ and orthogonality:

$\langle E \rangle = \sum_m \sum_n c_m^*\,c_n\,E_n \int \psi_m^*\,\psi_n\,dx = \sum_n |c_n|^2\,E_n$

But remember, we also talk about expectation values as probability-weighted averages:

$\langle E \rangle = \sum_n P_n\,E_n$

So...

$P_n = |c_n|^2$

This means that if we know $\Psi$, we can determine the probability of measuring any $E_n$ by projecting $\Psi$ onto the eigenfunctions of $\hat{H}$: $P_n = |c_n|^2$. When we have uncertainty, for example if we don't know whether the system is in energy state one or energy state three, we have a superposition, which is to say that $\Psi$ is a weighted sum of eigenfunctions.
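Continuing the earlier sketch (same assumed $\Psi(x,0) \propto x(a-x)$, natural units $\hbar = m = 1$), we can check that $\sum_n P_n E_n$ agrees with the direct integral $\int \Psi^*\hat{H}\Psi\,dx$ (grid derivatives are approximate):

```python
# Cross-check <E> = sum_n |c_n|^2 E_n against <Psi|H|Psi> on a grid.
import numpy as np

a = 1.0
x, dx = np.linspace(0.0, a, 8001, retstep=True)
Psi0 = x*(a - x)
Psi0 /= np.sqrt(np.sum(Psi0**2)*dx)

n = np.arange(1, 201)
basis = np.sqrt(2/a)*np.sin(np.outer(n, x)*np.pi/a)
c = np.sum(basis*Psi0, axis=1)*dx              # c_n = <psi_n|Psi(0)>
E = (n*np.pi)**2 / 2.0                         # E_n with hbar = m = 1

d2 = np.gradient(np.gradient(Psi0, x), x)      # approximate Psi0''
print(np.sum(c**2 * E))                        # sum_n P_n E_n  -> ~5.0
print(np.sum(Psi0 * (-0.5) * d2) * dx)         # <Psi|H|Psi>    -> ~5.0
```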

An interesting experiment is to put this problem into Excel, Python, or any number of computational programs, and make the given well smaller and smaller. As the well gets smaller, the energies will diverge and the sum becomes absolutely huge. Conversely, as the well gets wider, you will see convergence to a value with a relatively small sum. You lose information about energy as you increase confinement.
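Here is a minimal version of that experiment (numpy; the Gaussian packet of fixed width $\sigma$ is an illustrative stand-in for "the same state in wells of different size"):

```python
# Expand a fixed-width Gaussian packet in wells of different width a and
# watch <E> = sum_n |c_n|^2 E_n as more terms are kept. In a narrow well the
# truncated packet no longer vanishes at the walls, so <E> keeps growing with
# n_max; in a wide well it converges after relatively few terms.
import numpy as np

hbar = m = 1.0
sigma = 0.1                                    # fixed packet width (assumed)

def mean_E(a, n_max):
    x, dx = np.linspace(0.0, a, 8001, retstep=True)
    Psi0 = np.exp(-(x - a/2)**2 / (2*sigma**2))
    Psi0 /= np.sqrt(np.sum(Psi0**2)*dx)        # normalize on [0, a]
    n = np.arange(1, n_max + 1)
    basis = np.sqrt(2/a)*np.sin(np.outer(n, x)*np.pi/a)
    c = np.sum(basis*Psi0, axis=1)*dx          # c_n = <psi_n|Psi(0)>
    E = (n*np.pi*hbar)**2 / (2*m*a**2)
    return np.sum(c**2 * E)

for a in (0.2, 0.5, 1.0, 5.0):
    print(f"a = {a:4.1f}: <E> with 100 terms = {mean_E(a, 100):10.2f}, "
          f"with 400 terms = {mean_E(a, 400):10.2f}")
```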


A Note on Hilbert Space

The way I'm talking about $\Psi$ and $\psi_n$ sounds very much like vector-type language. In truth, $\Psi$ lives in Hilbert space. This is an infinite-dimensional function space where each direction is some function $\phi_n$, and we can talk about representing $\Psi$ as a linear sum of the $\phi_n$, with the coefficient for each $\phi_n$ being the projection of $\Psi$ onto $\phi_n$.

<FIGURE> "Title" (Description)

Then in Hilbert space, $\int \phi_n^*\,\Psi\,dx$ must be the dot-product-like inner product that gives the projection. Measuring must move $\Psi$ to lie directly along one of the $\phi_n$. There are other, incompatible functions, $\xi_n$, in Hilbert space such that both $\{\phi_n\}$ and $\{\xi_n\}$ are complete orthogonal sets, and I can express $\Psi$ in terms of either:

$\Psi = \sum_n a_n\,\phi_n = \sum_n b_n\,\xi_n$


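To make the "two complete sets" idea concrete, here is a sketch expanding the same state in two different orthonormal bases on $[0, a]$; the half-integer cosine set is my own illustrative choice of a second basis, not from the text:

```python
# Expand one state in two complete orthonormal sets; Parseval gives
# sum of |coefficients|^2 = 1 in either basis.
import numpy as np

a, M = 1.0, 60
x, dx = np.linspace(0.0, a, 4001, retstep=True)
Psi = x*(a - x)
Psi /= np.sqrt(np.sum(Psi**2)*dx)              # an example state

phi = [np.sqrt(2/a)*np.sin(n*np.pi*x/a) for n in range(1, M+1)]
xi  = [np.sqrt(2/a)*np.cos((n + 0.5)*np.pi*x/a) for n in range(M)]

for name, basis in (("a_n (sin set)", phi), ("b_n (cos set)", xi)):
    coeffs = np.array([np.sum(b*Psi)*dx for b in basis])
    print(name, "sum of |coeff|^2 =", round(float(np.sum(coeffs**2)), 4))  # ~1
```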
Measuring   means losing information about   but projecting   onto on of the   directly, and measuring   looses information about  .