Lecture 24: Hydrogen atom (conclusion). The simplest quantum system and emergent angular momentum.

L24.1 More on the hydrogen atom degeneracies and orbits (23:21)

L24.2 The simplest quantum system (13:55)

L24.3 Hamiltonian and emerging spin angular momentum (15:42)

L24.4 Eigenstates of the Hamiltonian (14:03)

L24.1 More on the hydrogen atom degeneracies and orbits (23:21)

MITOCW | watch?v=fXlzY2l1-4w
PROFESSOR: We spoke about the hydrogen atom. And in the hydrogen atom, we drew the spectrum, so the table, the data of the spectrum of a quantum system. So this is a question that I want you, in general, to be aware of. What do we mean by a diagram of the energy levels in a central potential? So this is something we did for the hydrogen atom.
But in general, the diagram of energy eigenstates in a central potential looks like this. You put
the energies here. And they could be bound states that have negative energy. They could be
even bound states with positive energy, depending on the system you’re discussing.
You remember the harmonic oscillator, the potential is naturally defined to be positive. And all
these energy states that the harmonic oscillator has represent bound states. They are
normalizeable wave functions. In fact, you don’t have scattering states because the potential
just reaches forever.
So in general, for a central potential, however, the system is shown like that. And we plot here
l. But l is not a continuous variable, so we’ll put l equals 0 here, l equals 1, l equals 2, l equals
3.
And then you start plotting the energy levels. You solve the radial equation. Remember, the radial equation is a Schrodinger equation: minus h squared over 2m, d second u d r squared, plus V effective of r times u, equals E u. And V effective of r is the V of r that your system had, plus a contribution from angular momentum, h squared l times l plus 1 over 2m r squared.
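For reference, a minimal rendering of the radial equation and effective potential just described, in the lecture’s notation:

    -\frac{\hbar^2}{2m}\,\frac{d^2 u}{dr^2} + V_{\rm eff}(r)\,u = E\,u,
    \qquad
    V_{\rm eff}(r) = V(r) + \frac{\hbar^2\,\ell(\ell+1)}{2m\,r^2}.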
So this radial equation is a collection of radial equations. We’ve tried to emphasize that many
times already. And you solve it first for l equal 0, then for l equal 1, for l equal 2, you go on and
on. So you solve it for l equal 0.
And just like as in any one-dimensional problem, for l equals 0, you solve this equation and
you find energy levels. So you sketch them like that. Those are the energy levels for l equals
0. This is the ground state of the l equal 0 radial equation.
Now just to remind you, this is the hard part of solving the Schrodinger equation. Because at the end of the day, the psi is u of r over r times some Ylm. So the l that you have here is the l of that Ylm, and the m is arbitrary. In fact, the u doesn’t know about the m value.
So here you have these energy levels. And then what happened with this hydrogen atom is
you keep solving, of course, for all the l’s. And in general, when you solve for l equals 1, you
may find some levels like this.
It’s a miracle when the levels coincide. There’s no reason why they should coincide. They happen to coincide for the hydrogen atom. And that’s because of a very special symmetry of the 1 over r potential.
Then for l equal 2, you solve it. And for l equal 3, you solve it. And you find these states, and that’s the diagram of states of a central potential. For the hydrogen atom, of course, all the energies were negative and the energy levels coincided.
So this is the ground state of the l equal 0 radial equation. This is the ground state of the l
equal 1 radial equation. This is the ground state of the l equal 2 radial equation. This is the
ground state of the whole system. So this is what we call plotting the spectrum in a radial
potential problem. And it’s a generic form.
So we were doing this for the hydrogen atom last time to try to understand the various orbits. For the hydrogen atom, V of r was a potential like this, minus e squared over r. And then you sometimes have the l contribution, a function that diverges towards the origin like 1 over r squared. And then you add them together. The original potential, with no such contribution, could be thought of as the l equals 0 case.
Then if you have some l over here and some other l, maybe like that, these are the various potentials that you get. And in general, you may want to figure out, for example, if you have an energy level, some particular energy, what are the turning points. So let’s consider, for that, just the one curve that we care about.
And the electron will be going from some value of the radius to another, so this is the plot of the effective potential as a function of radius. And it will go from one radius to another. They could be called r minus and r plus.
And our semi-classical interpretation, which is roughly good if you’re talking about high quantum numbers, high principal quantum numbers, high l quantum numbers, is that you have an ellipse, and the radial distance to the center where the proton is located goes from r plus to r minus. The electron is bouncing back and forth. That is the classical picture.
In the quantum mechanical picture, you expect something somewhat similar. There’s going to be a wave function, maybe a wave function here, psi squared. And it’s going to be vanishingly small before this turning point. And then past the other turning point, it’s going to fall off very fast and decay again. So the probability distribution will mimic the time spent by the particle, as we used to argue before.
So let’s do a little exercise of calculating the turning points. So how do we do that? Well, we set h squared l times l plus 1 over 2mr squared, minus e squared over r-- that’s the effective potential-- equal to the energy of some level n, principal quantum number n. So it would be minus e squared over 2a0, 1 over n squared. That’s the value of the energy En.
And the solutions of this quadratic equation are going to give us the r plus and the r minus of
the orbit. So it’s probably worthwhile to do a little transformation and to say r equal a0 times x,
where x is unit free. And then the equation becomes h squared, l times l plus 1, over 2ma0
squared, times l times l plus 1, over x squared, minus e squared over a0x, is equal to minus e
squared over 2a0, 1 over n squared.
So the units should work out. We should get a nice equation without units. So what must be happening is that the coefficient in front here, h squared over 2ma0 squared, let’s take one a0 and separate it out, and transform this. Remember, a0 was h squared over m e squared. So here we get h squared over 2ma0, times 1 over a0, and that 1 over a0 is m e squared over h squared. So the h squared cancels, the m cancels. And this is e squared over 2a0. So this whole coefficient is e squared over 2a0, which is nice because now the e squared over a0 in each term cancels.
And we get l times l plus 1 over x squared-- I’ve canceled all this factor-- minus 2 over x is
equal to minus 1 over n squared. So I multiplied by the inverse of this quantity. It clears up the
factor in the first term. It produces an extra factor of 2 in the second term. And you’ve got a
nice simple quadratic equation.
AUDIENCE: Question?
PROFESSOR: Yes?
AUDIENCE: Where do you get the extra [INAUDIBLE] l, l plus 1 [? come from? ?]
PROFESSOR: Oh, there’s no such thing. There’s too many of them. Thank you very much. Yes, too many, 1
over x squared. Thank you. So let’s move this to the other side-- plus 1 over n squared equals
0.
So this is the main equation. And we can write this-- well, it’s a quadratic equation in 1 over x, so let’s solve for 1 over x. I’ll write it here: 1 over x equals 1 plus/minus square root of 1 minus l times l plus 1, over n squared, all divided by l times l plus 1. That’s just from the quadratic formula.
So then you invert it. So x is now l times l plus 1, over 1 plus/minus square root of 1 minus l, l
plus 1 over n squared. And we multiply by the opposite factor to clear the square root. So 1
minus/plus square root of 1 minus l times l plus 1 over n squared.
And the same factor here-- 1 minus/plus square root of 1 minus l, l plus 1 over n squared, all
of these things. But that’s not so bad. You get l times l plus 1, times this factor that you had in
the numerator-- that still is the same-- 1 minus l, l plus 1 over n squared.
Now we’re after an interesting piece of information, the two sizes of the ellipse, so it’s worth simplifying what you got. This equation is not nice enough yet, so we simplify. In the denominator, you have a plus b times a minus b, so it’s 1 minus, 1 minus l, l plus 1, over n squared.
And the good thing is that in the denominator the 1’s cancel, and you’re left with plus l times l plus 1 over n squared, which cancels the l times l plus 1 in the numerator. So, at the end of the day, we get a pretty nice formula.
The formula is x is equal to n squared, 1 minus/plus, 1 minus l times l plus 1, over n squared,
like that. So if we wish, it’s r plus/minus is x multiplied by a0, so it’s n squared a0, 1 minus/plus
square root of 1 minus l times l plus 1, over n squared.
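For reference, the chain of steps just carried out can be summarized as follows (a minimal rendering, with x = r/a_0, and the signs arranged so that r_+ is the outer and r_- the inner turning point, consistent with the numbers below):

    \frac{\ell(\ell+1)}{x^2} - \frac{2}{x} + \frac{1}{n^2} = 0
    \;\Longrightarrow\;
    \frac{1}{x} = \frac{1 \pm \sqrt{1 - \ell(\ell+1)/n^2}}{\ell(\ell+1)}
    \;\Longrightarrow\;
    r_\pm = n^2 a_0\left(1 \pm \sqrt{1 - \frac{\ell(\ell+1)}{n^2}}\right).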
OK, so the ellipse is defined by those two values that we have here. But the surprising thing is that the shape of the ellipse, its eccentricity, is dramatically affected by the value of l. In fact, for the largest l, l is comparable to n. Remember, l can go up to n minus 1.
So at that point, l times l plus 1 over n squared is essentially 1. It cancels against the 1 and there’s nothing left under the square root. So r plus and r minus become about the same for l equals n minus 1. r plus is almost the same as r minus, and the orbit is circular, completely circular.
On the other hand, for l equals 0, the orbit is completely elliptical. For l equal 0, the term under the square root is just 1, so you get 1 plus/minus 1. So one turning point is twice this value, n squared a0, and the other is 0. So you have the case where r minus can be 0, and you have an orbit that is like that, with r minus 0, so extremely elliptical. Of course, that is the semi-classical approximation, so it’s more reliable when you have a reasonable l.
And finally we can note one interesting thing here: r plus plus r minus, over 2. r plus plus r minus is the full length of the major axis of the ellipse; divided by 2, it’s the distance from the center of the ellipse, not the focus, to the edge. And that distance is independent of l: you get n squared a0.
So a typical Rydberg atom will have n equal 100. So this is an example-- n equal 100, l equals
60, in which case, for n equal 100, r plus/minus is equal to 10,000a0, n squared a0, times this
factor, which in one case is 1.8 and in the other case is 0.2. That’s what you get for l equal 60
and n equal 100. So this orbit, you have r plus about 18,000a0, and r minus about 2,000a0.
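A minimal Python sketch of this numerical example (NumPy assumed available; the variable names are mine), evaluating the turning-point formula for n = 100, l = 60 in units of a0:

    import numpy as np

    n, l = 100, 60                      # principal and orbital quantum numbers from the example
    a0 = 1.0                            # measure radii in units of the Bohr radius
    root = np.sqrt(1 - l * (l + 1) / n**2)
    r_minus = n**2 * a0 * (1 - root)    # inner turning point, roughly 2,000 a0
    r_plus = n**2 * a0 * (1 + root)     # outer turning point, roughly 18,000 a0
    print(r_minus, r_plus)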
And all of our orbits satisfy this property: if you keep n fixed, all of the orbits with different l have the same r plus plus r minus. So the total length of the major axis is the same when the orbit becomes circular, and this distance is the same as well when the orbit becomes very elliptical.
I mentioned last time that this nice property is related to the degeneracy. Here, if you keep n fixed but change l, you go through all these ellipses. This is for l equals n minus 1. And this one is for l equals 0. And all these ellipses are here.
And those are the semi-classical picture of those degenerate states in the diagram of the hydrogen atom. The diagram of the hydrogen atom was something like this. And you’re looking at all the degenerate states that you have there. And they are degenerate, and you would say, well, why are ellipses that look like that degenerate?
Well, Kepler apparently knew that from Kepler’s laws. He observed that the period of motion of an orbit just depends on the semi-major axis. So periods are related to energies, and it’s reasonable that we have this property in quantum mechanics.
Now, about this degeneracy, I want to just finish up by emphasizing what you have here. When somebody asks what is the number of states you have here, well, you have to be precise about what you’re counting. The number of full physical states of the quantum system is one here, one here, one here. But each one of these corresponds to l equals 1, so each one of these is triply degenerate, because m can be minus 1, 0, and 1.
So here-- three states, three states, three states. Here-- this is five states, five states, five states, because they all have l equal 2. And for l equal 2, m goes from minus 2 to plus 2. So we don’t actually draw three lines here; I think that would be confusing. We could not draw five either, you would not be able to see them. But it should be remembered that there is this implicit extra degeneracy, associated with the azimuthal quantum number, that we sometimes just don’t represent in a figure.

L24.2 The simplest quantum system (13:55)

MITOCW | watch?v=-8mPXAsX3DY
PROFESSOR: The simplest quantum system. In order to decide what could be the simplest quantum system, you could say a particle in a box. It’s very simple, but in a sense it’s not all that simple. It has infinitely many states-- all these functions on an interval, and then the energies, there are infinitely many of them-- so not that simple. OK, infinitely many bound states; let’s find something with one bound state. OK, a delta function potential: just one bound state, but it has infinitely many scattering states. It’s still complicated. What could be simpler? Suppose you have the Schrodinger equation, i h bar d Psi dt equals H Psi. And we know in general that this thing has energy eigenstates, and probably we should focus on them. So Psi equals e to the minus i E t over h bar times little psi, and then you have H psi equal E psi.
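In symbols, the setup just described is (a minimal rendering):

    i\hbar\,\frac{\partial \Psi}{\partial t} = H\,\Psi,
    \qquad
    \Psi = e^{-iEt/\hbar}\,\psi,
    \qquad
    H\,\psi = E\,\psi.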
That is quantum mechanics, and you could say, well, it’s up to me to decide what the Hamiltonian is if I want to invent the simplest quantum mechanical system. On the other hand, there are some things that should be true. These Psi’s are complex numbers. The energies-- H must be an operator that has units of energy. And we also saw that if we want the probabilities that are going to be associated with Psi squared to be conserved, we need H to be Hermitian. There should be some notion of inner product: some sort of operation that takes a phi and a Psi and gives us a number, a complex number in general, and that has the property of somewhat conjugating the first thing-- it takes it, conjugates it, and integrates. But maybe if you’re doing the simplest quantum mechanical system in the world, it will be simpler than an integral. Integrals are complicated.
But anyway, we have something like that, and we want H to be Hermitian. Let me write this: for any operator A, the inner product of phi with A Psi is equal to the inner product of A dagger phi with Psi. And A dagger is the Hermitian conjugate. That’s the general definition, and we want H to be Hermitian: H dagger equal H. OK, in some sense you could say that’s quantum mechanics for you. It’s a Schrodinger equation, a Hamiltonian, an inner product, a notion of Hermitian operators, and then you’re supposed to solve it. And what we’ve done is solve this for a whole semester, and try to understand some physics out of it.
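Written out, the two statements just made are (a minimal rendering):

    \langle \phi, A\,\psi\rangle = \langle A^\dagger \phi, \psi\rangle
    \quad\text{for any operator } A,
    \qquad\text{and}\qquad
    H^\dagger = H.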
But we started with the notion that something simple would be a particle living in one
dimension, and that’s a very reasonable thought. Motivated from classical mechanics that
surely we have particles that move, and moving in three dimensions is more complicated. We
waited towards the end of the semester to do three dimensions, but moving in one dimension
is already kind of interesting, and complicated. We had Psi of X that represented the fact that
the particle could be anywhere here. How can I simplify this? The key to simplifying this is
maybe not to be too attached to the physics for a while, and try to visualize what could you
describe that was simpler.
Suppose the particle could only live at two points, x1 and x2. The particle can be here, or here. Now we’ve reduced the line down to just two points. It can only be at this point, or at that point. And you say, that’s very unphysical. But let’s wait a second, and think about this. What does that mean? We used to have a Psi of x, for a particle that could be anywhere, and we wrote it as a function. If I think of the simplest thing-- OK, the simplest thing is a particle that is just at one point. There is only one point. The whole world for the particle is one point, and it’s there. But that probably is not too interesting, because the particle is just there. The probability to find it there is always one, and what can you do with it?
But if you have two points, there’s room for funny things to happen. We’ll assume that the particle can be at two points. So from this Psi of x, we will go to a new Psi that has two pieces of information: the value of Psi at x1, and the value of Psi at x2. And those are two numbers, alpha and beta. Alpha squared would be the probability to be at x1. Beta squared would be the probability to be at x2. And this may remind you already of something we were doing with interferometers, in which the photon could be in the upper branch, or the lower branch, and you have two numbers. This is somewhat analogous, except that in the interferometer you could eventually put more beam splitters, and maybe later have three branches, or four branches, or things like that.
Here I want to consider this two-point particle. One possibility is that it is strictly that, a particle at two points. But now let’s relax our assumptions. It could also mean, for example, that you have a box and a partition. There’s the left side of the box and the right side of the box, and the molecule can either be on the left side or on the right side. That’s a fairly physical question. Here you could have the amplitude to be on the left, and the amplitude to be on the right-- a two-component vector just like that. One entry would be the amplitude to be in either one, and maybe those amplitudes change in time. Or it could be that you have a particle, and suddenly you discover that, yeah, the particle is at rest. It’s not moving. It’s not doing anything. It’s at one single point, not two points. But actually, this particle maybe has something called spin, and the spin can be up, or the spin can be down.
Or we could invent something. We could call it spin, or say the particle could be in this state or in that state. And if that’s possible for a particle, you could have here the amplitude for spin up and the amplitude for spin down, and those would be the two numbers. There are lots of possibilities; in a sense, this is a problem waiting for a physical application in quantum mechanics.
Let’s push it a little more. Now, how would we do inner products? We decided, OK, you need to do inner products. And the inner product of two functions phi and psi was the integral dx of phi star of x times psi of x.
And what you’re really doing is taking the value of the first wave function at one point, complex conjugating it, and multiplying by the value of the second wave function at the same point. If you have two vectors like this: alpha one, beta one for the first wave function, and alpha two, beta two for the second wave function, the inner product of psi 1 with psi 2 should be the analog of this thing, which is multiplying things at the same point. You should do alpha one star times alpha two, plus beta one star times beta two. That would be the nice way to do this.
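A minimal numerical sketch of this two-component inner product (NumPy assumed; the numbers are made up for illustration):

    import numpy as np

    psi1 = np.array([1 + 1j, 2 - 1j])   # (alpha1, beta1)
    psi2 = np.array([0.5j, 3 + 0j])     # (alpha2, beta2)

    manual = np.conj(psi1[0]) * psi2[0] + np.conj(psi1[1]) * psi2[1]
    builtin = np.vdot(psi1, psi2)       # vdot conjugates its first argument, like the rule above
    print(np.isclose(manual, builtin))  # True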
You could think of this as having transposed this alpha one, beta one and complex conjugated it, and then taken the matrix product with alpha two, beta two. You transpose and complex conjugate the first, and you multiply that with the second. When you study a little more quantum mechanics in 805, you will explore this analogy even more, in that you will think of a wave function as a column vector, an infinite one: psi at zero, psi at epsilon, psi at two epsilon, psi at minus epsilon. So you’ve sliced the x-axis and constructed an infinite vector, and that’s your wave function. It’s not so unnatural to do this, and this will be our inner product.
How about H being Hermitian? That just means, for matrices, that H transpose complex conjugate-- that’s the dagger, the Hermitian conjugate-- is equal to H. And you may have seen that that’s what dagger means: you transpose and complex conjugate. If you haven’t seen it, you could prove it now using this rule for the inner product, because the inner product tells you how to construct the dagger of any operator. And you will find that indeed what the dagger does is transpose and complex conjugate. And that comes about because the inner product transposes and complex conjugates the first object.

L24.3 Hamiltonian and emerging spin angular momentum (15:42)

MITOCW | watch?v=8NKsBpjXRt0
PROFESSOR: Here is where the power of this comes in, when you decide that you’re going to invent all possible Hamiltonians at this moment. You’ve reduced the infinite-dimensional space of functions on the line to two points, so you have a two-dimensional vector space-- a dramatic reduction.
So here we decide, OK, here is the Hamiltonian. And it’s going to be a two-by-two matrix, and
it better be Hermitian. So what options do I have? Well, Hermitian means transpose complex
conjugated gives you back the same matrix. So let’s try to parametrize such a matrix.
I could put a0, a real quantity, here, and another real quantity in the bottom entry. And if they are real, the transpose complex conjugate will remain the same. That’s OK. So I could put a0 and a1 here.
I’ll do it in a little different way. I’ll put a0 plus a3, and a0 minus a3 here. Now, the thing is that
a0 and a3 have to be real. So I’ll use a0, a1, a2, and a3. And they all should be real.
So here, transpose complex conjugate doesn’t affect these entries. They stay the same. That’s good. Here, we can put a1 minus i a2. This is a complex number. And the only thing that must happen is that, when I transpose and complex conjugate, I must get the same thing.
So I should put here a1 plus i a2. Because if I transpose this, I will have it on this side. And then I complex conjugate it, and it becomes this term. Similarly, if I transpose this term, it goes here. But then, complex conjugated, it becomes that.
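Putting the entries together, the matrix just constructed is (a minimal rendering)

    H = \begin{pmatrix} a_0 + a_3 & a_1 - i a_2 \\ a_1 + i a_2 & a_0 - a_3 \end{pmatrix},
    \qquad a_0, a_1, a_2, a_3 \in \mathbb{R},

and one can check directly that transposing and complex conjugating gives H back.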
So actually, I claim this is the most general two-by-two Hermitian matrix. Time independent-- you see, all of our quantum mechanics this semester has been with time-independent potentials. So here it’s time independent as well. And now, this is the most general Hamiltonian you could have. That’s it.
So when you see something like that, you realize that in an hour or two, or after some thinking, you will have solved the most general dynamical system with two degrees of freedom in quantum mechanics.
So I will write this as a0 times this matrix, plus a1 times this matrix, plus a2 times this matrix, plus a3 times this matrix. That’s exactly what you have in there. Multiply in these constants and add these matrices, and they give you exactly what we have.
So actually, these are the basic Hermitian two-by-two matrices. And if you multiply them by
real numbers, you still are Hermitian. And if you add them, you still are Hermitian. So the most
general Hermitian matrix has four parameters.
And it is a space of matrices spanned by these four matrices. They are so famous, these
matrices, that this is called sigma 1, this is called sigma 2, and this is called sigma 3. And
they’re called the Pauli matrices.
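For reference, the four matrices in that expansion are the identity and the Pauli matrices:

    \mathbb{1} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},\quad
    \sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},\quad
    \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix},\quad
    \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},

so that H = a_0\,\mathbb{1} + a_1\sigma_1 + a_2\sigma_2 + a_3\sigma_3.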
Well, but let’s put units into these things. We want to write Hamiltonians. So let’s make sure we have units that do the job. The Hamiltonian must have units of energy. So we could build a Hamiltonian that has units of energy. So I’ll write h bar omega 1, which has units of energy-- I’ll even put it over 2-- h bar omega 1, over 2, sigma 1, plus h bar omega 2, over 2, sigma 2, plus h bar omega 3, over 2, sigma 3.
Now you would say, well, why didn’t you use the first matrix? I could have used the first matrix, but the first matrix is proportional to the identity. We already learned in our course that if you have an extra constant operator in the Hamiltonian, it doesn’t change your calculations in any way. You had the Hamiltonian for the harmonic oscillator. It was h bar omega times N plus 1/2. And the 1/2 was an additive constant that never played any important role.
So this would be an additive constant to the energy. It would tell you how you’re measuring the
energy from what level. So it’s not very interesting. You can use it sometimes, but it’s definitely
not all that interesting.
So I’ll do a little variation of this by writing omega 1, h bar over 2, sigma 1, plus omega 2, h bar over 2, sigma 2, plus omega 3, h bar over 2, sigma 3. And then you say, look, that’s interesting. OK, I have an omega on this thing. But omega is fine. We know what it is. It’s a frequency, 1 over time units. But this other piece has units of angular momentum.
h bar has units of angular momentum. And the thing that is a little mysterious here is that we
seem to have three of them. So maybe somehow this has to do with angular momentum. So
let’s investigate it a little bit.
Well, they have units of angular momentum. So maybe I can call the first component of angular momentum, Sx, h bar over 2 sigma 1; the second component of angular momentum, Sy, h bar over 2 sigma 2; and the third component of angular momentum, Sz, h bar over 2 sigma 3.
Well, those are just names. But we can try to do a computation with them. We can try to see
what is the commutator of Sx with Sy. And happily, these are matrices, so it’s a natural thing to
do commutators.
So you would have h bar over 2, sigma 1, with h bar over 2, sigma 2, commutator. And it’s
equal to h bar over 2 times h bar over 2, sigma 1, sigma 2, minus sigma 2, sigma 1.
So it’s h bar over 2 times h bar over 2. And let’s do this. Sigma 1 is 0, 1, 1, 0. Sigma 2 is 0, minus i, i, 0. So we need 0, 1, 1, 0, times 0, minus i, i, 0, minus 0, minus i, i, 0, times 0, 1, 1, 0. OK, I have to do all that arithmetic. Happily, this is not that bad. Let’s see if I don’t make mistakes.
OK, from the first product I get an i here, a 0, a 0, and a minus i; minus, from the second product, a minus i, a 0, a 0, and an i. So it’s h bar over 2, times h bar over 2, times-- oh, they don’t cancel. They seem to cancel, but there’s a relative minus sign, so it’s actually twice each of those: 2i, 0, 0, minus 2i.
And here the 2 cancels this 2 in the denominator, and an i goes out. So, with this factor and the i pulled out, I have i h bar, times h bar over 2, times the matrix 1, 0, 0, minus 1. Somehow, it gave that.
h bar over 2 times this matrix with 1 and minus 1 on the diagonal-- that matrix is sigma 3. And h bar over 2 sigma 3 is Sz, so this is Sz. So it’s i h bar Sz. So this commutator, Sx with Sy, is giving you i h bar Sz. And that is exactly like angular momentum.
So not only does it have the units of angular momentum, it has the commutation relations of angular momentum. Hermitian operators, two-by-two matrices-- angular momentum used to be r cross p, all these derivatives, complicated stuff. Here it is: with two-by-two matrices, you’ve constructed angular momentum.
What we’ve constructed at this moment is spin 1/2. A whole spin 1/2 system is nothing else than that-- angular momentum and the freedom of having two discrete degrees of freedom. The interpretation that these two states have to do with spin up and spin down is something that physicists came up with. But the mathematics was there, waiting, as the simplest quantum mechanical problem.
Consider Schrodinger, who wrote the Schrodinger equation. Maybe, if he had been more mathematically inclined, he could have discovered, five minutes later, spin. But he wanted to figure out the wave function of the hydrogen atom and scattering and all these very complicated things.
So, needless to say, the other commutation relations work out. If you check Sy with Sz, you will get i h bar Sx. And if you finally do Sz with Sx, you will get i h bar Sy.
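A minimal Python check of these three commutation relations (NumPy assumed; working in units where h bar = 1):

    import numpy as np

    hbar = 1.0
    s1 = np.array([[0, 1], [1, 0]], dtype=complex)
    s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
    s3 = np.array([[1, 0], [0, -1]], dtype=complex)
    Sx, Sy, Sz = (hbar / 2) * s1, (hbar / 2) * s2, (hbar / 2) * s3

    def comm(A, B):
        return A @ B - B @ A

    print(np.allclose(comm(Sx, Sy), 1j * hbar * Sz))  # True
    print(np.allclose(comm(Sy, Sz), 1j * hbar * Sx))  # True
    print(np.allclose(comm(Sz, Sx), 1j * hbar * Sy))  # True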
So these two-by-two matrices satisfy this property. And there is a little more to be said. I want
to say a few more things about it because it’s counter-intuitive and therefore very nice.
Half of the semester in 805 is devoted to spin 1/2. It takes a while to understand it. So I wanted you to see it, at least once. And the problem is that the physical interpretation takes time to get accustomed to.
So, on the other hand, we did write the Hamiltonian. The Hamiltonian was omega 1 Sx, plus omega 2 Sy, plus omega 3 Sz. And it’s there-- S1, S2, S3, on the second line. And this is the Hamiltonian.
So people sometimes write it as omega dotted with an S vector, saying that S has three components, and omega has three components as well. And there’s a lot of physics in this Hamiltonian. It’s the simplest Hamiltonian, but it actually represents a spin in a magnetic field.
And what this Hamiltonian will make the spin do, if we solve the differential equation, this two-by-two matrix Schrodinger equation, is precess. We will find that the spin starts to precess. That’s the origin of nuclear magnetic resonance: precessing spins. The machine makes them precess, they send out a signal, and you detect the density of different fluids in the body.
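A minimal precession sketch (my own choice of parameters: the field, and hence omega, is taken along z so that the time evolution is a simple phase on each component; NumPy assumed):

    import numpy as np

    hbar, omega = 1.0, 2.0
    Sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
    Sy = (hbar / 2) * np.array([[0, -1j], [1j, 0]], dtype=complex)
    Sz = (hbar / 2) * np.array([[1, 0], [0, -1]], dtype=complex)

    H = omega * Sz                                        # spin in a magnetic field along z
    psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)   # start with spin up along x

    for t in np.linspace(0, 2 * np.pi / omega, 5):
        U = np.diag(np.exp(-1j * np.diag(H) * t / hbar))  # exp(-iHt/hbar) for a diagonal H
        psi = U @ psi0
        sx = np.real(np.vdot(psi, Sx @ psi))
        sy = np.real(np.vdot(psi, Sy @ psi))
        print(f"t={t:.2f}  <Sx>={sx:+.3f}  <Sy>={sy:+.3f}")  # the spin precesses in the x-y plane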

L24.4 Eigenstates of the Hamiltonian (14:03)

MITOCW | watch?v=3Cij8HYKXOk
PROFESSOR: I want to just elucidate a little more what the eigenstates are here. So with angular momentum, we measure L squared and we measure Lz. So with spin, we’ll measure spin squared and Sz. And Sz is interesting. It would be spin, or angular momentum, in the z direction.
So let’s look at that, Sz. This is the operator, the measurable. It is, this time, nothing else than a simple matrix. It’s not the momentum operator. It’s not an angular momentum operator with derivatives. It’s an angular momentum operator, but it seems to have come out of thin air.
But it hasn’t. So here it is. Oh, and it’s diagonal already. So the eigenstates are easily found. I have one state-- I don’t know what I want to call it-- I’ll call it 1, 0. It’s one state. And it’s an eigenstate of it. We’ll call it, for simplicity, up. We’ll see why.
Sz, acting on up, is equal to h bar over 2, times the matrix 1, 0; 0, minus 1, acting on 1, 0. That’s h bar over 2 times the matrix acting on 1, 0, which gives 1, 0 back. So it is an eigenstate, because it’s h bar over 2 times up, the 1, 0 state again. So this thing, we call it up, because it has an up component of the z angular momentum. So it’s a spin up state.
What is the spin down state? It would be 0, 1. It’s spin down. And Sz on the spin down state-- it’s also an eigenstate, this time with eigenvalue minus h bar over 2, times spin down.
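In matrix form, the two eigenvalue equations just verified read (a minimal rendering, with up = (1, 0) and down = (0, 1) as column vectors):

    S_z\begin{pmatrix}1\\0\end{pmatrix} = \frac{\hbar}{2}\begin{pmatrix}1&0\\0&-1\end{pmatrix}\begin{pmatrix}1\\0\end{pmatrix} = +\frac{\hbar}{2}\begin{pmatrix}1\\0\end{pmatrix},
    \qquad
    S_z\begin{pmatrix}0\\1\end{pmatrix} = -\frac{\hbar}{2}\begin{pmatrix}0\\1\end{pmatrix}.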
And we call it spin 1/2 because of this 1/2. And you’d say, no, you just put that constant because you wanted it there. Not true: if I had put a different constant here in defining this, I would not have gotten the angular momentum algebra without extra constants appearing. That’s how angular momentum works.
So if I use two-by-two matrices, I’m forced to get spin 1/2. You cannot get anything else. The 1/2 of the spin is already there. The component of angular momentum is h bar over 2. If you have a photon, it has spin 1. The components of angular momentum are plus h bar or minus h bar, for the two circularly polarized waves.
So this is actually interesting. But it begs another question, because we have a good intuition. And this is spin up along the z direction, because it has an Sz eigenvalue of h bar over 2. So the last question I want to ask is, how do I get a spin state that points in the x direction or in the y direction?
You see, the interpretation of this spin state is that it’s a spin state that has the spin up in the z direction, because that’s what you can measure, or spin down along the z direction for the other one. Can I get spin states that point along the x direction or y direction?
And here’s where the problem seems to hit you and you say, I’m in trouble. I have this state spin up and spin
down along z. And it’s a two-dimensional vector space, because two-by-two matrices, and Sx, Sy, Sz is three
dimensions. How am I going to get three dimensions out of two dimensions? You just have spin states along z, up
and down.
Now the spin up and spin down states, moreover, are orthogonal. These two are orthogonal states. You see, you do the inner product-- transpose and conjugate this one, multiply times that one-- and you get zero. So they are orthogonal. And, as you can imagine, this vector and this vector are a full basis for the vector space, because the vector space is all vectors a, b. And now you see that this is a times up plus b times down.
So anything is a superposition of up and down. So how do I ever get something that points along x, or something
that points along y? Well, let’s try to see that. Well, consider Sx, you have an Sx operator, which is h bar over 2, 0,
1, 1, 0.
And then you can try to analyze this, but it’s more entertaining to imagine other things, to say, look, if I’ve gotten
this vector 1, 0, which is up, and 0, 1, which is down, I can try maybe a vector that has the up and the down.
Maybe the up and the down is a vector that points nowhere. Who knows, whatever.
If I want to normalize it, I have to put a 1 over square root of 2. And now I know, it’s 1 over square root of 2, up,
plus down. That’s what this vector is. But let’s see what Sx does on it. Sx on 1 over square root of 2, 1, 1 is h bar
over 2, 1 over square root of 2, 0, 1, 1, 0, on 1, 1. So h bar over 2, 1 over square root of 2.
And let’s see, that gives a 1, and that gives me another 1. Oops, I got the same vector I started with. It’s an eigenstate. So this thing, up and down superimposed, is an eigenstate of Sx. So this is actually a spin that points up, but in the x direction.
Whenever we don’t put anything, we’re talking about z. But this is the spin up in the x direction. And these
appeared as the sum of a spin up and spin down in the z direction.
It may not be too surprising for you to imagine that if you put 1 over square root of 2, 1 minus 1, that vector is
orthogonal to this one. Yes, you do the transpose. And this one is orthogonal. So this is 1 over square root of 2,
up, minus, down. That is the down spin along x.
So the up and down spins along x come out like that. We form the linear combinations. So finally, you would say,
well, I’m going to push my luck and try to get spins along the y direction.
But I’ve now formed those linear combinations. What else could I do? Those linear combinations are used up. And I’ve already got two things. And you say, well, that’s fair: you’re in a two-dimensional vector space, so you’re getting two things, spin states along x and spin states along z.
But actually, we didn’t run out of things to try. We could try a state of the form 1 over square root of 2 times something like this: we could try the state up, and then, where before we put a plus, now we could put a plus i, times the state down.
So this would be a state of the form 1, i. And what does it do? Well, let’s see what it does with Sy, acting on 1, i. The Sy matrix is h bar over 2, times 0, minus i; i, 0, acting on 1, i. And there’s the 1 over square root of 2. So it’s h bar over 2, times 1 over square root of 2.
And let’s see what we get. Minus i times i is 1. And the second entry is i. We get the same state. Yes, it is an eigenstate. So, with a plus i here, this is the spin up along the y direction.
And the spin down along the y direction would be up, minus i times down. This is orthogonal to that vector. It’s the vector 1, minus i. And it’s the spin down in the y direction. You can calculate the eigenvalue: it’s minus h bar over 2, and it’s pointing down.
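A small numerical sketch of this whole discussion (NumPy assumed): diagonalize Sx and Sy and confirm that the eigenvalues are plus and minus h bar over 2, with eigenvectors proportional to (1, ±1)/√2 and (1, ±i)/√2, up to overall phases:

    import numpy as np

    hbar = 1.0
    Sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
    Sy = (hbar / 2) * np.array([[0, -1j], [1j, 0]], dtype=complex)

    for name, S in [("Sx", Sx), ("Sy", Sy)]:
        vals, vecs = np.linalg.eigh(S)      # Hermitian eigensolver
        print(name, "eigenvalues:", vals)   # [-0.5, +0.5] in units of hbar
        print(name, "eigenvectors (columns, up to phases):")
        print(vecs)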
So the complex numbers play a crucial role. If you didn’t have complex numbers, there was no way you could ever get states pointing in all possible directions. And you also see, finally, that this thing has nothing to do with your usual wave functions, functions of x, theta, phi.
No, spin is an additional world with two degrees of freedom, an extra thing. It doesn’t have a simple wave function. The spin wave functions are these two-component column vectors. But there is angular momentum in there, as you discovered here. There are the commutation relations of angular momentum, the units of angular momentum, the eigenvalues of angular momentum.
And this great thing is such a nice simple piece of mathematics. It has an enormous utility. It describes the spins of
particles. So it’s an introduction, in some sense, to what 805 is all about.
Spin systems are extremely important, with practical applications. These things, because they have basically two
states, are essentially qubits for a quantum computer. Within these systems, we understand, in the simplest way,
entanglement, Bell inequalities, superposition, all kinds of very, very interesting phenomena. So it’s a good place
to stop.
