Lecture 9: Observables, Hermitian operators, measurement and uncertainty. Particle on a circle.

L9.1 Expectation value of Hermitian operators (16:40)

L9.2 Eigenfunctions of a Hermitian operator (13:05)

L9.3 Completeness of eigenvectors and measurement postulate (16:56)

L9.4 Consistency condition. Particle on a circle (17:45)

L9.5 Defining uncertainty (10:31)

L9.1 Expectation value of Hermitian operators (16:40)

PROFESSOR: Today we’ll talk about observables and Hermitian operators. So we’ve said that an operator,
Q, is Hermitian, in the language that we’ve been working with so far, if you find that the integral dx
psi 1 star Q psi 2 is actually equal to the integral dx of Q, acting this time on psi 1, all starred, times psi 2.
So as you’ve learned already, this requires some properties about the way functions behave far away,
at infinity, some integration by parts, some things to manage. But for a large class of functions,
this general statement should be true. Now we want to, sometimes, use a briefer
notation for all of this. I will sometimes use it, sometimes not, and you do whatever you
feel like. If you like this notation, use it. So here’s the definition. If you put psi 1, psi 2 in
parentheses, this denotes a number, and in fact it denotes the integral of psi 1 star of x, psi 2
of x, dx.
So whatever you put in the first input ends up complex conjugated. Whatever you put in the second
input goes in as it is, and it’s all integrated. This has a couple of obvious properties. If you put a
number a times psi 1 in the first input, like this, the number will appear together with psi 1 and will
be complex conjugated. So it can go out as a star, times psi 1, psi 2. And if you put the number in the
second input, it comes out as is, because the second input is not complex conjugated in the
definition. With this definition, the statement that an operator Q is Hermitian has a nice look to it. It
becomes kind of natural and simple.
It’s the statement that if you have psi 1, Q psi 2, you can move the Q to the first input: Q psi 1, psi
2. The right-hand side here is exactly that second integral up there, and the left-hand
side is the left-hand side of that condition. So it’s just maybe a briefer way to write it.
So when you get tired of writing integral dx of the first function times the second, you can use this.
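This bracket is easy to play with numerically. Below is a minimal sketch, not from the lecture, that discretizes the inner product on a grid; the helper name `inner` and the sample functions are illustrative choices, not standard notation.

```python
import numpy as np

# Sketch of the bracket (psi1, psi2) = integral of psi1*(x) psi2(x) dx,
# discretized on a uniform grid over [0, 1).
N = 2000
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = 1.0 / N

def inner(f, g):
    # first slot complex conjugated, second slot as is
    return np.sum(np.conj(f) * g) * dx

psi1 = np.exp(2j * np.pi * x)   # sample complex functions on the interval
psi2 = np.exp(4j * np.pi * x)
a = 2.0 + 3.0j

# a number in the first input comes out complex conjugated:
lhs1 = inner(a * psi1, psi2)
rhs1 = np.conj(a) * inner(psi1, psi2)
# a number in the second input comes out as is:
lhs2 = inner(psi1, a * psi2)
rhs2 = a * inner(psi1, psi2)
```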
Now, we discussed last time the expectation values of operators. So what is the expectation
value of Q in some state psi of x? That is denoted with these brackets, Q with a subscript psi,
and it is equal to the integral of psi star Q psi. The expectation value depends on the state you live in.
Or, if you wish, in bracket notation it is psi, Q psi. I should put the hats everywhere. This is the
expectation value of Q-- I’m sorry, I missed a star here. So, so far, so good. We’ve reviewed
what a Hermitian operator is, what an expectation value is, so let’s begin with some claims.
Claim number one. The expectation value of Q, with Q Hermitian-- so everywhere here, Q will
be Hermitian-- is real. A real number; it belongs to the real
numbers. So that’s an important thing. You want to figure out the expectation value of Q; you
have a psi star, you have a psi. Well, it had better be real if we’re going to think-- and that’s the
goal of this discussion-- that Hermitian operators are the things you can measure in quantum
mechanics. So this had better be real.
So let’s see what this is. Well, take the expectation value of Q on psi. If I complex conjugate it, I
must complex conjugate this whole thing. Now, if you want to complex conjugate an integral,
you can complex conjugate the integrand. Here it is: I took that right-hand side, the
integrand, copied it, and complex conjugated it. That’s what you mean by complex
conjugating an integral. But this is equal to the integral dx of a product of two functions:
psi star, and Q that has acted on psi. That’s how I think of it. I never think of conjugating Q by itself.
Q is a set of operations that have acted on psi, and I’m just going to conjugate the result. And the nice
thing is that you never have to think about what Q star is; there’s no meaning to it.
So what happens here? It’s a product of two functions, and I conjugate each. The complex conjugate
of the first-- if you conjugate something twice, you get the function back-- so psi star becomes psi.
And the second becomes Q psi, starred. These are functions, so you can move them around: Q hat
psi, starred, times psi. And so far so good. I’ve done everything I could have done. They told me to
complex conjugate this, so I complex conjugated it, and I’m still not there. But I haven’t used that this
operator is Hermitian. Because the operator is Hermitian, now you can move the Q from
the first input to the second one. So it’s equal to integral dx psi star Q psi. And oh, that was the
expectation value of Q on psi, so the star of this number is equal to the number itself, and that
proves the claim: the expectation value of Q is real.
So this is our first claim. The second claim is equally important. Claim two: the
eigenvalues of the operator Q are real. So what are the eigenvalues of Q? Well, you’ve
learned, with the momentum operator, that eigenfunctions of an operator are those
special functions such that the operator acts on them and gives you a number, called the eigenvalue,
times that function. So if Q acts on psi 1, and psi 1 is a particularly nice choice, then it will be
equal to some number, let me call it Q1, times psi 1. And then I will say that Q1 is the
eigenvalue. That’s the definition. And psi 1 is the eigenvector, or the eigenfunction. And the
claim is that that number is going to be real.
So why would that be the case? Well, we can prove it in many ways, but we can prove it kind
of easily with claim number one, and actually gain a little insight. Let’s calculate the expectation
value of Q on that precise state, psi 1. Let’s see how much it is. You see, psi 1 is a particular
state; we’ve called it an eigenstate of the operator. Now you can ask: suppose you live in psi
1? That’s who you are, that’s your state. What is the expectation value of this operator? So
we’ll learn more about this question later, but we can just do it. It’s the integral of dx psi 1 star Q psi
1. And I keep forgetting these stars, but I remember them after a little while. So at this
moment, we can use the eigenvalue condition, this condition here, so this is equal to the integral dx
psi 1 star Q1 psi 1. And the Q1 can go out, hence Q1 times the integral dx of psi 1 star psi 1.
But now, we’ve proven in claim number one that the expectation value of Q is always real,
whatever state you take. So it must be real if you take it on the state psi 1. And if the
expectation value of Q on psi 1 is real, then this quantity, which is equal to that expectation value,
must be real. This quantity is the product of two factors: a real factor here-- that integral is not
only real, it’s even positive-- times Q1. So if the whole thing is real, then because this part is real,
the other number must be real. Therefore, Q1 is real.
Now, it’s an interesting observation that the eigenfunction equation doesn’t depend on how you
normalize psi 1, because if you put psi 1, or twice psi 1, the equation still holds. So if it holds for
psi 1, then 3 psi 1, 5 psi 1, minus psi 1 are all eigenfunctions.
Properly speaking, in mathematics one says that the eigenfunction is the subspace generated
by psi 1 by multiplication, because every multiple is acceptable. But when we talk about the
particle maybe being in the state psi 1, we would want to normalize it, to make the integral of psi 1
squared equal to 1. In that case, you obtain that the expectation value of the operator
on that state is precisely the eigenvalue. When you keep measuring this operator on this state,
you keep getting the eigenvalue. So think of a normalized psi 1 as the
true state that you use for expectation values.
In fact, whenever we compute expectation values, here is probably a very important thing.
Whenever you compute an expectation value, you’d better normalize the state, because
otherwise, think of the expectation value. If you don’t normalize the state, you do the calculation
and you get some answer, but your friend uses a wave function three times yours, and your
friend now gets nine times your answer. So for this to be a well-defined calculation, the state
must be normalized.
So here, we should really say that the state is normalized: psi 1 is the eigenfunction,
normalized. Then that integral is equal to 1, and the expectation value is Q1, which belongs to the
reals. So for a normalized psi 1, as it should be, the expectation value of Q on that eigenstate is
precisely equal to the eigenvalue.
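The same statement in a matrix sketch: on a normalized eigenvector, the expectation value reproduces the eigenvalue. The matrix here is an arbitrary illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q = (A + A.conj().T) / 2

# eigh is the Hermitian eigensolver: real eigenvalues, normalized eigenvectors.
evals, evecs = np.linalg.eigh(Q)
psi1 = evecs[:, 0]                 # a normalized eigenvector
q1 = evals[0]

expQ = np.vdot(psi1, Q @ psi1).real   # <Q> on the eigenstate
```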

L9.2 Eigenfunctions of a Hermitian operator (13:05)

PROFESSOR: So here comes the point that is quite fabulous about Hermitian operators. Here is the thing
that really should impress you. It’s the fact that all Hermitian operators have as many
eigenfunctions and eigenvalues as you can possibly need, whatever that means. They’re
rich; there are a lot of those states. What it really means is that the set of eigenfunctions of any
Hermitian operator-- whatever Hermitian operator, it’s not just for some especially nice ones--
for all of them, you get eigenfunctions.
And these eigenfunctions, because there are enough of them, are enough to span the space of
states. That is, any state can be written as a superposition of those eigenvectors. There are
enough. If you’re thinking of finite-dimensional vector spaces, if you’re looking at a Hermitian
matrix, the eigenvectors will provide you a basis for the vector space. You can understand
anything in terms of eigenvectors. It is such an important theorem. It’s called the spectral
theorem in mathematics.
And it’s discussed in lots of detail in 8.05, because there’s a minor subtlety. We can get the
whole idea about it here, but there are a couple of complications that mathematicians have to
iron out. So basically, let’s state what we really need, which is the following. Consider the collection
of eigenfunctions and eigenvalues of the Hermitian operator Q. So I go and say, well, Q
psi 1 equals q1 psi 1, Q psi 2 equals q2 psi 2, and so on.
And I actually don’t specify if it’s a finite set or an infinite set. The infinite set, of course, is a tiny
bit more complicated, but the result is true as well, and we can work with it. So that is the setup.
And here comes the claim. Claim 3: the eigenfunctions can be organized to satisfy the
following relation, integral dx psi i star of x psi j of x is equal to delta ij. And this is called
orthonormality.
Let’s see what this all means. We have a collection of eigenfunctions, and here it says
something quite nice. These functions are like orthonormal functions, which is to say each
function has unit norm. You see, if you take i equal to j-- suppose you take psi 1, psi 1-- you get
delta 1 1, which is 1. Remember, the Kronecker delta is 1 when the two indices are the
same, and it’s 0 otherwise. So the norm of psi 1 is 1: the integral of psi 1 squared is 1.
psi 1, psi 2, psi 3, all of them are well normalized.
So they satisfy this thing we wanted them to satisfy. Those are good states. psi 1, psi 2, psi
3, those are good states. They are all normalized. But even more, any two different ones are
orthogonal. This is like the 3 basis vectors of R3: the x basis unit vector, the y unit vector, the
z unit vector. Each one has length 1, and they’re all orthogonal to each other.
And when are two functions orthogonal? You say, well, when vectors are orthogonal I know
what I mean. But orthogonality for functions means doing this integral. This measures how
different one function is from another one. Because if you have the same function, this integrand
is positive, and it all adds up. But for different functions, this is a measure of the inner
product between two functions. You see, you have the dot product between two vectors; the
dot product of two functions is an integral like that. It’s the only thing that makes sense.
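As a sketch of this dot product of functions, two different sine modes on an interval have unit norm and zero overlap; the particular modes, interval, and grid are illustrative choices.

```python
import numpy as np

# Overlap integral of two normalized sine modes on [0, L).
L = 2.0
N = 4000
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N

f = np.sqrt(2 / L) * np.sin(2 * np.pi * x / L)
g = np.sqrt(2 / L) * np.sin(4 * np.pi * x / L)

overlap_ff = np.sum(np.conj(f) * f) * dx   # unit norm: close to 1
overlap_fg = np.sum(np.conj(f) * g) * dx   # different modes: close to 0
```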
So I want to prove one part of this, which is the part that is doable with elementary methods; the
other part is a little more complicated. So let’s do this. Consider the case where qi is
different from qj. I claim I can prove this orthogonality property. So start
with the integral dx of psi i star Q psi j. Well, Q acting on psi j gives qj psi j. So this is integral dx psi i
star qj psi j, and therefore it’s equal to qj times the integral of psi i star psi j.
I simplified this by just evaluating it, because psi j is an eigenstate of Q. Now, the
other thing I can do is use the property that Q is Hermitian and move the Q to act on the first
function. So this is equal to integral dx, Q psi i, starred, times psi j. And now I can keep simplifying as
well. I have dx, and then I have the complex conjugate of qi psi i, like this, times psi j. And
now, remember, qi is an eigenvalue of a Hermitian operator. We already know it’s real. So qi
goes out of the integral as a number; because it’s real, it’s not changed by the conjugation: qi
integral dx psi i star psi j.
The end result is that we’ve shown that the first quantity is equal to this second quantity. And
therefore, since the integral is the same in both quantities, subtracting the two equations, or just
moving one to the other side, shows that qi minus qj, times the integral psi i star psi
j dx, is equal to 0. So look what you’ve proven by using Hermiticity: the difference between
the eigenvalues times the overlap between psi i and psi j must be 0.
But we started with the assumption that the eigenvalues are different. And if the eigenvalues
are different, this factor is nonzero, and the only possibility is that the integral is 0. So, since
we’ve assumed that qi is different from qj, we’ve proven that the integral of psi i star psi j dx is equal
to 0. And that’s part of this little theorem: that the eigenfunctions can be organized to have
unit norm and orthogonality between the different ones.
My proof is good, but it’s not perfect, because it ignores one possible complication, which is
that here we wrote the list of all the eigenfunctions, but sometimes something very interesting
happens in quantum mechanics. It’s called degeneracy. And degeneracy means that there
may be several eigenfunctions that are different but have the same eigenvalue. We’re going to
find that soon-- we’re going to find, for example, states of a particle that move in a circle that
are different and have the same energy. For example, a particle moving in a circle with this
velocity and a particle moving in a circle with the same magnitude of the velocity in the other
direction are two states that are different but have the same energy eigenvalue.
So it’s possible that in this list not all the eigenvalues are different. Suppose you have three or four
degenerate states-- say three degenerate states. They all have the same eigenvalue, but they
are different states. Are they orthonormal or not? The answer is-- actually, the clue is there. The
eigenfunctions can be organized to satisfy. It would be wrong if you say the eigenfunctions
satisfy. They can be organized to satisfy. It means that, yes, those ones that have different
eigenvalues are automatically orthonormal. But those that have the same eigenvalues, you
may have three of them maybe, they may not necessarily be orthonormal. But you can do
linear transformations of them and form linear combinations such that they are orthonormal.
So the interesting part of this theorem, which is the more difficult part mathematically, is to
show that when you have degeneracies this still can be done. And there’s still enough
eigenvectors to span the space.
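A tiny sketch of that organization step, assuming the simplest possible degenerate operator, the 2x2 identity: every vector is an eigenvector with eigenvalue 1, so two non-orthogonal eigenvectors can be picked and then orthonormalized by Gram-Schmidt.

```python
import numpy as np

# Two independent eigenvectors of the identity with the same eigenvalue 1;
# they are NOT orthogonal as given.
v1 = np.array([1.0, 0.0])
v2 = np.array([1.0, 1.0])

# Gram-Schmidt: organize the degenerate pair into an orthonormal pair.
u1 = v1 / np.linalg.norm(v1)
w = v2 - np.vdot(u1, v2) * u1      # remove the component along u1
u2 = w / np.linalg.norm(w)
```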

L9.3 Completeness of eigenvectors and measurement postulate (16:56)

PROFESSOR: That brings us to claim number four, which is perhaps the most important one. I may have
said it already: the eigenfunctions of Q form a set of basis functions, and then any
reasonable psi can be written as a superposition of Q eigenfunctions.
OK, so let’s just make sense of this. Not only do I think we understand what this means,
but let’s write it out mathematically. So the statement is: any psi of x, any physical state, can be
written as a superposition of all these eigenfunctions. So there are numbers, the alphas: alpha 1 psi 1
of x plus alpha 2 psi 2 of x, and so on. Those are the expansion coefficients. And in summary,
we say psi is the sum over i of alpha i psi i of x.
So the idea is that those alpha i’s exist and you can write them. So any wave function that you
have, you can write it in a superposition of those eigenfunctions of the Hermitian operator. And
there are two things to say here. One is: how would you calculate those alpha i’s?
Well, actually, if you assume this equation, the calculation of the alpha i’s is simple, because of the
orthonormality property. You’re supposed to know the eigenfunctions. You must have done the work to
calculate the eigenfunctions. So here is what you can do. You can do the following integral,
the bracket psi i, psi.
Let’s calculate this thing. Remember what this is: it’s an integral, dx, of psi i star, times psi.
And psi is the sum over j of alpha j psi j. You can use any letter. I used i for the sum before, but since
I already have that psi i, I would make a great confusion if I used another i, so I should use j there. And
what is this? Well, you’re integrating against a sum, so the sum can go out. It’s
the sum over j, alpha j, integral of psi i star psi j dx.
And what is this last integral? Delta ij-- that is our nice orthonormality. So this is sum over j, alpha j, delta ij.
Now, this is kind of a simple sum. It can always be done. You should just think a second.
You’re summing over j, and i is fixed. The only case when this gives something is when j, which
you’re summing over, is equal to i, which is a fixed number. Therefore, the only term that
survives is j equal to i, the delta is 1, and this is alpha i.
So we did succeed in calculating this, and in fact, alpha i is equal to the bracket of psi i with
psi. So how do you compute an alpha i? You must do an integral. Of what? Of psi i star times
your wave function, over the interval. So the alpha i’s are given by these integrals. That’s what
we wanted to show.
The other thing that you can check is this: if the integral of the wave function squared dx is equal
to 1, what does it imply for the alpha i’s? You see, the wave function is normalized, and it’s now a
function of alpha 1, alpha 2, alpha 3, alpha 4, all these things. So I must calculate this. And now
let’s do it-- quickly, but do it.
Sum over i, alpha i star, psi i star, times sum over j, alpha j, psi j. That’s the integral of the thing
squared, dx. I’m sorry, I went wrong at first: the star goes on the first factor, the first psi
starred, the second psi not. Now I’ve got it right. Now I take out the sums: sum over i, sum over j,
alpha i star alpha j, integral dx psi i star psi j. This is delta ij, therefore j becomes equal to i, and you
get the sum over i of alpha i star alpha i, which is the sum over i of alpha i squared. OK.
So that’s what it says. Look, this is something that should be internalized as well: the sum
over i of the alpha i squared is equal to 1. Whenever you have a superposition of wave
functions, and the whole thing is normalized, and your wave functions are orthonormal, then
it’s very simple: the normalization is computed by doing the sum of squares of each
coefficient. The mixed terms don’t appear because there’s no mixing here.
Everything is separate. Everything is unmixed. Everything is nice. So there you go. This is
how you expand any state in the collection of eigenfunctions of any Hermitian operator that
you are looking at.
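The expansion and the sum rule can be sketched numerically: compute the alphas as overlaps with an orthonormal eigenbasis, rebuild the state, and check that the squares add to 1. The matrix and state below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
Q = (A + A.conj().T) / 2
_, evecs = np.linalg.eigh(Q)        # columns form an orthonormal eigenbasis

psi = rng.normal(size=5) + 1j * rng.normal(size=5)
psi /= np.linalg.norm(psi)          # normalized state

# alpha_i = (psi_i, psi): overlap of each eigenvector with the state
alphas = np.array([np.vdot(evecs[:, i], psi) for i in range(5)])
reconstructed = evecs @ alphas      # sum_i alpha_i psi_i
total_prob = np.sum(np.abs(alphas) ** 2)
```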
OK. So finally, we get it. We’ve done all the work necessary to state the measurement
postulate. How do we find what we measure? So here it is: the Measurement Postulate.
So here’s the issue. We want to measure. I’m going to say these things in words first. You want to
measure the operator Q in your state. The operator might be the momentum, might be the
energy, might be the angular momentum, could be kinetic energy, could be potential energy--
any Hermitian operator. You want to measure it in your state.
The first thing the postulate says is that you will, in general, obtain just one number
each time you do a measurement, and that number is one of the eigenvalues of this operator.
So the set of possible outcomes is the set of eigenvalues
of the operator. Those are the only numbers you can get.
But you can get them with different probabilities. And for that, you must use this claim. You
must, in a sense, rewrite your state as a superposition of the eigenfunctions, with those alphas.
And the probability to measure q1 is the probability that you end up in that part of the
superposition, and it is given by alpha 1 squared. The probability to
measure q2 is given by alpha 2 squared, and so on for all of these numbers.
And finally, after the measurement, another funny thing happens. The state, which was
this whole sum, collapses to the state you obtained. So if you obtained q1, the whole
thing collapses to psi 1. After you’ve done the measurement, the state of the system becomes
psi 1.
So this is the spirit of what happens. Let me write it out. If we measure Q in the state psi, the
possible values obtained are q1, q2, and so on. The probability pi to measure qi is pi equals alpha i
squared-- and remember, this alpha i we calculated: it’s the overlap of psi i with psi,
squared. And finally, after the outcome qi, the state of the system
becomes psi of x equal to psi i of x. And this is the collapse of the wave function. It also
means that after you’ve done the measurement and you did obtain the value qi, you stay
with psi i; if you measure again, you keep obtaining qi.
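A toy simulation of this postulate, with made-up eigenvalues and coefficients chosen for the sketch: each measurement returns one eigenvalue, drawn with probability alpha i squared.

```python
import numpy as np

rng = np.random.default_rng(4)

qs = np.array([1.0, 2.0, 5.0])                  # made-up eigenvalues
alphas = np.array([0.5, 0.5j, np.sqrt(0.5)])    # made-up expansion coefficients
probs = np.abs(alphas) ** 2                     # 1/4, 1/4, 1/2
probs = probs / probs.sum()                     # guard against rounding

def measure():
    # one measurement: pick an eigenvalue with probability |alpha_i|^2;
    # the chosen index also labels the collapsed state psi_i
    i = rng.choice(len(qs), p=probs)
    return qs[i]

outcomes = np.array([measure() for _ in range(20000)])
freq5 = np.mean(outcomes == 5.0)   # empirical frequency of the outcome q = 5
```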
Why did it all become possible? It all became possible because Hermitian operators are rich
enough to allow you to write any state as a superposition. And therefore, if you want to
measure momentum, you must find all the eigenfunctions of momentum and rewrite your state
as a superposition of momentum eigenstates. You want to do energy? Well, you must rewrite your state as
a superposition of energy eigenstates, and then you can measure. Want to measure angular
momentum? Find the eigenstates of angular momentum, use the theorem to rewrite your
whole state in different ways.
And this is something we said in the first lecture of this course, that any vector in a vector
space can be written in infinitely many ways as different superpositions of vectors. We wrote
the arrow and said, this vector is the sum of this and this, and this plus this plus this, and this
plus this plus this. And yes, you need all that flexibility. For any measurement, you rewrite the
vector as the sum of the eigenvectors, and then you can tell what are your predictions. You
need that flexibility that any vector in a vector space can be written in infinitely many ways as
different linear superpositions.
So there’s a couple of things we can do to add intuition to this. I’ll do, first, a consistency
check, and maybe I’ll do an example as well. And then we have to define uncertainty; that comes
in the last part. So, any question about this measurement postulate? Is there something unclear
about it?
It’s a very strange postulate. You see, it divides quantum mechanics into two realms. There’s
the realm of the Schrodinger equation, where your wave function evolves in time. And then there’s a
realm of measurement. The Schrodinger equation doesn’t tell you what you’re supposed to
do with measurement, but consistency with the Schrodinger equation doesn’t allow you to do many
things, and this is apparently the only thing we can do. Then you do a measurement, and
somehow this psi of x collapses and becomes one of the results of your measurement.
People have wondered: if the Schrodinger equation is all there is in the world, why doesn’t
the result of the measurement come out of the Schrodinger equation? Well, people have thought very
hard about it, and they have come up with all kinds of interesting things.
Nevertheless, nothing that comes out is sufficiently clear and sufficiently useful to merit a
discussion at this moment. It’s very interesting, and it’s a subject of research, but nobody has
found a flaw with this way of stating things, and it’s the simplest way of stating things. And
therefore, the measurement is an extra assumption, an extra postulate. That’s how a
measurement works. And after you measure, you leave the system alone, the Schrodinger
equation takes over and keeps evolving. You measure again, something happens, there’s
some answer that gets realized. Some answers are not realized, and so it continues.

L9.4 Consistency condition. Particle on a circle (17:45)

PROFESSOR: Let me do a little exercise still using these manipulations, and I’ll confirm the way we think about
expectation values.
So, an exercise. Suppose you have indeed that psi is the sum of alpha i psi i. Compute
the expectation value of Q in the state psi-- precisely the expectation value of this operator
we’ve been talking about, on the state.
So this is equal to the integral dx psi star Q psi. And now I have to put in the two sums, as before,
and I’ll go a little fast here. It’s dx, sum over i, alpha i star psi i star, Q, sum over j, alpha j psi j--
no star on the second.
This is equal to sum over i, sum over j, alpha i star alpha j, integral dx psi i star Q psi j. But Q psi
j is equal to qj psi j. Therefore, this whole integral is equal to qj times the integral dx of psi i star
psi j, which is qj delta ij.
So here we go. It’s equal to sum over i, sum over j, alpha i star alpha j, qj, delta ij, which is
equal to the sum over i-- the j’s disappear-- of alpha i squared qi. That’s it. OK.
Now you’re supposed to look at this and say, yay. Now, why is that? Look, how did we define
expectation values? We defined it as the sum of each value times the probability of that value.
That’s the definition for a random variable.
So here, our random variable is the result of the measurement. And what are the possible
values? The qi’s. And what are their probabilities? The Pi’s. OK. So the expectation value of
Q should be exactly that: the sum of the possible values times their probabilities, and that’s
what the computation gives.
This is how we defined the expectation value of x, and also the expectation value of p. And it
all comes from the measurement postulate and the definition. This definition and the
measurement postulate show that this is what we expect: this is the result of the
expectation value. OK.
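The consistency check can be sketched in finite dimensions: the direct expectation value (psi, Q psi) agrees with the sum of qi times alpha i squared. The matrix and state are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q = (A + A.conj().T) / 2

evals, evecs = np.linalg.eigh(Q)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

alphas = evecs.conj().T @ psi               # alpha_i = (psi_i, psi)
expQ_direct = np.vdot(psi, Q @ psi).real    # (psi, Q psi)
expQ_sum = np.sum(np.abs(alphas) ** 2 * evals)  # sum_i |alpha_i|^2 q_i
```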
I think I have a nice example. I don’t know if I want to go into all the detail of these things, but
they illustrate things in a nice way. So let’s try to do it.
So here it is. It’s a physical example. This is a nice concrete example because things work out.
So I think we’ll actually illustrate some physical points.
Example. Particle on a circle, with x from 0 to L. Maybe you haven’t seen a circle described by that,
but you take the x-axis, and you say, yes, the circle is from 0 to L. And the way you think of it is
that the point L is identified with the point 0.
If you have a line and you identify the two endpoints, that’s called a circle, in the sense of
topology. A circle as the set of points equidistant from a center is a geometric description of a
round circle. But, topologically speaking, anything that is closed is a circle.
We can think of the circle physically like this, or it could be a curved line that closes into a circle.
But it’s not important.
Let’s consider a free particle on a circle, and suppose the circle has length L. So x belongs to
the interval from 0 to L. And here is the wave function: psi equals square root of 2 over L, times,
1 over square root of 3, sine of 2 pi x over L, plus square root of 2 over square root of 3,
cosine of 6 pi x over L. This is the wave function of your particle on the circle.
At some time, time equal 0. It’s a free particle, no potential, and it lives on the circle. And
these functions are kind of interesting. You see, if you live on the circle, you want to
emphasize the fact that the point 0 is the same as the point L, so you should have that psi at
L must be equal to psi at 0. It’s a circle, after all; it’s the same point.
And indeed, for x equal 0 or L, the argument of the sine is 0 or 2 pi, and the sine is the same.
For the cosine, the argument is 0 or 6 pi, so that’s also periodic, and it’s fine. It’s a good wave
function.
The question is, for this state: if you measure the momentum, what are the possible values and
their probabilities? So you decide to measure the momentum of this particle. What can you get? OK.
It looks a little nontrivial, and it is a little nontrivial. Momentum-- so I must first find the
momentum eigenstates. The momentum eigenstates we know are those infinite plane waves, e to the
ikx, that we could never normalize: you square it, it’s 1, and the integral over all
space is infinite. So are we heading for disaster here? No, because here the particle lives in a finite
space. Yes, you have a question?
STUDENT: Shouldn’t the wave function be complex? Because right now, it just looks like it’s
real valued. And we can’t have real wave functions, can we?
PROFESSOR: Well, it is the wave function at time equal 0. The time derivative would have to bring in
complex things. So you can have a wave function that is real at some particular time.
Like, any wave function psi of x times e to the minus iEt over h bar is a typical wave function, and
at time equal 0 it may be real. It cannot be real forever, so you cannot assume it’s always real.
But at some particular time it could be real. Very good question.
The other thing you might say is: look, this is too real to have momentum; momentum has to do
with waves. But that’s not a reliable argument.
OK, so where do we go from here? Well, let’s try to find the momentum eigenstates. They
should be things like exponentials. So how could they look? Well, e to the 2 pi i, maybe.
What else? There should be an x for a momentum eigenstate. Now, there should be no units in
the exponent, so there had better be an L here. And why did I think of the 2 and
the pi? Well, for convenience. Let’s see.
Suppose you put a number m here in the exponent. Then the good thing about this is that when x
is equal to 0, the exponential is 1, and when x is equal to L, the exponent is a multiple of 2 pi i, so
that’s periodic. So this does satisfy the periodicity, I claim, and it’s the only way, if m is any integer.
So m goes from minus infinity to infinity. Those things are periodic: they satisfy psi of x
plus L is equal to psi of x.
OK. That seems to be something that could be a momentum eigenstate, and then I have to
normalize it. Well, if I square it, the phase cancels, so you get 1; if you then integrate over the
circle, you get L. So if you put a 1 over square root of L in front, when you square and
integrate, you will get 1. So here it is: psi m of x is defined to be 1 over square root of L, e to the
2 pi i m x over L. And I claim these are momentum eigenstates.
In fact, what is the value of the momentum? Well, you calculate h bar over i, d dx, acting on psi m.
And what do you get? You get 2 pi m over L, times h bar, times psi m. The h bar is there, the i
cancels, and the factor in the exponent comes down. So this is the state with momentum p equal
to h bar 2 pi m over L.
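This eigenvalue claim can be checked numerically, with h bar set to 1 and an arbitrary m and L chosen for the sketch, using a periodic finite difference for the derivative on the circle.

```python
import numpy as np

hbar = 1.0
L = 2.0
m = 3
N = 4000
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N

# psi_m(x) = e^{2 pi i m x / L} / sqrt(L)
psi_m = np.exp(2j * np.pi * m * x / L) / np.sqrt(L)

# periodic central difference for d/dx (the circle identifies x = 0 with x = L)
dpsi = (np.roll(psi_m, -1) - np.roll(psi_m, 1)) / (2 * dx)
p_psi = (hbar / 1j) * dpsi            # momentum operator acting on psi_m
p_eig = 2 * np.pi * m * hbar / L      # the claimed eigenvalue

norm = np.sum(np.abs(psi_m) ** 2) * dx   # should be 1
```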
OK. Actually, having done that, we’ve done the most difficult part of the problem: you’ve found the
momentum eigenfunctions. So now the rest of the work is to rewrite the wave function in terms of
these objects. I’ll do it in a second. Maybe I’ll leave a little space here, and you can check the
algebra, and you can see it in the notes. But you know what you’re supposed to do.
The sine of x is e to the ix minus e to the minus ix, over 2i. So you get these things converted
to exponentials. The cosine of x is e to the ix plus e to the minus ix, over 2. So if you do that
with those terms, look: what is the sine of 2 pi x over L going to give you? It's going to give
you exponentials of plus and minus 2 pi ix over L.
So the sine gives you m equals 1 and m equals minus 1. And the cosine of 6 pi x over L,
since 3 times 2 is 6, will give you m equals 3 and m equals minus 3.
So I claim, after some work, and you could try to do it, I think it would be a nice exercise: psi
is equal to square root of 2/3 times 1 over 2i, psi 1, minus square root of 2/3 times 1 over 2i,
psi minus 1, plus 1 over square root of 3, psi 3, plus 1 over square root of 3, psi minus 3. And
it should give you some satisfaction to see something like that. You're now seeing the wave
function written as a superposition of momentum eigenstates. The theorem came through.
In the case of a particle on the circle, the statement is that the eigenfunctions are the
exponentials, and the theorem is Fourier's theorem, again, for a series.
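As a sketch, not from the lecture, sympy can extract these coefficients directly as the inner products of psi m with psi. The starting wave function below, square root of 2 over 3L times sine of 2 pi x over L, plus 2 over square root of 3L times cosine of 6 pi x over L, is an assumption chosen to reproduce the coefficients quoted above:

```python
# Sketch: compute c_m = (psi_m, psi) = integral of psi_m^* psi over one
# period. The wavefunction psi below is assumed from earlier in the lecture,
# chosen to match the quoted expansion coefficients.
import sympy as sp

x, L = sp.symbols('x L', positive=True)

psi = (sp.sqrt(sp.Rational(2, 3) / L) * sp.sin(2 * sp.pi * x / L)
       + 2 / sp.sqrt(3 * L) * sp.cos(6 * sp.pi * x / L))

def psi_m(m):
    # Normalized momentum eigenstate on the circle.
    return sp.exp(2 * sp.pi * sp.I * m * x / L) / sp.sqrt(L)

def coeff(m):
    # Expansion coefficient c_m in psi = sum_m c_m psi_m.
    return sp.simplify(sp.integrate(sp.conjugate(psi_m(m)) * psi, (x, 0, L)))

for m in (1, -1, 3, -3):
    print(m, coeff(m))
```

Only m equal to 1, minus 1, 3, and minus 3 give nonzero coefficients, matching the four-term superposition in the text.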
So finally, here is the answer, and we can make a little table: here are the p values, and the
probabilities. The first value, for psi 1, the momentum is 2 pi h bar over L. And what is its
probability? It's this whole coefficient squared: square root of 2/3 times 1 over 2i, squared. So
how much is that? It's 2/3 times 1/4, which is 1/6.
And the other value that you can get is minus this one, so minus 2 pi h bar over L. This minus
doesn't matter; the probability is also 1/6. The next one is with m equal to 3, so you can get 6
pi h bar over L, with probability the square of this coefficient, 1/3. And minus 6 pi h bar over L
with probability 1/3. Happily, our probabilities add up to one.
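The arithmetic above can be sketched in a few lines, assuming the four coefficients quoted in the expansion:

```python
# Sketch: the measurement probabilities are |c_m|^2. Check that the four
# quoted coefficients give 1/6, 1/6, 1/3, 1/3, summing to one.
import numpy as np

c = {  # m -> coefficient in the momentum-eigenstate expansion
    1: np.sqrt(2 / 3) / 2j,
    -1: -np.sqrt(2 / 3) / 2j,
    3: 1 / np.sqrt(3),
    -3: 1 / np.sqrt(3),
}

probs = {m: abs(cm) ** 2 for m, cm in c.items()}
print(probs)                 # probabilities 1/6, 1/6, 1/3, 1/3 (as floats)
print(sum(probs.values()))   # should sum to 1
```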
So there you go. That's the theorem expressed in a very clear example. We had a wave
function. You wrote it as a sum of four momentum eigenstates. And now you know, if you do a
measurement, what the possible values of the momentum are, and with what probabilities.

L9.5 Defining uncertainty (10:31)

MITOCW | watch?v=rCRH9CTThlo
PROFESSOR: Uncertainty. When you talk about a random variable Q, we've said that it has
values Q1 up to, say, Qn, with probabilities P1 up to Pn, and we speak of the standard
deviation, delta Q, as the uncertainty. And how is that standard deviation defined? Well, you
begin by making sure you know the expectation value, or the average value, of this random
variable, which was defined last time. I think I used brackets, but a bar is kind of nice
sometimes too, at least for random variables: Q bar is the sum over i of Pi times Qi.
The uncertainty is also some expectation value: the expectation value of the deviation. So the
uncertainty squared, delta Q squared, is the sum over i of Pi times Qi minus Q bar, squared.
That is, you calculate the expected value of the square of the difference between your random
variable and the mean, and that is the square of the standard deviation.
Now this is the definition. And it’s a very nice definition because it makes a few things clear.
For example, the left hand side is delta Q squared, which means it’s a positive number. And
the right hand side is also a positive number, because you have probabilities times differences
of quantities squared. So this is all greater than or equal to zero. And moreover, you can actually
say the following.
If the uncertainty, or the standard deviation, is zero, the random variable is not that random.
Because if this whole thing is 0, delta Q squared must be 0, and this sum must be 0. But each
term here is greater than or equal to zero, so each term must be 0, because if any one of them
were not equal to zero, you would get a nonzero contribution. So any Qi that has a Pi different
from 0 must be equal to Q bar. So if delta Q is equal to 0, Qi is equal to Q bar: it's not random
anymore.
OK, now we can simplify this expression.
Do the following. By simplifying, I mean expand the right-hand side: sum over i, Pi Qi
squared, minus 2, sum over i, Pi Qi Q bar, plus sum over i, Pi Q bar squared. This kind of
thing shows up all the time, and it shows up in quantum mechanics as well, as we'll see in a
second. And you need to be able to see what's happening. Here, the first sum is the
expectation value of Q squared. That's the definition of the bar of some variable: you multiply
the values of that variable by the probabilities and sum.
What is this middle term? This is a little more funny. First, you should know that Q bar is a
number, so it can go out: it's minus 2 Q bar times the sum over i of Pi Qi. And all that is left is
another Q bar, so the middle term is minus 2 Q bar squared. In the last term, you take Q bar
squared out because it's a number, and the sum of the probabilities is 1, so it's plus Q bar
squared. So it comes out as minus 2 Q bar squared plus Q bar squared, and at the end, delta
Q squared, and this is another famous property, is the mean of the square minus the square
of the mean.
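The two expressions for delta Q squared, the definition and the expanded identity, can be checked numerically with a made-up distribution:

```python
# Sketch (hypothetical numbers): for a discrete random variable,
# (Delta Q)^2 = sum_i P_i (Q_i - Qbar)^2 equals the mean of the square
# minus the square of the mean.
import numpy as np

Q = np.array([1.0, 2.0, 5.0])   # possible values Q_i (made up)
P = np.array([0.5, 0.3, 0.2])   # probabilities P_i, summing to 1

Qbar = np.sum(P * Q)                        # mean
var_def = np.sum(P * (Q - Qbar) ** 2)       # the definition
var_id = np.sum(P * Q ** 2) - Qbar ** 2     # mean of square - square of mean

print(Qbar, var_def, var_id)                # the last two should agree
```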
And from this, since delta Q squared is greater than or equal to 0, you always conclude that
the mean of the square is at least the square of the mean: Q squared bar is greater than or
equal to Q bar squared.
OK. So what happens in quantum mechanics? Let me give you the definition and a couple of
ways of writing it. So here comes the definition, and it's inspired by this. In quantum
mechanics, for a Hermitian operator Q, we'll define the uncertainty of Q in the state psi,
squared, as the expectation value of Q squared minus the square of the expectation value of Q.
Those are things that you know how to compute in quantum mechanics. Because you know
what an expectation value is in any state psi: you do psi star, the operator, psi, and integrate.
And here you do this thing, so it's all clear. So it's a perfectly good definition. Maybe it doesn't
give you too much insight yet, but let me say two things, and we'll leave them to be completed
next time.
Claim one is that delta Q squared on psi can be written as the expectation value of Q minus
the expectation value of Q, that whole thing squared. Look, it looks funny, and we'll elaborate
on this, but the first claim is that this is a possible rewriting. You can write this uncertainty as a
single expectation value. This is the quantum-mechanical analog of that equation for random
variables.
Claim two is another rewriting: delta Q squared on psi can be rewritten as an integral, the
integral dx of Q minus the expectation value of Q, acting on psi, all squared. Look at that. You
act on psi with the operator Q minus the expectation value of Q. This is an operator minus a
number, multiplying psi. You can act with this on the wave function, take the norm squared,
and then integrate. And that is also the uncertainty. We'll show these two things next time,
and show one more thing: that the uncertainty vanishes if and only if the state is an
eigenstate of Q. So if the state you are looking at is an eigenstate of Q, you have no
uncertainty. And if you have no uncertainty, the state must be an eigenstate of Q. All those
things will come from these claims, which we'll elaborate on next time.
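Tying the two lectures together, here is a sketch, not from the lecture, of the uncertainty of momentum for a state expanded in momentum eigenstates on the circle. For such a superposition, the expectation values reduce to sums weighted by the probabilities, so claim one can be checked in its discrete form. The coefficients are assumed from the particle-on-a-circle example, with h bar and L set to 1:

```python
# Sketch: Delta p squared for psi = sum_m c_m psi_m on the circle, computed
# both as <p^2> - <p>^2 and as the expectation of (p - <p>)^2 (claim one).
# Coefficients assumed from the earlier example; hbar = L = 1 for simplicity.
import numpy as np

hbar = 1.0
L = 1.0

c = {1: np.sqrt(2 / 3) / 2j, -1: -np.sqrt(2 / 3) / 2j,
     3: 1 / np.sqrt(3), -3: 1 / np.sqrt(3)}

p = {m: 2 * np.pi * m * hbar / L for m in c}       # momentum eigenvalues
prob = {m: abs(cm) ** 2 for m, cm in c.items()}    # probabilities |c_m|^2

p_mean = sum(prob[m] * p[m] for m in c)            # <p>
p2_mean = sum(prob[m] * p[m] ** 2 for m in c)      # <p^2>

delta_p_sq = p2_mean - p_mean ** 2                 # the definition
delta_p_sq_claim = sum(prob[m] * (p[m] - p_mean) ** 2 for m in c)  # claim one

print(p_mean, delta_p_sq, delta_p_sq_claim)
```

The mean momentum vanishes by the symmetry of the coefficients, and the two expressions for the uncertainty agree, as claim one asserts.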
