Description: This is the second of five lectures on the Kinetic Theory of Gases.
Instructor: Mehran Kardar
Lecture 8: Kinetic Theory of Gases, Part 2
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: Let's say that I tell you that I'm interested in a gas that has some temperature. I specify, let's say, temperature and pressure: room temperature, room pressure. And I tell you how many particles I have. So that [INAUDIBLE] for you what the macro state is. And I want to then see what the corresponding micro state is.
So I take one of these boxes. Whereas this is a box that I draw in three dimensions, I can make a correspondence and draw it in 6N-dimensional phase space, which would be hard for me to draw. But basically, it is a space of 6N dimensions. I figure out where the positions and the momenta of the particles are. And I find that there is a corresponding micro state that corresponds to this macro state. OK, that's fine. I made the correspondence.
But the thing is that I can imagine lots and lots of boxes that have exactly the same macroscopic properties. That is, I can imagine putting here side by side a huge number of these boxes. All of them are described by exactly the same volume, pressure, temperature, for example. The same macro state. But for each one of them, when I go and find the micro state, I find that it is something else. So I will be having different micro states.
So this correspondence is certainly something where there should be many, many points here that correspond to the same thermodynamic representation. So faced with that, maybe it makes sense to follow Gibbs and define an ensemble.
So what we say is, we are interested in some particular macro state. We know that they correspond to many, many, many different potential micro states. Let's try to make a map of as many micro states that correspond to the same macro state.
So consider N copies of the same macro state, where this N is the number of members of the ensemble, not the number of particles. And this would correspond to N different points that I put in this 6N-dimensional phase space. And what I can do is I can define an ensemble density. I go to a particular point in this space.
So let's say I pick some point that corresponds to some set of p's and q's here. And what I do is I draw a box that is 6N-dimensional around this point. And I define a density in the vicinity of that point, as follows.
Actually, yeah. What I will do is I will count how many of these points that correspond to micro states fall within this box. So dN is the number of representative points in this box.
And what I do is I divide by the total number of members. I expect that the result will be proportional to the volume of the box. So if I make the box bigger, I will have more points. So I also divide by the volume of the box. So this is, let's call d Gamma, the volume of the box.
Of course, I have to do this in order to get a nice result by taking the limit where the number of members of the ensemble becomes quite large. And then presumably, this will give me a well-behaved density. In this limit, I guess I want to also have the size of the box go to 0. OK?
Now, clearly, with the definitions that I have made, if I were to integrate this quantity against the volume d gamma, what I would get is the integral dN over N. N is, of course, a constant. And the integral of dN is the total number. So this is 1.
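As a compact restatement of this construction (writing the number of ensemble members as $\mathcal{N}$ to keep it distinct from the particle number $N$, and $d\mathcal{N}$ for the number of representative points in the box of volume $d\Gamma$):

\[
\rho(\mathbf{p},\mathbf{q},t)\;=\;\lim_{\mathcal{N}\to\infty}\frac{1}{\mathcal{N}}\,\frac{d\mathcal{N}(\mathbf{p},\mathbf{q},t)}{d\Gamma},
\qquad
\int d\Gamma\;\rho(\mathbf{p},\mathbf{q},t)\;=\;\frac{1}{\mathcal{N}}\int d\mathcal{N}\;=\;1 .
\]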
So we find that this quantity rho that I have constructed satisfies two properties. Certainly, it is positive, because I'm counting the number of points. Secondly, it's normalized to 1. So this is a nice probability density. So this ensemble density is a probability density function in this phase space that I have defined. OK? All right. So once I have a probability, then I can calculate various things according to the rules of probability that we defined before. So for example, I can define an ensemble average.
Maybe I'm interested in the kinetic energy of the particles of the gas. So there is a function O that depends on the sum of all of the p squareds. In general, I have some function O that depends on p and q. And what I define as the ensemble average is the average that I would calculate with this probability. Because I go over all of the points in phase space.
And let me again emphasize that what I call d gamma then really is the product over all particles. For each particle, I have to make a volume element in both momentum and coordinate. It's a 6N-dimensional volume element. I have to multiply the probability, which is a function of p and q, against this O, which is another function of p and q. Yes?
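In the same notation, the ensemble average being defined is (with $d\Gamma$ spelled out as the product over the $N$ particles):

\[
\langle \mathcal{O} \rangle \;=\; \int d\Gamma\;\rho(\mathbf{p},\mathbf{q},t)\,\mathcal{O}(\mathbf{p},\mathbf{q}),
\qquad
d\Gamma \;=\; \prod_{i=1}^{N} d^3\mathbf{p}_i\, d^3\mathbf{q}_i .
\]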
AUDIENCE: Is the division by N necessary to make it into a probability density?
PROFESSOR: Yes.
AUDIENCE: Otherwise, you would still have a density.
PROFESSOR: Yeah. When I would integrate then, I would get the total number. But the total number is up to me, how many members of the ensemble I took. It's not a very well-defined quantity. It's an arbitrary quantity. If I let it become very large and divide by it, then I will get something that is nicely a probability. And we've developed all of these tools for dealing with probabilities. So that would go to waste if I don't divide by it. Yes?
AUDIENCE: Question. When you say that you have these numbers, do you assume that you have any more information than just the macroscopic variables T, P, and--
PROFESSOR: No.
AUDIENCE: So how can we put a micro state in correspondence with a macro state with just a few variables? Don't you need to go from those few variables to the 6N variables for all the particles?
PROFESSOR: So that's what I was saying. It is not a one-to-one correspondence. That is, once I specify temperature, pressure, and the number of particles, there are many micro states consistent with that. OK? Yes?
AUDIENCE: My question is, if you generate identical macro states, and create micro states which correspond to them--
PROFESSOR: Yes.
AUDIENCE: Depending on some kind of a rule on how you make this correspondence, you can get different ensemble densities, right?
PROFESSOR: No. That is, if I, in principle and theoretically, go over the entirety of all possible macroscopic boxes that have these properties, I will be putting an infinite number of points in this space. And I will get some kind of a density.
AUDIENCE: What if you, say, generate an infinite number of points, but all in the case when, like, all the molecules of gas are in the right half of the box?
PROFESSOR: OK. Is that a thermodynamically equilibrium state?
AUDIENCE: Did you mention it needed to be?
PROFESSOR: Yes. I said that-- I'm talking about things that can be described macroscopically. Now, the thing that you mentioned is actually something that I would like to work with, because ultimately, my goal is not only to describe equilibrium, but how to reach equilibrium.
That is, I would like precisely to answer the question, what happens if you start in a situation where all of the gas is initially in one half of the room? As long as there is a partition, that's a well-defined macroscopic state. And then I remove the partition. And suddenly, it is a non-equilibrium state. And presumably, over time, this gas will occupy the entire room.
So there is a physical process that we know happens in nature. And what I would like eventually to do is to also describe that physical process. So what I will do is I will start with the initial configuration with everybody in the half space. And I will calculate the ensemble that corresponds to that. And that's unique.
Then I remove the partition. Then each member of the ensemble will follow some trajectory as it occupies eventually the entire box. And we would like to follow how that evolution takes place and hopefully show that you will always have, eventually, at the end of the day, the gas occupying the system uniformly.
AUDIENCE: Yeah, but just would it be more correct to generate many, many different micro states and see the macro states which correspond to them? And how many different--
PROFESSOR: What rule do you use for generating many, many micro states?
AUDIENCE: Like, uniformly arbitrary placements of particles everywhere in phase space. And look at, like, how many different micro states give rise to the same macro state?
PROFESSOR: Oh, but you are already then talking about the macro state?
AUDIENCE: Which description do you use as the first one to generate the second one? So in your point of view, you start [INAUDIBLE] with macro states, and go to micro states. But can you reverse it?
PROFESSOR: OK. I know that the way that I'm presenting things will lead ultimately to a useful description of this procedure. You are welcome to try to come up with a different prescription. But the thing that I want to ensure that you agree is that the procedure that I'm describing here has no logical inconsistencies. I want to convince you of that.
I am not saying that this is necessarily the only one. As far as I know, this is the only one that people have worked with. But maybe somebody can come up with a different prescription. So maybe there is another one. Maybe you can work on it. But I want you to, at this point, be convinced that this is a well-defined procedure. OK?
AUDIENCE: But because it's a well-defined procedure, if you existed on another planet or in some universe where the physics were different, you could still use this. Can't you use this for information in general? The only requirement is that at a fine scale, you have a consistent way of describing things; and at a large scale, you have a way of making sense of it.
PROFESSOR: Right.
AUDIENCE: So it's sort of like a compression of data, or I use [INAUDIBLE].
PROFESSOR: Yeah. Except that part of this was starting with some physics that we know. So indeed, if you were in a different universe-- and later on in the course, we will be in a different universe where the rules are not classical but quantum-mechanical. And you have to throw away this description of what a micro state is. And you can still go through the entire procedure.
But what I want to do is to follow this set of equations of motion and this description of the micro state, and see where it leads us. And for the gas in this room, it is a perfectly good description of what happens. Yes?
AUDIENCE: Maybe a simpler question. Is big rho defined only in spaces where there are micro states? Like, is there anywhere where there isn't a micro state?
PROFESSOR: Yes, of course. So if I have-- thinking about a box, and if I ask what is rho out here, I would say the answer is rho is 0. But if you like, you can say rho is defined only within this space of the box. So the description of the macro state which has something to do with the box, over which I am considering, will also limit what I can describe. Yes.
And certainly, if I were to replace p with velocity, let's say, then you would say that a space where v is greater than the speed of light is not possible. That's the point. So your rules of physics will also implicitly define the domain over which this is defined. But that's all part of mechanics. So I'm going to assume that you are comfortable with the mechanics part of it. Yes?
AUDIENCE: In your definition of the ensemble average, are you integrating over all 6N dimensions of phase space?
PROFESSOR: Yes.
AUDIENCE: So why would your average depend on p and q? If you integrate?
PROFESSOR: The average is of a function of p and q. So in the same sense that, let's say, I have a particle of gas that is moving around, and I can ask, for the quantity p squared over 2m, what is its average? The answer will be kT over 2, for example. It will not depend on p. But the quantity that I'm averaging inside the angular brackets is a function of p and q. Yes?
AUDIENCE: So the limits of the integration are basically the--?
PROFESSOR: The physical limit of the problem.
AUDIENCE: Given a macro state?
PROFESSOR: Yes. So typically, we will be integrating q over the volume of the box, and p from minus infinity to infinity, because classically, without relativity, that is all allowed. Yes?
AUDIENCE: Sorry. So why is the [INAUDIBLE] from one end for every particle, instead of just scattering space, would you have a [INAUDIBLE]? Or is that the same thing?
PROFESSOR: I am not sure I understand the question. So if I want to, let's say, find out just one particle that is somewhere in this box, so there is a probability that it is here, there is a probability that it is there, there is a probability that it is there.
The integral of that probability over the volume of the room is one. So how do I do that? I have to do an integral over dx, dy, dz, probability as a function of x, y, and z. Now I just repeat that 6N times. OK? All right. So that's the description.
But the first question to sort of consider is, what is equilibrium in this perspective? Now, we can even be generous, although it's a very questionable thing, to say that, really, when I talk about the kinetic energy of the gas, maybe I can replace that by this ensemble average. Now, if I'm in equilibrium, the results should not depend on time.
So I expect that if I'm calculating things in equilibrium, the result of equations such as this should not depend on time, which is actually a problem. Because we know that if I take a picture of all of these things that I am constructing my ensemble with and this picture is at time t, at time t plus dt, all of the particles have moved around. And so the point that was here, the next instant of time is going to be somewhere else. This is going to be somewhere else. Each one of these is flowing around as a function of time. OK?
So the picture that I would like you to imagine is you have a box, and there's a huge number of bees or flies or whatever your preferred insect is, are just moving around. OK?
Now, you can sort of then take pictures of this cluster. And it's changing potentially as a function of time. And therefore, this density should potentially change as a function of time. And then this answer could potentially depend on time. So let's figure out how is this density changing as a function of time. And hope that ultimately, we can construct a solution for the equation that governs the change in density as a function of time that is in fact invariant in time.
Going back to my flies or bees or whatever, you can imagine a circumstance in which the bees are constantly moving around. Each individual bee is now here, then somewhere else. But all of your pictures have the same density of bees, because for every bee that left a given region, there was another bee that came in its place. So one can imagine a kind of situation where all of these points are moving around, yet the density is left invariant.
And in order to find whether such a density is possible, we have to first know what is the equation that governs the evolution of that density. And that is given by Liouville's equation, which governs the evolution of rho with time. OK.
So let's kind of blow up the picture that we had over there. Previously, there were all of these coordinates. There is some point in phase space that I am looking at. Let's say that the point that I am looking at is here. And I have constructed a box around it like this in the 6N-dimensional space. But just to be precise, I will be looking at some particular coordinate q alpha and the conjugate momentum p alpha.
So my original point corresponds to, say, some specific value of q alpha, p alpha. And in this pair of dimensions I have created a box that in this direction has size dq alpha, and in this direction has size dp alpha. OK? And this is the picture that I have at some time t. OK?
Then I look at an instant of time that is slightly later. So I go to a time that is t plus dt. I find that the point that I had initially over here as the center of this box has moved around to some other location that I will call q alpha prime, p alpha prime.
If you ask me what q alpha prime and p alpha prime are, I say, OK, I know that, because I know the equations of motion. If I were running this on a computer, I would say that q alpha prime is q alpha plus the velocity q alpha dot times dt, plus order of dt squared, which hopefully I can ignore by choosing a sufficiently small time interval.
And similarly, p alpha prime would be p alpha plus p alpha dot times dt, plus order of dt squared. OK? So any point that was in this box will also move. And presumably, close-by points will be moving to close-by points. And overall, anything that was originally in this square, which is really a projection of a higher-dimensional cube, will be part of a slightly distorted entity here. So everything that was here is now somewhere here. OK?
I can ask, well, how wide is this new interval that I have? So originally, the two endpoints of the square were a distance dq alpha apart. Now they're going to be a distance dq alpha prime apart. What is dq alpha prime? I claim that dq alpha prime is whatever I had originally, but then the two ends are moving at slightly different velocities, because the velocity depends on where you are in phase space.
And so the difference in velocity between these two points is really the derivative of the velocity with respect to the coordinate, times the separation between those two points, which is d of q alpha dot by d q alpha times dq alpha. Multiplied by dt, this is how much I would have expanded, plus higher order terms.
And I can apply the same thing in the momentum direction. The new vertical separation dp alpha prime is different from what it was originally, because the two endpoints got stretched. The reason they got stretched was because their velocities were different. And their difference is just the derivative of velocity with respect to separation times their small separation. And if I make everything small, I can, in principle, write higher order terms. But I don't have to worry about that. OK?
So I can ask, what is the area of this slightly distorted square? As long as the dt is sufficiently small, all of the distortions, et cetera, will be small enough. And you can convince yourself of this. And what you will find is that the product of dq alpha prime, dp alpha prime, if I were to multiply these things, you can see that dq alpha and dp alpha is common to the two of them.
So I have dq alpha dp alpha. From multiplying these two factors, I will get one. And then I will get two terms that are order of dt, namely the derivative of q alpha dot with respect to q alpha, plus the derivative of p alpha dot with respect to p alpha. And then there will be terms that are order of dt squared and higher. OK?
So the distortion in the area of this parcel, or cross section, is governed by something that is proportional to dt times the quantity d q alpha dot by d q alpha plus d p alpha dot by d p alpha.
But I have formulas here for what p alpha dot and q alpha dot are. This is the same dot notation for the time derivative that I was using before. So q alpha dot, what do I have for it? It is dH by dp alpha. So the first term is d by dq alpha of dH by dp alpha.
Whereas p alpha dot, of which I have to evaluate the d by dp alpha derivative, is minus dH by dq alpha. So what do I have? I have two second derivatives that appear with opposite sign and hence cancel each other out. OK?
So essentially, what we find is that the volume element is preserved under this process. And I can apply this to all of my directions. And hence, conclude that the initial volume that was surrounding my point is going to be preserved. OK?
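As a worked restatement of the step just described, using Hamilton's equations $\dot q_\alpha = \partial H/\partial p_\alpha$ and $\dot p_\alpha = -\partial H/\partial q_\alpha$:

\[
dq'_\alpha\, dp'_\alpha
= dq_\alpha\, dp_\alpha\left[1 + \left(\frac{\partial \dot q_\alpha}{\partial q_\alpha} + \frac{\partial \dot p_\alpha}{\partial p_\alpha}\right) dt + \mathcal{O}(dt^2)\right],
\qquad
\frac{\partial \dot q_\alpha}{\partial q_\alpha} + \frac{\partial \dot p_\alpha}{\partial p_\alpha}
= \frac{\partial^2 H}{\partial q_\alpha\,\partial p_\alpha} - \frac{\partial^2 H}{\partial p_\alpha\,\partial q_\alpha} = 0,
\]

so each area element, and hence the full phase-space volume element $d\Gamma = \prod_\alpha dq_\alpha\, dp_\alpha$, is preserved to this order.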
So what that means is that these classical equations of motion, the Hamiltonian equations, for this description of micro state that involves the coordinates and momenta have this nice property that they preserve volume of phase space as they move around. Yes?
AUDIENCE: If the Hamiltonian has explicit time dependence, that doesn't work anymore.
PROFESSOR: No. So that's why I did not put that over there. Yes.
And actually, this is sometimes referred to as being something like an incompressible fluid. Because if you do hydrodynamics for something like water and you regard it as incompressible, the velocity field has the condition that the divergence of the velocity is 0.
Here, we are looking at a 6N-dimensional velocity field that is composed of q alpha dot and p alpha dot. And this quantity being 0 is really the same thing as saying that the divergence in this 6N-dimensional space is 0. And that's a property of the Hamiltonian dynamics that one has. Yes?
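As an aside not in the lecture, one can check this divergence-free property symbolically. A minimal sketch, assuming an illustrative one-dimensional anharmonic Hamiltonian H = p^2/(2m) + k q^2/2 + u q^4 (the specific form is just an example) and using sympy for the derivatives:

# Check the incompressibility of the phase-space flow for an assumed
# example Hamiltonian with one degree of freedom.
import sympy as sp

q, p, m, k, u = sp.symbols('q p m k u', positive=True)
H = p**2 / (2*m) + k*q**2/2 + u*q**4

q_dot = sp.diff(H, p)         # Hamilton's equation: q_dot = dH/dp
p_dot = -sp.diff(H, q)        # Hamilton's equation: p_dot = -dH/dq

# Divergence of the phase-space velocity field (q_dot, p_dot):
divergence = sp.diff(q_dot, q) + sp.diff(p_dot, p)
print(sp.simplify(divergence))  # prints 0, by equality of mixed partials of H

The same cancellation of mixed second derivatives holds for any Hamiltonian and any number of degrees of freedom.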
AUDIENCE: Could you briefly go over why you have to divide by the separation when you expand the change in the displacement?
PROFESSOR: Why do I have to multiply by the separation?
AUDIENCE: Divide by.
PROFESSOR: Where do I divide?
AUDIENCE: dq alpha by--
PROFESSOR: Oh, this. Why do I have to take a derivative. So I have two points here. All of my points are moving in time. So if these things were moving with uniform velocity, one second later, this would have moved here, this would have moved the same distance, so that the separation between them would have been maintained if they were moving with the same velocity.
So if you are following somebody and you are moving with the same velocity as them, thus, your separation does not change. But if one of you is going faster than the other one, then the difference in velocity will determine how you separate.
And what is the difference in velocity? The difference in velocity depends, in this case, on how far apart the points are. So the difference between the velocity here and the velocity here is the derivative of velocity as a function of this coordinate, the derivative of velocity as a function of that coordinate, multiplied by the separation. OK?
So what does this incompressibility condition mean? It means that however many points I had over here, they end up in a box that has exactly the same volume, which means that the density is going to be the same around here and around this new point. So essentially, what we have is that the rho at the new point, p prime, q prime, and time, t, plus dt, is the same thing as the rho at the old point, p, q, at time, t. Again, this is the incompressibility condition.
Now we do mathematics. So let's write it again. So I've said, in other words, that rho of p, q, t is the same as the rho at the new point. What's the momentum at the new point? For each component, it is p alpha plus p alpha dot dt. Let's put it this way: rho of p plus p dot dt, q plus q dot dt, at t plus dt. That is, if I look at the new location compared to the old location, the time changed, the position changed, the momentum changed.
They all changed-- each argument changed infinitesimally by an amount that is proportional to dt. And so what I can do is I can expand this function to order of dt. So I have rho at the original point. So this is all mathematics. I just look at the variation with respect to each one of these arguments.
So I have a sum over alpha, p alpha dot d rho by dp alpha plus q alpha dot d rho by dq alpha plus the explicit derivative, d rho by dt. This entirety is going to be multiplied by dt. And then, in principle, the expansion would have higher order terms. OK?
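Written out, the expansion just described is:

\[
\rho(\mathbf{p},\mathbf{q},t)
= \rho(\mathbf{p},\mathbf{q},t)
+ \left[\sum_\alpha \left(\dot p_\alpha \frac{\partial \rho}{\partial p_\alpha}
+ \dot q_\alpha \frac{\partial \rho}{\partial q_\alpha}\right)
+ \frac{\partial \rho}{\partial t}\right] dt
+ \mathcal{O}(dt^2).
\]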
Now, of course, the first term vanishes. It is the same on both sides. So the thing that I will have to set to 0 is this entity over here. Now, quite generally, if you have a function of p, q, and t, you evaluate it at the old point and then evaluate it at the new point. One can define what is called a total derivative. And just like here, the total derivative will come from variations of all of these arguments. We'll have a partial derivative with respect to time and partial derivatives with respect to all of the other arguments.
So I wrote this to make a distinction between the symbols that are commonly used: sometimes d by dt with a straight d, sometimes a big Df by Dt. This is called either a total derivative or a streamline derivative. That is, you are taking derivatives as you are moving along with the flow.
And that is to be distinguished from the partial derivative, which really means sitting at some point in space and following how, from one time instant to another, let's say the density changes. So df by dt with a partial really means sit at the same point. Whereas this big Df by Dt means go along with the flow and look at the changes.
Now, what we have established here is that the density has a special character because of this Liouville theorem: its streamline derivative is 0. So what we have is that D rho by Dt is 0. And this D rho by Dt I can also write as d rho by dt plus a sum over all 6N directions of d rho by dp alpha times p alpha dot, where I substitute for p alpha dot from here-- p alpha dot is minus dH by dq alpha-- plus d rho by dq alpha times q alpha dot, where q alpha dot is dH by dp alpha. OK?
So then I can take this combination with the minus sign to the other side and write it as: d rho by dt is something that I will call the Poisson bracket of H and rho. Quite generally, if I have two functions A and B in phase space, depending on p and q, the Poisson bracket is defined as the sum over all 6N possible variations: the derivative of the first with respect to q times the derivative of the second with respect to p, and then the whole thing with the opposite sign, dA by dp times dB by dq.
So this is the Poisson bracket. And again, from the definition, you should be able to see immediately that Poisson bracket of A and B is minus the Poisson bracket of B and A. OK?
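Putting the last few statements into formulas:

\[
\frac{D\rho}{Dt} = \frac{\partial \rho}{\partial t}
+ \sum_\alpha \left(\dot p_\alpha \frac{\partial \rho}{\partial p_\alpha}
+ \dot q_\alpha \frac{\partial \rho}{\partial q_\alpha}\right) = 0
\quad\Longrightarrow\quad
\frac{\partial \rho}{\partial t} = \{H, \rho\},
\]
\[
\{A, B\} = \sum_\alpha \left(\frac{\partial A}{\partial q_\alpha}\frac{\partial B}{\partial p_\alpha}
- \frac{\partial A}{\partial p_\alpha}\frac{\partial B}{\partial q_\alpha}\right)
= -\{B, A\}.
\]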
Again, we ask the question that in general, I can construct in principle a rho of p, q, let's say, for an equilibrium ensemble. But then I did something, like I removed a partition in the middle of the gas, and the gas is expanding. And then presumably, this becomes a function of time.
And since I know exactly how each one of the particles, and hence each one of the micro states is evolving as a function of time, I should be able to tell how this density in phase space is changing. So this perspective is, again, this perspective of looking at all of these bees that are buzzing around in this 6N-dimensional space, and asking the question, if I look at the particular point in this 6N-dimensional space, what is the density of bees?
And the answer is that it is given by the Poisson bracket of the Hamiltonian that governs the evolution of each micro state and the density function. All right.
So what does it mean? What can we do with this? Let's play around with it and look at some consequences. But before that, does anybody have any questions? OK. All right.
We had something that I just erased. That is, I have a function O of p and q; let's say it's not explicitly a function of time. Let's say it's the kinetic energy for this system where, at t equals 0, I remove the partition, and the particles are expanding. And let's say in the other part you have a potential, so your kinetic energy on average is going to change. You want to know what's happening to that.
So you calculate at each instant of time an ensemble average of the kinetic energy or any other quantity that is interesting to you. And your prescription for calculating an ensemble average is that you integrate against the density the function that you are proposing to look at.
Now, in principle, we said that there could be situations where this is dependent on time, in which case, your average will also depend on time. And maybe you want to know how this time dependence occurs. How does the kinetic energy of a gas that is expanding into some potential change on average? OK.
So let's take a look. This is only a function of time because, as we said, the p's and q's are integrated over inside the average. So really, the only explicit variable that we have here is time. And you can ask, what is the time dependence of this quantity? OK?
So the time dependence is obtained by taking the time derivative inside the integral, because essentially, you are adding contributions at different points in p, q. At each point, there is a time dependence, and you take the time derivative of the contribution of that point. So we get something like this.
Now you say, OK, I know what d rho by dt is. So this is my integration over all of the phase space. d rho by dt is this Poisson bracket of H and rho. And then I have O. OK?
Let's write this explicitly. This is an integral over a whole bunch of coordinates and momenta. There is, for the Poisson bracket, a sum over derivatives. So I have a sum over alpha-- dH by dq alpha, d rho by dp alpha minus dH by dp alpha, d rho by dq alpha. And that Poisson bracket in its entirety then gets multiplied by this function of phase space. OK.
Now, there is a mathematical manipulation that we will do a lot in this class, and that's why I do this particular step, although it's not really necessary to the logical progression that I'm following: I want to remind you how you would do an integration by parts when you're faced with something like this. An integration by parts is applicable when you have variables that you are integrating over that also appear as derivatives.
And whenever you are integrating Poisson brackets, you will have derivatives inside the Poisson bracket, so the integration allows you to use integration by parts. In particular, what I would like to do is to remove the derivatives that act on the density. So I'm going to essentially rewrite that as minus an integral that involves-- again, I don't want to keep rewriting that thing-- I want to basically take the density out and then have the derivative, which is this d by dp alpha in the first term and d by dq alpha in the second term, act on everything else.
So in the first case, d by dp alpha will act on O and dH by dq alpha. And in the second case, d by dq alpha will act on O and dH by dp alpha. Again, there is a sum over alpha that is implicit. OK?
Again, there is a minus sign. So every time you do this procedure, there is this. But every time, you also have to worry about surface terms. So on the surface, you would potentially have to evaluate things that involve rho, O, and these d by d derivatives.
But let's say we are integrating over momentum from minus infinity to infinity. Then the density evaluated at infinite momenta would be 0. So practically, in all cases that I can think of, you don't have to worry about the boundary terms.
So then when you look at these kinds of terms, this d by dp alpha can either act on O, so I will get dO by dp alpha times dH by dq alpha, or it can act on dH by dq alpha, so I will get plus O times d2H by dp alpha dq alpha. And similarly, in this term, either I will have dO by dq alpha times dH by dp alpha, or O times d2H by dq alpha dp alpha.
Once more, for the second derivative terms of the Hamiltonian the order of differentiation is not important, so they cancel. And what is left is this set of objects, which is none other than a Poisson bracket. So I can rewrite the whole thing as: d by dt of the expectation value of this quantity, which potentially is a function of time because of the time dependence of my density, is the same thing as minus an integration over the entire phase space of rho against this entity, which is none other than the Poisson bracket of H with O. And this integration of the density against that quantity over the entire space is just our definition of the expectation value.
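Collecting the steps of this manipulation:

\[
\frac{d\langle \mathcal{O} \rangle}{dt}
= \int d\Gamma\; \frac{\partial \rho}{\partial t}\, \mathcal{O}
= \int d\Gamma\; \{H, \rho\}\, \mathcal{O}
= -\int d\Gamma\; \rho\, \{H, \mathcal{O}\}
= -\big\langle \{H, \mathcal{O}\} \big\rangle,
\]

where the third equality is the integration by parts, with the boundary terms and the mixed second derivatives of H dropping out.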
So we get that the time derivative of any quantity is related to the average of its Poisson bracket with the Hamiltonian, which is the quantity that is really governing time dependences. Yes?
AUDIENCE: Could you explain again why the time derivative, when it goes inside the integral, acts on rho as a partial derivative?
PROFESSOR: OK. So suppose I'm doing a two-dimensional integral over p and q. So I have some contribution from each point in this p and q. And so my integral is an integral dpdq, something evaluated at each point in p, q that could potentially depend on time. Imagine that I discretize this. So I really-- if you are more comfortable, you can think of this as a limit of a sum.
So this is my integral. If I'm interested in the time dependence of this quantity-- and it really depends only on time, because I integrated over p and q-- then I'm interested in something that is a sum of various terms, each term a function of time. Where do I put the time dependence? For each term in this sum, I look at how it depends on time. I don't care about the points to its left and to its right. OK?
Because the big D by Dt involves moving with the streamline. I'm not doing any moving with the streamline. I'm looking at each point in this two-dimensional space. Each point gives a contribution at that point that is time-dependent. And I take the derivative with respect to time at that point. Yes?
AUDIENCE: Couldn't you say that you have function O as just some function of p and q, and its time derivative would be Poisson bracket?
PROFESSOR: Yes.
AUDIENCE: And does the average of the time derivative would be the average of Poisson bracket, and you don't have to go through all the--
PROFESSOR: No. But you can see the sign doesn't work.
AUDIENCE: How come?
PROFESSOR: [LAUGHS] Because of all of these manipulations, et cetera. So the statement that you made is manifestly incorrect. You can't say that the time dependence of this thing is the-- whatever you were saying. [LAUGHS]
AUDIENCE: [INAUDIBLE].
PROFESSOR: OK. Let's see what you are saying.
AUDIENCE: So the Poisson bracket only comes in this way for averages, right?
PROFESSOR: OK. So what do we have here? We have that d rho by dt is the Poisson bracket of H and rho. OK? And we have that the average of O is an integral of rho times O. Now, how do you conclude from this set of results that d of the average of O by dt is the average of a Poisson bracket that involves O and H, irrespective of what we do with the sign?
AUDIENCE: Or if you look not at the average value of O but at the value of O at a point, and take-- I guess it would be the streamline derivative of it. So that's assuming that you're just assigning a value of O to each point, and watching how it changes with time as this point moves across the phase space.
PROFESSOR: OK. But you still have to do some bit of derivatives, et cetera, because--
AUDIENCE: But if you know that the volume in phase space is conserved, then we basically don't care that the function O is any different from the probability density.
PROFESSOR: OK. If I understand correctly, this is what you are saying: that for each representative point, I have an O alpha, which is a function of time. And then you want to say that the average of O is the same thing as the sum over alpha of O alpha of t divided by N, something like this.
AUDIENCE: Eh. Uh, I want to first calculate the time derivative of O. O remains a function of time and q and p. So I can calculate--
PROFESSOR: Yes. So this O alpha is a function of--
AUDIENCE: So if I said--
PROFESSOR: Yes. OK. Fine.
AUDIENCE: O is a function of q and p and t, and I take a streamline derivative of it. So differentiate it with respect to t. And then I average that thing over phase space. And then I should get the same result--
PROFESSOR: You should.
AUDIENCE: --perfectly. But--
PROFESSOR: More quickly? I don't think so. Because you are already explaining things at greater length than my derivation took. But that's fine. [LAUGHS]
AUDIENCE: Is there any special--
PROFESSOR: But I agree in spirit, yes. That each one of these will go along its streamline, and you can calculate the change for each one of them. And then you have to do an average of this variety. Yes.
AUDIENCE: [INAUDIBLE] when you talk about the time derivative of the probability density, it's the Poisson bracket of H and rho. But when you talk about the time derivative of an average, you have to add the minus sign.
PROFESSOR: And if you do this correctly here, you should get the same result.
AUDIENCE: Oh, OK.
PROFESSOR: Yes.
AUDIENCE: Well, along that line, though, are you using the fact that phase space volume is incompressible to then argue that the total time derivative of the ensemble average is the same as the ensemble average of the total time derivative of O, or not?
PROFESSOR: Could you repeat that? Mathematically, you want me to show that the time derivative of what quantity?
AUDIENCE: Of the average of O.
PROFESSOR: Of the average of O. Yes.
AUDIENCE: Is it in any way related to the average of dO over dt?
PROFESSOR: No, it's not. Because O-- I mean, so what do you mean by that? You have to be careful, because the way that I'm defining this here, O is a function of p and q. And what you want to do is to write something that is a sum over all representative points, divided by the total number, some kind of an average like this. And then I can define a time derivative here. Is that what you are--
AUDIENCE: Well, I mean, I was thinking that even if you start out with your observable being defined for every point in phase space, then if you were to take the total time derivative of that, before doing any ensemble averaging, then you would be accounting for p dot and q dot as well, right? And then if you were to take the ensemble average of that quantity, could you arrive at the same result?
PROFESSOR: I'm pretty sure that if you do things consistently, yes. That is, what we have done is essentially we started with a collection of trajectories in phase space and recast the result in terms of density and variables that are defined only as positions in phase space. The two descriptions completely are equivalent. And as long as one doesn't make a mistake, one can get one or the other.
This is actually a kind of well-known thing in hydrodynamics, because typically, in hydrodynamics, you write down equations for density and velocity at each point in space. But there is an alternative description in which we say that there are essentially particles that are flowing, and a particle that was here at this location is now somewhere else at some later time.
And people have tried hard. And there is a consistent definition of hydrodynamics that follows the second perspective. But I haven't seen it as being practical. So I'm sure that everything that you guys say is correct. But from the experience of what I know in hydrodynamics, I think this is the more practical description that people have been using. OK?
So where were we? OK. So back to our buzzing bees. We now have a way of looking at how densities and various quantities that you can calculate, like ensemble averages, are changing as a function of time. But the question that I had before is, what about equilibrium?
Because the thermodynamic definition of equilibrium, and my whole ensemble idea, was that essentially, I have all of these boxes and pressure, volume, everything that I can think of, as long as I'm not doing something that's like opening the box, is perfectly independent of time.
So how can I ensure that various things that I calculate are independent of time? Clearly, I can do that by having this density not really depend on time. OK? Now, of course, each representative point is moving around. Each micro state is moving around. Each bee is moving around. But I want the density that is characteristic of equilibrium to be something that does not change in time. Its time derivative is 0.
And so if I posit that this particular function of p and q is something that is not changing as a function of time, I have to require that the Poisson bracket of that function of p and q with the Hamiltonian, which is another function of p and q, is 0. OK?
So in principle, I have to go back to this equation over here, which is a partial differential equation in 6N-dimensional space, set it equal to 0, and solve it. Of course, rather than doing that, we will guess the answer. And the guess is clear from here, because all I need to do is to make rho equilibrium depend on the coordinates of phase space through some functional dependence on the Hamiltonian, which itself depends on the coordinates of phase space. OK?
Why does this work? Because then when I take the Poisson bracket of, let's say, this function of H with H, what do I have to do? I have to do a sum over alpha d rho with respect to, let's say-- actually, let's write it in this way. I will have the dH by dp alpha from here. I have to multiply by d rho by dq alpha. But rho is a function of only H. So I have to take a derivative of rho with respect to its argument H. And I'll call that rho prime. And then the derivative of the argument, which is H, with respect to q alpha. That would be the first term.
The next term would be, with the minus sign, from the H here, dH by dq alpha times the derivative of rho with respect to p alpha. But rho only depends on H, so I will get the derivative of rho with respect to its one argument, times the derivative of that argument with respect to p alpha. OK? So you can see that, apart from the ordering of the factors that are multiplying each other, the two terms are identical, and this is 0. OK?
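Explicitly, for any differentiable function $\rho_{\rm eq}(\mathbf{p},\mathbf{q}) = \rho\big(H(\mathbf{p},\mathbf{q})\big)$:

\[
\{H, \rho(H)\}
= \sum_\alpha \left(\frac{\partial H}{\partial q_\alpha}\,\rho'(H)\,\frac{\partial H}{\partial p_\alpha}
- \frac{\partial H}{\partial p_\alpha}\,\rho'(H)\,\frac{\partial H}{\partial q_\alpha}\right) = 0,
\]

so $\partial \rho_{\rm eq}/\partial t = \{H, \rho_{\rm eq}\} = 0$.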
So any function I choose of H, in principle, satisfies this. And this is what we will use consistently all the time in statistical mechanics, depending on the ensemble that we have. Like you probably know, when we are in the micro canonical ensemble, we say what the energy is.
And then we say that the density is a delta function-- essentially zero, except on the surface that corresponds to having the right energy. So you construct in phase space the surface that has the right energy. So it's a function of this form.
So this is the micro canonical ensemble. When you are in the canonical ensemble, I use a rho that is proportional to e to the minus beta H, another function of H. So it's, again, the same idea. OK?
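In formulas, the two standard equilibrium choices mentioned here are:

\[
\rho_{\text{microcanonical}} \;\propto\; \delta\big(H(\mathbf{p},\mathbf{q}) - E\big),
\qquad
\rho_{\text{canonical}} \;\propto\; e^{-\beta H(\mathbf{p},\mathbf{q})},
\]

both depending on the phase-space coordinates only through $H$.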
That's almost but not entirely true, because sometimes there are also other conserved quantities. Let's say, for example, that we have a collection of particles in a cavity that has the shape of a sphere. Because of the symmetry of the problem, angular momentum is going to be a conserved quantity. Angular momentum you can also write as some complicated function of p and q. For example, q cross p summed over all of the particles. But it could be some other conserved quantity.
So what does this conservation law mean? It means that if you evaluate L at some time, it is going to be the same L for the coordinates and momenta at any other time. Or in other words, dL by dt, which you obtain by summing over all coordinates dL by dp alpha times p alpha dot, plus dL by dq alpha times q alpha dot, is zero.
Essentially, I am taking time derivatives of all of the arguments. And I did not put any explicit time dependence here. And this is again a sum over alpha of dL by dp alpha times p alpha dot, where p alpha dot is minus dH by dq alpha, plus dL by dq alpha times q alpha dot, where q alpha dot is dH by dp alpha. So you see that this is the same thing as the Poisson bracket of L and H. So conserved quantities, which are functions of coordinates and momenta that don't change as a function of time, are also quantities that have zero Poisson bracket with the Hamiltonian. So if I have a conserved quantity, then I have a more general solution to my d rho by dt equals 0 requirement. I could make a rho equilibrium which is a function of H of p and q, as well as, say, L of p and q.
And when you go through the Poisson bracket process, you will either be taking derivatives with respect to the first argument here. So you would get rho prime with respect to the first argument. And then you would get the Poisson bracket of H and H. Or you would be getting derivatives with respect to the second argument. And then you would be getting the Poisson bracket of L and H. And both of them are 0.
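In equations, the conservation law and the generalized equilibrium density just described read:

\[
\frac{dL}{dt}
= \sum_\alpha \left(\frac{\partial L}{\partial q_\alpha}\frac{\partial H}{\partial p_\alpha}
- \frac{\partial L}{\partial p_\alpha}\frac{\partial H}{\partial q_\alpha}\right)
= \{L, H\} = 0,
\qquad
\{H, \rho(H, L)\}
= \frac{\partial \rho}{\partial H}\,\{H, H\} + \frac{\partial \rho}{\partial L}\,\{H, L\} = 0 .
\]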
By definition, L is a conserved quantity. So any density that is a function of the Hamiltonian, since the energy of the system is conserved as a function of time, as well as of any other conserved quantities, such as angular momentum, et cetera, is certainly a valid solution. So indeed, when I drew here, in the micro canonical ensemble, a surface that corresponds to a constant energy, well, if I am in a spherical cavity, only the part of that surface that corresponds to the right angular momentum is going to be accessible.
So essentially, what I know is that if I have conserved quantities, my trajectories will explore the subspace that is consistent with those conservation laws. And the statement here is that ultimately, those subspaces that correspond to the appropriate conservation laws are equally populated. Rho is constant over them. So in some sense, we started with the definition of rho by putting these points around and calculating probability that way, which was my objective definition of probability.
And through this Liouville theorem, we have arrived at something that is more consistent with the subjective assignment of probability. That is, the only thing that I know, forgetting about the dynamics, is that there are some conserved quantities, such as H, angular momentum, et cetera. And I say that any point in phase space that does not violate those constraints in this, say, micro canonical ensemble would be equally populated. There was a question somewhere. Yes?
AUDIENCE: So I almost feel like the statement that rho has to not change in time is too strong, because if you go over to the equation that says the rate of change of an observable is equal to the integral with the Poisson bracket of rho and H, then it means that any observable is constant in time, right?
PROFESSOR: Yes.
AUDIENCE: So that means any observable we can think of, as a function of p and q, is constant?
PROFESSOR: Yep. Yep. Because-- and that's the best thing that we can think of in terms of-- because if there is some observable that is time-dependent-- let's say 99 observables are time-independent, but one is time-dependent, and you can measure that, would you say your system is in equilibrium? Probably not. OK?
AUDIENCE: It seemed like the same method that you did to show that the density is a function of [INAUDIBLE], or [INAUDIBLE] that it's the function of any observable that's a function of q. Right?
PROFESSOR: Observables? No. I mean, here, if this answer is 0, it states something about this quantity averaged. So if this quantity does not change as a function of time, it is not a statement that the Poisson bracket of H and O is 0. The statement that the Poisson bracket of H and O is 0 everywhere is different from its ensemble average being 0.
What you can show-- and I think I have a problem set for that-- is that if this statement is correct for every O, that the average is 0, then your rho has to satisfy this condition: the Poisson bracket of rho and H is equal to 0. OK.
So now the big question is the following. We arrived at a way of thinking about equilibrium in a system of particles, with this many-to-one mapping, et cetera, in terms of densities. We arrived at the definition of what the density is going to be in equilibrium.
But the thermodynamic statement is much, much more severe. The statement, again, is that if I have a box and I open the door of the box, the gas expands to fill the empty space or the other part of the box. And it will do so all the time.
Yet the equations of motion that we have over here are time reversal invariant. And we did not manage to remove that. We can show that this Liouville equation, et cetera, is also time reversal invariant. So in every case, if you succeed in showing that there is a density that is in half of the box and expands to fill the entire box, there will be a density that presumably goes the other way.
Because that will also be a solution of this equation. So when we go back from the statement of what the equilibrium solution is and ask, do I know that I will eventually reach this equilibrium solution as a function of time, we have not shown that. And we will attempt to do so next time.