Description: In this lecture, Prof. Kardar introduces the Scaling Hypothesis, including the Homogeneity Assumption, Divergence of the Correlation Length, Critical Correlation Functions and Self-similarity.
Instructor: Prof. Mehran Kardar
Lecture 6: The Scaling Hypothesis
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: OK. Let's start. So we have been thinking about critical points. These arise in many phase diagrams, such as the one we have for the liquid gas system, where there's a coexistence line, let's say, between the gas and the liquid that terminates; or we looked at the case of a magnet, where as a function of [INAUDIBLE] temperature there was in some sense coexistence between magnetizations in different directions, terminating at the critical point.
So why is it interesting to take a whole phase diagram that we have over here? For example, for this system, we can also have a solid, et cetera.
AUDIENCE: So isn't this [INAUDIBLE] [INAUDIBLE]
PROFESSOR: [INAUDIBLE]. Yes. Thank you. And focus on just the one point-- on the vicinity of this one point. And the reason for that was this idea of universality. There are many things happening in the vicinity of this point, as far as singularities, correlations, et cetera, are concerned, that are independent of whatever the constituents of the system are.
And these singularities we try to capture through some scaling laws. And I've been kind of constructing a table of these singularities. Let's do it one more time here. So we could look at a system such as the liquid gas-- so let's have here "system"-- and then we could look at the liquid gas.
And for that, we can look at a variety of exponents. We have alpha, beta, gamma, delta, nu, eta. And for the liquid gas, I'll write you some numbers. The heat capacity diverges with an exponent alpha that is 0.11-- slightly more accurate than I had given you before. Beta is 0.33. For gamma, I will give you a few more digits just to indicate the accuracy of experiments: it is 1.238 plus or minus 0.012.
So these exponents are obtained by looking at the fluid system with light scattering-- probing this critical opalescence that we were talking about-- in more detail and more accurately. Delta is 4.8. Nu is, again from light scattering, 0.629 plus or minus 0.003. Eta is 0.0320 plus or minus 0.013. And essentially these last three are obtained from light scattering.
Sorry. Another case that I mentioned is that of the superfluid. And in this general construction of the Landau-Ginzburg theories that we had, the liquid gas would be n equals 1, and the superfluid would be n equals 2. And I just want to mention that actually the most experimentally accurate exponent that has been determined is the heat capacity at the superfluid helium transition.
I had said that it kind of looks like a logarithmic divergence. If you look at it very closely, it is in fact a cusp, and does not diverge all the way to infinity, so it corresponds to a slightly negative value of alpha, which is minus 0.0127 plus or minus 0.0003. And the way that this has been determined is they took superfluid helium to the space shuttle, and these experiments were done away from the gravity of the earth in order not to have to worry about the density difference that we would have across the system.
Other exponents that you have for this system-- let me write them down-- beta is around 0.35, gamma is 1.32, delta is 4.79, nu is 0.67, eta is 0.04. We could similarly keep adding the exponents for other systems to this table [INAUDIBLE]. The question of why these numbers are the same for all of these systems is therefore profound.
These are dimensionless numbers. So in some sense, it is a little bit of mathematics. It's not like you calculate the charge of the electron and you get a number. These don't depend on a specific material.
Therefore, what is important about them is that they must somehow be capturing some aspect of the collective behavior of all of these degrees of freedom, in which the details of what the degrees of freedom are is not that important. Maybe the type of symmetry breaking is important. So unless we understand and derive these numbers, there is something important about the collective behavior of many degrees of freedom that we have not understood.
And it is somehow a different question if you are thinking about phase transitions. So let's say you're thinking about superconductors. There's a lot of interest in making high temperature superconductors, pushing Tc further and further up. So that's certainly a materials problem.
We are asking a different question: why is it that, whether you have a high temperature superconductor or any other type of system, the collective behavior is captured by the same set of exponents? So in an attempt to answer that, we did this Landau-Ginzburg theory and tried to calculate its singular behavior using the saddlepoint approximation.
And the numbers that we got: alpha was 0, meaning that there was a discontinuity. Beta was 1/2, gamma was 1, delta was 3, nu was 1/2, eta was 0, which don't quite match the numbers that we have up there. So the question is, what should you do?
We've made an attempt, and that attempt was not successful. So we are going to completely forget about that for a while, and try to approach the problem from a different perspective and see how far we can go, whether we can gain any new insights. So on that new approach I put the name of the scaling hypothesis.
And the reason for that will become apparent shortly. So what we have in common in both of these examples is that there is a line, along which some thermodynamic function has discontinuities, that terminates at a particular point. And in the case of the magnetic system, we can look at the singularities approaching that point either along the direction that corresponds to changing temperature-- and parametrize that through t-- or we can change the magnetic field and approach the problem from the other direction.
And we saw that there were analogs for doing so in the liquid gas system also. And in particular, let's say we calculated a magnetization, we found that there was one form of singularity coming this way, one form of singularity coming that way. We look at the picture for the liquid gas system that I have up there, and it's not necessarily clear which direction would correspond to this nice symmetry breaking or non-symmetry breaking that you have for the magnetic system.
So you may well ask, suppose I approach the critical point along some other direction. Maybe I come in along the path such as this. I still go to the critical point. We can imagine that for the liquid gas system. And what's the structure of the singularities? I know that there are different singularities in the t and h direction. What is it if I come and approach the system along a different direction, which we may well do for a liquid gas system?
Well, we could actually answer that if we go back to our saddlepoint approximation. In the saddlepoint approximation, we said that ultimately, the singularities in terms of these two parameters t and h-- so this is in the saddlepoint-- are obtained by minimizing this function that was appearing in the expansion in the exponent. There was a t over 2 m squared, there was a u m to the fourth, and there was an hm.
So we had to minimize this with respect to m. And clearly, what that gives us is the m that solves the equation corresponding to this minimization, which is a function of t and h. And in particular, approaching along the two directions that I've indicated: if I'm along the direction where h equals 0, I essentially balance these two terms. Let's just write this as a proportionality. I don't really care about the numbers.
Along the direction where h equals 0, I have to balance u m to the fourth and t m squared. So m squared will scale like t, and m will scale like the square root of t. And more precisely, we calculated this formula for t negative and h equals to 0. If I, on the other hand, come along the direction that corresponds to t equals to 0, along that direction I don't have the first term.
I have to balance u m to the fourth and hm. So we immediately see that m will scale like h over u-- in fact, more correctly, h over 4u to the power of one third. You substitute this in the free energy and you find that the singular part of the free energy, as a function of t and h in this saddlepoint approximation, has the following form, up to proportionality.
If I substitute this in the formula for t negative, I will get something like minus t squared over 4u-- forget about the numbers: t squared over u. If I go along the t equals to 0 direction and substitute that over there, I will get, from the m to the fourth, h to the four thirds divided by u to the one third.
Even the u dependence I'm not interested in. I'm really interested in the behavior close to the critical point as a function of t and h. u is basically some non-universal number that doesn't go to 0. I could in some sense capture these two expressions by a form that is t squared times some function-- let's call it g sub f-- which is a function of h over t to the delta; let's see shortly how this exponent delta is determined.
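Putting the algebra just described into symbols (a sketch in the lecture's notation, with numerical factors dropped):

```latex
% Saddlepoint: minimize psi(m) = (t/2) m^2 + u m^4 - h m  (factors dropped)
\begin{align}
  h=0,\ t<0: &\quad |t|\,\bar m \sim u\,\bar m^{3}
    \;\Rightarrow\; \bar m \sim \sqrt{|t|/u},
    \qquad f_{\rm sing} \sim -\,t^{2}/u, \\
  t=0: &\quad u\,\bar m^{4} \sim h\,\bar m
    \;\Rightarrow\; \bar m \sim (h/u)^{1/3},
    \qquad f_{\rm sing} \sim -\,h^{4/3}/u^{1/3},
\end{align}
% both limits captured by the homogeneous form
\begin{equation}
  f_{\rm sing}(t,h) \propto t^{2}\, g_f\!\left(\frac{h}{t^{\Delta}}\right).
\end{equation}
```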
So my claim is that this is consistent with the behavior coming along these two different special directions. In general, anywhere else where t and h are both nonzero, the answer for m will be some solution of a cubic equation, but we can arrange it to only be a function of h over t to the delta and have this form. Now, rather than explicitly showing you how that arises, which is not difficult-- you can do that-- since there's something that we need to do later on, I'll show it in the following manner.
I have not specified what this function g sub f is. But I know its behavior along h equals to 0 here. And so if I put h equals to 0, the argument of the function goes to 0. So if I say that the function at zero argument is a constant-- the constant, let's say, is minus 1 over u on one side and 0 on the other side-- then everything's fine.
So what I have is that the limit of the function, as its argument goes to 0, should be some constant. Well, what about the other direction? How can I reproduce, from a form such as this, the behavior when t equals to 0? Because I see that when t equals to 0, the answer of course cannot depend on t itself, but goes as a power law as a function of h.
Is it consistent with this form? Well, as t goes to 0 in this form, the numerator here goes to 0, the argument of the function goes to infinity, and I need to know something about the behavior of the function at infinity. So let's say that the limiting behavior of g f of x, as its argument goes to infinity, is proportional to the argument to some power p.
And I don't know what that power is. Then if I look at this function, the whole thing in this limit where t goes to 0 will behave as follows: there's a t squared out front that goes to 0, and the argument of the function goes to infinity, so the function will go like its argument to some power. So it goes like h over t to the delta, to the power p.
So what do I know? I know that the answer should really be proportional to h to the four thirds. So I immediately know that my p should be four thirds. But what about this delta? I never told you what delta was. Now I can figure out what delta is, because the answer should not depend on t. t has gone to 0.
And so what power of t do I have? I have 2 minus delta times p, which should be 0. So my delta should be 2 over p-- 2 divided by four thirds-- so it should be three halves.
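The determination of the gap exponent can be summarized as follows (again just a sketch of the steps described above):

```latex
% t -> 0 limit of f_sing ~ t^2 g_f(h/t^Delta), assuming g_f(x -> infinity) ~ x^p:
\begin{equation}
  f_{\rm sing}(t\to 0,h) \sim t^{2}\left(\frac{h}{t^{\Delta}}\right)^{p}
    = t^{\,2-p\Delta}\, h^{p},
\end{equation}
% matching to f_sing(0,h) ~ h^{4/3}, independent of t:
\begin{equation}
  p=\tfrac{4}{3}, \qquad 2-p\,\Delta=0
  \;\Rightarrow\; \Delta = \frac{2}{p} = \frac{3}{2}.
\end{equation}
```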
Why is this exponent relevant to the question that I had before? You can see that the function that describes the free energy as a function of these two coordinates, if I look at the region where h and t are both non-zero, depends very much on this combination h divided by t to the delta, and that delta is three halves. So, for example, if I were to draw here curves where h goes like t to the three halves-- with some coefficient, I don't know what that coefficient is-- then essentially, everything that is on the side that hugs the vertical axis behaves like the h singularity.
Everything that is over here behaves like the t singularity. So consider a path that, for example, comes in along a straight line. If I, let's say, call the distance that I have to the critical point s, then t is something like s cosine of theta and h is something like s sine of theta. You can see, however, that the combination h over t to the delta will diverge as s goes to 0, because I have s to the three halves downstairs that will overcome the linear power of s that I have upstairs.
So for any linear path that goes through the critical point, eventually for small s I will see the type of singularity that is characteristic of the magnetic field-- if the exponents are according to the saddlepoint. That's under this assumption, of course. But if I therefore knew the correct delta for all of those systems, I would also be able to answer, let's say for the liquid gas, whether, if I take a linear path that goes through the critical point, I would see one set of singularities or the other set of singularities.
So this delta, which is called the gap exponent, gives you the answer to that. But of course I don't know the other exponents. There is no reason for me to trust the gap exponent that I obtained in this fashion.
So what I say is: let's assume that for any critical point, the singular part of the free energy on approaching the critical point, which depends on this pair of coordinates, has a form similar to what we had over here, except that I don't know the exponents. So rather than putting t squared, I write t to the 2 minus alpha, for reasons that will become apparent shortly, times some function of h over t to the delta, for some alpha and delta.
So this is certainly already an assumption. This mathematically corresponds to having homogeneous functions. Because if I have a function of x and y, I can certainly write lots of functions such as x squared plus y squared plus a constant plus x cubed y cubed that I cannot rearrange into this form.
But there are certain functions of x and y that I can rearrange so that I can pull out some factor of let's say x squared out front, and everything that is then in a series is a function of let's say y over x cubed. Something like that. So there's some class of functions of two arguments that have this homogeneity.
So we are going to assume that the singular behavior close to the critical point is described by such a function. That's an assumption. But having made that assumption, let's follow its consequence and let's see if we learned something about that table of exponents.
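In symbols, the homogeneity assumption for the singular part of the free energy reads (a sketch, with alpha and the gap exponent delta left undetermined):

```latex
% Scaling hypothesis for the free energy (an assumption):
\begin{equation}
  f_{\rm sing}(t,h) = |t|^{\,2-\alpha}\;
     g_f\!\left(\frac{h}{|t|^{\Delta}}\right),
\end{equation}
% with g_f an unspecified function of the single combination h/|t|^Delta.
```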
Now the first thing to note is that clearly I chose this alpha over here so that when I take two derivatives with respect to t, I would get something like a heat capacity, for which I know what the divergence is-- that's the divergence characterized by alpha. But one thing that I have to show you is that when I take a derivative of one of these homogeneous functions with respect to one of its arguments, I will generate another homogeneous function.
If I take one derivative with respect to t, that derivative can either act on the prefactor, leaving the function unchanged, or it can act on the argument of the function and give me t to the 2 minus alpha times minus delta h over t to the power of delta plus 1, and then the derivative of the function evaluated at h over t to the delta.
So I just took derivatives. I can certainly pull out a factor of t to the 1 minus alpha. Then the first term is just 2 minus alpha times the original function. The second term is minus delta h divided by t to the delta-- because I pulled out t to the 1 minus alpha, one power of t gets rid of the plus 1 in that exponent-- and then I have the derivative of the function.
So this is a completely different function. It's not simply the derivative of the original function. But whatever it is, it is still only a function of the combination h over t to the delta. So the derivative of a homogeneous function is some other homogeneous function. Let's call it g2-- it doesn't matter-- let's call it g1 of h over t to the delta.
And the same will happen if I take a second derivative. So I know that if I take two derivatives, I will get t to the minus alpha-- I basically drop the power by two over there-- and then some other function of h over t to the delta. Clearly, again, if I say that I'm looking at the line where h equals to zero for a magnet, then the argument of the function goes to 0.
If I say that the function, as its argument goes to 0, is a constant, like we had over here, then I will have the singularity t to the minus alpha. So I've clearly engineered it so that whatever the value of alpha is in this table, I can put it over here and I have the right singularity for the heat capacity. Essentially I've put it there by hand.
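Schematically, the heat capacity statement just made is:

```latex
% Two t-derivatives of a homogeneous function give another homogeneous function:
\begin{equation}
  C_{\rm sing} \sim \frac{\partial^{2} f_{\rm sing}}{\partial t^{2}}
   \sim |t|^{-\alpha}\, g_{2}\!\left(\frac{h}{|t|^{\Delta}}\right)
  \;\xrightarrow{\,h=0\,}\; |t|^{-\alpha}.
\end{equation}
```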
Let me comment on one other thing. When we are looking at just the temperature-- let's say we are looking at something like a superfluid-- the only parameter that we have at our disposal is temperature, in the sense of the distance t to Tc. Let's say we plot the heat capacity, and then we see a divergence of the heat capacity on the two sides.
Who said that I should have the same exponent on this side and on this side? You could say that generally, in principle, I should allow for that. And in principle, there is no problem with that. If there is a function that has one behavior here and another behavior there, who says that the two exponents have to be the same?
But I have said something more. I have said that in all of the cases that I'm looking at, I know that there is some other axis. And for example, if I am in the liquid gas system, I can start from down here, go all the way around back here without encountering a singularity.
I can go from the liquid all the way to the gas without encountering a singularity. So that says that the system is different from a system that, let's say, has a line of singularities. So if I now take the functions that in principle have two different singularities-- t to the minus alpha minus and t to the minus alpha plus on the h equals to 0 axis-- and try to extend them into the entire space by putting these homogeneous functions in front of them, there is one and only one way in which the two functions can match exactly on this t equals to 0 line, and that's if the two exponents are the same and you are dealing with the same function.
So there we put in a bit of physics. That is, in principle, mathematically, if you don't have the h axis and you look at the one line and there's a singularity, there's no reason why the two singularities should be the same. But we know that we are looking at the class of physical systems where there is the possibility to analytically go from one side to the other side. And that immediately imposes the constraint that alpha plus should be alpha minus, and one alpha is in fact sufficient. I've given you the physical reason for why that is. If you want to see the precise mathematical details step by step, that's in the notes.
So fine. So far we haven't learned much. We've justified why the two alphas should be the same above and below, but we put the alpha-- the one alpha-- in by hand. And then we have this unknown delta also. But let's proceed.
Let's see what other consequences emerge, because now we have a function of two variables. I took derivatives with respect to t; I can take derivatives with respect to h. And in particular, the magnetization m as a function of t and h is obtained from a derivative of the free energy with respect to h.
There could potentially be some factor-- the response to adding a field could come with some factor of beta c or whatever. It's not important. The singular part will come from this. And so, taking a derivative of this function, I will get this t to the 2 minus alpha.
The derivative is with respect to h, but h comes in the combination h over t to the delta, so it will bring down a factor of t to the minus delta up front, and then the derivative function-- let's call it gf1, for example. So now I can look at this function in the limit where h goes to 0: coming along the coexistence line, h goes to 0.
The argument of the function has gone to 0. Makes sense that the function should be constant when its argument goes to 0. So the answer is going to be proportional to t to the 2 minus alpha minus delta. But that's how beta was defined.
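As a sketch of this step:

```latex
% One h-derivative: h only appears as h/|t|^Delta, so it pulls out |t|^(-Delta):
\begin{equation}
  m(t,h) \sim \frac{\partial f_{\rm sing}}{\partial h}
   \sim |t|^{\,2-\alpha-\Delta}\,
     g_f'\!\left(\frac{h}{|t|^{\Delta}}\right)
  \;\xrightarrow{\,h\to 0\,}\; |t|^{\,2-\alpha-\Delta}
  \quad\Rightarrow\quad \beta = 2-\alpha-\Delta .
\end{equation}
```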
So if I know my beta and alpha, then I can calculate my delta from this exponent identity. Again, so far we haven't done much: we have traded the unknown exponents of this singular form for this gap exponent that we don't know. I can also look at the other limit where t goes to 0-- that is, calculating the magnetization along the critical isotherm.
So then the argument of the function has gone to infinity. And whatever the answer is should not depend on t, because I have said t goes to 0. So I apply the same trick that I did over here. I say that when the argument goes to infinity, the function goes like some power of its argument.
And clearly I have to choose that power such that the t dependence-- since t is going to 0-- is gotten rid of. The only way that I can do that is if p is 2 minus alpha minus delta, divided by delta. So having done that, the whole thing will then be a function of h to the p.
But the shape of the magnetization along the critical isotherm, which was also the shape of the isotherm of the liquid gas system, we were characterizing by an exponent that we were calling 1 over delta. So we now have a formula that says my small delta should in fact be the inverse of p: it should be capital delta over 2 minus alpha minus capital delta. Yes?
AUDIENCE: Why isn't the exponent p minus 1 after you've differentiated [INAUDIBLE]? Because g originally was defined as [INAUDIBLE].
PROFESSOR: Let's call it p prime. Because actually, you're right. If this is the same g and this has a particular singularity [INAUDIBLE]. But at the end of the day, it doesn't matter. So now I have gained something that I didn't have before. That is, in principle, given alpha and beta, my two exponents, I'm able to figure out what delta is.
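The critical isotherm argument just discussed can be sketched as:

```latex
% t -> 0 along the critical isotherm, assuming g_f'(x -> infinity) ~ x^p:
\begin{equation}
  m(0,h) \sim |t|^{\,2-\alpha-\Delta}
     \left(\frac{h}{|t|^{\Delta}}\right)^{p}
  \;\Rightarrow\; p = \frac{2-\alpha-\Delta}{\Delta},
  \qquad
  m(0,h) \sim h^{1/\delta}
  \;\Rightarrow\; \delta = \frac{\Delta}{2-\alpha-\Delta}.
\end{equation}
```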
And actually I can also figure out what gamma is, because gamma describes the divergence of the susceptibility, [INAUDIBLE] which is the derivative of magnetization with respect to field. I have to take another derivative of this function. Taking another derivative with respect to h will bring down another factor of t to the minus delta. So this exponent becomes 2 minus alpha minus 2 delta.
Times some other double-derivative function of h over t to the delta. And for susceptibilities, we are typically interested in the limit where the field goes to 0. And we defined them to diverge with exponent gamma. So we have identified gamma to be 2 delta plus alpha minus 2.
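And the susceptibility step, schematically:

```latex
% A second h-derivative pulls out another factor of |t|^(-Delta):
\begin{equation}
  \chi(t,h\to 0) \sim
    \frac{\partial^{2} f_{\rm sing}}{\partial h^{2}}\bigg|_{h=0}
   \sim |t|^{\,2-\alpha-2\Delta}
  \quad\Rightarrow\quad \gamma = 2\Delta + \alpha - 2 .
\end{equation}
```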
So we have learned something. Let's summarize it. So the consequences: one is that we have established the same critical exponents above and below.
Now, since various quantities of interest are obtained by taking derivatives of our homogeneous function, and they turn into homogeneous functions, we conclude that all quantities are homogeneous functions of the same combination h over t to the delta. The same delta governs all of them.
And thirdly, once we make this ansatz, our assumption for the free energy, we can calculate the other exponents on the table. So all, or almost all, other exponents are related to two of them, in this case alpha and delta. Which means that if you have a number of different exponents that all depend on two, there should be some identities-- exponent identities.
That is, these numbers in the table, we predict, if all of this is valid, have some relationships among them. So let's show a couple of these relationships. Let's look at the combination alpha plus 2 beta plus gamma-- measurements of heat capacity, magnetization, susceptibility. Three different things.
So alpha is alpha. My beta up there is 2 minus alpha minus delta. My gamma is 2 delta plus alpha minus 2. We do the algebra. There's one alpha, minus 2 alpha, plus alpha: the alphas cancel. Minus 2 deltas plus 2 deltas: those also cancel. I have 2 times 2 minus 2, so that's 2.
So the prediction is that you take some line in the table, add alpha plus 2 beta plus gamma, and they should add up to two. So let's pick something. Let's pick the first-- actually, let's pick the last line, the one that has a negative alpha. So let's do n equals to 3.
For n equals to 3 I have alpha, which is minus 0.12. I have twice beta-- beta is 0.37-- so that becomes 0.74. And then I have gamma, which is 1.39. So, combining the 0.74 and the 1.39 and the minus 0.12: I have 9 plus 4, which is 13, minus 2, which gives 1 with a carry. Then I have 3 plus 7, which is 10, minus 1, which is 9.
But then I had the 1 that was carried over, so that 9 becomes 10 and I write 0 with another carry. And then 1 plus that carry gives 2-- so 2.01. Not bad. Now this goes by the name of the Rushbrooke identity. Rushbrooke made a simple manipulation based on thermodynamics and obtained a relationship among these exponents.
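Written out, the cancellation just performed is:

```latex
% Rushbrooke identity, from beta = 2 - alpha - Delta and gamma = 2 Delta + alpha - 2:
\begin{equation}
  \alpha + 2\beta + \gamma
    = \alpha + 2(2-\alpha-\Delta) + (2\Delta+\alpha-2) = 2 .
\end{equation}
```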
Let's do another one. Let's take delta and subtract 1 from it. What is my delta? I have delta equal to delta over 2 plus alpha minus delta-- this is small delta versus big delta. And then I have minus 1. Taking that into the numerator with the common denominator of 2 plus alpha minus delta, this minus delta becomes plus delta, so the numerator becomes 2 delta minus alpha minus 2. 2 delta--
AUDIENCE: Should that be a minus alpha in the denominator?
PROFESSOR: It better be. Yes. So the numerator is 2 delta plus alpha minus 2, and we can read off the gamma. So this is gamma over beta. And let's check this, let's say for n equals to 2. No, let's check it for n equals to 1, for the following reason: for n equals to 1, what we have for delta is 4.8, minus 1, which would be 3.8.
And on the other side, we have gamma over beta. Gamma is 1.24, roughly, divided by beta, 0.33, which is roughly one third. So I multiply this by 3, and that becomes 3.72. This one is named after another famous physicist, Ben Widom, as the Widom identity.
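As a quick numerical sanity check of the Rushbrooke and Widom identities, one can plug in the rounded exponent values quoted in the lecture (the script below is only an illustration; the dictionary and its labels are not part of the lecture):

```python
# Quick check of the Rushbrooke and Widom identities against the rounded
# exponent values quoted in the lecture (illustrative only).
exponents = {
    "n=1 (liquid gas)": dict(alpha=0.11, beta=0.33, gamma=1.238, delta=4.8),
    "n=3 (magnet)":     dict(alpha=-0.12, beta=0.37, gamma=1.39),
}

for system, e in exponents.items():
    # Rushbrooke: alpha + 2*beta + gamma should be close to 2
    rushbrooke = e["alpha"] + 2 * e["beta"] + e["gamma"]
    print(f"{system}: alpha + 2 beta + gamma = {rushbrooke:.2f}")
    if "delta" in e:
        # Widom: delta - 1 should be close to gamma / beta
        widom_lhs = e["delta"] - 1
        widom_rhs = e["gamma"] / e["beta"]
        print(f"{system}: delta - 1 = {widom_lhs:.2f}, gamma/beta = {widom_rhs:.2f}")
```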
So that's nice. We can start learning that although we don't know anything about this table, these are not independent numbers. There's relationship between them. And they're named after famous physicists. Yes?
AUDIENCE: Can we briefly go over again what extra assumption we had put in to get these in and these out? Is it just that we have this homogeneous function [INAUDIBLE]?
PROFESSOR: That's right. So you assume that the singularity in the vicinity of the critical point as a function of deviations from that critical point can be expressed as a homogeneous function. The homogeneous function, you can rearrange any way you like. One nice way to rearrange it is in this fashion.
The homogeneous function will depend on two exponents. I chose to write it as 2 minus alpha so that one of the exponents would immediately be alpha. The other one I couldn't immediately write in terms of beta or gamma; I had to do these manipulations to find out what the relationship is [INAUDIBLE].
But the physics of it is simple. That is, once you know the singularity of a free energy, various other quantities you obtain by taking derivatives of the free energy. That's [INAUDIBLE] And so then you would have the singular behavior of [INAUDIBLE].
So I started by saying "all other exponents," but then I realized we have nothing so far that tells us anything about nu and eta. Because nu and eta relate to correlations; they are, in a sense, microscopic quantities. Alpha, beta, gamma depend on macroscopic thermodynamic quantities-- magnetization, susceptibility.
So there's no way-- well, no easy way or no direct way-- to get information about nu and eta. So I will go to assumption 2.0, the next version of the homogeneity assumption, which is to emphasize that we certainly know, again from physics and the relationship between susceptibility and correlations, that the reason for the divergence of the susceptibility is that the correlations become large.
So we'll emphasize that. So let's write our ansatz not about the free energy, but about the correlation length. So let's replace that ansatz with homogeneity of the correlation length. Once more, we have a structure where there is a line that terminates at the point where the two parameters, t and h, go to 0.
And we know that on approaching this point, the system will become cloudy. There's a correlation length that diverges on approaching that point, as a function of these two arguments. I'm going to make the same homogeneity assumption for the correlation length. And again, this is an assumption. I say that this is t to the minus nu.
The exponent nu describes the divergence of the correlation length. Times some other function-- it's not that first g that we wrote, so let's call it g sub xi-- of h over t to the delta. We never discussed it, but this function immediately also tells me, if you approach the critical point along the critical isotherm, how the correlation length diverges, through the various tricks that we have discussed.
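In symbols, this second version of the homogeneity assumption is (a sketch):

```latex
% Homogeneity assumption 2.0, now made for the correlation length itself:
\begin{equation}
  \xi(t,h) \simeq |t|^{-\nu}\;
    g_{\xi}\!\left(\frac{h}{|t|^{\Delta}}\right),
\end{equation}
% with nu the correlation-length exponent and Delta the same gap exponent.
```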
But this is going to tell me something more if, from here, I can reproduce my scaling assumption 1.0. So there is one other step that I can make: assume the divergence of xi is responsible-- let's even say solely responsible-- for the singular behavior.
And you say, what does all of this mean? So let's say that I have a system-- could be my magnet, could be my liquid gas-- that has size L on each side. And I calculate the log of the partition function, log Z.
Log Z will certainly have a part that is regular-- let's say the contribution of phonons, all kinds of other regular things that don't have anything to do with the singularity of the system. Those things will give you some regular function.
But one thing that I know for sure is that the answer is going to be extensive. If I have any nice thermodynamic system and I am in d dimensions, then it will be proportional to the volume of the system that I have. Now the way that I have written it is not entirely nice, because log Z-- a log-- is a dimensionless quantity.
Maybe I measured my length in meters or centimeters or whatever, so I have dimensions here. So it makes sense to pick some length scale to make it dimensionless before multiplying it by some kind of a regular function of whatever I have-- t and h, for example. But what about the singular part?
For the singular part, the statement was that somehow it was a collective behavior. It involved many, many degrees of freedom. We saw, for the heat capacity of the solid at low temperatures, that it came from long wavelength degrees of freedom. So no lattice parameter is going to be important.
So one thing that I could do, maintaining extensivity, is to divide L by xi-- to write it as L over xi to the d, times something. That's the only thing that I did to ensure that extensivity is maintained when I have a kind of benign length scale, but in addition a length scale that is divergent.
Now you can see immediately that this says that log Z singular, as a function of t and h, will be proportional to xi to the minus d. And using that formula, it will be proportional to t to the d nu, times some other scaling function-- let's go back to calling it g f of h over t to the delta.
Physically, what it's saying is that when I am very close, but not quite at the critical point, I have a long correlation length, much larger than the microscopic length scale of my system. So what I can say is that within a correlation length, my degrees of freedom-- for magnetization or whatever it is-- are very much coupled to each other.
So maybe what I can do is regard each such region as an independent block. And how many independent blocks do I have? It is L over xi to the d. So the statement, roughly-- and it's part of the assumption-- is that this correlation length is getting bigger and bigger, and because things are correlated, the number of independent degrees of freedom that you have gets smaller and smaller.
And that change in the number of degrees of freedom is responsible for the singular behavior of the free energy. If I make this assumption about how this correlation length diverges, then I will get this form. So now my ansatz 2.0 matches my ansatz 1.0 provided d nu is 2 minus alpha. So I have d nu equals 2 minus alpha, which is named after Brian Josephson, so this is the Josephson relation. And it is different from the other exponent identities that we have because it explicitly depends on the dimensionality of space-- d appears in the problem. It's called hyperscaling for that reason. Yes?
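The argument just given can be sketched as follows (writing a for the microscopic length used to make the regular part dimensionless; that symbol is introduced here only for the sketch):

```latex
% Extensivity, with xi assumed to be the only length controlling the singular part:
\begin{equation}
  \ln Z = \left(\frac{L}{a}\right)^{d} g_{\rm reg}(t,h)
        + \left(\frac{L}{\xi}\right)^{d}
          g_{s}\!\left(\frac{h}{|t|^{\Delta}}\right),
\end{equation}
% so the singular free energy per volume scales as
\begin{equation}
  f_{\rm sing}(t,h) \sim \xi^{-d} \sim |t|^{\,d\nu}
     \,g_f\!\left(\frac{h}{|t|^{\Delta}}\right)
  \quad\Rightarrow\quad 2-\alpha = d\,\nu
  \qquad \text{(Josephson / hyperscaling)} .
\end{equation}
```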
AUDIENCE: So for the assumption that the divergence in xi is solely responsible for the singular behavior-- what are we excluding when we assume that? What else could happen that would make that not true?
PROFESSOR: Well, this thing that is appearing here could maybe have been some singular function of t and h.
AUDIENCE: So this is similar to what we were assuming before when we said that our free energy could have some regular part that depends on [INAUDIBLE] the part that [INAUDIBLE].
PROFESSOR: Yes, exactly. But once again, the test is really whether or not this matches up with experiments. So let's, for example, pick anything in that table with d equals to 3. Let's pick n equals to 2, which we haven't done so far. And so what the formula would say is that 3 times nu--
nu for the superfluid is 0.67-- should be 2 minus alpha. Well, alpha is almost 0 but slightly negative, so 2 minus alpha is 2.01. And what do we have? 3 times 0.67 is 2.01. So it matches. Actually, you may say, well, why do you emphasize that it's a function of dimension?
Well, a little bit later on in the course, we will do an exact solution of the so-called 2D Ising model. So this is a system that corresponds to d equals to 2, n equals to 1. And it was an important thing that people could actually solve an interacting problem, not in three dimensions but in two.
And the exponents for that: alpha is 0, but it really is a logarithmic divergence. Beta is 1/8, gamma is 7/4, delta is 15, nu is 1, and eta is 1/4. And we can check now, for this d equals to 2, n equals to 1 case, that 2 times our nu, which is known exactly to be 1, is 2 minus alpha, with the logarithmic divergence corresponding to alpha equals 0.
So again, there's something that works. One thing that you may want to go and look at is that the ansatz that we made first also works for the results of the saddlepoint-- not surprisingly, because again in the saddlepoint we start with a singular free energy and go through all of this. But the saddlepoint does not work for this type of scaling, because 2 minus alpha, which would be 2, is not equal to d times one half, except in the case of four dimensions.
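The checks just quoted, written out:

```latex
% Checks of 2 - alpha = d nu with the numbers quoted in the lecture:
\begin{align}
  d=3,\ n=2\ \text{(superfluid)}: &\quad
     3\nu = 3\times 0.67 = 2.01 \approx 2-\alpha = 2.01, \\
  d=2,\ n=1\ \text{(2D Ising)}: &\quad
     2\nu = 2\times 1 = 2 = 2-\alpha \quad (\alpha=0), \\
  \text{saddlepoint}: &\quad
     2-\alpha = 2 \;\neq\; d\nu = d/2
     \quad\text{unless } d=4 .
\end{align}
```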
So somehow, this ansatz and this picture break down within the saddlepoint approximation. If you remember what we did when we calculated fluctuation corrections to the saddlepoint, you actually got an exponent alpha that was 2 minus d over 2. So the fluctuating part that we get around the saddlepoint does satisfy this. But on top of that there's another part that is due to the saddlepoint value itself that violates this hyperscaling relation. Yes?
AUDIENCE: Empirically, how well can we probe the dependence on dimensionality that we're finding in these expressions?
PROFESSOR: Experimentally, we can do d equals to 2 and d equals to 3. In computer simulations we can also do d equals to 2 and d equals to 3. Very soon, we will do analytical calculations where we will be in 3.99 dimensions. So we will be coming down perturbatively from around 4. So mathematically, we can play tricks such as that.
But certainly empirically, in the sense of experiments, we are at a disadvantage in that regard. OK? So we are making progress. We have made our way across this table. We also have an identity that involves nu. But so far I haven't said anything about eta. I can say something about eta reasonably simply, but then I'll try to build something profound based on that.
So let's look exactly at Tc, at the critical point. So let's say you are sitting at t and h equal to 0. You have to prepare your system at that point-- there's nothing physically that says you can't. At that point, you can look at correlations. And the exponent eta, for example, is a characteristic of those correlations.
And one of the things that we have is the correlation m of x m of 0, the connected part-- well, actually at the critical point we don't even have to put the connected part, because the average of m is going to be 0. But this is a quantity that behaves as 1 over the separation-- let's actually include two general points, x minus y. When we did the case of the fluctuations at the critical point within the saddlepoint method, we found that the behavior was like the Coulomb law: it was falling off as 1 over x to the d minus 2.
But we said that experiments indicated that there is a small correction to this that we indicate with the exponent eta. So that was how the exponent eta was defined. So can we have an identity that involves the exponent eta? We have actually seen how to do this already, because we know that in general, the susceptibilities are related to integrals of the correlation functions.
Now if I put this power law over here, you can see that the answer is like trying to integrate something that grows like x squared all the way to infinity. So it will be divergent, and that's no problem-- at the critical point we know that the susceptibility is divergent.
But you say, OK, if I'm away from the critical point, then I will use this formula, but only up to the correlation length. And I say that beyond the correlation length, then the correlations will decay exponentially.
That's too rapid a falloff, and essentially the only part that's contributing is the part that behaves as it did at the critical point. Once I do that, I have to integrate d d x over x to the d minus 2 plus eta, up to the correlation length. The answer will be proportional to the correlation length to the power of 2 minus eta. And since xi goes like t to the minus nu, this will be proportional to t to the power of minus nu times 2 minus eta.
But we know that the susceptibilities diverge as t to the minus gamma. So we have established an exponent identity that tells us that gamma is 2 minus eta times nu. And this is known as the Fisher identity, after Michael Fisher.
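Schematically, the argument for the Fisher identity is:

```latex
% Susceptibility from integrating the critical correlations out to xi:
\begin{equation}
  \chi \sim \int^{\xi}\! d^{d}x\;
      \frac{1}{|x|^{\,d-2+\eta}}
   \sim \xi^{\,2-\eta}
   \sim |t|^{-\nu(2-\eta)}
  \quad\Rightarrow\quad \gamma = (2-\eta)\,\nu .
\end{equation}
```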
Again, you can see that in all of the cases in three dimensions that we are dealing with, the exponent eta is roughly 0-- it's 0.04 or so-- and all of our gammas are roughly twice what our nus are in that table. So that part of the table checks out. The one case that I have on that table where eta is not 0 is when I'm looking at d equals to 2, where eta is 1/4. So I take 2 minus 1/4, multiply it by the nu that is 1 in two dimensions, and the answer is 7/4, which is what we have for the exponent gamma over there.
So we now have an identity that is applicable to the last of the exponents. So all of this works. Let's now take the conceptual leap that will then allow us to do what we will do later on to get the exponents. Basically, you can see that what we have imposed here conceptually is the following.
That when I'm at the critical point, I look at the correlations of this important statistical field, and I find that they fall off with separation according to some power. And the reason is that at the critical point, the correlation length has gone to infinity, so that's no longer a length scale that you have to play with.
You could divide x minus y by xi, which is what we do away from the critical point, but xi has gone to infinity. The other length scales that we might worry about are things that go into the microscopics, but we are assuming that the microscopics are irrelevant-- they have been washed out. So if we don't have a large length scale, and we don't have a short length scale, then some function of distance-- how can it decay?
The only way it can decay is as a power law. So the statement is that when we are at a critical point, I look at some correlation-- this was the magnetization correlation, but I can look at the correlation of anything else as a function of separation-- and this will only fall off as some power of separation.
Another way of writing it is that if I were to multiply the separation by some scale factor-- so rather than looking at things that are some distance apart, I look at twice that distance apart or a hundred times that distance apart-- I will reproduce the correlation that I had, up to some overall scale factor. And the scale factor here, we can read off, has to be related to this power d minus 2 plus eta.
But essentially, this is again a statement about homogeneity of correlation functions when you are at a critical point. So there is a symmetry here. It says you take your statistical correlations and you look at them at a larger scale or at a shorter scale, and up to some overall scale factor, you reproduce what you had before.
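In symbols, the scale invariance of the critical correlations just described is:

```latex
% Self-similarity of correlations at the critical point (t = h = 0):
\begin{equation}
  \langle m(x)\,m(0)\rangle \sim \frac{1}{|x|^{\,d-2+\eta}}
  \quad\Longrightarrow\quad
  \langle m(\lambda x)\,m(0)\rangle
    = \lambda^{-(d-2+\eta)}\,\langle m(x)\,m(0)\rangle .
\end{equation}
```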
So this has something to do with invariance under change of scale. This scale invariance is a property that was popular a while ago as being associated with the kind of geometrical objects that you would call fractals. So the statement is that if I go across my system, there is some pattern of magnetization fluctuations. Let's say I look at it as I'm going along this direction x.
And I plot, for some particular configuration that is dominant and contributing to my free energy, the magnetization; it has a shape with this characteristic self-similarity, maybe looking like a mountain landscape. And the statement is that if I were to take a part of that landscape and then blow it up, I will generate a pattern that is of course not the same as the first one-- it is not exactly scale invariant-- but it has the same kind of statistics as the one that I had originally, after I multiply this axis by some factor lambda. Yes?
AUDIENCE: Over what length scales are those self-similarity properties evident, and how do they compare to the length scale over which you're doing your coarse graining for this field?
PROFESSOR: OK, so basically we expect this to be applicable presumably at length scales that are less than the size of your system, because once I get to the size of the system I can't blow it up further or whatever. It has to certainly be larger than whatever the coarse-graining length is, or the length scale at which I have confidence that I have washed out the microscopic details.
Now that depends on the system in question, so I can't really give you an answer for that. The answer will depend on the system. But the point is that I'm looking in the vicinity of a point where mathematically I'm assured that there's a correlation length that goes to infinity. So maybe there is some system, number 1, that averages out very easily, and after a distance of 10 or so I can start applying this.
But maybe there's some other system where the microscopic degrees of freedom are very problematic and I have to go further and further out before they average out. But in principle, since my xi has gone to infinity, I can just pick a bigger and bigger piece of my system until that has happened. So I can't tell you what the short distance length scale is, in the same sense that when Mandelbrot says that the coast of Britain is fractal, well, I can't tell you whether the short distance is the size of a sand particle, or is it the size of, I don't know, a tree or something like that. I don't know.
So we started thinking about our original problem, and we constructed this Landau-Ginzburg [INAUDIBLE] that we worked with on the basis of symmetries such as invariance under rotation, et cetera. Somehow we've discovered that the point that we are interested in has an additional symmetry that maybe we didn't anticipate, which is this self-similarity and scale invariance.
So you say, OK, that's the solution to the problem: let's go back to our construction of the Landau-Ginzburg theory and add, to the list of symmetries that have to be obeyed, this additional self-similarity under scaling. And that will put us at t equals to 0, h equals to 0. And, for example, we should be able to calculate this correlation. Let me expand a little bit on that, because we will need one other correlation.
Because we've said that, essentially, all of the properties of the system I can get from two independent exponents. So suppose I constructed this scale invariant theory and I calculated this. That would be one exponent. I need another one. Well, we had here a statement about alpha.
We made the statement that the heat capacity diverges. Now, in the same sense that the susceptibility is a response-- it came from two derivatives of the free energy with respect to the field: the derivative of magnetization with respect to field, with magnetization itself being one derivative [INAUDIBLE].
The heat capacity is also two derivatives of free energy with respect to some other variable. So in the same sense that there is a relationship between the susceptibility and an integrated correlation function, there is a relationship that says that the heat capacity is related to an integrated correlation function.
So C as a function of, say, t and h-- let's say the singular part-- is going to be related to an integral of something. And again, we've already seen this. Essentially, you take one derivative of the free energy, let's say with respect to beta or temperature, and you get the energy.
And you take another derivative of the energy and you will get the heat capacity. And that derivative, if we write it in terms of derivatives of the partition function, becomes converted to the variance of the energy. So in the same way that the susceptibility was the variance of the net magnetization, the heat capacity is related to the variance of the net energy of the system at a given temperature.
The net energy of the system we can write this as an integral of an energy density, just as we wrote the magnetization as an integral of magnetization density. And then the heat capacity will be related to the correlation functions of the energy density. Now once more, you say that I'm at the critical point. At the critical point there is no length scale.
So any correlation function, not only that of the magnetization, should fall off as some power of separation. And you can call that exponent whatever you like; there is no standard name for it in the literature. Let me write it in the same way as for the magnetization, as d minus 2 plus eta prime. So then when I go and say let's terminate the integral at the correlation length, the answer is going to be proportional to xi to the 2 minus eta prime, which would be t to the minus nu times 2 minus eta prime.
So then I would have alpha being nu times 2 minus eta prime. So all I need to do, in principle, is to construct a theory which, in addition to rotational invariance or whatever is appropriate to the system in question, has this statistical scale invariance. Within that theory, calculate the correlation functions of two quantities, such as magnetization and energy.
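Sketching the energy-correlation argument in the same way as for the susceptibility:

```latex
% Heat capacity from integrating energy-density correlations out to xi:
\begin{equation}
  C_{\rm sing} \sim \int^{\xi}\! d^{d}x\;
     \langle E(x)\,E(0)\rangle_{c}
   \sim \int^{\xi}\! \frac{d^{d}x}{|x|^{\,d-2+\eta'}}
   \sim \xi^{\,2-\eta'}
   \sim |t|^{-\nu(2-\eta')}
  \quad\Rightarrow\quad \alpha = \nu\,(2-\eta') .
\end{equation}
```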
Extract two exponents. Once we have two exponents, then we know, via the manipulations we have done, how to calculate all the other exponents. So why doesn't this solve the problem? The answer is that whereas I can immediately write for you a term such as m squared that is rotationally invariant, I don't know how to write down a theory that is scale invariant.
The one case where people have succeeded in doing that is actually two dimensions. So in two dimensions, one can show that this kind of scale invariance is related to conformal invariance, and one can explicitly write down conformally invariant theories, extract exponents, et cetera, out of those. But in, say, three dimensions, we don't know how to do that. So we will still, with that concept in the back of our mind, approach it slightly differently, by looking at the effects of scale transformations on the system. And that's the beginning of the concept of renormalization.