Description: In this lecture, Prof. Kardar introduces Continuous Spins at Low Temperatures, including the Non-linear σ-model.
Instructor: Prof. Mehran Kardar
Lecture 20: Continuous Spin...
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: So we are going to switch directions. Rather than thinking about binary variables--the Ising variables, which were discrete--think again about a lattice. But now at each site we put a spin that has unit magnitude but n components. That is, Si has components 1, 2, up to n.
And the constraint is that the sum over alpha of Si alpha squared is unity, with alpha running from 1 to n. So let's put it here explicitly. When I look at the case of n equal to 1, essentially I have one component.
Its square has to be 1, so it's either plus or minus 1. We recover the Ising variable. For n equal to 2, it's essentially a unit vector whose angle, theta for example, can change; for n equal to 3 we would be exploring the surface of a sphere.
And we always assume that we have a weight that tends to make our spins parallel. So we use essentially the same form as the Ising model: we sum over nearest neighbors.
And the interaction, rather than sigma i sigma j, we write as Si dot Sj, where this is the dot product of the two vectors. Let's call the dimensionless interaction in front K0.
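In symbols, the model just described is

$$\vec S_i = \left(S_i^1, S_i^2, \ldots, S_i^n\right), \qquad \sum_{\alpha=1}^{n}\left(S_i^\alpha\right)^2 = 1, \qquad -\beta\mathcal H = K_0 \sum_{\langle ij\rangle} \vec S_i\cdot\vec S_j,$$

with n = 1 recovering the Ising model and n = 2 a unit vector parametrized by an angle theta.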
So when we want to calculate the partition function, we need to integrate over all configurations of these spins with this weight. Now for each site, we have to integrate over n components.
But there is a constraint, which is this one. Now I'm going to focus on the ground state. So at T equal to 0, we expect that spontaneously a particular configuration will be chosen. Everybody will be aligned to that configuration.
Without loss of generality, let's choose the aligned state to point along the last component. That is, all of the Si at T equal to 0 will be of the form (0, 0, ...), except that the last component is 1, pointing along some particular direction.
So if it was two components, the y component would always be 1. It would be aligned along the y direction. Yes, question?
AUDIENCE: What dimensionality is the lattice?
PROFESSOR: It can be anything. So basically we have two parameters, as usual: n is the dimensionality of the spin, and d is the dimensionality of our lattice. In practice, for the calculations that we are going to be doing, we will be focusing on d close to 2.
Now if we add fluctuations at finite T, what happens is that the state of the vector is going to change. So this Si at finite temperature will no longer be pointing along the last component. It will start to have fluctuations.
Those fluctuations will change the zeros of the ground state to some values I'll call pi 1, the next one pi 2, all the way up to pi n minus 1. And since the entire thing is a unit vector, the last component has to shrink to adjust for that.
So we would indicate the last component by sigma. So essentially this subspace of fluctuations around the ground state is captured through this vector pi that is n minus 1 dimensional. And this corresponds to the transverse modes that we're looking at when we were doing the expansion of the Landau-Ginzburg model around its symmetry broken state.
In this case, the longitudinal mode, essentially, is infinitely stiff. You don't have the ability to stretch along the longitudinal mode because of the constraint that we have put over here.
So if you think back, we had this wine bottle, or Mexican hat potential. And the Goldstone modes corresponded to going along the bottom, and how easy it was to climb this Mexican hat was determined by the longitudinal mode.
In this case, the Mexican hat has become very, very stiff to climb on the sides. So you don't have the longitudinal mode. You just have these Goldstone modes. The cost to pay for that is that I have to be very careful in calculating the partition function.
If I'm integrating over the n components of some particular spin, I have to make sure that I remember that this sum of all of these components is 1. So I have to integrate subject to that constraint.
And the way that I have broken things down now, I'm integrating over the n minus 1 components of this vector pi, and over this additional direction, d sigma. But I can't do both of them independently, because there's a delta function that enforces that sigma squared plus pi squared equals 1.
The pi squared corresponds to the magnitude of this n minus 1 component vector. And essentially, I can solve for this delta function, and really replace this sigma over here with square root of 1 minus pi squared.
But I have to be a little bit careful in my integrations. Because this delta function I can write as a delta function of sigma plus or minus square root of 1 minus pi squared. And there is a rule that if I use this delta function to set sigma to be equal to square root of 1 minus pi squared, like I have done over here, I have to be careful that the delta function of a times x is actually a delta function of x divided by modulus of a.
So essentially, I have to substitute something here. So this is, in fact, equal to the integration in the pi directions, and because of the use of this delta function to set the value of sigma, I have to divide by the square root of 1 minus pi squared.
Shortly we will actually write this in the following way. I guess there's an overall factor of 1/2, but it doesn't really matter. So yes?
AUDIENCE: So what do you do with the fact that there are two places where the delta function is done?
PROFESSOR: I'm continuously connecting to the solution that starts at 0 temperature with a particular state. So I have removed that ambiguity by my choice of starting point.
But if I was integrating over all possibilities, then I should really add that on too. And really just make the partition function with the sum of two equivalent terms-- one around this ground state, one around another state.
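Schematically, the resolution of the delta function just described is

$$\int d^n\vec S_i\;\delta\!\left(\vec S_i^{\,2}-1\right)\,\cdots \;=\;\int \frac{d^{n-1}\vec\pi_i}{2\sqrt{1-\pi_i^2}}\,\Big[\cdots\Big]_{\sigma_i=\sqrt{1-\pi_i^2}},$$

where only the positive root is kept, since we are expanding continuously around the chosen ground state; the overall factor of 1/2 does not matter.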
AUDIENCE: The product is supposed to be for a lattice site-- for the integration variable, not for the--
PROFESSOR: Right. So I did something bad here. So here I should have written-- this is an n-component integration that I have to do at each site. Now let's pick one of the sites. So let's say we pick site i. For that, I have an n-dimensional integration to do.
What does it say? It basically says that if, for example, I am looking at the case of n equals to 2, then I have started with a state that points along this direction. But now I'm allowing fluctuations pi in this direction.
And I can't simply say that the amount of these fluctuations, pi, goes, let's say, from minus infinity to infinity. Because how much weight a given pi has actually changes, whether it is small or whether it is large, when I'm down here.
And so there are constraints on how big pi can be--pi cannot be larger than 1. And essentially, how much weight a particular magnitude of pi has is captured by this.
So that's one thing to remember when we are dealing with integration over unit spins, and we want to look at the fluctuations. The other choice of notation that I would like to make is the following. I said that my starting weight is K0 times a sum over all nearest neighbors of Si dot Sj.
Now in the state where all of the spins are pointing in one direction, this factor is unity. So the 0 temperature state gets a factor of 1 here on each bond. Let's say we are on a hypercubic lattice; there are d bonds per site. So at 0 temperature, I would have N d K0, basically, as the value of this ground state.
And then if I have fluctuations from that state, I can capture that as follows, as minus K0 over 2. It's a reduction in this energy, as sum over ij Si minus Sj squared. And you can check that.
If I square these terms, I'm going to get Si squared and Sj squared, each equal to 1, which with the minus 1/2 basically reproduces this constant, which goes over here. And the dot product, minus 2 Si dot Sj, cancels this minus 1/2 and basically gives you this exactly.
So the reason I write it in this fashion is because very shortly, I want to switch from going and doing things on a lattice to going to a continuum. And you can see that this form, summing over the difference between near neighbors, I very nicely can go to a gradient squared.
So essentially that's what I want to do. Whenever I have a sum over sites, I want to replace it with an integral over space. And to keep things dimensionless, I have to divide by a to the d. So I can call that the density that I have to include, which is also the same thing as the number of lattice points divided by the volume of the box.
So my minus beta H in the continuum goes over to whatever the contribution of the completely aligned state is. And then whatever the difference of the spins is, because of the small fluctuations, I will capture through an integration of gradient of S squared.
And I call the coupling in the continuum K; it is basically the strength of the original interaction K0 up to factors of the lattice spacing. Clearly, in order to get the coupling that I have in the continuum, I have to have this factor of a to the d. But then in the gradient, I also have to divide by distance.
So there's something here, a factor of a to the 2 minus d, that relates these two couplings. Or it should be the other way around--it doesn't matter.
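As a rough summary of this bookkeeping, with lattice spacing a the replacements just described are

$$\sum_i \;\longrightarrow\; \int \frac{d^d\mathbf x}{a^d} \;=\; \rho\int d^d\mathbf x, \qquad \rho = \frac{1}{a^d} = \frac{N}{V}, \qquad K \sim a^{2-d}\,K_0,$$

up to numerical factors that depend on the particular lattice.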
AUDIENCE: Question.
PROFESSOR: Yes.
AUDIENCE: This step is only valid for a cubic lattice--
PROFESSOR: Yes. So if it was something like a triangular lattice, or something, there would be some numerical factors here.
AUDIENCE: But I mean like writing the difference of the spin squared as the gradient squared. Like if it were a triangular lattice?
PROFESSOR: Yeah, so the statement is that whatever lattice you have, what I am doing at the level of the lattice is trying to keep things that are close to each other aligned. So when I go to the continuum, how is this captured? It's by a term like a gradient squared.
Now on the hyper cubic lattices, the relationship between what you put on the bonds of the hyper cubic lattice and what we've got in the continuum is immediately apparent. If you try to do it on the triangular lattice, you still can.
And you'll find that at the end of the day, you will get a factor of square root of 3, or something like that. So there's some numerical factor that comes into play.
And then at the end of the day, I also want to write this gradient of S squared in terms of components: essentially S has n components, n minus 1 of them are the pi's, and one of them is sigma. So this would be minus K/2 integral d dx of the gradient of the pi components squared, plus the gradient of the sigma component squared.
So after integrating out sigma using the delta functions, I'm going to the continuum limit. The partition function that we have to evaluate, up to various non-singular factors such as this constant over here, is obtained by integrating over all configurations of our pi field, now regarded as a continuously varying object in d-dimensional space.
And the weight is as follows: there is a gradient of pi squared, essentially this term over here. And there is the gradient of the other term--the other term, however, if I use the delta function, is the square root of 1 minus pi squared, so I get the gradient of square root of 1 minus pi squared, squared.
And then there's this factor from the integration that I have to be careful of, which I can also take to the exponent and write as, again, this density times the log of 1 minus pi squared. There's, I think, a factor of 1/2.
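Putting these pieces together, the weight just described can be written, up to non-singular constants, as

$$Z \;\propto\; \int \mathcal D\vec\pi(\mathbf x)\;\exp\left\{-\frac{K}{2}\int d^d\mathbf x\left[(\nabla\vec\pi)^2+\left(\nabla\sqrt{1-\pi^2}\right)^2\right]-\frac{\rho}{2}\int d^d\mathbf x\,\ln\!\left(1-\pi^2\right)\right\}.$$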
So the weight that I had started with, with S dot S, was very simple-looking. But because of the constraints, it was hiding a number of conditions. And if we explicitly look at those conditions and ask what is the weight of the fluctuations that I have to put around the ground state, these Goldstone modes, that is captured with this Hamiltonian. Part of it is this old contribution from Goldstone modes, the transverse modes that we had seen.
But now being more careful, we see that these Goldstone modes, I have to be careful about integrating over them because of the additional terms that capture, essentially, the original full symmetry, full rotational symmetry, that was present in integration over S. Yes?
AUDIENCE: The integration-- the functional integration, pi should be limited to a sphere of radius 1.
PROFESSOR: This weight will keep track of that. So I put that constraint over here. And it's not just that pi is limited to something; for a particular value of pi, it gets this additional weight. So if you like, once I try to take my integrals outside that region, that factor sets the weight accordingly.
So this entity is called the non-linear sigma model. And I never understood why they don't call it a non-linear pi model, because we immediately integrate out sigma. But that's how it is.
So if we had, essentially, just the first term and not included any of the other things, we would have had the analysis of Goldstone modes that we had done previously. The effect of these other terms, you can see if I start making an expansion in powers of pi, is to generate interactions--non-linear terms among the pi's.
So these Goldstone modes that we were previously dealing with as independent modes of the system, are actually non-linearly coupled. And we want to know what the effect of that is on the behavior of the entire system.
So whenever we're faced with a non-linear theory, we have to do some kind of perturbative analysis. And the first thing that you may be tempted to do is to expand in powers of pi, and then look at the Gaussian part, and then the higher order parts, etc. That's one way of doing it.
But there's actually another way that is more consistent, which is to organize the terms in this weight according to powers of temperature. Because after all, I started with a zero temperature configuration, and I'm hoping that I'm expanding for small fluctuations.
So my idea is to-- I know the ground state. I want to see what happens if I go slightly beyond that. And the reason for fluctuations is temperature, so organize terms in this effective Hamiltonian for the pis in powers of temperature.
And by temperature I mean the inverse of this coupling constant K, because, again, if I go through my old derivation, you can see that I have minus beta H, so K0 should be inversely proportional to temperature. K is proportional to K0, so it should also be inversely proportional to temperature.
So up to some overall coefficient, let's just define temperature to be 1 over K. Now we see that, at the level that we were looking at things before, this term has kind of a Gaussian form, where I have something like K, which is the inverse temperature, times pi squared.
So just on dimensional grounds, up to functional forms, etc., we expect pi squared to be proportional to temperature at the 0th order, if you like. Because, again, if temperature goes to 0, there are not going to be any fluctuations. As I go away from 0 temperature, the average fluctuation will be 0, but the average of the square will be proportional to temperature. It all makes sense.
So then if I look at this term, I see that dimensionally it is inverse temperature times pi squared, which is of the order of temperature--so this is dimensionally T to the 0. Whereas if I start to expand this log, I can expand it as minus pi squared, minus pi to the 4th over 2, minus pi to the 6th over 3, and so forth.
You can see that subsequent terms in this series are higher and higher order in this temperature. This will be the order of temperature-- temperature squared, temperature cubed. And already we can see that this term is small compared to this term.
So although this is a Gaussian term, and I would maybe have been tempted to put it in the 0th order Hamiltonian, if I'm organizing things according to orders of temperature, my 0th order will remain this. This will be the contribution at first order, second order, third order.
And similarly, I can start expanding this. The square root is 1 minus pi squared over 2, to lowest order. So when I take the gradient of minus pi squared over 2, I will get pi gradient of pi. You can see that the lowest order term in this expansion will be pi gradient of pi, squared, and then higher order terms.
And this is something that is of order pi to the 4th, so it is of the order of temperature squared multiplied by inverse temperature. So this is a term that contributes at order of T to the 1. So basically, at order of T to the 0, I have as my beta H0 just the integral d dx K/2 gradient of pi squared.
While at order of T to the 1st power, I will have a correction which has two types of terms. One term is this K/2 integral d dx of pi gradient of pi, squared, coming from what was the gradient of sigma squared. And then from here, I will get a minus rho over 2 integral d dx pi squared. And then there will be other terms at order of T squared--U2--and so forth.
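Collecting the terms just listed, the organization in powers of temperature T = 1/K is

$$\beta\mathcal H_0=\frac{K}{2}\int d^d\mathbf x\,(\nabla\vec\pi)^2\;\sim\;\mathcal O(T^0),\qquad
\mathcal U_1=\frac{K}{2}\int d^d\mathbf x\,\left(\vec\pi\cdot\nabla\vec\pi\right)^2-\frac{\rho}{2}\int d^d\mathbf x\,\pi^2\;\sim\;\mathcal O(T^1),$$

with further terms U2, U3, and so on at order T squared, T cubed, etc.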
So I just re-organized the terms in this interacting Hamiltonian in what I expect to be powers of this temperature. Now one of the first things that we will do is to look at this and realize that we can decompose it into modes by going to Fourier space--I do a Fourier transform.
This thing becomes K/2 integral d dq divided by 2 pi to the d, q squared, pi tilde of q squared. So let's write it as pi tilde of q. And again, as usual, we will end up needing to calculate averages with this Gaussian weight. And what we have here is that for the average of pi alpha of q1 times pi beta of q2, we get this 0th order weight.
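In symbols, the Gaussian weight and the average just referred to are

$$\beta\mathcal H_0=\frac{K}{2}\int\frac{d^d\mathbf q}{(2\pi)^d}\,q^2\,\left|\tilde{\vec\pi}(\mathbf q)\right|^2,\qquad
\left\langle \tilde\pi_\alpha(\mathbf q_1)\,\tilde\pi_\beta(\mathbf q_2)\right\rangle_0=\frac{\delta_{\alpha\beta}\,(2\pi)^d\,\delta^d(\mathbf q_1+\mathbf q_2)}{K\,q_1^2}.$$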
The components have to be the same. The sum of the two momenta has to be 0. And if so, I just get 1 over K q squared. Now I can similarly Fourier transform the terms that I have over here. So the first of the interactions becomes rather complicated.
We saw this when we had something that is four powers of a field: when we go to Fourier space, rather than having one integral over x, we end up with multiple integrals. So I will have, essentially, Fourier transforms of four factors of pi.
For each one of them I will have an integration. So I will have dd q1, dd q2, dd q3. And the reason I don't have the 4th one is because of the integration over x, forcing the four q's to be added up to 0. So I will have pi alpha of q1, pi alpha of q2,
Now note that this pi gradient of pi came from a gradient of pi squared, which means that the two pi's that go with it carry the same index. Whereas the next factor, pi gradient of pi, carries a different index. So I have pi beta of q3, pi beta of minus q1 minus q2 minus q3.
Now if I had just written this, it would have been the Fourier transform of my usual 4th order interaction. But that's not what I have, because I have two additional gradients. And so for two of these factors I actually had to take the gradient first. And every time I take a gradient, in Fourier space I will bring down a factor of i q. So I will have here i q1 dotted with i q, let's say, 3.
So the Fourier transform of the leading quartic interaction that I have is actually of the form that I have over here. There is also a trivial term that comes from Fourier transforming the pi squared: when I Fourier transform that, I get simply pi alpha of q squared. Yes?
AUDIENCE: Does it matter which q's you're pulling out as the gradient?
PROFESSOR: You can see that these four pi's over here in Fourier space appear completely interchangeably. So it really doesn't matter, no. Because by permutation and re-ordering these integrations, you can move it into something else. Well, there is one thing--I'll draw a diagram that corresponds to this that will make one constraint apparent.
So when I was drawing interaction terms for m to the 4th for the Landau-Ginzburg model, where I have something that has 4 fields, I would draw something that has 2 lines, and the 2 lines each had 2 branches.
And the branching was supposed to indicate that 2 of them were carrying one index, and the other 2 were carrying another common index. Now I have to make sure that I indicate that the branches of these things additionally have these gradients, these factors of i q, associated with them.
And I make the convention that on the branch, or the q, that has the gradient on it, I will put a dashed line. Now you can see that if I go back and look at the origin of this, one of the gradients acts on one pair of pi's, and the other acts on the other pair of pi's. So the other dashed line I cannot put on the same branch; I have to put it over here.
So the one constraint that I have to be careful of is that these i q's should pick one from the alpha pair and one from the beta pair. This is the diagrammatic representation.
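Putting the momentum labels, the indices, and the gradient factors together as just described, the quartic piece of U1 is, schematically,

$$\mathcal U_1 \;\supset\; \frac{K}{2}\int\frac{d^d\mathbf q_1\,d^d\mathbf q_2\,d^d\mathbf q_3}{(2\pi)^{3d}}\;(i\mathbf q_1)\cdot(i\mathbf q_3)\;\tilde\pi_\alpha(\mathbf q_1)\,\tilde\pi_\alpha(\mathbf q_2)\,\tilde\pi_\beta(\mathbf q_3)\,\tilde\pi_\beta(-\mathbf q_1-\mathbf q_2-\mathbf q_3),$$

with one factor of iq taken from the alpha pair and one from the beta pair, plus the simpler Fourier transform of the minus rho over 2 pi squared term.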
So what I can do now is start doing perturbation theory in these interactions. We want to work at the lowest order to see what the first correction is due to fluctuations and interactions of these Goldstone modes.
But rather than do things in two steps--first doing perturbation theory, encountering difficulties, and then converting things to a renormalization group, which we've already seen happen, that story, in dealing with the Landau-Ginzburg model--let's immediately do the perturbative renormalization group for this model.
So the first thing to do is to note that all of these theories came from some underlying lattice model--I was carefully drawing for you the lattice model originally--which means that there is some cutoff here, some lattice cutoff.
Which means that when I go to Fourier space, there is always some kind of a range of wave numbers or wave vectors that I have to integrate over. So essentially, my pi's are limited--after I do a little bit of averaging, if you like--so that there is some shortest wavelength, and a corresponding largest wave number, lambda, in [INAUDIBLE].
And the procedure for RG, the first step, was to think about all of these pi modes and break them into two pieces: ones that correspond to the short wavelength fluctuations that we want to get rid of, and ones that correspond to the long wavelength fluctuations that we would like to keep. So my task is as follows: I have to really calculate the partition function over here, which in its Fourier representation means averaging over all modes that are in this range.
But those modes I'm going to represent as D pi lesser, as well as D pi greater. Each one of these pi's is, of course, an n minus one component vector. And I have a weight that I obtain by substituting pi lesser and pi greater in the expressions that I have up there.
And we can see already that the 0th order terms, as usual, nicely separate out into a contribution that we have for pi lesser, a contribution that we have for pi greater, and that the interaction terms will then involve both of these modes. And in principle, I could proceed and include higher and higher orders.
Now I want to get rid of all of the modes that are here. So that I have an effective theory governing the modes that are the longer wavelengths, once I have gotten rid of the short wavelength fluctuations. So formally, once I have integrated over pi greater in this double integral, I will be left with the integration over the pi lesser field.
And the exponential gets modified as follows. First of all, if I were to ignore the interactions at the lowest order, the effect of doing the integration of the Gaussian modes that are out here, will, as usual, be a contribution to the free energy of the system coming from the modes that I integrated out.
And clearly it also depends--I forgot to say--the range of integration is now between lambda over b and lambda, where b is my renormalization factor. Yes?
AUDIENCE: Because you're coming from a lattice, does the particular shape of the Brillouin zone matter more now, or still not really?
PROFESSOR: It is in no way different from what we were doing before in the Landau-Ginzburg model. In the Landau-Ginzburg model, I could have also started by putting spins, or whatever degrees of freedom, on a lattice. And let's say if I was on a hypercubic lattice, I would have had a Brillouin zone such as this.
And the first thing that we always said was that integrating all of these things gives you an additional, totally harmless component to the energy that has no singular part in it. So we're always searching for the singularities that arise at the core of this integration. Whatever you do with the boundaries, no matter how complicated shapes they have, they don't matter.
So going back to here. If we had ignored the interactions, integrating over pi greater would have given me this contribution to the free energy. And, of course, beta H0 of pi lesser would have remained.
But now the effect of having the interactions, as usual, is like averaging e to the minus U with the weight over here. So I would have an average such as this. And we do the cumulant expansion, as usual. And the first term I would get is the average of this quantity with respect to the Gaussian weight, integrating out the high frequency modes, plus higher order corrections. Yes?
AUDIENCE: So right here you're doing two expansions kind of simultaneously. One is you have a non-linear model that you're expanding in different powers of temperature. And then you further expand it in cumulants to be able to account for that.
PROFESSOR: No, because I can organize this expansion in cumulants in powers of temperature. So this u has an expansion that is u1, u2, etc. organized in powers of temperature.
AUDIENCE: OK.
PROFESSOR: And then when I take the first cumulant, you can see that the average, the lowest order term, will be--
AUDIENCE: The first cumulant is linear in temperature, and that's what you want?
PROFESSOR: Right. So I'm being consistent also with the perturbation that I had originally stated. Actually, since I drew a diagram for the first term, I should state that this term, since we are now also thinking of it as part of the correction u1, I have to regard as 2 factors of pi. So I could potentially represent it by a diagram such as this. So diagrammatically, my u1, whose average I have to take, is composed of these two entities.
So what I need to do is to take the average of that expression. So I can either do that average over here. Take the average of this expression, or do it diagrammatically. Let us go by the diagrammatic route.
So essentially, what I'm doing is that every line that I see over there that corresponds to pi, I am really decomposing into two parts. One of them I will draw as a straight line that corresponds to the pi lesser that I am keeping. Or I replace it with a wavy line, which is the pi greater that I would be averaging over.
So the first diagram I had essentially something like this-- actually, the second diagram. The one that comes from rho pi squared. It's actually trivial, so let's go through the possibilities. I can either have both of these to be pi lessers-- sorry, pi greaters.
So this is pi greater, pi greater. And when I have to do an average, then I can use the formula that I have in red about the average of 2 pi greaters. And that would essentially amount to closing this thing down. And numerically, it would give me a factor of minus rho over 2 integral d dk over 2 pi to the d, in the interval between lambda over b and lambda.
And I have the average of pi alpha pi alpha, giving a factor of delta alpha alpha. Summing over alpha will give me a factor of n minus 1. And the average would be something like 1 over K k squared. So I would have to evaluate something like this.
But at the end of the day, I don't care about it. Why don't I care about it? Because clearly the result of doing this is another constant. It doesn't depend on pi lesser. So this is an addition to the free energy: once I integrate modes between lambda over b and lambda, there is a contribution to the free energy that comes from this term.
It doesn't change the weight that I have to assign to configurations of the pi lesser field. So that's one possibility. Another possibility is that one of them is a pi greater and one of them is a pi lesser.
Clearly, when I try to do an average of this form, I have an average of one factor of pi greater with a Gaussian weight that is even. So this is 0. We don't have to worry about it.
And finally, I will get a term, which is like this. Which doesn't involve any integrations, and really amounts to taking that term that I have over there, and just making both of those pi to be pi lessers.
So it's essentially the same form that will reappear, now with the integration being from 0 to lambda over b. So we know exactly what happens with the term on the right. Nothing useful or important emerges from it.
If I go and look at this one, however, depending on where I choose to put the solid lines or the wavy lines, I will have a number of possibilities. One thing that is clearly going to be there is where I put pi lesser for each one of the branches. Essentially, when I write it here like this, it is reproducing the integration that I have over there, except that, again, it only goes between 0 and lambda over b.
And now I can start adding wavy lines. Any diagram that has one wavy line--and I can put the wavy line either on that type of branch, or I can put it on this type of branch--has only one factor of pi greater. By symmetry, it will go to 0, like this.
There will be things that will have three factors of pi greater. And all of these--again, because I'm dealing with an odd number of factors of pi greater that I'm averaging--will give me 0.
There's one other thing that is kind of interesting. I can have all four of these lines wavy. And if I calculate that average, there's a number of ways of contracting these four pi's that will give me nontrivial factors.
But these are also contributions to the free energy. They don't depend on the pi's that I'm keeping. So I don't have to worry about any of these diagrams so far. Now I have dealt with 0, 1, 3, and 4 wavy lines.
So I'm left with 2 wavy lines and 2 straight lines. So let's go through those. I could have one branch be wavy lines and one branch be straight lines. And then I take the average of this object. I have a pi greater here and a pi greater here, and therefore I can do an average of two of those pi greaters.
That average will give me a factor of 1 over K k squared. I have to integrate over that. But one of these branches had this additional dashed line that corresponds to having a factor of k. So the integral that I have to do involves something like this.
And then I integrate over the entire range of the k integration. This is an odd power, and so that will give me 0 also. So this is also 0. And there's another one that is like this, where I go like this. And although I now do the same thing with two different branches, the k integration is the same. And that vanishes too.
So you say, is there anything that is left? The answer is yes. So the things that are left are the following. I can do something like this. Or I can do something like this.
So these are the two things that survive and will be nontrivial. You can see that this one will be proportional to pi lesser squared, while this one is going to be proportional to gradient of pi lesser squared.
So this one will renormalize, if you like, this coefficient. Whereas this one will modify and renormalize our coupling strength. So it turns out that that is really the more important one. But let's calculate the other one too. Yes?
AUDIENCE: Why do we connect the ones down here with the loops, but leave all the ends free in the other ones? Was that just a matter of how you write the diagram, or does it signify something?
PROFESSOR: Could you repeat that? I'm not sure I understand.
AUDIENCE: So when we had two wavy lines, both coming out of one of the diagrams, those lines just stop. We connected them together when we were writing the ones on the bottom line.
PROFESSOR: So basically, I start with an entity that has two solid lines and two wavy lines. And what I'm supposed to do is to do an integration-- an average of this over these pi greaters. Now the process of averaging essentially joins the two branches.
If I had a momentum here, q1, and a momentum here, q2, if I had an index here, alpha, and an index here, beta, that process of averaging is equivalent to saying the same momentum has to go through, the same index has to go through. There is no averaging that is being done on the solid lines, so it is meaningless to do anything with them.
So this entity means the following. I have K/2--let's call the legs 1, 2, 3, and 4--and integrals over q1 and q2. But q1 and q2, you can see, are explicitly solid, so those integrations are from 0 to lambda over b.
I have an integration over q3, which is over a wavy line. So it's between lambda over b and lambda. If I call this branch alpha and this branch beta, from here I have actually pi lesser alpha of q1 pi lesser beta of q2.
I should have put them outside the integration, but it doesn't matter. And then here I had pi alpha of q3, pi beta of q4. But these also had these lines associated with them. So I have here actually an i q3, an i q4. Again, q4 has to stand for minus q1 minus q2 minus q3 from this.
And then I had the pi pi average here, which gives me, because of the averaging, a delta alpha beta. And then I will have a delta function that forces q3 plus q4 to be 0. And then I have, dividing, K q3 squared.
Now q3 plus q4 is the same thing as minus q1 minus q2, if you like, because of that constraint. So I can take that outside the integration. There's no problem.
I have one integration left, which is 1 over K q3 squared. These pi's I will take outside. And I note that because of this constraint, q2 being minus q1, these two really become one integration that goes between 0 and lambda over b.
And these indices have been made to be the same. So I have pi alpha of q--this q--squared. Then I have the integration from lambda over b to lambda, d dq3 over 2 pi to the d. Here I have i q3 times i q4. But q4 was set to be minus q3.
So the two i's and the minus cancel each other, and I will get a factor of q3 squared. And then here I have the factor of K q3 squared downstairs. So overall, you can see that the q3 squareds and the K's cancel.
I have one factor of the integral d dq over 2 pi to the d of pi alpha of q squared--and these are the q lessers. And from the part I integrated out, since the q3 squareds vanish, I really just have the integral of d dq3 over 2 pi to the d.
Now if I had done the integral of d dq3 over 2 pi to the d all the way from 0 to lambda, what would I have gotten? If I multiply by the volume, that would be the number of modes. So this is, in fact, N/V, which is the quantity that I have called the density.
But what I'm doing is, in fact, just a fraction of this integral--from lambda over b to lambda.
So if I had done it all the way from 0 to lambda, I would have had rho, but I'm subtracting the fraction from 0 to lambda over b. So the answer is rho times 1 minus b to the minus d. The overall thing here gets multiplied by rho times 1 minus b to the minus d.
It just would correct that factor of density that we have. We'll see shortly it's not something to worry about. The next one is really the more interesting thing.
So here we have this diagram, which is K/2 integral from 0 to lambda over b. Essentially, I will get the same structure. This time let me write the pi alpha lesser of q1, pi beta lesser of q2 outside the integration.
I have the integral from lambda over b to lambda of d dq3 over 2 pi to the d. And I will have the same delta function structure, except that now these factors of i q become i q1 times i q2. So I can put them outside already. And then I have here the delta function.
So the only difference is that previously the q squared was inside the integration; now the q squared is outside the integration. So the final answer will be K/2 integral from 0 to lambda over b of d dq over 2 pi to the d.
I have a q squared times pi lesser of q squared. And the coefficient of that would look like what I had before, except without this factor. So it's the integral from lambda over b to lambda of d dq3 divided by 2 pi to the d, times 1 over K q3 squared.
So once I do this calculation explicitly, the answer is going to be a weight that only depends on the pi lesser that I'm keeping, and I will indicate that, as usual, by beta H tilde, which depends on this pi lesser.
And we now have all of the terms that contribute to this beta H tilde. So let's write them down. There are a number of terms that correspond to changes in the free energy. So we have a V delta f at the 0th order and a contribution to delta f at the first order, which are essentially a bunch of diagrams, both from here as well as from here. But we don't really care about them.
Then we have types of terms that look like this. I can write them already after Fourier transformation in real space. So let me do that.
I have an integral d dx in real space--realizing that my cutoff has been changed to b a--of things that are proportional to gradient of pi lesser squared. Now gradient of pi lesser squared is the thing that in Fourier space becomes q squared times pi lesser of q squared.
And what is the coefficient? I had K/2, which comes from here. If I don't do anything, I just have the 0-th order modes acting on these. But I just calculated a correction to that that is something like this.
So in addition to what I had before, I have this correction: K/2 times this integral. And I'm going to call the result of this integral I sub d--you can see that this integral is proportional to 1 over K.
And let's call the result of the integral Id of b, because it depends both on the dimension of integration, as well as on the factor b through here. So I have 1 over K times Id of b. So that's one type of term that I have generated.
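In symbols, the integral being named here is

$$\frac{1}{K}\,I_d(b)\;\equiv\;\int_{\Lambda/b}^{\Lambda}\frac{d^d\mathbf q}{(2\pi)^d}\,\frac{1}{K\,q^2},\qquad\text{so that the coarse-grained stiffness is}\qquad \frac{K}{2}\left[1+\frac{1}{K}\,I_d(b)\right].$$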
I started from this 0-th order form. And I saw that once I make the expansion of this to the lowest order, I will get a correction to that. And actually, just to sort of think about it in terms of formulae, you see what happened was that the first term that I had over here was pi gradient of pi repeated twice.
And what I did was I essentially did an average of these two pi's and got the correction to gradient of pi squared, which is what was computed here and [INAUDIBLE]. The next term that I have is this term by itself, now calculated with the pi lessers only--so this object. So I will write it as K/2 times the integral of pi gradient of pi, squared. There's no correction to it at this order.
The final term that I have is this term. So I have minus rho over 2 pi lesser squared. And then the correction that I have was of exactly the same form; it was proportional to 1 minus b to the minus d.
The rho over 2 is going to be hung onto both of them--you can see that there's a rho and there's a 2. And so basically I have to add this to that, and the combination 1 minus, 1 minus b to the minus d, is b to the minus d. And to first order, this is the entirety of what I will get.
So this, however, is just the coarse graining. It's the first step of the RG. And it has to be followed by the next two steps of the RG.
Here, you look at your field and you can see that the field is much coarser, because the short distance cutoff, rather than being a, has been changed to b a. So you define your x prime to be x over b, so that you shrink. You get the same coarse pixel size that you had before.
And you also have to do a change in pi. So you replace pi lesser with some factor zeta times pi prime, so that the contrast will look fine. So once I do that, you can see that the effect of this transformation is that this coupling K will change to K prime.
Because of the change of x to b x prime, from the integration I will get b to the d. From the two derivatives, I will get b to the minus 2. From the fact that I have replaced two pi lessers with pi primes, I will get a factor of zeta squared.
And then I will have this factor K 1 plus 1/K Id of b. So this is the recursion formula that we will be dealing with.
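Written out, the recursion formula just stated is

$$K' \;=\; b^{\,d-2}\,\zeta^2\,K\left[1+\frac{1}{K}\,I_d(b)\right].$$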
Now there are some subtleties that go with this formula that are worth thinking about. Our original system really had one coupling parameter, K. Because of the constraint and the full symmetry of this field S, part of it became the quadratic part that was the free field theory.
But part of it made the interactions. And because of this rotational symmetry, the form of that interaction was fixed and had to be proportional to K. Now if we do our renormalization group correctly, the full symmetry that we had has to be maintained at all levels.
Which means that the functional form that I should end up with should have the same property in that the higher order coefficient should be related to the lower order coefficient, exactly the same way as we had over there. And at least at this stage, it looks like that did not happen. That is, we got the correction to this term, but we didn't get the correction to this term.
We shouldn't be worried about that right here, because we calculated things consistently with corrections to order of T. And this was already a term that was order of T. So the real check is if you go and calculate the next order correction: you had better get a correction to this term at the next order that matches exactly this.
People have done that and have checked it, and that indeed is the case. So one is consistent with this. There are other kinds of consistency checks that happen all over the place--like the fact that this combination, 1 minus, 1 minus b to the minus d, came out to be b to the minus d, so that the density is rescaled consistently with the fact that you shrunk the lattice after the RG so that the pixel size was the same as before.
You may worry that that's not entirely the case, because when I do this, I will also have a factor of zeta squared. But it turns out that zeta is 1 plus order of temperature, as we will shortly see. So again, everything is consistent at the level at which we've calculated things. And the only change is this factor.
Now the one thing that we haven't calculated is what this zeta is. So to calculate zeta, I note the following that I start with a unit vector that is pointing at 0 temperature along this direction. Now because of fluctuations, this is going to be kind of rotating around this.
So there is this vector that is rotating. If you average it over some time, what you will see is that the average in all of these direction is 0. The variance is not 0, but the average is 0. But because of those fluctuations, the effective length that you see in this direction has shrunk.
How much it has shrunk by is related to this rescaling factor that I should choose. And so it's essentially the average of something like square root of 1 minus pi squared--but really it is the pi greaters, the short wavelength fluctuations, that I'm averaging over.
Which to lowest order is 1 minus 1/2 the average of pi greater squared. Now this is an n minus 1 component vector. So each one of the components will give you one contribution.
The contribution that you get for one of these is simply the average of pi squared, which is 1 over K k squared, which I have to integrate over K's that lie between lambda over b and lambda.
And you can see that this is 1 minus 1/2 times n minus 1, times something that is inversely proportional to K. And the integration that I have to do is precisely the same integration as here. So it is, again, this Id of b.
So let me write the answer, say a couple of words about it, and then we will deal with it next time. So K prime, the new interaction parameter, is going to be b to the d minus 2, times one factor of 1 plus 1/K Id of b.
And then there's one factor of this square of zeta. So that gives you 1 minus, n minus 1 over K, Id of b, multiplying K. So we'll analyze this more next time around. But I thought I would give you the physical reason for how this interaction parameter changes.
Let's say we are in two dimensions. So let's forget about this factor. In two dimensions, we can see that there is one factor that says at finite temperature, we are going to get weaker. The interaction is going to get weaker.
And the reason for that is precisely what I was explaining over here. That is, you have some kind of a unit vector, but because of its fluctuations, you will see that it will loop shorter. And it is less likely to be ordered.
The more components it has to fluctuate, the shorter it will look like. So there is that term. So if this was the only effect, then K will become weaker and weaker. And it will have disorder.
But this effect says that it actually gets stronger because of the interactions that you have among the modes. And to show you this to you, we can experiment with it yourself.
So this is a sheet of paper. This bend is an example of a Goldstone mode because I could have rotated this sheet without any cost of energy. So this bend is a Goldstone mode that costs very little energy.
Now this paper has the kinds of constraints that we have over here. And because of those constraints, if I make a mode in this direction, I'm not going to be able to bend it in the other directions. So clearly the mode that you have this direction, and this direction, are coupled. That's kind of an example of something like this.
Now while it is easy to do this bend because of this coupling, if thermal fluctuations have created modes that are shorter wavelength, and I have already created those modes over here, then you can experiment yourself. You'll see that this is harder to bend compared to this. You can see this already.
So that's the effect that you have over here. So it's the competition between these two. And which one wins, you can see, depends on whether n is larger than 2 or less than 2.
So essentially for n that is larger than 2, you'll find that this term wins, and you get disorder in two dimensions. And for n less than 2, you will get order, like we know the Ising model can be ordered. But there are other things that can be captured by this expression, which we will look at next time.
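As a minimal numerical sketch of the competition just described (not from the lecture: the rescaling step b = 1.1, the starting value K = 5, and the number of iterations are arbitrary illustrative choices), one can iterate the recursion relation K' = b^(d-2) zeta^2 K [1 + I_d(b)/K] with zeta = 1 - (n-1) I_d(b)/(2K), using I_2(b) = ln(b)/(2 pi) in two dimensions:

```python
import math

def flow_step(K, n, b=1.1, d=2):
    """One coarse-graining step for the nonlinear sigma model stiffness K,
    using the lowest-order recursion relation quoted in the lecture."""
    I_d = math.log(b) / (2 * math.pi)        # I_d(b) evaluated in d = 2
    zeta = 1 - (n - 1) * I_d / (2 * K)       # shrinking of the spin length
    return b**(d - 2) * zeta**2 * K * (1 + I_d / K)

for n in (1, 2, 3):
    K = 5.0                                   # start at low temperature (large K)
    for _ in range(200):
        K = flow_step(K, n)
    print(f"n = {n}: K after 200 steps is about {K:.2f}")
```

To lowest order this reduces to K' being approximately K - (n - 2) ln(b)/(2 pi): for n larger than 2 the coupling keeps getting weaker, so the two-dimensional system disorders, while for n less than 2 it grows, consistent with the Ising model being able to order.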