Lecture 6: Wave Profiles, Heat Equation / point source

NARRATOR: The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license and MIT OpenCourseWare in general is available at ocw.mit.edu.

PROFESSOR: Ready to go. So first of all, to say thank you to several people in the class who sent codes to find these wavefronts when a shock, a step function, is moving along with velocity c. I had roughly drawn what these three methods produced, but it was much better to see it in Mr. Sekora's movie that's on the 18.086 site, and sure enough, behind the moving front is an overshoot from Lax-Wendroff that reminds us of the Gibbs phenomenon, and I think it's sort of unavoidable because one of the three coefficients at the previous time is negative. For Lax-Wendroff, that's how it gets its second order accuracy, and we see that better accuracy in a much steeper profile.

Then Lax-Friedrichs, if you've noticed, has a sort of jagged profile. Maybe you've realized why that would be. Somehow Lax-Friedrichs kind of operates on a staggered grid. We'll see -- actually, I've drawn a staggered grid here, so I can say what I mean. Lax-Friedrichs computes this value from the two earlier values there, and then at the next level again, and so forth, so that actually Lax-Friedrichs is using two staggered grids separately. Normally with Lax-Friedrichs we call those levels n plus 1 and n plus 2 and n plus 3, but the picture is still right, so I think that's probably responsible for what you see: that there are two waves, two fronts, right? One delta x away from each other. And finally, upwind is also smeared out.
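A minimal sketch of such an experiment in Python (not the posted class code), comparing the three one-step methods on u_t = c u_x with a step as initial data; the grid, the value of r, and the number of steps are illustrative choices:

```python
import numpy as np

# Compare upwind, Lax-Friedrichs and Lax-Wendroff on u_t = c*u_x
# (the profile u0(x + c*t) moves to the left), starting from a step.
c, dx, r = 1.0, 0.01, 0.8            # r = c*dt/dx < 1, an illustrative choice
dt = r * dx / c
x = np.arange(-2.0, 2.0, dx)
u0 = np.where(x < 0.0, 1.0, 0.0)     # step: 1 on the left, 0 on the right

def step(u, method):
    up = np.roll(u, -1)              # u_{j+1} (periodic wrap; keep the front interior)
    um = np.roll(u, 1)               # u_{j-1}
    if method == "upwind":           # one-sided toward the characteristic
        return u + r * (up - u)
    if method == "lax_friedrichs":   # average of neighbors plus centered difference
        return 0.5 * (up + um) + 0.5 * r * (up - um)
    if method == "lax_wendroff":     # centered difference plus second-order correction
        return u + 0.5 * r * (up - um) + 0.5 * r**2 * (up - 2.0 * u + um)

profiles = {}
for m in ["upwind", "lax_friedrichs", "lax_wendroff"]:
    u = u0.copy()
    for _ in range(100):
        u = step(u, m)
    profiles[m] = u                  # inspect smearing and overshoot near the front
```

With r equal to 1, the upwind update would copy the exact characteristic value u at j plus 1, time n; for r below 1 you can watch the upwind and Lax-Friedrichs smearing and the Lax-Wendroff overshoot form near the front.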

I guess I was so pleased to see these actually showing on the website, and that makes me think of more questions. Maybe you too. First of all, as time goes forward, we see this bump beginning to appear. So am I right in thinking that it reaches a certain height and stays there? I guess I'm thinking about further questions, because every time you do a numerical experiment, good additional questions show up. So I'm wondering, as this wavefront forms, this characteristic profile for these three methods, first of all, does that profile settle down to a steady profile? Ultimately it'd be great to be able to predict what it would be. In particular, does that bump reach a certain height? Because you can see it just barely forming and then you see it growing to a certain point. Then when the movie ends, fortunately it plots this picture and you really see the profile, and you can actually see, if you look closely, that it goes above 1 again and probably more.

That would be: does it approach a steady profile? Does that height settle down? It'd be fantastic to be able to predict that height. For example, you always want to know, how do things depend on the parameter? The one parameter we have in this problem is r, c delta t over delta x. If I change r, I'm changing all the coefficients. If I changed r to be exactly 1, then the profile would be perfect, right? That's the golden ratio, r equal 1. I would be computing that value exactly from this value, at exactly the right slope on the characteristic -- perfection.

Now as r goes away from 1 -- I presume it goes down below 1. Of course, if it goes above 1, catastrophe. If it goes below 1, I presume the bump begins to appear, goes higher. I don't know. Could you see how the dependence on r appears? Then I can suggest one more thing, which is -- let's put a fourth guy onto this picture, which would be leapfrog. Leapfrog for first order equations, so let me just put down below what that would be. So it would be on that kind of a staggered grid. A new value at time n plus 1 would come from these at time n and, really, on this one at time n minus 1. So leapfrog -- you'd think of it right away. I use a center difference in time, u at j, time n plus 1, minus u at j, time n minus 1, over 2 delta t, equals c times a center difference in space, u at j plus 1, time n, minus u at j minus 1, time n, over 2 delta x. So you see right away, the 2s cancel. The delta t times c over delta x is our old ratio r, so this is easily written in terms of r again. It does have this two time step issue and of course, we have to think about stability.
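Written out (reconstructing the board formula, with r = c delta t over delta x), the one way leapfrog is:

```latex
\frac{u_j^{\,n+1} - u_j^{\,n-1}}{2\,\Delta t} = c\,\frac{u_{j+1}^{\,n} - u_{j-1}^{\,n}}{2\,\Delta x}
\qquad\Longrightarrow\qquad
u_j^{\,n+1} = u_j^{\,n-1} + r\left(u_{j+1}^{\,n} - u_{j-1}^{\,n}\right),
\qquad r = \frac{c\,\Delta t}{\Delta x}.
```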

I'm writing down a fourth candidate for the equation ut equals cux, of course. Before I come back to the second order equations or systems, I just wanted to put in these comments, partly because I'm thinking about projects -- and I hope you are -- so some people will have [? cici's ?] or problems from other courses which involve solving PDEs, and that'd be perfectly natural to base some project on, but if you don't have some specific application, then here are a whole lot of interesting questions. Just go further with that, so the profile's dependence on r, c delta t over delta x, would be a question.

Let me just emphasize, what's the goal? When I say that this upwind one is smeared out, I mean that it doesn't satisfy what you really hope for. You want to capture that shock, that discontinuity -- not perfectly, you can't expect that, but you want to capture it, maybe in, let's say, 2 delta x. A good method captures the shock within 2 delta x and does it without much undesirable oscillation, unphysical oscillation like that, so these are methods where we're trading off: a good shock capture versus a smeared one, which is not satisfactory, really, in a lot of applications. So capturing the shock within 2 delta x, roughly, is highly desirable, and you might have to add an artificial viscosity, which we'll discuss, to get there.

So that's past things, but still very much present, if I can say. Here I've introduced a new leapfrog, a one way equation leapfrog, I'll call this, where the last lecture was about two way leapfrog -- leapfrog for utt equals c squared uxx, the two way wave equation. I might just mention about this one way leapfrog: once you put in e to the ikx as we always do, or e to the ikj delta x in the discrete case -- so you would put in an e to the ikj delta x -- and then find g, the growth factor. That will depend on k and will tell us the time dependence. It'll be a g to the nth.

I think you'll find that for this guy, the absolute value of g is exactly 1 while it's stable. Of course, if the Courant condition is violated, then there's no way it could be stable and the gs will grow, be bigger than 1. But I think in the stable range, this g will have absolute value exactly 1, so it'll be on the unit circle. You don't have much leeway, right? We found upwind was, like, well inside, Lax-Friedrichs was well inside, Lax-Wendroff, because it had higher accuracy for low frequencies, was closer, and I think this guy is right on. So those are the first order methods, this would be Lax-Wendroff, and this would be this second order method. So you're really playing with fire there, and maybe this is not a favorite, just because it's too close to the edge -- in fact, right on the edge. So those are some comments really generated by your numerical experiments, and I think that's the way this course should go -- homeworks and projects should develop out of numerical experiments.
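A sketch of that check, substituting u_j^n = g^n e^{ikj\Delta x} into the one way leapfrog above:

```latex
g - \frac{1}{g} = r\left(e^{ik\Delta x} - e^{-ik\Delta x}\right) = 2\,i\,r\sin(k\Delta x)
\qquad\Longrightarrow\qquad
g = i\,r\sin(k\Delta x) \pm \sqrt{1 - r^2\sin^2(k\Delta x)}.
```

For r less than or equal to 1 the square root is real, so |g|^2 = r^2 sin^2(k delta x) + 1 - r^2 sin^2(k delta x) = 1: both roots sit exactly on the unit circle, just as described above.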

Now I'll come back to complete my lecture on the second order wave equation. So we studied leapfrog for that. I won't repeat that, but with the second order wave equation we have another option, which is to reduce a second order equation, with a second time derivative, to a system of two equations that are first order in time and space. I've chosen to use the letters h and e as a kind of hint that we're touching a really big application, which is electromagnetism -- e for the electric field, h for the magnetic field, but here in 1 d. So it allows me just to mention that in 3 d, when e has three components and h has three components, this is really the curl and this is really minus the curl, and the notes will show the beautiful Yee mesh, a staggered mesh that copies, for finite difference methods, the properties of the curl that Maxwell discovered the physics of -- you know, a magnetic field going around a wire along which a current, an electric field, is going.

But let me stay here with one dimension. So first of all, can we recover the wave equation from that? Let's just see that we haven't left behind the wave equation. So I just take the second time derivative of e -- I just want to recover the wave equation so that I haven't lost it here. If I'm taking the time derivative of the first equation, I get d second h dt dx, right? But by the useful, extremely useful fact that this is the same as taking the t derivative first and the x derivative second, now I have the x derivative of this equation, so this is c times -- now I have the x derivative of c times it, so now I'm up to c squared, the x derivative of the x derivative, right? So we eliminated h, got back to one equation, second order, and it's the wave equation.
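In symbols (assuming the 1-d system is e_t = c h_x and h_t = c e_x, which is the pairing this elimination needs):

```latex
e_{tt} = c\,(h_x)_t = c\,(h_t)_x = c\,(c\,e_x)_x = c^2\,e_{xx}.
```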

I want to show the same thing now for finite differences. It's such a simple and useful device, and our point will be that it works just as well for differences. That allows us to do this staggered grid idea. So the staggered grid idea is, I'll do e on this grid, at these grid points, h at the half points, e again at the integer points, h again at the half points. Now, so my difference method, leapfrog on this staggered mesh, will copy that: the first equation will give e at the new time from the difference in h at this time. So you see it. The de dt is going to be replaced by that time difference, the dh dx is going to be replaced by this space difference, and that will determine e at the new time. So I need time 0 and time 1/2 to get started. Maybe I put those down here. Somewhere I have to use the initial condition, the initial velocity, to get started on e and h, and then I go onwards. I've got to this point and this point. Then this guy comes from this one, this one and this one. Of course, that similarly comes from the same equation shifted over.

Now I know these. Now I'm ready for this one. Coming from -- what am I approximating now? I'm on the staggered part of the grid, the half steps. I'm approximating this on the half steps, right where h is defined. So that minus that is the time difference. This minus this -- and I know these now -- is the space difference in this equation, and that's what allows me to compute the new one. You see, it's a simple idea. It's close to but different from my scalar leapfrog, the one way leapfrog here. There it was the same u at all the points; here it is e at the integer time levels and h at the half integer time levels. It's natural to say I'm calling this leapfrog. Could I do this for the difference equation? I just want to see if I can eliminate h -- you saw what happened there: by using the identity that the second derivative in t and x is always the same as the second derivative in x and t, I eliminated h and got an equation that involved e only.
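A minimal sketch of that staggered update in Python, assuming the system e_t = c h_x, h_t = c e_x, with e at integer points and times and h at half points and half times; the starting value for h and the boundary treatment are deliberately crude, just enough to march forward:

```python
import numpy as np

# Staggered leapfrog for e_t = c*h_x, h_t = c*e_x (1-D Yee-style grid).
# e[j] lives at (x_j, t_n); h[j] lives at (x_j + dx/2, t_n + dt/2).
c, dx, r = 1.0, 0.01, 0.9
dt = r * dx / c
x = np.arange(0.0, 1.0, dx)

e = np.exp(-100 * (x - 0.5) ** 2)   # smooth initial pulse for e
h = np.zeros(len(x))                # crude start: h = 0 at t = dt/2

for n in range(200):
    # e at time n+1 from the space difference of h at time n+1/2
    e[1:] += r * (h[1:] - h[:-1])   # e[0] held fixed (crude boundary)
    # h at time n+3/2 from the space difference of e at time n+1
    h[:-1] += r * (e[1:] - e[:-1])  # h[-1] held fixed (crude boundary)
```

In a more careful code, the half-step value of h would come from the initial data, which is what "time 0 and time 1/2 to get started" refers to above.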

What's going to be the corresponding identity? Trivial, but trivial things are not bad things to notice, for the difference method. This is what I wanted to be sure of, for the h part. This h -- so the h -- do you mind if I draw yet another symbol? I want to use this idea to eliminate h. So let me put two hs there. Is that right? Yeah. I think what I'd like to know -- let me just say what I believe. I believe that the time difference of the space difference of h at mesh points is equal to the space difference of the time difference of h at the same mesh points. I meant to put those as subscripts: it's not delta times x, it's delta sub x. Difference in the time direction, difference in the space direction.

Let's just see what this means. Let me redraw these four points. These are h, h at all those points. So that says, figure out the space differences -- this minus this and this minus this -- and then take the time difference of those. Can I see what I'll get if I do that? So this means I take the space differences, so I have 1 minus 1, and that would be 1 minus 1, but now I'm going to take the time difference of those. So that'll be this one minus this one, minus this one plus this one. Do you see that? What we have here is h at this point, minus h at this point, minus h at this point, plus h at this point.

I claim that we have the same thing here, that if I take the time differences first -- can I do that instead of taking the space differences first? I'll take this one, this time difference, this minus this, and I'll subtract -- I'm taking now the space difference -- I'll subtract the other time difference. Are you with me? You see, it's just the same four values, two plus 1s and two minus 1s, whichever way we go first. In fact, this actually throws a little light, I think, on the calculus identity, that we can exchange the order of the derivatives, because we know we can exchange the order of the differences, because we just got numbers like 1, minus 1, minus 1 and plus 1.
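Writing the four corner values of h generically, the blackboard computation is just:

```latex
\Delta_t\!\left(\Delta_x h\right)
= \bigl(h_{j+1}^{\,n+1} - h_j^{\,n+1}\bigr) - \bigl(h_{j+1}^{\,n} - h_j^{\,n}\bigr)
= \bigl(h_{j+1}^{\,n+1} - h_{j+1}^{\,n}\bigr) - \bigl(h_j^{\,n+1} - h_j^{\,n}\bigr)
= \Delta_x\!\left(\Delta_t h\right),
```

both sides equal to h at j plus 1, n plus 1, minus h at j, n plus 1, minus h at j plus 1, n, plus h at j, n -- the same four values with signs plus, minus, minus, plus.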

If we let the spacing, the mesh, go to 0, with this divided by the delta x and the delta t, I suppose we would prove the continuous case. So in a way, this throws light on something that we probably take for granted but wouldn't want to be pressed on too closely. So having that identity allows me to do exactly what I did in the continuous case there and show that, when I eliminate the half levels, I've got ordinary leapfrog. When I eliminate this half level, let's say, I'm going to be connecting these at three levels -- one, two, three. It'll be leapfrog.

So since it's been a few days since we wrote down two way leapfrog, second order equation leapfrog, let me just write it again, and that's what we would get. The time difference squared of e will be equal to c squared times the space difference squared of e. Of course, here we have a delta x squared and here a delta t squared, so altogether an r squared. That's the formula we know. So if I simplify this to make it look good and put the delta t squared up there, you see that it's just r squared. Maybe it's worth realizing again that the magic ratio r equal 1 gives the exact answer, agreeing exactly with u: this equation, if r is 1, would be satisfied exactly by the continuous solution, and so it's the same as this one.
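Written out, with r = c delta t over delta x:

```latex
\frac{e_j^{\,n+1} - 2e_j^{\,n} + e_j^{\,n-1}}{\Delta t^2}
= c^2\,\frac{e_{j+1}^{\,n} - 2e_j^{\,n} + e_{j-1}^{\,n}}{\Delta x^2}
\qquad\Longrightarrow\qquad
e_j^{\,n+1} - 2e_j^{\,n} + e_j^{\,n-1} = r^2\bigl(e_{j+1}^{\,n} - 2e_j^{\,n} + e_{j-1}^{\,n}\bigr).
```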

So that's leapfrog. So can I summarize leapfrog? First of all, it worked for second order equations. It led us to the same stability condition, that r had to be less than or equal to 1, but it took a quadratic equation for g. Remember, we had g squared minus 2 g times some quantity a that involved rs and cosines, plus 1, equals 0. The reason it was a quadratic was that there were two time steps involved. We handled that; we got stability. If r was less than or equal to 1, that led us to a in magnitude less than or equal to 1, and that led us to g in magnitude 1. I'll just remember those steps: a less or equal 1 in magnitude, and that led us to g.
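For the record, that quadratic (a sketch from substituting e_j^n = g^n e^{ikj\Delta x} into the leapfrog formula above) is:

```latex
g^2 - 2\,a\,g + 1 = 0,
\qquad a = 1 - 2\,r^2\sin^2\!\left(\tfrac{k\,\Delta x}{2}\right),
```

and when r is less than or equal to 1 we have |a| less than or equal to 1, so the two roots are complex conjugates whose product is 1: both on the unit circle.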

Yes, the notes, including more about Maxwell's equations and Yee's beautiful method and a figure that shows his mesh, will go up -- I mean, every section is getting upgraded and this one was the next one to be upgraded. Probably by tomorrow you'll have a good section on second order equations. For me, that's what I wanted to say about the wave equation alone, with no diffusion, and now I'm ready to move to the heat equation. So what do we have coming up? Maybe it's important to say what's coming up.

So we've done the one way wave. Now we've done two way waves. Next comes the heat equation, heat diffusion, and then will come a mixture of the two, convection diffusion -- and by the way, in a couple of weeks we'll have a guest lecturer who will do the applications to financial mathematics, mathematical finance, specifically the Black-Scholes equation. So you probably know that a lot of effort and a lot of money are going into the valuation of options and other financial derivatives, and that the Black-Scholes equation beautifully produces the heat equation, and additional correction terms will produce a convection diffusion equation. So this is an application that we wouldn't have made some years ago, but now it's worth focusing on, and I think it's interesting in itself. So this is what's coming, and then you could say one way wave equations, but nonlinear -- so nonlinear wave equations, and those are called conservation laws. That's after these linear cases. Alright.

So I'm ready to tackle the heat equation, starting then on the next section of the notes. So let me tackle it here. So the heat equation -- let me take the constant to be 1: ut equals uxx. We already see the big difference: now there's a second x derivative, but only a first derivative in time. The dimensions of t and x are no longer the same. We won't have delta t and delta x comparable; now it will be delta t comparable with delta x squared, and that's a very significant difference, because if delta x is small, delta x squared is extremely small. So when delta t is constrained by delta x squared, we're looking at small time steps. We're looking at stiff systems, absolutely. This is going to be a stiff problem, because the low frequencies and the high frequencies differ enormously in their decay rates. In fact, why don't we see it right away, starting as always with exponentials.

So I plug in u of x and t as some g of t times e to the ikx, separating variables as always. Look for a pure exponential solution, plug that in, and we get dg dt times e to the ikx on the left, and now I have to take two x derivatives, so that brings ik down twice. That's i squared, giving the minus 1, times k squared: minus k squared times g times e to the ikx. Now of course, I can cancel the e to the ikx, which is never 0, and I've separated out the time part, the g part: dg dt is minus k squared g. So g is e to the minus k squared t. Notice that it's k squared and notice that it's real and negative. See, that's a very big difference. Compare e to the ikct, which it was for the one way wave equation. The difference is enormous here. That's magnitude 1, energy conserved. This is magnitude smaller than 1, energy dissipated. We had a ratio delta t over delta x; now we're going to see delta t over delta x squared, because it's x squared that matches t now. So now when I put that into here -- let me just do it -- g of t is e to the minus k squared t.
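In symbols, that separation of variables is:

```latex
u(x,t) = G(t)\,e^{ikx}:\qquad
G'(t)\,e^{ikx} = (ik)^2\,G(t)\,e^{ikx} = -k^2\,G(t)\,e^{ikx}
\;\Longrightarrow\; G(t) = e^{-k^2 t}.
```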

So there is the pure exponential -- simple as always, because we have constant coefficients, but highly informative. I guess another thing that I read off of that is: what frequencies are decaying fast? High frequencies. If k is large, k squared is very large and e to the minus k squared t is decaying very quickly. So high frequency noise in this system -- roughness, discontinuities -- is going to get smoothed out strongly by moving forward in time. Physically, what's happening is that our wavefront, which stayed a perfect step function in the wave equation, will instantly smear. So diffusion smears discontinuities. Heat travels. If we were back over here, if the temperature starts out as 1 on this side and 0 on this side, then in an instant some heat flows from right to left. In fact, it flows with infinite speed. In a short delta t, there's a little heat all the way out there -- very little, of course. I guess we want to see what that behavior is.

So I now want to write down -- staying with the differential equation, what do we want to do? We did the exponential case and learned a good bit from that, the fast decay of high frequencies. The next step would be to put different frequencies, different ks, together. By linearity, we can add these solutions. We can multiply that by any numbers we want, depending on k; we still have solutions, and we can integrate over k, so we will, actually. That will be the next step. Let me write down what that does. I'm going to take a little jump here to put on the board what I just said in words. I can multiply this, e to the minus k squared t, e to the ikx -- that's what's happening to the pure frequency k -- by the amount of that frequency which is in the initial condition. So here is the initial condition, u 0, and I've taken its Fourier transform. This tells me how much of frequency k is in the initial problem. That amount multiplies this, with this rapid decay as time goes forward, and at any later time I'm just adding up -- and since we're on the whole line, k is a continuous variable -- so we add up from k equal minus infinity to infinity, and I think we have to remember the 2 pi that depends on our particular definition of the transform, but I'm going to stay with this one.
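Written out with one standard convention (the 2 pi placed on the inverse transform), the board formula is:

```latex
\widehat{u}_0(k) = \int_{-\infty}^{\infty} u(x,0)\,e^{-ikx}\,dx,
\qquad
u(x,t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \widehat{u}_0(k)\,e^{-k^2 t}\,e^{ikx}\,dk.
```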

That's the answer, that's u of x and t. So I've said that in words, but actually we could just check it. We would like to know that this left hand side solves the heat equation, and we would like to know that it has the correct start, the correct u of x and 0. Let's check that. At t equals 0, at the start, this factor is 1. So at the start, this formula, without that factor there, is just the inverse Fourier transform: it takes the Fourier transform, multiplies by this, integrates, and recovers u 0. So it does start out correctly. Should I say that again? It starts correctly because at t equal 0, this is just the inverse Fourier transform of u 0 hat, so it produces u 0. It inverts the transform that we took. Secondly, does it solve the heat equation? Sure, because for each k we've just found that that solves the heat equation, and now we're just putting different ks together.

So that's the answer. So you might say, OK, perfection -- do we need finite differences? I guess we do. This is typical of formulas that give the answer exactly, but in terms that are not numerically convenient. Because to use this formula, first of all, it requires an integral, an infinite integral, to find u 0 hat. To use this formula, we'd have to find this transform and then we have to do this integral for the inverse transform, this integral for the answer u. Two infinite integrals, not to mention the fact that on a finite interval, when x doesn't go all the way from minus infinity to infinity, we have to think all over again. So we have a nice formula for the answer -- it looks nice, but it's not numerically terrific. Still, we can get a lot of information out of it, and there's one other special solution that's also extremely informative. So I'm going to do a special case, but it's the big case. What if the initial condition is a delta function? An impulse at time 0, at x equal 0 -- an instant source of heat, a point source with finite strength, you could say. For the wave equation, what happened? That was like turning on a flash of light and it went along the characteristics. For the heat equation, it'll be quite different. There aren't characteristics here, but this is an extremely important solution. I think you could call it the fundamental solution to the heat equation.

You may know what it is. You may have seen this formula. It's fantastic that we get a formula for this one. You could say, we've got a formula: what's the Fourier transform, what's u 0 hat of k, for the delta function? 1, right? The Fourier transform of the delta function is a constant, 1. All frequencies are there in equal amounts. This is a constant, so I can remove it -- it's 1. I'm left with a specific integral to do. The question is, can I do it? The answer turns out to be yes. You can do that integral when this is a constant, and that will give you the answer for that starting value.
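With u 0 hat equal to 1 and the convention above, the integral that's left is:

```latex
u(x,t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-k^2 t}\,e^{ikx}\,dk.
```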

Let me just say what that answer is: u of x and t, the fundamental solution -- u fundamental, maybe I should say, because it's such an important case. It turns out to be -- it's not obvious what that integral is. By the way, normally, to get the answer that I'm going to write down, we take a kind of end run around the integral, and I'm going to take a serious end run and write down the answer exactly. Here's the key part of the answer.

It's a solution which starts from a delta function, spreads of course, and the shape as it spreads is a bell. It's the famous bell shaped Gaussian distribution, e to the minus x squared, and then always here comes the scaling that tells us the width of the bell -- how far has it spread at time t? What goes there is 4 t. That's the key observation, and notice how x squared and t are coming in. It's that ratio, x squared to t, that's crucial. That's the parameter again. Of course, we need to multiply by a constant. How do I know I need a constant? Because the total heat is conserved. In other words, the integral of the delta function, the original source of heat, was 1. The integral from minus infinity to infinity of the delta function, which is totally concentrated at one point, is 1. So I need to put on whatever constant it takes so that the integral of this thing stays 1 for later times. This is one integral from minus infinity to infinity that we can do, and it turns out to need 1 over the square root of 4 pi t.
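So the fundamental solution, with the 4 t inside the exponent and the constant that keeps the total heat at 1, is:

```latex
u_{\text{fund}}(x,t) = \frac{1}{\sqrt{4\pi t}}\;e^{-x^2/(4t)},
\qquad
\int_{-\infty}^{\infty} u_{\text{fund}}(x,t)\,dx = 1 \quad\text{for every } t>0.
```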

That solves the heat equation. It's not a lot of fun to plug that into the heat equation, but it's possible, right? The time derivative is going to be messy -- it'll have two terms -- and the x derivatives will have two terms, and they'll agree. The notes will give one way to do the integral and get that answer. You see, integrals with e to the minus k squared in them are impossible on a finite interval, but here we're going all the way from minus infinity to infinity, which you might think makes the problem harder, but actually it makes it a great deal easier and we can get an explicit answer, a beautiful answer.
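One way to do that plugging-in without the mess is symbolically; here is a small sketch with sympy that checks both the heat equation and the conserved total heat:

```python
import sympy as sp

x = sp.symbols("x", real=True)
t = sp.symbols("t", positive=True)

# Fundamental solution of u_t = u_xx starting from a delta function at x = 0
u = sp.exp(-x**2 / (4 * t)) / sp.sqrt(4 * sp.pi * t)

# Residual of the heat equation: u_t - u_xx should simplify to 0
print(sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2)))   # -> 0

# Total heat stays 1 for every t > 0
print(sp.integrate(u, (x, -sp.oo, sp.oo)))             # -> 1
```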

So we learn that the solution to the heat equation coming from a point source is a bell shaped Gaussian curve that gets wider and wider as t increases, but I guess you would want to look at this and see: what happens as t comes down to 0? Does it really converge to -- as t goes to 0, this should go to the delta function, and in some way it does. It's wonderful. The delta function, of course, is 0 away from the origin.

So why does this approach 0 away from the origin? Suppose x is 1. Then do you see this thing going to 0 as t goes to 0? Look what's happening. As t goes to 0, this part is blowing up, but kind of weakly -- that's not a disastrous blowup -- compared to this, e to the minus 1 over 4 t. As t goes to 0, that's e to a very negative exponent. That's going to 0 very fast, much faster than this is blowing up. If you like to think of l'Hopital's rule or a ratio or something, this quantity is going to 0 way faster than this one is blowing up, and the result is 0, except at the one point, x equals 0, where the exponential is 1. The fact that the total amount of heat stays at 1 tells us that the delta function is the limit there.
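In symbols, for any fixed x not equal to 0:

```latex
\lim_{t\to 0^+}\;\frac{1}{\sqrt{4\pi t}}\,e^{-x^2/(4t)} = 0,
```

because the exponential decays faster than the 1 over square root of 4 pi t factor blows up, while at x = 0 the value blows up and the total integral stays 1 -- exactly the delta function behavior.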

Today was the heat equation, continuous case, with these explicit answers.

Then next time, which is tomorrow I guess, is the heat equation with finite differences. Again, we'll have several difference methods, and to compare them would be terrific. Please think about that comparison with the heat equation and/or what I was saying at the beginning of class, the deeper look at the comparison for the wave equation. See you tomorrow for difference equations for this problem. Thanks.
