Topics covered: Improper integrals
Note: This video lecture was recorded in the Fall of 2007 and corresponds to the lecture notes for lecture 35 taught in the Fall of 2006.
Instructor: Prof. David Jerison
Lecture 36: Improper Integrals
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: Now, today we are continuing with this last unit. Unit 5, continued. The informal title of this unit is Dealing With Infinity. That's really the extra little piece that we're putting into our discussions of things like limits and integrals. To start out with today, I'd like to recall for you L'Hôpital's Rule. And in keeping with the spirit here, we're just going to do the infinity / infinity case.
I stated this a little differently last time, and I want to state it again today. Just to make clear what the hypotheses are and what the conclusion is. We start out with, really, three hypotheses. Two of them are kind of obvious. The three hypotheses are that f(x) tends to infinity, g(x) tends to infinity, that's what it means to be in this infinity / infinity case. And then the last assumption is that f'(x) / g'(x) tends to a limit, L. And this is all as x tends to some a. Some limit a. And then the conclusion is that f(x) / g(x) also tends to L, as x goes to a. Now, so that's the way it is. So it's three limits. But presumably these are obvious, and this one is exactly what we were going to check anyway. Gives us this one limit. So that's the statement. And then the other little interesting point here, which is consistent with this idea of dealing with infinity, is that a equals plus or minus infinity and L equals plus or minus infinity are OK. That is, the numbers capital L, the limit capital L, and the number a can also be infinite.
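For readers following along at a computer, here is a minimal sympy sketch (my own illustration, not part of the lecture) of one infinity / infinity case, with the illustrative choices f(x) = ln x and g(x) = x:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f, g = sp.log(x), x

# limit of f/g and of f'/g' as x -> infinity; L'Hopital's Rule says they agree
ratio = sp.limit(f / g, x, sp.oo)
derivative_ratio = sp.limit(sp.diff(f, x) / sp.diff(g, x), x, sp.oo)
print(ratio, derivative_ratio)   # both are 0
```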
Now in recitation yesterday, you should have discussed something about rates of growth, which follow from what I said in lecture last time and also maybe from some more detailed discussions that you had in recitation. And I'm going to introduce a notation to compare functions. Namely, we say that f(x) is a lot less than g(x) if the ratio f(x) / g(x) tends to 0 as x goes to infinity. So this is a notation, a new notation for us. f is a lot less than g. And it's meant to be read only asymptotically. It's only in the limit as x goes to infinity that this happens. And implicitly here, I'm always assuming that these are positive quantities. f and g are positive.
What you saw in recitation was that you can make a systematic comparison of all the standard functions that we know about. For example, the log function goes to infinity. But a lot more slowly than x to a power. A lot more slowly than e^x. A lot more slowly than, say, e^(x^2). So this one is slow. This one is moderate. This one is fast. And this one is very fast. Going to infinity. Tends to infinity, and this is of course as x goes to infinity. All of them go to infinity, but at quite different rates. And, analogous to this, and today we're going to be needing this quite a bit, is rates of decay, which are more or less the opposite of rates of growth. So rates of decay are rates at which things tend to 0. And for those I'm just going to take reciprocals of these functions. So 1 / ln x tends to 0. But rather slowly. It's much bigger than 1 / x^p. Oh, I didn't mention that this exponent p is meant to be positive. That's a convention that I'm using without saying. I should've told you that.
So think x^(1/2), x^1, x^2: they're all in this sort of moderate, intermediate range. And that, in turn, goes to 0 but much more slowly than 1 / e^x, also known as e^(-x). And this last guy here, e^(-x^2), goes to 0 incredibly fast. It vanishes really, really fast. So this is a review of L'Hôpital's Rule, what we said last time, and the application of it, which is to rates of growth and tells us what these rates of growth are.
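A rough numerical illustration of this hierarchy (my own, with arbitrarily chosen sample points): the ratio of a slower-growing function to a faster one shrinks toward 0 as x grows.

```python
import math

# ratios of a slower-growing function to a faster one shrink toward 0
for xval in (10.0, 50.0, 100.0):
    print(xval,
          math.log(xval) / xval**0.5,   # (ln x) / x^(1/2): slow vs. moderate
          xval**2 / math.exp(xval))     # x^2 / e^x: moderate vs. fast
```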
Today, I want to talk about improper integrals. And improper integrals, we've already really seen one or two of them on your exercises. And we mentioned them a little bit, briefly. I'm just going to go through them more carefully and more systematically now. And we want to get just exactly what's going on with these rates of decay and their relationship with improper integrals. So I need for you to understand, on this spectrum of functions, which ones are suitable for integration as x goes to infinity.
Well, let's start out with the definition. The integral from a to infinity of f(x) dx is, by definition, the limit as N goes to infinity of the ordinary definite integral up to some fixed, finite level. That's the definition. And there's a word that we use here, which is that we say the integral, so this is terminology for it, converges if the limit exists. And diverges if not. Well, these are the key words for today. So here's the issue that we're going to be addressing. Which is whether the limit exists or not. In other words, whether the integral converges or diverges.
These notions have a geometric analog, which you should always be thinking of at the same time in the back of your head. I'll draw a picture of the function. Here it's starting out at a. And maybe it's going down like this. And we're interpreting the integral geometrically. This only works if f is positive. Then the convergent case is the case where the area is finite. So the total area is finite under this curve. And the other case is when the total area is infinite.
I claim that both of these things are possible. Although this thing goes on forever, if you stop it at one stage, N, then of course it's a finite number. But as you go further and further and further, there's more and more and more area. And there are two possibilities. Either as you go all the way out here to infinity, the total that you get adds up to a finite total. Or else, maybe there's infinitely much. For instance, if it's a straight line going across, there's clearly infinitely much area underneath.
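One way to see the two possibilities concretely is to tabulate the partial integrals in the definition for a convergent example and a divergent one. This small sketch is mine, using closed-form antiderivatives that come up later in the lecture.

```python
import math

# the definition in action: compute the integral up to N, then let N grow
for N in (10, 100, 1000, 10000):
    partial_exp = 1 - math.exp(-N)     # integral of e^(-x) on [0, N]: settles near 1
    partial_recip = math.log(N)        # integral of 1/x on [1, N]: grows like ln N
    print(N, partial_exp, partial_recip)
```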
So we need to do a bunch of examples. And that's really our main job for the day: to make sure that we know exactly what to expect in all cases. The first example is the integral from 0 to infinity of e^(-kx) dx. Where k is going to be some positive number. Some positive constant. This is by far the most fundamental of the improper integrals.
And in order to handle this, the thing that I need to do is to check the integral from 0 up to N, e^(-kx) dx. And since this is an easy integral to evaluate, we're going to do it. It's -1/k e^(-kx), that's the antiderivative. Evaluated at 0 and N. And that, if I plug in these values, is -1/k e^(-kN), minus what I get when I evaluate it at 0, which is -1/k e^0. So there's the answer. And now we have to think about what happens as N goes to infinity. So as N goes to infinity, what's happening is the second term here stays unchanged. But the first term is e to a negative power, and the exponent is getting more and more negative. That's because k is positive here. You've definitely got to pay attention. Even though I'm doing this with general variables here, you've got to pay attention to signs of things. Because otherwise you'll always get the wrong answer. So you have to pay very close attention here. So this is, if you like, e to the minus infinity in the limit, which is 0. And so in the limit, this thing tends to 0. And this thing is just equal to 1/k. And so all told, the answer is 1/k. And that's it.
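A quick symbolic check of this answer (my own sketch; note that declaring k positive is essential, exactly as in the calculation above):

```python
import sympy as sp

x = sp.symbols('x')
k = sp.symbols('k', positive=True)   # the sign assumption on k is essential

print(sp.integrate(sp.exp(-k * x), (x, 0, sp.oo)))   # prints 1/k
```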
Now we're going to abbreviate this a little bit. This thought process, you're going to have to go through every single time you do this. But after a while you also get good enough at it that you can make it a little bit less cluttered. So let me show you a shorthand for this same calculation. Namely, I write 0 to infinity e^(-kx) dx. And that's equal to -1/k e^(-kx) 0 to infinity. That was cute. Not small enough, however.
So, here we are. We have the same calculation as we had before. But now we're thinking, really, in our minds that this infinity is some very, very enormous number. And we're going to plug it in. And you can either do this in your head or not. You say -1/k e^(-infinity). Here's where I've used the fact that k is positive. Because -k times a very large number goes to minus infinity, and e to the minus infinity is 0. And then here plus 1/k; minus -1/k, let me write it the same way I did before. And that's just equal to 0 + 1/k, which is what we want. So this is the same calculation, just slightly abbreviated. Yeah. Question.
STUDENT: [INAUDIBLE]
PROFESSOR: Good question. The question is, what about the case when the limit is infinity? I'm distinguishing between something existing and its limit being infinity here. Whenever I make a discussion of limits, I say a finite limit, or in this case, it works for infinite limits. So in other words, when I say exists, I mean exists and is finite. So here, when I say that it converges and I say the limit exists, what I mean is that it's a finite number. And so that's indeed what I said here. The total area is finite. And, similarly, over here. I might add, however, that there is another part of this subject. Which I'm skipping entirely. Which is a little bit subtle. Which is the following. If f changes sign, there can be some cancellation and oscillation. And then sometimes the limit exists, but the total area, if you counted it all positively, is actually still infinite. And we're going to avoid that case. We're just going to treat these positive cases. So don't worry about that for now. That's the next layer of complexity which we're not addressing in this class. Another question.
STUDENT: [INAUDIBLE]
PROFESSOR: The question is, would this be OK on tests. The answer is, absolutely yes. I want to encourage you to do this. If you can think about it correctly. The subtle point is just, you have to plug in infinity correctly. Namely, you have to realize that this only works if k is positive. This is the step where you're plugging in infinity. And I'm letting you put this infinity up here as an endpoint value. So in fact that's exactly the theme. The theme is dealing with infinity here. And I want you to be able to deal with it. That's my goal.
STUDENT: [INAUDIBLE]
PROFESSOR: OK, so another question. Let's be sure here: when I say the limit exists, it has to be finite. That means it's finite, not infinite. The limit can be 0. It can also be -1. It can be anything. It doesn't have to be a positive number. Other questions.
So we've had our first example. And now I just want to add one physical interpretation here. This is Example 1, if you like. And this is something that was on your problem set, remember. That we talked about the probability, or the number, if you like, the number of particles on average that decay in some radioactive substance. Say, in time between 0 and some capital T. And then that would be this integral, 0 to capital T, some total quantity times this integral here. This is the typical kind of radioactive decay number that one gets. Now, this is some number of particles. And if the substance is radioactive, then in the limit as capital T goes to infinity, we get the improper integral from 0 to infinity.
Which is equal to the total number of particles. And that's something that's going to be important for normalizing and understanding. How much does the whole substance, how many moles do we have of this stuff. What is it. And so this is a number that is going to come up. Now, I emphasize that this notion of T going to infinity is just an idealization. We don't really believe that we're going to wait forever for this substance to decay. Nevertheless, as theorists, we write down this quantity. And we use it. All the time. Furthermore, there's other good reasons for using it, and why physicists accept it immediately. Even though it's not really completely physically realistic ever to let time go very, very far into the future. And the reason is, if you notice this answer here, look at how much simpler this number is, 1/k, than the numbers that I got in the intermediate stages here.
These are all ugly, the limits are simple. And this is a theme that I've been trying to emphasize all semester. Namely, that the infinitesimal, the things that you get when you do differentiation, are the easier formulas. The algebraic ones, the things in the process of getting to the limit, are the ugly ones. These are the easy ones, these are the hard ones. So in fact, infinity is basically easier than any finite number. And a lot of appealing formulas come from those kinds of calculations. Another question.
STUDENT: [INAUDIBLE]
PROFESSOR: The question is, shouldn't the answer be A? Well, the answer turns out to be A/k. Which means you have to be careful when you set up your arithmetic and you model this as a collection of particles. You said it should be A. But that's because you made an assumption. Which was that A was the total number of particles. But that's just false, right? This, A/k, is the total number of particles. So therefore, if you want to set it up, you want to set it up so that this number is the total number of particles. And that's how you set up a model: you do all the calculations and you see what it's coming out to be. And that's why you need to do this kind of calculation.
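In symbols, on my reading of the board (the integrand A e^(-kt) and the name N_0 for the intended total are assumptions, since the transcript only gestures at what is written):

$$\int_0^\infty A e^{-kt}\,dt \;=\; \frac{A}{k}, \qquad \text{so to model a total of } N_0 \text{ particles, one would take } A = k N_0 .$$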
OK, so. The main thing is, you shouldn't make assumptions about models. You have to follow what the calculations tell you. They're not lying. OK, so now. We carried this out. There's one other example which we talked about earlier in the class. And I just wanted to mention it again. It's probably the most famous after this one. Namely, the integral from minus infinity to infinity of e^(-x^2) dx. Which turns out, amazingly, to be able to be evaluated. It turns out to be the square root of pi. So this one is also great. This is the constant which allows you to compute all kinds of things in probability.
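A quick numerical confirmation (not from the lecture) that this integral really does come out to the square root of pi:

```python
import math
from scipy.integrate import quad

value, _ = quad(lambda t: math.exp(-t**2), -math.inf, math.inf)
print(value, math.sqrt(math.pi))   # both about 1.7724538509
```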
So this is a key number in probability. It basically is the key to understanding things like standard deviation and basically any other thing in the subject of probability. It's also what's driving these polls that tell you within 4% accuracy we know that people are going to vote this way or that. So in order to interpret all of those kinds of things, you need to know this number. And this number was only calculated numerically starting in the 1700s or so by people who-- actually, by one guy whose name was de Moivre, who was selling his services to various royalty who were running lotteries. In those days they ran lotteries, too. And he was able to tell them what the chances were of the various games. And he worked out this number. He realized that this was the pattern. Although he didn't know that it was the square root of pi, he knew it to sufficient accuracy that he could tell them the correct answer to how much money their lotteries would make.
And of course we do this nowadays, too. In all kinds of ways. Including slightly more legit businesses like insurance. So now, I'm going to give you some more examples. And the other examples are much closer to the edge between infinite and finite. This distinction between convergence and divergence. And let me just-- maybe I'll say one more word about why we care about this very gross issue of whether something is finite or infinite. When you're talking about something like this normal curve here, there's an issue of how far out you have to go before you can ignore the rest.
So we're going to ignore what's called the tail here. Somehow you want to know that this is negligible. And you want to know how negligible it is. And this is the job of a mathematician: to know what finite region you have to consider, the region you're going to carefully calculate numerically. And then the rest, you're going to have to take care of by some theoretical reasoning. You're going to have to know that these tails are small enough that they don't matter in your finite calculation. And so, we care very much about the tails. Because they're the only thing that the machine won't tell us. So that's the part that we have to know. And these tails are also something which are discussed all the time in financial mathematics. They're very worried about fat tails. That is, unlikely events that nevertheless happen sometimes. And they get burned fairly regularly with them. As they have recently, with the mortgage scandal. So, these things are pretty serious and they really are spending a lot of time on them. Of course, there are lots of other practical issues besides just the mathematics. But you've got to get the math right, too.
So we're going to now talk about some borderline cases for these fat tails. Just how fat do they have to be before they become infinite and overwhelm the central bump. So we'll save this for just a second. And what I'm saving up here is the borderline case, which I'm going to concentrate on, which is this moderate rate, which is x to powers. Here's our next example. I guess we'll call this Example 3. It's the integral from 1 to infinity dx / x. That's the power p = 1. And this turns out to be a borderline case. So it's worth carrying out carefully. Now, again I'm going to do it by the slower method. Rather than the shorthand method. But ultimately, you can do it by the short method if you'd like.
I break it up into an integral that goes up to some large number, N. The logarithm function is the antiderivative. And so what I get is ln N minus ln 1, which is just 0. So this is just ln N. In any case, it tends to infinity as N goes to infinity. So the conclusion is, since the limit is infinite, that this thing diverges. Now, I'm going to do this systematically with all powers p, to see what happens. I'll look at the integral, sorry, I'm going to have to start at 1 here, from 1 to infinity, dx / x^p, and see what happens with these. And you'll see that p = 1 is a borderline case when I do this calculation.
This time I'm going to do the calculation the hard way. But now you're going to have to think and pay attention to see what it is that I'm doing. First of all, I'm going to take the antiderivative. And this is x^(-p), so the antiderivative is x^(-p+1) divided by -p+1. That's the antiderivative of the function 1/x^p, or x^(-p). And then I have to evaluate that at 1 and infinity. So now, I'll write this down. But I'm going to be particularly careful here. I'll write it down. It's infinity^(-p+1) over -p+1, minus, so I plug in 1 here, so I get 1/(-p+1). So this is what I'm getting. Again, what you should be thinking here is this is a very large number to this power.
Now, there are two cases. And they exactly split at p = 1. When p = 1, this exponent -p+1 is 0, and in fact this expression doesn't make any sense because the denominator is also 0. But for all of the other values, the denominator makes sense. And what's going on is that this is infinite when the exponent is positive, because then it's infinity to a positive power. And it's 0 when the exponent is negative, infinity to a negative power. So I'm going to say it here, and you must check this at home. Because this is exactly what I'm going to ask you about on the exam. This is it. This type of thing, maybe with a specific value of p here. When p < 1, this thing is infinite. On the other hand, when p > 1, this thing is 0. It's just equal to 0. And so the answer is 0 minus 1/(-p+1), which is 1/(p-1). This is a finite number here.
Notice that the answer would be weird if this infinite term just went away in the p < 1 case. Then you would get 1/(p-1), which is a negative number. It would be a very strange answer to this question. So, in fact, that's not what happens. What happens is that the answer doesn't make sense. It's infinite. So let me just write this down again, under here. This is a test in a particular case. And here's the conclusion. Ah. No, I'm sorry. I think I was going to write it over on this board here.
So the conclusion is that the integral from 1 to infinity dx / x^p diverges if p <= 1. And converges if p > 1. And in fact, we can actually evaluate it. It's equal to 1/(p-1). It's got a nice, clean formula even. Alright, now let me remind you. So I didn't spell the word diverges right, did I? Oh no, that's an r. I guess that's right. Diverges if p <= 1.
So really, I needed both of these arguments, which are sitting above it, in order to do it. Because the second argument didn't work at all when p = 1: the power formula for the antiderivative is wrong there. The antiderivative is given by the log function when p = 1. So I had to do that calculation too. This is the borderline case, between p > 1 and p < 1. When p > 1, we got convergence, and we could calculate the integral. When p < 1, we got divergence, and we calculated the integral over there. And here in the borderline case, we got a logarithm, and we also got divergence. So it failed right at the edge. Now, this takes care of all the powers.
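Here is a small sympy check of the p-test (my own sketch; the sample exponents are arbitrary):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
# integral from 1 to infinity of x^(-p) dx for a few sample exponents
for p in (sp.Rational(1, 2), 1, 2, 3):
    print(p, sp.integrate(x**(-p), (x, 1, sp.oo)))
# p = 1/2 -> oo, p = 1 -> oo, p = 2 -> 1, p = 3 -> 1/2, matching 1/(p-1) for p > 1
```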
Now, there are a number of different things that one can deduce from this. And let me carry them out. So this is more or less the second thing that you'll want to do. And I'm going to emphasize maybe one aspect of it. I guess we'll get rid of this. But it's still the issue that we're discussing here, which is whether this tail area is fat or thin. I'll remind you of that. So here's the next idea. Something called limit comparison. Limit comparison is what you're going to use when, instead of being able actually to calculate the number, you don't yet know what the number is. But you can make a comparison to something whose convergence properties you already understand.
Now, here's the statement. If a function f is similar to, asymptotically the same as, a function g as x goes to infinity, and I'll remind you what that means in a second, then the integral from some point a out to infinity of f(x) dx and the corresponding integral of g(x) dx either both converge or both diverge.
They behave exactly the same way. In terms of whether they're infinite or not. And, let me remind you what this tilde means. This thing means that f(x) / g(x) tends to 1. So if you have a couple of functions like that, then their behavior is the same. This is more or less obvious. It's just because far enough out, this is for large a, if you like. We're not paying any attention to what happens. It just has to do with the tail, and after a while f(x) and g(x) are comparable to each other. So their integrals are comparable to each other.
So let's just do a couple of examples here. If you take the integral from 0 to infinity of dx over the square root of x^2+10, then I claim that the square root of x^2+10 resembles the square root of x^2, which is just x. So this thing is going to be like the integral of dx/x. But now I'm going to have to do one thing to you here. Which is, I'm going to change this lower limit to 1, so the comparison is with the integral from 1 to infinity of dx/x. And the reason is that this x = 0 business is extraneous. It doesn't have anything to do with what's going on in this problem. We're going to ignore the piece from 0 to 1, the integral from 0 to 1 of dx over the square root of x^2+10, which is finite anyway. And unimportant. Whereas, unfortunately, the integral of dx/x would have a singularity at x = 0. So we can't make the comparison there.
Anyway, this one is infinite. So this is divergent. Using what I knew from before. Yeah.
STUDENT: [INAUDIBLE]
PROFESSOR: The question is, why did we switch from 0 to 1? So I'm going to say a little bit more about that later. But let me just make it a warning here. Which is that this guy here is infinite for other reasons. Unrelated reasons. The comparison that we are trying to make is with the tail as x goes to infinity. So another way of saying this is that I should stick an a here and an a here and stay away from 0. So, say a = 1. If I make these both 1, that would be OK. If I make them both 2, that would be OK. If I make them both 100, that would be OK. So let's leave it as 100 right now. And it's acceptable. I want you to stay away from the origin here. Because that's another bad point. And just talk about what's happening with the tail. So this is a tail, and I also had a different name for it up top. Which is emphasizing this. Which is limit comparison. It's only what's happening at the very end of the picture that we're interested in. So again, this is as x goes to infinity. That's the limit we're talking about, the limiting behavior. And we're trying not to pay attention to what's happening for small values of x.
So to be consistent, if I'm going to do it up to 100, I'm ignoring what's happening up to the first 100 values. In any case, this guy diverged. And let me give you another example. This one, you could have computed, right? Because it's the square root of a quadratic, so there's a trig substitution that evaluates this one. The advantage of this limit comparison method is, it makes no difference whether you can compute the thing or not. You can still decide whether it's finite or infinite, fairly easily. So let me give you an example of that.
So here we have another example. We'll take the integral of dx over the square root of x^3 + 3. Let's say, for the sake of argument, from 0 to infinity. Actually, let's leave 0 off; let's make it 10 to infinity, whatever. Now this one is problematic for you. You're not going to be able to evaluate it, I promise. On the other hand, 1 over the square root of x^3 + 3 is similar to 1 over the square root of x^3, which is 1/x^(3/2). So this thing is going to resemble the integral of dx / x^(3/2). Which is convergent, according to our rule, since p = 3/2 > 1.
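A small numerical sketch of the limit comparison in this example (mine; the cutoff 10 and the sample point 10^6 are arbitrary choices):

```python
import math
from scipy.integrate import quad

f = lambda t: 1 / math.sqrt(t**3 + 3)   # the integrand we cannot antidifferentiate
g = lambda t: t**(-1.5)                 # the comparison function x^(-3/2)

print(f(1e6) / g(1e6))                  # ratio is essentially 1 far out
print(quad(f, 10, math.inf)[0],         # both tails are finite...
      quad(g, 10, math.inf)[0])         # ...and comparable in size
```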
So those are the, more or less the main ingredients. Let me just mention one other integral, which was the one that we had over here. This one here. If you look at this integral, of course we can compute it so we know the area is finite. But the way that you would actually carry this out, if you didn't know the number and you wanted to check that this integral were finite, then you would make the following comparison. This one is not so difficult. First of all, you would write it as twice the integral from 0 to infinity of e^(-x^2) dx.
This is a new example here, and we're just checking for convergence only. Not evaluation. And now, rather than a limit comparison, I'm actually just going to make an ordinary comparison. That's because this thing vanishes so fast. It's so favorable that we can only put something on top of it; we can't get something underneath it that exactly balances with it. In other words, the tilde was for something which had the same growth rate as the function involved. This thing just vanishes incredibly fast. It's great. It's too good for us, for this comparison. So instead what I'm going to make is the following comparison. e^(-x^2) <= e^(-x), at least for x >= 1. When x >= 1, then x^2 >= x, and so -x^2 <= -x. And so e^(-x^2) is less than or equal to e^(-x). So this is the reasoning involved.
And so what we have here is two pieces. We have 2 times the integral from 0 to 1 of e^(-x^2). That's just a finite part. And then we have this other part, which I'm going to replace using the e^(-x) here: 2 times the integral from 1 to infinity of e^(-x) dx. So this is, if you like, ordinary comparison of integrals. It's something that we did way at the beginning of the class, or much earlier on, when we were dealing with integrals. Which is that if you have a larger integrand, then the integral gets larger. So we've replaced the integral. We've got the same integrand on 0 to 1. And we have a larger integrand on 1 to infinity.
And this one we know is finite. This one is a convergent integral. So the whole business is convergent. But of course we replaced it by a much larger thing. So we're not getting the right number out of this. We're just showing that it converges. So these are the main ingredients. As I say, once the thing gets really, really fast-decaying, it's relatively straightforward. There's lots of room to show that it converges.
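And a quick numerical check of the ordinary comparison (my own sketch): the Gaussian tail on [1, infinity) really does sit below the bound 1/e coming from e^(-x).

```python
import math
from scipy.integrate import quad

gaussian_tail, _ = quad(lambda t: math.exp(-t**2), 1, math.inf)
bound = math.exp(-1)   # exact value of the integral of e^(-x) from 1 to infinity

print(gaussian_tail, bound, gaussian_tail <= bound)   # ~0.139 <= ~0.368: True
```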
Now, there's one last item of business here, which I promised you, which had to do with dealing with this bottom piece here. So I have to deal with what happens when there's a singularity. This is known as an improper integral of the second type. And the idea of these examples is the following. You might have something like this. Something like this. Or something like this. These are typical sorts of examples. And before actually describing what happens, I just want to mention, first of all, the key point here is that you can just calculate these things. You plug in 0, it works, and you'll get the right answer. So you'll determine, you'll figure out, that it turns out that this one will converge, this one will diverge, and this one will diverge. That's what will turn out to happen. However, I want to warn you that you can fool yourself. And so let me give you a slightly different example. Let's consider the integral from -1 to 1 of dx / x^2. If you carry out this integral without thinking, what will happen is you'll get the antiderivative, which is -x^(-1), evaluated at -1 and 1. And you plug it in. And what do you get? You get -(1)^(-1) minus, uh-oh, minus (-1)^(-1). There are a lot of -1's in this problem.
OK, so that's -1. And this one, if you work it all out, as I sometimes don't get the signs right, but this time I really paid attention. It's -1, I'm telling you that's what it is. So that comes out to be -2. Now, this is ridiculous. This function here looks like this. It's positive, right? 1/x^2 is positive. How exactly is it that the area between -1 and 1 came out to be a negative number? That can't be. There was clearly something wrong with this. And this is the kind of thing that you'll get regularly if you don't pay attention to convergence of integrals.
So what's going on here is actually that this area in here is infinite. And this calculation that I made is nonsense. So it doesn't work. This is wrong, because the integral is divergent. Actually, when you get to imaginary numbers, it'll turn out that there's a way of rescuing it. But, still, it means something totally different when that integral is said to be -2. So, I think we'll have to finish this up very briefly next time. We'll do these three calculations, and you'll see that these two guys are divergent and this one converges. And we'll do that next time.
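A cautionary sketch (mine) of the point being made: the naive antiderivative calculation gives -2, while cutting the integral off near the singularity shows that it actually blows up.

```python
import math
from scipy.integrate import quad

F = lambda t: -1 / t                      # antiderivative of 1/x^2
print(F(1) - F(-1))                       # -2: the nonsensical "answer"

# cutting the right half off at epsilon shows the true behavior: it blows up
for eps in (1e-1, 1e-2, 1e-3):
    piece, _ = quad(lambda t: 1 / t**2, eps, 1)
    print(eps, piece)                     # roughly 1/eps - 1, growing without bound
```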