Topics covered: A continuation of the previous lecture; important limit properties are developed as theorems from the formal definition of limit.
Instructor/speaker: Prof. Herbert Gross
Lecture 5: A More Rigorous Approach
Related Resources
This section contains documents that are inaccessible to screen reader software. A "#" symbol is used to denote such documents.
Part I Study Guide (PDF - 22MB)#
Supplementary Notes (PDF - 46MB)#
Blackboard Photos (PDF - 8MB)#
NARRATOR: The following content is provided under a Creative Commons License. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: Hi. Welcome, once again, to another lecture on limits. Actually, from a certain point of view, today's lecture will be the same as the last lecture, only from a different viewpoint. Our lecture today is called 'Limits: a More Rigorous Approach'. And what our objective for the day is, aside from helping you gain experience and facility with using limit expressions and absolute values and the like, is to have you see how we can use the power of objective, well-defined mathematical definitions to find rather easy ways of solving certain types of problems.
Now to this end, let's very briefly review our fundamental definition of last time. Namely, the limit of 'f of x' as 'x' approaches 'a' equals 'l' means that for each epsilon greater than 0, we can find delta greater than 0, such that whenever the absolute value of 'x minus a' is less than delta but greater than 0, then the absolute value of ''f of x' minus 'l'' will be less than epsilon. To state that once again, but in more intuitive terms, for any tolerance limit epsilon at all, we can suitably find another tolerance limit, delta, such that whenever we make 'x' within delta of 'a', 'f of x' will automatically be within epsilon of 'l'. And again, the very, very important emphasis here, we do not allow 'x' to equal 'a'.
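In symbols, the definition just restated reads:

\[ \lim_{x \to a} f(x) = l \quad \text{means: for each } \epsilon > 0 \text{ there is a } \delta > 0 \text{ such that } 0 < |x - a| < \delta \implies |f(x) - l| < \epsilon. \]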
Again, in terms of a diagram, if this is the curve 'y' equals 'f of x', this is 'x' equals 'a'. This is 'l'. If we call this 'l' plus epsilon, if we call this 'l' minus epsilon-- in other words, epsilon is this width over here-- then the way we find delta is to reflect back to the curve, emphasizing, again, in the neighborhood of the point 'a'-- I can't stress that point strongly enough, that if this curve were not 1:1, there are going to be other places where this line meets the curve. So if that happens, you must make sure that you pick the neighborhood of 'a', not some other point over here. We're interested in what happens near 'a'.
But at any rate, again notice that the fact that these two intervals here were equal does not guarantee that when you project down here, they will be equal. In fact, the only time that these two widths would be equal is if the curve happened to be a straight line. And what it means, again, is that the delta that we're talking about, for example, is the minimum of these two widths. In other words, as I've drawn this diagram, delta would be the distance from 'a' to this end point. And all we're saying is that once 'x' is in this open interval but not including 'a' itself, 'f of x' will be in this open interval.
And again, notice, as we were emphasizing last time, once this delta happens to work, automatically any smaller delta will also work. In other words, if something is true for everything in here, it's certainly true for everything in some subinterval of this. What you must be careful about is not to reverse this process. Don't go outside and pick bigger deltas; then you might very well be in a little bit of trouble.
At any rate, once we've reviewed what the basic definition is, it seems about the only way to show what mathematics is all about is to actually do a few theorems, that is, derive a few inescapable consequences of the definition that show how our theorems coincide with what we believe to be intuitively true in the first place. And for obvious reasons, we should start with what hopefully would be the simplest possible theorems and then proceed to tougher ones as we go along.
So as my first one, I've chosen the following. The limit of 'c' as 'x' approaches 'a' is 'c', where 'c' is any constant. Now again, that may look a little bit strange to you. Let's look at it this way. What I'm saying is let 'f of x' equal 'c', where 'c' is a constant. Then what we're saying is for this choice of 'f of x', the limit of 'f of x' as 'x' approaches 'a' is 'c' itself. That's what this thing says. How do we prove this? Well, you see, we have a criterion given. Namely, what is our basic definition? Let's just juxtapose our basic definition with this particular result.
To prove that the limit here is 'c', notice that in this particular problem, if we come back and compare this with our basic definition, notice that we have the same basic definition as before, only the role of 'f of x' is played by 'c' and 'l' is also played by 'c'. In other words, in this particular problem 'f of x' is 'c' and 'l' is also 'c'. Now what must we do? We must show that for each epsilon greater than 0, or for an arbitrary epsilon greater than 0-- now what does it mean to say an arbitrary epsilon? In a way, think of it as being a game, a battle of wits between you and your worst enemy, and you're out to win and your worst enemy is out to beat you. So to make this as difficult a game as possible, you allow your worst enemy to choose the epsilon, provided only that it's positive. For whichever one he picks, you must be able to find the delta that matches that epsilon, such that what? Whenever 'x' is within delta of 'a' but not equal to 'a', 'f of x' will be within epsilon of 'l'.
So let's write that down over here. Let's write what that says. Given epsilon greater than 0, we must find a delta greater than 0, such that what? Well, by definition. Such that when the absolute value of 'x minus a' is less than delta but greater than 0, the absolute value of 'f of x'-- well, that of course, is 'c' in this case-- minus 'l', which is also 'c' in this case, the 'l' stands for the limit, has to be less than epsilon. And lo and behold, we find that this is a rather simple procedure because what is 'c' minus 'c'? 'c' minus 'c' is 0, and automatically that will be smaller than any positive epsilon.
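Collected in one line, that is the whole proof: given epsilon greater than 0, any delta greater than 0 works, since

\[ 0 < |x - a| < \delta \implies |f(x) - c| = |c - c| = 0 < \epsilon. \]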
In other words, what this says is even your worst enemy can't give you a tough time with this problem. Namely, no matter what epsilon he prescribes, no matter how small, you say, well, for delta I'll pick anything. And it works. And why does that work? Well, here again we can emphasize the geometric approach. Namely, if we plot the graph of 'f of x' equals 'c', observe that that plot is the straight line 'y' equals 'c'. Now take 'x' equals 'a' over here. What is 'f of a'? 'f of a' is 'c'. Now pick an epsilon, which is this half-width over here, knock that off on either side of 'c'. And all you're saying in this particular case is no matter how far away 'x' is from 'a', 'f of x' will be within these tolerance limits of 'c'.
And why is that? Because 'f' is defined in such a way that the output for any element in its domain is 'c' itself. In other words, every element maps up here. This is what you mean by saying that the graph is the straight line 'y' equals 'c'. By the way, another word of caution. You must always make sure that your solution does not depend on the diagram. You see, if a person were to look at this diagram very quickly, he would assume that 'c' had to be a positive constant here, because look at how I've drawn the line 'y' equals 'c'. It's above the x-axis. Well, 'c' could just as easily be a negative constant. And if I drew the diagram that way, 'c' would be below the x-axis.
The important thing to check is this. When you draw a diagram, you can't have a certain number being positive and negative at the same time, so you choose it one way or the other. Always make sure when you do this that your formal proof, your analytic proof, goes through word for word if you reverse the sign of the number that you're working with. Make sure that your answer does not depend on the picture that you've drawn. Make sure that your answer follows, inescapably, from your basic definition.
And notice that this is what we did here. We showed what? That no matter what epsilon we were given, we could find a delta-- in fact, in this case, any delta-- such that whenever 'x' was within delta of 'a', as long as 'x' wasn't equal to 'a', the conclusion held. Well in this case, even if 'x' equaled 'a', there was no harm done. But the point is we want to keep away from a 0 over 0 form, so we always impose this condition. In this case, once 'x' was within delta of 'a', 'f of x' was automatically within epsilon of 'c', as long as epsilon was positive, because 'f of x' was already equal to 'c'. The difference was already 0.
Now again, notice what happens here. The more pragmatic student will say, why did you use this long math when it was obvious from either the diagram or from intuition that this is the correct answer? And the reason is, as we have already seen and as we will see many, many times during our course, that the intuitive answer and the correct answer will not necessarily be the same. The point is we always want to make sure that when we prove something, the proof follows from the assumed definitions in a logically rigorous way.
If it happens once you prove this that you can intuitively recognize the same result, that's like a double reward because now you won't have to memorize what the result was, you'll just use this thing automatically. But the beauty is that for somebody who doesn't have the same intuitive insights that you have, if he says it's not self-evident to me, explain to me what's happening. Then, you see, once he's accepted the basic definition and assuming that he knows the rules of mathematics, he has to come up with the same answer that you did. And this is what we mean by an objective criterion for doing mathematics, okay.
Well, this was kind of an easy one. Let's do another easy one that's a little bit harder. Let's make some gradual transitions here. Let's pick another theorem. Here's another one that sounds pretty self-evident. The limit of 'x' as 'x' approaches 'a' is 'a'. In fact, what that seems to say in a self-evident way is that as 'x' gets arbitrarily close to 'a', 'x' gets arbitrarily close to 'a'. And if anything is a truism, I guess that's it. So we certainly suspect that this is a true statement. All I'm trying to get you used to is not to make a mountain out of a molehill, not to have to think that mathematics becomes a severe thing where we try to find hard ways of doing easy things, but rather that we can find an unambiguous, logically constructed language from which all of our results can be proven without recourse to intuition once our basic definitions are chosen.
So let's see how this would work. Let's again go back to our basic definition. You see, again, what we're saying here is that this is just a special case now where 'f of x' equals 'x'. You see the function is 'f of x'. In this case, it's 'x', and this is a special case where 'a' is 'l'. Now what are you saying here? You're saying given epsilon greater than 0, we must find a delta greater than 0, such that what? Such that whenever 'x' is within delta of 'a' but not equal to 'a', then 'x' will be within epsilon of 'a'. And now you just look at this thing, I hope, and you say well, there's sort of a similarity over here.
How close should 'x' be chosen to 'a' if you want 'x' to be within epsilon of 'a'? And the answer, quite obviously in this case, without recourse to the definition again, is simply this: for the given epsilon, choose, for example, delta to equal epsilon. Because certainly, if the absolute value of 'x minus a' is less than delta, which we've chosen to be epsilon, but greater than 0, then certainly the absolute value of 'x minus a' is less than epsilon. Don't be upset that the proof happened to be fairly easy in this case. More importantly, don't be upset that there was a more intuitive way of doing it. Remember what we want to do is to get these rigorous ways down pat, interpret them in terms of pictures wherever the pictures are available. And then, when we get to the case where pictures aren't available, to be able to extend the analytic concepts.
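In symbols, the choice 'delta equals epsilon' made above settles it:

\[ 0 < |x - a| < \delta = \epsilon \implies |f(x) - a| = |x - a| < \epsilon. \]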
Again, to see what happens here pictorially, you notice that if you start with the function 'f of x' equals 'x', its graph is the straight line 'y' equals 'x'. And I'll risk some freehand drawing here. So in other words, we suspect that this is 'a'. It better be, if this says that all points on this line have the property that the y-coordinate is equal to the x-coordinate. Now what do we want to do? We pick an epsilon and knock off 'a plus epsilon' and 'a minus epsilon'. And now what we want to find out is how close must 'x' be to 'a' on the x-axis in order that 'f of x'-- namely 'y' in this case, or 'x' itself-- be within these prescribed tolerance limits of 'a'?
You see what happens here? This is that one case where it happens that what? If you draw these lines over and project down, then by proportional parts-- and this is crucial here-- even if this weren't the 45 degree line, the fact that these two pieces here are equal would guarantee that these two pieces here are equal, in spite of how I've drawn this. The fact that this is the 45 degree line not only says that these two pieces here are equal but it says what? That each of these pieces is equal to each of these pieces. I wish I had drawn this better for you, but in fact the worse I draw it, the more you have to rely on being able to visualize abstractly what's happening here. What I'm saying is that this point here would be labeled 'a plus epsilon', this point here would be labeled 'a minus epsilon', and this is the pictorial version of what it means to say you could have chosen delta to equal epsilon, in this particular case.
Well, that's enough of the easier ones, so let's pick one that gets slightly tougher. And this one gets tougher in one sense, but insidiously-- meaning it's real sneaky-- simple in another sense. In other words, it turns out that one can reason falsely and get the right answer just by a quirk. You see, what I want to prove next is a very important theorem that says that the limit of a sum is equal to the sum of the limits.
Written out more formally, if I have two functions 'f of x' and 'g of x' and form the sum ''f of x' plus 'g of x'' and I want to take the limit of that sum as 'x' approaches 'a', what this theorem says is you can find the limit of each of the functions separately first, and then add the two results. Now at first glance, you might be tempted to say, well, what else would you expect to happen? The answer is, I don't know, again, what else you would expect to happen. But this is a luxury.
You see, evidently what's happened here is that we've reversed the order of operations. You see, this says what? First add these two, and then take the limit. What are we doing down here? First we're taking the limits and then we're adding. Now, is it self-evident that just by changing the order of operations you don't change anything? We've seen many examples already in the short time that this course has been in existence where changing the order, changing the voice inflection, what have you, changes the answer. And in fact, we're going to see more drastic examples later on.
I guess this is one of the tragedies of a course like this. I guess it's typical of problems every place. If the place that the person is going to get into trouble comes far beyond the time at which you're lecturing to him, it's kind of empty to threaten him with the trouble he's going to get into. So I'm not going to threaten you with the trouble you're going to get into until we get into it. All I'm going to say is be careful when you say that it's self-evident that we can first add and then take the limit or whether we first take the limits and then add. In general, it does make a difference in which order you perform operations.
Well, let's take this just a little bit more formally to see what this thing says. First of all, when we write something like this, we assume that the limit of 'f of x' as 'x' approaches 'a' exists, otherwise we wouldn't write this thing. So let's call that limit 'l1'. In other words, let the limit of 'f of x' as 'x' approaches 'a' equal 'l1'. Let the limit of 'g of x' as 'x' approaches 'a' equal 'l2'. Now define a new function 'h of x' to be equal to ''f of x' plus 'g of x''. And by the way, this is something I didn't say strongly enough in one of our early lectures, and I want to make sure that it's very clear that this is understood. And that is, notice that when you define 'h' to be the sum of 'f' and 'g', you had better make sure that 'f' and 'g' have the same domain. You see, if some number 'x' belongs to the domain of 'f' but it doesn't belong to the domain of 'g', then how can you form ''f of x' plus 'g of x''? 'g' doesn't operate on 'x'.
By the way, this is not quite as serious a problem as it seems if you understand the language of our new mathematics and sets and the like. Namely, if the domain of 'f' happens to look like this and the domain of 'g' happens to look like this, what we do is we restrict the sum to the intersection of the two domains. In other words, referring back to our function 'h', we define the domain of 'h' to be the intersection of these two domains. And that way, for any 'x', which is in the domain of 'h', it automatically belongs to both the domain of 'f' and the domain of 'g'. And so this becomes well-defined.
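In set notation, the convention amounts to

\[ \operatorname{dom}(h) = \operatorname{dom}(f) \cap \operatorname{dom}(g), \qquad h(x) = f(x) + g(x) \text{ for each } x \in \operatorname{dom}(h). \]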
Another way of looking at this is to say that 'f' and 'g' must both include in their domains intervals surrounding 'x' equals 'a'. And since you're only interested in what's happening near 'x' equals 'a', you don't really care whether these have the same domains or not, provided they have what? An intersection that can serve as a common domain. But that, I think, is more clear from the context. It's a very, very important fine point. It's a tragedy to try to add two numbers, one of which doesn't exist. I don't know if it's a tragedy, it's certainly futile.
At any rate, though, let's see what this thing then says. If we now define 'h' to be 'f plus g', what we want to prove is that the limit of 'h of x' as 'x' approaches 'a' equals 'l1 plus l2'. Now here's the point again. What does this mean by definition? It means that given epsilon greater than 0, we must be able to find a delta such that when 'x' is within delta of 'a' but not equal to 'a', 'h of x' is within epsilon of 'l1 plus l2'. That's probably kind of hard to keep track of, so I've taken the liberty of writing this down for you right over here.
See, given epsilon greater than 0-- so that's given, we have no control over that-- what we must do is find delta greater than 0, such that 0 less than the absolute value of 'x minus a' less than delta, implies-- now, what is 'h of x'? It's ''f of x' plus 'g of x'', and our limit that we're looking for in this case is 'l1 plus l2'. So mathematically, we replace 'h of x' by ''f of x' plus 'g of x'', and the limit is 'l1 plus l2'. So what we must show is that the absolute value of the quantity ''f of x' plus 'g of x'' minus the quantity 'l1 plus l2' is less than epsilon. And now we start to play detective again. This is the expression that we want to make less than epsilon. So what we do is we look at this particular expression and we try to see what kind of cute things we can do with it.
Now, what do I mean by a cute thing? Well, we're assuming that the limit of 'f of x' as 'x' approaches 'a' equals 'l1'. Let me write this as an aside over here. Among other things, what this tells us is that we have a hold on expressions like this. In other words, the fact that the limit of 'f of x' as 'x' approaches 'a' is equal to 'l1' tells us that we can make this as small as we want. I'll use a subscript over here for epsilon 1 because it doesn't have to be the same epsilon that was given here. For any positive number, say epsilon 1, the point is I can make 'f of x' within epsilon 1 of 'l1' just by choosing 'x' sufficiently close to 'a' by definition of what limit means.
So in other words, I like expressions of this form. And similarly, I like expressions of this form. And again, the reason is that since the limit of 'g of x' as 'x' approaches 'a' is 'l sub 2', it gives me a hold on the difference between 'g of x' and 'l2'. So again, using the old adage that hindsight is better than foresight by a darn sight, knowing exactly what it is I have to do, I come back here and try to doctor things up for myself.
The first thing I observe is that this can be rewritten. There's no calculus in this, notice. Just plain ordinary algebra, arithmetic. This can be rewritten so that I can group ''f of x' and 'l1'' together and ''g of x' and 'l2'' together. In other words, this is indeed nothing more than the absolute value of the quantity ''f of x' minus 'l1'' plus the quantity ''g of x' minus 'l2''. Now, the point is since we already know that the absolute value of a sum is less than or equal to the sum of the absolute values-- treating this as one number and this as another number-- what this now tells me is I can say that this, that I'm trying to get a hold on, is less than or equal to this.
But look at this expression. This expression is the absolute value of ''f of x' minus 'l1''. And this expression is the absolute value of ''g of x' minus 'l2''. In other words, then, since I can make 'f of x' as nearly equal to 'l1' as I want and 'g of x' as close to 'l2' as I want just by choosing 'x' sufficiently close to 'a', why don't I choose 'x' close enough to 'a' so each of these will be less than epsilon over 2? Now again, this calls for a little aside. When one talks about epsilon, epsilon is, what shall we say, a generic name for any number which exceeds 0. Or I could have said that less mystically by just saying any positive number. Well, if epsilon is positive, what can you say about half of epsilon? It's also positive.
In other words, if I had chosen a different epsilon, say 'epsilon sub 1', equal to the original epsilon divided by 2, then I'm guaranteed what? That I can get 'f of x' within epsilon over 2 of 'l1', and 'g of x' within epsilon over 2 of 'l2'. And now adding these two together, if this term is less than epsilon over 2 and this term is less than epsilon over 2, the whole sum is less than epsilon. And it seems now semi-intuitively-- what do I mean by semi-intuitively? Well, this is far from an intuitive job over here. It's quite mathematical. It's rigorous in that sense. It's intuitive in the sense that I'm not playing around with the deltas here. All I'm saying is look, I can make this as small as I want, I can make this as small as I want, therefore I can make the sum as small as I want. And the fancy way of saying that is I can make it less than any given epsilon, and therefore it appears that this will be the limit.
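Strung together, the estimate just described reads

\[ \bigl|\bigl(f(x) + g(x)\bigr) - (l_1 + l_2)\bigr| \le |f(x) - l_1| + |g(x) - l_2| < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon. \]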
Using our old adage again of being able, knowing what we want, to doctor things up rigorously, once we've gone through this it's now relatively easy to clean up the details. In other words, for those of us who are mathematically-oriented enough to say, the way you've proven this last result is the same sloppiness that I was used to seeing in certain types of engineering proofs where people were more interested in the result than in the rigor, I still don't see how you used the epsilons and the deltas here. Let me show you what a simple step it is to now go from the semi-rigorous approach to the completely rigorous approach.
All we do is reword what we've done before. In fact, this is true in most mathematics. You take a geometry book and there's a theorem that says something like if 'a', 'b', 'c', and 'd' are true, then 'e' is true. And you learn this proof quite mechanically. You sort of memorize it. Well you know, the man who proved that theorem didn't, in general, start out by saying, I wonder what happens if 'a', 'b', 'c', and 'd' are true. In general, what he tries to do is to prove that some result, like 'e', is true. And as he's proving it, he hits pitfalls. And he says, you know, if I could only be sure 'a' was true, I could get over this pitfall. And if I could be sure 'b' was true, I could get over the second pitfall, et cetera. And when he makes enough assumptions to get over all the pitfalls and he has his answer, he then writes down the answer in the reverse order from which he invented it.
Namely, he says what? Suppose 'a', 'b', 'c', and 'd' are true. Let's prove that 'e' is true. And the student is then robbed of any attempt to see intuitively how this whole thing came about. As a case in point, let me show you what I'm driving at here. My first exposure to formal limit proofs was something like this. When somebody said prove the limit of a sum equals the sum of the limits, something like this would happen. The first statement in the book would say, given epsilon greater than 0, let epsilon 1 equal epsilon over 2. Now, I respected my teacher, I respected the author of the book. If he says let epsilon 1 equal epsilon over 2, all right, fine. We can do that. And more to the point, it turned out that the problem worked if you did that.
The part that bothered me is why did he say epsilon over 2? Why not 2 epsilon over 3? Or epsilon over 5? Or epsilon over 6,872? Why epsilon over 2? And the point was that he had cheated. He had already done the problem that we had over here, and knowing what he needed, then came back here and said let epsilon 1 equal epsilon over 2. And notice how this is going to mimic everything we said before. For this choice of epsilon 1, we can find a delta 1 greater than 0 such that if the absolute value of 'x minus a' is less than delta 1 but greater than 0, then the absolute value of ''f of x' minus l1' is less than epsilon 1. Why do we know that? That's the definition of what it means to say that the limit of 'f of x' as 'x' approaches 'a' equals 'l1'.
In a similar way, he says we can find delta 2 greater than 0, such that whenever the absolute value of 'x minus a' is greater than 0 but less than delta 2, we can make the absolute value of ''g of x' minus 'l sub 2'' be less than epsilon 1. And now comes the beautiful step. He says, now that delta 1 and delta 2 exist separately, pick delta to be the minimum of these two. In other words, if I let delta equal the minimum of these two, what does that guarantee me? If delta is the minimum of these two, it guarantees me that both of these conditions are met at the same time.
What does that tell me? It tells me that as soon as the absolute value of 'x minus a' is less than delta but greater than 0, automatically these two conditions hold. And that, in turn, tells me what? That the absolute value of ''f of x' minus 'l1'' is less than epsilon 1 and the absolute value of ''g of x' minus 'l2'' is less than epsilon 1. And now by adding these two inequalities, that tells me what? That this plus this is less than 2 epsilon 1. But see, using my hindsight, I picked epsilon 1 to be what? Epsilon over 2. So 2 epsilon 1 is just another way of saying epsilon. In other words, what this now implies is that the absolute value of ''f of x' minus 'l1'' plus the absolute value of ''g of x' minus 'l2'' is less than 2 epsilon 1, which equals epsilon.
But you see, this in turn is what? This is greater than or equal to the absolute value of the quantity ''f of x' minus 'l1'' plus ''g of x' minus 'l2''. And so this was the thing we wanted to make smaller than epsilon. Since this is smaller than this and this is already smaller than epsilon, then this must be smaller than epsilon too. Again, notice, this is rigorous. But if you understand what's happening here piece by piece, you never really have to memorize a thing. Let's just take a look back here to reinforce what I'm saying here. Notice what we did. We started with the answer that we wanted to show, worked around to get ahold of things that we wanted. We were lucky enough-- and this is true in any game, for example, one can plot masterful strategy and still lose; there's no guarantee we're going to win with our masterful strategy. But we usually do in this course, usually. What we do is we masterfully come back to this, see what has to be done. Knowing what the right answer has got to be, we come back here and then formalize it. Essentially, what we've done is reverse the steps here.
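In outline, then, the rigorous proof we have just retraced is: given epsilon greater than 0, set epsilon 1 equal to epsilon over 2, take delta 1 and delta 2 from the two limit hypotheses, and let delta be the minimum of delta 1 and delta 2; then

\[ 0 < |x - a| < \delta \implies \bigl|\bigl(f(x) + g(x)\bigr) - (l_1 + l_2)\bigr| \le |f(x) - l_1| + |g(x) - l_2| < \epsilon_1 + \epsilon_1 = \epsilon. \]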
I guess there is one thing that bothers many people that I should make an aside about. Why do you need a different delta 1 and delta 2 for the same epsilon 1? See, if you're memorizing, there's a danger that you won't realize why this happens. Let me show you in terms of a picture what this means. Let this, for the sake of argument, be the curve 'y' equals 'f of x'. And let this be the curve 'y' equals 'g of x'. All we're saying is this. When you prescribe an epsilon, the same epsilon that surrounds 'l2' also surrounds 'l1'. But the curves may have different slopes. Look what happens over here, even as badly as I've drawn this. Notice that an epsilon neighborhood of 'l2' projects down into this size neighborhood around 'a'. On the other hand, an epsilon neighborhood of 'l1'-- you get what you pay for, I guess-- an epsilon neighborhood of 'l1' projects into a much smaller region.
And by the way, that's exactly what we meant. This is a delta 1, for example. This is delta 2. And when we said pick delta to be the minimum of delta 1 and delta 2, all we were saying was listen, if we guarantee that 'x' stays in here, then certainly both of these two things will be true at the same time. Well again, we have many exercises and reading material to reinforce these points. All I want to do with the lecture is to give you an overview as to what's happening so that you see these things. And I'm afraid I might cure you with details if I just keep hammering home these rigorous little points. As I say, I hope you get the main idea from what we're doing.
Let me just, for the sake of argument, try to work with just one more idea and we'll see how this works out also. Let's, for example, try to play around with the idea that a companion to the limit of a sum equals the sum of the limits would be what? The limit of a product equals the product of the limits. In other words, as before, if the limit of 'f of x' as 'x' approaches 'a' is 'l1' and the limit of 'g of x' as 'x' approaches 'a' is 'l2', let's form a new function, which we could call 'k of x' which is equal to the product of 'f' and 'g'. Again, noticing that the 'f' and 'g' have to have a common domain here. You want to show that the limit of 'f of x' times 'g of x' is 'l1' times 'l2'.
And this is the hard part of the course, this is the part of the course that I don't think anybody in the world can really teach. All one can do is try to expose the student to ideas and hope that the student has the knack of putting these things together to form his own repertoire. The idea is something like this. I'll show you alternative methods and the like. One way is we want to get ahold of 'f of x' times 'g of x'. Now what do we have a hold on? It's very clever what we do here. We have a hold of ''f of x' minus l1' and we have a hold of ''g of x' minus l2'.
So as we so often do in mathematics, we simply add and subtract the same thing. We frequently add on zeroes in this cute way. We add and subtract the same thing. Notice what we did over here. We wrote 'f of x' as 'l1' plus ''f of x' minus l1'. Again, why did we do that? Because from our definition of the limit of 'f of x' as 'x' approaches 'a' equaling 'l1', we know that we can control this side. And the same thing is true over here. The fact that the limit of 'g of x' as 'x' approaches 'a' equals 'l2' means we have some control over this.
Now let's just multiply everything out. This is what? 'f of x' times 'g of x'. I'm going to save myself some space and keep the board somewhat symmetric. When I multiply these terms out, I'm going to get what? An 'l1' times 'l2' term over here as one of my four terms. Let me already transpose that one so I kill two birds with one stone. One is I keep a little bit of symmetry in what I'm going to write. And secondly, notice that later on, this is what I want to get a hold on. In other words, to prove that the limit of 'f of x' times 'g of x' as 'x' approaches 'a' is 'l1' times 'l2', this is precisely the expression I must show can be made as small as I wish just by picking 'x' sufficiently close to 'a'. And again, I will not go through all the details here, I will simply outline what we do here.
Namely, let's multiply out the rest of this thing. We have what here? This times this, which is what? 'l1' times ''g of x' minus l2'. Then we have this times this, which is 'l2' times ''f of x' minus l1'. And now we have what? This times this. That's ''f of x' minus l1' times ''g of x' minus l2'. Now, the point is this is the thing that we'd like to make very small in absolute value. Well again, the absolute value of this is equal to the absolute value of this, which is less than or equal to-- and I'm going to go through these details rather rapidly, and allow you to fill these in for yourself-- all I'm using is what?
That the absolute value of a sum is less than or equal to the sum of the absolute values. The absolute value of a product is equal to the product of the absolute values, so this is going to be less than or equal to. Let's break these things up. And now, you see what the key idea is? 'l1' and 'l2' are certain fixed numbers, fixed numbers. Notice that because 'g of x' can be made as close to 'l2' as I want and 'f of x' can be made as close to 'l1' as I want, notice that no matter what epsilon I'm given, I can certainly make this as small as I wish, just by picking 'x' close enough to 'a'.
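Written out, the result of the multiplication, starting from 'f of x' equals 'l1' plus ''f of x' minus 'l1'' and 'g of x' equals 'l2' plus ''g of x' minus 'l2'', is the identity

\[ f(x)\,g(x) - l_1 l_2 = l_1\bigl(g(x) - l_2\bigr) + l_2\bigl(f(x) - l_1\bigr) + \bigl(f(x) - l_1\bigr)\bigl(g(x) - l_2\bigr). \]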
For example, for a given epsilon, how many terms do I have here? One, two, three. To make this whole sum less than epsilon, it's sufficient to make each of these three terms less than epsilon over 3. I can make these two less than epsilon over 3 pretty easily. How do I make this one less than epsilon over 3? And the answer is, if you want this times this to be less than epsilon over 3 where these are positive numbers, make each of these less than the square root of epsilon over 3. Then when you multiply these two together, if this is less than this and this is less than this, this times this will be less than epsilon over 3.
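Putting those pieces together, the estimate being outlined is

\[ |f(x)\,g(x) - l_1 l_2| \le |l_1|\,|g(x) - l_2| + |l_2|\,|f(x) - l_1| + |f(x) - l_1|\,|g(x) - l_2| < \frac{\epsilon}{3} + \frac{\epsilon}{3} + \frac{\epsilon}{3} = \epsilon, \]

where 'x' has been chosen close enough to 'a' that each of the three terms on the right is less than epsilon over 3.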
And now what you see what we do is very simple. To finish this proof off, all we do now is say, let epsilon greater than 0 be given. Choose epsilon 1 to equal epsilon over 3. Choose epsilon 2 to equal epsilon over 3. Choose epsilon sub 3 and epsilon sub 4 to each be the square root of epsilon over 3. Then we can find delta 1, delta 2, delta 3, delta 4, et cetera. Clean up all of these, you see, and pick delta to be the minimum of the four deltas involved. Again, this is done much more explicitly both in the text and in the exercises. I just wanted you to get an overview here. And by the way, while we're speaking of this, there's always the danger that some of you may respect the professor too much. So that danger is one I don't mind living with. The problem is this. You may get the idea out of respect for me that I have invented a unique proof here.
Let me tell you this. One, I did not invent this proof. Two, it is not unique. There are many different ways of proving the same result. For example, a person being told to work on this-- and I'm not going to carry the details out here-- but a person being told to work on something like this might have decided that instead of doing the clever thing that we did before, he was going to do the clever thing of adding and subtracting 'l1' times 'g of x'. Because you see, if he did that, what would happen? He could now factor out a 'g of x' from here and rewrite this as 'g of x' times ''f of x' minus 'l1''. And these two terms could be combined together; we factor out an 'l1', times what? ''g of x' minus 'l2''.
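In symbols, this alternative grouping, obtained by adding and subtracting 'l1' times 'g of x', is the identity

\[ f(x)\,g(x) - l_1 l_2 = g(x)\bigl(f(x) - l_1\bigr) + l_1\bigl(g(x) - l_2\bigr). \]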
And even though the details would have been considerably different, the intuitive approach would have been-- well, look. This is pretty close to 'l2' when 'x' is near 'a'. This I can make as small as I want. Similarly, I can do the same things over here, and pretty soon you've got the idea that you can make this sum as small as you want just by choosing these sufficiently small. And there's no unique way of doing this. Now, here's a main point. Once all this work has been done, for a wide variety of problems we never again have to use an epsilon or a delta.
Let me illustrate with one problem. Let's suppose we were given the problem limit of 'x squared plus 7x' as 'x approaches 3', and we wanted to find that limit. Our intuitive thing would be to do what? Let 'x' equal 3, in which case we get 9 plus 21 is 30. But we know by now that this instruction says that 'x' can't equal 3. This is the problem. We get what appears to be a nice answer that we believe in, but by an illegal method. Can we use our legal methods to gain the same result? The answer is yes. Because, you see, what is 'x squared plus 7x'? It's the sum of two functions, and we've just proven that the limit of the sum is the sum of the limits.
For example, what I can say is this. I don't know if this is true. What I do know is true is that the limit of 'x squared plus 7x' as 'x' approaches 3 is a limit of 'x squared' as 'x' approaches 3 plus the limit of '7x' as 'x' approaches 3. How do I know that? I've proven that the limit of a sum is the sum of the limits. Now what is this? This is really a product. This is really the limit of 'x' times 'x'. But we already know that the limit of a product is the product of the limits. See, we proved that theorem. Well, we almost proved it, certainly close enough so I think that we can say that we did. And this is a product also.
So you see, we can get from here to here to here just by our theorem. Now didn't we also prove as one of our theorems earlier that the limit of 'x' as 'x' approaches 'a' is 'a' itself? Sure, we did. In particular then, the limit of 'x' as 'x' approaches 3 has already been proven to be 3. So this is 3, this is 3. What is the limit of 7 as 'x' approaches 3? Well, 7 is a constant, and we already proved that the limit of a constant as 'x' approaches 'a' is that same constant. So what is the constant here? It's 7. And what's the limit of 'x' as 'x' approaches 3? That's 3. So by using our theorems, we get from what? From here to here to here to here. And now hopefully from a theorem that comes from some place around the third grade, 3 times 3 is 9, 7 times 3 is 21, the sum is 30. And now you see this is no longer a conjecture. This follows, inescapably, from the rules of our game, from the rules and our basic definition.
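Written as a single chain, the computation just made is

\[ \lim_{x \to 3} (x^2 + 7x) = \lim_{x \to 3} x^2 + \lim_{x \to 3} 7x = \Bigl(\lim_{x \to 3} x\Bigr)\Bigl(\lim_{x \to 3} x\Bigr) + \Bigl(\lim_{x \to 3} 7\Bigr)\Bigl(\lim_{x \to 3} x\Bigr) = 3 \cdot 3 + 7 \cdot 3 = 30. \]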
By the way, you may have a tendency to feel when you see something like this that, why did we need the epsilons and deltas in the first place? Wasn't it a terrible waste? Well, two things. First of all, we couldn't prove our theorems without the epsilons and deltas. And secondly, and don't lose sight of this, in many real life situations you may very well be faced with the type of a problem that doesn't ask you to prove that the limit of 'x squared plus 7x' is 30 as 'x' approaches 3, but rather might say how close must 'x' be chosen to 3 if we want 'x squared plus 7x' to be less than 30.023.
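Just to indicate the flavor of that kind of question, here is a sketch of one way it might be handled; the bound 14 below comes from the extra assumption that we agree in advance to keep 'x' within 1 of 3:

\[ |x^2 + 7x - 30| = |x - 3|\,|x + 10| < 14\,|x - 3| \quad \text{whenever } |x - 3| < 1, \]

so that requiring, say, the absolute value of 'x minus 3' to be less than 0.023 divided by 14 keeps 'x squared plus 7x' within 0.023 of 30, and in particular below 30.023.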
And then, you see, if that's the kind of a problem you have, these new theorems will not solve that problem for you. So we're not making a choice here. All we're saying is that the epsilons and deltas are the backbone of limits, but that fortunately, through mathematical theorems, we can get simpler ways of getting important results. And that was our main purpose of today's lecture, these two things. That completes our lecture for today, and so, until next time, good bye.
NARRATOR: Funding for the publication of this video was provided by the Gabriella and Paul Rosenbaum foundation. Help OCW continue to provide free and open access to MIT courses by making a donation at ocw.mit.edu/donate.