Topics covered: Relationship to the Fourier transform; Class of rational transforms and the concept of poles and zeroes; Region of convergence (ROC); Inverse transforms using partial fraction expansion.
Instructor: Prof. Alan V. Oppenheim
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free.
To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
[MUSIC PLAYING]
PROFESSOR: Over the last series of lectures, in discussing filtering, modulation, and sampling, we've seen how powerful and useful the Fourier transform is. Beginning with this lecture, and over the next several lectures, I'd like to develop and exploit a generalization of the Fourier transform, which will not only lead to some important new insights about signals and systems, but also will remove some of the restrictions that we've had with the Fourier transform.
The generalization that we'll be talking about in the continuous time case is referred to as the Laplace transform, and in the discrete time case, is referred to as the z transform. What I'd like to do in today's lecture is begin on the continuous time case, namely a discussion of the Laplace transform. Continue that into the next lecture, and following that develop the z transform for discrete time. And also, as we go along, exploit the two notions together.
Now, to introduce the notion of the Laplace transform, let me remind you again of what led us into the Fourier transform. We developed the Fourier transform by considering the idea of representing signals as linear combinations of basic signals. And in the Fourier transform, in the continuous time case, the basic signals that we picked in the representation were complex exponentials. And what we had referred to as the synthesis equation corresponded, in effect, to a decomposition of x of t as a linear combination of complex exponentials. And of course, associated with this was the corresponding analysis equation that, in effect, gave us the amplitudes associated with the complex exponentials.
Now, why did we pick complex exponentials? Well, recall that the reason was that complex exponentials are eigenfunctions of linear time-invariant systems, and that was very convenient. Specifically, if we have a linear time-invariant system with an impulse response h of t, what we had shown is that that class of systems has the property that if we put in a complex exponential, we get out a complex exponential at the same frequency and with a change in amplitude. And this change in amplitude, in fact, corresponded as we showed as the discussion went along, to the Fourier transform of the system impulse response.
So the notion of decomposing signals into complex exponentials was very intimately connected, and the Fourier transform was very intimately connected, with the eigenfunction property of complex exponentials for linear time-invariant systems.
Well, complex exponentials of that type are not the only eigenfunctions for linear time-invariant systems. In fact, what you've seen previously is that if we take a more general exponential, e to the st, where s is a more general complex number, not just j omega but sigma plus j omega, then for any value of s the complex exponential is an eigenfunction. And we can justify that simply by substitution into the convolution integral. In other words, the response to this complex exponential is the convolution of the impulse response with the excitation.
And notice that we can break this term into a product, e to the st e to the minus s tau. And the e to the st term can come outside the integration. And consequently, just carrying through that algebra, would reduce this integral to an integral with an e to the st factor outside. So just simply carrying through the algebra, what we would conclude is that a complex exponential with any complex number s would generate, as an output, a complex exponential of the same form multiplied by whatever this integral is. And this integral, of course, will depend on what the value of s is. But that's all that it will depend on. Or said another way, what this all can be denoted as is some function h of s that depends on the value of s.
So finally then, e to the st as an excitation to a linear time-invariant system generates a response, which is a complex constant depending on s, multiplying the same function that excited the system. So what we have then is the eigenfunction property, more generally, in terms of a more general complex exponential where the complex factor is given by this integral.
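Written out, the algebra just carried through is simply this (a restatement of the steps above, nothing new):

```latex
y(t) = \int_{-\infty}^{\infty} h(\tau)\, e^{s(t-\tau)}\, d\tau
     = e^{st} \int_{-\infty}^{\infty} h(\tau)\, e^{-s\tau}\, d\tau
     = H(s)\, e^{st},
\qquad \text{where } H(s) = \int_{-\infty}^{\infty} h(\tau)\, e^{-s\tau}\, d\tau .
```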
Well, in fact, what that integral corresponds to is what we will define as the Laplace transform of the impulse response. And in fact, we can apply this transformation to a more general time function that may or may not be the impulse response of a linear time-invariant system. And so, in general, it is this transformation on a time function which is the Laplace transform of that time function, and it's a function of s.
So the definition of the Laplace transform is that the Laplace transform of a time function x of t is the result of this transformation on x of t. It's denoted as x of s, and as a shorthand notation as we had with the Fourier transform, then we have in the time domain, the time function x of t, and in the Laplace transform domain, the function x of s. And these then represent a transform pair.
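As a concrete numerical check of this definition, here is a minimal sketch. The signal, x of t equal to e to the minus 2t for t greater than or equal to 0, and the test point s are hypothetical choices for illustration, not values from the lecture; the closed form it should match is 1 over s plus 2.

```python
import numpy as np
from scipy.integrate import quad

def laplace_at(x, s, t_max=50.0):
    """Numerically evaluate X(s) = integral of x(t) e^{-st} dt for a signal
    that is zero for t < 0 (quad needs real integrands, so the real and
    imaginary parts are integrated separately)."""
    re, _ = quad(lambda t: (x(t) * np.exp(-s * t)).real, 0.0, t_max)
    im, _ = quad(lambda t: (x(t) * np.exp(-s * t)).imag, 0.0, t_max)
    return re + 1j * im

x = lambda t: np.exp(-2.0 * t)   # x(t) for t >= 0; zero before that
s0 = 1.0 + 3.0j                  # a point well inside the ROC, Re(s) > -2
print(laplace_at(x, s0))         # approx (0.1667 - 0.1667j)
print(1.0 / (s0 + 2.0))          # closed form 1/(s + 2), same value
```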
Now, let me remind you that the development of that mapping is exactly the process that we went through initially in developing a mapping that ended up giving us the Fourier transform. Essentially, what we've done is just broadened our horizon somewhat, or our notation somewhat. And rather than pushing just a complex exponential through the system, we've pushed a more general time function e to the st, where s is a complex number with both a real part and an imaginary part.
Well, the discussion that we've gone through so far, of course, is very closely related to what we went through for the Fourier transform. The mapping that we've ended up with is called the Laplace transform. And as you can well imagine and perhaps, may have recognized already, there's a very close connection between the Laplace transform and the Fourier transform.
Well, to see one of the connections, what we can observe is that if we look at the Fourier transform expression and at the Laplace transform expression, where s is now a general complex number sigma plus j omega, these two expressions are identical if sigma is equal to 0. If sigma is equal to 0 so that s is just j omega, then this transformation is the same as that one. Substitute in s equals j omega and this is what we get.
What this then tells us is that if we have the Laplace transform, and if we look at the Laplace transform at s equals j omega, then that, in fact, corresponds to the Fourier transform of x of t.
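That relationship is easy to verify symbolically. A minimal sketch, using x of t equal to e to the minus t times the unit step as a hypothetical test signal (any signal whose region of convergence includes the j omega axis would do):

```python
import sympy as sp

t = sp.symbols('t', real=True)
s = sp.symbols('s')
w = sp.symbols('omega', real=True)

# Laplace transform of x(t) = e^{-t} u(t); sympy's one-sided transform
# agrees with the bilateral one here since x(t) = 0 for t < 0.
X, _, _ = sp.laplace_transform(sp.exp(-t), t, s)        # 1/(s + 1)

# Fourier transform from its defining integral (angular-frequency
# convention, matching the lecture).
X_fourier = sp.integrate(sp.exp(-t) * sp.exp(-sp.I * w * t), (t, 0, sp.oo))

# Substituting s = j*omega into the Laplace transform recovers it.
print(sp.simplify(X.subs(s, sp.I * w) - X_fourier))      # 0
```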
Now, there is a slight notational issue that this raises, and it's very straightforward to clean it up. But it's something that you have to just kind of focus on for a second to understand what the issue is. Notice that on the left-hand side of this equation we have x of s, representing the Laplace transform. When we look at that with sigma equal to 0, or s equal to j omega, our natural inclination is to write it as x of j omega, of course.
On the other hand, the right-hand side of the equation, namely the Fourier transform of x of t, we've typically written as x of omega, focusing on the fact that it's a function of the variable omega.
Well, there's a slight awkwardness here because here we're talking about an argument j omega, here we're talking about an argument omega. And a very straightforward way of dealing with that is to simply change our notation for the Fourier transform, recognizing that the Fourier transform, of course, is a function of omega, but it's also, in fact, a function of j omega. And if we write it that way, then the two notations come together. In other words, the Laplace transform at s equals j omega just simply reduces both mathematically and notationally to the Fourier transform.
So the notation that we'll now be adopting for the Fourier transform is the notation whereby we express the Fourier transform no longer simply as x of omega, but choosing as the argument j omega. Simple notational change.
Now, here we see one relationship between the Fourier transform and the Laplace transform. Namely that the Laplace transform for s equals j omega reduces to the Fourier transform. We also have another important relationship. In particular, the fact that the Laplace transform can be interpreted as the Fourier transform of a modified version of x of t. Let me show you what I mean.
Here, of course, we have the relationship that we just developed, namely that at s equals j omega the Laplace transform reduces to the Fourier transform. But now let's look at the more general Laplace transform expression. And if we substitute in s equals sigma plus j omega, which is the general form for this complex variable s, and we carry through some of the algebra, breaking this into the product of two exponentials, e to the minus sigma t times e to the minus j omega t, we now have this expression where, of course, in both of these there is a dt.
And now when we look at this, what we observe is that this, in fact, is the Fourier transform of something. What's the something? It's not x of t anymore, it's the Fourier transform of x of t multiplied by e to the minus sigma t.
So if we think of these two terms together, this integral is just the Fourier transform. It's the Fourier transform of x of t multiplied by an exponential. If sigma is greater than 0, it's an exponential that decays with time. If sigma is less than 0, it's an exponential that grows with time.
So we have then this additional relationship, which tells us that the Laplace transform is the Fourier transform of an exponentially weighted time function.
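The point can be made numerically as well. A sketch under assumed values, none of which come from the lecture: take x of t equal to e to the t for t greater than or equal to 0, which grows and so has no Fourier transform, and the point sigma equal to 2, omega equal to 5. The weighted signal decays, and its Fourier transform agrees with the closed-form Laplace transform 1 over s minus 1.

```python
import numpy as np
from scipy.integrate import quad

sigma, omega = 2.0, 5.0            # a test point with Re(s) > 1
x = lambda t: np.exp(t)            # x(t) for t >= 0; grows without bound

# Fourier transform (at omega) of the weighted signal x(t) e^{-sigma t},
# which decays like e^{-t} and is therefore absolutely integrable.
weighted = lambda t: x(t) * np.exp(-sigma * t)
re, _ = quad(lambda t: (weighted(t) * np.exp(-1j * omega * t)).real, 0.0, 50.0)
im, _ = quad(lambda t: (weighted(t) * np.exp(-1j * omega * t)).imag, 0.0, 50.0)

s0 = sigma + 1j * omega
print(re + 1j * im)    # approx (0.0385 - 0.1923j)
print(1.0 / (s0 - 1))  # closed-form Laplace transform 1/(s - 1), same value
```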
Now, this exponential weighting has some important significance. In particular, recall that there were issues of convergence with the Fourier transform. In particular, the Fourier transform may or may not converge. And for convergence, in fact, what's required is that the time function that we're transforming be absolutely integrable.
Now, we can have a time function that isn't absolutely integrable because, let's say, it grows exponentially as time increases. But when we multiply it by this exponential factor that's embodied in the Laplace transform, in fact that brings the function back down for positive time. And we'll impose absolute integrability on the product of x of t times e to the minus sigma t. And so the conclusion, and it's an important point, is that the Fourier transform of this product may converge even though the Fourier transform of x of t doesn't. In other words, the Laplace transform may converge even when the Fourier transform doesn't converge. And we'll see that, and we'll see examples of it, as the discussion goes along.
Now let me also draw your attention to the fact, although we won't be working through this in detail, that this equation, in effect, provides the basis for us to figure out how to express x of t in terms of the Laplace transform. In effect, we can apply the inverse Fourier transform to this and account for the exponential factor by bringing it over to the other side.
And if you go through this, and in fact you'll have an opportunity to go through this both in the video course manual and in the text, what you end up with is an expression for x of t in terms of x of s which corresponds to a synthesis equation. And that synthesis equation now builds x of t out of a linear combination not of functions of the form e to the j omega t, but of basic signals which are more general exponentials, e to the st.
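For reference, the synthesis equation alluded to here, the inverse Laplace transform as it's given in the text, is

```latex
x(t) = \frac{1}{2\pi j} \int_{\sigma - j\infty}^{\sigma + j\infty} X(s)\, e^{st}\, ds ,
```

where the integration runs along any vertical line, real part of s equal to sigma, lying inside the region of convergence.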
OK, well, let's just look at some examples of the Laplace transform of some time functions. And these examples that I'll go through are all examples that are worked out in the text. And so I don't want to focus on the algebra. What I'd like to focus on are some of the issues and the interpretation.
Let's first of all look at the example in the text, which is Example 9.1, the exponential e to the minus at times the unit step. If we take the Fourier transform of this exponential, then, as you well know, the result is 1 over j omega plus a. But that doesn't converge for every a. In particular, it converges only for a greater than 0. What that really means is that for convergence of the Fourier transform, this has to be a decaying exponential. It can't be an increasing exponential.
If instead we apply the Laplace transform to this, applying the Laplace transform is the same as taking the Fourier transform of x of t times an exponential, and the exponential that we would multiply by is e to the minus sigma t. So in effect, taking the Laplace transform of this is like taking the Fourier transform of e to the minus at times e to the minus sigma t.
And if we carry that through, just working through the integral, we end up with a Laplace transform which is 1 over s plus a. But just as the Fourier transform won't converge for every a, the Laplace transform will only converge when the Fourier transform of this weighted signal converges. Said another way, it's when the combination a plus sigma is greater than 0. So we would require, if I write it over here, that a plus sigma is greater than 0, or that sigma is greater than minus a.
So in fact, in the Laplace transform of this, we have an expression 1 over s plus a. But we also require, in interpreting that, that the real part of s be greater than minus a. So that, essentially, the Fourier transform of x of t times e to the minus sigma t converges. So it's important to recognize that the algebraic expression that we get is only valid for certain values of the real part of s.
And so, for this example, we can summarize it as follows: this exponential has a Laplace transform which is 1 over s plus a, where s is restricted to the range in which the real part of s is greater than minus a.
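The same example can be run symbolically. A minimal sketch with sympy, taking a positive for simplicity (sympy reports the abscissa of convergence alongside the algebraic expression, which is exactly the ROC boundary discussed here):

```python
import sympy as sp

t, s = sp.symbols('t s')
a = sp.symbols('a', positive=True)

# Example 9.1: x(t) = e^{-a t} u(t).  sympy's laplace_transform is the
# one-sided transform, which coincides with the bilateral transform here
# because x(t) = 0 for t < 0.
X, abscissa, _ = sp.laplace_transform(sp.exp(-a * t), t, s)
print(X)         # 1/(a + s)
print(abscissa)  # -a, i.e. the ROC is Re(s) > -a
```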
Now, we haven't had this issue before of restrictions on the value of s. With the Fourier transform, either it converged or it didn't converge. With the Laplace transform we have more flexibility, and so there are certain values of the real part of s for which it converges and certain values for which it doesn't.
The values of s for which the Laplace transform converges are referred to as the region of convergence of the Laplace transform. And it's important to recognize that in specifying the Laplace transform, what's required is not only the algebraic expression, but also the domain or set of values of s for which that algebraic expression is valid.
Just to underscore that point, let me draw your attention to another example in the text, which is Example 9.2. In Example 9.2, we have an exponential for negative time, 0 for positive time. And if you carry through the algebra there, you end up with a Laplace transform expression, which is again 1 over s plus a. Exactly the same algebraic expression as we had for the previous example. The important distinction is that now the real part of s is restricted to be less than minus a.
And so, in fact, if you compare this example with the one above it, and let's just look back at the answer that we had there. If you compare those two examples, here the algebraic expression is 1 over s plus a with a certain region of convergence. Here the algebraic expression is 1 over s plus a. And the only difference between those two is the domain or region of convergence.
So there is another complication, or twist, now. Not only do we need to generate the algebraic expression, but we also have to be careful to specify the region of convergence over which that algebraic expression is valid.
Now, later on in this lecture, and actually also as the discussion of the Laplace transform goes on, we'll begin to see and understand more about how the region of convergence relates to various properties of the time function.
Well, let's finally look at one additional example from the text, and this is Example 9.3. What it consists of is a time function which is the sum of two exponentials. And although we haven't formally talked about properties of the Laplace transform yet, one of the properties that we'll see, and it's relatively easy to develop, is the fact that the Laplace transform of a sum is the sum of the Laplace transforms. So, in fact, we can get the Laplace transform of the sum of these two terms as the sum of the Laplace transforms.
So for this one, we know from the example that we looked at previously, Example 9.1, that this is of the form 1 over s plus 1 with a region of convergence, which is the real part of s greater than minus 1. For this one, we have a Laplace transform which is 1 over s plus 2 with a region of convergence which is the real part of s greater than minus 2.
So for the two of them together, we have to take the overlap of those two regions. In other words, we have to take the region that encompasses both the real part of s greater than minus 1 and the real part of s greater than minus 2. And if we put those together, then we have a combined region of convergence, which is the real part of s greater than minus 1. So this is the expression.
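Here is that combination carried out symbolically. A sketch assuming equal weights on the two exponentials, which is consistent with the pole-zero picture described next (poles at minus 1 and minus 2, a zero at minus 3/2), though the exact coefficients on the board aren't visible in the transcript:

```python
import sympy as sp

t, s = sp.symbols('t s')

# Sum of two right-sided exponentials, e^{-t} u(t) + e^{-2t} u(t).
x = sp.exp(-t) + sp.exp(-2 * t)
X, abscissa, _ = sp.laplace_transform(x, t, s)

print(sp.factor(sp.together(X)))  # (2*s + 3)/((s + 1)*(s + 2))
print(abscissa)                   # -1 : the overlap of Re(s) > -1 and Re(s) > -2
```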
And for this particular example, what we have is a ratio of polynomials: there's a numerator polynomial and a denominator polynomial. And it's convenient to summarize these by plotting the roots of the numerator polynomial and the roots of the denominator polynomial in the complex plane. The complex plane in which they're plotted is referred to as the s-plane.
So we can, for example, take the denominator polynomial and summarize it by specifying the fact, or by representing the fact that it has roots at s equals minus 1 and at s equals minus 2. And I've done that in this picture by putting an x where the roots of the denominator polynomial are. The numerator polynomial has a root at s equals minus 3/2, and I've represented that by a circle. So these are the roots of the denominator polynomial and this is the root of the numerator polynomial for this example.
And also, for this example, we can represent the region of convergence, which is the real part of s greater than minus 1. And so that's, in fact, the region over here.
Also, if I draw just the roots of the numerator and denominator polynomials, I would need one additional piece of information to specify the algebraic expression completely, namely a multiplying constant out in front of the whole thing.
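Numerically, that summary, the roots plus the multiplying constant, is exactly what a routine like scipy's tf2zpk produces from the polynomial coefficients. A small sketch using the rational function from this example:

```python
from scipy.signal import tf2zpk

# X(s) = (2s + 3) / ((s + 1)(s + 2)) = (2s + 3) / (s^2 + 3s + 2),
# given by its numerator and denominator coefficients.
zeros, poles, gain = tf2zpk([2, 3], [1, 3, 2])

print(zeros)  # [-1.5]      -> the "o" in the s-plane plot
print(poles)  # [-2. -1.]   -> the "x" marks
print(gain)   # 2.0         -> the multiplying constant out in front
```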
Well, this particular example has a Laplace transform which is a rational function, namely one polynomial in the numerator divided by another polynomial in the denominator. And in fact, as we'll see, Laplace transforms which are ratios of polynomials form a very important class. They, in fact, represent systems that are describable by linear constant coefficient differential equations. You shouldn't necessarily see why that's true now; in fact, for sure you shouldn't yet. We'll see that later.
But that means that Laplace transforms that are rational functions, namely, the ratio of a numerator polynomial divided by the denominator polynomial, become very important in the discussion that follows. And in fact, we have some terminology for this. The roots of the numerator polynomial are referred to as the zeroes of the Laplace transform. Because, of course, those are the values of s at which x of s becomes 0.
And the roots of the denominator polynomial are referred to as the poles of the Laplace transform. And those are the values of s at which the Laplace transform blows up. Namely, becomes infinite. If you think of setting s equal to a value where this denominator polynomial goes to 0, of course, x of s becomes infinite.
And what we would expect and, of course, we'll see that this is true. What we would expect is that wherever that happens, there must be some problem with convergence of the Laplace transform. And indeed, the Laplace transform doesn't converge at the poles. Namely, at the roots of the denominator polynomial.
So, in fact, let's focus in on that a little further. Let's examine and talk about the region of convergence of the Laplace transform, and how it's associated both with properties of the time function, and also with the location of the poles of the Laplace transform. And as we'll see, there are some very specific and important relationships and conclusions that we can draw about how the region of convergence is constrained and associated with the locations of the poles in the s-plane.
Well, to begin with, we can, of course, make the statement, as I've just made, that the region of convergence contains no poles. In particular, if I think of this general rational function, the poles of x of s are the values of s at which the denominator is 0, or equivalently, at which x of s blows up. And that, of course, implies that the expression no longer converges.
Well, that's one statement that we can make. Now, there are some others. And one, for example, is the statement that if I have a point in the s-plane that corresponds to convergence, then in fact the entire vertical line in the s-plane with that same real part will also be a set of values for which the Laplace transform converges.
And what's the reason for that? The reason is that s is sigma plus j omega, and convergence of the Laplace transform is associated with convergence of the Fourier transform of e to the minus sigma t times x of t. And so the convergence only depends on sigma. If it only depends on sigma, then if the transform converges for a value of sigma and some value of omega, it will converge for that same sigma and any value of omega.
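In symbols, this is just the observation that

```latex
\left| x(t)\, e^{-st} \right|
  = \left| x(t) \right| e^{-\sigma t} \left| e^{-j\omega t} \right|
  = \left| x(t) \right| e^{-\sigma t},
```

since the magnitude of e to the minus j omega t is 1; so absolute integrability, and hence convergence, depends only on sigma, the real part of s, and not on omega.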
The conclusion then is that the region of convergence, if I have a point, then I also have a line. And so what that suggests is that as we look at the region of convergence, it in fact corresponds to strips in the complex plane.
Now, finally, we can tie the region of convergence to the convergence of the Fourier transform. In particular, we know that the Laplace transform reduces to the Fourier transform when the complex variable s is equal to j omega, in other words, when sigma equals 0. The implication is that the Fourier transform of x of t converging is equivalent to the statement that the Laplace transform converges for sigma equal to 0. In other words, the region of convergence includes what? The j omega axis in the s-plane.
So we have then some statements that kind of tie together the location of the poles and the region of convergence. Let me make one other statement, which is a much harder statement to justify. And I won't try to, I'll just simply state it. And that is that the region of convergence of the Laplace transform is a connected region. In other words, the entire region consists of a single strip in the s-plane; it can't consist of a strip over here, for example, and a strip over there.
Well, let me emphasize some of those points a little further. Let's suppose that I have a Laplace transform, and the Laplace transform that I'm talking about is a rational function, which is 1 over s plus 1 times s plus 2. Then the pole-zero pattern, as it's referred to, shows in the s-plane the locations of the roots of the numerator and denominator polynomials. Of course, here the numerator is just a constant, so there are no numerator roots. The denominator polynomial roots, which I've represented by these x's, are shown here. And so this is the pole-zero pattern.
And from what I've said, the region of convergence can't include any poles and it must correspond to strips in the s-plane. And furthermore, it must be just one connected region rather than multiple regions. And so with this algebraic expression then, the possible choices for the region of convergence consistent with those properties are the following.
One of them would be a region of convergence to the right of this pole. A second would be a region of convergence which lies between the two poles as I show here. And a third is a region of convergence which is to the left of this pole. And because of the fact that I said without proof that the region of convergence must be a single strip, it can't be multiple strips. In fact, we could not consider, as a possible region of convergence, what I show here. So, in fact, this is not a valid region of convergence. There are only three possibilities associated with this pole-zero pattern. Namely, to the right of this pole, between the two poles, and to the left of this pole.
Now, to carry the discussion further, we can, in fact, associate the region of convergence of the Laplace transform with some very specific characteristics of the time function. And what this will do is help us understand, for various choices of the region of convergence, the interpretation that we can impose on the related time function. Let me show you what I mean.
Suppose that we start with a time function as I indicate here, which is a finite duration time function. In other words, it's 0 except in some time interval.
Now, recall that the Fourier transform converges if the time function has the property that it's absolutely integrable. And as long as everything stays finite in terms of amplitudes, with a finite duration signal there's no difficulty that we're going to run into here.
Now, here the Fourier transform will converge. And now the question is, what can we say about the region of convergence of the Laplace transform?
Well, the Laplace transform is the Fourier transform of the time function multiplied by an exponential. And so we can ask whether we can destroy the absolute integrability of this by multiplying by an exponential that grows too fast or decays too fast, or whatever. And let's take a look at that.
Suppose that this time function is absolutely integrable, and let's multiply it by a decaying exponential. So this is now x of t times e to the minus sigma t, if I think of multiplying these two together. And what you can see is that for positive time, sort of thinking informally, I'm helping the integrability of the product because I'm pushing this part down. For negative time, unfortunately, I'm making things grow. But I don't let them grow indefinitely, because there's some time before which this is equal to 0.
Likewise, if I had a growing exponential, then for negative time, for this part, I'm making things smaller. For positive time, eventually this exponential is growing without bound, but the time function stops at some point. So the idea then, kind of, is that for a finite duration time function, no matter what kind of exponential I multiply by, whether it's going this way or going this way, because the limits on the integral are essentially finite, I'm guaranteed that I'll always maintain absolute integrability. And so, in fact, for a finite duration time function, the region of convergence is the entire s-plane.
Now, we can also make statements about other kinds of time functions. And let's look at a time function which I define as a right-sided time function. And a right-sided time function is one which is 0 up until some time, and then it goes on after that, presumably off to infinity.
Now, let me remind you that the whole issue here with the region of convergence has to do with exponentials that we can multiply a time function by and have the product end up being absolutely integrable.
Well, suppose that I multiply this time function by an exponential which, let's say, decays: an exponential e to the minus sigma 0 t. What you can see, sort of intuitively, is that if this product is absolutely integrable and I were to increase sigma 0, then I'm making things even better for positive time, because I'm pushing them down. And whereas they might be worse for negative time, that doesn't matter, because before some time the product is equal to 0. So if this product is absolutely integrable, then if I choose an exponential e to the minus sigma 1 t, where sigma 1 is greater than sigma 0, that product will also be absolutely integrable. And we can draw an important conclusion about the region of convergence from that.
In particular, we can make the statement that if the time function is right-sided and if convergence occurs for some value sigma 0, then in fact, we will have convergence of the Laplace transform for all values of the real part of s greater than sigma 0. The reason, of course, being that if sigma 0 increases, then the exponential decays even faster for positive time.
Now, what that says, thinking of it another way in terms of the region of convergence as we might draw it in the s-plane, is that if we have a point that's in the region of convergence corresponding to some value sigma 0, then all values of s to the right of that in the s-plane will also be in the region of convergence.
We can also combine that with the statement that for rational functions we know that there can't be any poles in the region of convergence. If you put those two statements together, then we end up with a statement that if x of t is right-sided and if its Laplace transform is rational, then the region of convergence is to the right of the rightmost pole. So we have here a very important insight, which tells us that we can infer some property about the time function from the region of convergence. Or conversely, if we know something about the time function, namely being right-sided, then we can infer something about the region of convergence.
Well, in addition to right-sided signals, we can also have left-sided signals. And a left-sided signal is essentially a right-sided signal turned around. In other words, a left-sided signal is one that is 0 after some time.
Well, we can carry out exactly the same kind of argument there. Namely, the signal goes off to infinity in the negative time direction and stops someplace for positive time. If I have an exponential that I can multiply it by and have that product be absolutely integrable, and if I choose an exponential that decays even faster for negative time, so that I'm pushing the stuff way out there down even further, then I enhance the integrability even more. And you might have to think through that a little bit, but it's exactly the flip side of the argument for right-sided signals.
And the conclusion then is that if we have a left-sided signal and we have a point, a value of the real part of s which is in the region of convergence, then in fact, all values to the left of that point in the s-plane will also be in the region of convergence.
Now, similar to the statement that we made for right-sided signals, if x of t is left-sided and we're talking about a rational Laplace transform, which we most typically will be, then we can make the statement that the region of convergence is to the left of the leftmost pole. We know that if we find a point that's in the region of convergence, everything to the left of that has to be in the region of convergence, and we can't have any poles in the region of convergence. You put those two statements together and it says the region is to the left of the leftmost pole.
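As a small illustration of the right-sided and left-sided statements put together, here is a toy helper, not anything from the lecture, that reports the ROC implied by the pole locations and the sidedness of the time function:

```python
# For a rational Laplace transform: a right-sided signal has its ROC to
# the right of the rightmost pole; a left-sided signal, to the left of
# the leftmost pole.
def roc(poles, sided):
    reals = [complex(p).real for p in poles]
    if sided == "right":
        return f"Re(s) > {max(reals)}"   # right of the rightmost pole
    if sided == "left":
        return f"Re(s) < {min(reals)}"   # left of the leftmost pole
    raise ValueError("sided must be 'right' or 'left'")

print(roc([-1, -2], "right"))  # Re(s) > -1.0
print(roc([-1, -2], "left"))   # Re(s) < -2.0
```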
Now, the final situation is the situation where we have a signal which is neither right-sided nor left-sided. It goes off to infinity for positive time and it goes off to infinity for negative time. And there the thing to kind of recognize is that if you multiply by an exponential that's decaying very fast for positive time, it's going to be growing very fast for negative time. Conversely, if it's decaying very fast for negative time, it's growing very fast for positive time. And there's this notion of trying to balance the value of sigma. And in effect, what that says is that the region of convergence can't extend too far to the left or too far to the right.
Said another way, for a two-sided signal, if we have a point which is in the region of convergence, then that point defines a strip in the s-plane: take that point and extend it to the left until you bump into a pole, and extend it to the right until you bump into a pole.
So you begin then to see that we can tie together some properties of the region of convergence and the right-sidedness, or left-sidedness, or two-sidedness of the time function. And you'll have a chance to examine that in more detail in the video course manual. Let's conclude this lecture by talking about how we might get the time function given the Laplace transform.
Well, if we have a Laplace transform, we can, in principle, get the time function back again by recognizing this relationship between the Laplace transform and the Fourier transform, and using the formal Fourier transform expression. Or equivalently, the formal inverse Laplace transform expression, which is in the text.
But more typically what we would do is what we've done also with the Fourier transform, which is to use simple Laplace transform pairs together with the notion of the partial fraction expansion. And let's just go through that with an example.
Let's suppose that I have a Laplace transform as I indicated here in its pole-zero plot and a region of convergence which is to the right of this pole. And what we can identify from the region of convergence, in fact, is that we're talking about a right-sided time function. So the region of convergence is the real part of s greater than minus 1.
And now looking down at the algebraic expression, we have the algebraic expression for this, as I indicated here, equivalently expanded in a partial fraction expansion, as I show below. So if you just simply combine these together, that's the same as this. And the region of convergence is the real part of s greater than minus 1.
Now, this is the sum of two terms, so the time function is the sum of two time functions. And the region of convergence of the combination must be the intersection of the regions of convergence associated with each one. Recognizing that this region is to the right of both poles, that tells us immediately that each of these two terms corresponds to the Laplace transform of a right-sided time function.
Well, let's look at it term by term. The first term is the factor 1 over s plus 1 with a region of convergence to the right of this pole. And this algebraically corresponds to what I've indicated. And this, in fact, is similar to, or a special case of, the example that we pointed to at the beginning of the lecture, namely Example 9.1. And so we can just simply use that result. If you think back to that example or refer to your notes, we know that a time function of the form e to the minus at times the unit step gives us the Laplace transform 1 over s plus a, with the real part of s greater than minus a. And so this is the inverse Laplace transform of the first term.
If we now consider the pole at s equals minus 2, here is the region of convergence that we originally began with. In fact, having removed the pole at minus 1, we can extend this region of convergence to this pole. And we now have an algebraic expression which is minus 1 over s plus 2, with the real part of s greater than minus 1, although, in fact, we can extend the region of convergence up to the pole.
And the inverse transform of this is now, again, referring to the same example, minus e to the minus 2t times the unit step. And if we simply put the two terms together then, adding the one that we have here to what we had before, we have a total inverse Laplace transform, which is that. So essentially, what's happened is that each of the poles has contributed an exponential factor. And because of the region of convergence being to the right of all those poles, that is consistent with the notion that both of those terms correspond to right-sided time functions.
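Here is the same inversion carried out symbolically. A minimal sketch, assuming, as the partial fractions quoted above suggest, that the transform being inverted is X(s) = 1/((s+1)(s+2)) with the right-sided ROC:

```python
import sympy as sp

t = sp.symbols('t', real=True)
s = sp.symbols('s')

X = 1 / ((s + 1) * (s + 2))

# The partial fraction expansion: 1/(s + 1) - 1/(s + 2).
print(sp.apart(X, s))

# sympy's inverse_laplace_transform assumes exactly the right-sided ROC,
# to the right of the rightmost pole, which is the case worked here.
x = sp.inverse_laplace_transform(X, s, t)
print(sp.expand(x))  # exp(-t)*Heaviside(t) - exp(-2*t)*Heaviside(t)
```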
Well, let's just focus for a second or two on the same pole-zero pattern. But instead of a region of convergence which is to the right of the poles as we had before, we'll now take a region of convergence which is between the two poles. And I'll let you work through this more leisurely in the video course manual.
But when we carry out the partial fraction expansion, as I've done below, we would now associate with this pole a region of convergence to the right, and with this pole, a region of convergence to the left. And so what we would have is the sum of a right-sided time function due to this pole at minus 2, which is of the form e to the minus 2t for t positive, and a left-sided time function due to this pole at minus 1, which is of the form minus e to the minus t for t negative. So the answer that we get when we decompose this using the partial fraction expansion, being very careful to associate a region of convergence to the right with this pole and a region of convergence to the left with this pole, is a time function with those two pieces. And you'll look at that a little more carefully when you sit down with the video course manual.
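For this between-the-poles ROC, here is a sketch of how the answer is assembled term by term, again under the assumption that X(s) = 1/((s+1)(s+2)) = 1/(s+1) - 1/(s+2). sympy's inverse transform only handles the right-sided case, so the standard pairs are applied by hand; the signs follow mechanically from those pairs and from the partial fraction coefficients, which aren't visible on the board in the transcript:

```python
import sympy as sp

t = sp.symbols('t', real=True)

# ROC: -2 < Re(s) < -1.  Standard transform pairs:
#   1/(s + a), Re(s) > -a  <->   e^{-a t} u(t)     (right-sided)
#   1/(s + a), Re(s) < -a  <->  -e^{-a t} u(-t)    (left-sided)
x_pole_minus2 = -sp.exp(-2 * t) * sp.Heaviside(t)   # -1/(s+2), ROC to its right
x_pole_minus1 = -sp.exp(-t) * sp.Heaviside(-t)      # +1/(s+1), ROC to its left
print(x_pole_minus2 + x_pole_minus1)
```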
OK, well, what we've gone through, rather quickly, is an introduction to the Laplace transform. And a couple of points to underscore again: the Laplace transform is very closely associated with the Fourier transform, and in fact, the Laplace transform for s equals j omega reduces to the Fourier transform. But more generally, the Laplace transform is the Fourier transform of x of t with an exponential weighting. And there are some exponentials for which that product converges, and other exponentials for which that product has a Fourier transform that doesn't converge. That then imposes on the discussion of the Laplace transform what we refer to as the region of convergence.
And it's very important to understand that in specifying a Laplace transform, it's important to identify not only the algebraic expression, but also the values of s for which it's valid. Namely, the region of convergence of the Laplace transform.
Finally what we did was to tie together some properties of a time function with things that we can say about the region of convergence of its Laplace transform.
Now, just as with the Fourier transform, the Laplace transform has some very important properties. And out of these properties come some mechanisms for using the Laplace transform with such systems as those described by linear constant coefficient differential equations. But more importantly, the properties, as we understand them further, will help us in using and exploiting the Laplace transform to study and understand linear time-invariant systems. And that's what we'll go on to next time: in particular, talking about properties, and then associating with linear time-invariant systems much of the discussion that we've had today relating to the Laplace transform. Thank you.