Topics covered: Geometric evaluation of frequency responses from pole-zero plots; Differential equation and system function for first-order and second-order systems; Effect of properties; Overdamped and underdamped systems; Analysis and characteristics of second-order systems; Demonstration of use in speech synthesis.
Instructor: Prof. Alan V. Oppenheim
Lecture 21: Continuous-Time...
Related Resources
Continuous-time Second-order Systems (PDF)
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
[MUSIC PLAYING]
PROFESSOR: Last time, we introduced the Laplace transform as a generalization of the Fourier transform, and, just as a reminder, the Laplace transform expression as we developed it is this integral, very much like the Fourier transform integral, except with a more general complex variable. And, in fact, we developed and talked about the relationship between the Laplace transform and the Fourier transform. In particular, the Laplace transform with the Laplace transform variable s purely imaginary, in fact, reduces to the Fourier transform. Or, more generally, with the Laplace transform variable as a complex number, the Laplace transform is the Fourier transform of the corresponding time function with an exponential weighting. And, also, as you should recall, the exponential weighting introduced the notion that the Laplace transform may converge for some values of sigma and perhaps not for other values of sigma. So associated with the Laplace transform was what we refer to as the region of convergence.
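In standard notation, the transform being described and its relationship to the Fourier transform are:

$$X(s) = \int_{-\infty}^{\infty} x(t)\, e^{-st}\, dt, \qquad s = \sigma + j\omega$$

$$X(\sigma + j\omega) = \int_{-\infty}^{\infty} \left[ x(t)\, e^{-\sigma t} \right] e^{-j\omega t}\, dt = \mathcal{F}\{\, x(t)\, e^{-\sigma t} \,\}$$

so for sigma equal to zero, that is, for s purely imaginary, the Laplace transform reduces to the Fourier transform, and the factor $e^{-\sigma t}$ is the exponential weighting responsible for the region of convergence.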
Now just as with the Fourier transform, there are a number of properties of the Laplace transform that are extremely useful in describing and analyzing signals and systems. For example, one of the properties that we, in fact, took advantage of in our discussion last time was the linearity property, which says, in essence, that the Laplace transform of a linear combination of two time functions is the same linear combination of the associated Laplace transforms. Also, there is a very important and useful property which tells us how the derivative of a time function-- rather, the Laplace transform of the derivative-- is related to the Laplace transform. In particular, the Laplace transform of the derivative is the Laplace transform of x of t multiplied by s. And, as you can see by just setting s equal to j omega, in fact, this reduces to the corresponding Fourier transform property.
And a third property that we'll make frequent use of is referred to as the convolution property. Again, a generalization of the convolution property for Fourier transforms. Here the convolution property says that the Laplace transform of the convolution of two time functions is the product of the associated Laplace transforms.
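Collecting the three properties just mentioned, in standard notation:

$$a\,x_1(t) + b\,x_2(t) \;\longleftrightarrow\; a\,X_1(s) + b\,X_2(s) \qquad \text{(linearity)}$$

$$\frac{dx(t)}{dt} \;\longleftrightarrow\; s\,X(s) \qquad \text{(differentiation)}$$

$$x_1(t) * x_2(t) \;\longleftrightarrow\; X_1(s)\,X_2(s) \qquad \text{(convolution)}$$

where, in each case, the region of convergence of the result contains at least the intersection of the individual regions of convergence.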
Now it's important at some point to think carefully about the region of convergence as we discuss these properties. And let me just draw your attention to the fact that in discussing properties fully and in detail, one has to pay attention not just to how the algebraic expression changes, but also what the consequences are for the region of convergence, and that's discussed in somewhat more detail in the text and I won't do that here.
Now the convolution property leads to, of course, a very important and useful mechanism for dealing with linear time invariant systems, very much as the Fourier transform did. In particular, the convolution property tells us that if we have a linear time invariant system, the output in the time domain is the convolution of the input and the impulse response. In the Laplace transform domain, the Laplace transform of the output is the Laplace transform of the impulse response times the Laplace transform of the input.
And again, this is a generalization of the corresponding property for Fourier transforms. In the case of the Fourier transform, the Fourier transform of the impulse response we refer to as the frequency response. In the more general case with Laplace transforms, it's typical to refer to the Laplace transform of the impulse response as the system function.
Now in talking about the system function, some issues of the region of convergence-- and for that matter, the location of poles of the system function-- are closely tied in and related to issues of whether the system is stable and causal. And in fact, there are some useful statements that can be made that play an important role throughout the further discussion. For example, we know from previous discussions that there's a condition for stability of a system, which is absolute integrability of the impulse response. And that, in fact, is the same condition as for convergence of the Fourier transform of the impulse response. What that says, really, is that if a system is stable, then the region of convergence of the system function must include the j omega axis-- which, of course, is where the Laplace transform reduces to the Fourier transform. So that relates the region of convergence and stability.
Also, you recall from last time that we talked about the region of convergence associated with right sided time functions. In particular for a right sided time function, the region of convergence must be to the right of the rightmost pole. Well, if, in fact, we have a system that's causal, then that causality imposes the condition that the impulse response be right sided. And so, in fact, for causality, we would have a region of convergence associated with the system function, which is to the right of the rightmost pole.
Now interesting and very important is the consequence if you put those two statements together: in particular, you're led to the conclusion that for stable, causal systems, all the poles must be in the left half of the s-plane. What's the reason? The reason, of course, is that if the system is stable and causal, the region of convergence must be to the right of the rightmost pole, and it must include the j omega axis. Obviously then, all the poles must be in the left half of the s-plane. And again, that's an issue that is discussed somewhat more carefully and in more detail in the text.
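As an aside, this pole-location test is easy to check numerically. Here is a minimal Python sketch, assuming the system is described by the coefficients of its denominator polynomial; the example polynomials are illustrative, not taken from the lecture:

```python
import numpy as np

def causal_system_is_stable(den_coeffs):
    """A causal LTI system H(s) = N(s)/D(s) is stable exactly when
    every pole (every root of D) lies strictly in the left half plane."""
    poles = np.roots(den_coeffs)
    return bool(np.all(poles.real < 0))

# D(s) = s^2 + 2s + 5: poles at -1 +/- 2j, so a causal system is stable
print(causal_system_is_stable([1, 2, 5]))   # True
# D(s) = s^2 - 2s + 5: poles at +1 +/- 2j, so a causal system is unstable
print(causal_system_is_stable([1, -2, 5]))  # False
```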
Now, the properties that we're talking about here are not the only properties, there are many others. But these properties, in particular, provide the mechanism-- as they did with Fourier transforms-- for turning linear constant coefficient differential equations into algebraic equations and, correspondingly, lead to a mechanism for dealing with and solving linear constant coefficient differential equations. And I'd like to illustrate that by looking at both first order and second order differential equations.
Let's begin, first of all, with a first order differential equation. So what we're talking about is a first order system. What I mean by that is a system that's characterized by a first order differential equation. And if we apply to this equation the differentiation property, then the derivative-- the Laplace transform of the derivative is s times the Laplace transform of the time function. The linearity property allows us to combine these together. And so, consequently, applying the Laplace transform to this equation leads us to this algebraic equation, and following that through, leads us to the statement that the Laplace transform of the output is one over s plus a times the Laplace transform of the input.
We know from the convolution property that this Laplace transform is the system function times x of s. And so, one over s plus a is the system function or equivalently, the Laplace transform of the impulse response. So, we can determine the impulse response by taking the inverse Laplace transform of h of s given by one over s plus a.
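Written out, the chain of steps is:

$$\frac{dy(t)}{dt} + a\,y(t) = x(t) \;\;\xrightarrow{\;\mathcal{L}\;}\;\; s\,Y(s) + a\,Y(s) = X(s) \;\;\Longrightarrow\;\; H(s) = \frac{Y(s)}{X(s)} = \frac{1}{s+a}$$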
Well, we can do that using the inspection method, which is one way that we have of doing inverse Laplace transforms. The question is then, what time function has a Laplace transform which is one over s plus a? The problem that we run into is that there are two answers to that. One over s plus a is the Laplace transform of an exponential for positive time, but one over s plus a is also the Laplace transform of an exponential for negative time. Which one of these do we end up picking?
Well, recall that the difference between these was in their region of convergence. And in fact, in this case, this corresponded to a region of convergence which was the real part of s greater than minus a. In this case, this was the corresponding Laplace transform, provided that the real part of s is less than minus a. So we have to decide which region of convergence we pick, and it's not the differential equation that will tell us that; it's something else that has to give us that information. What could it be? Well, what it might be is the additional information that the system is either stable or causal.
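The two candidate transform pairs, identical algebraically and distinguished only by their regions of convergence, are:

$$e^{-at}\,u(t) \;\longleftrightarrow\; \frac{1}{s+a}, \qquad \Re\{s\} > -a \qquad \text{(right sided)}$$

$$-e^{-at}\,u(-t) \;\longleftrightarrow\; \frac{1}{s+a}, \qquad \Re\{s\} < -a \qquad \text{(left sided)}$$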
So for example, if the system was causal, we would know that the region of convergence is to the right of the pole, and that would correspond, then, to this being the impulse response. Whereas if we knew that the system, let's say, was non-causal, then we would associate with it this other region of convergence, and we would know then that this is the impulse response. So a very important point is that what we see is that the linear constant coefficient differential equation gives us the algebraic expression for the system function, but does not tell us about the region of convergence.
We get the region of convergence from some auxiliary information. What is that information? Well, it might, for example, be knowledge that the system is perhaps stable, which tells us that the region of convergence includes the j omega axis, or perhaps causal, which tells us that the region of convergence is to the right of the rightmost pole. So it's the auxiliary information that specifies for us the region of convergence. Very important point: the differential equation by itself does not completely specify the system; it only essentially tells us what the algebraic expression is for the system function.
Alright, that's a first order example. Let's now look at a second order system, and the differential equation that I picked in this case I've parameterized in a certain way, which we'll see will be useful. In particular, it's a second order differential equation, and I've chosen, just for simplicity, to not include any derivatives on the right hand side, although we could have. In fact, if we did, that would insert zeros into the system function, as well as the poles inserted by the left hand side.
We can determine the system function in exactly the same way, namely, apply the Laplace transform to this equation. That would convert this differential equation to an algebraic equation. And now when we solve this algebraic equation for y of s in terms of x of s, it will come out in the form of y of s equal to h of s times x of s. And h of s, in that case, we would get simply by dividing out by this polynomial in s, and so the system function then is the expression that I have here. So this is the form for a second order system: there are two poles, since this is a second order polynomial, and there are no zeros, associated with the fact that I had no derivatives of the input on the right hand side of the equation.
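In the parameterization being used here, the differential equation and the resulting system function are:

$$\frac{d^2y(t)}{dt^2} + 2\zeta\omega_n\,\frac{dy(t)}{dt} + \omega_n^2\,y(t) = \omega_n^2\,x(t)$$

$$H(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$$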
Well, let's look at this example-- namely the second order system-- in a little more detail. And what we'll want to look at is the location of the poles and some issues such as, for example, the frequency response. So here again I have the algebraic expression for the system function. And as I indicated, this is a second order polynomial, which means that we can factor it into two roots. So c1 and c2 represent the poles of the system function. And in particular, in relation to the two parameters zeta and omega sub n-- if we look at what these roots are, then what we get are the two expressions that I have below.
And notice, incidentally, that if zeta is less than one, then what's under the square root is negative. And so this, in fact, corresponds to an imaginary term for zeta less than one. And so the two roots, then, have a real part which is given by minus zeta omega sub n, and an imaginary part-- if I were to rewrite this and express it in terms of j, the square root of minus one. Looking below, we'll have a real part which is minus zeta omega sub n and an imaginary part which is plus or minus omega sub n times the square root of one minus zeta squared. So that's for zeta less than one, and for zeta greater than one, the two roots, of course, will be real.
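Explicitly, the two poles are:

$$c_{1,2} = -\zeta\omega_n \pm \omega_n\sqrt{\zeta^2 - 1}$$

which, for zeta less than one, becomes the complex conjugate pair

$$c_{1,2} = -\zeta\omega_n \pm j\,\omega_n\sqrt{1 - \zeta^2}$$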
Alright, so let's examine this for the case where zeta is less than one. And what that corresponds to, then, are two poles in the complex plane. And they have a real part and an imaginary part. And you can explore this in somewhat more detail on your own, but, essentially, what happens is that as you keep the parameter omega sub n fixed and vary zeta, these poles trace out a circle. And, for example, with zeta equal to zero, the poles are on the j omega axis at omega sub n. As zeta increases and gets closer to one, the poles converge toward the real axis, and then, in particular, for zeta greater than one, what we end up with are two poles on the real axis.
Well, actually, the case that we want to look at a little more carefully is when the poles are complex. And what this becomes is a second order system which, as we'll see as the discussion goes on, has an impulse response that oscillates with time and, correspondingly, a frequency response that has a resonance. Well, let's examine the frequency response a little more carefully. And what I'm assuming in the discussion is that, first of all, the poles are in the left half plane, corresponding to zeta omega sub n being positive-- and so minus zeta omega sub n is negative. And furthermore, I'm assuming that the poles are complex. And in that case, the algebraic expression for the system function is omega sub n squared in the numerator and two poles in the denominator, which are complex conjugates.
Now, what we want to look at is the frequency response of the system. And that corresponds to looking at the Fourier transform of the impulse response, which is the Laplace transform on the j omega axis. So we want to examine what h of s is as we move along the j omega axis. And notice that, to do that, in this algebraic expression, we want to set s equal to j omega and then evaluate-- for example, if we want to look at the magnitude of the frequency response-- evaluate the magnitude of the complex number. Well, there's a very convenient way of doing that geometrically by recognizing that in the complex plane, this complex number minus that complex number represents a vector. And essentially, to look at the magnitude of this complex number corresponds to taking omega sub n squared and dividing it by the product of the lengths of these vectors.
So let's look, for example, at the vector s minus c1, where s is on the j omega axis. And doing that, here is the vector c1, and here is the vector s-- which is j omega if we're looking, let's say, at this value of frequency-- and this vector, then, is the vector which is j omega minus c1. So in fact, it's the length of this vector that we want to observe as we change omega-- namely as we move along the j omega axis. We want to take this vector and this vector, take the lengths of those vectors, multiply them together, divide that into omega sub n squared, and that will give us the frequency response.
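Here is a minimal numerical sketch of that geometric evaluation; the choice of zeta equal to 0.4 and omega sub n equal to one is illustrative:

```python
import numpy as np

# Geometric evaluation of |H(jw)| for H(s) = wn^2 / ((s - c1)(s - c2)):
# divide wn^2 by the product of the lengths of the vectors drawn from
# each pole to the point j*w on the imaginary axis.
zeta, wn = 0.4, 1.0                            # illustrative values, zeta < 1
c1 = -zeta * wn + 1j * wn * np.sqrt(1 - zeta**2)
c2 = np.conj(c1)

w = np.linspace(0.0, 3.0 * wn, 601)            # points along the j-omega axis
s = 1j * w
magnitude = wn**2 / (np.abs(s - c1) * np.abs(s - c2))

# The resonant peak falls at wn*sqrt(1 - 2*zeta^2), which approaches the
# pole's imaginary part wn*sqrt(1 - zeta^2) as zeta gets small.
print(w[np.argmax(magnitude)])
```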
Now, it's a little hard to see how the frequency response will work out just looking at one point. Although notice that as we move along the j omega axis, as we get closer to this pole, this vector, in fact, gets shorter, and so we might expect that the frequency response-- as we're moving along the j omega axis in the vicinity of that pole-- would start to peak. Well, I think that all of this is much better seen dynamically on the computer display, so let's go to the computer display, and what we'll look at is a second order system-- the frequency response of it-- as we move along the j omega axis.
So here we see the pole pair in the complex plane and to generate the frequency response, we want to look at the behavior of the pole vectors as we move vertically along the j omega axis. So we'll show the pole vectors and let's begin at omega equals zero. So here we have the pole vectors from the poles to the point omega equal to zero. And, as we move vertically along the j omega axis, we'll see how those pole vectors change in length. The magnitude of the frequency response is the reciprocal of the product of the lengths of those vectors.
Shown below is the frequency response where we've begun just at omega equal to zero. And as we move vertically along the j omega axis and the pole vector lengths change, that will, then, influence what the frequency response looks like. We've started here to move a little bit away from omega equal to zero and notice that in the upper half plane the pole vector has gotten shorter. The pole vector for the pole in the lower half plane has gotten longer. And now, as omega increases further, that process will continue. And in particular, the pole vector associated with the pole in the upper half plane will be its shortest in the vicinity-- at a frequency in the vicinity of that pole-- and so, for that frequency, then, the frequency response will peak and we see that here.
From this point, as the frequency increases, corresponding to moving further vertically along the j omega axis, both pole vectors will increase in length. And that means, then, that the magnitude of the frequency response will decrease. For this specific example, the magnitude of the frequency response will asymptotically go to zero. So what we see here is that the frequency response has a resonance, and as we saw geometrically from the way the vectors behaved, that resonance in frequency is very clearly associated with the position of the poles. And so, in fact, to illustrate that further and dramatize it while we're focused on it, let's now look at the frequency response for the second order example as we change the pole positions. And first, what we'll do is let the poles move vertically, parallel to the j omega axis, and see how the frequency response changes, and then we'll have the poles move horizontally, parallel to the real axis, and see how the frequency response changes.
To display the behavior of the frequency response as the poles move, we've changed the vertical scale on the frequency response somewhat. And now what we want to do is move the poles, first, parallel to the j omega axis, and then parallel to the real axis. Here we see the effect of moving the poles parallel to the j omega axis. And what we observe is that, in fact, the frequency location of the resonance shifts, basically tracking the location of the pole.
If we now move the poles back down closer to the real axis, then this resonance will shift back toward its original location, and so let's now see that. And here we are back at the frequency that we started at. Now we'll move the poles even closer to the real axis. The frequency location of the resonance will continue to shift toward lower frequencies. And also in the process, incidentally, the height of the resonant peak will increase because, of course, the lengths of the pole vectors are getting shorter. And so, we see now the resonance shifting down toward lower and lower frequency. And, finally, what we'll now do is move the poles back to their original position, and the resonant peak will, of course, shift back up. And correspondingly, the height or amplitude of the resonance will decrease. And now we're back at the frequency response that we had generated previously.
Next we'd like to look at the behavior as the poles move parallel to the real axis, first closer to the j omega axis and then further away. As they move closer to the j omega axis, the resonance sharpens, because the pole vector gets shorter and changes in length more quickly as we move past it along the j omega axis. So here we see the effect of moving the poles closer to the j omega axis. The resonance has gotten narrower in frequency and higher in amplitude, associated with the fact that the pole vector gets shorter. Next, as we move back to the original location, the resonance will broaden once again and the amplitude will decrease.
And then, if we continue to move the poles even further away from the j omega axis, the resonance will broaden even further and the amplitude of the peak will become even smaller. And finally, let's now just move the poles back to their original position, and we'll see the resonance narrow again and become higher. And so what we see, then, is that for a second order system, the behavior of the resonance basically is associated with the pole locations: the frequency of the resonance is associated with the vertical position of the poles, and the sharpness of the resonance is associated with the real part of the poles-- in other words, their position closer to or further away from the j omega axis.
OK, so for complex poles, then, for the second order system, what we see is that we get a resonant kind of behavior, and, in particular, that resonant behavior tends to peak, or get peakier, as the value of zeta gets smaller. And here, just to remind you of what you saw, here is the frequency response with one particular choice of values-- well, this is normalized so that omega sub n is one-- one particular choice for zeta, namely 0.4. Here is what we have with zeta smaller, and, finally, here is an example where zeta has gotten even smaller than that. And what that corresponds to is the poles moving closer to the j omega axis, the corresponding frequency response getting peakier.
Now in the time domain, what happens is that we have, of course, these complex roots, which I indicated previously, where this represents the imaginary part because zeta is less than one. And in the time domain, we will have a form for the behavior which is A e to the c one t, plus A conjugate, e to the c one conjugate t. And so, in fact, as the poles get closer to the j omega axis-- corresponding to zeta getting smaller-- in the frequency domain the resonances get sharper. In the time domain, the real part of the poles has gotten smaller, and that means, in fact, that the behavior will be more oscillatory and less damped.
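For zeta between zero and one, and taking the causal (right sided) choice of impulse response, that sum of conjugate terms works out to the familiar damped sinusoid:

$$h(t) = \frac{\omega_n}{\sqrt{1-\zeta^2}}\; e^{-\zeta\omega_n t}\, \sin\!\left(\omega_n\sqrt{1-\zeta^2}\,t\right) u(t)$$

where the decay rate, zeta omega sub n, is the magnitude of the real part of the poles, and the oscillation frequency, omega sub n times the square root of one minus zeta squared, is their imaginary part.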
And so just looking at that again. Here is, in the time domain, what happens. First of all, with the parameter zeta equal to 0.4, and it oscillates and exponentially dies out. Here is the second order system where zeta is now 0.2 instead of 0.4. And, finally, the second order system where zeta is 0.1. And what we see as zeta gets smaller and smaller is that the oscillations are basically the same, but the exponential damping becomes less and less.
Alright, now, this was a somewhat more detailed look at second order systems. And second order systems-- and for that matter, first order systems-- are systems that are important in their own right, but they also are important as basic building blocks for more general, in particular higher order, systems. And the way in which that's done typically is by combining first and second order systems together in such a way that they implement higher order systems. And two very common connections are cascade connections and parallel connections.
In a cascade connection, we would think of combining the individual systems together as I indicate here, in series. And, of course, from the convolution property, the overall system function is the product of the individual system functions. So, for example, if these were all second order systems, and I combined capital N of them together in cascade, the overall system would be a system that would have two N poles-- in other words, it would be a two N order system. That's one very common kind of connection.
Another very common kind of connection for first and second order systems is a parallel connection, where, in that case, we connect the systems together as I indicate here. The overall system function is simply the sum of these, and that follows from the linearity property. And so the overall system function would be as I indicate algebraically here. And notice that if each of these is a second order system, and I had capital N of them in parallel, when you think of putting the overall system function over one common denominator, that common denominator, in general, is going to be of order two N. So either the parallel connection or the cascade connection could be used to implement higher order systems.
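Here is a small sketch of the polynomial bookkeeping for both connections, using two illustrative second order sections:

```python
import numpy as np

# Two illustrative second order sections H_i(s) = num_i(s) / den_i(s)
num1, den1 = [1.0], [1.0, 0.8, 1.0]   # wn = 1, zeta = 0.4
num2, den2 = [4.0], [1.0, 1.2, 4.0]   # wn = 2, zeta = 0.3

# Cascade: the system functions multiply
cascade_num = np.polymul(num1, num2)
cascade_den = np.polymul(den1, den2)

# Parallel: add after placing over a common denominator
parallel_num = np.polyadd(np.polymul(num1, den2), np.polymul(num2, den1))
parallel_den = np.polymul(den1, den2)

# Both denominators are fourth order: two sections give 2N = 4 poles
print(cascade_den)
print(parallel_den)
```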
One very common context in which second order systems are combined together, either in parallel or in cascade, to form a more interesting system is, in fact, in speech synthesis. And what I'd like to do is demonstrate a speech synthesizer, which I have here, which in fact is a parallel combination of four second order systems, very much of the type that we've just talked about. I'll return to the synthesizer in a minute. Let me first just indicate what the basic idea is.
In speech synthesis, what we're trying to represent or implement is something that corresponds to the vocal tract. The vocal tract is characterized by a set of resonances. And we can think of representing each of those resonances by a second order system. And then the higher order system corresponding to the vocal tract is built by, in this case, a parallel combination of those second order systems.
So for the synthesizer, what we have connected together in parallel is four second order systems, with a control on each one of them that sets the center frequency, or resonant frequency, of each of the second order systems. The excitation is an excitation that would represent the air flow through the vocal cords. The vocal cords vibrate, and there are puffs of air through the vocal cords as they open and close. And so the excitation for the synthesizer corresponds to a pulse train representing the air flow through the vocal cords, the fundamental frequency of this representing the fundamental frequency of the synthesized voice. So that's the basic structure of the synthesizer.
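As a rough digital analogue of this structure, here is a sketch of a parallel formant synthesizer: a pulse train excitation driving second order resonators whose outputs add. The sample rate, formant frequencies, and bandwidths are hypothetical values chosen for illustration, not measurements of the device being demonstrated:

```python
import numpy as np
from scipy import signal

fs = 16000                            # sample rate in Hz (assumed)
f0 = 110                              # fundamental frequency of the excitation
t = np.arange(int(0.5 * fs)) / fs     # half a second of output

# Impulse train standing in for the puffs of air through the vocal cords
excitation = np.zeros_like(t)
excitation[::int(fs / f0)] = 1.0

# Hypothetical (center frequency Hz, bandwidth Hz) pairs for four resonators
formants = [(300, 60), (2300, 100), (3000, 120), (3500, 150)]

speech = np.zeros_like(t)
for fc, bw in formants:
    wn = 2 * np.pi * fc               # resonant frequency in rad/s
    zeta = np.pi * bw / wn            # damping giving roughly bw Hz of bandwidth
    # Discretize H(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2) and filter
    b, a = signal.bilinear([wn**2], [1.0, 2 * zeta * wn, wn**2], fs=fs)
    speech += signal.lfilter(b, a, excitation)   # parallel connection: outputs add
```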
And what we have in this analog synthesizer are separate controls on the individual center frequencies. There is a control representing the center frequency of the third resonator and the fourth resonator, and those are represented by these two knobs. And then the first and second resonators are controlled by moving this joystick. The first resonator by moving the joystick along this axis and the second resonator by moving the joystick along this axis. And then, in addition to controls on the four resonators, we can control the fundamental frequency of the excitation, and we do that with this knob.
So let's, first of all, just listen to one of the resonators, and the resonator that I'll play is the fourth resonator. And what you'll hear first is the output as I vary the center frequency of that resonator.
[BUZZING]
So I'm lowering the center frequency. And then, bringing the center frequency back up. And then, as I indicated, I can also control the fundamental frequency of the excitation by turning this knob.
[BUZZING]
Lowering the fundamental frequency. And then, increasing the fundamental frequency.
Alright, now, if the four resonators in parallel are an implementation of the vocal cavity, then, presumably, what we can synthesize when we put them all in are vowel sounds, so let's do that. I'll now switch in the other resonators. When we do that, then, depending on what choice we have for the individual resonant frequencies, we should be able to synthesize vowel sounds. So here, for example, is the vowel e.
[BUZZING "E"].
Here is
[BUZZING "AH"]
--ah.
A.
[BUZZING FLAT A]
And, of course, we can--
[BUZZING OO]
--generate
[BUZZING "I"]
--lots of other vowel sounds.
[BUZZING "AH"]
--and change the fundamental frequency at the same time.
[CHANGES FREQUENCY UP AND DOWN]
Now, if we want to synthesize speech, it's not enough to just synthesize steady state vowels-- that gets boring after a while. Of course, what happens with the vocal cavity is that it moves as a function of time, and that's what generates the speech that we want to generate. And so, presumably then, if we change these resonant frequencies as a function of time appropriately, then we should be able to synthesize speech. And so by moving these resonances around, we can generate synthesized speech. And let's try it with a phrase. And I'll do that by simply adjusting the center frequencies appropriately.
[BUZZING "HOW ARE YOU"]
Well, hopefully you understood that. As you could imagine, I spent at least a few minutes before the lecture trying to practice that so that it would come out to be more or less intelligible.
Now the system as I've just demonstrated it is, of course, a continuous time system, or an analog speech synthesizer. There are many versions of digital, or discrete time, synthesizers-- one of the first, in fact, being a device that many of you are very likely familiar with, which is the Texas Instruments Speak and Spell, which I show here. And what's very interesting and rather dramatic about this device is the fact that it implements the speech synthesis in very much the same way as I've demonstrated with the analog synthesizer. In this case, it's five second order filters in a configuration that's slightly different from a parallel configuration but conceptually very closely related.
And let's take a look inside the box. And what we see there, with a slide that was kindly supplied by Texas Instruments, is the fact that there really are only four chips in there-- a controller chip and some storage. And the important point is that the chip that's labeled as the speech synthesis chip, in fact, is what embodies or implements the five second order filters and, in addition, incorporates some other things-- some memory and also the D-to-A converter. So, in fact, the implementation of the synthesizer is pretty much done on a single chip.
Well that's a discrete time system. We've been talking for the last several lectures about continuous time systems and the Laplace transform. Hopefully what you've seen in this lecture and the previous lecture is the powerful tool that the Laplace transform affords us in analyzing and understanding system behavior.
In the next lecture, what I'd like to do is parallel the discussion for discrete time and turn our attention to the z transform. And, as you can imagine, simply by virtue of the fact that I have shown you a digital and an analog version of very much the same kind of system, the discussions parallel each other very strongly, and the z transform will play very much the same role in discrete time that the Laplace transform does in continuous time. Thank you.