Topics covered: Analysis of the open loop system and exploration of choices for the feedback dynamics; Behavior description through a second-order linear constant-coefficient differential equation; Root-locus analysis; Combination of proportional and derivative feedback to achieve pendulum stability; Demonstration: inverted pendulum on a track, effect of modifying dynamics, effect of modifying damping characteristics.
Instructor: Prof. Alan V. Oppenheim
Lecture 26: Feedback Example: The Inverted Pendulum
Related Resources
Feedback Example: The Inverted Pendulum (PDF)
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
[MUSIC PLAYING]
PROFESSOR: Last time, we began a discussion of feedback. And I briefly introduced a number of applications. For example, the use of feedback in compensating for non-ideal elements in a system. And also, as another example, the use of feedback in stabilizing unstable systems. In this final lecture, what I'd like to do is focus in more detail on the use of feedback to stabilize unstable systems.
And the specific context in which I'd like to do that is in the context of the inverted pendulum. Now, I've referred to the inverted pendulum in several past lectures. And, basically, as everyone realizes, a pendulum is essentially a rod with a weight on the bottom. And, naturally, an inverted pendulum is just that, upside down. And so it's essentially a rod that's top-heavy.
And the idea with the inverted pendulum is, although by itself it's unstable around a pivot point at the bottom, the idea is to apply an external input, an acceleration, essentially to keep it balanced. And we've all probably done this type of thing at some point in our lives, either childhood or not. And so that's the system that I'd like to analyze and then illustrate in this lecture.
Now, the context in which we'll do that is not with my son's horse, but actually with an inverted pendulum which is mounted on a cart. And first I'd like to go through an analysis of it, and then we'll actually see how the system works. So basically, then, the system that we're talking about is going to be a system which is a movable cart. Mounted on the cart is a rod with a weight on the top. And this then becomes the inverted pendulum.
And the cart has an external acceleration applied to it, which is the input that we can apply to the system. And then, in general, we can expect some disturbances. And I'm going to represent the disturbances in terms of an angular acceleration, which shows up around the weight at the top of the rod.
So if we look at the system, then, as I've kind of indicated it here, basically we have two inputs. We have one input, which is the external disturbances, which we can assume we have no control over. And then there is the acceleration that's applied to the cart externally, or, in the case of balancing my son's horse, the movement of my hand.
And then, through the system dynamics, that ends up influencing the angle of the rod. And we'll think of the angle of the rod as the system output. And basically the idea, then, is to try to keep that angle at 0 by applying the appropriate acceleration.
Now, as I've mentioned several times previously, if we know exactly what the system dynamics are, and if we know exactly what the external disturbances are, then theoretically we can choose an acceleration for the cart which will exactly balance the system, or balance the rod. However, since the system is inherently unstable, any deviation from that model-- in particular, any unanticipated external disturbances-- will excite the instability. And what that means is that the rod will fall.
So the idea, then, is, as we'll see, to stabilize the system by placing feedback around it. In particular through a measurement of the output angle, processed through some appropriate feedback dynamics, using that to control the acceleration of the cart. And if we choose the feedback dynamics correctly, then, in fact, we can end up with a stable system, even though the open-loop system is unstable.
Well, as a first step, let's analyze the system in its open-loop form. In particular, let's analyze the system without feedback, demonstrate that, indeed, it is unstable, and then see how we can stabilize it using feedback. So the system, then, is-- as we've indicated, the variables involved are the measured angle, which represents the output, the angular acceleration due to external disturbances, and then there is the applied external acceleration on the cart.
So these represent the variables, with L representing the length of the rod, and s of t, which I indicate here as the position of the cart and which we won't really be paying attention to. And without going into the details specifically, what we'll do is set up the equation, or write the equation, in terms of balancing the accelerations. And if you go through that process, then the basic equation that you end up with is the equation that I indicate here.
And this equation, then, tells us how the basic forces, or accelerations, are balanced, where this is the second derivative of the angle. And then we have the acceleration due to gravity, reflected through the angular acceleration times the sine of the angle, the angular acceleration due to the external disturbances, and finally, the angular acceleration due to the motion of the cart. Now, this equation, as it's written here, is a nonlinear equation in the angle theta of t.
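For reference, one consistent way to write the acceleration balance being described, with the sign attached to the cart term taken as a convention here, is

\[
L\,\frac{d^{2}\theta(t)}{dt^{2}} \;=\; g\,\sin\theta(t) \;+\; L\,x(t) \;-\; a(t)\,\cos\theta(t),
\]

where theta of t is the angle of the rod from vertical, x of t is the angular acceleration due to the disturbances, and a of t is the acceleration applied to the cart.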
And what we'd like to do is linearize the equation. And we'll linearize the equation by making the assumption that the angle is very small, close to 0. In other words, what we'll assume is that we're able to keep the rod relatively vertical. And our analysis, because we're linearizing the equations, will obviously depend on that.
So making the assumption that the angle is small, we then assume that the sine of the angle is approximately equal to the angle, and the cosine of the angle is approximately equal to 1. That will then linearize this equation. And the resulting equation, then, in its linearized form, is the one that I indicate here.
So we have an equation of motion, which linearizes the balance of the accelerations. And what we see on the right-hand side of the equation is the combined inputs due to the angular acceleration of the rod and the acceleration of the cart. And on the left-hand side of the equation, we have the other forces due to the change in the angle. So this is, then, the differential equation associated with the open-loop system, the basic dynamics of the system.
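With the small-angle approximations sine of theta approximately theta and cosine of theta approximately 1, the equation sketched above reduces, under the same sign convention, to

\[
L\,\frac{d^{2}\theta(t)}{dt^{2}} \;-\; g\,\theta(t) \;=\; L\,x(t) \;-\; a(t),
\]

with the combined inputs on the right-hand side, which is consistent with the open-loop system function 1/(Ls^2 - g) used below.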
And if we apply the Laplace transform to this equation and solve for the system function, then the basic equation that we're left with expresses the Laplace transform of the angle, equal to the system function, the open-loop system function, times the Laplace transform of the combined inputs. And I remind you again that our assumption is that this is an input that we have no control over. This is the input that we can control. And the two together, of course, form the combined input.
Now, we can, of course, solve for the poles and 0's. There are no 0's. And there are two poles, since this is a second-order denominator. And looking at the poles in the s-plane then, we see that we have a pair of poles, one at minus the square root of g over L, and one at plus the square root of g over L.
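Assuming the linearized form sketched above and zero initial conditions, the Laplace transform gives

\[
\Theta(s) \;=\; \frac{1}{L s^{2} - g}\,\big[\,L\,X(s) - A(s)\,\big],
\qquad
L s^{2} - g = 0 \;\;\Longrightarrow\;\; s = \pm\sqrt{g/L},
\]

so the open-loop poles sit symmetrically on the real axis, one in each half-plane.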
And the important observation, then, is that while this pole represents a stable pole, the right half-plane pole represents an unstable pole. And so this system, in fact, is an unstable system. It's unstable because what we have is a pole in the right half-plane for the open-loop system.
Now, before we see how to stabilize the system, let's, in fact, look at the mechanical setup, which we won't turn on for now, and just see essentially what this instability means and what the physical orientation of the equipment is. So what we have, which you just saw a caricature of in the view graph, is an inverted pendulum. This represents the pendulum. And it's mounted with the pivot point at the bottom. It's mounted on a cart.
And here we have the cart. And this is a pivot point. And, as you can see, the cart can move back and forth on a track. And the external acceleration, which is applied to the cart, is applied through this cable. And that cable is controlled by a motor. And we have the motor at this end of the track. And so as the motor turns, then the cart will move back and forth.
OK. Now, since the system is not turned on and there's no feedback-- in fact, the system, as we just saw in the transparency, is an unstable system. And what that instability means is that, for example, if I set the angle to 0 and then let it go, then as soon as there's a slight external disturbance, it will start to fall.
Now, you can imagine that not only is the system unstable as it is, but it certainly can't accommodate changes in the system dynamics. For example, if we change, let's say, the weight of the pendulum. So if we thought, for example, of changing the weight by, let's say, putting something like a glass with something in it on top, and I think about trying to balance it by itself, obviously that's hard. In fact, I would say, without turning the system on, guaranteed to be impossible.
So the basic system, as we've just analyzed it, is inherently an unstable system, the instability reflected in the fact that the pendulum, following its own natural forces, will tend to fall. And now what we want to look at is how, through the use of feedback, we can, in fact, stabilize the system.
So let's first, once again, look at the open-loop system. And the open-loop system function that we saw was a system function of this form, 1 over Ls squared minus g, and that represents the system function associated with the two inputs, one being the external disturbances, the other being the externally applied acceleration. And the system function here represents the system dynamics. And the resulting output is the angle.
Now, the feedback, the basic strategy behind the feedback, is to in some way use the measured angle, make a measurement of the angle, and use that through appropriate feedback dynamics to stabilize the system. And so with feedback applied around the system then, there would be some measurement of the angle through some appropriately chosen feedback dynamics. And that then would determine for us the applied external acceleration corresponding to what the motor does by pulling the cable back and forth. And we would like to choose G of s so that the overall system is stable.
Now, as you recall from the lecture last time, with the feedback around the system and expressed here in terms of negative feedback, the overall transfer function then is the transfer function associated with the basic feedback loop, where H of s is the open-loop system and G of s corresponds to the feedback dynamics. Now, that's the basic strategy.
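Written out, the closed-loop relation for this negative-feedback configuration is the standard one from last time,

\[
Q(s) \;=\; \frac{H(s)}{1 + G(s)\,H(s)},
\qquad
H(s) = \frac{1}{L s^{2} - g},
\]

and the design problem is to choose G of s so that the poles of the closed-loop system all lie in the left half-plane.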
We haven't decided yet how to choose the feedback dynamics. And that becomes the next step. And we want to choose the dynamics in such a way that the system ends up being stabilized. Well, I think the thing to do is just simply begin with what would seem to be the most obvious, which is a simple measurement of the angle.
Let's take the angular measurements, feed that back, let's say through a potentiometer and perhaps an amplifier, so that there's some gain, and see if we can stabilize the system simply through feedback which is proportional to a measurement of the angle, what is typically referred to as "proportional feedback." Well, let's analyze the results of doing that.
Once again we have the basic feedback equation, where the open-loop transfer function is what we had developed previously, given by the second-order expression. Now, using proportional feedback, we choose the acceleration a of t to be directly proportional, through a gain constant or attenuation constant K1, to the measured angle theta of t. And consequently the system function in the feedback path is just simply a gain or attenuation K1. So this, then, is the system function for the feedback path.
Well, substituting this into the closed-loop expression, then the overall expression for the Laplace transform of the output angle is, as I indicate here, namely that theta of s is proportional, through this system function, to the Laplace transform of the input x of t. And let me remind you that x of t, the external disturbances, now represents the only input, since the other input, corresponding to the applied acceleration to the cart, is now controlled only through the feedback loop.
So we have this system function, then, for the closed-loop system. We recognize this once again as a second-order system. And the poles of this second-order system are then given by plus and minus the square root of g minus K1, divided by L. So K1, the feedback constant, clearly influences the position of the poles. And let's look, in fact, at where the poles are in the s-plane.
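Substituting G of s equal to K1 into the closed-loop formula gives, up to a constant scale factor on the disturbance input,

\[
\Theta(s) \;\propto\; \frac{X(s)}{L s^{2} + (K_{1} - g)},
\qquad
s \;=\; \pm\sqrt{\frac{g - K_{1}}{L}}.
\]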
Well, first of all, with K1 equal to 0, which, of course, is no feedback and corresponds to the open-loop system, we have the poles where they were previously. And if we now choose K1, let's say less than 0, then what will happen as K1 becomes more and more negative, so that this term is more and more positive, is that the left half-plane pole will move further into the left half-plane, the right half-plane pole will move further into the right half-plane. So clearly with K1 negative, this pole, which represents an instability, becomes even more unstable.
Well, instead of K1 negative, let's try K1 positive and see what happens. And with K1 positive, what happens in this case is that the left half-plane pole moves closer to the origin, the right half-plane pole moves closer to the origin. What one would hope is that they both end up in the left half-plane at some point. But, in fact, they don't.
What happens is that eventually they both reach the origin, split at that point, and travel along the j-omega-axis. Now, this movement of the poles, as we vary K1 either positive or negative, what we see is that with the open-loop system, as we introduce feedback, those basic poles move in the s-plane, either this way, as K1 becomes more negative, or for this particular case, if K1 is positive, they move in together, split, and move along the j-omega-axis.
And that locus of the poles, in fact, is referred to in feedback terminology as the "root locus." And as is discussed in much more detail in the text, there are lots of ways of determining the root locus for feedback systems without explicitly solving for the roots. For this particular example, in fact, we can determine the root locus in the most straightforward way simply by solving for the roots of the denominator of the system function.
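As a quick numerical illustration of this root locus, the sketch below solves the closed-loop denominator L s^2 + (K1 - g) for a few values of K1; the values of g and L used here are illustrative assumptions, not the parameters of the demonstration cart.

```python
# Minimal sketch: proportional-feedback root locus by direct root-finding.
# g and L are assumed for illustration; they are not the lab values.
import numpy as np

g = 9.8   # gravitational acceleration (m/s^2)
L = 0.5   # assumed pendulum length (m)

# Closed-loop denominator with a(t) = K1 * theta(t):  L*s^2 + (K1 - g)
for K1 in [0.0, 5.0, 9.8, 15.0, 25.0]:
    poles = np.roots([L, 0.0, K1 - g])
    print(f"K1 = {K1:5.1f}   poles = {np.round(poles, 3)}")

# For K1 < g the poles are real and symmetric about the origin (unstable);
# at K1 = g they meet at the origin; for K1 > g they are purely imaginary,
# so proportional feedback alone is at best marginally stable.
```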
Now, notice that in this case, these poles, with K1 positive, have moved together. They move along the j-omega-axis. And before they come together, the system is clearly unstable. Even when they come together and split, the system is marginally stable, because, in fact, the system would tend to oscillate.
And what that oscillation means is that with the measurement of the angles, essentially if the poles are operating on the j-omega-axis, what will happen is that things will oscillate back and forth. Perhaps with the cart moving back and forth trying to compensate, and the rod sort of moving in the opposite direction. In any case, what's happened is, with just proportional feedback, we apparently are unable to stabilize the system.
Well, you could think that the reason, perhaps, is that we're not responding fast enough. For example, if the angle starts to change, perhaps, in fact, we should make the feedback proportional to the rate of change of angles, rather than to the angle itself. And so we could examine the possibility of using what's referred to as "derivative feedback."
In derivative feedback, what we would do is to choose, for the feedback equation, or for the feedback system function, something reflecting a measurement of the derivative of the angle. And so here, once again, we have the open-loop system function, now with derivative feedback: we will use an acceleration which, instead of being proportional to the angle, as it was in the previous case, is proportional to the derivative of the angle.
And so the basic feedback dynamics, or feedback system function, is then G of s is equal to K2 times s, the multiplication by s reflecting the fact that in the time domain it's measuring the derivative of the angle. The associated system function is then indicated here. And if we solve again this second-order equation for its roots, that tells us that the locations of the poles are given by this equation.
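With G of s equal to K2 times s, the closed-loop denominator and its roots are

\[
L s^{2} + K_{2}\,s - g = 0
\quad\Longrightarrow\quad
s \;=\; \frac{-K_{2} \pm \sqrt{K_{2}^{2} + 4 g L}}{2L},
\]

and since the quantity under the square root is always positive, both poles stay on the real axis no matter how K2 is chosen.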
And so now what we would want to look at is how these poles move as we vary the derivative feedback constant K2. So let's look at the root locus for that case. And first, once again, we have the basic open-loop poles. And the open-loop poles consist of a pole on the left half-plane and a pole on the right half-plane, corresponding in this equation to K2 equal to 0. That is, no feedback in the system.
If K2 is negative, then what you can see is that this real part will become more negative. Since K2 is squared here, this is still a positive quantity. And, in fact, then with K2 less than 0, the root locus that we get is indicated by this.
Now, notice, then, that this right half-plane pole is becoming more unstable. And the left half-plane pole likewise is becoming more unstable. And this point, by the way, corresponds to where this pole ends up, or where the root locus ends up, when K2 eventually becomes infinite. So clearly K2 negative is not going to stabilize the system.
Let's try K2 positive. And the root locus dictated by this equation, then, is what I indicate here. The left half-plane pole moves further into the left half-plane. That's good. That's getting more stable. The right half-plane pole is moving closer to the left half-plane, but unfortunately never gets there. And, in fact, it's when K2 eventually becomes infinite that this pole just gets to the point where the system becomes marginally stable.
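A quick check of that limiting behavior: for large positive K2, the two roots of L s^2 + K2 s - g behave approximately as

\[
s_{1} \;\approx\; -\frac{K_{2}}{L} \;\longrightarrow\; -\infty,
\qquad
s_{2} \;\approx\; \frac{g}{K_{2}} \;\longrightarrow\; 0^{+},
\]

so the right half-plane pole only reaches the origin in the limit, which is why derivative feedback by itself gives at best marginal stability.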
So what we've found is that with proportional feedback by itself we can't stabilize the system, and with derivative feedback by itself we can't stabilize the system. And a logical next choice is to see if we can stabilize the system by both measuring the angle and at the same time being careful to be responsive to how fast that angle is changing, so that if it's changing too fast, we can move the cart, or our hand under the inverted pendulum, more quickly.
Well, now then what we want to examine is the use of proportional plus derivative feedback. And in that case, we then have a choice for the acceleration, which is proportional with one constant to the angle and proportional with another constant to the derivative of the angle. And so the basic system function, then, with proportional plus derivative feedback, is a system function which is K1 plus K2 times s.
We then have an overall closed-loop system function, theta of s, which is given by this equation. And so the roots of this equation then represent the poles of the closed-loop system. And those poles involve two parameters. They involve the parameter K2, which multiplies the derivative of the angle, and the constant K1, which multiplies the angle.
And we'll, first of all, examine this just with K2 positive, because what we can see is that as we vary K1, if K2 were negative, that would, more or less immediately, put poles into the right half-plane. And the more negative K2 got, the larger this term is going to get. So, in fact, as it will turn out, we can stabilize the system, provided that we choose K2 to be greater than 0.
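With G of s equal to K1 plus K2 times s, the closed-loop denominator becomes

\[
L s^{2} + K_{2}\,s + (K_{1} - g),
\]

and for a second-order polynomial both roots lie strictly in the left half-plane exactly when all three coefficients have the same sign, that is, K2 > 0 and K1 > g. That is the sense in which K1 has to be made large enough in what follows.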
All right. Now, with K2 greater than 0, what happens in the location of the poles is that if K2 is greater than 0 and we choose K1 equal to 0, then in effect the influence of K2 is to shift the poles of the open-loop system slightly. And so with K2 greater than 0 and K1 equal to 0, we have a set of poles, which are indicated here, and so this is just a slight shift to the left of the open-loop poles, depending on the value of K2.
Now, as we vary K1, and in particular we're going to choose K1 greater than 0, what happens is that the poles will begin to move together, as they did previously when we looked at the variation of K1. The poles will move together, reach a point where we have a second-order pole, and then those poles will split and move parallel to the j-omega-axis. So what's indicated here is the root locus. And what this represents, then, as long as we make K1 large enough so that this pole moves into the left half-plane, is that it represents now a stable system.
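As a numerical illustration of the proportional-plus-derivative case, the sketch below computes the closed-loop poles for a fixed K2 > 0 and several values of K1; the gains and the values of g and L are assumed for illustration only.

```python
# Minimal sketch: closed-loop poles with proportional-plus-derivative
# feedback a(t) = K1*theta(t) + K2*dtheta(t)/dt.  All numbers are
# illustrative assumptions, not the settings of the demonstration.
import numpy as np

g, L = 9.8, 0.5          # assumed parameters (m/s^2, m)
K2 = 2.0                 # derivative gain; must be positive
for K1 in [0.0, 5.0, 12.0, 30.0]:
    # Closed-loop denominator:  L*s^2 + K2*s + (K1 - g)
    poles = np.roots([L, K2, K1 - g])
    stable = bool(np.all(poles.real < 0))
    print(f"K1 = {K1:5.1f}   poles = {np.round(poles, 3)}   stable = {stable}")

# With K2 > 0 the loop becomes stable once K1 exceeds g; for larger K1 the
# poles split off the real axis and move parallel to the j-omega axis with
# real part -K2/(2L), matching the root locus described in the lecture.
```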
OK. So what we've seen is that proportional feedback by itself or derivative feedback by itself won't stabilize the system, whereas with the right choice of feedback constants, proportional plus derivative feedback will. And we saw that basically by examining the root locus in the s-plane. All right. Well, let's actually watch the system in action. And I described it to you previously. Basically, an inverted pendulum on a cart. And so I still have it off.
And, of course, we have the pendulum. And as I indicated, it's pivoted at the base. And the angle is measured by a potentiometer that we have attached to the pivot point. And the measurement of the angle is fed back through this wire to a motor that we have at the other end of the table. And then that motor basically is used to provide the acceleration to drive the cart.
OK. Well, let's turn it on. And when we turn it on, it'll take just an instant to stabilize. And fortunately we have the constants set right, and there we have now the stabilization of an unstable system. Remember that with the feedback off, the system is unstable because the pendulum will fall, whereas now it's stabilized.
Now, also, as you can see, not only have we stabilized it, but we're able to compensate through the feedback for changes in the external disturbances. For example, by tapping it, because of the feedback and the measurement of the angle, it will more or less automatically stabilize. Now, in addition to being stable in the presence of external disturbances, it also remains stable and remains balanced even if we were to change the system dynamics.
And let me just illustrate that with the glass that we've talked about before. Let's first not be too bold, and we'll take the liquid out of the glass. And presumably if it can adjust to changes in the system dynamics, then if I put the glass on, in fact, it will remain balanced. And indeed it does. And let me point out, by the way, that I don't have to be very careful about exactly where I position the glass.
And furthermore, I can change the overall system even further by, let's say for example, pouring a liquid in. And now let me also comment that I've changed the physics of it a little bit. Because the liquid can slosh around a little bit, it becomes a little more complicated a system. But as you can see, it still remains balanced.
Now, if we really don't want to be too conservative at all, we could wonder whether, with the feedback constants we have, we could, in fact, balance the pitcher on the top. And, well, I guess we may as well give that a try. And so now we're changing the mass at the top of the pendulum by a considerable amount. And, again, the system basically can respond to it.
Now, this is a fairly complicated system. The liquid is sloshing around. We, in fact, as you can see, have an instability right now, although it's controlled. And that's because the physics of the dynamics has changed.
And we can put a little bit more mass into the system, and maybe or maybe not that will cut down on the instability. OK. Well, in fact, what happened there is that we increased the mass at the top of the pendulum slightly. And that provided just enough damping to stabilize the system.
OK. Well, with this lecture and this demonstration, this concludes this entire set of lectures. It concludes it especially if the pitcher happens to fall. But, seriously, this concludes the set of lectures as we have put them together. And let me just comment that, as a professor of mine once said, and which I've never forgotten, the purpose of a set of lectures, or of a course, or, for that matter, anything that you study, is not really to cover a subject, but to uncover the subject. And I hope that, at least to some degree, we were able to uncover the topic of signals and systems through this series of lectures.
There are a lot of topics that we got only a very brief glimpse into. And I hope that at least what we've been able to do is get you interested enough in them so that you'll pursue some of these on your own. And so I'd like to conclude by thanking you, both for your patience and your interest. And I hope that you have enough interest to pursue some of these topics further. Thank you.
FILMING DIRECTOR: Okay. That's a wrap.
Free Downloads
Video
- iTunes U (MP4 - 75.3MB)
- Internet Archive (MP4 - 75.3MB)
Subtitle
- English - US (SRT)