Instructor: Prof. Gilbert Strang
Lecture 19: Quadratic/Cubic...
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR STRANG: OK. All right. Good morning. So we're doing finite elements. The element that we considered so far was the basic linear element. Continuous, but of course, the slopes have jumps. The slope was minus one over delta x for that one. This one was a plus and then a minus. So a jump in slope but no jump in the function. So actually, my abbreviation for that would be C zero, saying that it's continuous but no derivative is continuous. And now we'll get to some elements where the slope is continuous. It's sort of fun to create these finite elements of higher degree. It's pretty straightforward in 1-D. And that's where we are now. So we'll get second degree elements and third degree elements. And that gives us, as we'll see, higher accuracy.
So I want to connect the degree of the polynomials to the accuracy of the approximation. Part of that connection is to recognize that these problems have a strong form, as we know, the equation; a weak form, that's the one that has test functions; and also a minimum form that we'll see.
So how would I get some quadratics, so second degree elements, parabolas, into the picture? You remember Galerkin's idea? Choose trial functions. And we're taking those to be the same as the test functions. So these are the trial functions we've chosen. One, two, three, four of them. And they're linear. And that limits the accuracy that you can get, because your approximations then are combinations of those. So they're like broken-line functions, linear approximations. And the accuracy is not great. It's sort of the lowest level possible. So how would you get parabolas? So this was the first guy. The second guy is going to be continuous and quadratic. So it's going to have new trial functions. In addition to these, I'm going to put in some more. Galerkin's happy with that. I still proceed as usual. My approximation is some combination of those. It's only going to be continuous. So it'll just be a C zero guy again. That means a jump in slope; the first derivative isn't continuous. Eventually, I want to get to a C one where the slopes are continuous.
OK, but how would I get some quadratics? All I want now is my functions, my space, my combinations, should be the piecewise parabolas instead of piecewise linear. And the pieces are broken at the nodes.
OK, so here is a way to do it. Inside each interval, I'm going to add, I'll just call them bubble functions. So these will be new additional guys. So this will be my first phi. You remember that half hat? Because the problem I was doing was a free fixed problem. That's why I had a half hat at this end, because there was no boundary condition that my functions had to satisfy. At this end, there was. It was fixed. So that's why the hat ended, and there was no half hat, there's no extra function there.
So I have right now one, two, three, four. I'm going to add four more bubble functions. Each one will be inside an interval. So it'll be a little parabola. This is function number whatever. If I number this one, let's say, phi_1, now -- that's probably a change in numbering -- phi_2 is going to be my bubble. And you see what my bubble function is? It's a function that arches up inside the interval and runs straight along zero outside. So it is continuous, no jumps, and it is second degree. It's a parabola, and I'll make its height one.
And then there'll be another function. If I had another color I could draw it. Well, I'll just do it with broken lines, maybe. So there'll be another bubble function in here, a third bubble function in the third interval, and a fourth in the fourth interval. You see that I've now got my old phi_1, phi_3, phi_5, and phi_7 were the hat functions. But now I've got a phi_2, phi_4, phi_6, and phi_8 that are these new trial functions.
So part of the message is, we can throw in additional functions. They don't have to be polynomials, but those are the simplest choices. Why are they simple? Because, you remember, in the end when I made the choice, I have to do various integrations. So you remember that I have to integrate to find entries K_ij. Do you remember what that integral looked like? You certainly remember F_i. That was the integral from zero to one of whatever function phi_i we had, times f(x) dx -- times the load.
And we computed these. You remember we computed these for the piecewise linear guys. But I don't think I wrote down the expression that we were really doing, so let me just do that. K_ij is the integral from zero to one of c(x) times d phi_j/dx times dV_i/dx. Those were the integrals that we had to do. And we were taking the phis to be the same as the V's. Maybe I'll just do that here, because I don't plan to make any other choices at all: c(x) times d phi_j/dx times d phi_i/dx.
It's a symmetric matrix now, K_ij equals K_ji, because when I'm choosing the phis the same as the V's, this is what it looks like, and if I switch j and i, I don't see any difference.
So these are the things that have to be integrated. And those are the ones we did integrate when phi was piecewise linear. When phi was piecewise linear, the slope was piecewise constant, and we had really easy integrals. Very easy integrals. We had to pay attention to where we were -- was the slope on a minus interval or on a plus interval -- but they were easy to compute. And they led us back to the kind of matrix that we've seen before. The twos and minus ones. And our right hand sides looked familiar.
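To make those easy integrals concrete, here is a minimal assembly sketch in Python -- my own illustration, not code from the course -- under the simplifying assumptions c(x) = 1, four equal elements, and the free-fixed setup with the half hat at the free end.

```python
import numpy as np

# Sketch: assemble K_ij = integral of c(x) * phi_j'(x) * phi_i'(x) dx
# for piecewise-linear hat functions, assuming c(x) = 1 (so every element
# integral is exact) and a free-fixed bar: node 0 free, node n fixed.
n = 4                       # number of elements (my choice for illustration)
h = 1.0 / n                 # mesh width delta x
K = np.zeros((n, n))        # unknowns at nodes 0..n-1; node n is fixed
for e in range(n):          # element e covers [e*h, (e+1)*h]
    # slopes of the two hats on this element are +1/h and -1/h,
    # so the 2x2 element matrix is (1/h) * [[1, -1], [-1, 1]]
    ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for a, i in enumerate((e, e + 1)):
        for b, j in enumerate((e, e + 1)):
            if i < n and j < n:      # drop the row/column of the fixed node
                K[i, j] += ke[a, b]
print(K * h)  # familiar pattern: 1 then 2's on the diagonal, -1's beside them
```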
Now, we've got new functions. We still have the same formulas. No change in formulas. The system is really quite successful, because these are the things that we have to compute. So now I'll have to integrate these parabolas, these little parabolas, half of the phis will be little parabolas, and their derivatives will be linear. So you see, I'll have more calculations to do. Which I don't plan to do, but more integrations to-- For example, the diagonal entry, say 2, 2, which will come from that bubble with itself. K_22, then, will be the integral of c(x), times the derivative of that bubble, which will be a straight line times itself, so it would be squared. And c is positive, so this K_22 is going to be some nice positive number. But we'll have to figure out what it is.
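Just to see the kind of number that comes out, here is that diagonal entry done by hand under the simplifying assumption c(x) = 1, with the bubble of height one placed on an interval of width h (my normalization, not one written out in the lecture):

\[
\phi_2(x) = \frac{4\,x\,(h-x)}{h^2},\qquad
\frac{d\phi_2}{dx} = \frac{4\,(h-2x)}{h^2},\qquad
K_{22} = \int_0^h \Big(\frac{4\,(h-2x)}{h^2}\Big)^2 dx
       = \frac{16}{h^4}\cdot\frac{h^3}{3} = \frac{16}{3h},
\]

a nice positive number, as expected from positive definiteness.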
Maybe I'll just say one fact that we'll come back to. That this K is symmetric positive definite. You thought it would be. By using the letter K, we kind of expected it to be. And it will be. It'll be symmetric because the phis and the V's are the same. And it turns out it's positive definite. So it's just great. Just great. We have a little more effort, either to use a formula for integrating polynomials, or using numerical integration. One way or another, and I won't concentrate right now on that point, we get these numbers.
Okay. Here's something to concentrate on. What kind of a matrix K do we have? Where will it be non-zero? So it'll be eight by eight, right? I'll follow through on that choice. Just to say, where will I see non-zeros here? Because if you get that point, you see the way things come together. I'll just put a little x for non-zero.
So K_11. So what's that first row of K? It's coming from the first function, integrated against itself. K_11, the 1, 1 entry -- we'll get something there. Will we have something in the 1, 2 position? That's my question, do we have something in the 1, 2 position? What's 1, 2? That's this function against the bubble function, yes? Right? They're non-zero at the same place. We can expect something there. What about K_13? That's what we've done before, that's this one against this one. Yes? We expect a non-zero there. But then what? After that, what will the rest of that row be? Zero. Because that first half hat doesn't touch any of the others.
So let's go on. Of course it'll be symmetric. I know this much. So this is the half hat row, and this is the first bubble row. Because the half hat was phi_1 and now the first bubble is phi_2.
What non-zeros do we get in the stiffness matrix? Again, we could construct it entry by entry. Another way to construct it will be element by element, stamp them in. You're beginning to see the idea of that. So what do I get for that bubble? I just look to see which trial functions touch that bubble. And which ones do? One, two and three, and not four. Right? In that row, we only get -- so from that bubble, I think we only get that much.
Now, we're not quite seeing the picture yet. Let me go to the next hat. The hat, phi_2, and then I'll do the bubble. Oh, no, sorry, the hat's numbered phi_3, and then the next bubble is numbered phi_4.
Where do I get zeros? You can tell me, where do I get zeros? From inner products, from these guys, when i is three. So which phis does phi number three overlap? That's all I'm asking. Does it overlap number one? Yes. Does it overlap phi number two? I want to highlight it, so we're now looking at phi_3, at this hat.
God, where's it gone? That's the one we're doing now?
So what does it overlap? It overlaps the half hat, does it overlap the first bubble? Yes. Does it overlap itself? Yes. Does it overlap the second bubble? Yes. Does it overlap the next hat? Yes. And then all zeros. Okay, and now do one more row. Bubble four. So now I'm looking at this guy, this next bubble. phi_4. What does that overlap? Does it overlap the first half hat? Nope. Of course, symmetry told us that. Does the second bubble overlap the first bubble? No. Big point: zero there. Does the second bubble overlap the hat? Yes. Does the second bubble overlap itself? Certainly, on the diagonal we have something. Does the second bubble overlap the next hat, phi_5? Yes. And that's it. I think. The second bubble does not overlap the following bubble.
I don't know if you see what pattern we're getting here. Those were special rows, because that was only a half hat. These are typical rows. A typical hat function, that row is showing us five non-zeros, because it overlaps itself, the neighboring hats, and the neighboring bubbles. But the bubble row only has three, because a bubble overlaps itself, the neighboring hat on each side, but not the neighboring bubbles. So we have only three non-zeros. Do you see that the next row will have five? Will I get it right? I hope so. The next row will have, I think, entries here. And then the next row will have only three guys, maybe here, here, here.
Well, it's certainly a band matrix. So you could say, okay, it's a band matrix. I wouldn't call it tri-diagonal anymore. If I showed you that matrix and said, what kind of a matrix, you'd say a band matrix. If you wanted to tell me that it had five bands, you could maybe say penta-diagonal, or something. But it's easy to work with, of course. That's the point of finite elements, is that all the functions are local, so that we get all zeros when trial functions don't overlap.
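If it helps to see that x-and-zero picture generated mechanically, here is a small sketch in Python (my own illustration); the supports assigned to phi_1 through phi_8 follow the lecture's numbering -- half hat, bubble, hat, bubble, and so on -- and an entry can be non-zero exactly when two supports share an element.

```python
import numpy as np

# Which elements (1..4) each trial function lives on, in the lecture's ordering:
# phi_1 half hat, phi_2 bubble, phi_3 hat, phi_4 bubble, ..., phi_8 last bubble.
support = {1: {1}, 2: {1}, 3: {1, 2}, 4: {2},
           5: {2, 3}, 6: {3}, 7: {3, 4}, 8: {4}}
pattern = np.zeros((8, 8), dtype=int)
for i in range(1, 9):
    for j in range(1, 9):
        if support[i] & support[j]:   # K_ij can be non-zero only if supports overlap
            pattern[i - 1, j - 1] = 1
print(pattern)  # typical hat rows show 5 non-zeros, bubble rows show 3
```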
My additional point was just a small one that's not a big deal, but it's a little bit worth noticing. These rows with only three entries, three non-zeros. I guess what I want to say is I have to solve eight equations and eight unknowns. And the normal way to do it would be just elimination. LU, that would work fine. Start from the top, eliminate, and you've got it. And of course in one dimension, nobody would do anything else. That would be simple. I just want to say, these bubbles, by giving me extra zeros, I could eliminate the bubbles first. Can I just make this point but not labor it? I could eliminate the bubbles first. I could use this equation to express the bubble coefficient in terms of its neighbors. I could use this one to express the bubble coefficient in terms of its neighbors. And I could plug back into the other equations. I could simplify this. I could get the bubbles done first if I wanted. I can see that to go into the gory details is probably not wise. But bubbles are easy to do. However, there are better elements.
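One compact way to write down that bubbles-first elimination (the usual name for it is static condensation, a term the lecture doesn't use): order the hat coefficients a ahead of the bubble coefficients b and block-partition the system. Because bubbles never overlap each other, the block K_bb is diagonal and trivial to invert.

\[
\begin{pmatrix} K_{hh} & K_{hb} \\ K_{bh} & K_{bb} \end{pmatrix}
\begin{pmatrix} a \\ b \end{pmatrix} =
\begin{pmatrix} F_h \\ F_b \end{pmatrix}
\quad\Longrightarrow\quad
b = K_{bb}^{-1}\,(F_b - K_{bh}\,a),\qquad
\big(K_{hh} - K_{hb}K_{bb}^{-1}K_{bh}\big)\,a = F_h - K_{hb}K_{bb}^{-1}F_b .
\]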
So that's my discussion of quadratic elements, almost complete. It's not a big favorite, because cubics are better.
So why are cubics better? Why are cubics better? So you're going to say, okay, upgrade to cubics. How shall I do that? And I want to say a word about the error here. Of course, the reason quadratics are better than cubics-- Sorry, the reason why quadratics are better than linear, and cubics will be better than quadratics is I'm getting more accuracy.
Suppose my true solution may be some curve like that. Okay. My piecewise linear elements, suppose the piecewise linear elements happen to be, as they would in a special model problem, right on the money, at the nodes. Usually they won't be. But what would be the error in that one?
Well, no error at all at the nodes as I've drawn it. But that's not what I'm interested in. I'm interested in, how big is that? How far off is the displacement? What's the maximum error in the displacement? Do you have any idea? If this is size h. Delta x, shall I call it delta x, or h? How far does a curving function escape from the-- I need to blow that up, don't I? So I have a curving function and a linear function, and I want to know how far apart they are over a distance of length delta x. What's this scale? That's the question. It's just good, it'll have a simple answer and it's great to know it.
Anybody want to make a guess? Is that scale of size delta x? Is it of size delta x squared, size delta x cubed? It's that exponent of delta x that is telling me how big the error is. And it's easy to find once you get the hang of it. Anybody want to make a guess? Delta x? Squared. Squared would be the right guess. Squared would be the right guess. I could just turn that picture, if we wanted, to-- Again, this is delta x. Now it would look like that, pretty much. Doesn't have to be symmetric, of course, because this could be a complicated function. But when I focus on a little delta x interval, every function looks like a little polynomial. The error there, let's see. What would that function be?
I could go forever on this. But look, let me change numbers here. Let me call the interval from zero to y. What would be a little parabola that is zero at both ends of that interval, with curvature of order one? What would that parabola be? We have probably seen it before. It's a polynomial in x, just a quadratic. It's zero at zero, so it starts with an x.
I've got to bring it down here. It's x times y minus x. Okay, that quadratic is zero at zero, because of the first factor. It's zero at x = y, because of the second factor. It's second degree. And its height is a maximum right in the middle. And what is that height? At y/2, each factor is y/2, so that height is y squared over four. That's what I was shooting for. The square. In a little interval of length y -- of length delta x -- if I draw a little parabola and I'm matching at the ends, then the height it reaches is like y squared. That's the scale. So my conclusion is that if I use these basic hat function elements, the error I get is-- So can I list the errors? The error is delta x squared. That's the displacement error. The error in u.
I'm not proving anything. The careful discussion of the accuracy is a later section in the book. But I'm trying to make the main point, which is that if we're fitting functions by straight lines, then we have an error of delta x squared. And what's the slope error? What do you think is the slope error? Because for us that slope is important. That's the error in the stretching and the strain. So the error in the function is delta x squared. The error in the slope will be one order less, just delta x.
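A standard way to state the bound behind that picture, for the piecewise linear interpolant u_I on intervals of length delta x (the constants quoted here are the usual interpolation estimates, for reference, not something worked out in the lecture):

\[
\max_x \big|u(x) - u_I(x)\big| \le \frac{(\Delta x)^2}{8}\,\max_x\big|u''(x)\big|,
\qquad
\max_x \big|u'(x) - u_I'(x)\big| \le \frac{\Delta x}{2}\,\max_x\big|u''(x)\big| .
\]

The finite element solution shares those orders -- delta x squared in the function, delta x in the slope -- which is the point being made.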
Okay, I'll come back to all this. Now, make a guess. Suppose I include these bubble functions. With delta x as my length scale horizontally, what will be the scale of the error? What do you guess is the expected error in displacement for a general problem, for a general c(x) and f(x). Which I won't get exactly right, but how close will I come? I'll come within delta x to what power? Make a guess, please.
Four is an optimist. I won't get up to four. Cubed. I'd only get cubed. By increasing the degree of the polynomial by one, I'll get one degree better. So you could look at it this way. Suppose I have any function. This is another way to think about the accuracy. Suppose I have any function F(x). The whole point of calculus is that I could start, if I start where it is at zero, then I add in F'(0), the slope, times x. Then I add in 1/2 F''(0) times x squared, and so on. Right? It's called the Taylor series. And we're not paying any attention to convergence, or high order. It's the early terms that I'm interested in. And the point is that if my functions include linear functions, which the hats did, they will be able to get these terms right, and this will be the error that I missed. I'm just looking to see what's the first term in the Taylor series that I will not get. And if I only have hat functions, I can't get an x squared. I can't get a parabola. But when I go here and include the x squareds, I can get that term right. So then it'll be the 1/6 F'''(0) x cubed that I miss. So the error will be the next missing term.
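Here is that counting written out; the expansion is just the ordinary Taylor series, and the point is which terms the trial space can reproduce exactly:

\[
F(x) = F(0) + F'(0)\,x + \tfrac{1}{2}F''(0)\,x^2 + \tfrac{1}{6}F'''(0)\,x^3 + \cdots
\]

Hat functions reproduce the terms through F'(0) x, so the first missed term is of order (delta x) squared; adding the quadratic bubbles captures the x squared term and leaves (delta x) cubed; cubics capture the x cubed term and leave (delta x) to the fourth.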
Okay, so that's thoughts about the error. And of course that's why those elements are better than these. They take more work, but they are worth it. But now I want to tell you about the next elements. Cubics. Where you're going to expect to get delta x to the fourth. So now we're getting serious accuracy. Now we're getting good accuracy. Of course our problem is not the most difficult problem. It's in 1-D. But this is good.
Okay. This was now the fun in the golden age of finite elements. To construct cubics. What shall I use as basis functions for cubics? So I want to have a cubic in each piece. First of all suppose I just want no more than that. Suppose I'm happy with just continuous functions and I let the slope jump. What new trial function shall I put in? So I'm going to put in new trial functions. What will they look like? Little cubics? Little third degree pieces. Instead of parabolas, they'll be little pieces of third degree. And I could put in four more bubbles. Four cubic bubbles. So I would be up to twelve degrees of freedom, a twelve by twelve matrix, twelve functions. And for that size delta x, that would give me delta x to the fourth. So that would be okay.
There's a better idea. You can see that I left space. I'm going to make the slope also continuous. I'm not going to allow jumps in slope. Think, how will I do that? So I'm going to call those C-- what will I call that when the slope is continuous? The first derivative, I'll call that C one, continuous first derivative.
Okay. Now I'm actually in section 3.2, where these better elements, these really nifty elements are constructed. C^1, continuous slope, cubics. Okay. Ready for those? What shall be my trial function for continuous slope cubics? So I have to start again. I have to start again because the hat functions are out now. Those hat functions have a jump in slope. The bubble functions have a jump in slope. I'm rethinking here to create a better element.
Okay. So let's just think, if we've got a chance at it, how could these elements work? Okay, so here is the idea, then. Here is my interval. Zero to one, and here's a typical interval. And now at a typical node, like node one, I plan to have as unknowns the height of the function, as before, and also the slope. So I want the function, my trial function is going to have some height and some slope. And at node two, it's going to have some height and some slope. And here's the question. Here's the good point. That those four numbers, the two heights and the two slopes, that gives me four things, four quantities. How many quantities do I need to determine a cubic? So by a cubic, of course, I mean something like a_0 plus a_1 x plus a_2 x squared plus a_3 x cubed. It's called a cubic because of the x cubed.
So how many numbers here? Four. Perfect match. There's exactly one cubic that has a specified height and a specified slope at these two ends. There's one cubic that'll do that. And then whatever the height here is and whatever the slope there is, there'll be one cubic with that height and that slope that comes into this one. And you see that they will have continuous slope. Because of course the slope is continuous in between; it's a polynomial. The question is always at the nodes. But I use the same number coming from the left and from the right. The slope has become an extra unknown. The slope has become an extra unknown. So I have height, slope at every point.
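Here is that four-numbers-determine-a-cubic statement made explicit on one interval, written from 0 to h with end heights u_0, u_1 and end slopes s_0, s_1 (notation chosen here just for the illustration):

\[
u(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3,\qquad a_0 = u_0,\qquad a_1 = s_0,
\]
\[
a_2 = \frac{3(u_1 - u_0) - h\,(2s_0 + s_1)}{h^2},\qquad
a_3 = \frac{2(u_0 - u_1) + h\,(s_0 + s_1)}{h^3},
\]

so the two heights and two slopes pin down the four coefficients uniquely.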
So that's one way to describe these trial functions now. The trial functions have height and also slope at each node. So what does that mean? That means that I'm going to have two unknowns. Two functions, two trial functions, each with its own coefficient at each node. So if I take a typical node there, I want two functions. Okay, this is interesting. But you see what I'm creating. I think I'm going to get two functions there, two functions there, two functions there, two functions there, right? Because nobody's constraining that. So I'm up to eight.
And how many functions do you think I'm going to have associated with that node? Only one. Why? Because the height is fixed. So I think I've got nine trial functions here. And if we can see what those are, then the system will take over. They're my phi_1 to phi_9, whatever, they plug in here, they plug in the right-hand side, I'll have a nine by nine stiffness matrix. It'll be local again. Well, let's see if we can figure out these functions.
Okay, so you have the idea? I'm expecting two trial functions. One is sort of a round hat. All right, let me draw that. The round hat function will be the function-- These will be the round hats, and they'll be associated with, they give me heights. And then I'll also have an additional one, except at the last node. And these will be-- I don't know what to call them yet. You'll have to give me a name. These will give me the slopes.
Okay. So what does a round hat look like? Now these have to be, follow my rules, they have to be continuous, their slope has to be continuous. And I want to take the one that has height one and zero slope there. And it should have height zero and zero slope, here. Height zero, zero slope. You see what it's going to be? This phi, whatever number it is, it'll be the phi whose coefficient tells me the height at node one. So here's node one.
What will it look like? What will this function do? Well, there is exactly one cubic that starts from zero with slope zero and ends there, ends at one with slope zero. Right? That's what we said; four numbers determine that cubic in that interval. Then there's another cubic that, with those two numbers again, that keeps the continuous slope, and these two numbers in this interval. And of course it'll just be symmetric. You see the round hat? So that's the basis function, the trial function that has continuous slopes and heights, of course, and it has height one at that point.
And now let me draw the one that has height zero, slope zero; height zero, slope zero. And what do I want it to do there? What should this function be like? It should be the one that tells-- Its coefficient will tell me the slope. So I want it to have a slope of one and a height of zero. Do you see these functions, shall I call these, the height functions, phi h 1? That's the phi, that's the trial function that tells me the height at node one. When I take combinations, it gets multiplied by U h 1, which is exactly the height at node one. Now what about this guy? This guy is going to start with zero slope at zero. It's going to be a cubic, and there's exactly one cubic that'll do it. It'll look a little like this. Then there'll be exactly one cubic that does that and gets back to zero. You see that that's possible? In each interval, I've got four numbers: two heights, two slopes. So this would be a picture of the phi slope at node one function.
So that's a standard function, it's a cubic, piecewise cubic. Local again, because in all these intervals it's zero. And it will be, when I go to take combinations of all these guys, it'll be multiplied by its coefficient, U slope one. And then I'll have nine altogether. But those two are the typical ones.
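For reference, those two trial functions are built from the standard cubic Hermite shape functions. On a reference coordinate s running from 0 to 1 across one element, with x = x_node + s times delta x, the four pieces are usually written:

\[
H_1(s) = 1 - 3s^2 + 2s^3,\qquad H_2(s) = s\,(1-s)^2,\qquad
H_3(s) = 3s^2 - 2s^3,\qquad H_4(s) = s^2\,(s-1).
\]

The round hat at a node is H_3 on the element to its left and H_1 on the element to its right; the slope function is delta x times H_4 on the left element and delta x times H_2 on the right, the factor delta x making its slope in x equal to one at the node.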
Do you see how that's going? It's more subtle than hat functions. Suppose whoever's writing the finite element code gets a formula for those phis and plugs them into the integrals, comes out with a stiffness matrix. Actually, we could even look at that stiffness matrix. This is a good way to understand the picture.
Now it'll be nine by nine. Right? So here we'll have a typical, this'll be our phi height 1 row, and this'll be our phi slope 1 row, and this'll be our phi height 2 row, and so on. Of course, I didn't leave room for all-- What will a typical row of this stiffness matrix have in it? I'm just asking about the overlaps. phi_1 height certainly overlaps itself. Does phi_1 height overlap phi_1 slope? Yes or no? Sure. Sure. Does phi_1 height overlap phi_2 height? Yes. Yes. Because the phi_2 height will go up like that. You see? And the phi_2 slope. So actually we'll have, I think we'll have six non-zeros on a typical row.
Is that right? Six non-zeros? Because a typical h-- this is maybe not so typical, because to the left of it there's only one, there's no-- No, there are two. Right? There's a phi_0, phi h 0 and a phi s 0. Sure, there are two here, the two guys here, there's one height guy, and there's one-- what's cooking in that? Oh, it's got a slope of one and it gets back to zero.
What I'm drawing now in little dashed lines was the phi slope 0. The one that gives me a slope at node zero, and this is the one that gives me a height. Yes. Do you see it? So above this was a phi slope 0, and stuck in there was a phi height 0. Six-diagonal matrix. I think it helps to draw that little thing with x's and zeros, because then you sort of see how things are fitting together.
Okay. So these functions now, I've gone into section 3.2 for that. I want to go to a slightly different topic, and then I'll come back in section 3.2 to these cubics. So these are C^1 cubics, continuous slope cubics. Very interesting construction. Are you seeing how it could go in more dimensions? I mean, that's what we'll see for Laplace's equation, how can you construct quadratics, cubics in a plane. It gets interesting. But you'll get the knack of these guys. These are pretty direct, and very useful.
So what's the effect? The effect is that we get a matrix. It looks quite like a difference matrix. Well, actually, the height rows and-- the numbers in the height rows and the slope rows look different. We're getting something new here. We're getting a matrix, a KU=F, that's going to give us fourth order accuracy. So the accuracy has moved up. So we've got up to fourth order accuracy, which we could get by finite differences by a lot of patience. We get them from finite elements in a straight way.
Okay, any question or discussion? I'm talking real fast to get this new idea of constructing finite elements here.
I do want to say something about that line. Because that's a part of this business of estimating the accuracy. It's a key idea in the background of the Galerkin method. And the minimum form would be associated with names like Rayleigh and Ritz. All right. I'll just go directly to that, if I may.
So what I want to do is tell you, for our model problem, I want to tell you the strong form-- Let me do it this way. I'll put the strong form, the weak form, and then I want to add in the minimum form. Okay. So the strong form of our equation was minus the derivative of c*du/dx equal f. Okay. What was the weak form? This is an f(x). The weak form, how do you get to the weak form? You multiply both sides by a test function, you integrate, you integrate by parts, and you get this beautifully symmetric form that we have up there: the integral of c(x)*du/dx*dv/dx dx equals the integral of f(x)*v(x) dx. I write that again, just so you see the nice symmetry of that weak form. And it's for all test functions v. Okay.
I'm shooting for a third description. A third description of the same problem. And it's really neat to see that you have that. Let me just see it first in the discrete case. The discrete case, the strong form would be A transpose C Au=f. That's the strong form. Right? I always like to see the discrete one first, and then the continuous. Okay, what would be the weak form in the discrete case? I would multiply by a vector v, and I would take inner products, A transpose C Au inner product with v, equals f inner product with v. You can use dot, or-- So that would be the weak form. I've just taken the dot product of both sides with v. Now you'll see the weak form better if, what should I do? What would make that look nice? So that's the dot product of A transpose C Au with v.
And what do I do to make that look nice? Do you get the idea yet? It doesn't look pretty to me. It's all lopsided. Right? So what can I do with A transpose? What's the rule about A transpose? That if I have A transpose times something, dotted with something, what can I do? I can move the A transpose over to the other guy. And what will it be when I do that? So I take it away from here, and what do I put there? A. That's the whole point of transposes. Transposes, you put them on the other side of the dot product, you take the transpose, so it would be literally, maybe A transpose transpose, which is A.
What I just did there is integration by parts. Well, summation by parts, because I'm in the discrete case. The whole idea of integration by parts amounted to taking A transpose off of u, off of this, and putting A over there. Isn't that neat? And you see that this CAu, Av is just what I have here. C, A is derivative, so this is CAu, Av. Inner product. That's cool. That's just like how it should be. I just followed that rule, that A transpose times something, shall I call it w, inner product with u, is the same as (w, Au). That if I bring A transpose over, it becomes an A. If I bring an A over, it would become an A transpose.
All right, what about the minimum form? Have I got one minute to do the minimum form? Yes. So what's the minimization that's hiding behind this? The minimization in the discrete case, do you remember? We're looking at Ku=f. And some quadratic quantity from least squares has its minimum when Ku=f. And it's 1/2 u transpose Ku minus u transpose f. Where K is A transpose C A. This is the minimum statement of the problem. That if I look for the u that minimizes that quadratic, it leads me to the equation Ku=f. So that's the minimum statement. And if we want it to really look perfectly like the others, I would put in A transpose C A. Okay. Can I write it down next time, because our time is really up. It's not fair to-- All I'm going to do is write down the same thing here. I'm minimizing 1/2 times the integral of c(x) (du/dx) squared, minus the integral of f(x)u(x). So that's the minimum problem. Minimize over all u, this quadratic. This is the right way to see these problems. You see a differential equation, which we use for finite differences; you see a weak form, which we use for finite elements; and now you see a minimum form.
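To line the three descriptions up side by side, in the notation of the lecture, with the discrete A transpose C A version of each on the right:

\[
\text{Strong:}\quad -\frac{d}{dx}\Big(c(x)\frac{du}{dx}\Big) = f(x)
\qquad\longleftrightarrow\qquad A^{\mathsf T} C A\,u = f
\]
\[
\text{Weak:}\quad \int_0^1 c(x)\,\frac{du}{dx}\,\frac{dv}{dx}\,dx = \int_0^1 f(x)\,v(x)\,dx \ \ \text{for all } v
\qquad\longleftrightarrow\qquad (C A u,\, A v) = (f,\, v)
\]
\[
\text{Minimum:}\quad \min_u\ \frac12\int_0^1 c(x)\Big(\frac{du}{dx}\Big)^2 dx - \int_0^1 f(x)\,u(x)\,dx
\qquad\longleftrightarrow\qquad \min_u\ \tfrac12\,u^{\mathsf T} A^{\mathsf T} C A\,u - u^{\mathsf T} f
\]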
Okay, that gives you something to think about. And there'll be a homework on finite elements that'll give you a chance to use them. Okay, thank you.