Instructor: Prof. Gilbert Strang
Lecture 12: Graphs and Networks
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR STRANG: OK, so this is the start -- I won't be able to do it all in one day -- of what I think of as the number one model in applied math, in discrete applied math, I'll say. Let me review what our four examples are, just so you see the big picture. So the first example was the springs and masses. That was beautiful. It's simple. The masses are all in a line, and the matrices K, the free-fixed and fixed-fixed and free-free, come out closely related to our K, T, B matrices. So that was the natural place to start, and actually we also got a chance to do the most important equation in time: Ku''-- sorry, Mu''+Ku=0. So that was a key example. Then least squares. Very important; I'm already getting questions from the class about problems that come up in your work, least squares problems. Maybe I'll just mention that the professional numerical guys don't always go to A transpose A. If it's a badly conditioned problem -- and conditioning is a topic that was in 1.7 and we'll eventually come back to -- if the matrix A is badly conditioned, then A transpose A kind of makes it worse. So there's another way: to orthogonalize in advance. And if you're working with orthogonal vectors, or orthonormal vectors, numerical calculations are as safe as they can be. Wall Street is more like A transpose A. And the orthonormal is the safe way.
Alright, this is today's lecture. You'll see the matrix A for a graph, for a network. It's simple to construct, and it just shows up everywhere, because networks are everywhere. And, just looking ahead, trusses are there partly because they're the most fun. You'll enjoy trusses. I mean, it's kind of fun to figure out: is the truss going to collapse or not? And actually, what's the linear algebra in there? The collapsing or not will depend on solutions to Au=0. Let me just recall the equation Au=0. A is our key matrix in each example; it's different in each example. And we sort of hope that Au=0 doesn't have solutions, or that it has solutions we know. Because if Au=0 has solutions, that's the case where A transpose A is not invertible and we have to do something. Very useful to review. What were the solutions to Au=0 in the case of springs? Well, there were some in the free-free case. The all-ones vector was the solution u, or all constant was the solution u, in the free-free case, and that's why we couldn't invert it. But fixed-free or fixed-fixed, when we have one support or two supports, that removed the all-ones solution. Good.
Least squares, we assumed there weren't any. Because we wanted to work directly with A transpose A, the normal equations, we assumed that the columns of A were independent: that there were no non-zero solutions to Au=0. Because if there were, that would have made A transpose A singular, and we would have had to do something different. Here, this'll be a lot like the springs. Today, once you see A, you'll spot the solutions to Au=0. This is A for a network. And the solution is going to be that same guy, all ones. And that only tells us again that we have to ground a node, to use an electrical term. Grounding a node is like fixing a displacement. Once you've set one of the potentials, one of the voltages, to zero -- whatever value, but zero's the natural choice -- then you know all the rest. You can find all the rest from our equations. So this is like the springs in having this all-ones solution. And as you'll see with trusses, that could, depending on the truss, have more solutions. And if there are more solutions, that's when the truss collapses. So trusses need more than just a single support to hold up the whole truss. OK.
So that's the Au=0. Now we're ready for the lecture itself. Graphs and networks. OK, let me start with: what's a graph? A graph is a bunch of nodes and some or all of the edges between them. Let me take just a particular example of a graph. And this of course you spot in the book. Oh, and everybody recognized, and it's probably now corrected, that in the homework where it said 3.4 it meant 2.4, of course. And this is Section 2.4 now. Let me draw a different graph. Maybe it'll have four nodes and those four edges, and let me put in a fifth edge. OK, that's a graph. It's not a complete graph, because I didn't include that extra edge. It's not a tree, because there are some loops here. So complete graphs are one extreme, where all the edges are in. A tree is the other extreme, where you have a minimum number of edges. It would only take three edges. So just while we're looking at it, there are a bunch of possible trees that would be sort of inside this graph. Sub-graphs of this graph: if I knock out those two edges I have a tree, going out. Or a tree could be like this. Or a tree could be like this. Anyway, five edges in this graph; it would be six in a complete graph, and three edges in a tree. OK, and the number of edges is always m. So five edges. And the number of nodes is always n, for nodes. So A will be five by four.
OK. And it's called -- so we get a special name in this world -- the incidence matrix of the graph. The incidence matrix. Of course, these things come up so often they have other names too, but incidence matrix is a pretty general name. OK, I have to number the nodes just so we can create the matrix A. One, two, three, four. And I have to number the edges. If I don't number them, I don't know which is which. So let me call this edge one, from one to two, and I'll draw an arrow on the edges. So from one to two. Maybe this'll be edge two, from one to three. This'll be edge three -- no, let me put edge three there, the natural one, from two to three. And how about edge four there, from two to four. And edge five going from three to four. OK, so now I've identified the nodes, and I've identified the edges. And there were five edges and four nodes. Usually m is bigger than n; except for trees, m will be at least as large as n. And I've put arrows on, so you could say it's a directed graph, because I've given a direction. Those arrow directions just tell me which way current should count as plus, if it's with the arrow, and which way it should count as minus, if it's against the arrow. Of course, current could go either way. It's just that now I have a convention of which is plus and which is minus.
OK, so now let me tell you the incidence matrix. So everybody can get it right away: how do you create this incidence matrix? A is five by four. It's going to have five rows, one for every edge, and four columns, one for every node. So these are the nodes: one, two, three, four. A column for every node and a row for every edge. OK, edge one. This is just going to tell me everything about the graph. Exactly what's in that picture will be in this matrix. If I erased one, I could reproduce it by knowing the other. OK, edge one goes from node one to node two. So it leaves node one -- I'll put a minus one there, in the first column -- and a plus one in the second column. Edge one doesn't touch nodes three and four. So there you go, that's edge one. Let me do edge two and then you'll be able to fill in the rest. Edge two goes from one to three: minus one, and a one. Edge three goes from two to three -- I'll just keep going -- minus one and a one. Edge four goes from two to four. And edge five goes from three to four. OK. Simple, right? Got it. That matrix has all the information that's in my picture. But the point about matrices is, they do something. They multiply a vector u to produce something. They have a meaning beyond just a record of the picture. So A is a great thing. In fact, what does it do? Let's see.
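A minimal numpy sketch of that construction, with the lecture's edge numbering (the variable names are just illustrative):

```python
import numpy as np

# Edge list for the lecture's graph: five edges on four nodes,
# written as (from node, to node) with the lecture's numbering.
edges = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
m, n = len(edges), 4

A = np.zeros((m, n))
for row, (i, j) in enumerate(edges):
    A[row, i - 1] = -1.0   # the edge leaves node i: minus one
    A[row, j - 1] = +1.0   # the edge enters node j: plus one

print(A)
# [[-1.  1.  0.  0.]
#  [-1.  0.  1.  0.]
#  [ 0. -1.  1.  0.]
#  [ 0. -1.  0.  1.]
#  [ 0.  0. -1.  1.]]
```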
So that's the matrix A that we work with. Oh, first tell me about Au=0, because we brought up that subject already. Are those four columns independent? I've got four columns, they're sitting in five-dimensional space, there's plenty of room there for four independent vectors. Are these four columns independent vectors? No. No, they're not. Because what combination of them produces the zero vector? [1, 1, 1, 1]. If I take that column plus that, plus that, plus that -- I'm multiplying by [1, 1, 1, 1] -- so A times [1, 1, 1, 1] is five zeroes. I'll just put that up here and then I won't have to write it again. So that u, that particular u of all ones, is, I would say, in the null space of the matrix. The null space is all the solutions to Au=0. In other words -- these four columns, tell me about the geometry again. If I take all their combinations -- any amount of this column, this column, this column, that fourth column -- those are all vectors in five-dimensional space. Now, this isn't essential but it's good. Do you have an idea of what you'd get? Think of four vectors; take all their combinations; that kind of fills something in. Whatever fill in may mean. And what does it fill in? What do I get? What's your image? Frankly, I can't visualize five-dimensional space that well. But still, we can use words. What do you think?
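Continuing the sketch, a quick check that the all-ones vector sits in the null space:

```python
u = np.ones(n)     # the all-ones vector [1, 1, 1, 1]
print(A @ u)       # five zeros: u is in the null space of A
```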
You get a something subspace. You get something flat. I don't know if you do -- it's pretty flat, somehow. I'm just asking you to jump up from a case we know, where we had columns in three-dimensional space and we took combinations and they gave us a plane. Right, when they were dependent? Now, how would you visualize the combinations in five-dimensional space? Just for the heck of it? It's some kind of a subspace, I would say. And what's its dimension, maybe that's what I want to ask you. What's the dimension? Do I get, like, a four-dimensional subspace of five-dimensional space when I take the combinations of these particular four guys? Yes or no? No. Right answer: I don't. Somehow the dimension of that subspace, whatever I get, isn't four, because this fourth guy is not contributing anything new. The fourth one is a combination of the first three. So I get a three-dimensional subspace. The rank of this matrix is three. If you allow me to introduce that key word: rank is the number of independent columns. It tells you how big the matrix really is. You know, if I pile on a whole lot of zero columns, or a lot of zero rows, the matrix looks bigger. But of course it isn't truly bigger. The heart of the matrix, the core of the matrix, is somehow just three. And actually, I'll tell you now and we'll see it happen -- can I tell you the key result in the first half of linear algebra? It's this. If I have three independent columns -- and by the way, any three are independent here; it's just all four together that are dependent -- then the great fact is, I have three independent rows. That's kind of fantastic. Since it's such a beautiful and remarkable and basic fact, look at the rows. That's what linear algebra is all about: looking at a matrix by columns, and then by rows, and seeing what the connections are.
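The rank claim, checked numerically in the running sketch:

```python
print(np.linalg.matrix_rank(A))    # 3, not 4: the columns are dependent
print(np.linalg.matrix_rank(A.T))  # also 3: row rank equals column rank
```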
And the key connection is this. These five rows -- now, what space are they in? Four-dimensional space. They only have four components. So I had four columns in 5-D; I have five rows in 4-D. But now, are those five rows independent? Let me just ask that question. Are they pointing in different directions, or could some combination give the zero vector in 4-D? What do you say? Wait a minute -- five vectors in four-dimensional space? Dependent, of course. Right. So they're dependent. There couldn't be five independent vectors in 4-D. But are there four in this particular case? And here's the great fact: no, there are three. If there are three independent columns and no more, then there are three independent rows and no more. And we'll get to see which rows are independent, and which are not. That's a question about A transpose, and we haven't got to A transpose yet. OK, are you OK with that incidence matrix? Because this is like the central matrix of our subject. We can figure out A transpose A; that's kind of fun. If I do A transpose A, you'll see the core computations of this neat section. So I'm going to bring in A transpose, and you know that I'm not just bringing it in from nowhere -- for networks, the balance law is going to involve A transpose. So let's just anticipate.
What do you think A transpose A looks like? Now, how am I going to do this for you? May I erase this for a moment and try to squeeze in A transpose here? So that you can multiply it by sight and see the answer, and then you'll see the pattern. That's the great thing about math: you do a few examples, and you hope that a pattern reveals itself. So let me show A transpose. I'm going to take each column and make it a row -- it's going to be a little squeezed, but we can do it. Take that column, [0, 1, 1, 0, -1]. And the last column, [0, 0, 0, 1, 1]. OK. So I just wrote A transpose here. And now could you help me with A transpose A, which is the key matrix in the graph here. What size will it be? Everybody knows it's going to be square, it's going to be symmetric; just tell me the size. Four by four. Right, we have a four by five times a five by four, so we're expecting this to be four by four. And what's the first entry? Two. Right: take row one, dot it with column one. I get two ones and then a bunch of zeroes, so I just get a two. What's the next entry? Take row one against column two -- can you do that in your head? The top one is going to hit a minus one, and I think that's all there is, right? This one hits a zero, and then three zeroes. So: minus one. And then what about the next guy here? A minus one. And the last guy? A zero.
So that's row one of A transpose A. Can I just look at that for a moment before I fill in the rest? And then, when you fill in the rest, it'll confirm the idea. Why do I have a zero there? Why did a zero appear in the 1, 4 position? If I look back at the graph, what is it about nodes one and four that told me ahead of time you're going to get a zero in A transpose A? Everybody see what nodes one and four are? Yeah, say it again. Not connected. No edge. Here there was an edge from node one to two; here an edge from node one to three. Those both produce the minus ones. And on the diagonal came the two to balance it. What does that two represent? It's the number of edges that go into node one. See, that row is all about node one: two edges touching it, so a two on the diagonal, a minus one for the edge to node two, a minus one for the edge to node three, and a zero where there's no edge. OK. So, now I know it's going to be a symmetric matrix, so I could speed up and fill those in. What's the next entry here, the guy on this diagonal? That's row two against column two, so I have a one there, a one there, a one there -- that makes a three. Why a three? Because there are -- yeah, you got it -- three edges into node number two. Three edges into node number two, and now I'm going to have some minus ones off the diagonal for those edges. So what are these entries going to be here? They're both minus ones. Node two is connected to all three other nodes. So I'm going to see a minus one and a minus one there, and it's going to be symmetric. And I'm nearly there.
Of course, I'm describing a pattern that you're just seeing unfold, but I'm doing it that way so that you'll feel: hey, I can write down A transpose A, or check it quite quickly, without doing the complete matrix multiplication. So what number goes there? That's to do with node three, and I see node three connected to all three other nodes, so what do you expect there? A three. Minus one there, and a minus one there, and what do you expect here? Two. And so now I have my matrix, the A transpose A matrix. And it's square and it's symmetric. Now I ask you: is it positive definite? Or is it only semi-definite? Right, we know that A transpose A is positive definite in the best case, but only positive semi-definite if it's singular -- if there's some vector in its null space, if A transpose A times some vector gives zero, if some combination of those columns gives me the zero column. Which is it? Have I got a singular matrix or an invertible matrix here? Singular. Why singular? Because A had some solutions to Au=0. If Au equals zero, then I can multiply both sides by A transpose: A transpose times zero will still be zero -- it might be a different size zero, but it'll be zero. And what's the u, then? It's the all-ones vector. What am I saying about the columns of A transpose A? They're dependent. Because it's the [1, 1, 1, 1] vector that's guilty, every row adds to zero.
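Continuing the numpy sketch, the product confirms the four by four matrix and its singularity:

```python
AtA = A.T @ A
print(AtA)
# [[ 2. -1. -1.  0.]
#  [-1.  3. -1. -1.]
#  [-1. -1.  3. -1.]
#  [ 0. -1. -1.  2.]]
print(AtA @ np.ones(n))   # four zeros: A^T A is singular, only semi-definite
```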
Let me just introduce notation for two matrices here. D will be the diagonal matrix: two, three, three, two. And then I'll put in a minus sign, and the rest I'll call W. So you can pick out what D and W are, but let me do it for sure. D is the degree matrix -- see, this is like fun, because I'm not doing anything yet, I'm just giving names. Two, three, three, two. The degree of a node means how many edges go from it, how many edges touch it. And W is also a great matrix. It's called the adjacency matrix, and it's also beautiful. Now it'll have plus ones, because I wanted minus W. Nodes are not adjacent to themselves, so the diagonal is zero, but it's got this one and this one and this one and this one and that one. The adjacency matrix tells me which nodes are connected to which other nodes. And of course the connections go both ways. So I see five ones from five edges, and five more ones below the diagonal, because the edges connect both ways. One is connected to three, and three is connected to one. One is not connected to four, and four is not connected to one. One is not connected to itself -- by an edge. If we allowed little self-loops, then a one could appear on the diagonal. But we don't. OK, so that's D and W, and A transpose A is D minus W.
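A short check, with D and W typed in by hand, that A transpose A equals D minus W:

```python
D = np.diag([2.0, 3.0, 3.0, 2.0])    # degree matrix: edges touching each node
W = np.array([[0, 1, 1, 0],          # adjacency matrix: entry (i, j) is 1
              [1, 0, 1, 1],          # when nodes i+1 and j+1 share an edge
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
print(np.allclose(A.T @ A, D - W))   # True: the graph Laplacian is D - W
```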
Here are the key matrices. I venture to say that any afternoon at MIT there's a seminar that involves these matrices. One name for this one is the graph Laplacian, from Laplace's equation, and we'll see pretty soon where that name comes from. But it's there. And I should say a word about networks. Where do networks come from? We've got networks all around us, right? Electrical networks are maybe in some ways the simplest to visualize, so that's the example, that's the language I'll use. Now, I use the word network when there's a c_1, c_2, c_3, c_4, c_5 -- those extra numbers. I've got the A, and now the network comes from the C part, that diagonal matrix. And if I'm talking electricity, these could be resistors. Instead of springs, they're resistors. The conductances in those five resistors are c_1, c_2, c_3, c_4, and c_5. So I'm ready for the C matrix, because we've got the A matrix. And we've got A transpose A, but the applications throw in the C matrix also. What are other applications? For flow in the edges I'll use the word current, or I'll use the word flow. A network of oil, or natural gas, or water pipes would be just that -- pumping stations and all. And then the electrical: Professor Verghese in Course 6 studies the electric grid, the US electric grid, often the western half of the US electric grid. Actually, the world wide web, the internet, is a giant network that people would love to understand. And the phone company would love to understand those networks of phone calls. Giant businesses are dependent on understanding and maintaining networks.
OK, so I'm going to use resistors. Of course, I'm staying linear. And I'm staying steady state. By staying linear, there aren't any transistors in this network. By staying steady state, there aren't any capacitors or inductors. Those guys would be linear elements, but they would come in a time-dependent problem, a u_(tt) problem. I'm staying now with Ku=f; I'm trying to create K, the stiffness matrix, which here we might call the conductance matrix. OK, so ready for the picture now, that these come into? You know what the picture looks like. We'll start with the potentials u at the nodes -- potentials at nodes, so those will be u_1, u_2, u_3, u_4. Voltages, if I'm really speaking electrically; those units would be volts. And now comes the matrix A. And what do I get from A? Key question. If I multiply A times u -- and you knew that was coming, right? -- so I'll erase A transpose now, because we've got that. There's A, and now I'll make space to multiply by u. So now I want to look at Au. A multiplies a bunch of potentials, a bunch of voltages. Let's just do this multiplication and see what it produces. This is the great thing about matrices: they produce something. OK, what's the first component of Au? Of course, Au is going to be five by one. It's going to be associated with edges. Right: u is associated with nodes, Au with edges. The pattern is so nice. Alright, what's the first component? Just do that multiplication and what do you get? u_2-u_1. What do you get in the second component? u_3-u_1. The third one will be u_3-u_2, the fourth one u_4-u_2, and the fifth one u_4-u_3.
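In the running sketch, with some made-up potentials (any numbers would do):

```python
u = np.array([3.0, 1.0, 2.0, 0.0])   # illustrative potentials at the nodes
print(A @ u)   # [u2-u1, u3-u1, u3-u2, u4-u2, u4-u3] = [-2. -1.  1. -1. -2.]
```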
Just like our first difference matrices. In fact, our first difference matrices were exactly like this when the graph was all in a line. The big step now is that the graph is not in a line, not even necessarily in a plane. It's a bunch of points and edges. Actually, we don't even have to know whether those points lie in a plane; I think of them as nodes and edges. OK, what's the natural name for Au? I would call those potential differences, right? Voltage differences. So that's what we see here, and those will be e: e_1, e_2, e_3, e_4, e_5 will be potential differences, voltage drops you might say. Oh well, now -- when I say voltage drops, that's because, as we noted before, the current goes from a higher to a lower potential. It goes in the direction of the drop. And I think that what we need now is minus Au for e. So we need a minus sign, and it's quite common to have the minus sign. We saw it already with least squares. And let me say also -- so this is e; I'll abbreviate those five e's I just wrote down. This is the e in E=IR, the electromotive force, the voltage drop that makes some current go. Now, also, just as with least squares -- so it was great that we saw it before -- there could be a source term here. So I'm completing the picture, allowing the source term, and we'll come back to what that means physically. At that point b could enter, and b is really standing for batteries. I work hard to make the language match the initials, these letters. OK, now what? That step just involved A, nothing physical. Now comes the step that involves C: w will be Ce. And these will be the currents on the edges. That's the law, then, with the matrix C -- of course C is the diagonal matrix of our old friends c_1 to c_5. And tell me first the name: whose law is this, that the current is proportional to the voltage drop? Ohm. So this is Ohm's law. Instead of Hooke's law, it's Ohm's law. And I've written it with conductances, not resistances. The usual R in E=IR would be 1 over c; I'm looking at it as current, w, equals Ce, instead of E=IR. So I'm flipping the resistance, the impedance, to give the conductance.
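Continuing the sketch with made-up conductances -- the c values are illustrative only:

```python
c = np.array([1.0, 2.0, 2.0, 1.0, 1.0])  # illustrative conductances c_1..c_5
C = np.diag(c)
b = np.zeros(m)        # no batteries in the edges for this run
e = b - A @ u          # voltage drops, with the minus sign from the lecture
w = C @ e              # Ohm's law: current on each edge
print(w)               # [ 2.  2. -2.  1.  2.]
```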
OK, and now finally can you tell me what the last step is going to be? If life is good -- well, you might wonder whether life is good, reading the papers, but it's still good here. OK, what matrix shows up there? Everybody knows it: A transpose. So the final equation, the balance equation, will be -- let me write it where I won't cover it up -- A transpose w equals whatever. The current balance: the balance of currents, balance of charge, whatever you like to say. It's the balance at the nodes. When we're up on this line, we're in the node picture -- we have four equations, one at each node. Here we're talking about each edge. These two variables are so critical, the ones we see physically as node variables and edge variables. That pair of variables shows up everywhere. In displacements and stresses, it's fundamental in elasticity. And in optimization -- it's everywhere. A big part of this course is to see it everywhere. OK, so just so you see the main picture: we're going to have the A transpose C A matrix, which I'm going to call K again. And now of course there could be current sources, just the way there were forces to balance. There could be -- not always, but there could be -- current sources from outside, external current sources. So the b are external voltage sources; the f are external current sources. In a way, we've now combined our first two examples: our springs and masses only had external forces; our least squares problem had an external b, measurements. This picture is the whole deal. It's got b and f, and actually I could put in even a little more.
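Assembling K in the running sketch:

```python
K = A.T @ C @ A          # the conductance (stiffness) matrix A^T C A
print(K)
# [[ 3. -1. -2.  0.]
#  [-1.  4. -2. -1.]
#  [-2. -2.  5. -1.]
#  [ 0. -1. -1.  2.]]
print(K @ np.ones(n))    # four zeros: still singular until we ground a node
```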
Sources -- well, we already kind of caught on to the fact that we'd better ground a node, or A transpose C A as it stands will be singular. There's A transpose A, and the C in the middle isn't going to help any: that's singular. If we want to be able to compute voltages, we've got to set one of them. It's like setting one temperature, deciding where absolute zero is. Let's put absolute zero down here: u_4=0. Grounded the node. OK, so we've fixed a potential. Here's a boundary condition coming in, u_4=0. That's another source term, another thing coming, you could say, from outside the A transpose C A. We could fix another voltage too -- I'm thinking now about what the whole problem looks like. The problem could have batteries in the edges. It could have current sources into the nodes. It could fix u_1 at some voltage like ten. We must fix one of them; otherwise our matrix is singular. But once we've set up the matrix -- and when we fix u_4=0, by the way -- what happens to our matrix?
Let me take u_4=0, so this is a key step here. When I set u_4=0, I now know u_4. It's not an unknown any more. So I've removed u_4 from the problem, and its column is removed from A. So this is, you could say, a reduced A, or a grounded matrix A. It's now five by three. And what shape will the A transpose A matrix be? Three by three, right? I now have three by five multiplying five by three, which gives me three by three. This column is gone, and that row is gone -- the row came from A transpose and the column came from A, and we've just thrown them away by grounding that node. Now give me the key fact about that A transpose A matrix. What do you see there? Now you see a reduced, a grounded A transpose A. What kind of a matrix have I got? Positive definite. Good. Positive definite. It's not singular any more; its determinant is some positive number, its eigenvalues are all positive, everything's good about that matrix. OK, and I guess what I was starting to say here: this would be a natural problem -- fix the top voltage at one, say. Fix u_1=1 and see how much current flows. That would be a natural question. What's the system resistance, or the system conductance, between the top node and the bottom? If I'm given c_1, c_2, c_3, c_4 and c_5, I could fix that voltage at one and this one at zero. Maybe one of the homework problems asks you for something like this. And then you find all the currents and the voltages -- you solve the problem. And you know the total current that leaves node one and enters node four when the voltage drops by one between them, right?
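In the sketch, grounding node four means dropping the last column:

```python
A_g = A[:, :3]                    # ground node 4: drop its column from A
K_g = A_g.T @ C @ A_g             # the reduced, grounded 3x3 matrix
print(np.linalg.eigvalsh(K_g))    # all eigenvalues positive: positive definite
```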
So current can flow down here, cross over here, down here, whatever. Somehow all five of those numbers are going to play a part in that system resistance. So that would be an interesting number to know: out of those five c's, there's a system resistance between that node and that node. And we can find it by setting this to be one, this to be zero, and having the reduced matrix -- oh, well, what will happen? How many unknowns will I have? Just do this mental experiment. Suppose I introduce u_1 to be one, for example -- this is just one type of possible problem. If I take u_1 to be one, what happens to my matrix A? It loses its first column, too. u_1 is not unknown any more. And that value one is somehow going to move to the right-hand side, right? People have asked me after class: what happens if a boundary condition isn't zero? Suppose we have the fixed spring problem and we pull a mass down to make its displacement 12. Well, somehow that 12 is going to show up on the right side of the equation. It's a source, it's an external term. OK, so if we had u_1 equal to whatever, this u_1 would disappear. I would only have a two by two problem, because I now have only two unknown u's, right? So that's where sources can come in. And can I just complete the picture of the sources? Sources can come in here, they can come in here, they can come in here -- so of course everybody says, why shouldn't they come in here too? And the answer is, they can: we could fix some w's.
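A sketch of that mental experiment, under the illustrative conductances from before: fix u_1=1 and u_4=0, solve for the two unknowns, and read off the system resistance:

```python
# Fix u_1 = 1 at the top and u_4 = 0 at the bottom; u_2, u_3 are unknown.
K_free = K_g[1:, 1:]               # 2x2 block for the two unknown u's
rhs = -K_g[1:, 0] * 1.0            # the fixed u_1 = 1 moves to the right side
u23 = np.linalg.solve(K_free, rhs)
u_all = np.array([1.0, u23[0], u23[1]])
I_total = K_g[0, :] @ u_all        # net current pushed in at node 1
print("system resistance:", 1.0 / I_total)   # about 0.842 with these c's
```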
Of course, you understand we can't do everything. There's a limit to how much we can put on the system -- we want to have some unknowns left, some matrix still. But anyway, I like this picture now; it's more complete. You now see the node variables and node equations, and the edge variables, e and w, the currents. w and u are what I think of as the crucial unknowns; e is sort of on the way; f is the source. But now we have the possibility of sources at all four positions. OK, let's see. If I looked at A transpose C A -- what's a typical row of A transpose C A? Can I just say it in words? Without the C, this is what we had. So what do you think that two becomes if there's a C in the middle? Have you got the pattern yet? That two was there because of two edges -- edges one and two, it happened to be. So instead of the two, I'm going to see c_1+c_2. Right: when those were ones, I got the two. So this will be c_1+c_2, this'll be a minus c_1, and that'll be a minus c_2, when we do it out. And you could do it out for yourself. Just tell me what would show up there, in A transpose C A. Instead of one plus one plus one, what do I have? You really want to multiply it out, because it's so nice to see it happen. I'm looking at node two, I'm seeing three edges out of it, and instead of one, one, one, I'll have c_1+c_3+c_4. That will be sitting here. And minus c_1 will be here, minus c_3 will be here, and minus c_4 will be there. The pattern's just nice. So if you can read this part of the section -- I'll have more to say Friday about the A transpose w, the balance, that critical point we didn't do yet. But the main thing, you've got it.
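And the pattern, checked in the sketch:

```python
# Diagonal entries of K = A^T C A sum the conductances on the edges touching
# each node; off-diagonal entries are -c_e for the edge e joining the nodes.
print(np.isclose(K[0, 0], c[0] + c[1]))          # node 1 touches edges 1, 2
print(np.isclose(K[1, 1], c[0] + c[2] + c[3]))   # node 2 touches edges 1, 3, 4
print(np.isclose(K[0, 1], -c[0]))                # edge 1 joins nodes 1 and 2
```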