Video Description: Herb Gross describes the "game" of matrices — the rules of matrix arithmetic and algebra. He also covers non-singularity and the inverse of a matrix.
Instructor/speaker: Prof. Herbert Gross
Lecture 2: The "Game" of Matrices
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: Hi. I was just thinking of a story that came to mind about the woman who went on vacation. And she said, "We went to Majorca." And her friend said to her, "Where's that?" And she said, "How should I know? We flew." And today we're going to fly through matrix algebra.
In other words, we're going to devote an entire lecture to the subject called matrix algebra, which is a rather fast pace from one point of view. From another point of view, it's a rather slow pace because hopefully by this stage in the development of our course, we're pretty well familiar with what we mean by the game of mathematics. And hence, in particular, if we now try to make a structure out of matrices, we should be in pretty good position to generalize many of our early remarks.
Let me say at the outset that whereas matrices in general can be m rows by n columns, the interesting case occurs when m and n are equal. In other words, the interesting case is when a matrix has the same number of rows as it has columns. And again, using as our motivation a system of equations, the easiest way to see this is that, for example, if you have a certain number of equations and a certain number of unknowns, the really interesting case occurs when you have just as many equations as you have unknowns.
For example, if you have more unknowns than equations, you usually have quite a few degrees of freedom. To take a far-fetched case, suppose I have one equation with fifteen unknowns. Then, you see, I can in general pick fourteen of the unknowns at random and solve for the fifteenth in terms of the specific choices of the fourteen that I picked.
On the other hand, if I were to have fifteen equations and just one unknown, the chances are I would have some contradictions. Because after all, with only one unknown, I could only impose one condition. And therefore, given fifteen conditions-- unless they were all equivalent conditions-- I might wind up with a contradiction.
So in general, the easiest way to summarize this is that when you have more unknowns than you have equations, you usually get too many answers to a problem. And if you have more equations than you have unknowns, you usually get too few answers, where by "too few" we mean none. So the interesting case, as I say, is when you have the same number of unknowns as equations.
Motivating matrices from this point of view, the game of matrices will be played in our course as follows. We will let the set S sub n denote the set of all n by n matrices. In other words, n could be 3, n could be 4, it could be 2, it could be 5. But whatever value we choose for n-- for example, n equals 5-- S sub n would be what? S sub 5 would be the set of all 5 by 5 matrices.
So we're just going to deal with a general n. By the way, I should point out that these computations get kind of sticky. So we will, for illustrative purposes, usually pick examples with n equals 2 or n equals 3, just so that we can see what's going on.
But the notation that we'll use is that the matrix A is a member of S sub n. What does that mean? A belongs to S sub n means that A is an n by n matrix, which is usually written out as the full square array of entries. And if you wish to abbreviate it, we just write what? The square brackets with a sub ij inside: [a sub ij].
Now again, keep in mind that in the play of any game, it's not enough to have the equipment. We have to have the rules that define for us how the terms behave. So we're going to define equality of two matrices as follows. Given the two matrices A and B, we will define them to be equal if and only if, entry by entry, they happen to be equal. In other words, the term in the ith row, jth column of the first matrix has to be the same as the term in the ith row, jth column of the second matrix for all possible values of i and j.
And again, if you want to motivate why we choose this kind of a definition-- even though definitions do not have to be defended-- notice that, given n equations with n unknowns, as soon as you change even one coefficient, you've changed the system of equations. Therefore, for these systems to be exactly equal, you want what? Term by term equality.
And correspondingly, to add two n by n matrices-- in other words, to add the matrix A to the matrix B-- we define the rule to be what? That the sum of these two matrices is the matrix C, where each element of C is obtained by the term by term addition of the elements in A and B. In other words, we add two matrices by adding them entry by entry: we add the two entries in the first row, first column to obtain the entry in the first row, first column of the sum, et cetera. We add term by term.
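To make the definitions of equality and addition concrete, here is a minimal sketch in Python (not from the lecture; plain nested lists stand in for n by n matrices, and the function names are mine):

```python
def mat_equal(A, B):
    """A = B if and only if a_ij = b_ij for every i and j."""
    n = len(A)
    return all(A[i][j] == B[i][j] for i in range(n) for j in range(n))

def mat_add(A, B):
    """(A + B)_ij = a_ij + b_ij: addition is term by term."""
    n = len(A)
    return [[A[i][j] + B[i][j] for j in range(n)] for i in range(n)]
```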
As far as multiplication is concerned-- and again, this is the point I was making in the last lecture and which I like to keep re-emphasizing because it's very important. One would like to say something like, "Gee, it was very natural to add two matrices term by term. Why don't we multiply them term by term?" And the answer is certainly you could have made up a game of matrices in which the product of two matrices was obtained by multiplying the matrices term by term.
However, if we keep in mind what problem we want matrix multiplication to solve-- in other words, notice now, we are playing the game of matrices the way the real-life scientist plays games, not the way the abstract mathematician plays games. You see, in mathematics what we essentially say is, let's make up the rules and we'll worry about models that obey the rules later. In real life we say, let's look at a model that we need, a physical, realistic model. And then based on the properties that this model has, let's make up the abstract rules that govern the system.
And what we saw in the last lecture was that the sensible definition of matrix multiplication, in terms of a chain rule, was that to multiply two matrices-- to get the term in the ith row, jth column-- you dot the ith row of the first matrix with the jth column of the second matrix. And the way we write that-- and again, this notation may seem a little bit heavy to you-- all it does is state symbolically what we said in words last time.
What we're saying is that to dot the ith row with the jth column, notice that staying in the ith row means the first subscript is always i, whereas the second subscript k can run the whole gamut from 1 through n. In other words, the second subscript can denote any column you want. Correspondingly, to indicate that you've stayed in the jth column, the row can be arbitrary. And that means, again, that the subscript k, denoting the row of the B matrix, can run the full gamut from 1 to n. In other words, this formal setup simply says, in concise mathematical language, the statement that we said verbally before. Namely, to find the ijth term in the product, dot the ith row of the first matrix with the jth column of the second matrix.
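In symbols, the rule says that the ijth entry of the product C = AB is c sub ij = the sum over k from 1 to n of a sub ik times b sub kj. A direct transcription into Python, in the same list-of-lists style as above (a sketch; mat_mul is my name):

```python
def mat_mul(A, B):
    """(AB)_ij = sum over k of a_ik * b_kj: dot the ith row of A
    with the jth column of B."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(n)]
            for i in range(n)]
```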
And again, I suspect that the easiest way to see all of these things is in terms of a specific example. And again, the simplest example I can think of will involve the 2 by 2 case, n equals 2. So by means of illustration, let A be the 2 by 2 matrix all of whose entries are 1. Let B be the 2 by 2 matrix all of whose entries are 1, except that in the first row, second column the entry will be a 2.
Now, my first claim is that in terms of our definition of equality, matrix A is unequal to matrix B. Why is that? Well, notice that they differ in the what? First row, second column. In other words, a sub 1,2, the entry of the A matrix in the first row, second column, is 1. Whereas the entry of the B matrix in the first row, second column is 2. Don't say, gee, they look almost equal because three out of the four entries match up. Remember, equality by definition says what? Entry by entry. Three out of four isn't good enough. It has to be all four. Therefore, the matrix A is not equal to the matrix B.
How do we add two matrices? Well, to add them, we said that we would add them entry by entry. Therefore, the first row, first column of the sum should be obtained by adding the first row, first column terms in each matrix together, et cetera. And just going through this quickly, we would get what? 1 plus 1 is 2. 1 plus 2 is 3. 1 plus 1 is 2 again. 1 plus 1 is still 2 over here. In other words, A plus B in this case would be the 2 by 2 matrix [2, 3; 2, 2].
Now, how do we multiply the two matrices A and B? Remember what the rules were for multiplication. To multiply A by B, to find the term in the first row, first column, we dot the first row of A with the first column of B. That gives us 1 times 1, which is 1, plus 1 times 1, which is also 1. The sum is therefore 2. So the entry in our first row, first column is a 2.
To find the entry, which turns out to be 3, in the first row, second column, how do we do that? We dot what? We want what? First row, second column. We dot the first row of A with the second column of B. We get 1 times 2 plus 1 times 1. 2 plus 1 is 3, et cetera. Carrying out the details, the matrix A multiplied by the matrix B in this case would be the matrix with [2, 3] in the first row and [2, 3] in the second row.
By the way, B times A would involve what? Writing in the matrix B first followed by the matrix A. And leaving the details for you just to check out for yourselves, notice-- well, let me just do one for you. Suppose I want to see what was in the first row, first column here. I would have to do what? Dot the first row of B with the first column of A, the first row of B with the first column of A. That gives me 1 times 1 plus 2 times 1, which is 3.
And by the way, even if I went no further here, notice that already tells me, in this case at least, that AB is unequal to BA. Why is that? Because the entry in the first row, first column of AB happens to be 2. The entry in the first row, first column of BA happens to be 3. And we've already seen that for two matrices to be equal, they must be equal entry by entry. And they're already unequal in the entry in the first row, first column.
So notice, by the way, that matrix multiplication need not be commutative. And you see, this is a glaring distinction between matrix multiplication and numerical multiplication, where when we multiply two numbers, the product does not depend on the order in which they're written. But when we multiply two matrices, our definition is such that the product does depend on the order in which they're written. And you might say, well, isn't that an awful thing? Well, maybe it is. But notice that once we pick the definition that we want, we have to let the properties fall where they will.
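Running the lecture's 2 by 2 example through the sketches above confirms each claim, including the failure of commutativity (an illustration, assuming the mat_equal, mat_add, and mat_mul helpers defined earlier):

```python
A = [[1, 1],
     [1, 1]]
B = [[1, 2],
     [1, 1]]

print(mat_equal(A, B))   # False: they differ in row 1, column 2
print(mat_add(A, B))     # [[2, 3], [2, 2]]
print(mat_mul(A, B))     # [[2, 3], [2, 3]]
print(mat_mul(B, A))     # [[3, 3], [2, 2]] -- so AB != BA
```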
And by the way, it is nice to notice-- and again, I will leave the details out. They are in the notes. You'll be checked on these in the exercises and the like. But the interesting point is that most of the rules of numerical arithmetic are obeyed by matrix arithmetic. And let me just go through a few of those with you, just mentioning what they are.
Notice, again, the wording here. I call these properties of the game rather than-- in our usual game terminology-- the rules of the game. Notice that we tend to refer to things as rules when you make up the game abstractly. On the other hand, when you start with a model and tell what addition, multiplication, and equality mean, from these definitions you deduce properties of the system.
So the properties of the game of matrices, meaning what? The properties based on the interpretation that we're giving matrices in this course-- namely, in terms of systems of equations. You see, there are other ways of introducing matrices. For the calculus course, we elected to do it in terms of systems of equations.
But at any rate, you can check the following things out quite easily. First of all, if A and B are any two matrices, A plus B is equal to B plus A. And that follows, of course, from the fact that when you add two matrices, you add them entry by entry, and the entries happen to be numbers. And the sum of two numbers does not depend on the order in which you write the numbers. In other words A plus B does equal B plus A.
Similarly, if you add (B plus C) to A, it's the same thing as if you added C to (A plus B). In other words, the addition of matrices is associative. The sum does not depend on the voice inflection: A plus (B plus C) is the same as (A plus B) plus C. Again, notice how this obeys the rules of ordinary arithmetic.
Thirdly, if I define the zero matrix to be the matrix each of whose entries is 0, then I claim that if I add the zero matrix onto any matrix A, the result must still be the matrix A. And this is also a triviality to check. Namely, how do you add two matrices? You add them term by term. But every term in the zero matrix is 0, and adding 0 onto a number doesn't change the number. Consequently, if you term by term add 0 onto the entries in A, you still have the entries in A intact. So A plus zero is A.
And finally, the rule of inverses. Namely, if A is the matrix each of whose entries we'll call a sub ij, then if I add minus A onto A, the result will be the zero matrix. Where by minus A I mean what? The matrix each of whose entries is negative a sub ij. The reason being, again, that when you add two matrices, you add them term by term. Every time I add the negative of a number onto the number itself, I get 0. Consequently, every entry in the sum of A and negative A will be 0. And by definition, that's the zero matrix.
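All four of these addition properties can be spot-checked with the earlier sketches (the particular 2 by 2 matrices here are arbitrary choices of mine, and mat_neg is my name):

```python
def mat_neg(A):
    """-A: the matrix each of whose entries is -a_ij."""
    return [[-entry for entry in row] for row in A]

A = [[1, 1], [1, 1]]
B = [[1, 2], [1, 1]]
Z = [[0, 0], [0, 0]]                      # the zero matrix

print(mat_add(A, B) == mat_add(B, A))     # True: A + B = B + A
print(mat_add(A, Z) == A)                 # True: A + 0 = A
print(mat_add(A, mat_neg(A)) == Z)        # True: A + (-A) = 0
```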
The point I want to mention is that with the zero matrix defined as we've defined it, and the negative of a matrix defined as we have defined it, the rules for matrix addition are exactly the same structurally as the rules for numerical addition. And that means that with respect to addition, we cannot distinguish matrix algebra from numerical algebra.
In fact, many of the rules of multiplication for matrices are the same as for numerical multiplication as well. Namely, one can show that if you multiply the matrix A by the product of the matrices B and C, this is the same as multiplying the product of A and B by the matrix C: A(BC) = (AB)C. These can be checked by longhand computation. I have you do some of these things in the exercises.
Similarly, one can prove that matrix multiplication is distributive over addition. Namely, if you multiply a matrix by the sum of two matrices, A times (B plus C) is the same as A times B plus A times C.
And another rule that's interesting: there is a matrix that plays the role of the number 1. And surprisingly enough, it's not the matrix all of whose entries are 1. It's the matrix whose entries are 1 on the main diagonal and 0 every place else. That may sound surprising. You say, "Gee, the zero matrix was the one with all 0s; why shouldn't the unit matrix be the one with all 1s?" And the answer is, if we were to multiply two matrices term by term, then the unit matrix would have been all 1s. But remember, we agreed not to multiply matrices term by term. We agreed to do what? Dot the ith row of the first with the jth column of the second. And if we do that, like it or not, this is what the matrix has to look like.
In other words, let me take the matrix in the 3 by 3 case, because that's complicated enough to have you see the overall picture and simple enough so it doesn't take pages of computation. Let me take the 3 by 3 matrix, which I'll just quickly write as [a, b, c; d, e, f; g, h, i], and multiply that by [1, 0, 0; 0, 1, 0; 0, 0, 1]. You see, that's the matrix that has what? 1s on the main diagonal and 0s every place else.
Look what happens. When I look, for example, for the term in the second row, third column-- second row, third column-- look what's going to happen. The d multiplies 0; that gives me 0. The e multiplies 0; that gives me 0. And the f multiplies 1, which gives me f. In other words, the term in the second row, third column is going to be f, which is exactly what it should be if we claim that the product matrix is going to be the same as the first matrix.
In other words, notice that this setup is such that you're going to get 0s every place except where the term in question comes up. And I don't want to go through that in too much detail for you, because I think that as you just play with this thing, you'll see it for yourself.
Let me do just quickly one more. Suppose you wanted the term in the third row, second column. You see what's going to happen here. The g multiplies 0, which yields a 0. The h multiplies 1, which yields an h. And the i multiplies a 0, which yields a 0. So the dot product is just going to be h itself as it should be.
And so, you see, what I'm saying is: by my definition of matrix multiplication, if I define the matrix I sub n to be the matrix each of whose entries is 0, except on the main diagonal where they're 1-- another way of writing that is to call the entry in the ith row, jth column delta sub ij, where delta sub ij is 0 when i is unequal to j and equal to 1 when i equals j-- then if you multiply A by I sub n, it's the same as multiplying I sub n by A. And the result will always be A.
By the way, one of your first objections might be, isn't this redundant? Why do you have to write that A times I sub n is the same as I sub n times A? Well, recall that in our first example we showed that A times B did not have to be the same as B times A. In other words, note that this remark that A times I sub n equals I sub n times A is not redundant. In general, the product of two matrices depends on the order in which they're written.
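Here is a quick numerical check of this in the 3 by 3 case, reusing the mat_mul sketch from above (the sample entries are my own; identity is my name for the I sub n builder):

```python
def identity(n):
    """I_n: the entry delta_ij is 1 when i equals j and 0 otherwise."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
I3 = identity(3)
print(mat_mul(M, I3) == M)   # True: A times I_n is A
print(mat_mul(I3, M) == M)   # True: I_n times A is also A
```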
And that leads me, then, into my next subtopic. And that is, where is matrix algebra different from numerical algebra? In other words, what are the differences between matrix algebra and the usual algebra? We've already seen one difference: AB need not equal BA. To be sure, I could find particular matrices A and B such that A times B equals B times A. But in general, given A and B at random, there is no reason to presuppose that A times B has to equal B times A. And again, there are plenty of drill exercises on this particular point.
The second major difference between this and numerical arithmetic is the existence of inverses. In other words, given a matrix A which is not the zero matrix, the matrix called A inverse need not exist. Now, what do you mean by A inverse? The inverse of a number means a number such that when you multiply it by the given number, you get 1. Therefore, the inverse of a matrix would be what? That matrix such that when you multiply it by A, you get the identity matrix I sub n-- in other words, the matrix that has 0s every place except for 1s on the main diagonal.
And what I'm saying is that whether we like it or not, it turns out that given the matrix A, you cannot always solve the matrix equation A times X equals I sub n. In other words, what it means in terms of equations is given n equations and n unknowns, you may not always be able to invert the equations. In other words, given the Ys in terms of the Xs, it's not always possible to solve uniquely for the Xs in terms of the Ys. Again, I'm just trying to give you an overview here. So a more detailed remark or an explanation about what I'm saying will be in the notes and in the exercises. But the key point is that not all matrices are invertible.
And again, the easiest way to see this is to explicitly go to a 2 by 2 matrix and take a look. Let's take here as an example the 2 by 2 matrix A = [1, 2; 2, 4]. Let X be the matrix [x sub 1,1, x sub 1,2; x sub 2,1, x sub 2,2]. Now, I know how to multiply two matrices. I multiply A times X and obtain the matrix [x sub 1,1 plus 2 x sub 2,1, x sub 1,2 plus 2 x sub 2,2; 2 x sub 1,1 plus 4 x sub 2,1, 2 x sub 1,2 plus 4 x sub 2,2]. I sub 2, by definition, is the matrix [1, 0; 0, 1]. And to say that A times X equals I sub 2 means that the first matrix, term by term, must be the same as the second.
In particular, if we just focus our attention on the first column-- the entry that has to equal 1 and the entry that has to equal 0-- this leads to the fact that x sub 1,1 plus 2 x sub 2,1 has to equal 1, while 2 x sub 1,1 plus 4 x sub 2,1 has to equal 0. And I claim that right away this shows that it's impossible for such numbers x sub 1,1 and x sub 2,1 to exist. Because, look, if we found two numbers x sub 1,1 and x sub 2,1 that obeyed these two equations, we would have proved that 2 equals 0, which, as I told you before, is only true for small values of zero, right?

So look, if the first equation were true, just multiply both sides of it by 2. It would say what? That 2 times x sub 1,1 plus 4 times x sub 2,1 equals 2. But at the same time, that same quantity must equal 0. And since this is impossible, it means that the matrix X does not exist. In other words, it is impossible-- we've just shown it-- to find numbers that I can plug into X such that when I multiply A by X, I get I sub 2.
That's the best proof that A inverse need not exist. Namely what? Show one matrix A for which A inverse doesn't exist. That's all you have to do to show that an inverse doesn't have to exist. Just show one case. One counterexample is all it takes, right?
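The same conclusion can be reached numerically; for instance, numpy (an aside of mine, not part of the lecture) refuses to invert this matrix:

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.]])
try:
    np.linalg.inv(A)
except np.linalg.LinAlgError as err:
    print("A inverse does not exist:", err)   # reports a singular matrix
```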
So what's the point? The point is, we must beware of results which require A inverse. Well, let's think back to ordinary arithmetic. What results required the existence of an inverse? Well, for example, in ordinary arithmetic, if the product of two numbers was 0, one of the factors had to be 0. And we proved that by multiplying both sides of the equation by the inverse of one of the factors.
Well, we don't have inverses necessarily in matrix algebra. So, for example, in matrix algebra it's possible that the product of two matrices gives you the zero matrix, yet neither of the two matrices itself is the zero matrix. Or, another consequence of inverses: in matrix multiplication it's possible that A times B can equal A times C, yet A need not be the zero matrix, nor need B equal C. In other words, we can have A times B equals A times C even though A is not the zero matrix and B is not equal to C.
Again, the beauty of an arithmetic course is that we don't have to hand wave. We can make the emotional, subjective statements, but we can back them up simply by showing the existence of examples. So to illustrate my claims here, let me just do this, as I say, by means of an example.
Let's take the matrix that we took in example 2. Let A be the matrix [1, 2; 2, 4]. Is that the zero matrix? No, it's not the zero matrix. Why? Because the zero matrix must have 0 entries every place, and this doesn't have 0 entries every place. Let's just pull out of the hat the matrix B = [2, 2; -1, -1]. And again, without going through every single detail here, if I multiply A by B, the result is the zero matrix.
For example, just by way of illustration: 1 times 2 is 2, 2 times -1 is -2, and 2 plus -2 is 0. That accounts for the 0 in the first row, first column. If I check this whole thing through, I will find that every entry must be 0. Therefore, I have what? Two matrices, neither of which is the zero matrix, yet their product is the zero matrix.
On the other hand, let me now take the same matrix [1, 2; 2, 4] and multiply it by another matrix that I pull out of the hat. The one I'm going to pull out of the hat is C = [-2, -6; 1, 3]. And again, just to do a quick check over here, let's check the product in the second row, second column. It's what? 2 times -6 is -12, 4 times 3 is 12, and -12 plus 12 is 0. In other words, neither matrix here is the zero matrix, yet the product is the zero matrix.
And by the way, before I make a further comment here, let me say I didn't really pull these out of the hat. I leave this as an exercise for you to do. But there are infinitely many matrices that I can multiply this matrix by and get the zero matrix. In fact, all I have to do is make sure that the matrix I multiply this by has the property that its first row is minus twice its second row.
And if you don't know where I get that from, all I do is what? I solve, again, the matrix equation. I let X be the matrix I'm looking for. I take [1, 2; 2, 4], multiply it by X, equate it to the zero matrix, and see what conditions are imposed on my coefficients. Again, this is not the point of our overview here. You can do that in the homework exercises and in the reading material.
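As a quick check of that claim (an illustration of mine; the values of v1 and v2 are arbitrary choices):

```python
import numpy as np

A = np.array([[1, 2], [2, 4]])

# Any X whose first row is -2 times its second row kills A:
for v1, v2 in [(1, 3), (-1, -1), (5, 0)]:
    X = np.array([[-2 * v1, -2 * v2],
                  [v1, v2]])
    assert np.array_equal(A @ X, np.zeros((2, 2), dtype=int))
print("every such X gives the zero matrix")
```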
But the key point is: what have I done, as far as this is concerned? Look at these two examples. A times B is the zero matrix, and A times C is the zero matrix. Therefore, A times B is the same as A times C. And what I'm driving at is that here is a case where A is not the zero matrix, and the second factors are not equal to each other-- the matrix B does not equal the matrix C. Yet if I cancel the A, I cannot conclude that B equals C. In other words, here I have what? A times B equals A times C, yet A is not the zero matrix, and the matrix B is not equal to the matrix C.
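Checking this numerically with the three matrices above (numpy used purely as a calculator; this block is my own illustration):

```python
import numpy as np

A = np.array([[1, 2], [2, 4]])
B = np.array([[2, 2], [-1, -1]])
C = np.array([[-2, -6], [1, 3]])

print(np.array_equal(A @ B, A @ C))   # True: AB = AC (both are the zero matrix)
print(np.array_equal(B, C))           # False: yet B != C, so we cannot cancel A
```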
And so what have I proven? I've proven that for matrices, when it comes to canceling factors in a product, you cannot necessarily take the same liberties that you can in numerical arithmetic. And this is a very troublesome thing. This bothers people. It means that we have to be particularly careful when we do the arithmetic of matrices to make sure that we're dealing with what? Invertible matrices. What it means, again, is that if the matrix coding a system of equations does not have an inverse, then somehow or other we cannot invert the system of equations.
And so what we do in the study of matrix algebra is single out a particularly important subset of the matrices. Namely, we have a special interest in those matrices A for which A inverse exists. And because we have that special interest, we give a matrix a special name if it has an inverse. Namely, the definition is what? The matrix A is called non-singular provided that A inverse exists.
Now what's the beauty of non-singular matrices? The beauty of non-singular matrices is if you happen to know that you're dealing with a non-singular matrix, then you can take the same arithmetic liberties that you could with numbers. For example, let me just give you an illustration here.
Suppose I know that AB equals AC, where A, B, and C are matrices. And I happen to also know that A is non-singular-- in other words, that A inverse exists. I claim that with this extra piece of information-- that A is non-singular-- I can conclude from AB equals AC that B equals C. Not only can I conclude it, but I can conclude it as a corollary to numerical arithmetic. Because remember, how did we prove in numerical arithmetic that you could have cancellation? All it required was that the number being canceled had to have an inverse.
I can parrot the proof word for word. Namely, I'm given AB equals AC. I say, OK, I will therefore multiply both sides of the equation by A inverse. How do I know I could multiply by A inverse? Well, all I need to know is that A inverse exists. But how do I know then that A inverse does exist? Well, I said that A is non-singular and by definition that means that A inverse exists. So I multiply both sides by A inverse.
Now, I also know that multiplication is associative. That's one of my rules of the game. So I switch the voice inflection-- the parentheses. In other words, the fact that A inverse times (AB) equals A inverse times (AC) means that (A inverse times A) times B equals (A inverse times A) times C. See, I just switched the voice inflection. But by definition, what property does A inverse have? It has the property that A inverse multiplied by A gives you the identity matrix I sub n. In other words, from this statement I can conclude that I sub n times B equals I sub n times C.
But what property does I sub n have? It has the property that when it multiplies any matrix, it does not change that matrix. Therefore, I sub n times B is just B, and I sub n times C is just C. And I've concluded what? That it follows inescapably, as a theorem, that if A is non-singular, then if AB equals AC, B must equal C. And this is, step by step, the same proof that we used for numbers, except that we had to use what? I sub n instead of 1. If we just recopy this, what do we find? Structurally, we cannot distinguish the proof here from the proof that we gave in numerical arithmetic.
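Numerically, the theorem looks like this (a sketch; the particular matrices are my own choices, with A deliberately non-singular):

```python
import numpy as np

A = np.array([[1., 1.], [1., 2.]])   # non-singular: determinant is 1
B = np.array([[3., 0.], [1., 4.]])
C = np.array([[2., 5.], [0., 1.]])

# Left-multiplying by A inverse recovers the second factor exactly:
A_inv = np.linalg.inv(A)
print(np.allclose(A_inv @ (A @ B), B))   # True: cancellation is legitimate here
print(np.allclose(A_inv @ (A @ C), C))   # True
```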
Now the question that comes up is, how do you determine whether a matrix is non-singular or not? Are there any cute recipes? And we've already, in this course, for different reasons, talked about 2 by 2 determinants and 3 by 3 determinants. Determinants get messy when you go up to higher than 3 by 3. Worse than messy, they become deceptive, because the obvious recipes turn out to be false. And the correct recipes turn out to be ones that you rebel against using, because somehow you're not used to them and like to believe there's a simpler way of doing it.
I am going to save the precise study of determinants for a later block of material. But I just want to mention one interesting thing about determinants and why they come up in the study of systems of linear equations. The role of determinants is this-- and more details will be supplied in a later block-- the matrix A is non-singular if and only if its determinant is not 0. In other words, the existence of A inverse is equivalent to the fact that the determinant of the given matrix is not 0.
Now this is a messy proof. So I will restrict the proof to the case when n equals 2, because in the case n equals 2, it's rather easy to multiply these matrices together. For example, let's suppose I take the matrix [a, b; c, d]. In other words, we'll call this the matrix A. And I want to see: what should I multiply that matrix by to get the identity matrix? In other words, under what conditions can I invert the matrix [a, b; c, d]?
Well, what that means is I want to find x1, x2, y1, y2 such that when I multiply [a, b; c, d] by [x1, x2; y1, y2], I get [1, 0; 0, 1]. Now, notice what this leads to. For example, the first row dotted with the first column is what? It's a x1 plus b y1, and that must equal 1. Similarly, the second row dotted with the first column is c x1 plus d y1, and that must equal 0. And that means what? To find x1 and y1, I have to be able to solve the system of equations a x1 plus b y1 equals 1, c x1 plus d y1 equals 0.
And similarly, it also has to be true that when I multiply the first row by the second column, I get 0, and the second row by the second column, I get 1. That gives me a similar system of equations for x2 and y2. Notice, by the way, that the matrix of coefficients of both of these systems is the same. The matrix of coefficients is what? In both cases, it's [a, b; c, d].
Now here's the point. In order to get a unique solution here, remember, each of these two equations represents the equation of a straight line. The only way you can get one and only one solution is when the straight lines are not parallel. You see, if the two straight lines are parallel and they're different straight lines, there are no intersections. And if they happen to coincide, you have infinitely many intersections. So the only time you're in trouble here is if these two lines happen to be parallel.
Algebraically, that says what? That the ratio b over a is the same as the ratio d over c. In other words, the unique solvability of these systems of equations requires only that b over a be unequal to d over c. Well, another way of saying that is what? That a times d minus b times c be unequal to 0. But what is ad minus bc? ad minus bc is the determinant of the matrix [a, b; c, d]. In other words, in the 2 by 2 case, the matrix is invertible if and only if its determinant of coefficients is not 0. At least that proves it for the 2 by 2 case. Let's check it out in a couple of examples.
Let's go back again to our old friend from examples 2 and 3. Let A be the matrix [1, 2; 2, 4]. The determinant of coefficients is 1 times 4, which is 4, minus 2 times 2, which is also 4. 4 minus 4 is 0. The determinant of A is 0. Therefore, according to our theorem, A is singular.
By the way, I guess I've slipped here. I haven't defined what singular means. I think it's rather obvious. Singular is an abbreviation for non-non-singular. In other words, if it's false that the matrix is non-singular, then we call it singular. In other words, A does not have an inverse because its determinant is 0. And this checks with our previous result when we showed that A inverse didn't exist. OK.
Let me pick one more example to conclude today's lesson. Let's take the matrix A now to be the matrix whose first row is 5, 4 and whose second row is 7, 6-- that is, [5, 4; 7, 6]. Then how do we find the determinant again? It's 5 times 6, which is 30, minus 7 times 4, which is 28. 30 minus 28 happens to be 2. But the key fact is that it's not 0. In other words, the determinant of A is not 0. Therefore, according to our general theory, A inverse exists.
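In code, the 2 by 2 test we just proved is one line (a sketch; det2 is my name, and numpy's built-in inverse is used only as a black-box confirmation, since computing A inverse by hand is next lecture's topic):

```python
import numpy as np

def det2(a, b, c, d):
    """Determinant of the 2-by-2 matrix [a, b; c, d]: ad - bc."""
    return a * d - b * c

print(det2(1, 2, 2, 4))   # 0 -> [1, 2; 2, 4] is singular
print(det2(5, 4, 7, 6))   # 2 -> [5, 4; 7, 6] is non-singular

A = np.array([[5., 4.], [7., 6.]])
print(np.allclose(A @ np.linalg.inv(A), np.eye(2)))   # True: A inverse exists
```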
And now, we come to why I have to give a second lecture on this particular topic. You see, what I have done now is I have finished matrix algebra as far as the qualitative overview is concerned. But the major problem in working with matrices, and the one which I will solve next time, is the following. If I'm given a matrix A, and I know somehow or other the determinant is not 0, or equivalently, somebody tells me that the A inverse exists, the question is, knowing that A inverse exists, how do we actually determine the value of A inverse?
You see, there are two problems here. One is: given a system of equations-- say you have n equations with n unknowns, where y1 up to yn are expressed in terms of x1 up to xn-- the first question is, is it possible to solve for the x's in terms of the y's? Now, if the answer to that question is no, we quit. But if the answer is yes, the next question-- and from a practical point of view this is often crucial-- is: knowing that we can solve for the x's in terms of the y's, how do we actually carry out the computation?
And the matrix equivalent is this. Given the matrix A, A inverse need not exist. But if A inverse does exist, how do we compute it? That will be the subject of our lecture next time: how to compute A inverse. And a corollary to this will be how this is related to solutions of systems of n equations and n unknowns.
But we'll talk more about that next time. And until next time, good bye.
Funding for the publication of this video was provided by the Gabrielle and Paul Rosenbaum Foundation. Help OCW continue to provide free and open access to MIT courses by making a donation at ocw.mit.edu/donate.
Study Guide for Lecture 2: The "Game" of Matrices
- Chalkboard Photos, Reading Assignments, and Exercises (PDF)
- Solutions (PDF - 3.1MB)
To complete the reading assignments, see the Supplementary Notes in the Study Materials section.
Free Downloads
Video
- iTunes U (MP4 - 90MB)
- Internet Archive (MP4 - 90MB)
Subtitle
- English - US (SRT)