Linear Algebra for Beginners: Open Doors to Great Careers | Richard Han | Skillshare

Linear Algebra for Beginners: Open Doors to Great Careers

Richard Han, PhD in Math


44 Lessons (6h 51m)
    • 1. Introduction Lecture (3:03)
    • 2. Gaussian Elimination Systems of 2 Equations (11:13)
    • 3. Gaussian Elimination and Row Echelon Form Systems of 3 Equations (18:15)
    • 4. Elementary Row Operations (11:13)
    • 5. Elementary Row Operations Additional Example (6:32)
    • 6. Vector Operations and Linear Combinations (18:57)
    • 7. Vector Equations and the Matrix Equation Ax=b (16:16)
    • 8. Linear Independence (6:26)
    • 9. Linear Independence Example 1 (11:02)
    • 10. Linear Independence Example 2 (4:36)
    • 11. Matrix Operations Addition and Scalar Multiplication Corrected (Am) (7:12)
    • 12. Matrix Operations Multiplication (9:18)
    • 13. Commutativity, Associativity, and Distributivity (13:13)
    • 14. Identities, Additive Inverses, Multiplicative Associativity and Distributivity (14:25)
    • 15. Transpose of a Matrix (6:42)
    • 16. Inverse Matrix (5:30)
    • 17. Gauss Jordan Elimination (10:56)
    • 18. Gauss Jordan Elimination Additional Example (6:03)
    • 19. Determinant of a 2 by 2 Matrix (2:34)
    • 20. Cofactor Expansion (7:18)
    • 21. Cofactor Expansion Additional Examples (5:51)
    • 22. Determinant of a Product of Matrices and of a Scalar Multiple of a Matrix (11:07)
    • 23. Determinants and Invertibility (7:26)
    • 24. Determinants and Transposes (3:35)
    • 25. Vector Space Definition (7:22)
    • 26. Vector Space Example (13:43)
    • 27. Vector Space Example Continued (12:18)
    • 28. Vector Space Additional Example (16:46)
    • 29. Vector Space Additional Example Continued (4:03)
    • 30. Examples of Sets that are Not Vector Spaces (6:09)
    • 31. Subspace Definition and Subspace Properties (9:55)
    • 32. Definition of Trivial and Nontrivial Subspace (3:38)
    • 33. Additional Example of Subspace (5:17)
    • 34. Subsets that are Not Subspaces (9:13)
    • 35. Subsets that are Not Subspaces Additional Example (4:25)
    • 36. Span (15:26)
    • 37. Span of a Subset of a Vector Space (8:24)
    • 38. Linear Independence 2 (9:35)
    • 39. Determining Linear Independence or Dependence (13:43)
    • 40. Basis (16:07)
    • 41. Dimension (9:52)
    • 42. Coordinates (3:27)
    • 43. Change of Basis (9:23)
    • 44. Examples of Finding Transition Matrices (13:02)


721 Students

About This Class

Would you like to learn a mathematics subject that is crucial for many high-demand, lucrative career fields, such as:

  • Computer Science
  • Data Science
  • Actuarial Science
  • Financial Mathematics
  • Cryptography
  • Engineering
  • Computer Graphics
  • Economics

If you're looking to gain a solid foundation in Linear Algebra to further your career goals, studying on your own schedule at a fraction of the cost of a traditional university, this online course is for you. Whether you're a working professional needing a refresher on linear algebra or a complete beginner who needs to learn Linear Algebra for the first time, this online course is for you.

Why you should take this online course:

  • You need to refresh your knowledge of linear algebra for your career to earn a higher salary.
  • You need to learn linear algebra because it is a required mathematical subject for your chosen career field, such as computer science or electrical engineering.
  • You intend to pursue a master's degree or PhD, and linear algebra is a required or recommended subject.

Why you should choose this instructor: I earned my PhD in Mathematics from the University of California, Riverside. I have extensive teaching experience: six years as a teaching assistant at the University of California, Riverside; over two years as a faculty member at Western Governors University (ranked #1 in secondary education by the National Council on Teacher Quality); and time as a faculty member at Trident University International.

In this course, I cover the core concepts such as:

  • Gaussian elimination
  • Vectors
  • Matrix Algebra
  • Determinants
  • Vector Spaces
  • Subspaces

After taking this course, you will feel CAREFREE AND CONFIDENT. I break it all down into bite-sized, no-brainer chunks. I explain each definition and go through each example STEP BY STEP so that you understand each topic clearly.

Practice problems are provided for you, and detailed solutions are also provided to check your understanding.

Grab a cup of coffee and start listening to the first lecture. Enroll now!

Meet Your Teacher


Richard Han

PhD in Math


Hi there! My name is Richard Han. I earned my PhD in Mathematics from the University of California, Riverside. I have extensive teaching experience: six years as a teaching assistant at the University of California, Riverside; over two years as a faculty member at Western Governors University (ranked #1 in secondary education by the National Council on Teacher Quality); and time as a faculty member at Trident University International. My expertise includes calculus, discrete math, linear algebra, and machine learning.




Transcripts

1. Introduction Lecture: Welcome to Linear Algebra for Beginners: Open Doors to Great Careers. My name is Richard Han. This is a first course in linear algebra. If you're a working professional needing a refresher on linear algebra, or a complete beginner who needs to learn linear algebra for the first time, this online course is for you. If your busy schedule doesn't allow you to go back to a traditional school, this course allows you to study on your own schedule and further your career goals without being left behind. If you plan on taking linear algebra in college, this is a great way to get ahead. If you're currently struggling with linear algebra, or have struggled with it in the past, now is the time to master it. After taking this course, you will have refreshed your knowledge of linear algebra for your career so that you can earn a higher salary. You will have a required prerequisite for lucrative career fields such as computer science, data science, actuarial science, financial mathematics, engineering, cryptography, and economics. You will be in a better position to pursue a master's or PhD degree. According to weusemath.org, here are some high-end salaries for such fields: electrical engineer, $136,690 a year; computer scientist, $168,776 a year. And according to glassdoor.com, the average salary for a data scientist is $118,709 a year. Here are some famous uses of linear algebra. In machine learning, eigenvectors can be used to reduce the dimensionality of a data set using a technique called principal component analysis. In cryptography, messages can be encrypted and decrypted using matrix operations. And in finance, regression analysis can be used to estimate relationships between financial variables. For example, the relationship between the monthly return of a given stock and the monthly return of the S&P 500 can be estimated using a linear regression model. The model can in turn be used to forecast the future monthly return of the given stock.
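As an aside, the dimensionality-reduction idea mentioned above can be sketched in a few lines of NumPy. This is an illustrative example, not code from the course: the random data, the variable names, and the choice of keeping one component are all assumptions made for the sketch.

```python
import numpy as np

# Illustrative sketch of principal component analysis: project 2-D data
# onto the eigenvector of the covariance matrix with the largest eigenvalue.
rng = np.random.default_rng(0)
data = rng.normal(size=(100, 2)) @ np.array([[3.0, 1.0], [1.0, 0.5]])

cov = np.cov(data, rowvar=False)        # 2x2 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
top = eigvecs[:, -1]                    # eigenvector of the largest eigenvalue
projected = data @ top                  # 1-D representation of each point

print(projected.shape)  # (100,)
```

The projection keeps the direction of greatest variance, which is the sense in which eigenvectors "reduce the dimensionality" of the data set.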
In this course, I cover the core concepts such as Gaussian elimination, vectors, matrix algebra, determinants, vector spaces, and subspaces. 2. Gaussian Elimination Systems of 2 Equations: Okay, in this section, we're going to look at solving systems of linear equations. We're going to look at the process of Gaussian elimination, and it has three things that you can do. The first thing you can do is switch two equations. The second thing is that you can multiply one equation by a nonzero number. The third thing that you can do is add a multiple of one equation to a second equation. Okay, now this set of three things that you can do, that's called Gaussian elimination. This will make a lot more sense if we look at some examples. Let's do one example here. Let's say you had a system of equations like this. So here you have two equations and two variables, x and y. So this is a system of two equations in two variables. What we want to do here is try to get rid of this x variable here, so let's do negative four times the first equation and add that to the second equation. If you multiply the first equation by negative four, this part will become −4x; that will add with this 4x in the second equation and give you 0x, which cancels out the x variable. So let me multiply the first equation by negative four, and then I leave the second equation as it is. Now here we see −4x, and we see 4x here. Let's add these two equations: I get 0x + 9y = −4. 0x is just zero, so I get 9y = −4. Let's solve for y: divide both sides by nine, and you get negative four ninths. I want to find what x is, so I'll plug this y value back into the first equation: x minus two times negative four ninths equals one. Now simplify this and solve for x by subtracting 8/9 from both sides, and I get 1/9. Okay, so x equals 1/9, and y equals negative four ninths, −4/9.
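The elimination done by hand above can be checked numerically. This is a sketch, assuming the system from the example is x − 2y = 1 and 4x + y = 0 (which is consistent with every step shown); `np.linalg.solve` stands in for the hand elimination:

```python
import numpy as np

# The system from the worked example:
#    x - 2y = 1
#   4x +  y = 0
A = np.array([[1.0, -2.0],
              [4.0,  1.0]])
b = np.array([1.0, 0.0])

solution = np.linalg.solve(A, b)
print(solution)  # approximately [ 1/9, -4/9 ]
```

The output matches the solution found by Gaussian elimination in the lecture: x = 1/9 and y = −4/9.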
So these are your values for x and y, and that's the solution to this system of equations. Let's do another example. Let's say we had this system of equations. Let's multiply the first equation by negative two and add it to the second equation. Okay, so we're going to add these two equations. I get 0x + 0y = 1, and this left-hand side equals zero. So we get 0 = 1, a contradiction. Since we get a contradiction, the original set of equations has no solution. Okay, let's do one more example. All right, let's try to get this to be −14x so that the x variables cancel out. Let's do minus two times the first equation and add it to the second equation; the second equation just stays the same here. Okay, now let's add: I get 0x + 0y = 0, and so 0 = 0. Well, that doesn't really tell me anything. If you look back here, notice the second equation is just twice the first equation. So really, we only have one equation, just the first one; the second equation is redundant. So this is all we have here, this one equation. Note y could be anything, so let y be some parameter t, some variable t. Let's plug that in for y here and solve for x: 7x + 5t = 2. Bring the 5t over to the other side, divide both sides by seven, and this is what I get. So the set of all solutions is going to be the set of all pairs x and y. Let's write x in terms of t, as we have solved for it here, and y is just t. And t is a free variable, so it could be anything, any real number; we write it like this: t lies in the set of real numbers. Okay, so the set of all pairs like this, where t is any real number: those are the solutions to this original set of equations. 3. Gaussian Elimination and Row Echelon Form Systems of 3 Equations: Okay, for a system of three equations in three variables, we want to solve in a similar fashion by getting rid of the variables one by one, until we have a triangular shape. Let's look at an example. Okay, so this is our system of equations. We have three equations and three variables: x, y, and z. Notice that in the second equation we have a negative x, and if we were to add that to the first equation, the x terms would cancel out. So let's take the first equation and add it to the second equation. Okay, the first equation just stays the same. Now, we're going to replace that second equation by the result of adding the first equation to the second equation, so we'll get 0x + 3y + 4z. The third equation just stays the same, so let's rewrite it down here. Okay, now let's try to get rid of this x term down here in the third equation by multiplying the first equation by minus three. So that's going to be minus three times the first equation, plus the third equation. So let's leave the first equation as it is; the second equation just stays the same. Now here, we're going to get −3x + 3x, so that's going to give you 0x; we won't have an x term. We have −3y − 3y, which is −6y; −3z + z, which is −2z; and then −3 times 0, which is 0, minus 1, which is −1. Okay, so let's look at these two equations down here and try to cancel the y term. We want this 3 here, the coefficient, to become 6, so that when we add it to the −6y they'll cancel out. So let's multiply this second equation here by two: two times the second equation, and add that to the third equation. All right, let's rewrite the first equation; the second equation stays the same. And then here we get 6y − 6y, which is 0y; 8z − 2z, which is 6z; and then 2 − 1, which is 1. Okay, so now look down here at this 6z. We want the coefficient to be one, so let's divide out by six; it's going to be 1/6 times the third equation. So let's rewrite everything here, and we get z = 1/6. Look at the coefficient of y in this second equation. We want that to be one, so let's divide the second equation by three: one third times the second equation. Okay, so now we have this set of equations, and all the coefficients of these leading variables here are one. Now, when we have reduced it to this triangular shape, and all the variables in the front have coefficient one, we say that it's in row echelon form. Okay, row echelon form; that's this triangular shape here. If we can get the system of equations to look like this, and all those leading coefficients are one, we say that it's in row echelon form. All right, let's do another example. Let's say you had this set of equations. Okay, let's try to get rid of this x variable here by multiplying the first equation by negative three, minus three times the first equation, and adding that to the second equation. What do we get? Okay, so leave the first equation as it is. Here we have −3x + 3x, which is 0x. Here we have −3 times −2, which is 6, so we'll have 6y + 2y, which is 8y; −15z − z, which is −16z; and then −6 − 2, which is −8. Okay, so here we end up with this system of equations. Look at this second equation here. Notice z could be anything, so let z be t, and t is a free variable. Let's plug that in for z here and solve for y. Okay, let's divide out by eight, and I get y in terms of t. Okay, so we already have z, we have y, and now we want to solve for x. So let's use that first equation here and plug in what we got for y, which was 2t − 1.
z was t, so let's plug that in here and simplify. So we get x + 2 + t = 2. The twos cancel out on each side; subtract t from both sides, and we get x = −t. Okay, so x is −t, y is 2t − 1, and z is t. So we have the set of all points like this, where t is any real number; t is a free variable. So all points of this form are solutions to the original set of equations. Okay, let's do one more example. Okay, let's look at the first two equations. If we take the first equation and subtract the second equation, we can get rid of that x term. So let's do E1 minus E2. Okay, so let's rewrite the first equation, and here we're going to take the first equation and subtract the second equation. So here the x terms cancel. We get y minus −y, and that's actually 2y; and then −z − z, which is −2z; here we get 0 − 1, which is −1. And the third equation stays the same. Okay, now look at the first equation and the third equation. We want to get rid of that x term, so let's multiply the first equation by negative two: −2 times the first equation, and add that to the third equation. All right: −2x + 2x is 0x; −2y + y is −y; here we will have 2z − z, which is just z; 0 + 0, which is 0. Okay, let's look at these last two equations here and try to get rid of the y term down here. So let's do the second equation plus two times the third equation. Okay, I'm just going to rewrite everything here. For that third equation we'll have 2y − 2y, which is 0y; we'll have −2z + 2z, which is 0z. So everything on the left-hand side is zero, and −1 + 0 is −1. Okay, so we get a contradiction here, and since we arrive at a contradiction, we know that the original set of equations has no solution. Okay, no solution. 4. Elementary Row Operations: In this lecture, we're going to look at elementary row operations. We can rewrite a system of equations using a matrix. So, for example, look at this system of equations; we've actually solved this system of equations in an earlier lecture. We can write a matrix that encapsulates this system of equations. Okay, now the way we do that is to look at the coefficients of the variables in this system of equations. So look at the first equation. We see the coefficients are 1, 1, and 1. Okay, so let's write that here: 1, 1, and 1. On the right-hand side, we have the constant 0, so we'll write that here. Okay, now move on to the second equation. We see that the coefficients are −1, 2, and 3. So let's write that: negative one, two, and three. On the right-hand side, we have 1. Okay, let's move on to the third equation. The coefficients are 3, −3, and 1, and the constant term is −1. So let's write that here. Okay, so this matrix that we just formed, that's called the augmented matrix. Now we can solve the system using the same three operations that we used earlier. Instead of performing operations on equations, we can perform operations on rows. The operations are called elementary row operations. Okay, the first thing you can do is switch two rows. The second thing you can do is multiply one row by a nonzero number. And finally, the third thing that you can do is add a multiple of one row to a second row. These are exactly the same three steps that you saw earlier in Gaussian elimination. Okay, let me write that matrix again. The augmented matrix was 1, 1, 1, 0; −1, 2, 3, 1; 3, −3, 1, −1. Okay, so this was our augmented matrix. Let's look at the first two rows. If we were to add those two rows, notice the 1 and the −1: they would cancel here.
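The three elementary row operations just defined can be written as small helper functions on a NumPy array. This is an illustrative sketch, not code from the course; the function names are my own. It then replays the lecture's elimination on the augmented matrix of x + y + z = 0, −x + 2y + 3z = 1, 3x − 3y + z = −1:

```python
import numpy as np

def swap(M, i, j):
    """Operation 1: switch row i and row j."""
    M[[i, j]] = M[[j, i]]

def scale(M, i, c):
    """Operation 2: multiply row i by a nonzero number c."""
    M[i] = c * M[i]

def add_multiple(M, src, c, dst):
    """Operation 3: add c times row src to row dst."""
    M[dst] = M[dst] + c * M[src]

# Augmented matrix from the lecture.
M = np.array([[ 1.0,  1.0, 1.0,  0.0],
              [-1.0,  2.0, 3.0,  1.0],
              [ 3.0, -3.0, 1.0, -1.0]])

add_multiple(M, 0, 1.0, 1)   # row1 + row2 -> new row2: [0, 3, 4, 1]
add_multiple(M, 0, -3.0, 2)  # -3*row1 + row3 -> new row3: [0, -6, -2, -1]
add_multiple(M, 1, 2.0, 2)   # 2*row2 + row3 -> new row3: [0, 0, 6, 1]
scale(M, 2, 1/6)             # divide row3 by 6
scale(M, 1, 1/3)             # divide row2 by 3
print(M)                     # row echelon form; last row says z = 1/6
```

The final matrix matches the row echelon form reached by hand, with z = 1/6 in the bottom row.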
So let's do that: row one plus row two, and that will be our new row two. Okay, so the first row just stays the same. For the second row, we add these two rows, so we get 0, 3, 4, and 1. The third row just stays the same. Okay, now let's look at that third row there. We want to get rid of that 3, and the way to do that is to take minus three times row one and then add that to row three. Okay, so let's do minus three row one plus row three. That'll give us our new row three. Let me write that here. The first two rows just stay the same; I'm going to rewrite them here. Now for the third row, I get −3 + 3, which is 0; −3 − 3, which is −6; −3 + 1, which is −2; 0 plus negative one, which is −1. And this is what I get. All right, let's focus here on the second and third rows. We want to try to get rid of that −6 there. If we were to multiply the second row by two, we would get a 6 here, and then we would be able to add that to the third row and cancel out that −6. Okay, so let's do that: two times row two plus row three, and that will give us our new row three. Let me just rewrite that here. Okay, so we had 2 row two plus row three to get our new row three. Let me do that right here. Okay, so that'll give us 6 − 6, which is 0; 8 − 2, which is 6; 2 − 1, which is 1. All right, let's look at that third row. We want this coefficient 6 to be one, so let's divide out by six. Divide the third row by six to give us our new third row, and this is what we get here. All right, let's look at the second row. Look at the coefficient 3. We want that to be one, so let's divide the second row by three. That will be our new second row. Let me do it right here. Okay, so I divided everything by three in that second row. Notice this triangular shape here of this matrix, and all the coefficients along this diagonal part are one. Okay, when you have this sort of form, and all the coefficients along this diagonal line here are one, then we say that it's in row echelon form. 5. Elementary Row Operations Additional Example: Let's do an additional example. Okay, let's say we had this system of equations. Let's write the augmented matrix. Look at the first equation: there's no x term here, so the coefficient is zero. The coefficient for y is one, and the coefficient for z is one. Okay, let's move to the second equation. The coefficients are 1, −1, −1, and the constant term is 1 on the right-hand side, right here. Let's move to the third equation. We have 2, 2, and −1, and the constant term is 3. Okay, look at the first row here. There's a zero at the top left corner. We don't want a zero there; we want a one. So let's switch the first row with the second row. Okay, so take the first row, switch it with the second row, and then we'll get this. Okay, let's look at this third row here. We have a 2; we want to get rid of that 2 and make it zero. We can do that by multiplying the first row by negative two and adding that to the third row. That'll give us our new third row, so we'll get 0; then −2 times −1, which is 2, and 2 + 2 is 4; 2 − 1, which is 1; −2 + 3, which is 1. Okay, so this is what we'll end up with right here. Now, let's try to get rid of that 4 down here. Okay, let's take minus four times row two and add that to row three to give us our new row three. Okay, let me rewrite that here. So we did minus four row two plus row three, right here. Okay, so we'll get 0; −4 + 4 is 0; −4 + 1 is −3; 0 + 1 is 1. Okay, so it looks pretty good here.
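The system in this additional example can also be checked numerically. This is a sketch, assuming the system is y + z = 0, x − y − z = 1, 2x + 2y − z = 3 (the first equation's right-hand side is not stated aloud, but 0 is the value consistent with the arithmetic in the lecture); the result should agree with the z = −1/3 found by row reduction:

```python
import numpy as np

# Assumed system:  y + z = 0,  x - y - z = 1,  2x + 2y - z = 3
A = np.array([[0.0,  1.0,  1.0],
              [1.0, -1.0, -1.0],
              [2.0,  2.0, -1.0]])
b = np.array([0.0, 1.0, 3.0])

x, y, z = np.linalg.solve(A, b)
print(x, y, z)  # x = 1, y is about 0.333, z is about -0.333
```

In particular, z comes out to −1/3, matching the last row of the row echelon form above.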
Except that last coefficient is −3; we want that to be a one. So let's divide the third row by minus three to give us our new third row. Okay, so dividing out by minus three gave us a one here, and on the right-hand side we get −1/3. Okay, so notice this triangular shape here, and all the coefficients are one along this diagonal, so it's in row echelon form. 6. Vector Operations and Linear Combinations: In this lecture, we're going to learn about vector operations and linear combinations. Okay, so what is a vector? A vector is a list of real numbers. That's basically what a vector is. Let's look at an example. Let's call it v: 0 and −2. This is a vector in R². R² just symbolizes the set of all pairs, pairs like this where each of these entries is a real number, and it's R² because there are just two entries here. Let's see another example: say the vector u: 0, 3, and −2. Notice this vector has three entries, so this is a vector in R³. Okay, now we want to be able to add two vectors. So how do we do that? We can add two vectors u1 and u2 by just adding their corresponding entries. Let's take a look. For example, let's say u1 equals this vector: 0, 3, −22. And u2 is this vector: 7, 2, 5. Now we want to add u1 plus u2. Well, we just form the vector that you get when you add the corresponding entries in each vector. Look at the first entry; that's 0. We're going to add that to 7, so that's 7. Then you have the second entry, which is 3; add that to the second entry here, which is 2, so that's going to give you 5. Now for the third entry: −22 + 5, which is −17. Okay, pretty simple.
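Entrywise addition like this maps directly onto NumPy arrays; here is a short check of the lecture's numbers, together with the scalar multiple covered next (this is an illustrative sketch, not code from the course):

```python
import numpy as np

# Adding two vectors entrywise, with the numbers from the lecture:
u1 = np.array([0, 3, -22])
u2 = np.array([7, 2, 5])
print(u1 + u2)   # [  7   5 -17]

# Multiplying a vector by a scalar:
c = 8
v = np.array([1, -1, 2])
print(c * v)     # [ 8 -8 16]
```

Both results match the hand computations: (7, 5, −17) and (8, −8, 16).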
When you want to add two vectors, just add the corresponding entries in each of those vectors; that will give you the sum of those two. Now, we can also multiply a vector by a scalar c. A scalar is just another way of saying real number, so c is a real number, and we want to multiply a vector by a real number c. Let's do an example. Let's say c is 8 and the vector v is 1, −1, 2. Okay, we want to take c times v, which is 8 times the vector v, and the way we multiply a vector by a scalar is just to multiply each entry by that scalar. So it's very natural; it's the obvious thing to do. Just multiply each term: that's 8, −8, and 16. Okay, now, if we have a set of vectors v1, v2, and so on up to vk, and scalars, say c1, c2, all the way to ck, then c1v1 + c2v2 + ⋯ + ckvk is called a linear combination. It's a linear combination of v1 through vk, and the constants c1 to ck are called weights. Okay, c1 times v1 plus c2 times v2, and so on, plus ck times vk: that whole expression is called a linear combination of v1 to vk, and those coefficients c1, c2, et cetera, all the way to ck, are called the weights. Let's take a look at an example. Okay, so here's an example. Let's say v1 is this, v2 is this, and v3 is this. What about scalars? c1, let's say, is 1; c2 is 3; and let's say c3 is −5. Okay, so let's form the linear combination c1v1 + c2v2 + c3v3. Okay, that's a linear combination of the vectors v1, v2, and v3 with weights 1, 3, and −5. Okay, now we can simplify this linear combination here. If we were to plug in all these values here and the vectors v1 through v3, this is what we'll get.
So we'll get 1 times (1, 1, 2) plus 3 times (0, −1, 0) plus −5 times (2, 2, 2). Now we know how to multiply a scalar by a vector, so let's go ahead and do that here. So we get (1, 1, 2) plus (0, −3, 0) plus (−10, −10, −10). Okay, now we simply want to add these three vectors, so we'll just go ahead and add each of the entries. So 1 + 0 − 10, which is −9; 1 − 3 − 10, that's −12; 2 + 0 − 10, which is −8. Okay, so we went ahead and simplified that linear combination, and we get this other vector here as the final result. We can also multiply a vector x by a matrix A. So, for example, let's say A was this matrix and x was the vector (5, 2, 4). We can multiply the matrix A by the vector x like this. Okay, we put the vector x right next to the matrix A. The way to multiply the matrix with the vector is to first look at the first row and multiply the first entry here, the 1, by 5, the first entry in this vector. Do the same with the 0, which corresponds to the 2, and the 2 multiplies the 4. So what we get is 1 times 5 plus 0 times 2 plus 2 times 4. Okay, do the same thing with the second row here: 3 times 5 plus 1 times 2 plus negative 1 times 4. Okay, go down to the third row here: 2 times 5 plus 2 times 2 plus 0 times 4. Now just simplify here. So we'll get 5 + 0 + 8, 15 + 2 − 4, and 10 + 4 + 0, and we get the final result after adding these terms. 7. Vector Equations and the Matrix Equation Ax=b: We're now ready to look at vector equations and the matrix equation Ax = b. Recall the system of equations x + y + z = 0, −x + 2y + 3z = 1, 3x − 3y + z = −1. We solved this system of equations in an earlier lecture.
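Both computations above, the linear combination and the matrix-vector product, can be reproduced in NumPy; this is an illustrative check of the lecture's numbers, not code from the course:

```python
import numpy as np

# The linear combination 1*v1 + 3*v2 - 5*v3 from the lecture:
v1 = np.array([1, 1, 2])
v2 = np.array([0, -1, 0])
v3 = np.array([2, 2, 2])
print(1*v1 + 3*v2 - 5*v3)   # [ -9 -12  -8]

# The matrix-vector product Ax from the lecture:
A = np.array([[1, 0,  2],
              [3, 1, -1],
              [2, 2,  0]])
x = np.array([5, 2, 4])
print(A @ x)                # [13 13 14]
```

The `@` operator performs exactly the row-by-entry multiplication described in the lecture: each output entry is one row of A dotted with x.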
I just want you to recall that we did look at this system of equations. Now, we can rewrite it as follows. Look at this part here: we can think of it as a column. Okay, so let's write x, −x, 3x. Now look at the y terms. That can be a column, so let's write y, 2y, −3y. Now look at the z terms, and let's write that as a column. And the numbers on the right-hand side, let's write those as a column. Okay, now let's factor out this x variable here, so pull that out; same thing with the y variable, pull that out, and the z variable, pull that out. Okay, now look at this bottom equation here. To ask if there is a solution to the system of equations, our original system up here, is the same as asking if we can write (0, 1, −1), this column vector, as a linear combination of the column vectors here. Okay, so we've taken the column vectors of the coefficient matrix of the original system of equations, and we've written it here as a linear combination, where x, y, and z are the weights. And (0, 1, −1) is just the column vector of constants that we have on the right-hand side of the original equation. Now let's define the span. The span of a set of vectors is the set of all linear combinations of those vectors. Thus, we want to know if the vector (0, 1, −1) lies in the span of those column vectors we had earlier. Okay, remember, let's go back here. We were looking at this column vector, and we were saying that to find a solution to the original equation is like asking: can we write this column vector as a linear combination of the columns of this coefficient matrix up here? So these are the columns right here, the column vectors.
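The question just posed, whether (0, 1, −1) lies in the span of the columns, can be checked numerically. This is a sketch of that check: solve Ax = b for the weights, then confirm that b really is the linear combination of A's columns with those weights.

```python
import numpy as np

# Coefficient matrix and right-hand side of the system
#   x + y + z = 0,  -x + 2y + 3z = 1,  3x - 3y + z = -1
A = np.array([[ 1.0,  1.0, 1.0],
              [-1.0,  2.0, 3.0],
              [ 3.0, -3.0, 1.0]])
b = np.array([0.0, 1.0, -1.0])

w = np.linalg.solve(A, b)        # the weights x, y, z
# b is then exactly this linear combination of the columns of A:
combo = w[0]*A[:, 0] + w[1]*A[:, 1] + w[2]*A[:, 2]
print(np.allclose(combo, b))     # True
```

Because a solution exists, b lies in the span of the columns, which is precisely the equivalence the lecture is describing.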
And we were asking, can we write this constant vector on the right-hand side as a linear combination of the columns of that original coefficient matrix for this system of equations? Okay, but that's just the same as asking whether this column vector lies in the span of those column vectors; that's just by definition of span. Note that the vector equation we had earlier, just rewriting it here, can be written as a matrix equation like this. So we take the columns and we form the matrix having those columns, then we take the weights x, y, and z and put them here as one column vector, and on the right-hand side we have the constant vector. Okay, now the left-hand side here, we want to see what happens when we multiply this out using matrix multiplication. So if we multiply the left-hand side out, we'll get this, and we can rewrite this as a sum of the columns, and that's equal to this if we factor out the variables. Okay, now this right here, remember, that's just the linear combination of the column vectors, and that's this here. Okay, we've shown that this is just what we have on the left-hand side down here in the matrix equation. So really, this equation can be rewritten like this using a matrix, and this is what we have shown here. Okay, so let me repeat what's going on here. Our original system of equations can be rewritten as a matrix equation Ax = b, where A is the coefficient matrix, x is the column vector of weights, which looks like this: x, y, and z, and b, that column vector on the right-hand side, is our column vector of constants, which was (0, 1, -1). 8. Linear Independence: In this lecture, I want to introduce the notion of linear independence. Let v1 through vk be vectors in R^n. Okay, R^n is just the set of vectors which have n entries.
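The matrix equation Ax = b for the lecture's system can be solved numerically; the solution vector is exactly the set of weights that expresses b as a linear combination of the columns of A:

```python
import numpy as np

# Coefficient matrix and constant vector for the system in the lecture:
# x + y + z = 0,  -x + 2y + 3z = 1,  3x - 3y + z = -1
A = np.array([[1.0, 1.0, 1.0],
              [-1.0, 2.0, 3.0],
              [3.0, -3.0, 1.0]])
b = np.array([0.0, 1.0, -1.0])

# Solving Ax = b finds the weights x, y, z
x = np.linalg.solve(A, b)
print(x)

# b is indeed the linear combination x[0]*col1 + x[1]*col2 + x[2]*col3,
# so b lies in the span of the columns of A
assert np.allclose(A @ x, b)
```

Since a solution exists, b lies in the span of the columns of A, which is the point being made above.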
So each of these vectors v1 through vk, they're going to be lists of n real numbers. Then the set v1 through vk is linearly independent just in case the vector equation c1 v1 + ... + ck vk = 0 has only the trivial solution, the trivial solution being where all the constants are zero. Okay, so the only constants c1 through ck that make this vector equation true are those which are zero everywhere. So c1, c2, up through ck, they're all zero. That's the trivial solution. Okay, otherwise the set is said to be linearly dependent. Okay, so if c1 through ck don't all have to be zero, if, say, one of them was nonzero and this equation was still true, then the original set of vectors v1 through vk is said to be linearly dependent. Note that the vector equation c1 v1 + ... + ck vk = 0 can be rewritten as Ax = 0, where A is the matrix with v1 through vk as the columns. So if we were to put v1 through vk like this as columns and form the matrix like that, then that's our A right there, and the x here, that vector, that's going to be our constants c1 through ck; those are like the weights. Okay, so this vector equation can be rewritten as a matrix equation like this, Ax = 0. It's just an alternative way of writing it, but they're the same. 9. Linear Independence Example 1: Let's do an example. Determine if the set v1, v2, v3 is linearly independent. The first vector v1 is given as this vector, and v2 is given here, and we have v3. Okay, so we want to know if this set of vectors is linearly independent. Let's form the matrix that has v1, v2, and v3 as columns. Okay, so let's take v1 and form the first column. Take v2 for
the second column, and v3 for the third column. And we want to know what c1, c2, and c3 are. This is our column vector of the weights, c1, c2, and c3; set that equal to zero, the zero vector, which has zeros in every entry. Now we want to know if this system of equations has only the trivial solution. Okay, let's form the augmented matrix, and then let's do a bunch of row operations on this augmented matrix to solve for the solution. If we get the trivial solution, then we know that would be the only solution, and so the original set of vectors would be linearly independent. If we find out this system of equations actually has nontrivial solutions, then we know that the original set of vectors is linearly dependent. Okay, so let's look at the first two rows. Let's add the first two rows, so R1 plus R2; that'll give us our new row two. Look at the second and third rows. Let's take the second row and subtract the third row, and let's make that our new third row. Look at the second row; notice 7 is a leading coefficient. Let's make that a 1, so divide out by 7. Okay, look at the second row here. Notice the variable c3 is free; it can be anything. So let's let c3 be t, a free parameter. Now, from the second row, we know that c2 + c3 = 0. So plug in t for c3 and solve for c2; okay, we get minus t. From the first row, we get c1 + 3 c2 + 13 c3 = 0. Plug in minus t for c2, plug in t for c3, and simplify. Solve for c1, and we get minus 10t. Okay, so now we have c1, c2, and c3. Let's plug the values in for each of those c values. We get minus 10t, minus t, and t. Let's factor out a t. Okay, so our set of solutions looks like this: it's all multiples of this vector, (-10, -1, 1). So there's a whole bunch of solutions here, and they're nontrivial, meaning that they're not the zero vector.
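The rank test behind this example is easy to run by machine. The lecture's v1, v2, v3 are only shown on screen, so the vectors below are hypothetical, chosen so that v3 = 10 v1 + v2, which reproduces the same dependence relation t(-10, -1, 1) found above:

```python
import numpy as np

# Hypothetical vectors (the lecture's v1, v2, v3 are only shown on screen);
# v3 = 10*v1 + v2, so this set is linearly dependent by construction
v1 = np.array([1, 0, 2])
v2 = np.array([0, 1, 1])
v3 = 10 * v1 + v2

A = np.column_stack([v1, v2, v3])

# The columns are linearly independent exactly when rank(A) equals the
# number of columns, i.e. when Ax = 0 has only the trivial solution
independent = np.linalg.matrix_rank(A) == A.shape[1]
print(independent)  # False

# Check the dependence relation -10*v1 - v2 + v3 = 0
print(-10 * v1 - v2 + v3)  # [0 0 0]
```

A full-rank column matrix means only the trivial solution; anything less means a nontrivial dependence relation exists.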
Okay, now, since the system has a nontrivial solution, the set v1, v2, v3 is linearly dependent. So, for example, let t equal 1. Then c1 is minus 10, c2 is minus 1, and c3 is 1; that's from plugging in t equals 1 here. Okay, so I would just get (-10, -1, 1) for my c values. That's what I have here. Remember that this matrix equation is equivalent to the vector equation, which has a linear combination of the column vectors of this matrix with weights c1, c2, and c3. So we would have c1 v1 + c2 v2 + c3 v3 = 0. Plugging in the values for c, we get -10 v1 - v2 + v3 = 0. So this vector equation here is true, and therefore we have a linear dependence relation between the vectors v1, v2, and v3. 10. Linear Independence Example 2: Let's look at a second example. Suppose you're given vectors v1, v2, and v3, and you want to determine if these vectors are linearly independent. Let's form the augmented matrix using the vectors as columns, where the right-hand side is the column of zeros. Now let's try to solve this augmented matrix. Okay, look at row one and row three. If we were to add these rows, we would get a zero. So let's take row one plus row three and make that our new row three. Let's look at the second row; notice the leading coefficient is 2. Let's make that a 1, so divide row two by 2. Now look at the second row and third row. Let's multiply the second row by minus 3, then add it to the third row. Okay, so that will give us minus 3 plus 3, which is 0; 6 plus 8, that's 14; and 0 plus 0 is 0. So we end up with this here. From the third row, we know 14 c3 = 0, so c3 = 0. From the second row, we know c2 - 2 c3 = 0, but c3 = 0, so we get c2 = 0. From the first row, we get c1 + 3 c2 + 4 c3 = 0.
But we know c2 is zero and c3 is zero, so c1 is zero. Since c1, c2, and c3 are all zero, we know that the original set of vectors v1, v2, and v3 is linearly independent. 11. Matrix Operations Addition and Scalar Multiplication Corrected (Am): In this lecture, we're going to look at matrix operations. But first we want to know what a matrix is. An m by n matrix is an array which looks like this. So the first entry is a_11; the next one after that is a_12, all the way until we reach a_1n. In the second row, we have the terms a_21, a_22, dot dot dot, a_2n, and we go all the way down until we reach a_m1, followed by a_m2, all the way until we reach a_mn. So there are m rows here and n columns; that's what makes it an m by n matrix. We can add two matrices if they have the same size. So, for example, let's say we had these two matrices and we want to add them. We just go ahead and look at each of the corresponding entries: 2 plus 0 is just 2; minus 1 plus 0, which is minus 1; 0 plus 8 is 8. Then we move on to the next row. So 1 plus 8 is 9, 1 plus 2 is 3, and minus 1 plus 3 is 2. So adding two matrices is pretty straightforward. Now, we can also multiply a matrix by a scalar. So let's do an example of that. Let's say the scalar was 3, and let's say the matrix A is given by this. Okay, let's take c times A, which is 3 times the matrix A, and the way we multiply a scalar to a matrix is by just taking that scalar and multiplying every entry in the matrix by that number. So 3 times minus 1 is minus 3, 3 times 2 is 6, 3 times 4 is 12, 3 times 0 is 0, 3 times 1 is 3, and 3 times 1 is 3. Let's do another example. Okay, so let's multiply c times A. So that's negative
2 times this matrix. We go ahead and multiply that negative 2 into every entry inside, and that's what we get. We can define subtraction of two matrices, A minus B; we can define that as A plus negative 1 times B. For example, let's say A is given by this and B is this, and we want to find A minus B. Well, that's just A plus negative 1 times B. Okay, so let's multiply B here by negative 1, and now we just add. 12. Matrix Operations Multiplication: We can also multiply two matrices A and B, as long as the number of columns of A is equal to the number of rows of B. Let's take a look at an example. Let's say A is a 2 by 3 matrix and B is a 3 by 2 matrix. Now let's multiply A times B. This will give us a 2 by 2 matrix. Notice that the number of columns of A is 3, and that matches up with the number of rows of B. We need this because we're going to multiply each entry with its corresponding entry here. So the 1 lines up with the 1, the 3 lines up with the 1, and the minus 1 lines up with the minus 1. Okay, so we're going to multiply these corresponding entries and add them up. So we get 1 plus 3 plus minus 1 times minus 1, which is 1, so that's going to be 5. Okay, now, staying with this first row, move on to the second column and go ahead and multiply each corresponding entry. We get 0 plus 0 minus 1, which is minus 1. Okay, now we move on to the second row here and use the first column of B. Okay, multiply this out. We get 0 times 1, which is 0, plus 2 minus 2, which gives 0. Okay, now, for this last entry here, we use that second row and we move on to the second column of B. So that will give 0 plus 0 plus 2, which is 2. So this is our result of multiplying A times B. Now what if we had a matrix C, which is 2 by 3? Then we can't multiply A and C. Remember, A was this matrix, and C is this. Look at the number of columns of A: that's 3.
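This shape rule is exactly what NumPy enforces. The entries below are inferred from the arithmetic in the example above (the matrices themselves are only shown on screen), and a deliberately mismatched product shows what happens when the columns of the left factor don't match the rows of the right:

```python
import numpy as np

# A is 2x3 and B is 3x2 (entries consistent with the arithmetic above)
A = np.array([[1, 3, -1],
              [0, 2, 2]])
B = np.array([[1, 0],
              [1, 0],
              [-1, 1]])

# Columns of A (3) match rows of B (3), so A @ B is defined and is 2x2
print(A @ B)  # [[ 5 -1]
              #  [ 0  2]]

# A 2x3 matrix C cannot be multiplied on the right of A: 3 != 2
C = np.array([[1, 0, 0],
              [0, 1, 0]])
try:
    A @ C
except ValueError:
    print("shapes (2, 3) and (2, 3) don't line up, so A @ C is undefined")
```

The 2x2 result matches the four dot products worked out in the lecture.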
Then look at the number of rows of C, which is 2, and they don't match, so we can't multiply them. Okay, this matrix A is 2 by 3, and C is 2 by 3 as well, and since 3 does not match up with 2, we can't multiply these two matrices. If the number of rows is the same as the number of columns, then the matrix is called a square matrix, and we can multiply two square matrices in either order. Let's do an example. So let's say we had A given as this matrix here, and let's say B was this. Then we can multiply A times B. Okay, so we have two columns here and two rows, so they match up. Now multiply the corresponding entries. We get 6 plus 0, which is 6; minus 2 plus 0; 3 minus 4; and minus 1 plus 0. So we get this. Let's try multiplying B times A, in the reverse order. So here we go: there's B multiplied by A. Okay, so this is 6 minus 1, which is 5; 0 plus 2; 4 plus 0; and 0 plus 0. Okay, notice that A times B is not the same as B times A; we get two different results if we multiply in the reverse order. Okay, so note that AB is not equal to BA, and this is true in general. In general, matrix multiplication is not commutative. 13. Commutativity, Associativity, and Distributivity: In this section, we're going to look at properties of matrix addition and scalar multiplication. The first two properties we're going to look at are commutativity and associativity. Matrix addition is commutative and associative. For the commutative property, we write it like this: A + B = B + A. So the commutativity property for addition just says that the order doesn't matter; you can add in either order. Associativity goes like this: A + (B + C) is the same as adding A and B first and then adding C. Let's do some examples. Let's say A was this matrix, B is this matrix, and C is given by this. Okay, um, let's do A plus B. Okay, so we're going to add these two.
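The non-commutativity just noted is worth checking by machine. The lecture's square matrices are only shown on screen; the pair below is one choice consistent with the products computed above:

```python
import numpy as np

# One pair of 2x2 matrices consistent with the products computed above
# (the lecture's A and B are only shown on screen, so these are inferred)
A = np.array([[2, 0],
              [1, -2]])
B = np.array([[3, -1],
              [2, 0]])

print(A @ B)  # [[ 6 -2]
              #  [-1 -1]]
print(B @ A)  # [[5 2]
              #  [4 0]]

# Matrix multiplication is not commutative in general
print(np.array_equal(A @ B, B @ A))  # False
```

The two products differ in every entry, which is the general situation: AB = BA only for special pairs of matrices.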
Okay, so let's just go ahead and add each corresponding entry. All right, that's A plus B. Let's add them in the reverse order, so B plus A. Okay, let's go ahead and add the corresponding entries, and we get this, and notice it's the same as A plus B. Okay, so addition for matrices is commutative. Let's do an example of associativity. Let's do A plus (B plus C). Remember what B and C were. Okay, let's rewrite A and add the stuff inside the parentheses; so that's our new matrix here. Now we go ahead and simply add. Okay, so that's what we got for this. Now let's do (A plus B) first and then add C, and let's see if we get the same thing. Okay, so (A plus B) plus C. Let's add the two matrices inside the parentheses, then add the two matrices that we get here, and notice it's the same as what we got earlier for A plus (B plus C). Okay, so addition is also associative for matrices. Let's look at another property. Let c and d be scalars. Then we have this property: (cd) times A is the same as c times (d times A). For example, let c be 2 and d be minus 1, and let A be this matrix. Then cd is negative 2, and (cd)A becomes minus 2 times A, and we get this. That's what we have here on the left-hand side. Let's check the right-hand side. So c(dA), that's 2 times (minus 1 times A). Okay, so that's 2 times this matrix; multiply it through by the 2, and we get this. Notice it's the same as what we had earlier here. So you can either multiply the matrix A by cd, or you can first do d and then do c; that's what this property says. Next we have the distributivity properties. Now, one property goes like this: c times (A plus B) is the same as c times A plus c times B. Another distributivity property tells you that (c plus d) times A is cA plus dA. Let's do an example. So for the first one, recall that c was 2, and A was this, and B was this. Okay, so c times (A plus B), that's 2 times this matrix
plus this, and that's equal to 2 times this matrix, and we get this. Let's do cA plus cB: multiply through by the 2 and add, so we get the same answer as we got earlier. 14. Identities, Additive Inverses, Multiplicative Associativity and Distributivity: There's an additive identity for matrices, and the additive identity is the matrix with zeros everywhere, so it looks like this for a 2 by 2 matrix. If we add the zero matrix, as it's called, to a matrix, let's say this was the matrix, then we just get back that same matrix. Okay, so it's like adding zero to a number. Additive inverses exist for matrices, and the additive inverse of A is negative A. So let's say A was this. If we add the negative of that, then we get this, and if you add all the corresponding entries, we get zeros everywhere. Okay, so if we add to a matrix A its additive inverse, which is just negative A, we'll get the zero matrix. We have some further properties: associativity and distributivity for matrix multiplication. Okay, so the associativity property goes like this: A times (BC) is the same as (AB) times C. For distributivity, we have A times (B plus C); well, the A distributes, so we get AB plus AC. We also have distributivity on the right. So let's say we had (A plus B) times C; then this C distributes to each term inside the parentheses, so we'll get AC plus BC. We also have this property where, if we have a scalar c, then c times (AB) is the same as (cA) times B, and that's the same as A times (cB). Okay, so this means we can pull that c out in front here, and this here will be the same as this. It's as if we're pulling the c out. Let's do some examples. Here's an example of associativity. So let's do (AB) times C. Okay, let's multiply the two matrices inside the parentheses, then multiply this out, and you should get this. Now let's do A times (BC) and see if we get the same result.
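The scalar identities from this section hold for any matrices of the same size, so they can be spot-checked on arbitrary values; the matrices below are randomly generated rather than the lecture's on-screen examples:

```python
import numpy as np

# Arbitrary 2x2 integer matrices (hypothetical, not the lecture's examples)
rng = np.random.default_rng(0)
A = rng.integers(-5, 5, (2, 2))
B = rng.integers(-5, 5, (2, 2))
c, d = 2, -1

# (cd)A == c(dA)
assert np.array_equal((c * d) * A, c * (d * A))
# c(A + B) == cA + cB
assert np.array_equal(c * (A + B), c * A + c * B)
# (c + d)A == cA + dA
assert np.array_equal((c + d) * A, c * A + d * A)
print("all scalar-multiplication identities hold")
```

Changing the seed or the matrix size leaves every assertion true, since these are identities, not coincidences of particular entries.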
Okay, multiply these two matrices inside the parentheses. You should get this and now multiply these two matrices and you get the same result. So this demonstrates the associative ity property. - Okay , let's try this. Let's do see times a B. So that's three times a B. Multiply that three to each term. Let's check what C A Times B is Okay, that's three times a a times B. Okay, so that's this. A times B. OK, notice this right here is the same as this result. So see, a times B is the same A c times a b and let's check a time CB Let's multiply this out and we get the same result. We also have a multiplication of identity element for matrices and we call it I It has ones along the diagonal and zeros everywhere else. So this will be the identity element for two by two matrix. He's if you multiply, I buy any matrix A then it just gives you back that same matrix. Let's try this for a Okay. Now multiply these two out we get one zero zero minus one. OK, notice this is the same as a. Similarly, if you multiply by, I on the right hand side, you'll just get back a again. Okay? So multiply this else and you just get back a Okay, so I this matrix, I it's like one. When you multiply anything by one, you just get back whatever that thing is. 15. Transpose of a Matrix: in this lecture, we're going to learn about the transpose of the Matrix. The transpose of the Matrix is the matrix that you get when you swapped the columns and rows of the Matrix. So, for example, let's say a was this matrix. Then the transpose written like this with a T. That's the matrix that you get by taking the first row, which is 10 making that the first column and taking the second row and making that the second column. Okay, so you're just swapping the rows and columns. That's the transpose of the Matrix. A. Let's do another example. Let's say B is our matrix here than the transpose of B. It's going to be this. We take the first row of B. 
make that the first column; look at the second row, make that the second column; and finally look at the third row, make that the third column. Now the transpose satisfies a bunch of properties. The first one goes like this: the transpose of the transpose just returns the original matrix. If you take the sum of two matrices and then take the transpose, that's like adding the transposes of each. If you take a scalar c, multiply it by A, and take the transpose, that's the same as c times A transpose. And if we multiply A and B and then take the transpose, we get B transpose times A transpose. Okay, so it reverses the order but puts a T above each matrix. Let's see an example. Let A be this, and let B be this matrix. Multiply A and B, and you should get this. Let's take the transpose of AB. Well, we already have AB here; let's swap the rows and columns. Let's see if we get the same thing with B transpose times A transpose. Okay, B transpose is, um, let's see, this; A transpose is this. Multiply this out, and this is what you should get. It's the same as what we got here. 16. Inverse Matrix: In this section, we're going to look at the inverse of a matrix. We've explored addition, scalar multiplication, subtraction, and matrix multiplication for matrices. We've seen that a matrix always has an additive inverse. We might wonder if a matrix has a multiplicative inverse. The inverse of a matrix A is any matrix B such that when you multiply A on the left and on the right by the matrix B, you get the identity matrix, so it looks like this: AB = BA = I. Okay, so if A is given and we want to know if A has an inverse, we ask if there's another matrix B such that when you multiply on the left and right like this, you get the identity. The inverse of A is denoted like this, with a little minus one exponent. In matrix algebra, there is no division, but the analog of division is the inverse matrix. Let's do an example. So let's say A is given as this matrix.
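Before moving on, the transpose properties listed above can be spot-checked numerically; the matrices below are hypothetical, since the lecture's examples are only shown on screen:

```python
import numpy as np

# Hypothetical matrices (the lecture's A and B are only shown on screen)
A = np.array([[1, 2],
              [0, 3]])
B = np.array([[4, -1],
              [2, 5]])

# (A^T)^T == A
assert np.array_equal(A.T.T, A)
# (A + B)^T == A^T + B^T
assert np.array_equal((A + B).T, A.T + B.T)
# (cA)^T == c A^T
assert np.array_equal((3 * A).T, 3 * A.T)
# (AB)^T == B^T A^T  -- note the reversed order
assert np.array_equal((A @ B).T, B.T @ A.T)
print("all transpose properties hold")
```

The reversed order in the last property is the one that trips people up: A.T @ B.T is generally a different matrix (or not even defined, for non-square shapes).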
Then we can find the inverse of A. Actually, there's a formula for that for 2 by 2 matrices. So let's say A was like this, with entries a, b, c, d. Then the inverse is given by this formula: 1 over (ad minus bc), times the matrix that you get if you swap the a and d and then attach a minus sign to b and c. Okay, so this is the formula for the inverse of A, and it only applies for 2 by 2 matrices. Okay, let's apply that here to this matrix. So take 1 over (ad minus bc), times the matrix that you get by swapping the 2 and 4 and attaching a minus sign here and here. Okay, so let's simplify this: distribute that 1 over 8, and we get this. Okay, so this is the inverse of A. Let's see if this actually gives you the identity matrix when you multiply with A. Okay, so A inverse times A is going to be this; multiply this out. You get 1, then 0 here; negative 1/4 plus 1/4, which is 0; and 1. Okay, so we do get the identity matrix on the right-hand side here. Let's multiply A by the inverse on the left, and we should also get the identity matrix. Okay, so that's 1, then 0, then 1/2 minus 1/2, which is 0, and 1 here. 17. Gauss-Jordan Elimination: To find the inverse of a matrix, we can use a process called Gauss-Jordan elimination. To do Gauss-Jordan elimination, we take the matrix A and adjoin the identity matrix. Okay, so that's the first thing. Then we perform row operations on the resulting matrix until we transform A into I, the identity matrix. Let's do an example. So let's say A was given like this. We take A and adjoin the identity matrix, just like that. Then we start performing row operations on this matrix until we get the left-hand side to look like the right-hand side. Okay, so let's take 1/4 times the first row; we'll get this. Now let's take row one and add it to row two. Okay, so we get 0, 2, 1/4, and 1. Now we have a 2 down here.
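The 2 by 2 formula is short enough to write out directly. The example matrix below is inferred from the arithmetic in the lecture (swapping the 2 and the 4, determinant 8), so treat it as a best guess at the on-screen matrix:

```python
import numpy as np

def inverse_2x2(M):
    """Inverse of a 2x2 matrix via the 1/(ad - bc) formula from the lecture."""
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible")
    # swap a and d, negate b and c, divide by the determinant
    return (1 / det) * np.array([[d, -b],
                                 [-c, a]])

# Inferred from the arithmetic above: ad - bc = 8
A = np.array([[2, 0],
              [1, 4]])
A_inv = inverse_2x2(A)
print(A_inv)      # [[0.5, 0], [-0.125, 0.25]]
print(A @ A_inv)  # the 2x2 identity matrix
```

Multiplying A by the result on either side returns the identity, which is the defining property of the inverse.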
Let's make that a 1, so take 1/2 of row two. Okay, so that will give us 1/8 and 1/2. Okay, now the resulting matrix on the right, right here, is A inverse. So the inverse is this. Let's do another example. Let's say A is equal to this. Okay, we form the matrix with A on the left and the identity matrix on the right, like this, and start performing row operations on the whole thing, but we want the left-hand side to end up looking like the identity. Let's do negative 2 times row one plus row two to get our new row two. Okay, so that's going to give 0; then 2 plus 1 is 3; minus 2 plus 0; 0 plus 1; and 0. Now, let's see, let's make this 3 right here a 1, so let's take 1/3 of row three and make that the new row three, and this is what we get. Same thing with row two: let's make this entry a 1 here, so take 1/2 of row two and make that the new row two, and this is what we get right here. Now let's try to make this 3/2 a 0. Let's do minus 3/2 times row three, add that to row two, to give us our new row two. Okay, so that will give us this; okay, so minus 1/2 here. All right, look at this: this is starting to look like the identity matrix. Okay, we just need this last part to be 0, this negative 1 here. Okay, let's do row one plus row three and make that the new row one. Okay, so just add the first and third rows; we get a 0 here, then 1, 0, 1/3, and this is what we get. Okay, since we have the identity matrix here on the left, what we're left with on the right-hand side is the inverse. So the inverse of A is given by that matrix on the right. 18. Gauss Jordan Elimination Additional Example: If we can't transform the matrix on the left into the identity matrix, then A is not invertible. Let's see an example of this. Let's say A was given by this matrix. Let's form the augmented matrix with A on the left and the identity matrix on the right. Okay, and let's start doing row operations.
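The whole adjoin-and-reduce procedure can be written as a short program. This is a sketch of the method described in the lecture, with partial pivoting added for numerical stability; the example matrix at the bottom is hypothetical:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by adjoining I and row-reducing [A | I] to [I | A^-1].

    A sketch of the procedure described in the lecture, with partial
    pivoting; raises ValueError if A cannot be reduced to I.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])          # the augmented matrix [A | I]
    for col in range(n):
        # find a row at or below `col` with a usable pivot in this column
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is not invertible")
        M[[col, pivot]] = M[[pivot, col]]  # swap rows
        M[col] /= M[col, col]              # scale pivot row to a leading 1
        for r in range(n):                 # clear the rest of the column
            if r != col:
                M[r] -= M[r, col] * M[col]
    return M[:, n:]                        # the right half is now A^-1

# Hypothetical example matrix
A = np.array([[2.0, 0.0],
              [1.0, 4.0]])
print(gauss_jordan_inverse(A))  # [[0.5, 0], [-0.125, 0.25]]
```

A singular input (one whose left half develops a row of zeros, as in the next example) raises the ValueError instead of returning an inverse.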
Let's do 2 times row two plus row one. Okay, so that's going to give a 0 here; 2 plus 1, that's 3; 2 plus 4 is 6; then 1, 2, 0. Let's see, this 2 right here in the first row, let's make that a 1, so take 1/2 of row one. Okay, now let's look at this 3 here and make that a 1, so take 1/3 of row two. Look at this 4 right here in the third row; let's divide out by 4. All right, let's take row two and subtract row three. We get a row of zeros here, okay, and this is what we get on the right. All right, since we get a row of zeros here, we can't transform the left-hand side into the identity matrix. Okay, so we can't transform A into I, so A is not invertible. In other words, A doesn't have an inverse. 19. Determinant of a 2 by 2 Matrix: In this section, we're going to look at determinants. Let's say we had a 2 by 2 matrix A which looks like this. The determinant of A is defined as ad minus bc. Okay, so it's a times d minus b times c, and the determinant is denoted like this; also, it's sometimes written with the matrix between two vertical bars. Let's do some examples. Suppose A is this matrix. Then let's find the determinant of A. So that's a times d minus b times c; that's 8 plus 3, and that's 11. Let's do another example. Suppose B is this matrix. Then the determinant of B is a times d minus b times c; that's 2 minus 6, which is minus 4. 20. Cofactor Expansion: To find the determinant of a three by three matrix, or of a larger matrix, we have to use what's called cofactor expansion. Let's take a look at an example. So let's say A was given by this matrix. First, we should assign plus and minus signs to each position in the matrix. Start with the first position and make it a plus, then alternate going across: plus, minus, plus. Going down, you also alternate, so the second row goes minus, plus, minus, and the third row goes plus, minus, plus.
Okay, so we have these plus and minus signs for each position in the matrix. We want to expand along the first row, and what we do is we take that first entry, 1. Here we have a plus sign, so we just leave it as 1. Multiply that by the determinant of the matrix that you get when you delete the first row and first column. Okay, so imagine that the first row and first column are deleted. What you end up with is just this matrix here: (1, 3; 0, 4). Okay, so let's write that here. And now we move on to the second entry, 0. So we add to that 0 times, now, the sign for that position is negative 1, so we multiply by negative 1, times the determinant of the matrix that you get when you delete the first row and second column. So that would just give you (2, 3; -1, 4). Okay, we move on to the last term in this row, which is negative 1. We have a plus sign in that position, so we just leave it as is and multiply by the determinant of the matrix that you get when you delete that first row and last column. So you get (2, 1; -1, 0). Now, this determinant right here, that's labeled M11; this determinant here is labeled M12; and this determinant is labeled M13. These determinants are called minors, so M_ij, that's the minor of a_ij, the (i, j)-th entry. Okay, and now if you take negative 1 to the power i plus j, okay, this part right here, that's the plus and minus signs that we saw earlier. If you multiply the minor by that plus or minus sign, that's called a cofactor. Notice here we took the cofactors and we multiplied each cofactor by the entry a_ij. Okay, so that's why we had 1 here multiplied by M11, plus 0 times the cofactor here, negative 1 times M12, and negative 1 times M13. So the determinant of A is just the sum of the entries a_ij multiplied by their cofactors. Here we expanded along the first row, but you can expand along any row, and the result is the same.
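The recipe just described, entry times sign times minor, can be implemented directly as a recursive function. The 3 by 3 matrix below is inferred from the minors written out above:

```python
def minor(M, i, j):
    """The matrix M with row i and column j deleted."""
    return [row[:j] + row[j+1:] for r, row in enumerate(M) if r != i]

def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    # sum over the first row: sign (-1)^(0+j) times entry times minor det
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(n))

# The 3x3 matrix from the cofactor-expansion example (inferred from the
# minors written out above)
A = [[1, 0, -1],
     [2, 1, 3],
     [-1, 0, 4]]
print(det(A))  # 3
```

For a 2 by 2 input the recursion reduces to ad - bc, so this one function covers both cases discussed so far; it works for any square matrix, though it gets slow beyond small sizes.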
Okay, let's simplify what we have up here: 1 times the determinant of this 2 by 2, which is 4, plus 0 times anything, which is 0, minus the determinant of this 2 by 2 matrix, which is 0 minus minus 1, which is 1. So we get 4 minus 1, which is 3. Okay, so the determinant of A is 3. 21. Cofactor Expansion Additional Examples: Let's expand along the second row and see if we get the same answer for the determinant of A. Okay, so remember the signs: the second row goes minus, plus, minus. So let's look at this second row right here. Take 2 times negative 1, because the sign right there is negative, times the determinant of the matrix that you get when you delete the second row and first column. Okay, let's move on to the next term, which is 1. The sign there is plus, so we don't have to multiply by anything there. Delete that second row and second column; we get (1, -1; -1, 4). Move on to the next term, 3. We have to multiply by negative 1 and then delete that second row and third column, so we get (1, 0; -1, 0). Now let's simplify. The determinant of the first matrix is 0; the determinant of the second is 4 minus 1, which is 3; and the last one is 0. Okay, so we get 0 plus 3 plus 0, which is 3. That's the same result that we had earlier when we found the determinant of A. We can also expand along any column, and the result is the same. So let's expand along the third column. This was our matrix A. Okay, let's expand along the third column, so we're going to start moving down this way. Okay, so minus 1; but remember the signs, going down the third column they alternate plus, minus, plus. Okay, now, the sign for that position is plus, so we don't have to do anything. Delete the first row and third column, and we should get this matrix. Okay, moving down, we get 3 times negative 1 times the determinant of the matrix that you get when you delete the second row and third column. Okay, and finally, the 4 times plus 1, which doesn't do anything,
and delete that third column and the last row, so we get (1, 0; 2, 1). Okay, now let's simplify this. Negative 1 times, 0 minus minus 1, which is plus 1, gives minus 1. Okay, so we get minus 1 plus 4, which is 3. Notice we get the same result as the earlier cofactor expansions. 22. Determinant of a Product of Matrices and of a Scalar Multiple of a Matrix: Let's look at properties of determinants. The first property goes like this: if A and B are n by n matrices, then the determinant of A times B is the same as the determinant of A times the determinant of B. For example, let's say A was given by this 2 by 2 matrix and B is given by this. Then the determinant of A, that's 2 minus 0, which is 2; the determinant of B is 6 minus 20, which is minus 14. And if we take the product, determinant of A times determinant of B, we get 2 times minus 14, which is negative 28. Let's take the product AB, and then let's find its determinant and see if we get the same result. Okay, let's multiply A and B: so minus 3 plus 10 is 7, and minus 4 plus 40 is 36. Okay, take the determinant of that; we get 0 minus 28, which is minus 28. All right, so it is the same. The second property we want to look at goes like this: if c is a scalar and A is an n by n matrix, then the determinant of c times A is c to the power n times the determinant of A. Let's do an example. Let's say c is 8 and A is this matrix. c times A is going to be this. Okay, so the determinant of c times A, that turns out to be minus 256 plus 192, which is minus 64. And n equals 2, since we're talking about a 2 by 2 matrix, so c to the power n is going to be 8 squared, which is 64. The determinant of A, remember what A was; A looks like this, so the determinant is minus 4 plus 3, which is minus 1. Okay, so c to the power n times the determinant of A is 64 times minus 1, which is minus 64. Okay, that's the same result we got earlier. Let's do another example.
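Both determinant properties can be verified numerically. The lecture's matrices are only shown on screen, so the pair below is hypothetical, chosen to match the determinants 2 and -14 computed above:

```python
import numpy as np

# Hypothetical 2x2 matrices, chosen to match the determinants above;
# the two properties hold for any square A and B of the same size
A = np.array([[1.0, -3.0],
              [0.0, 2.0]])   # det(A) = 2
B = np.array([[2.0, 4.0],
              [5.0, 3.0]])   # det(B) = 6 - 20 = -14

# det(AB) == det(A) * det(B)
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# det(cA) == c**n * det(A), where n is the size of the matrix
c, n = 8, 2
assert np.isclose(np.linalg.det(c * A), c**n * np.linalg.det(A))
print("determinant properties hold")
```

Note the exponent in the second property: scaling a 2 by 2 matrix by c scales the determinant by c squared, not by c, because every one of the n rows picks up a factor of c.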
Our scalar is three, and the matrix A is a three by three matrix. Okay, let's find c times A. Well, that's just A with the three distributed everywhere. Now the determinant of cA: okay, look at this second column right here; it has a bunch of zeros in it, so it makes sense to expand along this second column. Remember cofactor expansion: if we expand along this column, then the first two terms will be zero. Okay, so this will be zero times negative one times the determinant of this matrix, plus zero times the determinant of this, plus twelve times negative one times the determinant of this. Okay, notice the first two terms are zero, so we just get minus twelve times, 27 plus 18; that's minus twelve times 45, and that's negative 540. Okay, now let me rewrite A here. So for the determinant of A: if we expand along this second column, we get four times minus one times the determinant of this. So that's minus four times, three plus two; so minus 20. And c to the power n is three to the power three, which is 27. Okay, so c to the power n times the determinant of A is 27 times negative 20, which is minus 540. Okay, that's the same result we got earlier. 23. Determinants and Invertibility: In this lecture, I want to talk about determinants and invertibility. There is a nice connection between determinants and invertibility: a matrix which is invertible has a nonzero determinant. Furthermore, if a matrix has a nonzero determinant, then it's invertible. Okay, so let me write that here: A is invertible, in other words it has an inverse, if and only if the determinant of A is nonzero. Okay, let's do an example. Determine if the matrix is invertible. Let's say A is given like this. Okay, so let's find the determinant of A. Let's expand along the first row: so minus two times the determinant of this matrix, plus three times negative one times the determinant of this matrix, plus one times the determinant of this.
Okay, so the determinant of this one here is four minus five; the determinant of this is eight minus two; and the determinant of this is twenty minus four. So, simplifying, I get this, which is zero. And since the determinant of A is zero, we know that A is not invertible. Okay, let's do another example. Let's say A is this matrix. All right, let's find the determinant of A. Okay, let's look at our matrix A here. There's a zero right here, so let's expand along the first column. Okay: one times the determinant of this matrix, plus zero times whatever (we don't care about that), so we move on to the third entry here: plus one times the determinant of this matrix. Okay, so that's four minus six, which is negative two, plus, minus two minus eighteen, which is minus twenty. So I get minus twenty-two, and that's not zero, so A is invertible. Now, if we do have an invertible matrix A, we can find the determinant of the inverse by using the following formula: the determinant of the inverse of A is one over the determinant of A. Okay, let's do an example. Let's say A is our matrix given like this. All right, we saw that the determinant is negative twenty-two, so the determinant of the inverse of A is one over negative twenty-two. 24. Determinants and Transposes: The determinant of the transpose of a matrix is the same as the determinant of the matrix. Okay, so the determinant of the transpose of A is the same as the determinant of A. For example, let's say A is this matrix; then A transpose is this. We just swapped the rows and columns. Let's try to find the determinant of the transpose. Look at this second column; let's expand along that. So we get two times... well, remember the signs; they go like this: plus, minus, plus; minus, plus, minus; plus, minus, plus. So the first term, we just ignore that, because it's zero. We move on to the two, but it has a plus sign, so it doesn't really matter; we just take two.
And then we cross out this column and the second row, so we get the matrix 1, 1, 9, 2. Okay, now look at this two down here. We add two times negative one, because of the sign, multiplied by the determinant of 1, 1, -1, 3. So the first piece is two times, two minus nine, which is negative seven; then minus two times, three plus one, which is four. So that's minus fourteen minus eight, which is negative twenty-two. Recall from an earlier lecture that the determinant of A was negative twenty-two. Okay, so taking the transpose makes no difference to the determinant. 25. Vector Space Definition: In this section, we're going to look at vector spaces. A vector space is a set V together with addition and scalar multiplication such that the following ten properties hold. Okay, let u, v, and w lie in V, and let c and d be real numbers. Then the first property is that when you take u and v and add them, you get an element which is still in V. This is called closure under addition. Okay, the second property is that when you add u and v, it's the same as adding v plus u, and this is called commutativity; commutativity under addition. Okay, the third property says that (u plus v) plus w is the same as u plus (v plus w). That's associativity; associativity under addition. Okay, the fourth property is that there is a zero vector, denoted like this, such that when you add it to u, you just get back u. Okay, so if you add zero to any vector u, you just get back u. Okay, so the zero vector is called the additive identity. The fifth property says that for each element u in V, so each vector u in V, there's an additive inverse, denoted minus u, such that u plus minus u gives you zero. Okay, minus u is called the additive inverse. The sixth property says that c times u lies in V. This is called closure under scalar multiplication. I'll put a bar over the u sometimes to denote that u is a vector.
Okay, now, the seventh property says if you take c and multiply with u plus v, the c distributes: so c u plus c v. This is distributivity. The eighth property says when you take c plus d and multiply by the vector u, you also get distributivity: so c u plus d u. Let's just call this distributivity again, okay? The ninth property says that c times, d u, is the same as c d times u; that's called associativity. And finally, the tenth property: if you take the scalar one and multiply by the vector u, you just get back u. That's called the scalar identity. Okay, so there you have it: the ten properties of a vector space. If any set V satisfies these ten properties, then it's a vector space. 26. Vector Space Example: In this lecture, we're going to look at an example of a vector space. R^2 is a vector space. Okay, we're going to try and prove that R^2 is a vector space. Let u be (u1, u2), v be (v1, v2), and w be (w1, w2). Okay, let these be vectors in R^2. Now note: u1, u2, v1, v2, and w1, w2 all lie in R, so they're all real numbers. Okay, so let's try to prove each of the vector space properties. The first one is closure. So let's take u plus v; that's (u1, u2) plus (v1, v2). Okay, let's add these two together. So the first component becomes u1 plus v1; the second component is u2 plus v2. Okay, now the question is: does this lie in R^2? Well, u1 and v1 are both real numbers, so when we add two real numbers, we get another real number. So u1 plus v1 lies in R, and same with u2 plus v2. Okay, this is by closure of the real numbers. Okay, so since each component here is a real number, this whole thing lies in R^2. Okay, let's try to prove the second property: u plus v. Okay, let's write u and v in their component forms. Okay, now look at this right here: we can flip these.
So that becomes v1 plus u1. This also we can flip, and this is because we have commutativity under addition of the real numbers. Okay: u1 plus v1, that's v1 plus u1; u2 plus v2 is the same as v2 plus u2. Okay, that's because the real numbers are commutative under addition, and that's how we got this step right here. Now, this can be rewritten as (v1, v2) plus (u1, u2); but that's just v, and that's just u. Okay, so u plus v is v plus u, and we have commutativity. Now for the third property, associativity. Let's rewrite the left-hand side. Now, let's add the two terms inside the parentheses. Now, let's add these two vectors. So this is what we get: u2 plus, v2 plus w2, in the second component. Okay, now here we can switch this around as, u1 plus v1, plus w1; same with the second component. And that's by associativity of the real numbers; associativity under addition. Okay, now, let's rewrite that as (u1 plus v1, u2 plus v2) plus (w1, w2), and that's equal to this. Okay, but this is just u, and that's v, and that's just w. Okay, so we've shown associativity. Okay, for property four: (0, 0) is the additive identity. Okay: u plus zero is equal to (u1, u2) plus (0, 0), which is (u1 plus 0, u2 plus 0), and that's just (u1, u2), because zero is the additive identity in R. Okay, so u1 is a real number, and if we add zero, we just get back u1; same thing for u2. Okay, now, this is just u, so we get that u plus zero is u. And we know that the zero vector is (0, 0). Now, for the fifth property: minus u is the additive inverse of u. Okay: u plus minus u, that's going to be (u1, u2) plus (negative u1, negative u2). Okay, minus u is just defined to be this, where each component has a negative sign.
Now we add, and we get u1 minus u1, which is 0, and u2 minus u2, which is 0. That's because negative u1 is the additive inverse of u1 in R, and similarly negative u2 is the additive inverse of u2 in R. Okay, and this, of course, is the zero vector. 27. Vector Space Example Continued: Okay, let's look at property six. Take c times u and multiply the c through, so we get (c u1, c u2), and we want to know if this lies in R^2. Okay, now, c u1 and c u2 both lie in R, because we have closure under multiplication in R. Okay, since c is a real number and u1 is a real number, when we multiply them we get c u1, which is also a real number by closure in R; similarly, c times u2 is a real number by closure of multiplication in R. Okay, let's look at property seven: c times, u plus v, equals c u plus c v. Starting with the left-hand side, let's rewrite u in component form, and same with v. Now add the two. The c multiplies through. Now the c right here distributes to each term inside the parentheses, so we get c u1 plus c v1; same with this: we get c u2 plus c v2. Okay, that's because of distributivity. Okay, since c and u1, v1, u2, and v2 are all real numbers, we have distributivity in the real numbers, and that's what we applied right here. Okay, so we can rewrite this here as (c u1, c u2) plus (c v1, c v2). Okay, now, looking at this part right here, pull the c out; also right here, pull the c out. This is just u, and this is just v. Okay, so we do get c u plus c v. Property eight: c plus d, times u. We want to check that this is the same as c u plus d u. Okay, let's start with the left-hand side. Rewrite u in component form. Distribute that scalar c plus d. Now distribute: this is by distributivity in R, since these are all real numbers right here — c, d, u1, and u2 — so we can distribute.
Okay, rewrite this like this: pull the c out and pull the d out. This is just u, and this is u. Okay, so we do get c u plus d u. Let's prove property nine: c times, d u, equals c d times u. All right, let's start with the left-hand side. Rewrite u in component form; distribute that d into that vector; now distribute the c inside. Okay, look inside right here: we can move the parentheses from here to c d, like this. Okay, this is by associativity; this is by associativity in R. Okay, notice we have a c d here and a c d here; pull that out. But this right here, (u1, u2), that's just u. Okay, so we get c d times u. Finally, the tenth property: take one times u. That's one times this vector. Multiply the one through; but one times u1 is just u1, and one times u2 is u2. This is because one is the multiplicative identity in the real numbers. Let's just rewrite this as u. Okay, so one times u is u. Now, since R^2 satisfies all ten properties of a vector space, R^2 is a vector space. Okay, so we have just shown that R^2 is a vector space. It's also true that R^n is a vector space for any n greater than two. So R^3, R^4, R^5, and so on — those are all vector spaces. 28. Vector Space Additional Example: The example of R^n is not the only example of vector spaces. Vector spaces can be very different in what they look like. So, for example, the set M_mn, which is the set of all m by n matrices with matrix addition and scalar multiplication, forms a vector space. Okay, so the vectors in that case are matrices, which is a little strange, but it's fine: if it satisfies the properties of a vector space, it's a vector space. Okay, another example is the set of all polynomials of degree less than or equal to n; that also forms a vector space. Okay, so the vectors in this case are polynomials. Let's do an example.
Let's look at P_2. That's the set of all polynomials of degree less than or equal to two. Okay, that's a vector space. We want to try to prove that, so we would have to show all ten properties of a vector space. Okay, let's try to prove the first few properties. Let f, g, and h lie in P_2, and let c and d be scalars. f(x) will look like this: it's a second-degree polynomial, a2 x squared plus a1 x plus a0, where the coefficients a0, a1, a2 are just constants; those are real numbers. g will also look like that; let's just say it's b2 x squared plus b1 x plus b0, where the b's are real numbers. And h is going to be a similar second-degree polynomial; let's just say it's c2 x squared plus c1 x plus c0, and the coefficients c0, c1, c2 are just real numbers. Okay, the first property we want to show is closure under addition. So let's take, f plus g, of x. Well, by definition, that's f(x) plus g(x). Okay, let's write f in terms of what it is as a polynomial, and let's write g as what it is as a polynomial. Now, the way to add these two polynomials is to add the coefficients. So that's, a2 plus b2, x squared, plus, a1 plus b1, x, plus, a0 plus b0. Okay, now the coefficients here: a2 plus b2, that's a real number; a1 plus b1 is a real number; and a0 plus b0 is a real number. Okay, so all the coefficients are real, and the degree here is two or less — the degree of this polynomial is two or less because this coefficient right here could be zero. Okay, so that means that f plus g lies in P_2. Okay, let's look at the second property: f plus g, of x. That's f(x) plus g(x), by definition. Okay, so that's a2 x squared plus a1 x plus a0; let's rewrite g as the polynomial b2 x squared plus b1 x plus b0. Now we combine the two polynomials by adding the coefficients. Now, since the a's and b's are real numbers, we can use commutativity of the reals. Same here:
a1 plus b1, that's b1 plus a1. Okay. And we can switch these here, to b0 plus a0. Now, let's break this up; break this up into the two polynomials. Notice this right here is just g, and that's f, and by definition, that's, g plus f, of x. Okay, so f plus g is the same as g plus f; we do have commutativity here. Let's look at the third property, which is associativity. We want to show that this equals, f plus g, plus h. Okay, so let's take this function: f plus, g plus h, applied to x is f(x) plus, g plus h, of x, which is f(x) plus, g(x) plus h(x). Okay, let's rewrite f like this, and g and h. Okay, so we have this plus this. Okay, let's combine these coefficients here, so we get, b2 plus c2, x squared, plus, b1 plus c1, x, plus, b0 plus c0. Okay, now add the coefficients; we get this. Now, notice in this coefficient we can swap the parentheses using associativity for the real numbers; same thing for these coefficients. Okay, now, let's break this apart. Okay, we have this polynomial plus this polynomial, okay? And we can break up this part right here into two other polynomials. Okay, but now, this right here, that's just f(x), and this is g(x), and this is h(x). Okay, and this right here we can rewrite as, f plus g, of x, and then rewrite this whole thing as, f plus g, plus h, of x. Okay, so f plus, g plus h, that's the same as, f plus g, plus h. Okay, here we have, f plus g, plus h, and in the beginning we had f plus, g plus h, and we showed that they were the same function when we applied them to x. And so this is what we have right here. Okay, associativity holds. 29. Vector Space Additional Example Continued: Okay, let's look at property four. Let 0 be the polynomial such that when you plug in x, you just get zero. Okay, then:
Zero plus f: when we apply it to x, we get 0(x) plus f(x); but 0(x) is zero, and f(x) is that polynomial. Adding zero here, it gets added to a0, but it doesn't do anything, so it just gives you back a0, and that's f(x). Okay, so zero plus f is f. Okay, so we have a zero element: the zero polynomial. Now for the fifth property, the additive inverse. The additive inverse of f is negative f, and that's defined by this: if we apply negative f to x, we get negative f(x). Okay, so f plus negative f: if we apply that to x, we get f(x) plus, negative f, of x, which is f(x) minus f(x). That gives us zero, which is the same as 0(x). So, therefore, f plus negative f is the zero polynomial. Okay, so there is an additive inverse of f for every f in P_2. All right, we've shown the first five properties of a vector space applied to P_2. Now I want you to try proving the five remaining properties of a vector space for P_2. 30. Examples of Sets that are Not Vector Spaces: In this lecture, we're going to look at examples of sets that are not vector spaces. For the first example, consider the set of polynomials that have degree exactly two. Okay, we want to show that this set is not a vector space. Okay, let f(x) be this polynomial and let g be this polynomial. Okay, notice f has degree two, and g has degree two as well. Now, let's see what happens when we add f and g. Okay, so that's f plus g. Okay, gather all the like terms: the x squared terms cancel out, and we get three x plus seven. Okay, so f plus g has degree one. So f plus g is not in the set of polynomials that have degree two. Okay, f and g themselves have degree exactly two, but when you add f and g, you just get a degree-one polynomial. So the sum f plus g doesn't lie in the original set. Okay, so closure under addition does not hold. So the set of all polynomials of degree exactly two does not form a vector space. Let's take a look at another example. Consider Z^2. Z^2 is not a vector space. Okay.
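The degree-exactly-two failure just described is easy to check with coefficients. The lecture doesn't write out f and g on screen here, so the two polynomials below are assumed examples, chosen so their x squared terms cancel and the sum comes out to 3x + 7 as in the lecture:

```python
# Assumed degree-2 polynomials whose x^2 terms cancel (the lecture's exact
# f and g aren't legible in the transcript, so these are illustrative).
# Coefficients are listed from the x^2 term down to the constant term.
f = [1, 1, 3]    # x^2 + x + 3   (assumed)
g = [-1, 2, 4]   # -x^2 + 2x + 4 (assumed)

def degree(coeffs):
    """Degree of a polynomial given top-down coefficients (-1 for the zero polynomial)."""
    for i, c in enumerate(coeffs):
        if c != 0:
            return len(coeffs) - 1 - i
    return -1

s = [a + b for a, b in zip(f, g)]  # coefficient-wise addition -> [0, 3, 7], i.e. 3x + 7
print(degree(f), degree(g), degree(s))  # 2 2 1: the sum falls out of the set
```

Both inputs have degree exactly two, but the sum has degree one, so closure under addition fails, which is exactly the argument above.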
Z^2 is the set of all pairs (m, n) where m and n are integers, so m and n lie in Z. Okay, let u be (2, 3), and let c be the scalar 1/3. Then c times u is 1/3 times (2, 3), which is (2/3, 1), and that does not lie in Z^2, since 2/3 is not an integer. Okay, 2/3 does not lie in Z, so this pair right here doesn't lie in Z^2. Okay, so c u does not lie in Z^2, which means that closure under scalar multiplication does not hold. Okay, since closure under scalar multiplication doesn't hold, Z^2 is not a vector space. 31. Subspace Definition and Subspace Properties: In this section, we're going to look at subspaces. A subset W of a vector space V is a subspace of V if W is non-empty and a vector space itself with the same operations as V. For example, the set W given by the set of all pairs (x, 0), where x is real, is a subspace of V equals R^2. Okay, if we look at the xy-plane, then all the points in this plane form the vector space R^2, but W is the subset of R^2 consisting of those points along the x-axis. Okay, so for (x, 0), the y-coordinate is zero, so it's just going to be all those points along here on this x-axis. Okay, so the claim is that that line, the x-axis, is a subspace of the bigger space R^2. Okay, now, to show that W is a subspace of R^2, we have to show that W has all those ten vector space properties. Okay, what I mean is that normally one would have to show all ten of those vector space properties. Fortunately, we don't have to show all of those properties; we just have to show a few of them: we have to show two closure properties, and we have to show that W is non-empty. And to show that W is non-empty, we can show that it contains the zero vector. Okay, so let me write the subspace properties. Okay, there are only three of them. The first one: the zero vector lies in W. Two: the sum of two vectors from W lies in W. Okay, this is called closure under addition.
And the third property is that c times u lies in W, where c is a scalar and u is a vector from W. Okay, this is called closure under scalar multiplication. Okay, let's do an example. Let's say W is the set of all (x, 0) where x is in R, and the big space V is R^2. Okay, let x be zero; then (x, 0) is (0, 0), and zero lies in R, so the zero vector, which is (0, 0), lies in W. Okay, now for the second property: let u be (x, 0) and let v be (y, 0), where x and y are real. Okay, I just picked two arbitrary vectors u and v from W. Now we want to add u and v. Okay, so now I get (x plus y, 0), and that lies in W, since x plus y lies in R. Okay, x plus y lies in R, and so the first component is real and the second component is zero; but that's just what W is: it's all of those pairs (x, 0) where the first component is real. Okay, so this right here lies in W. Now for the third property: let c be a scalar, and let u be some element (x, 0) in W. Then c times u equals c times (x, 0), which is (c x, c times 0); but c times 0 is 0. And this does lie in W, because c x lies in R. Okay, since c and x are both real numbers, when I multiply them I get a real number, and so the first component is real, the second component is zero, and so this vector lies in W. Okay, so we do have closure under scalar multiplication. So W is a subspace of R^2. 32. Definition of Trivial and Nontrivial Subspace: The subset consisting of only the zero vector is a subspace of V. So the set that just consists of the zero vector, okay, that is a subspace of V, and it's called the zero subspace. The set V itself, that's also a subspace of V. These two subspaces are called trivial subspaces. Okay, these are trivial subspaces. A nontrivial subspace is any subspace of V that is not the zero subspace and is not V itself. Okay, remember W: W is the set consisting of all (x, 0) where x is real.
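The three subspace checks for W, the x-axis, can be mirrored numerically. This is only a spot-check on sample points, not a proof (the proof is the algebra in the lecture), and the particular vectors and scalar below are arbitrary choices:

```python
# Spot-checking the three subspace properties for W = {(x, 0) : x real} in R^2.
def in_W(p):
    """Membership test for W: the second coordinate must be zero."""
    return p[1] == 0

zero = (0.0, 0.0)
u, v = (2.0, 0.0), (-5.0, 0.0)   # two arbitrary sample vectors from W
c = 3.5                          # an arbitrary scalar

u_plus_v = (u[0] + v[0], u[1] + v[1])  # componentwise addition
c_u = (c * u[0], c * u[1])             # scalar multiplication

print(in_W(zero), in_W(u_plus_v), in_W(c_u))  # True True True
```

The zero vector, the sum, and the scalar multiple all keep a zero second coordinate, which is why all three properties hold.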
Okay, we showed that W is a subspace of R^2, and W is not the zero subspace, because it contains elements other than the zero vector. So, for example, it contains (1, 0), okay, and that's not the zero vector. So W is not the zero subspace, okay? And W is not the whole space R^2, because it doesn't contain other elements in R^2. For example, take (2, 3): this lies in R^2, the big space, but it doesn't lie in W, because the second component is not zero. Okay, so W is a nontrivial subspace of R^2. 33. Additional Example of Subspace: Let's look at another example of a subspace. Consider the set W of all two by two matrices with entries a, b, c, and a zero in the fourth position. This is a subspace of M_22; M_22 is the set of all two by two matrices. Okay, now we want to show those three subspace properties for W. Okay, if we can do that, then we will have shown that W is a subspace of M_22. Okay, the first property: let a, b, and c be zero. Then the zero matrix, 0 0 0 0, lies in W. Okay, here a, b, and c just have to be real numbers, and zero is a real number. Okay, so we have zeros everywhere right here. This position right here, the fourth one, has to be zero, and it is. So this zero matrix lies in the set W. Now for the second property, let's pick two arbitrary elements in W; say u is this and v is this. Let u and v be in W. Okay, we want to show that u plus v also lies in W. Okay, u plus v, that's equal to this matrix plus this. Let's go ahead and add. Okay, now notice that a plus d, b plus e, and c plus f are all real numbers, and this fourth position is zero. So this does lie in W. Now for the third property, let u be some matrix in W, and let k be a scalar. Then take k times u, and we want to show that k u lies in W. Okay, let's multiply the k through. Well, k times a is a real number; k b and k c, those are also real, because we're just multiplying real numbers; and this fourth position is zero. So, yes, this lies in W. 34.
Subsets that are Not Subspaces: In this lecture, we're going to look at subsets that are not subspaces. Okay, let's take a look at an example. Let W be the set of all pairs (x, x squared), where x is real. Then notice that W is a subset of R^2. Okay, if you look at the second component, it's x squared, so really it's going to be the graph of y equals x squared, which is a parabola; it looks like this. Okay, a parabola in the plane. Now, W is going to be the set of all points on this parabola. Okay, and it is a subset of R^2, because it's contained in R^2, but it's not a subspace of R^2, and we'll see this by checking those three subspace properties. If any one of those subspace properties fails, then we know that it's not a subspace. Okay, let's check the first property. Okay, well, if we let x be zero, then (x, x squared) is (0, 0 squared), which is (0, 0). Okay, and so (0, 0) lies in W. If we look at the graph, (0, 0) is the origin; it's right here, and it does lie on that graph. Okay, so the zero vector lies in W. Let's check closure under addition. Okay, so let u be given by (x, x squared) and v be given by (y, y squared). Okay, I just picked u and v from W. Then let's add those together: u plus v. Okay, so that's (x plus y, x squared plus y squared). And the question is: does that lie in W? Well, it would if that second component were the square of the first component, but usually x squared plus y squared is not the same as, x plus y, squared. Okay, for instance, (1, 1) and (2, 4) lie in W, but when we add those together, we get (3, 5). But that doesn't lie in W, because five would have to be three squared, which is nine, and five is not nine. So it doesn't lie in W. Okay, so closure under addition fails. Okay, since closure under addition fails, we can stop right here: we know that W is not a subspace of R^2. However, let's go ahead and check the third property anyway. Let's see if it's closed under scalar multiplication.
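The addition counterexample just given, (1, 1) plus (2, 4), can be verified in a couple of lines:

```python
# Checking the counterexample from the lecture: (1, 1) and (2, 4) both lie on
# the parabola W = {(x, x^2)}, but their sum does not.
def in_W(p):
    """A point lies in W exactly when its y-coordinate is its x-coordinate squared."""
    return p[1] == p[0] ** 2

u, v = (1, 1), (2, 4)
s = (u[0] + v[0], u[1] + v[1])  # componentwise sum: (3, 5)

print(in_W(u), in_W(v), in_W(s))  # True True False: closure under addition fails
```

The sum lands at (3, 5), and since 5 is not 3 squared, the point is off the parabola, so one failed property already rules W out as a subspace.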
Okay, let u be (x, x squared), and let c be a scalar. Then c times u is (c x, c x squared). So does that lie in W? Let's look at the second component: that should be the first component squared, which is, c x, squared. But, c x, squared is c squared x squared, and generally that's not going to equal c x squared. So is c x squared equal to c squared x squared? Usually, no. For instance, if we let c be three and let u be (2, 4), then c times u is three times (2, 4), which is (6, 12). Okay, but 12 is not six squared, which is 36. Okay, so closure under scalar multiplication also fails. Since one of the properties of a subspace fails, we know that W is not a subspace of R^2. 35. Subsets that are Not Subspaces Additional Example: Okay, let's look at another example of a subset which is not a subspace. Okay, let W be the set of all two by two matrices that are not invertible. Show that W is not a subspace of M_22. Okay, let's look at the subspace properties. The first one is whether the zero vector lies in W. Okay, the first property is actually true, because the zero matrix does lie in W: it's not invertible. If you calculate the determinant here, we get zero. So the zero matrix is not invertible and therefore does lie in W. Okay, so let's move on to the second property, closure under addition. Okay, let A be this matrix and let B be this matrix. Okay, now, A and B are both not invertible, because if you calculate the determinants: the determinant of A, that's six minus six, which is zero; the determinant of B, well, that's also six minus six, which is zero. Okay, and since the determinants are zero, A and B are not invertible. Okay, so A and B lie in W. Let's add A and B. If we add them, we get the matrix 1, 0, 0, minus 3. Okay, and the determinant of that is minus three, which is not zero. Okay, so A plus B is invertible.
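A quick numerical version of this argument: two singular matrices can sum to an invertible one. The lecture's A and B aren't fully legible in the transcript, so the matrices below are assumed choices consistent with the determinants quoted (six minus six equals zero for both, and the sum has determinant minus three):

```python
import numpy as np

# Assumed matrices matching the lecture's determinant computations:
# det(A) = 2*3 - 3*2 = 0, det(B) = (-1)(-6) - (-3)(-2) = 0.
A = np.array([[2.0, 3.0], [2.0, 3.0]])
B = np.array([[-1.0, -3.0], [-2.0, -6.0]])

print(np.linalg.det(A))      # 0.0 (up to rounding): A is not invertible
print(np.linalg.det(B))      # 0.0 (up to rounding): B is not invertible
print(A + B)                 # [[ 1.  0.] [ 0. -3.]]
print(np.linalg.det(A + B))  # about -3.0: the sum IS invertible, so closure fails
```

Both summands lie in W, the set of non-invertible matrices, but the sum does not, which is the closure failure the lecture is after.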
Okay, but that means A plus B does not lie in W, because W, remember, consists of those matrices that are not invertible, but A plus B is invertible. Okay, so therefore closure under addition fails, and we can stop right there: we already know that W is not a subspace of M_22. 36. Span: In this lecture, we're going to learn about the notion of spanning. Let V be a vector space, and let S, given by v1, v2, dot dot dot, vk, be a subset of V. If every vector in V can be written as a linear combination of vectors in S, then we say that S spans V. For example, show that this set spans R^2. Okay, so let (u1, u2) lie in R^2, where u1 and u2 are real numbers. Then we can rewrite (u1, u2) as u1 times (1, 0) plus u2 times (0, 1), and this is a linear combination of the vectors (1, 0) and (0, 1). So S spans R^2. Let's do another example: show that S, given by this, spans M_22, where M_22 is the set of all two by two matrices. Okay, so let the matrix with entries a, b, c, d be a matrix in M_22, where a, b, c, and d are real numbers. Then we can rewrite it as a times this matrix, plus b times this matrix, plus c times this matrix, plus d times this matrix. Okay, so S spans M_22. Now, show that the set S given by (1, 0) and (3, 0) does not span R^2. Okay, to show this, we just need to find one vector in R^2 that cannot be written as a linear combination of the vectors in S. So consider (2, 2). If we could write (2, 2) as a linear combination of (1, 0) and (3, 0), we would have c1 times the first vector plus c2 times the second vector equal to (2, 2), for some scalars c1 and c2. But then that means that c1 times (1, 0) plus c2 times (3, 0) is equal to (2, 2). So we get this, and equating the coordinates, we get this. Okay, but here we have zero equal to two, which is a contradiction.
Therefore, S does not span R^2, okay, because we found one vector, (2, 2), that cannot be written as a linear combination of (1, 0) and (3, 0). Let's do another example: show that this S does not span M_22. Okay, consider this matrix. If this matrix could be written as a linear combination of the matrices in S, then we would have c1 times the first matrix, plus c2 times the second matrix, plus c3 times the third matrix, plus c4 times the fourth matrix, equal to this. Okay, now, multiplying the coefficient c1 into this matrix, we get this, and we do the same for c2 and so on. Okay, now add those matrices up, and we get this. Okay, now, equating each entry, we get this. Okay, now form the augmented matrix for this system of equations, and try solving it by Gaussian elimination. So let's do row one minus row two. Okay, now let's do row two minus row three, and let's do row three plus row four. Okay, notice that in this last row we have zero equals negative four, which is a contradiction. Okay, so the system has no solution, and therefore our matrix, with entries 1, 2, 4, 1, cannot be written as a linear combination of the matrices in S. So S does not span M_22. 37. Span of a Subset of a Vector Space: We've seen some examples of subsets S of a vector space V that do not span all of V. However, if we take the set of all linear combinations of the vectors in our set S, given by this — if we take the set of all linear combinations of vectors in S — this set will form a subspace of V. The set of all linear combinations of vectors in S is called the span of S, and it's denoted like this: span of S. Recall our earlier example, where S was (1, 0) and (3, 0) and V was R^2. We saw that S does not span all of R^2.
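The earlier (2, 2) counterexample can also be seen numerically: stacking the spanning vectors as columns, a least-squares solve shows the closest vector in the span never reaches (2, 2). This is an illustration of the same fact, not the lecture's own method (which was equating coordinates):

```python
import numpy as np

# Columns are the spanning vectors (1, 0) and (3, 0).
S = np.array([[1.0, 3.0],
              [0.0, 0.0]])
target = np.array([2.0, 2.0])

# Least squares finds the combination of the columns closest to the target.
coeffs, _, rank, _ = np.linalg.lstsq(S, target, rcond=None)
best = S @ coeffs  # the nearest point to (2, 2) inside the span

print(rank)  # 1: the two columns only span a line (the x-axis)
print(best)  # the second coordinate stays 0, so (2, 2) is unreachable
```

The rank of the stacked matrix is one, so the span is a line, and every combination has second coordinate zero, matching the zero-equals-two contradiction above.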
However, if we take the set of all linear combinations of (1, 0) and (3, 0), we get the span of S. The span of S is equal to the set of all linear combinations c_1(1, 0) + c_2(3, 0), where c_1 and c_2 are real numbers, and we can show that this is equal to the set of all scalar multiples of (1, 0).

Let's show that. If c_1(1, 0) + c_2(3, 0) lies in the span of S, then, multiplying c_1 and c_2 into each vector and adding them up, we get (c_1 + 3c_2, 0), and pulling the scalar out, we get (c_1 + 3c_2)(1, 0). So if we let k be the scalar c_1 + 3c_2, then our original linear combination is equal to k(1, 0), and that lies in the set of all scalar multiples of (1, 0). So the span of S is a subset of the set of all scalar multiples of (1, 0).

Now let k(1, 0) lie in the set of all scalar multiples of (1, 0). Then k(1, 0) is equal to k(1, 0) + 0(3, 0). So let c_1 be k and c_2 be 0; then k(1, 0) can be written as c_1(1, 0) + c_2(3, 0), where c_1 = k and c_2 = 0. Therefore k(1, 0) lies in the span of S, and so the set of all scalar multiples of (1, 0) is a subset of the span of S.

Now, since the span of S is a subset of the set of all scalar multiples of (1, 0), and that set is also a subset of the span of S, the two sets are equal. So the span of S consists of all scalar multiples of the point (1, 0) in the plane R^2. Let me draw that. Here is the plane, with the x-axis and the y-axis, and here's (1, 0); let me draw an arrow from the origin to (1, 0). We can think of (1, 0) as an arrow in the plane. Scalar multiples of the vector (1, 0) just give us more points on the real line. So the span of S is the entire real line (the x-axis) in the plane. Even though S doesn't span all of R^2, it does span the entire real line, and that is a subspace of R^2. 38.
Linear Independence 2: The notion of linear independence, in addition to the notion of span, is an important notion in linear algebra. Suppose S is given by {v_1, ..., v_k}, a subset of a vector space V. If the vector equation c_1 v_1 + ... + c_k v_k = 0 has only the trivial solution (the trivial solution is where c_1, c_2, ..., c_k are all zero), then the set S is said to be linearly independent. Otherwise, S is said to be linearly dependent.

For example, let S be given by {(1, 0), (0, 1)}, a subset of R^2. Show that S is linearly independent. Suppose c_1 times the first vector plus c_2 times the second vector is equal to zero; we'll show that c_1 and c_2 are both zero. We have c_1(1, 0) + c_2(0, 1) = (0, 0). Multiplying c_1 and c_2 into each vector and adding, we get (c_1, c_2) = (0, 0). Since these two vectors are equal, their components are equal, and therefore c_1 and c_2 are both zero. So the vector equation given earlier has only the trivial solution, and therefore S is linearly independent.

Let's do another example. Suppose S is given by {(1, 0), (3, 0)}, a subset of R^2. Show that S is linearly dependent. Suppose c_1 times the first vector plus c_2 times the second vector is zero; we'll find a nontrivial solution. We have c_1(1, 0) + c_2(3, 0) = (0, 0). Adding the two vectors, we get (c_1 + 3c_2, 0) = (0, 0). Equating each component, we get c_1 + 3c_2 = 0, and for the second component, 0 = 0, so we just care about the first equation here. Notice that c_2 could be anything, so let c_2 be t, a parameter. Solving for c_1, I get c_1 = -3c_2, and plugging in t, I get c_1 = -3t, and c_2 is t. So let t be 1; then c_1 is -3 and c_2 is 1.
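A small sketch (not from the lecture) for the special case of two vectors in R^2: the vector equation has only the trivial solution exactly when the determinant of the 2-by-2 matrix with those vectors as columns is nonzero, and we can also verify the nontrivial solution found above directly.

```python
def independent_2d(a, b):
    """For two vectors in R^2: c1*a + c2*b = 0 has only the trivial
    solution exactly when the determinant of [a b] is nonzero."""
    return a[0] * b[1] - a[1] * b[0] != 0

print(independent_2d((1, 0), (0, 1)))   # True: only c1 = c2 = 0 works
print(independent_2d((1, 0), (3, 0)))   # False: the set is dependent

# verify the lecture's nontrivial solution c1 = -3, c2 = 1:
c1, c2 = -3, 1
v = (c1 * 1 + c2 * 3, c1 * 0 + c2 * 0)
print(v)  # (0, 0)
```

The determinant shortcut only covers two vectors in R^2; the general test, as in the lecture, is to row-reduce the homogeneous system.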
Okay, so c_1 = -3 and c_2 = 1. That's a nontrivial solution to the vector equation that we had earlier, and you can check that if I plug in -3 for c_1 and 1 for c_2, the equation is true.

39. Determining Linear Independence or Dependence: Let's do some more examples. Let S be given by this, a subset of R^3. Determine if S is linearly independent or linearly dependent. Suppose c_1 times the first vector plus c_2 times the second vector plus c_3 times the third vector is equal to zero. Then we get c_1 + 2c_2 + 3c_3 = 0; here I just added up the first components. Doing the same for the second components, I get this, and finally, for the third components, I get this. So we have a system of equations. Let's form the augmented matrix and start doing Gaussian elimination. Here's the matrix. Let's do row 1 plus row 2, then one-third row 2, then row 2 minus row 3. Now, from the second equation, we get c_2 + 2c_3 = 0. Let c_3 be t, so c_2 is -2t. And from the first equation, we get c_1 + 2c_2 + 3c_3 = 0, so c_1 plus 2 times c_2, which is -2t, plus 3 times c_3, which is t, is zero. So we get this, and solving for c_1, I get t. So c_1 = t, c_2 = -2t, and c_3 = t. Well, t could be anything; let t be 1. Then c_1 is 1, c_2 is -2, and c_3 is 1. This is a nontrivial solution to the original vector equation, so S is linearly dependent.

Let's do another example. Let S be given by this set, a subset of P_2, where P_2 is the set of all polynomials of degree less than or equal to 2. Determine if S is linearly independent or linearly dependent.
Okay, suppose c_1 times the first vector plus c_2 times the second vector plus c_3 times the third vector is equal to zero; the zero vector in P_2 is the polynomial with coefficient zero everywhere. So we get this polynomial plus this polynomial plus this polynomial equal to 0 + 0x + 0x^2. Combining the like terms, we get this. Now, this polynomial on the left-hand side is equal to the zero polynomial on the right-hand side, so we can equate all the coefficients to zero. We get this system of equations; let's write it in augmented matrix form and do Gaussian elimination. Let's do minus 2 row 1 plus row 2, and minus 3 row 1 plus row 3. Now let's do minus 2 row 2 plus row 3, and let's multiply row 3 by negative 1. Now, row 3 tells us that c_3 is zero, and row 2 tells us that c_2 + 4c_3 = 0. But that means c_2 + 0 = 0, so c_2 is also zero. The first row tells us that c_1 - 2c_3 = 0, so c_1 minus 2 times 0 is 0, and so c_1 is zero. So c_1, c_2, and c_3 are all zero, and therefore the system has only the trivial solution, and so S is linearly independent.

40. Basis: So far, we've seen that a subset S, given by the vectors v_1 to v_k, of a vector space V can span all of V. We've also seen what it means for S to be linearly independent. If S both spans V and is linearly independent, then S is said to be a basis for V. For example, the set S given by {(1, 0), (0, 1)} is a basis for R^2; we've seen in earlier examples that S spans R^2 and is linearly independent. In fact, this basis is called the standard basis for R^2. For R^3, the standard basis is given by (1, 0, 0), (0, 1, 0), and (0, 0, 1). And for R^n in general, the standard basis is given by (1, 0, ..., 0), (0, 1, 0, ..., 0), and so on, up to (0, ..., 0, 1).
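Returning to the P_2 example for a moment: a sketch of the coefficient-matching idea (my own illustration; the lecture's actual polynomials are only shown on the slide, so the sets below are hypothetical). Represent each polynomial a0 + a1*x + a2*x^2 by its coefficient triple, and test independence of three of them via the 3-by-3 determinant.

```python
def independent_in_P2(polys):
    """Three polynomials in P_2, given as coefficient triples (a0, a1, a2),
    are linearly independent iff the only solution of
    c1*p1 + c2*p2 + c3*p3 = 0 is c1 = c2 = c3 = 0,
    i.e. iff the 3x3 matrix of coefficients has nonzero determinant."""
    (a, b, c), (d, e, f), (g, h, i) = polys
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return det != 0

# hypothetical independent set: {1 + x, x + x^2, 1 + x^2}
print(independent_in_P2([(1, 1, 0), (0, 1, 1), (1, 0, 1)]))  # True
# hypothetical dependent set: {1, x, 1 + x}
print(independent_in_P2([(1, 0, 0), (0, 1, 0), (1, 1, 0)]))  # False
```

As in the lecture, the general method is Gaussian elimination on the homogeneous system; the determinant is a shortcut that works when the number of polynomials matches the dimension.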
There are n vectors in this set; this is for R^n. A vector space can also have a nonstandard basis. For example, show that S, given by this, is a nonstandard basis for R^2. To show that S is a basis, we need to show that S is linearly independent and spans R^2.

First, let's show that it's linearly independent. Suppose c_1 times the first vector plus c_2 times the second vector is zero. Then we get this, and equating the components, we get this system of equations. Writing this in matrix form and doing row operations, let's do minus 2 row 1 plus row 2. From the second row, we get this, so c_2 must be zero. Now, the first row tells us this, so if we plug in c_2 = 0, we get that c_1 is also zero. Since c_1 and c_2 are both zero, the original two vectors in our set S are linearly independent.

Next, let's show that S spans R^2. Let (u_1, u_2) be an arbitrary vector in R^2. We need to show that there are scalars c_1 and c_2 such that c_1 times the first vector plus c_2 times the second vector equals (u_1, u_2). Here the left-hand side equals this, so we want c_1 and c_2 such that this vector is equal to (u_1, u_2). If we equate the components, we get a system of equations like that. Let's write this in matrix form and do row operations. Let's do minus 2 row 1 plus row 2, then 1/7 row 2, and then 2 row 2 plus row 1. Simplifying this a little bit, we get this: c_1 is equal to this, and c_2 is equal to this. So we have a solution, and therefore S spans R^2.

Let's do another example. Show that S, given by this, is a basis for M_{2,2}, the set of all 2-by-2 matrices. In a previous example, we've already shown that S spans M_{2,2}, so we just need to show that S is linearly independent.
So suppose c_1 times the first matrix plus c_2 times the second plus c_3 times the third plus c_4 times the fourth is equal to zero. Then we get this plus this plus this plus this, all of that equal to zero. Now add up all of those matrices; that sum is equal to zero. If we equate the entries of each matrix, we get that c_1 is zero, c_2 is zero, and c_3 and c_4 are both zero. So all of the c's are zero, and therefore S is linearly independent. Since S is linearly independent and spans M_{2,2}, S is a basis; furthermore, S is the standard basis for M_{2,2}. The standard basis for P_n, the set of all polynomials of degree less than or equal to n, is given by a set like this: {1, x, x^2, ..., x^n}.

As you can see, the vectors in a basis are like the building blocks for all other vectors in the vector space V. It turns out that if {v_1, ..., v_k} is a basis for V, not only can every vector in V be represented as a linear combination of v_1 to v_k, but the representation is unique. For instance, in P_2, the polynomial 3 + x - x^2 can be written as a linear combination of the vectors 1, x, x^2 like this: it's 3 times 1 plus 1 times x plus negative 1 times x^2. So this polynomial can be written as a linear combination of 1, x, and x^2 in one and only one way, namely this way.

41. Dimension: One important fact about bases is that if a vector space V has a basis consisting of n vectors, then any other basis for V has n vectors. For example, we saw that the set consisting of 1, x, x^2 is a basis for P_2. It turns out that this set is also a basis for P_2, and note that it also has three vectors. Any other nonstandard basis for P_2 will have three vectors. We're now in a position to define the dimension of a vector space. If V has a basis consisting of n vectors, then the dimension of V is n, the number of vectors in the basis.
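As a small illustration (mine, not the lecture's): with polynomials in P_2 stored as coefficient triples, a linear combination is computed componentwise, and in the standard basis {1, x, x^2} the coordinates of a polynomial are just its coefficients, which is why the representation of 3 + x - x^2 is unique.

```python
def combo(coeff_vectors, scalars):
    """Linear combination of polynomials in P_2, each given as a
    coefficient triple (a0, a1, a2); returns the resulting triple."""
    return tuple(sum(c * v[i] for c, v in zip(scalars, coeff_vectors))
                 for i in range(3))

# standard basis of P_2 as coefficient triples
one, x, x2 = (1, 0, 0), (0, 1, 0), (0, 0, 1)

# 3*1 + 1*x + (-1)*x^2 reproduces the coefficient vector of 3 + x - x^2
print(combo([one, x, x2], [3, 1, -1]))  # (3, 1, -1)
```

The names `combo`, `one`, `x`, `x2` are my own; the point is only that the coefficients (3, 1, -1) are forced, matching the uniqueness claim above.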
The dimension of V is unambiguous because every other basis for V also has n vectors. We've already seen that the set consisting of (1, 0) and (0, 1) is a basis for R^2, so the dimension of R^2 is 2.

Consider the subspace W, the set of all scalar multiples of (4, 6). Let's find the dimension of W. Every vector in W can be written as a scalar multiple of (4, 6), so the set consisting of just the single element (4, 6) spans W. The set consisting of (4, 6) is also linearly independent, because if c times (4, 6) is equal to zero, then (4c, 6c) is equal to (0, 0), and solving for c, we get zero. So this set consisting of just the one vector (4, 6) not only spans W but is linearly independent; therefore that set is a basis for W. But that means that the dimension of W is 1. We can see this geometrically: if we graph (4, 6), W consists of all scalar multiples of (4, 6), so W consists of all points on the line going through the origin and the point (4, 6). Let me draw that: here we have this line going through the point (4, 6), and we can see that it makes sense that W is a one-dimensional subspace of R^2.

Let's do another example. Let W be the set of all matrices of the form like this, where a and b are real numbers. W is a subspace of M_{3,3}, the set of all 3-by-3 matrices. Find the dimension of W. Note that the matrix given by this is equal to this matrix plus this matrix, and we can pull the a out and pull the b out. So every matrix in W can be written as a linear combination of this matrix and that matrix; thus the set consisting of those two spans W. It's easy to show that this set is also linearly independent, and therefore the set forms a basis for W. Therefore, the dimension of W is 2. 42.
Coordinates: If x is an arbitrary vector in V and B is a basis for V, then x can be written as a linear combination of the vectors in B. So if B = {v_1, ..., v_n}, then x is equal to c_1 v_1 + ... + c_n v_n for some scalars c_1 to c_n. The scalars c_1 to c_n are called the coordinates of x relative to the basis B. The vectors v_1 to v_n in the basis B are like ingredients in a recipe, and the scalars c_1 to c_n tell us the amount of each ingredient needed to cook up the vector x: we take the amount c_1 of v_1, the amount c_2 of v_2, et cetera, and add them all up to get x = c_1 v_1 + c_2 v_2 + ... + c_n v_n. For a different vector y in V, we're going to have different amounts for each ingredient to cook up y. We can form a column matrix consisting of the coordinates of x relative to B as follows: (c_1, c_2, ..., c_n). This is called the coordinate matrix of x relative to B, and it's denoted [x]_B.

43. Change of Basis: We have seen that a vector space can have more than one basis. For example, the set {(1, 0), (0, 1)} is a basis for R^2, but so is this one. The first one is the standard basis, and the second one is a nonstandard basis; there can be many nonstandard bases. Let B be the standard basis and let B' be the nonstandard basis. We want to be able to represent a vector in R^2, given in terms of B, as a vector in terms of B', and vice versa. In other words, we want to be able to change the basis. For example, let x be given by (4, 15). (4, 15) can be written as 4 times (1, 0) plus 15 times (0, 1), so the coordinates for x relative to B are 4 and 15, and the coordinate matrix for x relative to B is given by (4, 15). We want to find the coordinates for x relative to B'.
Okay, so we want (4, 15) to be equal to c_1 times the first vector plus c_2 times the second vector. That implies that (4, 15) is equal to this vector, and setting the components equal to each other, we get a system of equations. If we try to solve this, let's do negative 2 times the first equation plus the second equation; we get c_2 = 1, and then c_1 is equal to 6. So (4, 15) is equal to 6 times the first vector plus 1 times the second vector, and the coordinates for x relative to B' are 6 and 1. So the coordinate matrix for x relative to B' is given by (6, 1).

In our example, we changed the basis from B to B' for the vector x = (4, 15). We want to be able to do this for any vector in R^2. More generally, if V is an n-dimensional vector space and B and B' are two bases for V, then we want to be able to change the basis from B to B' for any vector in V. It turns out that there is a way to do this. There is a matrix P, called the transition matrix from the basis B to the basis B', such that P times the coordinate matrix for x relative to B is equal to the coordinate matrix for x relative to B'. If we're given the coordinate matrix for x relative to B, we can simply multiply by the transition matrix P, and the result will be the coordinate matrix for x relative to B'. There is a procedure for finding the transition matrix P: form the augmented matrix [B' | B] and perform Gauss-Jordan elimination to get a matrix of the form [I | P], so you'll have the identity on the left, and whatever is on the right-hand side is P.

44. Examples of Finding Transition Matrices: Okay, let's do some examples. Find the transition matrix from B to B', where B is given by this and B' is given by this. So we form the augmented matrix [B' | B].
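As a sketch (not from the lecture), the coordinate computation above is just a 2-by-2 linear solve. The basis {(1, 2), (-2, 3)} below is the nonstandard basis used in these examples; the helper name is my own.

```python
from fractions import Fraction

def coords_relative_to(basis, x):
    """Solve c1*v1 + c2*v2 = x for the coordinates (c1, c2) of x
    relative to a basis {v1, v2} of R^2, by Cramer's rule."""
    (a, c), (b, d) = basis  # v1 = (a, c), v2 = (b, d): columns of [v1 v2]
    det = a * d - b * c
    c1 = Fraction(x[0] * d - b * x[1], det)
    c2 = Fraction(a * x[1] - x[0] * c, det)
    return (c1, c2)

# the lecture's example: x = (4, 15) relative to B' = {(1, 2), (-2, 3)}
print(coords_relative_to([(1, 2), (-2, 3)], (4, 15)))  # c1 = 6, c2 = 1
```

Cramer's rule is a shortcut for 2-by-2 systems; the lecture's elimination approach gives the same (6, 1).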
So we have (1, 2) and (-2, 3) for B', and then we have B. Now let's do Gauss-Jordan elimination. Let's do minus 2 row 1 plus row 2, so we get (0, 7, -2, 1). Let's do 1/7 row 2. Then let's do 2 times row 2 plus row 1, and we get this. So P is this matrix. Let's check that P times (4, 15) is equal to (6, 1). Here's P; multiply that by (4, 15), and we get this, which reduces to (6, 1), and that's what we got earlier.

Let's do another example. Find the transition matrix from B to B', where B is given by this and B' is given by this. We form the augmented matrix [B' | B]: B' is this, and B is this. Now perform Gauss-Jordan elimination. Let's do negative 1/12 row 1, let's do 1/4 row 2, and now let's do negative 1/3 row 2 plus row 1. Whatever matrix we have here on the right is P.

Now suppose the coordinate matrix for x relative to B is given by (-1, 5). Find the coordinate matrix for x relative to B'. We take the transition matrix P and multiply it by the coordinate matrix for x relative to B. So we multiply P by (-1, 5), and if you multiply that all out, you should get this. So the coordinate matrix for x relative to B' is given by this. Let's check: x is equal to negative 1 times this vector plus 5 times this vector, so x is equal to this. Now let's check (2, -13/4): let's find 2 times the first vector plus negative 13/4 times the second vector, and we get the same thing. It's also possible to change the basis in the other direction, from B' to B.
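A sketch (mine, not the lecture's) of the [B' | B] procedure for R^2, using the first example's bases. The Gauss-Jordan steps are hard-coded for the 2-by-4 case.

```python
from fractions import Fraction

def transition_matrix(b_prime, b):
    """Transition matrix P from basis B to basis B' of R^2:
    row-reduce [B' | B] to [I | P] (Gauss-Jordan on a 2x4 matrix)."""
    # build the augmented matrix with basis vectors as columns
    m = [[Fraction(b_prime[0][i]), Fraction(b_prime[1][i]),
          Fraction(b[0][i]), Fraction(b[1][i])] for i in range(2)]
    for col in range(2):
        if m[col][col] == 0:          # swap rows if the pivot is zero
            m[0], m[1] = m[1], m[0]
        piv = m[col][col]
        m[col] = [x / piv for x in m[col]]
        other = 1 - col
        factor = m[other][col]
        m[other] = [x - factor * y for x, y in zip(m[other], m[col])]
    return [row[2:] for row in m]     # the right half is P

B = [(1, 0), (0, 1)]                  # standard basis
B_prime = [(1, 2), (-2, 3)]           # the lecture's nonstandard basis
P = transition_matrix(B_prime, B)
print(P)  # entries 3/7, 2/7, -2/7, 1/7, as in the first example
# check: P * [x]_B = [x]_B' for x = (4, 15)
c1 = P[0][0] * 4 + P[0][1] * 15
c2 = P[1][0] * 4 + P[1][1] * 15
print(c1, c2)  # 6 1
```

Exact `Fraction` arithmetic avoids the rounding noise that floats would introduce in the 1/7 step.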
If P is the transition matrix from B to B', then P inverse is the transition matrix from B' to B. In our example, if we want the transition matrix from B' to B, we just find the inverse of P. P inverse is given by this formula, and the determinant of P is negative 1/12. Multiplying all of this out, we get this.
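To close the loop, a sketch (my own) of the 2-by-2 inverse formula applied to the first example's transition matrix: P inverse should carry coordinates relative to B' back to coordinates relative to B.

```python
from fractions import Fraction

def inverse_2x2(m):
    """Inverse of a 2x2 matrix [[a, b], [c, d]] via the standard formula
    (1/det) * [[d, -b], [-c, a]], assuming det = ad - bc is nonzero."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[Fraction(d) / det, Fraction(-b) / det],
            [Fraction(-c) / det, Fraction(a) / det]]

# the first example's transition matrix P from B to B'
P = [[Fraction(3, 7), Fraction(2, 7)], [Fraction(-2, 7), Fraction(1, 7)]]
P_inv = inverse_2x2(P)          # transition matrix from B' back to B
# P_inv sends [x]_B' = (6, 1) back to [x]_B = (4, 15)
print(P_inv[0][0] * 6 + P_inv[0][1] * 1,
      P_inv[1][0] * 6 + P_inv[1][1] * 1)  # 4 15
```

Note that P_inv works out to [[1, -2], [2, 3]], whose columns are exactly the B' vectors, which is what you would expect for the map from B'-coordinates back to the standard basis.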