Autonomous Robots: Kalman Filters
Daniel Stang, Robotics Engineer
17 Lessons (2h 5m)


1. Course Preview
1:28 
2. Intro
12:30 
3. Filtering Basics
5:42 
4. Kalman Toy
9:44 
5. Assignment 1 Intro
3:58 
6. Assignment 1 Walkthrough
3:59 
7. Kalman Filter Full Implementation
29:59 
8. Assignment 2 Intro
4:11 
9. Assignment 2 Walkthrough
3:27 
10. Kalman Filter 2D
5:46 
11. Assignment 3 Intro
5:40 
12. Assignment 3 Walkthrough
4:02 
13. Kalman Prediction
13:29 
14. Assignment 4 Intro
3:49 
15. Assignment 4 Walkthrough
6:29 
16. Assignment 3 Bonus
5:05 
17. Outro
5:40

About This Class
Learn about writing software for an autonomous robot by implementing a Kalman Filter on a self-driving car in Python!
In this course you will learn not only how Kalman Filters work, but why they are needed in the first place. You will get to write actual code that will have to perform well in simulations designed to mimic a real self-driving car. No previous experience in linear algebra or software is required. All code is written in Python, which is a very easy language to get up and running with, even with limited or no software experience.
Class Projects
In this class you will design a Kalman Filter that will work on a self-driving car simulator I've designed. This way you can see your skills in action while solving a real problem that self-driving cars have to deal with.
Transcripts
1. Course Preview: Welcome to Autonomous Robots: Kalman Filters. This course is the first in a series of courses I'll be providing on autonomous robots. So whether you're interested in autonomous robots as a whole, such as self-driving cars, or whether you're interested in gaining some specific knowledge about Kalman filters, such as how they work or even why they're needed in the first place, then this course is a great place to start. I have experience working on machines and robots of all sizes. At my previous job, I designed control systems for military tank turrets, and at my current job, I'm working to provide autonomous solutions to the cleaning sector. Perhaps you've tried learning about Kalman filters on your own by browsing the Wikipedia page and just been overwhelmed with the matrices and all the crazy mathematical symbols. Well, in this course, I'm going to provide simplified explanations for all aspects of a Kalman filter, and you'll even get to test your knowledge on a couple of simulators I designed, where you'll have to solve real problems that self-driving cars encounter. Beyond that, you'll even get real Kalman filter code, which you will write yourself with my help, that you can actually deploy on a real robotic system. So whether you're looking to just start your robotics career or whether you're a seasoned professional who's just looking to brush up on Kalman filters, please do check out this course. It'll be well worth your time.
2. Intro: Welcome to Autonomous Robots: Kalman Filters. In this course, I'm not only going to show you how Kalman filters work and how you can code them yourself, but I'm also going to explain why Kalman filters are such a big deal in robotics and why they're so widely used. This course is titled Autonomous Robots: Kalman Filters because this is the first in a series of courses I'm going to make on robotics in general, but more specifically, autonomous robots. If you happened to take my previous course on PID control, you would have noticed that the examples I used and the simulator I designed were all based on designing PID controllers for an elevator. Well, in this course, and for all the courses under the Autonomous Robots series, all of the examples will be real robotics examples. So, for example, in this course, all of our examples and all of our assignments will be working with an autonomous car, or self-driving car. As with all my courses, I strongly recommend you watch all the lectures at 1.5 times speed. I just find it's way easier to process, and I'd rather watch a lecture twice at a higher speed than watch it once at normal speed. In this lecture, I'm just going to give you a quick overview of what to expect from this course, the assignment structure, and whatnot. I'm then going to give you a quick overview of robotic software systems in general, so you can have some idea of where Kalman filters fit into larger robotic systems. And then I'm finally just going to show you how to set up the simulator, play around with it a little bit, and show you how to install it and get everything up and running. This course has four main assignments, but also a lot of bonus material included in the assignments where you can go above and beyond. I really can't stress doing the assignments enough. It's one thing to watch lectures and sort of understand what's going on.
But it's another thing entirely to actually do it yourself and actually use the information you've gained. If you want to retain what you learned in this course, please do the assignments thoroughly and really try the bonus material that I offer. Just to give you a rough overview of the course: in the next section, I'm going to talk about filtering basics, essentially what filtering is and what it's used for, and then I'll set up the problem that Kalman filters can solve that other filtering methods can't. After that, we're going to do a toy implementation of a Kalman filter, which is where we will hand-code a rough equivalent of a Kalman filter that is not exactly right. Toy implementations are not meant to be serious; they're just meant to give you an understanding of how the overall structure of something works by doing it in a more simplistic manner. After that, we will implement a full Kalman filter in one dimension and have an assignment based on that. Then we'll upgrade to two dimensions, and finally, the final assignment will be on Kalman prediction. Kalman filters are, in general, used to try to estimate what the current state of the robot is, and prediction is trying to estimate the future states of the robot. Now, if you don't quite understand that, don't worry; that's what this course is for. It'll all make sense at the end. Robotic systems can generally be divided into three parts. The first is sensing: you try to sense the world around you. Second, you decide: based on what you sensed, you decide what you want to do. And then, finally, the robot needs to act: based on what it has decided to do, it needs to actually do that thing. Generally, only the sensing and the acting portions actually interact with the real world. Imagine a self-driving car sitting at a red light waiting to go. The self-driving car needs to sense the real-world state of the light.
It needs to sense the color of the light, and when the light goes from red to green, the robot needs to sense this change in the real world. After that, it needs to decide what to do. It needs to decide: is the intersection clear? Is it safe to move forward? And if it decides it's safe to move forward and drive, then it finally needs to act. It actually needs to step on the gas pedal and actually drive the car, and thus make something happen in the real world. The list shown here is by no means complete, but under each of the three categories, I've listed a few of the key technologies or methods used for accomplishing each of those three stages. So, under the sensing stage, I've listed sensor fusion, filtering, and localization. Sensor fusion is when you pull in information from multiple different sensor sources and unify it to get the best estimate of your current state. The real world is noisy, and our sensors and our measurement devices don't always give us an accurate picture of what's going on. Therefore, you need filters in order to get a better estimate of your current state, or whatever it is you're trying to measure. Localization is trying to figure out where you are in the world based on the information around you. This is weird to us humans, because we do localization automatically and we do it so easily we don't even realize that it's difficult. But actually, when you try to do localization on a robot, it's an extremely difficult problem. Path planning, prediction, and behavior planning are all things the robot needs in order to decide what to do once it has a good idea of where it is in the world and what's around it. You can think of path planning as the problem Google Maps is solving when you ask it how to get from your house to the grocery store: given a network of roads, it has to determine a way that you can actually get from point A to point B in the least amount of time. That is actually a very non-trivial problem.
And if you've ever tried to navigate in a very complex city or busy urban area, you know that it's not as trivial as you might think. Prediction is trying to answer: based on your current information, how can you predict what's going to happen in the future? So, for example, if you see a car with its left signal light on, you need to predict whether it's going to turn in front of you or if it's going to wait, and whether you're clear to drive through the intersection. Behavior planning is about breaking down the problems that a self-driving car needs to overcome into a few distinct states so that you can handle them more specifically. So, for example, the things that a self-driving car needs to be thinking about or planning for when it's trying to park in a parking lot are vastly different from the things it needs to be thinking about or worrying about when it's driving down the highway. So behavior planning is thinking about how you can switch between those two states, and how you think about each of the problems individually. Then, when the robot needs to act on the real world and actually make what it has decided to do a reality, you have things like PID control, which I already have a course on, or model predictive control, which will actually be the next course I release under the Autonomous Robots heading. Now, model predictive control is a major interest of mine, so if you're interested in that, stay tuned for my next course. Now, of all the things that I listed under the sensing aspect of a robotic system, Kalman filters can actually accomplish all three. Kalman filters can be used for sensor fusion, that is, bringing in information from multiple sensors and fusing it together. However, in this course, I won't actually go over sensor fusion. I'll be mainly focusing on the second part, which is the filtering problem. Now, in the self-driving car example we'll be using throughout the course, I actually do use Kalman filters to accomplish localization.
However, Kalman filters are by no means a one-off solution for solving all of localization. Localization is a very difficult problem, and just one method alone can't solve it. However, Kalman filters can be used to greatly aid in localization. To use the simulator, you'll need Python 3 installed, along with the NumPy and Matplotlib libraries. Generally, if you have Python 3 installed, you'll already have those two libraries. And if you've already got that all set up, skip ahead to the end of this lecture, where I'll show you how you can download the simulator from GitHub.
3. Filtering Basics: Welcome back to Autonomous Robots: Kalman Filters. In this lecture, I'm going to talk about the typical use cases for filters. We'll then talk about the limits of standard filters, and I'll show you the types of problems which Kalman filters are well suited for. Measurement noise is something that is always in the back of any engineer's mind. For example, let's say you stand roughly 180 centimeters tall. If you give five people the exact same tape measure and tell them to measure how tall you are, it's very unlikely that they will all produce the exact same result. The first person might measure you as 180.1 centimeters tall, the next person maybe 179.5 centimeters tall, the third maybe 180.5 centimeters tall. Whenever you're trying to measure something, noise will always be there. Filtering is when you take a number of measurements and you try to remove the noise from them. For example, shown here is a very noisy signal. Filtering might produce something like the dotted line shown here. If you know that the real world is not behaving in such a noisy, erratic manner, you can use a filter to get a much smoother signal, which can more accurately reflect reality. A moving average filter is one of the most basic filters you can implement. Let's say we're trying to measure something, and again, the numbers don't matter here; they're all arbitrary. Let's say our first measurement is 2.5, our second measurement is 0.4, our third measurement is 1.5, and our final fourth measurement is 3.8. Now that we have multiple raw measurements, what we can do is average all of these measurements together and produce one filtered number. For this example, that filtered measurement is 2.05. Let's place an X on the graph to signify that filtered number. And now let's take a new measurement. Let's say that our next measured number is 1.1.
Again, what we can do is filter over the four latest measurements, so we won't use 2.5 as part of our average, because that is now five measurements behind. We're only going to use the latest four measurements as part of our filter, and that is why it's called a moving average filter. If we continue this process for even more measurements, you can see that the Xs shown on the graph are a lot more stable and consistent than the dots. The dots are the raw measurements; they seem to be moving all over the graph. Yet the Xs, which are the moving averages, are a lot more stable, because each is composed of the four latest measurements. Now let's try this moving average filter on a self-driving car. I'm sure you're all aware of what GPS is and roughly what it does. So let's imagine that your self-driving car has a GPS chip in it, which gives you the car's current position. However, like any sensor, it has a little bit of noise, so we want to filter this noise out. So let's start with the car at position zero on the GPS. The car then moves, and the GPS chip takes a new measurement, and this puts us at 2.2. Let's imagine that the car continues to move, and the next GPS measurement is 3.7. Again, the car moves a little bit more, and the next GPS measurement is 6.4. Now we've got four numbers; let's try our moving average filter. If we take the average of these four numbers, we get 3.075, which is really nowhere near where our car currently is. Let's say we move ahead one time step and we take a new measurement of 8.1. We shift the moving average window and take a new moving average, and we get 5.1 as a result. Again, 5.1 is well behind the car's current location. As you can see, the moving average filter is doing more harm than good, as we might as well just take the latest raw sensor measurement, because that's far closer to the car's position than our moving average value. So how can we filter this noisy signal?
Because if we look at the measurements and we assume that the car has always been traveling at a constant speed, something is not right. We still need to filter the noisy signal, but we need to do it in a way that doesn't always estimate our position well behind the car. This is the exact type of problem that a Kalman filter is perfect for, and in the next lecture we'll start talking about the toy implementation of a Kalman filter and how Kalman filters roughly work.
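The moving average behavior described above can be sketched in a few lines of Python. This is just an illustration using the GPS numbers from the lecture; the function name and the window size of four are taken from the example, not from the course's actual simulator code.

```python
from collections import deque

def moving_average(measurements, window=4):
    """Average a sliding window of the latest `window` measurements."""
    buf = deque(maxlen=window)   # old values fall out automatically
    averages = []
    for m in measurements:
        buf.append(m)
        if len(buf) == window:   # only average once the window is full
            averages.append(sum(buf) / window)
    return averages

gps = [0, 2.2, 3.7, 6.4, 8.1]    # the lecture's GPS readings
print(moving_average(gps))       # roughly [3.075, 5.1]
```

Note how both filtered values trail well behind the latest raw reading, which is exactly the lag the lecture points out.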
4. Kalman Toy: Welcome back to Autonomous Robots: Kalman Filters. In this lecture, I'll be showing you how we can do a toy implementation of a Kalman filter, and then for the assignment following this lecture, you'll have to actually implement this in Python. We left off the last lecture with a self-driving car with a GPS attached to it. The GPS had given us five sensor readings, and we tried to use a moving average filter so that we could filter the noise out of these measurements. Now, if I told you that the car has been moving at a constant speed the entire time, and you were to look at the measurements listed below, what would you guess if you had to guess where the car will be by the next measurement? So what do you think x will be for the next measurement? If you guessed that the car's next position will be something around 10, then that's a very good guess. Let's break down the process you might have used to come to that guess, and hopefully that will reveal some insight about how we can actually create a good filter. The way that I came to this guess is that I noticed that the x values were increasing roughly by two each time; that is, the delta, or the difference between any two consecutive measurements, was roughly two. You can see here that if I actually take the differences between all the measurements and average those differences, I get roughly two. And then what I do is take my most recent measurement, which is 8.1, add two to it, and I get roughly 10. Let's lay this process out in a more formal mathematical way. Let's say that x(t) is our most recent measurement, and delta x is the average difference between our previous measurements. Then, if t is our current time and t+1 is the next time step, x(t+1) is what we expect the next measurement to be: x(t+1) is equal to x(t) plus delta x. Now, it's a little bit odd to be using delta x, which is the distance between two measured points.
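The "guess around 10" reasoning above can be reproduced directly: average the differences between consecutive measurements, then add that average delta to the latest measurement. This is just the lecture's mental arithmetic written out, using its GPS numbers.

```python
gps = [0, 2.2, 3.7, 6.4, 8.1]    # the five GPS readings from the lecture

# differences between consecutive measurements
deltas = [b - a for a, b in zip(gps, gps[1:])]
avg_delta = sum(deltas) / len(deltas)   # roughly 2
prediction = gps[-1] + avg_delta        # roughly 10

print(round(avg_delta, 3), round(prediction, 3))
```

The averaged delta comes out to about 2.025, so the predicted next position is about 10.1, matching the intuition in the lecture.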
Really, what we should do is break delta x into x-dot and delta t, where x-dot is the speed we think we're going and delta t is the time between measurements, because really, delta x here signifies the distance we have traveled. So, for example, our first measurement was zero, our second measurement was 2.2. Delta x is the distance we traveled between those two points, which is 2.2. But you don't want to think about things in terms of a change in distance. What you want to do is think about it in terms of speed times time. So, for example, if it took one second to go from 0 to 2.2, then our speed is 2.2 meters per second. Using this formula, you can see that we can come up with some rough estimate of where the car will be at the next time step. If we have some rough estimated speed, we can multiply that by time and add it to our current measurement. So again, in other words, we can use our previous measurements to come up with some rough estimate of our speed, and then we can add that to our most recent measurement to estimate where we will be at the next time step. That way, we don't have to rely as much on any individual noisy measurement. For example, you can see there that 3.7 was one of our measurements. We don't have to rely on an individual measurement that can contain a lot of noise if we use the average speed and just add it to our most recent measurement. This is a rough analogy for how a Kalman filter actually works. Obviously, this isn't perfectly analogous to how a Kalman filter works, but by implementing a filter that works this way, you'll be able to see roughly how the Kalman filter works. And by changing some variables, which you'll do in the assignment, you'll see some of the trade-offs between giving more weight to past measurements and giving more weight to current measurements. On the left, you can see the class where you will be implementing the solution for the assignment.
Now, we'll obviously go over this in a bit more detail in the assignment intro lecture, but for now, just realize that you have to fill in both the predict and update functions. Those two functions will be called by the simulator in order to produce an estimate of where you believe the car currently is and what you believe the car's current speed is. Just as a basic rule for Python: any time you see a variable that has self. in front of it, like, for example, self.v, that variable will be retained for the next time the function is called. If you instead use a plain local variable, say prediction = 0, then the next time the prediction function is called, that variable will be erased; you won't be able to use the previous value. So I've used self. so you can retain the previous x value and the previous time. For the prediction function, I want you to predict where the car will be at the next time step. So essentially, take your most recent measurement x(t), add your current estimated speed, which is self.v, multiplied by delta t, and return that number. For the update step, what I want you to do is update your current estimate of speed. A real Kalman filter is also broken up into both prediction and update steps, so we'll do the same here for our toy implementation. Now, your measured velocity is obviously the change in x divided by your change in time. What I want you to do first is set self.v equal to your most recently measured velocity. However, once you set constant speed to false, that is, disable constant speed so that the car in the simulator drives at varying speeds, you'll see that this method, while it works great for a constant speed, does not work well for a changing speed. Also, since you need to use the previous x and the previous t, don't forget that at the end of the update function, you'll need to set these values to the current x and current t.
That way, the next time the functions are called, they will be the previous x and the previous time. Now, once you've set constant speed equal to false, you'll notice that the method I outlined previously, setting self.v equal to your measured v, doesn't work. So instead, what I want you to try is setting self.v with the equation above: self.v += (measured v minus self.v) times 0.5. I also want you to try multiple numbers between zero and one. Try 0.1, try 0.9, and see what effect this has, and pay special attention to when the speed changes, because that's when you'll notice the effect. Make sure you understand, or have some rough idea of, what is actually changing when you change that 0.5 number to something else. This is really important for a real Kalman filter, and when we do the full Kalman filter implementation, I'll be referencing back to this change. Now, if you don't understand what += is in Python, it essentially just sets a variable to its current value plus something. So, for example, if you have y equal to 1 and you do y += 4 and then print the value of y, the new value is now 5. The next lecture will just be a brief introduction to the assignment and the simulator used, so if you feel you have a good handle on what's going on, just jump right into the assignment.
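To make the predict/update structure above concrete, here is a minimal sketch of the toy filter as a class. The class name and constructor are my own assumptions (the assignment's actual class and simulator interface will differ), but the predict equation, the self.v blending update, and the bookkeeping of the previous x and t follow the lecture's description.

```python
class ToyKalman:
    """Toy predict/update filter from the lecture (not a real Kalman filter).

    alpha is the blending factor discussed above: higher values weight the
    newest measurement more, lower values weight past measurements more.
    """
    def __init__(self, alpha=0.5):
        self.alpha = alpha
        self.v = 0.0        # current speed estimate
        self.prev_x = 0.0   # previous position measurement
        self.prev_t = 0.0   # time of the previous measurement

    def predict(self, t):
        # x(t+1) = x(t) + v * delta_t
        dt = t - self.prev_t
        return self.prev_x + self.v * dt

    def update(self, x, t):
        dt = t - self.prev_t
        if dt > 0:
            measured_v = (x - self.prev_x) / dt
            # the lecture's blending rule: self.v += (measured_v - self.v) * alpha
            self.v += (measured_v - self.v) * self.alpha
        # store the current values so they become "previous" on the next call
        self.prev_x = x
        self.prev_t = t

# quick check: a car moving at a constant 2 m/s
f = ToyKalman(alpha=0.5)
for t in range(1, 6):
    f.update(2.0 * t, float(t))
print(f.v, f.predict(6.0))   # speed estimate approaching 2, prediction near 12
```

With constant-speed data the estimate converges quickly; the interesting behavior, as the lecture says, appears when the true speed changes.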
5. Assignment 1 Intro: Okay, so I have my assignment 1 .py file open. You can open this in whatever your favorite text editor is, and I'm ready to work on the assignment. So again, for this first function here, predict, you want to return your predicted value of x, so you will use your previous value of x, which is your previous measurement. You'll also use your previous time in conjunction with this time t here. So if you want delta t, you can do something like t minus your stored previous time, and that'll give you your delta t, and then you'll want to use your previous x and your most recent estimate of velocity and return that predicted value. And then for the measure and update part, you want to update this value of self.v. And don't forget, at the end, like I mentioned, you need to set previous x to x and previous t to t, so that the next time the functions are called, your previous x is the most recent x you measured, et cetera. You don't have to worry about any of this stuff at the bottom; this is just how it's called. You can change this constant speed flag from true to false in order to change the speed of the car in the simulator, which I'll show you right now. So if you're not too familiar with the command prompt, you can cd to change to the folder you want, and now I can run the assignment 1 file with Python. So here's the simulator. I'm obviously not outputting anything, so my x estimate error is just going down, negative, because my x estimate is probably zero, and the car's way ahead of that, so I'm way negative. And then here you can see my v estimate. It's kind of hard to see, but it's zero. One hint: whenever you're closing the simulator, make sure you click the X button; don't close it with a Ctrl+C in the terminal.
But just to show you, I'll set this to 30, and let's set self.v equal to... actually, let's do 2, so you can see our self.v estimate is now exactly the 2 that we're outputting, and our x estimate error, once we go past 30 here, you can see it start to go down, because at 30 our x estimate error would have been zero, and as we go past that, it will be negative. So this dotted line here in the v estimate, that is actually the true speed the car is traveling. So if I go here to constant speed and set this to false, you will see this dotted line move, and you'll be able to physically see the car on the left here change its speed. It will slow down first, and then I believe it speeds back up again. Yeah, so here you can see now we're going much slower. Obviously, our v estimate is remaining the same, because I just have it set to a constant value. So there's not too much to this assignment; it shouldn't be too bad. Again, play around with that 0.5 value once you get it working with constant speed set to true, and that will help you a lot as we design a full Kalman filter in the future.
6. Assignment 1 Walkthrough: Shown here is my solution to assignment 1. As you can see right here, I kind of did away with the prediction variable, but your answer would have been something like this: prediction equals... it produces the same result either way. So here I have self.v times delta t plus x, the algebraic equation that I've shown you numerous times now. And then for the measure and update, we calculate our measured v by doing our x minus our previous x, that is, our delta x, over our delta t, and then this is that 0.5 value that I told you to manipulate. The effect that you should have seen is that once the speed is changing, so let's actually set constant speed here to false, the lower you set that number, the slower it will respond to a change in speed. So I'll bring in my terminal and run my assignment 1 solution, and you can see that fairly quickly we get up to the estimated v. However, here we have a speed change in the car, and you can see that this actually reacted quite quickly; in a number of time steps it gets down and equals the new speed. However, let's set this value to 0.1 and see what happens. You can see, even for this first time step, it's taking a long time for our v estimate to reach the real v value, and as a result, our x estimate error is also lagging quite far behind. See, it didn't even get to the top before it responded. So you may be thinking, well, you should always use a higher value. However, there's a trade-off to that. If we set this to 0.9, you'll see that it responds very quickly; however, notice how wavy it is. That error won't go away. So you see that there's a trade-off between how quickly you react and how much error you have. And the reason for that is that when you use a larger value here, you're essentially putting more weight on the current measurement.
However, when you set it lower, you're putting less weight on the current measurement and more weight on your previous measurements. So if we set constant speed to true, and we set this again to a lower value, let's try 0.3, you will notice that it will take a long time to get to the v estimate, but once it gets there, it's very accurate. It will have quite a bit less up and down movement to it, because essentially you're taking in more information from the past. Look at how stable that is; when we had a higher number, it wasn't nearly that stable. So that's it for the first assignment. It's pretty straightforward. I hope you played around with this number and figured out what was going on with it, and if you did that, then you are well on your way to understanding a full Kalman filter.
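The responsiveness trade-off described in this walkthrough can be demonstrated outside the simulator. The sketch below is my own simplified setup, not the course's simulator: a noiseless speed step change (2 m/s dropping to 1 m/s), tracked with the lecture's blending rule at two different alpha values.

```python
def track(alpha, n=20):
    """Run the self.v-style blending update against a speed step change."""
    v = 0.0
    errors = []
    for t in range(1, n + 1):
        true_v = 2.0 if t <= 10 else 1.0   # the car slows from 2 m/s to 1 m/s
        v += (true_v - v) * alpha          # measured v == true v here (no noise)
        errors.append(abs(true_v - v))
    return errors

fast = track(0.9)   # heavy weight on the newest measurement
slow = track(0.1)   # heavy weight on past estimates
# shortly after the speed change, the high-alpha filter has nearly caught up,
# while the low-alpha filter is still lagging well behind
print(fast[11], slow[11])
```

With noisy measurements the picture reverses in steady state: the low-alpha filter would be smoother, which is the other side of the trade-off the lecture describes.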
7. Kalman Filter Full Implementation: Welcome back to Autonomous Robots: Kalman Filters. In this lecture, I'm going to show you how to implement a full Kalman filter, which you will then use to implement a 1D Kalman filter on the exact same simulator you used for the previous assignment. By 1D, I just mean that we'll be working with one dimension. So, for example, in the last assignment, the car could only drive straight; it couldn't turn left or right or anything like that. So we'll first be designing a Kalman filter to work in one dimension, and then, in future lectures, we'll look into Kalman filters working on a car that can travel in two dimensions. Now, before we can just jump into how the Kalman filter works and the equations that make it work, you need a little bit of background knowledge on linear algebra. If you have no knowledge of linear algebra, or you're a little bit rusty on the subject, don't worry. I'm going to start with a big refresher on the topic that will give you all the information you need, so that you can actually use a Kalman filter and design one yourself. Linear algebra is a way to express algebraic equations in a more compact form using matrices. At the top, you can see two algebraic equations. You should recognize these equations as the exact same ones we used in our toy implementation of a Kalman filter. At the bottom, you can see the linear algebra equivalent of those two equations. Again, the top algebraic equations and the bottom linear algebra equations produce the same result; they contain the exact same information. While at first glance it may appear that the algebraic equations shown are a much more condensed and succinct form of the information, once you start manipulating a lot of equations and working with them, you'll see that the linear algebra form can save us a lot of hassle, while at the same time making it much clearer what's actually going on.
I'm now going to give you a quick refresher on how matrix multiplication works, so you can see how we can go from the linear algebra form back to the algebraic form at the top. This is how I can show you that the two sets of equations are actually equivalent. Matrix multiplication is one of those things where it's easier to see what's going on by just doing it. So let's just imagine that we have the two matrices listed below. The first matrix is a two-row, four-column matrix, and it's being multiplied by a four-row, one-column matrix. To multiply these two matrices together, essentially what you do is take all the numbers in the first row of the first matrix and multiply them by all the numbers in the first column of the second matrix. So: first row times first column. If you do that, you will get the resulting matrix shown here. Again, to make it absolutely clear: you start with the first number in the first row and multiply it by the first number in the first column, so one times 0.1. Then you move a number over; you take the second number in the row and multiply it by the second number in the column, so two times 0.2, and onwards until you get to the last number in the row, four, and multiply it by the last number in the column, 0.4. Always think about it as going left to right in the rows of the first matrix and top to bottom in the columns of the second matrix. Now, we're not done yet, because we still have to deal with the second row of the first matrix. However, we just do the same thing all over again. We take the first number in the row and multiply it by the first number in the column, and move all the way left to right in the rows and top to bottom in the columns. That will produce the final matrix result shown here. Now let's go back to the original matrices we started with. Hopefully now you can see some way that the bottom linear algebra can turn into the algebra we have at the top.
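The row-times-column rule just described can be checked with NumPy, which the course already requires for the simulator. The first row (1, 2, 3, 4) and the column vector (0.1, 0.2, 0.3, 0.4) match the lecture's example; the second row (5, 6, 7, 8) is a made-up value of mine, since the lecture's slide isn't visible here.

```python
import numpy as np

# a 2-row, 4-column matrix times a 4-row, 1-column matrix
A = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8]])   # second row is an illustrative assumption
b = np.array([[0.1],
              [0.2],
              [0.3],
              [0.4]])

# first row times first column: 1*0.1 + 2*0.2 + 3*0.3 + 4*0.4 = 3.0
print(A @ b)   # a 2-row, 1-column result
```

The `@` operator applies exactly the left-to-right, top-to-bottom pairing the lecture walks through, one row of the first matrix against the column of the second.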
If we multiply these two matrices together, we get the following result. The first row of the resulting matrix is 1 times x(t), 0 times y(t), delta t times x-dot, and 0 times y-dot. For the second row, we have 0 times x(t), 1 times y(t), 0 times x-dot, and delta t times y-dot. Now, if you do not understand how to multiply these two matrices together, I really can't stress enough how important it is to understanding Kalman filters. You will not be able to fully understand Kalman filters if you don't understand the mechanics of how linear algebra works. You don't necessarily have to understand why multiplying two matrices produces that result, but you do have to understand how to get from the starting equations to the results. If you don't, please either go back and watch this again a few times, or look somewhere else for more information about linear algebra, because you won't really be able to fully understand Kalman filters without getting the mechanics of how matrix multiplication works. Now let's temporarily hide all the terms in the matrix that are zero, so you can see more clearly how the resulting matrix turned out. Looking at this, you can now see clearly how x(t+1) can equal x(t) plus delta t times x-dot. Now let's remove the intermediate step, so we can just see our matrix containing x(t+1) and y(t+1) equal to the resulting matrix we just calculated. All we need to do to get our final answers is split up both the matrix on the left side of the equals sign and the matrix on the right side by row. Imagine the matrices getting sliced horizontally; then you just add up all the values left in each row. So we get x(t+1) equals x(t) plus 0 plus delta t times x-dot plus 0, and the same for y(t).
We get y(t+1) equals y(t) plus 0 plus delta t times y-dot plus 0. By removing the zeros, you can now see how we went from two matrices, which looked nothing like the algebraic form, to the exact same result as the algebraic form. Now, you may be thinking that this seems like an awful lot of work just to get back the exact same equations. However, once you start adding even more equations, you'll quickly see that the linear algebra form is much easier to work with. Now let's go from two equations to four equations, as shown on the left, and imagine we want to calculate the answers to all four of these equations in Python. First, we have to initialize all the values, so let's just make up some numbers: x(t) equals 2, y(t) equals 3, x-dot equals 4, y-dot equals 5. We'll set dt, which is the delta t, equal to 1. Let's say our input from the pedal is 8 and our input from the steering is 9. These are not real self-driving-car update equations; this is just to show you how it works. To calculate this, we would set x(t+1) equal to x(t) plus x-dot times dt, et cetera. I'm sure you can understand the code. In the calculate section, you can see it took setting up all the variables and then about four lines of code to calculate. Now let's imagine we're trying to come up with answers for those same four equations using linear algebra instead. This time, we set up our matrices and then multiply them together. Our first matrix, x(t), would just be 2, 3, 4, 5 in a column vector. Our input u would be 0, 0, 8, 9 in a column vector. And our matrix A, which is the matrix that contains all the information of the update equations, would be as shown here.
Then all you need to do is A times x plus u, and you will get a column vector containing the exact same answers as x(t+1), y(t+1), x-dot, and y-dot. So you can see that by using linear algebra, we can solve four equations all at the same time using just one line of code, A times x plus u. Very simple. If you're still not totally confident in your linear algebra skills, I highly recommend you grab a pen and paper, work it out exactly as I showed you, and confirm you get the exact same result in linear algebra form as you would in normal algebraic form. Now, with that background knowledge in place, we can finally start designing our Kalman filter. Before you're overwhelmed by this slide, just focus on the left-hand side first. Essentially, the way our Kalman filter works is that we have to create the five matrices I have listed here: the x matrix, P, F, H, and the R matrix. Designing those matrices is the entire focus of this course. Once that's done, you just follow a sort of mechanical procedure shown on the right, where you take those matrices, multiply them together, do some transposes, this matrix minus this matrix times another matrix, and you get the result out of that. For the right side, you don't necessarily have to understand everything that's going on. The real part of designing a Kalman filter is creating the matrices to start with. How the process actually works when you're multiplying the matrices is something you can look into more, now that you have some idea of how linear algebra works, but honestly, for the most part, no one cares too much. Even Sebastian Thrun, the founder of the Google self-driving car project: when I was researching this course, I looked at his course on Kalman filters, and he said that when it comes to the equations on the right, he just Googles them and looks them up. You don't need to memorize them or really understand everything that's going on.
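Here is that four-equation example as a sketch in NumPy, using the made-up numbers from the lecture (x = 2, y = 3, x-dot = 4, y-dot = 5, dt = 1, pedal = 8, steering = 9). The matrix A below is my reconstruction from the four update equations described, since the slide itself isn't visible in the transcript.

```python
import numpy as np

dt = 1.0
x = np.array([[2.0], [3.0], [4.0], [5.0]])  # state: x, y, x-dot, y-dot
u = np.array([[0.0], [0.0], [8.0], [9.0]])  # inputs: pedal and steering

# A encodes the four update equations, e.g. x(t+1) = x(t) + dt * x-dot
A = np.array([[1, 0, dt,  0],
              [0, 1,  0, dt],
              [0, 0,  1,  0],
              [0, 0,  0,  1]])

x_next = A @ x + u   # all four equations solved in one line
print(x_next.ravel())  # [ 6.  8. 12. 14.]
```

Compare with the algebraic version: x(t+1) = 2 + 4·1 = 6, y(t+1) = 3 + 5·1 = 8, and the velocities pick up the pedal and steering inputs, giving 12 and 14.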
Just know that, for the most part, it will produce the results you want. I'm now going to go in depth into each of the five matrices listed there, starting with the state vector. A little more background information on matrices: matrix sizes are always defined as rows by columns. So, for example, the state matrix, the state vector you see there, has four rows and one column, so the size of the state matrix is four by one. Now, I've used this term "state" quite a few times throughout this course. Essentially, the real definition of state is something that contains all the information about your current place in time. For example, if you're driving a car, your relevant current state information might be your position, how fast you're going, and how much you have the steering wheel turned. The information included in your current state really depends on what you are trying to solve. So let's go back to imagining that we just want to know our current x, y position. For that, our state should obviously include x(t) and y(t), our current position in x and y. But it probably also makes sense to include the rate of change we're moving in x, which is x-dot, and the rate of change we're moving in y, which is y-dot, because if we know how fast we're going in each of those directions, we will have some idea of where we will be next. It really depends on what you are trying to measure, or what you are trying to use the Kalman filter to get a more accurate estimate of. Your state vector, or state matrix, should be of size number-of-states by one. Therefore, it should have the same number of rows as the number of states you have, and one column. As shown on the left, we have four states, x(t), y(t), x-dot, and y-dot, and thus we have four rows and one column. Next, we need to populate our state matrix with our starting conditions.
So, for example, shown here we have our car starting at position x equals 2, y equals 1, and both x-dot and y-dot are zero. That means our car is at some position and is currently not moving. This is the starting condition for when you first turn your Kalman filter on. Whatever those starting conditions are, to the best of your knowledge, put them into the state matrix. Moving on, let's look at the uncertainty matrix. The uncertainty matrix P is of size n-states by n-states. Keeping consistent with our previous example, where we had four states, here we have a four-by-four matrix. What you do is, along what is called the diagonal, shown here with the number 100 in each location, you populate the matrix with the initial uncertainty for each state. That first 100 in the top left corresponds to our initial uncertainty in x(t), and the one in the bottom right corresponds to our initial uncertainty in y-dot. Now, why I chose 100, I can't really say; you kind of just have to play with the uncertainty matrix, feel it out, and see how it responds. But let's say, for example, you knew for a fact that your starting position x(t) was 2 and your starting position y(t) was 1, as in the previous state matrix. Then, instead of 100 for the top-left two diagonal values, you would set them both to zero. But let's say you were uncertain that you were going zero speed in both the x and y directions; then you would keep the bottom-right two values at 100. Now, part of what you'll do is change these values in the uncertainty matrix and see what happens. You kind of have to tune your Kalman filter for each problem, and tuning your uncertainty matrix is definitely a big part of designing a good Kalman filter.
Here is an example where I said we're certain about our starting conditions of, let's say, x(t) and x-dot. I have set both the values corresponding to x(t) and x-dot, which were the first and third values in our state matrix, to zero. Next, we need to create the state transition matrix, which is of size n-states by n-states. This matrix will be used to estimate how we go from one state to the next. For example, remember previously that x(t+1) equaled x(t) plus delta t times x-dot. That function allowed us to transition our state from x(t) to x(t+1), going from the current state to the next state. That is what the state transition matrix needs to accomplish, but for all of the states in our state matrix. So, in matrix form, we need to populate F so that x(t+1) equals F times our state matrix x. If we look back at our previous linear algebra example, the first matrix on the left here, starting with 1, 0, delta t, 0, is essentially our state transition matrix. It is helping us transition from our current state x(t) to x(t+1). As an example, I've populated the state transition function with the same algebraic equations we used before, and just to make it a bit clearer, here is F times x; you can clearly see how this gives the equations we were working with before. For the most part, setting up all the Kalman filter matrices is very straightforward. However, for the state transition matrix, you really need to think about how the actual robot, or self-driving car, or whatever it is you're working with, actually transitions from one state to another. And there's no one right answer; there are multiple different ways you can model how a self-driving car or robot moves.
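A sketch of that state transition matrix for the four-state model, showing that F times x reproduces the update equations (the dt value here is made up for illustration):

```python
import numpy as np

dt = 0.1
# State transition matrix F: x(t+1) = F @ x
F = np.array([[1, 0, dt,  0],   # x(t+1) = x + dt * x-dot
              [0, 1,  0, dt],   # y(t+1) = y + dt * y-dot
              [0, 0,  1,  0],   # x-dot carries over unchanged
              [0, 0,  0,  1]])  # y-dot carries over unchanged

x = np.array([[2.0], [1.0], [4.0], [5.0]])  # x, y, x-dot, y-dot
x_next = F @ x
print(x_next.ravel())  # approximately [2.4 1.5 4.  5. ]
```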
And in fact, in the bonus material of assignments two and three, you will try a new way of modelling the self-driving car that I don't actually show you here, and you will see that it can work just as well, if not better, than the method I have set up here. Moving on, we have our measurement matrix. Our measurement matrix H is of size number-of-measurements by number-of-states. Think back to our GPS example, where we're getting either a location in x, or a location in x and y. If we're getting a location in x and y, we're essentially getting two measurements, two numbers: our estimated position in x and our estimated position in y. That is why the matrix here is a two-by-four matrix. Next, you need to populate the matrix so that when you take this H matrix and multiply it by the state, you essentially pick out the measured values, the ones that will be compared against the measurement vector z. So, as I said before, let's pretend we're measuring both the x position and the y position. I've put our state vector into the H-times-x product here just so you can see it as a reference. Essentially, for the first row, we set the entry for x equal to one and all other values to zero; then, for the second row of the H matrix, we set the entry for x to zero, the entry for y to one, and all other values to zero, as shown here. Now, just to make sure you understand what's going on, I'm going to give you a few questions. What if, instead of measuring the current x and y position, for some reason our GPS measures our current x speed and y speed? What would our measurement matrix H look like in this case? The answer is shown here. Now let's try something a bit harder. What if we measure all four states, but for whatever reason our four states are measured out of order? That is, the first measurement in our measurement vector is x-dot, then we have y, followed by y-dot, and finally x. What will our measurement matrix H look like in this case?
Shown here is the answer. If you don't quite get it, always refer back to what the H-times-x product will look like, and realize that all you're doing is plugging in a one in the position that corresponds to the same value in the z matrix. So, for example, the very last row is measuring x; therefore, the bottom-left value of the H matrix should be a one, because it corresponds to x. Finally, shown here we have our last matrix, the measurement uncertainty matrix R. Like the P matrix, this is one of those matrices where you have to plug in values and see how they work. Essentially, what this matrix is trying to convey is how accurate each of our measurements is. If you totally believe each measurement 100 percent, you can put zero in here for the measurement uncertainty. However, try other values here: try 10, and try 0.1. Again, this is one of those matrices where you only plug values in along the diagonal, and the value in each row should correspond to the value in the z matrix. So, for example, if our z matrix is x and y, then the two values we're plugging in here relate to the measurement uncertainty we have about x and y. Now that we have all of our equations set up, we're ready to actually do the computation part of the Kalman filter. Essentially, the Kalman filter computation is broken up into two parts. In the first part, you predict what the future state will be. As in our previous toy implementation, based on the current state and how much time has passed between the current state and the next measurement, we can predict what the future state will be. Second, we measure, using our z, which contains our measurements, and we update our prediction based on the difference between what we predicted and what we actually measured. So again: you predict what you think is going to happen in the future, then you compare that to what you measure the future to be, and then you update.
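Here are the three H matrices from the quiz as a sketch, for the state ordering x, y, x-dot, y-dot. Each one just picks the measured states out of the state vector (the state values are made-up numbers):

```python
import numpy as np

x = np.array([[2.0], [1.0], [4.0], [5.0]])  # state: x, y, x-dot, y-dot

# Measuring x and y position:
H_pos = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]])

# Measuring x speed and y speed instead:
H_vel = np.array([[0, 0, 1, 0],
                  [0, 0, 0, 1]])

# Measuring all four states, but out of order: x-dot, y, y-dot, x
H_mixed = np.array([[0, 0, 1, 0],
                    [0, 1, 0, 0],
                    [0, 0, 0, 1],
                    [1, 0, 0, 0]])

print((H_pos @ x).ravel())    # [2. 1.]
print((H_vel @ x).ravel())    # [4. 5.]
print((H_mixed @ x).ravel())  # [4. 1. 5. 2.]
```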
That way, your prediction falls more in line with the measurement, so that next time step, when you predict again, you should have a more accurate prediction. At the end of all this calculation, we should have newly updated x and P matrices. That is, our state matrix should be updated and our P matrix should be updated; those are the only two matrices that really change throughout this process. Our state matrix, now freshly updated, will contain our belief of where our current state is. So if we're a car, with our GPS running and our x, y position, the parts of the state matrix that correspond to x and y will be our best guess of the current location of the car. As for the P matrix, it will essentially show how uncertain we believe the values we estimate in the state matrix are. So if you have very high values in your P matrix, that means that when you are predicting, and then measuring to update, your prediction is very far off from the measurement: you're not getting a very good prediction, and your P matrix values go up, saying that you are very uncertain. However, if whatever you predict for your future state comes nearly perfectly true when you measure it, your P matrix values will be really low, which essentially says you have very small uncertainty about your state matrix values. Now, that's definitely a lot of information to take in, but the best thing to do now is to jump right into the assignment, and then reference back to the slides I have on all five of the matrices you need to define. For this second assignment, you'll be implementing a full Kalman filter on the 1D simulator you used for the toy implementation. Since we're working in one dimension, our state is really only x and x-dot, so n-states will equal two, and your number of measurements, which will be your x measurement, will be one.
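For reference, here is a minimal two-state sketch of that predict-then-measure-and-update cycle, using the textbook Kalman filter equations, which is what the mechanical procedure on the right-hand side of the slide amounts to. The variable names, tuning values, and measurement sequence are my own illustration, not the course's starter code.

```python
import numpy as np

dt = 1.0
x = np.array([[0.0], [0.0]])    # state: position, velocity
P = np.diag([1000.0, 1000.0])   # initial uncertainty (a guess to tune)
F = np.array([[1, dt],
              [0, 1]])          # state transition
H = np.array([[1, 0]])          # we measure position only
R = np.array([[10.0]])          # measurement uncertainty
I = np.eye(2)

def predict(x, P):
    x = F @ x                    # project the state forward in time
    P = F @ P @ F.T              # project the uncertainty forward
    return x, P

def measure_and_update(x, P, z):
    y = z - H @ x                # difference: measurement vs prediction
    S = H @ P @ H.T + R          # uncertainty of that difference
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain: how much to trust z
    x = x + K @ y                # correct the state estimate
    P = (I - K @ H) @ P          # correct (shrink) the uncertainty
    return x, P

# Feed in position measurements of a car moving about 1 unit per step
for z in [1.0, 2.0, 3.0, 4.0]:
    x, P = predict(x, P)
    x, P = measure_and_update(x, P, np.array([[z]]))

print(x.ravel())  # position near 4, velocity near 1
```

Notice that only x and P change as the filter runs; F, H, and R stay fixed, just as described above.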
I will supply some starter code so you can just plug in the values, and I've also supplied the code for calculating delta t, with an example of how you can enter it into the F matrix using NumPy. NumPy is not the focus of this course, but if you want to learn more about how NumPy actually does all the matrix calculations, you can search online. For this assignment, I really think it's a good idea for you to play around and experiment: try different P values, R values, et cetera, and even set different starting states for your state matrix, things like that. It will be much easier to see what's going on and how things react in one dimension than it will be in 2D, so make sure you understand what's going on in one dimension before you move up to the more complex problem of designing a Kalman filter in two dimensions. Now, once you get everything working and you have a very low error on your x estimate, try setting constant speed to false again. What you should notice is that it takes forever for the Kalman filter to update to the new speed, which is not a good thing, because you should be thinking that this complete Kalman filter should work much better than our previous toy implementation. Now I want you to think about that 0.5 value you were changing in the toy implementation, and how changing that value made the filter react to the change in speed. That exact same effect that changing the 0.5 value had is sort of what's going on with the P matrix. Remember, the P matrix is our uncertainty matrix; it's how uncertain we are about each of our state values. So when the car changes speed, it's essentially changing x-dot; therefore, you don't want the Kalman filter to be certain about its x-dot value. You want to increase its uncertainty. So what you should do is try adding to P[0][0] and P[1][1], as I've shown here, sort of adding uncertainty back in, and see what effect this has.
Also, try adding 1000, or try adding 0.1, and see what happens. Again, play around and see if you can figure out what's going on.
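As a sketch, adding that uncertainty back into P after each update might look like this (the 0.1 here is just one of the values to try, not a recommended setting):

```python
import numpy as np

P = np.diag([1000.0, 1000.0])  # two-state uncertainty matrix

# After each measure-and-update step, add a little uncertainty back
# so the filter never becomes overconfident in position or speed
# and can still react when the car actually changes speed.
process_noise = 0.1            # try 1000, try 0.1, and see what happens
P[0, 0] += process_noise
P[1, 1] += process_noise
print(P.diagonal())  # [1000.1 1000.1]
```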
8. Assignment 2 Intro: Here you can see the starter code for assignment two. You can see that I've already created the matrices for you in the correct sizes, mainly because I didn't want you to be fiddling around with NumPy and figuring out how it works. Again, this is not a course on NumPy or Python; this is a course about Kalman filters, and you can apply this knowledge to any programming language. So here are plenty of examples, because in the next assignment, assignment three, you will actually have to change these sizes yourself. This matrix here, you might have seen in the slides as I: that is the identity matrix. It's a linear algebra thing, essentially a matrix that has ones along the diagonal and zeros everywhere else. So when you see I in the notes, this is the matrix you will use. You don't have to change this one; it's fine as it is. But for all the other matrices, you will have to enter in values. I've placed zeros for all the values, so you will have to look back at the notes and see what you should place in each position. Now, I've kept previous time, and self.v you will still need. You will need to calculate dt and put it directly into the state transition matrix. If you actually knew what dt was, say it was a constant value of 0.5, meaning you were taking a measurement twice every second, then you could manually put 0.5 in wherever dt belongs and not worry about it. But in this case, I'm setting you up with some good habits by having a varying dt, because that can happen on a real system, where you're not always measuring at a consistent rate. I've also included the start of the calculation for P, because I wanted you to know how to do a transpose.
And again, a transpose is something you'll see written as F with a superscript T, and it signifies a transpose, which essentially means you're flipping the matrix. That's more linear algebra that you don't really have to worry about if you don't want to. I've also included how to do the inverse of a matrix, because you'll notice that the S matrix is raised to the power of negative one. Just like with the last assignment, I want you to return the predicted x value during the predict step. And then, once you're done with the measure-and-update step, I have it set up so that you set self.v to your predicted velocity, just so it can show up in the chart on the simulator. To multiply matrices, you just use the asterisk, like you would in any other programming language: if you want to multiply this by x, you do self.x, et cetera. So if you get any errors in the console, most likely you're multiplying the wrong matrices; again, just look back at the notes and see what you did. I've already turned the measurements into a matrix, so you can just go ahead and use z wherever you want. Note that these values don't have self. in front of them, so you would use them as they are. For this assignment, and actually all the other assignments in the course, there is no stopping condition. If you took my last course on PID controllers, you'll have noticed that sometimes the simulator would say pass or fail depending on whether you completed the assignment or not. There's actually none of that in this course, because it's so obvious to see whether the Kalman filter is having a positive effect or not.
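The transpose and inverse operations just mentioned can be sketched in NumPy like this. One caveat that's my own note, not from the lecture: with plain NumPy arrays, the matrix-multiplication operator is @ (while * is element-wise); the asterisk does matrix multiplication only on the older np.matrix type, which the starter code may be using.

```python
import numpy as np

F = np.array([[1.0, 0.5],
              [0.0, 1.0]])

# Transpose: F with a superscript T in the notes ("flipping" the matrix)
Ft = F.T

# Inverse: S to the power of negative one in the notes
S = np.array([[4.0]])
S_inv = np.linalg.inv(S)

print(Ft)     # rows and columns swapped
print(S_inv)  # [[0.25]]
```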
So it almost allows you to make the filter as good as you want; or, if you just get the rough idea of it and it seems like it's doing something, you can go ahead and move on to the next assignment. Other than that, I think you have all the information needed to successfully complete the assignment. Good luck!
9. Assignment 2 Walkthrough: Here is my solution to assignment two. It's pretty straightforward. Again, I just started with 1000 for the uncertainty matrix, and I filled in the state transition function just as you would. Now, one thing you could do differently: because you know this value is going to be set by dt, you could set it to zero, or 1000, or whatever, because here you're going to overwrite that value anyway. So let's try it with constant speed equal to true and see what happens. Running my solution, you can quickly see that it gets up to the v estimate quickly and remains very constant, which is a really good thing. However, if we try it with constant speed set to false, we will notice that once the speed changes, the Kalman filter is extremely slow to react, so much so that it would not be good for any real use case. You can see the car slows down, the Kalman filter is just taking forever to react, and in the meantime the x estimate error has gone through the roof. What you can do to remedy that, and remember, this is similar to how we did it in the toy implementation, where you changed that 0.5 value to get it to react faster: if we were printing this P matrix on every step, what we would notice is that this 1000 value is getting really low, really small. The Kalman filter is becoming overly confident in its result. So what we can do is add some uncertainty at every time step: after we update P, we add a bit more uncertainty, so it relies less on the previous information, because essentially it's still uncertain about what the final values are. When we do this, the trade-off is that we should notice slightly more error in this part of the v estimate. Here you can see it actually seems quite high.
The error is quite high; it's not quite locking onto this value. But again, the trade-off for this extra uncertainty is that it will react faster. My numbers are not the final solution; they're not the optimal best case. I'm sure you could come up with better numbers, especially for this part here. You can also try different things for your measurement uncertainty and see what effect that has. But as long as you understand the rough idea, that's the aim of this course. I hope you got this far, and I look forward to seeing you in the next assignment.
10. Kalman Filter 2D: Welcome back to Autonomous Robots: Kalman Filters. You actually already have all the knowledge you need to design a Kalman filter for two dimensions, so let's just jump right into talking about the assignment. In this assignment, you will implement a full Kalman filter on my 2D simulator. Now, all the starter code matrices are the wrong size; they're sized for the one-dimensional problem, so you'll have to change all of the matrix sizes yourself. To get started, I highly suggest you use the same model we set up in 1D, but extended with the same thing in the y dimension. That will give you an n-states of four and a number of measurements of two, which is just your measured x and y position. If you get any errors spewing out into your console, make sure you check your matrix sizes, as this is the most likely source of an error. If you need to, you can go over the previous lecture and verify that you're setting the correct sizes for all of the matrices. Now, with the current setup, you won't get something that perfectly tracks the car's movement, but it will work quite a bit better than just taking the raw measurement. So don't strive for perfection with the current model you have set up; if you get something that's okay, feel free to move on to the bonus material, which I'll talk about next. You'll notice that one of the simulator options is drive in circle, which is currently set to false. Try setting this option to true and see what happens. Does your filter do a good job of estimating where the car is? Chances are it doesn't, and it will really just be bad, a failure overall in terms of a filter. What I want you to do is think about the states and the state transition matrix. And here's a hint: do speed limit signs say maximum 30 kilometers an hour in the x direction and 25 kilometers an hour in the y direction? No, they don't. They just have one speed.
But the way we've set up our model, with x-dot and y-dot, we kind of have two speeds in perpendicular directions. What if we tried something new for our state vector? Let's try a state vector with five states. We have x(t), our current position in x; y(t), our current position in y; v, our total vehicle speed (note we don't have speeds in two different x, y directions, just one vehicle speed); theta, which corresponds to our vehicle's current heading, a.k.a. which way our vehicle is currently facing in the world; and theta-dot, which you can almost think of as our steering. When you turn the steering wheel of a car, you essentially change the direction the vehicle is heading, so theta-dot is changing the direction of the vehicle heading, and therefore it's similar to our steering. Now, I'm not going to give you the full state transition function. Because this is bonus material, I think it's necessary for you to actually do some googling (search for car model kinematics, et cetera) and try to figure out a state transition model that works. However, just to get you started, I'll fill in the bottom half of the state transition function, as shown here. And just to make things a bit more interesting, the bottom half of the state transition function I've shown here isn't totally correct. The correct state transition function won't have all ones, as I've shown here; one of the ones should actually be a delta t. So again, all of the zeros are correct, but one of the ones I've shown here should really be a delta t. Now, in order to get a Kalman filter working with this new state transition function, you'll need to recall how you entered delta t manually into the F matrix each time. You'll have to do the same thing again; however, you'll have to do it with values in the first two matrix rows.
So, just as a little more of a hint, you'll have to take values from your state matrix and manually enter them into the F matrix. I suggest you start with the algebraic equations and work from there toward a state transition function that will work. I'm not going to provide solutions for this, as it's an extracurricular thing, for those who want to go above and beyond and truly understand how Kalman filters work by designing their own state transition function. If you do get this to work, be sure to send me a message. Who knows, maybe I'll give you a discount on my next course or something, because I'd really be impressed to see you do this.
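To illustrate the mechanical part only (not the model itself), re-entering time-varying values into F before each predict might be sketched like this. The index used below is a placeholder for illustration, not the answer to the bonus exercise:

```python
import numpy as np

# Five-state example: x, y, v, theta, theta-dot.
# Start from an identity-shaped F; the real entries are the exercise.
F = np.eye(5)

def predict(F, x, dt):
    # Anything that depends on dt (or on values from the state matrix)
    # has to be re-entered into F on every step, just like entering
    # dt manually in the four-state assignment. The index chosen here
    # is a placeholder; working out which entries to fill is the point
    # of the bonus material.
    F[3, 4] = dt
    return F @ x

x = np.array([[0.0], [0.0], [1.0], [0.0], [0.5]])
x_next = predict(F, x, dt=0.1)
print(x_next.ravel())
```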
11. Assignment 3 Intro: Here you can see the starter code for assignment three. Again, I've copied the matrices over just as they were from assignment two, but since we're now working in two dimensions, with both x and y components, you'll need to expand the size of all of these matrices. Exactly how you'll do that, I outlined in the lecture notes, so you should have plenty of information to go back and reference to see what you should do here. And again, you'll have to fill in all the values yourself. I've supplied a four-by-four identity matrix; if you do the bonus material, where I talked about using five states instead of four, you will have to expand the identity matrix yourself. The two functions where you'll do all the action are, again, predict and measure-and-update. You don't have to return anything from predict this time; instead, all the information you'll be returning will happen in the measure-and-update function. I believe that's all the information you'll need. Oh, actually, the options at the top here. This drive-in-circle option is what I said you should try once you get it working without it. Essentially, the course, as you'll see, has the robot drive and take two left turns, but with drive in circle, the self-driving car will just constantly drive in a circle, and you'll see that the way we're thinking about this implementation won't work in that case. These two options, measure angle and receive inputs, are for the bonus material, when you use five states. Essentially, instead of just measuring x and y, if you set measure angle to true, you will also receive in the measurements, in this part right here, the current angle of the car along with x and y, which, if you remember the five states, corresponds to theta. So you will have to expand your measurement function. And then there's this receive inputs option.
This is another piece of bonus material for this simulator setup. When this is set to true, this function here, receive inputs, will be called. But don't worry about it until you've finished assignment 4, because once you finish assignment 4, I'll show you how to deal with inputs. Then, if you want, you can come back and do it as another layer of bonus material on top of assignment 3, because essentially there's the five-state bonus material, and then there's using the five states while also receiving the inputs. Now let's just run the simulator so you can see what it looks like. This screen here is basically just a zoomed-in version of what's going on here. This green box is the raw measurements — same as this green box here, it's the raw measurements — so you can see the car is driving down the course. The light is currently red right here, so the car is going to come to a stop, and you can look at either one of these screens. It'll probably be a bit easier to see what's going on in the zoomed-in screen, which is why I added it. You can see there's this blue box here; that's actually the prediction output from your Kalman filter. It's essentially just the prediction of your x and y coordinates. So the car is going to come up, this light is green, so the car is going to make a left-hand turn here, and that's the end of the course. Let's actually put in some values so you can see what it looks like. Let's set x to, I don't know, 100. No, we shouldn't set it to 100; let's set it to 50, and y to 20. So here is our blue box. You can't really see it, but you'll get the idea: hopefully the blue box should be moving a lot less erratically than the green box. Now, it won't be perfect; using the kind of model we developed, it will be hard to get an answer that is perfect.
And you'll notice that even when the car here — our red self-driving car, very detailed — comes to a stop, the blue dot will actually move towards the front, because it's only just starting to detect that the car is slowing down, which it isn't predicting to happen. Kalman filters are generally predicting that whatever the state currently is, you'll maintain that state. Again, you'll notice on the corner here that the blue box will sort of fall to the outside before coming back. Now, the five-state model will make this quite a bit better, and you'll essentially need it in order to get this working nearly perfectly. It will be really tough with the current model you have set up, but at least you'll get an idea of how the entire thing works.
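As a reference point for what "expanding the matrices" means, here is one possible shape for the four-state setup described above. The layout `[x, y, x_dot, y_dot]`, the placeholder dt, and the noise values are all illustrative assumptions, not the simulator's actual starting code:

```python
import numpy as np

dt = 0.1  # placeholder; in the assignment you compute dt between measurements

# Hypothetical state layout: [x, y, x_dot, y_dot]
x = np.zeros((4, 1))            # state estimate
P = np.eye(4) * 1000.0          # start very uncertain about every state
F = np.array([[1.0, 0.0, dt,  0.0],   # x <- x + dt * x_dot
              [0.0, 1.0, 0.0, dt ],   # y <- y + dt * y_dot
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0, 0.0],   # we only measure x and y
              [0.0, 1.0, 0.0, 0.0]])
R = np.eye(2) * 5.0             # assumed measurement noise; tune for the sim
```

The structure is the same as the 1D version from assignment 2, just with a y row and a y_dot row added everywhere.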
12. Assignment 3 Walkthrough: Here is my solution to assignment 3. As you can see, I've got a four-by-one state matrix. My uncertainty matrix is four by four, and I've put 1000 all throughout the diagonal. I have my state transition function at four by four, and I actually sort of cheated here: because I made the simulator, I know what dt is, so I just plugged the dt value directly in. But hopefully you were able to figure out that you had to store self.prev_t again and then calculate your dt the way you did it in the other assignments. I started out by doing this — this doesn't matter; you could do this at the start or the end, it will have the same effect. Everything else is fairly similar to how you would have done it in assignment 2, but now you've just scaled everything up to four dimensions. So let's run it and see how it does. Nope, that's not the right one. That's the one. You can see it's not perfect: it falls a little bit behind as the car gets up to speed, but once the car starts maintaining a constant speed, it locks on to the back of the car quite consistently. Just for reference, x and y are measured from the back of the car. Then, as you'll notice, when the car slows down, the blue shifts forward, because it's not anticipating the car slowing down. That problem you can't really fix until you do the receive-inputs part, because if the Kalman filter doesn't know the car is slowing down, then obviously it's going to think it's noise and just predict that the car will maintain a constant speed. Again, here, once the car starts turning, you'll notice the blue dot sort of fades a bit, because it's anticipating the car will constantly maintain its motion, but then it'll eventually come back. You could have it snap back faster if you use a higher value for what you're adding to the uncertainty matrix.
But again, that has the trade-off we talked about in assignment 2. Here the car comes to a stop, and then the test is done. That's about as good as I made it for this four-state implementation. Obviously, if you try the bonus material where you use five states, and then get to the next part of the bonus material where you're actually receiving the inputs, you'll be able to do much better than this, which I'll show you. But if you're happy with your solution now, feel free to move on to assignment 4. Actually, before you do that, let's see how this behaves when we turn drive-in-circle to true. What's going to happen is the car is going to pull up somewhere around here, and then it's going to just start driving in a consistent circle. You'll notice that, just like when we took the left turn up here, the blue dot sort of fades to the outside. However, it will never recover from this. With the way we've set up the model, it will never recover; this is as good as it's going to get. However, if you use the five-state model, it will actually recover, and it will actually lock onto the back of the self-driving car. So if you want to try that bonus material, I think it would be really helpful to do, but obviously it's not necessary for understanding how Kalman filters work, as it's more specific to this sort of problem.
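For anyone comparing against their own solution, the predict and measure-and-update steps that the walkthrough scales up to four dimensions can be sketched like this. The function names and structure here are my own generic version, not the assignment code itself:

```python
import numpy as np

def predict(x, P, F):
    # Project the state and its uncertainty forward one step.
    # (In the assignments you also add a value to P here to keep the
    # filter from becoming overconfident.)
    x = F @ x
    P = F @ P @ F.T
    return x, P

def measure_and_update(x, P, H, R, z):
    y = z - H @ x                         # innovation (measurement residual)
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(P.shape[0]) - K @ H) @ P
    return x, P
```

Nothing in these two functions cares about the state size, which is why going from one dimension to two (or to five states) only means resizing the matrices you feed in.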
13. Kalman Prediction: Welcome back to Autonomous Robots: Kalman Filters. If you've made it this far, congratulations: you've designed a Kalman filter in one and two dimensions, and you've come a long way since the start of the course. In this lecture, I'll set up the final assignment of the course. The final assignment is a bit of a fun one. I've set up a real problem that self-driving cars have to think about, and you'll have to use a Kalman filter in a way we haven't used it previously in order to solve it. If you recall from the previous lectures, the first part of the Kalman filter computation is the predict stage, and the very first thing you do is set x equal to F times x. That is, you take your state transition function and multiply it by your current state to get your next predicted state. Here I show our state transition function in action, producing our next state. Now I have a question for you. What would happen if, instead of using delta t as the difference between our two measurements, we just set delta t to a large number? Let's say we set it to 10 seconds. What would x(t+1) be in this case? And I'm not asking what the number would be, but what it would signify. When delta t is the difference in time between our two measurements, x(t+1) signifies where we predict x(t) will be at our next measurement. So if we think about that, when we set delta t to some big number like 10 seconds, then x(t+1) signifies where we believe x(t) will be in 10 seconds. In other words, when we set delta t to whatever number we want, we're essentially asking the Kalman filter where it believes, based on the current state information, x will be in 10 seconds — or whatever number we set. In this final assignment, a self-driving car will be approaching a green light.
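The idea of "asking the filter where x will be in 10 seconds" is just the predict step with a bigger delta t. Here's a minimal 1D sketch with made-up numbers (a two-state `[position, speed]` layout of my own, not the assignment's matrices):

```python
import numpy as np

x = np.array([[55.0],    # current position
              [5.0]])    # current speed
F = np.array([[1.0, 10.0],   # the dt entry set to the 10 s look-ahead
              [0.0, 1.0]])
x_future = F @ x   # position 10 s from now: 55 + 10 * 5 = 105
```

Same F-times-x computation as always; only the delta t inside F changed.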
However, as the car approaches the green light, at some point the light will go from green to yellow. Now, for those of you who have a driver's license, you know that you cannot be in the intersection when the light is red. So if the light turns yellow, you have to predict whether you will be able to make it to the other side of the intersection before the light turns red. If you can, you are allowed to proceed through the intersection; if you can't, you must come to a stop. Let's assume it takes three seconds for a light to go from yellow to red. That is, from the time the light turns yellow, you have three seconds to either get to the other side of the intersection, or stop and be clear of the intersection. If we imagine that our self-driving car already has a method for detecting when the light turns yellow, then what we can do is use our Kalman filter to predict where the car will be three seconds in the future. If it predicts that we will be in the middle of the intersection, we can tell the self-driving car that it needs to stop, because it won't get to the other side of the intersection in time. However, if in three seconds we will be on the far side of the intersection and clear, then it is okay for us to proceed. I've set up this function, predict_red_light, here, and essentially this function will be called when we detect the light going from green to yellow. So when the light turns yellow, we will call this predict function. Then what we need to do is predict where we will be three seconds in the future and return true or false: true if we are able to make it to the other side of the intersection without speeding up, and false if we need to step on the brakes and stop before the intersection. In the return statement, I've also included the x_new[0] value, which will essentially be our x value — the only value we care about, because really this example is only happening in the x dimension.
As you'll see in the simulator, that x_new value will be printed onto the screen, so you can see where the Kalman filter predicts the car will be. If the car doesn't make it to the other side of the intersection, you should see this x_new creating a dotted line in the middle of the intersection, and then the false will tell the car that it needs to stop. Now, since we're just predicting the future, we don't actually want to update our x or our P matrices, right? Because we just want to predict what will happen; we're not trying to predict and then update. So what we need to do is use a copy of the state transition matrix and a copy of x, which I've labeled F_new and x_new here, and I've wrapped them in np.copy, which is a function that just creates a raw copy of whatever you put inside it. So in your calculation, you should use F_new as opposed to self.F. You'll have to change values in F_new. Remember, F_new has delta t in it, and we want to set delta t to the duration it takes to go from yellow to red. So you'll have to do that — but make sure you do it in F_new and not self.F. In the second function below, predict_red_light_speed, you are going to make a prediction of whether you can make it to the other side if the self-driving car is allowed to speed up a bit, or step on the gas. If you've ever driven, you've doubtlessly done this: the light turns yellow, you don't think you can make it, so you give your car a little bit of gas just to ensure you make it to the other side. However, we'll have to implement this increase in speed — this stepping on the gas — into our prediction, because our Kalman filter only has our current speed, whereas we want to predict what will happen in the future. If we step on the gas and increase our speed, we want to know where we will end up relative to the intersection.
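Putting the copy idea into code, a stripped-down predict_red_light might look like the sketch below. I've reduced the state to a two-state `[x, x_dot]` layout for illustration (the assignment's matrices are larger), and the numbers are made up; the essential points are working on np.copy copies and swapping delta t for the yellow-light duration:

```python
import numpy as np

LIGHT_DURATION = 3.0   # seconds from yellow to red, per the assignment

def predict_red_light(x, F, light_location):
    # Work on copies so the filter's real x and F are never touched --
    # we only want to peek at the future, not update the filter.
    F_new = np.copy(F)
    x_new = np.copy(x)
    F_new[0, 1] = LIGHT_DURATION        # replace dt with the 3 s horizon
    x_new = F_new @ x_new
    can_make_it = x_new[0, 0] >= light_location
    return can_make_it, x_new[0, 0]

x = np.array([[55.0], [5.0]])            # [x, x_dot], illustrative values
F = np.array([[1.0, 0.1], [0.0, 1.0]])   # normal running dt of 0.1
ok, x_pred = predict_red_light(x, F, light_location=90.0)
# 55 + 3 * 5 = 70, which is short of 90, so the car should stop
```

Note that after the call, self.F (here just `F`) still holds the original dt, which is the whole point of copying.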
In order to do that, we'll have to look at inputs, which is something I've previously neglected as part of our Kalman filter. At the top, you can see our state transition function in action, but this time there's an added u term at the end of it: we have F times x plus u. This u signifies our inputs. The way to think about an input is that an input is anything which affects our next state. So you can see at the bottom, my x_dot equals x_dot plus u. Normally here it's just x_dot equals x_dot, but now we have some u term which is impacting the next state we will transition to. Here is another example where we have our input u affecting x(t+1): we have x(t+1) = x(t) + delta t times x_dot, which is all normal, but now we have this added u term involved. Our input u is essentially a column vector the same size as our state vector, and we place the number in the row of the state we want our input to affect. So here in this example, we have it affecting x(t+1), and if we go back to our previous example, there our input is affecting x_dot. Now, if we imagine our self-driving car, it wants to speed up so it can make it to the other side of the intersection before the light turns red. It doesn't want to brake and wait; it wants to instead speed up and make it to the other side. So what we want is for our input to affect our x_dot, which is our speed in the x direction. Our input will increase x_dot, and then the next time we compute x(t+1), x_dot will be larger because we'll be going at a faster speed, and we'll be able to predict whether we can make it to the other side of the intersection even when we step on the gas. For the assignment, the car won't always be able to make it to the other side, but it will make it in more cases when it's allowed to step on the gas.
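The F-times-x-plus-u idea in code, again on a simplified two-state `[x, x_dot]` layout with made-up numbers:

```python
import numpy as np

dt = 1.0
x = np.array([[0.0],     # x
              [5.0]])    # x_dot
F = np.array([[1.0, dt],
              [0.0, 1.0]])
u = np.array([[0.0],
              [1.5]])    # the input lands on x_dot only
x_next = F @ x + u       # position: 0 + 1 * 5 = 5; speed: 5 + 1.5 = 6.5
```

Because u is a column vector the same size as the state, putting the 1.5 in the x_dot row is what makes the input act on speed rather than on position.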
So, some more hints for the assignment. Make sure you reuse your previous Kalman filter code, and if you did the bonus material where you tried it with the different states, you can use that model or this one; it doesn't really matter. Now, when you're predicting whether the car will make the light, assume that the light takes three seconds to go from yellow to red, and again, the predict_red_light function will be called the moment the car notices the light has turned from green to yellow. Then, for the predict_red_light_speed function, you will have to assume that it takes one second for the car to increase its speed by 1.5 units. So essentially, your u in this case will be equal to 1.5, and you will have your u set up on your x_dot, so x_dot will equal x_dot plus u. Now, you will assume that it takes one second for your car to get to the new speed. So essentially you'll have to call your predict function, or do your predict action, twice. First, you use your u term to increase x_dot to a higher value and have that last one second. Then you'll have to do x equals F times x again, for another state transition covering the remaining two seconds. So essentially we're predicting that for the first second the car is just increasing its speed, and then for the last two seconds the car is traveling at this increased speed, and we predict where it ends up. If that's sort of confusing, don't worry about the details, because it's actually not a perfect way to implement it. It's just a rough way to increase x_dot and then have this increased x_dot translate to a further distance ahead. It won't even be exact, because the way the simulator adjusts speed isn't exact either, but you will generally do these rough sorts of things: you can get a rough first-order approximation of how something's working.
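The two-step prediction described above — one second of speeding up, then the remaining two seconds at the new speed — can be sketched like this. As before, the two-state `[x, x_dot]` layout, the function shape, and the numbers are my simplification of the assignment, not its actual code:

```python
import numpy as np

def predict_red_light_speed(x, light_location, light_duration=3.0):
    """Rough two-step look-ahead: 1 s while u bumps x_dot by 1.5,
    then the remaining time at the new, higher speed."""
    u = np.array([[0.0], [1.5]])
    F1 = np.array([[1.0, 1.0],    # dt = 1 s: the second spent speeding up
                   [0.0, 1.0]])
    x_new = F1 @ np.copy(x) + u
    F2 = np.array([[1.0, light_duration - 1.0],   # remaining 2 s
                   [0.0, 1.0]])
    x_new = F2 @ x_new
    return x_new[0, 0] >= light_location, x_new[0, 0]

ok, x_pred = predict_red_light_speed(np.array([[55.0], [5.0]]), 70.0)
# after 1 s: x = 60, x_dot = 6.5; after 2 more s: x = 73, so we make 70
```

The same rough first-order approximation the lecture describes: not physically exact, but good enough to decide whether flooring it gets you across.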
Now, after you finish this assignment and you get the input working, you can try another part of the bonus material, which is where you go back to the 2D simulator we were using before — not this one with the red light — and you use the car's inputs to help your Kalman filter. In a real self-driving car, the car itself knows when it's turned the steering wheel to the left; the Kalman filter isn't just along for the ride. What you can do, when you are trying to predict your next state, is take advantage of this knowledge — for example, of where the steering wheel is, or where it's going to be — and use that as part of your Kalman filter to get an even more accurate prediction. Again, I'm not going to show a full solution to this part of the bonus material, as it's sort of extracurricular; it's there if you want to go above and beyond, put in some extra work, and get more out of it. I will, however, show a solution that I've implemented using the car's inputs, to show you that it can be done and how it makes the results of the Kalman filter even more accurate. Again, this is the final assignment of the course, so really do make sure you iron out any misunderstandings you have about Kalman filters at this point. And be sure to stick around for the outro lecture, where I'll sum up everything you've learned and accomplished so far.
14. Assignment 4 Intro: As with all the other assignments, I've started with the smaller matrix size, which you'll have to expand on your own. But again, you can just use the Kalman filter code you used previously; it'll all be the same. There is one notable exception here: I've actually given you the initial state. I've set it up with these numbers — 55 for x, 3 for y, and 5 for x_dot — just so that your Kalman filter will localize really quickly, because there's not a lot of time for it to localize before you get to the traffic light and need to decide whether to go through the light or not. So you can actually set all the diagonal values of your P matrix to zero, because you are not uncertain about what these values are. Now, if we go to this predict_red_light function: again, this function is called any time the light goes from green to yellow, and essentially all it's doing is taking this light_location, which is the location of the traffic light, and comparing against it. What you'll have to do is calculate where the car will be three seconds from now and compare whether that is less than light_location. If the x value you estimate is less than the light location, that means you will end up in the middle of the intersection, and therefore you need to come to a stop; returning false here will make the car come to a stop. And if it's actually greater than that, that means you predict the car will be on the other side of the intersection, and therefore you can return true. Now, this x_new[0] value here: what that does is signal to the simulator to display the value you're predicting, so that when the car stops, you can actually see where it predicted its tail end would be. So let me just set this to, let's say, 80 instead of 90, and run the assignment.
Let me put this into a little vector, like that — it doesn't matter. You can see our car coming up here, and this is that 80 value. Because 80 is less than the light location, which is somewhere around 90, I believe — you don't actually need to know that for the real assignment, because your Kalman filter will handle all of that — it decides to stop. Now, this value here on the right is the amount of time after the simulation starts at which the light will turn from green to yellow. Every time I press X, this will run; it runs five times, and this value will increase each time. So what should happen is that for the first two or three values, your car should come to a stop, but after that, it should predict that it will make it through to the other side of the intersection. Obviously, it's not going to work here, because I don't have anything set up yet. Just keep pressing X until you've gone through it five times, and then it'll end. Once you're done that, you can turn allow_speeding to true, and what that will do is actually call this function, predict_red_light_speed, where you're going to do basically the same thing, but you'll have to set it up in the manner I explained in the lectures. So you should have all the tools you need to attack this assignment. Good luck and have fun.
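For reference, the known initial state and the zeroed uncertainty matrix described above could be written down like this (a sketch with the values from the intro; the real assignment's matrix sizes may differ):

```python
import numpy as np

# The starting state is handed to you exactly, so P can start at
# zero: there is no uncertainty to model for these known values.
x = np.array([[55.0],    # x
              [3.0],     # y
              [5.0]])    # x_dot
P = np.zeros((3, 3))
```

Contrast this with the earlier assignments, where you started P with large diagonal values precisely because the initial state was unknown.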
15. Assignment 4 Walkthrough: Here you can see the solution to assignment 4. All of this is not much different from the other assignments. As I noted here, I set all the diagonals of my P matrix to zero, because I know these starting values; I'm not uncertain about them, so I don't need any values in the uncertainty matrix for them. Again, for my state transition function, I kind of cheated: I just put dt directly in there. All of this is the same. Now we get to the part that has changed — the part we added to predict the red light. So just like before, where we had to enter dt into these values, here we're entering the light duration instead of dt. We want those values to be three. But again, we don't want them to be that in the real state transition matrix; we only want to do that in this F_new matrix — oops, I'm in the wrong function — in this F_new matrix. Once we have that set, we just take F_new times self.x, and that's it. There's really not too much to this aspect of it, and you'll see if I run it — let me clear that, actually. So, for this first one, what does it decide to do? It decides to go through. Hold on one second — I've got allow_speeding on. Okay, we're not actually quite there yet; getting ahead of myself. I'm just going to exit that out. Okay, let's turn allow_speeding off and do that again. So here we go. It's coming up to the light and, look at that, it predicts it will be in the intersection when the light turns red, so it decides to stop. Let's press X and run it again. This one, same thing: it predicts it will be in the intersection. Let's try the third one. Once more, it predicts it's in the intersection. Now here it should get interesting. The light turns yellow, and it predicts it'll just barely make it.
And while that was close, you know, I trust in the filter, and I am confident we wouldn't have gotten a ticket. Let's just see the last one here. Again, obviously, we make it pretty easily this time. So now let's try allow_speeding again, and we should see something different. Here, right away, you can see that it decides it can make it, and it speeds up. It might be kind of hard to see, but you can definitely tell that the car is speeding up once the light turns yellow. Here we go. For my implementation of the speed version, I first check to see whether the previous method — the one without speeding — would work, so that I wasn't speeding unnecessarily through the intersection. But you didn't have to do that. Then what I do is create a copy of u. I manipulate u by adding this 1.5 — that's the 1.5 units I talked about. I'm adding it to x_dot, so I'm increasing x_dot by 1.5 units. Then I set F_new's delta t to one second; instead of using the light duration, I'm using one second here because, as I talked about in the lecture, we imagine it takes one second to get up to this new x_dot speed. Then I do x_new equals F_new times self.x plus u_new — well, that's a tongue twister. Now our x state at this point should be one second after the light has turned yellow, because we have just increased our speed, but it took one second to get up to that speed, so we've burned one second of the light being yellow. Next, we set our F_new delta t to the light duration minus one, because we've got only two seconds now to make it across the intersection — but we're going at a faster speed. Then I do the same thing again: x_new equals F_new times x_new. Now I get a new value, and as we saw, this new value, for all the cases listed, should be on the far side of the intersection.
Now, one thing you'll notice if you look really closely: before, the dotted line of our prediction lined up perfectly with where we ended up, but here it won't quite be perfect. You can see the car was actually a bit conservative — it was actually closer to over here when the light turned red. That's because when we speed up the car in reality by stepping on the pedal, it's not that it takes one second to get to that speed and then it's instantly going the new speed; in reality, it slowly ramps up to that speed. But again, I just made it a simple first-order approximation that can work — and actually, it's not as bad as I thought. So this was the last assignment in the course. I really hope you enjoyed all the assignments; I really had fun putting them together. Please stay tuned for the outro lecture, where I'll sum up everything you've learned and give you a few key takeaways in case you ever happen to be designing Kalman filters yourself. Thanks.
16. Assignment 3 Bonus: So I've gone ahead and implemented the five-state bonus solution. Now, I'm not going to show you my solution here; however, I will show it working in the simulator. Here I've got measure_angle set to true, receive_inputs false, and drive_in_circle false. When I run it, we should see that it's pretty similar to how it was working in the four-state setup, where we had individual x and y speeds, and it will behave pretty much the same. While it's running this course, the blue dot still moves forward, and it'll fade a bit when it turns. That's not really what we're interested in, though. What we're interested in is when you're driving in a circle constantly. If you think about the five states, the way we've set them up — if your fifth state is set up how I sort of recommended, though by no means is it the only way — you have a theta_dot term, and theta_dot is the change in the vehicle's heading. Now, if you are driving in a circle like this in a car, your steering wheel position is at a constant point; you're not changing your steering, you're not going left or right or anything, you're just holding it at a constant value to the left. Essentially, what the Kalman filter is doing is predicting what that theta_dot is, sort of locking onto it, and using it in its prediction. You can see it's pretty much dead stable on the back of the car; it's not constantly floating to the outside like it was in the example before. Now, even with the way we have it set up, the blue dot will fade as the car pulls up to the stoplight — that is, the blue dot will sort of push towards the front of the car. Here the car is decelerating, it's stepping on the brakes, and the Kalman filter has no idea about that. So how is it supposed to know that it's not just noise, and that the car is actually slowing down?
Well, you can implement a solution in that receive_inputs function. If I set this to true, then my Kalman filter is not only receiving the GPS x and y and the current angle, it's also receiving the inputs from the steering wheel and the pedal. Everything seems maybe a bit more stable than before, and now, when the car brakes, the Kalman filter will factor the braking into its prediction. So here, as the car steps on the brakes — it's already slowing down — look at that: it barely moved. It barely moved in comparison to before. It should do just as well on this corner. I mean, that's about as good as you're going to get; look at how noisy the signal is. Now that I think about it, I'm going to show you just one quick sneak peek of how to do receive_inputs, because it might actually be a bit too tough. I'm just going to zoom the camera over to the right side, separate this out, and delete one part. Okay, there. So now you can look at the receive_inputs function. What you need to do is take the two of your values that will be changed by your steering and pedal, and set them in the state transition function to zero. Essentially, your u is acting like your x_dot — your v, or whatever your speed is — so you don't want to have both your pedal u and your estimated speed interacting at the same time, because you'll sort of double your speed. So remove your speed from the state transition matrix by setting it to zero, and then just use your pedal input, because that's probably more accurate than trying to measure your speed. Those are the two hints to get you started, and you should be able to solve the rest on your own. Again, if you are able to do the five-state setup and this receive_inputs, by all means send me a message on Udemy.
I'd really like to hear from you. Thanks a lot!
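As a footnote to those two hints, here is a rough sketch of what "zero out the speed coupling in F and let the pedal input drive position instead" might look like. The five-state layout `[x, y, v, theta, theta_dot]`, the pedal value, and every name here are my own assumptions for illustration, not the course's receive_inputs code:

```python
import numpy as np

dt = 1.0
theta = 0.0          # current heading estimate
pedal_speed = 2.0    # assumed speed commanded by the pedal input

# Hypothetical 5-state layout: [x, y, v, theta, theta_dot]
F = np.eye(5)
F[3, 4] = dt
# Crucially, F[0, 2] and F[1, 2] stay 0: position is no longer driven
# by the *estimated* speed v, only by the pedal input below. If both
# were active at once, the two would stack and double-count the speed.
u = np.zeros((5, 1))
u[0, 0] = np.cos(theta) * dt * pedal_speed
u[1, 0] = np.sin(theta) * dt * pedal_speed

x = np.array([[0.0], [0.0], [2.0], [0.0], [0.1]])
x_next = F @ x + u   # x advances by 2.0 from the pedal, not from v
```

The steering input would feed theta (or theta_dot) the same way; the pattern is identical.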
17. Outro: Welcome to the last lecture of Autonomous Robots: Kalman Filters. In this lecture, I'm just going to sum up everything you've learned thus far and give you a few tips on what you can do in the future if you're looking to learn more about Kalman filters. In the first assignment of this course, you implemented a toy version of the Kalman filter, which helped you understand how a Kalman filter works at its core without having to get into all the crazy linear algebra. Essentially, a Kalman filter keeps values that you track in the background and uses them in some sort of transition function, or transition matrix, to predict where values will be in the future; it then measures those values and updates its predictions, so that the next time you predict a value in the future, you'll be that much more accurate. The next assignment had you solve that same 1D problem, but this time using a full Kalman filter. In that assignment, you should have learned about the trade-off that happens in the P matrix: if you let your P matrix values get too low, your Kalman filter will be too confident in those values and unable to adjust them in the future. So you learned that it can sometimes be good to give your Kalman filter some uncertainty — make it believe it is uncertain about what certain values could be, just in case those values change in the future. Next, in assignment 3, you took your Kalman filter and upgraded it to two dimensions. One thing that was really important about this is that it should have been fairly straightforward to go from 1D to 2D, and really, that's pretty much the difficulty you should expect when creating a Kalman filter for whatever it is you want: the basic Kalman filter structure remains the same; it's the values you plug into those matrices that make the difference.
Finally, you used a Kalman filter to solve a real self-driving car problem: predicting whether the self-driving car can proceed through the intersection safely, or needs to stop, when the light turns yellow. Here, you used the predict part of a Kalman filter to take the most accurate estimate of your current state and project it into the future. This type of thing is used all over the place: whether it's predicting the movement of other vehicles or of other obstacles, this part of a Kalman filter is used a lot in real robotics. If you happened to complete the bonus material, you were able to create a Kalman filter using your own custom-designed transition function. For the assignments, I gave you most of the state transition function, but for this one you had to design it pretty much purely on your own. So you got to go through the whole process of starting with the kinematics of your robot, or whatever vehicle, and implementing your own state transition function. If you did this, you would have noticed that the results produced were far better than with the state transition function I started you with. If you got that far, you also got the Kalman filter to integrate the steering and pedal commands to create an even more accurate state estimate. If you ever find yourself designing a Kalman filter again in the future, be sure to refer back to this slide and the five slides preceding it, which accompany it in the lecture docs — these really contain all the information you need to know. Again: you define the five matrices as seen on the left, and then you execute the step-by-step process shown on the right. When you're designing the matrices on the left, it's unlikely there is just one right solution.
You'll probably have to try a lot of different things, especially when it comes to things such as the uncertainty matrix's starting condition, the measurement uncertainty, and sometimes even the way you set up the state transition matrix. You'll have to try a few things out and see what gives you the best results; there's no one right way to solve a problem. If, after implementing everything correctly, the Kalman filter still is not performing well, you should investigate whether whatever you're measuring follows a Gaussian distribution. The Kalman filter we've designed is generally called a linear Kalman filter; that's because it's only guaranteed to work on linear problems that follow a Gaussian distribution. If your problem is highly nonlinear and its output does not follow a Gaussian distribution, then you should look into something called the extended Kalman filter. I'm obviously not going to go into the extended Kalman filter in this course, but hopefully, by looking at the Wikipedia article I've shown here, you can see that it's not that much different from the linear Kalman filter, so using the knowledge you've gained in this class, you should be able to get one working. That's it for the course. I hope you've enjoyed working through it as much as I've enjoyed teaching it, and good luck on all your future robotics endeavors.