PID Controllers: Intro to Control Design
Daniel Stang, Robotics Engineer
15 Lessons (2h 5m)


1. Course Overview
2:02 
2. Intro
8:35 
3. Physics
7:57 
4. Simulator Intro
6:09 
5. Simulator Install
7:26 
6. Logic Control
6:57 
7. Fuzzy Logic
3:32 
8. Fuzzy Logic 2
8:50 
9. Closed Loop Control
8:04 
10. P Control
9:45 
11. PD Control
12:08 
12. Assignment 2 Walkthrough
8:19 
13. PID Control
8:49 
14. Assignment 3 Walkthrough
7:16 
15. Outro
19:16

About This Class
In this course you'll learn how to implement a PID controller in software. You will understand when the Proportional, Integral, and Derivative components of the controller should and shouldn't be used.
The physics of an elevator are simulated to give you the opportunity to write control software and see how it performs. The simulator will also give you hands-on experience with debugging and tuning a controller, two very important aspects of working with a real system.
Class Projects
In this course you will design a PID controller that successfully controls a building's elevator (using a Python simulation). Once you do that, you'll be able to tackle any PID problem out there!
Transcripts
1. Course Overview: In this course, you'll learn about control system design and write software to create your own PID controller in a simulated environment. I have a master's degree in mechanical engineering, which I earned through my research in control design for automotive applications. At my most recent job, I was responsible for designing motion controllers and stabilization systems for military tank turrets. I have also been a teaching assistant for fourth-year engineering control classes, and in this course I'll share with you the techniques I've learned throughout my engineering experience so far, along with the lessons my students have found most valuable when learning how to design a PID controller. At the end of this course, you will have a firm understanding of common control terms, such as reference, gains, closed loop and open loop, and much more. You'll also be able to write software for your own PID controller, understanding not only how the various components of the controller work, but also when or when not to use a specific component. This is crucial, as all control problems require a different set of tools to solve them, and I can't cover every specific problem you'll encounter in this course. So instead I'll give you knowledge of not only the tools but also when or when not to use them. In the course lectures, I'll teach you the basics of control design and the theory behind PID controllers. You'll then be able to use your knowledge to create a controller for a simulator I designed. The simulator will not only allow you to practice implementing a real controller, but also to practice tuning and debugging a controller when things don't go according to plan. I created this course for someone who is interested in topics such as automation or robotics but has no formal controls training. If you have an interest in software development, that is also a big plus, as all modern controllers are instantiated in software. Feel free to look at the course preview, where I'll show you how to get the simulator up and running on your system. Once that's all done, all you need to do is enroll in the course to learn how to design a controller and solve the simulated problem. Thanks for your time. I look forward to seeing you in the course lectures.
2. Intro: Welcome to PID Controllers: Intro to Control Design. In this lecture, I'm going to give you a brief introduction to the course. The goal of this lecture is just to get you excited about the course material and what you'll be learning. I'll also give you a brief introduction to the course structure, such as what you'll be learning in each section, as well as the assignments you'll have to complete. Control systems are all around us. They play such a large role in our lives that we probably don't even appreciate the wide array of problems they're continuously solving for us. In your home, we use controllers to regulate the temperature of our houses using thermostats. Refrigerators have controllers which regulate the temperature of the inside of our fridge, and our computers have control systems which regulate the amount of power and current that go to various electrical systems. In your garage, your car has numerous controllers in it, regulating everything from the engine to the climate controls. Things like cruise control are a perfect example of a controller: something that takes a burden of operating a car away from the human and places it on a control system instead. Cruise control is a great example of a control system which handles some task so that humans don't have to worry about it. In this case, your cruise control regulates the speed at which your car drives so you can take a break or focus on other things. While things like engine controllers or cruise control can get fairly complex, even something as simple as your garage door opener is a controller which offloads some burden from a human. At your work, you're likely to encounter even more control systems: things like elevators, which will be the problem we'll be using to test our control knowledge in this course, and automatic doors.
Maybe you work at a chemical processing facility where you have things like water tanks or chemical tanks of some sort, or even something like one of my past jobs doing military R&D, where I worked with armored tanks. These things are full of control systems. In your city, helicopters, airplanes, even traffic control are part of control theory, and then, obviously, there are things like regulating the amount of water that gets sent to homes. These are all control systems that are around us constantly. The previous slide contained examples of controllers that have become an ingrained part of our daily lives. But where did controllers start? Well, the first control system ever invented was something called the flyball governor. The flyball governor, pictured here, was invented by James Watt to control the power output from his steam engines. Steam engines contain a throttle valve, which regulates the amount of steam that goes to the cylinders, which essentially regulates the power output. However, instead of having a human manually regulate this throttle valve based on the power output of the steam engine, James Watt wanted this process to be automatic. As seen on the left side of the figure, those two balls are attached to the output shaft of the steam engine. So when the steam engine output shaft starts moving very quickly, those two balls will be pushed outwards by the centrifugal force. Imagine you have a ball on a piece of string. In this analogy, your hand is the output shaft of the steam engine. When your hand isn't moving at all, the ball just dangles straight downwards. But as you begin to rotate your hand really fast, mimicking the rotation of the steam engine output shaft, the ball will begin to move upwards. What the flyball governor does is tie this motion of the ball moving upwards to the throttle valve through some mechanism.
Thus, when the output shaft starts spinning really fast, causing the balls to move upwards, the throttle valve is pulled back, letting less steam go to the engine. So, to go through the whole process again: when there's too much steam going to the engine, causing the output shaft to move way too fast, the balls will move outward due to the rotation of the output shaft, causing the throttle valve to be pulled back through some mechanism tied to the motion of the balls. In fact, this is where the saying "balls out" comes from, because when the balls are fully extended outwards, it means your output shaft is going as fast as possible, and thus your steam engine, or whatever it's powering, a train or whatever, is going as fast as possible. The flyball governor enabled steam engines to regulate their own power output. Fast forward to the present: we now have things such as super-maneuverable aircraft, which are made possible only by control systems. The maneuver shown here is impossible to do with just a human pilot, and control systems in the plane augment the pilot's abilities and allow the aircraft to do things that were previously not possible. This is a great example of control systems being used to augment human ability. However, in the present and very near future, we're seeing more control systems that take the human out of the picture entirely. Autonomous vehicles and automated manufacturing robots are all examples of control systems which don't require any humans at all. They don't need to augment a human's ability; instead, they completely replace the human. Now I'll just give you a brief overview of the course structure. In the next section, we're going to set up the elevator problem that we'll be working on for the rest of this course. In the section following that, you'll design your first controller and work on Assignment 1.
After that, we'll learn about proportional control and derivative control, and you'll complete Assignment 2. Finally, we'll learn about integral control and you'll complete the last assignment, Assignment 3. Now, Assignment 1 is pretty optional and is mainly to get used to the simulator we'll be using in this course, but Assignments 2 and 3 should be done fully. Lastly, I'll give you a quick course outro talking about some advanced control schemes, and we'll war-game, as in go through, a few scenarios of designing your own PID controller for a problem other than the elevator problem. To succeed in this course, I'd highly recommend you watch all the videos at at least 1.5 times speed. Once you try it, you'll never go back. Also, be sure to play around in the assignments before trying to solve them immediately. Getting an intuitive feel for how things work is the core value you'll receive from this course. And do the assignments; don't just watch the solution videos. Now, before you move on to the next section, I want you to start thinking about something called dynamic systems. A dynamic system is one in which the effects of actions do not occur immediately. Think of the thermostat regulating the temperature of your house. When your furnace gets turned on and set to a set output, your house doesn't immediately heat to that temperature. It takes a while for these actions (turning on your furnace) to have some effect on the system (bringing your house to a set temperature). I want you to think about environments where this is and isn't true. Static systems, which are the opposite of dynamic systems, still change. That is, just because something is a static system doesn't mean it never changes; it just means that change happens immediately after an action. I want you to think about why an elevator is a dynamic system. And with that, I hope you're excited about this course, excited to learn and design a PID controller. In the next section, we'll begin setting up the elevator problem and setting up the simulator that you'll be using for this course. I look forward to seeing you there.
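The "actions take time to have an effect" idea behind dynamic systems can be made concrete with a tiny simulation. This is a sketch, not course material: the first-order lag model and all numbers here are illustrative assumptions.

```python
# A house heated by a furnace is a dynamic system: turning the furnace
# on does not change the temperature immediately. A simple first-order
# lag model captures the delayed response (illustrative, not from the
# course simulator).

def simulate_house(furnace_setting, hours, start_temp=10.0, rate=0.3):
    """Each hour the temperature moves a fraction `rate` of the way
    toward the furnace setting (assumed lag model)."""
    temp = start_temp
    history = [temp]
    for _ in range(hours):
        temp += rate * (furnace_setting - temp)  # lagged response
        history.append(temp)
    return history

temps = simulate_house(furnace_setting=21.0, hours=10)
print(temps[1], temps[-1])
# After one hour the house has only warmed part of the way (~13.3 degC);
# after ten hours it approaches, but has not instantly hit, 21 degC.
```

A static system, by contrast, would jump straight to 21 the moment the furnace setting changed.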
3. Physics: Welcome back to PID Controllers: Intro to Control Design. In this lesson, we'll be setting up the problem we'll be working on through the rest of this course. In this lecture, I'm just going to give a quick physics refresher to anyone who may have been out of the game for a while. And if physics wasn't your strong suit, don't worry: the stuff we'll cover in this lecture won't be that complicated. As mentioned in the introduction, the problem we're going to be working on throughout this course is the elevator problem. Now, if you're wondering why elevators, the answer is that elevators are actually a pretty interesting control problem. It's not a trivial thing to design an elevator controller, as you'll soon find out. Also, the techniques learned can apply to a wide array of other problems. Quadcopters, robotic arms, cruise controllers, and autonomous vehicles all use the control technique that you'll learn here. Lastly, elevators are very intuitive and easy to understand. They essentially go up and down, and their failure and pass modes are very obvious: you either get to the floor or you don't. Before we jump into the physics, let's just lay out very explicitly the end we're trying to achieve and how we're going to achieve it. The end we're trying to achieve is to get the elevator to travel from one floor to another. Our margin for error is around one centimeter. We must do so quickly and smoothly, without breaking any speed or acceleration limits. Lastly, we must be able to handle vastly different cargo weights and operate consistently. The means by which we're going to do this: we'll use sensors which can measure the absolute height of the elevator, and we'll have an actuator which will produce a force on the elevator in either the up or down direction. Lastly, we have a controller operating at 20 hertz, or 20 times per second, and we can write any code we want, taking in the sensor input and outputting to the actuator.
The main physical law governing our elevator situation will be Newton's second law: force equals mass times acceleration. The acceleration due to gravity we'll be using is 9.8 meters per second squared, and our actuator will output its force in newtons, where one newton equals one kilogram meter per second squared. Let's try a few physics questions to get you back into the swing of things. I'm holding two rocks. Rock one weighs one kilogram; rock two weighs two kilograms. I let both rocks go at the same time. Which will hit the ground first? And then my second question: after one second, how fast will each rock be going? (Assume that they won't hit the ground.) For the first question, I'm assuming that there is no air friction, so the answer is that they will hit the ground at the same time. If that's not intuitively obvious to you, search YouTube for the video "feather versus bowling ball". For the second question, both rocks are being accelerated at the acceleration due to gravity, which is 9.8 meters per second squared. Thus we take 9.8 meters per second squared, multiply it by the one second, and we're going 9.8 meters per second. We will be dealing with accelerations and velocities a lot throughout this course, so if you didn't get this question right, just remember that the relationship between acceleration and velocity is the same as the relationship between velocity and position. That is, if you can answer the question "if you are walking at a pace of one meter per second, how far will you be after 15 seconds?" (the answer is 15 meters, because you took 15 seconds and multiplied it by your pace), apply that exact same math to acceleration. If you were accelerating at a rate of one meter per second squared, how fast will you be going after 15 seconds? The answer is 15 meters per second. Next question: there are two identical elevators, except one has people in it and the other one is empty.
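The rock question above can be checked numerically. This is a sketch: the integration step size is an assumption, and the point is simply that accelerating at 9.8 m/s² for one second leaves you moving at 9.8 m/s, matching v = a·t.

```python
# Numerically integrate a constant 9.8 m/s^2 acceleration for one
# second and confirm the rock's final speed matches v = a * t.
g = 9.8       # m/s^2, acceleration due to gravity
dt = 0.001    # s, integration time step (assumed)
steps = 1000  # 1000 steps of 0.001 s = 1 second total

velocity = 0.0
for _ in range(steps):
    velocity += g * dt  # acceleration accumulates into velocity

print(velocity)  # ~9.8 m/s, the same as 9.8 m/s^2 * 1 s
```

The same accumulation applied to velocity would give position, which is the acceleration-to-velocity, velocity-to-position parallel the lecture describes.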
The elevator controller applies the same force to both elevators. Which one will accelerate faster? The answer is, the empty one will accelerate faster. However, what if both elevators are in space, which means that there is no gravity? (You can see the satellite zipping around in the background, so you know we're in space.) Again, the elevator controller applies the same force to both elevators. Will they accelerate at the same rate? The answer is no. The empty elevator will still accelerate faster. Remember, F = ma; therefore, the acceleration equals the force divided by the mass. If you have a larger mass, you have a lower acceleration for the same force. To accelerate an object in space in any direction, your force divided by the mass will always equal your acceleration. However, on Earth, if you are trying to accelerate an object directly upwards, you also have to deal with the force of gravity, which is a negative force trying to accelerate the object downwards, and the force of gravity equals the mass of the object times the acceleration due to gravity. If you imagine an elevator suspended by a pulley, as shown here, you can imagine that the force of gravity on an elevator is quite significant. Elevators are really heavy, and the force of gravity pulling down is really large. If you tried to hold on to that cable, you would accelerate like you wouldn't believe. To make it so that elevator motors don't have to constantly hold the entire weight of the elevator, what they do is add a counterweight to the other side of that cable, which balances out the force of gravity. When the net force of gravity equals zero, because the mass of the elevator equals the mass of the counterweight, our acceleration is based solely on our input force. That is, when our input force is zero, the acceleration is zero, and when our input force is positive, the acceleration is positive. This creates the situation of zero gravity that I talked about earlier.
Where if we don't input anything, the elevator just stays where it is. But in order to accelerate it, we still have to apply a force, and that force, divided by the total mass of the system, will equal our acceleration. If you feel overwhelmed by the physics we've gone over so far, don't worry. The beauty of the controller we're going to design is that you don't have to have perfect, or even good, knowledge of the underlying physics in order to make a controller that works. Let me recap what you learned in this lecture. An elevator with a perfect counterweight will behave similarly to an elevator in zero gravity; that is, without any force input, the elevator will stay where it is. Even a perfectly counterweighted elevator, or an elevator in zero gravity, still requires a force to accelerate it, and the relationship between the force applied to an elevator and its acceleration is F = ma. Thus, the heavier the elevator, the counterweight, or the people inside of it, the more force is required to accelerate it.
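The counterweight recap above can be sketched as a few lines of Python. This is a simplified model under the lecture's assumptions (rigid cable over a pulley, no friction); the function name and the masses are illustrative, not the course simulator's internals.

```python
# Net acceleration of a counterweighted elevator (simplified model).
# With a perfect counterweight, gravity on the car and on the
# counterweight cancel, so a = F_input / total_mass.
G = 9.8  # m/s^2

def elevator_acceleration(f_input, m_elevator, m_counterweight, m_people=0.0):
    total_mass = m_elevator + m_counterweight + m_people
    # Gravity pulls the car (plus passengers) down and, through the
    # pulley, the counterweight effectively pulls the car up.
    f_gravity = (m_counterweight - m_elevator - m_people) * G
    return (f_input + f_gravity) / total_mass

# Perfectly balanced, no input force: the elevator stays put (a = 0).
print(elevator_acceleration(0.0, 500.0, 500.0))
# Balanced, 1000 N upward input on 1000 kg total: a = 1 m/s^2.
print(elevator_acceleration(1000.0, 500.0, 500.0))
# Add 100 kg of people: gravity is no longer balanced, a is negative.
print(elevator_acceleration(0.0, 500.0, 500.0, 100.0))
```

The third case previews the "unknown and unbalanced weights" challenge introduced later: with passengers aboard, zero controller output no longer means zero acceleration.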
4. Simulator Intro: Welcome back to PID Controllers: Intro to Control Design. In this lecture, we're going to go over some of the basic options you can change in the simulator I've designed. With the simulator physics, you can turn gravity on or off, turn friction on or off, change the mass of the elevator, change the mass of the counterweight, and change the mass of the people inside the elevator. In the previous lecture, I talked about how an elevator with a counterweight is similar, or approximately equal, to the situation where you have an elevator in zero gravity. The settings on screen show this approximately equal case. Do you know what changes would have to be made to these settings so that both simulations would have identical physical responses? The answer is setting the elevator mass on the right equal to the elevator mass plus the counterweight mass on the left. Remember, even with zero gravity, a force is required to accelerate a mass, and this relation is F = ma. Thus, both settings, whether there's gravity or not, need to have the same total mass. In addition to the physics options, there are also a number of controller options you can change. First, you can turn the controller on and off. Next, you can change the start location of the elevator, that is, the height the elevator will start at at the beginning of the simulation. After that, you can set the elevator set point, which is the point that the controller will try to make the elevator move to. Note here that in order to achieve a pass in the assignments, you have to reach the final set point within three centimeters while going at a speed of less than 0.1 meters per second, or essentially stopped. The last option you can change is the output gain. Earlier, I had talked about how your controller outputs a force, but this isn't strictly true.
Your controller will be some sort of computer, maybe a small chip like an Arduino, that is running some code, and it doesn't have the power to move an elevator. For example, your desktop computer can't push you around the living room; it doesn't have that sort of power. Really, what your controller outputs is a bunch of ones and zeros. It's binary information, and you can't just send this digital information to the elevator and expect it to move. Nor can you just send it to an electric motor, whatever type of motor you're using. Electric motors require a lot of voltage and current in order to move, and your computer just doesn't have that sort of power. What really happens is your controller outputs its digital number, whatever it calculates, and sends it to some intermediary step, call it X, which is usually some sort of power electronics that scales the signal up and can actually provide the correct amount of power to an electric motor. And then the electric motor, or whatever type of motor it is, provides a force to the elevator. The reason I'm telling you all this is that I want you to know that you can manipulate the controller output at will. It's just a digital number. If you're wondering why you would want to do that, let's look at the following two questions. If we command the controller to output 10, what will be the acceleration if the mass is one kilogram? Similarly, if we command the controller to output 5, what will be the acceleration of the same mass? Using a = F/m, for the first case we'll get an answer of 10 meters per second squared, and for the second case we'll get an answer of 5 meters per second squared. That's actually pretty convenient. We output 10, we get 10 meters per second squared. We output 5, we get 5 meters per second squared. However, a mass of one kilogram is not very realistic for an elevator, and that will break this 1-to-1 ratio between our controller output and the acceleration.
If our elevator has a mass of 1000 kilograms, now the ratio between our controller output and acceleration is 1000 to 1. That means we need a controller output of 1000 in order to get an acceleration of one meter per second squared. This is where the output gain option comes in. If we add an output gain, that is, before we send our controller output to the next stage, we just multiply it by 1000 (because remember, it's just a digital number; we can manipulate it how we want), we can then re-establish that 1-to-1 ratio between what our controller outputs, or more precisely, the value that the controller calculates, and the acceleration felt by the elevator. In this situation, with an output gain of 1000, a controller output of, let's say, 6 will get multiplied by 1000, which will generate a force on the elevator of 6000 newtons. Divide that by the 1000 kilograms of the elevator's mass and you get an acceleration of six meters per second squared. This will really help when you're designing your controller, as all the numbers you're working with will be roughly in the 0-to-10 range as opposed to the 0-to-1000 range. Currently, if we set the output gain equal to the mass of the elevator, or the mass of the elevator plus the counterweight, everything will work out nicely. However, in future lessons we will introduce unknown and unbalanced weights, namely people, and our controller will have to compensate on its own. Eventually we're going to add friction, and this will also impact the acceleration, further hurting that 1-to-1 ratio we had. However, our controller will just have to deal with this as an added challenge. That's it: you've finished Section 2. Thanks for sticking around, and I look forward to seeing you in the next section.
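The output-gain arithmetic above can be sketched in a few lines. This is an illustration of the scaling chain described in the lecture (digital output, gain, force, acceleration), not the simulator's actual code; the function name is assumed.

```python
# The output gain re-establishes a 1:1 ratio between the number the
# controller computes and the acceleration the elevator feels.

def acceleration(controller_output, output_gain, total_mass):
    force = controller_output * output_gain  # scaled-up digital command, N
    return force / total_mass                # Newton's second law: a = F/m

# 1000 kg elevator, output gain equal to the mass: the 1:1 ratio holds.
print(acceleration(6, 1000, 1000))  # 6.0 m/s^2 from a command of 6
# Same command with no gain (gain = 1): almost no acceleration.
print(acceleration(6, 1, 1000))     # 0.006 m/s^2
```

With the gain matched to the total mass, the numbers you tune stay in the convenient 0-to-10 range the lecture mentions.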
5. Simulator Install: Welcome back to PID Controllers: Intro to Control Design. In this section, you'll be designing your first controller. Now, you may be wondering: how can you design a controller if I haven't actually taught you anything yet? And that's fine. In this lesson, all I want you to do is start trying a few things out and see the problems that come up. Otherwise, I could give you a bunch of control solutions and start showing you how they work, but you wouldn't even realize that there was a problem that needed to be solved. However, before we can get started, I'm going to give you a quick walkthrough of how to get the simulator up and running and, if you don't have Python installed on your computer already, how to install it and add the appropriate libraries. If you run into any problems during this tutorial, be sure to watch the installation video at the end of this lecture, where I'll go through the entire installation process on a Windows machine. The first thing you want to do is navigate to the GitHub repository shown on the screen. If you have Git installed on your computer, you can just git clone the repository. If not, click the Clone or Download button and download the ZIP file. Unzip it and place it somewhere easy to get to, like your Downloads folder or your desktop, because we'll be using it throughout this course. If you don't have Python installed, download and install Miniconda at the link below, and ensure you download the version for Python 3.6 or higher. The required packages you need to run the simulator are NumPy, Matplotlib, and SciPy. If you have Python installed already, there's a good chance you have these three packages. On Windows, once Anaconda is installed, open up your Anaconda Prompt, or on Mac and Linux, open up your terminal, and install the packages by typing conda install followed by the names of the packages. You can do all three at a time like this. However, I found that on Windows I had some trouble doing it this way.
So in the installation video at the end, you'll see that I install the packages one at a time. Navigate your terminal or prompt using the cd command, which stands for "change directory", to the folder you cloned or downloaded containing the simulator files. Once you're there, enter into your command line: python assignment1.py. The simulator should now open up on your screen. When you're closing the simulator, be sure to close the window labeled Figure 1. If you don't, the command line might freeze and you might have to restart your prompt or terminal window. Now, you shouldn't see exactly what I'm showing on the screen there, as your elevator won't be moving, because we haven't given it any commands yet. Let's do a basic physics test to make sure everything in the simulator is operating correctly. Open up assignment1.py in your favorite text editor and set the options as shown on the right. Make sure that you save the file before you run the simulator again. Now, using the timer in the simulator, how many seconds does it take for the elevator to fall past the ground floor? When I did this, the elevator seemed to fall past the ground floor somewhere between 4 and 5 seconds. Now, we know the acceleration it was under (the acceleration due to gravity) and we know the distance it would travel, so we can quite easily calculate the time this would take in an ideal situation, and that is 4.5 seconds. So it seems to me like the simulator is working correctly. With your simulator working, you're now ready to move on to the next lesson, where we'll design your first controller. Now, if you haven't got your simulator up and running quite yet, be sure to follow the installation video following this slide, where I'll go through the entire process.
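The sanity check above follows from constant-acceleration kinematics: starting from rest, d = ½·g·t², so t = √(2d/g). The drop distance below is an assumption on my part (the lecture doesn't state the elevator's starting height); a drop of about 100 m reproduces the "about 4.5 seconds" figure.

```python
import math

# How long should a free-falling elevator take to drop distance d,
# starting from rest? From d = 0.5 * g * t^2:  t = sqrt(2 * d / g).
g = 9.8    # m/s^2
d = 100.0  # m, assumed drop to the ground floor (illustrative)

t = math.sqrt(2 * d / g)
print(round(t, 1))  # ~4.5 s, consistent with the 4-5 s observed
```

If your simulated fall time is wildly different from this formula's prediction for your chosen start height, something in the physics settings is off.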
6. Logic Control: Welcome back to PID Controllers: Intro to Control Design. In this section, you'll be designing your first controller. In the previous lecture, we got your simulator all set up, so we're good to go. Let's jump right into it. Open up your Anaconda Prompt, or, if you're on Mac or Linux, your terminal window, and navigate to the folder containing your simulator files. Note that here you can use Tab to auto-complete the rest of a file or folder name. Once you're there, run assignment1.py. Now, hopefully you've already been playing around with the simulator a bit, so you're somewhat familiar with how everything works. Nothing's happening in the simulation because I've got both the controller and gravity turned off. So let's open up assignment1.py and start changing some things. Now, the text editor I'm using is Notepad++. You can really use whatever you want; however, if you use Notepad++, you need to make sure that you change one setting under Language so that it always replaces tabs with four spaces. Python is really finicky about its tabs and spaces, and you want to make sure that even when you press Tab, it actually generates four spaces. Now, just to make sure everything in the simulator is working properly, let's turn gravity on, set the elevator mass to 500, and set the counterweight mass to 600. With this sort of setup, we should expect the elevator to rise, because the counterweight is heavier than the elevator. And there it goes; it starts rising up, so we know our simulator is behaving correctly. Let's turn the controller on, set the counterweight mass back to 500, and set a set point of 15. That way it will be pretty much dead center of our little picture there. And let's set the output gain to 1000, because remember, that has to be the elevator mass plus the counterweight mass. Now we're more or less ready to design our controller.
Let's look at this Controller class that I've created. If you're not too familiar with classes in Python, don't worry; we don't go that in-depth into them. Basically, in this class you can see that there are two functions, one called init and the other called run. The init function is only called once, right when the class is first created, and you can see that when the class is first created, it takes in a value called reference, and that reference is stored as self.r. Now, whenever you set a value as self-dot-something inside of a class, that value will stick around; it won't get deleted once that function is done running. That reference value we stored as self.r is the set point value that we defined in the options above. For this example, self.r is 15. And remember, the set point value is the value we're trying to attain; it's the height we're trying to make the elevator go to. Let's look at the run function. The run function is constantly polled by the simulator to see what its output should be. I've designed this initial if-else statement so that we only calculate a new output 20 times a second, to get that 20 hertz operating rate I talked about previously. The rest of the time the function is polled, we just deliver whatever the previous output was. You don't need to modify this if-else statement at all. The only place where you need to be writing your code is this area I've highlighted here. Let's make sure the controller is operating correctly. Let's try an output of 2 and run our simulator again. Now, because we have the output gain set correctly, we should be seeing an acceleration of 2, which we are. So everything seems to be working perfectly. At this point, we have our reference, the value that we're trying to get the elevator to go to, and we've also shown that our controller can manipulate the elevator and make it move.
The last thing we need before we can design a good controller is the sensor readout, and that is shown in this variable x. This variable x is an input to our run function, and now we can use that number, whatever we get sent by the simulator, to design a controller. I've been a teaching assistant multiple times for fourth-year engineering controls classes, and I always ask the students to design their own controller at the beginning, and I almost always get the same result. They design what is called a logic controller. Their first pass is always just a simple if-else statement where, if we're higher than the reference, we output a negative value, and if we're lower than the reference, we output a positive value. So essentially, we've just created a piece of logic to try to make the elevator move up or down. However, there is one key reason why a controller designed like this will not work. Why don't you play around in the simulator, design your own logic controller, and see if you can find out what that one reason is.
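The if-else logic controller described above might look like the following. This is a sketch based on the structure the lecture describes (an init that stores the reference as self.r, and a run that receives the height x); the exact class and method signatures in the course files may differ, and the 20 Hz gating the simulator provides is omitted here.

```python
class Controller:
    def __init__(self, reference):
        # The set point (target height) sticks around as self.r.
        self.r = reference

    def run(self, x):
        """x is the sensor reading: the elevator's measured height.
        Output a fixed command up or down -- pure logic control."""
        if x < self.r:
            return 1.0   # below the set point: push up
        else:
            return -1.0  # at or above the set point: push down

ctrl = Controller(reference=15.0)
print(ctrl.run(10.0))  # 1.0  (elevator too low, command upward)
print(ctrl.run(20.0))  # -1.0 (elevator too high, command downward)
```

Notice that the output depends only on which side of the set point the elevator is on, never on how fast it is moving; keep that in mind while you experiment.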
7. Fuzzy Logic: Welcome back to PID Controllers: Intro to Control Design. Last lecture we designed a logic controller, but it didn't work at all. The elevator oscillated up and down, and it never really reached the set point. Now, at the end of that lecture I asked you if you could figure out what the one key reason is that that controller structure will not work. If you think you figured it out, feel free to skip ahead to the next lecture. If you didn't get there, then we're going to try something new, and this should make the problem a lot more obvious. I mentioned last lecture how, whenever I was a teaching assistant for the fourth year controls class, the students would always start with the logic controller we designed previously. Now, when that didn't work, usually their first instinct was to add more logic, so they would end up creating something akin to what is known as a fuzzy logic controller. Now, what we're creating isn't exactly a fuzzy logic controller, but it has similar underlying principles. Let's plot the controller's output versus the height of the elevator. Here I have the y axis as the controller's output and the x axis as the height of the elevator. That point in the middle is the set point, the point that we're trying to make the elevator move to. Now, our previous logic controller had a response that looked like this. Its response is shaped like this because when we were below the set point, we output a constant positive value, and when we're above the set point, we output a constant negative value. Now, this clearly didn't work, so let's add a bit more logic. The first thing we can do is smooth out the area around the set point like this. Let's take this a bit further. Let's set some minimum value around the set point that we have to stay within. Next, let's add a buffer area where the output will be zero. We now have a pretty complicated piece of logic, but it still won't solve our problem.
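The plot described above — constant output far away, a ramp near the set point, and a zero-output buffer right around it — could be sketched as a piecewise function like the one below. The shape follows the lecture's plot, but the specific widths (ramp, dead) and the exact shape of each region are made-up tuning values, not the lecture's numbers.

```python
def fuzzy_output(x, r=15.0, u_max=2.0, ramp=5.0, dead=0.5):
    """A fuzzy-logic-style output curve: full output far from the set
    point, a linear ramp near it, and a dead band of zero output
    right around it."""
    e = r - x                    # error: positive when below the set point
    if abs(e) <= dead:
        return 0.0               # buffer zone: output zero
    if abs(e) >= ramp:
        return u_max if e > 0 else -u_max   # far away: constant output
    # in between: smooth linear ramp toward zero
    return u_max * e / ramp
```

Even with all this extra logic, the elevator still overshoots, which is the point of the exercise.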
Now, if you still haven't figured out why that is, try to implement a fuzzy logic controller like this on your own, or watch the video I've included at the end of this lecture. And if you need an extra little hint, my last hint to you is to pay very special attention to what the elevator is doing when we are outputting zero.
8. Fuzzy Logic 2: Welcome back to PID Controllers: Intro to Control Design. In this section you are designing your first controller. You've already created a logic controller and something akin to a fuzzy logic controller. However, neither of these techniques worked, and if you haven't figured out why that is, well, you're about to find out. The hint I gave in the previous lecture was to see what the elevator is doing when we are outputting zero. Now, what you should have seen at that point is that even though our controller was outputting zero, the elevator was still moving. It still had some velocity, and that's because our output controls acceleration, not velocity. Chances are you know how this works: if you're in a car and you step on the gas pedal, also known as the accelerator pedal for obvious reasons, you begin accelerating up to highway speeds. When you let go of the accelerator pedal, your car doesn't just stop on a dime. It keeps going at whatever velocity it was going before, minus some air friction and stuff like that. So when you want to come to a stop at a specific position, like we're trying to do with the elevator here, you have to decelerate before you get there, which, if you look at our fuzzy logic controller, we're clearly not doing. Now, the solution to our problem isn't to add more logic, to bolt some deceleration logic on before the set point. That's not the solution we're going to arrive at. The final controller that we'll design will actually be much simpler. We need less logic, but logic that makes a lot more sense. Let's spend a bit more time on this acceleration principle, because it's actually a very important point. So remember, on our elevator we're putting a force onto it, and a force will accelerate a mass. And the way you can think about velocity is: velocity is almost the sum of all previous accelerations.
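The idea that velocity accumulates past accelerations can be checked numerically (this is my illustration, sampling at the course's 20 Hz rate):

```python
def integrate_velocity(accels, dt):
    """Velocity as (roughly) the sum of all past accelerations:
    v = sum(a_i * dt)."""
    v = 0.0
    for a in accels:
        v += a * dt
    return v

# 10 m/s^2 held for 5 seconds, sampled 20 times a second:
v = integrate_velocity([10.0] * 100, dt=0.05)
print(v)   # 50.0 -- and the mass keeps this velocity afterwards
```

Once the acceleration input goes to zero, nothing in this sum goes away, which is exactly why the elevator keeps moving while the controller outputs zero.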
If you accelerate at 10 meters per second squared for five seconds and then stop accelerating, at the end of that you will still be going 50 meters per second. One thing I want to make very clear is that this wasn't a trick. I didn't set up this scheme just to have you using acceleration when that's not very common. You'll actually encounter situations like this constantly if you design more controllers: quadcopters, autonomous vehicles, and in actually the vast majority of situations you encounter, you'll be dealing with "more or less acceleration," not velocity. And I put that in quotes because it won't always, strictly speaking, be acceleration, but it will certainly feel like it. If you've flown a quadcopter before, you may be thinking that it sure feels like you control the velocity of a quadcopter and not the acceleration. Well, that's actually because when you control a quadcopter, you're just sending inputs to PID controllers. But I'll talk more about that later. To show you one of those situations that may not typically be thought of as acceleration, but certainly feels like acceleration, let's talk about autonomous vehicles. Now, I'm not going to talk about the cruise control problem, or the problem of making an autonomous vehicle drive at a certain speed. It may be surprising, but that is actually a simpler problem than the one we're trying to solve with the elevator, because there you're using acceleration to try and control a velocity, whereas we're using an acceleration to try and control a position. If you want to learn more about that, though, stick around for the bonus section of this lecture. Let's pretend you're driving straight down the highway and you want to move into the right lane. Well, obviously, the first thing that you would want to do is steer towards the right. Now, what do you think will happen if we steer towards the right and then bring our steering wheel back to dead center? Do you think we'll be in a good position?
The answer is no, we'll be veering off the road. So you can see how in this problem of steering an autonomous vehicle we run into the same issue we had with our elevator, which is that, at the root, we're controlling some sort of acceleration instead of a velocity. If you've driven a car before, then you've been unknowingly solving an acceleration control problem every time you change lanes, and you know the solution involves steering to the right, then steering back left, before steering straight. You may be wondering, how do we get a controller to do this automatically? And that's what I'll be going into in the following sections. Before I finish, though, let's talk about a few situations where you are interacting directly with a velocity. Now, the one thing that I mentioned earlier is the cruise control problem. And again, this isn't strictly a velocity, but we're trying to change velocity and we can directly manipulate the rate of change of velocity, which is the acceleration. Which is why I said that this is actually a simpler problem than the elevator one we're trying to deal with. Another example would be trying to fill a tank, say at a manufacturing or processing facility of some sort. Here our controller would be directly manipulating the flow rate coming out of the tap, or whatever it is. We turn the valve and that just adjusts the flow rate. There's no acceleration involved. So you can imagine, for a case like this, our logic controller or even our fuzzy logic controller would be perfectly able to solve this control problem. As the tank began to fill up, the controller would slow down the rate coming out of the tap and eventually just stop it. With that, section three is finished. I hope you found it interesting, and I look forward to seeing you in the next section. If you're looking for an extra challenge, there's a control solution that I've hinted at a few times in this section that you could use to solve our elevator problem.
Now, since this is bonus material, I feel it's OK for me to be a bit less formal and more or less just ramble on for the next few minutes with no visuals to back it up. So the way to solve this problem is you have to sort of break it up into two smaller problems. Now, I've shown you those two smaller problems in this lecture. The first one I talked about in relation to the cruise controller. I said how, if you're going from an acceleration to a velocity, you can use just sort of a linear or logic controller to solve that problem. Now, when I suggest using a linear controller, think of the part of the fuzzy logic controller where we had the slope going downwards. We essentially varied our output in relation to the distance. You can use that to go from an acceleration to a velocity, so that's the first part you solve. The second part is you can then use the fuzzy logic controller that we originally designed and use that on the velocity. So if it's not quite clear yet, you can break the elevator problem into sort of two smaller control problems. So remember, the total elevator problem is what I was calling an acceleration problem, where you're not dealing with the rate of change of the thing you want to change, you're dealing with the rate of change of the rate of change, i.e. acceleration, not velocity. But what you can really do is break it up into two velocity problems. So for the first one, you go from acceleration to velocity using the linear logic controller that I talked about. And then you use that velocity in your fuzzy logic controller to actually get the controller to move the elevator to the correct position. The only thing you're missing for a solution at that point is how you can actually calculate velocity. Remember, 20 times a second, at 20 hertz, you are getting a new position measurement. So all you can do is take two position measurements, your current position measurement and your previous position measurement, and use those to calculate your velocity. Try this out.
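The missing piece, estimating velocity from two consecutive position measurements, is just a finite difference. This is a sketch under the assumption that samples arrive every 1/20 s; the function name is mine, not the simulator's.

```python
def estimate_velocity(x_now, x_prev, dt=0.05):
    """Finite-difference velocity estimate from two consecutive
    position samples, taken 20 times a second (dt = 1/20 s)."""
    return (x_now - x_prev) / dt

# e.g. moving from 10.0 m to 10.1 m in one 0.05 s step:
v = estimate_velocity(10.1, 10.0)   # roughly 2.0 m/s
```

You would then feed that estimated velocity into the fuzzy logic controller as described, with the linear (ramp) part handling the acceleration-to-velocity step.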
It will work. You will get a pass on some areas of the simulator. However, you'll see that it's not a very good solution. It's not a very clean solution, and the full PD controller that we'll eventually design will be much better.
9. Closed Loop Control: Welcome back to PID Controllers: Intro to Control Design. In this section we're going to start talking about proportional control, which is the first part of our PID controller. So far we've just sort of been doing stuff, so what I want to do for this first lecture is take a step back and describe to you closed loop control in a more formal way. That way we'll actually have a good basis when we go to improve and make a better controller. In the next lecture we'll then get into proportional control, once we have this nice, solid foundation to work from. However, before we can understand closed loop control, which is the type of controller we've built thus far, we have to understand something called open loop control. Now, I'm going to use the example of you going to turn on your shower, because that's actually a great example of both open loop and closed loop control. So here we have our controller for the shower example. That's you. You are the controller, and you output something to what I'm going to call the plant. Now, the plant is generally the system that the controller is manipulating and trying to control. So for our shower example, this would be the shower itself, everything from the knob you turn to the nozzle where the water comes out. Our controller output here, listed as u, is what we actually do to the plant. So for our shower example, that would be turning the knob. In reaction to this controller output, the plant will now have its own output, y, which in our shower case would be the temperature of the water coming out of the shower. In control design you generally have to preface the words input and output with whatever you're talking about, because one thing's input is another thing's output. For example, here the controller's output is the plant's input. So be sure to always preface those terms. Now, we're still missing something to make our shower example complete.
And that is the temperature of the water we're trying to attain, which I'm going to call the reference, r, here. So we now have everything to make our shower example complete. We have a reference temperature that we're trying to get to. We are the controller trying to attain that temperature, we're outputting something to the plant, which is turning the knob, and the plant is outputting water at a set temperature. So open loop control is what you do when you first go to the shower: you go up and you turn the knob a set amount based on some predefined knowledge that you have. Again, open loop controllers do not react to the output of the plant. They just take some action based on some reference. So you go to your shower, you have in mind the temperature you want, and you turn the knob. That's it. That's open loop control. Now, I don't know about you, but at this point I generally don't just jump into the shower. What I want to do is get that water temperature just right. So I need to make some fine-tuned adjustments, and for this we're going to need a closed loop controller. Closing the loop in a closed loop controller is when we bring the plant's output y, which in our shower example is the water's temperature, and we bring it to an observer, o. Now, the observer in this case would be your hand, because you're observing the temperature of the water. What you do next is, after observing the temperature of the water with your hand, you now need to make some adjustments based on that information. So you bring that information back in to your controller. Now, on the other side of where I brought this arrow in, we're going to place the letter e, which stands for error. Remember, we talked about error before.
Error is the difference between your reference, what you're trying to attain, and where you currently are. In our closed loop control we can do one of two things with this y value: we can add it to our reference and calculate our error from that, or we can subtract it. When we add it, this is called a positive feedback loop. Now, positive feedback loops can be very good or very bad, depending on what you're trying to achieve, but generally for closed loop control you do not want them. To give you an example of a positive feedback loop, here's my oversimplified analysis of Google's success. Google created some search algorithm, and as the search algorithm got better, more people started to use it. However, as more people started to use it, the search function got better, because the more people who were using it, the more data they had to make their search better. This then made more people want to use it, because their algorithm got better. When their algorithm got better, more people wanted to use it; because more people were using it, their algorithm got better; because their algorithm got better, more people started using it. So on and so forth. You can see how this sort of positive feedback loop can spiral out of control. In this case, it worked out really well for Google. Another example of a positive feedback loop is global warming. Some scientists believe that because the ice caps have melted, there is less ice to reflect the sun's energy back out into space. Because there's less ice to reflect the sun's energy, the earth heats up. Because the earth heats up, the ice caps melt. Because there are fewer ice caps, there's less ice to reflect the sun's energy back into space, and because there's less energy getting reflected back into space, the earth heats up. So on and so forth. So you can see how, if positive feedback loops are working for you, they're really good; if they're working against you, not so much. However, for controllers, we don't want this sort of runaway effect.
We see this with positive feedback loops; we want the opposite. We want stable operation. So what we want is a negative feedback loop, where we take our output coming from the plant, after we've observed it, and we subtract it from our reference. You can imagine this intuitively with the shower example. When you stick your hand in the water to sample the temperature, you compare that to the temperature you want to achieve in your head, and then you create some sort of difference, like it's just a bit too hot or it's a bit too cold. Then your controller makes a new output u, which turns the knob some slight amount. This causes the plant, or the shower, to output a different water temperature, which again you sample with your hand, and you see: is it too hot, too cold? And you make a tiny adjustment. The plant or shower outputs a new water temperature, which again you observe, and you make another fine adjustment. This is closed loop control. You are constantly refining until you get the exact temperature that you want, at which point your error is zero and hopefully your controller output is zero. If we then think about closed loop control in our elevator example, it's quite simple to see what's happening. Our controller is the code we're going to write. The plant is the motor and the elevator that gets moved. y is the elevator changing its height. o is the sensor we have detecting this change in height. r is our set point, or the floor we want to get to, and e, the error, is the distance from our set point to where we are now. Now that we have some formal basis for our knowledge, we can move on to proportional control.
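The negative feedback loop from the shower example can be sketched in a few lines. Everything here is illustrative: the toy "plant" model (knob position sets temperature linearly) and the adjustment rate k are my assumptions, not part of the course.

```python
def closed_loop_step(r, y, knob, k=0.05):
    """One pass around the negative-feedback loop: observe the plant
    output y, form the error e = r - y (reference minus observation),
    and nudge the controller output (the knob) proportionally."""
    e = r - y              # negative feedback: subtract the observation
    return knob + k * e    # small correction toward the reference

# Repeated sampling drives the temperature toward a 40-degree target:
temp, knob = 20.0, 0.0
for _ in range(50):
    knob = closed_loop_step(40.0, temp, knob)
    temp = 20.0 + 10.0 * knob   # toy plant: knob position sets temperature
print(round(temp, 1))           # 40.0 -- error has been driven to zero
```

Each pass is exactly the hand-in-the-water cycle: sample, compare to the reference, make a tiny adjustment, and repeat until the error is zero.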
10. P Control: Welcome back to PID Controllers: Intro to Control Design. In this section we're talking about proportional control. Proportional control is the first part of our PID controller, which is the whole focus of this course. In the last lecture I talked about closed loop control. When you first turn on your shower, you are exhibiting open loop control behavior: you just turn the knob to a predefined point based on some knowledge that you have, and you do not react to the shower's output at all. Then, for closed loop control, when the shower starts outputting water at a set temperature, you sample it with your hand and make adjustments based on that. Whenever we're designing a controller, essentially what we're trying to do is determine how the controller should act based on the error it receives. That is, what should the controller's output be for a specific set of errors? For proportional control, we create a controller output that is proportional to our error. As you see here, our controller output u is equal to our error times some value called KP, which I'm going to call the proportional gain. This proportional gain is a constant number that we set before we turn our controller on. Determining your controller gain is more of an art than a science. Before we talk about that, though, let's talk about how a proportional controller would work in our shower example. Essentially, a proportional controller means that the size of your controller output will be proportional to the size of your error. So for our shower example, you can imagine that if the shower temperature is way too hot, you need to make a big adjustment to the knob that you turn. Conversely, if the shower temperature is almost right, it's just a tiny bit off, the controller output, or the amount that we need to turn the knob, would be a lot smaller.
This should make intuitive sense: the farther away you are from the point you're trying to get to, the larger the change you need to make, and the closer you are to the point you're trying to get to, the smaller the change you need to make. The proportional gain KP can also be positive or negative, depending on the relationship between your error and your controller output. For your elevator, if you make a proportional controller and the elevator moves in the opposite direction that it should, it means your proportional gain is the wrong sign. You need to either make it negative if it's positive, or positive if it's negative. Setting the size of the controller gain, such as setting it at 0.5, or maybe at 5 or 50, again is something that you're just going to have to try and see how the system responds. For the elevator example, though, you should have a rough idea of where to start. For example, let's say we want to have a maximum acceleration of two meters per second squared. Well, we know if our error is 20 meters, because we're 20 meters away from the floor we're trying to get to, then a good KP would be 0.1, because our error of 20 meters times 0.1 will make an output of two. And if we have the controller output gain set correctly, this will yield an acceleration of two meters per second squared. Thinking back to our fuzzy logic controller that we created earlier, this middle section here is identical to proportional control. You can see the linear relationship: as you get closer to the set point, the controller output gets smaller. When you go to code your own proportional controller, it really is that simple. Calculate the error, which is the difference between your reference and where you currently are, multiply that by some number that you will have to tweak and tune, and set your controller output to your error times that number. It's really that simple.
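In code, the proportional controller described above really is one line plus the error calculation. This sketch uses the lecture's own numbers (set point 15, KP of 0.1):

```python
def p_controller(x, r=15.0, kp=0.1):
    """Proportional control: output = Kp * error.
    With Kp = 0.1, a 20 m error gives an output of 2, i.e. a
    2 m/s^2 acceleration command if the output gains are set right."""
    e = r - x          # error: reference minus current position
    return kp * e

print(p_controller(x=-5.0))   # error = 20.0  ->  output 2.0
```

Flipping the sign of kp is all it takes if the elevator moves the wrong way.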
A proportional controller like this would be a great solution for that tank filling problem I talked about in the previous sections. However, we know that it will not actually solve our elevator problem with our current settings. But making a P controller is our first step to making a full controller that can solve our elevator problem. Here's a question for you, though. Do you think there are settings under which a P controller can solve our elevator problem? If you're interested in learning the answer to that, stick around for the bonus section of this lecture. The question I asked is: are there any physics settings that we can change to make it so that our P controller will solve the elevator problem? And the answer is yes: friction. Now, it won't be a great solution, and it'll actually take a while, but it will be a solution to our problem. Friction is generally calculated as the velocity times some coefficient. So if your velocity is 10 meters per second and the coefficient is 0.1, your friction force is one. Now, obviously this isn't true for all cases. In your car, for example, air friction follows a trend of velocity squared. So if you double the velocity that you're going in your car, you're actually encountering four times the air friction. Let's see what friction does when we just have a pure output. So let's just output three, without friction first. As you can see, the acceleration is staying at a constant value and the velocity is constantly rising. This will continue forever: the velocity will just keep going up and up, and the acceleration will stay the exact same. So let's add friction and see what happens. As you can see, the acceleration immediately starts dropping as the velocity increases, which makes sense, because the friction force is increasing as the velocity increases. So eventually the acceleration is going to be zero. You can see it's really getting reduced.
The velocity is hitting some peak value, which it won't go above. Now, this is a lot more physically intuitive, because if you're in a car and you keep stepping on the gas, eventually your car is going to hit its top speed. So in the example where we were controlling a tap that was filling some bucket or tank or whatever, I said that if you're directly controlling velocity, a proportional controller can solve that problem. I believe I said a linear controller, but they're the same thing. For our problem, if you kind of squint at it, you can sort of see how we're controlling the max velocity of the elevator. We're not really controlling acceleration; we're kind of just setting the max velocity. So, for example, if we make this a smaller number, the max velocity that the elevator will hit will be smaller. Let's design our P controller and see what happens with friction on. Let's actually change the set point to 3, just so things will happen a bit faster, and let's actually do it with friction off first. So we've already seen this response. What's going to happen is the elevator's going to oscillate up and down, and it's just going to keep doing that forever. What will happen when we turn friction on? You can see that the acceleration isn't getting as high and the elevator isn't getting as far up or down as it was before. So you can see how eventually the acceleration will become zero and the position will be exactly the set point we had set. This is what I meant by it will solve our problem, but it won't be a great solution. Obviously, you would not be very happy if you were in this elevator.
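The constant-output-with-friction behavior described in this bonus section can be simulated in a few lines. This is my toy model (unit mass, 20 Hz time steps), not the course simulator, but it shows the same top-speed effect:

```python
def simulate(u, c, steps=4000, dt=0.05):
    """Constant control output u with friction force c * v: the net
    acceleration is a = u - c * v. As v grows, a shrinks toward zero
    and v levels off at a top speed of u / c (toy point mass, mass 1)."""
    v = 0.0
    for _ in range(steps):
        a = u - c * v      # friction opposes motion, proportional to v
        v += a * dt
    return v

print(round(simulate(u=3.0, c=0.1), 2))   # ~30.0: terminal velocity u / c
```

With c = 0 (friction off), the same loop gives a velocity that just keeps climbing forever, which is the first plot the lecture shows.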
11. PD Control: Welcome back to PID Controllers: Intro to Control Design. In this section we're talking about derivative control. In this lecture I'll go over what derivative control is, how it can help us, and how to implement it. At the end of this lecture you'll be ready to solve Assignment 2. So let's recap the issue that we have with just the P controller. With a P controller, the controller doesn't stop accelerating the elevator until it gets to the set point, because remember, the controller output equals the error times some gain value. So if the error is any nonzero number, then we are still accelerating. It's only when the error is zero, when we get to the set point, that the controller stops accelerating. Now, all of this acceleration is bad because it generates a very high velocity. So even when we have zero acceleration, the elevator has a lot of velocity, and it's going way too fast when it hits the set point. What we need to do to solve this issue is decelerate, and note that having zero acceleration is not deceleration. Having a negative acceleration is deceleration, and we need to decelerate before we get to the set point. That is, we need to slow down our velocity and bring our velocity close to zero before we get to the set point. For those of you whose calculus skills are a bit rusty, let's talk a little bit about what a derivative is before we see how we can use it. If we have some function y = f(x), the derivative at some point is the slope, or rate of change, of the function at that point. Let's bring this abstract mathematical function into something that we're more familiar with. Here I have a function x = f(t), where x is our distance from the store and t is the time. So here's our function, our distance from the store with respect to time.
To find the slope, or rate of change, of a line, you might remember that we can use something called the rise over run rule, where we take the rise divided by the run, and this will give us our derivative, denoted by x dot here. So we know how to calculate the derivative using rise over run, but what does the derivative actually mean? To figure that out, let's stop working with a continuous line like we have here, and let's work with discrete points. This is something we'll be a lot more familiar with. Let's say these discrete points are all the points we have measured as we're walking away from the store. So remember, x is our distance from the store and t is our time. So for each one of these points, we know our distance from the store and we know the time at which we took them. So to calculate the derivative, again we do rise over run, which is the difference in rise over some difference in run. For this case, the rise would be the difference between two points in x, and remember, x is our distance from the store, and our run would be the difference between two points in time. Let's start adding in some numbers. Let's say that at x one we were 100 meters from the store and at x two we were 200 meters from the store. Our rise is thus 200 minus 100. If t two is 150 seconds and t one is 100 seconds, then our run is 150 minus 100. So to calculate our derivative, we take 100 meters and divide it by 50 seconds, and we get our derivative of two meters per second. So here you can see that our derivative is a velocity. It's our velocity away from the store. x is our distance from the store, t is our time, so x dot, the derivative of x, is our velocity away from the store.
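The rise over run calculation above, with the lecture's own numbers, looks like this:

```python
def derivative(x2, x1, t2, t1):
    """Rise over run on two discrete measurements: the rate of change
    of a distance is a velocity."""
    return (x2 - x1) / (t2 - t1)   # rise (meters) / run (seconds)

# 100 m from the store at t = 100 s, 200 m at t = 150 s:
print(derivative(200.0, 100.0, 150.0, 100.0))   # 2.0 m/s away from the store
```

This same two-point formula is exactly what the PD controller will use on the error signal.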
So your derivative with respect to time is the rate of change of whatever it is you're taking the derivative of. Here we have a distance, so our derivative of that is a velocity. For our elevator example, since we're always dealing with distances, that is, the distance from where the elevator currently is to where it wants to be, when we take the derivative we're going to get the rate of change of that distance, which is a velocity. So how can we use derivatives to help us solve our control problem? Well, let's think back to the example of the autonomous vehicle I gave earlier. Let's think very specifically about what the error, the controller output, and the derivative of the error, or the rate of change of the error, are doing at all of these points. At the very start, when the autonomous vehicle first decides it wants to move from the left lane to the right lane, our error is big, because we are as far from the right lane as we're ever going to be. From here on, we're just going to get closer to the right lane. Our u, our controller output, is also going to be big, because remember, if we have a proportional controller, a big error means a big controller output. Our derivative of error is zero, because we haven't started moving yet. We just started in the left lane and we haven't started moving towards the right, so we have no rate of change of our error; that's zero. However, what would these three terms be at the center line, that is, just as we crossed the center line, if we followed this red path? In this case, our error would be small, because we went from far away from the right lane to a lot closer to the right lane, and our u would also be small because of our proportional controller. Since our error went from big to small, our error dot is negative, and it's big, because we had a large decrease in error. Going from big to small is a negative rate of change.
Now, if we think back to the issue we had with our P controller, we don't want a big negative rate of change of error. We want to slow down our error, because the reason why it's not working right now is that we have such a big rate of change of error that we're flying by the set point. So we have a big negative rate of change of error, and we want a small derivative of error. If we have a small derivative of error, we'll get this nice, smooth transition to the right lane, because we're slowing down how fast we're approaching the set point. So again, we have a derivative of error that is negative and it's too large; we're approaching too fast, and we want a derivative of error that is still negative but a lot smaller. So here's what we need to do. If u positive is turning our steering wheel to the right and u negative is turning our steering wheel to the left, we need our controller output to be negative. We need to steer to the left to slow down the rate at which we're approaching the right lane. A solution we could try for this is setting our u equal to our error derivative times some constant K. This may seem a bit weird, so let's actually think about this for a bit longer. Now, the picture I've drawn here is a bit exaggerated, but imagine if you were driving at the lane at such a sharp angle, that is, you have a large error derivative because you're coming at the right lane very fast. In this case, you would want to steer hard to the left. You'd want to steer very hard in order to get into the right lane smoothly. Conversely, if we had a small error derivative, that is, we're approaching the lane at a very shallow angle, we wouldn't really have to steer that hard to the left. The amount we want to steer, or the size of our controller output, in relation to our error derivative makes intuitive sense. The thing that might be confusing you right now is still the signs: whether we should turn to the left or to the right based on your error derivative being negative or positive.
Remember, all that changes in your final control design is whether your K value is negative or positive. So, worst-case scenario, you can just try both a negative and a positive K value and see which one works. I already showed you how to calculate the derivative when we have a bunch of points along a line like this. Since we know our controller is running at 20 hertz, that is, a dot is recorded 20 times a second, we know the time difference between all of the points: one second divided by 20 is 0.05 seconds. So for the bottom part of our rise over run, we know the run, the time difference between every point, is going to be 0.05 seconds. For this example it doesn't really matter where we calculate the rise over the run, because all of these points lie along the same line, and thus they all have the same slope, or rate of change. But in our real controller it won't be like this. We won't actually have all of the points at once. What will really happen in our controller is that over time we sample new points as our elevator moves, and thus we have to calculate a new error derivative for each new point. However, the calculation remains the same: we just do the rise over run at our current point. We take our current error, subtract our previous error, and divide by the difference in time between those two recorded errors, which is 0.05 seconds. So recall our proportional controller: we just took our error and multiplied it by some gain value, KP. To turn our proportional controller into a proportional-derivative controller, or PD controller, all we do is add in our error derivative times some different gain value. And that's it, that's our PD controller. You are now ready to solve assignment two. Recall the pass conditions.
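To make that update concrete, here's a minimal sketch of one PD controller step in Python. The function and variable names here are my own illustration, not the simulator's actual code, and I'm assuming the fixed 20 Hz loop (dt = 1/20 = 0.05 s) from the lecture:

```python
def pd_step(error, prev_error, kp, kd, dt=0.05):
    """One PD control step: return the controller output u."""
    error_dot = (error - prev_error) / dt   # rise over run: rate of change of error
    p_out = kp * error                      # proportional contribution
    d_out = kd * error_dot                  # derivative contribution
    return p_out + d_out

# Example: the error shrinks from 1.0 m to 0.8 m over one 0.05 s step.
# error_dot = (0.8 - 1.0) / 0.05 = -4.0, so u = 2.0*0.8 + 0.5*(-4.0) = -0.4
u = pd_step(error=0.8, prev_error=1.0, kp=2.0, kd=0.5)
```

Notice the D term pulls the output negative while the error is falling fast, which is exactly the "slow down before the set point" behavior we want.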
In order to pass the assignment, you must get within 0.03 meters, or three centimeters, of the set point while going less than 0.1 meters per second, essentially stopped. This will be indicated by a pass showing up in the position plot. You are not allowed to exceed 18 meters per second, which will be indicated by a fail in the velocity plot, and you are not allowed to exceed five meters per second squared, which will be indicated by a fail in the acceleration plot. Here are some hints to get you started. If you set the P and D contributions to the output equal to p_out and d_out, you can set the PID debug option equal to true, and this will help you visualize what your gains are doing. I've already provided some starter code, so it should be pretty obvious how to set the p_out and d_out terms. To tune the gains, I suggest you start by increasing KP with a KD of zero and get the elevator moving at a nice velocity. Since you'll only have proportional control, though, the elevator will fly by the set point. So once you have the elevator going at a nice velocity, start increasing KD until it slows down while nearing the set point. Now, in the lectures I talked about using 0.05 seconds; however, I've provided a term called DT, which is the time difference between every two points. Use that instead, and you'll get a much cleaner solution. Once you have completed the assignment with the default physics settings, check the quiz, which will be the next part of the course, as it will ask you to try a number of different physics options and report whether the PD controller that worked with the default options also works with those options. You don't have to create a controller that passes all of these different physics options; all you have to do is report whether your controller that worked with the defaults works with the other physics options. If you need help with the assignment, the next video I post will be a walkthrough of assignment two.
However, I strongly suggest you try doing this on your own first and give it a real effort. Other than that, enjoy the assignment, and I'll see you in the next section.
12. Assignment 2 Walkthrough: Welcome back to PID Controllers: Intro to Control Design. In this lecture I'm giving you a walkthrough of assignment two. Here you can see my solution to assignment two. Note that you could have settled on vastly different KP and KD gains, but as long as it passes, that's all that matters. Here is my solution running for the no-friction case. You can see everything's pretty smooth, but it takes a little while to get there. Now with friction: again it's quite smooth, but it still takes a really long time to get there, especially to close those last few centimeters. Now let's try adding the weight of people. You can see the elevator stops well short of the goal. This is true whether there is or isn't friction; it doesn't make a difference here, the elevator will still stop short. Now, one thing I didn't talk about in the lecture, but which is really good to do when you have a problem like this, is to set a maximum output value. Let's look again at the acceleration of the controller with the values shown here. Notice the sharp spike where we accelerated at around 4.5 meters per second squared for, honestly, about a second, and then it quickly drops. Since we can't exceed five meters per second squared, that really limits the size of our KP value. We can't make it much larger or it'll exceed five. What if, instead of accelerating at around 4.5 meters per second squared for about a second, we accelerated at around two meters per second squared for two seconds? Essentially, we spread the acceleration out. Well, one way we can do this is to set a maximum value on our output. I used the code highlighted here to set a maximum and minimum value on our output, which I set with this output max term. Here you can see that I'm now allowed to use gains that are much larger. This makes the controller a lot more aggressive; however, we still won't exceed the maximum acceleration because of that maximum output.
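The clamp itself is only a couple of lines. This is a sketch of the idea rather than the exact highlighted code from the video; output_max here plays the role of that output max term:

```python
def clamp(u, output_max):
    """Saturate the controller output symmetrically at +/- output_max."""
    return max(-output_max, min(output_max, u))

print(clamp(7.0, 2.5))   # an aggressive demand is cut down to 2.5
print(clamp(-7.0, 2.5))  # and to -2.5 on the way down
print(clamp(1.0, 2.5))   # demands inside the limit pass through unchanged
```

With the clamp in place, you can raise KP and KD aggressively while the output, and therefore the acceleration, stays bounded.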
Let's see how this controller, with more aggressive gains but with a maximum output, performs. You can see that was a much more aggressive stop: we're pretty much at full acceleration, then maximum deceleration, and then it goes to zero. That was much faster than our other controller. Yes, it will be a bit rougher of a ride for the people in the elevator, but remember, before we had a sharp spike of 4.5 meters per second squared of acceleration, whereas now our maximum acceleration is 2.5. Let's try that with friction. Again, setting a maximum output value performs really well here, because it allows us to set more aggressive gains on our controller. You can imagine a situation in real life where your motor has some maximum force it can output. When your controller demands more than what the physical motor can output, that's called saturation. By using a maximum output here, we're essentially creating some artificial saturation, and as you can see, it's not always a bad thing. Sometimes you can use it in your favor, especially if you have maximum acceleration limits. Let's add people mass to this controller and see how it performs. You'll notice that the controller still has some steady-state error, but it's much smaller than with the other controller. If you go back to the slides and look at the steady-state error, remember that the steady-state error depends on your proportional gain: if you have a larger KP, you will have a smaller steady-state error. But no matter how large you make your KP, that steady-state error will never be zero. That's it for assignment two. I look forward to seeing you in the next section.
13. PID Control: Welcome back to PID Controllers: Intro to Control Design. In this section we're talking about integral control. Integral control is the last part of our PID controller. After we finish this, we'll have a full PID controller, and you'll be ready to work on the last assignment of the course, assignment three. If you went through the quiz in the previous section, one of the questions asked how your PD controller performed when you added people into the mix. What you should have noticed is that once you added people into the elevator, the PD controller stopped well below the set point. This issue can't be solved with a different gain or with friction; it is something else entirely. Before we start talking about integral control and how it can be used to solve this issue, let's dive deeper into what the root cause of the problem actually is. Imagine we have a simple P controller and a perfectly counterweighted elevator, that is, the force of gravity on the counterweight negates the force of gravity on the elevator. When we have an error, and therefore an output, of zero, the elevator is not moving; it's staying perfectly still. However, once we add people into the mix, the force of the counterweight is now less than the force of the elevator plus the force of the people pulling downwards. Since our error is zero, our controller output is also zero, and because there is more force pulling the elevator downwards than upwards, the elevator will begin to move down. However, once the elevator begins to move down, our error is no longer zero, and thus our controller output is no longer zero. With the PD controller you implemented in assignment two, you would have noticed that the elevator came to rest at some point below the set point. Eventually, the force of the motor generated by the controller output, plus the force of the counterweight, equaled the force of the elevator plus the force of the people.
So essentially, the elevator reached an equilibrium state where it has some non-zero error, but it can't move in either direction. Before we begin solving this problem, though, let's talk a little bit about what an integral is, for those of you who might need a refresher. If we have some function y = f(x), the integral at some point is the area under the line; it's essentially the sum of everything underneath the line up to that point. Let's look at a simpler example so you can get a better idea of exactly what the integral calculation is doing. To calculate the integral at that first point, one method is to take the height of the point and multiply it by the width to get the area underneath it. So for that first point, x equals one and t goes from 0 to 1, so we do one times (one minus zero) to get an area of one. Remember that the integral represents not only the area underneath that point, but all of the previous area as well. Thus, when we want to calculate the area under point two, we also have to take into account the area under point one. When we calculate the area under point two, we know that point two goes from time 1 to 2 and has a height of two. Therefore the area under point two is two times (two minus one), plus the first area we calculated, which is one, so the total area under point two is three. To calculate the area under the third point, we do the same again. Hopefully by now you can see that the answer will be six, because the area under the third point is three, plus all the previous areas, which is three, so we get a total of six. The easiest way to think about integrals is to think of them as addition: we're just constantly adding up all of the points we have collected so far. So here you can see I've broken it up into one-by-one blocks, and our final number, six, is just the addition of all of these blocks stacked up.
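You can reproduce that block-stacking arithmetic directly in Python. This is just the lecture's rectangle example (heights 1, 2, 3, each one second wide), not controller code:

```python
heights = [1.0, 2.0, 3.0]  # heights of the three points
dt = 1.0                   # each rectangle spans one second
integral = 0.0
running = []
for h in heights:
    integral += h * dt     # area of this block plus everything before it
    running.append(integral)
print(running)  # [1.0, 3.0, 6.0] -- the areas 1, 3, 6 from the lecture
```

The running total after each point matches the areas we worked out by hand: one, then three, then six.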
So to design an integral for your controller, essentially what you'll need is a constant summing, where you're always adding your new value to whatever the previous total was. The general form of this integral calculation looks like this: our x value times the delta t, which is the difference between the times (for our previous example, from 2 to 3, the delta t would be one), plus all of the previous integrals. The issue we're currently stuck at is that we have some error, but even though we have error, the controller isn't doing anything extra about it. If we were to plot the error over time for that equilibrium position we reached, it would look like this: some constant error that is not decreasing. Well, what if we tried adding the integral of all these errors, that is, the sum of all these errors, to our controller output? Even if our error is really small, which it is because we're close to our goal but not quite there, eventually adding all of these errors up will produce a large enough controller output to actually move the elevator to the set point. If the elevator eventually moves to the set point, eventually the error is going to be zero, because we are exactly where we want to be. Remember, however, that an error of zero and a controller output of zero is exactly what caused this problem in the first place. So how does the integral solve that? Well, remember that even if the current error is zero, the current integral is still the current error plus the sum of all the previous errors. Therefore, just because our current error is zero does not imply that our controller output will be zero, because the integral of the error term will still be greater than zero. This means that our controller will eventually get to a point where the only thing contributing to the controller output is the sum of the previous errors, the integral of the errors. We now have our proportional, and we have our integral.
And we have our derivative parts of our controller. That's it; that's all of PID control. You can now design a controller that can handle varying masses of people, and the integral term will make up for the difference. Here are some hints to help you out with this final assignment. Make sure you save the new integral value you calculate in self.integral, because you'll need it for the next time step. I'd also suggest you set a maximum value for self.integral to prevent something called integrator windup. This can happen when your integral term grows too large, because remember, your integral term is constantly summing the error. So if it takes a while to get from your start point to your set point, by the time you get there you'll have already accumulated a lot of error. So add a maximum value for your integral term, and make sure it's big enough to hold up the mass of the people, to prevent this sort of integrator windup. I'd also suggest you test your controller going both up and down, because once we put people in the mix, the mass is unbalanced, so you'll get a different response depending on whether you're going up or down. Next, be sure to try different weights of people, and even try some negative weights. Next to the physics option where you set the people's mass, I've commented out a few masses that you should try, ranging roughly from negative 200 to 200 kilograms. A negative weight here essentially adds weight to the counterweight, making the counterweight pull harder than the elevator. And with that, you now know how to design a PID controller, and you can solve this final assignment. You've learned a lot so far. I hope you enjoy implementing it now.
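Putting those hints together, here's one way the whole update could look. This is a hedged sketch, not the assignment's starter code: the class layout, gain values, and the integral_max name are placeholders of mine, though I've kept self.integral since that's the attribute name suggested above:

```python
class PID:
    """Minimal PID sketch with a clamped integral to limit integrator windup."""

    def __init__(self, kp, ki, kd, integral_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral_max = integral_max  # tune: big enough to hold the people mass
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        # Sum the error over time, then clamp so the sum can't grow unbounded.
        self.integral += error * dt
        self.integral = max(-self.integral_max,
                            min(self.integral_max, self.integral))
        error_dot = (error - self.prev_error) / dt  # rise over run
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * error_dot)
```

With a constant error, self.integral grows a little each step until it hits integral_max and stops, which is exactly the windup protection described above.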
14. Assignment 3 Walkthrough: Welcome back to PID Controllers: Intro to Control Design. In this lecture I'm giving you a walkthrough of assignment three. Here you can see my solution to assignment three. Note that I used a maximum integrator value of five and an output maximum of 2.5, and here's where I implemented a max integrator term to prevent integrator windup. Let's see how my solution performs with a people mass of 200. You'll notice that it's quite smooth, but it takes a little while to close those last few centimeters. If we turn on PID debug, we can see why. Note that the I component hits its max value, but the value it really needs to be at to hold up the people mass is here. However, the I component can't come down unless the error is negative, that is, unless we're above the set point. So the elevator has to go above the set point so that we can start reducing our I term until it perfectly balances the people mass. This is essentially a little bit of integrator windup: if I set the integrator maximum to something much larger, the elevator will hang at that overshoot position for much longer. Ideally, you would want to set your max integrator value to the very maximum you could expect to encounter in the real situation. However, you generally don't have that much knowledge, so it's better to give yourself a little bit of breathing room. Now let's try the down direction. Note that this is pretty much equivalent to setting a negative mass with the same direction I had before. You can see that the controller struggled a lot more with this one, because before, the integrator term had to go from positive 1.25 to one, whereas now it has to go from negative 1.25, because remember, it's starting at the bottom, and work all the way back up. If you had a lot of prior knowledge about this system, for example, you knew that the counterweight mass was always equal to the elevator mass and the people mass was always positive,
then you could use integrator windup protection, but not have it symmetrical. You could put a floor on the integrator at zero, so that the integrator term would only have to go from 0 to 1, and it wouldn't hang so long at the bottom. Just for completeness, I'll show you that negative mass, but it's exactly what I just described: the integrator term has to go from positive 1.25 all the way down to negative one-ish. So if you had some better prior knowledge about the system, you could set a floor on the integrator. Now, when you look at the different components, the P component, the I component, and the D component, from a coding standpoint a PID controller really isn't that complicated; it's not that many lines of code. However, hopefully in this course you didn't just learn which specific lines to write, but actually came to understand the logic and the reasoning behind them, because not all situations will require a full PID controller. Some situations might just require a PD, some just a PI, and some just a P controller. So I hope, now that you've finished the core content of this course, you feel confident in your ability to not only create a PID controller but actually understand what's happening inside one and why. Like I said, that's it for the core content. Stick around for the next section, though, where I'll talk about advanced controllers and actually war-game a few PID controller scenarios, as if you were going to design your own for a certain environment.
15. Outro: Welcome back to PID Controllers: Intro to Control Design. In this lecture I'm going to talk about advanced controllers, controllers that go beyond the abilities of PIDs, just so you know what a PID's limitations are and what's beyond the horizon. Next, I'm going to give you some PID implementation advice; specifically, I'll talk about when you should and shouldn't use the individual P, I, and D components. In the introduction to this course, I talked about how PIDs are everywhere. However, there are plenty of situations where PIDs just don't cut it and more advanced controllers are required. One of the main drawbacks of a PID controller is that it's purely reactive. PID controllers don't plan ahead or think about how their actions will affect the future; they purely react based on the past. Because of this reactive nature, it can be very difficult for PIDs to deal with constraints, constraints such as "don't overshoot the target" or "don't have too large an output." These constraints are really tough to handle with PIDs. Yes, you can set artificial constraints like capping the output, but that doesn't mean the PID will approach the limit smoothly; it just means it will hit a wall when it gets there. Lastly, PIDs don't take advantage of any advanced process knowledge you might have, such as how the system behaves, because they purely react; they don't try to plan ahead, and they can't use any extra process knowledge you have. An example of a control scheme that uses process knowledge to actually predict the future, and likely the subject of my next course, is model predictive control. With model predictive control, you give the system a physics model, and it uses this model to see how its controller output will affect the plant output. By knowing the effect its controller outputs will have, it can make the best decision not only for right now but for the future.
This is why it's really great at handling constraints, which is one of the primary reasons it was designed. For a PID controller, the reference was just a single value we wished to attain; for model predictive control, you need a more complicated reference, and you use something called a cost function. Whereas in a PID controller the whole point is to make the error zero, the whole point of a model predictive controller is to minimize this cost function. For the example of the autonomous vehicle shown here, the cost function would not only include the distance from the point you're trying to reach, but also other factors, such as the smoothness of the controller output, the acceleration, things like that, which would make for a smooth response. You can imagine that if you were driving a car and just trying to get to the other lane as fast as possible, it would be a very jerky, sharp ride. You really want other considerations at play, such as the smoothness of the path you take and minimizing rapid changes in acceleration. Another type of advanced control system is nonlinear control. Now, so far we've only dealt with "linear" systems, and I have that in quotes because the systems we've been working with aren't necessarily linear; we've just been approximating them as linear, and they've been linear enough to work for our purposes. Nonlinear systems, like the one shown here, are much harder to control. For example, in this system, as the controller output x increases, the plant output first increases and then decreases. A PID controller could not control this situation, because at some points the plant goes up, and at other points the plant goes down, for ever-increasing inputs. If our elevator example behaved nonlinearly like this, that would mean that with a controller output of 10 the elevator would go up, but with a controller output of 20 the elevator
would all of a sudden start going down. A PID controller would not be able to handle a situation like that. The only way a PID controller could work is if you restricted it to some linear-ish portion of the problem, such as letting the controller work only in a limited section of it. This is, in fact, what you'll be doing almost every time you design a controller, since almost every problem is a nonlinear problem if you look at it broadly enough. There's a common joke in controls that splitting things into linear and nonlinear categories is a bit like organizing all the objects in the world into the categories of bananas and non-bananas: almost everything falls under non-bananas, or the nonlinear control category. Now let's talk about some specific PID implementation advice, and then we'll war-game a few scenarios together and see if you can guess what the proper components to use are. The P component of a controller is almost always implemented. The reason for this is that it allows a very quick response to an error. The only time it's not used is when there's a large amount of noise coming into the controller, because this very quick response reacts suddenly to all of that noise and creates a very noisy controller output. In a situation like that, with lots of noise, you could just use the I term, or you could filter the input coming into the controller. However, once you begin filtering, this can cause stability problems from things such as phase lag. The I component is required when the plant needs a constant input in order to remain stable at a set point. Remember, for our elevator problem, once we added people into the mix, the plant needed a constant input, that is, a constant force from the controller to hold the elevator steady. That is what I mean by "the plant requires a constant input." Even if the plant doesn't require a constant input, there are still cases where you'll need an I component.
These cases are when changing or unknown offsets need to be accounted for. Think of our elevator problem with a perfectly balanced counterweight. In this situation, if we have an offset, that is, when we command a controller output of zero we actually get an output of positive one, or when we command an output of negative one we actually get zero, then we would need an I component, even though the plant doesn't require a constant input. A better example of an offset is probably our autonomous vehicle steering problem. Say that for this problem, when we're steering "straight," that is, when we have a controller output of zero, we're actually steering a little bit to the left; imagine that's because of some mechanical issue or something. Now, when we're going dead straight and we're right on the set point, our error is zero, so our controller output is zero. But because of the offset, this would actually make us steer a bit to the left. Without an I term, we would constantly drift a tiny bit to the left. With an I term, however, the controller can account for this offset and make us drive straight with an error of zero. The last reason to use an I component is that it can smooth out noise. Remember how it's a summing of all of the previous inputs? Well, that essentially acts like a filter, so it can help alleviate some of those noise problems. Whenever you implement the I component, you should always be thinking about integrator windup; it can always create problems if it isn't protected against. Also, whenever you have output saturation, that is, a limit on your output, realize that this will negatively impact the I component and can be a source of windup-related problems. The D term can be used to purposely overdamp, or slow down, the response of the controller.
Also, as we've talked about quite a bit, the D term is required when you have these sorts of acceleration problems with low friction forces. In one of the bonus materials, I show how, if you have extremely high friction forces, you don't actually need a D term for an acceleration problem, but those situations can be quite rare. When implementing the D component, you have to be careful, because it involves a division by the delta t, the time between two successive runs of the controller. If you are running at a variable rate and you hard-code the DT to a set value, this will create problems and lots of noise in your controller output. You can try this on your own in the simulator by replacing DT with 0.05 and seeing what happens. You also have to be careful, if you're running at a very high rate, that you don't accidentally divide by zero, which can happen if the difference between two successive runs of your controller is so close to zero that it gets rounded to zero. Just like the P component, the D component is also prone to noise and will generate noisy controller outputs. Recall that the calculation for the D component involves error minus previous error. If you rapidly change your set point, or change it by a large amount, you cause the error to jump artificially quickly, so your error minus previous error will be a large number. If you originally have an error of zero and then change your set point to somewhere far away, this creates a massive error-minus-previous-error delta. To avoid this, instead of using error minus previous error, just use plant output minus previous plant output, and this issue goes away. If you look at the math, it essentially works out to the same thing. Okay, let's war-game, that is, actually walk through a few scenarios of designing a controller and see what we can do. So let's imagine we have the cruise control problem.
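Here's a sketch of both of those fixes at once: computing the D contribution from the plant output instead of the error, and guarding the divide against a near-zero dt. The names are illustrative, not the simulator's, and the leading minus sign is there because with error = set point minus measurement, the error's rate of change is the negative of the measurement's (assuming the set point holds still between samples):

```python
def d_term(kd, measurement, prev_measurement, dt, dt_min=1e-6):
    """Derivative contribution computed on the measurement, not the error,
    so a sudden set-point change can't cause a derivative kick."""
    dt = max(dt, dt_min)  # guard: a rounded-to-zero dt would divide by zero
    # Minus sign: d(error)/dt = -d(measurement)/dt for a constant set point.
    return -kd * (measurement - prev_measurement) / dt

# A set-point jump changes the error instantly, but the measurement moves
# smoothly between samples, so this D term stays well behaved.
```

Swapping in the set point directly would spike this term at every set-point change; the measurement-based version only reacts to how the plant is actually moving.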
What components of the PID controller should we use? Should we use all three? Should we just use one? What do you think? In my opinion, we would definitely want to use the P component, as we want to react quickly to any error, and speedometers seem like they would have small amounts of noise. We would need the I component, because the system requires a constant input due to air and other friction forces. Remember, if you're going 100 kilometers an hour and your error is zero, because that's the speed you're trying to attain, your controller output with just a proportional component would be zero. But we know that with air friction, if you let off the gas, or the accelerator pedal, at 100 kilometers an hour, you will begin to slow down. Thus the plant requires some constant input. Now, the D component isn't strictly required, because we're going from an acceleration to a velocity, so it's a velocity problem. But a D component could be used to smooth out and overdamp the controller's response, that is, make it a lot smoother and slower, because in a car we really don't want any rapid, rough changes. Okay, next scenario. Say we're designing a controller for a robotic arm, that is, we're trying to control the position of the robotic arm, and we have some input to its motors. What do you think? What components do we need? I think we would definitely want the P component, because we want to react quickly again. The I component might not be required, because I don't think we need a constant input, but it could be used to handle any offsets or hysteresis, that is, stickiness in the gears, because this seems like a situation where there would be quite a bit of friction. As for the D component, it's probably required, because the electric motors output a torque, a force, and we're trying to control a position. But I could also see there being a lot of friction in a problem like this, so maybe it's not required. Okay, last scenario.
Let's pretend you're trying to control the temperature of your shower. Now imagine there are two different controller outputs you could use. With the first, you can turn the knob left or right, that is, your controller controls the velocity at which the knob turns. With the other, you directly set the knob position: you command a knob position, say in the range of zero to one, and the knob instantly moves to that position. What components of a PID controller do you think are required for those two situations? And do you think they can use the same components? After all, they are trying to solve the same problem. For the first scenario, you can think of it as a velocity problem, since we're controlling the knob's velocity, or angular velocity, and there is some specific angular position that will yield the perfect shower temperature. So essentially, the P component can solve this problem on its own. I would also add that you should use a very small proportional gain, because the dynamics of a shower are very slow; you don't want the knob rapidly turning from one end to the other. You want it to happen very slowly, so that the system actually has a chance to respond. And this is a pretty good general thing to be thinking about: the response of your controller versus the response of the plant, and the different time scales on which they happen. Now, for the second scenario, the controller output is a position, and we're trying to find the ideal position. So we have, like, a position-to-position problem, but we don't know where that position is. This is actually a pretty weird place to be using a PID controller; it would be a great place to use a logic controller. However, if we just use the I component, it would still work. You can imagine this playing out in your head: when there is some error, that is, the temperature isn't quite right,
the I component will start slowly summing these errors and increasing the output. Eventually, it'll get to the point where there is no error; it'll stop summing and adding to the controller output, and there you'll be, at the perfect knob position. Again, you should use a very small KI so that these changes happen very slowly, allowing the dynamics of the shower to catch up to the controller output. Also, integrator windup would be very easy to control here, because you know the maximum and minimum output you want from your controller: you can set the windup maximum to be one divided by your integral gain. With that, you're done the course. Congratulations! You've now learned a great deal of knowledge that's generally reserved for fourth-year engineering students. You designed and implemented a PID controller from scratch and successfully controlled a dynamic system. You've also become more familiar with control theory parlance: terms like reference, error, input and output, plant, gains, open- and closed-loop control, positive and negative feedback. All of these are familiar to you now, and you'd have no issue using them in conversation, or hearing them and understanding what the other person is trying to say. I also want to say thank you for sticking with it this far. This is not only my first online course, but the first time I've ever done a big media project like this, so if there were any technical issues you encountered, I sincerely apologize. Also, if you have any feedback, either on the course content or on technical issues, such as preferring the mic I used in section two over section six, please do let me know. Any feedback you give me will be used not only to improve this course, but to improve the courses I'll make in the future. Again, thanks for sticking with it. I hope you enjoyed it, I hope you found it interesting, and good luck in all your future control endeavors.