Transcripts
1. Introduction: Hello, everyone, and welcome to this Skillshare Computer Science 101 class. This is an introduction video. My name is Kurt Andersen. I'm going to be the instructor for this class that goes over computer science basics, sort of a good overview of all the topics that computer science hits and some of the main areas, like n-notation and timing and things like that. In this course, we're going to be learning seven things, and I'm going to go over them in just a second. But first I want to give you an idea of why we want to learn these things. Computer science is the way that we look at programming and algorithms so that we can make them more efficient. And this term called scalable means that our algorithms will run efficiently whether we put in 10 pieces of data or, for example, one million pieces of data. We want our programs to run efficiently with as much data as possible. Think, for example, if we're building the next Facebook. Imagine if they only designed it for 10 people. That would be a problem, because when they got up to 1,000,000 people, suddenly there's a big issue. There are timing problems. People can't load things because our algorithms are all inefficient. And this matters because we can actually create an algorithm that does the exact same thing two different ways. One of the ways could take 10 years to finish, and the other way could take 10 seconds. So we don't want to create the algorithms that take 10 years; we want to create the algorithms that take 10 seconds. We'll be going over things like that throughout the course, breaking out a couple of problems and timing them so I can show the difference between the times using these different algorithms. And then at the end of the course, we're going to be going over a project. So let's go over this and I'll talk about the project. First thing we do is a little math refresher.
I know it might have been a little while since you've been to a math class, so we're just going to cover a couple of the basics, some of the things that will help you throughout the course. This isn't a math-intensive course by any means. We're learning sort of new areas, but it will help if you have a solid foundation. So we're going to do a quick math refresher just so that you can remember it. And if you have a little bit of a shaky background on something, you can look it up and learn some more on that before you get started. Then we have n-notation. That is sort of the core principle behind computer science. It's the tool in the toolbox that allows us to look at two algorithms and tell which one is quicker. So if I look at an algorithm and I say it's n log n versus another algorithm, which is n squared, as computer scientists we know what those mean. We can now label things and communicate the speeds to other computer scientists. That's what n-notation is. Then we're going over data storage and arrays. This is a way of storing data, available in every modern programming language, like Java, JavaScript, iOS development (which is usually Objective-C or Swift), or Android development (which is Java or Kotlin). Arrays, linked lists, stacks, and queues exist in all of those. Then we're going to go over nodes and linked lists. These are almost sort of opposites of one another, so they do different things, and they allow you to solve different problems in different ways. Then stacks and queues. Think of a queue almost like when you're waiting in line: there are things queued up, so this person goes, and then this person goes, and then this person goes, and if people come in, they all start lining up. We can actually create that in an algorithm, which helps us in a lot of different situations. Then we're going to go over sorting algorithms. This is sort of the baseline of our computer science content.
This is always where people start. There are a lot of different ones, and it allows you to see a progression. We're going to go from some really bad sorting algorithms that take way too long, and we're going to work our way up to some really, really good sorting algorithms that are efficient but a little bit complex. Then we're going to go over trees and the binary search tree, or the BST. Those are two important topics within computer science. And then finally, we're going to do the project. That's at the end of the course. And the project is essentially we're going to be taking all of this and boiling it down into one problem. There is a problem at the end of the course: we have 10 million pieces of data, I'm trying to sort that data, and I want you to go through the problem and then tell me how long it will take to run. You'll be able to tell me by the end of this, and then, how can you make it more efficient? There will be a couple of quick, easy changes, some data structures you can change, stuff like that, that will make it more efficient, and you'll see that the time comes out really different depending on how you implement these changes. So that is our course; this is what we're going to be learning. It's going to be a great journey. There's a lot to learn, and you will slowly understand more about the computer world as we go through it. Thanks, everybody, and let's get started with this course.
2. 1-1 Time Complexity Introduction: So we're going to be starting this course where a lot of CS courses start, and that is time complexity. The reason that we start with time complexity is because it gives us a standard, something that we can compare our programs to and something that we can talk to other computer scientists about. It's a way that they, especially professors and the ones teaching us, can communicate to us why a certain program, algorithm, or way of doing something is inefficient versus another way. So it's a standard way of comparing different algorithms. That's what it all boils down to. And what I mean by a standard way is the fact that a computer scientist from, let's say, the Netherlands, and one from America, and one from India could all communicate in the exact same way, because we use certain notations like big O notation. So, for example, it would look something like this. This is big O notation, and any computer scientist understands what this means. Later on we're going to be discussing that as well, and you use certain terms, like n or n squared or other scalable factors. So we're not saying, you know, it runs in 25 units of time, because we don't know what 25 units means. Does it take 25 clock cycles? Does it take 25 seconds? Would it run in 25 cycles on my computer versus your computer? It's very hard to compare when you go with that. So what we're doing is we're going to be looking at the standard for this, a way that we can compare two algorithms that is completely independent of the computer it's being run on, the area of the world, how fast the Internet connection is. Just being able to look at two of them and compare them allows us to figure out ways that we can improve. So if we can look at a certain notation, then we can figure out how to be better than that notation and what things are worse than that notation, and that gives us a lot of wiggle room.
It gives us the ability to improve, set goals, and research into different areas. Just like any other science, we're going to be going over the basics of this science and how we can analyze it and try to figure out how to make it better.
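To make that "same answer, very different speed" idea concrete, here is a small sketch in Python (my own illustrative example, not code from the course) of two algorithms that solve the exact same problem, where one scales much worse than the other:

```python
# Two ways to answer the same question: "does this list contain a duplicate?"
# Both return the same answer, but they scale very differently.

def has_duplicate_quadratic(items):
    """Compare every pair of items: roughly n * n checks, i.e. O(n^2)."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_linear(items):
    """Remember what we've seen in a set: roughly n checks, i.e. O(n)."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

data = list(range(10000)) + [42]  # 42 appears twice
print(has_duplicate_quadratic(data))  # True
print(has_duplicate_linear(data))     # True
```

Both functions agree on every input; the difference only shows up as the input grows, which is exactly what big O notation lets us talk about without timing anything on a particular computer.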
3. 1-2 Math Refresher: Logarithmic Functions: Computer science is based on a foundation of math. It's basically just applied math that you throw into a processor, and then it does it all for you. And, you know, you can keep building off this to make your programs. But in essence, it is math, and that's why I just want to go over a couple of mathematics terms that are going to be used throughout the course. Now, don't worry. This is not a heavy math theory course or anything like that. There aren't even going to be really any calculations. However, we are going to be referencing some math terminology, some math nomenclature, and I wanted to make sure that we're all on the same page before we advance into that sort of nomenclature. And they're just some things that, you know, we learned maybe in high school, or we've learned once or twice, and we just don't really use in the real world. But when you get into computer science, they become just a little bit more prevalent. And like I said, you don't have to know how to do it all by hand. Just understanding them, that's the point I want to get you to: being able to understand these different things, so that when you see one in, let's say, the runtime of an algorithm, you can understand what that math term means in relationship to the runtime. And so what I'm going to go over in this lecture is just some refresher material, so that when we hit these terms again later on in the course, you'll understand them at a base level, and you can sort of expand your knowledge a little bit easier. The first thing I want to talk about is something that's used in computer science, but not a lot in other places, and that is the idea of log. So right here is how it's referenced; the nomenclature looks like this. It's log base something, so let's say log base 2 of a variable or a number in here equals some other number.
So, for example, let's put 8 right there, and so this is a log function. Now, what exactly is a log function? Well, what it's short for is something known as a logarithmic function, and a logarithmic function is the inverse, meaning the opposite, of an exponential function. So a log is the inverse of an exponential. Now, what's an exponential function? Let's say that we tracked the growth of both of these functions over time. An exponential function goes a little something like this, where you go straight up into the air, because after every iteration the change grows. So, for example, maybe from 0 to 1 it changed by one, from 1 to 2 maybe it changed by two, from 2 to 3 maybe it changed by four, then eight, then 16, then 32, then 64. And it keeps going higher and higher and higher until, when you make one change, you're moving billions or trillions of numbers to the next one. So that is an exponential function. These are pretty bad in computer science if you're talking about runtimes, because that means the more stuff you put into it, how long it's going to take just shoots off toward infinity. Now, a log function is the inverse of this, the opposite of this. And what would be the opposite of this? Well, that's simple: you take this curve and flip it, so we go like that instead. So what this is going to have is a big change at first, but over time the change between numbers is going to get less and less and less. So where this one goes almost up into a straight vertical line, this one goes almost into a straight horizontal line. This is good, because that means the more stuff we put in our program, it's almost like the runtime isn't changing at all. It's changing less and less and less, until we're getting to, like, that horizontal line. And so that's why log functions are important: because they're the inverse of exponentials.
And there are some algorithms that run in this sort of time. So that's the theoretical side of it. Let's take a look at a couple of examples. So let's look first at the equation for this: log base x of y is equal to, let's go, b. And that, in turn, corresponds to an exponential function. So this is the logarithmic function, and the exponential function is x to the b equals y. Okay, so let's plug some numbers in here so that we get out of these sort of abstract, weird variables and we actually understand this a little bit better. So let's take 2 to the b, and it equals 8. So what does this mean? Well, what we're saying here is, for example, 2 to the 1 equals 2, 2 to the 2 equals 4, and that's because it's just 2 times 2. 2 to the 3 is equal to 8, and that is just 2 times 2 times 2; you're taking 2 multiplied by itself three times, and it equals 8, and so on and so on. So what we're saying is that there is a number b that exists such that 2 to the b equals 8. Well, over here we just figured that out: it's 2 to the 3 that equals 8. So if we plugged in 3 for b, we'd have 2 to the 3 equals 8. This side equals 8, this side equals 8; that's an okay expression. So up here, b is 3. Well, how does this help us? What does this tell us? Well, if we transform this into a log function, we can actually get a little bit of information out of it. So let's write log of, and plug in our x value, our x right here, so it's log base 2. And now let's plug in our y right here, so log base 2 of 8. What does that equal? Well, we know that it equals 3. So with a logarithmic function, we can actually find that number easier. We can find that b easier.
And this is important because with a logarithmic function we can plug in numbers here and get numbers on the right side, where with an exponential function we plug in numbers on the left and get numbers over here. So this helps us, because now we can sort of look at the relationship of why a log function is important. Let's first talk about the base. So, like, what's log base 2 versus log base 10? The base is just what number we're taking the powers of. So, for example, if we did 10 to the 1, 10 to the 2, and 10 to the 3, what we'd have is 10, then 100, then 1,000. And that's because of this over here: 10 to the 2 is 10 times 10, so it's 100, and 10 to the 3 is just 10 times 10 times 10, 10 multiplied by itself three times, so it's just 1,000. So in this situation it would be log base 10, because the number right here is a 10. And there's no special significance to this; you can do log base 27, log base 98, log base 1,000. It doesn't really matter. In computer science we stick with log base 2, and we'll explain that a little bit later, but it just comes down to how computers really only have a zero and a one to work with, so there are, quote unquote, two separate states. And that's why we use log base 2. If that went over your head, that's perfectly fine. Just understand that we use the 2. So let's say that we have an algorithm that runs in log time. Let's say that we have log base 2, and this is the amount of information going in. So let's say it's log base 2 of 64. So maybe we have a Facebook account and we're running some algorithm on Facebook that goes through our friends. So how many friends do we have in this situation? We have 64 friends. And how long is it going to take? Well, our equation runs in log, so we plug that 64 in, and it's going to take some value over here. And in this situation, we can just ask: well...
What is 2 to the n that equals 64? That comes out to be 2 to the 6th, which you can see right down here. It goes 8, 16... so if we go 2 to the 4th, it's going to be 16, 2 to the 5th is 32, 2 to the 6th is 64, right? Like so. So now we have 2 to the 6, and so now we have, over here, 6. So on this right side, what we have is the runtime. So let's say this is seconds. So now we have an algorithm right here: log base 2 of 64 pieces of data equals a six-second runtime. Well, let's make this go a little bit more into the extreme side. Why is log such a good runtime? Why is it so important in computer science? Well, if we can get an algorithm that is like this, we can start seeing some really, really neat things. So let's go ahead and clear up a little bit of space here and let's look at this a little bit more. So what if we had log base 2 of... let's double this. So 128. Well, that's going to equal seven seconds. So we've doubled the amount of information that's coming in, and we've only gone up one second. We've only changed just a little bit. Let's go even higher. Let's just keep doubling. Let's go to 256 right here, and now we have eight seconds. So you can see that the change between these inputs is getting greater and greater, but the seconds are only going up linearly. They're only going up by one second each time. Let's extrapolate this out. Let's say that we took 2 to the 33rd. And why is that important? Well, what does 2 to the 33rd equal? It is a very, very large number, and that number happens to be 8,589,934,592. What's significant about this? This is the first number in this sequence that's greater than the population of the planet. And if we're dealing with Facebook here, that means that our algorithm at max will run 33 seconds.
It won't go further than that, because you can't have more friends than there are people on the planet. So we just sort of have this amazing algorithm right here where it doesn't matter how much data there is; it's always going to run in 33 seconds or less, whether these are milliseconds or microseconds or other sorts of time units. This is a really great growth rate. That means that the next step up would be if we doubled this again, so that would be about 17 billion. So we have a change of one second going from about eight and a half billion up to 17 billion, and then one more second to go up to about 34 billion. So this is why logs are important: because of this relationship that we talked about over here, how they don't grow very much over time. And if you keep going out toward infinity, they almost become a horizontal line, where you can put in as much data as you feel like throwing in and your algorithm is still going to run basically the same speed. So that's the logarithmic function. It's a very, very simple function once you understand it, but it's still pretty weird in the whole discussion on mathematics, and it's something that we don't really see too often. So that's the first thing that I wanted to cover right before we jump in. The next thing I want to cover is another one that a lot of people have seen, but they don't really understand at a base level, and that is something known as factorial.
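Here's a quick Python sketch (my own example, not from the lecture) showing exactly this doubling behavior: each time we double the input, log base 2 only goes up by one.

```python
import math

# Each doubling of the input adds only one "second" to a log2 runtime.
for n in [64, 128, 256]:
    print(n, "friends ->", int(math.log2(n)), "seconds")

# And 2 to the 33rd is the first power of two bigger than
# the population of the planet:
print(2 ** 33)                  # 8589934592
print(int(math.log2(2 ** 33)))  # 33
```

So no matter how many real Facebook friends you plug in, a log base 2 algorithm in this example tops out around 33 units of time.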
4. 1-3 Math Refresher: Factorial Functions: So what exactly is a factorial? Well, it'll look something like this: a 3 and then an exclamation point, or maybe a 27 and an exclamation point. So what exactly does a factorial mean? What a factorial is, if you break it down, is just whatever the number is in the left spot multiplied by all preceding numbers. So in this case, 3 factorial is going to equal 6, which is 1 times 2 times 3. So 1 times 2 equals 2, 2 times 3 equals 6. So 3 factorial equals 6. You'll notice something very important about this: this growth rate is insane. So if we went 1 factorial, that's just going to equal 1. 2 factorial will equal 1 times 2, so 2. It's, you know, pretty similar to the rest of the growth rates so far. 3 factorial will equal 6. 4 factorial: we take that 6 and we multiply it by 4, which is going to be 24. So instead of just 1 times 2 times 3, we have 1 times 2 times 3 times 4, so you can see that's 24. Now, 5 factorial: we take this number, which is this number over here, and multiply it by 5, so we can just do some quick math and get 120. You can see that we're already getting to the point where we're going to have to do, basically, hand math to get these things going, and we're only at the number 5, where, like with logs, we could just sort of keep going up and up and up and it wasn't too hard. 6 factorial: well, let's multiply this by 6 now, and 120 times 6 gives me 720 right there. And you can see that this number is growing by a substantial amount. Every single change is getting bigger and bigger: where we had a change of one right here, now we have a change of four, then down here we had a change of 18, and here we had a change of 96, and here
we had a change of 600, and you can see that this is getting just out of control. So factorials are something that we don't ever want to encounter. They're always represented by either a number or an n, which just means any number can be plugged in there. If we want to go generic, we can put a variable in front of the exclamation point, or we can put in a number to get an actual answer. But this is what a factorial is: it's just a multiplication of every single number preceding it. So like I said, 7 factorial is just 1 times 2 times 3 times 4 times 5 times 6 times 7. Nothing too complicated about it. But like I said, a lot of people don't see this; maybe they've seen it once or twice in math class, but not again. Sometimes we will talk about factorials because they are very bad runtimes. If your program ends up at factorial time, it can take... like I said, you could run a program that's maybe 4 factorial and it'll run really fast, and then if you run a program that's, you know, 20 factorial, just 16 more pieces of data, it might never complete. In the time it might take to complete, the sun might burn out first. So that's sort of the importance of a factorial.
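As a quick sketch (my own Python example, not from the course), you can watch the factorial values and the changes between them explode:

```python
import math

# Factorial growth: each step multiplies by the next number,
# so the change between consecutive values explodes almost immediately.
previous = 1
for n in range(1, 8):
    value = math.factorial(n)
    print(f"{n}! = {value:5d}  (change of {value - previous})")
    previous = value

# Only 16 more "pieces of data" between these two inputs, but:
print(math.factorial(4))   # 24
print(math.factorial(20))  # 2432902008176640000
```

The printed changes (1, 4, 18, 96, 600, ...) are the same jumps worked out by hand in the lecture, and the gap between 4! and 20! shows why a factorial runtime may simply never finish.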
5. 1-4 Math Refresher: Algebraic Expressions: And then finally, I just wanted to go over some basic algebra, some things you might see in the course. So, for example, we might see something like this, which is n log n. So what does this mean? Well, this is just a sort of generic function. It's just showing you what growth rate, or what function, we're using in this situation. So let's say n is equal to the amount of data. So if we had a function like this, all we're saying is that, let's say, our n comes out to be 700. We don't know what it's going to be, because, you know, programs are dynamic; they run in different spaces. Sometimes a person might have 20 Facebook friends; sometimes they might have 1,000. We don't know how many Facebook friends they're going to end up having. So that's why we write something like this. It's almost like a placeholder. It tells us what we're going to use whenever we get this number, but before then we don't really know what it is. This is a basic principle of algebra. And so what we can do when we have a formula like this, when we understand the algorithm the computer is using, is that once we get a concrete example (this formula is the abstract, something we don't know just yet), we can plug that concrete example in here, and we can actually get a number. So in this situation it would be 700 times log of 700, and let's say these are base 2. Actually, let's go with something that we can calculate in our heads. Let's say n is equal to 16. So that means it's going to be 16 times log base 2 of 16. And we learned in the last lecture how this comes out: 2 to the what equals 16? Well, it's 2 times 2 times 2 times 2: 2 times 2 is 4, times 2 is 8, times 2 is 16. So 2 to the 4th is 16, and now we know that 4 is the answer to this. So we have 16 times 4, which will come out
to 64, right? Like so. And that's how we'll handle a couple of these. I mean, they can get more complicated: they could be n squared times log n, or n to the third, or n over 2 times log of n over 3, or anything like that. But just understand that whenever you have a function like this, all it means is that these are both going to be the same number, that they're both the variable n. So they're both going to be the exact same number, and whenever we get a concrete number, we're going to plug that number in. I think that's good.
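The "plug a concrete number into the abstract formula" step above can be sketched in a couple of lines of Python (my own illustration, using the same n = 16 example from the lecture):

```python
import math

def n_log_n(n):
    """Plug a concrete n into the abstract formula n * log2(n)."""
    return n * math.log2(n)

# The worked example from the lecture: n = 16, log2(16) = 4, so 16 * 4
print(n_log_n(16))  # 64.0

# The same formula with the "700 Facebook friends" placeholder filled in
print(round(n_log_n(700), 1))
```

The function stays abstract until you hand it a concrete n, exactly like the algebraic placeholder.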
6. 1-5 N-notation: So now that we have a good understanding of where we're headed in this unit, let's start building that process, start understanding this a little bit more, and go over something called n-notation. N-notation is just a way of looking at how our program runs, represented as a function of n. So that's important right here: it's represented as a function of n. And what this means is that we're going to have an arbitrary number called n. So, for example, we have n right here, and what exactly is n? Usually it's judged by how many pieces of data are being processed by a program. So basically, what a program is, is you have a bunch of inputs right here, so we'll call these inputs. They come in to our program, sort of like a black box, if you will, and then you have a bunch of outputs right here. And that's basically how programs work: you have a bunch of inputs, they're processed in some sort of way, and then they're stored or output in some other way. So what n is, is how many inputs we're getting, or how many times we have to run each process. So, for example, in this situation, how many inputs there are could be n, how many calculations we need could be n, and how many outputs come out could also be n. So it's something that's arbitrary enough that it can be used on any part of the program's execution, but it still allows us to understand how the program will react. So let's just take a step back and look at this in a little bit more of an explicit way. Let's say that in this situation n equals 100. So now we actually have a number here, and we can plug it into, you know, all these different functions over here. Let's say that this is an n algorithm, and we're comparing it to an n squared algorithm.
You can see that no matter what this number is, this number is always going to be much, much larger, and in this situation it is much larger: it's actually 10,000 versus 100. And the reason that we don't usually explicitly plug numbers in is because the amount of data that's being processed is usually never guaranteed to be a certain number. For example, let's say our program inputs the number of Facebook friends you have. It's inputting this data over into our box. But how many Facebook friends do you have? Because I can tell you that my number is going to be different than your number, and your number is going to be different than probably a lot of people taking this class. So if we build an algorithm specifically for 250 friends, it's not going to work for a majority of the cases. So what we do is we build algorithms that can take in an arbitrary amount of data, and in this situation, that's exactly what we call n. So it can take in n amount of data, and then once it processes it and outputs it, it's going to output n amount of data. And with this sort of classification, we're able to compare algorithms without ever knowing how many numbers are going to be put in. And that's just because we're looking at orders of magnitude here. We're not looking at the difference between 10 and, you know, 9 units of time, or however it comes out; we're looking at n versus n squared. So, for example, in this situation, let's say that if we put in n amount of Facebook friends, it's going to take n squared amount of time. And now we understand that maybe if I had 10 friends, this would work fine, and it would only, you know, come out to, like, what, 100? But what happens if I had 1,000,000 friends? Then what would that number come out to be? It would be an absolutely massive number, and the calculation difference would be huge compared to if it was just an n algorithm.
And so let's take a look at this a little bit more in depth. You can see that there are a bunch of different ways that we can represent n right here, and this is a graph of their scaling over time. And what we're trying to figure out is: how does n scale out toward infinity? The reason that's an important question is because, as it scales to infinity, that covers every possibility of what our program could input. So we're not looking at how it scales to 100 or 1,000; we're looking at, if we put 1,000,000 pieces of data or 1,000,000,000 pieces of data into this, how is it going to react? And these graphs tell the story. So down here we have something called constant time, which is this bracket, or this column right over here. And constant time means that whether we put in 1, 10, or 100, no matter what, it's always going to run in the exact same time. There's no n involved, because no matter what, it's going to come out to the exact same time. Then the one above that is actually log base 2 of n. Remember when we talked about the binary system? We don't use log base 10 like most mathematics, because we're not dealing with the tens number system. What we're dealing with here is the twos number system, so we use log base 2. And then over here we have the square root of n, then n, then n log n, then n squared, then 2 to the n, and then n factorial. And you can see that there's a large, large difference between these two right here and these two right here. And you can see that the angle gets larger and larger and larger as this goes up straight toward infinity. And if you really drew both of these out to infinity, this one would look like it was almost straight up in the air, while this one would look like it was, you know, still at a 45-degree angle. And so that's what we're trying to focus on here: when we compare them, how much more different are they going to get over time?
For example, this might look like a very small change, but if we push these numbers up toward infinity, you'll start to see the difference come out. So let's look at a couple of examples. Over here we have the numbers 1, 10, 100, and 1,000 for n. So in this situation, what we have is the number on the left, and then an algorithm that runs in n time, an algorithm that runs in n squared time, an algorithm that runs in n log n time, and then over here is the constant time. So if we put in one piece of data, you'll notice that they're all basically exactly the same. And zero doesn't exactly always make sense; log is a little bit tricky in that sense. It will put out a zero, or undefined sometimes, but it just means that it runs in one unit of time. So all of these are exactly the same. Now we start to scale past one. You see, they all start in the exact same place, and we scale past one to our 10. So you see they're all starting here, and now we're moving to 10, which is right here. You'll start to see the differences come out. n goes to just 10; n squared gets to the very top of this chart at 100, so there's already a major difference here; and then n log n gets up to about 33. You can see it's right about there. Now, if we move up one more order of magnitude to the right, let's say our number is 100. Well, n goes up to 100, and n squared goes up to 10,000, which is, if you took 100 of these charts and stacked them on top of each other, that's how large the number gets. That's where it is up there. So this is one of them, and you'd have to put 100 more of these. So this would go through my ceiling and then up into the sky, probably two or three stories, just compared to the 100 over here, which is, you know, right here. And you can even enlarge that to 1,000.
And what we have then, instead of the 1,000 right here — which, if we scaled this chart up over time, would probably land somewhere along the wall up here — is one million, which is like taking 1,000 of these charts and stacking them on top of each other, up into the sky. That's like a 25-story building, compared to a point maybe right over here, maybe a little farther out in this direction. That's the difference, and this number goes up and up and up, especially once you get to really large inputs like a million, because then this one will be a million while this one will be somewhere around a trillion. And notice that these two, n squared and n log n, look like they're very, very similar, but when you start getting into really large numbers the difference really comes out. You'll see right here that at n = 100 it's 10,000 versus about 664, and at n = 1,000 it's one million versus roughly 9,966. That's a very, very large difference. And that's all we're trying to do with n: we're looking at equations and trying to know, as n goes to infinity, which equation is better. If we have an equation that's n squared versus an equation that's n, we understand that this one is going to be substantially better — more efficient — than this one. I also want to quickly go over what log is, in case you didn't know. For example, we have an exponential-style graph — this is like n squared right here — which starts off near zero, and then over time its rate of change goes up more and more, until it's almost a straight vertical line. So this right here is n squared. A log graph is exactly the opposite of that.
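To make those comparisons concrete, here's a minimal sketch (plain Python; the function name is my own, not from the lecture) that tabulates the growth of these functions for a few input sizes:

```python
import math

def growth_table(ns):
    """For each n, return (n, log2(n), n*log2(n), n**2) so the
    growth rates can be compared side by side."""
    return [(n, math.log2(n), n * math.log2(n), n ** 2) for n in ns]

for n, log_n, n_log_n, n_sq in growth_table([1, 10, 100, 1000]):
    print(f"n={n:>5}  log2(n)={log_n:6.2f}  n*log2(n)={n_log_n:10.1f}  n^2={n_sq:>8}")
```

At n = 100 this prints n squared = 10,000 against n·log₂ n ≈ 664, and at n = 1,000 it prints one million against roughly 9,966 — the same numbers as above.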
So it comes up really fast, and then it bends over and almost becomes a horizontal line over time. We have our axes right here, and the same over here. Basically, the reason for this is that log is the inverse of an exponential. Over time, the slope — a.k.a. how much it goes from here to here versus here to here — gets larger and larger for the exponential, until it's almost going straight up in the air, while this one over here gets smaller and smaller over time. That means that if we push this one out to, say, a million, then every time it moves over by one, this number might grow by a billion or a trillion. In the log case, once we get to a really, really large number, we might be going from 8.1 to 8.005 — something like that, where the change is extremely tiny. That's how log works: just think of it as the inverse of an exponential function. And you'll notice there's actually a neat consequence of that understanding. If we go back to the graph here, notice that we have log of n and constant time, and because log over time almost becomes a straight horizontal line — where we're only moving by, you know, maybe 0.1 per step — log n and constant time are actually usually treated as one and the same, because their rate of change slows down to where the angle barely changes at all. The opposite happens with n squared: as we go farther and farther out, the angle keeps steepening, up to the point where it's almost a 90-degree climb, because it just goes straight into the air. So I just wanted to explain that, in case you didn't know what a log meant: it's the inverse of an exponential. And when we go to infinity, we have to think about one more idea.
And that is the fact that, you'll notice, we weren't talking about the difference between 2n versus n — and that's because the constant, the number out in front, does not matter. Why doesn't it matter? Let's take a look up here. Say we have n squared and 2n squared. As n gets larger and larger, the difference between these two stops being significant in the way we care about. What we're trying to compare is never whether n squared is better than 2n squared — we know both of those are inherently pretty bad. What we're trying to compare is whether this n squared is better than this n. And in that situation, you'll notice that no matter what number we put out in front, this is still going to be substantially larger than this number right here. So it doesn't matter what constant we have out in front: when n goes to infinity, the constant doesn't matter, and we only look at the variable itself. Even if — and I know this is a little confusing — we compared a million times n versus n squared, n squared is still going to be the worst case. This is a little bit confusing to really wrap your head around at first, but it all comes down to infinity, and infinity is a number you can never reach because of how large it is. Once we get up near n going to infinity, the constant won't mean anything, because n squared will still outpace a million times n; it will still outpace a billion or a trillion times n at some point. So you've got to think theoretically here.
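To see why the constant loses, here's a tiny sketch (plain Python; the function name is hypothetical, my own) that searches for the point where n squared overtakes a huge constant times n:

```python
def crossover(constant):
    """Smallest n at which n**2 exceeds constant * n.
    Algebraically, n**2 > c*n exactly when n > c -- but searching for
    it makes the point concrete: the constant only delays the inevitable."""
    n = 1
    while n ** 2 <= constant * n:
        n += 1
    return n

# Even a coefficient of one million only holds n^2 off until n passes a million:
print(crossover(1_000_000))
```

Past that crossover, n squared outpaces a-million-times-n forever, which is exactly the "near infinity" argument above.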
But it all boils down to this simple rule: if there is a number out in front, disregard it. It does not matter in our comparisons. So now let's do an example right here. Let's have some fun and create an example where we can actually see this in a real-world aspect. Suppose every cycle of a program takes 0.01 seconds. In this situation, our n is the number of cycles — when we did that box, the things inside the box are our n. So: every cycle of a program takes 0.01 seconds. How much faster will a program running in n log n run than a program running in n squared, if 1,000 pieces of data come through? This is really where you can see the difference, so let's just run through it. We have 1,000 pieces of data coming through. This is n log n, so we replace our n's with the n we were given right here, which is 1,000 — so in this situation, n equals 1,000, and now we can do the math. It's going to take 1,000 times log of 1,000, which gives us 3,000, times 0.01 seconds, which in turn gives us 30 seconds. Now let's compare that to n squared, which is going to be 1,000 to the second power — one million — times 0.01 seconds, because every clock cycle takes 0.01 seconds; that's what we're multiplying by at the end there. What we get is 10,000 seconds: two hours and 46 minutes. So because that program runs in n squared instead of n log n — and let's go back to our graph right here and take a look; remember, these two looked pretty close to one another, but they're not — it's going to take two hours and 46 minutes, or roughly two hours and 45 minutes longer than this one right here. And let's bring this up to an even larger number. Let's give the n log n an n of 25,000 — so our n equals 25,000 right here.
So n equals 25,000, and now you'll really start to see the difference. In this situation, n log n equals 25,000 times log of 25,000, which comes out somewhere around 109,948; times 0.01 seconds, that's about 1,099 seconds — right around 18 minutes. That looks long — way longer than 30 seconds — but now let's take a look at the n squared. If we go to n squared, it's 25,000 squared, which equals 625 million; times 0.01 seconds, that equals 6,250,000 seconds, which is about 72 days. So you can see that this number starts to really take off right around here, and we're only at 25,000 pieces of data. Think about how many people are on Facebook right now — their algorithms probably have to run against however many people are on Facebook. Imagine if you tried to put the, I think, maybe one billion people on there right now through this program: it would go up into the billions of years; it would never finish. And so this is where you can see the difference between the n's. Basically, all of this boils down to a couple of key facts. First: n is just for comparisons. n is there to compare two programs — to check the run time of two programs. It's not for practical use, meaning we're not going to run an n analysis on a program and check whether it can return something within, you know, 20 seconds or anything like that; we cannot do that with n. The reason is that n is just a standard — a way we can look at a program independent of whether the computer is fast or slow, or whether the internet connection is fast or slow. We can compare it with another program, like n squared or n log n, and start understanding which algorithms are best and which ones wouldn't scale properly.
We also need to understand that it's not bad to have larger run times — sometimes it's inevitable. So don't think the idea of computer science is to never see n squared again. It's not meant for practical purposes; what we're trying to do is theoretically compare different solutions to problems. For example, no comparison sort can do better than n log n. If we go back to this chart right here, you'll see that all comparison sorts fall on this line, so they're somewhere over on this side; they're not in the quicker region of the graph. Comparison sorts are how you would, say, sort a bunch of numbers — so even everyday sorting algorithms still fall into n log n. Also, multiples fall away when comparing — tying back into this lesson, constant multiples don't matter. And fast-growing functions like n squared — and especially true exponentials — can be really, really dangerous over time, and our brains don't really have the intuition for the differences. This example shows it: we wouldn't guess it would be 72 days versus 18 minutes just because we moved from this curve to that curve, but that kind of growth, over time, is really, really strong.
7. 1-6 Big O Notation: So now that we have a good understanding of n, how exactly it works, and how it ties into our analysis of algorithms, we can start adding on to that. We know that n is a way of classifying how fast an algorithm runs: given a certain input size n, how long is it going to take with respect to that number? Is it going to take just n time, so that if n is 1,000 the work is 1,000? Or is it going to take something like n squared time, where when n is 1,000 the work is one million? This is a really important classification. However, programs aren't that easy — they don't just run at one exact rate all the time. A lot of times they have a bound. For example, maybe when loading it runs in n squared, but it executes in n, and it exports in n log n. We have to be able to look at the program as a whole and give it a single classification. For example, this program is always, at worst case, going to take n squared; however, at certain steps it's going to run faster. Because of that, we have this system right here, which classifies the bounds on our n notation. So let's go down it and take a look. These up here are called omicron; you have theta in the middle, then omega, and this is lowercase omega down here. You can see they're Greek letters — in math we often use Greek, because the usual alphabet gets confusing, so we use Greek to symbolically represent things, and in this instance each of these Greek symbols represents a bound. So let's draw, for example, a bound right in the middle here; up here is faster, and below here is slower. Our first notation is little o — and you can see that we just don't like to say "omicron"; "big omicron notation" takes too long, so we just say little o and big O. Little o right here means strictly faster: the program will always be faster than this bound. For example, if our bound was n squared — if we had little o of n squared right here — it means the run time never touches n squared; it's actually always faster than n squared, so it's always right above this line. And we can use this to figure out the bounds. The next one is big O, and big O means faster than or equal to — it can touch this line or be faster. So it's the "at worst" bound, and that's the key thing here: big O of n squared means, at worst, it's n squared. The next one is theta, which means it doesn't go below and doesn't go above either of those; it sits right on this line. So no matter what, it's going to be n squared, right along this line somewhere. The next one down is omega, which is slower than or equal to: it can touch this line, but otherwise it's always slower. So down here we might have n cubed or n to the fourth, while up here we might have n. Omega is the "at best" bound: if we write omega of n squared, we understand the program will run, at best, in n squared — which means it can run worse than that. And beneath that we have little omega, strictly slower, which means it will always run worse than n squared.
It'll never touch n squared, but it'll run worse than n squared. So why is this important, and why do we focus on big O out of all of these? Because big O dictates the worst-case scenario, and we're concerned with the worst case because, given probability and statistics, our program might hit the worst case at some point, and we want to know, without a doubt, what that worst case is. That's why we use big O: to see, at worst, what is it going to be? And — I'm going to bring up the chart again right here — when we have the ability to say what our program is going to be at worst, we can actually start filling this in. If we have, for example, big O of n, we understand that at worst it's going to be n; so it could be n, but it could also be anything faster than n. We can prepare all of our timing diagrams and sketches and anything like that on the assumption that it's never going to go on this side of the line — it's always going to stay on this side. That's really powerful, because now we can start comparing programs' worst cases, and we can start seeing how they scale. For example, if we had an n log n program versus an n program, we could see that when this one scales, it's going to be worse than this one. So let's break this down a little and see why some of these notations are less useful. Say we have a program that runs in little o of n squared, which means it's strictly faster than n squared. That means it could be anywhere from just under n squared all the way up: it's not going to be n squared, but it could be anything faster than n squared. It's kind of like big O, but the thing is, we can never actually reach the bound we're given — with little o of n, we can't actually touch n — so it just makes things a little confusing.
It gives us a bound, but it doesn't make it easy for us to see. We can't immediately look at it and go, "Oh, this is going to run in n time"; we have to look at it and say, "Oh, it's going to run faster than n — but we don't really know where," and that doesn't give us a lot to work with. Then we have our big O notation, which we just went over: we can look at it and say, at worst, it'll run at n log n. Theta would be great if we could always use it — it means equal to (it's sometimes used loosely for the average case, but most of the time it means exactly this bound). The problem is, programs don't always run exactly along one line: maybe at one part it runs in n, at another it runs in n log n, and it goes anywhere in between. Theta is just too specific for us to really use. And then these last ones are pretty much meaningless for planning, because they don't give us a lot of information. Think about it: this one right here, omega of n squared, means it's going to be slower than or equal to n squared — which is like saying our program is going to run in n squared or worse. How are we supposed to plan with that in mind? "Slower" could go to infinity; it could be as slow as possible, and we'd never be able to prepare for that, because we'd have no bound on it. It would be: okay, it could take n squared, it could take n factorial, it could take n to the infinity. It doesn't help us analyze the program, because there's an infinite side to it. And it's exactly the same idea with little omega over here: it opens off into infinity on the bad side, which does not help us. The upper bounds, on the other hand, do help us, because the bad side is covered — let me erase some of this here and make a little more room, and I'll show you why.
So let's say that over here is off to infinity. An upper bound tells us that, at worst, the program lands right here — but it could be better, and we don't care if it is. If it comes back and runs in constant time, that's fine; that's perfectly fine. It just means our program is in great shape and executed faster than we expected, and we can plan for that. What we do, then, is plan for the worst-case scenario. However, if you go on the other side of this line, you cannot plan for infinity. That is why big O is so important: it's the most useful of all these notations, because it gives us a worst-case scenario. So let's take an example of how we might apply this to a particular problem. Usually, what you always do is use the worst-case scenario — at what point in the program is it going to be the most inefficient? Here, for example, let's say we had big O of n squared, big O of n log n, big O of n, and then big O of constant time. These lower ones down here are great, but the thing is, no matter what, we're always going to be bottlenecked by this one right here, because our load time up here is n squared. That means this program is going to run in n squared plus n log n plus n plus — to keep the constant simple — plus 1. You might be thinking, why am I adding these? It's because you can actually add each part of the steps together: you do this part, then this part, then this part, then this part, so you add them all up. Now remember what we were doing beforehand, when we talked about how we look at a program as it goes to infinity. If we look at this sum, which term is going to outpace the others as n goes to infinity? Well, for example, let's set n to a million.
How significant is one million — the n term — going to be, versus several million for n log n, versus one trillion, which is a one with 12 zeros, for n squared? This last one is going to be substantially larger, and the others become a smaller and smaller portion of the total as n goes to infinity, up to the point where their significance goes to essentially zero. You'll end up with something like a googol — a one with 100 zeros — plus maybe eight million over here, and at that point this part over here is completely insignificant. So what we do is eliminate the lower-order terms, and the run time of this program is n squared. Let's cement that with another example. Say we had big O of n squared, big O of n squared, big O of n squared again — and now big O of n to the third. What we have here is n squared plus n squared plus n squared plus n cubed, and we can combine that down to 3n squared plus n cubed. Now remember what we talked about in the last lecture on n notation: the constant right here does not matter, because as we go to infinity it becomes less and less significant, until at a sufficiently large n it's basically zero — so we can cross it out. Now we have n squared plus n cubed, and of course the n cubed term takes off way beyond the other one, so the program ends up being just n cubed. And since those were all big O notations, we know the worst-case scenario: our program will run in n cubed. Now we have this bound of n cubed, the run time goes off toward infinity this way, and we can plan on n cubed at the worst case.
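Here's a small sketch (illustrative names of my own, plain Python) showing the n cubed term swallowing the 3n squared term as n grows:

```python
def total_ops(n):
    """Step count for a program with three n^2 phases and one n^3 phase."""
    return 3 * n ** 2 + n ** 3

def cubic_share(n):
    """Fraction of the total work contributed by the n^3 term alone."""
    return n ** 3 / total_ops(n)

for n in (10, 1000, 1_000_000):
    print(n, round(cubic_share(n), 6))  # the cubic term's share climbs toward 1.0
```

By n = 1,000 the cubic term is already more than 99.7% of the work, which is why we drop the lower-order terms and call the whole thing n cubed.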
And we can plan everything all the way back to constant time, because now all we have is this one bound right here. We understand that if we put in a million pieces of data it might not work, and maybe we can look to bring that down, but it gives us somewhere to start and somewhere to compare against others. So that is big O notation — very important — and over time this will become second nature; you'll just be able to look at it. You won't really see these other ones too often. You'll see theta sometimes, when something says a certain part of a program is exactly equal to a run time, but other than that, you'll probably never see the omegas, and you'll probably never see little o either. So just memorize these notations, understand what they mean, and you should be good to go — you'll start to understand a lot more computer science documents.
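Before moving on, the "at worst" idea can be spot-checked numerically (a sketch, not a proof — the names here are mine): big O of g means the step count stays under some constant multiple of g once n is big enough.

```python
def is_bounded_by(f, g, c, ns):
    """Spot-check that f(n) <= c * g(n) over the sample sizes in ns.
    A real big-O argument requires this for all n past some n0;
    here we just sample a range to build intuition."""
    return all(f(n) <= c * g(n) for n in ns)

steps = lambda n: 3 * n ** 2 + 10 * n   # a made-up program's step count
bound = lambda n: n ** 2                # candidate bound: O(n^2)

print(is_bounded_by(steps, bound, c=4, ns=range(10, 10_001)))  # True
print(is_bounded_by(steps, bound, c=1, ns=range(10, 10_001)))  # False
```

With c = 4 the bound holds from n = 10 onward, so saying the program is O(n squared) is exactly the "it never crosses to the bad side of the line" guarantee.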
8. 1-7 Real World Big-O Example: Now that we've learned a lot of theory, let's take a step back and go over some real-world code analysis. I'm not going to use complex code, or any code you need prior knowledge for — it's all pseudocode, and I'll be explaining it every step of the way as if you've never touched code before, because that's how I want to run this entire course: we're looking at the theory behind everything, not the code. However, applying it to something real-world can help you figure out some of the concepts, and that's why we're doing it. So let's look at our first piece of code; I'll walk through it here. I wrote it in pseudocode that reads right off the tongue — something you can look at and understand what's going on. It says: for each data in data list, print data to screen. Okay, so let's say we input into this a list that is maybe 3, then 2, then 10, and then "buffalo". That's three integers and a string — that's how it would technically be classified — but we're not even going to look at that; these are four pieces of data. What we're saying is that for each data — for every single piece of data in our list — we're going to print that data to the screen. So in our situation here, what is n? Well, n equals 4, because we have four pieces of data. Now let's see what the run time would be. We have "for each piece of data, print it to the screen," so let's go down the list. Our first piece of data is 3; we grab it from our list right here. This is position zero in the array — you might want to think of the positions as 1, 2, 3, 4, but computers usually go with 0, 1, 2, 3; it's just the way they work.
So we grab our first piece of data right here and print it to the screen. We grab our 3, we print it — that's one operation, so our run time right now is one. Then we grab the 2, because we're going down the list: we've touched this one, so now we move on to this one. We grab 2 and print it, so now our screen looks like this, and that's one additional operation. Then we print our next piece of data, which is 10, so now we have 3, 2, 10 — another operation. And then we print buffalo, which gives us 3, 2, 10, buffalo, like so — an additional operation, because all we're doing is grabbing a piece of data and printing it. There's no special thought process going on here; we're just grabbing, printing, grabbing, printing, grabbing, printing. Now if we add all this up, it comes out to one plus one plus one plus one, which equals four, so our run time in this situation is four — and you'll see that our run time equals our n right here. If we think about this theoretically for a second: if we had a million pieces of data in here, at no point in the program would we ever have to touch more than those million pieces of data. So no matter what we do here, our run time is going to be whatever our n is, which means what we have here is a run time of n. In this particular situation we could actually use theta of n, because the run time exactly equals n — there's no wiggle room — but we'll go ahead and say it is, at worst, n, because that's the notation we like to use. So this is an example of an n; this construct is called a for loop, and for loops are typically an example of an n situation. Let's go to a slightly more complex problem.
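As a quick recap before the more complex version, the pseudocode just walked through — for each data in data list, print data to screen — can be sketched in Python with an operation counter (a hypothetical rendering, not the course's own code):

```python
def print_all(data_list):
    """Print every element, counting one operation per element.
    The count always equals len(data_list): that is what O(n) means here."""
    operations = 0
    for item in data_list:
        print(item)
        operations += 1
    return operations

ops = print_all([3, 2, 10, "buffalo"])
print("operations:", ops)  # 4 -- exactly our n
```

One pass, one operation per element: double the data and the count doubles, which is linear time.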
So let me break this one down as well. It says: for each data n in data list — so instead of just calling it "data," each piece of data in the list is going to be called n — check if the data is in the list: for each data w in the data list, if n equals w, print true. Let's break this one down with our same example from before, which was 3, 2, 10, buffalo, like so. And in this situation, our input size is still 4. Now what we're going to do is look at the first element. We grab each piece of data in turn, so we grab the 3 — that's now our n. Then, for each piece of data in this data list — and you can see the names are exactly the same, so this is the same data list for both loops — we check whether this 3 over here is equivalent to anything up here. The only way we can do that is, quote-unquote, brute force: we take this 3 and check it against the first one, then the second one, then the third one, then the fourth one. That's going to be 1, 2, 3, 4 operations. So let's break this down really far. We have our 3 grabbed, and now we're checking: does it equal the first piece of our data, which is going to be the 3 again? Because we didn't tell it to start anywhere special, or do any fancy things like that — we're just checking this list a second time. So we've got our 3 and we're checking: does it equal the first one? Does it equal 3? Yes, it does.
So this will print out a true message, and that took one operation. Now: does our 3 equal 2? It does not, so we don't print anything — that's one operation. Does our 3 equal 10? It does not — one operation. Does our 3 equal buffalo? It does not — one operation. And now we're done with the 3, so let's move on. I'll draw a second copy down here — it's exactly the same list we're checking against, but I just want to make this a little more visual for you. So we're done with 3; let's move on to the 2. Does 2 equal 3? It does not — one operation. Does 2 equal 2? It does — one operation. Does 2 equal 10? It does not — one operation. Does 2 equal buffalo? It does not — one operation. Then let's go down the list one more: does 10 equal 3? It does not — one operation. Does 10 equal 2? It does not — one operation. Does 10 equal 10? It does — one operation. Does 10 equal buffalo? It does not — one operation. And for buffalo — I'm just going to abbreviate it as B here, so I don't have to keep writing out "buffalo": does buffalo equal 3? It does not — one operation. Does buffalo equal 2? It does not — one operation. Does buffalo equal 10? It does not — one operation. Does buffalo equal buffalo? It does — one operation. Now if we add all of these up, we count 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16. So in this situation, our n was 4, but our run time was 16. And what does that come out to be? That comes out to be n squared: if we take 4 and square it, it equals 16. And we can think about this theoretically as well.
If we expand this out to five elements, not only would we have an additional element to process, we'd also have to check each element against one more — every single pass would have five checks, and there would be an entire extra pass of five, which squares it. So in this situation our run time is n squared. The reason for this is that even though we have two for loops, there is what is known as nesting: we've nested one for loop inside of another for loop. This part out here — let's draw this in red — this part out here is n, while this part inside is n as well. So we take the n on the outside, multiply it by the n on the inside, and we get n squared. That will be our final run time for this situation. Like I said, this course isn't heavily focused on writing code; it's still about the theory. But I thought this might help show you what we've been talking about this whole time — how code can actually be analyzed so we can understand these different pieces. And no matter what, we've just learned something very important: single for loops are always going to be n, but nested for loops multiply by however many levels of nesting there are. So if this one right here nested a third loop, this would become an n cubed formula, and you start getting into really large numbers checking all of this stuff. I hope this practical example helped you understand the concept a little better.
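The nested version can be sketched the same way (again a hypothetical Python rendering of the pseudocode), with the counter showing n times n comparisons:

```python
def count_pair_checks(data_list):
    """The nested check from the example: compare every element n against
    every element w, printing True on a match. Total comparisons: n * n."""
    operations = 0
    for n in data_list:
        for w in data_list:
            operations += 1
            if n == w:
                print(True)
    return operations

ops = count_pair_checks([3, 2, 10, "buffalo"])
print("operations:", ops)  # 16, which is 4 squared
```

With four elements the counter lands on 16, matching the hand count above; with five it would be 25, and in general n squared.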
9. 2-1 How is Data Stored: So let's start jumping into one of the next major topics of computer science, and that is the ability to understand and manipulate data structures. Data structures are ways of taking data, organizing it, and being able to access and save it in certain ways that fit our goal. Some are faster than others in certain respects. For example, some of them will take up more space while being really, really fast to access. Some of them won't take up a lot of space at all, but might take a little while to access. Some of them have quicker inserts; some of them have quicker removals. It all comes down to what exactly you want to accomplish in your program, or theory, or whatever section of computer science you're going into. But the first thing we need to do, before we get into those data structures, is understand how exactly data is stored, because that will help us start to understand these data structures, why one might take longer than another, and why one might take up more space than another. So first, like I said, we need to understand data. What data is, is some series of zeros and ones that means something. For example, a 3 might be a piece of data, or an entire text, or maybe an entire document could be a piece of data. What happens in a computer is that these pieces of data are stored on, for example, a hard drive, which is sectioned into different little parts, and those parts are sectioned into even more parts, and inside one of those you have clusters, which come out to look like this right here. So, for example, maybe this represents this entire thing right here, and each one of these cells represents a piece of data. Each one of these will actually have an address in memory, like the address for where your home is.
If you want to send a package, you use an address, and the computer uses the exact same idea. It usually uses hexadecimal, so that's what I'm going to use: maybe this one is 0x00, and so this one would then be 0x01. That zero with the little x next to it means hexadecimal; that's the notation I'm using here. It goes on continually down here: this would be 0x02, 0x03, 0x04, et cetera. Why this is important is because once we have the address, we can actually grab the thing that's at that address. So, for example, say we had two pieces of data, a 3 and a 4, like so, and let's say the 3 was stored at 0x00 and the 4 was stored at 0x01. Now, if we want to grab these pieces of data, what we can do is create a table with the data and their associations. For example, our table can say the 3 is stored here, the 4 is stored here; except it would usually be the inverse: we would say, what is at 0x00? It's some piece of data. What is at 0x01? It's some piece of data. And now, when we want to actually retrieve these pieces of data, we can look up the address, or vice versa: if we want to see what's at an address, we can go to it and find out. So let's say we wanted to retrieve the 3 in our data set. The 3 is located right here and the 4 is located right here, and we want to retrieve the 3. So here's what our program does behind the scenes a lot of the time, unless you program at a very low level, which means very close to the machine, manipulating the machine completely yourself, usually assembly languages and things like that.
Then all we have to do is say, you know, just var x equals 3, and what the machine is doing is saying: okay, we're now associating x with the address 0x00. So any time the user uses or calls x, we're going to grab whatever is at 0x00. Now the user comes in here and creates a new variable, y equals 4, and so now y is associated with 0x01. Here's what the program does: let's say we had the program go, what is x plus y? What does that equal? Well, it goes: okay, x is at 0x00, let's go grab whatever is at 0x00. It scans the hard drive, goes all the way down, finds 0x00, and goes, hey, there's a 3 here. So the 3 gets brought all the way back up to the processor and stored in RAM, which is memory closer to the actual processor; but that's getting really, really nitpicky about how computers work. All it does is grab that and say, this is a 3. And then, since we wanted to add y to it, it does the exact same thing: it goes down its list, finds the sector, the cluster, whatever, goes down and says, oh, it's address 0x01; what's at address 0x01? A 4. It goes all the way back up and returns the 4. Of course, it does this in near-instant time, and then it can do the calculation on the two pieces of data it called, right here, and so now it does the calculation: seven. And so that's the basis of how data is stored in a computer. It's basically a series of these addresses, and there are different ways you can store this sort of data. For example, this is a more direct way of storing it, where we can call the address specifically. Let's delete a couple of things off here; actually, what I'll do is duplicate this slide right here and we'll move down.
So you guys will have the top side; we'll duplicate it, delete off of it, and do a second example here. That was a very direct way of doing it. What if, instead, we said we have this data structure that's going to span from here to here, and its address as a whole is going to be 0x00? Now, if we called 0x00 (let's put some data in here first), there would be no one specific piece of data here; we're calling this whole entity, so that doesn't help us at all. So what we can do, and this is where we start to get into the next part, is we can actually add little mini names to each one of these little segments. These are just 1, 2, 3, 4, 5, and we can use these really basic classifications because we have our main address right here. And now, since we have our main address, what we're going to do is go to the main address, and then use one of its little mini names to find whatever piece of data we're looking for. This is what's called an array. In the next lecture we're really going to take this to the next level, but this is called an array. So what we can do is create, say, var x equals, and then something like this, and now it stores that whole piece of data in this piece of memory. And now, if we said return x of whatever its little mini name is — let's say x of 1 — it's going to go to 0x00 and then go to the mini name of 1, which holds a 2, so it's going to return 2. So that's a different way of looking at it: instead of having direct access for every piece, you can have it segmented in pieces like this. And it turns out this is actually usually the more common approach.
Because if we had a direct address entry for every single piece in memory, we would be wasting a lot of space, and there would be some really, really big tables. You've got to think: a computer can't create memory on the fly; you can't just store things in imaginary land. If it wants to store every single one of these relationships, it has to have a whole other section of memory that's specifically designated to store that mapping. So if you try to store every single piece of data like this, your space starts to get really big. These are all concepts we're going to break down as we go further, but I just wanted to show this at a top-down level so that you can start to understand how memory works. Then, when we get into these data structures, like arrays, like trees, like linked lists, you can start understanding how they might work under the hood, and we can get into a little bit more of that understanding. Once you understand it, you can use it better; you can manipulate it better.
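As a toy illustration of the "main address plus mini name" idea (the names `memory`, `BASE_ADDRESS`, and `array_get` are my own, not from the course), you can model it like this:

```python
# Toy model: a 'hard drive' as a mapping from addresses to data.
memory = {0x00: 1, 0x01: 2, 0x02: 3, 0x03: 4, 0x04: 5}

BASE_ADDRESS = 0x00   # the one main address for the whole block

def array_get(index):
    """Go to the main address, then offset by the index (the 'mini name')."""
    return memory[BASE_ADDRESS + index]

print(array_get(1))  # prints 2, like 'x of 1' in the example above
```

The point is that only one address (the base) has to be remembered; every slot inside the block is found by arithmetic, which is why this style wastes far less table space than giving every piece of data its own entry.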
10. 2-2 Fixed Array Introduction: So let's go over our first major data structure, and that is going to be the array — in this situation, the fixed array. So what is an array? Well, an array, by definition, is an ordered series or arrangement. All that means is that it's a grouping of like elements together, and in this situation the like elements are just pieces of data. So an array in the computer science world is just a collection of data that's all next to each other. And what I mean by that is, we'll go over different data structures later on where you might have a piece of information here on a hard drive, then another piece over here, and another piece over here, all linked together with something called pointers. So if we wanted to represent the thing below, we'd have something like this, but there could be a bunch of different pieces in between, and a bunch of different pieces in between those as well. That's not an array; that's something called a linked list. An array is when they're all together, and you get certain benefits from that and certain disadvantages from that. An array looks like this whenever you represent it visually: you have this segment of memory, and when you create an array, you typically always assign how long you want it to be. You're asking the computer: I want a section of six pieces of data. The computer is going to search through its hard drive, find a place with six free slots, and give you that slot. So, for example, on our hard drive, if we asked for a place of three, it would look through and go, oh, right here we have a section of three. It would give you those three pieces of data, and that would be it. In code, it's typically represented by a declaration with brackets, like so.
So you would say something like x equals these brackets, and this means an empty array. Then, if you want to put information into it, it might be something like x equals 1, 2, 3, and that's going to create an array that looks a little something like this, where you have the first slot holding 1, the second slot holding 2, and the third slot holding 3, like so. So that's how it would be represented in code. Now, how does an array count, and what is it used for? One of the most important parts is that the array starts at zero. It's called zero indexing. That means whenever we want to grab the first element — this element here, or in this situation this element — we have to start at zero; it has to be this zero right here. This is because a lot of computer formulas work with zero. There are actually entire online discussions about why we still use zero-based indexing; there used to be a whole bunch of reasons for it, and some of those reasons have fallen away, but whatever the case, right now we still use zero-based indexing. So an array always starts at zero, and that means the end is something called n minus one. What does n minus one mean? Well, n in this situation is just the length of the array, and that means the last element is going to be at index n minus one; here, that's index six. So this, as you can see, holds seven pieces of information: 1, 2, 3, 4, 5, 6, 7. However, if we want to grab the last piece of information, we don't grab index seven; we grab index six. It's a little confusing at first, and a lot of times even veteran computer scientists get it wrong. If they're trying to grab the last element, they'll put x of 7, and that's going to produce something called a segfault, which is just the computer telling you: hey, you're accessing memory outside of the bounds, memory I haven't given you control over. Weaker languages — languages that don't have safeguards built in — will actually just return whatever garbage is there. That's actually the basis of something called a buffer overflow attack: if you read or write past that right side, you can hack into things that way. That's with really low-level languages like C, which don't have a lot of those safeguards. But whenever we try to do it in something like Java, it's going to raise an error: we're accessing memory that hasn't been granted to us; something else is there that we shouldn't be touching. And that mistake, like I said, is called an off-by-one error. But the important part you need to get out of this lecture, this introduction to fixed arrays, is that, one, an array is just a collection of data in the same place on the hard drive. We'll go over the advantages in the next lecture when we talk about the run times, but this is what it is: a collection of data kept together. The next thing to cover is the fixed part, since we'll talk about other kinds of arrays later. The fixed part just means that the size is never going to change. So in this situation, if we created an array of size four, it's always going to stay size four. There's no code to make it go larger or smaller. Once we fill it up, it's filled up; we either have to overwrite something, delete something, or move something somewhere else to put more information in. But anyway, that's the introduction to fixed arrays, really neat data structures. Let's jump into some of the nitty gritty of how you actually use these things and what their advantages are.
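Here's a quick sketch of zero indexing and the off-by-one error in Python. (Python lists aren't truly fixed arrays, and Python raises an `IndexError` instead of segfaulting, but the indexing rules are the same; the values are my own.)

```python
x = [5, 9, 12, 31, 44, 52, 60]   # seven pieces of information

print(x[0])           # first element lives at index 0, prints 5
print(x[len(x) - 1])  # last element is at n - 1 = 6, prints 60

try:
    x[7]              # the classic off-by-one mistake: index n, not n - 1
except IndexError:
    print("out of bounds!")  # a safeguarded language catches it for you
```

In C, that same `x[7]` would silently read whatever happened to sit past the end of the array, which is exactly the garbage-read behavior described above.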
11. 2-3 Fixed Array Run Times: Let's take a little bit of a deeper look into fixed arrays and look at their run times, using the things we learned in the last unit — the time analysis mumbo jumbo. We're going to take that, actually apply it, and look at the speeds of different functions on the array. Then we'll be able to take those speeds and compare different data structures against each other. So that's what we'll be doing: we're going to see some of the advantages, some of the disadvantages, and just deep dive into what a fixed array is and why you might use it. The first thing we're going to go over is random insertion. But there's something we have to agree on first, whenever we talk about an insertion, a deletion, or even searching. What we mean by an insertion is that we have a set of data that we want to keep in relatively the same order, because all data is important in all of our examples. So if we have a 1, a 2, and a 3, when we say insert, we don't mean to overwrite something — that's called replace. We're not talking about just changing this to a 2 so it's 2, 2, 3, or changing this 1 to a 2, or whatever. It's not about that. An insertion is inserting between elements, or into a place around them. So, for example, an insertion would be: this is a blank space and we want to insert a 3 here. Or an insertion might be that after the 1, we want to insert a 3. The end product we want is something that comes out as 1, 3, 2. That's the insert, and that's what we mean by a random insertion. So what is the time of this random insertion? What will it take if we want to randomly insert something in here?
Let's go ahead and delete a couple of the back elements, so we have something to work with here. There we go. So what is the time it's going to take to insert something into a fixed array? Well, the time ends up being O(n). Exactly why is it O(n)? Let's take a look; this example right here kind of shows it. Whenever we're inserting randomly, we have to assume we don't know where it's going to be inserted. It could be inserting here, or here, or at the end, or at the beginning. But what we do know is that unless it's at the very, very end, we're going to have to move some data: we're going to have to take this data and shift it all down to fit this other piece of data. So, for example, in this example up here, let's extend this outwards and say we want to insert another number in here; we want to insert a 4. What amount of work are we going to have to do to get that 4 in there? To do this, we usually start at the end: we take the 2 and move it over, we take the 3 and move it over, and then we take the 4 and insert it in. So how many operations is that? Well, at maximum, it's going to take the entire array: we're going to have to move every single element over, plus the addition of the new element. And how many elements is that? Well, that's our trusty n; remember that n is the size of all the pieces of data we have. So how many operations are we going to do? Every current piece of data requires one operation: one move, two move, three move, four move. So that's going to be n. At best, it's going to be just a regular insertion at the end: constant, so just one operation. But this is where it gets a little tricky: we have to look at the average. What is the average time it's going to take to keep inserting into an array? And that's where we get our O(n) right here. The average time is something like n over 2, and you can think about it like this: if we insert at random spots, statistically and probability-wise we're going to hit each one of the cells roughly the same number of times. So let's say we insert 12 times; probability-wise, those inserts should spread out over the different places. Inserting up here at the front takes n moves each time, then here it takes n minus one, then n minus two, n minus three, n minus four, n minus five, n minus six, and at the very end it takes constant time. So all of them combined, divided by the count, averages out to about n over 2. And remember, we're talking about run times: we cross out constant factors, and this is just a 1/2 in front, so we cross it out. That means our average time ends up being just n. So a random insertion is O(n) time. So then what about insertion to the front? We were just talking about that, and it's actually the slow part of our insertions: insertion to the front is going to be O(n) as well. Let me get rid of this over here. Insertion to the front is O(n); remember, in the example just previously, we said that inserting at the front is our worst-case scenario: we have to take everything and move it over by one. So that's the O(n). Now, what happens if we insert to the back? So we're going to insert to this spot right here?
Well, this is a very easy insertion. We can just put a 3 here if we want, and we've inserted it. It's constant time: all it takes is a single operation, to take the number and put it in. It doesn't matter how many pieces we have; this could be index 0 and then 10, 20, 30, 40, 50, with a bunch of numbers in between, and it doesn't matter. We just say x of 50 equals 3, and now it equals 3. It's always constant time. So that's why, with an array, insertion at the back is constant time as well. Now then, deletion. A deletion means that we're going to delete an element, but remember, we have to maintain the integrity of the data. Let's erase all the zeros down here to get us back. We have to maintain the integrity — so what does that mean? It means that if we take away the 7, if we remove the 7 from the beginning, what we actually need to do is make sure that the rest of these fall backwards to fill the gap. So let's go ahead and make ourselves a little example up here. Say we have a bunch of numbers, like so: a 1, 2, 3, 4 array that is size four, though the indexing, remember, starts at zero. Let's say we wanted to remove the 1 from our data set. Well, when we remove the 1 from the data set, we cannot just leave an empty space here; that does not maintain the continuity of our data. If we tried to grab the first element — maybe we have an algorithm that always tries to grab the first element like this — it's going to error out; there's nothing there. So whenever we delete from the front, we have to do that exact same thing: shift back, shift back, shift back. And as you can see from what we talked about before, anything that requires us to shift a bunch of elements back is going to be O(n). Deletion at a random spot is O(n) as well.
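To make the shifting concrete, here's a sketch of my own (the helper names aren't from the course) that simulates a fixed array as a Python list, with `None` standing in for an empty slot at the end:

```python
def insert_at(arr, index, value):
    """Insert into a 'fixed' array by shifting elements right.

    The shifting is what makes this O(n) in the worst case
    (inserting at the front moves every element).
    """
    for i in range(len(arr) - 1, index, -1):  # walk from the back
        arr[i] = arr[i - 1]                   # shift each element right
    arr[index] = value

def delete_at(arr, index):
    """Delete by shifting elements left; also O(n) at the front."""
    for i in range(index, len(arr) - 1):
        arr[i] = arr[i + 1]
    arr[-1] = None                            # free the last slot

data = [1, 2, 3, None]      # size-4 fixed array with one free slot
insert_at(data, 1, 4)       # shift 3 and 2 over, drop the 4 in
print(data)                 # prints [1, 4, 2, 3]
delete_at(data, 0)          # remove the front, everything falls back
print(data)                 # prints [4, 2, 3, None]
```

Notice that inserting or deleting at the very back touches only one slot, which is the constant-time case described above.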
So if we did any sort of deletion randomly in here, it would be O(n). Now, an array also has that nice benefit of being able to delete from the back in constant time, because we don't have to shift anything. If we wanted to just delete the 4 from the back, all we have to do is erase it. In code, it might be something like x of 3 equals null, or an empty value, or whatever it takes to make it empty. That's how simple it is; it's a single operation, nothing else has to move, so it's also O(1), constant time. And these have shorter names, which I'll use sometimes: O(n) is called linear time, so these are all linear time, and this one is constant time. Linear, constant — I just want to teach you that vocabulary now because, like I said, I might interchange them in the future, and saying O(n) every single time gets a little tiresome. A lot of people will just say constant, or linear, or exponential. Anyway, let's look at our last one: how long does it take to search for something? First, searching an unsorted list. This right here is unsorted: we have 7, we have 8, 9, 10, and then 1. There's no order here; we don't know the order. So how long is it going to take? Let's say we want to see: is a 9 present? Well, there's no real fast way to do this. What we have to do is the brute force method. That means we start here. Is this equal to 9? Nope, move to the next one. Is this equal to 9? Nope, move to the next one. Is this equal to 9? Yes, it is. But if we were, for example, looking for 90, we'd go nope, nope, nope, all the way down the list until we run out — until we get to a space right here that doesn't have any information — and then we would return no, nothing is equal to 90. So that operation ends up being, like the rest, O(n), because at worst
we're going to have to go down every single one of these and make it all the way to the end to find out whether it actually exists within the array. We could get lucky, of course; sometimes it might be at the very beginning, sometimes two or three elements in, but the worst case is that it's going to be O(n). So that's also linear time. And then the last one, search sorted, we're actually going to save for the next lecture, because it's going to require a little bit of deep diving into why searching a sorted array comes out to log of n. You might be looking at that right now going, what does that mean? That's why we're going to deep dive into exactly why search sorted equals log of n. But I just wanted to go over one or two more things in this lecture first, and then we'll jump into that lecture. The last thing I want to go over is: what does this all tell us? What does it tell us about the advantages and disadvantages of an array? We can see right off the bat that for storing data whose order and continuity matter, maybe an array isn't the best. Like, if we have a program that needs to insert and delete constantly, every insert or delete is going to be O(n), especially if we're inserting at the front or randomly within it — and we actually have other data structures that can make those all constant time. So maybe an array isn't ideal for something like that. And something that isn't in this list, but that we've been taking for granted, is the ability to just check what is at each one of these elements. I can just call x of 3, I can call this element, and it's going to instantly tell me that a 10 is there. That's another very big advantage of arrays: constant access time. It's something we don't always compare against the others, but it's called access time.
How long does it take to get to an element within the array? In this situation, all of these are constant time. That's what an array gives you: the ability to start somewhere and then go to any one of these elements and get to it immediately. So the access time is constant, and that's very important. For example, let's say we have a program that doesn't insert and delete a lot of information — it only does that every once in a while — but most of the time it's just checking different places. We're storing different information here, and then the program goes: okay, I need number three, what's at number three? And number four, what's at number four? It's constantly sending that information back and forth, but most of the time the data stays, quote unquote, static, which means it doesn't change. That's where an array can come in handy: we have that really fast access time, and it'll make everything go a lot quicker. However, like I said, if we're inserting and deleting all the time, maybe we want to look at a different data structure. Anyway, let's jump into that really neat example of the sorted search, and let's look at why it comes out to log n.
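The brute-force search and the constant-time access we just talked about might look like this in code (a sketch of my own, not from the course):

```python
def linear_search(arr, target):
    """Brute-force search of an unsorted array: check every slot in order.

    Worst case we walk the whole array, so this is O(n).
    """
    for i, value in enumerate(arr):
        if value == target:
            return i       # found it: return the index
    return None            # ran off the end: not present

unsorted = [7, 8, 9, 10, 1]
print(linear_search(unsorted, 9))   # prints 2 (the index of the 9)
print(linear_search(unsorted, 90))  # prints None after checking all n slots
print(unsorted[3])                  # constant-time access: prints 10
```

The last line is the access-time advantage: no matter how long the array is, jumping straight to index 3 is a single operation.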
12. 2-4 Binary Search Algorithm (Fixed Array Sorted Search): So let's go over what we talked about in the last lecture, and that is going to be the fixed array sorted search. Why is this different? Why does it get us that interesting run time of log n, which is substantially faster than the O(n) of the unsorted version? Why, when we sort the array, do we have this advantage? It's because we can do something called a binary sort — or, my bad, a binary search algorithm. The reason a sorted array allows us to do a binary search is because of the way it's set up. Imagine the old way of doing it, the brute force method. Let's say we're trying to find out: does 19 exist, like so? To do that with the brute force method, we just go from the very beginning: is it here? Nope. Nope, nope, nope, and we just keep going down the list. And of course it might not be in the list at all, in which case we have to go through the entire list to figure that out. Or it might be halfway through the list, or near the end, or maybe right at the beginning. Either way, at worst it's going to take the whole list — it's going to take O(n) time. However, with a binary search, we can guarantee that it's only going to take log n time, and log n time, like I said, is a whole lot faster. Remember from the previous tutorials, log n goes like this: this axis going to the right here is the number of elements n that we have, and this axis is how long it takes — our run time.
And what that's going to tell us is that the more n we get, the slower this run time grows, so that at some point we get out to, like, 32,000 elements, and it's going to be basically the same run time as 64,000, which is basically the same run time as 128,000. It goes up slower and slower each time. This happens because of the way binary search works, so let's go through an example: how would we search a sorted fixed array? Well, we use this little formula: left plus right over two. We take left plus right, and then we divide that by two. That's our formula, so let's take it and move it up there, because we're going to be referencing it a lot as we do this. Left plus right over two. What is left? Left is the very left portion — it's whatever index we have remaining on the left, if that makes sense. In this situation it's a full array, so our left is 0, and our right is the full right over here, which is index 10. We're going to take those two numbers, add them together, and divide by two, and that's the index we're going to search first. So our first step right here: we take the left, which is 0, then add the right, which is 10, and divide that by two, which of course comes out to 10 divided by 2, which is 5. So for our first step we grab index 5, right here. We take it, bring it back, and take a look: what is x of 5? Well, it returns 12. So now our next decision: is 12 less than or greater than the number we're looking for? Well, 12 is less than the number we're looking for. And remember, this is a sorted array.
So if 12 is less than the number we're looking for, why would we ever search on this side? We already know the array is sorted, which means everything before this is going to be less than 12, and we know our number is greater than 12; we know that 19 is greater than 12. So 19 has to be on this side. Therefore, we just take this entire left side and X it out: we don't even look at it, we're not going to touch it, we're not going to verify what's in there, because we trust that this array is sorted. So we go to the right; we take a look at the right side, and then we do our little formula one more time. We take the left, and the left is now 5 — except we've already searched index 5, so we don't need to look at it again. Since the value there was less than our target, we add one to it (if it had been greater, we would take one away). So we start with 6 now. We take the 6, then add the right side, which is still 10, and divide that by two. In this situation we get 16 divided by 2, which equals 8. So now we look at index 8: what's at index 8? We take a look, and of course x of 8 returns a 20. Well, is 20 greater than or less than our 19? 20 is greater than 19. So our second search was right here, at the 20. We take a look, we realize our target is less than it, so we go left, and that means, hey, we don't need this entire section over here. So now we take the 8, subtract one, and look at whatever is left. The left part of this range is 6 and the right part is 7, so we keep going with the formula: we take the 6 and add it to the 7.
Then we divide that by two, and we get 13 divided by 2. Now, this is important. With 13 divided by 2, we do something called truncation: in this situation the answer is not 6.5, and we don't round it up; we don't round at all. We do what's called truncation, which means we just chop off the fractional part and take whatever the whole number at the front is. So, for example, 7.9 would still be 7, 7.3 would still be 7, 7.98 would still be 7. That's what truncating is, and that's what we do every time we apply this formula. So we truncate 6.5 down to 6 and we check index six. We take a look at index six and we get the number 17. We do our little process again: 19 is greater than 17, so we go right. (Usually at this point you'd have a check: if only one element is left, just look at that element. Is it there? Is it not? Move on. But we can run the whole formula one more time.) So we do seven as the left plus seven as the right, divided by two. That's 14 divided by 2, which equals 7. We check index seven: is x[7], which is 19, equal to 19? It is, so right here we check it off. That's what this formula does: every single time, it divides the problem in half. And that's the important part to understand here: every time, we cut the problem in half. That means, for example, if we started with, oh, I don't know, 128 different items, after the first comparison we're only checking 64 items. Then after that we're checking 32 items, and after that 16, and then 8, and then 4, and then 2, and then 1.
And the thing is, because the range halves every single time, the number of steps grows slower and slower as the data doubles. Each halving is one step, and we'll go over this multiple times throughout this course because it's so important in so many different areas. What we have here is that 128 items takes only about 7 steps. Now, if we double that, if we go one step further, so instead of 128 items we have 256 items, well, 256 takes only 8: we only do one more step to get there. And of course we can go down another one, which is 512, and that's only 9, and so on and so on. So we get that graph we were talking about, the one that goes up and over, where as n grows, the change in runtime gets less and less and less over time. This pattern here is called log n. And if we actually compute log base 2 of 512, it outputs 9. That's the pattern: log is just the way mathematicians represent this halving behavior. And that's why the runtime comes out to log. This little pattern isn't actually too complex; once you plug it into a computer, it's like six lines of code. But this pattern right here does the whole thing: it cuts the range in half every single time, so that we get log n time. And therefore, since we can use this binary search, our worst-case runtime can be brought down to just O(log n).
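The walkthrough above fits in a few lines of code. Here is a minimal sketch; the array values besides the ones named in the lecture (12 at index 5, 17 at index 6, 19 at index 7, 20 at index 8) are made up to fill out a sorted list:

```python
def binary_search(x, target):
    """Search a sorted list, halving the range each step: O(log n) worst case."""
    left, right = 0, len(x) - 1
    while left <= right:
        mid = (left + right) // 2   # integer division truncates: 13 // 2 == 6
        if x[mid] == target:
            return mid              # found it
        elif x[mid] < target:
            left = mid + 1          # target must be to the right of mid
        else:
            right = mid - 1         # target must be to the left of mid
    return -1                       # not in the array

# The array from the walkthrough: index 5 holds 12, index 8 holds 20, etc.
x = [1, 3, 5, 8, 10, 12, 17, 19, 20, 24, 30]
print(binary_search(x, 19))  # → 7, after probing indices 5, 8, 6, then 7
```

Tracing it by hand reproduces the lecture exactly: (0+10)//2 = 5 gives 12, go right; (6+10)//2 = 8 gives 20, go left; (6+7)//2 truncates to 6, giving 17; and finally (7+7)//2 = 7 finds the 19.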
13. 2-5 Circular Arrays: In the last lesson, we discussed arrays and how we might add and remove elements, and we also discussed the timing of these arrays: what every sort of operation costs. If we want to access an element at random, we can do it basically instantly. But if we want to insert at the front, we have to push the rest of the array back, so we actually have to touch every element. And all of these costs assume that we're managing our data effectively: that we're not overwriting things, deleting things, or going out of our array bounds. So these are basically the costs we've established so far, but we can actually improve on one of them, and that's this one right here, the insert at the front, by creating something known as a circular array. Now, in memory, the array is always going to look the same: it's a contiguous block that goes from the very beginning all the way over to the very end, it gets allocated, and then it gets filled. You can't really make it anything else. But what we're theoretically doing is figuring out a way to take this array and treat it as a circular array, like a ring. So, for example, we take this array and bend it around into a circle. And the benefit of that is that if we want to insert something at the front, say we have 1, 2, 3 and this is the front, all we have to do is look one slot back and insert it right there. And how fast was that? Constant time. Which means we can actually improve insertion at the front to constant time by creating this circular array.
And so what we're going to do, and this is a little bit difficult to implement in code, but the theory behind it isn't, is keep what are known as pointers. We're going to have one that points to the top right here, and one that points to the bottom right here, and right now they both point to the exact same spot. This top one is going to be the front of our array: we'll have something that tells us "this is the front," and something that tells us "this is the back." Now, whenever we insert into the array (and we're saying we're not doing random insertions, only insert at the back or insert at the front), we go ahead and insert a number right here. Say we're inserting at the back. Now that we've inserted a number into our array, we move the back pointer, and depending on how you implement it, the back can point at the last filled slot (and you insert one past it) or at the next available space. I'm going to point it at the next available space. So we've filled this slot, and the back has moved over one: now it's pointing over here. Let's go ahead and insert another number. If we wrote this out in a program, we could actually do something like x[back] = 6, and back points us to the exact place we want to fill, so the six goes in there. Then we do exactly what we just did before: we grab the back pointer and move it over to the next slot.
So what we have now is that the back pointer has moved over one, and so far all goes well. But because we have these pointers telling us exactly where everything is, we can now insert at the front without having to push everything forward. And that's because, instead of pushing everything forward, we're going to just move the front pointer back. So let's insert at the front now. For example, we could write x[front] = some number, but you'll notice that in this situation the front pointer is pointing at the first element, which is occupied. So what we can do is go to x[front - 1], which takes us to the slot right next to it, the minus-one spot. (We could have moved the pointer first, like we did with the back, but I wanted to show both ways here.) So we write x[front - 1] = 7, and that gives us x[-1] = 7. But that doesn't work. We cannot index our array at negative one; we'll get a segfault, we'll go outside of our bounds. Negative indices do not exist in this array. So what we do instead is this little algorithm: we take whatever the index is, here negative one, and we add however many slots there are in the array. You can count them: 1, 2, 3, up to 12, so our n equals 12 (and remember, the last index is n minus 1, which is 11). So we take the illegal index and add n to it: in this case negative one plus 12, and that gives us position 11. So when we try to go to negative one, we re-map it, and we actually insert at the back of the physical array.
Like so: we insert this seven into the last slot of the physical array, index 11. So now the front of our array is actually at the back of the physical array, and this gets a little bit confusing: what's the front? What's the back? We have to stop thinking of it as a straight line and think of it as if we curled it around into a circle. To emulate that curl, we keep track of the front and the back pointers, and that allows us to insert at both of those positions just by computing a certain index and inserting there. The front will keep moving one way while the back keeps moving the other way. Of course, you might get to a point where they start overwriting each other, and again, that's not something we're trying to handle here: that's a mistake. If your front and your back ever try to occupy the exact same area, that's a mistake in your program's implementation, and it will need to be addressed in the code somewhere. But this isn't about the code; this is about the theory. So what we've created here, with this circular array, by adding the front and the back pointers (and they can just be variables that are literally numbers, so we could say front equals 11 while back equals 2): if we ever want to insert, we can just write x[back] = something, like we did up there, or x[front - 1] = something. I did one with the minus one and one without because, like I said, I just wanted to show you both ways, but you'd want to keep them consistent: either back minus one and front minus one, or back and front.
It'll just be confusing if one of them works differently than the other. But what I'm trying to say here is that now that we have these variables, they're really easy to keep track of. After an insert, we just say front minus one or back plus one, or whatever our convention is, and we can keep inserting at the back and the front of this array. By making it circular, we can cross out the O(n), and with a circular array our front-insertion time actually becomes O(1).
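The front/back pointer idea above can be sketched as a small class. This is a minimal sketch under the lecture's assumptions (fixed capacity, overflow is a caller error); the class and method names are illustrative, not a standard API:

```python
class CircularArray:
    """Fixed-capacity array with front/back pointers (a ring buffer).

    Inserting at either end is O(1): we only move a pointer with modulo
    arithmetic, never shift the existing elements.
    """
    def __init__(self, capacity):
        self.data = [None] * capacity
        self.n = capacity
        self.front = 0    # index of the first element
        self.back = 0     # index of the next free slot at the back
        self.count = 0

    def push_back(self, value):
        if self.count == self.n:
            raise IndexError("array is full")   # front and back would collide
        self.data[self.back] = value
        self.back = (self.back + 1) % self.n    # wrap past the physical end
        self.count += 1

    def push_front(self, value):
        if self.count == self.n:
            raise IndexError("array is full")
        self.front = (self.front - 1) % self.n  # e.g. (0 - 1) % 12 == 11
        self.data[self.front] = value
        self.count += 1

ring = CircularArray(12)
ring.push_back(5)
ring.push_back(6)       # back pointer is now at index 2
ring.push_front(7)      # lands at physical index 11, not index -1
print(ring.data[11])    # → 7
```

Note how `(front - 1) % n` is exactly the lecture's "add n to the illegal index" trick: Python's modulo maps -1 to 11 for us.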
14. 2-6 Dynamic Arrays: Up to this point, we've kind of ignored an important case, and that's the fact that arrays can fill up. What do we do when an array fills up? What we've been discussing before this point are basically what's known as fixed arrays: arrays that, once they're at capacity, can take nothing else. You'd have to overwrite some data, or delete some data; somehow a compromise would have to be made. However, there is a sort of array that can grow with your program, and this is called a dynamic array. So what we're going to discuss in this lecture is dynamic arrays, how they work, and some of the runtimes associated with them. The first thing we need to understand is what happens when an array gets full. Let's say we have an array right here, and it's full of numbers or pieces of data: a 3 here, a 2, maybe a 1, a 10, a 7, it doesn't matter. Once it is full, what do we do to add to it? If we label the slots down here, 0, 1, 2, 3, 4, the only way we can add anything is to overwrite something. And if we want to add to the end, we hit a big problem, because there is no end to add to. Say, for example, you have a program that appends one element every single time. Once we get to this point and try to add to the end once more, it's going to segfault. We'd be writing something like x[5] = 7, and that is not going to work; it's going to end in a segfault, because there is no index five. There is no slot five to the right over here, so the program would be trying to touch data that it does not have the ability or the permission to touch, which will segfault the program.
It'll error out and never execute. So what do we do in this situation? What we can do is dynamically increase the size of the array. Now, the problem is that once an array is created, its size is fixed. If you remember back to how memory works, we had a bunch of segments, and let's say one of those segments holds this array right here. The problem is that the segment after it was already assigned: after our array there's other data. So you can't just increase the size of the array in place. That doesn't work, because you'd be overwriting some other, maybe system-critical, data over there, and we don't want to be doing that. So what you can do, instead of just adding to the very end, is make a new, bigger array and copy all the information over. That's the common technique that's usually used. And there are a couple of different ways to do this. Let's say that our five-slot array wants to add one element to the end. So let's create ourselves a new array right here, and instead of five big, make it six big: 1, 2, 3, 4, 5, 6 slots. Then we copy over our data. We have the 3, but hold up, the problem here is that we had to create a new array, because, as we just talked about, we cannot just make the old one longer. And the new array is empty, so we actually have to go in, grab every single piece of data, and copy it over. When we do that, it ends up being one transaction per element: one cost to copy the 3, one cost to get the 2, one cost to get the 1, one cost to get the 10, one cost to get the 7, and then the insert itself. So, for example, the insert writes x[5] = 7, and we get that seven inserted.
But you'll notice that instead of the normal insert at the end that we're used to (remember, from before, inserting at the back should be constant time), the problem with the dynamic array is that when we tried to insert at the back of an array that was already filled up, we added a single element, but it took O(n) time. Why did it take n time? Because there are n numbers here: 1, 2, 3, 4, 5, so in this situation our n equals 5. We added one to the end, so our n became 6, and it took six operations to copy everything over. So imagine if this were an array of, let's say, 100 elements: it would take 100 operations to copy everything over. The cost is directly proportional to how many numbers we have here: however many n we have, that is exactly how many operations it's going to take. So instead of our fast O(1), our constant time, this insert actually becomes O(n) time. And this is a problem, because inserting into arrays should be fast; that's one of their advantages. If it degrades to O(n) time, some of our other options start to become a lot more appetizing, because we no longer have instantaneous inserts. So what do we do? There is actually a way around this, a way so that you don't have to accept O(n) and can keep, relatively speaking, the constant time, and it all comes down to some averages. Instead of doing it the naive way, which would be to add a single new slot every single time (if we need six slots, we add one box at the end, like we did right here, and if we need seven, we create an entire new seven-slot array, and so on, where every single resize is O(n)), what we can do instead is double the size of the array.
So if you have an array that's been created, let's go through the doubling chart here. You have an array, it's filled with one element, you insert at the end, and now we double it: it's 2, then 4, then 8, then 16, then 32, then 64, then 128, and so on. It goes up exponentially, like 2^x right up here. That's what we're doing: every time the array gets filled, we double its size. And the reason this helps is that, if you'll notice, the O(n) work happens less and less often. At each one of these doubling points, yes, it has to be O(n): we have to copy over the entire previous data into the new array. But you'll also notice something else here: there's an increasing amount of distance between each one of these numbers. The gaps keep getting longer and longer, which means between doublings we don't have to resize at all. For example, between 65 and 127 we don't have to resize, which means all of those inserts are O(1), because we're just doing regular inserts at that point. That's what the bottom diagram here represents: we take the size-five array and, instead of making it six, we double it to 10. So now we copy over the data, the 3, 2, 1, 10, 7, and add the seven. And now, instead of paying O(n) for the next four inserts, those adds will all be O(1), because we don't have to resize; we can just insert like into a normal array. So this one insert was O(n), and if we look back up at the chart, the insert that triggered the doubling would have been O(n) too.
But then we get a stretch of O(1)s before the next doubling, and you'll see that we get an increasingly long run of O(1)s each time. And why is this so important? Because it means that instead of our entire operation being O(n), we can actually work out an average. So let's take a look at this average. Let's go through a mock run here: insert 1, 2, 3, 4, 5, 6... we're just going to build ourselves a little table of all the inserts we could do up to a certain point, and you'll start to see the pattern; you'll start to understand why doubling the dynamic array is actually really good. We'll go up to 32, because that's one of our doubling points. Okay. When we insert number 1, it's a constant operation: we create our array at size one, so this is constant, an O(1) right here. Now, when we insert number 2, we've run out of space, so we have to double it; now it's two slots large. This was an O(n) operation, which in actuality was only a single copy, because we had only one piece of data to transfer over, but it's technically O(n), so we're going to write O(n) there. Now we have an open space. And if you just think about this (I'm going to draw the picture over here to illustrate what we're doing): we had one spot, we wanted to insert a second element, there were no spots left, so we doubled and copied the data over, and now there are enough spots. Now we want to insert number 3, and the problem is that we need to double again, because there are no spots available.
So we double again and fill that slot in; that's another O(n). But now, for number 4, the fourth spot is open, so we can just do O(1); we don't have to do any doubling, because this spot right here is open. And we keep going, and we notice that insert 5 needs a doubling, because we've run out of spots, so we double it again and copy all of our data over to the new array. At this point our data fills the first four slots plus the five, and that fifth insert cost us O(n). Now, insert 6 can go in immediately: O(1). Insert 7 is exactly the same, O(1), and since our array has eight places, insert 8 is... my bad, still O(1), because we do have an open space left. So now the entirety of the array is filled up once again: 1, 2, 3, 4, 5, 6, 7, 8. The next insert forces us to double once more, so that's another O(n), and I'll keep drawing it, because now you can kind of see the pattern. We've doubled, and we're good up until 16 spots, so it's O(1), O(1), O(1), all the way down until the next doubling point. Then at the next doubling it's O(n), and then we keep going: constant time, constant time, constant time. I'm not going to write the O's anymore, I'm just going to write "constant": constant, constant, constant, constant, all the way up through 32, and then insert 33 would be O(n), and so on. And so why is this important?
Well, you'll notice that at the very beginning, we had a bunch of O(n) operations, but after that there were fewer and fewer O(n)s and increasingly more constant-time operations. For example, right here we have three constants, then one linear time (that's what O(n) is called; we're going to start calling it linear), then a bunch of constant times, then one linear time, then an extraordinary number of constant times, then one linear time. And what happens is that over time, these non-constant, linear times get outnumbered; they get outweighed. And that is because of the fact that we're doubling each time: when we double, the O(n) occurrences happen only logarithmically often. So we're creating a curve like this: you can see that at the beginning we have a bunch of resize calls close together, the first through fourth calls, but the fifth call is way over here, the sixth call is way over there, and the seventh call is completely off the screen. It gets less and less frequent over time. And all of this boils down to an important realization, and that is the fact that if we double each time, we get only about log n resizes across our n inserts. This gets into a little bit of tricky math, so I'm just going to tell it to you, and maybe we'll attach a document that explains it a little more. But what happens is that because the expensive calls grow like log n while the inserts grow like n, the copying work averages out across the inserts.
The average becomes constant time, which means the overall cost per insert, for all intents and purposes, becomes constant time. Now, it's a touch above true constant time, but for comparing it to other things, it's very widely accepted that this insert is constant time (amortized). And so, through all of this: there are a couple of different ways you could implement a dynamic array, but there is a very efficient way to do it. And you'll notice that a lot of computer science is like this: there's a very efficient way to implement these designs and patterns, and there's a very inefficient way. In this situation, doubling each time is the efficient one, in that it comes out to constant time in the end because of this logarithmic resizing. And that's why understanding the math is sort of important: through the math you can understand that when you spread the expensive operations out like this over time, your operation actually averages out to constant time, and that's the most efficient way to do it. So that is dynamic arrays: a little bit of the theory behind them, a little bit of the math behind them, and a good introduction to how dynamic arrays work and how you could create one, which is just creating a new array of double the size and copying all the information over every time it fills. Dynamic arrays are really, really cool, because they remove that limitation of the size having to be constant, and they allow you to have a lot of flexibility in your programming.
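The doubling strategy from this lecture can be sketched as a small class. This is a minimal sketch, not a production implementation; the `copies` counter is added just so we can watch how rarely the O(n) copy actually happens:

```python
class DynamicArray:
    """Grow-by-doubling array: append is amortized O(1)."""
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None] * self.capacity
        self.copies = 0   # total elements moved during resizes (for illustration)

    def append(self, value):
        if self.size == self.capacity:
            self._resize(2 * self.capacity)  # double; don't add just one slot
        self.data[self.size] = value
        self.size += 1

    def _resize(self, new_capacity):
        new_data = [None] * new_capacity
        for i in range(self.size):           # the O(n) copy step
            new_data[i] = self.data[i]
            self.copies += 1
        self.data = new_data
        self.capacity = new_capacity

arr = DynamicArray()
for i in range(32):
    arr.append(i)
# Resizes fired at sizes 1, 2, 4, 8, 16, so total copies = 1+2+4+8+16 = 31
print(arr.copies)  # → 31
```

Thirty-two appends cost only 31 element copies in total, less than one copy per append, which is the "averages out to constant time" argument in concrete numbers. Growing by one slot instead would have cost 1+2+3+...+31 = 496 copies for the same inserts.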
15. 2-7 Array Review: So we've covered a lot about arrays in just these past couple of lectures, and I wanted to take a step back, go over it all again really quickly, and tie it all together in one video. We're going to quickly run through what we've learned in the previous lessons, re-illustrate it, and tie all of these aspects together so that we can really have a good understanding of arrays: how they work, what their runtimes are, et cetera. The first thing we need to do is just remember how arrays work. Arrays are just a chunk of data inside our hard drive or our RAM or some other part of our computer. That data is brought into a segment, and whenever we create an array, we're pointing to that piece of data as a whole and giving it a name. So it has a physical, computer-level address, but we might just be declaring it as a variable: let's say x equals an array of length five. In this situation we're declaring our array under the name x, while in computer terms it's finding the address of a specific place in memory, and once we call that name, it brings us to the array itself. And the array has what I like to call "mini names," the indices, which are just the parts of the array we want to contact. We can contact any of these, and they might even be written on the end right here, so this might be 0, et cetera. How the computer does that is outside the scope of this course, but the computer will do its thing so that we can access each one of these slots instantaneously, just by giving its full name and then its mini name. So in this situation, if we filled this thing with data, for example we have some numbers here, and we wanted to grab a specific place in here...
...all we have to do is give it the name and then the mini name, in this situation 2, and then we get back our result. For example, this would be setting it to 10, and then to get it back, we would just call it, and it would give us 10, because it goes to its main name, uses its mini name, and grabs our piece of data, like so. Now, arrays can also have additional structure. With the plain array, what we're doing is just going from one slot to the next and assigning arbitrarily: slot 2 should be 10, slot 3 should be 44. However, if we want to add a little bit more structure to this, we can start adding to the back or adding to the front. And just by itself, we run into the problem that if we want to add to the front, we have to take all of our data and shift it down by one, because there's no way to just declare "this is the front, this is the end." We can't go to negative one; negative one is a segfault, it doesn't exist in our program. So we'd have to shift everything down, and that's a problem. What we can do to fix that problem is add an additional aspect to the array: we can make it circular. Remember, this was lecture two. To make it circular, we just add a front pointer and a back pointer, and this will emulate a circular array. It emulates the situation where, if you want to add to the front, all you have to do is move one to the left, and to add to the back, move one to the right, or in the circular picture, counterclockwise or clockwise. And we can then move these pointers from one slot to the next as we choose. So once we add an element here...
...we might associate the back pointer with the next slot over. And then we can add to the back in constant time and add to the front in constant time as well. But then we run into the case where our array might run out of space: it might get too full. And if our array ends up getting too full, what we can do is double its size. This makes it a dynamic array. We could grow it any way we like, for example just adding one slot to the very end, but we learned that that doesn't make it efficient; it actually makes it really inefficient. So if we had an array of two, we double the size once we run out of room and make it an array of four, like this, and we keep doubling it up and up, and we'll see that it slowly gets larger and larger to fit our data over time. And this allows us to create an array which, on average, inserts at the end in O(1). We'll have a few instances, these instances right here, where the insert is linear time, and that just comes from the fact that every time we resize, we have to copy all of our contents from the old array into the new array and then add our additional number. So every time we have to expand, we end up copying it all over, which is an O(n) procedure. But if we double it each time, this O(n) procedure gets less and less common, until, as the array grows toward infinity, we basically never have to resize.
almost zero times, relatively speaking. So think of it this way: if we go toward infinity, at some point there's going to be a stretch where we're doing, you know, a billion constant operations and then one O(n) operation. And if you take one and divide it by a billion, you get a number that's so small it might as well be zero. Because of that, it averages out: the n's here, as we go toward infinity, effectively become zero, and we don't look at them anymore. We just look at the term that won out, and that is the constant time. So if we double our arrays every time they run out of space, we can get a constant-time insert at the back, and that makes our program that much more efficient. So that's the rundown of what we've been covering in these last lectures about arrays: how you might make them circular, how you might make them dynamic, and some of the run times associated with each of these arrays. We'll be testing this material in the next lesson, where we're doing a little quiz, just making sure that all of this is cemented in your mind and you understand at least the basics. You don't have to understand this whole thing 100%; a lot of computer scientists don't understand this whole thing 100%. We just want a good enough understanding that if we do forget something, we can go look it up, and we have a good enough basis that we understand what we don't understand, if that makes sense: we understand well enough which part we're having trouble with to go look it up. And if we have a foundation, we can continue our learning. I'm excited that we made it this far, that you've started learning your first data structure in arrays, and I can't wait to see you in the next unit, where we're going to be discussing linked lists.
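Before moving on, here's a rough Python sketch of the doubling trick covered in this lesson (the class and names are my own, for illustration): most appends drop straight into a free slot, and only the occasional resize pays the O(n) copy.

```python
# A rough dynamic-array sketch: when the backing array fills up, allocate
# one twice as large and copy everything over (the O(n) step); every other
# append just drops the value into the next free slot (the O(1) step).
class DynamicArray:
    def __init__(self):
        self.capacity = 2
        self.size = 0
        self.slots = [None] * self.capacity

    def append(self, value):
        if self.size == self.capacity:    # out of room: double
            self.capacity *= 2
            bigger = [None] * self.capacity
            for i in range(self.size):    # the O(n) copy
                bigger[i] = self.slots[i]
            self.slots = bigger
        self.slots[self.size] = value     # the O(1) common case
        self.size += 1
```

Because each doubling makes the next copy twice as rare, the copies average out and an append is amortized constant time.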
16. 2-8 Array Real World Examples: Now that we have a good understanding of arrays and how they theoretically work, let's look at some real-world examples of where we might have interacted with an array without even knowing it. The most common example, or something that's used very commonly, is on our smartphones. A lot of times, both the lists here and the app layout are all array-based. What they do is store them in arrays, and whenever you click on something, it calls the command at whatever array index you have. So, for example, take this grid over here on the right. You can see that we have 1, 2, 3, 4, 5, 6, 7, 8, all the way down, and these are the array indices; each app is stored at one of these positions. So what happens when we click on a button? For example, if we clicked on Facebook right here, which lines up with number 10, it's going to go into the array and look for the command at 10. Once it sees that the command at 10 is "open Facebook," it'll open up Facebook. We'd do the same if we hit 15, which goes to Go Back Up, or 19, which goes to Local, etcetera, etcetera. So when the phone builds this screen, it creates an array with a bunch of references to what it should open up. These arrays also have other information, like what the icon should be, the colors, the name of the app, stuff like that; other information that goes with each one of these. And then whenever we click on one, it goes in and finds the actual piece of software to open up. Over here we also have a list as well. A lot of times these are created with arrays, so you'll have the Account array, which will have a bunch of different things here, and then you'll have the General array, which will have a bunch of different properties here as well. The reason arrays work the best here is because of how instantaneous they are. When we click 18, we don't want any lag between this.
And if we clicked one... well, we're going to learn about another data structure in the next lesson which takes longer the farther down you go. For example, if there were 45 apps, we don't want number 45 to take longer than number one. We want them all to be instant. Same with over here: when we click on something, we want it to take the exact same time. And, like we've been talking about, arrays have that ability. You can call the array at two or four or 10 and get that element instantaneously, and that's why they're so important and so key to use. If you want to look at what some of the calls for arrays might be in different languages, there's actually a really neat Wikipedia article on it. There's this awesome table in there, and you can see all of these languages, and every one of them has an array in some way, shape, or form. Here are the big players in programming right here: you've got your Ruby, your Python, your JavaScript, Java, C, C++, really the big players in the market, and in all of them an array is called with the name of the array and then the index number. So, for example, this could be the app array, and this is index 10, so this would be a call of appArray[10], and it would call everything inside of it. So that's just an interesting way to look at it: even though all of these languages are so different from one another, they all use arrays in some shape or form. Another common use of arrays is with databases. With really, really big databases, sometimes they aren't stored in arrays; they're stored in slightly different structures, like trees, just because once you have a really large array, with, say, billions of entries, it gets a little tedious, and a lot of RAM has to be used if you want all of it instantaneously accessible. So what they do is create structures that are near-instantaneous but take a little bit more time. But that's for later explanations.
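As a toy illustration of that home-screen idea (the app names, slot numbers, and dictionary fields here are invented for the example, not from any real phone OS):

```python
# A toy home screen: each slot of the array holds the app's info, and a
# tap is just one instant index operation, no matter which slot it is.
app_array = [None] * 24
app_array[10] = {"name": "Facebook", "icon": "facebook.png",
                 "action": "open Facebook"}
app_array[15] = {"name": "Go Back Up", "icon": "arrow.png",
                 "action": "go back up"}

def tap(index):
    # app_array[index] is O(1) whether index is 0 or 23
    app = app_array[index]
    return app["action"] if app else None
```

Tapping slot 10 looks up `app_array[10]` and gets "open Facebook" back in constant time, which is exactly the instant feel described above.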
This, though, is what an array might look like if you have a smaller one; even just tables inside of a database will be using arrays. So, for example, if we wanted to sort by cookware, we could grab all of the rows that equal cookware and then call them back. We could make a list of all those numbers, pass it to a function, and it just calls every one of those numbers. So, for example, two would be in our list, and then 13 and 14 would be in our list, and when we called 2, 13, and 14, we could build another list with only cookware; we could filter this list out. Arrays are also commonly used on the web. This is actually one of my websites, and it's using WordPress. The neat part about it is that I've gone through the code, and each part of this is actually using an array in some shape or form. Basically, what it does is call all of the articles as an array and place them on here, just like our app example earlier. Whenever you click on one of these, it calls the functions, the names, and everything that comes with it, and passes that through. So this web page, Facebook, Twitter: they all use arrays in some shape or form to show the data. I also wanted to show some of the back-end code here. This is actually some code from the website, and this stuff is very, very complex; I don't even understand it that well. It takes a little while in the field to be able to just jump in here and start understanding things. But one of the neat parts is that you can see right here: this is a comment form in WordPress. It builds an array; it creates an array of all the information that it collects from the comment, and it's used right here. You can see it: the array is declared right here, and there's actually an array down here and an array up here. There are arrays within arrays within arrays. Like I said, really, really complex.
This code is sort of gibberish unless you really look through it and spend hours and hours analyzing it, but I just wanted to show you that an array is present in all of this as well. So anywhere you look that has to do with coding, arrays are there; they're present and they're being used. Arrays are almost, if not the most, used data structure out there. So understanding them really well, and understanding some of their benefits, like their fast run time, will show you why they're used in the real world so often.
17. 3-1 Nodes: So before we get into linked lists, we need to understand what linked lists are comprised of. What are they made of? What they're made of are called nodes. So what exactly is a node? A node is really just a locational object: an object that references other objects. That might be a little bit of a confusing definition, so let's go ahead and illustrate what a node is. These boxes down here, these two boxes at the bottom, this left one and this right one, these would be our nodes. And up here we have memory. It could be an array, it could be RAM; it could be really any sort of memory up here. A node has a bunch of different properties about it. For example, it usually has a data property; that's the important property. It could have a piece of data like so, and maybe this one has a piece of data like so, but the data usually isn't inside of the node, because the node is just supposed to be a locational object. So what the node actually does is have a spot for the data at the top, and then it has other characteristics at the bottom, which we're going to go over here in a second. This top part, the data part, actually contains the address of a point in memory. So, for example, let's say in our memory here we have these areas. This isn't an array; these are just actual points in memory, additionally over here and over here. So this one is F0, this one is F1, this one is F2, and this one is F3. What the node is actually storing here is the location of the data, so it doesn't have to be in order. For example, this one could store 0xF3, so it has a pointer that basically points it to this piece of data.
Well, this one over here could maybe store 0xF1, and so that means its pointer is right here. And now, whenever we want the data from our node, what we can do is just call the node and whatever its data part is called, and it will do the work for us: it will find the data and then return it for us. The important part about nodes, and the reason they're used so often, is that they have another special characteristic. I'm going to use this fun galaxy pen to show this characteristic, and that is the fact that they have the ability to connect to one another. So, for example, we could say that, well, let's say we have an arrow coming in here and we have an arrow going out here. We could have, maybe right here, something like a "next", so we could have a next part on each one of these nodes. We could have a "previous" part on each of these nodes, and these can really be anything. It could be the front; we could have a "front" on one of these sides that will always point to the front. We could have it always point to the back; we could have it always point to specific locations. But a common technique is to have a previous and a next. So let's go ahead and write that a little clearer: previous, next. Each one of these has a previous and a next. And so what happens is, we tell it where the next should go, and we can point it to another node. And in this situation, we could point to a previous node, and this previous node might point in this direction, backwards, like so. Let me redraw that arrow; that was a bad arrow. So it could point backwards, like so, and then this one might point onwards, which is right there, and it could have an arrow that actually points backwards towards it, like so.
And so what this does is create a linked list. That's what we'll be discussing: we're using these nodes right here to create linked lists, and we're going to go over exactly why linked lists are important, how they help us, and how they compare to other things, like the array that we've already talked about. So in this unit, we're going to be discussing linked lists.
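As a footnote to this lesson, here's about the smallest node you can write in Python. Python handles the actual memory addresses for us, so the data slot just holds a reference to the data, and the next/previous slots start out pointing at nothing:

```python
# A bare-bones node, as described above: a data slot plus next/prev slots
# that can point at other nodes, or at nothing (None plays the role of null).
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None   # arrow forward to another node
        self.prev = None   # arrow backward to another node

# Connecting two nodes to one another:
a = Node(3)
b = Node(10)
a.next = b    # a's "next" arrow points at b
b.prev = a    # b's "previous" arrow points back at a
```

That connect-to-one-another ability is the whole trick; everything in the next lessons is built out of exactly these two arrows.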
18. 3-2 Linked List: So now that we have a good idea of how nodes work, let's investigate a little further and get behind the intuition of linked lists. A linked list is what I described in the last lecture, where you have a bunch of these little nodes. For example, we could have a bunch of them; these things could go on indefinitely. So we could have a node here, a node here, a node here, a node here, a node here, and so on and so on. The shorthand for drawing these is: inside goes the data point, and if it points to some other place, you just draw an arrow. So what we have here is a bunch of different random locations in memory that are all linked together by these pointers. Whereas in an array, we had it all in linear space. In an array, for example, we had all of it in a linear space where it was one after the other. That would be like 3, 4, 10, 12, and then 11 right here, where we had one after the next. In a linked list, these are going to be all over memory. So this 4 might be, you know, down here, while this 10 is over in some other piece of memory, and this 12 is in some other piece of memory, and so on and so on. So what a linked list allows you to do is connect data all over the place together. And what this gives you the ability to do (let's go ahead and get rid of this) is continually add onto the linked list without having to do things like expansion or doubling it. If you think about it, when we were talking about the array, how it had to double every time to be efficient when it got bigger, a lot of times we might run into a case where only half of the array is filled, which means we have all this other room over on the other half that isn't filled, which is inefficient. We're wasting space.
That space is just sitting there, waiting to be allocated, but it isn't used. So a linked list actually allows us to always and constantly have exactly the memory that is needed to maintain our list, no more and no less. If we need to add a number onto this, we can add it to the end. For example, if we wanted to add a 4 to this, we just go to the end and add the 4 to the list right here. What I'm talking about right here are singly linked lists. What that means is that you have a starting point, like so, and this is marked as "start", the starting node. So when you call start, you go here. And the only way you can traverse is going from here to here, here to here, here to here, just following the arrows one after the next, all the way over until you make it to the last one. At this last one, the way you know you're at the end is that, instead of pointing to another node (in another situation it might point to another node here; let's say that this one is, like, a five), if we're at the end of the list, it's going to point to something called null. What null means is that it's nothing. It's pointing to nothing. There's no memory allocated; it doesn't actually have a location in memory. It's pointing to nothing. And once we understand that we're at nothing, then when we insert something, we just replace that nothing with our new node. So we've created this little node here; let me show you how we might do this. We've created a new node down here, called 15, and we want to attach it to this linked list. Right now, the last node is pointing at null, so what we want to do is attach this to it. We're going to go to the start and go down our linked list: it has something, it has something,
it has something, it has something... oh, it's that null. So what we're going to do is move this null out of here (let's select this and move it out of the way), and we're going to grab the new node right here and drop it in. And this new node, by default when we created it, had its next spot set to null. You'll see that this helps, because now, when we actually select this and move it up here, you'll notice that the end again points to null, and the old end's pointer was reallocated to point to the new node. And the cycle continues. If we want to insert another one, we can go ahead and create a 13 here, and when we created it, it automatically points to null. It's not in our list just yet; it's still just being created. Now we want to add it to the list. So we go to the start, we go down, we go down, we go down. Now this one isn't a null, so, okay, there's another one here. We go to 15, and we're like, oh, there's a null here, which means we're now going to make 15 point to the new node instead of null: we're going to delete the null off here, and we're going to move its pointer to point to our new node. Technically, it would work like this, because the node wouldn't actually move up in here; we're just telling it to point to this new asset that we created, this new piece of data. And then, since the node we created already has a null at the end, once traversal gets to it, it will move to the null. And now, if we want to insert something else, it's going to move right to this point, and we can, you know, have a good sort of graph going. Now, then, what happens if we wanted to, for example, delete something? In an array, when we delete something from it,
all we had to do was, for example (let's have an array right here), call the location that we want deleted. So, at index 2, we could just call x[2] = null, or some sort of deletion algorithm comes into play, and all it does is go: oh, x[2] doesn't equal anything anymore. Done, nothing to worry about. If we wanted to delete the first one: easy, done, nothing to worry about. With linked lists, however, you notice that there are, quote-unquote, dependencies. If we just delete (say we wanted to delete 10 right here), what happens? How on earth are we supposed to get to the rest of our list? Four no longer points to anything; four just automatically got defaulted to pointing to null. So it points to null now, because 10 isn't there anymore, which means this looks like the end, and we lose the rest of our list right here. So we actually have to do a slightly more complex operation to delete. Let me just bring this back. To delete, instead, what we can do is this. Let's say we have a node right here, a node right here, and a node in the middle, with single pointers, so it's a singly linked list here. And let's give these all values so that we can call them by something. If we wanted to delete 10 right here, what we need to do, before we delete it, is tell it: hey, 3 is now going to point to 2, like so. So we go to 3 (we're going to delete the next one), and we tell it: hey, you need to point to 2. And then, once this new link is created, this one will actually naturally just fall away. Now, it's bad practice to just leave this here, even though this is actually still technically sound. For example, if this is the start, this graph right here is still technically sound: it's going to go 3 to 2, and then over here it points to null. So all of this still works; it's still technically correct.
However, what we have is wasted space here; we didn't actually delete the node. So what we can actually do, before we redraw the pointer, is save it to some sort of variable. For example, we could say w = this node of 10, and then, once we've redone the pointers, we can call delete on it and actually remove it from memory, so that we don't have this thing sitting here anymore. But what I wanted to show is that all we have to do to delete it is, instead of deleting the asset itself, just redraw the arrow over. And actually, technically, this one would still be pointing there, so all we have to do is redraw the arrow over, like so, and then this one is technically out of the list. There's no way to ever get back to 10; there's no way to accidentally go to 10. It's just sitting there, and it could stay there, and our list would still be sound. However, the efficient way would be to then delete it, so that our list isn't storing extra data that it doesn't need to store. And so we have one more case to cover of how linked lists work, and that is how exactly one might insert into the middle. Let's compare with an array first. If you wanted to insert into an array (say it's an array like so: we have a 2 here, we have a 7 here, we have a 4 here, and we have indices 0, 1, 2, 3), all we have to do to insert right into this part over here is x[2] = 7, and it goes to 7. Really easy to do. If you want to insert anywhere else, all you really have to do is just overwrite something. If we just want to make this equal to zero, we can go ahead; it'll just overwrite this and make it equal to zero. No real additional stuff needed. However, linked lists aren't as easy. With a linked list,
what we have to do is actually reallocate, or re-point, things. Just like in the delete, we had a box here of 3, it points over to 10, and then let's say that this one points over to 25. Now we have a box, a new node that we created, that is maybe 18, and it points to null. So how do we make it go right here? How do we put it right there? Well, we can't just insert. We can't just call the second spot and say "insert"; the pointers will get all messed up. So what we have to do is take this one right here, delete its pointer, and say it now points down to 18, and then we have to take this one and create a new pointer that points up to 10. And there are a couple of little challenges with this, because, if you'll notice something, if we go back to where it started: if we delete this first, then suddenly we've lost the rest of our list. There is no way to get to the rest of our list over here; we can't touch it anymore. So what we usually have to do is save this next one, this one over here, into a variable. There are actually a couple of ways we can do this. The most efficient would probably be to just set the new node's next first. So once we have this spot in memory, once we've traversed our linked list and made it to this node over here, we can say: hey, we want 18 to point to this node. So we say 18's next now points to this node as well. And then what we can do is go back to the beginning here, erase this pointer, and point it down; now we can point it over to our 18, and now we have a good insertion, and we didn't lose the data on the back end.
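The middle-insert just described can be sketched in a few lines of Python, assuming the same bare-bones node from the last lesson (the names here are mine). The key point from above is the order: the new node grabs the back half of the list first, so nothing is ever orphaned.

```python
# A sketch of inserting into the middle of a singly linked list.
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def insert_after(node, new_node):
    new_node.next = node.next   # step 1: new node points at the back half
    node.next = new_node        # step 2: only then re-point the old arrow

# Build 3 -> 10 -> 25, then insert 18 right after the 3:
three, ten, twenty_five = Node(3), Node(10), Node(25)
three.next = ten
ten.next = twenty_five
insert_after(three, Node(18))   # list is now 3 -> 18 -> 10 -> 25
```

If the two steps were done in the other order, `node.next` would be overwritten before anyone saved it, and the 10 and 25 would be lost, which is exactly the pitfall the lecture warns about.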
So linked lists are a little bit tricky, a little bit hard to understand, but they do have some benefits. One of the biggest benefits is the fact that we can keep adding to the end, and it's only going to create data that's, you know, as big as we need it to be. We don't have to keep doubling things, and it allows us to sort of add at whim. We can just add to the end; we don't have to worry about the sorting or the organization of it, and we don't overwrite things. If we're going to insert, everything is its own piece of data that we can grab and move around, instead of having an array where everything is sort of locked into a box, which limits some of the stuff that we can do with it, especially later on. What's also sort of interesting about this approach is that a linked list in itself is, you know, only singly linked, but we could, for example, have bunches of nodes pointing out to different areas, and this allows so many things. This is actually where we get into trees later on, adding bunches of nodes. But this is sort of the basics of it. Maybe linked lists by themselves, just singly linked lists, aren't the most useful thing, but they're going to be building into really useful things later. So that is singly linked lists. In the next part, we're going to move on to doubly linking them, and then sort of understanding the run times behind all of this and how it compares to something like the array.
19. 3-3 Linked List Run Times: Now that we have a good, intuitive understanding of how linked lists work, let's go over their run times. Like I said, in computer science, run times are important because they allow us to analyze these data structures and understand their strengths and weaknesses. So let's take a quick refresher on the run times of our array right here. What we have here are the run times that we came up with for the array, so I'll just put that down here: this is for the array. And I also want to designate that this is for random deletion. If you're deleting in an array at the beginning (like, for example, it's a non-circular array and you delete the beginning right here, so you delete this one and you want everything to shift backwards), that will also be O(n), just like inserting at the front. So this entry could be two different values, depending on how you want to look at it: delete random is always going to be O(1), while delete from the front will be O(n). I just wanted to put that little caveat in there, because these time complexities can get a little bit dependent on exactly what you're looking at, specifically, so they can get a little confusing in that respect. But if you really just think about them intuitively, you can understand them. For example, in the array, if we are deleting out of the front right here, everything has to shift back, so we have to move n number of places at worst: O(n). That's kind of the intuition behind it. So we've got insert random, insert front, delete, search unsorted, and search sorted. Let's go through these operations in a linked list. First, let's create a little example linked list in here. What we need to create is our nodes.
So we have the start of the list, which we can just create as "start" right here, and it points to the beginning of our first node. Let's say our first node has a 3, and this is a singly linked list; we haven't talked about doubly linked yet, so we're just going to go with a singly linked list (doubly linking just improves slight bits of things, slight calculations). Then we'll go to, let's say, a 19 here or something, and then this is the end, so that one points to null over here. Okay, so we have our linked list, like so, and let's go ahead and do our first one. Up here we had insert randomly, so we want to insert something anywhere within the data structure. Let's use the same notation as last time as well. If we want to insert randomly in an array, that took us instant time, because we'd be overwriting things. (If you were trying to account for not overwriting, then maybe you might have some caveats on this, but if we were just going to overwrite something, perfectly fine: you could just say x[5] = some number, whatever, and it will overwrite it.) In a linked list, however, there's a problem: you can only come in from here. You can't just jump to any of these positions. So if I wanted to get to 19 over here, I would have to go from the start, over 1, 2, 3, and then get to it, and I would have to do this for any position within this. What that means is that our insert random actually comes out to O(n) in O notation. That's because our worst-case scenario is inserting randomly at the back, which means going 1, 2, 3, 4 to insert (our n in this situation is four), so it's going to take four steps to get back here, and that goes for any list of any length. If there are 1000 of these and you want to insert near the back, it could take 950 to 1000 steps. So our worst-case scenario for insert random becomes O(n).
And that's just because there is no instantaneous access. There's no way we can jump between these quicker than going from the start and then moving down, like so. Our next one is going to be insert to the front, and we'll do insert to the back as well. With a basic linked list, insert to the front is going to be constant time, while inserting to the back is going to be O(n). Let's go with inserting to the front first. If we wanted to insert a new node (let's say we have a new node here of 4) into the front, how exactly would we do that? Well, it's actually pretty easy. Whatever our front is, whatever our pointer is, all we have to do is say: okay, instead of pointing at this one, point at this one, and then we have to set 4 to point to our old front. And what that does is add it in, in 1, 2, 3 steps, every single time, no matter what you're inserting, so it's always going to be constant time. And there's actually an order in which you should do this insert, because this ties in with losing memory. If I delete this pointer and then make this point to this, like so, I lose this first node here; I'm no longer able to grab it, because we deleted the only pointer that we have to it. So if we want to insert at the front, what we have to do is take our node and first set it to point to the 3, while start is still pointing at the old front. So we're going to say: hey, 4, your next, instead of null, now points to 3; it now points to what start points to. And that's going to allocate it to the beginning.
Then, once it's pointing over here, we're going to delete the old arrow out of here, and we will make start point down to our new node, and that creates our insert. So our insert to the front is always constant time; it's always going to be those three operations: create, assign the new node's next to the start, assign start to the new node. So that's constant time. Then we have insert to the back. Insert to the back, though, is going to take us a little bit of work because, like I said, there's no way to get back here (later on, I'll go over how you can create little pointers to help you out, but right now, with just a basic singly linked list, there's no way we can get here without going through the entire list). So this is always going to be O(n), which means, by default, its worst case is O(n) as well, because, like I said, there's no way we can jump here. We can't go from start and then just move to the end; there's nothing pointing us to the end. We don't know what's here; we don't know how many nodes are in between. All we know is where to start and how to get to the next one. So what we're going to have to do is run from the start. We create ourselves a new node; say we want to add this 4 to the end. The only way we can do that is to go: okay, well, what's at the end? Go down, down, from node to node... oh, this is the null, we're at the end. Great. Now, once we're at the end, we have the end node, and we can then tell it: hey, you're no longer pointing to null; you're going to be set to point to 4, like so. And, of course, by default, when we created this new node, it was pointing to null, so our list works again. Once we get down here, the operation is only one or two creation steps, but to get here is O(n), and that's why our run time will be O(n) as well.
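Both inserts we just walked through can be sketched in Python, again assuming the bare-bones node from earlier (function names are mine, for illustration). The front insert is a fixed handful of steps; the back insert has to walk the whole list to find the null first.

```python
# Front insert vs. back insert on a singly linked list.
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def push_front(start, data):
    node = Node(data)
    node.next = start      # point the new node at the old front first...
    return node            # ...then the new node becomes start: O(1)

def push_back(start, data):
    node = Node(data)      # its next is already None (our null)
    if start is None:
        return node        # empty list: the new node is the start
    current = start
    while current.next is not None:   # walk until we hit the null: O(n)
        current = current.next
    current.next = node    # replace the null with the new node
    return start
```

Note that `push_front` sets the new node's next before touching start, exactly the ordering the lecture insists on, so the old front is never lost.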
And so the next operation we want is delete random. With deletion, it sort of depends on where you're deleting from. Deleting from the front is really easy: all we have to do is say start = start.next. Let me write that out, because this is where the notation starts to matter: start is equal to start.next, where the arrow between nodes is what we call next. What we're saying is that start is now equal to the old start's next, so the old front node just gets removed from the chain and its arrow gets disassociated. So deleting from the front is O(1). But deleting randomly — anywhere in the list, say the fourth element or the ninth element — takes O(n), because we have to traverse the list to get there. And notice that we can't go backwards: to delete a node, you have to grab the one previous to it. For example, to delete the three here, we go to the two and tell it that its next is now equal to its next's next, which gets a little complicated. Written out, if we're standing at the node holding two: two.next = two.next.next. What you're doing is grabbing the next node's next and saying the current node now points to that, bypassing the node in between — and by bypassing it, you remove it.
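The two deletions just described can be sketched as small helper functions — illustrative Python, with function names of my own choosing (the second one is the two.next = two.next.next trick):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def delete_front(start):
    # O(1): the old front drops out of the chain; returns the new start.
    return start.next if start is not None else None

def delete_after(node):
    # O(1) once you are standing at the predecessor:
    # bypass the next node, which removes it from the chain.
    if node is not None and node.next is not None:
        node.next = node.next.next
```

Reaching the predecessor is the expensive part: `delete_after` itself is constant time, but walking to the right node first is what makes random deletion O(n).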
But getting to that predecessor in the first place can take up to O(n), which means deleting randomly is O(n) as well. So delete random is O(n) — but we've figured out that deleting from the front is constant, and you can already see a little improvement there. Compare with the array: delete from the front of an array takes O(n), while delete from the front of a linked list is constant time — it doesn't grow with n, which is great. But you also see differences going the other way: random insertion on the array was instantaneous, while here it's O(n); and insertion at the front of an array was O(n), but insertion at the front of a linked list is O(1). So we're seeing trade-offs, some real differences between these two. Now let's undo that last move — we're no longer deleting that node, so we're back to a nice clean linked list — and fill in the last two rows: search on sorted data and search on unsorted data. Both of these end up being O(n), and I'll explain why in a second. The reason is that it doesn't matter whether the list is sorted or unsorted: you still have to traverse the whole thing to find what you're looking for. Even if we knew the list held 1, 2, 3, 4, 5, 6, 7, 8 in order, it wouldn't matter, because there's no way to jump straight to the two, or to the eight, or anywhere in between. We can't jump; we have to begin at the start and move down the linked list.
Which means that, no matter what, finding the two could take up to O(n) — it could be at the very end of the list. Sorted or not sorted, it's potentially a full traversal of the linked list, so search is linear time: worst case, we go through every single node. So search is a bit slower in this situation. On an array, unsorted search is the same O(n); however, if we sort the array we can get down to O(log n) — we can improve it quite a bit because of what we talked about earlier, where we keep cutting the range in half until we find the number we're looking for. And so there we have it: those are the runtimes of a linked list. As I was saying earlier, there are a couple of similarities with arrays, and a couple of caveats between the two. Random insertion is faster on the array, while insertion at the front is faster on the linked list. Insertion at the back is really slow on the linked list but really fast on an array. Deletion is slower on the linked list in general, but faster at the front. Search is basically the same if the data is unsorted; however, if it's sorted, the array comes out ahead. So even though these two are vastly different — and you might simply prefer one, or find one more intuitive than the other — each has very specific strengths. If we put arrays on one side and linked lists on the other, there are certain situations where we want to use one over the other.
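To see why sorting doesn't help a linked list, here is a hedged Python sketch of the traversal search; the step counter is my own addition, just to make the cost visible:

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def search(start, target):
    # O(n) whether or not the list is sorted: the only way to reach a node
    # is to walk the chain from the start, one link at a time.
    steps = 0
    current = start
    while current is not None:
        steps += 1
        if current.data == target:
            return steps   # found it after visiting this many nodes
        current = current.next
    return -1              # not found
```

Even on a perfectly sorted list 1 through 8, finding the 8 still visits all eight nodes — there is no way to "cut the list in half" the way binary search does on an array.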
If we had a program that inserted at the front a lot, we wouldn't want to use an array for that, because with an array we'd have to keep shifting everything over and creating new arrays to make it work; with a linked list, all we have to do is create a new node and attach it at the front. So for insert-front-heavy programs (pardon my writing here), the linked list has the advantage. On the other hand, if we have data we want to sort and then search repeatedly, an array would be best — so for sorted-data searches, we'd want the array. You can see that each one of these is good in certain situations and bad in others, and that's how understanding this helps you as a computer scientist: you start to understand which one you should use based on the advantages of one over the other.
20. 3-4 Doubly Linked Lists: Now that we have a good understanding of how singly linked lists work, and some of their advantages and disadvantages compared with arrays and other data structures, we can begin to improve on the basic idea of a linked list with something called a doubly linked list. So what is a doubly linked list? Let's go back to our idea of a node. Remember, a node has a piece of data on top and can have a bunch of different pointers underneath, and the main pointer we've been adding is next. A chain of nexts is what gives us a singly linked list — for example, a node of four with a next pointing onward. The thing is, with only next we can go in one direction through the list. To make it doubly linked, we also add a previous pointer to each node, which means the list points backwards as well: for every link there are now two arrows, one going forward and one coming back. Now, how does this help us? Why does being able to walk backwards through the list help? Well, if we look at the runtimes, it isn't actually going to improve any of them on paper — it just does a little more work behind the scenes, because whether we go forwards or backwards, we still have to traverse the list to get anywhere. But let's think of a different scenario. Say our objective is to find the seven in our linked list and delete it. So let's draw a linked list here: say we've got a one, and then a two linked after it.
Then a three, then a 75 and a 25, then down to the seven, and we'll make one more — this keeps linking downward to a 17, which points to null. So think about this: if we need to first find the seven and then delete it, what do we do? First we search. Let's keep a little log of what we do. We search: one, two, three, four... we found it, here's the seven. Okay, great — now how do we delete it? The only way to delete it is to touch the previous node, and we don't know where the previous node is; once we're at the seven, there's no way to point backwards. So to delete it, we actually have to redo the search: come in from the start again, walk down to the previous node, and tell it to point to its next's next. So the log looks like this: the search to find it is O(n), and then the deletion is also O(n). If you want to be really technical, the second pass is O(n − 1), since you stop at the node just before it, but it doesn't really matter — what this boils down to is O(n) + O(n), which is O(2n). And remember what we said at the beginning: we would normally just cross out the constant and call it O(n). However, in the real world, 2n versus n might actually be a really big difference — something we want to keep track of, understand, and improve on. If we can cut the time in half, we want to do that. So in this example, you'll see that first it searches.
And then it has to make a whole second pass just to delete, which gets tedious. So what we can do instead is make this a doubly linked list. Now every node points forward to the next one and — let me draw these arrows in a different color to make it clearer — backward to the previous one. So there's an arrow pointing back from each node to the one before it. The last node still points forward to null, and the front node's previous points to null as well, so now we have the whole thing nicely wired up. Let's redo the operation: find the seven and delete it. We start at the front and go over, over, over, over... we found the seven. Now, to delete it, we can use this bit of pseudo-code: seven.previous.next = seven.next. This might look a little complex — it's getting into the actual coding aspect — but all we're saying is: from the seven, select its previous, which is the 25, and set that node's next equal to whatever the seven's next is, which turns out to be the 17. Once we do that, the 25 connects straight down to the 17, short-circuiting around the seven: the old link from the 25 to the seven disappears.
All we have to do now is one more step: the 17's previous still points to the wrong place, so from the 25 we reset it. We say 25.next.previous = 25. (Again, this is all pseudo-code — you can't paste it anywhere as-is; the lines on the left are just comments describing what we're doing.) Since 25.next is now the 17, we're selecting the previous field of the 17 and saying it now equals the 25. And with that, the seven is fully removed, and we have our final product. So what did this do for us overall? It allowed our combined search-and-delete to become a single O(n) pass instead of O(2n), which saves us a bit of time. Like I said, it doesn't improve any of the big-O runtimes in the table, but it does save real time, and it gives us a bit more intuition to work with: we're no longer stuck going in one direction, starting over whenever we need to go backwards. We can move intuitively forwards and backwards through the list. It still takes O(n) on average to get anywhere, but we can improve the small things, like going from 2n down to n, and it makes our coding a little easier because we don't have to keep accounting for starting over and walking through again. So doubly linked lists are a nice addition to your linked list, and they generally make your life a lot easier.
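The two pseudo-code lines above generalize to any node. A minimal Python sketch (names my own), assuming we already hold a reference to the node being deleted:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None
        self.prev = None

def unlink(node):
    # Remove `node` from a doubly linked list in O(1), given a reference to it:
    # the prev pointer means no second search for the predecessor is needed.
    if node.prev is not None:
        node.prev.next = node.next   # e.g. the 25 now points at the 17
    if node.next is not None:
        node.next.prev = node.prev   # e.g. the 17 now points back at the 25
```

The unlinking itself is constant time; the O(n) cost of "find seven and delete it" is now just the single search pass.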
21. 3-5 Tail Pointer: So now let's cover one more little thing about linked lists, something that's actually going to improve one of our runtimes: the idea of a tail pointer. A tail pointer does exactly what it sounds like — it points to the very end of our linked list. For example, let's draw a really short one: three points to two points to 15, which points to null, and we'll make it a doubly linked list, so over here the front's previous points to null as well. Looking good. So far we've been using a start pointer — there's always one that tells us where the beginning of the linked list is, and that's been our default way in, because without that pointer it would be hard to work with the list at all; we'd never be able to find it. So we have a start pointer, but what we can also do is create ourselves a tail pointer. Let me draw it right here, maybe in red. What a tail pointer does is let you always keep track of which node is the last one, and the thing is, maintaining it takes basically no energy: every time we add a node at the end, all we do is move the tail pointer over to the new node. So it adds one extra operation to inserting at the end — but what it allows us to do is insert at the back in O(1). So we can cross out that O(n): with a tail pointer, inserting at the back becomes constant time. And why does it allow us to do that?
Well, let's just run through an example of inserting. Say we have a new node of 20 and we want to insert it at the back. Before, what we had to do was go to the start and move through our linked list until we got to the back, and that was the problem — it's O(n); no matter what, it takes however many nodes there are in the list to get to the back, which is linear time. Now we make it constant: instead of walking the whole list, we go straight to wherever the tail pointer points us, which is one operation. Then we set the old last node's next to our 20 — removing the null it used to point to — and the new node by default points to null. So all it takes is one operation to get there, and maybe five or so operations to wire in the new node, but that's constant: it doesn't matter how big the list is anymore. Once we've done this, there's one little addition: we redraw the tail pointer to our new back. And now, if you want to insert at the back again, it's exactly the same — roughly five or so operations. It's always going to be five, five, five, no matter how big n gets; even if n went to infinity, we could still insert at the back in five operations. That's theoretical, of course, but if it's always just five operations, that means it's constant time. So what does a tail pointer do? It's just a little extra piece of data that points to the very end of our list, and it lets us get to the end in one operation instead of O(n) operations.
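A tail-pointer append might look like this in Python — a sketch under my own naming, not code from the course:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedListWithTail:
    def __init__(self):
        self.start = None
        self.tail = None  # extra pointer that always tracks the last node

    def insert_back(self, data):
        # O(1): jump straight to the tail instead of walking the list.
        node = Node(data)
        if self.tail is None:        # empty list: node is both start and tail
            self.start = node
        else:
            self.tail.next = node    # old last node now points at the new one
        self.tail = node             # move the tail pointer to the new back
```

The only maintenance cost is the last line — one extra assignment per insert — which is what makes keeping the tail pointer essentially free.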
And because of that, insert at the back goes from linear time to O(1). We can think of delete from the back the same way — I didn't write it down here, but let's add it in. Without a tail pointer, if we wanted to delete from the back, we would have to go down from the start, get to the back, delete it, and then probably run through the list again to rewire whatever was next to the back node. So without a tail pointer it's O(n); with a tail pointer on our doubly linked list, we can cross that out and it actually becomes constant, because the tail's previous pointer takes us straight to the new last node. So it doesn't make really, really large changes — it improves just these two operations, allowing us to perform better on the back of the list. But because we can now insert and delete at the back in O(1), that puts us at the exact same pace as an array in that regard. So that's the tail pointer: really, really simple — just something that points to the end, really simple to implement once you get into the code — but it can save you a lot of time, and overall it makes your program run a whole lot better.
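Putting the previous pointers and the tail pointer together makes delete-from-back constant time as well. One possible Python sketch (my own structure; note the prev pointers are what make this work — a singly linked list with only a tail pointer would still need a full walk to find the new last node):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None
        self.prev = None

class DoublyLinkedListWithTail:
    def __init__(self):
        self.start = None
        self.tail = None

    def insert_back(self, data):
        # O(1), exactly as in the previous lecture, plus prev wiring.
        node = Node(data)
        if self.tail is None:
            self.start = node
        else:
            self.tail.next = node
            node.prev = self.tail
        self.tail = node

    def delete_back(self):
        # O(1): tail.prev hands us the new last node directly,
        # so no traversal is needed to rewire the list.
        if self.tail is None:
            return None
        data = self.tail.data
        self.tail = self.tail.prev
        if self.tail is None:
            self.start = None        # list is now empty
        else:
            self.tail.next = None    # new last node points to null again
        return data
```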
22. 3-6 Linked List Review: So let's take a step back and tie together everything we just learned, for a nice refresher on how it all interacts. What we have is something called a linked list, and a linked list is a series of nodes — basically pieces that each hold a certain piece of data and also point to other nodes. A node might hold a three, with a next pointer and maybe a previous pointer underneath. We can take those nodes and chain them into a list, and this list lets us link a whole bunch of data together. The good part is that we can insert dynamically: we don't have to allocate a certain size up front, it expands indefinitely, and there's no doubling-and-copying going on — we can keep linking things on as we come across them, and it just dynamically keeps getting larger and larger. The disadvantage, right off the bat, is that you always have to come in through the start, and to get through the list you have to go from one node to the next to the next. There are no direct routes to anything past the start, which means a lot of operations can take O(n) time. We can improve this by making it a doubly linked list: if we're at some node and need to do something to the previous one, we don't have to go back to the start and find it again — we can just go back and forth on the list as we see fit. And we can improve it further by adding a tail pointer at the end, which lets us go directly to the very end and apply operations there really, really quickly.
And this improves a couple of our runtimes. When we look at the table, a linked list has some runtimes that are different from arrays, some that are the same, and some that are better. For example, inserting at the front is, right off the bat, straight-up constant time — you don't have to make the structure circular or do that sort of advanced logic; all you have to do is create the node, insert it at the front, and move the start pointer over to the new one. And likewise, if we keep a tail pointer at the back, we can get deletes and inserts at the back down to constant time as well. The trade-off is search: it doesn't matter if the data is sorted, we're always going to have to walk the list, which means the larger the data set, the longer it's going to take us to search it. If finding data is a priority for your program, you might think a linked list isn't the best choice — maybe an array, or some of the other structures we'll go over in the future, would be better. But that, in a nutshell, is linked lists. We have singly linked lists, we have doubly linked lists, we have tail pointers, and they all come together to form a linked data structure that lets us dynamically add and delete data at whim. It's got some disadvantages and some advantages, but it is one of the key data structures. So that's the end of this linked list section. I'm really excited to jump into the next unit, where we're going to build off of these and create a new set of data structures called stacks and queues. I'll see you over there.
23. 3-7 Linked List Real World Examples: Now that we have a good understanding of what linked lists are, let's go ahead and look at some real-world examples of linked lists. This section will be a little shorter than most of the others, because two of the major uses of linked lists — the tree and the queue — get covered in detail elsewhere: the next section is on stacks and queues, so you'll see the queue there, and later on we'll cover trees. Those are two great uses of linked lists, but another use is data storage. Whenever your hard drive writes data, it looks for free memory, and it links the free regions together. In a perfect world you'd have memory allocated in one contiguous spot for whatever program you're storing, but say the drive is divided into one-gigabyte blocks, three of them over here already filled in, and you only have one gigabyte free at the front, and maybe a gigabyte free here, and here, and here. Now a 3-gigabyte download comes in. You don't want the program to say, "Hey, we can't install this — we don't have enough space in a series." So what the drive does is link those free areas together: it installs one gigabyte of the files here, then points a pointer to the next free area, puts one there, and another over here. It does this in a slightly more efficient manner than my sketch, but that's the basic idea: it links the program together across multiple of these regions, and then it scans through all of them and brings the data back quickly enough that it displays almost in real time.
But what it's actually doing is maintaining a linked list for that data. And the neat thing is, when you defragment your computer, you're removing some of this: you're putting the pieces right next to each other, reducing how much the hard drive has to seek — seeking being when the head physically moves to look for data. Parts of a file might be on opposite sides of the hard drive; defragging puts them next to each other, so there's less seek time, which reduces load times. Another real-world example is the web browser. If you think about it, your history is a linked list: the back and forward arrows. Right now I'm at the end of the list, and it is not a circular list — I cannot click forward and wrap around to the very beginning. What I can do is traverse the list: I can go backwards through my last five pages (probably all misspellings of "Google search"), back to three, back to two, back to one. So my history is a linked list of sorts: I can go forwards and backwards and traverse it, but only one page at a time — unless, of course, I open the history tab, which would be more like an array in that sense. That is probably the most real linked list example that everyone has most likely used: the history in our web browsers. But that's just a couple of examples of linked lists. Let's jump into some of the implementations of them from a computer science perspective in these future lectures.
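The back/forward behavior can be modeled as a doubly linked list of pages. A speculative Python sketch (class names my own; the detail that visiting a new page drops any "forward" pages is my own modeling assumption about how browsers behave):

```python
class Page:
    def __init__(self, url):
        self.url = url
        self.next = None  # "forward" link
        self.prev = None  # "back" link

class History:
    def __init__(self, url):
        self.current = Page(url)

    def visit(self, url):
        # Link the new page after the current one; forward pages are dropped.
        page = Page(url)
        page.prev = self.current
        self.current.next = page
        self.current = page

    def back(self):
        # Traverse one step backward; stop at the oldest page (not circular).
        if self.current.prev is not None:
            self.current = self.current.prev
        return self.current.url

    def forward(self):
        # Traverse one step forward; stop at the newest page.
        if self.current.next is not None:
            self.current = self.current.next
        return self.current.url
```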
24. 4-1 Stacks: Now that we have a good understanding of some basic data structures — our array and our linked list — we can begin building on those to create more complex but more useful data structures. And the first thing we're going to talk about is a very important one in computer science: the stack. So what exactly is a stack? We can think about it in an intuitive way: a stack is basically the way trays in a lunchroom or cafeteria work. Say you have a stack of trays, a bunch of them all stacked up on top of each other. Now, if you want one of these trays, you don't go to the bottom to grab it — you'd have to lift everything off, grab the bottom one, and then put it all back. That's not what we do. We typically take the first one off the top. The next person grabs the next one off the top, and now those two are removed from our stack. And if someone comes along with some clean trays, they don't put them at the bottom — they put them right back on the top. So some new clean trays, blue ones this time, get added to the top. Now if someone wants a tray, they grab a blue one off the top; someone else grabs the next blue one; and a third person grabs a black one. And at the end of this, we just keep going up and down, up and down, through this structure.
Every time we run out of clean trays, new ones get put on the top, and every time we want one, we take it off the top — we only ever operate on one side of the stack. And that's exactly how stacks work in computer science. A stack is a data structure holding a bunch of information arranged just like those trays: we could have a three, a two, the word "cat" — really anything we want inside the stack. The catch is that if we want to grab from the stack, we cannot access anywhere in the middle, only the top. Grabbing something is called a pop. We don't put anything inside the parentheses of pop(), because we don't get to choose where from: we just call the stack and say, "Hey, pop an element," and we get the top element. So if a five is on top, popping delivers the five, and it gets removed from the stack. If we call a second pop, the second element comes off and is handed to us, and now our stack has only the two and the three left. To add information, we do something called a push, and with push we do specify in the parentheses what we're adding to the stack. Let's push 107: push(107) puts 107 on the very top of the stack. And that's generally how a stack works. We call this last in, first out — LIFO. (We're also going to be dealing with another one called FIFO; that's a queue, and it'll be next in this series.) Now, I'll admit LIFO is a little bit ambiguous — I sometimes confuse it with a queue, because you might read "last in" as the one that's been there the longest and think that one comes out first. But that's not how it works: it's the last one to come in that's the first one out. You can rethink it as "most recent in, first out" — the most recently pushed element is the first one popped. If that makes it easier, go with that; just remember that LIFO is the standard term you'll hear for a stack. So if we push 107, pop 107, push 107, pop 107, and keep doing that infinitely, the two elements underneath will never get touched. They'll be starved — never brought out of the stack. And in a lot of situations this behavior is actually exactly what we want in computer science. For example, a stack is a great way to solve a maze: every time you take a turn, you push that turn onto the stack, and if you get caught at a dead end, you start popping turns off. Say there was a second direction here, and you went down here, then here, then here — we have a little mock stack: turn one, turn two, turn three, turn four... oh, we ran into a dead end.
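As a quick illustration of push, pop, and the LIFO "starvation" idea above, here is Python using its built-in list as a stack (a convenience choice on my part, not something from the lecture):

```python
stack = []

stack.append(3)      # push(3)
stack.append(2)      # push(2)
stack.append(5)      # push(5): the 5 is now on top

print(stack.pop())   # pop() -> 5, the most recent one in (LIFO)

stack.append(107)    # push(107): goes straight onto the top
print(stack.pop())   # pop() -> 107
print(stack.pop())   # pop() -> 2; the 3 at the bottom is still waiting
```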
So what you can do is pop turns off the stack to go backwards in time, then start in a different direction and push on your new direction, this new blue path. That's one of the little applications; it's great for graph theory and some of the more advanced concepts in computer science. But right now, all we have to understand is how a stack works, and also how it's implemented, because there's no single exact way to do it. There are a couple of different ways to create a stack, using structures we've already talked about: we can build a stack on top of an array, or on top of a linked list, and it's worth thinking about which one would be more efficient. So say we have an array here, with indices 0, 1, 2, 3, 4. We need to keep track of the top of the stack the whole time, so we keep a cursor pointing at it. Say we have a 2, a 10, and a 9, and the top is right here. Every time we call pop, it grabs whatever element the cursor points to. For example, right now the cursor, call it C, is set to 3. When we pop, we grab the element at index 3 of the array (call the array X), so we get X[3], which delivers us the 9. Then the cursor gets decremented: our new C equals C minus 1, which is 2, and we can keep doing that. We pop the information off, grab it, and move the cursor back one. That's how it might work on an array. Now you might be thinking: this has a limitation. Once we get to the very end of the array, if we try to push any more information on, it's going to break. We'll get a segmentation fault, and that can make it a little bit dangerous. But if you know you're only working with 10 or so elements, this might be a quicker way to implement your stack. The other way we can do it is with a linked list, and this one's actually pretty intuitive as well. All we're going to do is, every time we get a new piece of information, put it at the front of the list, like we've been doing with linked lists. Let's put some numbers in here just to make this less confusing, and say we want to add a 4. All we have to do is link it up, and then of course change our start pointer so it points to our new node. Then every time we want to pop off the beginning, we just grab the information from the front node and move the start pointer back up one. You can see this might actually be a little bit quicker, in the sense that you don't have to set up an array ahead of time. And especially because we never have to grab anything from the back or search through the list, these front operations are always constant time, and the list can be expanded indefinitely. You can keep adding nodes as long as you want; you don't have to define how large it's going to be. So this is usually the typical way stacks are implemented: some sort of linked list. However, they can be implemented on arrays too, and it isn't completely uncommon; I just wanted to show you that as well.
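To make that concrete, here's a minimal sketch of a stack built on a singly linked list, pushing and popping at the front so both operations stay constant time. The class and method names here are my own illustrative choices, not code from the lecture:

```python
class Node:
    """One element of the singly linked list."""
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node


class LinkedStack:
    """A stack backed by a linked list: push and pop at the front, both O(1)."""
    def __init__(self):
        self.start = None  # the "start pointer" from the lecture

    def push(self, value):
        # The new node points at the old front, then becomes the new front.
        self.start = Node(value, self.start)

    def pop(self):
        if self.start is None:
            raise IndexError("pop from empty stack")
        value = self.start.value
        self.start = self.start.next  # move the start pointer back up one
        return value


s = LinkedStack()
for v in (2, 3, 5):
    s.push(v)
print(s.pop())  # 5 — the most recent in is the first out (LIFO)
print(s.pop())  # 3
```

Because nodes are only ever added or removed at the start pointer, there's no shifting and no fixed capacity, which is exactly why the linked-list version avoids the array's size limitation.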
25. 4-2 Stack Example: Now that we have some of the intuition behind a stack, I want to cement the idea by going through a couple of examples, some ways we can look at how a program will input the data, output the data, and how it all works together. So right here we have a basic example. It says: give the final output and the remaining elements of the stack. We're going to execute these commands and then see what our output becomes and what is still left on the stack at the very end. Let's do this step by step. We start with an empty stack, absolutely nothing in it, and I'm going to put our output down here on the left, because it might get long. The output is just going to have the popped numbers concatenated onto one another; I'll explain that in a second. The output goes along the bottom, and our stack is going to keep going across the top. Step one says push 4, so we add a 4 to the stack. The next step is push 2. Nothing has been popped, so we have a stack of 4, and we push a 2 on top of it. The next step says push 15, so now we have a 4, a 2, and a 15. And then we finally get to our first output: the next instruction says pop. Remember, there's nothing inside the parentheses, because pop just grabs whatever element is at the top. It doesn't take a specified element, and you can't grab anything beneath the top.
So all we say is pop, and we pop that element off. All we have left is the 4 and the 2, and the 15 travels down to our output, so now we have 15 in the output. That was this step. The next step is push 27: now we have 4, 2, and 27 on top. Then we do two pops. The first pop takes the 27, which moves down into our output, so now we have 15 27 (you could separate these with commas; I'm just putting them all together). Then we pop again, so now just the 4 is left on the stack, and the 2 goes down to the output. That was our second pop, and you can see we're working through the instructions one step at a time, the pushes and the pops, and the stack is dynamic: it's growing, it's shrinking, and it's only being accessed from the top the entire time. Let's do our last couple of steps. We push a 3, so we have 4 and then 3. Then we push another 4, so it's 4, 3, 4. And now we have three pop statements. The first pop takes that top 4 down to the output; the second pops the 3, so all we have left is the bottom 4; and finally we pop the last element, leaving an empty stack at the very end, with the 4 going to the output. And you'll notice something: the 4 we added at the very beginning didn't get removed until the very end. There was a whole bunch of activity above it, but it never moved, because it's a stack, and like I said, the bottom can be starved and never touched again. That's one of the little caveats that comes with a stack. Let me draw these arrows to finish off the diagram (it's getting a little cluttered in here), but that's how it all goes: one step at a time, we slowly move from one instruction to the next, we empty the stack out, and our final output is this right here: 15 27 2 4 3 4, or with commas, 15, 27, 2, 4, 3, 4. So that's the final output of our stack, and our final stack ended up completely empty. Now let's do another example, and I'm going to make this one a little cleaner. I'm not going to try to draw all those arrows again; that was good for the first one, but let's do it in a way that looks a little better. Okay, I've put step numbers here. Step one is a push of 10. We had an empty stack to start with, and our output down here, so let's push the 10. Step two says pop, so we take that 10 and pop it right off. Now we're back to an empty stack, the 10 goes down to the output, and we put the step it happened at above it: step two. And now let's go to step three.
Step three is a push of 2, so we push the 2 on there. Step four says push a 5, and step five says push a C (we're mixing different kinds of values here), so it's 2, then 5, then C at the top. Step six is another pop, so we pop that C off. All we have left is a 5 and a 2, and we put the C down in the output, noting that it happened at step six. Step seven is another pop, so now only the 2 is left on the stack: we pop off the 5 and put it down in the output at step seven. Step eight says push a 14, so we have 2, 14. Step nine says push "dog", so now we have 2, 14, and dog at the top. Then we pop all three elements: dog goes down to the output at step ten, step eleven pops the 14 (my mistake there while drawing, the entry at eleven should be 14), and step twelve pops the 2, leaving an empty stack. So we came out the same way as the last example: an empty stack at the end. However, let's say we didn't run that last step. Cross out step twelve and don't use it. That means this ends in a non-empty stack. This happens a lot in these sorts of examples: the stack doesn't have to be empty when the problem ends. It can still have a value inside of it, or even multiple values. In this situation we'd have an output of 10 C 5 dog 14, but with something left on the stack as well, and that's also important. That's why we usually have to report both: the output and the remaining stack, because the stack might not end empty the way it started. If we came back to this problem later, we'd have to remember there's still a 2 remaining in the stack. Now let's try a slightly different variation: say that at step 12 we popped, and then at step 13 we pop again, a pop on an empty stack. So we're at an empty stack and we try to pop something off of it. That will result in an error. If you try to pop from an empty stack, you're going to get a serious error, something along the lines of a segmentation fault or a "doesn't exist" error; whatever language you're using will throw its own version. But basically, you're trying to grab information that isn't there. You could write your code to just ignore it, but that usually isn't the best coding practice, because it means you're ignoring the problem: your code was, in a sense, wrong, because it was trying to pop something that wasn't there. Why was it doing that? Maybe the design was wrong; maybe it's trying to grab something it doesn't have. So we don't want to ignore it. We want to let the error come through, and then we can fix our code, or fix the data coming in and out. Just understand that if you try to pop from an empty stack, you will get an error.
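As a quick sketch of what that looks like in practice: in Python, popping an empty list raises an IndexError, which we can let surface rather than silently swallow. The variable names are just for illustration.

```python
stack = []
stack.append(10)    # push 10
print(stack.pop())  # 10 — the stack is now empty

try:
    stack.pop()     # popping an empty stack...
except IndexError as err:
    # ...raises an error; CPython reports "pop from empty list"
    print("error:", err)
```

Letting the exception propagate (or logging it loudly like this) is what makes the underlying design bug visible instead of hiding it.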
But those are the examples, and that's the intuition of how you might work through a stack example: pushing, popping, really simple, especially if you draw it out. I highly recommend drawing it out so you can see just how simple this really is. You're adding stuff to the top, you're popping it off the top, and you're just listing out the output at the end.
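The first worked example above can also be replayed in a few lines, using a plain Python list as the stack (appending and popping at the same end). The instruction strings and variable names are my own sketch of the lecture's example:

```python
stack, output = [], []

# The instruction sequence from the first worked example.
ops = ["push 4", "push 2", "push 15", "pop",
       "push 27", "pop", "pop",
       "push 3", "push 4", "pop", "pop", "pop"]

for op in ops:
    if op.startswith("push"):
        stack.append(int(op.split()[1]))  # add to the top
    else:
        output.append(stack.pop())        # remove from the top

print(output)  # [15, 27, 2, 4, 3, 4]
print(stack)   # [] — the stack ends empty, just like in the diagram
```

Notice how the 4 pushed at step one only appears at the very end of the output: that's the starvation behavior of LIFO in action.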
26. 4-3 Queues: Now that we have stacks under our belt, let's look at their sibling: the queue. Queues are very similar to stacks, except for one important difference, and that is how information is pulled from the data structure. So let's do exactly what we did with stacks and go over the intuition behind what a queue is. The best way to do that: in British English (and Australian English, I believe), what Americans call a "line" is called a "queue", and that's for good reason. For example, say we have a bank teller right here, and a line, as we say in America, of people waiting: this person, this person, this person, and so on. The way this works is that the first person in line gets served and removed, and any new people get added to the end over here. And that is exactly a queue: the first people in are the first ones to get served. So where last time we talked about LIFO, this one is FIFO, first in, first out. The first ones to arrive are the first ones to get out again. Like I said, this terminology can get a little confusing, so if you want to think about it in a different way: the one that's been waiting the longest is the first one out, and of course that's the person at the front of the line. And the way you implement that as a data structure is, instead of drawing a stack vertically, which only lets you grab from one end, you can draw it sideways.
And that's how I usually draw it: a structure with two open sides, where this can be the back and this can be the front. If items get added, they're added to the back; when items get removed, they're taken from the front. For example, a new item, a 3, comes in at the back; one item gets served, so it's removed from the front, and the contents kind of shuffle down. You'll see it's a little harder to draw, because it isn't stationary; it keeps moving over as you go, and that's just the nature of a queue. There are still push and pop operations, they just work a little differently: pop always grabs from the front, instead of, like in a stack, grabbing from the exact same side it pushed to. So pop grabs from the front, push pushes onto the back, and that is the major difference. Now you might be wondering what the benefits of this are. Imagine a computer: we have a processor right here, a CPU. How does it get its instructions of what it should do? The thing is, if it used a stack to get instructions, you'd have a problem. Say the top item was a web browser task, the next was some general processing, and down at the bottom was a word document. If this were a stack and those top two kept coming in, our word document would freeze, because it would never get touched. It would be what's known as starved at the bottom, and that is not a good implementation for a CPU. A better implementation is to make this a queue instead of a stack, because now it's going to execute the first task, then the second task, then the third task, and anything that needs to be executed next gets added at the back. The queue just keeps wrapping around with new things it needs to do, and our CPU accomplishes one after the next after the next, cycling through all the programs and calculations it needs to handle without ever starving a certain task. That's roughly how a CPU schedules work. It's more complex than this in reality, since it has to do this something like millions of times a second, and certain things have priority over others, but in the basic sense, this is why you'd use a queue over a stack: when you want every task to have the same value, and you want the thing that's been waiting the longest to get served first. So let's look at the implementations. Again, we can use a linked list or an array. For the array, we'd have something like this here, and the thing is, it would need to be a circular array. Remember, with the stack we could use a normal array; all we had to do was keep one cursor that pointed at the top and moved up and down. The problem with a queue is that the front and the back are always moving around, so we actually need to implement a circular array, so the front and back can wrap around. What we do is keep some information in here, along with a cursor that points to the front and a cursor that points to the back. Say this is the front and this is the back. When we add data, it gets added at the back (or the front, it doesn't really matter, as long as the two sides stay opposite: you add on one side and take from the other), and then you move the corresponding cursor over. So if you add at the back, the back cursor moves over to the new piece of data, and the front cursor doesn't move; if you pop off the front, you grab that element and move the front cursor over instead. It's exactly like we talked about with circular arrays: we have to make it circular for this to work. Otherwise we run into situations where we have to keep using O(n) operations to shift all the data back and forth, and that gets really tedious and really resource intensive. The next option is the linked list. For this we need not only the normal start pointer, but also the tail pointer we talked about. The reason we need the tail pointer is so we know exactly what's at the end of the list over here, and if we can get to it immediately, then we can grab it, delete it, pop it off. But there's a problem: if we just have a tail pointer on a singly linked list and we pop the last node off, the tail pointer can't be reset, because we can't ever go back to the previous node. So this is actually going to typically need to be a doubly linked list.
A doubly linked list will allow us not only to move to the very end, but also to move back one and reset the links to null, so we don't have problems where a pointer is still referencing a node that doesn't exist anymore. So we have to use the most advanced linked list we've talked about: a doubly linked list with a tail pointer. Now, every time we want to add something, we just add it at the start: add the node, redo the arrows, and it gets added to the front every time. Every time we need to delete something, we remove it from the tail end and readjust the tail pointer back one. All in constant time. And that's essential for all of these. We're going to talk at the very end of this section about the run times, but for now the main point is that they all need to end up as constant time. Otherwise it starts to get way too resource intensive, especially because, if you design it right, they all can be constant time, so there's no reason to make them O(n). So that is a queue, and that is the intuition behind a queue. In the next lecture, let's do the exact same thing we did with stacks and go over some examples.
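In practice, many languages ship a ready-made structure for exactly this. Python's `collections.deque`, for instance, is implemented to give constant-time operations at both ends, so a queue is simply: append at the back, pop from the front. A minimal sketch (the task names are just illustrative):

```python
from collections import deque

tasks = deque()

# Enqueue: new work is added to the back.
tasks.append("browser task")
tasks.append("general processing")
tasks.append("word document")

# Dequeue: take from the front — the longest-waiting task runs first,
# so nothing gets starved the way it would at the bottom of a stack.
while tasks:
    print(tasks.popleft())
# browser task
# general processing
# word document
```

If we had used a plain list and popped from index 0 instead, each removal would shift every remaining element, turning the dequeue into an O(n) operation, which is exactly the inefficiency the lecture warns against.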
27. 4-4 Queue Examples: Now that we've gone over the intuition behind queues, let's go through some examples so we can see how the data structure interacts with the data it's given. The first example is going to be very familiar: it's identical to one of the examples we did for the stacks, and the reason is that I want to show you how the data comes out differently. You'll see it right here: what we have is the stack output that this sequence gave, and down here is the exact same set of instructions we ran on the stack to get that output. So let's see what a queue comes up with. All we have to do is basically what we've done with the stacks in the past: go one step at a time, and by going one step at a time, we can see how the queue is being built and what the final output is. So let's write "output" down here; the steps are all numbered for us, so we can build this grid of steps over here. Let's get started. I'm going to draw it sideways this time to make sure it looks like a queue. Step one: we push on a 10. Step two: we pop. There's only one element here, so we pop off the 10; the queue becomes empty, the 10 goes down to the output, and we write in the step it happened at. Step three: we push a 2. Step four: a 5. Step five: a C, so now it's 2, 5, C. Step six: we pop. In a stack we'd be popping off the right side here, the one that got in most recently. But this is a queue, so we pop off the one that's been here the longest.
And you'll see the difference right here: over in the stack version, we popped off the most recent arrival, the C, but in this situation we pop off the 2. The 2 goes down to the output at step six, and the queue now has only 5 and C left in it. Step seven is another pop, so the 5 comes off and goes to the output, leaving just the C. Step eight: we push a 14, so it's C, then 14 at the back; the C is ready to be popped next. Step nine: we push "dog" as well, so now we have C, 14, and dog. And then finally we have three pops in a row. The first removes the one that's been waiting longest, the C; then the 14 comes off; then the dog comes off, and we have an empty queue right at the end. (I made a mistake in the drawing there: one entry should have been a 5, not a C, since we'd already popped the 5. Sorry about that.) These last three happen at steps ten, eleven, and twelve. And now you can see, barring any other mistakes I might have made, that the two outputs are actually very different from one another, almost a reverse of each other: the 2 is near the front over here, while over there the 2 is at the very end, and dog lands somewhere completely different. There isn't really a simple pattern between them. The start looked the same, because a push followed immediately by a pop behaves the same whether it's a stack or a queue. But once you get more data in there, they handle it in completely different ways. If this were, for example, a computer processor, these tasks would be accomplished in very different orders. In the stack version, the 2 was pushed near the very beginning but didn't get executed until the very last statement, so it would have been waiting a really long time. The queue version was a lot more fair: whatever came first waited longest, so the waiting times averaged out overall. And that's why a queue is best for things like computer processing. So that's the first example of a queue. Pretty easy: you push onto one end, you pop off the opposite end, and you get this queue working just like a line would. Now let's do another example, and you'll notice that over here the names have changed. Pop and push can be used, but people also say "enqueue" and "dequeue", and the reason is so the operations don't get confused with, for example, another stack in the same program: the stack has push and pop, and the queue has enqueue and dequeue. That separates the two. You might see it either way, but I want to introduce you to these terms so that if you see an enqueue or a dequeue, you understand exactly what it is. And on this one, I'm going to use a bit of shorthand for how I might solve the problem, because writing out every single state can get very, very verbose.
It takes a lot of space and a lot of writing, so I'm going to show you a quicker way to do it, where you don't have to write out the state at every single step and you keep exactly the same accuracy. Let's get started. First, we draw one really big queue right here, make it really, really big, and put an output down here. The first steps enqueue a 10, a 15, and a 17, so we write in those three numbers. Then we dequeue one, so we just cross out the element on the opposite side from where we're adding and put it into our output (that's the 10), and move on to the next step. Next we enqueue an L, so the L goes onto this side. Then we do two dequeues: that's the 15 and then the 17, so we cross out 15 and 17 and write them down here, 15 first, then the 17. Next we enqueue a 14, so we write a 14 over here, and then we have two more dequeues: the L and then the 14, so we cross those out and write them down. And finally, the last two operations: enqueue a 4, and then dequeue that 4. So now what we have left is an empty queue, and this is our output. You'll notice that took substantially less time. Drawing every state is a good way to learn the concept, because you can see every single step along the way, and if you want to look back at it, you can see what happened between every push and every pop. But when you're actually solving these problems, you might want to go with this quicker shorthand, where you have just one queue that you're crossing things in and out of, so you don't have to keep redrawing queues and keeping track of multiple copies. That will help you a lot, because remember how I messed up earlier, where I wrote a C instead of a 5? Drawing everything out didn't save me from that; I didn't catch it until the very end, when I went back and checked. The shorthand has the exact same accuracy, and it speeds up your workflow. It doesn't show the concepts as well, but it gets you the end result a lot quicker. So that's this example: a whole lot quicker when you do it this way, and you can see it works just like all the other queues. You put in on one side (that's our back) and you pop from the other side (our front), and the contents slowly move through as you go. So those are some examples of queues: a really neat data structure that you can build off of other data structures, that has so many real-world applications, and that is really essential to learn as a computer scientist.
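To see the stack/queue contrast from the first example in code, we can run one instruction list through both structures at once: a list popped at the back for the stack, and a deque popped at the front for the queue. The setup is my own sketch of the lecture's example:

```python
from collections import deque

ops = ["push 10", "pop", "push 2", "push 5", "push C",
       "pop", "pop", "push 14", "push dog", "pop", "pop", "pop"]

stack, queue = [], deque()
stack_out, queue_out = [], []

for op in ops:
    if op.startswith("push"):
        value = op.split()[1]
        stack.append(value)  # stack: add to the top
        queue.append(value)  # queue: add to the back
    else:
        stack_out.append(stack.pop())      # stack: remove the most recent (LIFO)
        queue_out.append(queue.popleft())  # queue: remove the longest-waiting (FIFO)

print(stack_out)  # ['10', 'C', '5', 'dog', '14', '2']
print(queue_out)  # ['10', '2', '5', 'C', '14', 'dog']
```

Same instructions, very different outputs: in the stack the 2 is served last even though it was pushed early, while the queue serves everything in arrival order.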
28. 4-5 Queue and Stack Run Times: Before we wrap up this section, I want to discuss something that we've discussed in all the other sections, and that is the run times of stacks and queues. These run times aren't as cut and dried as, for example, the linked list's or the array's, and that's because these data structures aren't data structures in and of themselves. They're built out of other data structures, which means their run times actually depend on whatever data structure you build them on. So if we go down here and look at them, our stacks have a push and a pop, and our queues have an enqueue and a dequeue, and for both of these we can use either an array, you know, the classic array, or a linked list; we have that choice for each of them. However, if you think about it, both the array and the linked list, if you set them up the right way, have the ability to give us an O(1), constant-time insert and a constant-time deletion at the end. That means we need to shoot for constant time in the run time of all of these operations, because it can be done in constant time. So we need to make sure that all of our enqueues, dequeues, pushes, and pops are also constant time. If these ran in linear time, or squared time, or any other slower time, our queue and stack would be horribly inefficient, because, like I just said, we've already proven these operations can be done in constant time, so why make them slower than that? So how might you do that? Let's go to the example of a stack really quick. Say we have a stack right here, and we use, for example, an array. With an array, all we have to do is keep track of one end. So, let's say we have three elements: we just push to the end, and then we can pop from the end, push to the end, pop from the end.
We can keep doing that back and forth, and remember, an array is always constant time for an access if you know the index of the spot, like 1, 2, 3 here. If you know the index, you can go to that spot and delete one off, or go to that spot and push one back on, which means the deletion and the insertion are both going to be O(1). You can also do that with a linked list, as long as it has a tail pointer: just have the tail pointer point to the end. It's probably going to need to be a doubly linked list so we can make the insertion and deletion quick. To pop, we just remove the end node and move the pointer back one, and if we want to add one, we do the same thing in reverse: we add one in over here and move the tail pointer over one as well. And both of those, as we proved earlier, can be constant time too. So that means both pop and push should be O(1), and that's how you know you have an efficient stack. Then we can do a quick example for the queue as well, so let's go ahead and put a queue up here. Now this one is slightly more complex, and this is where you might make an error: if you choose the wrong design, you'll have something really slow. For example, say we implement it with just a normal array. We have to push and pop from separate ends, so we push to this end and pop from that end. But the problem is, with an array, if it's not circular, when we pop from this end we have to shift all the elements back down to keep track of everything, and when we shift everything back down, that becomes an O(n) operation. And like we said, that is not what we want. So we have to make sure that if we're using an array for this, it needs to be circular, it needs to have front and back pointers. We can just draw the circular array right here.
It needs to have front and back pointers so that, like we discussed earlier, it can be circular, where it can wrap around in either direction, and it never needs to move all of its data to maintain its integrity. So that's a small pitfall people can fall into, where they try to make a queue implemented through an array: the problem is it needs to be circular, and if they don't make it circular, they'll have a very inefficient O(n) right here. Now the next option is a linked list, which is actually very common for a queue, and that's because it's very intuitive with a queue. It needs to be doubly linked with a tail pointer, and then it has the normal start pointer as well. So if we're pushing, we push to this end, it goes over here, and if we're popping, we pop from that end; or if you want, this side is enqueue and this side is dequeue. Both of these pointers are pretty standard. Like I explained earlier, the tail pointer is very easy to keep, just a little extra code, a little extra maintenance, and the start pointer is always there so we know where our linked list is. And with those two pointers we can push or enqueue and pop or dequeue really, really simply, and always in constant time. So that is the basis of the run times: just understand that push, pop, enqueue, and dequeue should always be constant time. If you come to a situation where one of them is O(n), then it's an inefficient implementation, and it can usually be rethought to be faster pretty simply; you usually only run into that in a case like this. I just wanted you guys to understand that concept because, like I said, this is a computer science course, and understanding the run times of everything is essential to understanding when and where to use everything.
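To make that pitfall concrete, here is a minimal sketch (not from the lecture, just an illustration) of a circular, array-backed queue. The front index wraps around with modulo arithmetic, so a dequeue never shifts elements, and both operations stay O(1):

```python
class CircularQueue:
    """Array-backed queue whose front index wraps around,
    so enqueue and dequeue are both O(1) -- no shifting."""

    def __init__(self, capacity):
        self.data = [None] * capacity
        self.front = 0   # index of the next item to dequeue
        self.size = 0

    def enqueue(self, value):
        if self.size == len(self.data):
            raise OverflowError("queue is full")
        back = (self.front + self.size) % len(self.data)  # wrap around
        self.data[back] = value
        self.size += 1

    def dequeue(self):
        if self.size == 0:
            raise IndexError("queue is empty")
        value = self.data[self.front]
        self.front = (self.front + 1) % len(self.data)    # wrap around
        self.size -= 1
        return value

q = CircularQueue(3)
q.enqueue(1); q.enqueue(2); q.enqueue(3)
print(q.dequeue(), q.dequeue())  # 1 2 -- nothing was shifted
q.enqueue(4)                     # wraps into a freed slot at the start
print(q.dequeue(), q.dequeue())  # 3 4
```

Without the modulo wrap-around, the only way to reuse the freed slots at the start of the array would be to shift everything down, which is exactly the O(n) dequeue the lecture warns about.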
29. 4-6 Stack and Queues Real World Examples: Now that we have a good understanding of stacks and queues, let's go over some real-world examples of where we might have seen them, and a neat application or two. The first one that we've all seen before is undo and redo. For example, in a word document, if I decide to go backwards, you can see that I can undo things, but if I decide not to make any changes, I can go back forwards again, and that works with two stacks. Right here we have our activation, or command, stack, and over here is the redo stack. Say we are currently at command five: we've entered something into the program, and we want to go back to command four. What happens is it pops that five off and pushes it onto this right stack. Say we want to go two commands back: it pops the four off and pushes it over to the right stack as well. So now we have this stack on the left, which is our command stack, and we are now at command three, the third command that we entered. Now, that data was not deleted; it's just sitting in the redo stack. If we want to redo things, we just redo: it pops a command off this stack and brings it back over. But let's say that we don't want to redo, that five was a mistake, and we want to add a new command, six, on top of this. What we do is add the six here, but now, if we clicked redo, it would take this five and pop it on top of the six, and that would destroy the integrity of this stack over here. So think about what happens when we add something that isn't a redo: for example, if I delete this text right here and type in something else, there's no way to get back to it. Whenever we add a new change that isn't a redo, we simply clear the right stack. We remove everything from it and make it empty again, and now we can begin our undos and redos once again, moving back and forth.
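That two-stack mechanism can be sketched in a few lines of Python; this is a minimal illustration (the command values here are just placeholder numbers, not anything from a real editor):

```python
class History:
    """Undo/redo sketch: undo pops from the command stack onto the
    redo stack; a brand-new command clears the redo stack."""

    def __init__(self):
        self.commands = []    # the command (activation) stack
        self.redo_stack = []  # the redo stack on the right

    def do(self, cmd):
        self.commands.append(cmd)
        self.redo_stack.clear()  # a new change invalidates old redos

    def undo(self):
        if self.commands:
            self.redo_stack.append(self.commands.pop())

    def redo(self):
        if self.redo_stack:
            self.commands.append(self.redo_stack.pop())

h = History()
for cmd in [1, 2, 3, 4, 5]:
    h.do(cmd)
h.undo(); h.undo()    # back to command 3
print(h.commands)     # [1, 2, 3]
print(h.redo_stack)   # [5, 4]
h.do(6)               # a new change: the redo stack is cleared
print(h.redo_stack)   # []
```

Note how `do` clears the redo stack, matching the rule above: once you make a fresh change, the old "future" can no longer be replayed.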
And if we ever make another change, that redo stack will all be cleared and removed again. So that's basically how undo and redo work. There's also a neat little detail: if you've ever gone into a word document or some other program and kept clicking Ctrl+Z, which is undo, or kept clicking the undo button, and you reached the end of the undos, that's because you filled up the stack. If we expand this all the way up, say we have commands up to six, and then we put a seven up here, what happens if we want to put in an eight? Well, to put in an eight, we don't just say we're not going to keep track of it anymore. What we do is take the command at the very bottom, command one, erase it, remove it from the stack, and then shift everything down. So now we have 2, 3, 4, 5, 6, 7, 8, and you'll notice that command one is no longer tracked. That means we can only undo back to command two, and every time we do another command, it's going to remove the bottom one and push the new one onto the top. So it's sort of a modified stack, in the sense that we can remove from the bottom, but that's only for overflows of the stack. Just a little caveat: if you ever wonder why you can't undo any more, it's because your stack was filled up and has no more memory to store those commands. Another neat use of queues is scheduling. So what exactly is scheduling? Well, on any sort of computer, commands are given to it, and a lot of times a lot of commands are given at once, and it has to work out when to run each of them. Let's think of the example of printing. Say we have 100 computers in an office or a library, all connected to one printer, and everyone is trying to print at the same time. How does it decide which jobs to print first?
What it does is keep a queue: whoever gets there first gets their job completed first. So, for example, say we're at this place in the queue, with jobs 3, 4, and 5 waiting. The next job to be completed is at the front, and the queue is building up behind it. Job three goes ahead and gets printed, and while it was being printed, computer two decided it wanted a print job, and then computer one wanted a print job as well. So the printer just puts them at the back of the queue, and then it continues down the line: once it finishes one job, it moves the next job up, goes ahead and prints it, moves to the next one, and so on, and people can keep joining at the other end. Now, this queue could keep expanding, or we can actually make it circular: we print the three, it goes to the two and prints that job, four adds another job so it goes on the back, one is now completed, and it goes on like that. So it's just a way of scheduling all of those commands in one nice orderly row, and it's fair: if you printed first, the person who printed ten minutes after you shouldn't get their job done before yours. It's just a way of scheduling to make it fair. There's also a way of traversing a maze with a stack, which is just a fun sort of example. Say we have a stack and this grid; it doesn't line up exactly on here, but it's enough to make the point. As we move through the maze, every move we make, we push it onto the stack. So we go through here, we get to this point, and we can either go right or go down. If we go right, we push whatever this tile is: say this tile is seven and that tile is eight, so we push the seven on here and move up. Now we've hit a wall: there are no more options and no more directions we can go. So what we do is start popping off the stack.
We start going backwards until we get to a point where we can make another decision. We get back to this point and say, well, that way wasn't good, so let's go down this way. Now we have three choices. Say on the first one we decide to go left, and we move all the way down here. Nope, we reach a dead end, so pop, pop, pop, pop, pop, back to this one. Here we can make more decisions. Let's go down: nope, pop back to here, and then move all the way in this direction. Now we have two decisions to make: go left, nope, dead end, pop, pop, pop, pop, and then we try this decision, and yes, we made it out of the maze. It's a way that your program can run through a maze and find the way through it. Your program might also be really fast if it happens to guess right every time: it goes here, goes down, okay, we have more decisions here, goes right, then all the way over, goes left, nope, goes down, and we found an exit. If this wasn't an exit, it would go all the way back up to here, try here, run out of decisions, and pop back up to this one, and it starts looking from there as well. So it's just a way that you can keep track of where you've been and what directions you can still go within a maze. It's a fun way of using a stack to do a sort of gaming thing. Those are some neat applications of stacks and queues. They're used all over the place on the Internet and in programs in general, because queues are great for scheduling, and stacks are great for putting things on and then popping them off in a useful order, like with undoing and redoing. Great data structures and great things to learn if you're going into computer science.
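The maze walk described above can be sketched as stack-based backtracking. This is a minimal illustration with a made-up grid (`#` marks walls), not code from the lecture: every move is pushed onto the stack, and a dead end pops back to the last open choice.

```python
def solve_maze(grid, start, goal):
    """Stack-based maze walk: push each move, pop to backtrack
    from dead ends. Returns the stack (the path) or None."""
    stack = [start]
    visited = {start}
    while stack:
        r, c = stack[-1]                 # current tile = top of stack
        if (r, c) == goal:
            return stack                 # the stack is the path we took
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != '#' and (nr, nc) not in visited):
                visited.add((nr, nc))
                stack.append((nr, nc))   # make a move: push it
                break
        else:
            stack.pop()                  # dead end: pop and go back
    return None                          # no way out

maze = ["..#",
        ".##",
        "..."]
path = solve_maze(maze, (0, 0), (2, 2))
print(path)  # [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
```

The `visited` set plays the role of remembering which tiles we've already tried, so popping never sends us back into a branch we've exhausted.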
30. 5-1 Sorting Algorithm Introduction: Now that we have a good understanding of some basic data structures, especially arrays, we can begin discussing how to sort them, and what we're going to be talking about in this particular unit is sorting arrays. You might be thinking: why is it so important to sort an array? Well, remember when we talked about the search run time for our arrays, we said that searching is O(n); however, if the array is sorted, then it's O(log n). And we even went further and said that O(log n) is close to constant time in practice. So that means that, roughly speaking, we can get near constant-time searching from an array that is sorted, which means sorting will improve our run time overall by a whole lot. For example, if we had a 3, a 1, a 6, and a 7 here, instead of leaving it like this, if we use a sorting algorithm on it, we come out with something that makes a little bit more sense: 1, 3, 6, 7. And this allows us to apply that faster search run time to it, and it allows a couple of other operations to be slightly quicker as well. So sorting an array is extremely important. However, there are a ton of different ways to do it, and that's what we'll be discussing in this unit: all the different sorting algorithms, their run times, when you might use each one, their complexities, and just getting a better understanding of how we might sort something like an array to make it quicker in different respects.
31. 5-2 Bubble Sort: The first sorting algorithm we're going to cover is known as bubble sort, and bubble sort is a notoriously easy-to-implement but very bad sorting algorithm, and we're going to discuss why that is. So let's go over the intuition of exactly how bubble sort works. Say we have an array here, and we'll make it a simple one, six elements long, so six different little columns, and say it holds 7, 2, 10, 1, 5, 3. What bubble sort does is start at one side, the left, and carry along the highest number it has seen so far. By default it grabs the first number, and then it asks: is this number larger than the next one? If so, swap their positions. Once they're swapped, it moves on to the next pair: is this number larger than this number? If so, swap their positions, and it keeps going along like that. So let's go over a couple of examples here. We have this seven right here, so seven is our largest number so far. Is seven larger than two? Why, yes, it is, so we go ahead and swap their positions: two, seven. Let me put those in red so we're consistent here. Now, is seven greater than ten? No, it isn't, so we keep these in the same place, and now ten becomes our largest number. Is ten greater than one? Why, yes, it is, so we swap their positions: the ten goes over here and the one goes right here. Is ten greater than five? Yes, it is, so we swap their positions: the five goes here and the ten goes here. And then finally we do it one more time, where the ten goes here and the three moves back. So you'll notice that now we have one spot sorted: ten is our greatest number, and it's been sorted.
It's at the very, very top here. What we did was take the ten, which started originally over here, and move it all the way up, so now we have one spot sorted. And now we can do another pass right here. Let me redraw this really quickly, and let's start again from the left: is two greater than seven? No, so the two stays where it is. Seven is now our biggest number: is seven greater than one? Yes, it is, so we swap the one backwards and move the seven up. Is seven greater than five? It is, so we swap their positions: five here, seven here, and we'll just draw the rest. And right here, is seven greater than three? It is, so we swap once more, and now we have 3, 7, so now we have two spots sorted, two areas over here that are done. Let's keep going right here, and I'm going to speed this up by copying: highlight this, Ctrl+C, Ctrl+V, move that down, fix the little line right there, and we should be good to go. Okay, the next pass: is two greater than one? It is, so we swap those around: 1, 2. Is two greater than five? It is not, so now five is our largest number. Is five greater than three? It is, so we swap those: 3, 5, and now those are in the right positions. And now we run through it once more, and we notice: one is not greater than two, two is not greater than three, three is not greater than five, five is not greater than seven, seven is not greater than ten, and we are done. Our array here has been sorted using bubble sort. The reason it's called bubble sort is that you're bubbling the highest number up to the very end, sorting the array one spot at a time.
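The pass-by-pass procedure just described can be sketched in a few lines of Python. This is a minimal illustration, including an early exit when a whole pass makes no swaps (which is what makes the best case linear, as discussed below):

```python
def bubble_sort(arr):
    """Bubble sort sketch: repeatedly swap adjacent out-of-order
    pairs, bubbling the largest remaining value to the end."""
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):        # last i slots are already sorted
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:                    # a clean pass means we're done
            break
    return arr

print(bubble_sort([7, 2, 10, 1, 5, 3]))  # [1, 2, 3, 5, 7, 10]
```

Each outer pass bubbles one more value into its final slot on the right, which is why the inner loop gets one comparison shorter every time.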
Let me show you a little bit more intuitive way of seeing this. There's a great site called VisuAlgo, and what I'm showing you is their representation of the algorithm right here. Let me go ahead and click the play button, and you'll notice it's doing the exact same thing. It keeps checking each pair, and whenever the left one is larger, it swaps: the 47 is larger than all the numbers so far, so it keeps swapping it up, and then 50 becomes the largest number. So now we have one spot sorted, which turns yellow, and we do it again: the 44 is now our largest number, so it goes up, slowly swapping one at a time, and then 46 becomes the largest, and now we have a guaranteed two spots that are sorted. Do it again, we keep going up with the 38, and you can see that it slowly gets more and more sorted as we go. Now we have three guaranteed spots, and it's going to keep going like this until it finishes, so we can speed this up a tad right here, and now it runs through all of it really, really fast. This is the essence of what bubble sort is doing, and now the entire thing is sorted. So that is bubble sort in a nutshell. Now let's break this apart and see: why is this such a bad algorithm? Let's take a simple look, say with four elements right here, at what it needs to do in the worst case. The worst case scenario is that it has to take one element and move it all the way to the top every single time, which means the worst case is actually a completely inverted array, one sorted in the wrong direction. So in this situation, instead of it being 1, 2, 3, 7, it would be 7, 3, 2, 1, the exact opposite order.
And that means that every time it bubbles, it has to take this element and move it all the way over, then take the next one and move it all the way over, then the next, and you'll see that the number of operations required goes up quadratically. We move this one over: 1, 2, 3 swaps. Now the three is over here, so we move it up: 1, 2. Then one more, and then we're sorted. And that comes out to this: in the worst case we have to compare each element n times and move it n places, which gives an equation of O(n) times O(n), meaning the worst case is n squared. And if you do some of the math behind this, you'll find the average case is pretty similar: the average case for bubble sort is also n squared, meaning this is an n-squared sorting algorithm. Remember how we talked about quadratic growth: if you have anything over maybe 1,000 or 10,000 things to sort, this can start taking minutes, hours, days to complete. So in our little table down here, which we'll be filling out throughout this unit, our average case, using the notation for it, is n squared, and our big O, the worst case, is n squared. Now let's look at the best case scenario. Say the array came in completely sorted. Well, you'd still have to make sure it was completely sorted, so you would still have to go down and check: is this greater than this? Is this greater than this? So if it was completely sorted, for example this one right here, we'd come in like, okay, time to sort this: is this greater than this? Nope. Nope, nope, nope, nope. Okay, it's sorted. That's n.
It still has to touch the entire length of the array at least once, so that's n. So that means our best case is O(n), and since this is the best case, we want to use the proper notation for it, which is Omega: Ω(n). So what we have here is the run time of bubble sort. Because of its nature, where it has to compare n numbers n times, we get ourselves an n-squared relationship. And if we want the average, we can actually calculate that really quickly: on average it's n/2 comparisons times n/2 passes, which equals n²/4, and since we disregard constant factors, it still comes out to n squared. That's just a quick bit of math; if you don't follow it, that's fine, just understand that the average and worst cases are n squared and the best case is n. So this is one of our worst algorithms. However, it's very easy to implement in code: all you have to do is a sort of "is this greater than that" loop. Let me just show you a little code snippet of it right here; it has the code written out, and you can see it's only about five or six lines. Basically, it walks up the entire array, and if the left element, the one it selected, is greater than the right one, it swaps their places, and then it moves on, and it keeps doing that over and over and over. So it's very, very simple code to implement. But because of that, and this is where we talked about the trade-off between complexity and time, its very simplicity makes it one of the worst algorithms here. So that is bubble sort. We're going to start moving on to different sorts as we go, and I'll start showing you more and more efficient ones. They might get a little more complex, but their efficiency will definitely go up.
32. 5-3 Selection Sort: The next sort we're going to cover is selection sort. So what exactly is selection sort? Let's go over the intuition of it, then talk about the run times, and then look at a visual representation as well. Selection sort is slightly different from bubble sort; it's actually pretty similar, but it is definitely different. So say we have an array here, like normal, and it has four numbers in it, and those four numbers are, let's say, 17, 18, 2, and then 4. What selection sort does is maintain an unsorted portion and a sorted portion, so let's go ahead and draw the boundary between the unsorted and sorted portions. Basically, we're going to be moving this boundary right here over as we go: everything to the left of it is sorted, and everything to the right is unsorted. And what we do is go through the list looking for the minimum value. Once we find the minimum value, we swap it into the first space that is to the right of the boundary, then we move the boundary over, and we keep doing that over and over and over. So let's go through an example right here. We go to the 17 and say: 17 is now our minimum number, the smallest number we have, because we haven't compared it with anything else. We take that 17 and compare it to the next one: is 18 smaller than 17? No. Compare it to the next: is 2 smaller than 17? Yes, it is, so now 2 is our smallest number, and now we're comparing 2 with everything else: is 2 smaller than 4? It is not, so that means 2 is our smallest number. So now what we do is take this number and swap it with the number in the position just to the right of our sorted portion, which is the first one. So that means we're going to take the 17 and the 2 and swap them.
So the two goes here and the 17 goes here, and now we can move our sorted-portion boundary over, right, like so, and everything to the left of it is sorted. We have the first spot sorted: we found the minimum number in this list, and it went into our first position. So now we do it again. 18 is our minimum number at first; we get to the 17, it's smaller, so 17 is now our minimum; we get to the 4, it's smaller, so 4 is our minimum, and there's nothing else to compare. Now we take our minimum number, which turned out to be 4, and swap it with the 18 over here, because the 18 is in the position to the right of the sorted portion. So the 4 goes here, the 18 goes there, and then we take our sorted-portion boundary and move it over to right here. Now we only have two numbers left to do. We take 17 and say: is 17 smaller than 18? Yes, it is, so it stays, and we move the boundary over; 18 is the last number, so we move the boundary over again, and now we've confirmed the array has been sorted. So it works just like that: it scans through for the minimum number, swaps it into the first available space, the sorted portion moves over, and it keeps doing that until it completes. So then let's go ahead and look at a visual representation of this, and you can see it kind of going in fast motion, like so. This is again from visualgo.net, because it has some really good material here, and we're just going to click the play button and watch this thing do its thing. So you see the 2 is the minimum that it finds, and it's swapped into the first position, which is now the sorted portion. We can slow it down a little bit if you want, and you can see the 4: it's comparing one at a time, 4 is the minimum, and it swaps it in.
All the yellow bars are in the sorted portion, and it starts over again: 5 starts as the minimum, and now it's comparing it with every single one of these. In the end, 5 stays the minimum, so nothing changes. Then it compares again and finds that 15 is the smallest in what's left, so it swaps it into the next available position and marks it as sorted, and it's going to keep doing this over time until it finishes off the whole thing. Now I'll speed it back up so we can see it go in fast motion, and you'll see that it keeps swapping, like so, finding the minimum and swapping it to the right of the sorted portion, until the entire thing is sorted. And there we go, now it's sorted. So what exactly is the run time of something like this? Let's go over the run time right here, and think about it sort of intuitively, trying to figure out the run time just by looking at it. So let's draw our sorted boundary like so, and then some numbers here, say 2, 1, 17, 4, as the unsorted part. And so what do we have to do? For every single number in here, we have to run through this list: at most n times at first, and fewer as we go. That means that on average, we run through it n/2 times. And how do we get to n/2? Well, the first pass has to look at n elements, the next at n minus 1, then n minus 2, n minus 3, n minus 4, and so on. What you can see is that on the left side here the passes are weighted towards n, and on the right-hand side they're weighted towards 1, which means that in the center, the average number of comparisons each of these passes takes is n/2.
It's the average of all of those combined, which is just how averages work. So basically, the run time comes out like this: for every single position in here, it does about n/2 comparisons, and since we have n positions that we have to fill, we multiply them together. There are n values, multiplied by n/2 comparisons each, and that comes out to n squared. So the average run time here is n squared, which is also the worst-case run time. But you have to think about something else. This is slightly different from bubble sort, in the sense that there's no chance for it to simply check and confirm that the entirety of the array is already sorted. It has to do its pass and move the boundary over, checking every single number, even if we get a completely sorted array. So let's do a little example and create a completely sorted array here: 1, 2, 3, 4, 5. We have ourselves a completely sorted array, but how does it know that it's completely sorted? Well, all our program knows how to do is run through and check: is this the minimum? Is this smaller? Yes, yes, yes, yes. Okay, it's smaller than all of them, so it stays in its place and the sorted boundary moves over. Is this smaller? Yes, yes, yes. Okay, we move it over. Yes, yes, yes. Okay, we move it over. Yes. Okay, we move it over. And so we keep moving the boundary over, no matter how sorted the array actually starts out being. And that's important, because it means that even in the best-case scenario, it's still going to be n squared.
Unlike bubble sort, which can run through the array once and immediately confirm it's sorted by asking, is this smaller than this? Nope. Is this smaller than this? Nope, nope, nope, and once it gets to the end it's like, oh, we're already sorted, and it ends the program. Selection sort can't do this. It has to search the unsorted portion and move the sorted boundary over every single time. And so the problem with selection sort is that all three values come out to n squared. Our best-case scenario is still n squared, the average runtime is still n squared, and the worst runtime, our Big O, is n squared as well. So this algorithm is actually very, very inefficient; it's the most inefficient sorting algorithm we're going to deal with, even though the intuition behind it is actually pretty neat. You're creating a sorted portion, you're moving numbers over one at a time; it sounds like it should work really well. The problem is that, because of how that sorted portion gets built, it's always going to be n squared. So in the end we get another row in our chart that's n squared, n squared, n squared, just like bubble sort. These two, bubble sort and selection sort, are pretty bad sorting algorithms. They're good to learn because they show you intuitive ways of doing it, ways that kind of make sense, but they aren't the best ways to do it. So these are the bad sorting algorithms, and next we're going to be moving into some better sorting algorithms that actually have faster runtimes. They're going to be more complicated, because that's just how programming works: you usually can't have a trade-off where you get a faster runtime and it's also simpler. They'll be a little bit more complex, but they still pretty much make sense once we run through them a couple of times. So, yeah, that is selection sort.
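The course itself doesn't tie the algorithm to any particular language, but the procedure described above can be sketched in a few lines of Python (treat this as an illustrative sketch, not the course's own code):

```python
def selection_sort(arr):
    """Sort arr in place by repeatedly finding the minimum of the
    unsorted portion and swapping it to the end of the sorted portion."""
    n = len(arr)
    for i in range(n):                  # i is the boundary of the sorted portion
        min_index = i
        for j in range(i + 1, n):       # scan the unsorted portion for the minimum
            if arr[j] < arr[min_index]:
                min_index = j
        # swap the minimum into the next sorted position
        arr[i], arr[min_index] = arr[min_index], arr[i]
    return arr
```

Notice that the inner loop always scans the entire unsorted portion, even if the array arrives already sorted, which is exactly why the best case is still n squared.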
And let's jump into some faster sorting algorithms.
33. 5-4 Insertion Sort: In the last lecture I talked about how we were going to move into some algorithms that are a little bit faster, and I lied just a little bit, because we still have to go over insertion sort, which is still pretty slow. Insertion sort is the logical next step, because it builds off of bubble sort and selection sort together into a new sorting algorithm that sounds a little more like how it should be done, but it still has those slow tendencies. So we're going to go over how it works, then show the visualization, and then do the runtimes like the rest of them. So let's get started. First thing we're going to do is draw ourselves a little array right here so we can look at this intuitively, and let's put some numbers in it, say 8, 4, 1, 7, like so. What insertion sort does, like I said, is work off of selection sort in that we have a sorted and an unsorted portion. Except unlike selection sort, we don't look for the minimum in the unsorted portion; we just grab the first unsorted number, take it out, and insert it back into the sorted section where it belongs. So let's do just that. First off, we take our number out, so we save it in a variable or something; we bring it out, and then we extend the sorted section over its spot, so now everything on the left is the sorted section. Then we reinsert the number at the point where it should go, which here is that same empty space. So technically, in theory, we take it out and we put it back, but in actuality, if you saw what happened there, it doesn't actually move.
So we're just moving the sorted boundary, and because the number we grab is always the one just to the right of the sorted portion, all we have to do is move the boundary over and the number is already in a candidate position. Then we compare it backwards, doing sort of a bubble sort in reverse, to figure out where it should go. We ask: is this one smaller than the one to its left? Yes? Then we swap the two and move it back. Then we do the exact same thing with the next number: we grow the sorted portion, quote unquote take the number out and insert it back into the sorted array; but in code we just keep it right there and swap it down until we find the spot it needs to go. Is it still less than its left neighbor? Yup, so we swap again, and again, until it lands in the right place. And then we do the exact same thing to finish it off: grab the last one, move the boundary over so everything's in the sorted portion; is this less than this? It is, so we swap it down; and is it less than the next one? It's not, so we're done with our sorting algorithm. So then let's take a little look at a visualization of how this is done on a little bit of a larger scale. Just click the go button and you'll see that it grabs a number, takes it out, and basically inserts it into the sorted portion; it's looking for where it should go. If it's already in the right place it just puts it back, and otherwise it swaps it down until it reaches the point where it is sorted. And it does this all the way until it is done.
It just takes the first unsorted number every single time and then does sort of a reverse bubble sort to put it in the right place. So you can see that it combines, like I said, selection sort, because we have a sorted and an unsorted portion, and bubble sort, because bubbling is how we're putting each number into place. And so there we have it; now we have a sorted array. Let's go over the runtime. How long does this take? How does it compare to the other ones? Well, basically, this is going to be about the same as bubble sort, and you can kind of see why, because it runs off of bubble sort. But we're going to go over one key difference, which is what actually makes it behave like bubble sort rather than selection sort. Remember, with selection sort everything came out to n squared, and the reason is that you had to find the minimum value in the unsorted portion on the right, and doing that took quote unquote n amount of time, on average about n over 2, which is on the order of n. So we had to search for the minimum, and then we could put it into the sorted place without searching there, because once we find the minimum we know it goes in the next spot. But doing that search meant that no matter whether the array was sorted or not, we always came out to n squared. Now, with insertion sort, the cost moves to the other side, and that allows us to get a slightly faster best-case scenario. Think about it: we have n pieces of data to insert, and for each one, in the worst case, we may have to compare and swap it backwards up to n times. Like in the other examples, that averages out to n over 2 per insertion, the constant 2 gets dropped, and we end up with n squared.
And that gives us the worst case and the average case. The worst case is that the array is completely reversed, and every insertion has to do the maximum amount of swapping, which is this n times n over 2. The average case is that it has to do maybe a little less work: it doesn't always have to swap all the way back, maybe it gets lucky a couple of times and places numbers quickly; that's slightly less than n squared, but it's still on the order of n squared. So we can put those two in our chart right here: the average is n squared and the worst case is n squared. However, it fares a little bit better in its best case, so let's take a look at that. Let's erase these numbers and make the array sorted. So we've got our algorithm, and the array already came sorted. What do we do? We grab the first unsorted number and insert it; well, there's nothing to do, we don't do any swaps. So how long did that take? It took one operation to move the boundary over and one operation to check whether this number is greater than the one to its left. So it came out to two operations. We do the exact same thing again: move the boundary, grab the next one, check: nope, still sorted. Move the boundary, check: nope, still sorted. Each of those insertions only took two operations: move the cursor, then check if greater than. So every single insertion is just two operations: 2, 2, 2, 2, 2, 2. It never scales based on n, which means each insertion is technically constant time. So now, in the best case, we have n insertions to do.
Call it n pieces of data multiplied by constant time per insertion in the best-case scenario, which means our best-case scenario is actually going to be n, just like bubble sort. We get this best case of n, and it all comes down to the fact that when the array is already sorted, each insertion only needs a constant number of operations; the per-element cost becomes constant, whereas in selection sort it was always n over 2, which meant it always came out to n squared. And so that is insertion sort in a nutshell. Like I said, it combines the aspects of bubble sort and selection sort, having the unsorted and sorted portions and then bubbling each number back into place, and it comes out to these runtimes right here, which are identical to bubble sort. So it's still pretty slow. These are the basics: every computer scientist basically knows these three sorts off the top of their head. And then we get into the faster ones, where some really intuitive, really clever ideas come up and we start actually improving the comparison sort algorithms to make them really fast: as fast as n log n, and things like that.
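As a rough sketch of the walkthrough above (Python here purely for illustration; the course itself stays language-agnostic):

```python
def insertion_sort(arr):
    """Sort arr in place by inserting each element into the sorted
    portion on its left, swapping backwards like a reverse bubble sort."""
    for i in range(1, len(arr)):        # arr[:i] is the sorted portion
        j = i
        # swap the new element backwards until its left neighbor is smaller
        while j > 0 and arr[j - 1] > arr[j]:
            arr[j - 1], arr[j] = arr[j], arr[j - 1]
            j -= 1
    return arr
```

The while loop is the "reverse bubble": on an already-sorted array it exits immediately after one comparison per element, which is what gives insertion sort its best case of n.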
34. 5-5 Quick Sort: Okay, so let's get into our discussion about the first algorithm that's going to be faster than those other algorithms we talked about. When we look at those other algorithms' runtimes, we notice that in the worst-case scenario they're all n squared, and even in the average case they're all n squared. One of them is slightly faster in the best-case runtime, but that's only if the array is already sorted, so it doesn't really count; that's one out of an infinite number of cases. So all we really want to look at is the average and worst cases, and we can see they're all pretty bad. What we're going to do is look at the next step in the sorting algorithm evolution, and we're going to understand a much quicker way of sorting in the average case. You'll notice that a couple of the other cells in our chart stay the same, but our average time is going to improve; and then when we get into merge sort, we'll be able to improve everything. So what we're covering in this lecture is quick sort. We'll talk about the intuition behind quick sort and how it actually works and operates, and I'll do an example showing you step by step exactly how quick sort works. Then we'll do a little visual, and the next lesson will go over the runtime, because both of these parts are a little bit complex in their own right, and I don't want to make a 40-minute video; I'll break it up so we can understand each part individually. So let's get into the intuition for quick sort. Let's go ahead and create ourselves a nice little array up here; we'll make it slightly larger this time. It's a little too close to the top, so let's bring it down a bit. We'll make it eight slots big, splitting it down the center until we have eight different spots, and then fill it with these numbers right here.
Let's say 15, 22, 13, 27, 12, 10, 20, and then 25. Let me fix this 10 right here so it's legible. So now we have our array. What quick sort does is work on this idea of something called a pivot. So what exactly is a pivot? A pivot is the element we use to separate the array. What we're doing is creating sub-arrays from this array: we apply the quick sort algorithm to smaller and smaller sub-arrays until each one has only one or two pieces of data, and then we bring it all back up. This creates a divide-and-conquer approach that is going to speed up the overall runtime. This will all make sense as we go over it a little more, and the runtime lecture will explain why it's quicker, so just go along with it for now. Here's the rule: we choose a pivot, everything less than the pivot goes into the left sub-array, and everything greater than the pivot goes into the right sub-array. So what is our pivot? Which number do we choose? Well, remember that the data is completely hidden from us: we don't know what any of these numbers are in advance, so we can't find averages or medians, because the array is unsorted. We could try to find the middle value, but again, it's unsorted; we'd have to sort it first to find the middle value, so that doesn't make any sense. What we can do is just choose the pivot as the leftmost element. So in this situation, we're simply going to grab the leftmost element, the 15, as our pivot.
Now we're going to put all of the numbers less than the pivot on its left and all of the numbers greater than the pivot on its right. So we take our pivot and draw it down a level, like so, and now it sits in the center. And by definition, this pivot is now in its sorted position, because everything to the left is less than it and everything to the right is greater than it. So we're done with that number. What we need to do now is separate the rest so that we have a left sub-array and a right sub-array. Let's draw our sub-arrays in and figure out which numbers are less than or greater than the pivot. On the left we have 10, 13, and 12, because those are all the ones less than 15. There are a bunch of different algorithms for scanning these out: you could go left to right looking for the numbers less than the pivot, in which case this might come out as 13, 12, 10; or you could go right to left, or outward from the pivot in both directions. We'll go with whatever order this particular algorithm produces, because it'll make the most sense as we walk through it. On the right we have 22, 27, 20, and then 25; all the numbers on the right are greater, all the numbers on the left are less. And now we apply the same algorithm to both of these sub-arrays. We find each pivot, and again it's the leftmost number: the 10 on the left side and the 22 on the right side. Let's go down the left subtree first, and this is the idea of a recursive algorithm: it tackles the left subtrees first.
It goes all the way down the left, then comes back up and tackles the right subtree next. And a subtree here just means a separate sub-array. So we find our pivot point; it's that 10 we just picked. The 10 comes down a level, and we put all the numbers less than it to the left and all the numbers greater than it to the right. But notice that 10 is actually the minimum here, so our left subtree technically has nothing in it; in this situation the 10 ends up being our leftmost number, and the right subtree ends up holding the other two numbers, the 13 and 12. Then over on the right side we split once more, with 22 as the pivot. We draw the 22 down a level, and to the left of it we're going to have 20; and, actually, my bad, 25 is greater than 22, so only the 20 goes left, and to the right go the rest of the numbers, the 27 and 25. And you'll notice that we still have sub-arrays bigger than one in both of these: the 13, 12 pair and the 27, 25 pair. So we choose another pivot point for each, here and here. The 13 goes down as the pivot, and the 12, being less, goes to its left; there's nothing in the right sub-array. Then we do the exact same thing on the right: the 27 goes down as the pivot, and the 25 goes to the left of it, its own little sub-array, like so. And now what we have is actually a sorted array. It doesn't look like it,
but we have a sorted array, because all of this actually takes place inside the array up at the top. If we trace this tree now, we read the array off from smallest to greatest: we have the 10, then we go down and we have 12, 13, then we come back up, and then we have the 15, 20, 22, 25, 27. So our sorted array is produced through this method, back in the original eight slots we started with. Let's do that trace just one more time. We go down the left side first, because this is a recursive algorithm: we split in half, then split this one in half, and we check. There's nothing on the left of the 10, and the 10 is a single element, so we write it down and check its right side. There are still two elements there, so we recurse again: we check the left side, there's an element there now, the 12, so we write it down; we check the middle, there's the 13, so we write it down. And now there's nothing else to check, no more steps, so it retraces back up. We've now checked the whole left subtree, so we take the middle, and we have ourselves the 15; and then we go ahead and check the right side. And the first thing once we get there, because we always check left first, is we go down and there's a 20 by itself, so we grab that number and it's in our array. We check the middle, there's a 22 by itself; then we check the right sub-array, and there's still stuff there, so we apply the algorithm again: we go down the left and the 25 is all by itself, the middle is the 27 by itself, and at the very end over here there's nothing left. So that's the end, and we have a sorted array. All of this works off of that pivot at the very beginning, because we're splitting this problem up into sub-problems.
And with these sub-problems, we split up into more sub-problems and more sub-problems until we get down to units of one. And once we do that, because of how the algorithm works as a whole, everything is sorted, and then all we have to do is trace this new tree we've created; we'll be going over tree traversals like this a little bit later. But imagine that all of this is happening in place, meaning it isn't going in and creating these sub-arrays separately; it's actually doing it all within the original array up at the top. You'll notice that as it does all this, it's placing elements in the right spots as it goes down: it sorts the whole left side doing this sort of thing, then it sorts the whole right side doing this sort of thing, and at the end it looks like the final sorted version. So let's take a look at this visually, and I think it will make a little bit more sense. The great people at visualgo.net have created these sorting visualizations we've been using; if you want to check out any of this, make sure to go to their website, because they provide all of it for free. So we've created our array; now let's begin the sort right here, and you'll notice what it's doing is allocating a sub-array. It chose the 3 over here as the pivot point, and what it's doing now is sorting the left sub-array and then the right sub-array of that new pivot. It runs through, finds a pivot, then sorts the left side and the right side of the little partition it created; it finds that yellow pivot, sorts around it; we just sorted the left, so now we're going into the right side and sorting it over here. And, you know, this is a little bit more complex.
It's more complex than the example I went over, and that's because when you actually implement it in code, it has to work this way so that it only swaps; only swapping is very, very important. If we created all these little sub-arrays separately, it would take extra space and extra time, and then we'd actually have to do extra traces, which would add to the final time. So let's play this one more time. We can create a new array here and actually enter the numbers we went with: 15, then 22, 13, 27, then 12, 10, 20, 25. Then we click go right here, and now this is the array we had originally, so let's see how it does it. We click go, then sort, and you'll see what it's doing: it's picked the 15 as the pivot, and it's grabbing all the numbers that are less than it. You'll see that it comes up with this 10, 13, 12 order as well, because it's doing swapping; it's just grabbing the numbers as it finds them. Let's watch one more time. You see the 15; it's grabbing each number that's less than it and swapping it into the next available space. It grabs the 13 first and swaps it into the space to the right of the 15; it grabs the 12 next and swaps it into the next space in that set-aside region; then it grabs the 10 and swaps it into the next place; and finally it swaps the pivot into position at the boundary of that list of smaller numbers. So, like I said, it's a little more complex in how the code actually implements it, but it's doing exactly what we've done: it's created this left subtree, the 10, 13, 12, and this right subtree, the 22, 27, 20, and 25, and the pivot is down the center. You'll notice the pivot went orange, which means it's in its completely sorted position.
And now what it's tackling is the left subtree right here, and you'll notice that when it created that left subtree, nothing actually happened; the 13 and 12 stayed in the same place. So it went down and did one more pass, and then it swapped the 12 and 13 to put them in the right spots. And now it's tackling the right subtree over here, and you'll notice it's doing the exact same steps as we did: separating it down into this partition right here, swapping the positions, and making sure it's sorted. And now you'll see that the entire graph is sorted. So this is basically how quick sort works: it works off of splitting the problem into smaller and smaller problems, and that gives you a quicker way of sorting. We're going to see exactly why it's quicker in the next lecture, where we talk about runtimes.
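Putting the walkthrough into code, here is a minimal in-place sketch in Python, assuming, as in the example, that the pivot is always the leftmost element (the names here are mine, not from the course):

```python
def quick_sort(arr, lo=0, hi=None):
    """Sort arr in place by recursively partitioning around a pivot.
    The pivot is the leftmost element of each range, as in the walkthrough."""
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:                        # zero or one element: already sorted
        return arr
    pivot = arr[lo]
    boundary = lo                       # last index of the "less than pivot" region
    for j in range(lo + 1, hi + 1):
        if arr[j] < pivot:              # smaller element: swap it into the left region
            boundary += 1
            arr[boundary], arr[j] = arr[j], arr[boundary]
    arr[lo], arr[boundary] = arr[boundary], arr[lo]   # pivot lands between the halves
    quick_sort(arr, lo, boundary - 1)   # recurse on the left sub-array first
    quick_sort(arr, boundary + 1, hi)   # then the right sub-array
    return arr
```

Running `quick_sort([15, 22, 13, 27, 12, 10, 20, 25])` reproduces the example: the 15 lands between `10, 13, 12` and `22, 27, 20, 25`, and the recursion sorts each side in place.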
35. 5-6 Quick Sort Run Times: So now let's go over the runtime of the algorithm we just talked about, the quick sort algorithm. How fast is it? We need to think of it like this, and this is going to be the same intuition for the next one, where we discuss merge sort; like I said, if we go over this two times in a row, it's really going to cement, and it'll start making sense. What we're doing is basically taking this problem and breaking it down into a bunch of sub-problems, and the way that helps is that it breaks the work down into levels. You'll see that on the first level, when we partition, we touch at most n elements to do the comparisons; I believe it technically comes out to slightly fewer comparisons, but we'll just call it n. So this level takes n; the next level down takes n as well, or slightly less than n because the pivots drop out as we go; and the level below that takes n as well. So what we have here is a program that takes about three n. What we need to examine is how this grows over time: how does the number of n-cost levels grow? We can call that number x. Because once we understand how x grows, we'll understand the runtime of this algorithm. And we can think about this logically. How many elements did we have here? We had eight different values, and in the best-case scenario, the tree bottoms out in eight separate ones.
So, eight separate parts at the bottom. Or we can look at it in reverse, which is how this actually works: we started with eight, then broke it down into two groups of four, broke those down into groups of two, and then broke those down into ones. And you'll notice that's exactly what this tree emulates: we had the eight, we broke it down into roughly two groups of four; because the pivot is removed it's slightly less on each side, but it keeps that general shape as it goes down the levels. So how many levels did we have? We had one, two, three levels before we got down to all ones and the program terminated. That means at eight elements we have three levels. Now, how many levels would we have if we started with 16? What we can do here is highlight this tree, duplicate it, and put the two copies side by side; that's just a 16-element version. And after all that work of copying and pasting, all we really did was add one additional level, the level of eight. So now we have n, n, n, n: if we have 16 elements, we have four levels. And imagine if we did this again, copying and pasting both to the right and the left and adding the next level of 32: well, 32 would give us five levels, and 64 would give six, and 128, if I did my math right, would give seven, and so on and so on and so on.
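We can sanity-check that doubling pattern with a tiny helper that just counts how many times n can be halved before reaching one; a quick illustrative sketch (the function name is mine):

```python
def levels(n):
    """Count how many times n halves before reaching 1, i.e. the depth
    of an evenly-split quick sort recursion tree."""
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

for n in (8, 16, 32, 64, 128):
    print(n, levels(n))   # 8→3, 16→4, 32→5, 64→6, 128→7
```

Each doubling of n adds exactly one level, which is the log-base-2 relationship we're about to name.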
So what that means is that the number of levels, remember, that's the x we're trying to find, grows at a very particular rate: it only goes up by one each time n doubles. It grows at a reverse exponential rate, meaning the distance between level counts, the number of extra elements we need to gain one more level, slowly gets larger and larger. For example, here it took 8 more elements to get another level; then it took 16 more to get a level, then 32, then 64, then 128, and this goes off to infinity. So what we have is a chart that looks like this over time, and we've talked about this before and will talk about it again in merge sort: this chart is log n. The number of levels, the number of times we have to pay the cost n, is log n. So our equation, remember, was x, the number of levels, times n, the cost per level; that comes out to n times log n, or the way it's usually written, the standard notation, is n log n, log-linear time. So we have log-linear time, and this was for the best case, where the pivot splits the array right down the middle: the best runtime comes out to n log n. Now let's think about the average runtime. The average runtime assumes that, on average, we choose a pivot point somewhere near the center, so if you do all the math out, it comes to the exact same thing: n log n. But what about the worst case? What would the worst-case scenario be?
Well, the worst-case scenario would be creating this same tree except that it never splits evenly: say 15 elements land on one side and nothing on the other, and we have to split that big side down again. So let's look at the worst case a little bit here. Say we have 16 elements and we split it wrong: we chose a pivot point where everything was greater than the pivot, or everything was less than it. So instead of having a sub-array over here, we have nothing over here, and on the other side we have n minus 1: the exact same array, just one element smaller. That didn't help us at all. And if we keep doing that, if we keep choosing the smallest or largest element as the pivot, we never get this relationship where we break the problem up into little sub-components and get it down to the minimum number of n-cost levels. Let's go over what exactly that means. We chose a pivot point where 15 elements were less and only one side had anything in it; that side can't shrink by more than one, so we have to add a level. Then we chose a pivot point where 14 were on one side, then 13, then 12, and we can keep this going, 11, 10, and I'm doing this exhaustively to show you something: 9, 8, 7, 6, 5, 4, 3, 2, 1. So how many levels did this end up being when we split it down this horrible way? We got n, n, n, n, n, all the way down: an n-cost level every single time, because it doesn't go faster just because the split was lopsided; it still has to touch all the remaining elements at each level. So what we ended up with was 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, or roughly, counting the first one, 16 n-cost levels.
So in this situation we actually get somewhere between n minus 1 and n levels, and we can just forget about the minus one part. That means in the worst case scenario (this up here was the best and the average case) things are a whole lot different: in the worst case, x, the number of levels, equals n. And now you begin to see what's going on here. If x equals n, that means we have n times n in our worst case scenario, and that means we're getting n squared. In our worst case, we've chosen a pivot point that never breaks the array up evenly, or even close to evenly. In our average case, this could be a 6 and this could be a 10; instead of two fours it could be a two and a six or something. It breaks up pretty close to even, and we might just add one or two extra n's, so it's still basically n log n. But in the worst case scenario, and in anything next to the worst case, it's basically this n times n relationship, and that means we get n squared. And so at the very end here, you'll notice that our worst case did not improve: quicksort's worst case is still n squared. We got a much better average case, though, and this is very important. Quicksort looks slower in one column, which I guess it technically is if the array is already sorted. However, we usually don't deal with arrays that are already sorted; there's a reason the data came into the sorting algorithm. So the n squared case really only shows up if the array was already sorted, or if you keep choosing the worst possible pivot point. If the chosen pivot is even reasonably close to the middle, it might look worse, but it's still a perfectly fine n log n.
This is still a perfectly fine algorithm, even though in the already-sorted edge case both of those columns come out worse. Our average time is a lot quicker here, and this is a huge difference right here: n squared versus n log n. Over the long run, if you employed this, you would save, like I said, a ton of time, because the average case is what comes out most often. Sometimes you land near the best case, sometimes near the worst, but it all averages out right into the middle, so you can see that this is a faster run time overall. But we don't want the worst case to happen, because even one run that hits n squared might be enough to crash our program. It might be too large for our program to handle; maybe we'd run out of memory in that situation, or the processor would hit 100 percent and not be able to execute. So what we want is to guarantee that the sort runs in n log n, and that is where we're going to jump into merge sort, which gives us exactly that guarantee. I'll explain how when we get into the next lecture.
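The pivot behavior described above can be sketched in a few lines of Python. This is my own illustration, not the course's code: the pivot is always the first element, which is exactly the choice that degrades toward n squared (and very deep recursion) on already-sorted input, while random input splits roughly in half.

```python
def quicksort(arr):
    """A simple (not in-place) quicksort sketch.

    The pivot is the first element. On random input the partitions are
    roughly balanced, giving about log2(n) levels of recursion. On an
    already-sorted array every partition is (nothing, everything else),
    giving n levels: the n-squared worst case from the lecture.
    """
    if len(arr) <= 1:
        return arr
    pivot, rest = arr[0], arr[1:]
    left = [x for x in rest if x < pivot]           # less than the pivot
    right = [x for x in rest if x >= pivot]         # greater than or equal
    return quicksort(left) + [pivot] + quicksort(right)
```

For example, `quicksort([5, 3, 8, 1])` returns `[1, 3, 5, 8]`. Production quicksorts avoid the bad case by picking the pivot randomly or via median-of-three, which makes the worst case vanishingly unlikely.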
36. 5-7 Merge Sort: So now let's go over an algorithm that is even faster than the other algorithms we have covered, and that is merge sort. Merge sort uses a sort of intuitive recursion to make it faster, and we're going to go over exactly how it does that. We're actually going to break this one up into two videos: one on the intuition behind it, because it's a little bit more complex, and one on the run times, because that as well is a little bit more complex and needs its own explanation. So let's get started with the intuition, and then we'll look at a visual model of it running as well. The way merge sort works is that it takes the array and splits it into little sub-arrays. For example, we take this array up here, and let's say there are just four elements: 7, 10, 9, and 8. What we do is separate those down into a left and a right half. So over here we have a 7 and a 10 (let's keep the colors consistent right here), and on the right we have another sub-array with the 9 and the 8. You'll notice that we don't do any sorting at this step; we're just splitting it down into its little sub-arrays, like so. Then we split it down again until each element is in its own box, completely and totally independent of the others: 7, 10, 9, 8. And then what we do is recombine them: we recombine these two into a new array of two, and those two back into another array of two. You might be thinking: what? We just split them up; what are we going to do when we recombine them? This is where the sorting actually takes place, so stay with me. What we're going to do is create ourselves a cursor on the left and a cursor on the right. At this step it doesn't make a lot of sense, but when we get into larger arrays, it will start to make sense.
So we have a cursor on the left and a cursor on the right, and what we're going to do is compare the numbers they point to. Right now the cursor on the left is pointing to a 7, and the cursor on the right is pointing to a 10. Which one of those is smaller? Well, the 7 is smaller, so it goes first, and then the 10 is larger, so it goes next. We do the exact same thing on the right: in this situation the 8 was on the right side, so the 8 goes first, and then the 9 goes right after it. I know that didn't make a lot of sense yet; it makes more sense when we actually have more elements to track, so let's make it make sense. Now we combine everything back down into its final array, so we go ahead and create those cursors once again, one pointing into the left half and one pointing into the right half, like so. We take the two numbers under the cursors and compare them. The first cursor is pointing to a 7 at the current moment, and the next cursor is pointing to an 8. We compare these two; the 7 is the smaller one, so we put it into the first spot of the final array. Now we cross that number out, since we're no longer using it, and move the cursor to the next element of whichever array was just used. Since the left array was used, we take the red cursor and move it to the right, and it now points at the new number, which in this situation is a 10. We grab both numbers and check again: which one is smaller? Well, now it's the right side, so we write in the right side's number, cross it out, move the cursor over, and then redo the comparison.
Now the right side is a 9, so we put in the 9, and then we put in the last element, the 10 from the left over here. And you'll notice that during this step, how many times did we run? We ran only n times; we didn't run n squared or anything like that, because all we did was move the cursors. We touched the 7 once, the 8 once, the 9 once, and the 10 once, and then we touched them each once more on the next level. So it was just two of these n's, and this is going to tie into the run time later; just keep it in mind, because, like I said, when we do the run time of all of this, it will start making a little more sense. But let's do a bigger example. Let's say we've gotten down to the final step with two four-element arrays, and we've got to the point where these are each already sorted, just like here: sorted from smallest to greatest, but not yet sorted as a whole. So let's fill in these numbers. Say this one is 1, 3, 15, 16, and this one is 10, 11, 14, 17, like so. What we're going to do is exactly what we've done before. Two sub-arrays combined down into each of these, and now we have two sorted arrays, so we're going to combine them right here into one final array that's all the way big, the original size, which was eight slots total. And now we use the exact same method as before: we grab a left cursor and a right cursor, they just walk along the arrays, and we compare whichever elements they point to for the smallest. So right here we have the 1 and the 10 being compared at the current moment. And so let's move this up.
So we have some room now. We're comparing the 1 and the 10. Well, obviously 1 is smaller, so the 1 goes into the first spot. We go ahead and cross out the 1, and then we move the cursor over by one. Now we compare the 3 and the 10. Well, 3 is still smaller, so we put the 3 in there, cross it out, and move the cursor over to the 15, like so. Now we can do a comparison once again, and now the right side is smaller: the 10 is the smallest, so we go ahead, put the 10 in here, cross it out, and move. The 11 is now the smallest, because the next number over here is 11, so we put the 11 in there. We're going to keep doing this process over and over until we get to the very end, but I want to go through the entire example so you understand exactly what we're doing here; it will really help you get the intuition, which will help you understand the run time a little later. So now we have the 14, and again it's the smallest, so we do the exact same process as before: cross out, move the cursor over. Now we have a 17 over here, so we do a comparison with the left side. Well, the 15 is now the smallest, so the 15 goes in here; cross out, move. Now we have 16 versus 17. The 16 is the smallest, so the 16 gets placed here, it gets crossed out, and the cursor moves on. But there's a special case right here: when the cursor moves past the end of the array, it understands that there's nothing past there, nothing it can grab once it gets to this point. What it does is just copy over the rest of whatever is on the other side. In this situation there's only one element left, so it just copies the 17 over. However, in some situations it won't be that easy; maybe we'd have a 17, 18, 19, 20 over here, and we don't keep comparing.
Once we run out of one of the sides, there's nothing to compare anymore; that side is finished, so what we do is just copy the rest over from the other array. Because it's already sorted, we know the remaining elements on that side are greater than any of the numbers we've placed. That's something to keep in mind: at some point you'll run out on one array but still have three or four elements on the other, and that just means you copy those elements straight into the final array. And now what we have is our sorted array. So that is the intuition behind how merge sort works. You take an array and you recursively split it until you get down to single elements. The algorithm itself doesn't usually physically split anything; it just understands that indices 0, 1, 2, and 3 are all going to be separate entities. The human way of doing it is to draw these splits; the computer has a slightly faster way of tracking all of this, but that doesn't really matter, and it's actually very easy to implement whenever you look up the code for merge sort. But this is how it works: once we get to our final merge step, we create our little cursors, and they just compare back and forth. And you'll notice that even in this merge, where there are eight elements, it only took eight steps, because all we did was walk the cursors along: 1, 3, 10, 11, 14, 15, 16, 17. It only touched each element a small, fixed number of times, so even at this step it was n. And like I said, this is going to become important a little later when we actually look at the run time. But let's now look at a visualization of how this might work on a computer. Let's go ahead and create ourselves an unsorted array here and start it. This is using merge sort, and we're going to go ahead and click the sort button, and then you'll see what it does.
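The two-cursor merge step walked through above can be sketched in Python. This is my own illustration (not the course's code), but it follows the lecture exactly: one cursor per sorted half, always take the smaller element, and copy the leftovers when one side runs out.

```python
def merge(left, right):
    """Merge two already-sorted lists using a cursor on each side.

    Every element is touched once, so this step is O(n), which is
    the fact the run-time analysis in the next lecture builds on.
    """
    merged = []
    i = j = 0                         # the left and right cursors
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:       # <= keeps equal elements stable
            merged.append(left[i])    # left element is smaller: place it
            i += 1                    # move the left cursor over
        else:
            merged.append(right[j])   # right element is smaller: place it
            j += 1                    # move the right cursor over
    # One side ran out; the other side's remainder is already sorted
    # and larger than everything placed so far, so just copy it over.
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

Running it on the lecture's example, `merge([1, 3, 15, 16], [10, 11, 14, 17])` returns `[1, 3, 10, 11, 14, 15, 16, 17]`, with the trailing 17 arriving via the copy-the-rest step.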
It first separates these sides, and (let's slow this down a second) what it's doing is separating into twos, sorting them, then separating and sorting the next size up, and then fours. And then it does the exact same thing at the next level, so two twos make it into a four right here, and then it's going to sort the eight right here. So this is the recursion going in: it actually started by splitting this in half, and then it went all the way down until it got to these two right here, sort of like in our example, where it got all the way down to single elements. Then it started sorting its way back up, and that's what it's doing right now. Now it's sorting the left half and making it all sorted, and then it begins the right half. It takes the first two numbers and sorts them, then it takes the next two numbers and sorts them, and now it has two sets of two. Now it can merge these, because they're exactly the same size: it compares them and sorts them again, just grabbing the least element on each side and putting it back up, and now it has four that are sorted, so it moves to the next spot. And even though there are three here, it doesn't always have to be perfectly symmetrical; you can have an odd number and still have this work. It goes ahead and separates these, sorts them back down, chooses the least each time, and creates a sorted side on the right. Then, after this is done, one final pass puts these all back, and now we have two sorted halves, eight elements plus this 17th one, which is as close to two eights as we can get. Then it begins the final sort, and you'll notice it grabs the least element from each side. Let me speed this up (if I can actually click on this), right there, and then it just goes ahead and puts it all back together. So that is the intuition; that is how merge sort works. Let's jump into the next lecture,
where we'll talk about the run time and why, exactly, this is faster than the others.
37. 5-8 Merge Sort Run Times: So in the last lecture we talked about the intuition behind merge sort. In this lecture we're going over the run time: what is the run time of this sort, and why is it faster than other sorts? Well, to do that, we need to understand what I was talking about in the last lecture, which was: at each one of these levels, how many operations does it need to do, how many comparisons are we doing? And that always comes out to n, because the cursors, working in their two different places, just walk across: 1, 2, 3, 4. And it's exactly the same on the smaller levels, where we grabbed the 7, then the 8, then the 9, then the 10. It's going to be n at every single one of these levels, so we know that the basis of our run time here is n. But how many times do we do n? What is n going to be multiplied by? That's the key to this run time. So let's look at the example we had: with four elements, it took two separate levels of n to get it sorted. So with four there were two n's. How many would there be at eight? Let's do that. Say we get to the part where we have eight separate little ones: 1, 2, 3, 4, 5, 6, 7, 8. Then we combine those into a bunch of twos, combine those into fours, and combine those back into an eight. Each one of these steps is the same as the step over here, just with eight instead of four. So it comes down to something that looks like this, and we can see that there's going to be an n here, an n here, and then an n here for the final combination. So with eight, we have three n's that we have to use. Well, how many would there be for 16? If you think about it, going to 16 is only going to add one more layer to this: instead of this many single elements, we're going to have a duplicate of this whole structure.
So let's see if we can duplicate this. Let's move this out of the way, grab this entire thing, duplicate it, and move the copy to the right over here. So now there are 16 elements, two sets of eight, and then we come down and all we have to do is one more combination on here to get the original 16. So what did doubling the size get us? It only got us one more n. And this intuition carries through the entire process: if you double this again to 32, it's only going to take 5 n's; 64 is only going to take one additional n, 6; 128 is only going to take 7; 256 only 8; 512 only 9; 1,024 only 10; and so on and so on. And so what you see is this key difference: the larger the number, the fewer additional levels it takes, in an actual, predictable way. Remember that reverse exponential we talked about earlier, the log of n? That is exactly what this run time turns out to be. Over time, as n gets larger and larger, the number of times we have to do this n operation only goes up by a small margin, because we're splitting in half; we got to a point where it took 512 additional numbers to add just one more n. So that means the number of n's we're using is going to be multiplied not by a question mark, but by log n. And that all adds up to the fact that we're going to have an end result of n times log n. And that means that no matter how big or small this array is, whether it's sorted or unsorted, we always do the exact same operations. It could come to us completely sorted or as unsorted as possible, and it wouldn't matter.
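The "doubling only adds one n" pattern can be checked with a few lines of Python (a quick illustration of my own, not from the course): count how many merge levels it takes to collapse n single elements back into one sorted array, and compare with log base 2 of n.

```python
import math

def merge_levels(n):
    """Count the merge levels for an array of size n:
    keep halving until every piece is a single element."""
    levels = 0
    while n > 1:
        n = (n + 1) // 2   # halve, rounding up for odd sizes
        levels += 1
    return levels

# Doubling the input size adds exactly one level: the log2(n) pattern.
for n in [4, 8, 16, 32, 1024]:
    print(n, merge_levels(n), math.log2(n))
```

This reproduces the numbers from the lecture: 4 elements take 2 levels, 8 take 3, 1,024 take only 10, so the total work is n comparisons per level times log2(n) levels, i.e. n log n.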
We'd still do the exact same operations; it's still n at every one of these levels. And that means that not only is the best case n log n and the average case n log n, the worst case is also going to be n log n, because, like I said, the input could be completely and totally unsorted and it doesn't matter: we go through the exact same steps every single time. The array could come to us as 7, 8, 9, 10 and we'd still break it up and do the little comparisons; it could come to us as 10, 9, 8, 7 and we'd still break it up into the exact same number of comparisons. There's no change depending on the numbers, which means all three of these cases are identical to one another, and that makes this the fastest algorithm so far: not only is our best case n log n and our average case n log n, but for the first time our worst case is n log n. We can now look at this and understand that even in the worst case it's always going to be n log n, which is substantially faster than these over here. Remember that graph we showed at the very beginning of the course: n squared curves up toward a straight vertical line, while n log n stays at a much shallower angle, and the gap between them keeps growing. If we put a million elements into this one and a million into that one, the results would be night and day: this one would take weeks, while this one might take a few minutes to maybe an hour. And that is the power of just thinking about this. Why would we ever implement those other sorting algorithms when we can implement one that's even faster right here, one that's guaranteed to run in n log n? And with these comparison sorts, we can actually come up with one final piece of information from all of this.
And that is: the fastest you'll ever be able to do a comparison sort, a sort in which you are actually comparing the numbers to each other, is n log n. There is no faster comparison sort out there. By comparison sort I mean, you know, you have 2, 4, 6, 8, and to sort this you're comparing this number with that number, or this number with this other number; you're comparing elements within the array itself. The fastest you'll ever be able to do that is n log n, so merge sort is the fastest of those algorithms. There are other, faster sorting algorithms, but they take a lot more intuition: they analyze the number itself to figure out where it goes without actually comparing it with other numbers. It's a little complex, and we may touch on that a little later. For the basics of computer science, all we really have to understand is comparison sorts, and that the fastest we can ever get is n log n. Merge sort gets us that by using this intuition of breaking everything into halves, so that our program only scales by n times log n, and that gets us this fast sorting algorithm. Merge sort: a really fun sort to go through, and something that's really powerful in the computer science realm.
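Putting the halving and the O(n) two-cursor merge together gives the full algorithm. Here is a minimal Python sketch of my own (an illustration, not the course's code), self-contained so the merge is inlined:

```python
def merge_sort(arr):
    """Split in half, sort each half recursively, then merge.

    Every level of the recursion does O(n) total merge work and
    there are about log2(n) levels, so the best, average, and
    worst cases are all O(n log n): the input order never changes
    which steps we take.
    """
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Two-cursor merge of the sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])    # copy any leftovers: one side ran out
    merged.extend(right[j:])
    return merged
```

Whether you hand it `[7, 8, 9, 10]` or `[10, 9, 8, 7]`, it performs the same splits and merges, which is exactly the guaranteed-n log n point made above.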
38. 5-9 Stable vs Nonstable: So let's discuss one final thing with sorting algorithms, and that's whether an algorithm is stable or non-stable. What exactly does this mean? Well, say we sort an array that looks like this: 1, 2, 5, 2. A stable sort guarantees that this 2 on the left always comes out before that 2 on the right. We can think of it like this: call this one 2a and that one 2b. Then it needs to always happen that when the final array is built, it comes out 1, 2a, 2b, 5, and in no other way. If it comes out 2b and then 2a, that is an unstable sort: the two identical values are not kept in their original relative order, so we lose the original positioning of equal elements relative to one another. And this might be important. For example, these could be people waiting in a line or something like that, and having one person jump in front of another just because they have the same first name doesn't make a lot of sense; you could get into more important cases as well, but that's the example that came to mind. So a lot of times, or at least sometimes, stability matters. So if we look at our algorithms, which ones are stable? Well, that comes down to this: bubble sort is stable, because whenever we swap values, we only swap adjacent values that are out of order relative to one another, meaning a 2 is never going to swap with another 2, and it can't jump ahead of or fall behind another identical value in any way. So bubble sort comes out to be stable, insertion sort comes out to be stable, and merge sort comes out to be stable. And I won't go over exactly why.
These three are stable; just understand that in their implementations it works out that they're stable. What we're going to go over are the two that aren't stable. These two right here are not stable, and the explanation is that they can destroy this ordering: 2b can end up before 2a. Now, in the graphic it just says 1, 2, 2, 5, and it doesn't matter which 2 is which; it's still, quote unquote, sorted. However, if it's 1, 2b, 2a, 5, then it's sorted, but not stably sorted. So let's go over our first example: selection sort. Remember how selection sort grabs the lowest number and then swaps it with the first available space? Let's go over a counterexample, an example that shows the lack of stability, and all we need for this is just three array slots. So let's go with a 2 (we'll put the little a here), then 2b, and then a 1 right here. What we do is look for the least number, the minimum. We go down the array, find the 1, and swap it with the first element. So then the array looks like this: the 1 comes at the beginning, the 2b is still here in the middle, and the 1 and the 2a have swapped. So now it's sorted. But you'll notice that because of the way selection sort handles data, we have a problem right here: the 2b comes before the 2a. And that can happen with any identical values if they're given in a certain pattern, which means that overall we cannot guarantee the stability of selection sort. Selection sort is not stable. So let's take a quick look at the other one that is not stable, and that is quick sort. Why quick sort is not stable comes down to the exact same thing. Remember, with quick sort we're choosing a pivot.
So with the pivot in mind, what we do is put everything that's less than the pivot to its left and everything that's greater to its right. What if we had the same example here? We have a 2a, a 2b, and then a 1 on the right side over here, and let's say the pivot is chosen to be 2b. So we bring the 2b down, and that is our locked-in pivot. Then we bring everything that's less than the pivot over to the left, so we put the 1 over here, and everything that's greater than or equal to the pivot goes on the right side. And you'll notice a problem here: when we do that, the 2a is greater than or equal to 2b, so the 2a goes to the right. Now every partition is down to a single value, each one an O(n) step away from done, so we just go ahead and write out our final array down here. And just as we were expecting, because quick sort is not stable, the 2a and the 2b have swapped places because of the pivot that was chosen. So for this one we cannot guarantee stability either. So that is just a quick touch on stability. Whether an algorithm is stable or not matters when the original order of equal values matters, meaning the a should come before the b should come before the c: the left-to-right association matters. And something else we can tack onto these algorithms is this: if we use merge sort, insertion sort, or bubble sort, we keep our stability; however, if we use selection sort or quick sort, we lose our stability.
We could end up with names swapped, values swapped, that should not have been swapped. And so that is stability.
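The selection sort counterexample above can be reproduced directly in Python. This is my own sketch (not the course's code), sorting records by their number while carrying a letter tag so we can watch what happens to the equal keys:

```python
def selection_sort(arr, key=lambda x: x):
    """Classic selection sort: swap the minimum into each position.

    The long-range swap is what can break stability: an element can
    be thrown past an equal element when it gets swapped out of the
    way of the minimum.
    """
    arr = list(arr)                    # work on a copy
    for i in range(len(arr)):
        m = min(range(i, len(arr)), key=lambda k: key(arr[k]))
        arr[i], arr[m] = arr[m], arr[i]
    return arr

# The lecture's example: 2a, 2b, 1, sorted by the number only.
records = [(2, "a"), (2, "b"), (1, "c")]
print(selection_sort(records, key=lambda r: r[0]))
# The swap of the 1 with the first slot throws 2a past 2b,
# so the twos come out as (2, "b"), (2, "a"): not stable.
print(sorted(records, key=lambda r: r[0]))
# Python's built-in sort (Timsort, a merge-sort derivative) is
# guaranteed stable, so it keeps (2, "a") before (2, "b").
```

This mirrors the three-slot walkthrough exactly: selection sort returns `[(1, "c"), (2, "b"), (2, "a")]`, while the stable built-in returns `[(1, "c"), (2, "a"), (2, "b")]`.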
39. 5-10 Sorting Algorithm Real World Examples: So now that we have a good understanding of sorting algorithms, and we've seen a lot of different sorting algorithms, let's go ahead and look at some real world examples of them. So there are a lot of different uses for sorting algorithms. Sorting algorithms are