Optimizing Travelling Salesman and Vehicle Routing Problems | Dana Knight | Skillshare


Optimizing Travelling Salesman and Vehicle Routing Problems

Dana Knight

Watch this class and thousands more

Get unlimited access to every class
Taught by industry leaders & working professionals
Topics include illustration, design, photography, and more


Lessons in This Class

28 Lessons (2h 17m)
  • 1. Introduction and Course Scope (1:16)
  • 2. Optimization (5:00)
  • 3. Metaheuristics (3:44)
  • 4. Search Techniques #1 (5:12)
  • 5. Search Techniques #2 (5:18)
  • 6. Search Techniques #3 (6:51)
  • 7. Simulated Annealing #1 (2:23)
  • 8. Simulated Annealing #2 (5:10)
  • 9. Simulated Annealing #3 (8:08)
  • 10. Tabu Search #1 (1:24)
  • 11. Tabu Search #2 (5:00)
  • 12. Tabu Search #3 (3:15)
  • 13. Constraint Handling (6:26)
  • 14. Travelling Salesman Problem (1:49)
  • 15. Travelling Salesman Problem, Coding in Python #1 (5:07)
  • 16. Travelling Salesman Problem, Coding in Python #2 (5:15)
  • 17. Travelling Salesman Problem, Coding in Python #3 (5:31)
  • 18. Vehicle Routing Problem #1 (1:49)
  • 19. Vehicle Routing Problem #2 (4:21)
  • 20. Vehicle Routing Problem, Coding in Python #1 (3:54)
  • 21. Vehicle Routing Problem, Coding in Python #2 (4:21)
  • 22. Vehicle Routing Problem, Coding in Python #3 (7:08)
  • 23. Vehicle Routing Problem, Coding in Python #4 (4:10)
  • 24. Vehicle Routing Problem, Coding in Python #5 (10:30)
  • 25. Vehicle Routing Problem, Coding in Python #6 (6:15)
  • 26. Vehicle Routing Problem, Coding in Python #7 (4:14)
  • 27. Vehicle Routing Problem, Coding in Python #8 (5:40)
  • 28. Vehicle Routing Problem, Coding in Python #9 (7:59)

Community Generated

The level is determined by a majority opinion of students who have reviewed this class. The teacher's recommendation is shown until at least 5 student responses are collected.

60 Students

-- Projects

About This Class

In this course, you will solve the Travelling Salesman Problem (TSP) and the Vehicle Routing Problem (VRP) using metaheuristics, namely Simulated Annealing and Tabu Search. You will also learn how to handle constraints in optimization problems, and how to code the TSP and VRP in Python.

Meet Your Teacher


Dana Knight

Teacher

Hi! I'm Dana. I'm currently a PhD student in Industrial Engineering. I finished my B.S. in Architectural Engineering and my M.S. in Industrial Engineering, and I'm a certified Lean Six Sigma Green Belt. I enjoy learning new things. My research interest is data science, including deep learning, machine learning, and artificial intelligence.

See full profile

Class Ratings

Expectations Met?
  • Exceeded!
    0%
  • Yes
    0%
  • Somewhat
    0%
  • Not really
    0%
Reviews Archive

In October 2018, we updated our review system to improve the way we collect feedback. Below are the reviews written before that update.


Transcripts

1. Introduction and Course Scope: Hello and welcome. My name is Dana, and I'll be your instructor for this course. This course will teach you how to implement Simulated Annealing and Tabu Search to solve two well-known problems: the Travelling Salesman Problem and the Vehicle Routing Problem. First, you'll learn what optimization is, what kinds of search techniques exist, such as metaheuristics, and why we sometimes use them instead of traditional methods. You'll learn what Simulated Annealing is and how it works, and the same for Tabu Search. We'll also learn what constraints on a problem are and how to handle them. A quick example of a constraint is the amount of resources a company has: say workers can only work eight hours a day, so you are constrained by that and cannot violate it by going above eight hours a day, if that restriction is part of your problem. Then we'll learn about the Travelling Salesman Problem and the Vehicle Routing Problem, how to solve them, and how to implement them in Python. I hope you'll enjoy this course, and I look forward to the next lecture. Thank you.

2. Optimization: In this lecture we learn about the two sorts of problems, continuous and combinatorial, and the difference between P and NP problems. Optimization is finding the optimal solution to the problem at hand. There are maximization problems, such as maximizing the profit of your company, and there are minimization problems, such as minimizing costs. Basically, optimization is about finding the best combination of variables, whatever they are, to optimize a certain objective. Let's start with continuous problems. There are several well-known continuous test functions, such as the Lévy function, the Rastrigin function, the Ackley function, the Rosenbrock function, and the three-hump camel function. Continuous problems mean that the decision variables take on continuous values, so there is an infinite number of possible solutions.
This picture is the Lévy function N.13, and below it is the objective function that needs to be minimized. The global minimum lies where f(x, y) = 0: you minimize the objective function by finding the x and y that give the minimum value of f(x, y). The combinatorial problems, on the other hand, are discrete problems. They're called combinatorial because their solutions take on a set of combinations. The most widely known combinatorial problem is the Travelling Salesman Problem, where the salesman has to go through N cities, visiting each city once and only once, and the objective is to find the route that minimizes the total distance travelled. The Vehicle Routing Problem is a variant of the Travelling Salesman Problem, where the objective is to find the optimal set of routes for a fleet of vehicles in order to deliver to a given set of customers. The knapsack problem is this: given a set of items, each with a weight and a value, determine which items go into the knapsack so that the total value is highest without violating the constraint on the weight the knapsack can hold. So here you can see that the knapsack has a constraint, that it can only hold a certain weight, and you cannot violate that. The Quadratic Assignment Problem involves a set of N facilities and a set of N locations; for each pair of locations a distance is specified, and for each pair of facilities a weight or flow is specified. The objective is to assign all facilities to different locations so as to minimize the total cost, which is the distance multiplied by the corresponding flow.
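As a quick illustration of a continuous objective, the Lévy function N.13 mentioned above is easy to evaluate in Python. This is a sketch using the standard textbook formula (the lecture only shows a plot of it):

```python
import math

def levy13(x, y):
    """Levy function N.13 (standard formula); global minimum f(1, 1) = 0."""
    return (math.sin(3 * math.pi * x) ** 2
            + (x - 1) ** 2 * (1 + math.sin(3 * math.pi * y) ** 2)
            + (y - 1) ** 2 * (1 + math.sin(2 * math.pi * y) ** 2))

print(levy13(1.0, 1.0))  # ~0.0, the global minimum
print(levy13(0.0, 0.0))  # a larger value at a non-optimal point
```

Because x and y are continuous, there are infinitely many candidate inputs, which is exactly why these functions are used as optimization benchmarks.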
Now, the two most common classes of problems are P problems, short for polynomial, and NP problems, short for non-deterministic polynomial. An algorithm is said to run in polynomial time if the number of steps required to complete it for a given input is n to the power of c, where n is the size of the input. Polynomial-time algorithms are said to be fast; the most familiar mathematical operations, addition, subtraction, multiplication, and division, are polynomial. For NP problems, however, the effort grows exponentially: the more inputs are given, the more complex the problem becomes. So finding an optimal solution for P problems is relatively easy, but for NP problems it is much harder. If the problem is NP and it is too complex, we will not be able to find the optimum in polynomial time, since the effort increases exponentially. However, if we are handed an optimal solution to an NP problem, we can verify it in polynomial time, even though we cannot find it in polynomial time. I'll see you in the next lecture. Thank you.

3. Metaheuristics: Welcome back, everyone. In this lecture we'll talk about metaheuristics. A heuristic is a technique which seeks optimal or near-optimal solutions at a reasonable computational cost. Metaheuristics are heuristics, but they're inspired by nature, and the beautiful thing about them is that they're not problem-specific: you can apply them to almost any problem you want. In the last lecture we talked about P versus NP problems, where NP problems are much harder to solve than P problems, and as the size of the problem grows, the more complex and harder it is to solve. Now let's talk about local optima versus the global optimum. Say, for example, this is your objective function f(x), it's a minimization problem, and you ended up at some point in the search space. Moving any further takes you higher, which you don't want; you want somewhere low, because it's a minimization problem. So you have local minima and a global minimum. The global minimum is the best overall, but for hard, complex, very large problems it is often very difficult to reach, so sometimes we are satisfied with a local one, especially if it's a good local one. The commonly used metaheuristics are Genetic Algorithms, Tabu Search, Ant Colony Optimization, Particle Swarm Optimization, and Simulated Annealing, of which this course will cover Simulated Annealing and Tabu Search. I have a couple of other courses where I talk about Genetic Algorithms, if you are interested. Now imagine a small minimization problem: you could calculate all the combinations of solutions and simply pick the minimum. The problem is that in real life the set of combinations is much, much larger, so it's hard to calculate them all. That's why we need metaheuristics: if you use traditional deterministic tools to find the global minimum of a large problem, I don't think you'll be able to reach it.
But metaheuristics help you reach a good local minimum, or sometimes the global one, depending on the size and complexity of the problem. If the problem is small, you can use the brute-force method, where you calculate every possible answer, at least for combinatorial problems, since continuous problems have an infinite number of solutions. So for a small combinatorial problem, just calculate all the possible solutions and pick the one that yields the minimum. But for a large problem the number of combinations would be in the millions or billions, so that would be practically impossible. Thank you.

4. Search Techniques #1: Hello and welcome back. In this lecture we'll talk about search techniques. In the last lecture we discussed local minima versus global minima, where you would rather be at the global minimum, but sometimes it's hard to get there, so we're satisfied with a local minimum. Imagine this is the error function, for example, so you want to minimize the error, and this is called your search space. If you're familiar with neural networks, say the parameter here is a certain weight: at one weight you get the lowest error possible, and at another weight you get a higher error. Now, local search versus global search: stochastic methods count on an element of randomness, deterministic methods do not. Local search is neighborhood search, and global search is population-based search. Population-based search generates solutions all over the place, many, many solutions, and then you can choose the best of those. With neighborhood search, you start at a point and move to your neighbors. With global search, you can have a solution here, here, and there, everywhere.
Global search is considered better because you are looking beyond your neighborhood. Stochastic search is also better than deterministic, because the randomness helps you: for example, if you are stuck at a local minimum with a deterministic method, it's hard to get out, unless you have an element of randomness. With a stochastic method you can escape the local minimum. For example, you are here and the optimum is over there, and you don't want to move to a worse point; metaheuristics can help you get out by accepting worse moves in order to find a better solution in the end. Here is an example of a deterministic approach. Say you have five cities; this is a Travelling Salesman Problem, where you want the route that yields the least distance and the salesman has to visit each city once and only once. For example, you can go D, A, B, E, C, or you can go D, E, C, B, A; all these combinations have different total distances. Say you start at city D and choose the closest city, say city C at distance 6; then the closest city to that, E, at distance 4; and so on through B and A, for a total of 35 units of distance. So this is one way of getting a solution, though it might not be the optimal one. This is called greedy search: you start in one place, search for the closest city, and from there the closest again, and so on. It's also called a construction technique, because you start at one place and construct your solution from where you start, by adding cities. Now, this is a five-city problem, so there are only 12 distinct closed tours, such as D-A-B-C-E, D-C-E-B-A, or C-E-D-A-B.
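To make this concrete, here is a Python sketch of both approaches: a greedy nearest-neighbour construction, and a brute-force enumeration of all 12 distinct closed tours. The distance matrix below is hypothetical, made up for illustration rather than taken from the lecture slides:

```python
from itertools import permutations

# Hypothetical symmetric distances between cities A..E (illustrative values only).
cities = "ABCDE"
dist = {
    ("A", "B"): 12, ("A", "C"): 10, ("A", "D"): 13, ("A", "E"): 12,
    ("B", "C"): 5,  ("B", "D"): 9,  ("B", "E"): 8,
    ("C", "D"): 6,  ("C", "E"): 4,
    ("D", "E"): 7,
}
d = lambda a, b: dist[min(a, b), max(a, b)]

def tour_length(tour):
    """Total length of a closed tour (returning to the start city)."""
    return sum(d(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

def greedy(start):
    """Nearest-neighbour construction: always move to the closest unvisited city."""
    tour, left = [start], set(cities) - {start}
    while left:
        nxt = min(left, key=lambda c: d(tour[-1], c))
        tour.append(nxt)
        left.remove(nxt)
    return tour

# Brute force: fix the first city and enumerate the rest -> 4! = 24 orderings,
# i.e. 12 distinct closed tours once direction is ignored.
best = min((["A"] + list(p) for p in permutations(cities[1:])), key=tour_length)

print(greedy("D"), tour_length(greedy("D")))   # D-C-E-B-A, length 43
print(best, tour_length(best))                 # optimal closed tour, length 41
```

On this toy matrix the greedy tour costs 43 while the true optimum found by enumeration costs 41, which illustrates the point in the lecture: greedy construction gives a solution quickly, but not necessarily the optimal one.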
But imagine if you had 15 cities; the number of tours explodes. Now you know why we use heuristics: with five cities we can enumerate all 12 tours, calculate the distance of each, and choose the one with the minimum distance, but with 15 cities the number of combinations you would have to go through makes that impossible. And if finding the optimal solution for 20 cities took an hour, say, then finding the optimal solution for 25 cities would take over six centuries. So this is why we use metaheuristics: if they can't find the global minimum, they help find a good local one. Thank you.

5. Search Techniques #2: Hello and welcome back. We're going to continue the discussion of search techniques. First of all, what exactly is a solution? Say you have this function right here, Himmelblau's function; if you map the function, this is its search space, and you can see it has four minima. A solution is any output of a given function for certain inputs, and any solution can be feasible or infeasible; we'll talk about feasible and infeasible solutions in the upcoming lectures. For any solution, you can put in x = 1 and y = 0, and the output of the function is 136; or, if you want, x = -2 and y = 8, and you'll get 3026. So you can have an infinite number of solutions to this problem. But there are a few solutions that yield the minimum, and the minimum value here is zero: these four points give the least value of f(x, y). For example, at x = 3 and y = 2 you have the lowest point, which is zero, since the optimal value of this function is zero.
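These evaluations are easy to reproduce. Here is a sketch in Python of the standard Himmelblau function, which matches the numbers quoted above:

```python
def himmelblau(x, y):
    """Himmelblau's function; it has four global minima where f(x, y) = 0."""
    return (x**2 + y - 11) ** 2 + (x + y**2 - 7) ** 2

print(himmelblau(1, 0))    # 136
print(himmelblau(-2, 8))   # 3026
print(himmelblau(3, 2))    # 0, one of the four minima
```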
Then you cannot go beyond that: this function will not take you to -1, -2, or -3; the minimum it can reach is zero. So an optimization problem would ask you to minimize this objective function, to find the x and y that minimize it. That is for continuous problems. For combinatorial problems, as we said in the last lecture, a solution is any combination of the discrete elements of the problem. For example, if you have five cities A, B, C, D, E, and say the minimum route is A-D-E-C-B, that is one solution; another solution could be A-D-C-E-B, or even A-B-C-D-E. That's combinatorial problems, and the objective is either minimizing the route or the distance travelled, or maximizing some kind of profit, for example, depending on your problem. Now, neighborhood search versus population-based search. With neighborhood search you start at a random solution, then you look at a neighbor and move to its solution, and from there to the next, and so on, until you converge to either a local optimum or a global optimum; the search stops when your stopping criterion is met. Neighborhood search is a family of heuristic methods for solving computationally hard optimization problems, and examples are Simulated Annealing and Tabu Search. So you start at a random point and search through your neighborhood, so that in the end you converge to a local or global optimum. Notice that here you only look through your neighborhood; with population-based search, which we'll talk about soon, there is a stronger element of randomness spread over the whole space, but here you just care about your neighborhood. So, for example, say you start out here and you want to find the minimum.
You move step by step through your neighborhood until you converge to a local or global minimum. There is also a local method called gradient descent, where you descend based on the slope of the function to find the minimum: if you end up at a point where going further makes the slope bigger, you instead look for where the slope becomes zero. That is gradient descent; you follow the slope of the function down to the minimum point. Population-based search, on the other hand, uses a population of solutions, a set of solutions created at random. You pick a few solutions and from them create a new set of solutions using operators; for example, in Genetic Algorithms you use crossover and mutation to transform your set of solutions into a new set. You evaluate each potential solution, take the good solutions, and from them generate other, possibly better, solutions. See you in the next lecture. Thank you.

6. Search Techniques #3: Welcome back. In this lecture we'll continue talking about search techniques. So how do metaheuristics work, exactly? Say this is your search space. First, you start out with a random initial solution; that's step number one. Then you use neighborhood search or population-based search depending on your algorithm: for example, Tabu Search is a neighborhood search, while Genetic Algorithms are population-based. If you're doing neighborhood search, you work from the initial solution, move around to the neighboring locations, and evaluate each step. A hugely helpful tip is to keep track of every solution: for example, when you move to a point, record its objective value, say its cost, and do the same at every step, because when you finally converge somewhere, you can look back at all the solutions you saved and you may have saved a better one along the way. Keep track of everything, because if you didn't converge to a good local optimum, or the global minimum in this case, you can check back on all the solutions you saved and take the minimum of those, since this is a minimization problem. Now, population-based search creates random solutions everywhere in the search space, and based on some of those solutions you end up creating even "better" solutions; better in quotes, because you hope to get better solutions, but it doesn't happen every time. The beautiful thing about population-based search is that, since you have potential solutions everywhere, you have diversity in your search space. Now, a quick discussion of Simulated Annealing: you start at a random solution, and there is an initial temperature. It's called Simulated Annealing because it's based on the cooling of metal, which is the annealing process. So say you're at some point with an initial temperature; you look at your neighborhood and move, and once you satisfy a certain criterion on the number of moves you can make, you take the temperature down, you cool it, and you do it again: you move N times, where N is how many moves you make in your neighborhood at each temperature. Now say you ended up at a local minimum. The thing with metaheuristics is that they help you get out of the local minimum so you can find a better one.
The way it does that is with a probability, e to the power of minus delta over the temperature, where delta is the cost of the potential solution minus the cost of the current solution: if you want to move to a new point, you take the cost function there, subtract the cost function where you already are, divide by the temperature at your current stage, and take the exponential of the negative of that. Then you draw a random number: if the random number is less than this probability, you take the worse move; if not, you stay where you are, and so on. This is how Simulated Annealing works; you'll understand it more when we go deeper into it, but I just wanted to give you a brief introduction. So basically you have temperatures and you move through your neighborhood; when you satisfy N, the number of neighborhood moves, say N was 15 and you moved 15 times, you stop and take down the temperature. Whenever a move is worse, you draw a random number and check it against this probability: if it is less, you take the worse move; if not, you stay where you are. And, of course, you repeat all of this M times. Say you start here; you keep moving through your neighborhood until you converge to a local or global minimum. You can see the global minimum here, but sadly you did not converge there; you converged to a local one. With population-based search, since you have diversity, you might have ended up at the global one; population-based search is considered to be a little better than local search because you have diversity, you're spread all around the search space.

With Tabu Search, say you start here: you look at every neighbor in your neighborhood and pick the best solution among them. It's called Tabu Search because once you make a move from a certain location, that location becomes tabu and you cannot visit it again; for example, if you came here and then here, you cannot go back to the previous step. So it looks through the entire neighborhood, takes the best solution from there, and again, you keep track of every solution, so that if at the end you did not converge to a good solution, you can look back at every solution you had and choose the minimum, if it is a minimization problem. Thank you.

7. Simulated Annealing #1: Welcome back, everyone. This is the first lecture on Simulated Annealing. Simulated Annealing was developed by Kirkpatrick and others in 1983, based on the algorithm by Metropolis and others from 1953. As we said in the last lecture, it's a local, neighborhood search, because it searches around the neighborhood of the search space, and there's a random element; remember the probability e to the power of minus delta over temperature, checked against a random number. So it's a probabilistic metaheuristic. The way it got its name: if a solid material is heated past its melting point and then cooled back into a solid state, the structural properties of the cooled solid depend on the rate of cooling; that cooling process is called annealing. The annealing process can be simulated by regarding the material as a system of particles.
Now, the Metropolis algorithm simulates the change in energy of the system when subjected to a cooling process, until it converges to a steady, frozen state. It's okay if you didn't understand any of this; basically, annealing is a process of cooling metal, and Kirkpatrick and his colleagues, thirty years after Metropolis, suggested that this type of simulation could be used to search the feasible solutions of an optimization problem with the objective of converging to an optimal solution. It was shown that the Metropolis algorithm could be applied to optimization problems by mapping the elements of the physical cooling process onto the elements of a combinatorial optimization problem, or even a continuous one. I'll see you in the next lecture. Thanks.

8. Simulated Annealing #2: Welcome back, everyone. This is our second lecture on Simulated Annealing. As we discussed in the previous lecture, annealing is the method of heating metal or glass and allowing it to cool slowly in order to remove internal stresses and toughen it. Simulated Annealing is based on that process: it takes a temperature and a cooling rate into consideration and applies them to optimization problems. We also talked about accepting uphill moves. Say this is a minimization problem, this is the cost function you want to minimize, and this is your parameter, say a weight w. Say your weight is currently at a point where the cost function is high, and you use Simulated Annealing to move through your search space until you end up at a point where going any further would increase the cost function, so the next step would be a bad move. Simulated Annealing accepts worse moves based on this probability: you draw a random number, and if the random number is less than the probability, you take the uphill move; otherwise, you stay where you are.
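That accept/reject rule can be sketched in a few lines of Python (the helper name is mine, not from the course code):

```python
import math
import random

def accept_move(current_cost, candidate_cost, temperature):
    """Simulated Annealing acceptance rule for a minimization problem."""
    delta = candidate_cost - current_cost
    if delta <= 0:
        return True            # downhill (better) moves are always accepted
    # Uphill (worse) moves are accepted with probability e^(-delta / T).
    return random.random() < math.exp(-delta / temperature)
```

At high temperature, `math.exp(-delta / temperature)` is close to 1 and almost any uphill move is accepted; as the temperature drops, the probability shrinks and worse moves are rejected more and more often.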
So you accept the move if the random number is less than e to the power of minus (f(x_temp) - f(x)) / T, where f(x_temp) is the cost of the temporary solution, the move you are considering, f(x) is the cost of your current solution, and T is the temperature at your current stage, because every stage m has a certain temperature, and as you go through the stages the temperature goes down. As m grows towards the end of the search, the temperature becomes lower, and since the temperature becomes lower, the probability of accepting worse moves is reduced: towards the end of your search the exponential becomes a small number, so the chance that the random number falls below it is low, and accepting bad moves becomes rare. Remember that this formula only applies when the move is worse. Now you might ask: why do we want to accept worse moves at all? Because if you do accept worse moves, you might escape a place where the objective function is merely good, a local minimum, and find places that are even better; depending on the function beyond what we see here, there could be even better minima, including the global one. As an example, say your current solution has a cost of 100, and the move you are considering is worse, with a cost of 120, since this is a minimization problem. You draw a random number and compute e to the power of minus delta over T, with delta = 120 - 100 = 20 and a current temperature of 1000, which gives about 0.98.
When you draw a random number, it will most probably be less than 0.98, so you are going to accept the move. However, say that your temperature is now much lower and you face the same choice: you are at 100 and your candidate move is 120. Then e to the power of minus 20 over 5 gives about 0.018, so a random number will most probably be higher than that, and you would not take the potential move. As you can see, the lower the temperature, the lower the chance that you will accept worse moves (worse, because 120 is higher and we are minimizing). The temperature goes down as you converge towards the end of the search: say m here is 10, there it is 50, and your stopping criterion is m = 60; at the beginning of the search the temperature is high, and towards the end it is low. This is how Simulated Annealing works, and in the next lecture we'll go into more detail. So I'll see you in the next lecture. Thank you.

9. Simulated Annealing #3: Welcome back to the third lecture on Simulated Annealing. As we discussed, you start at a random initial solution in the search space, and then you move through the search space until you converge to either a local minimum or a global minimum. Let's set up the notation. We have the initial solution x_0; the solution x_i, which is where you are at the moment; the final solution x_f; and the objective function evaluated at a solution, f(x_i). Each point has its own objective value, and you want to choose the one with the minimum objective value, given the formula you compute your objective from, or in this case, let's say it is a cost function.
You want to find the minimum of that cost function. For simulated annealing, the notation also includes the temperatures: the initial temperature T_0 that you start at, the temperature T_t at stage t, and the final temperature you end up at. There is a cooling parameter alpha — say 0.9 — and each of the M times you decrease the temperature, you do it by multiplying by alpha: if your initial temperature is 3000, you multiply it by the cooling parameter, 0.9, to get your new temperature at stage t. Then you have N, the number of moves at temperature t, and your move operator — how exactly you move through the search space. If it's a continuous problem, you move through the search space by, for example, adding a specific value to x: if you want to minimize x² + 1, you add to or subtract from x in order to move around. So the methodology behind simulated annealing is: set your initial temperature; set M, how many times you want to decrease the temperature; set N, how many times you want to move through the neighborhood at each temperature; set alpha, the cooling parameter; and choose your type of move operator — for a combinatorial problem you can choose a two-city swap, where you swap two cities or two elements, and we'll get to that soon. You start at m = 1 from a random point in the search space. You move to another location using the move operator — the neighborhood being the points around your current location — and you increase n by one. Now you check whether the objective function is better; if yes, you take the move.
If not, we said you draw a random number and check it against the formula: if the random number is less, you take the move; if it is greater, you don't — you stay where you are, look for another point, and increase n by one. You do this N times, then you increase m by one and multiply the temperature by the cooling parameter, and then you repeat the inner steps for the remaining stages until you get your final x. And like we said a couple of lectures ago, keep track of every solution, so that if you did not converge to a good solution at the end, you can choose from the ones you saved. To reiterate: you select your initial solution at random, along with T_0, M, N, and alpha — T_0 is your initial temperature, M is how many times you want to decrease the temperature, N is how many neighbors you look at around you, and alpha is the cooling rate. You set x_1 equal to the initial solution, T_1 equal to the initial temperature, and x_final equal to the initial solution. Now, for each m, you go N times, and for each n you move through your neighborhood, your search space. If the potential solution is better than the current solution, you take it: your potential solution becomes your current solution. If not, you check the formula against a random number: if the random number is less, the potential solution still becomes your current solution; if not, you stay where you are. And here, the same idea for the best so far: you check whether your current solution is better than your final solution, and if yes, the best-so-far becomes your current solution. After this you increase n by one, and if n is less than N you repeat the inner loop; otherwise you check whether m has reached M.
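Putting the whole procedure together, here is a sketch of the loop for the continuous example f(x) = x² + 1 mentioned earlier; the parameter values and the ±1 nudge move operator are just illustrative choices, not the only ones:

```python
import math
import random

def f(x):
    return x ** 2 + 1   # the continuous example: minimum at x = 0, f = 1

def simulated_annealing(t0=1000.0, m_max=60, n_max=10, alpha=0.9):
    x = random.uniform(-10, 10)        # random initial solution
    best = x                           # keep track of the best solution seen
    t = t0
    for m in range(m_max):             # M temperature stages
        for n in range(n_max):         # N moves at each temperature
            x_temp = x + random.uniform(-1, 1)   # move operator: nudge x
            delta = f(x_temp) - f(x)
            # accept better moves always, worse moves with probability e^(-delta/T)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                x = x_temp
            if f(x) < f(best):
                best = x
        t *= alpha                     # cool down once per stage
    return best

random.seed(0)
x_final = simulated_annealing()
print(x_final, f(x_final))
```

Because `best` only ever improves, the returned value is at least as good as the random starting point.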
If m has reached M, you end and return x_f, the final solution you converged to; if not, you increase m by one, decrease the temperature to alpha times the temperature, and repeat. So you do the inner loop N times for each of the M stages: if N were set to 10 and M to 50, you'd evaluate 500 moves — for each of the 50 stages you do it 10 times. For example, say you start here at a random solution, your temperature is 1000, M is only 2 for simplicity, N is 3 (the number of times you search your neighborhood), and alpha is 0.9. From your current solution you move using the move operator and increase n by one, then you check the new solution. Say it was worse: you draw a random number, compare it to the formula, and it turns out the random number was less, so you end up taking it anyway. Once n reaches 3 — you've gone one, two, three times — m is increased by one, so m = 1 + 1 = 2, and you decrease the temperature by alpha. Now you look through your neighborhood again and increase n by one, and you check the next candidate. Say it was worse, and this time your random number was higher than the formula, so you will not take it; you choose another point, and then another, which was better, so you take it. So you end up going from here to here in just two iterations of the outer loop. I hope you understood simulated annealing, and I hope you enjoy it as much as I do. Okay, I'll see you in the next lecture. Thanks. 10. Tabu Search #1: Welcome back, everyone.
So this is our first lecture on tabu search. Tabu search is actually a very simple concept. It was developed by Fred Glover in 1986 and it's used for combinatorial problems — problems that are not continuous. In the canonical, original tabu search method there's no element of randomness, so it's deterministic: if you start from the same place, you always end up in the same location. It's a local search, a neighborhood search — it looks around the current solution. There's something called a tabu list: each solution you were at, you won't go back to for a certain number of iterations. So it has an adaptive memory — the tabu list records the solutions recently visited, and it prohibits the move back to a solution you already took. If you went from point A to point B, then point A becomes tabu, so you cannot return to point A until a certain number of iterations has passed. The concept is quite simple, and we'll explain more in the next lectures. Thanks 11. Tabu Search #2: Welcome back. In this lecture we learn the terminology used in tabu search — tabu list, memory, aspiration criteria — and search terminology such as intensification versus diversification, also called exploitation versus exploration. They're roughly the same thing: intensification is like exploitation and diversification is like exploration. The tabu list is a list where you store previously visited moves, which you cannot visit again as long as they're in the list. This helps avoid cycling — going back and forth between the same moves. The two common types of tabu list are static and dynamic: static lists keep the same length throughout the search, and dynamic lists are lists whose length changes during the search.
As an example, let's say our tabu list size is three, and take this five-city traveling salesman example. One solution could be A, D, E, B, C, with an objective value of 150 distance units. This move becomes tabu, meaning you cannot return to it while it's still in the list. Now apply the swap move operator, where two cities are swapped — say cities E and B get swapped. This move also becomes tabu and enters the list. Then, for a third move, say D and A get swapped; this new solution, with an objective value of 140, enters the list as well. What happens next is that the oldest move, the one that entered the list first, gets removed, the others shift down, and the fourth new move enters at the top of the list. The oldest move has now left, so you can take it again if needed, but you cannot take any of the moves still in the list. The tabu list is an example of short-term memory: it forbids a list of solutions that were recently visited. Intermediate-term memory lets you break out of a cycle and exploit the area around you. Long-term memory gives you the ability to go beyond your current region of the search space and move to diverse, other regions to explore. We said there's no random element — actually, we'll get back to that under aspiration criteria. Frequency-based memory is a mechanism to tabu and un-tabu moves based on how frequently they have been visited. Aspiration criteria determine when tabu restrictions can be overridden. Two common aspiration criteria are best-so-far, where you can take a move off the tabu list if it yields the best solution found so far, and influence, where moves that take you farther away have higher influence and are therefore desirable for diversifying in the solution space. The frequency-based list is an aspiration criterion as well.
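The fixed-length tabu list from the example above can be sketched with Python's `deque`, using swapped city pairs as the tabu attribute (the third and fourth swaps here are made up, just to show the oldest entry dropping out):

```python
from collections import deque

# Static tabu list of length 3, as in the example above.
tabu = deque(maxlen=3)   # when a 4th move enters, the oldest drops out automatically

def record_move(move):
    tabu.append(move)

def is_tabu(move):
    return move in tabu

record_move(('E', 'B'))   # the E/B swap becomes tabu
record_move(('D', 'A'))   # the D/A swap becomes tabu
record_move(('C', 'G'))   # hypothetical third swap
record_move(('B', 'F'))   # hypothetical fourth swap pushes ('E', 'B') out
print(is_tabu(('E', 'B')))   # the oldest move is allowed again
```

The `maxlen` argument does the shifting for you: appending to a full deque silently discards the item at the other end.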
Now, we said that in the canonical, original tabu search there's no random element — no random numbers — so wherever you start, you always end up in the same place. But you can introduce an element of randomness as an aspiration criterion, to move you around the search space and diversify your solutions. Intensification versus diversification is also known as exploitation versus exploration: intensification, or exploitation, means to exploit and search the area around your current surroundings, which is your neighborhood; diversification, or exploration, is where you explore different regions of your search space by moving to a different area. This is commonly used because it helps avoid premature convergence — it's a type of global search. So this is where you introduce some kind of aspiration criterion that gets you out of your neighborhood and moves you around the search space, so you diversify your solutions. I'll see you in the next lecture. Thank you. 12. Tabu Search #3: Welcome back. Now we're going to dive into how exactly tabu search works. For notation, you have M, how many runs or iterations you want; N, the size of your neighborhood, which is dictated by the move operator; and L, how long you want your tabu list to be. First of all, you set the number of iterations, your tabu list size, and your move operator. The number of neighbors is based on the move operator chosen: for example, if your move operator is a two-city swap — two cities or two elements get swapped — then N would be every possible combination of two swaps. Now you randomly generate an initial solution, and you start at m equal to one.
You evaluate the objective value of every solution in your neighborhood, and you sort from best to worst. Now you take the best, the top of your sorted solutions, and if that solution is not tabu, you move to it and add it to the tabu list; if it is tabu, you go to the next one. If the tabu list is full, you remove the oldest solution before adding the new one. Now you repeat steps five to six M times. And, like we said, to avoid cycling and premature convergence to a local optimum, you introduce an aspiration criterion: you diversify, meaning you explore the search space. Now for the flowchart. You select M, L, and your move operator. You select a random initial solution anywhere in your search space. You create the neighborhood of your current solution based on your move operator, and you sort it by objective value, from best to worst. You pick one solution — say the first, which is the best one. Is it tabu? If yes, go to the next solution; if no, move to that solution. Is the tabu list full? If yes, drop the oldest solution and put this solution in the tabu list; if no, just add it. Then your current solution becomes that solution. Is m equal to M? If no, you repeat; if yes, you return the last x as the final solution — or, like we said in simulated annealing, keep track of every solution, so that if you did not converge to the global minimum or a good local one, you can check back over all your solutions and just pick the best of them. I hope you understood tabu search, and I hope you enjoy it. I'll see you in the next lecture. Thanks. 13. Constraint Handling: Welcome back. Now we're going to talk about how to handle constraints.
Constrained optimization means optimizing an objective function with respect to some variables in the presence of constraints on those variables: you have variables you want to optimize, but they're constrained — there are certain constraints you cannot violate. As an example, say a company sells two products, wooden chairs and wooden tables. The profit from selling a table is $10 and the profit from selling a chair is $12. For simplicity, assume the company uses two resources: wood, measured in board feet, and labor, measured in hours. It takes 24 board feet and 5 hours of labor to make a table, and 15 board feet and 6 hours to make a chair. There are 300 board feet of wood available and 120 hours of labor, and the company wants to maximize profit. So how many tables and how many chairs should it sell to get the most out of its resources and maximize the profit? We said a chair needs 15 board feet and 6 hours of labor, and a table needs 24 board feet and 5 hours. You want to maximize $12 times the number of chairs sold plus $10 times the number of tables sold, subject to: 15 board feet per chair plus 24 board feet per table has to be less than or equal to 300, because you only have 300 board feet of wood, no more; and 6 hours of labor per chair plus 5 hours per table has to be less than or equal to 120, because you don't have more than 120 labor hours. So how many chairs and how many tables can it sell to satisfy these without going overboard? Now, talking about the constraints: these are considered hard constraints, because you cannot violate them — you don't have more than 300 board feet of resources or more than 120 hours of labor.
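Since the example is tiny, we can brute-force it over integer production plans (restricting to whole chairs and tables is my assumption; the lecture doesn't say so explicitly) to see what the best feasible plan is:

```python
# Brute-force search over integer production plans (small enough to enumerate).
best = (0, 0, 0)   # (profit, chairs, tables)
for chairs in range(21):           # 15 board feet each, 300 available -> at most 20
    for tables in range(13):       # 24 board feet each -> at most 12
        if 15 * chairs + 24 * tables <= 300 and 6 * chairs + 5 * tables <= 120:
            profit = 12 * chairs + 10 * tables
            best = max(best, (profit, chairs, tables))
print(best)   # the best plan under both the wood and the labor constraint
```

With these particular numbers the best plan turns out to be all chairs (20 chairs, 0 tables, $240), which shows how one product can dominate when it uses the binding resources more profitably.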
Soft constraints are constraints you can violate, but it would be better not to. For example, say you can let your workers work an extra shift — more hours for extra pay. You would rather not give them extra pay and not have them work extra hours, but you can; you'd just rather not. That's a soft constraint. Infeasible solutions: say we want to sell 100 tables and 100 chairs — 100 of each would violate the constraints, so that's an infeasible solution. Feasible solutions are solutions that satisfy the constraints: one table and one chair, two tables and one chair, two tables and two chairs — all of these are feasible. Not optimal, but feasible. Optimal solutions are the ones that satisfy the objective the most — the optimal number of chairs and tables you can sell to maximize your profit while still satisfying the constraints. Near-optimal is, as the name says, near optimal — not the optimal answer, but close to it. Optimal is like the global optimum or minimum we talked about; near-optimal is like a local optimum or minimum. Now, for constraint handling you have three commonly used approaches. You can discard any infeasible solution — any solution that violates a constraint — as simple as that. You can repair the solution that violates the constraint. Or — often the better choice — you can penalize the solution, adjusting its objective function to make it less desirable. For example, say you have 10 chairs and 10 tables: that's 150 plus 240, which is 390 board feet — above the 300 limit. But we keep the solution; we keep it and we penalize it. Remember, you want the maximum profit.
So if we keep that solution, we deduct a certain amount from its objective value as a penalty, to make it less desirable. Now you might ask: why allow infeasible solutions at all — why not just discard them? Because with metaheuristics, as we discussed under diversification, a bad solution can lead you to converge somewhere better. Remember when we talked about uphill moves: sometimes we want worse solutions, even if they're infeasible, because feasible and better solutions can come out of them. So we keep them and just penalize the objective function, so that they become less desirable as the final answer, while leaving open the chance that somewhere along the search they lead to better answers. I hope you understood constraint handling — when we talk about the vehicle routing problem, we'll see exactly how to handle its constraints by penalizing them. Thank you. 14. Travelling Salesman Problem: Hello and welcome back. Now we're going to talk about the traveling salesman problem. We already talked about it in the early lectures of this course, but basically you have a traveling salesman, there are cities, and you have to find the optimal route that visits each city exactly once and returns to the starting city. That's the traveling salesman problem — very straightforward and very simple. However, finding the order of cities that yields the minimum distance is hard, especially when your number of cities becomes larger. If you remember, we said that a five-city problem has 12 combinations — divided by two because it's symmetric: going from A to D is the same as going from D to A, so 12 combinations. However, if you have 15 cities, look how many combinations you have.
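The count of distinct tours, (n − 1)!/2 for n cities, can be computed directly, and it shows why enumeration stops being an option:

```python
from math import factorial

def distinct_tours(n_cities):
    # Closed tours visiting every city once; divide by 2 because the problem is
    # symmetric (a route and its reverse have the same length).
    return factorial(n_cities - 1) // 2

print(distinct_tours(5))    # 12, as in the five-city example
print(distinct_tours(15))   # already over 43 billion
```

Going from 5 to 15 cities takes the count from 12 to 43,589,145,600 — factorial growth in action.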
So this is why we said we need metaheuristics. With the traveling salesman problem, although the concept is very simple, finding the optimal solution is not, especially when you have a large number of cities. So again: you have a salesman, and he has to go through the cities, visiting each one once and coming back to the same city. You could go D, B, C, E, A, for example, or C, B, A, E, D. There are 24 orderings, but because the problem is symmetric you end up with 12 combinations. Very simple, very straightforward. I'll see you in the next lecture. Thanks. 15. Travelling Salesman Problem, Coding in Python #1: Welcome back. In this lecture, we're going to solve a traveling salesman problem. Imagine we have seven cities, and these are the distances between them: 75 units between city one and city two, 99 units between city one and city three, 88 units between another pair, and so on. With seven cities we have (7 − 1)!/2 = 360 possible tours, and the count explodes as cities are added, so we don't want to calculate every combination to find the least distance — we need metaheuristics. So let's go ahead and code. Okay, the code is ready, and I'm going to go through it with you. First of all, we import numpy, because we need the numerical package of Python; we import pandas, which is a data-frame package for Python; and we import matplotlib.pyplot for the plot we want to create. We create an alias so that instead of typing numpy every time we can call it np. Now we create a data frame for the distances between the cities: we enter the first row, the second row, the third row, and so on. Let me go ahead — so this is the matrix, the data frame. You see that from city A to A is zero, B to B is zero, C to C is zero, and the matrix is symmetric: the same distance appears on both sides of the diagonal. All right, so we give an initial solution — any initial solution; you could use A, B, C, D, E, F, G.
But I just gave it a completely random solution. Now we want an empty list of distances so we can compute the objective function for the initial solution. We start with t equal to zero, and we look up the distance between consecutive cities. Say we want the distance between A and C: we take the intersection of row A with column C — 99. I called the data frame data1, so it's the intersection in data1 of x0[0], which is A, and x0[1], which is C — 99. Then here's the thing: after the last element — the loop keeps adding legs — we want F to return to A, because you need to return to the starting city. So we take x0[-1], index minus one, which is F, and find its intersection in the data frame with x0[0], which is A. So the loop finds A to C, C to G, G to D, and so on, and after it finds B to F, it goes from the last city back to the first, appending each leg's distance to the list. Let's print it — let's go ahead and run. The distance of this route is 381. And I forgot to mention: the optimal distance for this problem, the minimum distance, is 158. So that's how you get the distance: you append each leg — the intersection of A and C, 99, then C to G, then G to D, and so on — and at the end you have the list called distances: 99, 28, 67, and so on, with the last one from F back to A, and then you sum it up.
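That distance computation can be sketched as a small function over a pandas DataFrame like the lecture's; the three-city matrix below is mostly made up (only A–B = 75 and A–C = 99 appear in the lecture, and B–C = 88 is my assumption):

```python
import pandas as pd

# A small symmetric distance matrix in the same style as the lecture's DataFrame.
cities = ['A', 'B', 'C']
dist = pd.DataFrame([[0, 75, 99],
                     [75, 0, 88],
                     [99, 88, 0]], index=cities, columns=cities)

def tour_length(route, dist):
    """Sum the leg distances along the route, including the return to the start."""
    total = 0
    for i in range(len(route)):
        a = route[i]
        b = route[(i + 1) % len(route)]   # wraps around: last city back to first
        total += dist.loc[a, b]           # intersect row a with column b
    return total

print(tour_length(['A', 'B', 'C'], dist))   # 75 + 88 + 99 = 262
```

The modulo index handles the return leg, so there is no special case after the loop.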
And that's how you get the total distance traveled for the route A, C, G, D, E, B, F and back to A. So let's stop here, and we'll carry on in the next lecture. Thank you. 16. Travelling Salesman Problem, Coding in Python #2: Welcome back. Let's continue where we stopped in the last lecture. For simulated annealing, the parameters: we initialize the temperature, how many times the temperature is going to decrease (M), how many neighbors we look at for each m (N), and alpha, the cooling rate each temperature gets multiplied by. These next lists are only there to store each temperature and the minimum distance at each temperature — it's a distance, not a cost, so let me rename the variable to distance — in order to plot them at the end. Then: for i in range(M), and for each m, for j in range(N) — so for each temperature stage we do N moves through the neighborhood. Now we want to generate random integers in order to swap cities — that's our move operator: from these seven cities, swap any two. We draw a random integer from zero up to the length of the route — remember, indexing starts at zero, so from 0 up to 7 — and then a second one the same way. Say we got two and four; then we swap positions two and four. If the two random numbers are exactly the same — while ran1 equals ran2 — we pick ran2 again, because we don't want them to be equal. So now we have two random numbers; say one is 1 and the other is 3. Now, what is at position one, and what is at position three?
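In code, drawing two distinct positions and swapping them might look like the sketch below; it uses `random.sample`, which guarantees distinct indices in one step, instead of the lecture's redraw loop:

```python
import random

def swap_two(route):
    """Move operator: swap the cities at two distinct random positions."""
    i, j = random.sample(range(len(route)), 2)   # sample guarantees i != j
    new_route = route[:]                         # copy, so the current solution survives
    new_route[i], new_route[j] = new_route[j], new_route[i]
    return new_route

route = ['A', 'C', 'G', 'D', 'E', 'B', 'F']
neighbour = swap_two(route)
print(neighbour)   # same cities, with two positions exchanged
```

Copying before swapping matters: the current solution must stay intact in case the candidate is rejected.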
So you take x_initial at index 1 and x_initial at index 3 — call them a1 and a2 — and now we want to swap them. In the code we iterate over the current solution with an empty temporary list: if the element equals a1, we append a2 instead; else if it equals a2, we append a1; else — if it's neither a1 nor a2 — we append the element as it is. Walking through it with x0 = A, C, G, D, E, B, F, where a1 = x0[1] = C and a2 = x0[3] = D: the first element, A, is neither C nor D, so A is appended to x_temporary. Next comes C, which equals a1, so we append a2, which is D — now we have A, D. Then G is neither, so it's appended — A, D, G. Then D equals a2, so we append a1, which is C — A, D, G, C. Then E, B, and F are appended as they are. So this is how you get your temporary list: A, D, G, C, E, B, F. Let's stop here and continue in the next lecture. Thank you. 17. Travelling Salesman Problem, Coding in Python #3: Hi. In the last lecture we ended up creating a temporary list in which two cities were swapped. So now we want to find the objective value — the objective function value — for the new, temporary solution. The way we do it: instead of computing it only for x0, the initial solution, we compute it for x_temporary as well — okay, this block here is for x_temporary.
This one is for x_temporary, and this one is for x0, the initial solution — the current solution, the one in hand — while that one is the temporary solution. So we get the sum of distances for each. Now we want to see whether the candidate is better, so we create a random number and we have the formula: 1 over e to the power of (the summed distance of the temporary solution minus the summed distance of the initial solution) over the temperature. If the distance of the temporary solution is less than or equal to that of the current solution, it's better, so we take it: x0 becomes x_temporary. Else, if it is worse, we check the random number against the formula: if the random number is less than or equal to the formula, then the temporary solution still becomes the current solution; if the random number is greater than the formula, we don't accept the potential solution and we stay where we are — x0 stays x0. Then we append the temperature to the temp list we created for visualization, and we also append the minimum distance — the distance of the current solution. Of course, you can see this part is outside the inner loop: after doing the inner loop N times, the code appends the temperature and the minimum distance so we can plot them, and then it multiplies T0 by alpha and assigns it back to T0, so we now have a new temperature, decreased by alpha. Then there's one more append here that isn't actually needed, so we can just delete it.
Okay. At the end we print the final solution, which is in x0, and we print its minimized distance, and we plot temperature on x against minimum distance on y — title at font size 20, bold, with "Temperature" at size 18 bold and "Distance" at size 18 bold as axis labels. We want the x-axis reversed, running from 3000 down to 0 instead of 0 to 3000, with tick marks at intervals of 100, bold. So let's run it and see if we get the minimum solution, 158. We may have to run it a couple of times, because there's an element of randomness, so we won't necessarily get 158 on the first try — we'll try again... and we got 158. Here you go: at temperature 3000 we started at the initial solution's distance, and it went all the way down until we converged to 158. So instead of the initial solution A, C, G, D, E, B, F, the final solution — the optimal route for the traveling salesman — is to go from city G to B to F to E to C to D to A and then back to G, and the minimum distance is 158. This is the traveling salesman problem using simulated annealing. I hope you enjoyed it and I hope you understood it; if you have any questions, please post them on the board. Thank you 18. Vehicle Routing Problem #1: Welcome back. In this lecture we'll talk about what the vehicle routing problem is. It's an NP-hard, combinatorial problem where you have to find the optimal set of routes for a fleet of trucks that start at a warehouse. There are several variations — the capacitated vehicle routing problem,
the vehicle routing problem with time windows, and others. So say you have a warehouse here and three trucks, and each customer has a certain demand — a quantity you have to deliver to them. Each truck can carry a certain capacity, say 100 units, and the demands are, say, 20 each — that's how a truck's 100 units get divided among customers. So basically: what is the optimal route each truck should take in order to minimize the overall distance traveled while delivering each customer's demand — without, of course, violating the capacity of each truck? If each truck can only carry 100 and each customer has a certain demand, the truck has to make its round without exceeding 100; it cannot carry 101 or 110. Each truck leaves the warehouse and then goes back to the warehouse, the next one leaves and comes back, and so on. So this is the vehicle routing problem. Thank you. 19. Vehicle Routing Problem #2: Welcome back, everyone. This is the second lecture on the vehicle routing problem, where we start with an example that we're going to implement in Python. We have five trucks, one warehouse, and 31 customers. The problem is to minimize the distance traveled by all five trucks, using the Euclidean distance — the square root of (q1 − p1)² + (q2 − p2)². So we need to minimize the distance traveled by all five trucks together, and we have two constraints: first, the number of routes, which is the number of trucks, has to be five — not more, not less; and second, each truck can only carry a capacity of 100. So here's your warehouse, and these are the 31 customers you have. You want to find the optimal route for each truck —
Each truck goes from the warehouse to its customers and then comes back. These are the nodes: we have 31 customer nodes, actually 32 nodes in total, because node 1 is the warehouse, where there is no demand. Node 2, which is one of the customers, has these coordinates and a demand of 19, for example, and node 3 has these coordinates and a demand of 21. And this is the optimal set of routes that should be travelled: you can see, for example, truck one, truck two, truck three, truck four and truck five. If you add up the distance of each route, you get the minimum answer, which I believe was 784. So this is how we are going to encode it. Each string (each chromosome, if you like) will look like: warehouse, 1, 2, 3, warehouse, and so on, because truck one leaves the warehouse, visits these customers, and goes back to the warehouse; then truck two leaves the warehouse, visits its customers, and comes back to the warehouse; and truck three does the same thing, and so on. So the string alternates: leave the warehouse, visit customers, return to the warehouse, then the next truck leaves and returns. That is how we encode our string. So let me show you. This is the file with the 32 nodes. Node 1 is the warehouse, with its coordinates, and these are the coordinates of the other nodes along with the demand of each customer node. The capacity of each truck is 100, and the optimal total distance is 784.
For that minimum distance, truck one should go to nodes 21, 31, 19, 17, 13 and 7, then 26, and back to the warehouse. The second truck should leave the warehouse, visit customers 12, 1, 16 and 30, and return to the warehouse. Truck three should leave the warehouse, visit 27 and 24, and come back. So truck three only has two nodes, 27 and 24: it leaves the warehouse, goes to 27, then 24, then comes back, or 24 then 27, depending on the order. So that is the vehicle routing problem we are going to solve in Python. I'll see you in the next lecture. Thank you. 20. Vehicle Routing Problem, Coding in Python #1: Welcome back, everyone. Let's start by showing you the x-coordinates and the y-coordinates. Now, what I did here: we have five trucks, and we said that each truck leaves the warehouse and comes back to the warehouse. So the number of times the warehouse appears in the solution is 10: five trucks, each leaving and returning. So the way I did it is that we have ten warehouse copies. Remember that the x- and y-coordinates of the warehouse are (82, 76), so (82, 76) appears 10 times, followed by the coordinates of the other 31 points. Then we stack the coordinates on top of each other. Let's run this. So now we have 10 coordinate rows for the warehouse and then the coordinates for the rest of the nodes. Next we want to build the distance matrix, which holds the distances between all of these: between nodes 1 and 2, 1 and 3, 1 and 4, and so on.
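A minimal sketch of the coordinate set-up just described; the warehouse really is at (82, 76) as in the course, but the customer coordinates below are made up for illustration:

```python
import numpy as np

warehouse = (82, 76)                      # the course's warehouse coordinates
customers = [(96, 44), (50, 5), (49, 8)]  # placeholder customers (31 in the course)
num_trucks = 5

# Each truck leaves and returns, so the warehouse appears 2 * 5 = 10 times
# in the encoded solution; we therefore stack 10 copies of its coordinates
# on top of the customer coordinates.
coords = np.array([warehouse] * (2 * num_trucks) + customers)
```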
So we go ahead. Before that, the demand for each warehouse copy is going to be zero; the demand for the first customer node is 19, for the second 21, for the third 6, and so on. Then we build a data frame for the distance matrix. Let's run it and show it first. This is the distance matrix: warehouse to warehouse is zero, of course, because the warehouse copies are all the same location, but the warehouse to the first customer node is a distance of 34, and the warehouse to the second is 72, and so on. So this is your distance matrix for the 31 nodes plus the warehouse. Again, the reason I made 10 warehouse copies is because that is how I am going to encode the solution: the solution will contain 10 warehouse entries and 31 customer nodes. Then we simply transform this into a distance data frame and a capacity data frame in pandas. Let's run this. The distance data frame is the same matrix, just wrapped as a pandas DataFrame, and the capacity data frame has zeros for the 10 warehouse rows and then 19, 21, 6, 19 and so on for each of the 31 customer nodes. So we have 10 warehouse entries and 31 customer nodes, a total of 41 nodes. And again, the 10 warehouse copies are there because that is how I want the solution to be encoded. We will continue in the next lecture. Thank you. 21. Vehicle Routing Problem, Coding in Python #2: Welcome back, everyone. Now we need to find the total distance of a solution, so that if we have a certain solution array, we can compute its distance.
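Before moving on, the distance and demand tables built in the previous lecture can be sketched roughly like this (a tiny toy instance with made-up coordinates, not the course's 41-node data):

```python
import numpy as np
import pandas as pd

# Two warehouse copies plus two customers (illustrative coordinates).
coords = np.array([[0, 0], [0, 0], [3, 4], [6, 8]])
labels = [1, 2, 11, 12]  # 1-2 = warehouse copies, 11-12 = customers

# Pairwise Euclidean distances via broadcasting.
diff = coords[:, None, :] - coords[None, :, :]
dist_df = pd.DataFrame(np.sqrt((diff ** 2).sum(axis=2)),
                       index=labels, columns=labels)

# Demand table: zero for the warehouse copies, the customer demands after that.
demand_df = pd.DataFrame({"demand": [0, 0, 19, 21]}, index=labels)
```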
So we defined a function, complete_distance, which works like in the travelling salesman problem: we look up locations in the distance data frame. Say you have nodes 11 and 12; we look up the cell at row 11, column 12. So basically, if the solution is, say, [11, 12, 10, ...] (I'm just giving random numbers), then 11 is at index zero, 12 at index one, 10 at index two. For index zero and index one, we find the entry at their intersection in the data frame, that is, at row 11 and column 12. That is the distance, and we append it to a per-truck distance list, which I called distance_truck; then we sum the distances for every truck, and sum those to get the complete distance. As an example, let's compute the distance of the initial solution. This function lives in the file I called objective_value, so we import objective_value as ob, and then the total initial distance is ob.complete_distance(random_solution), complete_distance being the function we defined there. If we run this, the distance travelled comes out as 1558 units. Basically, we took the distance between each element and the element after it.
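A sketch of what complete_distance amounts to; the helper name mirrors the course's, but the details and the tiny distance table below are reconstructed for illustration:

```python
import pandas as pd

def complete_distance(solution, dist_df):
    # Sum the looked-up distance between every consecutive pair of nodes.
    total = 0.0
    for a, b in zip(solution[:-1], solution[1:]):
        total += dist_df.loc[a, b]
    return total

# Tiny symmetric distance table for illustration.
labels = [1, 2, 3]
dist_df = pd.DataFrame([[0, 5, 3],
                        [5, 0, 4],
                        [3, 4, 0]], index=labels, columns=labels)

complete_distance([1, 2, 3, 1], dist_df)  # 5 + 4 + 3 = 12.0
```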
Then we appended each of those distances to the empty list here, and after appending we summed them together. That is how we return the sum of the distances of the array we pass in: ob.complete_distance(random_solution). That is how we get the total distance, and this is the initial solution we are going to start with. That's enough for this lecture; I'll see you in the next one. Thank you. 22. Vehicle Routing Problem, Coding in Python #3: Welcome back. In the last lecture we talked about how to compute the distance for a given array: how to calculate the distance travelled by the trucks based on that array. The array does not have to describe five tidy trucks, because everything is jumbled: the values 1 to 10 are the warehouse copies, but they are all mixed up; the solution is not ordered. We computed the distance travelled and got 1558. So now we are going to set up the penalty functions. If you remember, when we talked about constraint handling, the way we handle constraints is by penalizing them. For example, if a truck is carrying more than 100, we don't want to discard that solution, because we might reach better solutions from it; instead, we penalize it by some penalty value. So based on the solution array, we compute how much to penalize it. We have two empty lists here. Now we only care about the values between 1 and 10, which are the warehouses: we append the truck capacities based on the warehouse positions, that is, the indices where a warehouse appears in the array.
Say the index of the first warehouse in the array turned out to be 11. We then append the indices of all the warehouses, so that we have them together; say they come out as 11, 15, 23 and 24. Now, if two warehouses are right next to each other, there is no truck load between them: with warehouse copies at two consecutive positions, there are no customers in between. But if two warehouse positions are not adjacent, then one truck's route runs from the first warehouse index to the second: everything between them belongs to one truck. Like we said earlier: warehouse, customers, warehouse is one truck; then warehouse, customers, warehouse is the next truck; and so on. Then, for each item (location) in a truck's route, we sum up the demand: we go to the capacity data frame, look up the demand of each node on that route, and add them up.
Then we append the sums from every truck. We do this over the length of the array, adding up truck after truck, from warehouse to warehouse, incrementing t by one each time, for all the trucks. Then we take the overage for each truck: if one truck is carrying 120, the overage is 20; if another is carrying 140, that is 40; if a truck is carrying 90, it is not violating the constraint, so we append zero. Then we sum all the violations and multiply by the penalty value we chose. If the penalty value is 20, every extra unit a truck carries gets multiplied by 20. So if the trucks together carry an extra 200, we multiply by 20 and get 4000, and we add that to the total distance travelled. If the distance was 1500, plus 4000, you get a distance of 5500. The reason is that we want to make that solution less desirable: 5500 is big, and we don't want that, so that is why we penalize the objective function by this much. So let's stop here and continue in the next lecture. Thank you. 23. Vehicle Routing Problem, Coding in Python #4: Hey, welcome back, everyone.
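Before the recap, the capacity penalty just described can be sketched as follows (node labels 1 to 10 are warehouse copies, as in the course; the demand mapping and values here are illustrative, and the function shape is my own):

```python
def capacity_penalty(solution, demand, truck_cap=100, penalty_value=20):
    # Positions of every warehouse visit (labels 1..10) in the solution.
    wh_idx = [i for i, node in enumerate(solution) if node <= 10]
    excess = 0
    for a, b in zip(wh_idx[:-1], wh_idx[1:]):
        if b - a > 1:  # customers between two warehouse visits form one route
            load = sum(demand[node] for node in solution[a + 1:b])
            excess += max(0, load - truck_cap)  # only the overage is penalized
    return excess * penalty_value

# One truck carries 60 + 70 = 130 (30 over), the other carries 30 (fine).
capacity_penalty([1, 11, 12, 2, 13, 3], {11: 60, 12: 70, 13: 30})  # 600
```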
In the last lecture we talked about how to penalize a truck carrying more than 100: by locating consecutive warehouse positions, taking the customers between one warehouse and the next as that truck's route, summing their demands from the capacity data frame, and, for every truck exceeding 100 (say a truck carrying 120), penalizing the overage of 20 times the penalty value. The other two penalty functions concern the shape of the solution. Since each solution must begin by leaving the warehouse and end by returning to it (remember: truck one, warehouse to warehouse; truck two, warehouse to warehouse; and so on up to truck five), we need the first index to be a warehouse and the last index to be a warehouse, and the second index not to be a warehouse, and the second-to-last index not to be a warehouse. So: if the first element is not in the range 1 to 10 (remember, the warehouse copies are labelled 1 through 10), that is, if the first element of the solution array is not a warehouse, we add a penalty of penalty_1, whatever value you want to give it, say 200 or 300. If the last element is also not in the range 1 to 10, we penalize it by penalty_1 as well. And if the second element is in the range 1 to 10, we also penalize it, because we don't want the second element to be a warehouse; and if the second-to-last element is a warehouse, we penalize that too. We also don't want three consecutive warehouses: we can see warehouse, warehouse (one truck returning and the next one leaving), but not warehouse, warehouse, warehouse.
So here, if three consecutive elements t, t+1 and t+2 are all in the range 1 to 10, meaning three consecutive elements are all warehouses, we add the penalty, because we don't want three consecutive warehouse entries. Then we sum up all these penalties, this one plus this one plus this one, and that is how much we penalize for violating this constraint. Now, to emphasize that the first index must be a warehouse and the last index must be a warehouse, we check it again: if the element at the first index is not in 1 to 10, i.e. not a warehouse, we penalize it, and if the last element is not a warehouse, we penalize it too. This is exactly the same check as before, but I put it in again to emphasize how much we care about it. Alternatively you could just increase the penalty value here; I went ahead and added another check so we can stress that the first and last elements must be warehouses. That's it for this lecture; we will continue in the next one. I'll see you there. Thank you. 24. Vehicle Routing Problem, Coding in Python #5: Welcome back, everyone. In this lecture we continue with the vehicle routing problem. I showed you this file, where we get the x- and y-coordinates, stack them on top of each other, and build the distance data frame and the capacity data frame. This is an initial solution that we generate randomly in order to feed it into the main file. For the parameters, let's let it run for 2500 iterations, and we have something called mutation, which is actually an idea borrowed from the genetic algorithm; I am going to use it as an aspiration criterion.
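Before continuing, the structural penalties from the previous lecture can be sketched like this (the penalty amounts are the course's examples; the function shape and names are my own):

```python
def structure_penalty(solution, p_ends=200, p_second=300, p_triple=400):
    is_wh = [node <= 10 for node in solution]  # labels 1..10 are warehouses
    penalty = 0
    if not is_wh[0]:       # first element must be a warehouse
        penalty += p_ends
    if not is_wh[-1]:      # last element must be a warehouse
        penalty += p_ends
    if is_wh[1]:           # second element must NOT be a warehouse
        penalty += p_second
    if is_wh[-2]:          # second-to-last element must NOT be a warehouse
        penalty += p_second
    for i in range(len(solution) - 2):  # no three warehouses in a row
        if is_wh[i] and is_wh[i + 1] and is_wh[i + 2]:
            penalty += p_triple
    return penalty
```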
Basically, I am going to swap two nodes based on a probability; we will talk about it more when we get to it, but there is going to be a probability of swapping two cities, or two nodes. We start with a tabu list length of 20, and the tabu list itself starts empty. The total distance here comes from the objective value: we get the total initial distance for x0, the initial solution. Here we print the initial cost, which is the initial distance of the random solution. If we run this, we get the 1558 we saw before. Then we create an empty array called saved_solutions_here, where we save the best solution of each iteration. For the runs, we have 2500, and we save the best solution of each run. The reason its length is this length plus one is that we also want to append the distance of each solution: the stored string is the solution plus its distance. And these two indices, index_for_best_1 and index_for_best_2, are for keeping track of which iteration the best solution was achieved in, and of its distance. Now, for i in range(runs), i.e. for i in range(2500), we get the list of combinations for the current random solution. Remember, we talked about the two-swap as the move operator, and we also said, for tabu search, that the move operator dictates the neighborhood.
If our move operator is a two-swap, we need the list of all possible swaps of two. Let me set runs to one for now and show you what I mean: run it and look at the list. Basically, it shows every pair that could be swapped: 24 with 26, 24 with 22, 24 with 31, 24 with 11, and so on, all of them, a total of 820 possible moves, ending with 27 with 41. So we are going to create a list based on these swaps. We make an empty array, store_all_combinations, to store every possible combination. Then, just as we did in the travelling salesman problem, we swap one city (node) with another. For each pair here, we apply the swap to the current random solution: we iterate over the elements of the solution, and if an element equals a1 (the first element of the pair, say 24, with a2 being 26), we replace it with a2; if it equals a2, i.e. 26, we replace it with a1, i.e. 24; otherwise we keep the element as it is. We end up with a new string, the same solution but with 26 and 24 exchanged: 26, 24, 22, 31, 11, 16, and so on. We do this for every combination, so we get 820 solutions, and we store them all together. If I run this, store_all_combinations holds 820 solutions, each one different; for example, the first one is 26, 24, 22, 31, 11, 16.
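The neighborhood generation just described can be sketched with itertools; with 41 distinct node labels this yields 41 choose 2 = 820 neighbors, matching the count above (the function shape is my own):

```python
from itertools import combinations

def two_swap_neighbors(solution):
    # One neighbor per unordered pair of node values: every occurrence of
    # a is replaced by b and vice versa (the two-swap move operator).
    neighbors = []
    for a, b in combinations(sorted(set(solution)), 2):
        swapped = [b if n == a else a if n == b else n for n in solution]
        neighbors.append(swapped)
    return neighbors
```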
So basically, what is happening here is that for the current solution we generate its entire neighborhood: all possible two-swap moves applied to this solution, and we store them all. Next we create two empty lists, because we now want the distance for each of these solutions: new solutions have been created, so each one needs its distance computed. We also need to apply the penalties. Remember: penalty one if a truck is carrying more than 100, penalty two if the first and last elements are not warehouses, and so on. For the first penalty, if a truck is carrying more than 100, then for every unit over we penalize by 20; so if it is carrying 100 over, we penalize by 2000. For penalty two we penalize by 200 and by 300, and the last one by 400; so if the first element and the last element are not warehouses, that is 400 plus 400, which is 800. Then we add them to the total distance: the total becomes the distance plus penalty one plus penalty two plus penalty three. Then we swap the axes to horizontal and stack them on top of each other. If we run this and look at the objective function, for each of the 820 solutions we now have its distance including the penalties. If a certain solution was violating a certain constraint, that penalty got added to its distance.
So if a certain solution violated the first constraint, its stored value is the distance plus 400, and so on for every solution. We stack them on top of each other for every neighbor of the current solution. Let's stop here and continue in the next lecture. Thank you. 25. Vehicle Routing Problem, Coding in Python #6: Welcome back. Let's continue where we left off. Remember, we had just created the whole neighborhood of the current solution, stored it in this list, and computed the distance of each neighbor. Now we want to order them from lowest to highest, that is, from best to worst. Let's run this; it takes a little time. Now the objective function values are ordered from lowest to highest, from about 2200 up to 4800, penalties included. Remember, if a solution violated a constraint, its value is the distance plus those penalties, because we do not want to encourage the tabu search to go after those bad solutions. Now, saved_solutions_here, where we store all the solutions, starts empty, so for now it has no data, but we will store solutions into it at the end. Once it has data, we need to find the minimum, which is the best answer. Let me give an example: say this is your saved_solutions_here data frame; the best solution is the one with the minimum distance.
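The evaluate-and-order step described above can be sketched as follows (the cost and penalty functions here are toy stand-ins for the ones built earlier):

```python
def evaluate_neighbors(neighbors, dist_fn, penalty_fns):
    # Penalized cost for every neighbor, ordered best (lowest) first.
    scored = [(dist_fn(s) + sum(p(s) for p in penalty_fns), s)
              for s in neighbors]
    scored.sort(key=lambda pair: pair[0])
    return scored

# Toy example: cost is the plain sum, plus a 100 penalty when the
# solution does not start with node 1.
ranked = evaluate_neighbors([[2, 1], [1, 2]],
                            dist_fn=sum,
                            penalty_fns=[lambda s: 0 if s[0] == 1 else 100])
ranked[0]  # (3, [1, 2])
```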
So we find the best of the saved solutions from that data frame, but only once it has data in it, that is, when its length is greater than or equal to one, and we call it best_so_far. Since we are just starting out, the saved_solutions_here data frame is empty, but once it fills up we take the minimum and call it best_so_far. Now, the solution in hand is the first solution of the ordered neighborhood, at t equal to zero. We want to check whether it is in the tabu list. For the aspiration criterion, we allow a tabu solution only if it beats the best-so-far: if the solution in hand, the first value we have, is already in the tabu list, we compare its distance; if it is less than the best-so-far, it becomes the new best-so-far and we break. Otherwise, if it was not better, we set t equal to t plus one and take the next solution in the ordered list; the current solution becomes whatever was chosen from the list. Else, if the solution was not in the tabu list at all, we simply take the solution in hand, and the best-so-far becomes the current solution. Then, if the length of the tabu list is greater than or equal to the tabu length we set, which was 20, the list is full, so we delete the last element of the list.
That is, we drop the element at position tabu_length minus one, the 19th element, along axis zero. Then we take the solution in hand and insert it into the tabu list, since we deleted the oldest one; and if the tabu list was not full, we just put the solution in hand at the front of the tabu list. Then saved_solutions_here gets stacked with the solution in hand, and we keep track of which iteration this solution was achieved in, stacking them on top of each other. So let's run this. What are we looking for? saved_solutions_here; it takes a little while. Because we only ran it one time, we have the best solution we found, remember, 2200, and the tabu list is empty because we only ran it once. Let's run it a couple of times; actually no, because by the end the tabu list will get populated. So let me change this. All right, let's stop here and continue in the next lecture. Thank you. 26. Vehicle Routing Problem, Coding in Python #7: Welcome back, everyone. Let's continue where we left off. Here we want to add some aspiration criteria. Every 20 iterations (every 20 runs), we change the tabu list size. Every 40 runs, we perturb the best solution so far. And every 30 runs, in case the solution in hand does not improve on the best-so-far, we return to the best-so-far, because we need some kind of change. So let's start with returning to the best-so-far solution: here, modulo 30, every 30 runs, we return to the best-so-far in case we kept getting worse solutions.
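The tabu-list bookkeeping from the previous lecture can be sketched roughly like this (a deque stands in for the course's list-dropping logic; the names are mine):

```python
from collections import deque

def select_move(ordered_neighbors, tabu, best_cost_so_far, tabu_len=20):
    # Take the best non-tabu neighbor; a tabu neighbor is allowed only if
    # it beats the best cost seen so far (the aspiration criterion).
    for cost, sol in ordered_neighbors:
        key = tuple(sol)
        if key not in tabu or cost < best_cost_so_far:
            tabu.append(key)            # the accepted move becomes tabu
            while len(tabu) > tabu_len:
                tabu.popleft()          # drop the oldest entry when full
            return cost, sol
    return ordered_neighbors[0]         # fallback: everything was tabu

tabu = deque()
ranked = [(10, [1, 2]), (12, [2, 1])]
select_move(ranked, tabu, best_cost_so_far=5)  # accepts (10, [1, 2])
select_move(ranked, tabu, best_cost_so_far=5)  # [1, 2] now tabu -> (12, [2, 1])
```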
So if the distance we have is greater than the best-so-far, we return to the best-so-far: the solution in hand becomes the best-so-far solution. Otherwise, if the best-so-far is worse than the solution we have, the solution in hand remains the solution in hand. Now, remember we set a probability of mutation equal to 0.2. We draw a random number, and if it is less than or equal to the probability of mutation, what we do is, first of all, draw two different random integers. Say, for example, our string is 2, 5, 6, 9, 1; we want two random indices over this length. The reason we start at one, not zero, is that the first element holds the distance, remember; so the range is from one up to the length plus one, so that the last position is included. So we pick two random integers, say one and three, or one and four, or two and four. While the second random number equals the first, we pick again, because we don't want them to be alike. Now, if the first random integer is less than the second, say one is less than four, then the first is a1 and the second is a2, and what we do is reverse what is in between: say we get a slice of 9, 6, 5; it becomes 5, 6, 9. So if a1 comes before a2, we reverse the solution between a1 and a2, and that is the mutated solution.
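The segment-reversal mutation can be sketched as follows; unlike the course's array, this sketch assumes the list holds only nodes (no distance stored at index 0), so indexing starts at zero:

```python
import random

def reverse_mutation(solution, prob=0.2, rng=random):
    # With probability `prob`, reverse a randomly chosen slice of the
    # solution (a diversification move borrowed from genetic algorithms).
    if rng.random() > prob:
        return list(solution)
    i = rng.randrange(len(solution))
    j = rng.randrange(len(solution))
    while j == i:                      # the two cut points must differ
        j = rng.randrange(len(solution))
    a, b = min(i, j), max(i, j)
    return solution[:a] + solution[a:b + 1][::-1] + solution[b + 1:]
```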
So we have a new solution where the segment between a1 and a2 is reversed. Else, if a1 is greater than a2, we do the same thing but reverse from a2 to a1 instead of a1 to a2. Either way, we get the mutated solution with that segment in reversed order. So let's stop here and continue in the next lecture. Thank you. 27. Vehicle Routing Problem, Coding in Python #8: Welcome back, everyone. In the last lecture, when the random number was less than the probability of mutation, we reversed the order of several elements. The reason we do this is that it is an aspiration criterion: we want to introduce an element of randomness so that we can diversify and go beyond the neighborhood. We want to play with the solution and reverse orders. What we do next is again for diversification. Mutation is part of the genetic algorithm, so you can think of this as a hybrid between a genetic algorithm and tabu search; we did not cover the genetic algorithm here, but I have a couple of courses on it if you want to check them out. So now we have the mutated solution, the reversed one. We also want to swap nodes, because we want to perturb the solution enough times that we end up with a noticeably different solution, because we want to diversify. So we draw two random integers, like we did in the travelling salesman problem, and we swap those two random nodes, making a new list of nodes: in the mutated solution, we swap the node at random index a1 with the node at random index a2, ending up with a new solution. And we do it again.
We do it twice because if you do it only once, you will most probably end up with the same solution as a neighbor you already have. So basically, after we swap node one with node two, we swap node one with node three: node one gets swapped with node two, which then gets swapped with node three, so you can make sure your solution is different from what you had, because we want to diversify; we want to create new solutions. And this only happens if the random number was less than 0.2, the probability of mutation. So after we do this, the nodes get switched, and we have a new solution called random1. The reason I call it random1 is that this random solution now becomes the solution in hand. After we get it, control goes back up here, we generate all the combinations, and we do this all over again. Now for the aspiration criterion for diversification, to encourage the search to diversify: if the iteration number modulo 40 is zero, that is, every 40 runs, we swap two nodes of the best-so-far solution. We draw two random indices, take the best-so-far solution at this index and at this index, and swap them. This is exactly the same swap as before, just written in one line; we could have written the earlier one in one line too, but I spelled it out to make the idea clear. We also repeat it for the best solution so far: we swap the elements at indices random1 and random2, and then at indices random3 and random4, and then draw random3 and random4 again and swap once more, so that you can make sure, 100%, that the result is different.
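The swap-twice perturbation described above might be sketched like this (names are mine; index 0 is again assumed to hold the distance, so it is never swapped):

```python
import random

def double_swap(solution):
    """Swap two random node positions, twice, so the perturbed tour is
    very unlikely to coincide with a neighbour already examined."""
    mutated = solution[:]
    for _ in range(2):                        # swap twice to diversify
        a1 = random.randint(1, len(mutated) - 1)
        a2 = random.randint(1, len(mutated) - 1)
        mutated[a1], mutated[a2] = mutated[a2], mutated[a1]
    return mutated
```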
You will end up with a very, very different solution, because when you swap position one with position two, then draw new random integers and swap again, and do the same thing with three and four, the result ends up far from where you started; and again, the point is to diversify. Then we increase the iteration counter by one. And if the iteration number modulo 20 is zero, which is every 20 runs, we change the size of the tabu list: we draw a random integer between 10 and 25. Note that randint here does not include the last number, so an upper bound of 25 would actually give 10 to 24; we write 26 so that 25 is included. And we run this for the number of iterations, which in our case is 2,500. So let's stop here and pick up in the next lecture. I'll see you there. Thank you.

28. Travelling Salesman Problem, Coding in Python #9: Welcome back, everyone. If you remember, we saved the best solutions: at each iteration we stored the best solution found so far, together with the index of the iteration it was found at. So now we need to find the best solution overall and the iteration at which it was achieved. We find the minimum distance and put it in something called the final solution, along with its iteration index, so that at the end we can report the iteration of the best solution and the final distance we achieved. That is what we do here: from the saved solutions, we take the one with the minimum distance, and note at which iteration it was achieved. Here, for some reason, it did not let me apply the temporary-capacity computation to the final solution directly, so what I did was obtain it by running the temporary-capacity step separately.
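Putting these pieces together, the every-40-runs diversification, the every-20-runs tabu-list resize, and the per-iteration record of the best solution, the outer loop might be skeletonized as follows. The dummy `search_step` and all names are mine; the real course code performs a full tabu-search neighbourhood sweep at that point:

```python
import random

iterations = 2500
tabu_length = 15                     # assumed starting tabu tenure

def search_step():
    # stand-in for one tabu-search iteration; returns a tour distance
    return random.uniform(780, 900)

best_distance = float("inf")
saved = []                           # (iteration, best distance so far)
for it in range(1, iterations + 1):
    best_distance = min(best_distance, search_step())
    saved.append((it, best_distance))
    if it % 40 == 0:
        pass                         # diversify: swap two nodes of best so far
    if it % 20 == 0:
        # stdlib randint includes both endpoints; NumPy's randint excludes
        # the upper bound, which is why the lecture writes 26 to reach 25
        tabu_length = random.randint(10, 25)

# recover the best distance and the iteration it was first achieved at
index_for_best, final_distance = min(saved, key=lambda t: t[1])
```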
Basically, let me show you the temporarily saved capacities. This is our array, the last array we arrived at: warehouse, 22, 11, 26, 36, 42, 6, then a warehouse again, and so on. Remember, for the objective value we needed to find the capacity carried by each truck, right? We needed to add up the trucks' loads in order to penalize them. So here we need the load of each truck: everything from one warehouse marker up until the next warehouse marker is one truck's route. For example, here we find the load for this truck, then for this truck, then for this one, and so on; we need the load for each truck and the total. So we iterate through the trucks by index, like we did before, from one warehouse index to the index of the next. If two warehouses sit right next to each other, there is no truck load between them; that is, if subtracting the two warehouse indices gives zero, there is no truck capacity. Else, we take the truck's load between those two warehouses. For example, say you have warehouse one and warehouse two; the nodes between them belong to one truck, and we want the sum of the demand at those nodes. This is how we do it; if you don't remember, please go back to the first or second lecture where we talked about this, and you will understand it better. So then we have the total load for each truck, and next we want to find the distance without the penalty.
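A sketch of that per-truck load computation: split the flat route at the warehouse markers and sum the demand on each sub-route. The function, the depot id, and the demand mapping are illustrative, not the course's exact code:

```python
def truck_loads(route, demand, depot=0):
    """Sum customer demand between consecutive warehouse (depot) markers.

    route  : flat list of node ids, with `depot` separating the trucks
    demand : mapping from customer id to its demand
    """
    loads, current = [], 0
    for node in route:
        if node == depot:
            if current:               # close off the previous truck
                loads.append(current)
            current = 0
        else:
            current += demand[node]
    if current:                       # route did not end on a depot marker
        loads.append(current)
    return loads
```

With the lecture's demands 22, 11, 26, 36, 42, 6 split into two trucks of three customers each, the loads would come out as 59 and 84.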
Remember, we added the penalties to the solutions: penalty one, penalty two, penalty three. Every solution we have already includes the penalties, so now we want the distance without them. We simply call the distance function on the solution in hand to get its distance without the penalties, which gives us the final solution and its penalty-free distance. Then we print what we did: dynamic tabu list with an aspiration criterion, best so far, and multi-restart with a random number for mutation. We print the initial solution and the initial distance, and at what iteration the final solution was found; this is the initial solution we had here, with its total distance. Then we print the minimum solution we found: the lowest distance with the penalties and the total distance without the penalties. Of course, if it is a good solution, these two should be equal, because a good solution should not be violating any constraint, which means it has no penalties. We also print the load of each of the five trucks (not summed; each truck's load is reported individually), and the iteration at which we found the best solution, which we use to plot the solutions. I ran it for 2,500 iterations, and since that takes a very, very long time, maybe 10 hours, I went ahead and ran it beforehand. You end up with a final solution of 793. The best known solution for this problem is 784, so we are only nine units away, which is not bad at all.
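One simple way to relate the two printed figures: add a penalty proportional to the total capacity violation, so that a feasible solution's penalised distance equals its raw distance. The weight and function name are assumptions, not the course's exact penalty terms:

```python
def penalised_distance(raw_distance, loads, capacity=100, weight=50):
    """Raw route distance plus a penalty for any load above capacity."""
    violation = sum(max(0, load - capacity) for load in loads)
    return raw_distance + weight * violation
```

With five trucks all at or under 100, the violation is zero, so the penalised distance and the raw distance coincide, exactly what the printout shows for the 793 run.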
So we are only nine units away from the optimal solution. These are the 2,500 runs, and at each run the best solution found so far is plotted. If we go back here, you can see that at iteration 2,005 we achieved a distance of 793; without the penalties it is also 793. So you can see that this solution definitely does not violate any constraint, because no penalties are added to it: this is equal to this. And for the five trucks, this is the load each truck is carrying; none is above 100, so you know it is not violating any constraint: below 100, below 100, below 100, and so on. So you have five trucks, each with a load of less than 100. And this is how you do the vehicle routing problem with tabu search; it is actually a combination of tabu search and a genetic-style heuristic. I hope you understood it, and please let me know if you have any questions by posting them on the discussion board. Thank you.
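The final sanity check described here, every truck at or below capacity, hence no penalties, can be expressed directly (a sketch with assumed names and example loads):

```python
def is_feasible(loads, capacity=100):
    """True when no truck exceeds capacity; for such a solution the
    penalised distance and the raw distance coincide."""
    return all(load <= capacity for load in loads)

# five example trucks, all under 100, as in the reported run
print(is_feasible([90, 95, 80, 70, 99]))   # True
```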