Transcripts
1. Algorithms and Data Structures in Swift 5: Algorithms are essential in computer science. They are literally everywhere, even if you haven't noticed them yet. When you plan your trip, the GPS uses route-finding algorithms. And when you watch a 4K video on YouTube, it plays flawlessly thanks to audio and video compression algorithms. How about the amazing 3D game scenes? They wouldn't be possible without sophisticated rendering algorithms. Are you a software developer? What if you could write better, faster code? Maybe you are considering a career in software development. Wouldn't it be great to pass the job interview with flying colors? Hi, I'm Károly Nyisztor. I'm a software engineer, MOOC author, and instructor. I'm happy to welcome you to my course, Introduction to Algorithms and Data Structures in Swift 5. In this course, I teach you how to figure out the time complexity of a function. You will understand how built-in containers work, and you will be able to explain with confidence why quicksort is better than bubble sort. If you know about algorithms and data structures, you can write better code. Your programs will run faster by applying the right algorithm. I will demonstrate everything through live Swift coding. We'll run performance tests to compare the running times of different solutions to the same problem. You will learn about the Big O notation. This will help you in finding out the complexity of functions and algorithms. Then I'm going to show you the power of generics. Generics stand at the core of the Swift standard library. We'll see generics in action, and you will become quickly familiar with them. Next, we'll talk about the built-in Swift collection types. I'll show you how to implement some of the fundamental data structures using Swift. Then we'll delve into the sorting algorithms, from basic to advanced. I am going to explain and implement the selection sort, insertion sort, bubble sort, merge sort, and the famous quicksort algorithm. 
I use visualization techniques throughout the course, and I break down complex concepts into simple, easy-to-understand chunks. By the end of this course, you will know how to implement faster and better programs. You will also understand how the most popular data structures and sorting algorithms work. You'll learn how to implement more efficient code, along with Swift programming best practices you can apply instantly in your apps. This course is a worthy time investment. See you in the first lesson.
2. Section 1: Overview: Hi, I'm Károly Nyisztor, and welcome to Introduction to Algorithms and Data Structures in Swift. I created this course to help you write better, more performant code. You will learn about some of the commonly used algorithms and data structures. We start by learning how to analyze the efficiency of algorithms. First, we'll talk about the Big O notation, which is a mathematical model for measuring and classifying algorithms. It estimates the time efficiency of running an algorithm as a function of the input size. We're going to talk about constant time, linear time, polynomial time, and logarithmic time. To understand these concepts, we're going to implement, analyze, and compare various algorithms. Then we talk about recursion and how to avoid the problem of infinite recursion. Many algorithms and data structures rely on recursion; thus, it's imperative to understand how recursion works. We move on to show the practical benefits of using algorithms. We'll compare un-optimized samples with ones that rely on algorithms. If you had any doubts about the importance of algorithms, these examples are going to convince you. Next, we dive into generics. You must understand generics before we start studying data structures and algorithms. Then we talk about the built-in data structures. You will learn about the array, the set, and the dictionary before moving on to the sorting algorithms. Next, we study and implement the selection sort, insertion sort, and bubble sort. We're going to analyze the efficiency of each of these sorting algorithms, and we'll represent them visually. We'll continue with two more advanced sorting algorithms: we implement the merge sort and the quicksort in Swift. Finally, I am going to share some useful online resources that will help you sharpen your coding and problem-solving skills. After finishing this course, you will know all the fundamental concepts related to algorithms and data structures. 
You'll have a good understanding of concepts like recursion and generics. You will see how the popular sorting algorithms work and the way they can be implemented using Swift. As the first part of a series on Swift programming, this course is an essential stepping stone for delving deeper into algorithms, data structures, and programming in general.
3. Is This Course for You?: Now let's see if this is the right course for you. Although I explain almost every line of code I write in this course, you should know the basics of programming. Things like methods, loops, or conditions should be familiar to you. It's not a problem if you're relatively new to programming. Don't worry if you haven't heard about, say, the Big O notation, recursion, or generics. I'm going to explain each concept from the beginning. Are you new to Swift? Not a problem. Have you worked with any other programming language, like Java, C#, or even plain C? In that case, you will be able to follow along with me. You may find this course useful even if you are using a different programming language. Getting insights into Swift and learning how to optimize your code will elevate your career and professional skills. Now, does this describe you? If yes, this is the perfect course to learn about data structures and algorithms in Swift.
4. Why Should You Learn Algorithms?: So why should you study algorithms in the first place? Once we get past the basic hello-world beginner applications, we begin to realize that implementing complex software systems requires a different approach. The software which used to work nicely during our tests becomes incredibly slow and frequently crashes in production. We learn the hard way that we haven't prepared our system for real-life usage. While it ran without issues with the test data, it fails miserably when reality kicks in. Computer algorithms have been developed and refined over the last couple of decades. The study of algorithms and data structures is fundamental to any programmer who plans to develop scalable and performant software systems. Algorithms are indispensable to building software capable of managing large amounts of data or solving complex problems. And data structures allow us to access and store the data efficiently. We can't avoid data structures while studying algorithms. Besides, you will encounter algorithm- and data-structure-related questions during job interviews.
5. Prerequisites: Now, before we start digging into algorithms and data structures, let's have a quick look at what you are supposed to know. This course is beginner friendly. Some basic programming experience is required, but you need not have actually worked with Swift itself. To implement the exercises in this course, you will need a Mac with macOS Catalina or Big Sur and Xcode 12 or newer. You can download Xcode for free from the Mac App Store. We're going to use Swift 5 to implement the source code in this course. All the samples are compatible with the latest Swift version. I'm going to update the source code as changes in the language make it necessary. The projects are available on GitHub, and you can download them from the following repository. The Swift language is available as open source. Visit Swift.org for everything Swift-related. And with that said, let's dive in.
6. Section 2: Big O Notation: Welcome to the second section of this course on algorithms in Swift. In this section, we're going to talk about computational complexity. The Big O notation is a mathematical model used in computer science to describe the efficiency of algorithms as a function of their input size. The best way to understand Big O thoroughly is via code examples. Therefore, we illustrate each concept using Swift code. Here are the common orders of growth, or complexities, we're going to talk about in this section. Constant time describes an algorithm that always executes in the same amount of time, regardless of the input size. Linear time describes an algorithm whose performance grows linearly and in direct proportion to the size of the input data set. Quadratic time represents an algorithm whose performance is directly proportional to the square of the size of the input data set. This behavior is typical of algorithms that involve nested iterations over the data set. Deeper nested iterations result in cubic time, quartic time, and worse. Logarithmic time represents a highly efficient algorithm. For example, the popular binary search technique executes in logarithmic time. Note that we only cover the basics of Big O; this knowledge is sufficient to understand the efficiency of the sorting algorithms presented in this course. This graph visualizes the running times of some of the most popular sorting algorithms. As the input size increases, the performance differences become more and more prevalent. When the input count is small, all algorithms perform almost equally. Actually, when testing with small data sets, we may even have the impression that the algorithm with the logarithmic complexity has the worst performance. However, as the size of the data set grows, we'll clearly see the differences. In the next videos, we're going to analyze the various algorithm complexities through practical coding examples. 
We start by showcasing constant time, and we'll end this section with logarithmic time.
7. Constant Time - O(1): Constant time describes an algorithm that requires the same amount of time to execute regardless of the input size. Here's what we get if we visualize the running time of an algorithm that executes in constant time: the input size doesn't affect the execution time. In other words, the execution time remains constant. An algorithm which has constant time complexity runs for the same amount of time whether it operates on one, several thousand, or a million entries. In the following demo, we're going to implement some examples that execute in constant time. Note that we're going to rely on Swift playgrounds throughout this course. We'll implement a utility class for measuring performance. We'll rely on this utility class to perform measurements in the current demo and several other upcoming projects. To illustrate the constant time complexity, we'll create a function that performs a check on an array. The second example relies on a hashmap lookup. We're going to measure the time required to retrieve values based on their keys from Swift dictionaries of various sizes. If implemented correctly, the hashmap lookup should happen in constant time. All right, now let's switch to Xcode. If you want to follow along with me, download the repository from GitHub and open the BigO playground from the BigO-src folder. You can find the source code for this demo in the Constant Time playground page. First, we implement the benchmarking utility. The BenchTimer class has a single method called measureBlock. The measureBlock type method has a closure argument. The measured code gets executed 10 times, and we store the individual run times in an array. We rely on the Quartz Core framework's CACurrentMediaTime function to retrieve the current absolute time. Unlike NSDate or CFAbsoluteTimeGetCurrent, 
CACurrentMediaTime is reliable, since it's not subject to changes in the external time reference. The measured block gets executed between creating the start time and the end time. We store the execution time in the executionTimes array. After 10 iterations, we calculate the average execution time. The reduce array function calls the closure sequentially on all the array elements, which in our case sums up all the items in the array. Finally, we divide the result by the number of iterations, which gives us the average execution time. Next, we implement a function which takes an array and checks whether the first element is zero. This simple function executes in constant time. Therefore, the execution time should be the same regardless of the size of the input array. Now let's prove our theory. We'll invoke the startsWithZero function with three arrays: the first array has only three elements, the second one has 10,000 entries, and the third array is huge, with one million items. Our benchmark shows that the size of the input array doesn't affect the execution time. There are only negligible differences, in the order of microseconds. Another algorithm which runs in constant time is the hashmap lookup. We rely on the generateDict helper function to create custom-sized dictionaries. generateDict takes an input argument called size. This argument provides the size of the generated dictionary. The function returns a dictionary with keys of type String and values of type Int. First, we create an empty dictionary. Next, I validate the size input parameter using a guard statement. If the size is not bigger than zero, we return the empty dictionary. Otherwise, we generate the keys and the values. I use a for-in loop to iterate from zero up to, but not including, size. That is, we iterate from zero to size minus one. The key is the string representation of the index. Then we set the value for the given key, which is the index itself. 
In the end, we get a dictionary filled with key-value pairs in the form String: Int. Now let's create dictionaries of different sizes. The first one, called smallDictionary, contains only three key-value pairs; mediumDict has 500; hugeDict contains 100,000 keys and values. We rely on the BenchTimer measureBlock method to find out the execution time of the dictionary lookup. After running the demo, we indeed see that the time it takes to find an element doesn't depend on the size of the dictionary. Our test shows that dictionary search has constant time complexity. The differences in execution times are below the microsecond threshold. Constant-time algorithms are ideal because they are not affected by the size of the input. Unfortunately, it's not always possible to come up with a solution that runs in constant time.
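The measurement utility and the constant-time examples described above can be sketched like this. The names (BenchTimer, measureBlock, startsWithZero, generateDict) follow the transcript, but the exact course implementation may differ; the timer fallback for non-Apple platforms is my own addition.

```swift
#if canImport(QuartzCore)
import QuartzCore          // CACurrentMediaTime, as used in the course
#else
import Foundation          // fallback clock for non-Apple platforms (not in the course)
func CACurrentMediaTime() -> Double { Date().timeIntervalSince1970 }
#endif

// Benchmarking utility: runs a closure 10 times and returns the average run time.
class BenchTimer {
    static func measureBlock(closure: () -> Void) -> Double {
        let runCount = 10
        var executionTimes = [Double](repeating: 0, count: runCount)
        for i in 0..<runCount {
            let startTime = CACurrentMediaTime()
            closure()
            let endTime = CACurrentMediaTime()
            executionTimes[i] = endTime - startTime
        }
        // reduce sums the run times; dividing by the count gives the average
        return executionTimes.reduce(0, +) / Double(runCount)
    }
}

// O(1): checking the first element doesn't depend on the array's size.
func startsWithZero(array: [Int]) -> Bool {
    guard !array.isEmpty else { return false }
    return array[0] == 0
}

// Builds a [String: Int] dictionary of the requested size for the lookup test.
func generateDict(size: Int) -> [String: Int] {
    var result = [String: Int]()
    guard size > 0 else { return result }
    for i in 0..<size {                  // iterate from 0 to size - 1
        result[String(i)] = i
    }
    return result
}

let hugeDict = generateDict(size: 100_000)
let lookupTime = BenchTimer.measureBlock {
    _ = hugeDict["99999"]                // dictionary lookup runs in constant time
}
print("Average lookup time: \(lookupTime)s")
```

Running the same measurement against dictionaries of three, 500, and 100,000 entries should produce roughly identical timings, which is the point of the demo.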
8. Linear Time - O(n): In this lecture, we're going to talk about linear time complexity. Linear time describes an algorithm whose performance grows linearly and in direct proportion to the size of the input data set. This chart represents the running time of an algorithm which executes in linear time. There's a one-to-one correlation between input size and execution time. The execution time of an algorithm with linear time complexity increases at the same rate as the input data set grows. For example, if 1,000 entries took one second to process, then 10,000 would require ten times as much, that is, 10 seconds. 100,000 entries would get processed in 100 seconds, and so on. As the input data set grows, so does our algorithm's processing time. That's why it's called linear time. Next, we'll implement some examples that execute in linear time. First, I'm going to show you a function which sums up the elements of an array. The next function we implement returns the number of odd and even integers from an array. Both functions iterate through the entire array; therefore, the computation time is directly related to the size of the array. We have linear complexity in both cases. For performance measurements, we'll rely on the BenchTimer we built in the previous lesson. If you want to follow along with me, download the repository from GitHub and open the BigO playground from the BigO-src folder. You can find the source code for this demo in the Linear Time playground page. To illustrate the linear time complexity, we'll need arrays of different sizes. So let's first implement a function which generates custom-sized arrays with random content. The generateRandomArray function takes two arguments: size, which gives the size of the generated array, and maxValue, which provides the biggest allowed value for the elements. We use arc4random_uniform to create random content for the array. Next, we create a function which sums up the elements of the input array. 
This function iterates through the elements of the input array. The sum of the elements gets stored in a variable called result. Finally, the function returns the sum of all integers from the input array. Note that we could have used the reduce array function to calculate the sum, but implementing the custom solution makes it evident that we iterate over the entire array. We're going to test the sum function with three arrays of different sizes. The first array has 100 items, the next has 1,000, and the last one has 10,000 elements. After executing our tests, the performance measurement values displayed in the console prove that the execution time is linear. The sum function iterates through all of the elements of the array; thus, it's normal that the execution time increases proportionately with the size of the array. Here's another function which needs to iterate through all the elements of the input list. The countOddEven function checks each item to find out whether it's odd or even. I use the modulo operator to check if there is a remainder when dividing the number by two. The number is even if the remainder is zero; otherwise, the number is odd. Our tests confirm that countOddEven is indeed a function which runs in linear time. The execution time increases at the same rate as the input data set grows. We encounter linear-time algorithms quite frequently. More data usually means more processing time. Reducing the time complexity of an algorithm from linear to constant time is not an easy task and may not even be possible in all cases. But as you'll see in the upcoming lectures, there are even worse time complexities.
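Here is a minimal sketch of the linear-time demo. The helper names follow the transcript; Int.random(in:) stands in for arc4random_uniform so the sample also runs outside Apple platforms.

```swift
// O(n): generate an array of `size` random values in 0..<maxValue.
// (The course uses arc4random_uniform; Int.random is the cross-platform equivalent.)
func generateRandomArray(size: Int, maxValue: Int) -> [Int] {
    guard size > 0 else { return [] }
    return (0..<size).map { _ in Int.random(in: 0..<maxValue) }
}

// O(n): sums all elements — each item is visited exactly once.
func sum(array: [Int]) -> Int {
    var result = 0
    for item in array {
        result += item
    }
    return result
}

// O(n): counts odd and even numbers using the modulo operator.
func countOddEven(array: [Int]) -> (odd: Int, even: Int) {
    var odd = 0
    var even = 0
    for item in array {
        if item % 2 == 0 {      // even if the remainder is zero
            even += 1
        } else {                // otherwise odd
            odd += 1
        }
    }
    return (odd, even)
}

let numbers = generateRandomArray(size: 1000, maxValue: 100)
print(sum(array: numbers))
print(countOddEven(array: numbers))
```

Timing these functions with BenchTimer on arrays of 100, 1,000, and 10,000 elements should show the execution time growing in proportion to the input size.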
9. Quadratic Time - O(n2): Quadratic time represents an algorithm whose performance is directly proportional to the square of the size of the input data set. As you can see in this chart, the runtime increases sharply, faster than the input size does. The runtime of an algorithm with quadratic time complexity will go up as a square of the input data set's size. Quadratic, cubic, and quartic time complexities are a result of nested operations on a data set. You should try to avoid them whenever possible due to their negative impact on the overall performance. Let's say your quadratic-time algorithm processes 100 entries in 100 milliseconds. For 2,000 entries, the algorithm would run for 40 seconds, and 4,000 entries would be processed in slightly more than two and a half minutes. The runtime rises even more sharply with the input size in the case of cubic or quartic time complexity. In the following demo, we're going to build a function that creates multiplication tables. The function will use two nested loops. Because of the nested iterations, this algorithm has a quadratic time complexity. If you want to follow along with me, download the repository from GitHub and open the BigO playground from the BigO-src folder. You can find the source code for this demo in the Quadratic Time playground page. Now let's switch to Xcode. First, we implement a useful addition to our benchmarking utility. We add a formattedTime property to the CFTimeInterval type. This property provides a concise string representation of the time interval value, which also includes the right unit of time, ranging from nanoseconds to seconds. Next, we'll implement a function to demonstrate the quadratic time complexity. The multTable function takes an integer argument, which gives the size of an array which holds positive integers in the range 1 to size. The function returns the results of multiplying each element in the array with every other value. We use two loops to compute the result. 
The outer loop iterates through the indices of the array. The inner loop takes the value found at the outer index and multiplies it with every item from the same array. The output is a multiplication table. Let's check out what we get for the input value 10. Now let's analyze how the two nested loops influence the processing time. For a two-element array, the outer loop iterates two times, and the inner loop iterates two times for every outer iteration. This gives us four iterations in total. For a three-element array, the total iteration count is three times three, that is, nine. For 10 elements, the function will loop 100 times. The number of iterations goes up as a square of the input data size. Let's run some performance tests to prove that our function has a quadratic time complexity. We call the multTable function with arrays of different sizes, and we log the execution times to the console. The numbers are convincing: we can clearly see the quadratic jumps in execution time. Now, I'd like to add that, especially for smaller inputs, the measurements may not always reflect the quadratic time complexity because of under-the-hood compiler and hardware optimizations.
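The multiplication-table function and the formattedTime helper might look like the sketch below. Both follow the transcript's description rather than the course's exact code; the unit thresholds in formattedTime are my own assumptions.

```swift
import Foundation   // String(format:)

// O(n²): every element of 1...size multiplied with every other element.
func multTable(size: Int) -> [Int] {
    guard size > 0 else { return [] }
    let array = Array(1...size)
    var results = [Int]()
    for i in 0..<array.count {         // outer loop: n passes
        for j in 0..<array.count {     // inner loop: n per outer pass → n² total
            results.append(array[i] * array[j])
        }
    }
    return results
}

print(multTable(size: 3))   // [1, 2, 3, 2, 4, 6, 3, 6, 9]

// Hypothetical sketch of the formattedTime helper. CFTimeInterval is a
// typealias for Double, so extending Double covers it on any platform.
extension Double {
    var formattedTime: String {
        switch self {
        case ..<1e-6: return String(format: "%.1f ns", self * 1e9)
        case ..<1e-3: return String(format: "%.1f µs", self * 1e6)
        case ..<1:    return String(format: "%.1f ms", self * 1e3)
        default:      return String(format: "%.2f s", self)
        }
    }
}
```

Doubling the input size quadruples the number of append operations, which is exactly the quadratic jump the benchmark makes visible.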
10. Hints for Polynomial O(n^k) Complexity: Nested loops within a method or function should always be a warning sign, and you should avoid them at all costs. Whenever you encounter two or more loops that resemble Russian nesting dolls, ask yourself whether the nested iterations are really required to solve that particular problem. First, document the questionable code using dedicated comments, or even better, add a warning if supported by the given compiler. Then implement unit tests to highlight the performance issue caused by the nested loops. Finally, try to solve the problem by replacing the nested iterations with a more efficient algorithm. Polynomial time complexity is usually the result of rushed coding produced by developers who lack time or expertise, or maybe both. More often than not, you'll find a better solution. In the section called The Power of Algorithms, we'll see examples of replacing inefficient approaches with solutions that bring huge performance gains. Sometimes there really is no better way to solve a particular problem, and you can't get rid of the nested loops. In such cases, document the affected code parts thoroughly and describe why it works that way. Also, describe the performance issues it may cause with larger data sets.
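As an illustration of this advice (this example is mine, not from the course): detecting duplicates with nested loops is O(n²), while a Set-based rewrite of the same check runs in O(n).

```swift
// O(n²): the nested loops compare every pair of elements.
func containsDuplicatesQuadratic(_ values: [Int]) -> Bool {
    for i in 0..<values.count {
        for j in (i + 1)..<values.count where values[i] == values[j] {
            return true
        }
    }
    return false
}

// O(n): a Set remembers what we've seen; insert reports whether
// the value was already present, so one pass suffices.
func containsDuplicatesLinear(_ values: [Int]) -> Bool {
    var seen = Set<Int>()
    for value in values {
        if !seen.insert(value).inserted { return true }
    }
    return false
}
```

The two functions return the same answers; only the inner loop has been traded for a constant-time set membership check.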
11. Logarithmic Time - O(log n): Advanced algorithms like the binary search technique execute in logarithmic time. Logarithmic time means that the execution time goes up linearly while the input data size grows exponentially. For example, if it takes one millisecond to compute 10 elements, it will take two milliseconds to compute 100 elements, three milliseconds to compute 1,000 elements, and so on. Binary search and other divide-and-conquer types of algorithms involve logarithmic running times; quicksort, which also divides the problem, runs in O(n log n) on average. First, let's take a closer look at the logarithm. In mathematics, the logarithm is the inverse operation to exponentiation. The logarithm of x to base b is the unique real number y such that b to the power y is x. For example, the logarithm of 1,000 to base 10 is 3, as 10 to the power 3 is 1,000. The base-4 logarithm of 16 is 2, as 4 to the power 2 is 16. The base-3 logarithm of 27 is 3, as 3 to the power 3 is 27. And the base-2 logarithm of 1,024 is 10, as 2 to the power 10 is 1,024. In other words, the logarithm of a number is the exponent to which another fixed number, the base, must be raised to produce that number. In computer science, when measuring the performance of algorithms, the base of the logarithm is not specified, because the result only changes by a constant factor when we use another base. Therefore, the constant factor is usually disregarded in the analysis of algorithms.
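The identities above can be checked in Swift; log(x)/log(b) gives the base-b logarithm via the change-of-base rule.

```swift
import Foundation   // log, log2

// Compute the base-b logarithm via the change-of-base rule.
func logBase(_ base: Double, of x: Double) -> Double {
    return log(x) / log(base)
}

print(logBase(10, of: 1000))   // ≈ 3, since 10³ = 1000
print(logBase(4, of: 16))      // ≈ 2, since 4² = 16
print(logBase(3, of: 27))      // ≈ 3, since 3³ = 27
print(log2(1024))              // 10, since 2¹⁰ = 1024

// Changing the base only scales the result by a constant factor —
// which is why Big O analysis leaves the base unspecified.
let constantFactor = log2(10.0)   // log₂(x) = log₁₀(x) × log₂(10) for any x
```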
12. Big O Summary: We dedicated this entire section to the Big O notation. Understanding time complexity paves the road to working with algorithms. We've talked about constant time complexity, where the execution time is constant and doesn't depend on the input size. Checking the first element of an array or retrieving an item from a dictionary are good examples of constant time complexity. Linear time complexity describes an algorithm whose execution time grows in direct proportion to the size of the input. For example, enumerating through the elements of an array works in linear time. The execution times of quadratic-time algorithms go up as a square of the input data set's size. Quadratic time complexity is produced by a loop nested into another loop, as we've seen in our multiplication table example. Try to avoid polynomial time complexity like quadratic, cubic, or quartic, as it can become a huge performance bottleneck. Logarithmic time describes efficient algorithms like binary search, and it shows its benefits when working with larger data sets. In the upcoming sections, we'll head out into the world of algorithms. We'll explore the difference between the naive, brute-force approach and the one we implement with efficiency in mind.
13. Section 3: Recursion: In programming, repetition can be described using loops, such as the for loop or the while loop. Another way is to use recursion. We encounter recursion frequently while studying algorithms and data structures; thus, it is important to understand what recursion is. I'm going to show you how recursion works through live coding examples. Recursion is a useful technique, yet it doesn't come without pitfalls. I finish this section by showing you how to avoid common issues when using recursion in Swift projects.
14. What is Recursion?: Let's clarify what recursion is. By definition, recursion is a way to solve a recurring problem by repeatedly solving similar subproblems. In programming, we can have recursive functions. A function is recursive if it calls itself. The call can happen directly, like in this case, or indirectly, if the function calls another function which in turn invokes the first function. We'll often encounter recursive data structures, too. A data structure is recursive if you can describe it in terms of itself. For example, a linked list can be described as a list node followed by a linked list. All right, let's implement a recursive data structure. So here's a simple Node class. Each node can link to the next node through the next property. The node also has a value of type String. Finally, I provide an initializer which sets the value property. Now that we have our Node type, let's build the linked list. First, I create each node and assign it a value that matches its name. Then I set the next property of each node to form a linked list. The next node for node1 is node2; for node2, the next node will be node3. And finally, I end the linked list by setting the next node for node3 to nil. Now let's implement the function that traverses the linked list and prints the value in each node. I'm going to call the function parseNodes. The parseNodes function takes a parameter of type Node. If the input node is nil, I exit the function. Otherwise, I print the value of the given node. Next, I call the parseNodes function recursively by passing in the next node. Finally, I call the parseNodes function with the first node as input parameter. The console shows the expected values. Since the data structure is recursive, I was able to use recursion to iterate through it, using a recursive function. Recursion won't necessarily produce faster or more efficient code, but it usually provides an elegant alternative to iterative approaches, and it often requires fewer lines of code.
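A sketch of the Node class and the recursive traversal just described; the names follow the transcript.

```swift
// A recursive data structure: each node links to the next one.
class Node {
    var value: String
    var next: Node?    // nil marks the end of the list

    init(value: String) {
        self.value = value
    }
}

// Build the list: node1 -> node2 -> node3 -> nil
let node1 = Node(value: "node1")
let node2 = Node(value: "node2")
let node3 = Node(value: "node3")
node1.next = node2
node2.next = node3
node3.next = nil   // explicit end of the list (nil is also the default)

// Recursive traversal: print the current value, then recurse on the next node.
func parseNodes(from node: Node?) {
    guard let node = node else { return }   // reached the end of the list
    print(node.value)
    parseNodes(from: node.next)
}

parseNodes(from: node1)
// node1
// node2
// node3
```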
15. How Does Recursion Work?: So far, we've seen some examples of recursive functions and data structures. Now let's check out how recursion works. I'm going to calculate the factorial of a positive integer. This is a problem that can be solved using recursion, since the factorial is defined as the product of the integers from 1 to n. So here's a Swift factorial function that calculates the factorial of a positive integer. The function takes an unsigned integer as argument. If the input parameter is smaller than two, the function returns one. Otherwise, the function returns the product of the input and the result of calling the function with an argument that is smaller by one. The recursive calls continue until the function gets called with a value that is smaller than two. To understand how recursion works, here's a graphical representation of what's going on when calculating the factorial of three. Whenever a nested call happens, the execution of the former call is suspended and its state is stored. A snapshot of its context, that is, its input parameters and local variables, is persisted. All this information is stored in a structure known as the call stack, or execution stack. The call stack is a stack structure that keeps track of the point where control should be returned after a subroutine finishes its execution. When the nested call has finished executing, its context is destroyed and control is returned to the caller. Eventually, we get back to the very first function call. All the nested contexts are destroyed by then, and the final result is returned to the caller. Now let's switch to Xcode. I've gone ahead and created a playground project. Let's start by implementing a function that calculates the factorial of a positive integer. The function takes a 64-bit unsigned integer as argument, and it also returns this type. This problem can be solved using recursion, since the factorial is defined as the product of the integers from 1 to n. The factorial of 1 and of 0 is 1, so we return 1 for inputs smaller than 2. Otherwise, the function returns the product of the input and the result of calling the function with an argument that is smaller by one. All right, let's try out our function. And indeed, the factorial of 3 is 6. Now let's try it with a bigger number, let's say with 10. All right, now let's try it with 20. And how about 30? Oops, we've got an error. This is because the result would be bigger than the maximum value that can be represented using an unsigned 64-bit integer. I print UInt64.max to the console. So this is the biggest number that we can represent using a 64-bit unsigned integer, and the factorial of 21 is already bigger than that. Now, there is no support for big integers in Swift yet, but there is a BigInt prototype in the official Swift repository. You can find it in the master Swift repo under test/Prototypes. I select Raw, then save the page as a Swift source file. Next, I'm going to add it to our playground project. We don't need this import, and I also get rid of the tests, except this typealias. Now I can use the BigInt type. So let's refactor our factorial function. I'm going to use BigInt for the input argument, and also BigInt for the return type. Now let's try out our new function. It already works with 30, so we can increase the value even further, and the result will be a huge number, as you'll see in a moment. Hopefully, we'll see the BigInt type soon in the official Swift library. Until then, you can use it, but at your own risk.
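The recursive factorial from this lesson, sketched for UInt64. The BigInt refactoring is omitted here, since that type has to be pulled in from the Swift repository.

```swift
// Recursive factorial. UInt64 overflows past 20!, which is why the
// lecture switches to the BigInt prototype for larger inputs.
func factorial(n: UInt64) -> UInt64 {
    guard n >= 2 else { return 1 }   // base case: 0! = 1! = 1
    return n * factorial(n: n - 1)   // suspend this call, recurse with n - 1
}

print(factorial(n: 3))    // 6
print(factorial(n: 10))   // 3628800
print(factorial(n: 20))   // 2432902008176640000 — the largest factorial that fits
// factorial(n: 21) would trap: the result exceeds UInt64.max
```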
16. Recursion Pitfalls: Recursion is great, but it doesn't come without pitfalls. The biggest problem is infinite recursion. I'm going to illustrate it using an Xcode playground project. I implement a function which calculates the sum of the first n positive integers. I call the function badSum to make it clear that it's not the right approach to solve this problem. I'll show you the right solution, which relies on a simple formula, in an upcoming lesson. badSum accepts an input parameter n of type Int and returns an integer. We'll use a recursive approach: the function returns the sum of the input parameter and the result of calling itself with an argument that is smaller by one. All right, now let's try it with, say, three. Eventually: runtime error. Of course. To understand the root cause, let's quickly recap how recursion works. Each time a nested call occurs, a record of the current context is made and added as a new stack frame to the top of the stack, and each time a call returns, the top stack frame is removed. The stack is a special part of the memory used for fast static allocations. Each application has a stack, and multithreaded applications have one stack per thread. The stack has a finite size. Thus, if we keep calling nested functions and none of these functions returns, we run out of available stack space. When the memory available for the call stack is exceeded, the app will crash with a stack overflow error. Let's inspect our badSum function. There is no condition that prevents executing the nested call over and over again. I'm going to slightly change the code to print the value of the input argument whenever we call the function. Now we can clearly see that the execution doesn't stop after two recursive calls as it should. What that means is the nested contexts are not destroyed and the stack frames are not removed. 
We keep allocating memory on the stack without deallocating it, which eventually leads to the stack overflow exception.
17. How to Avoid Infinite Recursion?: To avoid infinite recursion, each recursive function needs to have at least one condition that prevents further nested calls to itself. This condition is called the base case. So let's go back to the badSum function and add the missing base case. The issue is that badSum calls itself as we keep decreasing the input argument, and we need a base case here, that is, a condition which makes the function return without performing another recursive call. We're only interested in the sum of positive integers, so if the input argument is zero, we can simply return zero. If I run the function after this change, it produces the expected result. But does this check really work for all input values? Let's check it with minus one. Nope. Since I only check for zero, the function causes a runtime crash for negative input. We must also ensure that the function actually progresses towards the base case. I need to modify the base case so that it covers not only the value zero but also negative values. Now this conditional statement will work, yet a guard statement is a better fit because it forces us to exit the function. Note that some programming languages have built-in checks to prevent stack overflows. Remember these rules when you implement recursive solutions in Swift, and check your recursive functions thoroughly through unit tests that also cover edge cases.
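A sketch of the fixed function described above, with the guard-based base case that also covers negative input (the function name `sum` is assumed; the lecture calls the broken variant badSum):

```swift
// badSum recursed forever for inputs <= 0; the guard below is the base case
// that stops the recursion for zero and for negative arguments.
func sum(_ n: Int) -> Int {
    guard n > 0 else { return 0 }   // base case: forces an early exit
    return n + sum(n - 1)           // each call progresses toward the base case
}

print(sum(3))    // 6
print(sum(0))    // 0
print(sum(-1))   // 0 — no crash for negative input
```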
18. Section 4: The Power of Algorithms: In this section, we are going to take a closer look at the importance and the benefits of algorithms and algorithmic thinking. We talked about the Big O notation in the previous section. We've seen that our choice of implementation can make a huge difference when it comes to the performance and the scalability of our solution. Algorithmic thinking is the ability to approach a problem and find the most efficient technique to solve it. To demonstrate the power of algorithms, we are going to solve the following problems. First, we calculate the sum of the first n natural numbers. Then we're going to implement a function that, given an array and a value, returns the indices of any two distinct elements whose sum is equal to the target value. Finally, we are going to calculate the equilibrium index of an array. The equilibrium or balance index is the index which splits the array such that the sum of elements at lower indices is equal to the sum of items at higher indices. We're going to use a naive approach first, then we'll implement a solution with efficiency in mind. I'm going to show you how some basic math and the knowledge of Swift language features and data structures can help us in implementing cleaner and more performant solutions.
19. Calculate Sum(N): Our first task is to implement a function which calculates the sum of the first n natural numbers. We'll start with the naive approach; then we're going to implement a more efficient way to solve this problem, using a formula that is more than 2,000 years old. All right, so let's switch to our Xcode playground project. I define a function called sum, which takes a single argument. This argument represents the count of natural numbers whose sum we want to calculate. The function returns an unsigned integer, which holds the sum. We implement a very simple logic: we sum up all the numbers, starting with one, up to the value of the input parameter. Because of this loop, our function has a linear time complexity. We can confirm this by calling the sum function with steadily increasing n values. The console log shows that the execution time increases linearly with the input. Although this function does the job, there's a better way to compute the sum of the first n natural numbers. We're going to rely on a simple formula: n(n + 1) / 2. Carl Friedrich Gauss is said to have found this relationship in his early youth. However, he was not the first to discover this formula. It is likely that its origin goes back to the Pythagoreans, and it was known as early as the sixth century BC. The benefits of using this formula become evident if you check out some examples. To calculate the sum of the first three natural numbers, we can just add them together; we get the same result using the formula. As the number increases, it becomes obvious that the formula gives a cleaner and shorter solution. So let's implement it using Swift. The sumOptimized function operates in constant time: the execution time does not depend on the input. Let's run the same performance tests as we did for the sum function. The results prove that the execution times do not vary, regardless of the input size.
There are only some minor, negligible differences in the range of microseconds. The huge benefit over the linear algorithm is its performance. Let's compare the two methods: sumOptimized is more efficient even for smaller values, and the difference just grows with the input. These charts visualize the running times of the two functions. The optimized, constant-time function does not depend on the input, unlike the sum function, which runs in linear time. By applying this clever formula, we managed to implement a solution with the most optimal time complexity. In the next lecture, we're going to solve a more challenging problem.
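The two approaches from this lecture can be sketched as follows (function names `sum` and `sumOptimized` follow the lecture; the exact signatures are assumptions):

```swift
// Naive O(n) summation: loops from 1 to n.
func sum(_ n: UInt) -> UInt {
    var result: UInt = 0
    var i: UInt = 1
    while i <= n {
        result += i
        i += 1
    }
    return result
}

// Gauss' formula 1 + 2 + … + n = n(n + 1) / 2 runs in constant time O(1).
func sumOptimized(_ n: UInt) -> UInt {
    return n * (n + 1) / 2
}

print(sum(100))           // 5050
print(sumOptimized(100))  // 5050 — same result, no loop
```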
20. Pair Matching Challenge: Here is our next challenge. Our task is to write a function that, given an array and a target value, returns the zero-based indices of any two distinct elements whose sum is equal to the target sum. If there are no such elements, the function should return nil. For example, for the array of numbers 1, 2, 2, 3, 4 and the target value 4, our function should return the tuple of indices (1, 2) or (0, 3). First, we are going to implement a solution which relies on nested iterations. Because of the nested iterations, this algorithm has a quadratic time complexity. Then we're going to come up with an algorithm which operates in linear time rather than quadratic time. All right, time to do some coding. As usual, we implement the brute-force approach first. We step through the array and check every value. Starting from the outer index, we scan the rest of the array. If we find a number such that the two numbers add up to the target, we return the tuple; otherwise, we continue iterating through the subarray. This algorithm uses two nested loops, which means quadratic time complexity. To be precise, this function has a worst-case time complexity of (n² + n) / 2, where n is the size of the input array. It's not n², since the inner loop doesn't iterate through the entire array. Let's go through the steps of finding this formula. The outer loop iterates through the entire array, which gives n iterations. The inner loop iterates n − 1 times first, then n − 2 times, and so on. When the outer loop reaches the penultimate index, the inner loop only iterates once. To find out the total iterations performed in the inner loop, we have to calculate the sum of the first n − 1 numbers. We can use the formula we learned in the previous lecture to calculate the inner loop count: that is, (n − 1)((n − 1) + 1) / 2, which gives (n − 1)n / 2. Now, to get the total number of iterations,
we must add the count of the outer loop, too. So our function has a time complexity of (n² + n) / 2. Now let's implement a solution which does not rely on nested loops. In other words, we're going to avoid rescanning the array to find numbers which add up to the target value. We'll use a single loop to iterate through the entire array. For each number, we check whether the difference between the target value and the given number can be found in a dictionary. If the difference is found, we've got our two numbers, and we return the tuple with the indices. Otherwise, we store the current index in the dictionary, the difference being the key, and we iterate further. Note that both dictionary insertion and search happen in constant time. Therefore, these operations won't affect the time complexity of our function. We managed to reduce the time complexity to O(n), because we only iterate once through the array. Now let's compare the performance of the two functions. First, we run our test with an array which contains 10 random integers. The optimized function usually performs better than the one with quadratic time complexity, yet the speed difference is not that convincing. For 50 items, we start to see noticeable differences. The benefits of the linear time complexity are quite evident; the results speak for themselves as we increase the size of the array further. In the next lecture, we are going to solve another interesting problem, called the equilibrium index.
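A sketch of the two approaches described above (the function names findPairBruteForce and findPair are assumptions; the lecture doesn't spell out the exact signatures):

```swift
// Brute force: two nested loops, (n² + n) / 2 worst-case iterations.
func findPairBruteForce(_ numbers: [Int], target: Int) -> (Int, Int)? {
    for i in 0..<numbers.count {
        // The inner loop only scans the rest of the array
        for j in (i + 1)..<numbers.count where numbers[i] + numbers[j] == target {
            return (i, j)
        }
    }
    return nil
}

// Linear time: one pass plus constant-time dictionary lookups.
func findPair(_ numbers: [Int], target: Int) -> (Int, Int)? {
    var indexOfComplement = [Int: Int]()  // key: value still needed, value: index that needs it
    for (index, value) in numbers.enumerated() {
        if let match = indexOfComplement[value] {
            return (match, index)
        }
        indexOfComplement[target - value] = index
    }
    return nil
}

print(findPairBruteForce([1, 2, 2, 3, 4], target: 4)!)  // (0, 3)
print(findPair([1, 2, 2, 3, 4], target: 4)!)            // (1, 2)
```

Both answers are valid per the problem statement; the two algorithms simply encounter a different matching pair first.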
21. Find the Equilibrium Index: In the following demo, we're going to build a function to find the equilibrium indices of an array. An equilibrium index of an array is an index such that the sum of elements at lower indices is equal to the sum of elements at higher indices. For example, in an array with the numbers -3, 2, -2, 1, and -2, one is an equilibrium index, because the element at index zero is equal to the sum of elements at indices two, three, and four. Two is also an equilibrium index, because the sum of elements at indices zero and one is equal to the sum of elements at indices three and four. We're going to implement a function that, given an array, returns the equilibrium indices, or nil if the array has no equilibrium index. First, we're going to implement a solution which uses two nested loops. Then I'm going to show you a way to optimize our algorithm: by applying some basic math, we'll come up with an algorithm that executes in linear time. Now let's switch to Xcode. The equilibrium function uses three loops. The outer loop iterates over all the elements of the array. We need two inner loops to find out whether the current index, picked by the outer loop, is an equilibrium index or not. The first inner loop tracks the sum of the elements at lower indices; the second inner loop keeps track of the sum of items at higher indices. In other words, we split the array at the position of the outer index. Then we compare the sums of the elements of the two subarrays. If the sums are equal, we found an equilibrium index, and we add it to the array of indices as we keep iterating further. The worst-case time complexity of this solution is quadratic. Updating the left and right sums using two inner loops is a simple but also very inefficient approach. Let's try to come up with a solution that does not require nested loops. The idea is to get the total sum of the array.
First, we use the reduce array method to calculate the sum of all the elements in the array. Then we iterate through the entire array. The left sum is initialized to zero. We keep updating the left sum by adding the elements as we iterate through the array. We can get the sum of the elements of the right subarray by subtracting the elements one by one from the total sum. With this algorithm, we only need to loop once through the array. This function has a linear time complexity of O(2n), because of the reduce function and the loop. Now let's run our performance tests. For five items, the optimized variant performs about two times better, and over 20 times better for 50 items. For an array of 200 items, the equilibriumOptimized function runs about 1,000 times faster than the function which uses nested loops. The difference grows quickly with the input size. For bigger arrays, finding the equilibrium indices with the optimized variant only takes a fraction of the running time of the brute-force approach. As we expected, the function with quadratic time complexity is more sensitive to input data sizes than the function with linear time complexity. This is another example which demonstrates the importance of finding the right algorithm.
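The linear-time approach described above can be sketched as follows (the function name equilibriumIndices is an assumption):

```swift
// Linear-time equilibrium index search: one reduce pass plus one loop, O(2n).
func equilibriumIndices(_ numbers: [Int]) -> [Int]? {
    var indices = [Int]()
    var leftSum = 0
    var rightSum = numbers.reduce(0, +)   // total sum of the array
    for (index, value) in numbers.enumerated() {
        rightSum -= value                 // exclude the pivot from the right side
        if leftSum == rightSum {
            indices.append(index)
        }
        leftSum += value                  // include the pivot on the left side
    }
    return indices.isEmpty ? nil : indices
}

// The lecture's example array has equilibrium indices 1 and 2
print(equilibriumIndices([-3, 2, -2, 1, -2])!)  // [1, 2]
```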
22. The Power of Algorithms - Summary: In this section, we've seen some practical examples of solving problems using two different approaches. The naive implementations produced the right results; however, they start to show their weaknesses as the input size gets bigger. By using more efficient techniques, we reduced the time complexity and, as a consequence, the execution time of our solutions considerably. Coming up with the optimal algorithm requires research and a deeper understanding of the problem we are trying to solve. Math skills and the ability to apply the features of the given programming language will help us in creating more efficient algorithms. The time complexity of an algorithm is crucial when it comes to performance. Do your best to avoid polynomial and worse time complexities. Nested loops should always be a warning sign: whenever you encounter them, think about other alternatives to solve that particular problem. To recap: you should highlight the performance bottlenecks using dedicated comments; implement unit and performance tests, as they will surface the issues which otherwise would remain hidden; finally, try to solve the problem without relying on nested iterations, if possible. Now we know what the Big O notation is all about. We are also aware of the benefits of algorithms and algorithmic thinking. Let's proceed further with the study of algorithms. In the next section, we're going to take a closer look at the most popular basic sorting algorithms. We'll analyze how the sorting algorithms work, and we are going to implement them from scratch using Swift.
23. Section 5: Generics: Before we dive deeper into algorithms, it is important to talk about a related topic: data structures. Data structures are containers which hold the data used in our programs. The efficiency of these data structures affects the entire software system, so it is important to understand how they work and which one to choose for a particular task. In this section, we're going to talk about generics and the built-in Swift collection types. Generics are the means for writing not only useful but also reusable code. They are also used to implement data structures that are not constrained to only one data type. Then we'll take a closer look at the built-in Swift collection types. We'll talk about the array, the set, and the dictionary type.
24. Why Generics?: Generics stand at the core of the Swift standard library. They are so deeply rooted in the language that you can't avoid them. In most cases, you won't even notice that you're using generics. To illustrate the usefulness of generics, we try to solve a simple problem. Let's say that we need to create types that hold pairs of different values. Assuming that we need a pair type that contains two strings, we could create a structure like this. Next, we need a pair holding two integer values, so we declare our IntPair type. Then we'll also need a type which holds two floats. Here we go. How about a pair which has two Data properties? We create yet another pair structure, called DataPair. And if we need a String-Double pair, we create the appropriate type, right? Could it be simpler? We can now use our newly created types whenever we need them. We must only remember their names. Whenever we need a new pair type, we simply create it. Is this really the way to go? Definitely not. We must stop the type explosion before it gets out of hand.
25. Generic Types: Wouldn't it be cool to have only one type which can work with any value? Generic types come to the rescue. Using generics, we can define a Pair structure that can be used with any type. We use placeholders rather than concrete types for the properties in the structure. T1 and T2 are placeholders that can be used anywhere within the type's definition. Also, we simply call our struct Pair: since it's a generic type, we don't have to specify the supported types in the name of our structure. Now we can create pairs of any type using the following syntax. We can even skip the placeholder types; the compiler is smart enough to figure out the types based on the arguments. Type inference works by examining the provided values while compiling the code. In Swift, we can define our own generic classes, structures, or enumerations just as easily. All right, let's switch to Xcode. I'm going to make the benefits of generic types even more obvious. So, this was our first naive attempt to create the pair types. A lot of redundant code, right? Now let's declare the generic version. With the Pair structure in place, we can delete all the specific pair structures. Now that they are gone, we get a compiler error. We can easily fix that: I'm going to replace all the references to the old types with the generic counterpart. Generic types have thus stopped the type explosion. Our code not only became shorter, but it's also reusable. Next, we're going to talk about generic functions.
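A sketch of the generic Pair described above (the property names first and second are assumptions; the lecture doesn't show the exact declaration):

```swift
// One generic Pair replaces StringPair, IntPair, FloatPair, DataPair, and friends.
// T1 and T2 are placeholder types resolved at the call site.
struct Pair<T1, T2> {
    let first: T1
    let second: T2
}

// Explicit placeholder types
let stringPair = Pair<String, String>(first: "one", second: "two")

// Type inference figures out Pair<Int, Double> from the argument values
let mixedPair = Pair(first: 42, second: 3.14)

print(stringPair.first)   // one
print(mixedPair.second)   // 3.14
```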
26. Generic Functions: Generic functions and methods are another powerful feature of the Swift language. A generic function or method can work with any type; thus, we can avoid duplication and write cleaner code. Let's start with a programming challenge: we need to implement a method which tells whether two values are equal. First, we define the isEqual function for strings. Pretty straightforward. Then we need the same feature for Double types. Not a big deal. What about dates? Luckily, that won't take too long to implement either. Then we also need to tell whether two Data instances are equal. Done. By now, you probably see where this goes, and if you watched the lesson about type explosion, you know what I'm about to say. This is not the way to go. Implementing a new function for every new type leads to a lot of redundant code. Such a code base is hard to maintain and reuse. We should always avoid code repetition, and generics help us solve this problem, too. All right, let's create the generic isEqual function. The syntax for writing a generic function is similar to what we used to declare generic types. We specify the placeholder type between angle brackets, after the function or method name. We can refer to this placeholder type in the argument list or anywhere in the function's body. If we compile our code as it is now, we get an error. This is because the compiler doesn't know how to compare two instances of the placeholder type. We must ensure that the type implements the equal-to operator. Types that adopt the Equatable protocol must implement the equal-to operator, so we must enforce a type constraint. Type constraints specify that the type must conform to a particular protocol or inherit from a specific class. Let's add the Equatable type constraint. This limits the types that can be used as arguments in our function. We got rid of the compiler error.
Now the generic isEqual function can only accept instances of types that adopt the Equatable protocol. Contact is a struct that doesn't conform to the Equatable protocol; our code won't compile if we try to use the isEqual function with two Contact instances. So let's add the Equatable protocol conformance. To adopt the Equatable protocol, we have to implement the equal-to operator as a static member. The implementation is simple: we check whether the properties of the two arguments match. The Equatable protocol also defines the not-equal operator. We don't need to implement it, though; the standard library provides a default implementation, which calls the custom equal-to operator and negates the result. We can apply type constraints also to generic types. You may remember the generic Pair struct that we implemented in the previous lesson. Let's say that we want to limit the types to those that implement the Comparable protocol. Here's what we should write. With this type constraint, we can only create Pair instances out of types that adopt the Comparable protocol. Generics are super useful, and Swift makes it easy for us to implement and use generic types and functions. In the following lessons, we're going to cover the built-in Swift collection types.
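The generic isEqual function and the Equatable conformance described above can be sketched like this (the Contact property names are assumptions):

```swift
// A generic function with an Equatable type constraint:
// only types that implement == can be passed in.
func isEqual<T: Equatable>(_ a: T, _ b: T) -> Bool {
    return a == b
}

struct Contact {
    let name: String
    let phone: String
}

// Contact opts into Equatable by implementing the equal-to operator
// as a static member; != then comes for free from the standard library.
extension Contact: Equatable {
    static func == (lhs: Contact, rhs: Contact) -> Bool {
        return lhs.name == rhs.name && lhs.phone == rhs.phone
    }
}

let a = Contact(name: "Ann", phone: "555-1234")
let b = Contact(name: "Ann", phone: "555-1234")
print(isEqual(a, b))   // true
print(a != b)          // false — default != negates our custom ==
```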
27. Section 6: Swift Collection Types: Welcome to another section of the Swift algorithms and data structures course. In this section, we're going to take a closer look at the built-in Swift collection types. Swift provides three primary collections. We'll talk about the array, which is an ordered sequence of items; the set, which is an unordered collection of unique values; and the dictionary, which lets us store and access key-value pairs. In Swift, the array, the set, and the dictionary are implemented as generic types. As we've seen in the section about generics, this provides great flexibility, as we can use any type with the generic collections. They can store instances of classes, structures, and enumerations, or any built-in type like Int, Float, Double, and so on. All right, let's start with the array.
28. The Array: Arrays store values of the same type in a specific order. The values must not be unique: each value can appear multiple times. In other words, we could define the array as an ordered sequence of non-unique elements. We create an array using the following syntax. Since it's a generic type, we must provide the type that our array can store. This is an array of Int values, which means that we can use our array only with Ints. There is a shorter way to define an array, by placing the type between square brackets. This is the shorthand form, and it is the preferred way of defining an array. Actually, there is an even shorter way to create an array of Ints: for certain types, we can also rely on Swift's type inference to work out the type of the array. Now let's talk a bit about type inference. Swift can infer the type based on the values we provide, which can save us a lot of typing. However, type inference has its limitations. For example, if we define our array like this, the type inference engine will assume that it's an array of Doubles. To define an array of Floats, we must explicitly provide the Float type. Each item in an array has an index associated with it. The indices start at zero, and their value gets incremented for each subsequent item. So for our original example, the indices look like this. We can iterate through the array and print the indices using the array's firstIndex(of:) instance method. All right, let's run this in Xcode. Here's the console log. Note that firstIndex(of:) returns the first index where the specified value appears in the array; that's why we get the same index for the duplicate values. Another way is to use the array's forEach method. This method executes its closure on each element, in the same order as a for-in loop, and the result is the same as with the for-in example.
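A sketch of the declaration styles and lookups described above (the concrete literal values are assumptions; in Swift 5 the lookup method is named firstIndex(of:)):

```swift
// Several equivalent ways to declare the same array
let explicit: Array<Int> = [1, 1, 2, 2, 3]
let shorthand: [Int] = [1, 1, 2, 2, 3]
let inferred = [1, 1, 2, 2, 3]      // type inference picks [Int]

// Without the annotation, a floating-point literal array would be [Double]
let floats: [Float] = [1.5, 2.5]

// firstIndex(of:) returns the FIRST index of a value,
// so duplicate values report the same index.
print(inferred.firstIndex(of: 1)!)  // 0
print(inferred.firstIndex(of: 2)!)  // 2

// forEach visits the elements in the same order as a for-in loop
inferred.forEach { value in
    print(value)
}
print(explicit == shorthand)        // true
print(floats.count)                 // 2
```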
29. Accessing the Array: We can access the elements of an array by index. The array has two convenience accessors for the first and the last element. Note that first and last return optional values: if the array is empty, their value will be nil. We don't have this safety net when accessing the array by index. A runtime crash occurs if we try to retrieve a value using an invalid index. We can prevent crashes by checking whether the index is within bounds. If an index is equal to or bigger than the size of the array, it points beyond the last item; also, the index must not be a negative number. We can come up with a shorter form: the array has a property called indices, which holds the valid indices for the given array. If it contains the given index, then the index is valid. To push it even further, let's create a custom safe index operator. We implement the operator in an Array extension. Extensions and custom operators are really powerful: they let us add operators to existing types without modifying their code.
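A sketch of the safe-index idea described above, built as a labeled subscript in an Array extension (the `safe` label is an assumption; the lecture doesn't show the exact spelling):

```swift
// A bounds-checked subscript that returns nil instead of crashing.
extension Array {
    subscript(safe index: Int) -> Element? {
        // indices contains every valid index, so this check covers
        // both negative and out-of-bounds values
        return indices.contains(index) ? self[index] : nil
    }
}

let numbers = [10, 20, 30]
print(numbers[safe: 1] ?? "n/a")   // 20
print(numbers[safe: 5] ?? "n/a")   // n/a — no crash
print(numbers[safe: -1] ?? "n/a")  // n/a
print(numbers.first ?? "n/a")      // 10 — first/last are already optional
```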
30. Modifying the Array: So far, we've seen how to access the elements of an array. In this video, we'll look at how to modify the array. To change the contents of an array after its creation, we must assign it to a variable rather than a constant. This is how we can create a mutable array instance. The array exposes various instance methods for adding new values. We can use append to add a new element to the end of an array. To insert an element at a given position, use the insert(at:) instance method. The new element is inserted before the element at the given index, and all the elements after the given index are shifted one position to the right. If you pass in the last index, the new element is appended to the array. When using insert(at:), make sure that the index is valid; otherwise, you'll end up with a runtime error. Now, to remove an element from an array, we can use the remove(at:) instance method. After the element is removed, the gap is closed. Note that the index we pass to the remove(at:) method must be valid, or we'll cause a runtime error. Call removeAll if you want to get rid of all the elements in the array. The method has a keepingCapacity parameter, which is set to false by default. You may want to keep the capacity if you plan to reuse the array soon: the array won't need to reallocate memory, which can be a nice performance optimization. Now, although the capacity is kept, we can't reuse the old indices. Accessing the array via its obsolete indices causes an instant crash. As a rule of thumb, always check whether the index is out of bounds before accessing it. The array has further methods: removeFirst removes and returns the first element, whereas removeLast removes and returns the last element. Do not call removeFirst or removeLast on an empty array, as it would cause, as you may already know, a runtime error. We talked about some of the most frequently used array methods.
I suggest you download the sample projects and start tinkering with arrays. To recap: arrays store values of the same type in an ordered sequence. Choose the array if the order of the elements is important and if the same value may appear multiple times. If the order is not important, or the values must be unique, you should rather use a set. We'll talk about sets in the upcoming lessons.
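The mutating methods from this lecture can be sketched in one place (the concrete values are assumptions):

```swift
// A var, not a let — constants can't be mutated
var numbers = [1, 2, 3]
numbers.append(4)                 // [1, 2, 3, 4]
numbers.insert(0, at: 0)          // [0, 1, 2, 3, 4] — inserted before index 0
numbers.remove(at: 2)             // removes the 2 -> [0, 1, 3, 4]

let firstValue = numbers.removeFirst()  // 0
let lastValue = numbers.removeLast()    // 4
print(numbers)                    // [1, 3]

// Keeping the capacity avoids reallocation if the array is reused soon,
// but the old indices become invalid either way.
numbers.removeAll(keepingCapacity: true)
print(numbers.isEmpty)            // true
print(firstValue, lastValue)      // 0 4
```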
31. The Set: We've seen that the array can store elements in a given order; we can even have duplicates in an array. What if you need a collection that guarantees the uniqueness of its elements? The set is the answer. Sets store unique values with no ordering, and a given value can only appear once. Besides, the set exposes useful methods for mathematical set operations like union and subtraction. We can declare a set using the following syntax. Note that you can't use the shorthand form as we did for arrays: if we defined it like that, Swift's type inference engine couldn't figure out whether we wanted to instantiate a set or rather an array. We must specify that we need a Set. However, type inference will still work for the values used in the set: if we initialize the set with an array of integer literals, the type inference engine will infer the type Int, and if we use floating-point literals, the inferred type for the values will be Double. Now let's clarify the main differences between sets and arrays. By looking at these differences, we'll be able to choose the right one. The first fundamental difference is that the set doesn't allow duplicates. Let's take a look at an example. If we declare an array like this, it's going to contain four Ints with the value 1, whereas the set declared with the same literals will only have one value; the redundant values are skipped. The other big difference is that the set doesn't provide a defined ordering for its elements. For example, the following array will print its contents in the given order; however, for a set defined with the same values, the order is undefined. We can iterate over the values in a set using a for-in loop. Remember, the set doesn't define a specific ordering of its elements; thus, don't assume anything about the order in which the elements are returned. If you need to iterate in a specific order, call the sorted method. sorted returns the elements of the set as an array, sorted using the less-than operator.
This is the output. We can also use the forEach collection method with sets. This method executes its closure on each element in the set.
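The differences described above can be sketched like this (the literal values are assumptions):

```swift
// Duplicates: the array keeps all four values, the set keeps one
let array = [1, 1, 1, 1]
let uniqueValues: Set<Int> = [1, 1, 1, 1]
print(array.count)         // 4
print(uniqueValues.count)  // 1 — the redundant values are skipped

// Ordering: a set has no defined element order,
// so use sorted() when a deterministic order is needed.
let values: Set = [5, 3, 1, 4, 2]   // element type Int is inferred
for value in values.sorted() {      // [1, 2, 3, 4, 5]
    print(value)
}

// forEach works on sets, too — but again, in no defined order
values.forEach { _ = $0 }
```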
32. Accessing and Modifying the Set: Now that we know how to create a set, let's talk about accessing and modifying its contents. Unlike the array, the set doesn't have indices. We can use the contains instance method to check whether a value exists in the set. contains returns a Boolean value, so that we can use it in conditional logic, just like in this example. If the element cannot be found, or if the set is empty, contains returns false. Speaking of empty sets, we can check whether a set has elements via the isEmpty property. Alternatively, we can use the set's count property. As you might have guessed, count returns the number of elements in the set. Note that the set's count and capacity properties can return different values: count returns the number of existing elements in the set, whereas capacity shows how many items it can store without allocating new storage. We'll talk about this in a moment. Use the insert instance method to add new elements to the set. To remove elements from a set, we can call the remove method. The call does nothing if the element is not in the set. The remove method returns the element that was removed from the set; we can use this feature to check whether the value was indeed deleted. To remove all the elements in a set, call removeAll. The set's removeAll method has a keepingCapacity parameter, which is set to false by default. It has the same effect as for arrays: by setting the value of keepingCapacity to true, we tell the compiler to keep the set's memory after removing its elements, so the set won't need to reallocate memory upon its next usage.
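A sketch of the accessors and mutators described above (the example values are assumptions):

```swift
var cities = Set<String>()
print(cities.isEmpty)             // true

cities.insert("Vienna")
cities.insert("Budapest")
print(cities.contains("Vienna"))  // true
print(cities.count)               // 2

// remove returns the removed element, or nil if it wasn't in the set —
// handy for checking whether anything was actually deleted.
if let removed = cities.remove("Vienna") {
    print("Removed \(removed)")   // Removed Vienna
}
print(cities.remove("Oslo") as Any)  // nil — nothing to remove

// Keep the storage if the set will be refilled soon
cities.removeAll(keepingCapacity: true)
print(cities.count)               // 0
```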
33. Set Operations: The set exposes useful methods that let us perform fundamental set operations. union creates a new set with all of the elements of the two sets. If the two sets have elements in common, only one instance will appear in the resulting set. Note that 3, 5, and 7 can be found in both sets, but the resulting set will not contain duplicates. The result of calling the intersection instance method is a set which holds the elements that appear in both sets. We can also subtract one set from another: the result will contain those values which are only in the source set and not in the subtracted set. The symmetricDifference method returns a set with the elements that are in either set, but not in both. The set exposes many other useful methods, like the ones which let us test for equality and membership. I suggest you download the sample projects and start experimenting with sets.
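The four operations above can be sketched with two overlapping sets that share the values 3, 5, and 7, as in the lecture (the exact sets are assumptions):

```swift
let primes: Set = [2, 3, 5, 7]
let odds: Set = [1, 3, 5, 7, 9]

let combined = primes.union(odds)                  // shared values appear once
let common = primes.intersection(odds)             // elements in both sets
let primesOnly = primes.subtracting(odds)          // in primes but not in odds
let notShared = primes.symmetricDifference(odds)   // in either set, not in both

print(combined.sorted())    // [1, 2, 3, 5, 7, 9]
print(common.sorted())      // [3, 5, 7]
print(primesOnly.sorted())  // [2]
print(notShared.sorted())   // [1, 2, 9]
```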
34. The Hashable Protocol: Hi there. We're going to talk about the Hashable protocol. First, I'm going to create a very simple structure. This structure has only one property, of type String; let's call it identifier. With our struct in place, we could create, for example, an array. I'm going to use the shorthand form for declaring the array of SimpleStruct values, and let's add a SimpleStruct with the identifier "ID". This code compiles just fine; we have no issues. Now let's try to do the same, but this time I'm going to declare a set. Unfortunately, there is no way to use a shorthand form for a set, because the compiler has no way to distinguish it from an array. Basically, I have to tell the compiler that it's a set of the given type. Here we go. Now, if I try to compile my code, I get an error. The compiler says that our type doesn't conform to the Hashable protocol. All right, what does that mean? The set is a special kind of Swift collection type. Unlike arrays, a set cannot have duplicates. In order to check for duplicates, the set must be of a type that has a way to tell whether an instance is unique. Swift uses a hash value for this purpose. The hash value is a value of type Int, which must be equal if two values are the same. In Swift, all basic built-in types are hashable, so we can use a String, a Bool, an Int, or a Double in a set. Here we go. So if I declare my set just like this, Set<Double>, the compiler won't complain anymore, since Double provides a hash value. Now, in order to make our simple structure work with Set, we must make it conform to the Hashable protocol. Luckily, this is not a big deal: I just add the Hashable conformance. If you check the Hashable protocol, it has a single property requirement: a read-only, gettable property called hashValue. This hash value must be unique. Since our simple structure has only one property of type String, and String is a built-in type which already implements the Hashable protocol,
Implementing the hashValue property is therefore quite straightforward. All right, let's try to compile our code. We still have a problem. Let's double-check what's going on here. All right, our SimpleStruct type must also conform to the Equatable protocol. Why is that? Because Hashable inherits from the Equatable protocol, and if a protocol inherits from another one, all conforming types must also implement the requirements defined in that protocol. So, going back: conforming to the Equatable protocol is also not a big deal. We have to implement the equal-to operator (==), which is a static method that tells whether two instances of the given type are equal or not. We consider two SimpleStruct instances to be equal if their identifiers are equal. Now our code should compile without errors. Indeed, the issue is gone. To summarize: there will be cases when we must adopt the Hashable protocol. If we need to ensure that a given value is unique, for example when creating a set out of our types or using them as keys for a dictionary, then we must adopt the Hashable protocol. To conform to the Hashable protocol, our type must implement a read-only property called hashValue. Also, because Hashable conforms to the Equatable protocol, we must implement the equal-to operator.
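Putting the lecture together, a minimal sketch might look like the following. Note one deviation from the narration above: Swift 5 prefers the `hash(into:)` requirement over implementing `hashValue` directly (and can in fact synthesize both members automatically for simple structs like this), so the sketch uses `hash(into:)`:

```swift
struct SimpleStruct: Hashable {
    let identifier: String

    // The lecture describes a read-only hashValue property; in Swift 5
    // the preferred requirement is hash(into:), which serves the same
    // purpose: feed the only stored property into the hasher.
    func hash(into hasher: inout Hasher) {
        hasher.combine(identifier)
    }

    // Equatable (Hashable inherits from it): two instances are equal
    // if their identifiers are equal.
    static func == (lhs: SimpleStruct, rhs: SimpleStruct) -> Bool {
        return lhs.identifier == rhs.identifier
    }
}

var identifiers = Set<SimpleStruct>()
identifiers.insert(SimpleStruct(identifier: "id"))
identifiers.insert(SimpleStruct(identifier: "id"))  // duplicate, ignored
print(identifiers.count)  // 1
```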
35. The Dictionary: The dictionary, also known as hash map, stores key-value pairs. Use this collection type if you need to look up values based on their identifiers. Each value must be associated with a key that is unique. The order of the keys and values is undefined. Just like the other Swift collection types, the dictionary is also implemented as a generic type. To create a dictionary, we must specify the key and the value type. This is how we can create an empty dictionary. Another way is to use the initializer syntax. We can also initialize a dictionary with a dictionary literal. Swift can infer the type of the keys and the values based on the literal. When creating a dictionary, the type of the keys and values is supposed to be consistent. For example, all keys are of an integer type and all the values are of type String. Type inference won't work if the type of the dictionary literal is mixed. The compiler asks us to explicitly add type annotations if we really want to declare a heterogeneous dictionary, and in certain cases you may want to do that. For instance, when converting JSON payloads or property lists, a typed dictionary won't work. To create a heterogeneous dictionary, the keys must conform to the Hashable protocol. As we've seen in the previous lecture, the Hashable protocol defines the property requirement called hashValue. The hashValue property must return a unique value for a given instance; no two different instances shall have the same hash value. The Swift standard library has a type called AnyHashable. AnyHashable can hold a value of any type conforming to the Hashable protocol. This type can be used as a supertype for keys in heterogeneous dictionaries, so we can create a heterogeneous collection like this. AnyHashable is a structure which lets us create a type-erased hashable value that wraps the given instance. This is what happens behind the scenes.
Actually, this explicit version would compile just fine if we typed it in Xcode, but the shorthand syntax is obviously shorter and more readable. AnyHashable's base property represents the wrapped value. It can be cast back to the original type using the as, conditional as?, or forced as! cast operators.
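The creation styles discussed above can be sketched like this; the keys and values are illustrative, not taken from the lecture's project:

```swift
// Two equivalent ways to create an empty dictionary
var scores: [String: Int] = [:]
var ages = [String: Int]()

// A dictionary literal; Swift infers [Int: String]
let numberNames = [0: "zero", 1: "one", 2: "two"]

// A heterogeneous dictionary: keys are type-erased via AnyHashable
let mixed: [AnyHashable: Any] = [0: "zero", "pi": 3.14, true: [1, 2, 3]]

// Behind the scenes, each key is wrapped in an AnyHashable instance
let explicit: [AnyHashable: Any] = [AnyHashable(0): "zero",
                                    AnyHashable("pi"): 3.14]

// The base property exposes the wrapped value; cast it back with as?
for key in mixed.keys {
    if let intKey = key.base as? Int {
        print("integer key:", intKey)
    }
}
```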
36. Accessing and Modifying the Dictionary: We can access the value associated with a given key using the subscript syntax. We can also iterate over the key-value pairs of a dictionary using a for-in loop. Since it's a dictionary, items are returned as key-value tuples. We can access the dictionary's keys property to retrieve its keys. The dictionary's values property will return its values. To add a new item to the dictionary, use a new key as the subscript index and assign it a new value. To update an existing item, we can use the subscript syntax with an existing key. Alternatively, we can call the updateValue(_:forKey:) method. This method updates the value in the dictionary for the given key. If the key doesn't exist, it adds a new key-value pair to the dictionary. You can remove a value from a dictionary by assigning nil for its key. We can achieve the same result, yet with more typing, via the removeValue(forKey:) method. To wipe out the dictionary, call removeAll. removeAll has a keepingCapacity parameter, which is set to false by default. If you pass true, the operation preserves the capacity of the underlying buffer. This is a performance optimization, useful if you plan to reuse the dictionary.
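Here is a compact sketch of all the access and mutation techniques just described; the sample data is made up for illustration:

```swift
var capitals = ["France": "Paris", "Japan": "Tokyo"]

// Subscript access; the result is an optional
let french = capitals["France"]           // Optional("Paris")

// Iterate over key-value tuples
for (country, capital) in capitals {
    print(country, capital)
}

// Add and update via the subscript
capitals["Italy"] = "Rome"                // insert a new pair
capitals["Japan"] = "Kyoto"               // update an existing key

// updateValue returns the old value, or nil if the key was new
let previous = capitals.updateValue("Tokyo", forKey: "Japan")

// Removal: assign nil, or call removeValue(forKey:)
capitals["Italy"] = nil
capitals.removeValue(forKey: "France")

// Wipe everything, optionally keeping the allocated capacity
capitals.removeAll(keepingCapacity: true)
```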
37. Section 7: Basic Sorting: In this section, we are going to talk about basic sorting algorithms. These algorithms are an essential starting point for solving problems in an elegant and efficient manner. Understanding the inner workings and knowing how to implement the basic sorting algorithms gives you a strong foundation for building other, more sophisticated algorithms. For each algorithm, I'm going to explain how it works, and I'll show you how to implement it from scratch using the latest Swift version. We're going to use an excellent website, sorting.at, to visualize how each algorithm works. All right, so what is sorting? First of all, sorting is a technique for arranging data in a logical sequence, according to some well-defined rules. Working with sorted data is much more efficient than accessing elements in a non-ordered sequence. Sorting has a key role in commercial data processing and scientific computing. We're going to start our journey into the area of sorting algorithms by studying three elementary sorting methods. Selection sort works by repeatedly selecting the smallest remaining item and exchanging it with the first unsorted one. This algorithm has a quadratic time complexity, and it is one of the simplest sorting algorithms. Insertion sort, as its name states, works by inserting elements into their proper place; to make space for the current item, larger elements must move one position to the right. Insertion sort also has quadratic worst-case time complexity. However, the performance of the insertion sort is largely affected by the initial order of the elements in the sequence. Bubble sort works by repeatedly evaluating adjacent items and swapping their positions if they are in the wrong order. The bubble sort is easy to implement, but it's also quite inefficient. Its average and worst-case complexity are both quadratic.
38. Selection Sort: Selection sort is one of the simplest sorting algorithms. It starts by finding the smallest item and exchanging it with the first one. Then it finds the next smallest item and exchanges it with the second item. The process goes on until the entire sequence is sorted. In the following demo, we're going to implement the selection sort algorithm in Swift. We'll analyze the time complexity of this method, and we'll visualize how it works. It's time to do some coding, so let's switch to Xcode. The selectionSort function takes an array of integers as input and returns the sorted copy of the input array. Sorting makes sense only if the array has at least two elements. Otherwise, we just return a copy of the original array. Next, we make a copy of the array. We're going to sort the contents of this copy and return it from our function. The function has two loops. The outer loop iterates through each element, and it maintains the index which delimits the sorted portion of the array. The inner loop finds the lowest number in the rest of the array. For each element, the function swaps positions with the lowest value from the remainder of the array. Given the unsorted array [1, 2, 4, 3, 0], the outer loop picks the first element, which is 1. Then the inner loop finds the lowest number in the rest of the array. Since 0 is smaller than 1, the positions of the two elements get swapped. Now we have the first element in the sorted portion of the array, which becomes [0, 2, 4, 3, 1]. The outer loop picks the next element, 2. The inner loop finds the lowest number starting from the next position. 1 is smaller than 2, so their positions get swapped. Our array now has two sorted elements. 4 is picked by the outer loop. Next, the inner loop finds the lowest number, which is 2. Since 2 is smaller than 4, they are swapped, and we get [0, 1, 2, 3, 4]. The outer loop picks the next element, which is 3. The rest of the array consists of
only one element. 4 is not smaller than 3, so their positions are not swapped. Finally, we get our sorted array: [0, 1, 2, 3, 4]. Now let's analyze the time complexity of the selection sort algorithm. The algorithm sorts the array from left to right. The running time is affected by the number of swaps and compares. The number of exchanges depends on the original order of the elements. The worst case is when the array is reverse sorted; the number of swaps will then be equal to the number of elements in the array. However, the time complexity of the selection sort is dominated by the number of compares. For an array with n elements, the outer loop iterates n - 1 times. The inner loop iterates n - 2 times first, then n - 3 times, and so on. When the outer loop reaches the penultimate index, the inner loop only iterates once. This means that we have (n - 1) + (n - 2) + (n - 3) + ... + 1 iterations in total. If we add the number of worst-case swaps, the time complexity becomes n + (n - 1) + (n - 2) + (n - 3) + ... + 1, which gives n(n + 1)/2, that is, (n² + n)/2. If you are wondering how we got this formula, I suggest you watch the lecture about constant time complexity. If the input is already sorted, the number of swaps will be zero, so the complexity becomes n(n - 1)/2, or in shorter form, (n² - n)/2. Now let's run some performance tests. We create arrays with 10, 100, and 1,000 elements, respectively. The results displayed in the console confirm the quadratic time complexity of the selection sort algorithm. Next, we inspect the performance of the selection sort with ordered input. We need to enhance our Array extension with a static method for creating incremental arrays. As we assumed, the performance is slightly better with sorted input. How about the worst case?
First, we need arrays sorted in reverse order. We'll add another method to our extension which generates reverse-sorted arrays. All right, let's run our performance tests. As expected, the running times of the selection sort are the worst when using reverse-sorted arrays. A big disadvantage of the selection sort is that it's insensitive to input. It takes almost as long to run selection sort on an array that is already in order as it does for a randomly ordered array.
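The implementation walked through above can be sketched like this; it is a plain, unoptimized version matching the description, not necessarily identical to the playground's code:

```swift
func selectionSort(_ input: [Int]) -> [Int] {
    // Sorting only makes sense with at least two elements
    guard input.count > 1 else { return input }
    var result = input
    for index in 0..<(result.count - 1) {
        // The inner loop finds the lowest value in the unsorted remainder
        var lowest = index
        for other in (index + 1)..<result.count where result[other] < result[lowest] {
            lowest = other
        }
        // Swap it into the sorted portion
        if lowest != index {
            result.swapAt(index, lowest)
        }
    }
    return result
}

print(selectionSort([1, 2, 4, 3, 0]))  // [0, 1, 2, 3, 4]
```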
39. Selection Sort - Summary: The selection sort is definitely easy to grasp and implement, as it relies on nested loops. Its time complexity is quadratic. The running time of the selection sort is mostly affected by the number of compares performed by the algorithm. The performance of the selection sort is insensitive to input; the running time won't vary too much whether you sort an already sorted sequence or a totally shuffled one. As a conclusion: understanding the selection sort and knowing how to implement it is important. However, try to avoid it in real code. In the next lecture, I'm going to show you another simple algorithm which is much quicker when the input is already partially sorted.
40. Insertion Sort: In this lecture, we are going to talk about the insertion sort. Insertion sort is a basic sorting algorithm which works by analyzing each element and inserting it into its proper place. Larger elements move one position to the right. Insertion sort has quadratic time complexity. However, the performance of the insertion sort is largely affected by the initial order of the elements in the sequence. In the following demo, we are going to implement the insertion sort algorithm in Swift. We'll visualize how insertion sort works; then we're going to analyze the time complexity of this algorithm. We'll wrap up with an interesting experiment: we'll compare the efficiency of the insertion sort with the selection sort algorithm that was presented in the previous episode. Now let's switch to Xcode. The insertionSort function takes an array of integers as input and returns a sorted copy of the input array. We clone the input array; this copy will hold the sorted result. The insertionSort function uses two loops. The outer loop progresses as we process the array. We start at index 1 because the inner loop has to decrement the index as it traverses the subarray backwards. Note that we don't need to validate the input size as we did in the implementation of the selectionSort function: the outer loop counter starts at 1 and ends at the input size, therefore empty or single-element arrays won't be processed anyway. For a sorted array, the number at the current index must be bigger than all previous ones. The inner loop steps backwards through the sorted subarray and swaps the values if it finds a previous number that is larger than the one at the current index. Given the unsorted array [1, 2, 4, 3, 0], the outer loop picks the element at index 1, which is 2. The outer loop starts at index 1 because the inner loop must traverse the array backwards. The outer index also marks the sorted portion of the array, which at this point consists of only one element.
The inner loop compares the number at the current index with the elements in the sorted portion of the array. That is, it compares 2 with 1. Since 2 is bigger than 1, the numbers are not swapped. The sorted portion now includes the numbers 1 and 2, and the unsorted part has the numbers 4, 3, 0. The outer loop picks the element at index 2, which is 4. The inner loop compares 4 with the elements in the sorted part. 4 is not smaller than 2; therefore, no swap occurs. The outer index gets incremented. The next element is the number 3. The inner loop performs the comparisons with the sorted portion. 3 is smaller than 4, so they are swapped. 3 is not smaller than 2, so there are no further swaps. We now have 1, 2, 3, 4 in the sorted part. The outer index picks the last item from the unsorted portion of the array. Since 0 is smaller than all the numbers in the sorted portion, it gets compared and swapped with each item from the sorted subarray. The unsorted part is now empty, so we have all the elements in order. The insertion sort uses two nested loops; therefore, its worst-case time complexity is quadratic. However, it is much more performant than the selection sort if the input array includes already sorted sequences. We'll see this in a moment. Unlike the selection sort, which is insensitive to input, the running times of the insertion sort algorithm can vary considerably depending on the order of the input array. The best case is when the array is already sorted. During each iteration, the next element from the unsorted portion is only compared with the rightmost element of the sorted subsection of the array. No swaps are made, since the elements are already in place. For the already ordered array with n elements, the algorithm will execute n - 1 compares and zero exchanges. In other words, the best-case running time of the insertion sort is linear.
The worst case is when the input is in reverse order. In this case, every iteration of the inner loop will compare the current element with all the elements in the sorted part. The number of swaps will be equal to the number of items in the sorted subsection. The worst-case complexity of the insertion sort is n² - n. Using the Big-O notation, we discard the lower-order term, which gives a quadratic running time. For the average case, when each element is halfway in order, the number of swaps and compares is half that of the worst case. This gives us (n² - n)/2, which is also a quadratic time complexity. To summarize: the insertion sort performs in linear time for already or almost sorted arrays; when the input is shuffled or in reverse order, the insertion sort will run in quadratic time. Next, we are going to measure the performance of the insertion sort and check how it varies based on the input size. We'll generate three arrays: one with 10 elements, one with 100 elements, and one with 1,000 elements, respectively. Based on these running times, we can tell that the insertion sort algorithm runs in quadratic time. Now let's rerun our tests with ordered arrays. The running times don't change as abruptly with the input size as they did with shuffled input. The insertion sort performs in linear time, as expected. Finally, we compare the performance of the selection sort and the insertion sort algorithms. First, I'm going to use shuffled arrays. As the input size increases, the insertion sort performs slower. The two algorithms are supposed to run in quadratic time, but the selection sort runs three times quicker for 100 random numbers and about 20 times faster for 1,000 elements. The insertion sort usually makes fewer comparisons than the selection sort; however, the selection sort requires fewer swaps. As we've seen in the lecture about the selection sort, the maximum number of swaps is equal to the input size.
In other words, it grows linearly with the input. In the worst case, insertion sort will perform on the order of n² swaps. Since writing to memory is usually significantly slower than reading, the selection sort may perform better than the insertion sort for larger input. The situation changes drastically if we run our tests with sorted input. The best-case linear nature of the insertion sort shows its benefits over the selection sort algorithm, which is insensitive to input. Because the arrays are already sorted, the insertion sort runs its best-case scenario. There won't be any swaps, and the number of compares is n - 1. The insertion sort algorithm runs in linear time, whereas the selection sort algorithm runs in quadratic time. In our example, this translates to almost 10 times lower performance for the selection sort. To sum up the running times of the insertion sort: the best case is linear for almost sorted input, the average case is quadratic for shuffled input, and the worst case is also quadratic for reverse-sorted input.
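The steps described in this lecture can be sketched as follows. One small deviation from the narration: a guard is added so that an empty array does not produce an invalid `1..<0` range in Swift.

```swift
func insertionSort(_ input: [Int]) -> [Int] {
    // Guard against the empty array (1..<0 would trap in Swift)
    guard input.count > 1 else { return input }
    var result = input
    // Index 0 is the initial sorted portion; the outer loop starts at 1
    for index in 1..<result.count {
        var current = index
        // Walk backwards through the sorted portion, swapping while
        // the previous element is larger than the current one
        while current > 0 && result[current] < result[current - 1] {
            result.swapAt(current, current - 1)
            current -= 1
        }
    }
    return result
}

print(insertionSort([1, 2, 4, 3, 0]))  // [0, 1, 2, 3, 4]
```

For already sorted input, the inner while loop exits immediately on every iteration, which is exactly the linear best case discussed above.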
41. Insertion Sort - Summary: In the previous lecture, we took a closer look at the insertion sort algorithm. It is easy to understand and implement, and its worst-case time complexity is quadratic. However, its running time decreases if the input contains already sorted elements. In the best case, when the input is already ordered, the insertion sort performs only n - 1 compares, where n is the input size. Unlike the selection sort's running time, which, as we've seen, is insensitive to input, the insertion sort performs better when the input is partially or totally ordered. As a conclusion: in spite of its simplicity, the insertion sort may perform surprisingly well with partially sorted sequences. As a matter of fact, even Apple relies on insertion sort in their sort implementation if the range to be sorted contains fewer than 20 elements. Swift is open source, so you can check the implementation of the sort function in the Swift GitHub repository.
42. Bubble Sort: Our next topic is bubble sort. This algorithm works by repeatedly evaluating adjacent items and swapping their positions if they are in the wrong order. In the following demo, we are going to implement the bubble sort algorithm. As with the other algorithms, we are going to analyze the time complexity of the bubble sort and visualize how it works. Then we're going to compare the bubble sort, the insertion sort, and the selection sort algorithms in terms of efficiency. All right, let's switch to our Xcode playground project. The bubbleSort function takes an array of integers as input and returns a sorted copy of the input array. The function repeatedly iterates through the array and compares every adjacent pair. If the numbers are not in the right order, their positions get swapped. The process continues, in passes, until the entire sequence is sorted. The lower values bubble to the beginning of the array. The swapped variable is used to track whether any swaps were made during a pass. If no swap occurred during the pass, it means that the values are in order, and we exit the outer while loop. Let's see the bubble sort in action. We start with the unsorted array [1, 4, 2, 3, 0]. In the first pass, the first two elements are checked. Since 1 is smaller than 4, their order is kept. The next pair is 4 and 2; because 4 is bigger than 2, their positions get swapped. Then we compare 4 and 3. They're not in order, so they are swapped. The last pair in this pass is 4 and 0. Since 0 is smaller than 4, their positions are exchanged. We continue with the second pass. The first three elements are already in order; therefore, the algorithm does not swap them. The first pair which requires swapping is found at indices 2 and 3. The last pair is already in order, so we keep them at their current positions. The third pass bubbles the value 0 until it almost reaches the beginning of the list; however, the array is not yet sorted.
So we need another pass. The fourth pass only exchanges the very first pair of numbers. 0 is finally at the beginning of the sequence. The rest of the array is already sorted, but the algorithm is not aware of this, so it executes another pass. The fifth and final pass doesn't find any pairs that need to be swapped. Now that we understand the inner workings of the bubble sort algorithm, let's analyze its time complexity. If the input is already sorted, the bubble sort needs only one pass, which executes n - 1 compares, with the elements staying in place, so the number of swaps will be zero. This means that the running time of the bubble sort only depends on the input size when the sequence is already sorted. In other words, the best-case time complexity of the bubble sort is linear. The worst case is when the array is reverse sorted. Given n items in the sequence, the algorithm will run n passes in total. During each pass, our function executes n - 1 comparisons. This means n(n - 1) compares for the reverse-ordered sequence. The number of swaps will be n - 1 in the first pass, n - 2 in the second pass, and so on, until the last exchange is made in the penultimate pass. The total number of swaps is (n - 1)n/2. The worst-case running time of our bubble sort implementation is the sum of the swaps and compares. This is clearly an order of n² time complexity. The average time complexity of the bubble sort is also quadratic. To summarize: the bubble sort has a linear running time for already ordered arrays, and runs in order of n² average and worst-case time complexity. These running times might look similar to the time complexities of the insertion sort algorithm. However, the bubble sort needs considerably more swaps than the insertion sort, and the higher number of swaps results in slower performance.
Therefore, the insertion sort performs considerably better than the bubble sort. Due to its poor performance, the bubble sort is almost never used in real software. However, because it's easy to grasp and implement, the bubble sort algorithm is often used in introductory computer science materials. Now let's execute some performance measurements. We compare the bubble sort with the insertion sort and the selection sort algorithms. For 10 items, there is no noticeable difference between the three algorithms. The bubble sort algorithm starts to show its weakness when sorting 100 random numbers, and its performance gets worse as the input size increases.
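The pass-based logic with the swapped flag, as described above, can be sketched like this (again, a plain version matching the narration rather than the playground's exact code):

```swift
func bubbleSort(_ input: [Int]) -> [Int] {
    // Guard against empty input so the range 1..<count stays valid
    guard input.count > 1 else { return input }
    var result = input
    var swapped = true
    // Each iteration of the while loop is one pass over the array
    while swapped {
        swapped = false
        for index in 1..<result.count where result[index - 1] > result[index] {
            result.swapAt(index - 1, index)
            swapped = true
        }
        // If a pass made no swaps, the sequence is sorted and we stop
    }
    return result
}

print(bubbleSort([1, 4, 2, 3, 0]))  // [0, 1, 2, 3, 4]
```

On already sorted input, the first pass sets no swaps and the loop exits after n - 1 compares, which is the linear best case discussed above.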
43. Bubble Sort - Summary: Bubble sort is the third sorting algorithm we talked about. While it is easy to implement, it has the worst performance among the basic sorting algorithms. The high number of swaps is to blame for the low performance of the bubble sort. Although we could slightly optimize our existing bubble sort implementation, the insertion sort is going to outperform even the optimized version for larger input. As a conclusion: you should avoid bubble sort in production code. Use insertion sort if you need a simple yet acceptable solution. In this section, we analyzed some of the basic sorting algorithms. Studying them is definitely worth the effort to deepen our knowledge of algorithms. However, we'll soon analyze sorting algorithms that can run way faster than any of the elementary algorithms we've studied so far. In the upcoming part, we're going to examine the more advanced merge sort and quicksort algorithms. See you in the next section.
44. Section 8: Advanced Sorting: In this section, we are going to study two widely used sorting algorithms: merge sort and quicksort. These algorithms are performant and can actually be used in production code; merge sort and quicksort are included in various libraries and frameworks. Merge sort works by splitting the sequence to be sorted into two halves. Then it sorts the halves, and finally combines the results. Additional sorting is performed while merging the subarrays. The quicksort algorithm uses a divide-and-conquer approach like the merge sort. The difference is that the resulting subarrays are already ordered, and unlike for the merge sort, further sorting is not required when merging the parts. All right, let's start with the merge sort.
45. Merge Sort: Okay, so let's talk about the merge sort. As you'll see in a second, this is an algorithm which works by splitting the array into halves. It continues splitting the halves until we get to single-element sublists. These one-element arrays are then sorted and merged. As a result, we'll have two-element sublists which are ordered. The sorting and merging of sublists continues until, finally, there is only one list. Guess what: this list will be the sorted result. How about implementing the merge sort algorithm from scratch? In the following demo, I'm going to show you a top-down merge sort implementation. But first, let's visualize how it works. I'm going to walk you through an example of ordering an array using merge sort. First, we split the array into two parts. The array has eight elements, so the split index is 4. We get two sublists, each with four elements. All right, let's continue with the splitting. After splitting the left half, we get two sublists, each with two elements. Next, we split these two. We cannot split the left side further, since now we have only one-element arrays. So now comes the sorting and merging phase. After two steps, the elements of the left half are ordered. We follow the same steps for the right half: one split, then two more splits on the two-element sublists, and we get the one-element arrays. Next, the single-element sublists are sorted and combined until the right half is also sorted. What's next? There is only one step left. During this last step, the two sorted halves are merged and sorted, and there you go: the result is the ordered array. Now that you know how it works, let's implement this amazing algorithm. So here's the mergeSort function. It has a straightforward signature. The input argument is the array of integers to be sorted. The function returns an array of ordered integers. Well, at least that's the goal. It doesn't make sense to sort an array which has one element.
The guard statement checks the size of the input array; if it has fewer than two elements, the function simply returns the input. Next, we find the split index. We need to split the array in the middle; therefore, the index is calculated as the input count over two. As you probably noticed, we call the mergeSort function from itself. This recursion is used to split the halves until one or both halves have only one element. Then the algorithm starts the sort and merge phase. The first helper function compares the items from the left and right sublists. The smaller value is appended to the sorted array. If the original array has an odd number of elements, we cannot split it into exactly two halves. That's why we need the last checks. This final step makes sure that the last element from either the left or the right subarray is added to the sorted list. Finally, let's test the freshly implemented mergeSort function. The merge sort algorithm is a bit more challenging than the basic ones we've covered so far. Recursion is not easy, either. I suggest you take a closer look at the playground project, run it a couple of times, and try to understand its logic. Also, revisit the part where I visualize how the merge sort works. Unfortunately, the playground doesn't come with a debugger, but don't forget that you can print out the steps and the values to the console. Feel free to insert print statements in the code wherever you need more clarity.
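A top-down sketch along the lines just described might look like this; the helper name `merge` and the exact structure are assumptions, not necessarily the playground's code:

```swift
func mergeSort(_ input: [Int]) -> [Int] {
    // Single-element (or empty) arrays are already sorted
    guard input.count > 1 else { return input }
    // Split in the middle, then recurse on both halves
    let splitIndex = input.count / 2
    let left = mergeSort(Array(input[..<splitIndex]))
    let right = mergeSort(Array(input[splitIndex...]))
    return merge(left, right)
}

func merge(_ left: [Int], _ right: [Int]) -> [Int] {
    var sorted: [Int] = []
    sorted.reserveCapacity(left.count + right.count)
    var l = 0, r = 0
    // Repeatedly append the smaller of the two head elements
    while l < left.count && r < right.count {
        if left[l] <= right[r] {
            sorted.append(left[l]); l += 1
        } else {
            sorted.append(right[r]); r += 1
        }
    }
    // One side may still hold leftovers (e.g. for odd-sized input);
    // these slices are empty when nothing remains
    sorted.append(contentsOf: left[l...])
    sorted.append(contentsOf: right[r...])
    return sorted
}

print(mergeSort([7, 2, 6, 3, 9, 5]))  // [2, 3, 5, 6, 7, 9]
```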
46. Merge Sort - Summary: So far, we've talked about three basic and one more advanced sorting algorithm: selection sort, insertion sort, bubble sort, and merge sort. The merge sort is the most efficient among them. It has a linearithmic, that is n log n, worst and average time complexity. The merge sort uses a divide-and-conquer technique. It splits the original array recursively into smaller sublists. Then, in the sort and merge phase, it assembles the ordered result. Merge sort works best with larger inputs. If you need to sort tiny arrays, say fewer than 20 elements, insertion sort may perform better due to its simplicity. So what's next? In the next lecture, we are going to take a closer look at a very popular and also very fast sorting algorithm: the quicksort.
47. Quicksort: Quicksort is probably the most widely used sorting algorithm. Its popularity is related to its performance. Besides, it's not too difficult to implement, and it works well with many different input types. However, the approach is different: unlike for the merge sort, the final sorting of elements happens before the merge phase. Soon I'll show you how to implement a quicksort in Swift. But first, let's visualize how this algorithm works. So here's our unsorted array. First, we need to pick a pivot. The algorithm uses this pivot to split the array into three parts: one list with all the elements that are smaller than the pivot, one with the elements that are equal to the pivot, and, as you might have guessed, one sublist that contains all the elements that are bigger than the pivot. Splitting the array around the pivot is called partitioning. The partitioning continues until there are no more sequences that require sorting. Now let's get back to our example. We pick the element at index 4 as pivot. The resulting three subarrays contain all the numbers that are smaller than 3, equal to 3, and bigger than 3, respectively. Next, we divide the leftmost sublist. As pivot, we choose the item at index 1. The resulting sublists contain the numbers 0, 1 and 2. Since there are no numbers bigger than 2, the sublist that should hold the bigger numbers is empty. Again, we pick a pivot and split the next subarray. We get two arrays with only one element. Now we process the rightmost sublist, which contains the elements that are greater than the very first pivot. We pick 6 as pivot, which gives us the following subarrays: an empty subarray for elements smaller than the pivot, a one-element subarray for elements equal to the pivot, and a one-element subarray for elements bigger than the pivot. We are done with the partitioning phase. By merging the terminating sublists, we get the sorted result. All right. Now, let's implement the quicksort function.
This variant is the simplest possible implementation. The pivot is the element from the middle of the array. For partitioning, we rely on the filter function. The quicksort function calls itself recursively until all sublists contain one element or only equal items. Finally, the sublists are combined; in Swift, you can simply use the + operator to concatenate arrays. Let's do a quick test run. The sorted result is shown in the console. While this code is very simple, it's definitely not a production-ready quicksort implementation. Instead of implementing a partitioning scheme, we rely on the filter library function. This produces very simple and compact code. However, in terms of performance, this is definitely not the best choice. Many improvements have been made to the original quicksort algorithm, which was invented almost 60 years ago. Finding the optimal pivot and better partitioning schemes can further improve the efficiency of this clever algorithm. A simple way to improve the performance of quicksort is to switch to insertion sort for smaller arrays. Even Apple uses this trick in the implementation of the Swift sort function.
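To make the two improvements mentioned above concrete, here is a hedged sketch of an in-place quicksort that uses the Lomuto partitioning scheme and falls back to insertion sort for small ranges. The cutoff value of 16 and the last-element pivot are my own illustrative assumptions; real implementations tune these choices empirically.

```swift
// Insertion sort on the closed range a[lo...hi] (efficient for tiny ranges).
func insertionSort<T: Comparable>(_ a: inout [T], _ lo: Int, _ hi: Int) {
    guard lo < hi else { return }
    for i in (lo + 1)...hi {
        let value = a[i]
        var j = i - 1
        while j >= lo && a[j] > value {
            a[j + 1] = a[j]   // shift larger elements to the right
            j -= 1
        }
        a[j + 1] = value
    }
}

// Lomuto partition: uses a[hi] as pivot, moves smaller elements to the
// front, then places the pivot between the two regions. Returns its index.
func partitionLomuto<T: Comparable>(_ a: inout [T], _ lo: Int, _ hi: Int) -> Int {
    let pivot = a[hi]
    var i = lo
    for j in lo..<hi where a[j] < pivot {
        a.swapAt(i, j)
        i += 1
    }
    a.swapAt(i, hi)
    return i
}

// In-place quicksort that switches to insertion sort below the cutoff.
func hybridQuicksort<T: Comparable>(_ a: inout [T], _ lo: Int, _ hi: Int,
                                    cutoff: Int = 16) {
    guard lo < hi else { return }
    if hi - lo < cutoff {
        insertionSort(&a, lo, hi)
        return
    }
    let p = partitionLomuto(&a, lo, hi)
    hybridQuicksort(&a, lo, p - 1, cutoff: cutoff)
    hybridQuicksort(&a, p + 1, hi, cutoff: cutoff)
}

var numbers = [5, 3, 8, 1, 9, 2, 7, 0, 6, 4]
hybridQuicksort(&numbers, 0, numbers.count - 1, cutoff: 4)
print(numbers)  // [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Unlike the filter-based variant, this one sorts the array in place, so it allocates no intermediate arrays.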
48. Quicksort - Summary: Let's quickly recap what we've learned in the previous lecture. Quicksort is likely to run faster than any other comparison-based sorting algorithm in most cases. This algorithm uses the divide-and-conquer strategy to partition the array, then sorts the subarrays independently. Quicksort was invented in 1960. It's been consistently studied and refined over time. Hoare, the inventor of the algorithm, Dijkstra, Lomuto, and others have been working on improving the efficiency of quicksort even further. Smarter partitioning schemes and finding the optimal pivot can make a huge difference. With this, we are done with the sorting algorithms. By now, you probably know a lot more about algorithms than when you started this course. But wait, there's more. See you in the next section.
49. Section 9: Final Thoughts & Useful Resources: So you've reached the end of this course. Congratulations! You've learned a lot about algorithms, and you understand their benefits. Whenever in doubt, feel free to revisit the lectures in the section called The Power of Algorithms. Concepts like linear or quadratic time complexity won't make you raise your eyebrows anymore. The chapter about Big-O notation has clarified some of the most common time complexities via Swift code examples. We dove into the details of three popular basic sorting algorithms and two advanced ones, including the extremely widespread quicksort. By now, you are probably able to explain and implement a sorting algorithm from scratch. Now you may want to deepen your knowledge further. So, what's next? I'll give you some useful online resources which will help you in sharpening your coding and problem-solving skills. Codility is a great resource for both developers and recruiters. Codility has a lot of coding exercises and challenges to test your knowledge. The site provides an online editor and compiler and supports a number of different programming languages, including Swift 3. You can provide custom test data and run several test runs before submitting your solution. The solution is evaluated for correctness, edge-case scenarios, and time complexity as well. You may not achieve the highest score, even if your solution produces the expected results, if its performance is slow or it fails some extreme edge case. An algorithmic approach is definitely required to solve most of the exercises on this site. HackerRank offers a lot of tutorials and challenges, too. Project Euler is a collection of challenging math and computer programming problems. You should keep working on improving your algorithmic problem-solving skills. You'll have to practice a lot to make algorithmic thinking a habit. Instead of jumping to implementing a naive, slow solution,
you will eventually find yourself analyzing the problem and considering various aspects like worst-case and average time complexity, as well as memory requirements. You'll not only solve the problems, but you'll be able to provide elegant, efficient, and reliable long-term solutions. I'd love to hear from you. Feel free to message me, and if you found this course useful, please leave a nice review or rating. Thank you.