Transcripts
1. 0- Introduction: --
2. 1- Find pair that sums up to k: Welcome to this new video where we will see a
new coding problem, the 'find pair that sums up to k' problem. This problem says, given
an array of integers arr and an integer k, create a boolean function that checks if there
exist two elements in arr such that we get k when we add them together. I hope that you tried to
solve this problem, so let's start. First of all, we have to understand the problem, we have to
know exactly what our function has to do. Here we want to find two elements such that we get
k when we add them together, so as soon as we find those two elements, we return true. And the first
solution that may come to mind is the brute force solution, where we check all the possible
pairs until we find one that sums up to k. For example, if we have this array of 5 elements,
we check the pair of elements at indexes 0 1, then 0 2, then 0 3, then 0 4. Then i becomes 1
and we repeat, we check 1 2, then 1 3, then 1 4. And we continue like this for all possible
pairs, until we find a pair where arr[i] + arr[j] is equal to k. After checking all pairs,
if none of them sums up to k, we return false. And the Python code of this solution is, we
define our function findPair() that takes as parameters the array of integers arr
and the integer k, then we start looping. Here to check all possible pairs, we need two
for loops, the outer loop where i starts at 0 and stops at arr length, and the inner loop
where j starts at i+1 and stops at arr length. j starts at i+1 because if it starts at 0, we
will traverse pairs that we already traversed so it's useless. And inside the inner loop, if
arr[i] + arr[j] is equal to k, we return true, because we found a pair that sums up to k. After
the loop, if the function didn't return true yet, it means that we didn't find what we're
searching for, so we return false. This solution works but it's not the best
one, because even though it has a constant space complexity, it has a time complexity
of O(n²) because of the two for loops, which is too slow for this problem,
so let's search for another solution. Now let's suppose that we have this array that
is sorted, and that we are searching for a pair that sums up to 10. Let's take for example the
first and last element, 3 and 9. When we add them together, we get 12, which is greater than 10. Now
we have two choices to check another pair, we can either increment the left one to check this pair,
or decrement the right one to check this pair, but what do we choose? To know which move we have
to do, we will use the characteristics of a sorted array, we said that the array is sorted. In a
sorted array, each element is greater than or equal to the one before it, and each element is
smaller than or equal to the next one. It means that now that we have this pair of elements that
sums up to 12 and we have to decide which move to do, we know that if we increment the left one, it
will be useless, because 3 + 9 is already greater than 10, and incrementing the left one will make
it go to an element that is greater than or equal, so the pair sum will increase or remain the
same, and in both cases, it can't go down to 10. And that's the case here: if we increment the left
one, the new pair sums up to 14, which is greater, so we deduce that moving the left one will
increase the pair sum or make it remain the same. And the opposite for the right one, we said that
each element is smaller than or equal to the next element, so moving the right one will make it
point to a number that is smaller than or equal, so our pair sum will decrease or remain the same.
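As a quick recap before building on this idea, the brute force function described earlier might be sketched like this in Python (a sketch, not the video's exact code; the video calls the function findPair(), the snake_case name here is my own):

```python
def find_pair_brute_force(arr, k):
    """Check every pair (i, j) with j > i; O(n^2) time, O(1) space."""
    for i in range(len(arr)):
        # j starts at i + 1 so we never revisit a pair we already checked
        for j in range(i + 1, len(arr)):
            if arr[i] + arr[j] == k:
                return True
    return False
```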
And knowing these two facts, what we can do is sort the array first, because the input
array is not necessarily sorted, then we use the two pointers technique, one that starts at the
first index and one at the last index, then if we add arr[left] with arr[right] and we get k, it's
perfect, this is what we want, so we return true. Else if we add arr[left] with arr[right] and we
get a sum that is smaller than k, we have to make it bigger, so we increment the left pointer in
the hope of increasing the sum to make it reach k. Else, it means that our sum is bigger than k,
so we decrement the right pointer, in the hope of decreasing the sum to make it reach k. And all
that while left pointer is still before right pointer, because if we don't stop after it, we
will just traverse the same pairs so it's useless. After the loop, if the function didn't return true
yet, it means that we traversed all the necessary pairs and we didn't find one that sums up to k, we
return false. To better visualize this solution, let's apply it to this example, we have
this array arr and k being equal to 14, so we are searching for a pair that
sums up to 14. First we sort the array, then we put our two pointers, one on the
first element, and one on the last element. Now we have arr[left] + arr[right] is equal to
1 + 12, which is 13, which is smaller than 14, so we increment left pointer to get a bigger pair
sum. Now we have 3 + 12, it's 15, now it's the opposite, it's bigger than 14, so we decrement
the right pointer to get a smaller pair sum. Now we have 3 + 9, it's 12, which is smaller
than 14, we increment the left pointer. And now we have 5 + 9, which is equal to 14, so we
found a pair that sums up to k, we return true. The Python code of this solution is to first
define our function findPair(), then we sort arr, then we create and initialize our two pointers
to the first and last index respectively. After it we have the loop, while left is smaller
than right, if arr[left] + arr[right] is equal to k, we return true because we found the pair, else
if their sum is smaller than k, we increment left, else, we decrement right. After the loop, we
return false because we didn't find a pair. The time complexity of this solution is O(nlogn),
because sorting the array costs nlogn and traversing the necessary pairs costs
n, and nlogn + n gives O(nlogn). And for the space complexity, it depends on the space
complexity of the sorting algorithm we use; for example, if it's O(logn), then the space
complexity of this algorithm is O(logn). Even though this solution is better than the
first one we've seen, we can still do better, so let's find a solution that runs in O(n) time
complexity. Now what if, for example, we were able to check if a certain value exists in
the array without having to traverse it again, for example we have this array and we are
searching for a pair that sums up to 12. What we want to do is that we traverse the array,
then when we arrive at 10 here, we can say, 'wait, to reach 12 I need the value 2,
and I already found it when traversing the array, so we can form a pair that sums up to k'. It means
that we want to store the values that we already visited, until we find the other element of the
pair. And the best way to perform this task is to use a hash table, which can have another name
depending on the language but the concept remains the same. The hash table is a powerful tool
when solving coding problems because it has an O(1) lookup on average, so we can get the
value of a certain key in O(1). And it has an O(1) insertion on average, so we can insert an
element in O(1). So what we're going to do is, while traversing the array, store the current element
in our hash table that contains visited values, then as soon as we find that we already visited k
- the current element, we return true. Why k - the current element? Because it's the value that
we need to add to the current element to get k. For example with this array, we start with an
empty hash table, because we didn't visit any element yet, and we start. Does 12 - 8, which is
4, exist in the hash table? No, we store 8 and we continue. Does 12 - 2 exist in the hash table? No,
we store 2 and we continue. Does 12 - 9, which is 3, exist in the hash table? No, we store 9 and we
continue. Does 12 - 5, which is 7, exist in the hash table? No, we store 5 and we continue. Does
12 - 10, which is 2, exist in the hash table? Yes, we've already seen it, so we return true, because
we have a pair of elements that sums up to k. The Python code of this solution is to
first define the function findPair(), then we create an empty dictionary to store the
visited values, and we start iterating over the elements. For each element in arr, we check if k
- element exists in the dictionary, so we write, if visited.get(k-element), and if it's a
truthy value, it means that k - element exists, so we return true. Else, we just put our
element in the dictionary and we continue. After the loop, if the function didn't return
true yet, it means that we didn't find any pair that sums up to k, so we return false. And
this solution runs in O(n) time complexity, because we said that lookup and insertion are
constant on average in a hash table, and we are traversing the n elements of arr, so we get
O(n), where n is the number of elements of arr. Same thing for the space complexity, it's
O(n) because we are using additional space for a hash table that can contain
n elements in the worst case. I hope that you understood these solutions,
and you can find their code in other programming languages below. So check the code of this
solution, and see you in the next problem.
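To recap this video, the two faster approaches might be sketched like this in Python (a sketch, not the video's exact code; the names are my own, and the hash-table version uses a set, which expresses the same "visited values" idea as the dictionary described above):

```python
def find_pair_two_pointers(arr, k):
    """Sort, then move two pointers inward; O(nlogn) time."""
    arr = sorted(arr)  # sort a copy so the caller's list stays untouched
    left, right = 0, len(arr) - 1
    while left < right:
        pair_sum = arr[left] + arr[right]
        if pair_sum == k:
            return True
        elif pair_sum < k:
            left += 1   # sum too small: move to a bigger left value
        else:
            right -= 1  # sum too big: move to a smaller right value
    return False


def find_pair_hash_table(arr, k):
    """Single pass storing visited values; O(n) time, O(n) space."""
    visited = set()
    for element in arr:
        if k - element in visited:  # the needed complement was already seen
            return True
        visited.add(element)
    return False
```

Sorting a copy instead of sorting in place is a design choice so that calling the function doesn't reorder the caller's array; the video sorts arr directly, which is equally valid inside the function.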
3. 2- First repeating character:
4. 3- Remove duplicates: Welcome to this new video where we will see a new coding problem, the 'remove duplicates' problem. This problem says, given an array of integers arr, create a function that returns an array that contains the values of arr without duplicates. So each value must appear exactly once in the output, and the order doesn't matter. I hope that you tried to solve the problem, so let's start. The brute force solution of this problem is that we create an empty array that will be the output, then for each element of arr, we check if we didn't put it in the output yet. If we didn't, we push it and we continue. For example, if we have this array, we first create an empty array to store the values without duplicates, and we start. The first element is 4, is it in the output array? No, so we push it and we continue. Does 2 exist in the output array? No, we push it and we continue. Does 3 exist in the output array? No, we push it and we continue. Does 2 exist there? Yes, because we already found one occurrence of 2, so we don't push it. Same thing for the next element, which is 4, we already have a 4 in the output, so we continue. This element is also 4, we already have it, so we continue. The last element is 1, it doesn't exist in the output, so we push it, and now we're finished and we got the array with no duplicates: 4, 2, 3 and 1. The Python code of this solution is to first define our function that takes arr as a parameter. Then we create an empty array, and after it, for each element in arr, if that element is not in noDuplicates, our output array, we append it. At the end, we just return our noDuplicates array. The problem with this solution is that we always have to traverse our output array again and again to check if the actual element is present in it or not.
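This brute force version might be sketched like this (a sketch, not the video's exact code; the snake_case name is my own):

```python
def remove_duplicates_brute_force(arr):
    """Push each value into the output only if it isn't there yet; O(n^2) time."""
    no_duplicates = []
    for element in arr:
        # 'in' scans the output array linearly, which is the costly part
        if element not in no_duplicates:
            no_duplicates.append(element)
    return no_duplicates
```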
In other words, for the n elements of arr, we are each time traversing the output array that can have n elements in the worst case, which gives a time complexity of O(n²), and a space complexity of O(n) if we decide to count the output array's space. So let's search for a better solution. If we take a look at a sorted array, we can notice an interesting thing: all the elements that have the same value are grouped together, like in this example. So we deduce that while traversing the array, each time we find an element that is different from the previous one, it means that we found a new value. For example, here we start with the value 1, then we have 1, 1 and 1, and after it we have this element that is different from the element before it, because 3 isn't equal to 1. So we push the value 3 in our output array, and we continue. Now we also have 3, it's equal to the previous element, so we have nothing to do. Now we have 4, it's different from 3, so we push it and we continue. And now we have 6, it's different from 4, so we also push it and we continue. And now we have 6, not different, and also 6, not different, and we finished traversing our array, and we got that the values without duplicates are 1, 3, 4 and 6. And you can notice that here we didn't have to traverse the output array each time to check if we didn't already store the actual element, we just had to pay the cost of sorting the array. The Python code of this solution is that we define our function, we sort the array, and we create our output array, which initially contains the first element, because it's the first value of the array. And because here we are directly reading the first element, we have to make sure that it exists, because if the array is empty, the first element doesn't exist, and we would get an error when trying to read an element that doesn't exist. To counter this problem, we add an early exit condition at the beginning.
If the array has no elements, in other words its length is zero, then we have no values at all, so we return an empty array. Now we need to start traversing the remaining values that come after the first one, and we start from the second element because we already stored the first one. So we write: for i starting at 1 and stopping at the length of arr, we compare the actual element with the previous one. So if arr[i] is different from arr[i-1], then we found a new value, and we append it to our noDuplicates array. After the loop, we just return our output array. This solution has an O(nlogn) time complexity because we sorted the array, and an O(n) space complexity if we decide to count the output's space. But let's try to find an O(n) solution that doesn't require sorting the array. And I think that you guessed it, we will use our powerful tool, the hash table, where we will store the values we find while traversing the array. With this example, first we have an empty hash table, and we start. First element, 4, we take 4 as a key and we put true, and we continue. Same thing for 2, we take 2 as a key, we put true, and we continue. Same thing for 3, we take it as a key and we continue. Now we have the value 2, a value that we already found before, but it's not a problem, because when taking it as a key and putting true, nothing will change. It's like if we had a variable x initialized to 5, then we assigned 5 to x again, nothing changes. So we continue. And here we have 4, nothing changes. We also have 4 here, nothing changes. And for the last element, we take 1 as a key and we put true, and we finished our traversal. Now that we got this hash table, we just take its keys, because they represent the values of arr without duplicates. The Python code of this solution is, we first define our function, then we create a hash table 'visited' by using a dictionary to store the visited values, then for each element in arr, we set visited of that element to true.
So we take that element as a key, and we put true. After it, we just return the keys as a list. By the way, note that we could also use the set data structure for this problem. This solution runs in O(n) time complexity because we are just traversing arr, and don't forget that lookup and insertion in a hash table are constant on average. For the space complexity, we have O(n) because of the hash table, and the output array if we decide to count it. I hope that you understood these solutions, don't forget to check the code, and see you in the next problem.
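The two other approaches from this video might be sketched as follows (a sketch, not the video's exact code; in the hash-table version, the dictionary keys play the role of the set of seen values):

```python
def remove_duplicates_sorted(arr):
    """Sort, then keep each element that differs from its predecessor; O(nlogn) time."""
    if len(arr) == 0:  # early exit: an empty array has no first element to read
        return []
    arr = sorted(arr)
    no_duplicates = [arr[0]]  # the first value is always a new value
    for i in range(1, len(arr)):
        if arr[i] != arr[i - 1]:  # different from the previous one: new value
            no_duplicates.append(arr[i])
    return no_duplicates


def remove_duplicates_hash_table(arr):
    """Take each value as a dictionary key; O(n) time, O(n) space."""
    visited = {}
    for element in arr:
        visited[element] = True  # re-inserting an existing key changes nothing
    return list(visited.keys())
```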
5. 4- Find the duplicate: Welcome to this new video where we will see a new
coding problem, the "find the duplicate" problem. The problem says, given an array of integers
arr that contains n+1 elements between 1 and n inclusive, create a function that returns
the duplicate element (the element that appears more than once). And you can assume
that there is only one duplicate number, and that it can be repeated more than once.
I hope that you tried to solve the problem, so let's start. Before discussing any solution, I
want to talk about how it is possible that we will always have a duplicate number in our context.
It's because of the pigeonhole principle. In mathematics, the pigeonhole principle says that if
we try to insert n items into m containers, and n is greater than m, then at least 2 items
will share the same container. Which is true, because for example if 5 people decide to go to
4 pizzerias, then at least 2 of them will go to the same one. This one can for example go here,
this one here, this one here, this one here, and now the last one has no choice, he can only
go to a pizzeria where there is already someone, because the number of pizzerias is
smaller than the number of people. And this process applies to our problem,
because we said that we have an array that contains n+1 elements, all of them are
between 1 and n inclusive, so we have only n values for n+1 elements, which means that at
least two elements will have the same value, in other words, there will always be
a duplicate value. That being said, we are now sure that we'll have a duplicate
value, so we can start solving our problem. The first solution that may come to
mind is obviously the brute force solution, where for each element of the array, we
search for it in the remaining elements, and if we find it, then it's the
duplicate number, we return it. For example with this input, we start with the
first element, 4 isn't present in the remaining values, so it's not the duplicate. Then we move to
2, same thing, not present in the remaining elements. Then we move to 1, and you can see that we can
find it in the remaining elements, so we return 1. So in Python, the solution would be, we
define our function, then for i starts at 0 and stops at array length, for j starts at
i+1 because we are checking remaining elements, and stops at array length, if arr[i] is equal
to arr[j], we return arr[i]. And this solution obviously runs in O(n²) time complexity and O(1)
space complexity, so let's see a better one. The other idea that we can think of when dealing
with duplicate values and stuff like that is to sort the array, to get the same values grouped
together. Then we just have to iterate over the elements and as soon as we find an element that
is equal to the previous one, we return it. The Python code, we first define the
function, we sort the array, then for each element starting from the second one, if it's
equal to the previous element, we return it. And this solution runs in O(nlogn) time
complexity, and for the space complexity, here it depends on the space complexity of the sorting
algorithm we use, for example if it's O(logn) then the space complexity of this algorithm
is O(logn). Now let's see a better solution, by obviously using the famous hash table. We
create an empty hash table, then for each element, if it already exists in the hash table, it
means that we already visited it, so it's the duplicate number, we return it. Else, we just
store it and we continue. And the Python code is, we first define the function, then we create the
empty hash table, then for each element in arr, if visited.get(element) returns a truthy value, it
means that we already stored it, so we return it. Else, we store it and we continue. And this
solution runs in O(n) time complexity, because we are just traversing the array once, and in
O(n) space complexity because of the hash table. Now what if I tell you that we can solve this
problem in O(n) time complexity, and O(1) space complexity. It's by applying the Floyd's cycle
detection algorithm, also called, the tortoise and hare algorithm. First of all, what is the
Floyd's cycle detection algorithm? As we can guess it from its name, it's used to detect cycles,
usually in linked lists. A cycle in a linked list is a group of nodes that form a loop, so the last
node points back to the first node of the cycle, like in this example. And the first node of the
cycle has the particularity that it's the only node that is being pointed to by two nodes, the
one before it, and the last node of the cycle, a very important piece of information that we
will use later, just keep that in mind. Now let's see how this algorithm
works to detect a cycle in a linked list. You have to know that for this algorithm we
will use the fast and slow pointer technique, this is why it's called tortoise and hare. So we
take tortoise and hare, we put them on the first node, then the concept is that the tortoise, which
is the slow pointer, moves by 1 node at a time. And the hare, which is the fast pointer, moves
by 2 nodes at a time, and we keep repeating the process until tortoise and hare meet, let's
start. Tortoise and hare start at this node that has the value 1. Then we move tortoise by 1
node and hare by 2. They didn't meet so we repeat. We repeat again. We repeat again. We repeat again.
We repeat again. We repeat again. We repeat again. We repeat again, and now they finally met at the
node that has the value 10. But it's not the end of our work, we have a second step to do. Now we
put tortoise on the first node, and we keep moving tortoise and hare by 1 node at a time this time
until they meet. So now tortoise goes on this node and hare on this one. They didn't meet yet, so we
repeat. They didn't meet yet, so we repeat again. And now they met again, and you can see that they
met in the first node of the cycle, which is our initial goal, so by applying this algorithm,
we could detect the cycle and where it begins. Okay but what does this algorithm have
to do with our problem? We have an array, and not a linked list, and we are not searching
for a cycle, just the duplicate value. But what if I tell you that we CAN have a linked list from
that array, and that we ARE searching for a cycle, let me explain. What we're going to do here is that we
will transform our array into a linked list, where each element points to another element depending
on its value, for example an element that has the value 5 will point to the element at the
index 5. Let's apply this process on this array. First we transform our elements to nodes, and we
start putting our links. The first element has the value 5, so it points to the element at index 5,
here it is. The second element has the value 2, so it points to the element at the index 2. The
third element has the value 4, so it points to the element at the index 4. The fourth element
has the value 2, so it points to the element at the index 2. Fifth element, value 1, points
to element at index 1. Sixth element, value 6, points to element at index 6. Last element, value
3, points to element at index 3. And because it's quite a mess, let me just rearrange them to
visualize better. Ah, that's better, we can start. Remember what I told you earlier, I
told you that the first node of a cycle has the particularity of being pointed to by two
nodes, or more by the way. And in our context, to be pointed to by two nodes, those
must have the same value right? For example here, this node, which is initially the
element at index 2 in the array, is the first node of this cycle. And it's
pointed to by two nodes that have the value 2, which means that the number 2 is the duplicate
value. So our goal now is to find the first node of the cycle, because its index represents the
duplicate value. Let's apply the tortoise and hare process on this linked list, which is our array
in reality. Tortoise and hare start at index 0, then tortoise moves by 1 node, and hare by 2.
We apply the process, but they didn't meet yet, so we repeat. They didn't meet yet, so we
repeat again. They didn't meet yet, so we repeat again. They didn't meet yet, so we repeat again.
They didn't meet yet, so we repeat again. And now, we can see that they met, so we can move to the
second step. We take tortoise to index 0, and now they both move by 1 node at a time. They didn't
meet yet, we repeat. Same thing here, we repeat. And same thing here, we repeat. And now they met
again at the index 2, so 2 is the duplicate value. Please pause the video or watch the last
few minutes again if you didn't understand. The Python code of this solution is, we first
define our function, then we need to initialize tortoise and hare. We said that tortoise starts at
index 0 then moves by 1 node at a time, so we can set its initial value to arr[0], because don't forget
that to move between nodes, we use the index, and the element at index 0 points to the element
at the index arr[0]. And for hare, we said that it starts at 0 then it moves by 2 nodes. So
we can initialize it to arr[arr[0]]. Don't be confused, it's just our way to move by 2 nodes,
because the element at index 0 points to the element at the index arr[0], and the element at
index arr[0] points to the element at arr[arr[0]]. Now the while loop, while tortoise isn't equal to
hare, because we said that we repeat the process until they meet. Inside the loop, tortoise
moves by 1 node, so we replace tortoise by arr[tortoise], which is the index of the next
node. And hare moves by 2 nodes, so hare gets replaced by arr[arr[hare]], which is the index of
the other node that comes after the next one. Then after they meet, we said that tortoise goes back
to the beginning, so we set it to the index 0. And while they didn't meet again, while tortoise
isn't equal to hare, we move them both by 1 node, so tortoise becomes equal to arr[tortoise],
and hare becomes equal to arr[hare]. And after they meet again, we said that it means
that their value represents the duplicate value, so we return one of them. And this solution runs on O(n) time complexity
because in reality we are just traversing the array twice. And for the space complexity, here
it's O(1), because we used only simple variables, so you can see that we got a solution in O(n)
time complexity and O(1) space complexity. Okay that's cool this algorithm works and gives
us the result we want, but, the real question is, why does it work? And more precisely, why does it
always work? And this is what I'm going to explain now. I know that this solution isn't
as intuitive as sorting the array or something like that, so even if it works, we aren't directly
convinced; this is why I will demonstrate it. And prepare yourself, because there
will be a mathematical demonstration. Okay so let's take our first linked list example,
this one, and let's have a look at the important components. Don't forget that in this example,
tortoise and hare met at this node the first time. First of all we have the part before reaching the
cycle, let's name it: beforeCycle. We also have the cycle, and we can name its length: lenCycle.
We also have the distance before the meeting point, let's name it: beforeMeeting. And we
also have the distance between the meeting point and the first node of the cycle, but from
the other side, let's name it: afterMeeting. Note that lenCycle is equal to beforeMeeting +
afterMeeting, this information will help us later. Now let's calculate the distance
traveled by the tortoise and the hare. For the tortoise, we have distance beforeCycle,
then before reaching the meeting point, it's possible for the tortoise to travel the
cycle many times before meeting the hare. So we add k1 multiplied by lenCycle, +
beforeMeeting, where k1 is a constant that represents the number of times tortoise
must travel the cycle before meeting hare (it can be 0, by the way). And beforeMeeting is the
distance required to go from the first node of the cycle to the meeting point. Now let's calculate
distance of hare. Same thing here, we have the distance beforeCycle, then a fixed number of
cycle traversals + the beforeMeeting distance. And because the number of times that hare traverses
the cycle can be different from tortoise's, let's name it k2. Now that we calculated
both distances, don't forget that hare is moving by 2 nodes at a time, so the distance
it travels is twice as long as tortoise's, so we write: hareDistance is
equal to 2 times tortoiseDistance. Now we replace them by their respective expression
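The algebra narrated in the next few lines (which refers to equations shown on screen in the video) can be written out as follows, using the names defined above:

```latex
\begin{align*}
\text{tortoiseDistance} &= \text{beforeCycle} + k_1\,\text{lenCycle} + \text{beforeMeeting} \\
\text{hareDistance} &= \text{beforeCycle} + k_2\,\text{lenCycle} + \text{beforeMeeting} \\
\text{hareDistance} &= 2 \times \text{tortoiseDistance} \\
\Rightarrow\ \text{beforeCycle} &= (k_2 - 2k_1)\,\text{lenCycle} - \text{beforeMeeting} \\
&= k_3\,\text{lenCycle} - \text{beforeMeeting} \qquad (k_3 = k_2 - 2k_1) \\
&= (k_3 - 1)\,\text{lenCycle} + (\text{lenCycle} - \text{beforeMeeting}) \\
&= k_4\,\text{lenCycle} + \text{afterMeeting} \qquad (k_4 = k_3 - 1)
\end{align*}
```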
that we just calculated, and this is what we get. We move all the terms to the right
side, then we simplify with subtraction, and this is what we get. Now we take 2k1 - k2 to
the other side, and we obviously flip the signs. Now because k1 is a constant, and same thing
for k2, we can replace their combination by a single constant k3. Now we take beforeMeeting to the other side, so
the equation becomes, beforeCycle is equal to k3 lenCycle minus beforeMeeting, which is equal
to (k3-1) lenCycle + (lenCycle - beforeMeeting). Now we replace k3-1, which is a constant,
by k4, and as I told you before, lenCycle is equal to beforeMeeting + afterMeeting,
which means that lenCycle - beforeMeeting is equal to afterMeeting, so we replace in the equation.
Now we have beforeCycle is equal to k4 times lenCycle, + afterMeeting, and k4 times lenCycle
doesn't change anything in the destination, because each time we traverse the
cycle, we go back to the first point. So even if we traverse it 100 times we go back to
the same position, and all that because a constant times x modulus x is equal to 0, which is the case
here, k4 times lenCycle modulus lenCycle gives 0, so we can put 0 here, and we get that beforeCycle
is equal to afterMeeting. What does it mean? It means that this distance, is equal to this
distance, this is why when we take tortoise to the first node again and they walk 1 node at a
time, they meet in the first node of the cycle, which is what we are searching for. So we
finally proved that in the general case, our algorithm will always detect the cycle and its
first node. I hope that you understood all the stuff presented in this
lecture, watch it again if you missed something, you can also ask me questions in the comment
section below, and see you in the next problem.
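Here is the full solution described in this lecture as a Python sketch (the function name find_duplicate is my choice, since the lecture doesn't name it):

```python
def find_duplicate(arr):
    # Phase 1: tortoise moves by 1 node, hare by 2, until they meet.
    # Moving from a node means jumping to the index given by its value.
    tortoise = arr[0]
    hare = arr[arr[0]]
    while tortoise != hare:
        tortoise = arr[tortoise]
        hare = arr[arr[hare]]
    # Phase 2: tortoise goes back to index 0, then both move by 1 node
    # at a time; they meet at the first node of the cycle, whose index
    # is the duplicate value.
    tortoise = 0
    while tortoise != hare:
        tortoise = arr[tortoise]
        hare = arr[hare]
    return tortoise
```

For example, on the array from the lecture, find_duplicate([5, 2, 4, 2, 1, 6, 3]) returns 2.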
6. 5- Tree depth first search: Welcome to this new video where we will learn
how to traverse a tree in depth first search. So given a binary tree root, you are asked
to create 3 functions to print tree elements, one in preorder traversal, one in inorder
traversal, and one in postorder traversal. Preorder means that we print the root value,
then we print the left subtree, then we print the right subtree. Inorder means that we print the left subtree,
then we print the root value, then we print the right subtree. And postorder means that we print the left
subtree, then we print the right subtree, then we print the root value. I hope that you tried to solve the problem,
so let's start. First of all, why is tree traversal special? It's because a tree is not a linear data
structure like the array or the linked list. Because for example in an array, we have a
starting point, the first element, and we have an ending point, the last element, so
to traverse an array, we can just start in the first element and move straightforward
until the last element. But it's not the case for a tree; even if
we have a starting point, which is the root, it's not enough, so we have to find a
way to traverse all nodes. And for that, we have 2 main ways to do it,
we have BFS, breadth first search, where we traverse the tree level by level, where we
print the root, then this level, then this level, and so on. And we will see how to do it in another problem
of this course. And the second way to traverse a tree is DFS,
depth first search, where we go deep into the tree, then we backtrack, we go from the
other side, and so on. Now let's start, let's see how preorder traversal
works. We said that for preorder, we print the root
value, then we print left subtree, then we print right subtree. Okay now we printed the root value, and we
have to print the left subtree. But wait, the left subtree is also a tree,
so we have to apply the same process on it, which means that it's a recursive process, because
we print the root value, then we call the same function on both subtrees, and so on. So let's continue, we said that we print the
root value, then we print left subtree. And here, same thing goes again, on the left
subtree of the left subtree of the initial root. So we print the root value, and we print the
left subtree. But here, the left subtree is null, which
means that we have to backtrack. When we backtrack, we go back to the previous
call, to print the right subtree. But here, same thing, right subtree is null,
we backtrack again. And now, we ended the process on this subtree,
we printed its root value, we called the function on its left and right subtree, so the function
call ends, and we backtrack. Now we continue our process, and we print
the right subtree this time, because we ended the left subtree. So we print the root value, then we print
left subtree. Left subtree is null, same thing for right
subtree, so we backtrack. And now we printed both left and right subtree,
we backtrack again. And now you can see that we finished the whole
global left subtree, which means that we go back to the first function call ever, and
now we have to print the right subtree. We print the root value, and we print the
left subtree. We print the root value, and we print left
subtree. But left subtree is null, we backtrack. Same thing for right subtree, it's null, we
backtrack. We finished printing this subtree, so we backtrack. Now we print the right side, we print the
root value, and we print the left subtree. But it's null, we backtrack, right side is
also null, we backtrack. We have no work to do for this subtree, so
we backtrack. Same thing here, we printed both left and
right sides, we backtrack. Now you can see that we went back to the initial
root, and because we printed both left and right subtree, then our job is done here,
so the function call ends. Now if we print the same tree but in inorder
traversal, we first print left subtree then root value then right subtree. So we have left, left, left, backtrack, print
root, right, backtrack, backtrack, print root, right, left, backtrack, print root, right,
backtrack, backtrack, backtrack, print root, right, left, left, backtrack, print root,
right, backtrack, backtrack, print root, right, left, backtrack, print root, right, backtrack,
backtrack, backtrack, the end. Now let's do it in postorder, so we will print
the left subtree, print the right subtree, then print the root value. So we have left, left, left, backtrack, right,
backtrack, print root, backtrack, right, left, backtrack, right, backtrack, print root, backtrack,
print root, backtrack, right, left, left, backtrack, right, backtrack, print root, backtrack,
right, left, backtrack, right, backtrack, print root, backtrack, print root, backtrack,
print root, the end. Now let's move to the code. In reality, the code of those traversals is
really simple, for example for preorder, we said that we print the root data, then we
print left subtree, then right subtree. So we first define the function, it takes
a binary tree named root as parameter, then inside, we print the root data, we call the
same function with the left subtree, then we call with the right subtree, that's it. But because it's a recursive function, we
have to add the base case at the beginning, to know when to stop, it's when the passed
parameter is null; in that case it means that we arrived at the bottom, so we have nothing
to do, we directly return. Now for inorder, we said that we print left
subtree, we print root data, then we print right subtree. So we define the function, we set the base
case, it's the same here, we call the same function on left subtree, we print root data,
and we call the same function on right subtree, that's it. Third one, postorder, we define the function,
we set the base case, we call the function on left subtree, on right subtree, and we
print root data. So you've seen that the code of the three
traversals is very similar, we just modify the order of instructions. Now for the time complexity, here we are just
traversing all the nodes of the tree, so it's O(n), where n is the number of nodes. And for the space complexity, even if we aren't
using size related additional data structures, we are using a recursive function, which means
that we have to look at the maximum number of calls that can be on the call stack, and in
this case, it represents the height of the tree, which represents the maximum depth of
a node in the tree, so our space complexity is O(h). Before ending this video, with the last example,
I want to show you the order of printed nodes in each traversal, please pause the video
to analyze them and compare. I hope that you understood tree depth first
search, it can be a difficult concept for some people because of the recursive side
of the solution, and that traversing a tree is not intuitive like for a linked list or
an array, so if you didn't understand something, you can watch the video again or ask a question
in the comment section below. Check the code below, and see you in the next
problem.
7. 6- Maximum subarray: Welcome to this new video where we will see
a new coding problem, the "maximum subarray" problem. The problem says, given an array
of integers arr, create a function that returns the sum of the subarray that has the greatest
sum. And we don't consider the empty array as a subarray. I hope that you tried to solve
the problem, so let's start. First of all, we need to know what is a subarray. A subarray
of arr is a contiguous block of elements of arr, for example this one is a subarray, this
one is a subarray, this one is a subarray, but not this one, because the
elements are not contiguous in arr. Now let's talk about what do we want to get as a result,
we want to get the sum of the subarray that has the greatest sum. Which means that our
array arr has a certain number of subarrays, each one of them has a sum, that we can get
by adding its elements, and we need to find the greatest one. So the brute force solution
of this problem would be to calculate the sum of each subarray, while keeping track
of the max one. For example if we have this array of 4 elements, let's see its different
subarrays, and for that, we will use two pointers that represent the boundaries of the actual
subarray. First, i and j are both equal to 0, we have the subarray that contains the
first element only. Then we increment j, and we get this subarray that goes from index
0 to 1. We increment j again, we get the subarray from 0 to 2. We increment j again, we get
the one from 0 to 3. And now j can't go further, so we move i to start reading the subarrays
that start from the index 1. And so on until i can't go further. And for each subarray,
we will use another for loop to calculate the sum of that subarray, then we compare
it with the maximum sum we found until now, to see if we replace it. So this solution
in JavaScript would be, we first define our function, that takes arr as parameter, then
we create the maxSum and we initialize it to -infinity, to be sure that the next subarray
will be greater. Then we have the for loop for values of i, where i starts from 0 and
stops at array length. Inside it we have the loop for values of j, we have j starts at
i and stops at array length, j starts at i because we said that we read subarrays that
start at the index i, so j, which is the end, can't be before it. And now to calculate
the sum of the subarray between indexes i and j, we first create a variable actualSum
initialized to 0, then we loop over indexes from i to j. So for k goes from i to j, we
will each time add the element at the index k, arr[k]. After getting the sum, we check
if it replaces maxSum. So we replace maxSum by the maximum value between the actual value
of maxSum, and the sum of the subarray we just calculated now, and things get repeated
for each subarray. At the end, we just return maxSum. And as you can see, here we have 3
nested for loops, and in this situation they give a time complexity of O(n³), which is
really huge, so let's see for another solution. One of the things that was slowing down our
solution is that we were each time using a loop to calculate the sum of the subarray,
so for each subarray, we were starting again from 0. And we can counter this by using the
cumulative sum, because for example here, the sum of the subarray from 0 to 2, is just
the sum of the subarray from 0 to 1, plus the element at the index 2. And the sum of
the subarray from 0 to 3, is just the sum of the subarray from 0 to 2, plus element
at index 3. So in general, when two subarrays start in the same index and end in two successive
indexes, we just have to add the last element to get the sum of the second one. And this
solution in JavaScript would be, we first define our function, then we create and initialize
our maxSum to -infinity. After it, we have a for loop where i goes from 0 to array length,
and inside it, we have our cumulativeSum. We initialize it to 0, then we have a for
loop where j starts at i and stops at array length. We need this loop because we want
to get the sums of all subarrays that start at the index i, so at each iteration, we just
add the element at index j to get the sum of the subarray from i to j, then we compare
it with the maxSum. At the end we just return maxSum, and we get the same result. And this
solution runs on O(n²) time complexity, which is better than O(n³), but not enough for
this problem, because we can find a solution in linear time complexity. In this problem,
things would have been easier if all elements were positive, because the more we add elements,
the more the sum gets bigger, so the maximum subarray will just be the whole array. But
it's not always the case, because in this problem, we can have negative elements in
the input, so we need to find another way. Let's suppose that we have this array, and
now for each index, let's determine the maximum sum that ends in that index, in other words,
among all the subarrays that end at a certain index, we will search for the one that has
the greatest sum. For example for the first index, the maximum subarray is this one, we
have no other choice, and its sum is 2. For the second index, it's this one, its sum is
5. For the third index, the maximum subarray is this one, its sum is -3. For the fourth
index, the maximum subarray is this one, it has only the element at index 4, and its sum
is equal to 4. And for the last index, the maximum subarray is this one, its sum is 9.
What we can now notice is that, for example for the second index, we took the maximum
subarray that ends at the first index, and we added the element at index 1. For the third
index, same thing, we took the maximum subarray that ends at the index 1, and we added the
element at the index 2. But for the index 3, we didn't take the maximum subarray that ends
at the index 2, we just took the element at the index 3 and that's it, please pause the
video for some seconds and try to find the reason, try to find why in index 1 and 2,
we took the maximum subarray of the previous index, and why we didn't do it at index
3. So the reason behind it is simple, I'm going to explain it now. We can notice that we
always have two choices, to take the previous max subarray, or to not take it. And this
choice will obviously be related to the sum, for example at index 1, the sum of the
previous max subarray is 2, so if we take the new element only, we would get 3, but
if we add the previous max subarray, we would get 5, which is better, so we take it. Same
thing for index 2, the sum of the previous max subarray is equal to 5, so if we take
the new element only, we would get -8, but if we take the previous max sum, we would
get -3, which is better. But for index 3, the previous max sum is -3, so if we take
the new element only, we would get 4, but if we add the previous max sum we would get
1, so it's better to take the new element only; in other words, taking the previous
sum is not worth it at all, because we always want a greater sum. So what we are doing here
is that, for each move, to get the maximum subarray that ends in the actual index, we
take the max between the actual element, and the actual element + the last sum as we said
earlier. And while doing this, it won't always increase, but we are doing our best to make
it as great as possible by always taking the maximum. And for the global result, the result
that we're initially searching for, we store it in a variable called globalSum for example,
and each time we calculate the new value of localSum, we see if it replaces it. So globalSum
becomes equal to the maximum between the actual value of globalSum, and the value we got now
that is stored in local sum. The JavaScript code of this solution is, we define the function,
then we create and initialize globalSum to -infinity, and localSum to 0. Then for each
element of arr, we replace localSum as we said with the maximum between the actual element,
and the actual element plus the actual value of localSum; this is to see whether it's
worth taking the previous sum or not. And after getting the new value of localSum, we
replace globalSum, which will be our result at the end, by the maximum between the actual
value of globalSum, and the new value of localSum we just calculated now. At the end, we just
return globalSum, which represents the sum of the maximum subarray in the whole array.
And this is what we call Kadane's algorithm; it's the one we used in this solution. And
this solution obviously works on O(n) time complexity because we are just traversing
the array once, and on O(1) space complexity because we are just using simple variables.
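The lecture's code for this solution is in JavaScript; for reference, here is the same Kadane logic as a Python sketch:

```python
def max_subarray(arr):
    # global_sum: best sum seen so far; local_sum: best sum ending here
    global_sum = float('-inf')
    local_sum = 0
    for element in arr:
        # either start a new subarray at element, or extend the previous one
        local_sum = max(element, element + local_sum)
        global_sum = max(global_sum, local_sum)
    return global_sum
```

On the lecture's example array [2, 3, -8, 4, 5], this returns 9.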
And by the way, we can say that we used dynamic programming for this solution, because if
you noticed, we are each time getting the new value of localSum by using the previous one, and
if we decided to store values in an array, we would write this relation. Note that dp[i]
represents the sum of the maximum subarray that ends at the index i. Okay, Kadane's algorithm
works fine, and in linear time complexity, but, why does it work? How can we be sure
that it will give us the maximum sum? This is what I'm gonna explain now. In reality,
the explanation is very simple. By definition, the maximum subarray, will always end at a
specific index in the array, it can be the index 0, the index 1, the index 3, anyway.
And what we are doing in our last solution is that we are calculating the maximum sum
at each index, so we are getting the maximum sum of subarrays that end in the index 0,
then the index 1, the index 2, and so on. Which means that we are covering all the cases,
so we have nothing to fear, so for example if the maximum subarray ends in the index
3, then when we arrive at index 3, we will get the maximum sum that we are searching
for, we won't miss it. In brief, if we use logic to explain what we are talking about,
we can say that, because for each index i, the value of localSum at that
index represents the maximum sum that we can get for any subarray that ends at that index, and
because the global maximum subarray ends at some index i, then when we arrive at the index
i, the value of localSum will be equal to the sum of the maximum subarray, so we won't
miss it. I hope that you understood this lecture, check the code below, and see you in the next
problem.
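The dynamic programming relation mentioned above, where dp[i] is the sum of the maximum subarray that ends at index i, can be sketched explicitly like this (at the cost of O(n) space, unlike Kadane's O(1) version):

```python
def max_subarray_dp(arr):
    # dp[i] = sum of the maximum subarray that ends at index i
    dp = [0] * len(arr)
    dp[0] = arr[0]
    for i in range(1, len(arr)):
        # either start fresh at arr[i], or extend the best subarray ending at i-1
        dp[i] = max(arr[i], arr[i] + dp[i - 1])
    return max(dp)  # the global maximum subarray ends at some index, so it's in dp
```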
8. 7- Reverse binary tree: Welcome to this new video where we will see
a new coding problem, the "reverse a tree" problem. The problem says, given a binary tree of
integers root, create a function that reverses it left to right in-place. I hope that you tried to solve the problem,
so let's start. First of all, we need to understand the question,
we must exactly know what does reverse a binary tree mean. By reversing a binary tree we mean reversing it
left to right, so for example the leftmost node will become the rightmost one, and vice
versa. Or for example the node by taking the path
left left right, will be the node by taking the path right right left, and vice versa. Now you have to know an important thing when
dealing with data structures like linked lists, trees, graphs, and remaining ones where we
have nodes and links. You have to know that with those data structures,
especially with problems that include swapping, reversing, sorting, and other stuff that modifies
the structure, we have two ways of solving, we can either deal with values or with links. For example if we have this linked list,
and we want to swap the second element with the third one, we have two choices, we can
either swap them by swapping their values. Or their links, so let's try to do both with
our binary tree. If we try to solve our problem with values,
then for each node, we must find the equivalent one from the other side, then to swap their
values. So if a node is at the path left left right
for example, we must find the node at path right right left, then we swap their values. And we will have to do this for each node. And things start getting worse when a node
doesn't exist in the other side, for example this one has no equivalent from the other
side, so we will have to create a node, copy the value, and delete this one. So you can see that trying to solve this problem
by swapping values is painful, let's see if we can do it by modifying links. We know that in the final result, all nodes
at the left will be in the right side, and vice versa. So what would happen if we swap root's left
and right attributes? We know that the root has 3 attributes, the
data, which is the value stored in it. The left pointer, the one that points to
the left subtree. And the right pointer, the one that points
to the right subtree. So if we swap the two pointers, this is what
we get. Basically here the left subtree became the
right subtree and vice versa. Is it enough? Can we say that we reversed the tree? Well, not yet, because if we take the left
subtree for example, its leftmost node must be swapped with its rightmost node, so now
we must reverse the subtrees, and this is what we get. So you can see that we have a kind of recursive
process, where we swap left pointer with right pointer, then we need to reverse the subtrees,
so we reverse their left and right pointer. Then we have to reverse the subtrees of the
subtrees. And so on until we arrive at the leaves; a
leaf is a tree node that has no children. So if we want to write the code of this solution
in Python, we first define the function, then we set the base case, the base case
is basically where the recursive function should stop. Here the base case can be when the passed
tree is null. Because we said that we must stop when we
arrive at a leaf, right? And a leaf has the value null for both pointers,
so when we call the recursive function with its left child, nothing happens because it's
null, it directly exits, and same thing with its right child which is also null, so
it will stop at the leaf. After setting the base case, we said that
we swap the left with the right pointer, here is it. And now, we said that we have to reverse the
left and right subtree, so we call the function again, once with the root of the left subtree,
and once with the root of the right subtree, and we're done. I know that this recursive process can be
confusing when you start studying trees, but you really have to work on it. If you aren't comfortable with trees and recursion,
try to solve more problems on it to get better. The time complexity of this solution is O(n),
where n is the number of nodes of the tree, because we are traversing and swapping left
and right. And the root will call the function on its
children, and they will do the same thing with their children, and so on until traversing
all the nodes once. And for the space complexity, it's O(h),
where h is the height of the tree, and the height of the tree represents the maximum
distance between the root and a leaf, for example in this tree, the height is 4. And it's not O(1) because don't forget that
it's a recursive function, so we have to consider the call stack. Here the maximum number of calls that can
be on the call stack at the same time is h, it's when we arrive to the bottom of the tree. Before ending this video, I want to tell you
that it's possible to solve this problem without recursion, because we can do it by traversing
the nodes level by level, and we swap the left and right pointer of each node, but because
there will be a problem in this course about how to traverse a tree by levels, I prefer
waiting until that problem, then I'll show you how to do it. I hope that you understood this lecture, check
the code, and see you in the next problem.
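The whole solution of this lecture fits in a few lines of Python (the Node class with data, left, and right attributes is an assumption about the tree representation, matching the three attributes described above):

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

def reverse_tree(root):
    if root is None:  # base case: we went below a leaf
        return
    # swap the left and right pointers of this node...
    root.left, root.right = root.right, root.left
    # ...then reverse both subtrees the same way
    reverse_tree(root.left)
    reverse_tree(root.right)
```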
9. 8- Longest substring without repeating characters: Welcome to this new video where we will see a new
coding problem, the "longest substring without repeating characters" problem. So what does the
problem say? It says, given a string str made of alphabetical letters only, create a function
that returns the length of the longest substring without repeating characters. I hope that you
tried to solve the problem, so let's start. First of all, we can divide our task
into 3 parts, we have longest, substring, without repeating characters. Longest means that
we are searching for the greatest length, so we will probably use a variable maxLength to keep
track of the maximum length found so far. Then we have substring, a substring is a part
of the string where characters are contiguous. And we have, without repeating characters, which
means that in our substring, each character must appear exactly once, not more. So the brute
force solution of this problem is quite obvious, we just have to check all the possible substrings,
and each time we find one that has no repeating characters and whose length is longer than
maxLength, we replace maxLength with its length, and it becomes the longest substring without
repeating characters we found until now. But to write this solution, we first need to create a
boolean function to check if a given string has no repeating characters or not, and for that, the
idea is to use a boolean array of 128 elements. Why 128? It's because the string has alphabetical
letters only, so covering the 128 ASCII characters is enough, where the uppercase letters go from
65 to 90, and lowercase letters from 97 to 122. And in this boolean array, each element visited[i]
represents whether we already visited the character whose ASCII code is i, and because at the
beginning we haven't visited any character yet, we obviously initialize them all to false.
Then after it, for each character in the string, if we already visited that character,
we return false, because it means that we have a repeating character. Else, we just
note that we visited the character by setting its element in the array to true. And to get the
equivalent element of a character in the array, we just get its ASCII code, by using the ord()
function in Python, and we got the right index. After checking all characters, if the function
didn't return false yet, it means that the string has no repeating characters, so we return
true, and we got our function that checks if a string has no repeating characters. Now,
we can create our "longest substring without repeating" function, and as said earlier, we
will iterate over all possible substrings. First we create and initialize maxLength to 0,
to 0 because we can't have a negative length. Then a for loop for values of i, we have
for i starts at 0 and stops at str length. Inside it, we want to traverse all
substrings that start at the index i, so we use another for loop where j starts at i
and stops at str length, j represents where the actual substring ends. And inside, we get
the substring between indexes i and j inclusive, and because the second parameter is exclusive, we
write j+1 to not miss the character at index j. And now we check the condition, if the
substring has no repeating characters, and its length is greater than maxLength, we
replace maxLength by our substring length, that's it. At the end, we just return maxLength,
which is the result we are searching for. But the problem with this solution is that it runs
on cubic time complexity, it's because we have two nested for loops, and for each substring,
we are calling the boolean function we made, which runs on O(n) because it traverses all the
string, which gives a total of O(n³). And for the space complexity, extracting a substring creates a
new one, and it's what we did in this instruction. And because that substring can have at most n
characters, then the space complexity is O(n). But we can do better than cubic time complexity,
so let's see another solution. Let's take for example this string, and let's try to build the
longest substring without repeating characters. If we add the first character, no problem, because
our substring still has no repeating characters. We add second character, same thing, no problem.
Third character, no problem, all characters are appearing once. Fourth character, no problem.
But if we try to add the fifth character, here we would have a problem, because we already
have the letter 'b' in our substring, so if we do, our substring would be no more considered as
without repeating characters, and it's not what we want. What if we can just skip that character, and
move to the next one? Well, we can't do it because the definition of a substring as I said at the
beginning is that characters must be contiguous in str, so if we skip a character, then it
won't be considered as a substring anymore. So the only thing that we can do to keep
our substring without repeating characters, is to remove characters from the left, so
we keep removing until the first occurrence of the repeated character, and in this
case, we remove the first two letters. And as you can see, our substring
is now without repeating characters, we continue. We add the letter 'e', no problem.
We add the letter 'g', same thing, no problem. Letter 'h', no problem. But for the letter
'e', we already have an 'e' in our substring, so we apply the same process, we remove all
the part that ends in the first occurrence. Now last letter, 'f', we can add it with no
problem. And now that we have no more letters, we finished our task, and we got that the maxLength
is 6, which is the length of this substring. Okay now we know how to find the
maxLength without checking all substrings, but how can we write the code of this solution?
In order to do that in Python, we first define our function, then we create and initialize maxLength
to 0. And now, as you've seen during the process with the pink arrow, the index where our substring
begins changes when we find a repeating character, so we need a pointer to know where our substring
is starting, let's name it start, and its initial value is 0. Before starting traversing the string,
we need to create an array of 128 elements, 128 because of the same reason I mentioned in
the first solution. And this array will help us to know if the actual character that we are
reading is already here in our substring or not, because we will store in it the index of the
last time we've seen each character. And to know if the actual character is present in our
substring, we just check if the last index where we've seen it is greater than or equal to the
index where our substring is starting. And the initial value of each element in that array
is -1, because we didn't see any letter yet. Now we can start traversing characters, so we
use a for loop where i starts at 0 and stops at str length. Inside it, we check if the character
at index i is already present in our substring, and as I told you before, we check if
the last index where we've seen it is greater than or equal to start, the
index where our substring is starting. And to get the last index, we take it from our
indexes array. So if it's the case, we said that we remove all the left part until that index,
which means that we need to start from the index that is just after it, this is why start becomes
equal to the last index where we've seen str[i], +1. The +1 is very important, because if
we don't write it, the first occurrence of str[i] would still be in the substring. Now
always in the loop, we need to update the last index we've seen the actual character,
and it becomes equal to i, the actual index. We also need to compare the actual length of the
substring to see if it's greater than maxLength, and for that, we replace maxLength by the
maximum between the actual value of maxLength, and the actual substring length. The actual
substring length is equal to i - start + 1, where i represents where the substring ends,
start represents where the substring starts, and we need the +1 to make both i and
start inclusive. After the loop, we just return maxLength, which holds the length of the
longest substring without repeating characters. And this solution runs in O(n) time complexity
because we are just traversing str once, which is a huge difference with the first method. And
for the space complexity, the array of 128 elements has a constant size whatever the input, so it's constant, O(1). And by the way, this
method always works because for each index, we get the longest substring without repeating
characters that ends at that index, so we can't miss the globally longest one, similar to the
reasoning we used for the maximum subarray problem. I hope that you understood this solution, check
the code below, and see you in the next problem.
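As a reference, here is a minimal Python sketch of both solutions described in this lecture; the function and variable names are my own choice, not necessarily those of the course code:

```python
def has_no_repeats(s):
    # visited[i] tells whether the character with ASCII code i was seen
    visited = [False] * 128
    for ch in s:
        if visited[ord(ch)]:
            return False
        visited[ord(ch)] = True
    return True

def longest_substring_brute_force(s):
    # O(n^3): try every substring and keep the longest valid one
    max_length = 0
    for i in range(len(s)):
        for j in range(i, len(s)):
            substring = s[i:j + 1]  # j+1 because the end index is exclusive
            if has_no_repeats(substring) and len(substring) > max_length:
                max_length = len(substring)
    return max_length

def longest_substring(s):
    # O(n) sliding window: last_index[c] holds the last index where the
    # character with ASCII code c was seen (-1 if never seen)
    max_length = 0
    start = 0  # index where the current substring begins
    last_index = [-1] * 128
    for i in range(len(s)):
        # if s[i] is already inside the current window, shrink it from the left
        if last_index[ord(s[i])] >= start:
            start = last_index[ord(s[i])] + 1
        last_index[ord(s[i])] = i
        max_length = max(max_length, i - start + 1)
    return max_length
```

Both functions return the same lengths; only the amount of work differs.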
10. 9- Reverse linked list: Welcome to this new video where we will see
a new coding problem, the "reverse linked list" problem. So what does the problem say?
It says, given a linked list of integers list, create a function that reverses it without
using an additional data structure. I hope that you tried to solve the problem, so let's
start. We have been told in this problem that we must not use any additional data structure,
because if it was the case, we could just create a new linked list, then we keep adding
elements from the beginning, and we get it reversed. But here, we aren't allowed, so
we have to work in-place. As I told you in the "reverse tree" problem, we usually have
two ways to solve problems where we modify a tree or a linked list for example, we can
solve it by modifying values, or by modifying links. If we try to solve this problem by
modifying values, then we'd have to keep swapping the symmetric nodes, like we do for an array.
And things would have been easier if it was a doubly linked list, because we would just
have to put the left pointer on the head, the right pointer on the tail, then we keep swapping
left.data with right.data, then we move left to next node and right to previous node, and
we repeat the process, while left is still before right. But here we have a singly linked
list, so to get the last element, we have to traverse the list, we swap, then to go
to the previous node, we have no previous pointer like in DLL, so we'd need to traverse
it again and stop at the element before the last, we swap, and so on. And if we want to write
this solution in Python, we first define the function, it takes the parameter list of type
LinkedList. Then we need to get the length of the list, so we create and initialize length
to 0, we create a temporary node to traverse the list, then while temp is not null, we
increment the length, and we move to the next node. Now that we got the length, we have
to deal with left and right pointers to swap values. With left pointer, there is no problem,
because we can just put it at the beginning of the list, then we move it forward each
time, because we can access the next node. But the problem comes with the right pointer,
because as I told you before, we don't have access to the previous node of a node like
in a DLL, so we will each time have to take it to the right position, then we swap, and
we repeat the process in the next iteration. And in order to do that, we need a first for
loop for the number of iterations, the number of swaps. And you should know that we stop
in the middle of the list, when left and right meet, so we will do a total of length divided
by 2 iterations. And inside this loop, we have to take right pointer to its position,
so we put it in the head, then we move it by length-i-1 positions to the right, so at
the first iteration we will go on the last node, then on the one before it in the next
iteration, and so on. The -1 is important to avoid going out of bounds. Now that we
got the right position of left and right pointers, we swap their values, and we move our left
pointer for the next iteration. Then in the next iteration, we repeat the process, we
take right to its position depending on the value of i, we swap, and we repeat. After
the loop, we have nothing to return because the work happens in-place. And in this solution
we have two nested for loops, and in this situation they give a time complexity of O(n²).
And for the space complexity, we didn't use any additional data structure or recursion
calls, so it's O(1). But having an O(n²) time complexity to reverse a linked list is
too much, so let's try to find a solution in O(n). And as I told you before,
we can either deal with values or with links, and in this solution we worked with values, and
we got O(n²) time complexity, now let's see if we can find a solution by working with
links. So let's take this linked list, and we try to reverse it just by modifying links
between nodes, and in order to help ourselves let's put the result we are searching for
at the bottom, here it is. First of all, we notice that the first element before reversing
is now pointing to null, then the second element before reversing is now pointing to the element
before it, and so on, and the last element before reversing is now considered as the
head of the list. So what we can do is that, to reverse the list, for each node, we make
it point to the element before it, for example here the first node has no preceding node
so it points to null, then the second node points to the node whose value is 5, then
the third node points to the node whose value is 3, and so on. But the problem is that it's
a singly linked list, so we don't have access to the previous node, this is why we must
always let a pointer on the previous node, named previous for example. And because the
first node has no previous node, the initial value of previous is null. So we've set our
pointers, and now we make the current node point to the previous node, which is null
here. And now we have to move to the next node to apply the same process, but here we
have a problem, because we changed the current node next pointer, then we lost our way to
the next node, we no more can jump to it. And in order to counter this issue, before
changing the current node, we have to put a pointer on the next node to not lose our
way, so in total we will have 3 pointers, one on the current node, one on the previous
node, and one on the next node. Let's try it on this linked list now. We've set the
first node to previous, which is null, so now we must move to the next node. So previous
becomes current, and current becomes next. Now we repeat the process, we set next on
the next node of current, we make current point on previous, we set previous to current,
and we set current to next. Current isn't null yet, we repeat the process, we set next
to the next node, we make current point on previous, we set previous to current, and
we set current to next. Current isn't null yet, we repeat, we set next to the next node
of current, we make current point on previous, we set previous to current, and we set current
to next. Current isn't null yet, we repeat, we set next to the next element, we make current
point to previous, we set previous to current, and we set current to next. Now current is
null, so we finished our process, and the last thing we have to do is to update the
head, so head becomes equal to previous, which is on the last element of the initial linked
list, it becomes now the head, and we got our final result. Now let's see the Python
code, and you will see that it will be exactly what I described during the process. We first
define the function, it takes list as parameter, of type LinkedList. Then we said that initial
value of previous is null, and current starts at the head of the list, at the first element.
Then if you remember, we stopped when current became null, so while current is not null,
we create next and we set it on the next node, to not lose the way, then we make current
point to previous, so we write current.next is equal to previous, then we set previous
to current, and we set current to next, to continue traversing the list, and the process
gets repeated. After the loop, we said that we update the head, and we set it to previous,
and we finished. And for the time complexity of this function, we are just traversing the
linked list once, so it's O(n). And for the space complexity, we aren't using any additional
data structure or recursive calls, so it's O(1). Before ending, I want to show you a
third solution, which in reality is the one we've just seen, but recursively. Why recursion?
Because, for example, if we take our last example again and suppose that, in some way we don't
know yet, we could directly reverse the remaining part after the first node. And now, we just have to modify links
for the first node, that must become the last node in the reversed list. And in order to
do that, we just make the next node point to it, so we write node.next.next becomes
equal to node, because we want to make the next node point to it, so we have node, we
have node.next, it's this arrow, and we have node.next.next, it's this arrow. Then because
node is now the last element of the list, we make it point to null, so we write, node.next
= null, None in Python. Okay but where is the recursivity in the story? The recursive
part is that we assumed that the remaining part is already reversed, but in reality, in order to
reverse it, we have to call the same function on it, and so on. Before continuing
this example, I want to first show you the code, then we'll see an execution example
to understand better. So we have our function reverseNodes(), it takes the head as parameter,
and not the linked list itself, it's taking its head. And inside, we need to set the base
case, the base case is when there is only one node or no nodes at all, because a list
that has 1 node or no nodes at all is already reversed, so we directly return node. Now
we need to reverse the remaining part, and because we want to store it to return it later,
we store it in a variable named reversed for example, where we call the function on node.next,
which is the beginning of the remaining part that we want to reverse. After reversing the
remaining part, as said earlier, we set node.next.next to node, and we set node.next to null. After
it, we just return reversed, because it's the result that the previous recursive call
is waiting for. Now we need another small function that takes the linked list itself
this time as parameter, and it replaces its head by the head returned by reverseNodes()
function, because we have Node type, and LinkedList type, they are different. I know that things
are quite confusing like this, so let's see an example. We have this linked list, and
let's start. The first call is on the head of the list, and it's neither null, nor pointing
to null, so we continue. Now we call the same function to get the remaining list reversed,
so the actual call is pending and waiting for the result returned. So the functions
starts again for the second node, it's neither null nor pointing to null, we continue. And
now same thing, it will call the function for its next node. Third node is neither null
nor pointing to null, so it calls the function on the next node. Fourth node is neither null
nor pointing to null, so it calls the function on the next node too. And here, fifth node
is pointing to null, so we return it as a result, and we backtrack. Now in the fourth
node, we got the result of reversed, we can continue, and we set node.next.next to node,
and we set node.next to null. Same thing, we return reversed, which is node whose value
is 7, and we backtrack. Now third node got the result of reversed, so we set node.next.next
to node, and we set node.next to null. We return the same reversed, which is the head
of the reversed list, and we backtrack. We set node.next.next to node, and we set node.next
to null, we return reversed, and we backtrack. We went back to the first node, we set node.next.next
to node, and node.next to null, and we return reversed. It was the first call ever, so we
reversed the whole list, and the node whose value is 7 is now the head as you can see,
because when we call this function from that second function, the head gets updated by
the result returned, which is node whose value is 7 in this case. For the time complexity,
same thing as the previous solution, we are just traversing the list once, so it's O(n). But
for the space complexity, here it's O(n), because it's a recursive function, so we don't
forget to count the maximum call stack size, which is n here, the number of elements of
the list, so our space complexity is O(n). I hope that you understood the solutions we've
seen, check the code below, and see you in the next problem.
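To recap, here is a self-contained Python sketch of the iterative and recursive link-reversal solutions; the minimal Node class and the build/to_list helpers are just for illustration, the course code uses its own Node and LinkedList classes:

```python
class Node:
    # minimal singly linked list node, just for illustration
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def reverse_list(head):
    # iterative: O(n) time, O(1) space
    previous = None
    current = head
    while current is not None:
        next_node = current.next  # save the rest before breaking the link
        current.next = previous   # make current point to the previous node
        previous = current
        current = next_node
    return previous  # new head: the last node of the original list

def reverse_nodes(node):
    # recursive: O(n) time, O(n) space because of the call stack
    if node is None or node.next is None:
        return node  # a list of 0 or 1 nodes is already reversed
    reversed_head = reverse_nodes(node.next)  # reverse the remaining part
    node.next.next = node  # make the next node point back to node
    node.next = None       # node becomes the last node
    return reversed_head

# small helpers to build and read lists, for demonstration only
def build(values):
    head = None
    for value in reversed(values):
        head = Node(value, head)
    return head

def to_list(head):
    out = []
    while head is not None:
        out.append(head.data)
        head = head.next
    return out
```

For example, `to_list(reverse_list(build([3, 5, 2, 7])))` gives `[7, 2, 5, 3]`.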
11. 10- Peak finding: Welcome to this video where we will analyze
two solutions of the "peak finding" problem. What does the problem say? The problem says:
Given a non-empty array of integers arr, create a function that returns the index of a
peak element. We consider an element as a peak if it's greater than or equal to its neighbors, the
next and previous element (assume that arr[-1] and arr[n] are equal to -∞). And if there are
multiple peaks in arr, just return the index of one of them. For example, if we have the array
[6, 4, 5, 8, 3, 7], we have 3 valid outputs, we have the index 0, because 6 is
greater than its left neighbor, which is considered -inf because it doesn't exist,
and its right neighbor, 6 is greater than 4. The second valid output is 3, because arr[3]
is greater than its left and right neighbors. And the third valid output is 5, because arr[5] is
greater than its left neighbor, and greater than its right neighbor, which is considered -inf
because it doesn't exist. The first solution is obvious, we just traverse all the elements
while searching for a peak, and as soon as we find one, we return its index. Okay but how can we
identify a peak? We said that an element is a peak if it's greater than or equal to its neighbors,
so arr[i] >= arr[i-1] and arr[i] >= arr[i+1]. But this condition doesn't handle edge
cases, the first and the last element. To do so, we write: it's the first element or it's
greater than or equal to the previous element, and, it's the last element or it's
greater than or equal to the next element. Now we traverse elements of arr, and if one of them
respects this condition, we return i. We can be sure that we will find a peak, because a non-empty
array always contains a peak, this is why we don't need to return something after the loop, we
can be sure that some iteration will return i. For the time complexity, here we are
just traversing elements of arr while doing O(1) operations inside the loop, the time
complexity is O(n). And for the space complexity, we are not using input size-related extra
space, so it's constant, O(1). What if I tell you now that we can solve this problem in O(logn)
time? We can do so by using divide and conquer, the technique we used in binary search. The idea
of this second solution is that we directly go to the mid of the array, and if the element at
the middle is smaller than the next element, it means that we have an increasing sequence of
elements, so we search in the right half only. Because in an increasing series of elements,
we always have a peak, let me tell you why. We found the beginning of an increasing series
of elements, now we have 3 scenarios. The first one is that the series decreases at some point,
in that case, here is the peak. Second scenario, the sequence becomes stable, in that case, here
is the peak, it respects the condition. Third scenario, it keeps increasing until the end of the
array, in that case, the peak is the last element, because it's greater than or equal to both its
neighbors. Remember, arr[n] is -infinity. Okay but what if the element at mid
is not smaller than the next element? In that case, it means that we either have an
increasing sequence of elements from the right that starts from the index mid+1, or a stable
sequence of elements. For the increasing sequence, we've seen that we always have a peak. And for
the stable sequence, we also always have a peak, here are the 3 scenarios. When the
stable sequence increases at some point, then we have an increasing sequence, we know
that it contains a peak. When it remains stable, then each element in the stable sequence is
a peak, because it's equal to its neighbors. And when it decreases, then the first element of the decreasing sequence
is a peak. So when arr[mid] is not smaller than arr[mid+1], we're sure that we will find a peak in
that sequence, so we search in the left part only. In code, same as in binary search, we use
two pointers left and right that represent the boundaries of the part where we're
searching for a peak. At the beginning, we're searching in the whole array, so left starts
at the first index and right at the last one. Now while left and right didn't meet yet, we calculate
the middle index, which is (left+right)/2, then if arr[mid] < arr[mid+1], then there is a peak in
the part that starts at mid+1, this is why left becomes equal to mid+1. Else, it means that there
is a peak in the part that ends at index mid, this is why right becomes equal to mid. After the
loop, left and right are at the same position, the position of a peak, we return one of them, we can
return left for example. For the time complexity, same as in binary search, we are dividing the
input size by 2 at each iteration, and we stop when we have only 1 element left, so the number
of iterations is the number of divisions by 2 to go from n to 1, which is log_2(n). And inside
the loop, we are doing O(1) operations only, so the time complexity is O(logn), where n is
the length of arr. And for the space complexity, we are not using input size-related extra space,
so the space complexity is constant, O(1). We reached the end of this video, in this one
we've seen an interesting solution that uses decrease and conquer to solve the "peak
finding" problem in O(logn) time complexity. I wanted to include this problem to show you
that knowing techniques like divide and conquer, dynamic programming, greedy algorithms, can
help you find a solution that has a better time complexity. I hope that you understood
both solutions, and see you in the next one!
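Both solutions from this lecture can be sketched in Python as follows; the function names are mine:

```python
def find_peak_linear(arr):
    # O(n): scan for an element greater than or equal to both neighbors
    # (a missing neighbor counts as -infinity, so the edge checks pass)
    for i in range(len(arr)):
        if ((i == 0 or arr[i] >= arr[i - 1]) and
                (i == len(arr) - 1 or arr[i] >= arr[i + 1])):
            return i

def find_peak(arr):
    # O(log n) decrease and conquer, same shape as binary search
    left, right = 0, len(arr) - 1
    while left < right:
        mid = (left + right) // 2
        if arr[mid] < arr[mid + 1]:
            left = mid + 1  # increasing part: a peak exists on the right
        else:
            right = mid     # a peak exists at mid or on the left
    return left  # left and right met at a peak
```

On the example [6, 4, 5, 8, 3, 7], the linear version returns 0 while the logarithmic one may return a different valid index, which the problem allows.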
12. 11- Palindrome linked list: Welcome to this new video where we will see
the "palindrome linked list" problem. So what does the problem say? It says, given a linked
list of integers list, create a boolean function that checks if it's a palindrome linked list
in constant space complexity. I hope that you tried to solve the problem, so let's start. First of all, in the problem description, it's mentioned that our solution must be in
constant space complexity, because if it wasn't the case, we could just copy our linked list
into another one by inserting elements at the beginning to get it reversed, then we
just keep comparing element by element, and they must all be the same to know if the linked
list is palindrome. But here, we aren't allowed, so we must find another way. You have to know
that to check if a list, or an array, or a string, or whatever, is palindrome, we have
to compare this pair of elements, then this one, then this one, and so on until the middle,
and they must all be equal. But here, it's a singly linked list, so we can't go
backwards like we did here with the right pointer. But instead, we can always search
for the position of right pointer at each iteration, then we compare. In other words,
for each iteration, we put our right pointer at the beginning, then we use a loop to take
it to the right position of the actual pair, and we compare the node at left, with the
node at right, and if they're different, then the list isn't palindrome, so we return false. Remember that we did something similar in the "reverse linked list" lecture, so we take
the same function, and we change the function name. But here we don't want to swap values,
we want to compare, so instead of swapping, we put a condition: if left.data isn't equal
to right.data, we return false. After the loop, if the function didn't return false
yet, it means that our linked list is palindrome, so we return true. And this solution obviously
runs in O(n²) because of the nested for loops. And for the space complexity, we are using
simple variables only, so it's constant, as required in this problem. But O(n²) time
complexity to check if a linked list is palindrome is too much, so let's see another solution.
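Before that, for reference, the quadratic approach just described can be sketched in Python like this; the minimal Node class and build helper are just for illustration, the course function takes a LinkedList object and starts from its head:

```python
class Node:
    # minimal singly linked list node, just for illustration
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def is_palindrome_quadratic(head):
    # O(n^2) time, O(1) space: re-walk from the head to reach the mirror node
    length = 0
    temp = head
    while temp is not None:  # first pass: count the nodes
        length += 1
        temp = temp.next
    left = head
    for i in range(length // 2):  # one comparison per pair, until the middle
        right = head
        for _ in range(length - i - 1):  # take right to the mirror position
            right = right.next
        if left.data != right.data:
            return False  # a different pair: not a palindrome
        left = left.next
    return True

def build(values):
    # helper to build a list from a Python list, for demonstration only
    head = None
    for value in reversed(values):
        head = Node(value, head)
    return head
```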
In reality, what we are doing to know if a linked list is palindrome, is that we are
comparing the left half with the right half backwards, so what if we search for the beginning
of the right half, then we just reverse the right half, and we compare them element by
element, it's a totally valid solution. So our solution will have 3 parts, find the beginning
of the right half, reverse the right half, and compare the two halves. In order to do
the first step, we don't have the length, so we will use the slow and fast pointer technique.
So we will have slow pointer, that moves by 1 node at a time, and fast pointer, that moves
by 2 nodes at a time, and we keep moving them until fast arrives to the end of the list,
so at that moment, slow will be at the middle of the list. After it, second step, we need
to reverse the right half, in other words, the part that starts from slow pointer. And
in order to do that, we will use the function we made in "reverse linked list" lecture,
but here, the parameter is of type Node, and not of type LinkedList, and we can name it
head for example, instead of list, it's more appropriate. Then at the end of the function,
we return the new head which is the node pointed to by previous if you remember. Now third
step, we need to compare the two halves, and in order to do that, we put a pointer on head,
then while slow is not null, we compare head.data and slow.data, and if they're not equal, we
return false. Else, we just move head and slow by one position to the right and we continue.
If slow pointer reaches null, it means that there are no different pairs, so we return
true. And if we want to write this solution in Python, we first define the function, it
takes the parameter list of type LinkedList. Now first step, we put slow and fast on the
head, then while fast and fast.next are not null, we move slow by one position and fast
by two positions, so slow becomes equal to slow.next, and fast becomes equal to fast.next.next. Now second step, we have the right position of slow, so we can reverse the right half,
and to do that, we replace slow by reverseList(slow), the function that reverses a list we talked
about earlier. Now third step, we compare the two halves, so we put a pointer on the
head, then while slow is not null, if slow.data is not equal to head.data, we return false,
because the list is not palindrome. Else, we just move slow and head to the right to
move to the next pair of nodes, and we continue. At the end, if the function didn't return
false yet, then it's a palindrome linked list, we return true. For the time complexity, searching
for the beginning of the right half requires traversing the list once only, so it's O(n).
Reversing the right half is also O(n), because we traverse the list once. Third step, comparing
left and right halves requires traversing the list once only, so it's O(n). And repeating
an O(n) task 3 times gives O(n) time complexity. For the space complexity, here we used simple
variables only, so it's constant, O(1), and it respects the constraint imposed by the
problem. I hope that you understood this lecture, check the code below, and see you in the next
problem.
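The full O(n) solution described in this lecture might look like this in Python, again with a minimal Node class and a build helper for illustration; the course version takes a LinkedList and reads its head:

```python
class Node:
    # minimal singly linked list node, just for illustration
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def reverse_list(head):
    # the reversal function from the "reverse linked list" lecture
    previous = None
    current = head
    while current is not None:
        next_node = current.next
        current.next = previous
        previous = current
        current = next_node
    return previous

def is_palindrome(head):
    # step 1: slow/fast pointers to find the beginning of the right half
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
    # step 2: reverse the right half
    slow = reverse_list(slow)
    # step 3: compare the two halves node by node
    current = head
    while slow is not None:
        if slow.data != current.data:
            return False
        slow = slow.next
        current = current.next
    return True

def build(values):
    # helper to build a list from a Python list, for demonstration only
    head = None
    for value in reversed(values):
        head = Node(value, head)
    return head
```

Each of the three steps traverses the list at most once, which is where the O(n) total comes from.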
13. 12- Longest possible palindrome: Welcome to this new Inside Code video where
we will discuss a new coding problem, the "longest palindrome" problem. So what does the longest palindrome problem
say? It says, given a string str made of alphabetical
letters only, create a function that returns the length of the longest palindrome string
that can be made from letters of str. A palindrome string is a string that we can
read from the left or from the right by the same way, like the word 'racecar' for example. And here are four input output examples below, so pause the video if you want to try to solve the problem. Now let's start solving this problem. First, what would be the brute force solution? The brute force solution would be to generate
all the possible strings that we can make with those letters, then to start checking
one by one if it's palindrome, while saving the length of the longest one you found. But this method is very inefficient, because
there is a huge number of strings that we can make with n letters, and generating them
all would take a lot of time. So, let's see a better one. But before, you have to know something, in
this problem we are asked to find the length, so we are not asked to make the string itself,
we just need its length, don't bother yourself trying to find the string itself, you are
asked to get its length only, focus on it, because you don't always need to build something
to get one of its properties. So let's analyze this example. One of the longest palindromes we can make
is this one, it has 13 letters. And we can see that in a palindrome, we have
the same letters from the left and from the right, for example here we have the letter
'e' from both sides, we also have here the letter 'd' from both sides. So we can deduce that to extend a palindrome
string, we just have to add the same letter from both sides, for example to extend this
palindrome string, if we had let's say two letters 'f' in our initial input, we could
add one 'f' from the left, and one from the right, and we would get a longer palindrome
string. Now let's see how we built this longest
palindrome from our input. Here are our input letters, and we have an empty
string now. Remember that this work isn't necessary in
the algorithm, we said that we just need the length, I'm showing this just to explain
how we arrived at the solution that I will show you just after. So, let's take these 2 letters 'b' and we
put one from the left and one from the right. Now let's take these 4 letters 'c' and we
put 2 from the left and 2 from the right, and the string is still a palindrome. Let's also take these 2 letters 'd' and we
put one from the left and one from the right. And now, let's take these 4 letters 'e' and
we put 2 from the left and two from the right. Now we used all our pairs of letters to make
this palindrome string, we can't find another pair because the remaining letters in the
input are all different. But, isn't there a way to extend the palindrome
string more? Yes there is, it's by adding a center letter,
a letter that will be in the center, and it won't break the palindrome symmetry because
the center letter is symmetric to itself, so we can add it safely to make a longer palindrome. Now if we wanted to describe the algorithm
that I used to build that palindrome, we could say that I first counted the occurrences of 'a',
I found that we have only one occurrence, so we can't make a pair of letters, this is
why I didn't use it at all. Then occurrences of 'b', we have 3, so we can make only one pair with them by taking 2 letters of 'b'. Then occurrences of 'c', we have 4, we can
take them all to make 2 pairs of letters. Then occurrences of 'd', we have 2, so we
can take them all to make one pair. And finally, occurrences of 'e', we have 4,
we take them all to make 2 pairs of letters. At the end, because we have characters left,
we use one of them as the center letter, and we got our length, 0 + 2 + 4 + 2 + 4 + 1,
is equal to 13. So the algorithm is, we want to store the
occurrences of each letter, because we said that the input is made of alphabetical letters
only, and for that, we store them in an array of 128 elements all initialized to 0, where
the element at index i represents the occurrences of the character whose ASCII code
is i. As a reminder, the uppercase letters ASCII range
goes from 65 to 90, and lowercase letters from 97 to 122, so for example if we find
an uppercase 'A', which has 65 as ASCII code, we will increment the element at index 65, because it represents the number of occurrences of uppercase 'A'. So, we create that array, then we iterate
over letters of str, and for each letter, we increment the element at index ASCII code
of that letter. Now that we have the occurrences, we create
and initialize our length to 0, then we iterate over the occurrences, and for each letter's
occurrence count, we want to make the maximum number of pairs of that letter, so we must take an even
number to have an equal number from the left and from the right, and for that, we take
the greatest even number that is smaller than or equal to the number of occurrences. So if the number of occurrences is even, we
take it all, like we've seen with the letter 'e' that had 4 occurrences in the example
before, else if it's odd, we take number of occurrences minus one, like we've seen with
the letter 'b' that had 3 occurrences in the example before and we took 2 of them. After the loop, we said that if there are
remaining letters, we take one of them as the center letter, and to know if there are remaining letters in the input, we just have to compare the lengths. So if the length of our palindrome is smaller
than the length of the input, then we didn't use all the letters, so we add 1. And we got the length of the longest palindrome that we can make with letters of str, so we just return it. To write the solution in Python, we first
define the function longestPalindrome() that takes the string str as parameter, then we
create the array of 128 elements all initialized to 0. Then for each letter in str, we increment
the element at the ASCII code of that letter, we get the ASCII code by using the ord() function, and note that we could also use a hash map instead of an array. After it, we create and initialize length
to 0, then for each occurrence count occu in the array, if occu is even, we add occu itself,
because we take all the letters, else, we add occu-1, because we have one unused letter that couldn't form another pair. After the loop, we compare the lengths, and
if the length of the palindrome is smaller than the length of str, we add one, the center
letter. Now we just need to return the length,
and we finished our task. For the time complexity of this algorithm,
it's O(n) where n represents the length of str, because we are just iterating over the
n characters of str, and the length of occurrences array is constant, 128, so it doesn't increase
the complexity. And for the space complexity, it's O(1), because
even if we use an extra array of 128 elements, it is still considered as constant, because
the concept of constant space complexity is not that we don't use extra space, it just
means that the amount of extra space we use is constant, in other words, it doesn't get
affected by the input size, which is the case here because we will always have an array
of 128 elements no matter the input length. I hope that you understood this solution,
and see you in a next video!
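For reference, here is a sketch in Python of the solution described in this lecture. It follows the steps above; the only liberty taken is naming the parameter s instead of str, to avoid shadowing Python's built-in str:

```python
def longestPalindrome(s):
    # Count occurrences of each character, indexed by ASCII code
    occurrences = [0] * 128
    for letter in s:
        occurrences[ord(letter)] += 1

    length = 0
    for occu in occurrences:
        if occu % 2 == 0:
            length += occu      # even count: all letters can form pairs
        else:
            length += occu - 1  # odd count: leave one letter out

    # If some letters remain unused, one of them can be the center letter
    if length < len(s):
        length += 1
    return length
```

For the example above (one 'a', three 'b', four 'c', two 'd', four 'e'), the function returns 0 + 2 + 4 + 2 + 4 + 1, which is 13.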
14. 13- Get substring index: Welcome to this new video where we will see
the "get substring index" problem. So what does the problem say? It says, given two strings str1 and str2,
create a function that returns the first index where we can find str2 in str1. If we cannot find str2 in str1, the function
must return -1. I hope that you tried to solve the problem,
so let's start. The brute force solution of this problem is
quite obvious, we just have to traverse the characters of str1 and check if str2 starts at each index by comparing the next m characters, where m is the length of str2. For example, if we have str1 being equal to
"aabbaaabab" and str2 being equal to "aaba", we start at the first index of both, and they
match, so we continue, second characters match, we continue, third characters match, we continue,
but for the last one it doesn't match, so we have to start again from the second character
of str1 now. First characters match, we continue, but second
ones don't, so we start again by incrementing i. Here the first ones don't match, and same
thing for the next i. Now they match, we continue, second characters
match, we continue, but third ones don't, we start again. First ones match, same thing for the second
ones, third ones, and last ones, so we finally found str2 in str1, we return i. In brief, with the brute force solution, for
each index of str1, we try to find str2 by starting from there, and if we fail, we have
to start again from the next index, let's see that in code. For the code of this solution, we first define
the function, it takes str1 and str2 as parameters, and we store their lengths in n and m respectively. After it, we said that we have to iterate
over indexes of str1, but we have to make sure that the number of remaining characters
is at least equal to m, to avoid going out of bounds, and for that, we stop at n-m+1,
the last index from where we can potentially find str2. Now inside that loop, we first suppose that
we found str2, by setting a boolean variable found to true, then we compare the next m
characters starting from i. And if str1[i+j] isn't equal to str2[j], it
means that we couldn't find str2 in the index i, so we set found to false, and we break
the loop. After the loop, if found remains true, it
means that we found str2 at the index i, so we return i. In the end, if we checked all indexes and
we didn't find str2, then it means that str2 doesn't exist in str1, we return -1, as mentioned
in the problem description. For the time complexity of this solution,
we are traversing str1, and for each index, we are re-traversing str2 to check if we can
find it there, we can say that we are traversing str2 n times, and its length is m, so we get
a time complexity of O(nm). And for the space complexity, we are using
simple variables only, so it's constant, O(1). The problem with this solution is that when
failing to find str2 in a certain index, we have to always start again from the beginning,
this is why we end up with a time complexity of O(nm). So what we want to do in this second solution
is that, when finding characters that don't match, we try to not start from the beginning
again. Let's suppose that we have this input, and
all this part matches, but when we arrive here, you can see that characters don't match. But instead of starting everything again from
the second index of i, we can just continue from here, and j directly goes to the index
4, without comparing "abcd" again, because we already know that the previous 4 characters
match. Okay, but how did we know that? We knew that because in the part that matches,
the length of the longest proper prefix that is also a proper suffix is 4, it's "abcd",
so we know that the last 4 characters of that part are equal to the 4 first ones, this is
why we can just skip them. But, in order to do that, you can see that
we need to always know the length of the longest proper prefix that is also a proper suffix,
at each index, so the best way to do that, is that before starting, we do some preprocessing,
by creating an array, where each element represents the length of the longest proper prefix that
is also a proper suffix of the substring of str2 that ends at that index. Let's build that array for this str2, the
first index will always be 0, because the only proper prefix of a string of length 1
is the empty string, the length is 0. Second index, same thing, we have no proper
prefix that is also a proper suffix except for the empty string, so we put 0. Same thing for the two next indices, but for
the next one, we have the string "a", because the substring starts and ends with "a", so
we can put 1, the length of "a". Next index, we have "ab", we put 2. Next index, we have"a", we put 1. Then, we have "ab", we put 2. Then, we have "abc", we put 3. Next index, we have "abcd", we put 4. Next index, we have empty string only, we
put 0. And last index, it's "a", we put 1. Let's see another example, this one, please
pause the video and try to do it by yourself. The first index holds 0, as always, then we
have "a", we put 1, then "aa", we put 2, they are overlapping by the way, then empty string,
we put 0, then "a", we put 1, then "aa", we put 2, then empty string, we put 0, then "a",
we put 1, then "aa", we put 2, then "aaa", we put 3, then also "aaa", we put 3, then
"aaab", we put 4, then "aaaba", we put 5, then empty string, we put 0, and now we have
"a", we put 1, and we finished filling the LPS array, LPS stands for longest prefix suffix. Okay but now, we filled the array manually, but how can we make an algorithm that fills it in linear time? For that, we need two variables, one to traverse
the array, and one to keep track of the length of the current prefix suffix, i starts at 1
because we said that the first index is always 0, and length starts at 0 because we didn't
make a prefix suffix yet. Let's start, str[i] is equal to str[length],
so our prefix suffix becomes longer, we add 1 to length, we put length in lpsArr[i], and
we increment i to move to the next index. Same thing here, str[i] is equal to str[length], we increment length, we put length in lpsArr[i], and we increment i. But now str[i] and str[length] don't match,
so length becomes lpsArr[length-1], to be able to compare these two characters, because
if they match, then we'd have a prefix suffix of length 2. But it's not the case, so we set length to
lpsArr[length-1] again. Still no match, and length became 0, so the
longest prefix suffix is the empty string, we put 0 in lpsArr[i], and we increment i,
we have to start again, unfortunately. Now str[i] and str[length] match, we increment
length, we put length in lpsArr[i], and we increment i. Same thing here again. But now, they don't match, so length becomes
lpsArr[length-1], which is 1 here. They don't match, length becomes lpsArr[length-1],
which is 0. And now they don't match, and length is already
0, so we just put 0 and we increment i. Now they match, we increment length, we put
it in lpsArr[i], and we increment i. Same thing here, and same thing here again. But now, they don't match, so we set length
to lpsArr[length-1]. Now they match again, so "aaab" is a prefix
suffix, we increment length, we put length in lpsArr[i], and we increment i. They match again, same thing, and same thing
again. Now they don't match, length becomes lpsArr[length-1],
it becomes 1. Here they don't match again, length becomes lpsArr[length-1] again, it becomes 0. And here they don't match, and length is 0,
so we put 0 in lpsArr[i] and we increment i. Now they match, we increment length, we put
length in lpsArr[i], and we increment i. Now we finished filling the lps array, and
we got the same result. The code of the function that returns lpsArr
is exactly what we described now, it takes the string, then it creates an array, a variable
length initialized to 0, and a variable i initialized to 1. Now while i didn't reach the end of the string,
if they match, we increment length, we put length in lpsArr[i], and we increment i. Else, we check if length is greater than 0, and if it's the case, then length becomes lpsArr[length-1]. Else, when length becomes 0, we have no choice,
we have to put 0 in lpsArr[i], and we increment i. After the loop, we just return lpsArr. Now we learned how to get the lps array of
str2, but how can we use it for our initial problem, which is finding the index of str2
in str1? For that, let's take this input. First step, we have to make the lps array
of str2, so we create an array of the same size, and the first cell is always 0. Second index, we don't have a prefix suffix
except the empty string, so we put 0, and same thing for the next two indices. But then, we can have "a", so we put 1. Next index, we have "ab", we put 2. Then, empty string, 0, and same thing for
the one that comes after it. Next index, we have "a", we put 1, then we
have "ab", we put 2, and then we have "abc", we put 3. Last index, we have the empty string, we put 0. Now that we got the lps array, we put i and
j at the beginning of str1 and str2 respectively, and we start comparing. Characters match at index 0, 1, 2, 3, 4, and 5. But at index 6, they don't, and now comes
the difference with the first method, here we won't directly start again from the beginning,
but instead, we put j to lpsArr[j-1], to get the index from where we can start again in
str2 to avoid losing our progress. And here it's 2, so j becomes 2, and we continue
comparing. But they also don't match, so we set again
j to lpsArr[j-1], it becomes 0. They also don't match here, and j is 0 now
so we just increment i to try again from the next index. Now str1[i] and str2[j] match at index 0,
1, 2, 3, 4, 5, 6, 7, 8, 9, and 10. But at the index 11 of str2, they don't match,
but we don't go back to the beginning of str2, we just set j to lpsArr[j-1], so it becomes
3, and i doesn't go back. And now str1[i] and str2[j] match again, so
we will be able to continue from there without going back to 0, here they match at index
3, 4, 5, 6, 7, 8, 9, 10, and 11. Now j becomes equal to m, it means that we
found str2 in str1, so we return i-j, which is the index from where str2 starts in str1,
and we've been able to solve this problem by using the KMP (Knuth-Morris-Pratt) algorithm,
the one we've seen now. We move to the code of the main solution function
now, we define the function, it takes str1 and str2 as parameters, and it stores their
lengths in n and m respectively. Now, we have 3 early exit conditions, we're
not obliged to put them, but they will cover cases where we can directly get the result. For example, when str2 is longer than str1,
in other words, when m is greater than n, then it's impossible to find str2 in str1,
we directly return -1, because a string cannot contain another string that is longer. Second early exit condition, when n
and m are equal, in that case, the result will depend on if str1 and str2 are equal,
so we return 0 if they're equal, and -1 otherwise. 0 because it means that str2 starts at index
0 of str1. Last one, when str2 is the empty string, we
can directly find it at index 0, so we return 0. Now we can start, so we get lpsArr of str2
by using the function we made earlier, and we also need i and j, they both start at 0. Then we can start comparing, and we stop when
a pointer reaches the end of its string, so we write, while i is smaller than n and j is
smaller than m. Inside the function, if str1[i] and str2[j]
are equal, we just increment both i and j, like we were doing in the last example. Else if they don't match, we check if j is
greater than 0, and if it's the case, we said that we set it to lpsArr[j-1], to go to the index
from where we continue comparing. Else if j is already 0, it means that we can't
start from the current value of i, so we increment
m, then it means that we weren't able to find the end of str2, in other words, we didn't
find it, so we return -1. Else, in the case where we find str2, we return
i-j, as explained earlier, so our result will depend on what condition broke the while loop. For the time complexity of this solution,
we have two steps, the first one is getting the lps array of str2, and for that, we have
to traverse str2, so it costs m. Second step, we have to traverse str1 to search
for str2, and the length of str1 is n, so it costs n. In total, we have n+m, so the time complexity
is O(n+m). But, don't forget that m cannot be greater
than n, otherwise, the function would exit directly, so in the worst case, m is equal
to n, which means that we get n+n, which is 2n, we remove the constant, and we get a time
complexity of O(n). And for the space complexity, we are using
an array of m elements, the lps array, so the space complexity is O(m). I hope that you understood the KMP algorithm,
check the code below, and see you in the next problem!
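Here is a Python sketch of the two functions described in this lecture, following its steps: first the helper that builds the LPS array of str2, then the main KMP search with its three early exits:

```python
def getLPSArray(s):
    # lpsArr[i] = length of the longest proper prefix of s[:i+1]
    # that is also a proper suffix of it
    lpsArr = [0] * len(s)
    length = 0  # length of the current prefix suffix
    i = 1       # index 0 always holds 0
    while i < len(s):
        if s[i] == s[length]:
            length += 1
            lpsArr[i] = length
            i += 1
        elif length > 0:
            length = lpsArr[length - 1]  # fall back to a shorter prefix suffix
        else:
            lpsArr[i] = 0
            i += 1
    return lpsArr

def getSubstringIndex(str1, str2):
    n, m = len(str1), len(str2)
    # Early exit conditions
    if m > n:
        return -1
    if m == n:
        return 0 if str1 == str2 else -1
    if m == 0:
        return 0
    lpsArr = getLPSArray(str2)
    i = j = 0
    while i < n and j < m:
        if str1[i] == str2[j]:
            i += 1
            j += 1
        elif j > 0:
            j = lpsArr[j - 1]  # resume without losing the matched prefix
        else:
            i += 1
    return i - j if j == m else -1
```

On the brute-force example, getSubstringIndex("aabbaaabab", "aaba") returns 5, the first index where str2 starts in str1.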
15. 14- Tree breadth first search: Welcome to this new video where we will learn
how to traverse a binary tree in level order, called breadth first search. So given a binary tree root, create a function
that prints its nodes in level order, for example if we have this tree, the function
has to print the root, then this level, then this level, and this level. You can see the
output below. I hope that you tried to solve the problem,
so let's start. An important detail that we have to take into account
is that in a binary tree, a node can only be accessed from its parent, so for example
to be able to access this node, we need to be on this one, its parent. Okay but why is this detail important in our
case? It's because we will traverse the tree level
by level, which means that once we pass a node, we won't go back to it later, so we
won't be able to access its children. This is why the idea of breadth first search
is that for each node, we print its value, and we store its children so that we can have
access to them later, then same thing when we reach a child later, we store its children
and we continue, until we get null elements. To better understand, I will show you the
process on this binary tree, and on the right you can see the data structure where we will
store our elements, and at the bottom you can see the output. So at the beginning we have the root, we take
it, we print its value, we store its left child, and we store its right child. We still have elements, so we continue. We take the next element, we print its value,
we store its left child, and we store its right child. We continue, we take the next element, we
print its value, we store its left child, and we store its right child. We repeat again, we take the next element,
we print its value, we store its left child, and we store its right child. Same thing, we take the next element, we print,
we store left, and we store right. We repeat, we take the next element, but it's
null, so we just continue, because it has neither value nor children. Next element, we take it, we print, we store
left, and we store right. Next element, it's null, so we do nothing, we just continue. We still have elements, so we continue. We take the next element, we print, we store
left, and we store right. Same thing again, we take the next element,
we print, we store left, and we store right. Next element, we take it, we print its value,
we take its left child, and its right child. We repeat, we take the next element, we print,
we take left, and we take right. We take the next element, and now we have only null elements left, so we continue, we continue, we continue, we continue, we continue, we continue, we continue, and we continue. And we have no more elements, so we finished
our task, we printed all the nodes in level order traversal. Okay now we used a data structure, but we
didn't specify which one. You can see that we were inserting elements
from the bottom, the tail, and we were removing elements from the top, the head, and the best
data structure to do that is the queue, because in a queue, we have two main operations, we
have enqueue, the action of inserting an element at the tail, and dequeue, the action of removing an element from the head, and it's exactly what we were doing here. Now let's have a look at the code, we first
define the function, and here we create our queue, but we can just use an array that acts
as a queue, it's not a problem, like we did here, and the queue initially holds the root. Then, while the queue still has elements,
we get the next element by popping the first element on the queue, the one at the index
0. But because popping the first element runs
in O(n) time, since we have to shift all the values to the left, I decided
to use an alternative: instead of popping, we just put an index i that will walk on the
array, so to move to the next element, we just increment i instead of popping. So we initialize i to 0, the first index,
then while i is smaller than the length of the queue, it means that we still have elements. Then inside the loop, we get the element at
the index i, and we increment i, and by this way, it's like we popped an element in O(1)
time complexity. Now that we got the current node, we
check if it's null, because if it's the case, we just continue without doing anything, as
you've seen during the process applied to our last example. Else if the element is not null, then we do
exactly what we did, we just print its value, we enqueue its left child, and we enqueue
its right child, that's it, and this loop will get repeated until we have no more elements
in the tree. For the time complexity, here we are just
traversing the tree once, so it's O(n). And for the space complexity, we used extra
space for the queue that can contain n elements, so it's O(n). We can also write the function recursively
if we want, for that we first define the function, it takes the root, the index i, and the queue. Then remember that the loop was running while
i is smaller than queue length, so our base case is when i is greater than or equal to
queue length, in that case we have no more elements to print, we just return. Else, we do the same thing, we pop a node
from the queue, and if it's not null, we print its value, we insert its left child, and we
insert its right child. Then to continue the recursion, we call the
function again with the root, i+1 because we want to move to the next element, and the
same queue. We also need this small function that takes
the root only then calls the recursive one with 0 as initial value of i, and the queue,
it contains the root at the beginning. And the complexity is the same as iteration,
O(n) time and space complexity. Before ending this video, I want to go back
to what I told you in the "reverse binary tree" lecture, we did it in depth first search, but I told you that we can also do it in breadth first search. And now that we've seen how to do breadth
first search, we will reverse the tree that way. To reverse the tree, for each
node, we swap between its left and right child, instead of printing like we did here, it's
the only difference. So we just take the function we made now,
we change the function name, and instead of printing here, we swap between poppedNode.left
and poppedNode.right, that's it, and we got our function, in O(n) time and space complexity. I hope that you understood the problem, check
the code below, and see you in the next one.
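Here is a Python sketch of the iterative breadth first search described above, together with the tree-reversal variant from the end of the video. The Node class is an assumption about how tree nodes are represented (data, left, right):

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

def bfs(root):
    queue = [root]  # an array acting as a queue
    i = 0           # walking index: incrementing i "dequeues" in O(1)
    while i < len(queue):
        node = queue[i]
        i += 1
        if node is None:  # null elements have neither value nor children
            continue
        print(node.data)
        queue.append(node.left)   # enqueue left child
        queue.append(node.right)  # enqueue right child

def reverseTree(root):
    # Same traversal, but instead of printing we swap the children
    queue = [root]
    i = 0
    while i < len(queue):
        node = queue[i]
        i += 1
        if node is None:
            continue
        node.left, node.right = node.right, node.left
        queue.append(node.left)
        queue.append(node.right)
```

Both run in O(n) time and O(n) space, as stated in the lecture, since the walking index avoids the O(n) cost of popping from the front of the list.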
16. 15- Sort linked list: Welcome to this new video where we will see
the "sort linked list" problem. The problem can be easily guessed from the title, it says,
given a linked list list, create a function that sorts it in ascending order. Note that
the function returns nothing, the list must be sorted in-place. I hope that you tried
to solve the problem, so let's start. First of all, you probably know that there exist
different sorting algorithms, we have bubble sort, insertion sort, selection sort, heap
sort, merge sort, quicksort, and many other ones. And for this video, we will apply two
of them on a linked list, we will see bubble sort and merge sort, and because you are more
familiar with arrays, I will first apply them on an array. Let's start with bubble sort, you probably know how it works but I'll explain it again quickly. We have this array of 5
elements for example. The idea of bubble sort is that we start at the first element, then
while we didn't reach the last element, if the current element is greater than the next one, we swap them. And we repeat this process n times. Here we start at the index 0, and
6 is greater than 5, we swap, and we continue. 6 is greater than 3, we swap, and we continue.
6 is greater than 4, we swap, and we continue. And 6 is greater than 1, we swap, and we finished.
But we said that we repeat the process, so we start again from 0.
5 is greater than 3, we swap, and we continue. Also greater than 4, we swap, and we continue. Same thing with 1, we swap, and we continue. Now nothing will change because the remaining part is sorted. We repeat, 3 is smaller than 4, we just continue. 4 is greater than 1, we swap, and we continue.
And now the remaining part is sorted, nothing will happen. We repeat the process, 3 is greater
than 1, we swap, and we continue. Now nothing will happen, the remaining part is sorted. And here we finished our work, and the array is sorted. In code it would be, we first define
the function, then we set a counter i to count how many times we repeated the process. We
said that we repeat it n times, where n is the length of the array, so while i is smaller
than array's length, we start from the beginning of the array, by putting an index j initialized
on 0. Then while j+1 is smaller than array's length, we do what we were doing in the last
example, if arr[j] is greater than arr[j+1], we swap. We wrote j+1 because the next element
must exist to be able to compare, so if we write j, when the loop reaches the last index,
we will go out of bounds when we write arr[j+1]. Then in all cases, we increment j to move to the next index, and when we finish the process, we increment i. The function returns
nothing because it sorts the array in-place. Okay but how can we write this code for a
linked list? For that we need to make some changes, for example we can write arr[j],
but we cannot write list[j], because a linked list doesn't have random access like an array.
We also have to change the while loop condition, we don't have the linked list's length. Instead,
we will put pointers on nodes and we traverse the linked list until we arrive at null. We
want to start at 0, so we put a pointer i on the head. Then the condition for array
is while i is smaller than array's length, so for a linked list we write, while i is
not null, because when i becomes null, it means that we finished traversing the linked
list. Now for j, it also starts at 0 in the array, so it starts on the head for the linked
list. And for the while loop condition, j+1 had to be smaller than array's length, and in a linked list, j+1 represents the next element, so we write, while j.next is not null. Inside the loop, we compare the data of the node j with j.next, so if j.data is greater
than j.next.data, we swap their data. And in all cases, instead
of incrementing j, here it's a pointer on a node, so to continue traversing, j becomes
j.next. Same thing for i, here i becomes i.next. And we got our bubble sort on a linked list. For the time complexity, we are re-traversing the linked list n times, so we have a time complexity of O(n²). And for the space complexity, we are using simple variables only, so it's O(1). Now we move to the second sorting algorithm of this video, merge sort. Merge sort works
this way: to sort an array, we divide the array into two halves, we sort the left half, we sort the right half, then we join the two sorted parts while keeping the result sorted. But you can see that the same action is repeated inside the function, so it's a recursive process. For example for this array, we divide it into two halves. Now we sort the left half, so
we apply the same process on it, we divide it into two halves. Now we sort its left part,
we first divide it into two halves. And here on the left part, it's the base case, it has
only one element, so it's already sorted. Now we sort the right part, but it's also
the base case, it's already sorted. Now we merge the two halves, the smallest
element is 3, then comes after it 5. Here we backtracked, and we sort the right half. We have only one element from the left, and one element from the right, so now we have to merge by keeping the result sorted, the smallest element is 2, then comes after it 7. Now we got the two
halves sorted, we merge them. The smallest element is 2, then we have 3, then 5, then
7. Back to the original call, we sort the right half now. From the left, we have only
one element, the base case. And from the right, we divide by two, then we have base cases, and we merge, we have 4, then 9. Now we merge, we have 1, then 4, then 9. Now we merge the original two halves, we have 1, then 2, then 3, then 4, then 5, then 7, then 9, and we get the array sorted. Oh and by the way, if you are asking yourself how are we merging sorted arrays, know that we put a pointer on the first element of the first array, and
one on the second array, then we keep taking the smallest value between arr1[ptr1] and
arr2[ptr2], and we increment the pointer of the array from where we took the value, until
we have no more elements in both. Okay but how can we apply this process for a linked
list? First of all, to divide into two halves, in an array we just have to calculate the
middle index, as we were doing in divide and conquer lectures, left + right, divided by
2. But for a linked list, we don't have the length, and we don't have random access, so to find the middle, we will apply slow and fast pointer technique, as we did in "find the duplicate" lecture. So we will put both of them on the head, then we keep moving slow by one node and fast by two nodes, all that while we still have at least two elements
left. Let's start, we put slow and fast, and we start moving. We still have at least two
elements left, we continue. Same thing here, we continue. And here, it's the last element, so we stop, and you can see that slow pointer is
in the middle of the list. But we don't
have random access, so we need to get two separate linked lists, and for that we will
do a smart move, we put a pointer on the node that is just after slow, which represents
the head of the right half, and we set slow.next to null, to make it the last element of the
left half linked list, and we got two separate linked lists. And now we sort these two linked
lists, by applying the same process, etcetera etcetera, same as we did with arrays. For the code, we first define the function, then the base case is when the linked list has at most one element, so if head is null or head.next is null, it's already sorted, we return it. In the general case now, we search for the middle, we do what I explained before, we put slow and fast on the head, then while we still have at least two elements to be able to jump, in other words, while fast.next and fast.next.next are not null, we move slow by one node and fast by two. After the loop, slow is on the middle, we save the head of the right half in a variable to not lose it, then we set slow.next to null. Now we sort the two halves, and for that, we sort the linked list head, which is the left half, then we sort the linked list headRightHalf, which is the right half. Now last step, we return these two sorted linked lists merged, by calling the function mergeSortedLists that
we will define now. To merge two sorted linked lists into a sorted one, we first define the function, then we put ptr1 and ptr2 on their heads to traverse them. Now we declare two
other variables, returnedHead that will contain the head of our merge, and tail, the pointer
that will act as the tail of our merge, to know where to put the next node. And now, while one of the lists still has elements, we search for the next node, which we can name lower because we are searching for the one with the lower value. And we can have
3 cases, the case where both lists still have elements, in that case, we have to compare,
so if ptr1.data is smaller than ptr2.data, then lower is ptr1, and ptr1 moves to the
next node. Else, lower is ptr2, and ptr2 moves to the next node. Now second case, when only
the first list still has elements, in that case, we directly take ptr1, so lower is ptr1
and ptr1 moves to the next node. Third and last case, when only the second list still
has elements, so we directly take ptr2, lower is ptr2, and ptr2 moves to the next node. The case where they are both null doesn't exist because the loop would have already been exited. Now still in the loop, after we got the lower node, which is the next element
on our merge, we have to insert it in our merge. And for that, we have two cases, the
first one is when the returnedHead is still null, which means that it's the first element
of the list, so it's both the head and the tail, this is why we set returnedHead and
tail to lower. Second case, when we already inserted elements before, in that case, we
set tail.next to lower, to insert lower from the end of our list, and we move tail to the
next node to be able to insert the element of the next iteration. At the end, after we
merged our two linked lists, we return the returnedHead, which is the head of our merge. To better understand how this function works, let's merge these two sorted linked lists. We first put ptr1 and ptr2 on heads, and returnedHead and tail on null, because we didn't insert
yet. Now we can start, here ptr1 and ptr2 are both not null, so we compare, 1 is smaller
than 3, so lower is ptr1, and we move ptr1. Now we found lower, and returnedHead is null,
so we set both returnedHead and tail to lower. We continue, ptr1 and ptr2 are both not null,
we compare. And 3 is smaller than 5, so lower is ptr2, and we move ptr2 forward. Now tail.next becomes lower, and we move tail forward for the next iteration. We continue, we compare,
5 is smaller than 6, so lower is ptr1, and we move ptr1 forward. We also set tail.next
to lower, and we move tail forward. We repeat the process, we compare, 6 is smaller than
8, so lower is ptr2, and we move ptr2 forward. And we set tail.next to lower, and we move
tail forward. We repeat, but here, only ptr1 is not null, so we directly set lower to ptr1,
and move ptr1 forward. Then we set tail.next to lower, and we move tail forward. We repeat,
only ptr1 is not null, so we do the same thing, we directly set lower to ptr1, and we move
ptr1 forward. Then we set tail.next to lower, but nothing changes, and we move tail forward. Now, ptr1 and ptr2 are both null, so we finished our task, and we return returnedHead, which keeps the result sorted. Back to the code, we could
say that we finished, but we still have one thing to do. You have to know that in the
linked list implementation of this course, we have two data types, we have the class
Node, that contains the data and the pointer on the next node. And we have LinkedList,
which contains only the head, which is of type Node. I'm telling you this because the
function prototype of our problem takes a list of type LinkedList as input, and it returns
nothing. But the mergeSort function we made, directly takes the head, which is of type
Node, and returns the head of the sorted linked list, so the output is of type Node. And to
solve this type incompatibility, in our initial function, which is sortList in reality, we
just replace our linked list head with the result returned when we sort it, and we return
nothing, and our linked list is now sorted. For the time complexity, as explained in lectures
where we used divide and conquer, when we keep dividing something by 2, we need logn
divisions to reach 1, which is the base case here. And we are always searching for the
middle and merging the two sorted halves, which both cost O(n) in the worst case. So
we just multiply to get the time complexity, we get O(nlogn). And for the space complexity,
we are always dividing the list by 2, and the base case is when we arrive to 1 element,
for that we need logn divisions, so the maximum call stack size is logn, which gives us a
space complexity of O(logn). Extra information, some of you may know that space complexity of merge sort with arrays is O(n), why isn't it the same with linked lists? It's because,
with arrays, we need to create an additional array to merge the elements. Which is not
the case with linked lists, you've seen that we merged those lists in-place, without using
an additional linked list, so we only count the maximum call stack size, this is why we
got O(logn) only and not O(n). We can also notice that the time complexity of merge sort, which is O(nlogn), is asymptotically better than that of bubble sort, which is O(n²), so
in general, we will use merge sort to sort a linked list instead of bubble sort. We arrived at the end of this video, I hope that you understood both sorting algorithms and how they work on a linked list, check the code below, and see you in the next lecture.
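Putting the whole lecture together, merge sort on a linked list could look like the sketch below. It is a minimal sketch, assuming a bare Node class with data and next attributes rather than the course's exact LinkedList wrapper; fromList and toList are hypothetical helpers added here just to build and inspect lists.

```python
class Node:
    """Minimal singly linked list node (a stand-in for the course's class)."""
    def __init__(self, data):
        self.data = data
        self.next = None

def mergeSortedLists(head1, head2):
    # Merge two sorted linked lists into one sorted linked list, in place.
    ptr1, ptr2 = head1, head2
    returnedHead = None  # head of the merged list
    tail = None          # last node of the merged list so far
    while ptr1 is not None or ptr2 is not None:
        # Pick the node with the lower value (or the only remaining one).
        if ptr1 is not None and ptr2 is not None:
            if ptr1.data < ptr2.data:
                lower, ptr1 = ptr1, ptr1.next
            else:
                lower, ptr2 = ptr2, ptr2.next
        elif ptr1 is not None:
            lower, ptr1 = ptr1, ptr1.next
        else:
            lower, ptr2 = ptr2, ptr2.next
        if returnedHead is None:
            # First inserted node: it is both the head and the tail.
            returnedHead = tail = lower
        else:
            tail.next = lower
            tail = lower
    return returnedHead

def mergeSort(head):
    # Base case: 0 or 1 element, already sorted.
    if head is None or head.next is None:
        return head
    # Find the middle with the slow and fast pointer technique.
    slow = fast = head
    while fast.next is not None and fast.next.next is not None:
        slow = slow.next
        fast = fast.next.next
    headRightHalf = slow.next
    slow.next = None  # cut the list into two separate halves
    # Sort both halves, then merge them.
    return mergeSortedLists(mergeSort(head), mergeSort(headRightHalf))

def fromList(values):
    # Helper (not from the lecture): build a linked list from a Python list.
    head = None
    for v in reversed(values):
        node = Node(v)
        node.next = head
        head = node
    return head

def toList(head):
    # Helper (not from the lecture): collect node values into a Python list.
    out = []
    while head is not None:
        out.append(head.data)
        head = head.next
    return out
```

In the course's setup, a sortList wrapper would then just do `list.head = mergeSort(list.head)` so the function returns nothing and the linked list itself ends up sorted.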
17. 16- Valid binary search tree: Welcome to this video where we will see the
"valid binary search tree" problem. So what does the problem say? It says, given a binary tree root, create a boolean function that checks if it's a binary search tree. Note that in a binary search tree, every node
must be strictly greater than all nodes on its left subtree, and strictly smaller than
all nodes on its right subtree. I hope that you tried to solve the problem,
so let's start. To solve this problem, the intuition that
we usually have is that for every node, we check if it's greater than the left node and
smaller than the right node. And it can work sometimes, like with this
binary tree. Okay but what if we had this binary tree? Here our solution returns true because this
node is greater than its left node and smaller than its right node, same thing for this one,
and same thing for this one. But in reality, this binary tree is not a
binary search tree, because even if the root is greater than its left node, it's not enough,
because it must be greater than all nodes of the whole left subtree, which is not the case
here, 16 is smaller than 19, so this solution doesn't work, it checks only one level, this
is why we have to find another solution. In order to do so, we will use the following
property: in a binary search tree, every node value must be in a certain range, it must
be between its closest left ancestor and its closest right ancestor. For example for this node, it must be between
8 and 16 exclusive, so we can put 13 for example. This means that the idea of our solution is
to check that every node respects this condition, and we can do that by always passing the minimum and the maximum value that the next node must respect. Let's apply this process to this binary tree. The root can be any value, so the minimum
is -infinity and the maximum is infinity. Now we have to check if the left subtree is a valid
BST, its head must be between -infinity and 16, which is its closest right ancestor. So what we did here is that when calling the
function, we didn't pass -infinity and infinity, we passed -infinity and the root value, which
is 16. We continue, we check the left subtree now,
the node must be between -infinity and 9, which is the case here, and it's a leaf so
we backtrack. From the right now, the node must be between
9 and 16, it's the case here. From the left, 11 must be between 9 and 13,
which is the case, and it's a leaf so we backtrack. From the right now, 14 must be between 13
and 16, the condition is respected, and it's a leaf, we backtrack. Now we checked that the node respects the range, we checked that both subtrees are valid BSTs, so it returns true. Same thing here, it returns true. Back to the original call, we now need to
check if the right subtree is valid, and for that, 16 becomes the minimum here, so we call the function with 16 as a minimum, and infinity as a maximum. Now 22 must be between 16 and infinity, it's
the case here, we continue. From the left we have nothing, and from the
right, 31 must be between 22 and infinity, it's valid, we continue. From the left, we have 25, it must be between
22 and 31, it's the case, and it's the last node, so we have true, true, true, and true. For the code of this solution, we first define
the function, it takes the root, and the minimum and maximum, their initial value is -infinity
and infinity respectively. Now we have two base cases, the first one
is when the tree is empty, we return true because it doesn't break the BST condition. Second base case, when the root is not in
that range, in that case, we return false because it means that the binary tree is not
a BST. And in the general case, we have to check
that both subtrees are BSTs, so we return isBst with root.left and isBst with root.right,
but look at parameters, root.data becomes the maximum of the left subtree, so with root.left,
min and max are min and root.data. And root.data is also the minimum of the right
subtree, so with root.right, min and max are root.data and max, as you can see. For the time complexity, here we are traversing
the whole tree, so it's O(n). And for the space complexity, the maximum
call stack size is when we reach the bottom, so it's O(h). This solution is enough and optimized for
this problem, but we can have another approach, because another property of a binary search tree is
that when traversing it inorder, values are sorted as you can see, so the idea of this
second solution is just to traverse the tree inorder while checking if values are sorted. And for that, we will have a variable precedent
that will contain the value of the precedent node inorder. Remember that in inorder traversal, we first
deal with the left subtree, then with the root, then with the right subtree. So in code, we define the function, it takes
the root and the precedent value, then we have the base case, when the tree is empty, in that case, we return true, same as for the first solution. Now we must check if the left subtree is a
binary search tree, so if the function called with root.left returns false, meaning that the left
subtree is not a binary search tree, we return false. Now the root, we have to check that the root
value is greater than the precedent element, so if root data is smaller than or equal to
precedent[0], we return false. Here I used smaller than or equal because
even if it's equal, then it's not valid, because a binary search tree, by definition, doesn't
contain duplicates. And precedent is an array of 1 element, we
did that to be able to pass it by reference and modify it from any call. Else if root.data is greater than precedent[0],
then before calling the function on the right subtree, precedent[0] becomes equal to root.data
because the root data is the precedent element of the right subtree inorder, this is why
we performed this modification. With the right subtree now, we call the function
with root.right, and if it returns false, then we return false because this tree is
not a BST. At the end, after checking the 3 conditions,
that the left subtree is a BST, that the root data is greater than the precedent element,
and that the right is a BST, we can return true. But this one is not the main solution function,
because the main solution function takes only the root as a parameter, it creates that array
of one element initialized to -infinity because the first node inorder can be any value, then
it returns the result returned by that recursive function. For the time and space complexity of this
solution, it's same as for the first one, we are just traversing the tree once and the
maximum call stack size is h, so we get O(n) time complexity and O(h) space complexity. I hope that you understood this lecture, check
the code below, and see you in the next one.
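Both approaches from this lecture can be sketched in Python as follows. This is a minimal sketch, assuming a simple Node class with data, left, and right attributes; the course's actual class and function names may differ.

```python
import math

class Node:
    """Minimal binary tree node (a stand-in for the course's class)."""
    def __init__(self, data, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

def isBst(root, minimum=-math.inf, maximum=math.inf):
    # Solution 1: every node must lie strictly inside a (minimum, maximum)
    # range given by its closest left and right ancestors.
    if root is None:
        return True  # the empty tree doesn't break the BST condition
    if not (minimum < root.data < maximum):
        return False
    # root.data is the maximum for the left subtree and the minimum
    # for the right subtree.
    return (isBst(root.left, minimum, root.data)
            and isBst(root.right, root.data, maximum))

def isBstInorder(root, precedent):
    # Solution 2 helper: an inorder traversal of a BST visits values
    # in strictly increasing order (no duplicates allowed).
    if root is None:
        return True
    if not isBstInorder(root.left, precedent):
        return False
    if root.data <= precedent[0]:  # equal is also invalid: no duplicates
        return False
    precedent[0] = root.data  # root is the precedent of the right subtree
    return isBstInorder(root.right, precedent)

def isBst2(root):
    # precedent is a 1-element list so recursive calls can modify it.
    return isBstInorder(root, [-math.inf])
```

For the tree used in the walkthrough (root 16, left subtree 9/13/11/14, right subtree 22/31/25), both functions return True; swapping in a left-subtree value greater than the root makes both return False.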
18. 17- Minimum cost path in matrix: Welcome to this new video where we will see
the "minimum cost path in matrix" problem. So what does this problem say? It says, given
a matrix of integers matrix of size n times m, where each element matrix[i][j] represents
the cost of passing through that cell. You are asked to create a function that returns the
cost of the minimum cost path to go from the top left cell to the right bottom cell, knowing that
you can only move to the right or to the bottom. I hope that you tried to solve
the problem, so let's start. First of all, this problem description is quite
long, so let's break it into pieces to understand it better. So we have a matrix of integers of size
n times m, in other words, we have a 2D array that has n rows and m columns, where each element
represents the cost of passing through that cell. As an example, let's suppose
that we want to take this path, so we have 0 at the beginning, then we start at
the first cell, so we add its cost, then this one, then this one, then this one, then this one,
then this one, then this one, and this one. So here we got the cost of going from top left
cell to right bottom cell by taking this path. Then, we are asked to return the cost of the
minimum cost path to go from top left cell to bottom right cell. Because you have to know that
to go from top left cell to bottom right cell, we have many ways, here we have this one, this
one, this one, and many other ones. And each one of them has a cost, that we can get by adding
the cost of each cell, like we did earlier, and our goal is to find the
path that has the minimum cost. Last thing, it told us that we can only move
to the right or to the bottom, so we can't go up or left while traversing the matrix, which
means that this path for example is not allowed. Now that we understood the problem, we can
start. The brute force solution of this problem is obvious, it's to check every possible path,
and taking the one that has the minimum cost, but how can we do that? For that, we can notice
something interesting, it's that each time we pass by a cell, we have two choices to continue,
we can either go to the right, or to the bottom, and each choice will give us a new path, that
will itself create two new paths, and so on. And in reality, to get the minimum cost path, we get
the minimum between the minimum cost path from the right and the minimum cost path from the bottom,
and we add to it the cost of the actual cell. For example if we have this matrix, we get
the minimum cost when we start from this cell, the one at the right, it's equal to 57 here, by
taking this path. And we get the minimum cost when we start from this cell, the one at the bottom,
and it's equal to 51 here, by taking this path. And we take the minimum between
them, which is equal to 51 here, and we add to it the cost of the top left cell,
and we got the minimum cost to go from top left cell to right bottom cell, which is 54. If
you are asking yourself why we did this, it's because when we are at the top left cell and
we have to make a choice for where to move, we have to know the minimum cost of both options, so
that we can take the better one, the minimum. But you can see that to get the minimum cost, we must
get the minimum cost from right and from bottom, which means that we are doing the same action,
so our function will be a recursive function, where parameters are the matrix, and the cell
indexes i and j, that are initially equal to 0, because we start at the top left
cell. And we return matrix[i][j], plus the minimum between the minimum cost from the
bottom, where we call the function with i+1 and j, the indexes of the cell at the bottom,
and the minimum cost from the right, where we call the function with i and
j+1, the indexes of the cell at the right. But don't forget that it's a recursive
function, so we need to set the base case. Here we have three other cases, the first
one is when we are at the right bottom cell, so when i is equal to n-1 and j is equal to m-1,
and in that case, we have no cell from the bottom, and no cell from the right, so
we just return matrix[i][j]. Second case, it's when we are on the last
row, when i is equal to n-1, and in that case, we have no cell from the bottom, which means that
we can continue from the right only, this is why we return matrix[i][j] plus minimum cost call from
the right, j+1. Third base case, it's when we are on the last column, when j is equal to m-1, and in
that case, we have no cell from the right, we can only go to the bottom, so we return matrix[i][j]
plus minimum cost call from the bottom, i+1. And the general case is when we can go from both right
and bottom, and in that case, as said earlier, we return matrix[i][j] plus the minimum between the
minimum cost from the bottom, and from the right. To understand better, let's search for
the minimum cost in this 2 times 4 matrix. So at screen you have the matrix, the function
code, the call stack, and the recursion tree. The first call goes on the top left
cell, where i and j are both 0. And both right cell and bottom cell exist, so we
search for the minimum between their minimum cost, and we first search for the minimum cost from
the bottom, so a new function call appears on the call stack. Here i is equal to n-1, so
we get minimum cost from the right, because we can go to the right only. Same thing here, i
is equal to n-1, we search from the right only. Same thing here, we search from the right. And
here i is equal to n-1 and j is equal to m-1, so we return matrix[i][j], and we backtrack.
Here we got the minimum cost from the right, we return it + matrix[i][j]. Same thing here,
we backtrack. And same thing here, we backtrack. We went back to the original call, and we got the
minimum cost from the bottom, now let's search for the one from the right. Here both right and bottom
cell exist, we start searching from the bottom. Here i is equal to n-1, we search from the
right. Same thing here, we search from the right. And here, we just return matrix[i][j], and
we backtrack. We also backtrack here. And same thing here. Now we search from the right. Both
cells exist, we start searching from the bottom. i is equal to n-1, we search from the right.
Here it's the right bottom cell, we directly return matrix[i][j] and we backtrack. Here we also
backtrack. Now we search from the right. Here j is equal to m-1, so we search from the bottom only.
And we have i is n-1 and j is m-1, we return matrix[i][j] and we backtrack. Same thing here, we
backtrack. And now we got both minimum cost from bottom and right, so we return the minimum between
them, plus matrix[i][j], the total gives 13 here. Same thing here, we got them both,
we return their min + matrix[i][j], which gives a total of 14 here, 1 + 13. Now we
went back to the original call, we have them both, we return their min + matrix[i][j], and we
get the minimum cost of the whole matrix, which is equal to 17. Now for the time complexity, each recursive call is calling the function twice, and the height
of the recursion tree is n+m-1 because at each call, we are either moving to the right or to the
bottom, and the length of the path to go from the top-left cell to the bottom-right cell is n+m-1,
so we get an O(2^(n+m)) time complexity. And for the space complexity, the maximum call stack
size is n+m, we get an O(n+m) space complexity. And you can see that the time complexity of this
solution is huge, so let's search for another one. A solution that may come in your mind is to be
greedy, so we always pass from the cheapest cell between the one at the right and the one at the
bottom. So if we have this matrix, we start at the top left cell, and 6 is smaller than 10, so we
directly go to the bottom. 2 is smaller than 11, we go to the right. 8 is smaller than 15,
we go to the bottom. Now we have no choice, we go to the right. Same thing here, we go to the
right. And same thing here, we go to the right. And here we arrived at the bottom right
cell, we just add its cost and we finished. But the problem with this solution is that
it doesn't always work, for example here, by being greedy, we got a path whose cost is
74, but if we took this path, we would get 37, so our greedy method doesn't always give the right
path, then it's not a valid solution. In reality, the reason behind the huge time complexity of the
first solution is that, if we take that recursion tree we've seen earlier, you can see that we
have function calls that are repeated uselessly, so what we are going to do to solve this issue is
to use dynamic programming, because we will store the result of each function call in a matrix of
the same size to avoid recomputing it later, called minCosts for example, a matrix where each
element at indexes i j represents the minimum cost from top left cell to cell at indexes i j. But
how can we do that? This is what we are going to see now. Let's suppose that we're at the right bottom
cell, and that somehow, we already got that the minimum cost from top left cell to the cell at
the left is 47, and to the cell at the top is 61, so to get the minimum cost, we take the cost
of the bottom right cell, which is 7, and we add to it minimum between when we come from the
left and from the top, which gives 47 + 7 here, which is equal to 54. Okay it works, but to get
those two values, we need to do the same thing, and so on. So our relation is, minCosts[i][j] is
equal to the minimum between minCosts[i-1][j], the cell at the top, and minCosts[i][j-1], the
cell at the left, plus matrix[i][j]. But you can see that we always need the previous cells, does it
mean that we can't fill this matrix? No, so let's see how to fill it. First of all, the top left
cell doesn't have previous cells, neither from the left nor from the top, so it's just equal to
matrix[0][0]. Then in the first row, each element has only the cell from the left, so minCosts[0][i]
is equal to minCosts[0][i-1] + matrix[0][i]. In the first column, each element
has only the cell from the top, so minCosts[i][0] is equal to minCosts[i-1][0]
+ matrix[i][0]. Now for the remaining cells, we start at minCosts[1][1], and we fill row
by row, until the last cell, by using that relation we've seen earlier. To understand better,
let's fill the minCosts matrix of this matrix. As said earlier, minCosts[0][0] is just equal
to matrix[0][0]. Now we fill the first row, we have 3 + 12 is equal to 15, then 15 + 4 is 19,
then 19 + 7 is 26, then 26 + 10 is 36. Now the first column, we have 3 + 6 is 9, then 9 + 19 is
28, then 28 + 1 is 29. Now we fill the remaining cells, and we start with minCosts[1][1]. We
have 9 is smaller than 15, + 8 it gives 17. Then 17 is smaller than 19, + 15 it gives 32.
Then 26 is smaller than 32, + 11 it gives 37. Then 36 is smaller than 37, + 4 it gives 40. Now
third row. 17 is smaller than 28, + 5 it gives 22. 22 is smaller than 32, + 14 it gives 36. 36 is
smaller than 37, + 32 it gives 68. 40 is smaller than 68, + 21 it gives 61. Now last row. 22 is
smaller than 29, + 20 it gives 42. 36 is smaller than 42, + 2 it gives 38. 38 is smaller than
68, + 9 it gives 47. 47 is smaller than 61, + 7 it gives 54. And we finished filling our minCosts
matrix, and we got that the minimum cost to go from top left cell to right bottom cell is 54. Now
if we want to write this solution in Python, we define our function, then we store the number of
rows and columns in n and m respectively. Then we create our n times m minCosts matrix, and we start
filling it. So first, we said that minCosts[0][0] is equal to matrix[0][0]. Then we fill the first
row, by using a for loop where i starts at 1, the index of the second element, and stops at m,
the number of columns, where each element is equal to minCost of the previous cell on the first row +
the cost of the actual cell. Now the first column, we fill it by using a for loop where i
starts at 1, and stops at n, the number of rows, where each element is equal to the previous cell
on the first column + the cost of the actual cell. Now we fill the remaining cells
starting from minCosts[1][1], so we have a first for loop for rows,
where i goes from 1 and stops at n, and inside we have a for loop for columns,
where j goes from 1 and stops at m, and inside, we calculate minCosts[i][j] by
getting the minimum between minCosts of the element at the left and the one at the top, +
the cost of the actual element, matrix[i][j]. Now that we filled the minCosts matrix, we just
return the value of the bottom right cell in minCosts, which represents the minimum
cost to go from top left to bottom right, so we return minCosts[n-1][m-1], where n-1 and
m-1 are the indexes of the bottom right cell. For the time complexity of this solution,
here we are just filling an n times m matrix, so our time complexity is O(nm). For the space
complexity, here we used an additional matrix of size n times m, so our space complexity is
O(nm). And you can see that this solution has a way better time complexity than the first
solution, so dynamic programming helped us to avoid useless function calls. I hope that you
understood this lecture, and that it helped you to understand dynamic programming more, so check
the code below, and see you in the next lecture.
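The two working solutions from this lecture, the brute-force recursion and the dynamic programming one, can be sketched like this. It is a minimal sketch, assuming matrix is a non-empty list of lists of numbers; function names are illustrative, not necessarily the course's.

```python
def minCostRecursive(matrix, i=0, j=0):
    # Brute force: explore every right/bottom path, O(2^(n+m)) time.
    n, m = len(matrix), len(matrix[0])
    if i == n - 1 and j == m - 1:
        return matrix[i][j]  # bottom right cell: nowhere left to go
    if i == n - 1:  # last row: we can only move right
        return matrix[i][j] + minCostRecursive(matrix, i, j + 1)
    if j == m - 1:  # last column: we can only move down
        return matrix[i][j] + minCostRecursive(matrix, i + 1, j)
    # General case: cheapest of going down or going right.
    return matrix[i][j] + min(minCostRecursive(matrix, i + 1, j),
                              minCostRecursive(matrix, i, j + 1))

def minCostPath(matrix):
    # Dynamic programming: minCosts[i][j] is the minimum cost to reach
    # cell (i, j) from the top left cell. O(nm) time and space.
    n, m = len(matrix), len(matrix[0])
    minCosts = [[0] * m for _ in range(n)]
    minCosts[0][0] = matrix[0][0]
    for j in range(1, m):  # first row: we can only come from the left
        minCosts[0][j] = minCosts[0][j - 1] + matrix[0][j]
    for i in range(1, n):  # first column: we can only come from the top
        minCosts[i][0] = minCosts[i - 1][0] + matrix[i][0]
    for i in range(1, n):  # remaining cells, filled row by row
        for j in range(1, m):
            minCosts[i][j] = (min(minCosts[i - 1][j], minCosts[i][j - 1])
                              + matrix[i][j])
    return minCosts[n - 1][m - 1]
```

On a small matrix like [[1, 3, 1], [1, 5, 1], [4, 2, 1]], both functions agree on a minimum cost of 7, but only the dynamic programming version stays fast as the matrix grows.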
19. 18- Balanced binary tree: Welcome to this new video where we will analyze
two solutions to the "Balanced binary tree" problem. The problem says: Given a binary
tree, create a boolean function that checks if it's balanced. For this problem, we define
a balanced binary tree as a binary tree whose left and right subtrees are balanced and differ
in height by at most 1. First of all, we need to know what height means. The height of a tree
is the length of its longest root-to-leaf path, for example, the height of this tree is 4.
And to calculate the height of a binary tree, we recursively calculate the height of its left
subtree, the height of its right subtree, we take the max between them, and we add 1 to count
the root. The height of a tree of 2 levels is 1, the height of a tree of 1 level is 0, so the
height of a tree with 0 levels, the empty tree, is -1. In code, the function takes as parameter
the tree, then we have the base case. If the tree is empty, we return -1, because the height of
the empty tree is -1. Else, we get the height of the left subtree, we get the height of the right
subtree, and we return 1 + the max between them. Let's analyze the complexity of this function.
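As a reference while following that analysis, the height function just described might look like this. It is a minimal sketch, assuming a simple Node class with left and right attributes, as in the course.

```python
class Node:
    """Minimal binary tree node (a stand-in for the course's class)."""
    def __init__(self, data, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

def height(root):
    # The empty tree has height -1, a single node has height 0, and so on.
    if root is None:
        return -1
    leftHeight = height(root.left)
    rightHeight = height(root.right)
    return 1 + max(leftHeight, rightHeight)
```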
We're recursively calling the function again on both the left and right subtrees, so in
reality, we're traversing the whole tree, we're doing a depth-first search traversal.
And at each node, we're doing O(1) work, multiplied by n nodes, we get a time complexity
of O(n). And for the space complexity, we're using extra space for the call stack, and the maximum
call stack size is the number of levels of the tree, which is the height + 1, so the space complexity
is O(h), where h is the height of the tree. Okay now we've seen what is the height of a tree,
how to calculate it, and analyzed the complexity, we can move to checking if a binary tree is
balanced. We said that in this problem, we define a balanced binary tree as a binary tree whose
left and right subtrees are balanced and differ in height by at most 1. So the idea of our solution
to check if a binary tree is balanced is to check 3 conditions, that the subtrees differ in height
by at most 1, that the left subtree is balanced, and that the right subtree is balanced,
the last 2 are checked recursively. And the empty tree is balanced, because it doesn't
break the property of being balanced. In code, we just write, if the tree is empty, we return true,
else, we get the height of the left subtree, we get the height of the right subtree, and we check
the 3 conditions. The difference between left height and right height is smaller than or equal
to 1, the left subtree is balanced, and the right subtree is balanced, that's it. To calculate
the difference, we used the absolute value. For the time complexity, we will analyze
the case where the binary tree is perfectly balanced and the case where the
binary tree is totally unbalanced. When it's perfectly balanced, in the base
case, we just return true, we write, T(0) = 1. And in the recursive case, subtrees have
n/2 nodes each, so calculating the height of the left subtree costs n/2, calculating
the height of the right subtree costs n/2, recursively calling the function on the left
subtrees is T(n/2), and recursively calling the function on the right subtree is T(n/2). We write,
T(n) equal to 2T(n/2)+n, because n/2+n/2 is n. Here we can directly apply the Master method, a
is 2, b is 2, log_b(a) is 1, c is 1, and d is 0. log_b(a) and c are equal, and d is greater than
-1, so the time complexity is O(n^c · (logn)^(d+1)). c is 1, d is 0, we get O(nlogn). Now the
case where the tree is totally unbalanced. Here we've put the long branch from
the right but it doesn't matter. In the base case, we just return True, we
write, T(0) = 1. And in the recursive case, the left subtree has 0 nodes and the right subtree
has n-1 nodes. So the cost of getting the height of the left subtree is 1 (we count 1 rather than 0), the cost
of getting the height of the right subtree is n-1, the cost of recursively calling on the left
subtree is T(0), because it has 0 nodes, and the cost of recursively calling on the
right subtree is T(n-1), because it has n-1 nodes. We get 1+n-1+T(0)+T(n-1). T(0) is 1,
and we remove smaller terms, we get T(n-1)+n. But here we missed an important detail,
when a tree is totally unbalanced, after getting the height of the left and right
subtrees, the difference will be greater than 1, so this condition will return false. And when we
have conditions separated by the 'and' operator, as soon as one of them returns false, the program
doesn't evaluate other conditions, because it already knows that the final result will be false.
So in reality, we have no recursive calls here, the program directly detects that the tree is
unbalanced without checking subtrees. So the cost of this function is just the cost of getting
the height of the left subtree, 1, and the cost of getting the height of the right subtree, n-1.
The total is n, we get a time complexity of O(n). This is why I told you that knowing what is
happening in your algorithm is very important, if we didn't know this, we would keep replacing
in the recurrence relation and get O(n²) at the end. We found O(nlogn) when the
tree is perfectly balanced, O(n) when it's totally unbalanced, so the worst case is when the binary
tree is perfectly balanced, the time complexity is O(nlogn). And for the space complexity, the
maximum call stack size is the height + 1, so the extra space size depends on the height of the
binary tree, the space complexity is O(h) where h is the height of the tree. One of the things
that were slowing down our solution is always calling the height function again and again, so
let's see how to fix it in the second solution. If we have a look at the height function that
we made earlier, we can see that at each node, we are getting the height of both subtrees, so
the idea is to take advantage of this and before returning the height, we check the difference
between the height of subtrees. If they differ by more than 1, we set the boolean variable to
false, it means that we found a subtree that is not balanced. So we will kinda have a
global boolean variable that is initialized to true and becomes false as soon as we find
an unbalance while traversing the tree. And in Python, because we want to be
able to modify a variable that is outside the function, we are going to use a list of
1 element. So in the non-recursive function, we create a
variable output which is a list of 1 boolean element that starts at true, we call the recursive
function to check if the tree is balanced, and we return output[0] which is the boolean value
that tells us if the tree is balanced or not. And in the recursive function, we take the height
function we made earlier, we just change its name, add the parameter output, and after getting
the heights of left and right subtrees, if the difference is greater than 1, then we found an
unbalance, we set output[0] to False, that's it. We found out earlier that the time complexity
of the height recursive function is O(n), and here we just added this condition that costs
O(1), it doesn't change the time complexity. And in the non-recursive function, we're
just creating an array of 1 element, and calling the recursive function that costs n,
the total is n+1, which gives a time complexity of O(n). And for the space complexity, the maximum
call stack size is the height + 1 as said earlier, and we have an array of 1 element. The total is
h+2, we get a space complexity of O(h). You can see that we found a solution that has an O(n) time
complexity for this problem. We reached the end of this video, in this one we've seen 2 solutions
to the "Balanced binary tree" problem, I hope that you understood them both, ask questions
if you didn't, and see you in the next one!
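As a reference, the second solution described above can be sketched in Python like this. It's only a sketch: the TreeNode class and the names treeHeight and isBalanced are assumptions, since the exact code isn't shown in the transcript; the convention used is that an empty tree has height -1.

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def treeHeight(root, output):
    # Same as the earlier height function, but it also flips output[0]
    # to False as soon as it finds an unbalanced subtree.
    if root is None:
        return -1  # convention: an empty tree has height -1
    leftHeight = treeHeight(root.left, output)
    rightHeight = treeHeight(root.right, output)
    if abs(leftHeight - rightHeight) > 1:
        output[0] = False  # found an unbalance
    return 1 + max(leftHeight, rightHeight)

def isBalanced(root):
    output = [True]  # a list of 1 element, so the recursion can modify it
    treeHeight(root, output)
    return output[0]
```

Each node is visited once and the condition costs O(1), which gives the O(n) time and O(h) space discussed above.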
20. 19- Paths in matrix: Welcome to this new video where we will see
the "paths in matrix" problem. In this one, given a matrix of size nm that contains only 0s
and 1s, where 0 means that the cell is empty and 1 means that there is a wall in that cell, create
a function that returns the number of paths that we can take to go from the top left cell to the
right bottom cell, knowing that you can move to the right or to the bottom only. I hope that
you tried to solve the problem, so let's start. First of all, let's see some examples.
First example, we have this matrix, and the possible paths are this one, this one, this
one, this one, this one, this one, and this one. Second example, we have this matrix, and the
possible paths are this one, this one, this one, and this one. Third and last example, we have
this matrix, and the possible paths are this one, this one, and this one. Now that you've seen
some examples, we can start solving the problem. In this example, we found 3 paths,
let's see how we arrived to that result. We start at the top left cell, and we can
decide to go from the bottom first. So we go down, we go down, we go down, we go down, but here,
we went out of the grid, so we backtrack. Now we move to the right, we move to the right,
we move to the right, we move to the right, and here we arrived to the bottom right cell, so
we return 1 because we found a path. We backtrack, we backtrack, we backtrack, we backtrack, we
backtrack, and here if we go from the right, we find a wall. We backtrack again, and now we can
go from the right. If we go down, we find a wall, so we go to the right. Now we go down, we go down,
we go down, we went out of the grid, we backtrack, and we go to the right. We also go to
the right here, and to the right again, and here we arrived to the bottom right cell,
we return 1 and we backtrack. We backtrack, we backtrack, we can't go to the right
because there is a wall so we also backtrack, same thing here, we backtrack, we
backtrack, and we backtrack again. Now we went back to the top left cell, we got all
the possible paths when we go from the bottom, now we go from the right. We go down, we can't
go further down because there is a wall so we go to the right, we go down, we go down, now we go
to the right, to the right, and here we arrived to the bottom right cell, so we found a new path,
we return 1 and we backtrack. We also backtrack, and we backtrack, now we can't go to the right
because there is a wall so we backtrack, same thing here, we backtrack, we backtrack, we can't
go to the right, and we backtrack. We arrived to the top left cell again, we got all paths from the
bottom and all paths from the right, we got 2 and 1 respectively, so the total number of paths
from top left cell to bottom right cell is 3, 2 + 1. Now if we analyze what we did here, we can
see that we just returned the sum of paths when we start from this cell, the one at the bottom, we
found 2, and paths when we start from this cell, the one at the right, we found 1. And we got 2
from the bottom here, because we did the sum of paths when we go to the bottom, we found 1,
and paths when we go to the right, we found 1, and 1 + 1 is 2. Same thing for this cell, we did
the sum of paths when we go to the bottom, we found 1, and paths when we go to the right,
we found 0 because there is a wall, and 1 + 0 is 1. So you can see that we have a
recursive process, where the number of paths is just the number of paths from the cell at the
bottom + the number of paths from the cell at the right, but we have 2 base cases here. So we
first define the function, it takes the matrix, and the indexes i and j whose default
value is 0, because we start at matrix[0][0]. Then we have the first base case, it's when the
actual cell is outside the matrix, or has a wall. In code we write, if i is greater than n-1,
where n-1 is the index of the last row, or j is greater than m-1, which is the index of the
last column, or matrix[i][j] is equal to 1, which means that there is a wall in our cell, and in
that case, we can't have a path, so we return 0. Second base case, it's when we arrive to the
bottom right cell, it means that we found a valid path, so we return 1. And we have the general
case, where we return the sum of paths from the bottom cell, i+1, and from the right cell, j+1.
Now for the time complexity, each recursive call is calling the function twice, and the height of
the recursion tree is n+m-1 because at each call, we are either moving to the right or to the
bottom, and the length of the path to go from the top-left cell to the bottom-right cell is n+m-1,
so we get an O(2^(n+m)) time complexity. And for the space complexity, the maximum call stack
size is n+m, we get an O(n+m) space complexity. And I think you guessed it, we will see a more
optimized solution to avoid having O(2^(n+m)) time complexity. If we take the recursion tree,
we can find a lot of repeated recursive calls, while they give the same result. So to counter
this issue, we will use dynamic programming, we create a matrix where each cell represents the
number of paths from top left cell to that cell. Let's start, we first define the function, then we
create an n times m matrix called dp for example, where dp stands for dynamic programming. Then for
the top left cell, if it has a wall, we can't have a path, we put 0. Else, there is exactly one path,
because we are already on the cell, so we put 1. Then, we move to the remaining cells of the
first row, and you have to know that on the first row we can only come from the left, so if
there is a path that reaches the previous cell, then it's the same one for the actual cell,
except if it's a wall. So for i going from 1 to m exclusive, if matrix[0][i] is equal to 1,
so the actual cell is a wall, we put 0. Else, it's equal to dp[0][i-1],
the previous cell, as explained earlier. Now for the remaining elements on the first
column, same reasoning: if the previous cell has a path, then it's the same one for the actual
cell, except if it's a wall. So for i starting at 1 and stopping at n, if matrix[i][0] is equal to
1, then it's a wall, we have no path, we put 0. Else, it's equal to dp[i-1][0], the previous
cell on the first column. Now the last part of our dp matrix, the remaining inner cells,
and for each element there, the number of paths is just equal to the number of paths when we
come from the top, + the number of paths when we come from the left, except if it's a wall. So
for i starts at 1 and stops at n, for j starts at 1 and stops at m, if matrix[i][j] is equal to 1,
we put 0. Else, dp[i][j] is equal to dp[i-1][j], the cell at the top, + dp[i][j-1],
the cell at the left. At the end, after filling dp matrix, the result that we are
searching for is the bottom right cell of dp, because it represents the number of paths
from top left cell to bottom right cell, so we return dp[n-1][m-1], and we got our
function. To better understand how it works, let's apply it on the example we've seen
earlier. So we have this matrix, and we create an n times m size matrix dp. We start with the
top left cell, and matrix[0][0] is not a wall, so we put 1. Now the first row, this cell is not
a wall, so it's equal to the previous cell, 1. Now this cell, it's a wall, we put 0. Now this cell,
it's not a wall, it's equal to the previous cell which is 0. Now last cell of the first row, not a
wall, it's equal to the previous cell which is 0. Now we go to the first column, this cell is not
a wall, so it's equal to the previous cell, 1. Same thing for this cell, we get 1. And
same thing for the last cell, we get 1. Now second row, the first cell is not a wall, so
it's equal to the cell at left + the cell at top, we get 1 + 1, which is 2. Same thing for the next
one, it's not a wall, we get 0 + 2, which is 2. Now this one is a wall, we put 0.
Last cell, not a wall, and 0 + 0 is 0. Third row, this cell is a wall, we put 0.
Now for this cell, we have 2 + 0, it gives 2. Now this cell, it's a wall, we put 0. Last cell,
we have 0 + 0, it's 0. We go on the last row, and this cell is not a wall, we have 0 + 1, which is
equal to 1. Next cell, we have 2 + 1, it's 3. Next cell, we have 0 + 3, it's 3. Last cell, we have
3 + 0, it's 3. And now we filled dp matrix, and we got that the number of paths from
top left cell to bottom right cell is 3. For the time complexity of this solution, we are
just traversing a matrix of n times m elements, so the time complexity is O(nm). And for the
space complexity, we used additional space to create a matrix of n times m elements, so
the space complexity is O(nm). I hope that you understood this lecture, check the code below,
ask me questions, and see you in the next one.
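The dynamic programming steps described above can be sketched as follows. The dp name comes from the lecture; the rest of the naming, and the test matrix below (reconstructed from the walkthrough's walls), are assumptions:

```python
def paths(matrix):
    n = len(matrix)     # number of rows
    m = len(matrix[0])  # number of columns
    dp = [[0] * m for _ in range(n)]
    # top left cell: exactly one path, unless it's a wall
    dp[0][0] = 0 if matrix[0][0] == 1 else 1
    # first row: same number of paths as the previous cell, unless it's a wall
    for j in range(1, m):
        dp[0][j] = 0 if matrix[0][j] == 1 else dp[0][j - 1]
    # first column: same reasoning
    for i in range(1, n):
        dp[i][0] = 0 if matrix[i][0] == 1 else dp[i - 1][0]
    # remaining cells: paths from the top + paths from the left
    for i in range(1, n):
        for j in range(1, m):
            dp[i][j] = 0 if matrix[i][j] == 1 else dp[i - 1][j] + dp[i][j - 1]
    # number of paths from the top left cell to the bottom right cell
    return dp[n - 1][m - 1]
```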
21. 20- Tree breadth first search II: Welcome to this new video where we will see a second variant of the "binary tree breadth first search" problem. In the first one, we were just asked to print
values level by level, but in this one, we want to get the values of each level as an
array, so given a binary tree of integers root, create a function that returns an array
where each element represents an array that contains the elements at the level i. For example, if we have a binary tree of 4
levels, the output will be an array of 4 arrays where the first one contains values of the
level 0, the second one contains values of the level 1, the third one contains values
of the level 2, and the fourth one contains values of the level 3. I hope that you tried to solve the problem,
so let's start. The difficulty of this problem is just to
know how to identify the level of the actual node, to be able to determine where we will
insert our value in the output. And learning this trick will give you the
possibility to solve many problems related to binary tree levels, this one, binary tree right side view, sum of levels in binary tree, and many other ones. If you remember what we did when we printed
values in BFS, you'd know that we used a queue to enqueue node's children, so we will do
the same thing, but we have to find a way to also enqueue the level. And in order to do that, we will mix three
things, the first one is the fact that the root represents the level 0, we already know
that the first node that we will enqueue has the level 0. Second one, we know that if a node is in the
level i, its children will be in the level i+1, we deduce that if we know the level of
a node, we can know the level of its children. Third thing, instead of enqueueing the node
only, we will enqueue an array of two elements, the first element represents the node, and
the second one represents its level. Let's now do this example to see how we can
join the things that we've talked about now. We first create a queue that contains the
root node, but also its level, which is 0. Now we dequeue it, and it's not null, so we
can insert it in our output. But our output array is still empty, so we
first create an array in our output, and we push root value, then we enqueue its children,
and we said that the level of a node's children is node's level + 1, so it's 0 + 1, which
is 1. The queue is not empty, we continue. We dequeue the 10, its level is 1, and because
we don't have an array at index 1 in the output, we create it, and we insert 10. Now we enqueue the children, and the level
is 1 + 1, which is 2. We continue, we dequeue the 3, and we already
have the array at index 1, so we append 3, then we enqueue children, their level is 2. We continue, we dequeue the 4, and we don't
have an array at index 2, we create it, and we insert 4, then we enqueue children. Now we dequeue the 6, we append it to the
array of its level, for example here its level is 2 so we append it to the array at index
2. Then we enqueue its children, and we continue. Now we dequeue, and we get null, we just skip. Here we dequeue 7, we push it into the array
at index 2, and we enqueue the children. Now we dequeue null, so we just skip. Here we dequeue 8, and we don't have an array
at index 3, so we create one, and we append 8, then we enqueue the children, their level is 4. We dequeue the 9, we append it into the array
at index 3, we enqueue children, and we continue. We dequeue the 1, we push it into the right
array, and we enqueue children. Now we dequeue the 2, we push, and we enqueue. Here we dequeue null, so we just skip. Now all remaining elements are null, so nothing
will happen, we don't even need an array at index 4 because we don't have values, we only
have null elements. So you can see that the trick is to always
pass level+1 when enqueuing, that's it, and you will see it in the code. For the code of this solution, we first define
the function, then we create our output array, at the beginning it's just an empty array
because we don't know how many levels there will be. And we also need the queue, which initially
contains an array of 2 elements this time, the root and 0, its level. We also don't forget the variable i to traverse
the queue, we said that we do that to avoid always removing the first element of the queue
array, which costs O(n). Now we can start, the loop condition is that
the queue still has elements that we didn't read, in other words, while i is smaller than
its length. Inside it, the element at the index i has
two elements, the node on the index 0 and the level on the index 1, so we put each one
of them in a variable to easily recognize them after, we also increment i, to move to
the next element for the next iteration. Now if the poppedNode is null, we just skip the iteration, so we use the continue statement. Else, here we don't print as we did in the
previous BFS problem, here we want to append it to the right array in the output. But before, we have to check if the array
at index level exists, and we can know that by comparing the current length of output,
in other words, how many arrays output has, with the current level, so if the length
is smaller than or equal to level, we create a new empty array that will store the elements
of the actual level. Now that we know that the array at index level
exists, we can append the poppedNode value. After it, we just have to enqueue the children,
so we first enqueue the left node, then the right node, and with both of them, we put
level+1, as explained earlier. And this loop gets repeated until it traverses
the whole tree, so after it, our output array will be ready to be returned, so we return it. For the time complexity, here we are just
traversing the tree, so it's O(n). And for the space complexity, we are using
a queue that can contain n elements, and same thing for the output array, so we have n +
n, 2n, we remove the constant, and we get a space complexity of O(n). I hope that you understood how this solution works, check the code below, and see you in the next one.
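As a reference, here is a minimal Python sketch of the solution above. The TreeNode class and the function name levelOrder are assumptions, since the transcript doesn't show the exact code:

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def levelOrder(root):
    output = []
    queue = [[root, 0]]  # each entry is [node, level], the root is at level 0
    i = 0  # read index, to avoid popping the queue's first element in O(n)
    while i < len(queue):
        poppedNode, level = queue[i]
        i += 1
        if poppedNode is None:
            continue  # null nodes are just skipped
        if len(output) <= level:
            output.append([])  # first value seen at this level
        output[level].append(poppedNode.val)
        # children of a node at level l are at level l + 1
        queue.append([poppedNode.left, level + 1])
        queue.append([poppedNode.right, level + 1])
    return output
```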
22. 21- Product of array except self: Welcome to this new video where we will see
the "product of array except self" problem. So what does the problem say? It says, given
an array of integers arr that has at least two elements, create a function that returns
an array output where output[i] represents the product of all elements of arr except
arr[i], and you are not allowed to use the division operator. I hope that you tried to
solve the problem, so let's start. For example in this array, we get this output, output[0]
is equal to 60 because it's the product of all elements of arr except arr[0], we have
5*3*4 which is equal to 60. Then output[1] is 24 because 2*3*4 is 24, then
output[2] is 40 because 2*5*4 is 40, and output[3] is 30 because 2*5*3 is 30. This problem would be really easy if we were allowed to use the division operator, because we would just have to get the product of the whole array, which is 120 in this example,
then we just have to divide by arr[i] to get output[i]. Here 120 divided by 2 is 60, 120
divided by 5 is 24, 120 divided by 3 is 40, and 120 divided by 4 is 30. But we're not
allowed to do that, so we have to find another way. The brute force solution of this problem
is that for each index i, we traverse the array again to calculate the product of all
elements except arr[i], so we first define the function, then we create our output array.
Then for each element in the array, we create a variable product initialized to 1 because
it's a product, and we use a nested for loop to traverse the elements. And if i is equal
to j, it means that we are on the current element in the first for loop, so we have
to ignore it, this is why we just continue. Else, we multiply the product by arr[j]. After
the loop, we now have the product of all elements except arr[i], and we append it into our output
array. At the end, we just return the output array. For the time complexity, we used two
nested for loops, and the array has n elements, so in this situation we get a time complexity
of O(n²). And for the space complexity, we used extra space for the output array that
will contain n elements, so the space complexity is O(n). But it's obvious that it's not the
best solution for this problem, so let's see another one. In reality what we are doing
is that for each element, we are calculating the product of the elements at its left, and
the ones at its right. For example if we take this element, the product from left is 4*1*2, it gives 8, and from right it's 5*2*2*3, it gives 60. And
the product of all elements except arr[i] is 8*60, which gives 480. In code we
would just have to modify the loop, inside it we create productFromLeft initialized
to 1, then we iterate over elements before the index i to get their product. Then we
create productFromRight initialized to 1, and we iterate over elements after the index
i, starting from the right. After it, we append the product from left multiplied by the product
from right. Okay but this solution doesn't fix the problem, because the time complexity
is still O(n²) because of the nested for loops. And in order to fix that, we will use
the cumulative product, because the product of elements until an index i from left is
just the product of elements until i-1 multiplied by arr[i-1], and we will use this relation
to get the cumulative products from the left, and the cumulative products from the right,
to avoid always calculating again. In this example, cumulative products from left we
have 1, the initial value of a product, then we have 4, 4, 8, 48, 144, 288, and 576. Then from the right, we have 1, the initial value of a product, then 5, 10, 20, 60, 360, 720, and 720. Now that we got both cumulative products
from left and from right, we can get the output values. And to do that, we were multiplying
product from left by product from right, and we will do the same thing here, for each index
i, output[i] is just equal to cumulativeFromLeft[i] multiplied by cumulativeFromRight[i]. So here
we will have 720, 2880, 1440, 480, 960, 1440, 1440, and 576. Now let's see the code, we
first define the function and we store the array length in a variable n, then we get
cumulative products from the left. And for that, we create an array of n elements, then
we set the first element to 1 as we did earlier, and we start iterating over elements starting
from the second one, where cumulativeFromLeft[i] is equal to cumulativeFromLeft[i-1] multiplied
by arr[i-1]. Now we get cumulative products from the right, and for that, we create an
array of n elements, and we set the last element to 1, because we will start from the right.
Then for each index starting from n-2, which is the index of the second to last element, we
set cumulativeFromRight[i] to cumulativeFromRight[i+1] multiplied by arr[i+1], +1 because we are
walking backwards. Now that we got both arrays, we can create the output array, we create
an array of n elements, then for each index i, output[i] is just the product of cumulativeFromLeft[i]
and cumulativeFromRight[i]. At the end, we return the output. For the time complexity,
here we have 3 for loops, but they aren't nested, so we have n + n + n, which is 3n,
we remove the constant, and we get a time complexity of O(n). And for the space complexity,
we have 3 arrays of n elements, n + n + n, 3n, we remove the constant, and we get a space
complexity of O(n). I hope that you understood the solution, check the code below, and see you in the next one.
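The cumulative-products solution above can be sketched like this (the function name is an assumption; the variable names follow the lecture, and the second test array below is reconstructed from the on-screen example):

```python
def productExceptSelf(arr):
    n = len(arr)
    # cumulativeFromLeft[i] = product of all elements before index i
    cumulativeFromLeft = [1] * n
    for i in range(1, n):
        cumulativeFromLeft[i] = cumulativeFromLeft[i - 1] * arr[i - 1]
    # cumulativeFromRight[i] = product of all elements after index i
    cumulativeFromRight = [1] * n
    for i in range(n - 2, -1, -1):
        cumulativeFromRight[i] = cumulativeFromRight[i + 1] * arr[i + 1]
    # output[i] = product of everything at the left and at the right of i
    output = [1] * n
    for i in range(n):
        output[i] = cumulativeFromLeft[i] * cumulativeFromRight[i]
    return output
```

On the first example from the lecture, [2, 5, 3, 4], this returns [60, 24, 40, 30].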
23. 22- Jump to last index: Welcome to this new video where we will see
the "jump to last index" problem. So what does the problem say? It says, given a non-empty
array of non-negative integers arr, where each element represents the maximum jump that
we can perform from that index, create a boolean function that checks if we can reach the last index starting from the first one. For example here, the first element is 3, it means that
we can jump to this index, to this index, and to this index. I hope that you tried to
solve the problem, so let's start. If we take this array, we cannot directly jump from the
first index to the last one, but what we can do is that from the first index to this one,
because we can jump by two indexes. Then we jump to this one, because we are allowed to
jump by 2. Same thing here, we jump by 2. And last step, we jump by 1 index and we reach
the last index. The first solution that we can find for this problem is to check if there's a path when we jump by 1 step, then by 2 steps, and so on until arr[i]. And as soon as we
find a path, we return true. And after checking all possibilities, if we didn't find a path in all of them, we return false. For example with this one, we check if there's a path
when we jump by 1 index, if there isn't, then we check if there's a path when we jump by
2 indexes, if there isn't, then we check if there's a path when we jump by 3 indexes,
and if there isn't, we return false, because we can jump by at most 3 indexes. Before applying
this method on an example, let's see the code. So we first define the function, then the base
case is when we are at the last index, it means that we reached it, so we return true.
And in the general case, we said that we try with each possible amount of indexes that
we can jump by. And the jump possibilities are between 1, when we jump by 1 index, and
arr[i] inclusive, and here we wrote +1 because the second parameter of range is exclusive,
so we add the +1 to reach arr[i], like the equivalent C-style for loop. And inside, if we can jump starting from the index i+j, we return true. i+j because i represents
the index of the actual element, and j the amount of indexes we will jump by. After the loop, if we couldn't find a path with any amount of jumps, so there is no way to reach
the last index, we return false. Now let's apply this solution on this example. The first
call goes on the first index. And now we try to jump by 1 index, but we have 0, it's not
the last index and we have no way to continue jumping, so we return false. Now we jump by
2 indexes. And now from here, we jump by 1 index, but it's 0 and we didn't arrive to
the last index, we return false and we backtrack. We jump by 2 indexes now. Now we jump by 1
index. Same thing here, by 1 index. And same thing here, we try with 1 index. And now we
arrived to the last index, we return true. Now in this call we found a way to reach the
last index, so it returns true. Same thing here, it returns true. And same thing here,
we get true. Also true here. Now we went back to the original call, we found a way to reach
last index, it's when we jump by 2 indexes, so we return true, which means that there
is a way to reach the last index. Now let's see this solution on an example where we have
no way to reach. The first call goes on the first index, the index 0. Now we try to find
a way when we jump by 1 index, so we call the function with i+1, which is 1. Same thing
here, let's try with 1 index jump. Same thing here, 1 index jump. And here we have 0, we
have no way to jump, the loop won't be executed at all, so we return false and we backtrack.
Now we jump by 2 indexes. Now we jump by 1 index. But here we have 0, and it's not the
last index, so we return false. Here we could only jump by 1 index and we didn't find a
path, we backtrack with false. Same thing here, we didn't find a way with neither 1
nor 2, we return false. Now let's jump by 2 indexes, but we have a 0, we return false. Now we jump by 3 indexes. Now we jump by 1 index, but we have 0 and we didn't reach last
index, we return false. Same thing here we backtrack with false. And same thing here,
we have no way, we backtrack with false. We went back to the original call, and now we
jump by 2 indexes. Now we jump by 1 index, but we have 0, we return false. We jump by
2 indexes, then we jump by 1 index, but we have 0, we return false. We also return false
here, and we also return false here, because we can't jump by more than 2 jumps. Now we try with 3 indexes jump, but we have no more possibilities to continue jumping, we return
false. Now with 4 indexes jump, then we jump by 1 index, but no more possibilities to continue,
we return false. Same thing here, we return false. Now we jump by 5 indexes, but we have
0 and it's not the last index, we return false. Now we tried with 1 index, with 2, with 3,
with 4, and with 5, but none of them gave us a way to reach last index, so we return
false, we have no way to reach the last index. For the time complexity, you have to know
that a way to go from the first to the last index is just a subset of arr, for example
we can have this way, this way, and this way, and many other ones. And as said in the "subsets
that sum up to k" problem lecture, an array of size n has 2^n subsets, this is why
our time complexity is O(2^n). For the space complexity, the maximum call stack size is reached when we are at the last element of the array, and the array has n elements, so
it's O(n). Now if we take the recursion tree of the last example, we can find a lot of
repeated function calls, so we will use dynamic programming to reduce time complexity. And
for that, we will use an array of booleans dp, where each element dp[i] says if there
is a way to reach the index i. Then at the end, if the last element of the array is true,
it means that we can reach the last index. For example with this array we've seen earlier,
we create an array dp of size n where all elements are initially false, because we don't
know yet if we can reach them. Then we set the first element to true, because we start at the first index, we are already on it. And now, we can jump by at most 3 indexes from
the first element, so we set the next 3 elements to true, we can reach them. Second element,
we have 0, we do nothing. Next element, 2, we set the next 2 elements to true. Next element,
0, we do nothing. Next element, 2, we set the next 2 elements to true. Next element,
1, we set the next element to true. Next element, 4, and we can see that 4 is greater than the
distance between the actual index and the last one, so we can directly jump from here
to the last index, we directly return true, we can reach last index. Let's apply it on
a second example. We have this array, we create our dp array, and we set the first element
to true. Now we have 2, we set the next 2 elements to true. Next element, we have 2,
we set the next 2 elements to true. Next element, we have 1, we set the next element to true. Next element, we have 0, we do nothing. And at the next element, dp is false, so we
can't even reach this index, which means that we can't reach the last index at all,
because if we can't even reach this one, it's impossible for us to reach the last one, so
we directly return false. Now let's see the code, we define the function, then we create our boolean array of n elements. Then we set the first element to true, and we start iterating
over elements. Inside the loop, we said that if dp[i] is false, it's impossible to reach
last index, so we directly return false. Else if we can directly jump to the last index,
we directly return true, we found the way. Else, we have the general case, we iterate
over indexes that we can jump to, and we set to true in our dp array, i+j for the same
reason I explained in the first solution. After the loop, as said earlier, the last
element of dp array says if we can reach the last index, we return it, n-1 is the index
of the last element. For the time complexity, you can see that we have two nested for loops, because in the general case, for each element, we have to use another loop to set the possible
indexes to true, which gives a time complexity of O(n²). And for the space complexity, we
are using the dp array that has n elements, so our space complexity is O(n). Even if we reduced the time complexity here, it's not enough, we want to find a solution that runs in linear
time. A solution that may come to your mind is to always jump by the maximum possible amount of indexes and see if we reach the last index, and it may work sometimes,
like with this array, we jump by 2, then by 3, then by 1, and we reached the last index. But it's not always the case, for example in this one, if we jump by 3 we arrive to
0, and we can't continue jumping, so the solution would return false, which is a wrong result
because we've seen earlier that there is a way, by jumping by only 2 indexes at the beginning,
so let's see another solution. The idea of this solution is that we take the one we've
just seen, the one that doesn't always work, but without ignoring in-between indexes. Because here,
for example for the first index, we directly jumped by 3 indexes further, because we assumed
that we can't do better when jumping by 1 or 2 indexes, which can be wrong as we've
seen with this array. So the idea now is to set a maxIndex variable, which represents
the maximum index that we can jump to until now. In this array, at the first element,
maxIndex will be replaced by 3, because we can directly jump to the index 3, but it doesn't mean that we will ignore the indexes 1 and 2, because they can provide a further maxIndex,
so we will pass through them anyway, and if we can jump to a further index from one of them,
it replaces maxIndex. We apply it on this example, the initial value of maxIndex is 0, the initial index. And we start, we have 3 greater than 0 so it replaces it, we now
found that we can jump until index 3. We continue, next element is 0, and 1 + 0 isn't greater
than 3, we do nothing. But from the next element, we can jump until index 4, because 2 + 2 is
4, so maxIndex becomes equal to 4. We continue, here 3 + 0 isn't greater, we do nothing. We
continue, and here 4 + 2 is 6, which is greater than 4, so we can jump until the index 6.
We continue, 5 + 1 isn't greater, we do nothing. Next element, we have 4, and 6 + 4 is greater
than 6, so it replaces maxIndex. And now maxIndex became greater than or equal to the last index, which means that we can reach it, the function returns true. Let's apply it now on an array
where we can't reach it, this one for example. The initial value is 0, and with the first element, the new value of maxIndex is 2. Next element, 1 + 2 is greater, so maxIndex becomes 3. Next element, it doesn't replace it. Next element, same thing. And with the next element, we have a problem: the index i is greater than maxIndex, which means that we can't even reach this index in the first place, so it's impossible to reach the last index, we return false. For the code of this solution, we first define the function, then we initialize maxIndex
to 0. Then for each index i, if i is greater than maxIndex, we return false as explained earlier. Otherwise, we replace maxIndex with the maximum between its current value and the furthest index that we can reach from index i, which is i + arr[i]. After the loop, if maxIndex is greater than or equal to the last index, it means that we can reach it and the function should return true, else it should return false, so we just return the comparison test, maxIndex greater than or equal to the last index. For the time complexity, here we are just traversing the array once, without a nested for loop, so it's O(n). And for the space complexity, we are only using simple variables, so it's constant, O(1). Before ending this video, I want to
show you another interesting solution, which is slower than this one, but it's just for information. The idea is to represent the indexes as a directed graph, where vertices are indexes, and edges indicate whether we can jump from one index to another. For example with this array, here are the nodes, they represent the different indexes. Then to set the edges, we check the indexes. For example for index 0, arr[0] is 3, so it can jump to the next 3 indexes: index 1, index 2, and index 3. Then for index 1, arr[1] is 0, so it doesn't jump to any index. Next index, 2, arr[2] is equal to 2, so it can jump to index 3 and index 4. And we continue like this until we get all the edges, and after that, we just have to check if there's a path from index 0 to the last index, by using breadth-first search traversal for example. But because we haven't seen how to traverse a graph yet in this course, I won't explain it now, I'll leave it for the graph traversal lecture. I hope that hearing the word "index" 138 times didn't make your brain explode, yes you can count if you want, and check the code below, and see you in the next lecture.
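The greedy maxIndex solution walked through above could be sketched like this in Python (the function name canReachEnd and the first example array [3, 0, 2, 0, 2, 1, 4] are reconstructed from the walkthrough, not given verbatim in the lecture):

```python
def canReachEnd(arr):
    # maxIndex is the furthest index we can jump to so far
    maxIndex = 0
    for i in range(len(arr)):
        # if i is beyond maxIndex, index i itself is unreachable
        if i > maxIndex:
            return False
        # from index i we can jump up to i + arr[i]
        maxIndex = max(maxIndex, i + arr[i])
    # we can reach the last index if and only if maxIndex covers it
    return maxIndex >= len(arr) - 1

print(canReachEnd([3, 0, 2, 0, 2, 1, 4]))  # True, as in the first walkthrough
print(canReachEnd([2, 2, 1, 0, 1]))        # False: index 4 is unreachable
```

The second array is only in the spirit of the lecture's failing example: maxIndex stalls at 3, so the loop hits i = 4 with i > maxIndex and returns false.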
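And for the curious, the graph-based idea mentioned at the end could be sketched as below; graph traversal is only covered later in the course, so treat this as a preview under my own naming (canReachEndBFS, adj), using a plain breadth-first search with collections.deque:

```python
from collections import deque

def canReachEndBFS(arr):
    n = len(arr)
    # build the directed graph: from index i we can jump to indexes i+1 .. i+arr[i]
    adj = {i: list(range(i + 1, min(i + arr[i], n - 1) + 1)) for i in range(n)}
    # breadth-first search from index 0, looking for the last index
    visited = {0}
    queue = deque([0])
    while queue:
        i = queue.popleft()
        if i == n - 1:
            return True
        for j in adj[i]:
            if j not in visited:
                visited.add(j)
                queue.append(j)
    return False
```

Building the edges alone costs more than the greedy solution (up to O(n²) edges), which is why this version is slower and shown only for information.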
24. 23- Graph depth first search: Welcome to this new video where we will learn
how to traverse a graph in depth first search. So given an undirected graph of integers graph,
represented by an adjacency list, and an integer root, crea