How To Write Multi-threaded C# Code With Locks And Synchronization | Mark Farragher | Skillshare



Mark Farragher, Microsoft Certified Trainer


Lessons in This Class

19 Lessons (2h 34m)
  • 1. Course Introduction (1:50)
  • 2. About this course (4:07)
  • 3. Introduction to threading (4:25)
  • 4. How to start a thread (8:54)
  • 5. Race conditions (3:50)
  • 6. Passing data into a thread (7:22)
  • 7. Waiting on a thread (5:43)
  • 8. Joining and suspending threads (5:05)
  • 9. Interrupting and aborting threads (7:43)
  • 10. When should you lock threads? (8:27)
  • 11. The lock statement (10:59)
  • 12. Dealing with deadlocks (7:28)
  • 13. Using the Interlocked class (10:46)
  • 14. Thread synchronisation with AutoResetEvents (15:18)
  • 15. How to build a Producer/Consumer queue (10:42)
  • 16. The ManualResetEvent class (11:59)
  • 17. The CountdownEvent class (12:57)
  • 18. Thread rendezvous (14:38)
  • 19. Course recap (1:54)


87 Students · -- Projects

About This Class


Do you know how to write robust multi-threaded C# code that does not crash?

Let's face it: writing multi-threaded code is hard. The sobering truth is that, unless you know exactly what you're doing, your code is pretty much guaranteed to crash in production.

Don't let this happen to you!

It doesn't have to be like this. If you have a good understanding of multi-threaded programming and follow a few simple industry best practices, you can write robust code that can take a beating.

I wrote a multi-threaded conversion utility a few years ago that successfully migrated 100,000 documents from SharePoint 2010 to SharePoint 2013. The program worked flawlessly the first time, because I implemented all of the best practices for writing asynchronous C# code.

Sound good?

In this course I am going to share these practices with you.

In a series of short lectures I will cover many multi-threading topics. I will show you all of the problems you can expect in asynchronous code, like race conditions, deadlocks, livelocks and synchronisation issues. I'll show you quick and easy strategies to resolve these problems.

By the end of this course you will be able to write robust multi-threaded C# code that can take a beating.

Why should you take this course?

You should take this course if you are a beginner or intermediate C# developer and want to take your skills to the next level. Asynchronous programming might sound complicated, but all of my lectures are very easy to follow, and I explain all topics with clear code and many instructive diagrams. You'll have no trouble following along.

Or maybe you're working on a critical section of code in a multi-threaded C# project, and need to make sure your code is rock-solid in production? The tips and tricks in this course will help you immensely.

Or maybe you're preparing for a C# related job interview? This course will give you an excellent foundation to answer any threading-related questions they might throw at you.

Meet Your Teacher

Mark Farragher

Microsoft Certified Trainer

Mark Farragher is a blogger, investor, serial entrepreneur, and the author of 11 successful Udemy courses. He has been a Founder and CTO, and has launched two startups in the Netherlands. Mark became a Microsoft Certified Trainer in 2005. Today he uses his extensive knowledge to help tech professionals with their leadership, communication, and technical skills.




Transcripts

1. Course Introduction: Let me ask you a question. Would you like to become a C# async architect? Okay, I made that up. There is no async architect; there is no official async architect role. But what I think that term means is that you are a C# developer who is really good at writing multi-threaded code, at writing asynchronous code. Because writing multi-threaded code is hard. Usually when you try to write multi-threaded code, the code immediately crashes in production, and then it takes you forever to debug. So learning the fundamental skills of multi-threading is super important, and that's exactly what I'm going to teach you in this course. In this course, we're going to focus on the fundamentals of threading. I will show you how to work with the Thread class, how to set up thread locking, and how to synchronize multiple threads so that they can exchange data. The course contains lectures, it contains quizzes to test your knowledge, and there is downloadable source code that you can check out. Plus, I have added a couple of coding exercises, so you will be writing code and testing your code against my solutions. So, are you interested in becoming an async architect? Then this is the course for you. I've created this course for junior developers, medior developers, and senior developers. It doesn't matter what your level is. As long as you're interested in writing bulletproof, rock-solid multi-threaded code, I will teach you exactly how to do that. This will be a huge boost to your career. So thank you for listening, and I hope I'll be seeing you in the course.

2. About this course: Hi, welcome to this course, How To Write Bulletproof Multi-threaded C# Code. In this course, I will teach you how to write bulletproof multi-threaded C# code by using threads, locks, and synchronization. About 10 years ago, I built a complicated multi-threaded C# application for a trade show, and I could never get it completely stable.
The sales team that demoed the application got quite used to having to restart the program a couple of times to get it to work. So what was the problem? Well, I wrote multi-threaded C# code in my program, but I had overlooked several critical sections in my code that were not thread-safe. In my tests, everything seemed fine, but in production the program behaved erratically. Does that sound familiar? I did eventually fix the problem, but I created this course to make sure this kind of thing will never happen to you. I will teach you how to write bulletproof multi-threaded code that works perfectly on your first try. I have set this course up so that you can follow it regardless of whether you are a beginner or an advanced C# programmer. I will start with an introduction to threading, which teaches you some basic knowledge about writing asynchronous code. Next is a section on the Thread class, the workhorse of multi-threaded code. I will show you how to create and start threads, how to pass data into a thread, and how to join, suspend, and abort them. Even though this section starts out very simple, you will already encounter some typical multi-threading problems. I will show you a very simple program with a for loop that does not behave like it should. Later in this section, I will show you how difficult it is to reliably wait for a thread to complete. I will end the section with a recap and a short quiz you can use to test your knowledge. If you fail the quiz, don't worry about it. Just re-read the corresponding lecture and try again. And if anything is unclear to you, feel free to reach out and contact me. I'm happy to answer any questions you might have. So, next up is thread locking. After witnessing a race condition in the previous section, I will show you how a so-called critical section in your code can solve this very problem. A critical section is a block of code that can only be accessed by one thread at a time.
I will discuss the when and why of thread locking, how the lock statement actually works, and what the performance implications of thread locking are. Again, I'll start with an introduction, then move on to the various subject lectures, and end with a recap and a quiz. By the end of the course, you will have learned all the common pitfalls of writing multi-threaded C# code. With this knowledge, you will be able to write bulletproof multi-threaded C# code. Okay, let's move on. In the next lecture, I'll introduce myself and talk a little bit about my background.

3. Introduction to threading: What is multi-threaded code? Let's take a look at a typical C# program. I can write the code down one line at a time, as a vertical stack of source code lines, like this. At any given point in time, a specific line of code is being executed. So, for example, right now the green line is being executed, and I can highlight this point of execution with an arrow. Then, a couple of nanoseconds later, the line is done and execution proceeds to the next line. When the code calls a method, the point of execution jumps into the method, runs that code, and returns back to the point of origin. So over time, the point of execution follows a kind of path through the code. A thread is nothing more than a specific path followed when executing a program. In this example, we have the main program thread, running in the Main method down here at the bottom, then calling into a method at the top and returning back into the Main method. This is why it's called a thread: if you visualize your program as a kind of fabric, then a thread would be a single independent path of execution through the fabric of your code. So far, this is all pretty straightforward. You are probably familiar with the concept of computer code, with a point of execution stepping through your code line by line, updating variables as it goes along. But here's the thing:
Why is there only one thread? Why can't we have more than one? For example, while the main program thread is doing its thing in the Main method, we could have another thread running a private method up here. It could be doing some kind of complicated calculation in the background, while the main program thread is busy updating the screen and handling user input. Then, when the complicated calculation is done, there could be some kind of mechanism where the calculation result is passed on to the main program thread for display. Well, the good news is that there is nothing stopping you from writing code like this. The C# language allows you to create as many threads as you like, and the .NET Framework provides a rich collection of classes for synchronizing threads and passing data between them. However, writing robust multi-threaded code is not easy. Accessing variables from more than one thread opens the door to a whole array of potential problems. You must be prepared for all kinds of strange consequences: a simple if-then-else statement that no longer works, variables randomly changing their value, and parts of your code that suddenly hang for no apparent reason. In this course, I am going to show you how to write multi-threaded code, but more importantly, I am going to show you how to write robust, bulletproof multi-threaded code. I will show you an array of techniques, including locking, signaling, and synchronizing, to keep the threads in line and make sure that your code executes in a predictable and safe manner.

4. How to start a thread: In this lecture, I am going to take a look at how to start a new thread by using the Thread class. As you've already seen in the introduction, a thread is an independent execution path, able to run simultaneously with other threads. A C# program starts in a single thread created automatically by the framework and operating system. This thread is called the main program thread.
The program is made multi-threaded by creating additional threads. Here's a simple example. The program starts here, in the static Main method. The program creates a new thread using the Thread class constructor. The constructor expects a ThreadStart delegate as a parameter, and this is the method that the new thread will run. An important thing to realize is that the thread will only run this single method. When the method completes, the thread automatically ends, and once ended, the thread cannot restart. So in this example, the new thread executes this loop here and simply displays the letter B 1000 times. But at the same time, the main thread is also running, executing this loop here and displaying the letter A 1000 times. So what output do you expect to see? Something like ABABAB? Let's find out. I'm running the program now, and here are the results. You can see that the A's and B's are clumped together in groups, and this is because threads are time-sliced. The operating system runs a given thread for a while, then suspends it and runs a different thread. Each run interval is called a time slice, which is the maximum time a thread can run uninterrupted. So in the output, each time slice is visible as a group of identical letters. Modern computers have multi-core processors that can actually run several threads at once. But at any given time, there are hundreds of active threads in the operating system, many more than the available number of CPU cores, so there is always a certain amount of time slicing going on. There are several ways to initialize a thread. You've seen the ThreadStart delegate for passing the starting method into the Thread constructor. However, we don't need to explicitly specify the ThreadStart delegate. The C# compiler is smart enough to infer the delegate from the signature of the start method itself, so simplified code will also work.
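The two-thread A/B demo described above might be reconstructed like this (the method name `WriteB` is my own; the course's on-screen code may differ):

```csharp
using System;
using System.Threading;

class Program
{
    // Runs on the new thread: print 'B' 1000 times.
    static void WriteB()
    {
        for (int i = 0; i < 1000; i++)
            Console.Write("B");
    }

    static void Main()
    {
        // The ThreadStart delegate is inferred from the method signature,
        // so new Thread(new ThreadStart(WriteB)) can be shortened to this:
        var thread = new Thread(WriteB);
        thread.Start();

        // Meanwhile the main thread prints 'A' 1000 times.
        for (int i = 0; i < 1000; i++)
            Console.Write("A");

        thread.Join(); // keep the program alive until the new thread ends
    }
}
```

Because of time slicing, the output shows runs of identical letters (AAAA…BBBB…) rather than a neat ABABAB interleaving, and the exact pattern varies from run to run.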
Here I pass the start method directly to the Thread constructor, without specifying that it is a ThreadStart delegate. The compiler figures that out all by itself, which makes the code a lot cleaner. Another simplification is to remove the start method and replace it with a lambda expression, like this. The entire thread start method is now an anonymous delegate. Again, this is not a problem. The compiler will figure out that the lambda expression matches the ThreadStart delegate, and it will make everything work. Each thread has a Name property that you can set. This is especially useful during debugging, because the thread name is displayed in the Threads window. You can set a thread name just once; attempting to change it later will throw an exception. So here is a program that sets up 10 named threads. When I run the program and then interrupt it in the debugger, I can open the Threads panel to take a look at all running threads. You can see the thread names appearing, and when I double-click on a thread, the debugger shows me which code is currently being executed by that particular thread. Finally, let's look at foreground and background threads. By default, any thread you create explicitly is a foreground thread. Foreground threads keep the application alive for as long as any one of them is running. Compare this with background threads: once all foreground threads finish, the application ends, and any background threads that are still running at that point will abruptly terminate. You can query or change a thread's background status by using the IsBackground property. So here's an example. This program starts a single thread that waits for the user to press Enter. Note that I set the IsBackground property to true, which means this will be a background thread. While this background thread runs, the main thread continues executing the Main method until it ends.
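The background-thread demo just described might look roughly like this sketch, which also shows the lambda and Name features mentioned above:

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // A lambda expression serves as the thread body; the compiler
        // infers the ThreadStart delegate automatically.
        var worker = new Thread(() => Console.ReadLine())
        {
            Name = "worker",     // shown in the debugger's Threads window;
                                 // can be set only once per thread
            IsBackground = true  // background: won't keep the process alive
        };
        worker.Start();

        // The main thread now runs to the end of Main. Because 'worker'
        // is a background thread, nothing keeps the process alive.
    }
}
```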
At this point, the program terminates and aborts the running background thread. When I run this program, you can see that it immediately terminates the background thread; waiting for a key press has no effect at all. Now let me change my code and create a foreground thread instead. I still expect the main thread to end, but at that point, the foreground thread will still be active, waiting for a key press. And because it is now a foreground thread, the program cannot end until I press Enter, thereby ending the foreground thread and allowing the entire program to end. So now when I run the program, you see that it does not immediately terminate. It continues to run, even though the main thread has already exited the Main method. Now, when I press Enter, the single remaining foreground thread can end, and the program ends. So what have we learned? You create a thread by calling the Thread constructor and specifying the method to execute. You can also specify a lambda expression to execute. Threads can have names to aid in debugging. Threads can be foreground or background threads, and an application cannot end until all foreground threads have ended.

5. Race conditions: Each running thread gets its own private stack, so all local variables are kept strictly separate. Take a look at the following code. I have a method here called DoWork that outputs five stars in a row. I call this method from a separate thread and, at the same time, from the main program thread. The loop variable i exists twice, once for each thread, so the threads will not interfere with each other. When I run the program, I get 10 stars as output, just as expected. Now watch this. I introduce a new class member, a static integer variable called i, and I modify the loop to use this shared variable instead. So now we have the interesting situation that both the created thread and the main program thread use the same variable i for their loop statements.
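A minimal reconstruction of this demo follows; the Thread.Sleep call is my addition to widen the timing window so the race shows up reliably, and is not part of the course's code:

```csharp
using System;
using System.Threading;

class Program
{
    static int i; // shared between both threads: the source of the race

    // Prints five stars when run by one thread at a time.
    static void DoWork()
    {
        for (i = 0; i < 5; i++)
        {
            Console.Write("*");
            Thread.Sleep(10); // widen the window so the race manifests
        }
    }

    static void Main()
    {
        var thread = new Thread(DoWork);
        thread.Start();
        DoWork();      // the main thread runs the same loop concurrently
        thread.Join();
        // Both loops advance the same i, so instead of the expected
        // 10 stars, typically only 5 or 6 are printed.
    }
}
```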
What do you think the output is going to be? Check this out. The output is only six stars. The reason for this is that both threads are incrementing the same variable. So instead of counting from 0 to 5, the loops are actually counting from 0 to 5 in steps of two, and the result is that only five or six stars are printed instead of 10. This is a classic example of what is called a race condition. Two threads are fighting for the same variable, and as a result, the code starts to behave in unpredictable ways. In this case, a simple for loop counting from 0 to 5 suddenly only iterated two or three times before finishing. The solution to race conditions is a technique called locking, but we'll get back to that in the next section. For now, let's just identify the problem. A race condition is when program execution no longer follows a predictable path, and as a result, code starts behaving in unexpected ways. Variables seemingly randomly change their values. For loops stop prematurely. Simple increment statements do nothing, or increment by two instead of one. All of these things can happen. Nine times out of ten, these race conditions occur because two or more threads are accessing the same variable. So the main takeaway of this lecture is: keep the data you share between threads to an absolute minimum. In the next section, I will show you how to use locking to safely share data between threads. But before we get to that, let's finish our exploration of the Thread class first.

6. Passing data into a thread: In the previous lecture, you learned that threads can access shared data, like class members, and that by accessing that data, you open up your code to a possible race condition. When two or more threads try to access the same variable, your code starts behaving in unpredictable ways. So is there a safe way to pass data into a thread? For example, initialization data when the thread starts.
In all the examples you've seen so far, I started a new thread with the ThreadStart delegate signature, which looks like this. This delegate defines a method with a void return type and no parameters. But what if I want to pass some kind of initialization parameter to the thread when starting? How would I do this? Well, fortunately, there is a second delegate that I can use for that: the ParameterizedThreadStart delegate, which looks like this. This delegate defines a single object parameter, which I can provide when starting the thread, like this. This method works fine, but it has two disadvantages: one, I can only provide a single parameter, and two, the single parameter has to be of type object. So here's an example. This is the same code we've seen before, with one thread writing the letter A and the other thread writing the letter B to the console. But instead of having two separate thread start methods, I have combined them into a single method with a character parameter that receives the letter to write to the console. I can now start both threads with different parameters. This simplifies the code a lot. The only disadvantage is that the thread parameter has to be an object, so inside the work method, I need to cast the object back to a character. But there is another way to pass initialization data to a thread, and that is to use a lambda expression that calls the desired method with the correct parameters filled in. Here is the example code again. The work method is still there, but now it expects an actual char parameter, and the thread is initialized with a lambda expression that calls the work method directly with the correct character. Lambda expressions make it super easy to initialize your threads with any kind of data you like. But you have to be careful: any variable you capture in the lambda expression is shared between the new thread and the main program thread.
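Both data-passing techniques can be sketched side by side; the method names `WriteChar` and `WriteCharTyped` are my own reconstruction:

```csharp
using System;
using System.Threading;

class Program
{
    // ParameterizedThreadStart requires an object parameter,
    // so the argument must be cast back to its real type.
    static void WriteChar(object arg)
    {
        char letter = (char)arg;
        for (int i = 0; i < 1000; i++)
            Console.Write(letter);
    }

    // The lambda approach can call a strongly typed method instead.
    static void WriteCharTyped(char letter)
    {
        for (int i = 0; i < 1000; i++)
            Console.Write(letter);
    }

    static void Main()
    {
        // Option 1: pass the argument through Start().
        var t1 = new Thread(WriteChar);
        t1.Start('A');

        // Option 2: capture the argument in a lambda expression.
        var t2 = new Thread(() => WriteCharTyped('B'));
        t2.Start();

        t1.Join();
        t2.Join();
    }
}
```

The lambda form avoids both limitations of ParameterizedThreadStart: any number of arguments, with their real types intact.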
Because the variable is shared, any modifications to it are also visible to both threads, which may lead to unexpected results. Take this code, for example. I'm initializing 10 threads in a loop and using a lambda expression to write each loop value to the console. So I expect each thread to output its own value: 0, 1, 2, 3, and so on. I'm going to run the code now. Watch this. Bet you didn't expect that. We have 10 numbers, but not all the unique values between 0 and 9 are there. What's going on here? The answer is that the main program thread and the 10 created threads all share the same variable i. So this is what happens. Let's say we are in the third loop iteration, with the variable i set to 2. A new thread gets created, but there is a slight delay before it gets executed, because the operating system is busy doing something else. In the meantime, the main program thread gets a chance to run, and it increments the variable i. And so, by the time we get back to our original thread, it writes the number 3 to the console, and not 2. That is why some numbers are missing from the output. There's a very neat trick to fix this problem. All I need to do is add a temporary local variable inside the loop, copy the value of i into it, and then use that variable in the lambda expression instead. I've made these changes, and now, when I run the program, I get the expected sequence of numbers. The reason this works is that the local variable is local to the loop. Each loop iteration creates a new local variable in a new, unique memory location, so all threads get their own unique variable to work with, and the program output is as expected. So, to summarize, the rules for passing data into threads are as follows. Use a ParameterizedThreadStart delegate and then provide an object argument to the Start method. But you can only provide one argument, and the argument has to be an object.
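The capture problem and its fix, described a moment ago, can be sketched like this (a minimal reconstruction, not the course's exact code):

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        var threads = new Thread[10];

        for (int i = 0; i < 10; i++)
        {
            int copy = i; // fresh variable per iteration: each lambda
                          // captures its own copy instead of the shared i
            threads[i] = new Thread(() => Console.Write(copy + " "));
            threads[i].Start();
        }

        foreach (var t in threads)
            t.Join();

        // Capturing i directly would let several threads read the same
        // (already incremented) value, so numbers would go missing.
        // With the copy, all values 0..9 appear, though the order varies.
    }
}
```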
The other option for passing data is to use a lambda expression to pass data into the thread's work method. But you must either pass in constants, or make sure you provide unique variables to each thread, because captured variables are shared with the main program thread.

7. Waiting on a thread: In this lecture, I want to show you another common problem with multi-threaded programming, and that is how to wait for a thread to complete. So let's try to tackle this problem. I am going to show you some code. Here is a program that starts a single thread that calls the DoWork method. Let's pretend this method does lots of complicated work and then finishes. Now, to communicate this fact to the main program thread, I use a shared Boolean member variable called finished. The DoWork method checks the finished variable, and if it is false, it sets it to true and writes the text "Finished" to the console. To make things more interesting, I also call the same DoWork method on the main program thread. So now there are two threads simultaneously checking the finished variable. The idea behind this code is that I want the finished section to run only once, regardless of how many threads are executing this method. I want this code to execute once. So let's run the program and see what happens. I'm running the program now, and here is the output. There we go. So I can use a shared Boolean member variable to signal that a thread has completed. Or can I? This might surprise you, but the code is actually not working at all. The fact that we see "Finished" appear only once in the output is a coincidence. When you take this code into production, your end users will sometimes see the word "Finished" appear twice. Don't believe me? I can demonstrate by making a small change to the code. I am going to edit this line here and change the order of the statements: first I write "Finished" to the console, and then I set the finished variable to true. Now let me run the program again.
Here we go, and check it out: "Finished" appears twice now. So what's going on here? The problem is again a race condition. There is a very small delay between the if statement checking the variable finished and the assignment setting the variable finished to true. So consider this scenario. The first thread checks the variable finished and discovers that it is false, so the thread proceeds to set the variable to true. But before it gets the chance to do this, the operating system preempts the thread, suspends it, and switches control to the other thread. By pure coincidence, this second thread is also about to check the variable finished. Because the first thread didn't get the chance to set the variable to true, the variable at this point is still false, and the second thread also proceeds to set the variable to true. The result: both threads write the word "Finished" to the console and set the variable finished to true. The chance of this particular event happening goes up when I increase the delay between checking the variable and setting it to true, and that's why I was able to trigger the event by putting the Console.WriteLine statement between the if and the assignment, because writing to the console is a very slow operation. So here's the rub. You might think that checking a variable and then immediately setting it is fast enough to make your code run reliably, but you would be wrong. There is always a small delay between the check and the assignment, and another thread can always sneak in between these two actions. The odds of this happening are pretty small, but believe me, it will happen in production. To avoid this problem entirely, you have two options. One: turn the check and the assignment into one atomic, indivisible operation. You will see in the next section how thread locking can make this happen. Or two: have one thread wait for the other thread to finish before doing the check.
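As a preview of the first option (the course covers the Interlocked class in a later lesson), the check and the assignment can be collapsed into a single indivisible operation; this sketch is mine, not the course's code:

```csharp
using System;
using System.Threading;

class Program
{
    static int finished; // 0 = false, 1 = true (Interlocked works on int)

    static void DoWork()
    {
        // CompareExchange atomically does: if finished == 0, set it to 1,
        // and return the old value. Only one thread can ever observe 0,
        // so no second thread can sneak in between check and assignment.
        if (Interlocked.CompareExchange(ref finished, 1, 0) == 0)
            Console.WriteLine("Finished");
    }

    static void Main()
    {
        var thread = new Thread(DoWork);
        thread.Start();
        DoWork();       // main thread races against the new thread
        thread.Join();
        // "Finished" appears exactly once, regardless of scheduling.
    }
}
```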
You will see in the next lecture how the Join method lets one thread wait for another to finish.

8. Joining and suspending threads: In the previous lecture, you learned that waiting for threads to complete is actually a complicated problem. The obvious way to do it is to use a shared Boolean variable, which you would set to true when the thread is finished. But checking and assigning a shared variable opens up your code to race conditions, which can lead your code to behave in unexpected ways. In the previous code example, a final block of code executed twice, even though I explicitly tried to prevent this. So one possible solution to this problem is to wait for the thread to finish. I start a thread and let it do its thing, and then I suspend the main thread until the other thread has finished. Once I know for sure that the thread is finished, I resume the main program thread and let it continue. There is a built-in method to do exactly that, and it is called Join. If you put this line of code in your program, it will suspend the main thread, wait until thread1 is finished, and then resume the main program thread. You can use this method to reliably ensure that a thread has ended before continuing your code. You can also include a timeout when calling Join, either in milliseconds or as a TimeSpan. It then returns true if the thread ended, or false if the join timed out. While the current thread is waiting for the other thread to end, it does not consume any CPU resources. So here is the code from the previous example again, the one that writes the word "Finished" to the console. I modified the code and added a Join statement here. So now the main program thread starts the second thread, then it waits for that second thread to finish, and then it continues to call the DoWork method again. Let me run the program. Here we go, and you see the word "Finished" appears only once.
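Reconstructed, the joined version looks roughly like this (member names are my own guesses at the course's code):

```csharp
using System;
using System.Threading;

class Program
{
    static bool finished;

    static void DoWork()
    {
        if (!finished)
        {
            finished = true;
            Console.WriteLine("Finished");
        }
    }

    static void Main()
    {
        var thread = new Thread(DoWork);
        thread.Start();

        thread.Join(); // suspend the main thread until 'thread' has ended
        DoWork();      // runs strictly after the other thread: no race

        // Join also accepts a timeout and reports whether the thread ended:
        // bool ended = thread.Join(TimeSpan.FromSeconds(1));
    }
}
```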
So with this setup, it is impossible for the word "Finished" to appear twice, because the main program thread is suspended while the second thread checks the variable, writes "Finished", and sets the variable to true. A race condition cannot occur. Of course, the disadvantage is that the program is now no longer really multi-threaded: the two calls to DoWork occur in sequence instead of in parallel. So this is a bit of a blunt solution to the problem, but it does work. In upcoming sections, I will show you a much better solution to this problem. So, when you join a thread, you suspend your own thread while waiting for the other thread to complete. But you can also suspend your own thread for a specific length of time, effectively putting it to sleep. The method to do this, perhaps unsurprisingly, is called Sleep. You can either provide a TimeSpan value or specify the number of milliseconds for the thread to sleep. While a thread is sleeping, it does not consume any CPU resources. So let's summarize what we have learned so far. To suspend the current thread until another thread has ended, use the Join method. To suspend the current thread for a specific length of time, use the Sleep method. Suspended threads do not consume any CPU resources.

9. Interrupting and aborting threads: You have seen in the previous lecture that threads can be suspended with the Sleep or Join methods. If you do not specify a timeout, the thread is blocked forever and will never be released. It can be useful to release a blocked thread prematurely, for instance when we need to end the application. Two methods accomplish this: Interrupt and Abort. Let's start with Interrupt. When you interrupt a thread, what happens depends on the state of the thread. If you interrupt a suspended thread, one that is executing an ongoing Join or Sleep statement, then the Join or Sleep will immediately abort with a ThreadInterruptedException.
You can catch this exception with a try/catch block in the thread code. If you do not handle the exception, the thread will end. But if you interrupt a thread that is not suspended, then nothing will happen. The thread will happily continue until it encounters the next Join or Sleep statement, and at that point the Join or Sleep statement will immediately abort with a ThreadInterruptedException. Again, you can either handle this exception in a catch block, or you can let the entire thread end at this point. So this sounds like a really useful feature. But keep in mind that arbitrarily interrupting a thread is dangerous. You have no way of knowing what the thread is doing at the time of the interrupt. If the thread is executing third-party code that was never designed to be interrupted, then objects could be left in an unusable state, or resources could be incompletely released when the exception is thrown. Interrupting a thread is usually unnecessary. If you are writing code that blocks, you can achieve the same result, and loads more safely, with your own signaling framework, using a shared boolean variable to signal the thread to end. And if you want to unblock third-party code, the Abort method is nearly always the better choice. So let's take a look at the Abort method. A blocked thread can be forcibly released via the Abort method. This has an effect similar to calling Interrupt, except that a ThreadAbortException is thrown instead of a ThreadInterruptedException. Furthermore, if you try to catch the exception, the catch block will run, but the exception will automatically be re-thrown at the end of the catch block, and this process will continue until there are no more catch blocks and the thread is finally aborted. You can cancel the abort by calling the ResetAbort method of the Thread class inside the catch block.
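A minimal sketch of interrupting a blocked thread, as described above (the printed message and timings are illustrative):

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        var worker = new Thread(() =>
        {
            try
            {
                // The thread blocks here; Interrupt() will wake it up
                // by throwing a ThreadInterruptedException.
                Thread.Sleep(Timeout.Infinite);
            }
            catch (ThreadInterruptedException)
            {
                Console.WriteLine("interrupted");
            }
        });

        worker.Start();
        Thread.Sleep(100);  // give the worker time to block
        worker.Interrupt(); // release the blocked thread
        worker.Join();
    }
}
```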
So the big difference between Interrupt and Abort is what happens when the method is called on a thread that is not blocked. When you interrupt a thread that is not blocked, nothing will happen. The thread will continue normally until it encounters the next Join or Sleep statement, and at that point the ThreadInterruptedException will be thrown. However, if you call Abort on a thread that is not blocked, then it will immediately throw a ThreadAbortException, regardless of what the thread is executing, and this can be a problem. For example, if the abort happens right when a file stream is being constructed, it is possible that an unmanaged file handle will remain open. However, there are two cases where you can safely use Abort. The first is when you are willing to dispose of the thread's entire application domain after it is aborted, by ending the corresponding process or application. A good example of this is a unit testing framework. If the abort happens to occur inside .NET Framework code, for example in the FileStream constructor, and you are left with an open file handle, this is no longer a problem, because the open file handle is in the app domain that you are disposing. The other case where you can safely call Abort is on your own thread, because in your own thread, you know exactly where you are and what the code is doing. The abort gives you a handy exception that is automatically re-thrown after every catch block, and will continue to do so until your current application thread terminates. So what have we learned? Let's summarize. The Sleep and Join statements suspend a thread, either indefinitely or until a timeout or external condition is met. Suspended threads can be released with the Interrupt and Abort methods. Interrupt is potentially dangerous because it might abort third-party code that is not designed for aborting. It is much safer to
build your own signaling framework, or use Abort instead. But Abort is also dangerous, because it might introduce a memory or resource leak. However, you can safely use Abort when either you dispose of the thread's entire application domain after aborting, or you use Abort on your own thread. 10. When should you lock threads?: in the previous section, Working with Threads, I showed you a couple of multi-threaded programs, and some of those programs used variables that were shared between individual threads. And sometimes this led to a specific multi-threading problem called a race condition. So let's revisit that problem for a second. Let's say I have two threads executing the following code. Initially, the variable i is zero. Then my two threads each call the DoWork method and increment the variable by one. So by the time they both end, I expect the variable i to hold the value two, right? Well, no. The problem is the i++ line. Even though this is a single line of C# code, behind the scenes this line consists of a number of steps to execute. First, read the current value of i. Second, load the constant one and add it to the current value. Third, write the result of the addition into the variable i. So consider this particular sequence of events. Thread one executes steps one and two. Then the operating system interrupts the thread and switches over to thread two. At this point the variable i still contains the value zero, even though thread one has the result of its addition ready in memory, waiting to be written back into the variable. So now thread two executes steps one, two and three. The variable i now contains the value one. Thread two ends, and the operating system switches back to thread one. Thread one executes the final step three and writes the result of the incrementation, which is again the value one, into the variable.
So now the variable i contains the value one, despite the fact that it has been incremented twice. This is called a race condition, and it happens because incrementing the variable is not an atomic operation. It can be interrupted halfway, leading to unexpected results. To solve the problem, all I need to do is convert these three steps (read the current value of i, load the constant and add it to the current value, write the result into the variable i) into one atomic operation that can never be interrupted by another thread. Now, a section of code that cannot be interrupted by another thread is called a critical section: a piece of code that only one thread may execute at a time. If a given thread is executing any line of code in the critical section, all other threads must wait until that thread has completed the section. Having other threads wait while a single thread is executing a critical section is also called thread locking, and it's what this entire section is all about. So when should you lock threads? The answer is very simple. You should lock threads every time two or more threads are reading and writing the same shared variable. If you do this, if you always lock threads in this scenario, you will eliminate 99% of all race conditions in your code. All you need to do is find all points in your code where you read from and/or write to the shared variable, and then you turn that code into a critical section. C# has a very handy built-in keyword called lock to do exactly that. So, for example, this single line of code, the familiar incrementation of the variable i, will become this. You should take special care when you compare and assign a variable, say, for example, this. You have to make sure that the entire compare-and-assign operation is contained inside the critical section. So the code should look like this.
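The transcript refers to on-screen code that is not included here. A minimal sketch of the two snippets described (an atomic increment, and a compare-and-assign wrapped in one critical section) might look like this; the variable and method names are assumptions:

```csharp
using System;

class Program
{
    static int i = 0;
    static readonly object syncRoot = new object();

    // The increment becomes an atomic operation: only one thread
    // at a time can be inside the lock block.
    static void Increment()
    {
        lock (syncRoot)
        {
            i++;
        }
    }

    // The compare AND the assign must live in the same critical
    // section, otherwise another thread could change i in between.
    static void CompareAndAssign()
    {
        lock (syncRoot)
        {
            if (i == 0)
            {
                i = 1;
            }
        }
    }

    static void Main()
    {
        CompareAndAssign();
        Increment();
        Console.WriteLine(i); // 2
    }
}
```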
Now, maybe your code is performing other actions between the compare and the assign, maybe something like this. So there is a long-running method being called after the compare and before the assign. In this scenario, you want to consider refactoring your code, because you do not want the other threads to be waiting for too long. So if there is no dependency between the long-running method and the variable i, you should pull the method call out of the critical section, so that it becomes this. Now the critical section only contains the compare and the assign operation. Then the critical section ends, all the threads get the chance to execute that section of code, and then the long-running method is called. So to summarize, when should you lock threads? You should lock threads every time you have two or more threads reading and writing the same shared variable. You should make sure that every variable access and assignment is protected with a critical section. You should make sure your compare-and-assign operations are entirely embedded in a single critical section. And you must keep critical sections short, so if you can, pull out any long-running methods. 11. The lock statement: in this lecture, I am going to show you how the lock statement works in C#. You've already seen the statement appear a few times in the previous lecture, when I showed you how to lock several fragments of code to avoid a race condition. Let's start with the following code. I have two shared integer variables up here, both initialized to one. Then there's the DoWork method here that checks value two, then divides value one by value two, and then sets value two to zero. By now, you should be alert to the possibility of a race condition in this code. If two or more threads call DoWork, then it's perfectly possible for one thread to set value two to zero just as another thread is busy executing the Console.WriteLine method. The result: a DivideByZeroException.
To fix this code, I need to make the check and the division one atomic operation by adding a lock statement around these lines of code. So let me do that right now. Now, the lock statement needs a synchronization object to work with. This can be any reference type; there are no restrictions. For now, I will simply add a private static class member of type object and use that for synchronization. There, that's it. The code is now guarded against a race condition. We say that the code is thread-safe, meaning it can safely be called from multiple threads simultaneously without crashing. Locking a section of code is a very fast operation on a typical modern CPU. The operation takes around 20 nanoseconds to complete. That's pretty fast, so there's no need to worry about the performance overhead of locking a section of code. The lock statement is what we call syntactic sugar. This means the C# compiler will actually expand the statement to a larger block of code, and lock is simply a convenience provided by the compiler, so we don't have to type all of that code every time. The code produced by the compiler is very simple, and you can easily type it by hand if you want. Here is the DoWork2 method with the code of the previous example, with the possible division by zero, but now with the expanded lock statement. As you can see, a lock is nothing more than a call to Monitor.Enter. The Monitor class in C# provides for critical sections. The Enter method enters the critical section, and the Exit method exits it. There's an extra boolean variable here called lockTaken. This variable acts as a signal to the finally block down here. If the monitor was entered successfully, the boolean will be set to true, and the finally block will exit the critical section. But if, for whatever reason, the critical section could not be entered successfully, then the boolean will be false and the finally block will do nothing.
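A sketch of the lock statement and the Monitor-based code the compiler expands it into; the field and method names are assumptions based on the lecture's description:

```csharp
using System;
using System.Threading;

class Program
{
    static int value1 = 1;
    static int value2 = 1;
    static readonly object syncRoot = new object();

    // The convenient form: lock is syntactic sugar.
    public static void DoWork()
    {
        lock (syncRoot)
        {
            if (value2 != 0)
            {
                Console.WriteLine(value1 / value2);
            }
            value2 = 0;
        }
    }

    // Roughly what the compiler expands the lock statement into.
    public static void DoWork2()
    {
        bool lockTaken = false;
        try
        {
            Monitor.Enter(syncRoot, ref lockTaken);
            if (value2 != 0)
            {
                Console.WriteLine(value1 / value2);
            }
            value2 = 0;
        }
        finally
        {
            // Only exit if we actually entered, to prevent a lock leak.
            if (lockTaken)
            {
                Monitor.Exit(syncRoot);
            }
        }
    }

    static void Main()
    {
        DoWork();  // prints 1, then sets value2 to zero
        DoWork2(); // value2 is now zero, so the check skips the division
    }
}
```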
This setup with the boolean field prevents a lock leak, where a critical section is entered but never exited because the corresponding Monitor.Exit is not called. The advantage of typing Monitor.Enter and Monitor.Exit directly is that I can now use other features of the Monitor class too. For example, there's a TryEnter method that I can use instead. TryEnter expects a timeout as the second parameter, either in milliseconds or as a TimeSpan value, and it returns a boolean: true if the enter was successful, and false if it was not and the operation timed out. Providing a timeout is very important, because you never know if the critical section will be released. If another thread is stuck in an infinite loop while holding the critical section, then your thread will wait indefinitely. The timeout will help you break out of this deadlock situation, but we'll cover deadlocks in greater detail in the next section. You've seen that the lock statement requires a synchronization object. I created a special private object variable for that purpose, but you can also lock on any reference type you like, including the this value, or perhaps even a Type object. So you might be wondering if there are any best practices in choosing the synchronization object, and in fact there are. You are advised to always use a private field as the synchronization object. The reason for this is simple. Consider for a moment that you lock on a public field. Now, because the field is public, another thread could also lock on that same field. Suddenly you have an unexpected dependency between two threads that might lead to both threads blocking and waiting for each other. This can easily happen when you lock on the this value. But when you lock on a unique private field that you create specifically for the occasion, then you prevent any other thread from locking on that same object. So now the unexpected dependency between threads can never happen.
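A sketch of Monitor.TryEnter with a timeout, as described above (the timeout value and messages are illustrative):

```csharp
using System;
using System.Threading;

class Program
{
    static readonly object syncRoot = new object();

    static void Main()
    {
        // Try to enter the critical section, but give up after 500 ms
        // instead of waiting forever.
        if (Monitor.TryEnter(syncRoot, TimeSpan.FromMilliseconds(500)))
        {
            try
            {
                Console.WriteLine("entered");
            }
            finally
            {
                Monitor.Exit(syncRoot);
            }
        }
        else
        {
            Console.WriteLine("timed out");
        }
    }
}
```

Here the lock is uncontended, so TryEnter succeeds immediately and "entered" is printed.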
It's just another safety net to protect your code from unexpected results. Finally, let me show you another capability of the lock statement: nested locks. You can arbitrarily nest locks inside each other, like this. The thread gains access to the critical section when the outermost lock succeeds, and each subsequent lock is simply stacked on top of the first one. The critical section is released when all stacked locks have exited. In terms of the Monitor class, this means you can have any number of Monitor.Enter statements, and the critical section is only unlocked when a matching number of Monitor.Exit statements have executed. Nesting locks is useful when you are calling a method from within a critical section. To demonstrate, here is the divide-by-zero code again, but this time I have put the actual division in another method. I can lock the original code in the DoWork method, but then call into another method from inside the critical section, and have that method also set up its own critical section. Both critical sections are stacked, so I can safely return from the innermost method while still retaining the lock. Only when I exit the outer critical section in the DoWork method is the lock released. So, in summary: don't worry about nesting locks, but just make sure you use the same synchronization object for each critical section. If you do, you can legally stack the critical sections, and the lock will not be released until you return from the first method. So what have we learned? The lock statement in C# is syntactic sugar for a Monitor.Enter and Monitor.Exit pair, and it sets up a critical section. The Monitor class also has a TryEnter method that supports a lock timeout value. The lock statement requires a reference-type synchronization object. You can use any object you like, but a unique private object field is recommended. You can nest lock statements; the critical section is unlocked only when you exit the outermost lock. 12.
Dealing with deadlocks: in the previous lecture, you've seen how to use the lock statement in C# to set up critical sections, which are blocks of code that can only be accessed by a single thread at a time. Thread locking eliminates race conditions and makes your code thread-safe, meaning it can safely be executed by more than one thread simultaneously. However, thread locking introduces an entirely new problem that we need to address: the possibility of a deadlock. A deadlock happens when two threads are waiting for each other indefinitely. You can get a deadlock when two threads need two resources, A and B, to proceed. The first thread grabs resource A using a lock and then attempts to grab resource B. But the second thread has already grabbed B and is now attempting to grab resource A. So both threads have each locked a single resource and are each waiting for the other thread to release the other resource, which will never happen. A nice way to visualize a deadlock is by looking at the dining philosophers problem, which was introduced by the famous computer scientist Edsger Dijkstra in 1965. The scenario is as follows: five philosophers are sitting at a round table. Each philosopher has a plate in front of him, and between the plates are single chopsticks, five in total, and in the center of the table stands a bowl of rice. Each philosopher will think for a given period of time, then attempt to grab two chopsticks, then eat for a given period of time, then put the chopsticks down and continue to think. So in this model, the philosophers represent the threads, and the chopsticks are the resources, like shared variables, that the threads are attempting to access through a lock. You can easily visualize a deadlock by imagining each philosopher picking up the chopstick on his left. Now all the chopsticks are gone, and each philosopher will wait forever for the chopstick on his right to become available. This is a deadlock.
Okay, so how do we fix this deadlock? Let's consider the following mitigation algorithm: if a philosopher cannot obtain the chopstick on his right, he puts down both chopsticks and tries again. We have now introduced a new problem, called a livelock. A livelock happens when the resolution to a deadlock results in the exact same situation all over again; the code keeps cycling through the same sequence of steps forever. Consider: all philosophers are holding a chopstick in their left hand. There are no chopsticks available on the right, so all philosophers put down their chopstick and try again, starting with the one on the left. This succeeds, and we're back where we started, with all chopsticks in the left hand of every philosopher and no chopsticks available on the right. If you didn't see that coming, don't worry about it. Resolving deadlocks is extremely difficult. It took 19 years for someone to provide an elegant solution to the dining philosophers problem. If you want to check it out, look up the Chandy/Misra solution on Wikipedia. For now, I will give you some helpful advice on how to avoid deadlocks. A quick and dirty solution to the dining philosophers problem is to introduce an element of randomness. If a philosopher cannot access both chopsticks, have him put both chopsticks down, think for a random number of milliseconds, and try again. The randomness will spread out the resource locks over time and allow a limited number of philosophers to access both chopsticks. We can implement this solution by using Monitor.TryEnter with a random timeout and, on failure, a Sleep statement with another random timeout before the thread tries again. Another solution is to use an arbiter, which you can visualize as a waiter. To pick up both chopsticks, a philosopher must ask the waiter for permission first, and the waiter will only grant permission to a single philosopher at a time.
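A minimal sketch of the random-backoff mitigation just described, acquiring two locks with Monitor.TryEnter and retrying after a random sleep on failure; all names and timeout ranges are illustrative assumptions:

```csharp
using System;
using System.Threading;

class Program
{
    static readonly object left = new object();
    static readonly object right = new object();
    static readonly Random random = new Random();

    // Try to take both locks; on failure, release everything,
    // sleep a random amount of time, and try again.
    static void Eat()
    {
        while (true)
        {
            if (Monitor.TryEnter(left, random.Next(10, 50)))
            {
                try
                {
                    if (Monitor.TryEnter(right, random.Next(10, 50)))
                    {
                        try
                        {
                            Console.WriteLine("eating");
                            return; // got both chopsticks, done
                        }
                        finally { Monitor.Exit(right); }
                    }
                }
                finally { Monitor.Exit(left); }
            }
            // Could not get both chopsticks: back off for a random time.
            Thread.Sleep(random.Next(10, 100));
        }
    }

    static void Main()
    {
        Eat(); // uncontended here, so this always succeeds
    }
}
```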
We can implement the waiter solution by introducing the waiter as another synchronization object. Before picking up both chopsticks, the thread attempts to get a lock on the waiter synchronization object. If the lock succeeds, the thread proceeds to lock both chopsticks. So to summarize: a deadlock is when two or more threads are waiting for each other indefinitely while attempting to gain access to two or more resources. You can mitigate a deadlock by using Monitor.TryEnter with a random timeout, and then sleeping for a random number of milliseconds if the enter fails. You can also use an arbiter, a shared synchronization object that each thread needs to lock first before gaining access to all of the resources. There are better solutions too, for example the Chandy/Misra algorithm, invented in 1984. Again, check Wikipedia if you are interested in this algorithm. 13. Using the Interlocked class: in the first lecture of this section, the one about when you should lock threads, we looked at this bit of example code. You learned that this code introduces a race condition, because the i++ operation is in fact three distinct micro-operations: first, read the current value of i; second, load the constant one and add it to the current value of i; and third, write the result of the addition back into the variable i. If another thread interrupts this process halfway, we can have two threads both incrementing the variable but writing the same new value into it twice. The result: two increment operations, but the variable is only incremented once. I can illustrate this effect quite elegantly with the following bit of code. Take a look at this. Now, you might be wondering: this looks a little different from the code examples we've seen so far, and the reason for this is simple. I've been using an Ubuntu virtual machine running MonoDevelop to demonstrate my multi-threaded code examples.
But a disadvantage of Ubuntu in a virtual machine is that running threads get very long time slices before they are interrupted by the operating system. Demonstrating a race condition in Ubuntu is really hard, because a thread often runs all the way to completion before the operating system switches over to the next thread. This puts threads in sequential order instead of executing them in parallel, and as a consequence, all race conditions disappear. That's very nice for program stability, but it makes it very hard for me to demonstrate locking techniques and race conditions. So for this demo, I'm going to use Xamarin Studio instead. Xamarin Studio is the Mac version of MonoDevelop, which runs natively on my MacBook Pro. The OS X operating system on my laptop is much more performant than a virtualized Ubuntu session, and as a result, threads under OS X tend to run in parallel most of the time. This exposes my code to a possible race condition, which I can resolve with good thread locking. So let's take a closer look at the program. This program declares a shared public variable called counter. The thread work method here will increment this counter one hundred thousand times. The main program method is down here. This loop sets up ten threads and starts them, then waits for all threads to complete with this loop of calls to the Join method. Once I know for sure that all threads have ended, I display the final value of the counter variable. So: ten threads, each incrementing the counter 100,000 times. I'd expect the end result to be one million, right? Fine, I will run the program now. Look at that. That's not one million. The reason for this behavior is that the ten running threads will encounter many race conditions, where two threads increment the counter simultaneously and the counter value is incremented only by one. Add all these race conditions together, and you get this number. We've already seen that the solution is very simple.
Just wrap a lock statement around the increment operation. So here I have a modified program that uses a lock statement in the DoWork method to make the increment an atomic operation that can only be executed by one thread at a time. I am using the best practice of declaring a private static synchronization object for the lock; it's this syncObject variable here. The rest of the code is exactly the same: still ten threads doing one hundred thousand increments each. So now, with the additional locking, I expect to get a final result of exactly one million. Let me run the program. And I was right: exactly one million. So in this version of the program, the increment is an atomic operation that can only be executed by one thread at a time. This completely eliminates all race conditions and gives the expected result of one million. But now look at the execution runtime here. This code is slower than the unlocked version. The reason is simple. The lock itself takes about 20 nanoseconds to complete, and this adds to the total run time. But the lock also makes all increments sequential, meaning they occur one after another. Compare that to a race condition, where two increments happen simultaneously. By eliminating all race conditions, I actually made the code slower, because I lost the benefit of having code run in parallel. It seems like I have no choice but to accept this drop in performance. I can't have race conditions disrupt my program results, and locking is the only way to get rid of them entirely, right? Well, fortunately, there's an alternative. It's true that locking gets rid of all race conditions, but the lock statement is overkill in this particular situation. Modern CPUs have built-in support for simple atomic operations like incrementing or decrementing a variable. The .NET Framework contains a special class that provides access to these atomic CPU operations, and the class is called Interlocked.
The Interlocked class has a number of methods for performing atomic operations on integers, including increments and decrements. So now I can modify my program, and instead of using a general-purpose lock statement, I change the code to use the Interlocked class instead. The result looks like this. If you look at the thread work method, you see that I now use the Interlocked.Increment method to directly increment the counter variable. Note that this method requires a ref parameter, because it's going to modify the value. The beauty of the Interlocked class is that it is more than twice as fast as a general-purpose lock statement. So I still get the benefits of a locked atomic increment operation, but the performance of my code will improve by at least a factor of two. So let me run the program. I still expect an output value of one million, but now I also expect a run time that is faster than my previous attempt. Let's check it out. And there you have it. The output is still one million, so no race conditions occurred, but now the runtime is significantly faster than with the general-purpose lock. Not bad at all. I made some measurements previously, without my screen recorder software running in the background, and I put them in a graph. Here are the results. I measured an unlocked run time of five milliseconds. Then, by adding the general-purpose lock, the code slows down to 80 milliseconds, which is 16 times slower. But when I used the Interlocked.Increment method instead of the general-purpose lock, this resulted in a run time of only 18 milliseconds: 3.5 times slower than the unlocked code, but 4.5 times faster than the general-purpose lock. So let me summarize what we've learned. An unprotected ++ operation will introduce a race condition. You can eliminate the race condition with a lock statement, but this will make your code 16 times slower. A much better alternative is to use the Interlocked.Increment method.
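A sketch of the Interlocked version of the counter program described above; the thread count and iteration count follow the lecture, while the other names are assumptions:

```csharp
using System;
using System.Threading;

class Program
{
    public static int counter = 0;

    // Each worker increments the shared counter 100,000 times.
    // Interlocked.Increment is an atomic CPU operation, so no
    // general-purpose lock statement is needed.
    static void DoWork()
    {
        for (int i = 0; i < 100_000; i++)
        {
            Interlocked.Increment(ref counter); // note the ref parameter
        }
    }

    static void Main()
    {
        var threads = new Thread[10];
        for (int t = 0; t < threads.Length; t++)
        {
            threads[t] = new Thread(DoWork);
            threads[t].Start();
        }

        // Wait for all ten threads to complete.
        foreach (var thread in threads)
        {
            thread.Join();
        }

        Console.WriteLine(counter); // 1000000
    }
}
```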
Now the code is only 3.5 times slower, and you still have the benefit of eliminating all race conditions. 14. Thread synchronisation with AutoResetEvents: in this lecture, I am going to cover a new multi-threading topic called thread synchronization. This is the act of synchronizing two or more threads together in order for them to exchange data. You've already seen an example of thread synchronization when we looked at the Thread.Join statement, which suspends the current thread until another thread has finished. This is a very simple form of thread synchronization: one thread waits until another thread has finished, and once that thread finishes, the other thread resumes. We can generalize this example to something more generic: thread synchronization is the act of suspending one thread until a certain condition is met in another thread. So why would you need thread synchronization? Well, let me show you an example. This is a very common situation in multi-threaded code. You have a main thread that launches a second thread to do some complex work in the background. The thread loops and produces a new result every few milliseconds. The main thread simply waits for the results to become available and picks them off one by one. You already learned in the previous section that you can set up a shared variable to pass data between threads, and to avoid a race condition, you need to make sure to lock the variable every time it is read or written to. So here is the code that implements my example. There is a shared variable up here, a simple integer, that I'll use to pass data between the worker thread and the main program thread. The worker thread method is over here, with a while loop that simply loops forever. During each loop iteration the thread does some work, which in this case is simply incrementing the variable, then the thread goes to sleep for one millisecond, and then it does the whole thing all over again.
The main program method is down here. The program sets up the thread and starts it, and then loops 100 times to collect the results and write them to the console. Let's pretend that the main program method also does a lot of other stuff, which I simulate with this Sleep statement here that suspends the thread for 10 milliseconds during each loop iteration. Now what do you expect to see when I run the program? If both threads line up perfectly, I expect to see the sequence 1, 2, 3, 4, 5, 6, etcetera. Let's run the program and check it out. That's not a very regular sequence, but it makes perfect sense if you think about it. The worker thread produces a new result every millisecond, but the main program thread only collects a result once every 10 milliseconds, so I lose roughly 10 results during each loop iteration. The problem here is that the two threads are not synchronized. Both threads are running freely, reading and writing into the same variable, and there is no guarantee that the result of one thread is being picked up by the other thread in time. So to fix the problem, we need to find a way to synchronize the two threads. What we need is some kind of simple communication channel between the two threads, something like this. The worker thread starts and performs the very first calculation. Then, instead of writing the result into the shared variable, it sends a message to the other thread, something along the lines of: are you ready to receive data? The worker thread then suspends itself until it receives an answer. The main program thread enters its own loop, and just before reading the shared variable, it sends a signal to the worker thread: yes, I am ready to receive data. The worker thread receives the signal, unsuspends itself, and writes the first result into the shared variable. The main program thread then reads the result from the variable, and the cycle continues in the next loop iteration.
The good news is that there's a class in the .NET Framework that provides this exact type of communication: the AutoResetEvent. You can visualize an AutoResetEvent as a turnstile, like you see in movie cinemas. One or more threads line up behind the turnstile, waiting to be let in, and the act of inserting a ticket lets a single thread through. A thread lines up behind the turnstile with a call to AutoResetEvent.WaitOne, and a call to AutoResetEvent.Set lets a single thread through. WaitOne and Set can be called from two different threads. I can implement the communication channel using an AutoResetEvent. The worker thread asks if the main program thread is ready by calling the WaitOne method on the AutoResetEvent. The main program thread, in turn, indicates it is ready to receive a result by calling the Set method on that same AutoResetEvent. So now the worker method patiently waits behind the turnstile until the main program thread inserts a ticket to indicate that it is ready. The turnstile then opens, allowing the worker thread to write a result into the shared variable. Let me change my program to implement this communication channel. So what I need to do is add a new AutoResetEvent to my code. Let's call it readyForResult. The worker thread will use this AutoResetEvent to ask if the main program thread is ready to receive a new result; if it is not ready, the worker thread will suspend. This corresponds to a call to WaitOne. So let me add that to the worker method. Just before writing a new result into the shared variable, I call the WaitOne method on the AutoResetEvent. This will ask the main program thread if it is ready, and suspend the thread if it is not. In the main program method, just before reading from the shared variable, I add a call to the Set method of the AutoResetEvent. This will indicate to the worker thread that the main thread is ready to receive data.
It effectively opens the turnstile, which unsuspends the work thread and allows it to write its result into the variable. These two simple modifications will allow the two threads to synchronize and effectively pass data between them, despite the fact that their loop timings do not line up. Let me run the program so you can see what happens now. And there you go: a perfect incrementing sequence of numbers. Problem solved!

Or is the problem really solved? Let me scroll back through the sequence of numbers, all the way to the beginning. Look at this: the sequence starts at zero, then jumps to two, and then increments by one as expected. What's going on at the beginning? What we're seeing here is another race condition. Until now, I've always assumed that the work thread starts right away and has a result ready before the main thread is able to pick it up. But in fact the reverse is also possible: the main thread signals that it is ready to receive the result, and then immediately reads the shared variable before the work thread has a result available.

So what we need is another communication channel. The work thread first asks the main thread if it is ready to receive a result, and suspends until it receives a confirmation. Then the work thread writes the result into the variable, and then it signals to the main thread that it has finished writing the result. The main thread signals to the work thread that it is ready to receive a result, and then it asks the work thread if it has finished writing the result, and suspends until it gets a confirmation. This bidirectional communication channel, with symmetrical signal and wait actions at both ends, is a very common programming construct. It sets up a very robust communication channel between threads. So let me modify my code to add this second channel. First, I need to add a second AutoResetEvent.
I will call this one resultSet, because it indicates that the work thread has set a new result in the shared variable. Next, I need to modify the work method. After the work thread writes the result into the shared variable, it needs to signal to the main program thread that it has done this, so I'll add a call to Set here, using the new resultSet variable. In the main program thread, I also need to make a change. After the main method signals to the work thread that it is ready to receive a new result, it needs to wait until this new result becomes available. I will do that by calling WaitOne again, using the new resultSet variable. And that's it. These two simple modifications set up the second communication channel and create a robust data channel between the two threads. Let me run my code to see what happens now. Here we go. And there you have it: a perfect numerical sequence. And now, when I scroll back all the way up to the beginning of the output, you can see that the sequence starts with 1, 2, 3, 4, et cetera. Perfect.

So what have we learned? If you want to safely pass data between two threads, locking is not enough; you also need to synchronize the threads. A simple form of thread synchronization is the Thread.Join method, which suspends one thread until another thread finishes. More complex synchronization can be created using the AutoResetEvent class: a call to WaitOne suspends a thread, and a call to Set resumes the thread. For a robust communication channel, you need at least two AutoResetEvents, with calls to WaitOne and Set at both ends.

15. How to build a Producer/Consumer queue

In this lecture, I am going to show you a very common pattern in multi-threaded programming called the producer/consumer queue. It looks something like this. The idea behind the queue is that we have a large number of tasks that need to be executed asynchronously in the background.
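Before moving on to the queue, here is the complete two-channel handshake from the previous lecture as a minimal, self-contained sketch. The field and method names (readyForResult, resultSet, Work) are illustrative, not necessarily the course's exact code:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class Handshake
{
    static int sharedResult;
    static readonly AutoResetEvent readyForResult = new AutoResetEvent(false);
    static readonly AutoResetEvent resultSet = new AutoResetEvent(false);

    static void Work()
    {
        for (int i = 1; i <= 5; i++)
        {
            readyForResult.WaitOne();  // channel 1: wait until the reader is ready
            sharedResult = i;          // write the result into the shared variable
            resultSet.Set();           // channel 2: signal that the result is written
        }
    }

    public static List<int> Run()
    {
        var collected = new List<int>();
        var worker = new Thread(Work);
        worker.Start();
        for (int i = 0; i < 5; i++)
        {
            readyForResult.Set();      // channel 1: signal readiness to receive
            resultSet.WaitOne();       // channel 2: wait until the result is available
            collected.Add(sharedResult);
        }
        worker.Join();
        return collected;              // always 1, 2, 3, 4, 5: no lost or skipped values
    }

    static void Main() => Console.WriteLine(string.Join(",", Run()));
}
```

Because each thread signals one event and waits on the other, neither side can race ahead: the worker cannot overwrite an unread result, and the reader cannot read a stale one.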
We have a main program thread that sets up all the work that needs to be done, and one or more consumer threads that do all the work in the background. So this pattern sets up a thread-safe queue of tasks. The main program thread adds new tasks to the queue one by one. Any time a new task is added to the queue, one of the consumers wakes up, removes the task from the queue, executes it, and then goes back to sleep until a new item is added. The nice thing about this pattern is that it is very scalable. You can create a queue with only one producer and one consumer thread, but you can also have 10 or maybe even 100 consumer threads; the code stays exactly the same. This makes it very easy to accommodate an increasing workload without having to completely refactor your code.

So I am going to have two or more threads reading and writing the same shared task queue. In the locking section, I showed you that this will introduce a race condition unless I take care to lock every read and every write operation. So to implement the queue, I am going to need a synchronization object and a lock statement every time I access the shared queue. But I will also need something else. I want all consumers to sleep until a new task is added to the queue. When I add a task, I want only one consumer to wake up and execute the task. All other consumers should remain sleeping, because we only need a single consumer to execute the task. Sounds familiar? A kind of gate that blocks threads and then, when signalled, lets a single thread through? Exactly: this is what we covered in the previous lecture, and an AutoResetEvent is the perfect candidate to send a signal to the consumers that a new task is available. So I am also going to need an AutoResetEvent in my code to implement this communication channel.

I've already prepared some code that sets up a producer/consumer queue with one producer and three consumers. Let's take a look. Here at the top of my program are the field declarations.
Let's walk through them one by one. I start by declaring a list of threads. The program sets up three consumers, so I will have three threads in this list. When the program is running, it's useful to have access to all the threads in case I want to shut down the program: I can loop through the list and call Join on every running thread to make sure they all finish gracefully. My next field is the task queue itself. I use the handy Queue class from the generic collections namespace, and I use an Action delegate for the element type. The Action delegate is a predefined type that describes a method call without parameters that does not return a result. So each task in the queue is actually a method delegate that I can call directly to execute the task. Next up is the synchronization object for the queue. Since I'm going to be accessing the queue from more than one thread, I'm going to need lock statements around each read or write operation, and these locks need a synchronization object; that's why this variable exists. Next is the AutoResetEvent to signal to the consumers that a new task is available. The consumers will wait on this signal, and the producer will call the Set method every time it adds a new task. And finally, I have this extra synchronization object here. I need this because I added a cool feature to this program: each of the three consumers will output its work in a unique color, red, green, or blue. But unfortunately the console is not thread-safe, so every time I want to change the console text foreground color, I need to lock that operation too. So this is the synchronization object for locking the console color.

Next up are the methods. This is the EnqueueTask method, which adds a new task to the queue. You can see that it is simply a locked call to the Enqueue method of the Queue class. And then I set the AutoResetEvent to signal to the waiting consumers that a new task is available.
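The structure described so far, the locked queue, the synchronization object, the enqueue method that sets the AutoResetEvent, plus the consumer loop covered next, can be put together into a minimal self-contained sketch. The names (EnqueueTask, Consume) and the timeout-based shutdown are my own, not necessarily the course's exact code:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class TaskQueue
{
    readonly Queue<Action> queue = new Queue<Action>();   // the shared task queue
    readonly object queueLock = new object();             // synchronization object for the queue
    readonly AutoResetEvent newTask = new AutoResetEvent(false);
    readonly List<Thread> consumers = new List<Thread>();
    volatile bool quit;

    public TaskQueue(int consumerCount)
    {
        for (int i = 0; i < consumerCount; i++)
        {
            var thread = new Thread(Consume);
            consumers.Add(thread);
            thread.Start();
        }
    }

    public void EnqueueTask(Action task)
    {
        lock (queueLock) queue.Enqueue(task);  // locked write into the shared queue
        newTask.Set();                         // wake up a single sleeping consumer
    }

    void Consume()
    {
        while (true)
        {
            Action task = null;
            lock (queueLock)                   // locked read from the shared queue
                if (queue.Count > 0) task = queue.Dequeue();
            if (task != null) { task(); continue; }
            if (quit) return;                  // queue drained and shutdown requested
            newTask.WaitOne(100);              // sleep until signalled (timeout so we re-check quit)
        }
    }

    public void Shutdown()
    {
        quit = true;                           // ask the consumers to finish up
        foreach (var t in consumers) t.Join(); // wait for them to drain the queue and exit
    }
}

class Program
{
    public static int RunDemo()
    {
        int executed = 0;
        var queue = new TaskQueue(3);          // one producer (this thread), three consumers
        for (int i = 0; i < 20; i++)
            queue.EnqueueTask(() => Interlocked.Increment(ref executed));
        queue.Shutdown();
        return executed;                       // all 20 tasks ran before Shutdown returned
    }

    static void Main() => Console.WriteLine(RunDemo());
}
```

Note that a consumer only exits when the queue is empty and quit has been set, so Shutdown never returns before every enqueued task has executed.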
Next up is the work method for the consumers. The consumer first locks the queue and then attempts to get a new task from the queue. If that succeeds, the consumer locks the console, sets the text foreground color, and then executes the task. If there is no task available, the consumer waits with a call to the AutoResetEvent's WaitOne method. The main program method is down here, and this is the producer part of the program. The method first sets up three consumers with a unique color for each one. Then it starts the consumers and proceeds to add tasks to the queue. Each task simply outputs a random number between zero and nine to the console. After adding a task, the producer sleeps for a random time interval to simulate a high production workload.

Okay, let me run this program so we can see if everything works. Here we go. That looks good, doesn't it? Right now the producer is adding a new random-number task to the queue roughly every second, and you can tell from the colors that all three consumers are picking up the tasks as they become available and executing them. The colors are evenly distributed over all tasks. Let me run that for a couple more seconds.

Okay, so what have we learned in this lecture? A common pattern in multi-threaded code is the producer/consumer queue. This pattern supports multiple producers and multiple consumers, and can easily scale up to high workloads. To build a producer/consumer queue, you will need one thread-safe queue of tasks, protected with lock statements, and one AutoResetEvent to signal the consumers that a new task is available. And finally, as an aside, you saw that the console is not thread-safe. If you change console settings in a thread, you will need to secure that operation with a lock statement.

16.
The ManualResetEvent class

In the last lecture, you saw how easy it is to build a producer/consumer queue in .NET. All you need is a thread-safe queue of tasks, which I created with a generic Queue of Action delegates, and a single AutoResetEvent to notify consumers that a new task is available. So far, we have only been using AutoResetEvents to synchronize two or more threads. An AutoResetEvent functions as a turnstile: when closed, one or more threads line up behind the turnstile, waiting to be let in. When the turnstile is opened with a call to the Set method, a single thread is let through and the gate immediately closes again. So to let multiple threads through, I would have to call Set repeatedly, once for every waiting thread. An AutoResetEvent therefore functions as a kind of synchronization channel between only two threads: one thread is waiting because of a call to WaitOne, and a second thread opens the gate with a call to Set. In the case of the producer/consumer queue, this was exactly what we needed. When a new task becomes available, we want only a single consumer to wake up and execute the task. If there are three suspended consumer threads, it's perfectly fine if two threads remain suspended and only a single thread wakes up to do the work.

But what if I want to synchronize all consumers? Let's say I want to enhance the producer/consumer queue with a pause function. When a producer pauses the queue, all consumers should finish the task that they are working on and then suspend themselves until the queue resumes. When a producer resumes the queue, all consumers should wake up and continue their work. You can visualize this new functionality like this: I add a gate to each consumer's do-work method. Initially, the gate is open, so the consumers pick up available tasks like normal. But when a producer sends a signal, the gate closes; the consumers finish whatever they are working on and then suspend at the closed gate.
Whenever a producer sends the resume signal, the gate opens and all consumers resume work. Now look what happens when I try to implement this with an AutoResetEvent. Initially, the gate is open, but as soon as the first consumer passes through the open gate, it automatically closes. Remember, it's called an auto reset event: when the first consumer passes the gate, it automatically resets and closes, so only a single consumer can execute tasks. With the gate closed, I have another problem. When a producer tries to resume the queue by calling the Set method, only a single consumer wakes up; the other two gates remain closed. Even worse, after that single consumer passes through the open gate, it again closes automatically. So I think you get the point: an AutoResetEvent will not work here.

Let me show you the pause functionality using an AutoResetEvent, so you can see for yourself that it doesn't work as intended. Here is the code from the previous lecture again that implements the producer/consumer queue. I have added a new AutoResetEvent here, called pauseConsumers. You can see that I construct the field with a true parameter, which means that the gate will initially be open. The next change is in the consumer do-work method: at the top, here is the call to the new pauseConsumers wait handle, so when the gate is closed, the consumers will suspend at this line. The final change is in the main program method. Here at the bottom, I've added code that pauses and resumes the queue when you press a key, and I added a Boolean variable called consumersPaused to keep track of the state of the queue.

Now, when I run the program, you can see that nothing happens. A single consumer passed the gate, so it automatically closed. Then the consumer discovered that there was no work to be done in the queue. And now all three consumers are suspended, and absolutely nothing happens. When I press a key, the gate closes, but it was already closed.
So again, absolutely nothing happens. When I press the key again, the gate opens, but because I used an AutoResetEvent, only a single consumer can execute a single task before the gate automatically closes again. So only one number appears, and nothing else happens.

Okay, so let's fix this code. What I need is a gate that does not automatically close all the time, and fortunately such a gate exists. It is called a ManualResetEvent. A ManualResetEvent can be either open or closed. When closed, it allows waiting threads to queue up behind the gate. But when the gate is opened with a call to Set, all threads are let through at the same time, and the gate remains open until someone manually calls the Reset method to close the gate again. For our pause/resume functionality, this is perfect. Initially, the gate is open, but now when the first consumer passes the gate, it will remain open for all other consumers too. So the open ManualResetEvent will have no impact on the behavior of the consumers. When a producer calls Reset, the gate closes and all consumers suspend, just like with the AutoResetEvent. But now, when a producer resumes the queue with a call to Set, the gate opens for all consumers, not just a single one. Every consumer will resume work.

So I'm going to make changes to the code to implement the pause/resume functionality correctly. The only change I need to make is to replace the AutoResetEvent with a ManualResetEvent; everything else is already in place. So let me make those changes. And now when I run the program, you see that it starts up correctly. The producer is producing work, and the three consumers are picking up the tasks one by one, as normal. Now, when I press a key, you see that the consumers pause and all activity stops. The producer is still running in the background, adding more tasks to the queue, so I shouldn't wait too long before resuming.
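The essential ManualResetEvent behavior, a single Set call releasing every waiting thread while the gate stays open until Reset, can be demonstrated in isolation. This is a small sketch with illustrative names, separate from the queue program:

```csharp
using System;
using System.Threading;

class GateDemo
{
    static readonly ManualResetEvent gate = new ManualResetEvent(false); // start closed
    static int released;

    public static int RunDemo()
    {
        var threads = new Thread[3];
        for (int i = 0; i < 3; i++)
        {
            threads[i] = new Thread(() =>
            {
                gate.WaitOne();                      // all three threads block here
                Interlocked.Increment(ref released); // runs once the gate opens
            });
            threads[i].Start();
        }
        gate.Set();                                  // ONE call releases EVERY waiter,
                                                     // and the gate stays open afterwards
        foreach (var t in threads) t.Join();
        return released;                             // 3: nobody stayed blocked
    }

    static void Main() => Console.WriteLine(RunDemo());
}
```

With an AutoResetEvent in place of the ManualResetEvent, the single Set call would release only one of the three threads and the program would hang on Join.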
When I press the key a second time, the consumers resume their work. You can see that one consumer executes all tasks, one after another, until the queue is completely empty; only then do the other consumers join in and start executing new tasks. This behavior is a side effect of how the queue is designed: if there is a backlog of work, a single consumer will execute the entire backlog, because the other consumers are still suspended and will only be woken up by new tasks. I could make further tweaks to the code to avoid this behavior, but that's outside the scope of this lecture.

Okay, so what have we learned? An AutoResetEvent is a synchronization channel between two threads: one waiting thread resumes when another thread opens the gate. To synchronize an entire group of threads, use a ManualResetEvent instead: now all waiting threads resume simultaneously when a single thread opens the gate. AutoResetEvents are perfect for instructing a single thread to do something, in our case to execute a new task. ManualResetEvents are perfect for instructing all threads to do something, in our case to pause and resume work.

17. The CountdownEvent class

Okay. So far, you have seen two types of reset events: the AutoResetEvent, where one thread signals another thread to do something (in the producer/consumer queue example, I used an AutoResetEvent to signal to consumers that new work has arrived), and the ManualResetEvent, where one thread signals a group of threads to do something. I used this reset event to add a new feature to the queue, where pressing a key either suspends or resumes all consumers. But you might be wondering if there is a class for the opposite behavior of a ManualResetEvent: a kind of event where a group of threads signals a single thread to do something. So here is how that might work. We can again visualize this reset event as a turnstile. Initially, the gate is closed, and threads queue up waiting to be let through.
However, the big difference with a ManualResetEvent is that the gate does not open when another thread inserts a ticket. Instead, the gate requires more than one ticket to be inserted, and we can configure how many tickets the gate requires during initialization. So let's say we require three tickets. The gate is closed and threads are lining up, waiting to be let through. Another thread inserts a ticket, but nothing happens: the gate requires three tickets, so there are still two more to go. Another thread inserts a ticket, and then another. When the third ticket is inserted, the gate opens and all waiting threads are let through.

You'll be happy to hear that there is a class for this specific behavior, but it has only been introduced in the .NET 4.0 Framework, so make sure you are using this framework version or later. The class is called the CountdownEvent. Here's how it works. You initialize a CountdownEvent and specify the ticket count in the constructor. A thread lining up behind the gate calls the Wait method to request to be let through, and other threads insert tickets into the gate with a call to the Signal method. When Signal has been called the correct number of times, the gate opens and all waiting threads are let through. And to close the gate, a thread can call the Reset method; this resets the ticket counter back to the original value. The CountdownEvent has a couple more features. You can actually increase the ticket count, as long as it has not yet reached zero, with a call to AddCount. However, if the count has already reached zero, this will throw an exception. A safer way is to call TryAddCount instead; this will attempt to increase the ticket count, and return false if the count has already reached zero. So the thing to remember is that AddCount will never close an already opened gate. The only way to increase the ticket count of a CountdownEvent that has already reached zero is by calling Reset; this closes the gate and resets the ticket count back to the original value.

Okay, so let's put the CountdownEvent to good use. As I explained already, a CountdownEvent can be used in scenarios where a group of threads needs to signal a single thread that something has happened. A good use for a CountdownEvent is to signal threads to quit, if we do not want to use Thread.Join. In this scenario, the CountdownEvent gets initialized with the total number of running background threads. The main thread sends a quit signal, for example by setting a shared Boolean to true. The background threads pick up on this signal and terminate one by one, each calling the Signal method before ending. The main thread can simply wait on the CountdownEvent: when all threads have quit, the gate opens and the main thread resumes.

I have modified the producer/consumer code to implement this behavior. Take a look at the following code. I have removed the pause/resume functionality of the previous lecture to make the code easier to read. Let's start by looking at the static fields. These first four fields have not changed: the list of consumers, the task queue, a synchronization object to lock the queue, and an AutoResetEvent to signal to consumers that a new task is available. But down here is where I added several new fields. First, quitConsumers: this is a new CountdownEvent that will be used by the consumers to indicate that they have quit. If you remember, I have three consumers, each using a distinct color to display their work on the console. So here I am initializing the CountdownEvent to three, which corresponds to the total number of consumers executing tasks in the queue. Next is a Boolean flag called quitRequested. I will use this flag to signal to the consumers that I want them to quit.
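Setting the queue aside for a moment, the bare CountdownEvent mechanics, Wait blocking until Signal has been called the configured number of times, can be sketched like this (a minimal illustration, not the course's code):

```csharp
using System;
using System.Threading;

class CountdownDemo
{
    public static bool RunDemo()
    {
        var allDone = new CountdownEvent(3);  // the gate requires three "tickets"
        for (int i = 0; i < 3; i++)
        {
            new Thread(() =>
            {
                // ...do some background work here...
                allDone.Signal();             // insert one ticket before ending
            }).Start();
        }
        allDone.Wait();                       // blocks until the count reaches zero
        return allDone.IsSet;                 // true: all three threads have signalled
    }

    static void Main() => Console.WriteLine(RunDemo());
}
```

This is exactly the shape of the quit scenario: each background thread signals once as it terminates, and the waiting thread resumes only after the third signal.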
This flag will be shared between the consumer threads and the main program thread, so I need to lock all access to it to prevent a race condition. The quitLock field is the synchronization object for this lock operation. Now let's look at the methods. The do-work method has been modified too. Here at the top, each consumer now periodically checks the quit flag, using a lock statement, and quits if the flag is set to true. While quitting, the consumers now call Signal on the new quitConsumers countdown event. I made a second change down here: when waiting for a new task to appear in the queue, each consumer now uses a 1000-millisecond timeout. So even if the queue is completely empty, each consumer will still test the quitRequested flag once per second to check if it needs to quit. Then we get to the main program method down here. The main program thread enqueues tasks for ten iterations and then sets the quitRequested flag inside a lock statement. It then suspends behind the countdown event gate until each consumer has signalled that it has quit successfully. Only after all three consumers have quit will this thread continue, display a message on the console, and then also quit.

Okay, so let me run this program. I expect ten tasks to be enqueued and executed before the queue quits: first all three consumers should quit, and then the main program thread, in that order. Here we go. I am running the program, and you see that everything works as expected. The queue worked for ten iterations, then the consumers quit one by one, and then the main program thread exited.

So what have we learned? We have seen three different types of reset events, each suited for different tasks. The AutoResetEvent: one thread signals another thread to do something. In the producer/consumer queue, I used an AutoResetEvent to signal to consumers that new work has arrived. The ManualResetEvent:
One thread signals a group of threads to do something. I used this reset event to suspend and resume all consumers when a key is pressed. And finally, the CountdownEvent: a group of threads signals a single thread to do something. I used this event to make the main program thread wait until all consumers have quit.

18. Thread rendezvous

If you recall the first lecture on thread synchronization: in that lecture, we tackled the problem of passing data between two threads. You learned that in order to reliably pass data between two threads, you need to first synchronize the threads, so that they are both executing compatible parts of their code. I used two AutoResetEvents to synchronize the threads. The first thread performed a Set and then a WaitOne; the second thread performed a WaitOne and then a Set. This complementary code structure ensures that both threads will unblock at the same point in code, simultaneously. The generic name for this mechanism is thread rendezvous, which is the process of aligning two or more threads in time to execute the same part of code simultaneously. In this lecture, I will look at several ways to set up a thread rendezvous. I will start with familiar code that you've already seen before, and then slowly make my code more efficient and more generic.

Okay, let's start with code that does not do any synchronization whatsoever. Take a look at this program here. You can see that I have a thread work method that simply counts from 0 to 5 and displays these numbers on the console. The main program method starts two new threads with this work method, and then does nothing. So what's the output going to be? Obviously, two sets of numbers from 0 to 5, but they will be randomly mixed together. There is no thread synchronization whatsoever, so there is no way of knowing in what order the numbers will appear.
That's all determined by the operating system, which decides how long to run a single thread before suspending it and moving to another thread. Okay, so let me run the program. Here we go. And there is our answer: the operating system first runs one thread in its entirety, and then the other. This should not surprise you when you realize that I am executing this code in a virtual machine running on Ubuntu. The virtual machine only has a single CPU core, so the .NET runtime has no choice but to run the threads one after another, and because the work method is so short, each method runs in its entirety.

So now I am going to modify the code to implement thread rendezvous. I will start with our familiar solution for implementing thread rendezvous: two AutoResetEvents, with one thread first calling Set and then WaitOne, and the other thread first calling WaitOne and then Set. Here is the modified code. I declare two AutoResetEvents up here, and then use the handy static SignalAndWait method of the WaitHandle class. This method signals the first handle and then waits on the second handle. You can see that I have two thread work methods with complementary signal and wait calls. Now watch: when I run the program, you can see that the loops now line up. Each thread outputs a single number and then waits for the other thread to catch up before moving to the next loop iteration. It's perfect!

But wait, this solution is not perfect at all. There are two big problems with this code. One: the work method is no longer generic; I need two distinct versions of the work method to implement the mirrored signal and wait calls. Two: this technique can only synchronize two threads. What if I want three or more threads? To fix this problem, I need a more generic synchronization method that can scale up to any number of threads. Fortunately, you've already seen the class that can do this for us: it is the CountdownEvent.
So the trick here works as follows. I set up a CountdownEvent with the number of threads I want to synchronize; this can be any number of threads I want. I have each thread first call Signal and then call Wait, on the same CountdownEvent. Once the counter of the CountdownEvent reaches zero, the gate opens, all threads resume simultaneously, and their execution lines up. So here is the code that implements this technique. You can see up here that I removed the two AutoResetEvents and replaced them with a single CountdownEvent initialized to three. There is now only a single do-work method, and in each loop iteration it first calls Signal and then Wait on the same CountdownEvent variable. The if statement enforces synchronization on the fourth loop iteration, just when the threads are about to write the number four to the console. And down here is the main program method, which now starts three threads, not two. Now, when I run the program, you can see that the fourth loop iteration lines up all threads; they then simultaneously write the number four to the console.

This technique solved the following two problems. First, the work method is now generic: I can use the same method for all threads. Second, the technique can synchronize any number of threads (three, in my code example). Unfortunately, I introduced a new problem: the rendezvous only works once. After one rendezvous, the gate is open and needs to be closed with a call to Reset. But closing the gate in a thread-safe manner is very complicated; I would have to add either a lock or another AutoResetEvent to ensure that only a single thread resets the gate. Is there a better solution? Yes, there is. Microsoft helpfully provides us with the Barrier class, which solves all three problems: a barrier can use the generic work method, a barrier can synchronize any number of threads, and a barrier does not need to be reset and can be used multiple times.
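A minimal standalone sketch of the Barrier class in action, separate from the course's program; the CurrentPhaseNumber check at the end is my own addition, just to demonstrate that the barrier was reused across all five iterations:

```csharp
using System;
using System.Threading;

class RendezvousDemo
{
    public static long RunDemo()
    {
        var barrier = new Barrier(3);          // three participating threads
        var threads = new Thread[3];
        for (int i = 0; i < 3; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int step = 0; step < 5; step++)
                {
                    // ...each thread does one unit of work per iteration...
                    barrier.SignalAndWait();   // rendezvous: block until all three arrive,
                }                              // then everyone resumes; no Reset needed
            });
            threads[i].Start();
        }
        foreach (var t in threads) t.Join();
        return barrier.CurrentPhaseNumber;     // 5 completed phases, one per loop iteration
    }

    static void Main() => Console.WriteLine(RunDemo());
}
```

Unlike the CountdownEvent version, the same barrier lines the threads up in every loop iteration without any manual reset logic.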
So here is the code again, now modified to use the Barrier class instead. Here at the top, I declare a new Barrier variable and initialize its thread count to three. The work method now calls the SignalAndWait method of the Barrier class, and there is no longer an if statement to ensure that the barrier is used only once. Barriers have an additional very nice feature. Take a look at the constructor here: in the second parameter, I provide a function that writes a new line to the console. This function is automatically called by the barrier at the precise moment when all threads line up, but before they are resumed. So in the function, you do not have to worry about race conditions, because at this point all threads are still suspended. Let me run the code. Check this out: now all threads line up in each loop iteration, but I also get a nice new line in my output right after all threads have executed a single loop iteration, and just before they resume with the next iteration.

So the recommended class to use to implement thread rendezvous is the Barrier class, because it has many advantages over the other methods. The Barrier class can be used with a generic work method. The Barrier class can synchronize any number of threads. The Barrier class does not need to be reset and can be used multiple times. And the Barrier class can optionally execute custom code at the exact moment when all threads line up at the barrier.

Okay, so what have we learned? Thread rendezvous is the process of aligning two or more threads in time to execute the same part of code simultaneously. Thread rendezvous can be implemented with two complementary AutoResetEvents, but this requires distinct work methods and only works for two threads. Thread rendezvous can also be implemented with a CountdownEvent, but this only works once and cannot be used in a loop. The recommended way to implement thread rendezvous is with the Barrier class.

19. Course recap

Congratulations!
You have completed the entire course. You are now a certified multi-threaded, bulletproof C# coder. I have shown you how the Thread class works, how to use threads in your code, which specific problems you will encounter in multi-threaded code, and how you can resolve these problems. The problems we covered were: race conditions, which are sections of code running unpredictably when accessed by more than one thread; race conditions can be resolved by using locks. We covered synchronization problems, threads waiting for other threads; I showed you how to use Join, Interrupt, Abort, or a shared Boolean variable to synchronize threads. We covered deadlocks, two or more threads waiting for each other indefinitely; deadlocks can be resolved with randomness, an arbiter, or a more advanced algorithm like the Chandy-Misra solution.

The skills you learned have given you a rich toolbox of knowledge and ideas that you can use when writing your own multi-threaded code, or when collaborating in a development team, especially when you're working on mission-critical code where high stability is crucial. If you discover some interesting insights of your own, please share them in the course discussion forum, for us all to enjoy. I hope you enjoyed the course and learned some useful new techniques that you can apply in your software development career. Now go on and build great things!