How To Write Asynchronous C# Code With Tasks and PLINQ | Mark Farragher | Skillshare


How To Write Asynchronous C# Code With Tasks and PLINQ

Mark Farragher, Microsoft Certified Trainer

Watch this class and thousands more

Get unlimited access to every class
Taught by industry leaders & working professionals
Topics include illustration, design, photography, and more


Lessons in This Class

19 Lessons (2h 44m)
    • 1. Course Introduction

    • 2. About this course

    • 3. Introducing TPL and PLINQ

    • 4. How to start a thread

    • 5. Race conditions

    • 6. Resolve race conditions with thread locking

    • 7. The lock statement

    • 8. Thread synchronisation with AutoResetEvents

    • 9. How to start a task

    • 10. Working with tasks

    • 11. Initialising and cancelling tasks

    • 12. Parent and child tasks

    • 13. Task continuations

    • 14. When should you use tasks?

    • 15. When should you use PLINQ?

    • 16. Word reversal with PLINQ

    • 17. PLINQ and item ordering

    • 18. Limitations of PLINQ

    • 19. Course recap






About This Class


Today we have beautiful libraries for writing multi-threaded C#, and there is simply no excuse for writing bad asynchronous code. In this course I will teach you how to write rock-solid code using Tasks and Parallel LINQ that works perfectly on your first try.

I wrote a multi-threaded conversion utility a year ago that successfully migrated 100,000 documents from SharePoint 2010 to SharePoint 2013. The program worked flawlessly the first time because I used PLINQ.

Sound good?

Writing multi-threaded code by hand is hard. I’ll prove it to you by showing you some code that exchanges data between two threads. You will be surprised how difficult it is to do this reliably.

But then I’ll show you how trivially easy it is to write asynchronous C# code using the Task Parallel Library and Parallel LINQ. These amazing frameworks allow anyone to write robust multi-threaded code that can take a beating.

By the end of the course you will be fluent in both the Task Parallel Library and Parallel LINQ.

Why should you take this course?

You should take this course if you are a beginner or intermediate C# developer and want to take your skills to the next level. Working with Tasks and Parallel LINQ might sound complicated, but all of my lectures are very easy to follow, and I explain all topics with clear code and many instructive diagrams. You'll have no trouble following along.

Or maybe you're working on a critical asynchronous section of C# code in a large project, and need to make sure your code scales reliably over multiple CPU cores? The tips and tricks in this course will help you immensely.

Or maybe you're preparing for a C# related job interview? This course will give you an excellent foundation to answer any asynchronous programming questions they might throw at you.

Meet Your Teacher


Mark Farragher

Microsoft Certified Trainer


Mark Farragher is a blogger, investor, serial entrepreneur, and the author of 11 successful Udemy courses. He has been a Founder and CTO, and has launched two startups in the Netherlands. Mark became a Microsoft Certified Trainer in 2005. Today he uses his extensive knowledge to help tech professionals with their leadership, communication, and technical skills.





1. Course Introduction: Let me ask you a question. Would you like to become a C# async architect? Okay, I made that up — there is no official async architect title. But what I think that term means is that you are a C# developer who is really good at writing multi-threaded, asynchronous code. Because writing multi-threaded code is hard. Usually when you try to write multi-threaded code by hand, it immediately crashes in production, and the bugs take forever to track down. So learning the fundamental skills of multi-threading is super important, and that's exactly what I'm going to teach you in this course. We're going to focus on the fundamentals of threading. I will show you how to work with the Thread class, how to set up thread locking, and how to synchronise multiple threads so that they can exchange data. I will teach you about the Task class: I will show you how to wire networks of tasks together and how you can create map/reduce solutions using tasks. I will also show you Parallel LINQ, or PLINQ, the parallel version of LINQ, where you can frame a problem as a LINQ query and then run that query on multiple CPUs simultaneously, executing the code in parallel. Parallel LINQ is super powerful, and we're going to take a close look at it in this course. The course contains lectures, quizzes to test your knowledge, and downloadable source code that you can check out. Plus, I have added a couple of coding exercises, so you will be writing code and testing it against my solutions. So, are you interested in becoming an async architect? Then this is the course for you. I've created this course for junior, intermediate, and senior developers. It doesn't matter what your level is, as long as you're interested in writing bulletproof, rock-solid multi-threaded code.
I will teach you exactly how to do that. This will be a huge boost to your career. So thank you for listening, and I hope I'll be seeing you in the course. 2. About this course: Welcome to this course: Write Asynchronous C# Code with Tasks and Parallel LINQ. In this course, I will teach you how to write robust, multi-threaded C# code by using the Task Parallel Library and Parallel LINQ. Ten years ago, I built a complicated multi-threaded C# application for a trade show, and I could never get it completely stable. The sales team that demoed the app eventually got used to having to restart the program a couple of times to get it to work. So what was the problem? I wrote multi-threaded C# code, but this was years before the Task Parallel Library and Parallel LINQ were released, so I had to do everything manually: locking shared fields, synchronising threads, you name it. And of course, I had overlooked several sections of code that were not thread-safe. In my tests everything seemed fine, but in production the program behaved erratically. Does that sound familiar? I did eventually fix the problem, but I created this course to make sure this kind of thing will never happen to you. I will teach you how to write robust multi-threaded code using Tasks and Parallel LINQ that works perfectly on your first try. I've set this course up so that you can follow it regardless of whether you are a beginner or an advanced C# programmer. I will start with an introduction to threading, locking, and synchronisation. This will refresh some basic knowledge about writing asynchronous code. The section works towards a simple goal: have one thread reliably return data to another thread. Even though this sounds very simple, you will learn that it is actually very hard to write code that does this reliably. We will end the section with a working code example that uses locking and two-way synchronisation.
To get the job done, I will end the section with a recap and a short quiz you can use to test your knowledge. If you fail the quiz, don't worry about it; just re-read the corresponding lecture and try again. And if anything is unclear to you, feel free to reach out and contact me. I'm happy to answer any questions you might have. In the next sections of the course, I will show you two asynchronous programming libraries in the .NET Framework that have robust locking and synchronisation built in. By using these libraries, you avoid many common pitfalls of multi-threaded programming, because all the complex code has already been written for you. In the first section, we will take a closer look at the Task Parallel Library. This library introduces the Task class, the workhorse of asynchronous programming in C#. Tasks make it very easy to start an asynchronous operation on another thread, wait for completion, and then return a result to the calling thread. I will take a closer look at how to create and start a task, wait on a task, and cancel a task. Later in this section, we will look at how to build complex map/reduce operations using hierarchical and sequential networks of tasks. In the final section, I will demonstrate the Parallel LINQ library, or PLINQ. This library is a parallel version of the regular LINQ library, and it lets you execute LINQ queries in parallel, which makes it very easy to implement map/reduce operations. I will describe how to make your LINQ queries run in parallel, what the consequences are for the ordering of the results, and what the limitations of the Parallel LINQ library are. By the end of the course, you will be fluent in writing asynchronous C# code.
Using Tasks and the Parallel LINQ library, you will be aware of the common pitfalls of writing multi-threaded C# code by hand, and you will understand how the Task Parallel Library and Parallel LINQ automate this work away for you. With this knowledge, you will be able to write robust, asynchronous C# code. Okay, let's move on. In the next lecture, I will introduce myself and talk a little bit about my background. 3. Introducing TPL and PLINQ: Let's look at multi-threaded code. Multi-threaded code is code that is executed by two or more threads simultaneously. For example, in a web server, each thread can service one browser request. So if a hundred people visit the website simultaneously, the server simply runs the same code on 100 threads. Another example is mobile application development. A single main thread is responsible for drawing and updating the user interface and responding to user input. Other background threads perform complicated calculations and operations in the background, and occasionally they return data to be displayed on screen. The C# language provides a very handy Thread class for starting a section of code on a new thread. You can create and start as many threads as you like, and the .NET Framework provides a rich collection of classes for synchronising threads and passing data between them. However, writing robust multi-threaded code is very difficult. Accessing variables from more than one thread opens the door to a whole array of potential problems. You must be prepared for all kinds of strange consequences: simple if/then/else statements no longer work, variables randomly change their values, and parts of your code suddenly hang for no apparent reason. In the first part of this course, I am going to show you how to write multi-threaded code. But more importantly, I am going to show you how to write robust multi-threaded code.
I will show you an array of techniques, including locking, signalling, and synchronising, to keep the threads in line and make sure that our code executes in a predictable manner. I will conclude the section with a code example that returns data from one thread to another. This seemingly simple operation actually requires a lot of work to get right. To reliably return data across threads, we need a locked critical section of code, a shared variable, and two complementary AutoResetEvents for thread synchronisation. Now imagine having to do this a thousand times to implement a complex operation asynchronously. So it probably won't surprise you to hear that Microsoft has added several asynchronous libraries to the .NET Framework to make writing multi-threaded code very easy. You can choose between the Task Parallel Library, the Parallel LINQ library, and the Parallel class. In this course, we're going to take a close look at the Task Parallel Library and Parallel LINQ. The Task Parallel Library introduces the Task class. The Task is the workhorse of asynchronous programming. It represents a single unit of work that can be executed asynchronously, and it has built-in capability to return data from one thread to another. The Task Parallel Library makes it very easy to string thousands of tasks together in large and complex patterns. But the Parallel LINQ library goes one step further. It lets you describe what kinds of operations you want to perform on the items in a large data set. Parallel LINQ will then create the entire task network for you. It will feed the data into the network, aggregate the results, and feed them back to you. All of this happens automatically in the background. In this course, I am going to show you how to implement map/reduce operations using either the Task Parallel Library or Parallel LINQ.
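To make the two libraries concrete, here is a minimal sketch (my own illustration, not the instructor's code) of a Task returning a result to the calling thread, and a PLINQ query running in parallel:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class TplAndPlinqDemo
{
    static void Main()
    {
        // Task: run a unit of work on another thread and get the result back.
        Task<int> sumTask = Task.Run(() => Enumerable.Range(1, 100).Sum());
        Console.WriteLine(sumTask.Result);   // blocks until the task completes; prints 5050

        // PLINQ: describe the operation, let the library parallelise it.
        int[] squares = Enumerable.Range(1, 10)
                                  .AsParallel()
                                  .Select(n => n * n)
                                  .ToArray();
        Console.WriteLine(squares.Sum());    // prints 385
    }
}
```

Note that with PLINQ you only describe *what* to compute; the library decides how to partition the work across threads.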
I will list the pros and cons of each framework, and I will give you some guidelines on how to pick the best library for any given problem. 4. How to start a thread: In this lecture, I am going to take a look at how to start a new thread by using the Thread class. A thread is an independent execution path, able to run simultaneously with other threads. A C# program starts in a single thread created automatically by the framework and operating system. This thread is called the main program thread. The program is made multi-threaded by creating additional threads. Here's a simple example. The program starts in the static Main method. The program creates a new thread using the Thread class constructor. The constructor expects a ThreadStart delegate as a parameter, and this is the method that the new thread will run. An important thing to realize is that the thread will only run this single method. When the method is completed, the thread will automatically end, and once ended, the thread cannot restart. So in this example, the new thread executes a loop that simply displays the letter B a thousand times. But at the same time, the main thread is also running, executing a loop that displays the letter A a thousand times. So what output do you expect to see? Something like ABABAB? Let's find out. I'm running the program now, and here are the results. You can see that the As and Bs are clumped together in groups, and this is because threads are time-sliced. The operating system runs a given thread for a while, then suspends it and runs a different thread. Each run interval is called a time slice, which is the maximum time a thread can run uninterrupted. So in the output, each time slice is visible as a group of identical letters. Modern computers have multi-core processors that can actually run several threads at once.
But at any given time there are hundreds of active threads in the operating system, many more than the available number of CPU cores, so there is always a certain amount of time slicing going on. There are several ways to initialize a thread. You've seen the ThreadStart delegate for passing the starting method into the Thread constructor. However, we don't need to explicitly specify the ThreadStart delegate. The C# compiler is smart enough to infer the delegate from the signature of the starting method itself, so a simplified version of the code will also work: I pass the starting method directly to the Thread constructor, without specifying that it is a ThreadStart delegate. The compiler figures that out all by itself, which makes the code a lot cleaner. Another simplification is to remove the starting method and replace it with a lambda expression. The entire thread start method is now an anonymous delegate. Again, this is not a problem: the compiler will figure out that the lambda expression matches the ThreadStart delegate, and it will make everything work. Each thread has a Name property that you can set. This is especially useful during debugging, because the thread name is displayed in the Threads window. You can set a thread name just once; attempting to change it later will throw an exception. So here is a program that sets up ten named threads. When I run the program and then interrupt it in the debugger, I can open the Threads panel to take a look at all running threads. You can see the thread names appearing, and when I double-click on a thread, the debugger shows me which code is currently being executed by that particular thread. Finally, let's look at foreground and background threads. By default, any thread you create explicitly is a foreground thread. Foreground threads keep the application alive for as long as any one of them is running. Compare this with background threads.
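The three ways of initialising a thread described above can be sketched like this (my own minimal reconstruction; method and variable names are not the course's exact listing):

```csharp
using System;
using System.Threading;

class ThreadStartDemo
{
    static void PrintB()
    {
        for (int i = 0; i < 5; i++) Console.Write("B");
    }

    static void Main()
    {
        // 1. Explicit ThreadStart delegate.
        Thread t1 = new Thread(new ThreadStart(PrintB));

        // 2. The compiler infers the delegate from the method signature.
        Thread t2 = new Thread(PrintB);

        // 3. A lambda expression as an anonymous delegate.
        Thread t3 = new Thread(() => Console.Write("C"));
        t3.Name = "Lambda thread";   // shows up in the debugger's Threads window

        t1.Start(); t2.Start(); t3.Start();
        t1.Join(); t2.Join(); t3.Join();
    }
}
```

All three threads are foreground threads here, so Main cannot return until the Join calls complete and each thread's single method has run to the end.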
Once all foreground threads finish, the application ends, and any background threads that are still running at this point will abruptly terminate. You can query or change a thread's background status by using the IsBackground property. So here's an example. This program starts a single thread that waits for the user to press Enter. Note that I set the IsBackground property to true, which means this will be a background thread. While this background thread runs, the main thread continues executing the Main method until it ends. At this point, the program terminates and aborts the running background thread. When I run this program, you can see that it immediately terminates the background thread; waiting for a key press has no effect at all. Now let me change my code and create a foreground thread instead. I still expect the main thread to end, but at that point the foreground thread will still be active, waiting for the key press. Because it is now a foreground thread, the program cannot end until I press Enter, thereby ending the foreground thread and allowing the entire program to end. So now when I run the program, you see that it does not immediately terminate. It continues to run, even though the main thread has already exited the Main method. And now when I press Enter, the single remaining foreground thread can end, and the program ends. So what have we learned? You create a thread by calling the Thread constructor and specifying the method to execute. You can also specify a lambda expression to execute. Threads can have names to aid in debugging. Threads can be foreground or background threads, and an application cannot end until all foreground threads have ended. 5. Race conditions: Each running thread gets its own private stack, so all local variables are kept strictly separate. Take a look at the following code. I have a method here called DoWork that outputs five stars in a row.
I call this method from a separate thread and, at the same time, from the main program thread. The loop variable i exists twice, once for each thread, so the threads will not interfere with each other. When I run the program, I get ten stars as output, just as expected. Now watch this. I introduce a new class member, a static integer variable called i, and I modify the loop to use this shared variable instead. So now we have the interesting situation that both the created thread and the main program thread use the same variable i for their loop statements. What do you think the output is going to be? Check this out: the output is only six stars. The reason for this is that both threads are incrementing the same variable. So instead of counting from 0 to 5, the loops are actually counting from 0 to 5 in steps of two, and the result is that only five or six stars are printed instead of ten. This is a classic example of what is called a race condition. Two threads are fighting for the same variable, and as a result, the code starts to behave in unpredictable ways. In this case, a simple for loop counting from 0 to 5 suddenly only iterated two or three times before finishing. The solution to race conditions is a technique called locking, but we will get back to that in the next section. For now, let's just identify the problem. A race condition is when program execution no longer follows a predictable path, and as a result, code starts behaving in unexpected ways. Variables seemingly randomly change their values, for loops stop prematurely, simple increment statements do nothing or increment by two instead of one. All of these things can happen. Nine times out of ten, these race conditions occur because two or more threads are accessing the same variable. So the main takeaway of this lecture is: keep the data you share between threads to an absolute minimum.
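The shared-variable version of the program might look like this (my reconstruction of the pattern described; the exact output varies from run to run, which is precisely the point):

```csharp
using System;
using System.Threading;

class RaceDemo
{
    static int i;   // shared between both threads: this is the bug

    static void DoWork()
    {
        for (i = 0; i < 5; i++)
        {
            Console.Write("*");
            Thread.Sleep(1);   // widen the window so the race shows up
        }
    }

    static void Main()
    {
        Thread t = new Thread(DoWork);
        t.Start();
        DoWork();              // main thread runs the same loop on the same i
        t.Join();
        // Expected 10 stars, but both loops advance the same counter,
        // so you typically see only five or six.
    }
}
```

Declaring the loop variable locally inside DoWork (`for (int i = 0; ...)`) restores the expected ten stars, because each thread then gets its own copy on its own private stack.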
In the next section, I will show you how to use locking to safely share data between threads. 6. Resolve race conditions with thread locking: Let's say I have two threads executing the following code. Initially, the variable i is zero. Then my two threads each call the DoWork method and increment the variable by one. So by the time they both end, I expect the variable i to hold the value two, right? Well, no. The problem is the i++ line. Even though this is a single line of C# code, behind the scenes this line consists of a number of steps. First, read the current value of i. Second, load the constant and add it to the current value. Third, write the result of the addition into the variable i. So consider this particular sequence of events. Thread one executes steps one and two. Then the operating system interrupts the thread and switches over to thread two. At this point, the variable i still contains the value zero, even though thread one has the result of its addition ready in memory, waiting to be written back into the variable. Now thread two executes steps one, two, and three. The variable i now contains the value one. Thread two ends, and the operating system switches back to thread one. Thread one executes the final step three and writes the result of the incrementation, which is again the value one, into the variable, and the thread ends. So now the variable i contains the value one, despite the fact that it has been incremented twice. This is called a race condition, and it happens because incrementing the variable is not an atomic operation. It can be interrupted halfway, leading to unexpected results. To solve the problem, all I need to do is convert these three steps (read the current value of i, load the constant and add it to the current value, write the result into the variable i) into one atomic operation that can never be interrupted by another thread.
Now, a section of code that cannot be interrupted by another thread is called a critical section: a piece of code that only one thread may execute at a time. If a given thread is executing any line of code in the critical section, all other threads must wait until that thread has completed the section. Having other threads wait while a single thread is executing a critical section is also called thread locking. So when should you lock threads? The answer is very simple. You should lock threads every time two or more threads are reading and writing the same shared variable. If you always lock threads in this scenario, you will eliminate 99% of all race conditions in your code. All you need to do is find all the points in your code where you read from and where you write to that shared variable, and then you turn that code into a critical section. C# has a very handy built-in keyword called lock to do exactly that. So, for example, the familiar single-line incrementation of the variable i becomes a lock statement wrapping the increment. You should take special care when you compare and assign a variable. You have to make sure that the entire compare-and-assign operation is contained inside the critical section. Now maybe your code is performing other actions between the compare and the assign, say a long-running method being called after the compare and before the assign. In this scenario, you want to consider refactoring your code, because you do not want the other threads to be waiting for too long. If there is no dependency between the long-running method and the variable i, you should pull the method call out of the critical section, so that the critical section only contains the compare and the assign operation.
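Applied to the increment and to a compare-and-assign, the lock keyword might be used like this (a sketch under my own field and method names):

```csharp
using System.Threading;

class LockDemo
{
    private static readonly object _sync = new object();
    private static int i;

    public static void Increment()
    {
        lock (_sync)
        {
            i++;   // read, add, write: now one atomic critical section
        }
    }

    public static void CompareAndAssign()
    {
        // The compare and the assign must sit in the SAME critical section,
        // otherwise another thread can change i between the two.
        lock (_sync)
        {
            if (i == 0)
                i = 1;
        }
        LongRunningMethod();   // pulled out of the lock: keep sections short
    }

    private static void LongRunningMethod() => Thread.Sleep(100);
}
```

With the long-running call outside the lock, other threads only wait for the brief compare-and-assign, not for the full 100 milliseconds.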
Then the critical section ends, all the threads get the chance to execute that section of code, and then the long-running method is called. So to summarize, when should you lock threads? You should lock threads every time you have two or more threads reading and writing the same shared variable. You should make sure that every variable access and assignment is protected with a critical section. You should make sure your compare-and-assign operations are entirely embedded in a single critical section. And you must keep critical sections short, so if you can, pull out any long-running methods. 7. The lock statement: In this lecture, I am going to show you how the lock statement works in C#. You've already seen the statement appear a few times in the previous lecture, when I showed you how to lock several fragments of code to avoid a race condition. Let's start with the following code. I have two shared integer variables up here, both initialized to one. Then there's the DoWork method here that checks value two, then divides value one by value two, and then sets value two to zero. By now, you should be alert to the possibility of a race condition in this code. If two or more threads call DoWork, then it's perfectly possible for one thread to set value two to zero just as another thread is busy executing the Console.WriteLine method. The result: a DivideByZeroException. To fix this code, I need to make the check and the division one atomic operation by adding a lock statement around these lines of code. So let me do that right now. Now, the lock statement needs a synchronisation object to work with. This can be any reference type; there are no restrictions. For now, I will simply add a private static class member of type object and use that for synchronisation. There, that's it. The code is now guarded against a race condition.
We say that the code is thread-safe, meaning it can safely be called from multiple threads simultaneously without crashing. Locking a section of code is a very fast operation on a typical modern CPU; the operation takes around 20 nanoseconds to complete. That's pretty fast, so there's no need to worry about the performance overhead of locking a section of code. The lock statement is what we call syntactic sugar. This means the C# compiler will actually expand the statement to a larger block of code, and lock is simply a convenience provided by the compiler so we don't have to type all of that code every time. The code produced by the compiler is very simple, and you can easily type it by hand if you want. Here is the DoWork2 method with the code of the previous example, with the possible division by zero, but now with the expanded lock statement. As you can see, a lock is nothing more than a call to Monitor.Enter. The Monitor class in C# provides for critical sections: the Enter method enters a critical section and the Exit method exits it. There's an extra boolean variable here called lockTaken. This variable acts as a signal to the finally block down here. If the monitor was entered successfully, the boolean will be set to true, and the finally block will exit the critical section. But if, for whatever reason, the critical section could not be entered successfully, then the boolean will be false, and the finally block will do nothing. This setup with the boolean field prevents a lock leak, where a critical section is entered but never exited because the corresponding Monitor.Exit is not called. The advantage of typing Monitor.Enter and Monitor.Exit directly is that I can now use other features of the Monitor class too. For example, there's a TryEnter method that I can use instead. TryEnter expects a timeout as the second parameter, either in milliseconds or as a TimeSpan value, and it returns a boolean.
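The expansion the compiler produces is roughly this (a hand-written equivalent following the pattern described above; the field names are my own):

```csharp
using System;
using System.Threading;

class MonitorDemo
{
    private static readonly object _sync = new object();
    private static int value1 = 1, value2 = 1;

    public static void DoWork2()
    {
        bool lockTaken = false;
        try
        {
            Monitor.Enter(_sync, ref lockTaken);
            if (value2 != 0)
            {
                Console.WriteLine(value1 / value2);
                value2 = 0;
            }
        }
        finally
        {
            // Only exit if we actually entered: prevents a lock leak.
            if (lockTaken) Monitor.Exit(_sync);
        }
    }
}
```

Because the check and the division now sit inside one critical section, no other thread can zero value2 between the comparison and the divide.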
It returns true if the enter was successful, and false if it was not and the operation timed out. Providing a timeout is very important, because you never know if the critical section will be released. If another thread is stuck in an infinite loop while holding the critical section, then your thread will wait indefinitely. The timeout will help you break out of this deadlock situation. You've seen that the lock statement requires a synchronisation object. I created a special private object variable for that purpose, but you can also lock on any reference type you like, including the this value, or perhaps even a Type object. So you might be wondering if there are any best practices in choosing the synchronisation object, and in fact there are: you are advised to always use a private field as a synchronisation object. The reason for this is simple. Consider for a moment that you lock on a public field. Because the field is public, another thread could also lock on that same field. Suddenly you have an unexpected dependency between two threads that might lead to both threads blocking and waiting for each other. This can easily happen when you lock on the this value. But when you lock on a unique private field that you create specifically for the occasion, you prevent any other thread from locking on that same object, so the unexpected dependency between threads can never happen. It's just another safety net to protect your code from unexpected results. Finally, let me show you another capability of the lock statement: nested locks. You can arbitrarily nest locks inside each other. The thread gains access to the critical section when the outermost lock succeeds, and each subsequent lock is simply stacked on top of the first one. The critical section is released when all stacked locks have exited. In terms of the Monitor class:
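Monitor.TryEnter with a timeout might be used like this (a sketch; the one-second timeout and the method name are my own choices):

```csharp
using System;
using System.Threading;

class TryEnterDemo
{
    private static readonly object _sync = new object();

    public static void DoWorkWithTimeout()
    {
        // Wait at most one second for the critical section.
        if (Monitor.TryEnter(_sync, TimeSpan.FromSeconds(1)))
        {
            try
            {
                Console.WriteLine("Inside the critical section.");
            }
            finally
            {
                Monitor.Exit(_sync);
            }
        }
        else
        {
            // Timed out: another thread is holding the lock too long.
            Console.WriteLine("Could not enter; bailing out instead of deadlocking.");
        }
    }
}
```

The else branch is what the plain lock statement cannot give you: a chance to recover instead of waiting forever on a stuck thread.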
This means you can have any number of Monitor.Enter statements, and the critical section is only unlocked when a matching number of Monitor.Exit statements have executed. Nesting locks is useful when you are calling a method from within a critical section. To demonstrate, here is the divide-by-zero code again, but this time I have put the actual division in another method. I can lock the original code in the DoWork method, but then call into another method from inside the critical section, and have that method also set up its own critical section. Both critical sections are stacked, so I can safely return from the innermost method while still retaining the lock. Only when I exit the outer critical section in the DoWork method is the lock released. So in summary: don't worry about nesting locks, but just make sure you use the same synchronisation object for each critical section. If you do, you can legally stack the critical sections, and the lock will not get released until you return from the first method. So what have we learned? The lock statement in C# is syntactic sugar for a Monitor.Enter and Monitor.Exit pair, and it sets up a critical section. The Monitor class also has a TryEnter method that supports a lock timeout value. The lock statement requires a reference-type synchronisation object. You can use any object you like, but a unique private object field is recommended. You can nest lock statements; the critical section is unlocked only when you exit the outermost lock. 8. Thread synchronisation with AutoResetEvents: In this lecture, I am going to cover a new multi-threading topic called thread synchronisation. This is the act of synchronising two or more threads together in order for them to exchange data. We can generalize this: thread synchronisation is the act of suspending one thread until a certain condition is met in another thread. So why would you need thread synchronisation?
Well, let me show you an example. This is a very common situation in multi-threaded code: you have a main thread that launches a second thread to do some complex work in the background. The worker thread loops and produces a new result every few milliseconds. The main thread simply waits for the results to become available and picks them off one by one. You already learned in the previous section that you can set up a shared variable to pass data between threads, and to avoid a race condition, you need to make sure to lock the variable every time it is read or written. So here is the code that implements my example. There is a shared variable up here, a simple integer that I'll use to pass data between the worker thread and the main program thread. The thread's Work method is over here, with a while loop that simply loops forever. During each loop iteration the thread does some work, which in this case is simply incrementing the variable; then the thread goes to sleep for one millisecond, and then it does the whole thing all over again. The main program method is down here. The program sets up the thread, starts it, and then loops 100 times to collect the results and write them to the console. Let's pretend that the main program method also does a lot of other stuff, which I simulate with this sleep statement here that suspends the thread for 10 milliseconds during each loop iteration. Now, what do you expect to see when I run the program? If both threads line up perfectly, I expect to see the sequence 1, 2, 3, 4, 5, 6, et cetera. Let's run the program and check it out. That's not a very regular sequence, but it makes perfect sense if you think about it. The worker thread produces a new result every millisecond, but the main program thread only collects a result once every 10 milliseconds, so I lose roughly 10 results during each loop iteration. The problem here is that the two threads are not synchronized.
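The unsynchronized situation just described can be sketched roughly as follows. The class and method names and the finite iteration counts are mine; the course's version loops forever inside a console program and prints to the console.

```csharp
using System;
using System.Threading;

static class UnsyncDemo
{
    static int sharedResult;   // shared variable, written by the worker thread

    // worker thread: produce a new result every millisecond
    public static void DoWork(int iterations)
    {
        for (int i = 0; i < iterations; i++)
        {
            sharedResult++;      // unprotected write: nothing guarantees the
            Thread.Sleep(1);     // main thread sees every value
        }
    }

    // main thread: sample the shared variable once every 10 milliseconds,
    // so roughly 10 results are skipped between consecutive samples
    public static int[] Collect(int samples)
    {
        var results = new int[samples];
        for (int i = 0; i < samples; i++)
        {
            results[i] = sharedResult;
            Thread.Sleep(10);    // simulate the other work the main thread does
        }
        return results;
    }
}
```

Running DoWork on a background thread while Collect samples on the main thread produces an irregular, gap-filled sequence rather than 1, 2, 3, 4, which is exactly the behavior observed in the lecture.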
Both threads are running freely, reading and writing the same variable, and there is no guarantee that the result of one thread is being picked up by the other thread in time. So to fix the problem, we need to find a way to synchronize the two threads. What we need is some kind of simple communication channel between the two threads. Something like this: the worker thread starts and performs the very first calculation. Then, instead of writing the result into the shared variable, it sends a message to the other thread, something along the lines of "Are you ready to receive data?" The worker thread then suspends itself until it receives an answer. The main program thread enters its own loop, and just before reading the shared variable, it sends a signal to the worker thread: "Yes, I am ready to receive data." The worker thread receives the signal, unsuspends itself, and writes the first result into the shared variable. The main program thread then reads the result from the variable, and the cycle continues in the next loop iteration. The good news is that there's a class in the .NET Framework that provides this exact type of communication: the AutoResetEvent. You can visualize an AutoResetEvent as a turnstile, like you see in movie cinemas. One or more threads line up behind the turnstile, waiting to be let in, and the act of inserting a ticket lets a single thread through. A thread lines up behind the turnstile with a call to AutoResetEvent.WaitOne, and a call to AutoResetEvent.Set lets a single thread through. WaitOne and Set can be called from two different threads. I can implement the communication channel using an AutoResetEvent. The worker thread asks if the main program thread is ready by calling the WaitOne method on the AutoResetEvent. The main program thread, in turn, indicates it is ready to receive a result by calling the Set method on the same AutoResetEvent.
So now the worker method patiently waits behind the turnstile until the main program thread inserts a ticket to indicate that it is ready. The turnstile then opens, allowing the worker thread to write a result into the shared variable. Let me change my program to implement this communication channel. What I need to do is add a new AutoResetEvent to my code. Let's call it readyForResult. The worker thread will use this AutoResetEvent to ask if the main program thread is ready to receive a new result; if it is not ready, the worker thread will suspend. This corresponds to a call to WaitOne, so let me add that to the Work method. Just before writing a new result into the shared variable, I call the WaitOne method on the AutoResetEvent. This will ask the main program thread if it is ready, and suspend the thread if it is not. In the main program method, just before reading from the shared variable, I add a call to the Set method of the AutoResetEvent. This will indicate to the worker thread that the main thread is ready to receive data. It effectively opens the turnstile, which unsuspends the worker thread and allows it to write its result into the variable. These two simple modifications will allow the two threads to synchronize and effectively pass data between them, despite the fact that their loop timings do not align. Let me run the program so you can see what happens now. And there you go: a perfect incrementing sequence of numbers. Problem solved. Or is the problem really solved? Let me scroll back through the sequence of numbers, all the way to the beginning. Look at this: the sequence starts at zero, then jumps to two, and then increments by one as expected. What's going on at the beginning? What we're seeing here is another race condition. Until now, I've always assumed that the worker thread starts right away and has a result ready before the main thread is able to pick it up.
But in fact, the reverse is also possible: the main thread signals that it is ready to receive the result and then immediately reads the shared variable, before the worker thread has a result available. So what we need is another communication channel. The worker thread first asks the main thread if it is ready to receive the result, and then suspends until it receives a confirmation. Then the worker thread should write the result into the variable, and then it should signal to the main thread that it has finished writing the result. The main thread should signal to the worker thread that it is ready to receive a result, and then it should ask the worker thread if it has finished writing the result, suspending until it gets a confirmation. This bi-directional communication channel, with symmetrical signal and ask actions at both ends, is a very common programming construct. It sets up a very robust communication channel between threads. So let me modify my code to add this second channel. First, I need to add a second AutoResetEvent. I will call this one resultSet, because it indicates that the worker thread has set the new result in the shared variable. Next, I need to modify the Work method: after the worker thread writes the result into the shared variable, it needs to signal to the main program thread that it has done this, so I'll add a call to Set here, using the new resultSet variable. In the main program thread, I also need to make a change: after the main method signals to the worker thread that it is ready to receive a new result, it needs to wait until this new result becomes available. I will do that by calling WaitOne again, using the new resultSet variable. And that's it. These two simple modifications set up the second communication channel and create a robust data channel between the two threads. Let me run my code to see what happens now. Here we go. And there you have it: a perfect numerical sequence.
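The two-event handshake built up in this lecture can be sketched as follows. This is a minimal sketch with finite loop counts; the names readyForResult and resultSet follow the lecture, while the class and method names are mine.

```csharp
using System;
using System.Threading;

static class SyncDemo
{
    static int sharedResult;
    static readonly AutoResetEvent readyForResult = new AutoResetEvent(false);
    static readonly AutoResetEvent resultSet = new AutoResetEvent(false);

    // worker thread: wait for the consumer, write, then signal "result is set"
    public static void DoWork(int count)
    {
        for (int i = 1; i <= count; i++)
        {
            readyForResult.WaitOne();   // suspend until the consumer is ready
            sharedResult = i;           // write the next result
            resultSet.Set();            // signal: the result is available
        }
    }

    // main thread: signal "ready", wait for the write, then read
    public static int[] Collect(int count)
    {
        var results = new int[count];
        for (int i = 0; i < count; i++)
        {
            readyForResult.Set();       // signal: ready to receive data
            resultSet.WaitOne();        // suspend until the result was written
            results[i] = sharedResult;
        }
        return results;
    }
}
```

With both channels in place, the consumer receives every value exactly once, in order, regardless of which thread starts first — the race condition at the beginning of the sequence is gone.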
But now, when I scroll back all the way up to the beginning of the output, you can see that the sequence starts with 1, 2, 3, 4, et cetera. Perfect. So what have we learned? If you want to safely pass data between two threads, locking is not enough; you also need to synchronize the threads. Synchronization can be created using the AutoResetEvent class. A call to WaitOne suspends the thread, and a call to Set resumes the thread. For a robust communication channel, you need at least two AutoResetEvents, with calls to WaitOne and Set at both ends.

9. How to start a task:

You've seen in the previous section that it is possible for two or more threads to share data, but it takes a little work to make everything function reliably. To start, you need a locked shared variable that both threads can access. Any reads and writes have to be protected with a critical section, because otherwise both threads might read and write the variable at the same time, and there is the risk of a race condition. But that's not all: you also need two AutoResetEvents to synchronize the threads. When the thread that is providing data has actual data available, it first needs to send a signal to the receiving thread that it is ready to send data. You can easily implement this with the WaitOne method of an AutoResetEvent variable. The providing thread then suspends itself, waiting for an answer. The other thread, the one that's waiting for the data, eventually reaches a point in the code where it needs the data. The first thing it does is signal to the providing thread that it is ready to receive, by calling Set on the same AutoResetEvent. So now the providing thread wakes up, knowing that the receiving thread is ready to receive data. But we're not done yet: to make sure that the receiving thread reads from the shared variable only after the providing thread has written into it, we need another AutoResetEvent.
Now the receiving thread signals that it is waiting for the data by calling WaitOne on the second AutoResetEvent. The receiving thread then suspends itself, waiting for the data to become available. In the meantime, the providing thread has already started writing to the shared variable. When it finishes, it calls the Set method on the second AutoResetEvent, to signal to the receiving thread that the shared variable has been updated and is ready to be read. The receiving thread wakes up, reads the shared variable, and retrieves the data. This design provides a very robust communication channel between two threads that need to exchange data. Now you might be wondering: in asynchronous applications, threads are exchanging data all the time. Aren't there any pre-built classes that can help us with this code? Oddly enough, until C# version 4 there weren't, and you had to rely on third-party tools. But in version 4, Microsoft released the Task Parallel Library, or TPL, and it is packed with features that make writing asynchronous code a lot easier. In this section, I am going to take a closer look at the Task class, the workhorse of asynchronous programming in C#. So how would you make two threads exchange data with the Task class? Let's take a look. I've written a simple program that uses tasks to calculate data asynchronously. You can see up here that I have no field declarations whatsoever: there's no need for AutoResetEvents or synchronization variables for critical sections. The task will do all of that internally. Here is the main program loop; let's go through it line by line. The first thing that happens is that I create a new task by calling Task.Factory.StartNew. This will create a new task and immediately start it in the background. You can see that Task is a generic class with a string type argument. This string type is the type of data that the task will return.
The StartNew method expects a single argument, which is a function delegate that must return a string value. You can see here that my delegate is very simple: it sleeps for two seconds and then returns "Mark". So the main program starts and runs this new task and then immediately continues. Down here, it writes "Your name is" to the console and then accesses the Result of the task. Notice that there is no synchronization logic anywhere: no AutoResetEvents, no calls to Set or WaitOne. I simply create a new task and have it return my first name, and down here I retrieve the task result and display it on the console, without checking whether the task has finished or not. You might be wondering how reliable this code is. What happens when the main program asks for the result before the task is ready? The result value will probably be null, and we would get a null reference exception in the Console.Write method. Well, okay, let's find out. I am going to run the program. And there's our answer: the main program thread actually waits two seconds until the result is available, and then displays the first name on the console. So there is some kind of waiting logic built right into the Result property, and I can actually show you that code. If I set a breakpoint on the final line of the program and then start the debugger, I can request to view the definition of the Result property. And here it is. You can see that the Result property first checks if the task result is available. If it is not, it calls Wait, which will block until the task is ready. The code then checks if an exception has occurred, and if everything is okay, it returns the task result. You can also see here that the set method of the property is internal. This means that only the task itself can write to the result field, and therefore there is no need for a lock statement and a critical section, because two separate threads can never read and write the result at the same time.
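The program just walked through can be sketched as follows. This is a minimal sketch; wrapping it in a class and method of my own naming, rather than the course's console Main.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class TaskDemo
{
    public static string GetName()
    {
        // start a Task<string> that takes two seconds to produce its value
        Task<string> task = Task.Factory.StartNew(() =>
        {
            Thread.Sleep(2000);
            return "Mark";
        });

        // Result internally calls Wait, blocking until the value is ready,
        // so no AutoResetEvents or lock statements are needed here
        return task.Result;
    }
}
```

The call to GetName returns after roughly two seconds, because accessing Result blocks until the delegate has finished.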
You can see that the Task class is a huge improvement when writing asynchronous code. We no longer have to worry about thread synchronization and shared-variable locking. All you need to do is define the work to be done in a lambda expression, create and start the task, and then access the result whenever you need it. The Result property automatically blocks until the data is available. So when should you use tasks? Any time you have discrete units of work that can be executed asynchronously and that may or may not return a result to the calling thread. Tasks are lightweight abstractions around a single unit of work, and you can create as many as you like: hundreds or thousands of tasks are no problem at all. Even millions of tasks are possible if you are careful. So, to summarize: the Task Parallel Library provides a very handy Task class for asynchronously performing a unit of work and returning the result to another thread. You access the Result property when you need the result of the task; the property automatically blocks if the result is not yet available. Tasks are lightweight abstractions of an asynchronous unit of work, and you can safely create hundreds or thousands of tasks.

10. Working with tasks:

You saw in the previous lecture how extremely easy it is to do some asynchronous work on a thread in the background and return the result to another thread. Simply run a new task with the Task.Factory.StartNew method, and when you're ready to use the result, simply access the Result property of the task variable. All the necessary synchronization is done for you behind the scenes. The two AutoResetEvents that we needed previously to reliably pass data between threads are not necessary here, because all the synchronization logic is implemented inside the Result property. The property automatically blocks until the data is available. The Task class is very versatile.
In this lecture, I'd like to show you some more use cases. I'll start with tasks that do not return a result. You learned in the previous lecture that the Task class is generic, and that its single type parameter indicates the type of data to be returned. I used a string task because my code example returns my first name, which is string data. But what if you want to run a task that does not return any data at all? It's actually very simple. Take a look at this code. You've already seen the generic Task class, which is for returning data, but there is also a non-generic Task class, and this class is specifically for tasks that do not return any data. So all I need to do is run a non-generic task and have it perform some work, say, sleeping for two seconds and then writing text to the console. My task writes "Hello World". Now, after I've run the task, I need to wait until it has completed. There is no Result property this time, because the task does not return a result. But we saw in the previous lecture that the Result property simply calls a Wait method to block until data is available. This Wait method is public, so I can call it directly. So now the main program runs a non-generic task that will not return any data, waits for it to complete, and then ends. Let's see if everything works. I will run the program, and there you go: "Hello World". Now you might be wondering what happens if I remove the Wait method from my code. What happens to a running task when the application ends? It is very easy to test: all I need to do is comment out the call to the Wait method here and then run the program again. So here we go, and there's our answer: the task aborts when the application ends and does not get a chance to finish at all. The reason for this is that tasks run on background threads from the runtime thread pool, and background threads are automatically aborted when the application ends. To find out what kind of thread my task is running on, I can add some diagnostic code.
I will change my task to display information about the current thread. I'll add two lines: one for displaying whether the current thread is a background thread, and one for displaying whether the current thread is a thread in the runtime thread pool. Let me run the program again. Here we go, and check it out: the task is running on a background thread from the standard runtime thread pool. Now, thread pool threads are intended for short-running and computationally heavy work, but you might have a long-running task that perhaps blocks on I/O. These kinds of tasks can kill the performance of the thread pool, so ideally you would want to run them on a separate thread outside of the thread pool. Is there any way to do this? Yes, there is, and again it's very easy. All I need to do is change how I run the task. This line here asks the default task factory to create and start a new task, but I can provide a second argument, right after the lambda expression, to specify task creation options. I will use the value LongRunning, which tells the factory that this is going to be a long-running task that should not be executed on the default thread pool. So now, when I run the application, you can see that I still get a background thread, but the thread is no longer part of the thread pool. If you have a task that is performing a long-running operation, or the task is interacting with external systems and blocking on I/O, then make sure you always provide the LongRunning value. Otherwise, the performance of the thread pool is going to degrade considerably. Finally, let me show you what happens when a task throws an exception. I will modify my program and add a very simple throw statement at the end of the lambda expression. So now the task will run and immediately crash.
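A minimal sketch of a crashing task and how the calling code can observe its exception follows. The class and method names and the "boom" message are mine; the lecture's version lets the debugger catch the exception instead of using a try/catch.

```csharp
using System;
using System.Threading.Tasks;

static class FaultDemo
{
    // returns the message of the exception that the task threw
    public static string CatchTaskException()
    {
        Task task = Task.Factory.StartNew(() =>
        {
            throw new InvalidOperationException("boom");
        });

        try
        {
            task.Wait();   // the task's exception is re-thrown here...
        }
        catch (AggregateException ex)   // ...wrapped in an AggregateException
        {
            return ex.InnerExceptions[0].Message;
        }
        return "no exception";
    }
}
```

Note that the catch handler sits around Wait, not around the lambda: the exception propagates out of the task and surfaces on the thread that waits for it.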
After displaying the thread information on the console, I'm going to run the code in debug mode, so that the debugger can catch the exception and we can inspect the call stack. Here we go: I'm running the program, and check it out, the debugger catches an AggregateException. This is simply a wrapper, and when I look inside, you can see the InnerExceptions property that contains the actual InvalidOperationException that got thrown in the task. But look at the highlighted source line in the debugger: the exception got caught in the Wait method, and not in the task itself. And when I look at the call stack, you can see that the task doesn't even appear in the stack at all. We go directly from Main to Wait, then to an internal overload of Wait, and then the stack ends. This is a very nice feature of tasks: they propagate exceptions, which means any exceptions are automatically caught and re-thrown in the Wait method. This propagates the exception from the task into the calling code. The same thing happens with tasks that return a value: the exception gets re-thrown in the Result property, and so the exception propagates to the code receiving the data. This makes it very easy to handle exceptions in tasks. You simply kick off the task, and any time you access the Wait method or the Result property, you wrap that code in an exception handler. So what have we learned in this lecture? Use the non-generic Task class for tasks that do not return a result; the Wait method blocks until the task has completed. Use the generic Task class for tasks that do return a result; the Result property also blocks until the task has completed. By default, tasks execute on the .NET runtime thread pool. For long-running and I/O-bound tasks, you can provide the LongRunning option to execute the task on a non-pool thread.
Any exceptions thrown by a task will propagate to the calling code and are automatically re-thrown in the Wait method and the Result property.

11. Initialising and cancelling tasks:

In this lecture, I'd like to show you how you can initialize and cancel a task. Let's start with initializing. When I say initializing, what I mean is a way to pass initialization data into a task on startup. With threads, you can provide a state object when the thread starts, but is there a similar mechanism for tasks? In fact, there is. Providing initialization data to a task is very similar to how you would initialize a thread, and there are two ways to do it. The first way is to start the task with a task factory and then provide two arguments to the StartNew method: a task delegate and an initialization object, like this. Here I start a new task that executes the DoTaskWork method, and I provide the string "my message" as the initialization data. The DoTaskWork method needs to have the following signature: when the task starts, the initialization string gets passed along in the object argument. The second way to initialize the task is to provide a lambda expression and directly capture the variables you need, like this. Now let me show you a cool trick. When I use lambda expressions, the initialization object is not used, but I'm going to provide it anyway, like this. The initialization object now gets stored in a special property called Task.AsyncState, and the Visual Studio debugger shows this property in the Parallel Tasks window. So this little trick makes it very easy to identify tasks while debugging, and this can be a lifesaver if you have thousands of running tasks. Now let's look at how to cancel a task. The preferred way to cancel a task is by using a cancellation token. Cancellation tokens show up all over the place in the Task Parallel Library; they are a standardized way to cancel many types of operations.
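The whole cancellation pattern can be sketched as follows. This is a minimal sketch with my own names; the hypothetical worker loop just sleeps to simulate long-running work.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class CancelDemo
{
    public static bool RunAndCancel()
    {
        var source = new CancellationTokenSource();  // broadcasts the cancel
        CancellationToken token = source.Token;      // shared with the task

        Task task = Task.Factory.StartNew(() =>
        {
            while (true)
            {
                // throws once a cancel has been requested on the source
                token.ThrowIfCancellationRequested();
                Thread.Sleep(10);                    // simulated work
            }
        }, token);   // pass the token to StartNew as well

        source.Cancel();                 // request the cancel
        try { task.Wait(); }
        catch (AggregateException) { }   // the cancellation is re-thrown here

        return task.IsCanceled;          // true: canceled via its own token
    }
}
```

Because the same token is both passed to StartNew and checked inside the delegate, the task ends up in the Canceled state, which the calling code can inspect through IsCanceled.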
They are shared between threads and allow one thread to cancel the operation of another. Here's how they work. The first thing you need to do is instantiate a new CancellationTokenSource. This is a class that broadcasts a cancel to anyone who uses the token. Then you use the Token property to create a new token. You use the cancellation source either to manually cancel an operation or to specify an automatic cancel after a given timeout, and then you start the task, like this. Note the two changes: you pass the cancellation token to the StartNew method, and in a long-running block of code you add a call to the ThrowIfCancellationRequested method. This will cancel the task when the cancellation source requests a cancel. When you cancel a task, the ThrowIfCancellationRequested method throws a TaskCanceledException. This will abort the task, and if any calling code is waiting for the task to complete, then the exception is re-thrown in the corresponding Wait method or Result property. The calling code can recognize a canceled task by inspecting the IsCanceled property of the Task class. This is a Boolean property, and it will be true if the task was canceled due to a cancellation token. Let me summarize what we have learned in this lecture. Tasks can be initialized with either a startup object or by capturing variables in their lambda expression. Visual Studio displays the AsyncState task property in the Parallel Tasks window; if you store a meaningful task name there, it will greatly aid debugging. Tasks can be cancelled with a cancellation token.

12. Parent and child tasks:

So far, we've only been working with single tasks, but where the Task Parallel Library really shines is in weaving together hundreds or thousands of tasks in complex patterns. To connect tasks together, you basically have two dimensions to work with.
You can create a hierarchical layering of tasks and have parent tasks break down complex work into asynchronous units of work that can run in parallel. The parent task can then assign these units to child tasks. Once all children have completed their work, the parent task aggregates the results and continues. The other dimension is sequential. Not all complex tasks can be broken down into independent components; sometimes a task consists of a discrete number of steps that have to be performed in a specific order. For this scenario, you can link tasks together into so-called continuations: sequences of tasks that execute one after another. You can freely mix and match from both dimensions. A parent task could assign work to three child tasks, wait until they complete, and then move on to the next task in a continuation sequence. In this lecture, we're going to look at the first dimension: the hierarchy of parent and child tasks. So how does this hierarchy work? Let me show you another diagram. Here we have a parent task that is working on some kind of complex operation. The work can be split up into three asynchronous units of work that can execute in parallel, so the task starts three child tasks and assigns one unit to each of them. Now, here's the thing: the parent task will not complete until the final child task completes. So if you are waiting for the parent task to complete, you are also automatically waiting for all child tasks. Another cool feature is exception and cancellation handling. If a child task gets canceled, the TaskCanceledException will bubble up to the parent task and get re-thrown in the calling code, either at the Wait method or in the Result property. The same applies to exceptions: any exception thrown by a child task bubbles up to the parent and gets re-thrown. Now you might be thinking: but what happens when more than one child task throws an exception?
And this brings me to another nice feature of tasks: they always wrap exceptions in an AggregateException. So if all three child tasks fail, the three exceptions get wrapped up in a single AggregateException, and that exception is re-thrown at the Wait method or the Result property. Okay, it's time for some code. I am going to write a simple application to demonstrate parent and child tasks. Let's do something really simple: a program that takes a line of text and then reverses the characters in each word. Finally, it joins the reversed words back together into a sentence. The reversing of characters in the words is a step we can assign to child tasks: for each word in the sentence, I will create and run a new task and have that task reverse that single word. Finally, the parent task will gather up all the results and reassemble the sentence. Let's go through the code line by line. At the top here is a generic list of tasks that return a string. I will use this list to store my child tasks, so that I can access them from the main program method. Next is a simple string reversal method called ReverseString. You can see that I loop through all the characters of the input string in reverse order, append them to a StringBuilder, and return the reversed string. But here is a line I'd like to direct your attention to: the ReverseString method first goes to sleep for one second, and then it starts reversing the string. So for a ten-word sentence, I would expect a total run time of ten seconds. Now let me show you the main program method down here. I start with this hard-coded sentence: "The quick brown fox jumped over the lazy dog". Now remember, the ReverseString method takes one second to process each word, and this sentence has nine words in it, so I expect a total run time of at least nine seconds.
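The ReverseString helper just described might look something like this. This is a sketch reconstructed from the description: a one-second sleep to simulate heavy work, then a reverse-order loop over the characters appending to a StringBuilder.

```csharp
using System.Text;
using System.Threading;

static class WordReverser
{
    // reverse the characters of a word; the Sleep simulates
    // one second of heavy work per word, as in the lecture
    public static string ReverseString(string word)
    {
        Thread.Sleep(1000);
        var builder = new StringBuilder();
        for (int i = word.Length - 1; i >= 0; i--)
            builder.Append(word[i]);
        return builder.ToString();
    }
}
```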
Next, I run a new task that calls the ProcessSentence method to work on the sentence. Notice the call to Wait here at the end: this method will block until the parent task has completed. And there's also a Stopwatch here that measures the total run time of the parent task in milliseconds. So, after having waited for the parent task to complete, I loop through the generic task collection and display each word. This will display the sentence with the characters in each word reversed. Okay, so all the magic happens in the ProcessSentence method; let's take a look. This foreach loop here splits the sentence into individual words and loops through them. For each word, I start a new string task that returns the reversed word plus its trailing space as a result. Now, normally, when you start a task inside another running task, you simply create an unconnected task that runs on its own. But when I provide this setting here, TaskCreationOptions.AttachedToParent, the new task will be a child of the running parent task. And remember, parent tasks will only complete after all of their child tasks have completed, so this setting extends the run time of the parent task until every child has finished reversing its word. You'll also notice this other setting here, LongRunning. This forces the task scheduler to assign a new thread to each task, instead of using the thread pool to execute the tasks. I do this because my tasks take one second to complete, and therefore I consider them to be long-running. Okay, let's run the program and see what happens. Here we go, I'm running it now, and here are the results. The parent task took slightly over one second to produce the reversed string. This proves that each child task ran on a separate thread: considering there are nine words in the sentence and each ReverseString method takes one second to complete, the only way to reverse the entire sentence in one second is by using nine threads.
The parent-child hierarchy ensures that the parent task can only complete when all child tasks have completed. So in my main program method, right after the Wait statement, I know for sure that at this point in my code all child tasks have completed. To prove this to you, I'll add some more code. Here is a loop that goes through all child tasks and displays their completion status. I execute this loop right after waiting on the parent, so I expect each child task in the collection to display true, because the parent can only complete after all child tasks have completed. Let me run the program again, and there's the proof: there is a one-second delay, and then the parent task completes. At this point, all child tasks have also completed. Now watch what happens when I remove the AttachedToParent option and replace it with None. Now the child tasks are completely unconnected to the parent. The parent will now complete immediately after it has started the nine child tasks, without waiting for them to complete. I'll run the program again. Here we go, and now you see completely different behavior: the parent completes immediately, after much less than a second. All child tasks are still running at this point, but when I try to gather the results of the child tasks, you can see that the program blocks for a second. These are the nine Result properties that are blocking, because their child tasks are still running. Okay, so what have we learned in this lecture? Tasks can be connected hierarchically and sequentially. Hierarchical tasks are called parent and child tasks. A parent task will not complete until all its child tasks have completed. Exceptions and cancellations bubble up from child tasks and are wrapped in an AggregateException in the parent task.
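The parent/child structure of the word-reversal demo can be sketched as follows. This is a compressed sketch: the one-second sleeps and Stopwatch are left out, and the class and method names are mine.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

static class ParentChildDemo
{
    // child tasks stored here so the calling code can read their results
    public static readonly List<Task<string>> tasks = new List<Task<string>>();

    static string Reverse(string s)
    {
        var chars = s.ToCharArray();
        Array.Reverse(chars);
        return new string(chars);
    }

    public static void ProcessSentence(string sentence)
    {
        foreach (string word in sentence.Split(' '))
        {
            string w = word;   // capture a copy for the lambda
            tasks.Add(Task.Factory.StartNew(
                () => Reverse(w) + " ",   // reversed word plus trailing space
                TaskCreationOptions.AttachedToParent
                    | TaskCreationOptions.LongRunning));
        }
    }

    public static string Run(string sentence)
    {
        // because the children are attached, Wait on the parent
        // only returns after every child has completed
        var parent = Task.Factory.StartNew(() => ProcessSentence(sentence));
        parent.Wait();

        string result = "";
        foreach (var t in tasks) result += t.Result;
        return result.TrimEnd();
    }
}
```

Swapping AttachedToParent for None reproduces the second experiment: the parent completes immediately and the blocking moves into the nine Result property reads.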
13. Task continuations

One of the great features of the Task Parallel Library is that it can weave together hundreds or thousands of tasks in complex patterns. To connect tasks together, you have two dimensions to work with. You can create a hierarchical layering of tasks and have parent tasks break down complex work into asynchronous units of work that can run in parallel. The other dimension is sequential. Sometimes a task consists of a discrete number of steps that have to be performed in a specific order. For this scenario, you can link tasks together into so-called continuations: sequences of tasks that execute one after another. You can freely mix and match from both dimensions. A parent task can assign work to three child tasks, wait until they complete, and then move on to the next task in a continuation sequence. In this lecture, we're going to look at the second dimension: task continuations. You can visualize task continuations like this. So in this diagram, I have three tasks, and each subsequent task is going to run when the previous task completes. The three tasks run sequentially, one after another. You might think this is all a bit silly. Sequential tasks cannot be run in parallel, so there's no point assigning them to different threads, and the code is functionally the same as simply putting everything in a single method. But the reason continuations exist is that most parallel work consists of three discrete steps. One: split a problem into many units of work that can execute in parallel. Two: execute the units of work. Three: assemble the finished work into a result. There's even a name for this process: map/reduce. The word map refers to breaking up a problem into units of work, and the word reduce refers to aggregating the finished work back into a result. You can easily implement this process with tasks by creating a three-step continuation: the map step, the execution step, and the reduce step.
The execution step will be a parent task with many child tasks to execute the work, and because it's a parent task, it will not continue until all child tasks have completed their work. When the final task executes, it knows for a fact that all child tasks in step two have completed, so now it can safely collect all the results and aggregate them together. Your calling code can simply read the Result property of the final task in the continuation to collect the results. Let's build a demo application that combines continuations and child tasks to perform some kind of work. I'll use the word-reversing code from the previous lecture and build upon it. If we look at the word-reversing problem, then there are essentially three steps to it. One: split the sentence into words. Two: reverse each word with child tasks. Three: assemble the reversed words back into a sentence. In the previous lecture, I only talked about parent and child tasks, because we didn't cover continuations yet, so I split the sentence in the parent task and assembled the results in the main program method. Now let's clean up the code. I will create three discrete tasks that perform each step of the map/reduce process. So here is my modified word reverser. At the top here is again the string reversal method, called ReverseString. This method is exactly the same as in the previous lecture; even the one-second sleep is still there. Then we get down to business. The map/reduce process has three discrete steps, which I have called Map, Process, and Reduce. Let's start with the Map. This is a very simple method that expects a string, which is the sentence, and it returns a string array of all the words in the sentence. The body of the method is simply a call to Split to create the list of words. Then comes the Process method.
You can see that the method receives an array of words in the first argument, and it returns the modified array of words with the characters in each word reversed. It accomplishes this by starting a long-running child task for each word. Each child task picks up a single word, reverses it, and puts it back into the array. The Process task then returns the word array as a result. The task will complete only when all of the child tasks have completed, so by the time this result becomes available, all words will already have been reversed. Finally, there is the Reduce method. It accepts an array of words in the first argument and returns the assembled sentence. The method body uses a StringBuilder to append all the words together, delimited by spaces. And now for the cool stuff: check out the main program method here that chains these methods together. These three lines create the complete process. I start by creating a task to call the Map method. The task creates a word list, so I use the generic task with a string array type parameter. The lambda expression simply calls Map with the hard-coded sentence and returns the resulting word list. On the next line, I continue with the ContinueWith method. This sets up a continuation and adds another task to the sequence. I use the generic method with a string array type parameter, because the Process step returns a string array. The lambda expression calls the Process method, feeds in the result of the first task in the sequence, and returns the result, which will be the word list with all words reversed. I end the continuation with the third line. Again, there's a ContinueWith method, this time with a generic string type parameter, because the Reduce method returns a string. The lambda expression calls the Reduce method, feeds in the result of the previous step, and returns the result, which will be a new sentence containing all the reversed words.
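The three-step chain described above can be sketched like this. It is a compact reconstruction, not the course's verbatim listing: the method names Map, Process, and Reduce follow the lecture, while the bodies are my own minimal versions (Reduce uses string.Join rather than an explicit StringBuilder).

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class MapReduceDemo
{
    static string ReverseString(string word)
    {
        Thread.Sleep(1000); // simulate slow work, as in the course demo
        return new string(word.Reverse().ToArray());
    }

    // Map: split the sentence into units of work.
    static string[] Map(string sentence) => sentence.Split(' ');

    // Process: reverse every word with attached, long-running child tasks.
    static string[] Process(string[] words)
    {
        for (int i = 0; i < words.Length; i++)
        {
            int n = i;
            Task.Factory.StartNew(
                () => words[n] = ReverseString(words[n]),
                TaskCreationOptions.AttachedToParent |
                TaskCreationOptions.LongRunning);
        }
        return words; // this task only completes after all attached children finish
    }

    // Reduce: reassemble the words into a sentence.
    static string Reduce(string[] words) => string.Join(" ", words);

    static void Main()
    {
        var task = Task.Factory
            .StartNew(() => Map("the quick brown fox"))
            .ContinueWith(t => Process(t.Result))   // t is the previous task in the chain
            .ContinueWith(t => Reduce(t.Result));

        // Result blocks until the whole chain, including the children, completes.
        Console.WriteLine(task.Result); // prints: eht kciuq nworb xof
    }
}
```

Because the Process continuation attaches its children, the final Reduce continuation is guaranteed to see fully reversed words.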
Notice that each continuation lambda expression receives an argument t, which is the task of the previous step in the sequence. With this argument, I can access the result of the previous task and use it for the next operation in the continuation. To get at the result of this entire chain of tasks, all I need to do is store the result of all the continuations in a local variable called task. This variable will refer to the final task in the sequence, and therefore, when I access its Result property, I get the assembled string with all the words reversed. Let me run the program to prove that everything is working. Here we go, and again, there you have it: a one-second delay and then the sentence with all the words reversed. So what have we learned in this lecture? Tasks can be connected hierarchically and sequentially. Sequentially connected tasks are called continuations. Each task in the continuation starts after the previous task in the sequence has completed. The lambda expression in each continuation can access the result of the previous task. Continuations are perfect for setting up multi-threaded map/reduce processes in C#.

14. When should you use tasks?

You've seen that tasks make it very easy to write multi-threaded code. Tasks represent a low-level unit of work that can execute asynchronously and safely return a result across threads. The task synchronizes automatically, and you don't have to do a thing to make the code work reliably. This is a huge advantage over what we've seen in the fundamentals section: manually setting up a lock on a shared variable and adding robust inter-thread communication with two AutoResetEvents is a lot of work. The Task class automates all of this away. It won't surprise you to hear that the Task class is the foundation for many other asynchronous frameworks in C#. The Parallel LINQ library, or PLINQ, is built on top of the task framework.
There's also a class called Parallel that creates tasks for you, and even the async and await keywords in C# version 5 use tasks. The task has become the workhorse of asynchronous programming in C#. So when should you use them? I mean, any coding problem can be expressed as a series of tasks, or perhaps a Parallel LINQ expression, or you could use the Parallel class to start tasks instead of doing it manually. Which library should you use, and when? To answer this question, let's look at the three fundamental steps of an asynchronous process. One: map the problem into independent units of work and assign each unit to a task. Two: perform a series of operations on each unit. Three: reduce the units into a result. Each asynchronous library in C# automates one or more of these steps, but the task library is the foundation for everything else, and therefore it does not automate anything. You have to perform all steps yourself. You could clearly see this in my word-reversing program: I had to manually perform both the map step and the reduce step with tasks. There was no automatic function to do this for me. So if I put each library in a table, I can compare them like this. You can see that the task library does no mapping and no reducing, the Parallel class does automatic mapping and no reducing, and Parallel LINQ does automatic mapping and automatic reducing. So when should you use tasks? Simply put: whenever you want to imperatively declare all operations and you have no need for automatic mapping and reducing. Automatic mapping is only necessary when you have much more than, say, ten thousand units of work. Once you start needing millions of tasks to execute your code, you run the risk of overload: you will have too many tasks fighting for access to system resources. A good analogy is the following. Consider a team of people moving a log pile from one location to another. This is an easy job.
If you have ten people, everybody grabs a log and runs to the other location, and the pile is moved in no time. But now do the same with 10,000 people. What you will get is a huge crowd between the two piles, shoulder to shoulder and unable to move. Everybody is getting in each other's way, and all work will effectively stop. A multi-threaded application works exactly the same way. If you have too many threads or tasks working on a set of data, then they can actually get in each other's way and slow down the work. And so the Parallel class has built-in functionality to partition millions of units of work over only a couple of thousand tasks. Each task receives a bundle of work and processes many units one after another. This is much more efficient than trying to do everything in parallel. The Parallel LINQ library goes one step further. It can partition millions of units of work over a limited number of tasks, but it also automatically aggregates these bundles back into a final result. As the developer, you are completely unaware that this process is going on behind the scenes. But if you only have a couple of thousand units of work, there is no need for bundling, and you can safely use the task parallel library. So to summarize, there are three asynchronous frameworks in C#: the Task Parallel Library, the Parallel class, and the Parallel LINQ library. The Parallel class automatically partitions millions of units of work onto thousands of tasks. The Parallel LINQ library does this too, but it also automatically aggregates the bundles into a final result. You should use tasks if you have up to ten thousand units of work and you do not need automatic mapping or reducing.

15. When should you use PLINQ?

In the previous section, we learned that there are three asynchronous frameworks in C#: the Task Parallel Library, the Parallel class, and the Parallel LINQ library. We have looked in detail at the Task Parallel Library.
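Before moving on, here is a tiny sketch of the automatic partitioning described above. The scenario (summing a million numbers) is my own illustration, not from the course; the point is that Parallel.ForEach splits the work into bundles for you instead of starting a million tasks.

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class PartitionDemo
{
    static void Main()
    {
        int[] numbers = Enumerable.Range(1, 1_000_000).ToArray();
        long total = 0;

        // Parallel.ForEach partitions the million items into bundles
        // and hands each bundle to a worker thread; we never start
        // a million individual tasks ourselves.
        Parallel.ForEach(numbers, n => Interlocked.Add(ref total, n));

        Console.WriteLine(total); // prints: 500000500000
    }
}
```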
The differences between these three libraries are in the three fundamental steps of an asynchronous process. One: map the problem into independent units of work and assign each unit to a task. Two: perform a series of operations on each unit. Three: reduce the units into a result. Each asynchronous library in C# automates one or more of these steps. The task library is the foundation for everything, and therefore it does not automate anything; you have to perform all steps yourself. The Parallel class automatically partitions millions of units of work onto thousands of tasks. The Parallel LINQ library does this too, but it also automatically aggregates the bundles into a final result. So if I put each library in a table, I can compare them like this. In this section, we're going to take a closer look at the Parallel LINQ library. You can see from the table that the library does everything for you. It automatically maps your data across many threads, it executes the work, and then it automatically reduces the results to a final value. This makes Parallel LINQ quite a remarkable tool, in that it is completely declarative. That means that you only need to describe the operations you want to perform on the data; Parallel LINQ will do all the heavy lifting for you by setting up a large network of interconnected tasks, feeding your data into that network, and reducing the results for you. So when should you use Parallel LINQ? We learned in the previous section that the Task Parallel Library breaks down when you create and start millions of tasks. Everything will keep working, but the tasks will fight each other for access to system resources, and this will seriously degrade system performance. The Parallel class and Parallel LINQ solve this problem by combining units of work into bundles and assigning a single bundle to a thread.
This trick keeps the thread count down to thousands, even though there are millions of units of work. Reducing the bundled results into a final value can be tricky, but this is where Parallel LINQ shines: it does this for you completely automatically. You don't have to do a thing yourself. So there's our answer: use Parallel LINQ if you need to work on data sets that contain millions of individual values. Another thing you should consider is that Parallel LINQ automatically creates the network of tasks to execute the Parallel LINQ expression. You have no imperative control over the tasks in this network. You can only declare the sequence of operations to execute on the items in the data set, so you should only use Parallel LINQ if you actually need to do just that: perform a sequence of operations on the items in a large data set. For anything more complicated, you should fall back to the Task Parallel Library. Let me summarize what we've learned in a table. I will put scenarios in the leftmost column and the recommended asynchronous library in the right-hand column. So this is what we get. For complex processes that use shared data, you should use the Task Parallel Library. For complex processes that use large shared data, you should use the Parallel class. To perform a sequence of operations on items in a large data set, you should use Parallel LINQ. And to perform a sequence of operations on items in a small data set, you can choose: either use Parallel LINQ or the Task Parallel Library. So the key is in the size of the data. We can implement a complex pattern of tasks operating on shared data with the Task Parallel Library, no problem. But if the shared data is really large, then we're going to need the automatic mapping function of the Parallel class. And if we are simply executing a sequence of operations on the items in a large data set, then the Parallel LINQ library is fine. For small data sets, you can choose.
You can either use Parallel LINQ or the Task Parallel Library, because you won't need the automatic mapping and reducing functions of Parallel LINQ.

16. Word reversal with PLINQ

We learned in the previous lecture that Parallel LINQ is ideal for performing a sequence of operations on a large data set. For smaller data sets, we can either use Parallel LINQ or the Task Parallel Library. Now let's take a look at the word-reversing program I wrote in the previous section. Word reversal is basically a single operation on a data set with a small sequence of items, which are the individual words in the sentence. So here is the word reversal program rewritten in LINQ. Let's take a look. You can see that the code is super simple. There are no methods and no field declarations, only a main program method. The first thing I do is declare a string variable with the test sentence, which is again "the quick brown fox jumps over the lazy dog". Then it's time for a LINQ expression. I start by taking the test sentence and splitting it on whitespace characters to create a list of words. Then I use the Select method to project the sequence of words into a new sequence of items. The act of projecting in LINQ is nothing more than running an operation on every single item in the set. So in my case, I want to reverse the characters in the words. I can accomplish this by simply calling the Reverse method. Now, this is a little bit of dark LINQ magic. A string is essentially a sequence of characters, and the string class is very helpful in that it implements the generic IEnumerable interface of type char. LINQ provides the Reverse extension method, which takes any enumeration as input and returns the reversed set. So in this case, I reverse all the characters in the string, and the method returns a new character enumeration.
I would like to reassemble these characters back into a string, but unfortunately the string constructor cannot handle a character enumeration as input. So I use the ToArray method to convert the enumeration into a character array, and feed that character array into the string constructor. So now I have the reversed word as a string. You can see here that I store the result of this entire LINQ expression in a local variable called words. This will be a generic IEnumerable interface of type string. The final step is to reassemble the words into a sentence. For this, I use the Join method of the string class. The method expects two arguments: the delimiter character and an enumeration of strings. The result is an assembled sentence with all the words delimited by spaces. Now let me show you that everything works as intended. I will build the program and run it to see what happens. Here we go, and you can see that everything works as it should. Here is my test sentence, with the characters in each word reversed. But now it's time for a reality check: I am not using Parallel LINQ at all. This application is just a regular LINQ application. The entire operation of splitting the string into words, reversing each word, and reassembling the words into a sentence runs on a single thread. I am not harnessing the power of multiple processors at all. So the next step is to turn this LINQ expression into a Parallel LINQ expression, and this is incredibly easy. In fact, you can do this transformation at any point in the sequence of operations. You could run the first half of your expression as regular sequential LINQ, then execute the middle portion as Parallel LINQ, and then execute the final bit as regular LINQ again. There are two methods that allow you to switch between sequential LINQ and Parallel LINQ: AsParallel, which switches from LINQ to Parallel LINQ, and AsSequential, which switches from Parallel LINQ back to regular LINQ.
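The sequential version described above can be sketched like this. The pipeline (Split, Select with Reverse and ToArray, string.Join) follows the lecture; the surrounding scaffolding is my reconstruction.

```csharp
using System;
using System.Linq;

class SequentialReversal
{
    static void Main()
    {
        string sentence = "the quick brown fox jumps over the lazy dog";

        // Plain sequential LINQ: split into words, reverse the characters
        // of each word, and rebuild each reversed word as a string.
        var words = sentence
            .Split(' ')
            .Select(word => new string(word.Reverse().ToArray()));

        // Reassemble the words into a sentence, delimited by spaces.
        Console.WriteLine(string.Join(" ", words));
        // prints: eht kciuq nworb xof spmuj revo eht yzal god
    }
}
```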
I only have a single operation, which is the character reversal, so I need to put an AsParallel method call between the Split method and the Select method, like this. Now the expression is a Parallel LINQ expression, and it will run on multiple threads and CPU cores. It will scale up gracefully to sentences with millions of words. Let me demonstrate that everything is still working. I will compile the program and run it again. Okay, here we go, and there's the proof: the Parallel LINQ implementation of the program works like a charm. I always tell everyone to use LINQ as much as possible. The reason for this is that it makes it really easy to convert parts of your application into asynchronous code. Say you have a performance bottleneck in a section of code, and that section is already implemented as a regular sequential LINQ expression. Then you can simply add a single AsParallel call to convert the entire expression into Parallel LINQ, and it will run on multiple CPU cores and threads. However, a word of warning: Parallel LINQ will always decide for itself whether it's going to run a query in parallel or sequentially. The choice depends on the size of the data set and the complexity of the operations to perform on each item. My test application, where I reverse nine words in a test sentence, will always run sequentially, and this is because the overhead of creating the task network, mapping the data, assigning it to individual tasks, and then reducing the results takes so much time that it would actually slow down the performance instead of increasing it. If you want to overrule Parallel LINQ's built-in query planner, then you can actually force it to always execute queries in parallel mode.
The WithExecutionMode method lets you specify the desired execution mode: Default, which lets Parallel LINQ decide on its own whether to execute the query in parallel or sequentially, or ForceParallelism, which will always execute the query in parallel. So let me modify the program to actually run the word reversal in parallel. All I need to do is add the WithExecutionMode method right after the AsParallel method and then specify the ForceParallelism execution mode, like this. Now let me rebuild the program and run it again. Okay, here we go, and here are the results. Now the expression runs in parallel, on multiple threads and CPU cores. Okay, so what have we learned in this lecture? Parallel LINQ is ideal for performing a sequence of operations on a large data set, but you can also use it for small data sets. You can parallelize any LINQ expression with the AsParallel method and restore sequential mode with the AsSequential method. Parallel LINQ decides whether or not to parallelize a query, but you can force it to always parallelize a query with the WithExecutionMode method.

17. PLINQ and item ordering

Okay, I have to admit this: I tricked you guys. There was something extremely weird about the program output in the previous lecture, but I only briefly showed you the output and then I moved on to the summary slides. How many of you saw it? It happened when I added the WithExecutionMode method to my LINQ expression to force parallelization. As in all previous cases, I ran the program and it displayed the sentence with all the words reversed. But there was something odd about the result. Did you spot it? Let me show you the screenshot of the final result of the previous lecture. So this was the output of my application. Take a closer look. Okay, so by now I'm sure you've all spotted it: the words are in the wrong order. What's going on here?
Let's reconstruct the task pattern created by Parallel LINQ to execute the query. I have nine words in my test sentence, which will each get assigned to a unique task. The Task Parallel Library will run the tasks on the default .NET thread pool, so we will probably have two threads, each executing four or five tasks. Each thread executes its tasks, gathers the results, and passes them on to Parallel LINQ for reduction. But here's the kicker: the second thread finishes before the first thread. So what you get is a sentence with an entire block of four or five adjacent words out of place. This leads me to a very important limitation of Parallel LINQ: Parallel LINQ does not preserve the order of the items in the data set. This is quite different from plain old sequential LINQ, which will always preserve the order of items. The reason that Parallel LINQ does not preserve item order is that it improves performance. Parallel LINQ can combine the results from each thread without having to worry about the order in which they arrive. This speeds up the execution of the entire query. Nine times out of ten, the exact order of the items in the output data doesn't matter. For example, if you aggregate items with a Count, Sum, or Max operation, then it really doesn't matter in which order the items are. But in my program, the order is important, because I want the word order of the test sentence to be preserved in the output. Is there any way to accomplish this? Yes, there is. There are two methods that instruct Parallel LINQ how to order items: AsOrdered, which tells Parallel LINQ to preserve the order of the items in the data set, and AsUnordered, which tells Parallel LINQ not to preserve the order of the items in the data set. You can insert these methods at any point in your query. Any operations between AsOrdered and AsUnordered will have their item order preserved in the output.
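An ordered parallel pipeline can be sketched like this, assuming the same test sentence. This is my reconstruction of the shape described above, not the course's verbatim code; AsOrdered and WithExecutionMode are real PLINQ methods and both chain directly onto AsParallel.

```csharp
using System;
using System.Linq;

class OrderedParallelReversal
{
    static void Main()
    {
        string sentence = "the quick brown fox jumps over the lazy dog";

        var words = sentence
            .Split(' ')
            .AsParallel()   // switch from LINQ to Parallel LINQ
            .AsOrdered()    // preserve the original word order in the output
            .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
            .Select(word => new string(word.Reverse().ToArray()));

        Console.WriteLine(string.Join(" ", words));
        // prints: eht kciuq nworb xof spmuj revo eht yzal god
    }
}
```

Without the AsOrdered call, the ForceParallelism mode may emit the reversed words in a different order than the input sentence.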
So in my program, all I have to do is insert a call to AsOrdered right after the call to AsParallel. Let me do that now. And now, when I rebuild the program and then run it, you can see that the sentence is back to normal, with all the words in the right order. Okay, so what have we learned? When Parallel LINQ parallelizes a query, it may not preserve the order of the items in the data set. It does this for performance reasons. Quite often, the order of the items in the output will have no effect on the final result of the query. In cases where item order is important, you can use a call to AsOrdered to preserve item order. A subsequent call to AsUnordered will restore the Parallel LINQ default ordering behavior.

18. Limitations of PLINQ

So far, Parallel LINQ looks like a magical library. You take any LINQ expression, add a call to AsParallel, and all of a sudden it executes in parallel on multiple threads and CPU cores. Behind the scenes, Parallel LINQ creates a large network of interconnected tasks, it feeds the data set into these tasks, it assigns groups of tasks to threads, and the .NET runtime then executes the tasks, gathers the results, and presents everything back to you as an output sequence. But it probably won't surprise you that Parallel LINQ has several limitations. A library of this complexity simply cannot work correctly every time, and there are certain LINQ expressions that simply cannot be parallelized. Let's start with the limitation that we discovered in the previous lecture: Parallel LINQ does not preserve the order of the items in the output sequence. In some cases, your output items will be in a different order than the input items. Now, most of the time this will not be a problem. Many LINQ aggregations, like Sum, Max, and Count, do not care about the order of the input items, because it has no effect on the final result.
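A tiny sketch of such an order-insensitive aggregation (my own illustration, not from the course): summing the numbers one to a thousand gives the same total no matter which thread processes which bundle, so no ordering directives are needed.

```csharp
using System;
using System.Linq;

class OrderInsensitiveSum
{
    static void Main()
    {
        // Addition is commutative, so PLINQ can merge partial sums
        // from each thread in whatever order they arrive.
        int total = Enumerable.Range(1, 1000).AsParallel().Sum();
        Console.WriteLine(total); // prints: 500500
    }
}
```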
But sometimes the output order is important, and for those occasions you can enable order preservation with the AsOrdered method, though you will take a small performance hit. Now let's look at some more limitations. Check out the following LINQ operations: Concat, First, FirstOrDefault, Last, LastOrDefault, Skip, SkipWhile, Take, TakeWhile, and Zip. Even though they look innocent enough, these operations cannot always be parallelized. In .NET, Parallel LINQ will attempt to run them in parallel, but it depends on the precise ordering of operations in the LINQ expression, and on whether you're calling complex functions in any of the LINQ methods. Depending on these factors, Parallel LINQ may or may not execute the operations in parallel. If Parallel LINQ decides not to run the operations in parallel, then the entire query chain, up to and including the offending operation, is run sequentially, and anything to the right of the offending operation is run in parallel. However, the good news is that Microsoft improved Parallel LINQ enormously in .NET 4.5. From this version onwards, Parallel LINQ will always parallelize queries with the above operations. But now look at these operations: positional Select, positional SelectMany, positional SkipWhile, positional TakeWhile, and positional Where. The term positional refers to the fact that these operations use a predicate function that includes the position of the item in the data set. The predicate receives the item position as a zero-based integer, and you can use this value to determine which output items to produce. The occurrence of these operations in any LINQ expression may cause Parallel LINQ to fall back to sequential execution, regardless of the .NET Framework version. Even in version 4.5, these operations cannot be reliably parallelized. Let me summarize what we have learned in this lecture.
Some LINQ operations cannot reliably be parallelized in all scenarios, and will occasionally cause a fallback to sequential processing. When this happens, the entire chain of operations up to and including the offending operation will run sequentially, and anything to the right of the offending operation will run in parallel. The following operations may cause a fallback to sequential mode in .NET Framework versions up to 4.0: Concat, First, FirstOrDefault, Last, LastOrDefault, Skip, SkipWhile, Take, TakeWhile, and Zip. In .NET version 4.5, these operations will always run in parallel. The following operations may cause a fallback to sequential mode in all framework versions: the positional versions of Select, SelectMany, SkipWhile, TakeWhile, and Where.

19. Course recap

Congratulations! You have completed the entire course. You are now a certified Task Parallel Library and Parallel LINQ C# coder. I have shown you how hard it is to write asynchronous code in C#, and how the Task Parallel Library and Parallel LINQ simplify the job enormously by automating away all the hard stuff, like locking shared variables and synchronizing threads. You have mastered a number of topics in this course. Multi-threaded code foundations: you saw the problems you can run into when writing multi-threaded code, like race conditions and synchronization issues, and I showed you how you can resolve these problems by hand. The Task Parallel Library: you learned how tasks make it trivial to write complex asynchronous code in C#. I showed you how to build a small hierarchical and sequential task network to reverse the characters in each word in a sentence. You also learned for which use cases the Task Parallel Library is the best choice. The Parallel LINQ library: you learned that the Parallel LINQ library is ideal for implementing map/reduce operations asynchronously. I showed you how you can re-implement the word reversal program in only a few lines of Parallel LINQ code, and how you can fine-tune the parallel execution of your queries. The skills you learned have given you a rich toolbox of knowledge and ideas that you can use when writing your own code or when collaborating in a development team, especially when you're working on mission-critical asynchronous code where scalability is crucial. If you discover some interesting insights of your own, please share them in the course discussion forums for us all to enjoy. Goodbye! I hope we meet again in another course.