Do you know how to write robust multi-threaded C# code that does not crash?
Let's face it: writing multi-threaded code is hard. The sobering truth is that, unless you know exactly what you're doing, your code is pretty much guaranteed to crash in production.
Don't let this happen to you.
It doesn't have to be like this. If you have a good understanding of multi-threaded programming and follow a few simple industry best practices, you can write robust code that can take a beating.
A few years ago I wrote a multi-threaded conversion utility that successfully migrated 100,000 documents from SharePoint 2010 to SharePoint 2013. The program worked flawlessly the first time, because I implemented all of the best practices for writing asynchronous C# code.
Sound good?
In this course I am going to share these practices with you.
In a series of short lectures I will cover many multi-threading topics. I will show you all of the problems you can expect in asynchronous code, like race conditions, deadlocks, livelocks and synchronisation issues. I'll show you quick and easy strategies to resolve these problems.
By the end of this course you will be able to write robust multi-threaded C# code that can take a beating.
Why should you take this course?
You should take this course if you are a beginner or intermediate C# developer and want to take your skills to the next level. Asynchronous programming might sound complicated, but all of my lectures are very easy to follow, and I explain all topics with clear code and many instructive diagrams. You'll have no trouble following along.
Or maybe you're working on a critical section of code in a multi-threaded C# project, and need to make sure your code is rock-solid in production? The tips and tricks in this course will help you immensely.
Or maybe you're preparing for a C# related job interview? This course will give you an excellent foundation to answer any threading-related questions they might throw at you.
In this lecture I explain how this course is organised and I describe each of the upcoming sections in detail.
In this lecture we're going to look at the theory behind asynchronous programming. What exactly is multithreaded code, and how does it work?
Many lectures in this course contain source code examples. Feel free to download the code and follow along. And here's the good news: it doesn't matter if you have a Windows, Mac or Linux computer. The code will run on all three operating systems.
In this lecture I demonstrate how my solutions and projects run on all operating systems. I will show you how to build and run the source code on a Mac, on Linux and in Visual Studio running on Windows 8.
At the end of this lecture you will have learned that .NET code is portable and can run on at least five different operating systems.
Welcome to the Thread Class section. I will give a quick introduction on how the section is organised before we get started.
In this lecture, I will teach you how to start new threads using the System.Threading.Thread class: the workhorse of multi-threaded programming in C#.
I will also show you how you can give a descriptive name to a thread, to aid in debugging.
Finally, you will learn that there are two kinds of threads: foreground and background threads. I will show you the difference in behaviour between the two.
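To give you a taste, here is a minimal sketch of what the lecture's topics look like in code: starting a named background thread and waiting for it to finish. The `ThreadDemo` class and method names are my own, not taken from the course material.

```csharp
using System;
using System.Threading;

static class ThreadDemo
{
    public static string RunNamedThread()
    {
        string observedName = null;

        var worker = new Thread(() =>
        {
            // The name set below is visible from inside the thread.
            observedName = Thread.CurrentThread.Name;
        });

        worker.Name = "Worker";      // descriptive name, shown in the debugger
        worker.IsBackground = true;  // background threads don't keep the process alive

        worker.Start();
        worker.Join();               // wait until the thread has finished
        return observedName;
    }
}
```

Calling `ThreadDemo.RunNamedThread()` returns `"Worker"`, because `Join` guarantees the worker has completed (and its writes are visible) before the method returns.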
In this lecture I am going to show you the most common multi-threading programming problem: a race condition.
A race condition happens when two or more threads try to access and modify the same variable at the same time. I will demonstrate a race condition with a very simple program, in which two threads access a shared integer class member.
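The lecture's own program isn't reproduced here, but a sketch of the same idea looks like this (class and method names are my own). Because `counter++` is really three steps, load, add, store, increments from the two threads can interleave and get lost:

```csharp
using System;
using System.Threading;

static class RaceDemo
{
    static int counter;

    static void Increment()
    {
        for (int i = 0; i < 100_000; i++)
            counter++; // NOT atomic: load, add, store
    }

    // Two threads each increment the shared counter 100,000 times.
    public static int Run()
    {
        counter = 0;
        var t1 = new Thread(Increment);
        var t2 = new Thread(Increment);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        return counter; // often LESS than 200,000
    }
}
```

Run it a few times and you will typically see a different (and too small) total on each run, which is exactly what makes race conditions so hard to debug.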
In the next section I will show you a comprehensive solution for dealing with race conditions. For now I will leave you with a tip on how to minimize the impact of race conditions in your code.
In this lecture you will learn how to safely pass in initialization data to a thread. You will learn about the ParameterizedThreadStart delegate, and how to use a lambda expression to initialize a thread.
Captured variables in lambda expressions are shared between the new thread and the main program thread, and so this opens us up to a possible race condition.
I will show you a short program that uses a lambda expression and introduces a race condition. Then I'll show you a cool trick, where I only change 2 lines of code, to make the race condition disappear.
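The lecture's exact two-line fix isn't shown here, but one common version of this trick is copying the captured loop variable into a local before the lambda captures it. In a C# `for` loop, all lambdas share the single loop variable; the local copy gives each thread its own value (the class name here is my own):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

static class CaptureDemo
{
    public static string Run()
    {
        var results = new ConcurrentQueue<int>();
        var threads = new Thread[3];

        for (int i = 0; i < threads.Length; i++)
        {
            int copy = i; // copy to a local, so each lambda captures its own value
            threads[i] = new Thread(() => results.Enqueue(copy));
            threads[i].Start();
        }
        foreach (var t in threads) t.Join();

        var sorted = results.ToArray();
        Array.Sort(sorted);
        return string.Join(",", sorted);
    }
}
```

With the copy in place the result is always `"0,1,2"`; capture `i` directly instead, and the threads may all observe the final value of the loop variable.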
In this lecture I am going to show you another common multi-threading programming problem: checking if a thread has finished.
I will show you a short multi-threaded program with a block of code that I want to be executed once. Two threads check each other's state to ensure that the code executes only a single time. I'll show you a working solution that actually has a big hidden problem. You will learn that the program only works by pure coincidence.
I will conclude the lecture with some advice on the best way to check if a thread has finished.
In this lecture you will learn how to suspend the current thread until another thread has completed, using the Join method. We will revisit the multi-threaded program with the race condition from the previous lecture. I will show you how a single strategically placed Join statement resolves the race condition.
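A minimal sketch of the Join technique (not the lecture's own program; names are mine): the main thread would race with the worker to read `result`, and a single `Join` call before the read removes the race.

```csharp
using System;
using System.Threading;

static class JoinDemo
{
    public static int Run()
    {
        int result = 0;
        var worker = new Thread(() =>
        {
            Thread.Sleep(50); // simulate some work
            result = 42;
        });
        worker.Start();

        // Without this Join, the main thread could read `result`
        // before the worker has written it.
        worker.Join();

        return result; // guaranteed to be 42
    }
}
```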
In the second part of this lecture I will show you how you can suspend the current thread for a given time interval by using the Sleep method.
We'll conclude with a summary of what we have learned.
The previous lecture introduced the Join and Sleep methods which suspend the current thread until either another thread ends, or when a given timeout expires.
In this lecture you will learn how to interrupt and abort suspended threads. We will look in detail at what precisely happens when you interrupt or abort a suspended or a non-suspended thread.
Even though the Interrupt and Abort methods look really useful, using them in practice is somewhat risky. We'll look at how an unexpected interrupt or abort can introduce resource leaks, and I will provide two scenarios in which you can safely abort a thread without having to worry about leaks.
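As a hedged sketch of the mechanism (names are my own): interrupting a blocked thread wakes it by throwing a ThreadInterruptedException inside it, which the thread can catch to clean up.

```csharp
using System;
using System.Threading;

static class InterruptDemo
{
    public static string Run()
    {
        string outcome = "not interrupted";

        var sleeper = new Thread(() =>
        {
            try
            {
                Thread.Sleep(Timeout.Infinite); // block indefinitely
            }
            catch (ThreadInterruptedException)
            {
                // Interrupt wakes a blocked thread by throwing this exception.
                outcome = "interrupted";
            }
        });
        sleeper.Start();

        Thread.Sleep(100);   // give the sleeper time to block
        sleeper.Interrupt(); // throws ThreadInterruptedException in the sleeper
        sleeper.Join();
        return outcome;
    }
}
```

Note that on modern .NET (Core and .NET 5+), Thread.Abort is no longer supported and throws PlatformNotSupportedException, so the leak concerns discussed here apply mainly to Interrupt and to older .NET Framework code.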
Congratulations on finishing this section. This is a recap of what we have learned.
Test your knowledge of the thread class with this short quiz.
Welcome to the Thread Locking section. I will give a quick introduction on how the section is organised before we get started.
In the previous section I showed you several multi-threaded programs that were prone to a specific problem called a 'race condition'.
In this lecture I revisit the race condition and I'll demonstrate how a special technique called 'thread locking' resolves the problem. I will show you several short examples of code prone to race conditions, and then I'll add thread locking to the code to fix the problem.
At the end of this lecture you will know exactly what thread locking is, how it resolves a race condition, and when you should implement it yourself.
In this lecture we are going to take a closer look at the lock statement in C#. You will learn that "lock" is in fact syntactic sugar for a pair of Monitor.Enter and Monitor.Exit calls. I will demonstrate several example programs using either the compact "lock" syntax, or the more verbose code that uses the Monitor class.
You will learn all the essentials of thread locking, including what code to lock, which synchronisation object to use, and what the advantages are of calling the Monitor class directly.
By the end of the lecture you will be proficient in thread locking, and you will be able to set up critical sections in your code with ease.
This lecture explains how to deal with deadlocks. A deadlock is a problem that occurs when two or more threads are waiting indefinitely for each other, trying to access and lock two or more resources.
I will explain deadlocks in detail using the famous thought experiment created by Edsger Dijkstra in 1965: the "Dining Philosophers" problem. I will show you what a deadlock looks like in the context of the dining philosophers.
We will then examine two quick-fix strategies for resolving deadlocks: introducing randomness, or using an arbiter.
After completing this lecture you will have a thorough understanding of what a deadlock is, and you will know two strategies for resolving deadlocks in your code. You will also be aware of the Chandy/Misra algorithm, which is the reference solution for the Dining Philosophers problem.
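Beyond randomness and an arbiter, a third widely used strategy (not covered in the lecture above, sketched here under my own names) is a consistent global lock ordering: if every thread always acquires lockA before lockB, the circular wait that deadlock requires can never form.

```csharp
using System;
using System.Threading;

static class OrderingDemo
{
    static readonly object lockA = new object();
    static readonly object lockB = new object();

    // Both threads take the locks in the SAME order, A then B.
    // With opposite orders (A-B in one thread, B-A in the other),
    // the two threads could deadlock waiting for each other.
    public static int Run()
    {
        int done = 0;
        ThreadStart work = () =>
        {
            lock (lockA)
            lock (lockB)
                Interlocked.Increment(ref done);
        };
        var t1 = new Thread(work);
        var t2 = new Thread(work);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        return done; // always 2; no deadlock is possible
    }
}
```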
In this lecture we revisit the Dining Philosophers problem. I have written a simulation program that sets up all 5 philosophers and chopsticks, and implements the random sleep mitigation strategy that we discussed earlier. You will see that my implementation has terrible performance, with all 5 philosophers fighting for the chopsticks more than 90% of the time.
Can you do better than me?
Your assignment is to take my code as a starting point and write your own improved deadlock resolving strategy. The objective is to have all philosophers eat for as long as possible. The highest score you can achieve is slightly over 20 seconds.
Good luck!
In this lecture we are going to take a closer look at a specific scenario: locking and incrementing a single variable. You already learned that you can make an increment operation thread-safe by using a lock statement. But unfortunately a lock has a performance overhead which will slow down your code.
Fortunately there is an alternative. For simple scenarios like incrementing, decrementing, reading or writing a single variable, you can also use the Interlocked class. The Interlocked class exposes low-level thread-safe CPU operations which perform much better than a generic lock statement.
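A minimal sketch of the idea (class name is mine): replacing the non-atomic `counter++` with `Interlocked.Increment` makes the increment a single atomic operation, with no lock statement at all.

```csharp
using System;
using System.Threading;

static class InterlockedDemo
{
    static int counter;

    public static int Run()
    {
        counter = 0;
        ThreadStart work = () =>
        {
            for (int i = 0; i < 100_000; i++)
                Interlocked.Increment(ref counter); // atomic: no lock needed
        };
        var t1 = new Thread(work);
        var t2 = new Thread(work);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        return counter; // always 200,000
    }
}
```

The Interlocked class also offers `Decrement`, `Add`, `Exchange` and `CompareExchange` for similar single-variable scenarios.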
In this lecture we are going to take a closer look at the performance difference between thread-unsafe and thread-safe code, and between the generic lock statement and the Interlocked class.
By the end of this lecture you will have learned if using the Interlocked class is worth the effort.
Test your knowledge of thread locking with this short quiz.
Welcome to the Thread Synchronisation section. I will give a quick introduction about how the section is organised before we get started.
In this lecture I am going to take a look at thread synchronisation. The need for thread synchronisation arises when two or more threads need to exchange data in a controlled manner. I will show you a simple example program that attempts to exchange data between threads without any synchronisation, and you will see how the data transfer completely fails.
Next we will cover the workhorse of thread synchronisation: the AutoResetEvent. I will show you how you can line up two threads with a single AutoResetEvent variable, to ensure that you'll never lose any data. Then I'll show you how the single remaining race condition can be resolved by adding a second AutoResetEvent.
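A sketch of that two-event handshake (not the lecture's own program; names are mine): one AutoResetEvent signals "a value is ready", the other signals "the value has been read", so neither thread can overrun the other.

```csharp
using System;
using System.Threading;

static class HandshakeDemo
{
    static readonly AutoResetEvent dataReady = new AutoResetEvent(false);
    static readonly AutoResetEvent dataRead  = new AutoResetEvent(false);
    static int slot; // the shared "mailbox"

    public static int Run()
    {
        int sum = 0;

        var producer = new Thread(() =>
        {
            for (int i = 1; i <= 3; i++)
            {
                slot = i;
                dataReady.Set();    // signal: a value is available
                dataRead.WaitOne(); // wait until the consumer has taken it
            }
        });

        var consumer = new Thread(() =>
        {
            for (int i = 0; i < 3; i++)
            {
                dataReady.WaitOne(); // wait for a value
                sum += slot;
                dataRead.Set();      // signal: the slot may be overwritten
            }
        });

        producer.Start(); consumer.Start();
        producer.Join(); consumer.Join();
        return sum; // 1 + 2 + 3 = 6, with no values lost or duplicated
    }
}
```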
By the end of the lecture you will have a deep understanding of thread synchronisation: what it is, when you need it, and how you can implement it yourself.
In this lecture I am going to build a producer/consumer queue which is a very popular multi-threaded coding pattern. The queue features one or more 'producers' which add tasks to a shared queue, and a pool of 'consumers' that retrieve tasks from the queue and execute them in the background.
I will show you how you can build a producer/consumer queue in .NET with only a simple thread-safe queue of delegates, and one AutoResetEvent to notify consumers that a new task is available.
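A compact sketch of that design, reduced to a single consumer for brevity (the class name and members are my own). The consumer always re-checks the queue before waiting, so a Set that arrives between the check and the WaitOne is never lost:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class ProducerConsumerQueue
{
    readonly Queue<Action> tasks = new Queue<Action>();
    readonly AutoResetEvent taskAdded = new AutoResetEvent(false);

    public ProducerConsumerQueue()
    {
        var worker = new Thread(Consume) { IsBackground = true };
        worker.Start();
    }

    public void Enqueue(Action task)
    {
        lock (tasks) tasks.Enqueue(task);
        taskAdded.Set(); // wake the consumer
    }

    void Consume()
    {
        while (true)
        {
            Action task = null;
            lock (tasks)
                if (tasks.Count > 0) task = tasks.Dequeue();

            if (task != null) task();   // run the task outside the lock
            else taskAdded.WaitOne();   // queue empty: sleep until a task arrives
        }
    }
}
```

A pool of consumers is the same idea with several worker threads draining the one shared queue.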
By the end of the lecture you will be able to build your own producer/consumer queue, and you will also have learned a surprising fact about the console.
So far we've only used AutoResetEvents to synchronise two or more threads. In this lecture I'm going to take a closer look at the ManualResetEvent, a wait handle similar to the AutoResetEvent but with a slightly different behaviour.
I will start with the producer/consumer queue from the previous lecture, and add new pause and resume functionality. I will show you what happens when you try and build that functionality with an AutoResetEvent (hint: it doesn't work). Then I'll show you how the ManualResetEvent behaves, and I will change the code to make the queue work as intended.
By the end of the lecture you will have a clear understanding of the differences between an AutoResetEvent and a ManualResetEvent, and you will have learned how to use the latter in the producer/consumer queue to make all consumers pause or resume their work.
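The pause/resume mechanism can be sketched as a "gate" (my own naming, not the lecture's code): a ManualResetEvent stays signalled until Reset is called, so it releases every waiting worker at once instead of just one.

```csharp
using System;
using System.Threading;

static class PauseDemo
{
    // Starts open (signalled): workers pass through freely.
    static readonly ManualResetEvent gate = new ManualResetEvent(true);

    public static void Pause()  => gate.Reset(); // ALL workers block at WaitOne
    public static void Resume() => gate.Set();   // ALL workers pass through again

    // Non-blocking peek; WaitOne(0) does not reset a manual event.
    public static bool IsOpen => gate.WaitOne(0);

    public static void DoWork()
    {
        gate.WaitOne(); // essentially free while the gate is open
        // ... perform one unit of work ...
    }
}
```

An AutoResetEvent in the same position would release only a single worker per Set call, which is why it cannot implement pause/resume for a whole pool.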
You have seen how the AutoResetEvent can be used to signal an event from one thread to another, and how the ManualResetEvent can be used to signal from one thread to an entire group of threads.
In this lecture I am going to cover a third scenario: how to signal from a group of threads to a single thread. I will show you how the CountdownEvent, a new type of wait handle, can be used to implement this scenario. I will revisit the producer / consumer queue, and use a CountdownEvent to add a new feature that lets me quit all consumers simultaneously.
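The group-to-one direction can be sketched like this (names are mine): each worker counts the event down by one, and the waiting thread unblocks only when the count reaches zero.

```csharp
using System;
using System.Threading;

static class CountdownDemo
{
    public static string Run()
    {
        var allFinished = new CountdownEvent(3); // initial count: 3 workers

        for (int i = 0; i < 3; i++)
        {
            new Thread(() =>
            {
                // ... do some work ...
                allFinished.Signal(); // each worker decrements the count by one
            }).Start();
        }

        allFinished.Wait(); // unblocks only when the count reaches zero
        return "all workers finished";
    }
}
```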
By the end of the lecture you will have a clear understanding of the differences between the AutoResetEvent, the ManualResetEvent and the CountdownEvent.
In this lecture I will describe 'Thread Rendezvous' which is the process of aligning two or more threads in time, to execute the same part of code simultaneously.
There are several ways to implement thread rendezvous, and you have already seen one method that uses two complementary AutoResetEvents to synchronise two threads. I will show you two other techniques that solve several problems with the AutoResetEvents solution.
By the end of the lecture you will have learned what the Barrier class is for, and how it solves several problems that pop up when you try to implement thread rendezvous with AutoResetEvents or a CountdownEvent.
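A sketch of the Barrier class in action (names are mine): all participants must reach SignalAndWait before any may continue, and the optional post-phase action runs exactly once per completed phase.

```csharp
using System;
using System.Threading;

static class BarrierDemo
{
    public static int Run()
    {
        int phasesCompleted = 0;

        // 3 participants; the post-phase action fires once each time
        // all three have arrived at the barrier.
        var barrier = new Barrier(3, _ => phasesCompleted++);

        ThreadStart work = () =>
        {
            barrier.SignalAndWait(); // rendezvous point 1
            barrier.SignalAndWait(); // rendezvous point 2
        };

        var threads = new[] { new Thread(work), new Thread(work), new Thread(work) };
        foreach (var t in threads) t.Start();
        foreach (var t in threads) t.Join();

        return phasesCompleted; // two phases completed
    }
}
```

Unlike the AutoResetEvent approach, the Barrier is reusable across phases and scales to any number of participants without extra wait handles.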
In this lecture I would like to thank you for finishing the course and offer some final words.