Threading in C#

http://www.albahari.com/threading/

Joseph Albahari

 

Getting Started

Basic Synchronization

Using Threads

Advanced Topics

Overview and Concepts

Creating and Starting Threads

Synchronization Essentials

Locking and Thread Safety

Interrupt and Abort

Thread State

Wait Handles

Synchronization Contexts

Apartments and Windows Forms

BackgroundWorker

ReaderWriterLockSlim

Thread Pooling

Asynchronous Delegates

Timers

Local Storage

Non-Blocking Synchronization

Wait and Pulse

Suspend and Resume

Aborting Threads

Translations: Russian | Chinese | Persian

(Like to write another translation? Let me know!)

Last updated: 2007-12-1

 

Overview and Concepts

C# supports parallel execution of code through multithreading. A thread is an independent execution path, able to run simultaneously with other threads.

A C# program starts in a single thread created automatically by the CLR and operating system (the "main" thread), and is made multi-threaded by creating additional threads. Here's a simple example and its output:

All examples assume the following namespaces are imported, unless otherwise specified:

using System;
using System.Threading;

class ThreadTest {
  static void Main() {
    Thread t = new Thread (WriteY);
    t.Start();                          // Run WriteY on the new thread
    while (true) Console.Write ("x");   // Write 'x' forever
  }
 
  static void WriteY() {
    while (true) Console.Write ("y");   // Write 'y' forever
  }
}

xxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyy
yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
yyyyyyyyyyyyyxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
...

The main thread creates a new thread t on which it runs a method that repeatedly prints the character y. Simultaneously, the main thread repeatedly prints the character x.

The CLR assigns each thread its own memory stack so that local variables are kept separate. In the next example, we define a method with a local variable, then call the method simultaneously on the main thread and a newly created thread:

static void Main() {
  new Thread (Go).Start();      // Call Go() on a new thread
  Go();                         // Call Go() on the main thread
}
 
static void Go() {
  // Declare and use a local variable - 'cycles'
  for (int cycles = 0; cycles < 5; cycles++) Console.Write ('?');
}

??????????

A separate copy of the cycles variable is created on each thread's memory stack, and so the output is, predictably, ten question marks.

Threads share data if they have a common reference to the same object instance. Here's an example:

class ThreadTest {
 bool done;
 
 static void Main() {
   ThreadTest tt = new ThreadTest();   // Create a common instance
   new Thread (tt.Go).Start();
   tt.Go();
 }
 
 // Note that Go is now an instance method
 void Go() {
   if (!done) { done = true; Console.WriteLine ("Done"); }
 }
}

Because both threads call Go() on the same ThreadTest instance, they share the done field. This results in "Done" being printed once instead of twice:

Done

Static fields offer another way to share data between threads. Here's the same example with done as a static field:

class ThreadTest {
 static bool done;    // Static fields are shared between all threads
 
 static void Main() {
   new Thread (Go).Start();
   Go();
 }
 
 static void Go() {
   if (!done) { done = true; Console.WriteLine ("Done"); }
 }
}

Both of these examples illustrate another key concept – that of thread safety (or, rather, the lack of it!). The output is actually indeterminate: it's possible (although unlikely) that "Done" could be printed twice. If, however, we swap the order of statements in the Go method, the odds of "Done" being printed twice go up dramatically:

static void Go() {
  if (!done) { Console.WriteLine ("Done"); done = true; }
}

Done
Done   (usually!)

The problem is that one thread can be evaluating the if statement right as the other thread is executing the WriteLine statement – before it's had a chance to set done to true.

The remedy is to obtain an exclusive lock while reading and writing to the common field. C# provides the lock statement for just this purpose:

class ThreadSafe {
  static bool done;
  static object locker = new object();
 
  static void Main() {
    new Thread (Go).Start();
    Go();
  }
 
  static void Go() {
    lock (locker) {
      if (!done) { Console.WriteLine ("Done"); done = true; }
    }
  }
}

When two threads simultaneously contend a lock (in this case, locker), one thread waits, or blocks, until the lock becomes available. In this case, it ensures only one thread can enter the critical section of code at a time, and "Done" will be printed just once. Code that's protected in such a manner – from indeterminacy in a multithreading context – is called thread-safe.

Temporarily pausing, or blocking, is an essential feature in coordinating, or synchronizing the activities of threads. Waiting for an exclusive lock is one reason for which a thread can block. Another is if a thread wants to pause, or Sleep for a period of time:

Thread.Sleep (TimeSpan.FromSeconds (30));         // Block for 30 seconds

A thread can also wait for another thread to end, by calling its Join method:

Thread t = new Thread (Go);           // Assume Go is some static method
t.Start();
t.Join();                             // Wait (block) until thread t ends

A thread, while blocked, doesn't consume CPU resources.

How Threading Works

Multithreading is managed internally by a thread scheduler, a function the CLR typically delegates to the operating system. A thread scheduler ensures all active threads are allocated appropriate execution time, and that threads that are waiting or blocked (for instance, on an exclusive lock or on user input) do not consume CPU time.

On a single-processor computer, a thread scheduler performs time-slicing – rapidly switching execution between each of the active threads. This results in "choppy" behavior, such as in the very first example, where each block of repeating x or y characters corresponds to a time-slice allocated to the thread. Under Windows XP, a time-slice is typically in the tens-of-milliseconds region – chosen so as to be much larger than the CPU overhead of actually switching context between one thread and another (which is typically in the few-microseconds region).

On a multi-processor computer, multithreading is implemented with a mixture of time-slicing and genuine concurrency – where different threads run code simultaneously on different CPUs. It's almost certain there will still be some time-slicing, because of the operating system's need to service its own threads – as well as those of other applications.

A thread is said to be preempted when its execution is interrupted due to an external factor such as time-slicing. In most situations, a thread has no control over when and where it's preempted.

Threads vs. Processes

All threads within a single application are logically contained within a process – the operating system unit in which an application runs.

Threads have certain similarities to processes – for instance, processes are typically time-sliced with other processes running on the computer in much the same way as threads within a single C# application. The key difference is that processes are fully isolated from each other; threads share (heap) memory with other threads running in the same application. This is what makes threads useful: one thread can be fetching data in the background, while another thread is displaying the data as it arrives.

When to Use Threads

A common application for multithreading is performing time-consuming tasks in the background. The main thread keeps running, while the worker thread does its background job. With Windows Forms or WPF applications, if the main thread is tied up performing a lengthy operation, keyboard and mouse messages cannot be processed, and the application becomes unresponsive. For this reason, it’s worth running time-consuming tasks on worker threads even if the main thread has the user stuck on a “Processing… please wait” modal dialog in cases where the program can’t proceed until a particular task is complete. This ensures the application doesn’t get tagged as “Not Responding” by the operating system, enticing the user to forcibly end the process in frustration! The modal dialog approach also allows for implementing a "Cancel" button, since the modal form will continue to receive events while the actual task is performed on the worker thread. The BackgroundWorker class assists in just this pattern of use.

In the case of non-UI applications, such as a Windows Service, multithreading makes particular sense when a task is potentially time-consuming because it’s awaiting a response from another computer (such as an application server, database server, or client). Having a worker thread perform the task means the instigating thread is immediately free to do other things.

Another use for multithreading is in methods that perform intensive calculations. Such methods can execute faster on a multi-processor computer if the workload is divided amongst multiple threads. (One can test for the number of processors via the Environment.ProcessorCount property).
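
For instance, here's a minimal sketch (my own, not from the article – the ParallelSum class name, the array of ones, and the chunking scheme are illustrative choices) of dividing a summation across one thread per processor:

class ParallelSum {
  static void Main() {
    int[] data = new int [1000000];
    for (int i = 0; i < data.Length; i++) data [i] = 1;
 
    int threadCount = Environment.ProcessorCount;
    long[] partial = new long [threadCount];        // one result slot per thread
    Thread[] workers = new Thread [threadCount];
    int chunk = data.Length / threadCount;
 
    for (int i = 0; i < threadCount; i++) {
      int index = i;                                // capture copies for the delegate
      int from = index * chunk;
      int to = (index == threadCount - 1) ? data.Length : from + chunk;
      workers [index] = new Thread (delegate() {
        long sum = 0;
        for (int j = from; j < to; j++) sum += data [j];
        partial [index] = sum;                      // each thread writes only its own slot
      });
      workers [index].Start();
    }
 
    foreach (Thread w in workers) w.Join();         // wait for all workers to finish
 
    long total = 0;
    foreach (long p in partial) total += p;
    Console.WriteLine (total);                      // 1000000
  }
}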

A C# application can become multi-threaded in two ways: either by explicitly creating and running additional threads, or using a feature of the .NET framework that implicitly creates threads – such as BackgroundWorker, thread pooling, a threading timer, a Remoting server, or a Web Services or ASP.NET application. In these latter cases, one has no choice but to embrace multithreading. A single-threaded ASP.NET web server would not be cool – even if such a thing were possible! Fortunately, with stateless application servers, multithreading is usually fairly simple; one's only concern perhaps being in providing appropriate locking mechanisms around data cached in static variables.
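
As a rough illustration (my own example, not the article's), a System.Threading.Timer implicitly runs its callback on a pool thread, so anything the callback touches needs the same care as data shared between explicitly created threads:

class TimerDemo {
  static void Main() {
    // The Tick callback runs on a thread-pool thread, not on the main thread
    Timer timer = new Timer (Tick, null, 0, 1000);   // fire now, then every second
    Console.ReadLine();                              // keep the main (foreground) thread alive
    timer.Dispose();
  }
  static void Tick (object state) { Console.WriteLine ("tick"); }
}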

When Not to Use Threads

Multithreading also comes with disadvantages. The biggest is that it can lead to vastly more complex programs. Having multiple threads does not in itself create complexity; it's the interaction between the threads that creates complexity. This applies whether or not the interaction is intentional, and can result in long development cycles, as well as an ongoing susceptibility to intermittent and non-reproducible bugs. For this reason, it pays to keep such interaction in a multi-threaded design simple – or not use multithreading at all – unless you have a peculiar penchant for re-writing and debugging!

Multithreading also comes with a resource and CPU cost in allocating and switching threads if used excessively. In particular, when heavy disk I/O is involved, it can be faster to have just one or two worker threads performing tasks in sequence, rather than having a multitude of threads each executing a task at the same time. Later we describe how to implement a Producer/Consumer queue, which provides just this functionality.

Creating and Starting Threads

Threads are created using the Thread class’s constructor, passing in a ThreadStart delegate – indicating the method where execution should begin.  Here’s how the ThreadStart delegate is defined:

public delegate void ThreadStart();

Calling Start on the thread then sets it running. The thread continues until its method returns, at which point the thread ends. Here’s an example, using the expanded C# syntax for creating a ThreadStart delegate:

class ThreadTest {
  static void Main() {
    Thread t = new Thread (new ThreadStart (Go));
    t.Start();   // Run Go() on the new thread.
    Go();        // Simultaneously run Go() in the main thread.
  }
  static void Go() { Console.WriteLine ("hello!"); }

In this example, thread t executes Go() – at (much) the same time the main thread calls Go(). The result is two near-instant hellos:

hello!
hello!

A thread can be created more conveniently using C#'s shortcut syntax for instantiating delegates:

static void Main() {
  Thread t = new Thread (Go);    // No need to explicitly use ThreadStart
  t.Start();
  ...
}
static void Go() { ... }

In this case, a ThreadStart delegate is inferred automatically by the compiler. Another shortcut is to use an anonymous method to start the thread:

static void Main() {
  Thread t = new Thread (delegate() { Console.WriteLine ("Hello!"); });
  t.Start();
}

A thread has an IsAlive property that returns true after its Start() method has been called, up until the thread ends.

A thread, once ended, cannot be re-started.
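
Both points can be seen in the following quick sketch (mine, not the article's):

Thread t = new Thread (delegate() { Thread.Sleep (500); });
Console.WriteLine (t.IsAlive);   // False – Start hasn't been called yet
t.Start();
Console.WriteLine (t.IsAlive);   // True – started and not yet ended
t.Join();                        // wait for the thread to end
Console.WriteLine (t.IsAlive);   // False – the thread has ended
// t.Start();                    // would now throw a ThreadStateException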

Passing Data to ThreadStart

Let’s say, in the example above, we wanted to better distinguish the output from each thread, perhaps by having one of the threads write in upper case. We could achieve this by passing a flag to the Go method: but then we couldn’t use the ThreadStart delegate because it doesn’t accept arguments. Fortunately, the .NET framework defines another version of the delegate called ParameterizedThreadStart, which accepts a single object argument as follows:

public delegate void ParameterizedThreadStart (object obj);

The previous example then looks like this:

class ThreadTest {
  static void Main() {
    Thread t = new Thread (Go);
    t.Start (true);             // == Go (true) 
    Go (false);
  }
  static void Go (object upperCase) {
    bool upper = (bool) upperCase;
    Console.WriteLine (upper ? "HELLO!" : "hello!");
  }
}

hello!
HELLO!

In this example, the compiler automatically infers a ParameterizedThreadStart delegate because the Go method accepts a single object argument. We could just as well have written:

Thread t = new Thread (new ParameterizedThreadStart (Go));
t.Start (true);

A limitation of using ParameterizedThreadStart is that we must cast the object argument to the desired type (in this case bool) before use. Also, there is only a single-argument version of this delegate.

An alternative is to use an anonymous method to call an ordinary method as follows:

static void Main() {
  Thread t = new Thread (delegate() { WriteText ("Hello"); });
  t.Start();
}
static void WriteText (string text) { Console.WriteLine (text); }

The advantage is that the target method (in this case WriteText) can accept any number of arguments, and no casting is required. However one must take into account the outer-variable semantics of anonymous methods, as is apparent in the following example:

static void Main() {
  string text = "Before";
  Thread t = new Thread (delegate() { WriteText (text); });
  text = "After";
  t.Start();
}
static void WriteText (string text) { Console.WriteLine (text); }

After

 

Anonymous methods open the grotesque possibility of unintended interaction via outer variables if they are modified by either party subsequent to the thread starting. Intended interaction (usually via fields) is generally considered more than enough! Outer variables are best treated as read-only once thread execution has begun – unless one's willing to implement appropriate locking semantics on both sides.
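
One simple way to avoid such interaction (a sketch of mine, not from the article) is to copy the outer variable into a dedicated local before starting the thread, so that later changes on the main thread can't affect what the worker sees:

static void Main() {
  string text = "Before";
  string captured = text;           // the delegate captures this copy
  Thread t = new Thread (delegate() { WriteText (captured); });
  text = "After";                   // has no effect on 'captured'
  t.Start();                        // prints "Before"
}
static void WriteText (string text) { Console.WriteLine (text); }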

Another common technique for passing data to a thread is to give Thread an instance method rather than a static method. The instance object’s properties can then tell the thread what to do, as in the following rewrite of the original example:

class ThreadTest {
  bool upper;
 
  static void Main() {
    ThreadTest instance1 = new ThreadTest();
    instance1.upper = true;
    Thread t = new Thread (instance1.Go);
    t.Start();
    ThreadTest instance2 = new ThreadTest();
    instance2.Go();        // Main thread – runs with upper=false
  }
 
  void Go() { Console.WriteLine (upper ? "HELLO!" : "hello!"); }
}

Naming Threads

A thread can be named via its Name property. This is of great benefit in debugging: as well as being able to Console.WriteLine a thread’s name, Microsoft Visual Studio picks up a thread’s name and displays it in the Debug Location toolbar. A thread’s name can be set at any time – but only once; attempting to change it subsequently will throw an exception.

The application’s main thread can also be assigned a name – in the following example the main thread is accessed via the CurrentThread static property:

class ThreadNaming {
  static void Main() {
    Thread.CurrentThread.Name = "main";
    Thread worker = new Thread (Go);
    worker.Name = "worker";
    worker.Start();
    Go();
  }
  static void Go() {
    Console.WriteLine ("Hello from " + Thread.CurrentThread.Name);
  }
}

Hello from main
Hello from worker

Foreground and Background Threads

By default, threads are foreground threads, meaning they keep the application alive for as long as any one of them is running. C# also supports background threads, which don’t keep the application alive on their own – terminating immediately once all foreground threads have ended.

Changing a thread from foreground to background doesn’t change its priority or status within the CPU scheduler in any way.

A thread's IsBackground property controls its background status, as in the following example:

class PriorityTest {
  static void Main (string[] args) {
    Thread worker = new Thread (delegate() { Console.ReadLine(); });
    if (args.Length > 0) worker.IsBackground = true;
    worker.Start();
  }
}

If the program is called with no arguments, the worker thread runs in its default foreground mode, and will wait on the ReadLine statement, waiting for the user to hit Enter. Meanwhile, the main thread exits, but the application keeps running because a foreground thread is still alive.

If on the other hand an argument is passed to Main(), the worker is assigned background status, and the program exits almost immediately as the main thread ends – terminating the ReadLine.

When a background thread terminates in this manner, any finally blocks are circumvented. As circumventing finally code is generally undesirable, it's good practice to explicitly wait for any background worker threads to finish before exiting an application – perhaps with a timeout (this is achieved by calling Thread.Join). If for some reason a renegade worker thread never finishes, one can then attempt to abort it, and if that fails, abandon the thread, allowing it to die with the process (logging the conundrum at this stage would also make sense!)
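
As a rough sketch of that shutdown sequence (my own arrangement of the calls just mentioned, with an assumed five-second timeout and a background thread called worker):

// At application shutdown – 'worker' is a background thread started earlier:
if (!worker.Join (TimeSpan.FromSeconds (5))) {      // wait up to 5 seconds for it to finish
  worker.Abort();                                   // failing that, attempt to abort it...
  if (!worker.Join (TimeSpan.FromSeconds (1)))      // ...giving it a moment to unwind
    Console.WriteLine ("Abandoning worker thread"); // log the conundrum before exiting
}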

Having worker threads as background threads can then be beneficial, for the very reason that it's always possible to have the last say when it comes to ending the application. Consider the alternative – a foreground thread that won't die – preventing the application from exiting. An abandoned foreground worker thread is particularly insidious with a Windows Forms application, because the application will appear to exit when the main thread ends (at least to the user) but its process will remain running. In the Windows Task Manager, it will have disappeared from the Applications tab, although its executable filename will still be visible in the Processes tab. Unless the user explicitly locates and ends the task, it will continue to consume resources and perhaps prevent a new instance of the application from starting or functioning properly.

A common cause for an application failing to exit properly is the presence of “forgotten” foreground threads.

Thread Priority

A thread’s Priority property determines how much execution time it gets relative to other active threads in the same process, on the following scale:

enum ThreadPriority { Lowest, BelowNormal, Normal, AboveNormal, Highest }

This becomes relevant only when multiple threads are simultaneously active.
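
For example (a sketch of my own – DoHousekeeping is an assumed method), a thread doing low-importance work can be demoted so that it receives less CPU time than Normal-priority threads whenever they compete:

Thread housekeeper = new Thread (DoHousekeeping);    // assume DoHousekeeping is defined elsewhere
housekeeper.Priority = ThreadPriority.BelowNormal;
housekeeper.Start();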

Setting a thread’s priority to high doesn’t mean it can perform real-time work, because it’s still limited by the application’s process priority. To perform real-time work, the Process class in System.Diagnostics must also be used to elevate the process priority as follows (I didn't tell you how to do this):

Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;

ProcessPriorityClass.High is actually one notch short of the highest process priority: Realtime. Setting one's process priority to Realtime instructs the operating system that you never want your process to be preempted. If your program enters an accidental infinite loop you can expect even the operating system to be locked out. Nothing short of the power button will rescue you! For this reason, High is generally considered the highest usable process priority.

If the real-time application has a user interface, it can be undesirable to elevate the process priority because screen updates will be given excessive CPU time – slowing the entire computer, particularly if the UI is complex. (Although at the time of writing, the Internet telephony program Skype gets away with doing just this, perhaps because its UI is fairly simple). Lowering the main thread’s priority – in conjunction with raising the process’s priority – ensures the real-time thread doesn’t get preempted by screen redraws, but doesn’t prevent the computer from slowing, because the operating system will still allocate excessive CPU to the process as a whole. The ideal solution is to have the real-time work and user interface in separate processes (with different priorities), communicating via Remoting or shared memory. Shared memory requires P/Invoking the Win32 API (web-search CreateFileMapping and MapViewOfFile).

Exception Handling

Any try/catch/finally blocks in scope when a thread is created are of no relevance once the thread starts executing. Consider the following program:

public static void Main() {
  try {
    new Thread (Go).Start();
  }
  catch (Exception ex) {
    // We'll never get here!
    Console.WriteLine ("Exception!");
  }
}
 
static void Go() { throw null; }

The try/catch statement in this example is effectively useless, and the newly created thread will be encumbered with an unhandled NullReferenceException. This behavior makes sense when you consider a thread has an independent execution path. The remedy is for thread entry methods to have their own exception handlers:

public static void Main() {
   new Thread (Go).Start();
}
 
static void Go() {
  try {
    ...
    throw null;      // this exception will get caught below
    ...
  }
  catch (Exception ex) {
    // Typically log the exception, and/or signal another thread
    // that we've come unstuck
    ...
  }
}

From .NET 2.0 onwards, an unhandled exception on any thread shuts down the whole application, meaning ignoring the exception is generally not an option. Hence a try/catch block is required in every thread entry method – at least in production applications – in order to avoid unwanted application shutdown in case of an unhandled exception. This can be somewhat cumbersome – particularly for Windows Forms programmers, who commonly use the "global" exception handler, as follows:

using System;
using System.Threading;
using System.Windows.Forms;
 
static class Program {
  static void Main() {
    Application.ThreadException += HandleError;
    Application.Run (new MainForm());
  }
 
  static void HandleError (object sender, ThreadExceptionEventArgs e) {
    // Log exception, then either exit the app or continue...
  }
}

The Application.ThreadException event fires when an exception is thrown from code that was ultimately called as a result of a Windows message (for example, a keyboard, mouse or "paint" message) – in short, nearly all code in a typical Windows Forms application. While this works perfectly, it lulls one into a false sense of security – that all exceptions will be caught by the central exception handler. Exceptions thrown on worker threads are a good example of exceptions not caught by Application.ThreadException (the code inside the Main method is another – including the main form's constructor, which executes before the Windows message loop begins).

The .NET framework provides a lower-level event for global exception handling: AppDomain.UnhandledException. This event fires when there's an unhandled exception in any thread, and in any type of application (with or without a user interface). However, while it offers a good last-resort mechanism for logging untrapped exceptions, it provides no means of preventing the application from shutting down – and no means to suppress the .NET unhandled exception dialog.
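
Subscribing to the event is straightforward; the following minimal sketch (mine, not from the article) simply logs whatever escapes before the application shuts down:

static void Main() {
  AppDomain.CurrentDomain.UnhandledException +=
    delegate (object sender, UnhandledExceptionEventArgs e) {
      Console.WriteLine ("Unhandled: " + e.ExceptionObject);   // last-resort logging only
    };
  new Thread (delegate() { throw null; }).Start();   // the application still shuts down
  Thread.Sleep (1000);
}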

In production applications, explicit exception handling is required on all thread entry methods. One can cut the work by using a wrapper or helper class to perform the job, such as BackgroundWorker (discussed in Part 3).
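
Such a helper can be as simple as the following sketch (a hypothetical class of my own, not part of the framework), which supplies the try/catch so that individual entry methods don't have to:

static class SafeThread {
  public static Thread Create (ThreadStart body) {
    return new Thread (delegate() {
      try { body(); }
      catch (Exception ex) {
        Console.WriteLine ("Worker thread failed: " + ex);   // typically log and/or signal another thread
      }
    });
  }
}

A worker is then started with SafeThread.Create (Go).Start(); any exception thrown by Go is caught by the wrapper instead of tearing down the application.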
