Concurrent Programming: Key Concepts and Three Classic Problems

Key Concepts

Race Condition

A race condition or race hazard is a flaw in an electronic system or process whereby the output and/or result of the process is unexpectedly and critically dependent on the sequence or timing of other events. The term originates with the idea of two signals racing each other to influence the output first.

Critical section

In concurrent programming, a critical section is a piece of code that accesses a shared resource (data structure or device) that must not be concurrently accessed by more than one thread of execution. A critical section will usually terminate in fixed time, and a thread, task or process will have to wait a fixed time to enter it (bounded waiting). Some synchronization mechanism is required at the entry and exit of the critical section to ensure exclusive use, for example a semaphore.

Mutex

Hardware solutions

On a uniprocessor system, a common way to achieve mutual exclusion inside kernels is to disable interrupts for the smallest possible number of instructions that will prevent corruption of the shared data structure, the critical section. This prevents interrupt handlers from running inside the critical section, and it also protects against an interrupt-driven process switch.

In a computer in which several processors share memory, an indivisible test-and-set of a flag could be used in a tight loop to wait until the other processor clears the flag. The test-and-set performs both operations without releasing the memory bus to another processor. When the code leaves the critical section, it clears the flag. This is called a "spinlock" or "busy-wait".
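
A minimal sketch of this idea in Java, assuming the shared flag is an AtomicBoolean whose getAndSet acts as the atomic test-and-set (the class and method names are illustrative):

import java.util.concurrent.atomic.AtomicBoolean;

// Spinlock sketch: acquire() busy-waits on an atomic test-and-set, release() clears the flag.
class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void acquire() {
        // getAndSet atomically writes true and returns the previous value;
        // keep spinning until this thread is the one that flipped it from false to true.
        while (locked.getAndSet(true)) {
            Thread.onSpinWait(); // Java 9+: hint to the CPU that this is a busy-wait
        }
    }

    void release() {
        locked.set(false); // leaving the critical section clears the flag
    }
}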

Similar atomic multiple-operation instructions, e.g., compare-and-swap, are commonly used for lock-free manipulation of linked lists and other data structures.
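
As an illustration, a minimal lock-free stack (the classic Treiber stack) can be sketched in Java with compareAndSet on an AtomicReference standing in for the compare-and-swap instruction; this is a sketch, not production code:

import java.util.concurrent.atomic.AtomicReference;

// Each push/pop retries its CAS until it succeeds without another thread interfering.
class LockFreeStack<T> {
    private static final class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> top = new AtomicReference<>();

    void push(T value) {
        Node<T> node = new Node<>(value);
        do {
            node.next = top.get();                      // read the current top
        } while (!top.compareAndSet(node.next, node));  // retry if another thread changed it
    }

    T pop() {
        Node<T> current;
        do {
            current = top.get();
            if (current == null) return null;           // empty stack
        } while (!top.compareAndSet(current, current.next));
        return current.value;
    }
}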

Software solutions

Besides the hardware-supported solutions, some software solutions exist that use busy waiting to achieve mutual exclusion, such as Dekker's algorithm, Peterson's algorithm, and Lamport's bakery algorithm.

Unfortunately, spinlocks and busy waiting waste processor time and power and are considered anti-patterns in almost every case. In addition, these algorithms do not work as written if out-of-order execution is used on the platform that executes them; programmers then have to specify strict ordering of the memory operations within a thread.

The solution to these problems is to use the synchronization facilities provided by an operating system's multithreading library, which will take advantage of hardware solutions if possible but will use software solutions if none exist. For example, when the operating system's lock library is used and a thread tries to acquire an already acquired lock, the operating system will suspend the thread with a context switch and swap in another thread that is ready to run, or put the processor into a low-power state if no other thread can be run. Therefore, most modern mutual exclusion methods attempt to reduce latency and busy-waiting by using queuing and context switches. However, if the time spent suspending a thread and then restoring it can be proven to always exceed the time that must be waited for a thread to become ready to run after being blocked in a particular situation, then spinlocks are a fine solution for that situation only.

Advanced mutual exclusion

Synchronization primitives such as the locks, semaphores, and monitors discussed below can be built by using the solutions explained above.

 

Lock

In computer science, a lock is a synchronization mechanism for enforcing limits on access to a resource in an environment where there are many threads of execution.

Locks typically require hardware support for efficient implementation. This usually takes the form of one or more atomic instructions such as "test-and-set", "fetch-and-add" or "compare-and-swap". These instructions allow a single process to test if the lock is free, and if free, acquire the lock in a single atomic operation.

An atomic operation is required because of concurrency: more than one task may execute the same logic at the same time. For example, consider the following C code:

if (lock == 0) lock = myPID; /* lock free - set it */

The above example does not guarantee that the task has the lock, since more than one task can be testing the lock at the same time. Since both tasks will detect that the lock is free, both will attempt to set it, neither knowing that the other is doing the same. Dekker's or Peterson's algorithm is a possible substitute if atomic locking operations are not available.
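
With compare-and-swap the test and the set become one indivisible step. A hedged Java sketch of the same idea, where the lock word is an AtomicInteger holding 0 for "free" or an owner id (a made-up convention mirroring the C fragment above; the names are illustrative):

import java.util.concurrent.atomic.AtomicInteger;

class PidLock {
    private final AtomicInteger lock = new AtomicInteger(0); // 0 means "free"

    // Try once to take the lock for the given owner id; true only if this call won the race.
    boolean tryAcquire(int myId) {
        // compareAndSet succeeds only if the lock is still 0 at the instant of the write,
        // so the test and the set can no longer be interleaved with another task's attempt.
        return lock.compareAndSet(0, myId);
    }

    void release(int myId) {
        lock.compareAndSet(myId, 0); // only the recorded owner id can clear it
    }
}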

Semaphore

In computer science, a semaphore is a protected variable or abstract data type that constitutes a classic method of controlling access by several processes to a common resource in a parallel programming environment. A semaphore generally takes one of two forms: binary and counting. A binary semaphore is a simple "true/false" (locked/unlocked) flag that controls access to a single resource. A counting semaphore is a counter for a set of available resources. Either semaphore type may be employed to prevent a race condition. On the other hand, a semaphore is of no value in preventing resource deadlock, such as illustrated by the dining philosophers problem.

Counting semaphores are accessed using operations similar to the following Pascal examples. Procedure V will increment the semaphore S, whereas procedure P will decrement it:

 procedure V (S : Semaphore);
 begin
   (* Atomic operation: increment the semaphore. *)
   S := S + 1;
 end;
 
 procedure P (S : Semaphore);
 begin
   (* Atomic operation: wait until the semaphore is positive, then decrement it. *)
   repeat
     Wait();
   until S > 0;
   S := S - 1;
 end;

To guarantee that two or more processes do not attempt to simultaneously modify the same semaphore, the operations that actually increment or decrement the semaphore are designated as atomic, meaning they cannot be interrupted, such as by preemption. This requirement may be met by using a machine instruction that is able to read, modify and write the semaphore in a single operation. In the absence of such a hardware instruction, an atomic operation may be synthesized by temporarily suspending preemption or, less desirably, by temporarily disabling hardware interrupts.
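
In Java, java.util.concurrent.Semaphore provides the same two operations as acquire() (P) and release() (V). A minimal counting-semaphore sketch guarding a pool of three resources (the names are illustrative):

import java.util.concurrent.Semaphore;

class PoolExample {
    // Counting semaphore initialized to the number of available resources.
    private static final Semaphore available = new Semaphore(3);

    static void useResource() throws InterruptedException {
        available.acquire();      // P: block until the count is positive, then decrement it
        try {
            // ... use one of the three resources ...
        } finally {
            available.release();  // V: increment the count and wake a waiter, if any
        }
    }
}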

Semaphores remain in common use in programming languages that do not intrinsically support other forms of synchronization. They are the primitive synchronization mechanism in many operating systems. The trend in programming language development, though, is towards more structured forms of synchronization, such as monitors (though these advanced structures typically employ semaphores behind the scenes). In addition to their inadequacies in dealing with (multi-resource) deadlocks, semaphores do not protect the programmer from the easy mistakes of taking a semaphore that is already held by the same process, and forgetting to release a semaphore that has been taken.

A mutex is a binary semaphore that usually incorporates extra features, such as ownership, priority-inversion protection, or recursion. The differences between mutexes and semaphores are operating-system dependent, though mutexes are usually implemented by specialized, faster routines. Mutexes are meant to be used only for mutual exclusion (the post/release operation is restricted to the thread that called pend/acquire), while binary semaphores are meant to be used for event notification (any thread may post) as well as for mutual exclusion.

Events are also sometimes called event semaphores and are used for event notification.
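
The ownership difference is visible in Java: a ReentrantLock must be unlocked by the thread that locked it (unlocking from another thread throws IllegalMonitorStateException), whereas a binary Semaphore may be released by any thread, which is what makes it usable for event notification. A small illustrative sketch:

import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;

public class MutexVsBinarySemaphore {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock mutex = new ReentrantLock();
        Semaphore event = new Semaphore(0); // binary semaphore used as an event, initially "not signaled"

        mutex.lock();
        // The lock is owned by this thread; only this thread may legally call unlock().
        mutex.unlock();

        // Any thread may post the event; here a worker signals and the main thread waits.
        new Thread(event::release).start();
        event.acquire();                    // wait for the notification
        System.out.println("event received");
    }
}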

Monitor

In concurrent programming, a monitor is an object intended to be used safely by more than one thread. The defining characteristic of a monitor is that its methods are executed with mutual exclusion. That is, at each point in time, at most one thread may be executing any of its methods.

Monitors also provide a mechanism for threads to temporarily give up exclusive access, in order to wait for some condition to be met, before regaining exclusive access and resuming their task. Monitors also have a mechanism for signaling other threads that such conditions have been met.

For many applications, mutual exclusion is not enough. Threads attempting an operation may need to wait until some assertion P holds true. A busy waiting loop

   while not( P ) do skip

will not work, as mutual exclusion will prevent any other thread from entering the monitor to make the condition true.

The solution is condition variables. Conceptually a condition variable is a queue of threads, associated with a monitor, upon which a thread may wait for some assertion to become true. Thus each condition variable c is associated with some assertion Pc. While a thread is waiting upon a condition variable, that thread is not considered to occupy the monitor, and so other threads may enter the monitor to change the monitor's state. In most types of monitors, these other threads may signal the condition variable c to indicate that assertion Pc is true.

Thus there are two main operations on condition variables:

  • wait c is called by a thread that needs to wait until the assertion Pc is true before proceeding. While the thread is waiting, it does not occupy the monitor.
  • signal c (sometimes written as notify c) is called by a thread to indicate that the assertion Pc is true.

As an example, consider a monitor that implements a semaphore. There are methods to increment (V) and to decrement (P) a private integer s. However, the integer must never be decremented below 0; thus a thread that tries to decrement must wait until the integer is positive. We use a condition variable sIsPositive with an associated assertion of PsIsPositive = (s > 0).

monitor class Semaphore {
  private int s := 0
  invariant s >= 0 
  private Condition sIsPositive /* associated with s > 0 */
  
  public method P()
  {
    if s = 0 then wait sIsPositive 
    assert s > 0
    s := s - 1
  }
  
  public method V() {
    s := s + 1
    assert s > 0
    signal sIsPositive 
  }
}
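
In Java the monitor above maps onto synchronized methods, with wait()/notify() playing the role of the condition variable. A sketch under Java's signal-and-continue semantics, which is why the wait sits in a while loop rather than an if:

// Monitor-style semaphore: synchronized methods give mutual exclusion,
// wait()/notify() stand in for the condition variable sIsPositive.
class MonitorSemaphore {
    private int s = 0; // invariant: s >= 0

    public synchronized void P() throws InterruptedException {
        while (s == 0) {   // re-check the condition after every wakeup
            wait();
        }
        s = s - 1;
    }

    public synchronized void V() {
        s = s + 1;
        notify();          // wake one waiter; it re-checks the condition above
    }
}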

When a signal happens on a condition that at least one other thread is waiting on, there are at least two threads that could then occupy the monitor: the thread that signals and any one of the threads that is waiting. In order that at most one thread occupies the monitor at each time, a choice must be made. Two schools of thought exist on how best to resolve this choice. This leads to two kinds of condition variables which will be examined next:

  • Blocking condition variables give priority to a signaled thread.
  • Nonblocking condition variables give priority to the signaling thread.

http://www.artima.com/insidejvm/ed2/threadsynch.html

http://msdn.microsoft.com/en-us/library/ms173179%28VS.80%29.aspx

http://cseweb.ucsd.edu/classes/fa05/cse120/lectures/120-l6.pdf

http://www.cs.mtu.edu/~shene/NSF-3/e-Book/SEMA/TM-example-buffer.html

http://www.cs.mtu.edu/~shene/NSF-3/e-Book/MONITOR/ProducerConsumer-1/MON-example-buffer-1.html

Three Classic Problems

Producer-Consumer Problem

Semaphore version
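
A minimal Java sketch of the classic three-semaphore solution (a mutex for the buffer plus two counting semaphores; the names emptySlots and filledSlots are illustrative):

import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.Semaphore;

// Bounded buffer guarded by three semaphores: emptySlots counts free slots,
// filledSlots counts available items, mutex protects the queue itself.
class SemaphoreBoundedBuffer<T> {
    private final Queue<T> buffer = new LinkedList<>();
    private final Semaphore mutex = new Semaphore(1);
    private final Semaphore emptySlots;
    private final Semaphore filledSlots = new Semaphore(0);

    SemaphoreBoundedBuffer(int capacity) {
        emptySlots = new Semaphore(capacity);
    }

    void put(T item) throws InterruptedException {
        emptySlots.acquire();      // wait for a free slot
        mutex.acquire();
        buffer.add(item);
        mutex.release();
        filledSlots.release();     // one more item is available
    }

    T take() throws InterruptedException {
        filledSlots.acquire();     // wait for an item
        mutex.acquire();
        T item = buffer.remove();
        mutex.release();
        emptySlots.release();      // one more slot is free
        return item;
    }
}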

 

 

Monitor version
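
A minimal monitor-style bounded-buffer sketch in Java, using synchronized methods with wait()/notifyAll() (class and field names are illustrative):

import java.util.LinkedList;
import java.util.Queue;

// Monitor-style bounded buffer: the object's intrinsic lock gives mutual exclusion,
// wait()/notifyAll() provide the condition queues for "not full" and "not empty".
class MonitorBoundedBuffer<T> {
    private final Queue<T> buffer = new LinkedList<>();
    private final int capacity;

    MonitorBoundedBuffer(int capacity) { this.capacity = capacity; }

    public synchronized void put(T item) throws InterruptedException {
        while (buffer.size() == capacity) {  // while, not if: re-check after every wakeup
            wait();
        }
        buffer.add(item);
        notifyAll();                         // wake consumers that may be waiting
    }

    public synchronized T take() throws InterruptedException {
        while (buffer.isEmpty()) {           // while, not if
            wait();
        }
        T item = buffer.remove();
        notifyAll();                         // wake producers that may be waiting
        return item;
    }
}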

 

 

Note the use of while statements in the above code, both when testing whether the buffer is full and when testing whether it is empty. With multiple consumers there is a race condition in which one consumer is notified that an item has been put into the buffer, but another consumer, already waiting on the monitor, removes it from the buffer first. If the while were instead an if, too many items might be put into the buffer, or a remove might be attempted on an empty buffer.

 

Java version (ArrayBlockingQueue)
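
A minimal usage sketch with java.util.concurrent.ArrayBlockingQueue as the bounded buffer (the item counts and printing are only for illustration):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockingQueueExample {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10); // bounded buffer, capacity 10

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    queue.put(i);            // blocks while the buffer is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    int item = queue.take(); // blocks while the buffer is empty
                    System.out.println("consumed " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}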

 

 

It uses an interruptible lock internally; when a waiting thread is interrupted, the signal is propagated to the other waiting threads.

The operations come in matching pairs:

  • Try operations that return immediately: poll() - offer(E)
  • Timed try operations: poll(long, TimeUnit) - offer(E, long, TimeUnit)
  • Operations that wait until they succeed: take() - put(E)
  • Operations that throw an exception: add(E) - remove()

 

Java version with a timed wait
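
A sketch of the timed variants, where offer and poll give up after a bounded wait instead of blocking indefinitely:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class TimedBlockingQueueExample {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1);

        // offer waits up to 500 ms for space, then returns false instead of blocking forever.
        boolean accepted = queue.offer("task", 500, TimeUnit.MILLISECONDS);
        System.out.println("offered: " + accepted);

        // poll waits up to 500 ms for an item, then returns null on timeout.
        String item = queue.poll(500, TimeUnit.MILLISECONDS);
        System.out.println("polled: " + item);
    }
}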

 

 

 

Reader-Writer Problem

Semaphore version

  • First there are a reader count and a writer count, readCnt and writeCnt.
  • Correspondingly, two mutex semaphores, m1 and m2, protect updates to these two counters.
  • Two semaphores, w and r, synchronize readers and writers (blocking and waking):
    • The first reader does w.P(), so writers can no longer enter.
    • The last reader does w.V(), waking a waiting writer if there is one.
    • The first writer does r.P(), so all later readers can no longer enter.
    • The last writer does r.V(), waking waiting readers if there are any.
    • Writers are mutually exclusive with one another.
  • Finally, one more mutex semaphore guarantees that at most one reader is blocked by the writers, waiting for their notification; later readers are blocked by that reader and wait for its notification. Because writers already exclude one another, at most one writer is blocked by the readers, waiting for their notification, and later writers are blocked by that writer and wait for its notification. A sketch in Java follows this list.
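
A Java sketch following the structure above (the extra mutex is called mutex3 here, matching version 2 below; the actual reading and writing are elided):

import java.util.concurrent.Semaphore;

// m1/m2 protect readCnt/writeCnt, w blocks writers while readers are inside,
// r blocks readers once a writer has arrived, mutex3 lets at most one reader queue on r.
class ReadersWriters {
    private int readCnt = 0, writeCnt = 0;
    private final Semaphore m1 = new Semaphore(1), m2 = new Semaphore(1);
    private final Semaphore r = new Semaphore(1), w = new Semaphore(1);
    private final Semaphore mutex3 = new Semaphore(1);

    void read() throws InterruptedException {
        mutex3.acquire();
        r.acquire();
        m1.acquire();
        if (++readCnt == 1) w.acquire();   // first reader locks out writers
        m1.release();
        r.release();
        mutex3.release();

        // ... read the shared data ...

        m1.acquire();
        if (--readCnt == 0) w.release();   // last reader wakes a waiting writer
        m1.release();
    }

    void write() throws InterruptedException {
        m2.acquire();
        if (++writeCnt == 1) r.acquire();  // first writer locks out new readers
        m2.release();

        w.acquire();                       // writers exclude one another
        // ... write the shared data ...
        w.release();

        m2.acquire();
        if (--writeCnt == 0) r.release();  // last writer lets readers back in
        m2.release();
    }
}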

 

 

 

Semaphore version 2

Drop the writer counter and the mutex that protects it.

Merge w, r, and mutex3 into a single w_or_r mutex.
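
A Java sketch of this simplified, readers-preference version:

import java.util.concurrent.Semaphore;

// w_or_r is held either by the single active writer or, collectively, by the group of
// active readers: the first reader acquires it, the last reader releases it.
class ReadersWriters2 {
    private int readCnt = 0;
    private final Semaphore mutex = new Semaphore(1);    // protects readCnt
    private final Semaphore w_or_r = new Semaphore(1);

    void read() throws InterruptedException {
        mutex.acquire();
        if (++readCnt == 1) w_or_r.acquire();   // first reader takes it for all readers
        mutex.release();

        // ... read the shared data ...

        mutex.acquire();
        if (--readCnt == 0) w_or_r.release();   // last reader gives it back
        mutex.release();
    }

    void write() throws InterruptedException {
        w_or_r.acquire();
        // ... write the shared data ...
        w_or_r.release();
    }
}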

 

 

Monitor version
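
A minimal monitor-style sketch using the intrinsic lock with wait()/notifyAll() (it does not try to prevent writer starvation):

// Readers proceed together; a writer waits until no reader or writer is inside.
class ReadWriteMonitor {
    private int readers = 0;
    private boolean writing = false;

    public synchronized void startRead() throws InterruptedException {
        while (writing) wait();
        readers++;
    }

    public synchronized void endRead() {
        if (--readers == 0) notifyAll();   // last reader may wake a waiting writer
    }

    public synchronized void startWrite() throws InterruptedException {
        while (writing || readers > 0) wait();
        writing = true;
    }

    public synchronized void endWrite() {
        writing = false;
        notifyAll();                       // wake waiting readers and writers
    }
}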

 

 

Java version

understanding java read-write lock 1

understanding java read-write lock 2

 

 

The JDK 1.5 implementation uses lock-free techniques internally.
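
A usage sketch with java.util.concurrent.locks.ReentrantReadWriteLock, which was added in JDK 1.5 and is built on AbstractQueuedSynchronizer, whose state updates use compare-and-swap:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Many readers may hold the read lock at once; the write lock is exclusive.
class Cache {
    private final Map<String, String> map = new HashMap<>();
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    String get(String key) {
        rw.readLock().lock();
        try {
            return map.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    void put(String key, String value) {
        rw.writeLock().lock();
        try {
            map.put(key, value);
        } finally {
            rw.writeLock().unlock();
        }
    }
}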

 

 

 

 

 

 

http://cs.gmu.edu/cne/modules/ipc/orange/readmon.html


Dining philosophers problem
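
One standard deadlock-free solution is resource ordering: every philosopher picks up the lower-numbered fork first, so a cycle of waiting cannot form. A minimal Java sketch (names and output are illustrative):

import java.util.concurrent.locks.ReentrantLock;

// Each fork is a lock; locking the lower-numbered fork first breaks the circular wait.
public class DiningPhilosophers {
    public static void main(String[] args) {
        int n = 5;
        ReentrantLock[] forks = new ReentrantLock[n];
        for (int i = 0; i < n; i++) forks[i] = new ReentrantLock();

        for (int i = 0; i < n; i++) {
            final int id = i;
            final ReentrantLock first = forks[Math.min(id, (id + 1) % n)];
            final ReentrantLock second = forks[Math.max(id, (id + 1) % n)];
            new Thread(() -> {
                while (true) {
                    // think ...
                    first.lock();
                    second.lock();
                    try {
                        System.out.println("philosopher " + id + " is eating");
                    } finally {
                        second.unlock();
                        first.unlock();
                    }
                }
            }).start();
        }
    }
}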

 

 
