from:http://stackoverflow.com/questions/1640389/pthreads-pthread-cond-signal-from-within-critical-section
Question1:
I have the following piece of code in thread A, which blocks using pthread_cond_wait()
pthread_mutex_lock(&my_lock);
if ( false == testCondition )
pthread_cond_wait(&my_wait,&my_lock);
pthread_mutex_unlock(&my_lock);
I have the following piece of code in thread B, which signals thread A
pthread_mutex_lock(&my_lock);
testCondition = true;
pthread_cond_signal(&my_wait);
pthread_mutex_unlock(&my_lock);
Provided there are no other threads, would it make any difference if pthread_cond_signal(&my_wait) is moved out of the critical section block as shown below ?
pthread_mutex_lock(&my_lock);
testCondition = true;
pthread_mutex_unlock(&my_lock);
pthread_cond_signal(&my_wait);
Answer1:
My recommendation is typically to keep the pthread_cond_signal() call inside the locked region, but probably not for the reasons you think.
In most cases, it doesn't really matter whether you call pthread_cond_signal() with the lock held or not. Ben is right that some schedulers may force a context switch when the lock is released if there is another thread waiting, so your thread may get switched away before it can call pthread_cond_signal(). On the other hand, some schedulers will run the waiting thread as soon as you call pthread_cond_signal(), so if you call it with the lock held, the waiting thread will wake up and then go right back to sleep (because it's now blocked on the mutex) until the signaling thread unlocks it. The exact behavior is highly implementation-specific and may change between operating system versions, so it isn't anything you can rely on.
But, all of this looks past what should be your primary concern, which is the readability and correctness of your code. You're not likely to see any real-world performance benefit from this kind of micro-optimization (remember the first rule of optimization: profile first, optimize second). However, it's easier to think about the control flow if you know that the set of waiting threads can't change between the point where you set the condition and send the signal. Otherwise, you have to think about things like "what if thread A sets testCondition=TRUE and releases the lock, and then thread B runs and sees that testCondition is true, so it skips the pthread_cond_wait() and goes on to reset testCondition to FALSE, and then finally thread A runs and calls pthread_cond_signal(), which wakes up thread C because thread B wasn't actually waiting, but testCondition isn't true anymore". This is confusing and can lead to hard-to-diagnose race conditions in your code. For that reason, I think it's better to signal with the lock held; that way, you know that setting the condition and sending the signal are atomic with respect to each other.
On a related note, the way you are calling pthread_cond_wait() is incorrect. It's possible (although rare) for pthread_cond_wait() to return without the condition variable actually being signaled, and there are other cases (for example, the race I described above) where a signal could end up awakening a thread even though the condition isn't true. In order to be safe, you need to put the pthread_cond_wait() call inside a while() loop that tests the condition, so that you call back into pthread_cond_wait() if the condition isn't satisfied after you reacquire the lock. In your example it would look like this:
pthread_mutex_lock(&my_lock);
while ( false == testCondition ) {
pthread_cond_wait(&my_wait,&my_lock);
}
pthread_mutex_unlock(&my_lock);
(I also corrected what was probably a typo in your original example, which is the use of my_mutex for the pthread_cond_wait() call instead of my_lock.)
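For reference, here is a minimal self-contained sketch that puts the corrected waiter together with the signaler holding the lock. The names testCondition, my_lock, and my_wait come from the question; the main()/waiter/signaler split is my own assumption for illustration.
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static bool testCondition = false;
static pthread_mutex_t my_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t my_wait = PTHREAD_COND_INITIALIZER;

/* thread A: wait until the condition becomes true */
static void *waiter(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&my_lock);
    while (false == testCondition)          /* the loop guards against spurious wakeups */
        pthread_cond_wait(&my_wait, &my_lock);
    pthread_mutex_unlock(&my_lock);
    printf("condition observed\n");
    return NULL;
}

/* thread B: set the condition and signal while still holding the lock */
static void *signaler(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&my_lock);
    testCondition = true;
    pthread_cond_signal(&my_wait);          /* signal inside the critical section */
    pthread_mutex_unlock(&my_lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, waiter, NULL);
    pthread_create(&b, NULL, signaler, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}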
Question2:
To determine that all threads have terminated without calling pthread_join, we have to introduce another pthread synchronization object: a condition variable.
/* current working threads number */
static int curnum = 0;
/* mutex lock for curnum */
static pthread_mutex_t mutex_curnum = PTHREAD_MUTEX_INITIALIZER;
/* cond for curnum */
static pthread_cond_t cond_curnum = PTHREAD_COND_INITIALIZER;
The variable curnum holds the number of currently active threads. Add the following code at the end of each thread's scan function:
pthread_mutex_lock(&mutex_curnum);
if (!--curnum)
pthread_cond_signal(&cond_curnum);
pthread_mutex_unlock(&mutex_curnum);
return NULL;
When a thread terminates it decrements curnum by 1; if curnum reaches 0, it notifies main that all threads have terminated. After creating the threads, main blocks on cond_curnum with the following code (a complete sketch follows below):
pthread_mutex_lock(&mutex_curnum);
while(curnum)
pthread_cond_wait(&cond_curnum,&mutex_curnum);
pthread_mutex_unlock(&mutex_curnum);
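Putting these fragments together, a minimal compilable sketch might look like this. The NTHREADS count, the pthread_detach calls, and the empty body of scan are my assumptions for illustration; the counting logic is the one described above.
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4   /* assumed thread count for illustration */

static int curnum = 0;
static pthread_mutex_t mutex_curnum = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond_curnum = PTHREAD_COND_INITIALIZER;

static void *scan(void *arg)
{
    (void)arg;
    /* ... the thread's real work would go here ... */
    pthread_mutex_lock(&mutex_curnum);
    if (!--curnum)
        pthread_cond_signal(&cond_curnum);   /* last thread out notifies main */
    pthread_mutex_unlock(&mutex_curnum);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    int i;

    for (i = 0; i < NTHREADS; i++) {
        pthread_mutex_lock(&mutex_curnum);
        curnum++;                            /* count the thread before it can finish */
        pthread_mutex_unlock(&mutex_curnum);
        pthread_create(&tid, NULL, scan, NULL);
        pthread_detach(tid);                 /* no pthread_join, as in the question */
    }

    pthread_mutex_lock(&mutex_curnum);
    while (curnum)
        pthread_cond_wait(&cond_curnum, &mutex_curnum);
    pthread_mutex_unlock(&mutex_curnum);

    printf("all threads finished\n");
    return 0;
}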
********************************************
Must pthread_cond_signal be placed before pthread_mutex_unlock?
Answer2:
Where exactly should pthread_cond_signal() be placed?
"pthread_cond_signal() must be placed between pthread_mutex_lock() and pthread_mutex_unlock()" -- this practice has a problem. As a simple example, assume two threads T1 and T2, curnum equal to 1, and the statements executing in the following order:
T2--> pthread_mutex_lock(&mutex_curnum);
T2--> while(curnum)
T2-->     pthread_cond_wait(&cond_curnum,&mutex_curnum); /* T2 releases the lock, sleeps, and waits for the signal */
T1--> pthread_mutex_lock(&mutex_curnum);                 /* T1 gets to run and acquires the lock */
T1--> if (!--curnum)                                     /* the condition holds */
T1-->     pthread_cond_signal(&cond_curnum);             /* T1 signals thread T2 */
T2--> pthread_cond_wait(&cond_curnum,&mutex_curnum);     /* T1's time slice runs out and T2 is scheduled, but T2 cannot re-acquire the lock because T1 still holds it */
T1--> pthread_mutex_unlock(&mutex_curnum);               /* T1 unlocks */
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The issue is that a condition variable is associated with a mutex: if a thread calls pthread_cond_signal() while holding the lock, the waiting thread may not be able to acquire the lock right away. In the specific example above, given the execution order of each statement, even though T1 has called pthread_cond_signal(), T2 is guaranteed not to get the lock until T1 releases it.
UNIX Network Programming, Volume 2: Interprocess Communication suggests an improvement (a concrete sketch follows the pseudocode below):
pthread_mutex_lock();
check whether the condition another thread is waiting for has occurred; if so, set an "occurred" flag;
pthread_mutex_unlock();
if (occurred flag is set)
{
    pthread_cond_signal(...);
}
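Applied to the curnum example above, that improvement might look like the following sketch. The globals mirror the earlier declarations; the local variable last is my own addition.
#include <pthread.h>

/* same globals as in the curnum example above */
static int curnum = 0;
static pthread_mutex_t mutex_curnum = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond_curnum = PTHREAD_COND_INITIALIZER;

static void *scan(void *arg)
{
    int last;
    (void)arg;

    /* ... the thread's real work would go here ... */

    pthread_mutex_lock(&mutex_curnum);
    last = (--curnum == 0);                  /* decide under the lock whether we are the last thread */
    pthread_mutex_unlock(&mutex_curnum);

    if (last)
        pthread_cond_signal(&cond_curnum);   /* signal only after releasing the mutex */

    return NULL;
}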
An example follows:
Figure 11.14 shows an example of how to use condition variables and mutexes together to synchronize threads.
The condition is the state of the work queue. We protect the condition with a mutex and evaluate the condition in a while loop. When we put a message on the work queue, we need to hold the mutex, but we don't need to hold the mutex when we signal the waiting threads. As long as it is okay for a thread to pull the message off the queue before we call cond_signal, we can do this after releasing the mutex. Since we check the condition in a while loop, this doesn't present a problem: a thread will wake up, find that the queue is still empty, and go back to waiting again. If the code couldn't tolerate this race, we would need to hold the mutex when we signal the threads.
Figure 11.14. Using condition variables

#include <pthread.h>

struct msg {
    struct msg *m_next;
    /* ... more stuff here ... */
};

struct msg *workq;

pthread_cond_t qready = PTHREAD_COND_INITIALIZER;
pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;

void
process_msg(void)
{
    struct msg *mp;

    for (;;) {
        pthread_mutex_lock(&qlock);
        while (workq == NULL)
            pthread_cond_wait(&qready, &qlock);
        mp = workq;
        workq = mp->m_next;
        pthread_mutex_unlock(&qlock);
        /* now process the message mp */
    }
}

void
enqueue_msg(struct msg *mp)
{
    pthread_mutex_lock(&qlock);
    mp->m_next = workq;
    workq = mp;
    pthread_mutex_unlock(&qlock);
    pthread_cond_signal(&qready);
}
pthread_cond_wait must be placed between pthread_mutex_lock and pthread_mutex_unlock, because it decides whether to wait based on the state of a shared variable, and to avoid waiting forever the test and the wait must both happen inside the lock/unlock pair.
Any change to the shared variable's state must likewise follow the lock/unlock discipline.
pthread_cond_signal can be placed either between pthread_mutex_lock and pthread_mutex_unlock or after pthread_mutex_unlock, and each placement has its own pros and cons.
Inside the lock:
pthread_mutex_lock
xxxxxxx
pthread_cond_signal
pthread_mutex_unlock
Drawback: with some thread implementations, the waiting thread is woken out of the kernel (by cond_signal) only to re-enter the kernel immediately (because cond_wait atomically re-acquires the mutex before returning), so the round trip can cost performance. With LinuxThreads or NPTL this problem does not arise, because the Linux implementations keep two queues, a cond_wait queue and a mutex_lock queue; cond_signal simply moves the thread from the cond_wait queue to the mutex_lock queue without returning to user space, so there is no performance loss.
For that reason this pattern is recommended on Linux.
After the unlock:
pthread_mutex_lock
xxxxxxx
pthread_mutex_unlock
pthread_cond_signal
Advantage: the potential performance cost described above does not occur, because the mutex is already released before the signal is sent.
Drawback: if a lower-priority thread is waiting on the mutex between the unlock and the signal, that lower-priority thread can grab the mutex ahead of the higher-priority thread blocked in cond_wait; with the "signal inside the lock" pattern above this cannot happen.
So on Linux it is best to put pthread_cond_signal inside the lock/unlock pair, though as far as programming rules go, both placements are acceptable.