
<html>
<head>
<title>Multi-Threaded Programming With POSIX Threads</title>
</head>

<body>

<p align=center><img src=http://users.actcom.co.il/~choo/lupg/images/lupg_toolbar.gif height=40 width=360 alt="" usemap="#lupg_map"><map name=lupg_map>
<area shape=rect coords="3,0 37,39" href=http://users.actcom.co.il/~choo/lupg alt="LUPG home">
<area shape=rect coords="67,0 102,39" href=http://users.actcom.co.il/~choo/lupg/tutorials/index.html alt="Tutorials">
<area shape=rect coords="138,0 170,39" href=http://users.actcom.co.il/~choo/lupg/related-material.html alt="Related material">
<area shape=rect coords="213,0 232,39" href=http://users.actcom.co.il/~choo/lupg/project-ideas/index.html alt="Project Ideas">
<area shape=rect coords="272,0 290,39" href=http://users.actcom.co.il/~choo/lupg/essays/index.html alt="Essays">
<area shape=rect coords="324,0 355,39" href=mailto:choo@actcom.co.il alt="Send comments">
</map>
<br>[<a href=http://users.actcom.co.il/~choo/lupg/index.html>LUPG Home</a>]  [<a href=http://users.actcom.co.il/~choo/lupg/tutorials/index.html>Tutorials</a>]  [<a href=http://users.actcom.co.il/~choo/lupg/related-material.html>Related Material</a>] [<a href=http://users.actcom.co.il/~choo/lupg/essays/index.html>Essays</a>] [<a href=http://users.actcom.co.il/~choo/lupg/project-ideas/index.html>Project Ideas</a>] [<a href=mailto:choo@actcom.co.il>Send Comments</a>]<br><img src=http://users.actcom.co.il/~choo/lupg/images/good_bar.gif alt=""></p>

v1.2

<h1>Multi-Threaded Programming With POSIX Threads</h1>
<p>
Table Of Contents:
<ol>
<li> <a href="#preface">开始之前...</a>
<li> <a href="#definition">什么是线程,为什么要使用线程?</a>
<li> <a href="#thread_create_stop">创建及销毁线程</a>
<li> <a href="#thread_mutex">用互斥来同步线程</a>
     <ol>
     <li> <a href="#thread_mutex_whatis">什么是Mutex?</a>
     <li> <a href="#thread_mutex_creation">创建及初始化'Mutex'</a>
     <li> <a href="#thread_mutex_lock_unlock">加锁及解锁一个'Mutex'</a>
     <li> <a href="#thread_mutex_destroy">销毁一个 'Mutex'</a>
     <li> <a href="#thread_mutex_complete_example">使用互斥--一个完整的例子</a>
     <li> <a href="#thread_mutex_starvation_deadlock">饥饿及死锁状态</a>
     </ol>
<li> <a href="#thread_condvar">更优同步-使用条件变量</a>
     <ol>
     <li> <a href="#thread_condvar_whatis">什么是条件变量?</a>
     <li> <a href="#thread_condvar_creation">创建及初始化条件变量(Condition Variable)</a>
     <li> <a href="#thread_condvar_signal">发信一个条件变量</a>
     <li> <a href="#thread_condvar_wait">等待条件变量</a>
     <li> <a href="#thread_condvar_destroy">销毁条件变量</a>
     <li> <a href="#thread_condvar_condition">A Real Condition For A Condition Variable</a>
     <li> <a href="#thread_condvar_example">使用条件变量--完整实例</a>
     </ol>
<li> <a href="#thread_tss">"私有" 线程数据 - Thread-Specific Data</a>
     <ol>
     <li> <a href="#thread_tss_overview">Overview Of Thread-Specific Data Support</a>
     <li> <a href="#thread_tss_create">Allocating Thread-Specific Data Block</a>
     <li> <a href="#thread_tss_access">Accessing Thread-Specific Data</a>
     <li> <a href="#thread_tss_delete">Deleting Thread-Specific Data Block</a>
     <li> <a href="#thread_tss_example">A Complete Example</a>
     </ol>
<li> <a href="#thread_cancel">线程取消及终止</a>
     <ol>
     <li> <a href="#thread_cancel_cancel">取消一个线程</a>
     <li> <a href="#thread_cancel_setstate">设置线程取消状态</a>
     <li> <a href="#thread_cancel_points">取消点(Cancellation Points)</a>
     <li> <a href="#thread_cancel_cleanup">设置线程清理函数-Setting Thread Cleanup Functions</a>
     <li> <a href="#thread_cancel_join">同步线程退出-Synchronizing On Threads Exiting</a>
     <li> <a href="#thread_cancel_detach">分离一个线程--Detaching A Thread</a>
     <li> <a href="#thread_cancel_example">线程取消--- 完整例子:Threads Cancellation - A Complete Example</a>
     </ol>
<li> <a href="#thread_user_interface">采用线程处理用户接口编程--Using Threads For Responsive User Interface Programming</a>
     <ol>
     <li> <a href="#thread_user_interface_example">用户接口---一个例子 User Interaction - A Complete Example</a>
     </ol>
<li> <a href="#thread_3rd_party">Using 3rd-Party Libraries In A Multi-Threaded Application</a>
<li> <a href="#thread_debugger">Using A Threads-Aware Debugger</a>
</ol>
</p>

<hr size=4>

<a name="preface">
<font color=brown><h2>Before We Start...</h2></font>
</a>
<p>
This tutorial is an attempt to help you become familiar with multi-threaded
programming with the POSIX threads (pthreads) library, and attempts to show
how its features can be used in "real-life" programs. It explains the
different tools defined by the library, shows how to use them, and then gives
an example of using them to solve programming problems. There is an implicit
assumption that the user has some theoretical familiarity with parallel
programming (or multi-processing) concepts. Users without such background
might find the concepts harder to grasp. A separate tutorial will be prepared
to explain the theoretical background and terms to those who are familiar only
with normal "serial" programming.
</p>

<p>
I would assume that users who are familiar with asynchronous programming
models, such as those used in windowing environments (X, Motif), will find it
easier to grasp the concepts of multi-threaded programming.
</p>

<p>
When talking about POSIX threads, one cannot avoid the question "Which draft
of the POSIX threads standard shall be used?". As this threads standard has been
revised over a period of several years, one will find that implementations
adhering to different drafts of the standard have a different set of functions,
different default values, and different nuances. Since this tutorial was
written using a Linux system with the kernel-level LinuxThreads library, v0.5,
programmers with access to other systems, using different versions of pthreads,
should refer to their system's manuals in case of incompatibilities. Also, since
some of the example programs are using blocking system calls, they won't
work with user-level threading libraries (refer to our
<a href="http://users.actcom.co.il/~choo/lupg/tutorials/parallel-programming-theory/parallel-programming-theory.html#multi_thread_lib">parallel programming theory tutorial</a> for
more information).<br>
Having said that,
I'd try to check the example programs on other systems as well (Solaris 2.5
comes to mind), to make the tutorial more "cross-platform".
</p>

<hr size=4>

<a name="definition">
<font color=brown><h2>What Is a Thread? Why Use Threads?</h2></font>
</a>
<p>
A thread is a semi-process, that has its own stack, and executes a given
piece of code. Unlike a real process, the thread normally shares its memory
with other threads (whereas for processes we usually have a different memory
area for each one of them). A Thread Group is a set of threads all executing
inside the same process. They all share the same memory, and thus can access
the same global variables, same heap memory, same set of file descriptors,
etc. All these threads execute in parallel (i.e. using time slices, or if
the system has several processors, then really in parallel).
</p>

<p>
The advantage of using a thread group instead of a normal serial program
is that several operations may be carried out in parallel, and thus events
can be handled immediately as they arrive (for example, if we have one thread
handling a user interface, and another thread handling database queries,
we can execute a heavy query requested by the user, and still respond to
user input while the query is executed).
</p>

<p>
The advantage of using a thread group over using a process group is that
context switching between threads is much faster than context switching
between processes (context switching means that the system switches from
running one thread or process, to running another thread or process).
Also, communications between two threads is usually faster and easier to
implement than communications between two processes.
</p>

<p>
On the other hand, because threads in a group all use the same memory space,
if one of them corrupts the contents of its memory, other threads might
suffer as well. With processes, the operating system normally protects
processes from one another, and thus if one corrupts its own memory space,
other processes won't suffer. Another advantage of using processes is that
they can run on different machines, while all the threads have to run
on the same machine (at least normally).
</p>

<hr size=4>

<a name="thread_create_stop">
<font color=brown><h2>Creating And Destroying Threads</h2></font>
</a>
<p>
When a multi-threaded program starts executing, it has one thread running,
which executes the main() function of the program. This is already a
full-fledged thread, with its own thread ID. In order to create a new thread,
the program should use the <code><u>pthread_create()</u></code> function.
Here is how to use it:
</p>

<hr width=40%>

<pre><code>
#include &lt;stdio.h&gt;       <font color=brown>/* standard I/O routines                 */</font>
#include &lt;pthread.h&gt;     <font color=brown>/* pthread functions and data structures */</font>

<font color=brown>/* function to be executed by the new thread */</font>
void*
do_loop(void* data)
{
    int i;            <font color=brown>/* counter, to print numbers */</font>
    int j;            <font color=brown>/* counter, for delay        */</font>
    int me = *((int*)data);     <font color=brown>/* thread identifying number */</font>

    for (i=0; i&lt;10; i++) {
        for (j=0; j&lt;500000; j++) <font color=brown>/* delay loop */</font>
            ;
        printf("'%d' - Got '%d'\n", me, i);
    }

    <font color=brown>/* terminate the thread */</font>
    pthread_exit(NULL);
}

<font color=brown>/* like any C program, program's execution begins in main */</font>
int
main(int argc, char* argv[])
{
    int        thr_id;         <font color=brown>/* thread ID for the newly created thread */</font>
    pthread_t  p_thread;       <font color=brown>/* thread's structure                     */</font>
    int        a         = 1;  <font color=brown>/* thread 1 identifying number            */</font>
    int        b         = 2;  <font color=brown>/* thread 2 identifying number            */</font>

    <font color=brown>/* create a new thread that will execute 'do_loop()' */</font>
    thr_id = pthread_create(&amp;p_thread, NULL, do_loop, (void*)&amp;a);
    <font color=brown>/* run 'do_loop()' in the main thread as well */</font>
    do_loop((void*)&amp;b);
    
    <font color=brown>/* NOT REACHED */</font>
    return 0;
}
</code></pre>

<hr width=40%>

<p>
A few notes should be mentioned about this program:
<ol>
<li> Note that the main program is also a thread, so it executes the
     <code>do_loop()</code> function in parallel to the thread it creates.
<li> <code>pthread_create()</code> gets 4 parameters. The first parameter is
     used by <code>pthread_create()</code> to supply the program with
     information about the thread. The second parameter is used to set some
     attributes for the new thread. In our case we supplied a NULL pointer to
     tell <code>pthread_create()</code> to use the default values. The third
     parameter is the name of the function that the thread will start
     executing. The fourth parameter is an argument to pass to this function.
     Note the cast to a 'void*'. It is not required by ANSI-C syntax, but is
     placed here for clarification (see also the error-checking sketch after
     this list).
<li> The delay loop inside the function is used only to demonstrate that the
     threads are executing in parallel. Use a larger delay value if your CPU
     runs too fast, and you see all the printouts of one thread before the
     other.
<li> The call to <code><u>pthread_exit()</u></code> causes the current thread
     to exit and free any thread-specific resources it is taking. There is no
     need to use this call at the end of the thread's top function, since when
     it returns, the thread will exit automatically anyway. This function is
     useful if we want to exit a thread in the middle of its execution.
</ol>
</p>
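<p>
As noted above, the example ignores the value returned by
<code>pthread_create()</code>. A minimal sketch of checking it inside
<code>main()</code>, using the variable names of the example above, might
look like this:
<br><br>
<pre><code>
    <font color=brown>/* create the thread, and check the return value */</font>
    thr_id = pthread_create(&amp;p_thread, NULL, do_loop, (void*)&amp;a);
    if (thr_id != 0) { <font color=brown>/* a non-zero value denotes an error (e.g. EAGAIN) */</font>
        fprintf(stderr, "pthread_create failed\n");
        return 1;
    }
</code></pre>
</p>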

<p>
In order to compile a multi-threaded program using <code><u>gcc</u></code>,
we need to link it with the pthreads library. Assuming you have this library
already installed on your system, here is how to compile our first program:
<br><br>
<code>
gcc pthread_create.c -o pthread_create -lpthread
</code>
<br><br>
Note that for some of the programs later on in this tutorial, one may need to
add a '-D_GNU_SOURCE' flag to this compile line, to get the source compiled.
<br><br>
The source code for this program may be found in the
<a href="pthread_create.c">pthread_create.c</a> file.
</p>

<hr size=4>


<a name="thread_mutex">
<font color=brown><h2>Synchronizing Threads With Mutexes</h2></font>
</a>
<p>
One of the basic problems when running several threads that use the same
memory space, is making sure they don't "step on each other's toes". By this
we refer to the problem of using a data structure from two different threads.
</p>

<p>
For instance, consider the case where two threads try to update two variables.
One tries to set both to 0, and the other tries to set both to 1. If both
threads would try to do that at the same time, we might end up with a situation
where one variable contains 1, and one contains 0. This is because a
context-switch (we already know what this is by now, right?) might occur after
the first thread zeroed out the first variable, then the second thread would
set both variables to 1, and when the first thread resumes operation, it will
zero out the second variable, thus getting the first variable set to '1',
and the second set to '0'.
</p>

<hr>

<a name="thread_mutex_whatis">
<font color=brown><h4>What Is A Mutex?</h4></font>
</a>
<p>
A basic mechanism supplied by the pthreads library to solve this problem,
is called a mutex. A mutex is a lock that guarantees three things:
<ol>
<li> <u>Atomicity</u> - Locking a mutex is an atomic operation, meaning that the
     operating system (or threads library) assures you that if you locked a
     mutex, no other thread succeeded in locking this mutex at the same time.
<li> <u>Singularity</u> - If a thread managed to lock a mutex, it is assured
     that no other thread will be able to lock the mutex until the original
     thread releases the lock.
<li> <u>Non-Busy Wait</u> - If a thread attempts to lock a mutex that was
     locked by a second thread, the first thread will be suspended (and will
     not consume any CPU resources) until the lock is freed by the second
     thread. At this time, the first thread will wake up and continue execution,
     having the mutex locked by it.
</ol>
</p>

<p>
From these three points we can see how a mutex can be used to assure exclusive
access to variables (or in general critical code sections). Here is some
pseudo-code that updates the two variables we were talking about in the
previous section, and can be used by the first thread:
<br><br>
<pre>
lock mutex 'X1'.
set first variable to '0'.
set second variable to '0'.
unlock mutex 'X1'.
</pre>
<br><br>
Meanwhile, the second thread will do something like this:
<br><br>
<pre>
lock mutex 'X1'.
set first variable to '1'.
set second variable to '1'.
unlock mutex 'X1'.
</pre>
<br><br>
Assuming both threads use the same mutex, we are assured that after they both
ran through this code, either both variables are set to '0', or both are set
to '1'. You'd note this requires some work from the programmer - if a third
thread was to access these variables via some code that does not use this
mutex, it still might mess up the variables' contents. Thus, it is important
to enclose all the code that accesses these variables in a small set of
functions, and always use only these functions to access these variables.
</p>
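<p>
A concrete C rendering of the pseudo-code above might look like the following
sketch. The names ('X1_mutex', 'var1', 'var2', and the two functions) are made
up for illustration, and the initialization and locking functions used here are
explained in the next sections:
<br><br>
<pre><code>
pthread_mutex_t X1_mutex = PTHREAD_MUTEX_INITIALIZER; <font color=brown>/* shared by both threads    */</font>
int var1, var2;                                       <font color=brown>/* the two shared variables  */</font>

<font color=brown>/* executed by the first thread */</font>
void set_to_zero()
{
    pthread_mutex_lock(&amp;X1_mutex);
    var1 = 0;
    var2 = 0;
    pthread_mutex_unlock(&amp;X1_mutex);
}

<font color=brown>/* executed by the second thread */</font>
void set_to_one()
{
    pthread_mutex_lock(&amp;X1_mutex);
    var1 = 1;
    var2 = 1;
    pthread_mutex_unlock(&amp;X1_mutex);
}
</code></pre>
</p>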

<hr>

<a name="thread_mutex_creation">
<font color=brown><h4>Creating And Initializing A Mutex</h4></font>
</a>
<p>
In order to create a mutex, we first need to declare a variable of type
<code>pthread_mutex_t</code>, and then initialize it. The simplest way is
by assigning it the <code>PTHREAD_MUTEX_INITIALIZER</code> constant. So
we'll use a code that looks something like this:
<br><br>
<pre><code>
pthread_mutex_t a_mutex = PTHREAD_MUTEX_INITIALIZER;
</code></pre>
<br><br>
One note should be made here: This type of initialization creates a mutex
called 'fast mutex'. This means that if a thread locks the mutex and then
tries to lock it again, it'll get stuck - it will be in a deadlock.
</p>

<p>
There is another type of mutex, called 'recursive mutex', which allows the
thread that locked it, to lock it several more times, without getting blocked
(but other threads that try to lock the mutex now will get blocked). If the
thread then unlocks the mutex, it'll still be locked, until it is unlocked
the same amount of times as it was locked. This is similar to the way modern
door locks work - if you turned it twice clockwise to lock it, you need to turn
it twice counter-clockwise to unlock it. This kind of mutex can be created
by assigning the constant <code>PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP</code>
to a mutex variable.

</p>
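<p>
For example, a recursive mutex can be declared and used like this (a sketch;
on systems whose pthreads library lacks this non-portable constant, the mutex
would instead need to be initialized at run time with
<code>pthread_mutex_init()</code> and a mutex attributes object set to a
recursive type):
<br><br>
<pre><code>
pthread_mutex_t r_mutex = PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP;

<font color=brown>/* the same thread may lock a recursive mutex more than once...     */</font>
pthread_mutex_lock(&amp;r_mutex);
pthread_mutex_lock(&amp;r_mutex);   <font color=brown>/* this second lock would deadlock on a 'fast' mutex */</font>
<font color=brown>/* ...as long as it unlocks it the same number of times.            */</font>
pthread_mutex_unlock(&amp;r_mutex);
pthread_mutex_unlock(&amp;r_mutex);
</code></pre>
</p>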

<hr>

<a name="thread_mutex_lock_unlock">
<font color=brown><h4>Locking And Unlocking A Mutex</h4></font>
</a>
<p>
In order to lock a mutex, we may use the function
<code>pthread_mutex_lock()</code>. This function attempts to lock the mutex,
or block the thread if the mutex is already locked by another thread. In this
case, when the mutex is unlocked by the first thread, the function will return
with the mutex locked by our thread. Here is how to lock a mutex (assuming it
was initialized earlier):
<br><br>
<pre><code>
int rc = pthread_mutex_lock(&amp;a_mutex);
if (rc) { <font color=brown>/* an error has occurred */</font>
    perror("pthread_mutex_lock");
    pthread_exit(NULL);
}
<font color=brown>/* mutex is now locked - do your stuff. */</font>
.
.
</code></pre>
<br><br>
</p>

<p>
After the thread did what it had to (change variables or data structures,
handle files, or whatever else it intended to do), it should free the mutex,
using the <code>pthread_mutex_unlock()</code> function, like this:
<br><br>
<pre><code>
rc = pthread_mutex_unlock(&amp;a_mutex);
if (rc) {
    perror("pthread_mutex_unlock");
    pthread_exit(NULL);
}
</code></pre>
</p>

<hr>

<a name="thread_mutex_destroy">
<font color=brown><h4>Destroying A Mutex</h4></font>
</a>
<p>
After we finished using a mutex, we should destroy it. Finished using means
no thread needs it at all. If only one thread finished with the mutex,
it should leave it alive, for the other threads that might still need to use
it. Once all threads finished using it, the last one can destroy it using the
<code>pthread_mutex_destroy()</code> function:
<br><br>
<pre><code>
rc = pthread_mutex_destroy(&amp;a_mutex);
</code></pre>
<br><br>
After this call, this variable (a_mutex) may not be used as a mutex any more,
unless it is initialized again. Thus, if one destroys a mutex too early,
and another thread tries to lock or unlock it, that thread will get an
<code>EINVAL</code> error code from the lock or unlock function.
</p>
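<p>
Like the other mutex functions, <code>pthread_mutex_destroy()</code> returns 0
on success. A sketch of checking its return value (the EBUSY case assumes an
implementation that reports an attempt to destroy a mutex that is still locked;
the constant is defined in <code>&lt;errno.h&gt;</code>):
<br><br>
<pre><code>
rc = pthread_mutex_destroy(&amp;a_mutex);
if (rc == EBUSY) { <font color=brown>/* the mutex is still locked by some thread - cannot destroy it yet */</font>
    <font color=brown>/* handle this case here... */</font>
}
</code></pre>
</p>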

<hr>

<a name="thread_mutex_complete_example">
<font color=brown><h4>Using A Mutex - A Complete Example</h4></font>
</a>
<p>
After we have seen the full life cycle of a mutex, let's see an example
program that uses a mutex. The program introduces two employees competing
for the "employee of the day" title, and the glory that comes with it.
To simulate that at a rapid pace, the program employs 3 threads: one that
promotes Danny to "employee of the day", one that promotes Moshe to
that position, and a third thread that makes sure that the employee
of the day's contents is consistent (i.e. contains exactly the data of
one employee).<br>
Two copies of the program are supplied. One that uses a mutex, and one that
does not. Try them both, to see the differences, and be convinced that mutexes
are essential in a multi-threaded environment.
</p>

<p>
The programs themselves are in the files accompanying this tutorial.
The one that uses a mutex is
<a href="employee-with-mutex.c">employee-with-mutex.c</a>. The one that does
not use a mutex is
<a href="employee-without-mutex.c">employee-without-mutex.c</a>. Read the
comments inside the source files to get a better understanding of how they
work.
</p>

<hr>

<a name="thread_mutex_starvation_deadlock">
<font color=brown><h4>Starvation And Deadlock Situations</h4></font>
</a>
<p>
Again we should remember that <code>pthread_mutex_lock()</code> might block
for a non-determined duration, in case of the mutex being already locked.
If it remains locked forever, it is said that our poor thread is "starved" -
it was trying to acquire a resource, but never got it. It is up to the
programmer to ensure that such starvation won't occur. The pthread library
does not help us with that.
</p>

<p>
The pthread library might, however, figure out a "deadlock". A deadlock is
a situation in which a set of threads are all waiting for resources taken by
other threads, all in the same set. Naturally, if all threads are blocked
waiting for a mutex, none of them will ever come back to life again. The
pthread library keeps track of such situations, and thus would fail the last
thread trying to call <code>pthread_mutex_lock()</code>, with an error
of type <code>EDEADLK</code>. The programmer should check for such a value,
and take steps to solve the deadlock somehow.
</p>
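<p>
A sketch of checking for this error value when locking (reusing 'a_mutex' from
before, and assuming <code>&lt;errno.h&gt;</code> is included for the EDEADLK
constant):
<br><br>
<pre><code>
rc = pthread_mutex_lock(&amp;a_mutex);
if (rc == EDEADLK) { <font color=brown>/* the library detected a deadlock */</font>
    <font color=brown>/* report the problem, release some other resource, etc. */</font>
}
</code></pre>
</p>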

<hr size=4>

<a name="thread_condvar">
<font color=brown><h2>Refined Synchronization - Condition Variables</h2></font>
</a>
<p>
As we've seen before with mutexes, they allow for simple coordination -
exclusive access to a resource. However, we often need to be able to
make real synchronization between threads:
<ul>
<li> In a server, one thread reads requests from clients, and dispatches
     them to several threads for handling. These threads need to be notified
     when there is data to process, otherwise they should wait without
     consuming CPU time.
<li> In a GUI (Graphical User Interface) Application, one thread reads user
     input, another handles graphical output, and a third thread sends
     requests to a server and handles its replies. The server-handling
     thread needs to be able to notify the graphics-drawing thread when a reply
     from the server arrived, so it will immediately show it to the user.
     The user-input thread needs to be always responsive to the user, for
     example, to allow her to cancel long operations currently executed by
     the server-handling thread.
</ul>
All these examples require the ability to send notifications between threads.
This is where condition variables are brought into the picture.
</p>

<hr>

<a name="thread_condvar_whatis">
<font color=brown><h4>What Is A Condition Variable?</h4></font>
</a>
<p>
A condition variable is a mechanism that allows threads to wait (without
wasting CPU cycles) for some event to occur. Several threads may wait on
a condition variable, until some other thread signals this condition variable
(thus sending a notification). At this time, one of the threads waiting on this
condition variable wakes up, and can act on the event. It is possible to also
wake up all threads waiting on this condition variable by using a broadcast
method on this variable.
</p>

<p>
Note that a condition variable does not provide locking. Thus, a mutex is
used along with the condition variable, to provide the necessary locking
when accessing this condition variable.
</p>

<hr>

<a name="thread_condvar_creation">
<font color=brown><h4>Creating And Initializing A Condition Variable</h4></font>
</a>
<p>
Creation of a condition variable requires defining a variable of type
<code>pthread_cond_t</code>, and initializing it properly. Initialization
may be done with either a simple use of a macro named
<code>PTHREAD_COND_INITIALIZER</code> or the usage of the
<code>pthread_cond_init()</code> function. We will show the first form
here:
<br><br>
<code>
pthread_cond_t got_request = PTHREAD_COND_INITIALIZER;
</code>
<br><br>
This defines a condition variable named 'got_request', and initializes it.
</p>

<p>
<em>Note: since the <code>PTHREAD_COND_INITIALIZER</code> is actually a
structure initializer, it may be used to initialize a condition variable
only when it is declared.  In order to initialize it during runtime, one
must use the <code>pthread_cond_init()</code> function.
</em>
</p>
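<p>
A sketch of this run-time form, assuming we want the default condition variable
attributes (hence the NULL attributes pointer):
<br><br>
<pre><code>
pthread_cond_t got_request;

int rc = pthread_cond_init(&amp;got_request, NULL); <font color=brown>/* NULL - use default attributes */</font>
if (rc) { <font color=brown>/* a non-zero return value denotes an error */</font>
    perror("pthread_cond_init");
    pthread_exit(NULL);
}
</code></pre>
</p>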

<hr>

<a name="thread_condvar_signal">
<font color=brown><h4>Signaling A Condition Variable</h4></font>
</a>
<p>
In order to signal a condition variable, one should use either the
<code>pthread_cond_signal()</code> function (to wake up only one
thread waiting on this variable), or the <code>pthread_cond_broadcast()</code>
function (to wake up all threads waiting on this variable). Here is
an example using signal, assuming 'got_request' is a properly
initialized condition variable:
<br><br>
<code>
int rc = pthread_cond_signal(&amp;got_request);
</code>
<br><br>
Or by using the broadcast function:
<br><br>
<code>
int rc = pthread_cond_broadcast(&amp;got_request);
</code>
<br><br>
When either function returns, 'rc' is set to 0 on success, and to a
non-zero value on failure. In such a case (failure), the return value denotes
the error that occurred (<code>EINVAL</code> denotes that the given parameter
is not a condition variable; <code>ENOMEM</code> denotes that the system
has run out of memory).
</p>

<p>
<em>Note: success of a signaling operation does not mean any thread
was awakened - it might be that no thread was waiting on the condition variable,
and thus the signaling does nothing (i.e. the signal is lost).
<br>It is also not remembered for
future use - if after the signaling function returns another thread starts
waiting on this condition variable, a further signal is required to wake it up.
</em>
</p>

<hr>

<a name="thread_condvar_wait">
<font color=brown><h4>Waiting On A Condition Variable</h4></font>
</a>
<p>
If one thread signals the condition variable, other threads would probably
want to wait for this signal. They may do so using one of two functions,
<code>pthread_cond_wait()</code> or <code>pthread_cond_timedwait()</code>.
Each of these functions takes a condition variable, and a mutex (which should
be locked before calling the wait function), unlocks the mutex,
and waits until the condition variable is signaled, suspending
the thread's execution. If this signaling causes the thread to awake (see
discussion of <code>pthread_cond_signal()</code> earlier), the mutex is
automagically locked again by the wait function, and the wait function
returns.
</p>

<p>The only difference between these two functions is that
<code>pthread_cond_timedwait()</code> allows the programmer to specify a
timeout for the waiting, after which the function always returns, with a
proper error value (ETIMEDOUT) to notify that condition variable was NOT
signaled before the timeout passed. The <code>pthread_cond_wait()</code>
would wait indefinitely if it was never signaled.
</p>

<p>
Here is how to use these two functions. We make the assumption that
'got_request' is a properly initialized condition variable, and that
'request_mutex' is a properly initialized mutex. First, we try
the <code>pthread_cond_wait()</code> function:
<br><br>
<pre><code>
<font color=brown>/* first, lock the mutex */</font>
int rc = pthread_mutex_lock(&amp;request_mutex);
if (rc) { <font color=brown>/* an error has occurred */</font>
    perror("pthread_mutex_lock");
    pthread_exit(NULL);
}
<font color=brown>/* mutex is now locked - wait on the condition variable.             */</font>
<font color=brown>/* During the execution of pthread_cond_wait, the mutex is unlocked. */</font>
rc = pthread_cond_wait(&amp;got_request, &amp;request_mutex);
if (rc == 0) { <font color=brown>/* we were awakened due to the cond. variable being signaled */</font>
               <font color=brown>/* The mutex is now locked again by pthread_cond_wait()      */</font>
    <font color=brown>/* do your stuff... */</font>
    .
}
<font color=brown>/* finally, unlock the mutex */</font>
pthread_mutex_unlock(&amp;request_mutex);
</code></pre>
<br><br>
Now an example using the <code>pthread_cond_timedwait()</code> function:
<br><br>
<pre><code>
#include &lt;sys/time.h&gt;     <font color=brown>/* struct timeval definition           */</font>
#include &lt;unistd.h&gt;       <font color=brown>/* declaration of gettimeofday()       */</font>
#include &lt;errno.h&gt;        <font color=brown>/* ETIMEDOUT error code                */</font>

struct timeval  now;            <font color=brown>/* time when we started waiting        */</font>
struct timespec timeout;        <font color=brown>/* timeout value for the wait function */</font>
int             done;           <font color=brown>/* are we done waiting?                */</font>

<font color=brown>/* first, lock the mutex */</font>
int rc = pthread_mutex_lock(&amp;request_mutex);
if (rc) { <font color=brown>/* an error has occurred */</font>
    perror("pthread_mutex_lock");
    pthread_exit(NULL);
}
<font color=brown>/* mutex is now locked */</font>

<font color=brown>/* get current time */</font>
gettimeofday(&amp;now, NULL);
<font color=brown>/* prepare timeout value.              */</font>
<font color=brown>/* Note that we need an absolute time. */</font>
timeout.tv_sec = now.tv_sec + 5;
timeout.tv_nsec = now.tv_usec * 1000; <font color=brown>/* timeval uses micro-seconds.         */</font>
                                      <font color=brown>/* timespec uses nano-seconds.         */</font>
                                      <font color=brown>/* 1 micro-second = 1000 nano-seconds. */</font>

<font color=brown>/* wait on the condition variable. */</font>
<font color=brown>/* we use a loop, since a Unix signal might stop the wait before the timeout */</font>
done = 0;
while (!done) {
    <font color=brown>/* remember that pthread_cond_timedwait() unlocks the mutex on entrance */</font>
    rc = pthread_cond_timedwait(&amp;got_request, &amp;request_mutex, &amp;timeout);
    switch(rc) {
        case 0:  <font color=brown>/* we were awakened due to the cond. variable being signaled */</font>
                 <font color=brown>/* the mutex is now locked again by pthread_cond_timedwait.  */</font>
            <font color=brown>/* do your stuff here... */</font>
            .
            .
            done = 1;
            break;
        case ETIMEDOUT: <font color=brown>/* our time is up - stop waiting                   */</font>
            done = 1;
            break;
        default:        <font color=brown>/* some other error occurred (e.g. we got a Unix signal) */</font>
            break;      <font color=brown>/* break this switch, but re-do the while loop.   */</font>
    }
}
<font color=brown>/* finally, unlock the mutex */</font>
pthread_mutex_unlock(&amp;request_mutex);
</code></pre>
<br><br>
As you can see, the timed wait version is way more complex, and thus
better be wrapped up by some function, rather than being re-coded in every
necessary location.
</p>

<p>
<em>Note: it might be that a condition variable that has 2 or more threads
waiting on it is signaled many times, and yet one of the threads waiting on
it never awakened. This is because we are not guaranteed which of the waiting
threads is awakened when the variable is signaled. It might be that the awakened
thread quickly comes back to waiting on the condition variable, and gets
awakened again when the variable is signaled again, and so on. The situation
for the un-awakened thread is called 'starvation'. It is up to the programmer
to make sure this situation does not occur if it implies bad behavior. Yet,
in our server example from before, this situation might indicate requests are
coming in a very slow pace, and thus perhaps we have too many threads waiting
to service requests. In this case, this situation is actually good, as it means
every request is handled immediately when it arrives.
</em>
</p>

<p>
<em>Note 2: when the condition variable is being broadcast (using
pthread_cond_broadcast), this does not mean all threads are running together.
Each of them tries to lock the mutex again before returning from their wait
function, and thus they'll start running one by one, each one locking the
mutex, doing their work, and freeing the mutex before the next thread gets
its chance to run.
</em>
</p>

<hr>

<a name="thread_condvar_destroy">
<font color=brown><h4>Destroying A Condition Variable</h4></font>
</a>
<p>
After we are done using a condition variable, we should destroy it, to free
any system resources it might be using. This can be done using the
<code>pthread_cond_destroy()</code>. In order for this to work, there should
be no threads waiting on this condition variable. Here is how to use
this function, again, assuming 'got_request' is a pre-initialized condition
variable:
<br><br>
<pre><code>
int rc = pthread_cond_destroy(&amp;got_request);
if (rc == EBUSY) { <font color=brown>/* some thread is still waiting on this condition variable */</font>
    <font color=brown>/* handle this case here... */</font>
    .
    .
}
</code></pre>
<br><br>
What if some thread is still waiting on this variable? Depending on the case,
it might imply some flaw in the usage of this variable, or just lack of proper
thread cleanup code. It is probably good to alert the programmer, at least
during the debug phase of the program, of such a case. It might mean nothing,
but it might be significant.
</p>

<hr>

<a name="thread_condvar_condition">
<font color=brown><h4>A Real Condition For A Condition Variable</h4></font>
</a>
<p>
A note should be taken about condition variables - they are usually pointless
without some real condition checking combined with them. To make this clear,
lets consider the server example we introduced earlier. Assume that we use
the 'got_request' condition variable to signal that a new request has arrived
that needs handling, and is held in some requests queue. If we had threads
waiting on the condition variable when this variable is signaled, we are
assured that one of these threads will awake and handle this request.
</p>

<p>
However, what if all threads are busy handling previous requests, when a new
one arrives? The signaling of the condition variable will do nothing (since
all threads are busy doing other things, NOT waiting on the condition variable
now), and after all threads finish handling their current request, they come
back to wait on the variable, which won't necessarily be signaled again
(for example, if no new requests arrive). Thus, there is at least one
request pending, while all handling threads are blocked, waiting for a signal.
</p>

<p>
In order to overcome this problem, we may set some integer variable to
denote the number of pending requests, and have each thread check the value
of this variable before waiting on the variable. If this variable's value
is positive, some request is pending, and the thread should go and handle
it, instead of going to sleep. Furthermore, a thread that handled a request
should reduce the value of this variable by one, to make the count correct.<br>
Let's see how this affects the waiting code we have seen above.
</p>

<hr width=40%>
<pre><code>
<font color=brown>/* number of pending requests, initially none */</font>
int num_requests = 0;
.
.
<font color=brown>/* first, lock the mutex */</font>
int rc = pthread_mutex_lock(&amp;request_mutex);
if (rc) { <font color=brown>/* an error has occurred */</font>
    perror("pthread_mutex_lock");
    pthread_exit(NULL);
}
<font color=brown>/* mutex is now locked - wait on the condition variable */</font>
<font color=brown>/* if there are no requests to be handled.              */</font>
rc = 0;
if (num_requests == 0)
    rc = pthread_cond_wait(&amp;got_request, &amp;request_mutex);
if (num_requests &gt; 0 &amp;&amp; rc == 0) { <font color=brown>/* we have a request pending */</font>
    <font color=brown>/* decrease count of pending requests - do this while we  */</font>
    <font color=brown>/* still hold the mutex, so the count stays consistent.   */</font>
    num_requests--;
    <font color=brown>/* unlock mutex - so other threads would be able to handle */</font>
    <font color=brown>/* other requests waiting in the queue in parallel.        */</font>
    rc = pthread_mutex_unlock(&amp;request_mutex);
    <font color=brown>/* do your stuff... */</font>
    .
    .
    <font color=brown>/* and lock the mutex again - to remain symmetrical. */</font>
    rc = pthread_mutex_lock(&amp;request_mutex);
}
<font color=brown>/* finally, unlock the mutex */</font>
pthread_mutex_unlock(&amp;request_mutex);
</code></pre>
<hr width=40%>

<hr>

<a name="thread_condvar_example">
<font color=brown><h4>Using A Condition Variable - A Complete Example</h4></font>
</a>
<p>
As an example for the actual usage of condition variables, we will show
a program that simulates the server we have described earlier - one thread,
the receiver, gets client requests. It inserts the requests to a linked list,
and a hoard of threads, the handlers, are handling these requests.
For simplicity, in our simulation, the receiver thread creates requests
and does not read them from real clients.
</p>

<p>
The program source is available in the file
<a href="thread-pool-server.c">thread-pool-server.c</a>, and contains
many comments. Please read the source file first, and then read the
following clarifying notes.
<ol>
<li> The 'main' function first launches the handler threads, and then
     performs the chores of the receiver thread, via its main loop.
<li> A single mutex is used both to protect the condition variable,
     and to protect the linked list of waiting requests. This simplifies
     the design. As an exercise, you may think how to divide these roles
     into two mutexes.
<li> The mutex itself MUST be a recursive mutex. In order to see why,
     look at the code of the 'handle_requests_loop' function. You will
     notice that it first locks the mutex, and afterwards calls the
     'get_request' function, which locks the mutex again. If we used
     a non-recursive mutex, we'd get locked indefinitely in the mutex locking
     operation of the 'get_request' function.<br>
     You may argue that we could remove the mutex locking in the 'get_request'
     function, and thus remove the double-locking problem, but this is
     a flawed design - in a larger program, we might call the 'get_request'
     function from other places in the code, and we'll need to check for
     proper locking of the mutex in each of them.
<li> As a rule, when using recursive mutexes, we should try to make sure that
     each lock operation is accompanied by a matching unlock operation in the
     same function. Otherwise, it will be very hard to make sure that after
     locking the mutex several times, it is being unlocked the same number
     of times, and deadlocks would occur.
<li> The implicit unlocking and re-locking of the mutex on the call to
     the <code>pthread_cond_wait()</code> function is confusing at first.
     It is best to add a comment regarding this behavior in the code,
     or else someone that reads this code might accidentally add a further
     mutex lock.
<li> When a handler thread handles a request - it should free the mutex,
     to avoid blocking all the other handler threads. After it finished
     handling the request, it should lock the mutex again, and check if
     there are more requests to handle.
</ol>
</p>

<hr size=4>

<a name="thread_tss">
<font color=brown><h2>"Private" thread data - Thread-Specific Data</h2></font>
</a>
<p>
In "normal", single-thread programs, we sometimes find the need to use a global
variable. Ok, so good old teach' told us it is bad practice to have global
variables, but they sometimes do come handy. Especially if they are static
variables - meaning, they are recognized only on the scope of a single file.
</p>

<p>
In multi-threaded programs, we also might find a need for such variables.
We should note, however, that the same variable is accessible from all the
threads, so we need to protect access to it using a mutex, which is extra
overhead. Furthermore, we sometimes need to have a variable that is
'global', but only for a specific thread. Or the same 'global' variable
should have different values in different threads. For example, consider
a program that needs to have one globally accessible linked list in each
thread, but not the same list. Further, we want the same code to be
executed by all threads. In this case, the global pointer to the start of the
list should point to a different address in each thread.
</p>

<p>
In order to have such a pointer, we need a mechanism that enables the same
global variable to have a different location in memory. This is what
the thread-specific data mechanism is used for.
</p>

<hr>

<a name="thread_tss_overview">
<font color=brown><h4>Overview Of Thread-Specific Data Support</h4></font>
</a>
<p>
In the thread-specific data (TSD) mechanism, we have notions of keys and values.
Each key has a name, and pointer to some memory area. Keys with the same name
in two separate threads always point to different memory locations - this
is handled by the library functions that allocate memory blocks to be
accessed via these keys. We have a function to create a key (invoked once per
key name for the whole process), a function to allocate memory (invoked
separately in each thread), and functions to de-allocate this memory for
a specific thread, and a function to destroy the key, again, process-wide.
We also have functions to access the data pointed to by a key, either
setting its value, or returning the value it points to.
</p>

<hr>

<a name="thread_tss_create">
<font color=brown><h4>Allocating Thread-Specific Data Block</h4></font>
</a>
<p>
The <code>pthread_key_create()</code> function is used to allocate
a new key. This key now becomes valid for all threads in our process.
When a key is created, the value it points to defaults to NULL. Later
on each thread may change its copy of the value as it wishes. Here is
how to use this function:
<br><br>
<pre><code>
<font color=brown>/* rc is used to contain return values of pthread functions */</font>
int rc;
<font color=brown>/* define a variable to hold the key, once created.         */</font>
pthread_key_t list_key;
<font color=brown>/* cleanup_list is a function that can clean up some data   */</font>
<font color=brown>/* it is specific to our program, not to TSD                */</font>
extern void cleanup_list(void*);

<font color=brown>/* create the key, supplying a function that'll be invoked when it's deleted. */</font>
rc = pthread_key_create(&amp;list_key, cleanup_list);
</code></pre>
<br><br>
Some notes:
<ol>
<li> After <code>pthread_key_create()</code> returns, the variable 'list_key'
     points to the newly created key.
<li> The function pointer passed as second parameter to
     <code>pthread_key_create()</code>, will be automatically invoked by the
     pthread library when our thread exits, with a pointer to the key's value
     as its parameter. We may supply a NULL pointer as the function pointer,
     and then no function will be invoked for the key. Note that the function
     will be invoked once in each thread, even though we created this key only
     once, in one thread (a sketch of such a destructor appears after this
     list).<br>
     If we created several keys, their associated destructor functions will
     be called in an arbitrary order, regardless of the order of keys creation.
<li> If the <code>pthread_key_create()</code> function succeeds, it returns 0.
     Otherwise, it returns some error code.
<li> There is a limit of <code>PTHREAD_KEYS_MAX</code> keys that may exist
     in our process at any given time. An attempt to create a key beyond this
     limit will cause a return value of <code>EAGAIN</code> from the
     <code>pthread_key_create()</code> function.
</ol>
</p>
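<p>
The cleanup_list() function used above was only declared, not defined. A
hypothetical sketch of such a destructor, assuming the key's value points to
memory that the thread obtained with malloc():
<br><br>
<pre><code>
void
cleanup_list(void* key_value)
{
    <font color=brown>/* key_value is the value this thread stored under the key */</font>
    if (key_value != NULL)
        free(key_value);   <font color=brown>/* release whatever the thread allocated */</font>
}
</code></pre>
</p>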

<hr>

<a name="thread_tss_access">
<font color=brown><h4>Accessing Thread-Specific Data</h4></font>
</a>
<p>
After we have created a key, we may access its value using two pthread
functions: <code>pthread_getspecific()</code> and
<code>pthread_setspecific()</code>. The first is used to get the value of a
given key, and the second is used to set the data of a given key. A key's value
is simply a void pointer (void*), so we can store in it anything that we want.
Lets see how to use these functions. We assume that 'a_key' is a properly
initialized variable of type <code>pthread_key_t</code> that contains a
previously created key:
</p>

<hr width=40%>

<pre><code>
<font color=brown>/* this variable will be used to store return codes of pthread functions */</font>
int rc;

<font color=brown>/* define a variable into which we'll store some data */</font>
<font color=brown>/* for example, an integer.                           */</font>
int* p_num = (int*)malloc(sizeof(int));
if (!p_num) {
    fprintf(stderr, "malloc: out of memory\n");
    exit(1);
}
<font color=brown>/* initialize our variable to some value */</font>
(*p_num) = 4;

<font color=brown>/* now lets store this value in our TSD key.    */</font>
<font color=brown>/* note that we don't store 'p_num' in our key. */</font>
<font color=brown>/* we store the value that p_num points to.     */</font>
rc = pthread_setspecific(a_key, (void*)p_num);

.
.
<font color=brown>/* and somewhere later in our code... */</font>
.
.
<font color=brown>/* get the value of key 'a_key' and print it. */</font>
{
    int* p_keyval = (int*)pthread_getspecific(a_key);

    if (p_keyval != NULL) {
        printf("value of 'a_key' is: %d\n", *p_keyval);
    }
}
</code></pre>

<hr width=40%>

<p>
Note that if we set the value of the key in one thread, and try to get it
in another thread, we will get a NULL, since this value is distinct for
each thread.
</p>

<p>
Note also that there are two cases where <code>pthread_getspecific()</code>
might return NULL:
<ol>
<li> The key supplied as a parameter is invalid (e.g. it wasn't created).
<li> The value of this key is NULL. This means it either wasn't initialized,
     or was set to NULL explicitly by a previous call to
     <code>pthread_setspecific()</code>.
</ol>
</p>

<hr>

<a name="thread_tss_delete">
<font color=brown><h4>Deleting Thread-Specific Data Block</h4></font>
</a>
<p>
The <code>pthread_key_delete()</code> function may be used to delete keys.
But do not be confused by this function's name: it does not delete memory
associated with this key, nor does it call the destructor function defined
during the key's creation. Thus, you still need to do memory cleanup on
your own if you need to free this memory during runtime. However, since
thread-specific data is usually used much like global variables, you usually
don't need to free this memory until the thread terminates, in which case
the pthread library will invoke your destructor functions anyway.
</p>

<p>
Using this function is simple. Assuming list_key is a
<code>pthread_key_t</code> variable pointing to a properly created key, use
this function like this:
<br><br>
<code>
int rc = pthread_key_delete(list_key);
</code>
<br><br>
The function will return 0 on success, or EINVAL if the supplied variable
does not point to a valid TSD key.
</p>

<hr>

<a name="thread_tss_example">
<font color=brown><h4>A Complete Example</h4></font>
</a>
<p>
None yet. Give me a while to think of one...... sorry. All I can
think of right now is 'global variables are evil'. I'll try to find a good
example for the future. If you have a good example, please let me know.
</p>

<hr size=4>

<a name="thread_cancel">
<font color=brown><h2>Thread Cancellation And Termination</h2></font>
</a>
<p>
As we create threads, we need to think about terminating them as well.
There are several issues involved here. We need to be able to
terminate threads cleanly. Unlike processes, where a very ugly method of
using signals is used, the folks that designed the pthreads library were
a little more thoughtful. So they supplied us with a whole system of
canceling a thread, cleaning up after a thread, and so on. We will discuss
these methods here.
</p>

<hr>

<a name="thread_cancel_cancel">
<font color=brown><h4>Canceling A Thread</h4></font>
</a>
<p>
When we want to terminate a thread, we can use the <code>pthread_cancel</code>
function. This function gets a thread ID as a parameter, and sends a
cancellation request to this thread. What this thread does with this
request depends on its state. It might act on it immediately, it might
act on it when it gets to a cancellation point (discussed below), or
it might completely ignore it. We'll see later how to set the state of
a thread and define how it acts on cancellation requests. Let's first see
how to use the cancel function. We assume that 'thr_id' is a variable
of type <code>pthread_t</code> containing the ID of a running thread:
<br><br>
<pre><code>
pthread_cancel(thr_id);
</code></pre>
<br><br>
The <code>pthread_cancel()</code> function returns 0, so we cannot know
if it succeeded or not.
</p>

<hr>

<a name="thread_cancel_setstate">
<font color=brown><h4>Setting Thread Cancellation State</h4></font>
</a>
<p>
A thread's cancel state may be modified using several methods. The first
is by using the <code>pthread_setcancelstate()</code> function. This function
defines whether the thread will accept cancellation requests or not. The
function takes two arguments. One that sets the new cancel state, and one
into which the previous cancel state is stored by the function. Here is
how it is used:
<br><br>
<pre><code>
int old_cancel_state;
pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &amp;old_cancel_state);
</code></pre>
<br><br>
This will disable canceling this thread. We can also enable canceling
the thread like this:
<br><br>
<pre><code>
int old_cancel_state;
pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, &amp;old_cancel_state);
</code></pre>
<br><br>
Note that you may supply a NULL pointer as the second parameter, and then
you won't get the old cancel state.
</p>

<p>
A similar function, named <code>pthread_setcanceltype()</code> is used
to define how a thread responds to a cancellation request, assuming
it is in the 'ENABLED' cancel state. One option is to handle the request
immediately (asynchronously). The other is to defer the request until
a cancellation point. To set the first option (asynchronous cancellation),
do something like:
<br><br>
<pre><code>
int old_cancel_type;
pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, &amp;old_cancel_type);
</code></pre>
<br><br>
And to set the second option (deferred cancellation):
<br><br>
<pre><code>
int old_cancel_type;
pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, &amp;old_cancel_type);
</code></pre>
<br><br>
Note that you may supply a NULL pointer as the second parameter, and then
you won't get the old cancel type.
</p>

<p>
You might wonder - "What if i never set the cancellation state or type
of a thread?". Well, in such a case, the <code>pthread_create()</code>
function automatically sets the thread to enabled deferred cancellation,
that is, <code>PTHREAD_CANCEL_ENABLE</code> for the cancel mode, and
<code>PTHREAD_CANCEL_DEFERRED</code> for the cancel type.
</p>

<hr>

<a name="thread_cancel_points">
<font color=brown><h4>Cancellation Points</h4></font>
</a>
<p>
As we've seen, a thread might be in a state where it does not handle
cancel requests immediately, but rather defers them until it reaches
a cancellation point. So what are these cancellation points?
</p>

<p>
In general, any function that might suspend the execution of a thread
for a long time, should be a cancellation point. In practice, this
depends on the specific implementation, and how conformant it is to
the relevant POSIX standard (and which version of the standard it
conforms to...). The following set of pthread functions serve as
cancellation points:
<ul>
<li> <code>pthread_join()</code>
<li> <code>pthread_cond_wait()</code>
<li> <code>pthread_cond_timedwait()</code>
<li> <code>pthread_testcancel()</code>
<li> <code>sem_wait()</code>
<li> <code>sigwait()</code>
</ul>
This means that if a thread executes any of these functions, it'll check
for deferred cancel requests. If there is one, it will execute the cancellation
sequence, and terminate. Out of these functions,
<code>pthread_testcancel()</code> is unique - its only purpose is to test
whether a cancellation request is pending for this thread. If there is,
it executes the cancellation sequence. If not, it returns immediately. This
function may be used in a thread that does a lot of processing without
getting into a "natural" cancellation state.
</p>
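<p>
For example, a hypothetical CPU-bound thread function (the name
'crunch_numbers' is made up for illustration) could test for a pending
cancellation request once per iteration of its main loop:
<br><br>
<pre><code>
void*
crunch_numbers(void* data)
{
    int i;

    for (i = 0; i &lt; 100000000; i++) {
        <font color=brown>/* ...do a chunk of heavy computation here... */</font>

        <font color=brown>/* honor a pending (deferred) cancellation request, if any */</font>
        pthread_testcancel();
    }
    pthread_exit(NULL);
}
</code></pre>
</p>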

<p>
<em>Note: In real conformant implementations of the pthreads standard, normal
system calls that cause the process to block, such as <code>read()</code>,
<code>select()</code>, <code>wait()</code> and so on, are also cancellation
points. The same goes for standard C library functions that use these
system calls (the various printf functions, for example).
</em>
</p>

<hr>

<a name="thread_cancel_cleanup">
<font color=brown><h4>Setting Thread Cleanup Functions</h4></font>
</a>
<p>
One of the features the pthreads library supplies is the ability for
a thread to clean up after itself, before it exits. This is done by
specifying one or more functions that will be called automatically
by the pthreads library when the thread exits, either due to its
own will (e.g. calling <code>pthread_exit()</code>), or due to it being
canceled.
</p>

<p>
Two functions are supplied for this purpose. The
<code>pthread_cleanup_push()</code> function is used to add a cleanup function
to the set of cleanup functions for the current thread. The
<code>pthread_cleanup_pop()</code> function removes the last function added
with <code>pthread_cleanup_push()</code>. When the thread terminates, its
cleanup functions are called in the reverse order of their registration, so
the last one to be registered is the first one to be called.
</p>

<p>
When the cleanup functions are called, each one is supplied with one parameter,
that was supplied as the second parameter to the
<code>pthread_cleanup_push()</code> function call. Let's see how these functions
may be used. In our example we'll see how these functions may be used to
clean up some memory that our thread allocates when it starts running.
</p>

<hr width=40%>
<pre><code>

<font color=brown>/* first, here is the cleanup function we want to register.        */</font>
<font color=brown>/* it gets a pointer to the allocated memory, and simply frees it. */</font>
void
cleanup_after_malloc(void* allocated_memory)
{
    if (allocated_memory)
        free(allocated_memory);
}

<font color=brown>/* and here is our thread's function.      */</font>
<font color=brown>/* we use the same function we used in our */</font>
<font color=brown>/* thread-pool server.                     */</font>
void*
handle_requests_loop(void* data)
{
    .
    .
    <font color=brown>/* this variable will be used later. please read on...         */</font>
    int old_cancel_type;

    <font color=brown>/* allocate some memory to hold the start time of this thread. */</font>
    <font color=brown>/* assume MAX_TIME_LEN is a previously defined macro.          */</font>
    char* start_time = (char*)malloc(MAX_TIME_LEN);

    <font color=brown>/* push our cleanup handler. */</font>
    pthread_cleanup_push(cleanup_after_malloc, (void*)start_time);
    .
    .
    <font color=brown>/* here we start the thread's main loop, and do whatever is desired.. */</font>
    .
    .
    .

    <font color=brown>/* and finally, we unregister the cleanup handler. our method may seem */</font>
    <font color=brown>/* awkward, but please read the comments below for an explanation.     */</font>

    <font color=brown>/* put the thread in deferred cancellation mode.      */</font>
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, &amp;old_cancel_type);

    <font color=brown>/* supplying '1' means to execute the cleanup handler */</font>
    <font color=brown>/* prior to unregistering it. supplying '0' would     */</font>
    <font color=brown>/* have meant not to execute it.                      */</font>
    pthread_cleanup_pop(1);

    <font color=brown>/* restore the thread's previous cancellation mode.   */</font>
    pthread_setcanceltype(old_cancel_type, NULL);
}
</code></pre>
<hr width=40%>

<p>
As we can see, we allocated some memory here, and registered a cleanup handler
that will free this memory when our thread exits. After the execution of
the main loop of our thread, we unregistered the cleanup handler. This must
be done in the same function that registered the cleanup handler, and at the
same nesting level, since the <code>pthread_cleanup_push()</code>
and <code>pthread_cleanup_pop()</code> functions are actually macros
that expand to a '{' symbol and a '}' symbol, respectively.
</p>

<p>
As for the reason we used that seemingly awkward piece of code to unregister
the cleanup handler: this is done to ensure that our thread won't get
canceled in the middle of the execution of our cleanup handler.
This could have happened if our thread was in asynchronous cancellation
mode. Thus, we made sure it was in deferred cancellation mode, then
unregistered the cleanup handler, and finally restored whatever cancellation
mode our thread was in previously. Note that we still assume the thread cannot
be canceled in the execution of <code>pthread_cleanup_pop()</code> itself -
this is true, since <code>pthread_cleanup_pop()</code> is not a cancellation
point.
</p>

<hr>

<a name="thread_cancel_join">
<font color=brown><h4>Synchronizing On Threads Exiting</h4></font>
</a>
<p>
Sometimes it is desired for a thread to wait for the end of execution of
another thread. This can be done using the <code>pthread_join()</code>
function. It receives two parameters: a variable of type <code>pthread_t</code>,
denoting the thread to be joined, and an address of a <code>void*</code>
variable, into which the exit code of the thread will be placed (or
<code>PTHREAD_CANCELED</code> if the joined thread was canceled).<br>
The <code>pthread_join()</code> function suspends the execution of the
calling thread until the joined thread is terminated.
</p>
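<p>
As a small sketch (the thread variable and messages here are made up, not
taken from the example programs), joining a thread and checking whether it
was canceled might look like this:
<br><br>
<pre><code>
pthread_t worker;      <font color=brown>/* assume this thread was created earlier.   */</font>
void* exit_status;     <font color=brown>/* will receive the joined thread's result.  */</font>

<font color=brown>/* wait until the 'worker' thread terminates. */</font>
if (pthread_join(worker, &amp;exit_status) == 0) {
    if (exit_status == PTHREAD_CANCELED)
        printf("worker thread was canceled\n");
    else
        printf("worker thread exited with status %p\n", exit_status);
}
</code></pre>
</p>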

<p>
For example, consider our earlier thread pool server.
Looking back at the code, you'll see that we used an odd <code>sleep()</code>
call before terminating the process. We did this since the main thread
had no idea when the other threads finished processing all pending
requests. We could have solved it by making the main thread run a loop
of checking if no more requests are pending, but that would be a busy loop.
</p>

<p>
A cleaner way of implementing this is by adding three changes to the code:
<ol>
<li> Tell the handler threads when we are done creating requests, by setting
     some flag.
<li> Make the threads check, whenever the requests queue is empty, whether
     or not new requests are supposed to be generated. If not, then the
     thread should exit.
<li> Make the main thread wait for the end of execution of each of the threads
     it spawned.
</ol>
</p>

<p>
The first two changes are rather easy. We create a global variable named
'done_creating_requests' and set it to '0' initially. Each thread checks
this variable every time before it intends to go and wait
on the condition variable (i.e. when the requests queue is empty).<br>
The main thread is modified to set this variable to '1' after it finishes
generating all requests. Then the condition variable is broadcast,
in case any of the threads is waiting on it, to make sure all threads
go and check the 'done_creating_requests' flag.
</p>
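<p>
A rough sketch of these two changes (assuming, purely for illustration, that
the requests queue's mutex, condition variable and counter are named
<code>request_mutex</code>, <code>got_request</code> and
<code>num_requests</code> - check the earlier thread-pool example and the
actual source for the real names) could look like this:
<br><br>
<pre><code>
int done_creating_requests = 0; <font color=brown>/* set to '1' by the main thread when done. */</font>

<font color=brown>/* in a handler thread, when the queue is found empty: */</font>
pthread_mutex_lock(&amp;request_mutex);
while (num_requests == 0 &amp;&amp; !done_creating_requests) {
    <font color=brown>/* wait for a new request, or for the 'done' broadcast. */</font>
    pthread_cond_wait(&amp;got_request, &amp;request_mutex);
}
if (num_requests == 0 &amp;&amp; done_creating_requests) {
    <font color=brown>/* no more requests will ever arrive - time to exit. */</font>
    pthread_mutex_unlock(&amp;request_mutex);
    pthread_exit(NULL);
}
<font color=brown>/* otherwise, take a request off the queue as before... */</font>

<font color=brown>/* in the main thread, after the last request was generated: */</font>
pthread_mutex_lock(&amp;request_mutex);
done_creating_requests = 1;
pthread_cond_broadcast(&amp;got_request); <font color=brown>/* wake all waiting handler threads. */</font>
pthread_mutex_unlock(&amp;request_mutex);
</code></pre>
</p>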

<p>
The last change is done using a <code>pthread_join()</code> loop:
call <code>pthread_join()</code> once for each handler thread. This way,
we know that this loop finishes only after all handler threads have exited,
and only then do we safely terminate the process. If we didn't
use this loop, we might terminate the process while one of the handler
threads is still handling a request.
</p>
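<p>
Such a join loop might look like this (a sketch only - the array and
constant names are assumed, not necessarily those used in the actual
source file):
<br><br>
<pre><code>
int i;
void* thread_status;

<font color=brown>/* wait for each handler thread to finish handling its last request. */</font>
for (i = 0; i &lt; NUM_HANDLER_THREADS; i++) {
    pthread_join(p_threads[i], &amp;thread_status);
}
<font color=brown>/* only now is it safe to terminate the process. */</font>
</code></pre>
</p>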

<p>
The modified program is available in the file named
<a href="thread-pool-server-with-join.c">thread-pool-server-with-join.c</a>.
Look for the word 'CHANGE' (in capital letters) to see the locations
of the three changes.
</p>

<hr>

<a name="thread_cancel_detach">
<font color=brown><h4>Detaching A Thread</h4></font>
</a>
<p>
We have seen how threads can be joined using the <code>pthread_join()</code>
function. In fact, threads that are in a 'join-able' state, must be
joined by other threads, or else their memory resources will not be fully
cleaned up. This is similar to what happens with processes whose parents
didn't wait for them (so-called 'zombie' processes).
</p>

<p>
If we have a thread that we wish would exit whenever it wants without
the need to join it, we should put it in the detached state. This can
be done either with appropriate flags to the <code>pthread_create()</code>
function, or by using the <code>pthread_detach()</code> function. We'll
consider the second option in our tutorial.
</p>

<p>
The <code>pthread_detach()</code> function gets one parameter, of type
<code>pthread_t</code>, that denotes the thread we wish to put in the detached
state. For example, we can create a thread and immediately detach it
with a code similar to this:
<br><br>
<pre><code>
pthread_t a_thread;   <font color=brown>/* store the thread's structure here              */</font>
int rc;               <font color=brown>/* return value for pthread functions.            */</font>
extern void* thread_loop(void*); <font color=brown>/* declare the thread's main function. */</font>

<font color=brown>/* create the new thread. */</font>
rc = pthread_create(&amp;a_thread, NULL, thread_loop, NULL);

<font color=brown>/* and if that succeeded, detach the newly created thread. */</font>
if (rc == 0) {
    rc = pthread_detach(a_thread);
}
</code></pre>
<br><br>
Of course, if we wish to have a thread in the detached state immediately,
using the first option (setting the detached state directly when calling
<code>pthread_create()</code>) is more efficient.
</p>
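<p>
For completeness, here is a sketch of that first option - creating the
thread already detached, using a thread attributes object (this uses the
standard pthreads attributes API, together with the same
<code>thread_loop</code> function declared in the snippet above):
<br><br>
<pre><code>
pthread_t a_thread;
pthread_attr_t attr;
int rc;

rc = pthread_attr_init(&amp;attr);
rc = pthread_attr_setdetachstate(&amp;attr, PTHREAD_CREATE_DETACHED);

<font color=brown>/* the new thread starts its life already in the detached state. */</font>
rc = pthread_create(&amp;a_thread, &amp;attr, thread_loop, NULL);

rc = pthread_attr_destroy(&amp;attr); <font color=brown>/* the attributes object is no longer needed. */</font>
</code></pre>
</p>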

<hr>

<a name="thread_cancel_example">
<font color=brown><h4>Threads Cancellation - A Complete Example</h4></font>
</a>
<p>
Our next example is much larger than the previous examples. It demonstrates
how one could write a multi-threaded program in C, in a more or less clean
manner. We take our previous thread-pool server, and enhance it in two
ways. First, we add the ability to tune the number of handler threads
based on the requests load. New threads are created if the requests queue
becomes too large, and after the queue becomes shorter again, extra threads
are canceled.
</p>

<p>
Second, we fix up the termination of the server when there are no more new
requests to handle. Instead of the ugly sleep we used in our first example,
this time the main thread waits for all threads to finish handling their
last requests, by joining each of them using <code>pthread_join()</code>.
</p>

<p>
The code is now split into 4 separate files, as follows:
<ol>
<li> <a href="thread-pool-server-changes/requests_queue.c">requests_queue.c</a>
     - This file contains functions to manipulate a requests queue. We took
     the <code>add_request()</code> and <code>get_request()</code> functions
     and put them here, along with a data structure that contains all the
     variables previously defined as globals - pointer to queue's head,
     counter of requests, and even pointers to the queue's mutex and
     condition variable. This way, all the manipulation of the data is done
     in a single file, and all its functions receive a pointer to a
     'requests_queue' structure.
<li> <a href="thread-pool-server-changes/handler_thread.c">handler_thread.c</a>
     - This contains the functions executed by each handler thread - a function
     that runs the main loop (an enhanced version of the
     'handle_requests_loop()' function), and a few local functions explained
     below. We also define a data structure to collect all the data we want
     to pass to each thread. We pass a pointer to such a structure as a
     parameter to the thread's function in the <code>pthread_create()</code>
     call, instead of using a bunch of ugly globals: the thread's ID, a pointer
     to the requests queue structure, and pointers to the mutex and condition
     variable to be used.
<li> <a href="thread-pool-server-changes/handler_threads_pool.c">
     handler_threads_pool.c</a> -
     here we define an abstraction of a thread pool. We have a function
     to create a thread, a function to delete (cancel) a thread, and a function
     to delete all active handler threads, called during program termination.
     We define here a structure similar to the one used to hold the requests
     queue, and thus the functions are similar. However, because we only
     access this pool from one thread, the main thread, we don't need to
     protect it using a mutex. This saves some overhead caused by mutexes.
     The overhead is small, but for a busy server, it might begin to become
     noticeable.
<li> <a href="thread-pool-server-changes/main.c">main.c</a> -
     and finally, the main function to rule them all, and in the system
     bind them. This function creates a requests queue, creates a threads
     pool, creates a few handler threads, and then starts generating requests.
     After adding a request to the queue, it checks the queue size and the
     number of active handler threads, and adjusts the number of threads
     to the size of the queue. We use a simple
     <a href="#side_notes_watermarks">water-marks algorithm</a> here,
     but as you can see from the code, it can easily be replaced by
     a more sophisticated algorithm. In our water-marks algorithm
     implementation, when the high water-mark is reached, we start creating new
     handler threads, to empty the queue faster. Later, when the low water-mark
     is reached, we start canceling the extra threads, until we are left with
     the original number of handler threads (a rough sketch of this logic
     follows the list below).
</ol>
</p>
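<p>
A rough sketch of the water-marks adjustment mentioned above (all names and
thresholds here are made up for illustration - the real logic lives in
main.c) could look like this:
<br><br>
<pre><code>
<font color=brown>/* after adding a request to the queue: */</font>
if (num_pending_requests &gt; HIGH_WATER_MARK
    &amp;&amp; num_handler_threads &lt; MAX_NUM_HANDLER_THREADS) {
    add_handler_thread(pool);     <font color=brown>/* queue too long - add a handler thread.      */</font>
}
if (num_pending_requests &lt; LOW_WATER_MARK
    &amp;&amp; num_handler_threads &gt; NUM_HANDLER_THREADS) {
    delete_handler_thread(pool);  <font color=brown>/* queue short again - cancel an extra thread. */</font>
}
</code></pre>
</p>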

<p>
After rewriting the program in a more manageable manner, we added code that
uses the newly learned pthreads functions, as follows:
<ol>
<li> Each handler thread created puts itself in the deferred cancellation mode.
     This makes sure that when it gets canceled, it can finish handling
     its current request, before terminating.
<li> Each handler thread also registers a cleanup function, to unlock
     the mutex when it terminates. This is done since a thread is most likely
     to get canceled when calling <code>pthread_cond_wait()</code>, which
     is a cancellation point. Since that function is called with the mutex
     locked, a thread canceled there would exit with the mutex still locked,
     causing all other threads to 'hang' on the mutex. Thus, unlocking the mutex
     in a cleanup handler (registered with the <code>pthread_cleanup_push()</code>
     function) is the proper solution (see the sketch after this list).
<li> Finally, the main thread is set to clean up properly, and not brutally,
     as we did before. When it wishes to terminate, it calls the
     'delete_handler_threads_pool()' function, which calls
     <code>pthread_join</code> for each remaining handler thread. This way,
     the function returns only after all handler threads finished handling
     their last request.
</ol>
</p>
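<p>
Here is a minimal sketch of the mutex-unlocking cleanup handler mentioned
in point 2 above (the mutex, condition variable and counter names are the
same illustrative ones used in the earlier sketch - refer to
handler_thread.c for the real code):
<br><br>
<pre><code>
<font color=brown>/* cleanup handler - unlock the mutex the thread was holding. */</font>
void
cleanup_free_mutex(void* a_mutex)
{
    if (a_mutex)
        pthread_mutex_unlock((pthread_mutex_t*)a_mutex);
}

<font color=brown>/* inside the handler thread's main loop: */</font>
pthread_cleanup_push(cleanup_free_mutex, (void*)&amp;request_mutex);

pthread_mutex_lock(&amp;request_mutex);
while (num_requests == 0) {
    <font color=brown>/* if the thread is canceled inside this call, the cleanup   */</font>
    <font color=brown>/* handler above will unlock the mutex on its behalf.        */</font>
    pthread_cond_wait(&amp;got_request, &amp;request_mutex);
}
<font color=brown>/* ... handle the request ... */</font>
pthread_mutex_unlock(&amp;request_mutex);

pthread_cleanup_pop(0);  <font color=brown>/* unregister the handler without executing it. */</font>
</code></pre>
</p>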

<p>
Please refer to the <a href="thread-pool-server-changes">source code</a> for
the full details. Reading the header files first will make it easier to
understand the design. To compile the program, just switch to the
thread-pool-server-changes directory, and type 'gmake'.
</p>

<p>
Exercise: our last program contains a possible race condition during
its termination process. Can you see what this race is all about? Can you offer
a complete solution to this problem? (hint - think of what happens to threads
deleted using 'delete_handler_thread()').
</p>

<p>
Exercise 2: the way we implement the water-marks algorithm might be too
slow in creating new threads. Try thinking of a different algorithm that
will shorten the average time a request stays on the queue until it gets
handled. Add some code to measure this time, and experiment until you find
your "optimal pool algorithm". Note - Time should be measured in very small
units (using the <code>getrusage</code> system call), and several runs of
each algorithm should be made, to get more accurate measurements.
</p>

<hr size=4>

<a name="thread_user_interface">
<font color=brown><h2>Using Threads For Responsive User Interface Programming</h2></font>
</a>
<p>
One area in which threads can be very helpful is in user-interface programs.
These programs are usually centered around a loop of reading user input,
processing it, and showing the results of the processing. The processing part
may sometimes take a while to complete, and the user is made to wait during
this operation. By placing such long operations in a separate thread, while
having another thread read user input, the program can be more responsive.
It may even allow the user to cancel the operation in the middle.
</p>

<p>
In graphical programs the problem is more severe, since the application
should always be ready for a message from the windowing system telling it
to repaint part of its window. If it's too busy executing some other
task, its window will remain blank, which is rather ugly. In such a case,
it is a good idea to have one thread handle the message loop of the windowing
system, and always be ready to receive such repaint requests (as well as user
input). Whenever this thread sees a need to do an operation that might take a
long time to complete (say, more than 0.2 seconds in the worst case), it will
delegate the job to a separate thread.
</p>

<p>
In order to structure things better, we may use a third thread, to control
and synchronize the user-input and task-performing threads. If the user-input
thread gets any user input, it will ask the controlling thread to handle the
operation. If the task-performing thread finishes its operation, it will ask
the controlling thread to show the results to the user.
</p>

<hr>

<a name="thread_user_interface_example">
<font color=brown><h4>User Interaction - A Complete Example</h4></font>
</a>
<p>
As an example, we will write a simple character-mode program that counts
the number of lines in a file, while allowing the user to cancel the operation
in the middle.
</p>

<p>
Our main thread will launch one thread to perform the
line counting, and a second thread to check for user input. After that,
the main thread waits on a condition variable. When any of the threads finishes
its operation, it signals this condition variable, in order to let the main
thread check what happened. A global variable is used to flag whether or not
a cancel request was made by the user. It is initialized to '0', but if
the user-input thread receives a cancellation request (the user pressing 'e'),
it sets this flag to '1', signals the condition variable, and terminates.
The line-counting thread will signal the condition variable only after it
finished its computation.
</p>
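<p>
The waiting done by the main thread may be sketched like this (the variable
and thread names here are invented for illustration - see the actual source
file for the real ones):
<br><br>
<pre><code>
int cancel_requested = 0;  <font color=brown>/* set to '1' by the user-input thread.    */</font>
int counting_done    = 0;  <font color=brown>/* set to '1' by the line-counting thread. */</font>

<font color=brown>/* in the main thread - wait until one of the two threads signals us. */</font>
pthread_mutex_lock(&amp;flags_mutex);
while (!cancel_requested &amp;&amp; !counting_done) {
    pthread_cond_wait(&amp;progress_cond, &amp;flags_mutex);
}
pthread_mutex_unlock(&amp;flags_mutex);

if (cancel_requested) {
    <font color=brown>/* the user pressed 'e' - cancel the line-counting thread. */</font>
    pthread_cancel(line_counting_thread);
}
</code></pre>
</p>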

<p>
Before you go read the program, we should explain the use of the
<code>system()</code> function and the 'stty' Unix command. The
<code>system()</code> function spawns a shell in which it executes the Unix
command given as a parameter. The <code>stty</code> Unix command is used to
change terminal mode settings. We use it to switch the terminal from
its default, line-buffered mode, to a character mode (also known as raw mode),
so the call to <code>getchar()</code> in the user-input thread will return
immediately after the user presses any key. If we hadn't done so, the system
would buffer all input to the program until the user presses the ENTER key. Finally,
since this raw mode is not very useful (to say the least) once the program
terminates and we get the shell prompt again, the user-input thread registers
a cleanup function that restores the normal terminal mode, i.e. line-buffered.
For more info, please refer to stty's manual page.
</p>
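<p>
The idea may be sketched as follows (the exact <code>stty</code> arguments
used in line-count.c may differ; 'raw' and 'sane' are standard stty modes,
used here only for illustration):
<br><br>
<pre><code>
<font color=brown>/* cleanup handler - restore the normal, line-buffered terminal mode. */</font>
void
restore_terminal_mode(void* dummy)
{
    system("stty sane");   <font color=brown>/* reset the terminal settings. */</font>
}

<font color=brown>/* at the start of the user-input thread: */</font>
system("stty raw");        <font color=brown>/* switch the terminal to character (raw) mode. */</font>
pthread_cleanup_push(restore_terminal_mode, NULL);

<font color=brown>/* ... loop on getchar() until the user presses 'e' ... */</font>

pthread_cleanup_pop(1);    <font color=brown>/* restore the terminal mode on normal exit too. */</font>
</code></pre>
</p>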

<p>
The program's source can be found in the file
<a href="line-count.c">line-count.c</a>.
The name of the file whose lines it reads is hardcoded to
'very_large_data_file'. You should create a file with this name in the
program's directory (large enough for the operation to take enough time).
Alternatively, you may un-compress the file 'very_large_data_file.Z' found
in this directory, using the command:
<br><br>
<code>
uncompress very_large_data_file.Z
</code>
<br><br>
Note that this will create a 5MB(!) file named 'very_large_data_file', so make
sure you have enough free disk-space before performing this operation.
</p>

<hr size=4>

<a name="thread_3rd_party">
<font color=brown><h2>Using 3rd-Party Libraries In A Multi-Threaded Application</h2></font>
</a>
<p>
One more point, and a very important one, should be noted by programmers
employing multi-threading in their programs. Since a multi-threaded program
might have the same function executed by different threads at the same time,
one must make sure that any function that might be invoked from more than one
thread at a time, is MT-safe (Multi-Thread Safe). This means that any access
to data structures and other shared resources is protected using mutexes.
</p>

<p>
It may be possible to use a non-MT-safe library in a multi-threaded program in
two ways:
<ol>
<li> <u>Use this library only from a single thread</u>. This way we are assured
     that no function from the library is executed simultaneously from two
     separate threads. The problem here is that it might limit your whole
     design, and might force you to add more communications between threads,
     if another thread needs to somehow use a function from this library.
<li> <u>Use mutexes to protect function calls to the library</u>. This means
     that a single mutex is used by any thread invoking any function in this
     library. The mutex is locked, the function is invoked, and then the mutex
     is unlocked. The problem with this solution is that the locking is not
     done in a fine granularity - even if two functions from the library do not
     interfere with each other, they still cannot be invoked at the same
     time by separate threads. The second thread will be blocked on the mutex
     until the first thread finishes the function call. You might consider using
     separate mutexes for unrelated functions, but usually you have no idea how
     the library really works and thus cannot know which functions access the
     same set of resources. More than that, even if you do know that, a new
     version of the library might behave differently, forcing you to modify
     your whole locking scheme (a minimal sketch of this option follows below).
</ol>
As you can see, non-MT-safe libraries need special attention, so it is best
to find MT-safe libraries with a similar functionality, if possible.
</p>
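<p>
A minimal sketch of the second option (the library function name here is
made up) would be wrapping every call into the library with one global
mutex:
<br><br>
<pre><code>
pthread_mutex_t library_mutex = PTHREAD_MUTEX_INITIALIZER;

<font color=brown>/* wrapper used by all threads instead of calling the library directly. */</font>
void
safe_library_call(int arg)
{
    pthread_mutex_lock(&amp;library_mutex);
    some_non_mt_safe_library_function(arg); <font color=brown>/* hypothetical library function. */</font>
    pthread_mutex_unlock(&amp;library_mutex);
}
</code></pre>
</p>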

<hr size=4>

<a name="thread_debugger">
<font color=brown><h2>Using A Threads-Aware Debugger</h2></font>
</a>
<p>
One last thing to note - when debugging a multi-threaded application, one
needs to use a debugger that "sees" the threads in the program. Most
up-to-date debuggers that come with commercial development environments
are thread-aware. As for Linux, gdb as shipped with most (all?) distributions
does not seem to be thread-aware. There is a project, called 'SmartGDB', that
added thread support to gdb, as well as a graphical user interface (which
is almost a must when debugging multi-threaded applications). However,
it may be used to debug only multi-threaded applications that use
the various user-level thread libraries. Debugging LinuxThreads with SmartGDB
requires applying some kernel patches, that
are currently available only for Linux kernels from the 2.1.X series. More
information about this tool may be found at
<a href="http://hegel.ittc.ukans.edu/projects/smartgdb/">http://hegel.ittc.ukans.edu/projects/smartgdb/</a>.
There is also some information about availability of patches to the 2.0.32
kernel and gdb 4.17. This information may be found on the
<a href="http://pauillac.inria.fr/~xleroy/linuxthreads/">LinuxThreads homepage</a>.
</p>

<hr size=4>

<a name="side_notes">
<h4>Side-Notes</h4>
</a>

<dl>
<a name="side_notes_watermarks">
<dt>water-marks algorithm</a>
<dd>An algorithm used mostly when handling buffers or queues: start filling
    in the queue. If its size exceeds a threshold, known as the high water-mark,
    stop filling the queue (or start emptying it faster). Keep this state until
    the size of the queue becomes lower than another threshold, known as the
    low water-mark. At this point, resume the operation of filling the queue
    (or return the emptying speed to the original speed).
</dl>
<br><br>

<p align=center><img src=http://users.actcom.co.il/~choo/lupg/images/lupg_toolbar.gif height=40 width=360 alt="" usemap="#lupg_map"><map name=lupg_map>
<area shape=rect coords="3,0 37,39" href=http://users.actcom.co.il/~choo/lupg alt="LUPG home">
<area shape=rect coords="67,0 102,39" href=http://users.actcom.co.il/~choo/lupg/tutorials/index.html alt="Tutorials">
<area shape=rect coords="138,0 170,39" href=http://users.actcom.co.il/~choo/lupg/related-material.html alt="Related material">
<area shape=rect coords="213,0 232,39" href=http://users.actcom.co.il/~choo/lupg/project-ideas/index.html alt="Project Ideas">
<area shape=rect coords="272,0 290,39" href=http://users.actcom.co.il/~choo/lupg/essays/index.html alt="Essays">
<area shape=rect coords="324,0 355,39" href=mailto:choo@actcom.co.il alt="Send comments">
</map>
<br>[<a href=http://users.actcom.co.il/~choo/lupg/index.html>LUPG Home</a>]  [<a href=http://users.actcom.co.il/~choo/lupg/tutorials/index.html>Tutorials</a>]  [<a href=http://users.actcom.co.il/~choo/lupg/related-material.html>Related Material</a>] [<a href=http://users.actcom.co.il/~choo/lupg/essays/index.html>Essays</a>] [<a href=http://users.actcom.co.il/~choo/lupg/project-ideas/index.html>Project Ideas</a>] [<a href=mailto:choo@actcom.co.il>Send Comments</a>]<br><img src=http://users.actcom.co.il/~choo/lupg/images/good_bar.gif alt=""></p>

<p>This document is copyright (c) 1998-2002 by guy keren.<br><br>
The material in this document is provided AS IS, without any
expressed or implied warranty, or claim of fitness for a
particular purpose. Neither the author nor any contributors shall
be liable for any damages incurred directly or indirectly by using
the material contained in this document.<br><br>
Permission to copy this document (electronically or on paper, for
personal or organization internal use) or publish it on-line is
hereby granted, provided that the document is copied as-is, this
copyright notice is preserved, and a link to the original document
is written in the document's body, or in the page linking to the
copy of this document.<br><br>
Permission to make translations of this document is also granted,
under these terms - assuming the translation preserves the meaning
of the text, the copyright notice is preserved as-is, and a link
to the original document is written in the document's body, or in
the page linking to the copy of this document.<br><br>
For any questions about the document and its license, please
<a href=mailto:choo@actcom.co.il>contact the author</a>.</p>




</body>

</html>
