Pthread(1)

What is a Thread?

  • Technically, a thread is defined as an independent stream of instructions that can be scheduled to run as such by the operating system. But what does this mean?

  • To the software developer, the concept of a "procedure" that runs independently from its main program may best describe a thread.

  • To go one step further, imagine a main program (a.out) that contains a number of procedures. Then imagine all of these procedures being able to be scheduled to run simultaneously and/or independently by the operating system. That would describe a "multi-threaded" program.

  • How is this accomplished?
  • Before understanding a thread, one first needs to understand a UNIX process. A process is created by the operating system, and requires a fair amount of "overhead". Processes contain information about program resources and program execution state, including:
    • Process ID, process group ID, user ID, and group ID
    • Environment
    • Working directory.
    • Program instructions
    • Registers
    • Stack
    • Heap
    • File descriptors
    • Signal actions
    • Shared libraries
    • Inter-process communication tools (such as message queues, pipes, semaphores, or shared memory).

    [Figure: a UNIX process, and threads within a UNIX process]

  • Threads use and exist within these process resources, yet are able to be scheduled by the operating system and run as independent entities largely because they duplicate only the bare essential resources that enable them to exist as executable code.

  • This independent flow of control is accomplished because a thread maintains its own:
    • Stack pointer
    • Registers
    • Scheduling properties (such as policy or priority)
    • Set of pending and blocked signals
    • Thread specific data.

  • So, in summary, in the UNIX environment a thread:
    • Exists within a process and uses the process resources
    • Has its own independent flow of control as long as its parent process exists and the OS supports it
    • Duplicates only the essential resources it needs to be independently schedulable
    • May share the process resources with other threads that act equally independently (and dependently)
    • Dies if the parent process dies (or is otherwise terminated)
    • Is "lightweight" because most of the overhead has already been accomplished through the creation of its process.

  • Because threads within the same process share resources:
    • Changes made by one thread to shared system resources (such as closing a file) will be seen by all other threads.
    • Two pointers having the same value point to the same data.
    • Reading and writing to the same memory locations is possible, and therefore requires explicit synchronization by the programmer.

What are Pthreads?

  • Historically, hardware vendors have implemented their own proprietary versions of threads. These implementations differed substantially from each other making it difficult for programmers to develop portable threaded applications.
  • In order to take full advantage of the capabilities provided by threads, a standardized programming interface was required.
    • For UNIX systems, this interface has been specified by the IEEE POSIX 1003.1c standard (1995).
    • Implementations adhering to this standard are referred to as POSIX threads, or Pthreads.
    • Most hardware vendors now offer Pthreads in addition to their proprietary APIs.
  • The POSIX standard has continued to evolve and undergo revisions, including the Pthreads specification.
  • Pthreads are defined as a set of C language programming types and procedure calls, implemented with a pthread.h header/include file and a thread library - though this library may be part of another library, such as libc, in some implementations.

Why Pthreads?

  • In the world of high performance computing, the primary motivation for using Pthreads is to realize potential program performance gains.
  • When compared to the cost of creating and managing a process, a thread can be created with much less operating system overhead. Managing threads requires fewer system resources than managing processes.
  • All threads within a process share the same address space. Inter-thread communication is more efficient and in many cases, easier to use than inter-process communication.
  • Threaded applications offer potential performance gains and practical advantages over non-threaded applications in several other ways:
    • Overlapping CPU work with I/O: For example, a program may have sections where it is performing a long I/O operation. While one thread is waiting for an I/O system call to complete, CPU intensive work can be performed by other threads.
    • Priority/real-time scheduling: tasks which are more important can be scheduled to supersede or interrupt lower priority tasks.
    • Asynchronous event handling: tasks which service events of indeterminate frequency and duration can be interleaved. For example, a web server can both transfer data from previous requests and manage the arrival of new requests.
  • The primary motivation for considering the use of Pthreads on an SMP architecture is to achieve optimum performance. In particular, if an application is using MPI for on-node communications, there is a potential that performance could be greatly improved by using Pthreads for on-node data transfer instead.
  • Pthreads can also be used for serial applications, to emulate parallel execution and/or take advantage of spare cycles.

  • A perfect example is the typical web browser, which for most people runs on a single-CPU desktop/laptop machine, yet many things can "appear" to be happening at the same time.
  • Many other common serial applications and operating systems use threads; the MS Windows OS and its applications, for example, are heavily threaded.

Designing Threaded Programs

Parallel Programming:
  • On modern, multi-cpu machines, pthreads are ideally suited for parallel programming, and whatever applies to parallel programming in general, applies to parallel pthreads programs.
  • There are many considerations for designing parallel programs, such as:
    • What type of parallel programming model to use?
    • Problem partitioning
    • Load balancing
    • Communications
    • Data dependencies
    • Synchronization and race conditions
    • Memory issues
    • I/O issues
    • Program complexity
    • Programmer effort/costs/time
    • ...
  • In general though, in order for a program to take advantage of Pthreads, it must be able to be organized into discrete, independent tasks which can execute concurrently. For example, if routine1 and routine2 can be interchanged, interleaved and/or overlapped in real time, they are candidates for threading.

  • Programs having the following characteristics may be well suited for pthreads:
    • Work that can be executed, or data that can be operated on, by multiple tasks simultaneously
    • Block for potentially long I/O waits
    • Use many CPU cycles in some places but not others
    • Must respond to asynchronous events
    • Some work is more important than other work (priority interrupts)
  • Several common models for threaded programs exist:
    • Manager/worker: a single thread, the manager, assigns work to other threads, the workers. Typically, the manager handles all input and parcels out work to the other tasks. At least two forms of the manager/worker model are common: static worker pool and dynamic worker pool.

    • Pipeline: a task is broken into a series of suboperations, each of which is handled in series, but concurrently, by a different thread. An automobile assembly line best describes this model.

    • Peer: similar to the manager/worker model, but after the main thread creates other threads, it participates in the work.

Shared Memory Model:

  • All threads have access to the same global, shared memory
  • Threads also have their own private data
  • Programmers are responsible for synchronizing access (protecting) globally shared data.

Thread-safeness:

  • Thread-safeness: in a nutshell, refers to an application's ability to execute multiple threads simultaneously without "clobbering" shared data or creating "race" conditions.
  • For example, suppose that your application creates several threads, each of which makes a call to the same library routine:
    • This library routine accesses/modifies a global structure or location in memory.
    • As each thread calls this routine it is possible that they may try to modify this global structure/memory location at the same time.
    • If the routine does not employ some sort of synchronization constructs to prevent data corruption, then it is not thread-safe.
  • The implication to users of external library routines is that if you aren't 100% certain the routine is thread-safe, then you take your chances with problems that could arise.
  • Recommendation: Be careful if your application uses libraries or other objects that don't explicitly guarantee thread-safeness. When in doubt, assume they are not thread-safe until proven otherwise. Thread-safety can be imposed by "serializing" the calls to the uncertain routine, for example with a mutex.

Thread Limits:

  • Although the Pthreads API is an ANSI/IEEE standard, implementations can, and usually do, vary in ways not specified by the standard.
  • Because of this, a program that runs fine on one platform, may fail or produce wrong results on another platform.
  • For example, the maximum number of threads permitted, and the default thread stack size are two important limits to consider when designing your program.
  • Several thread limits are discussed in more detail later in this tutorial.
The Pthreads API

  • The original Pthreads API was defined in the ANSI/IEEE POSIX 1003.1 - 1995 standard. The POSIX standard has continued to evolve and undergo revisions, including the Pthreads specification.

  • The subroutines which comprise the Pthreads API can be informally grouped into four major groups:
    1. Thread management: Routines that work directly on threads - creating, detaching, joining, etc. They also include functions to set/query thread attributes (joinable, scheduling etc.)

    2. Mutexes: Routines that deal with synchronization via a "mutex", which is an abbreviation for "mutual exclusion". Mutex functions provide for creating, destroying, locking and unlocking mutexes. These are supplemented by mutex attribute functions that set or modify attributes associated with mutexes.

    3. Condition variables: Routines that address communications between threads that share a mutex, based upon programmer-specified conditions. This group includes functions to create, destroy, wait and signal based upon specified variable values. Functions to set/query condition variable attributes are also included.

    4. Synchronization: Routines that manage read/write locks and barriers.

  • Naming conventions: All identifiers in the threads library begin with pthread_. Some examples are shown below.

    Routine Prefix       Functional Group
    pthread_             Threads themselves and miscellaneous subroutines
    pthread_attr_        Thread attributes objects
    pthread_mutex_       Mutexes
    pthread_mutexattr_   Mutex attributes objects
    pthread_cond_        Condition variables
    pthread_condattr_    Condition attributes objects
    pthread_key_         Thread-specific data keys
    pthread_rwlock_      Read/write locks
    pthread_barrier_     Synchronization barriers

  • The concept of opaque objects pervades the design of the API. The basic calls work to create or modify opaque objects - the opaque objects can be modified by calls to attribute functions, which deal with opaque attributes.

  • The Pthreads API contains around 100 subroutines. This tutorial will focus on a subset of these - specifically, those which are most likely to be immediately useful to the beginning Pthreads programmer.

  • For portability, the pthread.h header file should be included in each source file using the Pthreads library.

Compiling Threaded Programs

  • Several examples of compile commands used for pthreads codes are listed in the table below.

    Compiler / Platform      Compiler Command      Description
    INTEL (Linux)            icc  -pthread         C
                             icpc -pthread         C++
    PGI (Linux)              pgcc -lpthread        C
                             pgCC -lpthread        C++
    GNU (Linux, Blue Gene)   gcc -pthread          GNU C
                             g++ -pthread          GNU C++
    IBM (Blue Gene)          bgxlc_r / bgcc_r      C (ANSI / non-ANSI)
                             bgxlC_r, bgxlc++_r    C++

