OS Threads

1.Differences from Processes

process
– Resource ownership – a process is allocated a virtual address space to hold the process image
– Scheduling/execution – follows an execution path that may be interleaved with other processes
thread
– The unit of dispatching is referred to as a thread
– The unit of resource ownership is referred to as a process or task

  • A process is a single run of a program with some independent function over a data set; the process is the independent unit of resource allocation and scheduling in the system.
  • A thread is an entity within a process and the basic unit of CPU scheduling and dispatching; it is a smaller unit than a process that can run independently. A thread owns essentially no system resources of its own, only the few resources essential to its execution (such as a program counter, a register set, and a stack), but it shares all of the resources owned by its process with the other threads belonging to that process.
  • One thread can create and destroy another thread; multiple threads within the same process can execute concurrently.

2.Overview

  • A thread, sometimes called a lightweight process (LWP), is a basic unit of CPU utilization.
    – Comprises a thread ID, a program counter, a register set, and a stack.
    – Shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals.
  • A traditional (or heavyweight) process has a single thread of control.
  • If the process has multiple threads of control, it can perform more than one task at a time.

3.Motivation

Multiple tasks within an application can be implemented by separate threads:
– Update the display
– Fetch data
– Spell checking
– Answer a network request
Process creation is heavyweight while thread creation is lightweight

4.Benefits of Threads

  • Responsiveness
  • Resource Sharing
  • Economy
  • Scalability
    – a process can take advantage of multiprocessor architectures
    – threads may be running in parallel on different processing cores
  • Swapping a process involves swapping all threads of the process.
    Termination of a process terminates all threads within the process,
    since all threads share the same address space.
  • Thread states
    Running, ready, waiting

5.Multicore Programming

  1. Parallelism implies a system can perform more than one task simultaneously.
  2. Concurrency supports more than one task making progress.
  3. Two types of parallelism
    – Data parallelism
    – distributes subsets of the same data across multiple cores, same operation on each.
    • E.g., summing the contents of an array of size N: on a single-core system, one thread simply sums the elements [0]…[N-1]; on a dual-core system, thread A, running on core 0, could sum the elements [0]…[N/2-1], while thread B, running on core 1, could sum the elements [N/2]…[N-1].
    – Task parallelism
    – distributing threads across cores, each thread performing unique operation.
    • Different threads may be operating on the same data, or they may be operating on different data.
  4. Five areas present challenges in programming for multicore systems:
    – Identifying tasks.
    – Balance.
    – Data splitting.
    – Data dependency.
    – Testing and debugging.

6.Multithreading Models

  • User threads: supported above the kernel and managed without kernel support.
    – All thread management is done by the application
    – The kernel is not aware of the existence of threads
  • Kernel threads: supported and managed directly by the operating system.
    – The kernel performs thread creation, scheduling, and management in kernel space, and maintains context information for the process and the threads
    – Scheduling is done on a per-thread basis

• Many-to-One

  • false concurrency
  • Many user-level threads mapped to single kernel thread.
  • Multiple threads are unable to run in parallel on multiprocessors, because only one thread can access the kernel at a time.
  • The entire process will block if a thread makes a blocking system call

• One-to-One

  • true concurrency
  • Each user-level thread maps to a kernel thread.
  • Allows another thread to run when a thread makes a blocking system call. Allows multiple threads to run in parallel on multiprocessors.
  • Creating a user thread requires creating the corresponding kernel thread, which restricts the number of threads supported by the system

• Many-to-Many

  • Allows many user level threads to be mapped to many kernel threads
  • Allows the operating system to create a sufficient number of kernel threads

• Two-level Model

  • Similar to M:M, except that it allows a user thread to be bound to a kernel thread

7.Thread Libraries

  • Two primary ways of implementing a thread library:
    – Provide a library entirely in user space with no kernel support (local function calls); code and data structures exist in user space
    – Implement a kernel-level library supported directly by the OS (system calls); code and data structures exist in kernel space
  • Three main thread libraries:
    – POSIX Pthreads, Windows, and Java
  • Two general strategies for creating multiple threads:
    – Asynchronous threading
      • Once the parent creates a child thread, the parent resumes its execution, so that the parent and child execute concurrently.
    – Synchronous threading
      • The parent thread creates one or more children and then must wait for all of its children to terminate before it resumes (the so-called fork-join strategy).

POSIX Pthreads

Specification, not implementation
Common in UNIX and Linux operating systems
Often referred to as a user-level library, because no distinct relationship exists between a thread created using the Pthreads API and any associated kernel thread

Windows Threads

A kernel-level threads library available on Windows systems

Java Threads

  • Java provides support at the language level for the creation and management of threads. Java threads are managed by the JVM.
  • All Java programs comprise at least a single thread of control.
    Two techniques for creating threads in a Java program:
    – Extending the Thread class: create a new class that is derived from the Thread class and override its run() method.
    – Defining a class that implements the Runnable interface

8.Implicit Threading

  • Creation and management of threads done by compilers and run-time libraries rather than programmers
    Three methods explored
    – Thread Pools
    – OpenMP
    – Grand Central Dispatch
  • Thread pools
    Benefits of thread pool
    – usually faster to service a request with an existing thread than waiting to create a thread
    – Allows the number of threads in the application to be bounded by the size of the pool
    – Separating the task to be performed from the mechanics of creating the task allows different strategies for running the task.
  • OpenMP
    Set of compiler directives and an API for C, C++, and FORTRAN
    The programming model is thread-based, with parallelization directed through compiler directives; three programming elements provide parallel control: compiler directives, a set of API functions, and environment variables.
    All the threads then simultaneously execute the parallel region. As each thread exits the parallel region, it is terminated.
  • Grand Central Dispatch
    Blocks placed in dispatch queue
    – Assigned to available thread in thread pool when removed from queue
    Two types of dispatch queues:
    – serial – blocks removed in FIFO order, queue is per process, called main queue
    • Programmers can create additional serial queues within program
    – concurrent – removed in FIFO order but several may be removed at a time
    • allowing multiple blocks to execute in parallel.
    • Three system wide queues with priorities low, default, high

9.Threading Issues

The fork() and exec() system calls

If one thread in a program calls fork(), does the new process:
– duplicate all threads, or
– duplicate only the thread that invoked fork()?
If a thread invokes the exec() system call, the program specified in the parameter to exec() will replace the entire process, including all threads. If exec() is called immediately after forking, duplicating only the calling thread is appropriate.
If the separate process does not call exec() after forking, the separate process should duplicate all threads.

Signal handling

  • All signals follow the same pattern:
  • A signal is generated by the occurrence of a particular event.
  • The signal is delivered to a process.
  • Once delivered, the signal must be handled.
    Every signal may be handled by one of two possible handlers
    – a user-defined signal handler
    – a default signal handler, which is run by the kernel
  • In multithreaded programs, a signal should be delivered
    – to the thread to which the signal applies
    – to every thread in the process
    – to certain threads in the process
    – to a specific thread which is assigned to receive all signals for the process
  • Synchronous and asynchronous signals
    Synchronous signals need to be delivered to the thread causing the signal and not to other threads in the process. Some asynchronous signals, e.g., Ctrl+C, should be sent to all threads. Some asynchronous signals may be delivered only to those threads that are not blocking the signal.

Thread cancellation

  • Two different scenarios for cancelling a target thread
    – Asynchronous cancellation: one thread immediately terminates the target thread.
    – Deferred cancellation: the target thread periodically checks whether it should terminate.

  • The difficulty with cancellation occurs in situations where:
    – resources have been allocated to a cancelled thread
    – a thread was cancelled while in the middle of updating data it shares with other threads.

  • Most operating systems allow a process or thread to be cancelled asynchronously. The Pthreads API provides deferred cancellation:
    cancellation only occurs when the target thread reaches a cancellation point

Thread-local Storage

Thread-local storage (TLS) allows each thread to have its own copy of data.
Similar to static data

Scheduler Activation

Both the M:M and two-level models require communication to maintain the appropriate number of kernel threads allocated to the application.
They typically use an intermediate data structure between user and kernel threads
– the lightweight process (LWP)
