C++11 线程相关操作

目录

多线程Thread

Abstract

构造函数ProtoType

默认构造函数

初始化构造函数

拷贝构造函数

Move 构造函数

主要成员函数

get_id()

joinable()

join()

detach()

简单线程的创建

线程封装

线程变量 - thread_local

ABSTRACT

C++ Storage Type

Demo

互斥量

分类

独占互斥量 std::mutex

成员函数

递归互斥量std::recursive_mutex

带超时的互斥量std::timed_mutex和 std::recursive_timed_mutex

lock_guard和unique_lock的使用和区别

condition_variable - 条件变量

ABSTRACT

Member Functions

std::condition_variable::wait

prototype

abstract

parameters

return value

demo

std::condition_variable::wait_for

prototype

abstract

parameters

return value

demo

std::condition_variable::wait_until

prototype

abstract

parameters

return value

demo

原子操作-atomic

ABSTRACT

内存顺序模型

Template parameters

T

Member Functions

General atomic operations

Operations supported by certain specializations (integral and/or pointer)

std::atomic::store & load

prototype

abstract

parameters

demo

std::atomic::exchange

prototype

abstract

return value

demo

std::atomic::compare_exchange_weak

prototype

abstract

return value

demo

Reference counting - 引用计数

abstract

Implementation

Usage

Spinlock - 自旋锁

abstract

Implementation

Usage

Wait-free ring buffer - 无锁环形队列

abstract

Implementation

Usage

Lock-free multi-producer queue- 无锁多生产者队列

abstract

Implementation

Usage

异步操作future & async & packaged_task & promise

std::future

What

How

Demo

std::async

What

std::packaged_task

What

How

Demo

std::promise

What

How

Demo

单次操作 - call_once

What

How

Demo



多线程Thread

Abstract

Class to represent individual threads of execution.

A thread of execution is a sequence of instructions that can be executed concurrently with other such sequences in multithreading environments, while sharing a same address space.

An initialized thread object represents an active thread of execution; Such a thread object is joinable, and has a unique thread id.

A default-constructed (non-initialized) thread object is not joinable, and its thread id is common for all non-joinable threads.

A joinable thread becomes not joinable if moved from, or if either join or detach are called on them.

std::thread 类用来表示执行的各个线程。

执行线程是一个指令序列,它可以在多线程环境中与其他这样的序列并发执行,同时共享相同的地址空间。

一个已初始化的线程对象表示一个活动的执行线程;这样的线程对象是可 join 的,并且具有唯一的线程 id。

默认构造(未初始化)的线程对象是不可 join 的,它的线程 id 对所有不可 join 的线程都是相同的。

如果可 join 的线程被 move 走,或者对它调用了 join 或 detach,它就会变得不可 join。

std::thread 在 <thread> 头文件中声明,因此使用 std::thread 时需要包含 #include <thread> 头文件。

构造函数ProtoType

默认构造函数

//创建一个空的 thread 执行对象。
thread() _NOEXCEPT
{   // construct with no thread
    _Thr_set_null(_Thr);
}

初始化构造函数

//创建std::thread执行对象,该thread对象是joinable的,新产生的线程会调用threadFun函数,该函数的参数由 args 给出
template<class Fn, class... Args> 
explicit thread(Fn&& fn, Args&&... args);

拷贝构造函数

// 拷贝构造函数(被禁用),意味着 thread 不可被拷贝构造。
thread(const thread&) = delete;

Move 构造函数

//move 构造函数,调用成功之后 x 不代表任何 thread 执行对象
thread(thread&& x) noexcept;

#include <iostream>
#include <thread> 
using namespace std; 
void threadFun(int &a) // 引用传递
{
	cout << "this is thread fun !" <<endl; 
	cout <<" a = "<<(a+=10)<<endl;
}
int main() 
{ 
	int x = 10; 
	thread t1(threadFun, std::ref(x)); 
	thread t2(std::move(t1)); // t1 线程失去所有权 
	thread t3; 
	t3 = std::move(t2); // t2 线程失去所有权 
	//t1.join(); // ? 
	t3.join(); 
	cout<<"Main End "<<"x = "<<x<<endl;
	return 0; 
}

主要成员函数

get_id()

获取线程ID,返回类型std::thread::id对象。

joinable()

判断线程是否可以加入等待

join()

等待该线程执行完成之后才会返回

detach()

detach调用之后,目标线程就成为了守护线程,驻留后台运行,与之关联的std::thread对象失去对目标线程的关联,无法再通过std::thread对象取得该线程的控制权。当线程主函数执行完之后,线程就结束了,运行时库负责清理与该线程相关的资源。

调用 detach 函数之后:

*this 不再代表任何的线程执行实例。

joinable() == false

get_id() == std::thread::id()
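
下面是一个示意性的小例子(worker 等名字为示例虚构),演示 get_id()、joinable()、join() 与 detach() 的基本行为:

#include <iostream>
#include <thread>
#include <chrono>

void worker() {
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
}

int main() {
    std::thread t1(worker);
    std::cout << "t1 id: " << t1.get_id() << ", joinable: " << t1.joinable() << std::endl; // joinable == true
    t1.join();                                  // 阻塞等待线程执行结束
    std::cout << "after join, joinable: " << t1.joinable() << std::endl;                   // false

    std::thread t2(worker);
    t2.detach();                                // 变为守护线程, *this 不再代表任何线程执行实例
    std::cout << "after detach, joinable: " << t2.joinable()
              << ", id == std::thread::id(): " << (t2.get_id() == std::thread::id()) << std::endl;
    std::this_thread::sleep_for(std::chrono::milliseconds(200)); // 等 detach 出去的线程跑完再退出
    return 0;
}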

简单线程的创建

#include <iostream>
#include <thread> // head file



using namespace std;
// 1 传入0个值 
void func1() { cout << "func1 into" << endl; }
// 2 传入2个值
void func2(int a, int b) { cout << "func2 a + b = " << a+b << endl; }
//3 传入引用
void func3(int &c) { cout << "func3 c = " << &c << endl; c += 10; }
class A
{
	public: 
		 // 4. 传入类函数 
		 void func4(int a) 
		 { 
			 // std::this_thread::sleep_for(std::chrono::seconds(1));
			 cout << "thread:" << name_<< ", fun4 a = " << a << endl;
		 }
		 void setName(string name) { name_ = name; }
		 void displayName() { cout << "this:" << this << ", name:" << name_ << endl; }
		 void play() { std::cout<<"play call!"<<std::endl; } 
		 private: string name_; 
};
//5. detach 
void func5() 
{
	cout << "func5 into sleep " << endl; 
	std::this_thread::sleep_for(std::chrono::seconds(1)); 
	cout << "func5 leave " << endl; 
}
// 6. move 
void func6() { cout << "this is func6 !" <<endl; }
int main() {// 1. 传入0个值 
	cout << "\n\n main1--------------------------\n";
	std::thread t1(&func1); 
	// 只传递函数 
	t1.join(); 
	// 阻塞等待线程函数执行结束 
	// 2. 传入2个值
	cout << "\n\n main2--------------------------\n"; 
	int a =10; int b =20; 
	std::thread t2(func2, a, b); 
	// 加上参数传递,可以任意参数 
	t2.join(); 
	// 3. 传入引用 
	cout << "\n\n main3--------------------------\n"; 
	int c =10; 
	std::thread t3(func3, std::ref(c));
	// 加上参数传递,可以任意参数 
	t3.join(); 
	cout << "main3 c = " << &c << ", "<<c << endl; 
	// 4. 传入类函数 
	cout << "\n\n main4--------------------------\n"; 
	A * a4_ptr = new A(); 
	a4_ptr->setName("darren"); 
	std::thread t4(&A::func4, a4_ptr, 10); 
	t4.join(); 
	delete a4_ptr; 
	// 5.detach 
	cout << "\n\n main5--------------------------\n"; 
	std::thread t5(&func5); 
	// 只传递函数 
	t5.detach(); 
	// 脱离 // 
	std::this_thread::sleep_for(std::chrono::seconds(2)); // 如果这里不休眠会怎么样 
	cout << "\n main5 end\n"; 
	// 6.move 
	cout << "\n\n main6--------------------------\n"; 
	int x = 10; 
	thread t6_1(func6); 
	thread t6_2(std::move(t6_1)); 
	// t6_1 线程失去所有权 
	// t6_1.join(); // t6_1 已被 move, 不再 joinable, 再 join 会抛出 std::system_error 异常 
	t6_2.join(); 
	return 0; }

线程封装

#ifndef ZERO_THREAD_H 
#define ZERO_THREAD_H
#include <thread> 
class ZERO_Thread
{
	public: 
	ZERO_Thread(); // 构造函数 
	virtual ~ZERO_Thread(); // 析构函数 
	bool start(); 
	void stop(); 
	bool isAlive() const; // 线程是否存活. 
	std::thread::id id() { return th_->get_id(); } 
	std::thread* getThread() { return th_; } 
	void join(); // 等待当前线程结束, 不能在当前线程上调用 
	void detach(); //能在当前线程上调用 
	static size_t CURRENT_THREADID(); 
	protected:
	void threadEntry(); 
	virtual void run() = 0; // 运行 
	protected: bool running_; //是否在运行 
	std::thread *th_;
};
#endif // ZERO_THREAD_H
#include "zero_thread.h" 
#include <sstream> 
#include <iostream> 
#include <exception> 
ZERO_Thread::ZERO_Thread(): running_(false), th_(NULL) {}
ZERO_Thread::~ZERO_Thread() { 
	if(th_ != NULL) 
	{ //如果到调用析构函数的时候,调用者还没有调用join则触发detach,此时是一个比较危险的动作,用户必须知道他在做什么 
		if (th_->joinable()) 
		{ 
			std::cout << "~ZERO_Thread detach\n"; 
			th_->detach(); 
		}
		delete th_; 
		th_ = NULL; 
	}
	std::cout << "~ZERO_Thread()" << std::endl;
}
bool ZERO_Thread::start() 
{ 
	if (running_) {
		return false; 
	}try { 
		th_ = new std::thread(&ZERO_Thread::threadEntry, this);
	}catch(...) { 
		throw "[ZERO_Thread::start] thread start error"; 
	}return true; 
}
void ZERO_Thread::stop() { running_ = false; }
bool ZERO_Thread::isAlive() const { return running_; }
void ZERO_Thread::join() { 
	if (th_->joinable()) { 
		th_->join(); // 不是detach才去join 
	} 
}
void ZERO_Thread::detach() 
{
	th_->detach(); 
}
size_t ZERO_Thread::CURRENT_THREADID() { 
	// 声明为thread_local的本地变量在线程中是持续存在的,不同于普通临时变量的生命周期, 
	// 它具有static变量一样的初始化特征和生命周期,即使它不被声明为static。
	static thread_local size_t threadId = 0; 
	if(threadId == 0 ) { 
		std::stringstream ss; 
		ss << std::this_thread::get_id(); 
		threadId = strtol(ss.str().c_str(), NULL, 0);
	}return threadId;
}
void ZERO_Thread::threadEntry() { 
	running_ = true; 
	try { 
		run(); // 函数运行所在 
	}
	catch (std::exception &ex) { 
		running_ = false; 
		throw ex; 
	}catch (...) { 
		running_ = false; 
		throw;
	}running_ = false;
}
#include <iostream> 
#include <chrono> 
#include "zero_thread.h" 
using namespace std; 
class A: public ZERO_Thread
{
	public: 
	void run() { 
		while (running_) { 
			cout << "print A " << endl; 
			std::this_thread::sleep_for(std::chrono::seconds(5)); 
			}
			cout << "----- leave A " << endl; 
	}
};
class B: public ZERO_Thread 
{
	public: 
	void run() { 
		while (running_) { 
		cout << "print B " << endl; 
		std::this_thread::sleep_for(std::chrono::seconds(2)); 
		}
		cout << "----- leave B " << endl; 
		} 
};
int main() { 
	{ 
		A a; a.start(); 
		B b; b.start(); 
		std::this_thread::sleep_for(std::chrono::seconds(5));
		a.stop(); a.join(); b.stop(); b.join(); // 需要我们自己join 
	}
	cout << "Hello World!" << endl; 
	return 0; 
}

线程变量 - thread_local

ABSTRACT

In C++, thread_local is a specifier used to define thread-local data: the data is created when the thread is created and destroyed when the thread is destroyed, hence it is known as thread-local storage. thread_local is one of the storage-class specifiers, alongside extern and static. Each thread that is created gets its own copy of a variable declared thread_local. The specifier can only appear on variable declarations and definitions; it cannot be applied to function declarations or definitions, and such variables have static storage duration within their thread.

在 C++ 中,thread_local 是用来定义线程本地数据的说明符:该数据在线程创建时创建、在线程销毁时销毁,因此这种线程本地数据被称为线程本地存储(thread-local storage)。thread_local 是与 extern、static 并列的存储类说明符之一。

声明为 thread_local 的变量,每个被创建的线程都会拥有自己的一份副本。这个说明符只能用于变量的声明或定义,不能应用于函数的定义或声明,且这样的变量在其线程内具有静态存储期。

C++ Storage Type

automatic

Temporary variable,Block scope, automatic allocation and destruction

临时变量,作用域在一个所属代码块内,代码块结束释放

All local objects that are not declared static, extern, or thread_local have this storage period.

未声明为 static、extern 或 thread_local 的所有局部对象均拥有此存储期。

static

The storage of such objects is allocated at the beginning of the program and unallocated at the end of the program. There is only one instance of such an object. All objects declared in the scope of namespaces (including global namespaces), plus objects declared static or extern, have this storage period. For details on initialization of objects with this storage period, see nonlocal and static local variables

这类对象的存储在程序开始时分配,并在程序结束时解分配。这类对象只存在一个实例。所有在命名空间(包含全局命名空间)作用域声明的对象,加上声明带有 static 或 extern 的对象均拥有此存储期。有关拥有此存储期的对象的初始化的细节与非局部变量与静态局部变量一致

thread

The storage of such objects is allocated at the start of the thread and deallocated at the end of the thread. Each thread has its own instance of the object. Only objects declared thread_local have this storage duration. thread_local can appear together with static or extern, which are used to adjust linkage. Details about the initialization of objects with this storage duration are the same as for non-local and static local variables.

这类对象的存储在线程开始时分配,并在线程结束时解分配。每个线程拥有它自身的对象实例。只有声明为 thread_local 的对象拥有此存储期。thread_local 能与 static 或 extern 一同出现,它们用于调整链接。关于具有此存储期的对象的初始化的细节,与非局部变量和静态局部变量一致。

dynamic

The storage of these objects is allocated and unallocated on request using dynamic memory allocation functions (new, malloc).

这类对象的存储是通过使用动态内存分配函数(new、malloc)来按请求进行分配和解分配的.

在 C++ 中(GCC 等编译器的扩展),还可以用双下划线加 thread 关键字把变量声明为线程局部数据,如 __thread int a、__thread char s 等。这些变量可以像全局变量、文件作用域或函数作用域的变量一样访问,而自动变量本身总是线程局部的。该线程局部说明符可以与 static 说明符或 extern 说明符结合使用。

这类变量的初始化可能需要静态构造;带命名空间作用域或类作用域的 thread_local 变量会在线程启动时作为其一部分进行初始化。类的成员若要声明为 thread_local,则必须同时是 static 的,这样每个线程才会各自持有该变量的一个副本。

已初始化的线程局部变量会被放在 .tdata 段中,未初始化的则以 "COMMON" 符号的形式存储。每当创建或初始化一个新线程时,都会在线程局部存储(TLS)中为其分配一块新的存储区;每个线程都有一个指向自己线程控制块的线程指针。线程局部存储会在创建任何新线程时、加载共享对象之后,或者程序启动后第一次引用某个线程局部存储块时被创建。

Demo

#include <iostream>   // std::cout
#include <thread>     // std::thread thread_local

thread_local int n=2;

void thread_integer(int n_val) {
    n=n_val;
}

void thread_cnt() {
    std::cout<<n;
}

void thread_func(int td) {
    thread_integer(td);
    ++n;
    thread_cnt();
}

int main(){
    n=4;
    std::thread it1(thread_func,1);
    std::thread it2(thread_func,2);
    std::thread it3(thread_func,3);
    
    it1.join();
    it2.join();
    it3.join();
    std::cout<<"\nmajor thread num = "<<n<<std::endl;
}

互斥量

在 C++11 中使用互斥量需要包含 <mutex> 头文件。该头文件中还提供了其他与 mutex 协作的类和函数,使得多线程编程非常方便。

分类

  • std::mutex,独占的互斥量,不能递归使用。
  • std::timed_mutex,带超时的独占互斥量,不能递归使用。
  • std::recursive_mutex,递归互斥量,不带超时功能。
  • std::recursive_timed_mutex,带超时的递归互斥量。

独占互斥量 std::mutex

std::mutex 是 C++11 中最基本的互斥量。std::mutex 对象提供了独占所有权的特性——即不支持递归地对 std::mutex 对象上锁,而 std::recursive_mutex 则可以递归地对互斥量对象上锁。

成员函数

构造函数

std::mutex不允许拷贝构造,也不允许 move 拷贝,最初产生的 mutex 对象是处于 unlocked 状态的。

lock()

调用线程将锁住该互斥量。线程调用该函数会发生下面 3 种情况:

(1). 如果该互斥量当前没有被锁住,则调用线程将该互斥量锁住,直到调用 unlock 之前,该线程一直拥有该锁。

(2). 如果当前互斥量被其他线程锁住,则当前的调用线程被阻塞住。

(3). 如果当前互斥量被当前调用线程锁住,则会产生死锁(deadlock)。

unlock()

解锁,释放对互斥量的所有权。

try_lock()

尝试锁住互斥量,如果互斥量被其他线程占有,则当前线程也不会被阻塞。线程调用该函数也会出现下面 3 种情况:

(1). 如果当前互斥量没有被其他线程占有,则该线程锁住互斥量,直到该线程调用 unlock 释放互斥量。

(2). 如果当前互斥量被其他线程锁住,则当前调用线程返回 false,而并不会被阻塞掉。

(3). 如果当前互斥量被当前调用线程锁住,则会产生死锁(deadlock)。
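
下面给出一个示意性的小例子(counter、safe_increment、try_increment 等名字为示例虚构),演示 lock()/unlock() 与 try_lock() 的区别:

#include <iostream>
#include <thread>
#include <mutex>

std::mutex mtx;
int counter = 0;

void safe_increment() {                // lock(): 拿不到锁就阻塞等待
    for (int i = 0; i < 10000; ++i) {
        mtx.lock();
        ++counter;
        mtx.unlock();
    }
}

void try_increment() {                 // try_lock(): 拿不到锁立即返回 false, 不会阻塞
    for (int i = 0; i < 10000; ++i) {
        if (mtx.try_lock()) {
            ++counter;
            mtx.unlock();
        }
    }
}

int main() {
    std::thread t1(safe_increment);
    std::thread t2(try_increment);
    t1.join();
    t2.join();
    std::cout << "counter = " << counter << std::endl; // try_lock 失败的那些次数不会累加进来
    return 0;
}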

递归互斥量std::recursive_mutex

递归锁允许同一个线程多次获取该互斥锁,可以用来解决同一线程需要多次获取互斥量时死锁的问题

#include <iostream> 
#include <thread> 
#include <mutex> 
struct Complex { 
    std::mutex mutex; 
    int i; 
    Complex() : i(1){} 
    void mul(int x) { 
        std::lock_guard<std::mutex> lock(mutex); 
        i *= x; 
    } //lock_guard析构时会自动释放锁
    void div(int x) { 
        std::lock_guard<std::mutex> lock(mutex); 
        i /= x; 
    }
    void both(int x, int y) { 
        std::lock_guard<std::mutex> lock(mutex); // both 已经持有 mutex
        mul(x); // mul/div 内部会再次对同一个非递归 mutex 加锁, 这里会死锁
        div(y); 
    } 

};
int main(void) 
{ 
    Complex complex; 
    std::cout<<"start demo" <<std::endl;
    std::thread th(&Complex::both,&complex,32,23);
    th.join();
    std::cout<<"finish demo "<<complex.i <<std::endl;
    return 0; 
}
#include <iostream> 
#include <thread> 
#include <mutex> 
struct Complex { 
    std::recursive_mutex mutex; 
    int i; 
    Complex() : i(1){} 
    void mul(int x) { 
        std::lock_guard<std::recursive_mutex> lock(mutex); 
        i *= x; 
    } //lock_guard析构时会自动释放锁
    void div(int x) { 
        std::lock_guard<std::recursive_mutex> lock(mutex); 
        i /= x; 
    }
    void both(int x, int y) { 
        std::lock_guard<std::recursive_mutex> lock(mutex); 
        mul(x); 
        div(y); 
    } 

};
int main(void) 
{ 
    Complex complex; 
    std::cout<<"start demo" <<std::endl;
    std::thread th(&Complex::both,&complex,32,23);
    th.join();
    std::cout<<"finish demo "<<complex.i <<std::endl;
    return 0; 
}

带超时的互斥量std::timed_mutex和 std::recursive_timed_mutex

std::timed_mutex比std::mutex多了两个超时获取锁的接口:try_lock_for和try_lock_until

//1-2-timed_mutex 
#include <iostream> 
#include <thread> 
#include <mutex> 
#include <chrono> 
std::timed_mutex mutex; 
void work() { 
    std::chrono::milliseconds timeout(100); 
    while (true) { 
        if (mutex.try_lock_for(timeout)) { 
            std::cout << std::this_thread::get_id() << ": do work with the mutex" << std::endl; 
            std::chrono::milliseconds sleepDuration(250); 
            std::this_thread::sleep_for(sleepDuration); 
            mutex.unlock(); 
            std::this_thread::sleep_for(sleepDuration); 
            }else { 
                std::cout << std::this_thread::get_id() << ": do work without the mutex" << std::endl; 
                std::chrono::milliseconds sleepDuration(100); 
                std::this_thread::sleep_for(sleepDuration); 
            } 
    } 
}
int main(void) { 
    std::thread t1(work); 
    std::thread t2(work); 
    t1.join(); t2.join(); 
    std::cout << "main finish\n"; 
    return 0; 
    }

lock_guard和unique_lock的使用和区别

unique_lock 与 lock_guard 都能实现自动加锁和解锁,但是前者更加灵活,能实现更多的功能。unique_lock 可以进行临时解锁和再上锁,如在构造对象之后使用 lck.unlock() 就可以进行解锁,lck.lock() 再上锁,而不必等到析构时自动解锁。

必须使用 unique_lock 的场景:需要结合 notify + wait 的场景(条件变量的 wait 只接受 unique_lock)。

1. unique_lock 是通用互斥包装器,允许延迟锁定、锁定的有时限尝试、递归锁定、所有权转移和与条件变量一同使用。

2. unique_lock 比 lock_guard 使用更加灵活,功能更加强大。

3. 使用 unique_lock 需要付出更多的时间、性能成本。

#include <iostream> 
#include <deque> 
#include <thread> 
#include <mutex> 
#include <condition_variable> 
#include <unistd.h> 
std::deque<int> q; 
std::mutex mu; 
std::condition_variable cond; 
int count = 0; 
void fun1() { 
    while (true) { 
       std::unique_lock<std::mutex> locker(mu);
       q.push_front(count++); 
       locker.unlock(); // 这里是不是必须的? 
       cond.notify_one(); 
       sleep(1); 
       } 
}
void fun2() { 
    while (true) { 
        std::unique_lock<std::mutex> locker(mu); 
        cond.wait(locker, [](){return !q.empty();}); 
        auto data = q.back(); 
        q.pop_back(); 
        // locker.unlock(); // 这里是不是必须的? 
        std::cout << "thread2 get value form thread1: " << data << std::endl; 
        } 
}
int main() { 
    std::thread t1(fun1); 
    std::thread t2(fun2); 
    t1.join(); t2.join(); 
    return 0; 
}

condition_variable - 条件变量

ABSTRACT

A condition variable is an object able to block the calling thread until notified to resume. It uses a unique_lock (over a mutex) to lock the thread when one of its wait functions is called. The thread remains blocked until woken up by another thread that calls a notification function on the same condition_variable object. Objects of type condition_variable always use unique_lock<mutex> to wait: for an alternative that works with any kind of lockable type, see condition_variable_any.

条件变量是一个对象,它能够阻塞调用线程,直到通知它恢复。

当线程的一个等待函数被调用时,它使用unique_lock(通过互斥锁)来锁定线程。线程保持阻塞状态,直到另一个线程调用同一个condition_variable对象上的通知函数时才被唤醒。

condition_variable 类型的对象总是使用 unique_lock<mutex> 来等待;如果需要能配合任意可锁定(lockable)类型使用的替代方案,参见 condition_variable_any。

Member Functions

wait

Wait until notified (public member function )

wait_for

Wait for timeout or until notified (public member function )

wait_until

Wait until notified or time point (public member function )

notify_one

Notify one (public member function )

notify_all

Notify all (public member function )

std::condition_variable::wait

prototype

void wait (unique_lock<mutex>& lck);

template <class Predicate> void wait (unique_lock<mutex>& lck, Predicate pred);

abstract

Wait until notified

The execution of the current thread (which shall have locked lck's mutex) is blocked until notified.

At the moment of blocking the thread, the function automatically calls lck.unlock(), allowing other locked threads to continue.

Once notified (explicitly, by some other thread), the function unblocks and calls lck.lock(), leaving lck in the same state as when the function was called. Then the function returns (notice that this last mutex locking may block again the thread before returning).

Generally, the function is notified to wake up by a call in another thread either to member notify_one or to member notify_all. But certain implementations may produce spurious wake-up calls without any of these functions being called. Therefore, users of this function shall ensure their condition for resumption is met.

If pred is specified (2), the function only blocks if pred returns false, and notifications can only unblock the thread when it becomes true (which is specially useful to check against spurious wake-up calls). This version (2) behaves as if implemented as:

while (!pred()) wait(lck);

等到通知

当前线程(应该已经锁定了 lck 的互斥锁)的执行被阻塞,直到收到通知。

在阻塞线程时,函数会自动调用lck.unlock(),从而允许其他被锁定的线程继续执行。

一旦得到通知(被其他线程显式地通知),该函数将解除阻塞并调用lck.lock(),使lck处于与调用该函数时相同的状态。然后函数返回(注意,最后一个互斥锁可能会在返回之前再次阻塞线程)。

通常,该函数会因另一个线程调用成员 notify_one 或 notify_all 而被唤醒。但是某些实现可能会在没有调用这些函数的情况下产生虚假唤醒。因此,使用该函数的用户必须自行确保恢复执行所需的条件确实已经满足。

如果指定了pred参数,函数只会在pred返回false时阻塞,而通知只有在它变为true时才能解除阻塞(这对于检查虚假唤醒调用特别有用)。这个行为就像实现了:

while (!pred()) wait(lck);

parameters

lck

A unique_lock object whose mutex object is currently locked by this thread.
All concurrent calls to wait member functions of this object shall use the same underlying mutex object (as returned by lck.mutex()).

pred

A callable object or function that takes no arguments and returns a value that can be evaluated as a bool.
This is called repeatedly until it evaluates to true.


return value

None

demo

// condition_variable::wait (with predicate)
#include <iostream>           // std::cout
#include <thread>             // std::thread, std::this_thread::yield
#include <mutex>              // std::mutex, std::unique_lock
#include <condition_variable> // std::condition_variable

std::mutex mtx;
std::condition_variable cv;

int cargo = 0;
bool shipment_available() { return cargo != 0; }

void consume(int n) {
  for (int i = 0; i < n; ++i) {
    std::unique_lock<std::mutex> lck(mtx);
    cv.wait(lck, shipment_available);
    // consume:
    std::cout << cargo << '\n';
    cargo = 0;
  }
}

int main()
{
  std::thread consumer_thread(consume, 10);

  // produce 10 items when needed:
  for (int i = 0; i < 10; ++i) {
    while (shipment_available()) std::this_thread::yield();
    std::unique_lock<std::mutex> lck(mtx);
    cargo = i + 1;
    cv.notify_one();
  }

  consumer_thread.join();

  return 0;
}

std::condition_variable::wait_for

prototype

template <class Rep, class Period> cv_status wait_for (unique_lock<mutex>& lck, const chrono::duration<Rep,Period>& rel_time);

template <class Rep, class Period, class Predicate> bool wait_for (unique_lock<mutex>& lck, const chrono::duration<Rep,Period>& rel_time, Predicate pred);

abstract

The execution of the current thread (which shall have locked lck's mutex) is blocked during rel_time, or until notified (if the latter happens first).

At the moment of blocking the thread, the function automatically calls lck.unlock(), allowing other locked threads to continue.

Once notified or once rel_time has passed, the function unblocks and calls lck.lock(), leaving lck in the same state as when the function was called. Then the function returns (notice that this last mutex locking may block again the thread before returning). Generally, the function is notified to wake up by a call in another thread either to member notify_one or to member notify_all. But certain implementations may produce spurious wake-up calls without any of these functions being called. Therefore, users of this function shall ensure their condition for resumption is met. If pred is specified (2), the function only blocks if pred returns false, and notifications can only unblock the thread when it becomes true (which is especially useful to check against spurious wake-up calls). It behaves as if implemented as:

return wait_until (lck, chrono::steady_clock::now() + rel_time, std::move(pred));

当前线程的执行(应已锁定 lck 的互斥锁)在 rel_time 期间被阻塞,或者直到收到通知为止(以先发生者为准)。

在阻塞线程时,函数会自动调用lck.unlock(),从而允许其他被锁定的线程继续执行。

一旦收到通知或rel_time已通过,该函数就会解除阻塞并调用lck.lock(),使lck处于与调用该函数时相同的状态。然后函数返回(注意,最后一个互斥锁可能会在返回之前再次阻塞线程)。

通常,该函数会因另一个线程调用成员 notify_one 或 notify_all 而被唤醒。但是某些实现可能会在没有调用这些函数的情况下产生虚假唤醒。因此,使用该函数的用户必须自行确保恢复执行所需的条件确实已经满足。

如果指定了pred参数,函数只会在pred返回false时阻塞,而通知只有在它变为true时才能解除阻塞(这对于检查虚假唤醒调用特别有用)。它的行为就像实现了:

返回wait_until (lck, chrono::steady_clock::now() + rel_time, std::move(pred));

parameters

lck

A unique_lock object whose mutex object is currently locked by this thread.
All concurrent calls to wait member functions of this object shall use the same underlying mutex object (as returned by lck.mutex()).

rel_time

The maximum time span during which the thread will block waiting to be notified.
duration is an object that represents a specific relative time.

pred

A callable object or function that takes no arguments and returns a value that can be evaluated as a bool.
This is called repeatedly until it evaluates to true.


return value

The unconditional version (1) returns cv_status::timeout if the function returns because rel_time has passed, or cv_status::no_timeout otherwise.
The predicate version (2) returns pred(), regardless of whether the timeout was triggered (although it can only be false if triggered).

demo

// condition_variable::wait_for example
#include <iostream>           // std::cout
#include <thread>             // std::thread
#include <chrono>             // std::chrono::seconds
#include <mutex>              // std::mutex, std::unique_lock
#include <condition_variable> // std::condition_variable, std::cv_status

std::condition_variable cv;
int value;

void read_value() {
  std::cin >> value;
  cv.notify_one();
}

int main()
{
  std::cout << "Please, enter an integer (I'll be printing dots): \n";
  std::thread th(read_value);

  std::mutex mtx;
  std::unique_lock<std::mutex> lck(mtx);
  while (cv.wait_for(lck, std::chrono::seconds(1)) == std::cv_status::timeout) {
    std::cout << '.' << std::endl;
  }

  std::cout << "You entered: " << value << '\n';
  th.join();

  return 0;
}

std::condition_variable::wait_until

prototype

template <class Clock, class Duration> cv_status wait_until (unique_lock<mutex>& lck, const chrono::time_point<Clock,Duration>& abs_time);

template <class Clock, class Duration, class Predicate> bool wait_until (unique_lock<mutex>& lck, const chrono::time_point<Clock,Duration>& abs_time, Predicate pred);

abstract

Wait until notified or time point

The execution of the current thread (which shall have locked lck's mutex) is blocked either until notified or until abs_time, whichever happens first.

At the moment of blocking the thread, the function automatically calls lck.unlock(), allowing other locked threads to continue.

Once notified or once it is abs_time, the function unblocks and calls lck.lock(), leaving lck in the same state as when the function was called. Then the function returns (notice that this last mutex locking may block again the thread before returning).

Generally, the function is notified to wake up by a call in another thread either to member notify_one or to member notify_all. But certain implementations may produce spurious wake-up calls without any of these functions being called. Therefore, users of this function shall ensure their condition for resumption is met.

If pred is specified (2), the function only blocks if pred returns false, and notifications can only unblock the thread when it becomes true (which is especially useful to check against spurious wake-up calls). It behaves as if implemented as:

while (!pred()) if ( wait_until(lck,abs_time) == cv_status::timeout) return pred(); return true;

当前线程(应该已经锁定了 lck 的互斥锁)的执行被阻塞,直到收到通知或者到达 abs_time,以先发生者为准。

在阻塞线程时,函数会自动调用lck.unlock(),从而允许其他被锁定的线程继续执行。

一旦收到通知或者是abs_time,函数就会解除阻塞并调用lck.lock(),使lck处于与调用函数时相同的状态。然后函数返回(注意,最后一个互斥锁可能会在返回之前再次阻塞线程)。

通常,该函数会因另一个线程调用成员 notify_one 或 notify_all 而被唤醒。但是某些实现可能会在没有调用这些函数的情况下产生虚假唤醒。因此,使用该函数的用户必须自行确保恢复执行所需的条件确实已经满足。

如果指定了pred参数,函数只会在pred返回false时阻塞,而通知只有在它变为true时才能解除阻塞(这对于检查虚假唤醒调用特别有用)。它的行为就像实现了:

while (!pred()) if ( wait_until(lck,abs_time) == cv_status::timeout) return pred(); return true;

parameters

lck

A unique_lock object whose mutex object is currently locked by this thread.
All concurrent calls to wait member functions of this object shall use the same underlying mutex object (as returned by lck.mutex()).

abs_time

A point in time at which the thread will stop blocking, allowing the function to return.
time_point is an object that represents a specific absolute time.

pred

A callable object or function that takes no arguments and returns a value that can be evaluated as a bool.
This is called repeatedly until it evaluates to true.


return value

The unconditional version (1) returns cv_status::timeout if the function returns because abs_time has been reached, or cv_status::no_timeout otherwise.
The predicate version (2) returns pred(), regardless of whether the timeout was triggered (although it can only be false if triggered).

demo

// condition_variable example
#include <iostream>           // std::cout
#include <thread>             // std::thread
#include <mutex>              // std::mutex, std::unique_lock
#include <condition_variable> // std::condition_variable

std::mutex mtx;
std::condition_variable cv;
bool ready = false;

void print_id (int id) {
  std::unique_lock<std::mutex> lck(mtx);
  while (!ready) cv.wait(lck);
  // ...
  std::cout << "thread " << id << '\n';
}

void go() {
  std::unique_lock<std::mutex> lck(mtx);
  ready = true;
  cv.notify_all();
}

int main ()
{
  std::thread threads[10];
  // spawn 10 threads:
  for (int i=0; i<10; ++i)
    threads[i] = std::thread(print_id,i);

  std::cout << "10 threads ready to race...\n";
  go();                       // go!

  for (auto& th : threads) th.join();

  return 0;
}
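
上面的示例实际演示的是 wait;下面补充一个 wait_until 的示意性小例子(done、deadline 等名字为示例虚构),用带谓词的版本区分"正常唤醒"和"超时":

#include <iostream>
#include <thread>
#include <mutex>
#include <chrono>
#include <condition_variable>

std::mutex mtx;
std::condition_variable cv;
bool done = false;

int main() {
    std::thread t([] {
        std::this_thread::sleep_for(std::chrono::milliseconds(300));
        std::lock_guard<std::mutex> lk(mtx);
        done = true;
        cv.notify_one();
    });

    std::unique_lock<std::mutex> lck(mtx);
    auto deadline = std::chrono::steady_clock::now() + std::chrono::seconds(1);
    // 带谓词的版本返回 pred() 的值: true 表示条件已满足, false 表示超时
    if (cv.wait_until(lck, deadline, [] { return done; }))
        std::cout << "notified before deadline" << std::endl;
    else
        std::cout << "timeout" << std::endl;

    t.join();
    return 0;
}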

原子操作-atomic

ABSTRACT

Objects of atomic types contain a value of a particular type (T).

The main characteristic of atomic objects is that access to this contained value from different threads cannot cause data races (i.e., doing that is well-defined behavior, with accesses properly sequenced). Generally, for all other objects, the possibility of causing a data race for accessing the same object concurrently qualifies the operation as undefined behavior.

Additionally, atomic objects have the ability to synchronize access to other non-atomic objects in their threads by specifying different memory orders.

原子类型的对象包含一个特定类型(T)的值。

原子对象的主要特征是,从不同的线程访问这个所包含的值不会导致数据竞争(也就是说,这样做是定义良好的行为,各次访问会被正确地定序)。一般来说,对于所有其他(非原子)对象,并发访问同一对象若可能引发数据竞争,该操作就是未定义行为。

此外,通过指定不同的内存顺序,原子对象能够同步访问其线程中的其他非原子对象。

内存顺序模型

内存顺序模型有下面四种,默认的序列为一致顺序

宽松顺序(Relaxed ordering):原子操作带上memory_order_relaxed参数,仅保证操作是原子性的,不提供任何顺序约束。

释放获得顺序(Release-Acquire ordering):对于同一个atomic,在线程A中使用memory_order_release调用store(),在线程B中使用memory_order_acquire调用load()。这种模型保证在store()之前发生的所有读写操作(A线程)不会在store()后调用,在load()之后发生的所有读写操作(B线程)不会在load()的前调用,A线程的所有写入操作对B线程可见。

释放消费顺序(Release-Consume ordering):释放获得顺序的弱化版,对于同一个atomic,在线程A中使用memory_order_release调用store(),在线程B中使用memory_order_consume调用load()。这种模型保证在store()之前发生的所有读写操作(A线程)不会在store()后调用,在load()之后发生的依赖于该atomic的读写操作(B线程)不会在load()的前面调用,A线程对该atomic的带依赖写入操作对B线程可见。

序列一致顺序(Sequential consistency):原子操作带上memory_order_seq_cst参数,这也是C++标准库的默认顺序,也是执行代价最大的,它是memory_order_acq_rel的加强版,如果是读取就是acquire语义,如果是写入就是 release 语义,且全部读写操作顺序均一致。
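
下面是一个 Release-Acquire 顺序的示意性小例子(flag、data 等名字为示例虚构):写线程先写 data 再以 release 方式 store flag,读线程以 acquire 方式 load 到 flag 为 true 后,一定能看到 data 的写入。

#include <atomic>
#include <cassert>
#include <string>
#include <thread>

std::atomic<bool> flag(false);
std::string data;

void producer() {
    data = "hello";                               // 普通写
    flag.store(true, std::memory_order_release);  // release: 之前的写不会被重排到它之后
}

void consumer() {
    while (!flag.load(std::memory_order_acquire)) // acquire: 之后的读不会被重排到它之前
        ;                                         // 自旋等待 flag 变为 true
    assert(data == "hello");                      // release-acquire 保证此断言成立
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
    return 0;
}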

Template parameters

T

Type of the contained value.

This shall be a trivially copyable type.

包含值的类型。 这应该是一个可简单复制的类型。


Member Functions

General atomic operations

(constructor)

Construct atomic (public member function )

构造函数

operator=

Assign contained value (public member function )

为所包含的值赋值(公共成员函数)

is_lock_free

Is lock-free (public member function )

判断在 *this 的基本操作是否存在任意的锁。(公共成员函数)

store

Modify contained value (public member function )

设置*this 的值(公共成员函数)

load

Read contained value (public member function )

获取*this 的值(公共成员函数)

operator T

Access contained value (public member function )

读取并返回该存储的值(公共成员函数)

exchange

Access and modify contained value (public member function )

访问和修改所包含的值(公共成员函数)

compare_exchange_weak

Compare and exchange contained value (weak) (public member function )

比较并改变包含的值(公共成员函数)

比较原子对象所包含值的内容与预期值:

-如果为真,它会用val替换包含的值(像store一样)。

-如果为false,则用包含的值替换expected。

这个函数可能在满足真的情况下仍然返回false,所以可以在循环里使用

compare_exchange_strong

Compare and exchange contained value (strong) (public member function )

比较并改变包含的值(公共成员函数)

比较原子对象所包含值的内容与预期值:

-如果为真,它会用val替换包含的值(像store一样)。

-如果为false,则用包含的值替换expected。

Operations supported by certain specializations (integral and/or pointer)

Do not use floating point types here

fetch_add

Add to contained value (public member function )

fetch_sub

Subtract from contained value (public member function )

fetch_and

Apply bitwise AND to contained value (public member function )

fetch_or

Apply bitwise OR to contained value (public member function )

fetch_xor

Apply bitwise XOR to contained value (public member function )

operator++

Increment container value (public member function )

operator--

Decrement container value (public member function )
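
下面用一个示意性的小例子(counter、add_many 为示例虚构)演示整型特化的 fetch_add:多个线程并发累加同一个原子计数器,不需要互斥量也不会丢失计数。

#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

std::atomic<int> counter(0);

void add_many() {
    for (int i = 0; i < 100000; ++i)
        counter.fetch_add(1, std::memory_order_relaxed); // 仅做计数, relaxed 顺序即可
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i)
        threads.emplace_back(add_many);
    for (auto &t : threads)
        t.join();
    std::cout << "counter = " << counter.load() << std::endl; // 总是 400000
    return 0;
}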

std::atomic::store & load

prototype

void store (T val, memory_order sync = memory_order_seq_cst) volatile noexcept;
void store (T val, memory_order sync = memory_order_seq_cst) noexcept;

T load (memory_order sync = memory_order_seq_cst) const volatile noexcept;
T load (memory_order sync = memory_order_seq_cst) const noexcept;

abstract

Replaces the contained value with val.

The operation is atomic and follows the memory ordering specified by sync.

parameters

val

Value to copy to the contained object. T is atomic's template parameter (the type of the contained value).

sync

Synchronization mode for the operation. This shall be one of these possible values of the enum type memory_order:

There are six types of memory_orders defined by the C++ standard library, of which memory_order_acq_rel can be regarded as a combination of memory_order_acquire and memory_order_release.

typedef enum memory_order {
    memory_order_relaxed,
    memory_order_consume,
    memory_order_acquire,
    memory_order_release,
    memory_order_acq_rel,
    memory_order_seq_cst
} memory_order;

The memory order model is described in the 内存顺序模型 section above.

demo

// atomic::load/store example
#include <iostream>       // std::cout
#include <atomic>         // std::atomic, std::memory_order_relaxed
#include <thread>         // std::thread

std::atomic<int> foo (0);

void set_foo(int x) {
  foo.store(x,std::memory_order_relaxed);     // set value atomically
}

void print_foo() {
  int x;
  do {
    x = foo.load(std::memory_order_relaxed);  // get value atomically
  } while (x==0);
  std::cout << "foo: " << x << '\n';
}

int main ()
{
  std::thread first (print_foo);
  std::thread second (set_foo,10);
  first.join();
  second.join();
  return 0;
}

std::atomic::exchange

prototype

T exchange (T val, memory_order sync = memory_order_seq_cst) volatile noexcept;
T exchange (T val, memory_order sync = memory_order_seq_cst) noexcept;

abstract

Replaces the contained value by val and returns the value it had immediately before.

The entire operation is atomic (an atomic read-modify-write operation): the value is not affected by other threads between the instant its value is read (to be returned) and the moment it is modified by this function.

将包含的值替换为val并返回它之前的值。

整个操作是原子的(一次原子的读-修改-写操作):从读取旧值(将要被返回的值)到由该函数写入新值之间,该值不会被其他线程修改。

return value

The contained value before the call. T is atomic's template parameter (the type of the contained value).

demo

// atomic::exchange example
#include <iostream>       // std::cout
#include <atomic>         // std::atomic
#include <thread>         // std::thread
#include <vector>         // std::vector

std::atomic<bool> ready (false);
std::atomic<bool> winner (false);

void count1m (int id) {
  while (!ready) {}                  // wait for the ready signal
  for (int i=0; i<1000000; ++i) {}   // go!, count to 1 million
  if (!winner.exchange(true)) { std::cout << "thread #" << id << " won!\n"; }
};

int main ()
{
  std::vector<std::thread> threads;
  std::cout << "spawning 10 threads that count to 1 million...\n";
  for (int i=1; i<=10; ++i) threads.push_back(std::thread(count1m,i));
  ready = true;
  for (auto& th : threads) th.join();

  return 0;
}

std::atomic::compare_exchange_weak

prototype

bool compare_exchange_weak (T& expected, T val, memory_order sync = memory_order_seq_cst) volatile noexcept;
bool compare_exchange_weak (T& expected, T val, memory_order sync = memory_order_seq_cst) noexcept;
bool compare_exchange_weak (T& expected, T val, memory_order success, memory_order failure) volatile noexcept;
bool compare_exchange_weak (T& expected, T val, memory_order success, memory_order failure) noexcept;

abstract

Compares the contents of the atomic object's contained value with expected:

  • if true, it replaces the contained value with val (like store).
  • if false, it replaces expected with the contained value .

The function always accesses the contained value to read it, and -if the comparison is true- it then also replaces it. But the entire operation is atomic: the value cannot be modified by other threads between the instant its value is read and the moment it is replaced.

比较原子对象所包含值的内容与预期值:

-如果为真,它会用val替换包含的值(像store一样)。

-如果为false,则用包含的值替换expected。

该函数总是访问所包含的值来读取它,如果比较为真,那么它也会替换它。

这个函数可能在满足真的情况下仍然返回false,所以可以在循环里使用.

return value

true if expected compares equal to the contained value (and does not fail spuriously). false otherwise.

如果 expected 与所包含的值相等(并且没有发生虚假失败),则返回 true;否则返回 false。

demo

// atomic::compare_exchange_weak example:
#include <iostream>       // std::cout
#include <atomic>         // std::atomic
#include <thread>         // std::thread
#include <vector>         // std::vector

// a simple global linked list:
struct Node { int value; Node* next; };
std::atomic<Node*> list_head (nullptr);

void append (int val) {     // append an element to the list
  Node* oldHead = list_head;
  Node* newNode = new Node {val,oldHead};

  // what follows is equivalent to: list_head = newNode, but in a thread-safe way:
  while (!list_head.compare_exchange_weak(oldHead,newNode))
    newNode->next = oldHead;
}

int main ()
{
  // spawn 10 threads to fill the linked list:
  std::vector<std::thread> threads;
  for (int i=0; i<10; ++i) threads.push_back(std::thread(append,i));
  for (auto& th : threads) th.join();

  // print contents:
  for (Node* it = list_head; it!=nullptr; it=it->next)
    std::cout << ' ' << it->value;
  std::cout << '\n';

  // cleanup:
  Node* it; while (it=list_head) {list_head=it->next; delete it;}

  return 0;
}

Reference counting - 引用计数

abstract

The purpose of a reference counter is to count the number of pointers to an object. The object can be destroyed as soon as the reference counter reaches zero.

Implementation

#include <boost/intrusive_ptr.hpp>
#include <boost/atomic.hpp>

class X {
public:
  typedef boost::intrusive_ptr<X> pointer;
  X() : refcount_(0) {}

private:
  mutable boost::atomic<int> refcount_;
  friend void intrusive_ptr_add_ref(const X * x)
  {
    x->refcount_.fetch_add(1, boost::memory_order_relaxed);
  }
  friend void intrusive_ptr_release(const X * x)
  {
    if (x->refcount_.fetch_sub(1, boost::memory_order_release) == 1) {
      boost::atomic_thread_fence(boost::memory_order_acquire);
      delete x;
    }
  }
};

Usage

X::pointer x = new X;

Spinlock - 自旋锁

abstract

The purpose of a spin lock is to prevent multiple threads from concurrently accessing a shared data structure. In contrast to a mutex, threads will busy-wait and waste CPU cycles instead of yielding the CPU to another thread. Do not use spinlocks unless you are certain that you understand the consequences.

旋转锁的目的是防止多个线程并发地访问共享数据结构。与互斥锁相反,线程将忙等待并浪费CPU周期,而不是将CPU让给另一个线程。不要使用自旋锁,除非你确定你明白其后果,可能的使用场景是:在现有系统中的锁操作是短时锁的情况下,要求线程强制一定顺序去执行。

Implementation

#include <boost/atomic.hpp>

class spinlock {
private:
  typedef enum {Locked, Unlocked} LockState;
  boost::atomic<LockState> state_;

public:
  spinlock() : state_(Unlocked) {}

  void lock()
  {
    while (state_.exchange(Locked, boost::memory_order_acquire) == Locked) {
      /* busy-wait */
    }
  }
  void unlock()
  {
    state_.store(Unlocked, boost::memory_order_release);
  }
};

Usage

spinlock s;

s.lock(); // access data structure here

s.unlock();

Wait-free ring buffer - 无锁环形队列

abstract

A wait-free ring buffer provides a mechanism for relaying objects from one single "producer" thread to one single "consumer" thread without any locks. The operations on this data structure are "wait-free" which means that each operation finishes within a constant number of steps. This makes this data structure suitable for use in hard real-time systems or for communication with interrupt/signal handlers.

CAS无锁循环队列提供了一种机制,可以在没有任何锁的情况下将对象从一个“生产者”线程中继到一个“消费者”线程。该数据结构上的操作是“无等待”的,这意味着每个操作在固定数量的步骤内完成。这使得该数据结构适用于硬实时系统或与中断/信号处理程序通信。

Implementation

#include <iostream>       // std::cout
#include <atomic>         // std::atomic, std::memory_order_relaxed
using namespace std;
template<typename T, size_t Size>
class ringbuffer {
public:
  ringbuffer() : head_(0), tail_(0) {}

  bool push(const T & value)
  {
    size_t head = head_.load(memory_order_relaxed);
    size_t next_head = next(head);
    if (next_head == tail_.load(memory_order_acquire))
      return false;
    ring_[head] = value;
    head_.store(next_head, memory_order_release);
    return true;
  }
  bool pop(T & value)
  {
    size_t tail = tail_.load(memory_order_relaxed);
    if (tail == head_.load(memory_order_acquire))
      return false;
    value = ring_[tail];
    tail_.store(next(tail), memory_order_release);
    return true;
  }
private:
  size_t next(size_t current)
  {
    return (current + 1) % Size;
  }
  T ring_[Size];
  atomic<size_t> head_, tail_;
};

Usage

ringbuffer<int, 32> r;

// try to insert an element
if (r.push(42)) { /* succeeded */ }
else { /* buffer full */ }

// try to retrieve an element
int value;
if (r.pop(value)) { /* succeeded */ }
else { /* buffer empty */ }

Lock-free multi-producer queue- 无锁多生产者队列

abstract

The purpose of the lock-free multi-producer queue is to allow an arbitrary number of producers to enqueue objects which are retrieved and processed in FIFO order by a single consumer.

无锁多生产者队列的目的是允许任意数量的生产者排队对象,这些对象由单个消费者按照FIFO顺序检索和处理。

Implementation


#include <boost/atomic.hpp>

template<typename T>
class lockfree_queue {
public:
  struct node {
    T data;
    node * next;
  };
  void push(const T &data)
  {
    node * n = new node;
    n->data = data;
    node * stale_head = head_.load(boost::memory_order_relaxed);
    do {
      n->next = stale_head;
    } while (!head_.compare_exchange_weak(stale_head, n, boost::memory_order_release));
  }

  node * pop_all(void)
  {
    node * last = pop_all_reverse(), * first = 0;
    while(last) {
      node * tmp = last;
      last = last->next;
      tmp->next = first;
      first = tmp;
    }
    return first;
  }

  lockfree_queue() : head_(0) {}

  // alternative interface if ordering is of no importance
  node * pop_all_reverse(void)
  {
    return head_.exchange(0, boost::memory_order_consume);
  }
private:
  boost::atomic<node *> head_;
};

Usage

lockfree_queue<int> q;

// insert elements
q.push(42);
q.push(2);

// pop elements
lockfree_queue<int>::node * x = q.pop_all();
while(x) {
  lockfree_queue<int>::node * tmp = x;
  x = x->next;
  // process tmp->data, probably delete it afterwards
  delete tmp;
}

异步操作future & async & packaged_task & promise

std::future

What

std::future 期待一个返回值。从一个异步调用的角度来说,future 更像是执行函数的返回值。C++ 标准库使用 std::future 为一次性事件建模,如果一个线程需要等待特定的一次性事件,那么它可以获取一个 future 对象来代表这个事件。

在 <future> 头文件中声明了两种 future:唯一 future(std::future)和共享 future(std::shared_future),这两个是参照 std::unique_ptr 和 std::shared_ptr 设立的。前者的实例是仅有的一个指向其关联事件的实例,而后者可以有多个实例指向同一个关联事件;当事件就绪时,所有指向同一事件的 std::shared_future 实例都会变成就绪。

线程可以周期性地在这个 future 上等待一小段时间,检查 future 是否已经就绪,如果没有,该线程可以先去做另一个任务。一旦 future 就绪,future 就无法复位(无法再次使用这个 future 等待这个事件),所以 future 代表的是一次性事件。
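
针对上面提到的 std::shared_future,这里补充一个示意性的小例子(名字均为示例虚构):多个线程各自持有同一个 shared_future 的副本,等待同一个一次性事件就绪。

#include <future>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    std::promise<int> p;
    std::shared_future<int> sf = p.get_future().share(); // 由唯一 future 转成共享 future

    std::vector<std::thread> threads;
    for (int i = 0; i < 3; ++i) {
        threads.emplace_back([sf, i] {       // 每个线程持有一份 shared_future 副本
            std::cout << "thread " << i << " got " << sf.get() << std::endl; // get() 可被多次调用
        });
    }

    p.set_value(42);                          // 事件就绪, 所有等待者都会看到同一个值
    for (auto &t : threads) t.join();
    return 0;
}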

How

std::future 是一个模板,模板参数就是期待返回的类型。虽然 future 被用于线程间通信,但其本身却并不提供同步访问,必须通过互斥元或其他同步机制来保护访问。

future 使用的时机是当你不需要立刻得到一个结果的时候:你可以开启一个线程帮你去做一项任务,并期待这个任务的返回。但是 std::thread 并没有提供这样的机制,这就需要用到 std::async 和 std::future(都在 <future> 头文件中声明)。

std::async 返回一个 std::future 对象,而不是给你一个确定的值(所以当你不需要立刻使用此值的时候才需要用到这个机制)。当你需要使用这个值的时候,对 future 调用 get(),线程就会阻塞直到 future 就绪,然后返回该值。

Demo

#include <iostream> 
#include <future> 
#include <thread> 
using namespace std; 
int find_result_to_add() {
    std::this_thread::sleep_for(std::chrono::seconds(2)); // 用来测试异步延迟的影响 
    std::cout << "find_result_to_add" << std::endl; return 1 + 1; 
}
int find_result_to_add2(int a, int b) {
    // std::this_thread::sleep_for(std::chrono::seconds(5)); // 用来测试异步延迟的影响 
    return a + b; 
}
void do_other_things() { 
    std::cout << "do_other_things" << std::endl;
    std::this_thread::sleep_for(std::chrono::seconds(5));
}

int main() { 
    std::future<int> result = std::async(launch::async,find_result_to_add); 
    // std::future<decltype (find_result_to_add())> result = std::async(find_result_to_add); 
    // auto result = std::async(find_result_to_add); // 推荐的写法 
    do_other_things();
    std::cout << "result: " << result.get() << std::endl; // 延迟是否有影响? 
    // std::future<decltype (find_result_to_add2(int, int))> result2 = std::async(find_result_to_add2, 10, 20); //错误 
    // std::future<decltype (find_result_to_add2(0, 0))> result2 = std::async(find_result_to_add2, 10, 20); 
    // std::cout << "result2: " << result2.get() << std::endl; // 延迟是否有影响? 
    // std::cout << "main finish" << endl; 
    return 0; 
}

std::async

What

跟 thread 类似,async 允许你通过将额外的参数添加到调用中,来将附加参数传递给函数。如果传入的函数指针是某个类的成员函数,则还需要将类对象指针传入(直接传入、传入指针,或者用 std::ref 封装)。

默认情况下,std::async 是否启动一个新线程,或者在等待 future 时任务是否同步运行,都取决于你给的参数。这个参数为 std::launch 类型:

std::launch::deferred 表明该函数会被延迟调用,直到在 future 上调用 get() 或者 wait() 为止。

std::launch::async 表明函数会在自己创建的线程上运行。

std::launch::async | std::launch::deferred 表示由实现在两者之间自行选择,这也是不显式指定策略时的默认值。(某些实现提供的 std::launch::any、std::launch::sync 并非标准名称,分别相当于 deferred | async 和 deferred。)

如果函数被延迟运行,它可能永远都不会运行。
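
下面是一个示意性的小例子(task 为示例虚构),对比 std::launch::async 与 std::launch::deferred 两种策略:

#include <future>
#include <iostream>
#include <thread>

int task() {
    std::cout << "task runs on thread " << std::this_thread::get_id() << std::endl;
    return 42;
}

int main() {
    std::cout << "main thread " << std::this_thread::get_id() << std::endl;

    auto f1 = std::async(std::launch::async, task);     // 立即在新线程上执行
    auto f2 = std::async(std::launch::deferred, task);  // 延迟: 直到 get()/wait() 才在调用线程中执行

    std::cout << "f1 = " << f1.get() << std::endl;
    std::cout << "f2 = " << f2.get() << std::endl;      // 此时 task 才在 main 线程中运行
    return 0;
}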

std::packaged_task

What

The class template std::packaged_task wraps any Callable target (function, lambda expression, bind expression, or another function object) so that it can be invoked asynchronously. Its return value or exception thrown is stored in a shared state which can be accessed through std::future objects.

可以通过 std::packaged_task 对象获取与任务相关联的 future:调用 get_future() 方法可以获得 std::packaged_task 对象所绑定函数的返回值类型的 future。std::packaged_task 的模板参数是函数签名。

PS:例如 int add(int a, int b) 的函数签名就是 int(int, int)。

How

  1. 先创建一个 packaged_task 的对象
  2. 通过get_future 获取future
  3. 执行task
  4. 主线程运行other job
  5. future get返回值

Demo

#include <iostream> 
#include <future> 
using namespace std; 
int add(int a, int b, int c) { 
    std::cout << "call add\n"; 
    return a + b + c; 
}
void do_other_things() { 
    std::cout << "do_other_things" << std::endl; 
}
int main() { 
    std::packaged_task<int(int, int, int)> task(add); // 封装任务 
    do_other_things(); 
    std::future<int> result = task.get_future(); 
    task(1, 1, 2); //必须要让任务执行,否则在get()获取future的值时会一直阻塞 
    std::cout << "result:" << result.get() << std::endl; 
    return 0; 
}

std::promise

What

std::promise 提供了一种设置值的方式,设置之后可以通过相关联的 std::future 对象进行读取。换种说法,之前已经说过 std::future 可以读取一个异步函数的返回值了,那么 std::promise 就提供了一种手动让 future 就绪的方式。

How

线程在创建 promise 的同时会获得一个 future,然后将 promise 传递给设置它的线程,当前线程则持有 future,以便随时检查是否可以取值。

future 表现为期望:当前线程持有 future 时,期望从 future 获取到想要的结果和返回值,可以把 future 当做异步函数的返回值。而 promise 是一个承诺:当线程创建了 promise 对象后,这个 promise 对象向线程承诺它必定会被设置一个值,和 promise 相关联的 future 就是获取其返回值的手段。

Demo

#include <future> 
#include <string> 
#include <thread> 
#include <iostream> 
using namespace std; 
void print(std::promise<std::string>& p) { p.set_value("There is the result which you want."); }
void do_some_other_things() { std::cout << "Hello World" << std::endl; }
int main() { 
    std::promise<std::string> promise; 
    std::future<std::string> result = promise.get_future(); 
    std::thread t(print, std::ref(promise)); 
    do_some_other_things(); 
    std::cout << result.get() << std::endl; 
    t.join();
    return 0;
}

单次操作 - call_once

What

std::call_once 是 C++11 引入的新特性,如需使用,只需要 #include <mutex> 即可。简单来说,std::call_once 的作用是确保函数或代码片段在多线程环境下只执行一次,常用的场景如 Init() 操作或一些系统参数的获取等。

How

std::call_once用法比较简单,配合std::once_flag即可实现

Demo

#include <iostream>
#include <thread>
#include <mutex>

std::once_flag flag;

void Initialize()
{
	std::cout << "Run into Initialize.." << std::endl;
}

void Init()
{
	std::call_once(flag, Initialize);
}

int main()
{
	std::thread t1(Init);
	std::thread t2(Init);
	std::thread t3(Init);
	std::thread t4(Init);
	t1.join();
	t2.join();
	t3.join();
	t4.join();
}