[C++] Boost's lock-free offerings and a lock-free queue implemented with std::atomic

The Boost approach

Boost provides three lock-free data structures.

boost::lockfree::queue:

A lock-free queue that supports multiple producer and multiple consumer threads.

boost::lockfree::stack:

A lock-free stack that supports multiple producer and multiple consumer threads.

boost::lockfree::spsc_queue:

A lock-free queue limited to a single producer thread and a single consumer thread. Within that restriction it is more efficient than boost::lockfree::queue.

Note: internally these APIs achieve lock-freedom through lightweight atomic operations, so they are not lock-free in the strictest sense. From the material I have read, it seems that only the kfifo in the Linux kernel implements true lock-freedom, and it only works in a single-producer, single-consumer environment; a minimal sketch of that idea is shown below.
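For illustration, here is a minimal sketch of such a single-producer/single-consumer ring buffer built only from std::atomic loads and stores, with no CAS. The class name spsc_ring and the member names are my own illustrative choices, not code from the kernel or from Boost.

#include <atomic>
#include <cstddef>

// SPSC ring buffer sketch: each index is written by exactly one thread,
// so acquire/release loads and stores are sufficient, no CAS is needed.
template <typename T, size_t N>
class spsc_ring
{
public:
    bool push(const T &v)                // called only by the producer thread
    {
        size_t head = m_head.load(std::memory_order_relaxed);
        size_t next = (head + 1) % N;
        if (next == m_tail.load(std::memory_order_acquire))
            return false;                // full
        m_buf[head] = v;
        m_head.store(next, std::memory_order_release);
        return true;
    }

    bool pop(T &v)                       // called only by the consumer thread
    {
        size_t tail = m_tail.load(std::memory_order_relaxed);
        if (tail == m_head.load(std::memory_order_acquire))
            return false;                // empty
        v = m_buf[tail];
        m_tail.store((tail + 1) % N, std::memory_order_release);
        return true;
    }

private:
    T m_buf[N];
    std::atomic<size_t> m_head{0};       // next slot to write
    std::atomic<size_t> m_tail{0};       // next slot to read
};

Like most ring buffers of this kind, it keeps one slot unused to distinguish full from empty, so it holds at most N - 1 elements; a production version would typically use a power-of-two size so the modulo becomes a bit mask.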

Boost official documentation:

http://www.boost.org/doc/libs/1_60_0/doc/html/lockfree.html
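As a quick illustration of the multi-producer/multi-consumer usage described above, here is a minimal sketch I added (not from the original post); the thread counts, the run-time capacity of 1024, and names such as kItemsPerProducer are arbitrary illustrative choices.

#include <boost/lockfree/queue.hpp>
#include <atomic>
#include <cstdio>
#include <thread>

int main()
{
    boost::lockfree::queue<int> q(1024);          // run-time capacity hint
    std::atomic<long> consumed{0};
    const long kItemsPerProducer = 100000;
    const long kTotal = 2 * kItemsPerProducer;    // two producers below

    auto producer = [&]() {
        for (long i = 0; i < kItemsPerProducer; ++i)
            while (!q.push(static_cast<int>(i)))  // spin until the push succeeds
                ;
    };
    auto consumer = [&]() {
        int v;
        while (consumed.load() < kTotal)
            if (q.pop(v))
                ++consumed;
    };

    std::thread p1(producer), p2(producer), c1(consumer), c2(consumer);
    p1.join(); p2.join(); c1.join(); c2.join();
    std::printf("consumed %ld items\n", consumed.load());
    return 0;
}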

Queue capacity and automatic growth

An initial capacity can be set. Whether the total capacity grows automatically when a new element no longer fits depends on the platform: if queue can be implemented lock-free on the current operating system, it does not grow automatically; only when lock-free operation is not supported does it fall back to an implementation that grows. Memory allocation works differently across operating systems, which is why queue may end up not being lock-free on some of them.

boost::lockfree::spsc_queue<int, boost::lockfree::capacity<2>> q;
printf("boost::lockfree::spsc_queue is lock free: %s\n", q.is_lock_free() ? "true" : "false"); // true

// push() returns true on success and false on failure.
bool s1 = q.push(9); // true
bool s2 = q.push(9); // true
bool s3 = q.push(9); // false

// A compile-time capacity<> makes boost::lockfree::queue fixed sized as well.
boost::lockfree::queue<int, boost::lockfree::capacity<2>> q2;
bool s2_1 = q2.push(9); // true
bool s2_2 = q2.push(9); // true
bool s2_3 = q2.push(9); // false

// Alternatively, fixed_sized<true> with a run-time capacity passed to the constructor.
boost::lockfree::queue<int, boost::lockfree::fixed_sized<true>> q3(2);
bool s3_1 = q3.push(9); // true
bool s3_2 = q3.push(9); // true
bool s3_3 = q3.push(9); // false
bool s3_4 = q3.push(9); // false

If multiple threads are not a concern, or you implement the synchronization yourself, another option is boost::circular_buffer:

http://www.boost.org/doc/libs/1_60_0/doc/html/circular_buffer.html
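A minimal sketch of how boost::circular_buffer behaves (the capacity of 3 and the pushed values are just illustrative); note that it provides no thread safety by itself, so concurrent use needs external synchronization.

#include <boost/circular_buffer.hpp>
#include <iostream>

int main()
{
    boost::circular_buffer<int> cb(3);   // fixed capacity of 3
    cb.push_back(1);
    cb.push_back(2);
    cb.push_back(3);
    cb.push_back(4);                     // overwrites the oldest element (1)

    for (int v : cb)
        std::cout << v << ' ';           // prints: 2 3 4
    std::cout << '\n';
    return 0;
}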

The C++11 std::atomic approach

Someone online has implemented a lock-free queue using std::atomic. Its internal design follows the ideas of boost::lockfree::queue, and it can be used with multiple producer and multiple consumer threads.

A High Performance Lock Free Ring Queue

http://www.codeproject.com/Tips/754393/A-High-Performance-Lock-Free-Ring-Queue

The code below contains my modifications to the original: the latest compilers no longer support initializing a std::atomic_flag in the constructor.
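For reference, a small illustration of the initialization issue (my addition, not part of the original code): ATOMIC_FLAG_INIT is the portable C++11 way to start a flag in the clear state, and since C++20 the macro is deprecated because a default-constructed std::atomic_flag is guaranteed to be clear.

#include <atomic>

std::atomic_flag f1 = ATOMIC_FLAG_INIT; // C++11/14/17: required for a well-defined clear state
std::atomic_flag f2;                    // C++20: default construction already yields the clear state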

lfringqueue.h

#ifndef INCLUDED_UTILS_LFRINGQUEUE
#define INCLUDED_UTILS_LFRINGQUEUE

#define _ENABLE_ATOMIC_ALIGNMENT_FIX

#pragma once

#include <atomic>
#include <cstring>
#include <cstddef>

// Lock free ring queue

template < typename _TyData, long _uiCount = 100000 >
class lfringqueue
{
public:
lfringqueue( long uiCount = _uiCount ) : m_lTailIterator(0), m_lHeadIterator(0), m_uiCount( uiCount )
{
    m_queue = new _TyData*[m_uiCount];
    memset( m_queue, 0, sizeof(_TyData*) * m_uiCount );
}

~lfringqueue()
{
    if ( m_queue )
        delete [] m_queue;
}


bool enqueue( _TyData *pdata, unsigned int uiRetries = 1000 )
{
    if ( NULL == pdata )
    {
        // Null enqueues are not allowed
        return false;
    }

    unsigned int uiCurrRetries = 0;
    while ( uiCurrRetries < uiRetries )
    {
        // Release fence in order to prevent memory reordering 
        // of any read or write with following write
        std::atomic_thread_fence(std::memory_order_release);
        
        long lHeadIterator = m_lHeadIterator;

        if ( NULL == m_queue[lHeadIterator] )
        {
            long lHeadIteratorOrig = lHeadIterator;

            ++lHeadIterator;
            if ( lHeadIterator >= m_uiCount )
                    lHeadIterator = 0;

            // Don't worry if this CAS fails.  It only means some other thread has
            // already claimed this slot and inserted its item.
            if ( std::atomic_compare_exchange_strong( &m_lHeadIterator, &lHeadIteratorOrig, lHeadIterator ) )
            {
                // Stores of sizeof(void*) are always atomic (you won't set a partial pointer).
                m_queue[lHeadIteratorOrig] = pdata;
              
                if ( m_lEventSet.test_and_set( ))
                {
                    m_bHasItem.test_and_set();
                }
                return true;
            }
        }
        else
        {
            // The queue is full.  Spin a few times to check to see if an item is popped off.
            ++uiCurrRetries;
        }
    }
    return false;
}

bool dequeue( _TyData **ppdata )
{
    if ( !ppdata )
    {
        // Null dequeues are not allowed!
        return false;
    }

    bool bDone = false;
    bool bCheckQueue = true;

    while ( !bDone )
    {
        // Acquire fence in order to prevent memory reordering 
        // of any read or write with following read
        std::atomic_thread_fence(std::memory_order_acquire);
        //MemoryBarrier();
        long lTailIterator = m_lTailIterator;
        _TyData *pdata = m_queue[lTailIterator];
        //volatile _TyData *pdata = m_queue[lTailIterator];            
        if ( NULL != pdata )
        {
            bCheckQueue = true;
            long lTailIteratorOrig = lTailIterator;

            ++lTailIterator;
            if ( lTailIterator >= m_uiCount )
                    lTailIterator = 0;

            //if ( lTailIteratorOrig == atomic_cas( (volatile long*)&m_lTailIterator, lTailIterator, lTailIteratorOrig ))
            if ( std::atomic_compare_exchange_strong( &m_lTailIterator, &lTailIteratorOrig, lTailIterator ))
            {
                    // Sets of sizeof(void*) are always atomic (you won't set a partial pointer).
                    m_queue[lTailIteratorOrig] = NULL;

                    // Gets of sizeof(void*) are always atomic (you won't get a partial pointer).
                    *ppdata = (_TyData*)pdata;

                    return true;
            }
        }
        else
        {
            bDone = true;
            m_lEventSet.clear();
        }
    }
    *ppdata = NULL;
    return false;
}


long countguess() const
{
    long lCount = trycount();

    if ( 0 != lCount )
            return lCount;

    // If the queue is full then the item right before the tail item will be valid.  If it
    // is empty then the item should be set to NULL.
    long lLastInsert = m_lTailIterator - 1;
    if ( lLastInsert < 0 )
            lLastInsert = m_uiCount - 1;

    _TyData *pdata = m_queue[lLastInsert];
    if ( pdata != NULL ) 
            return m_uiCount;

    return 0;
}

long getmaxsize() const
{
    return m_uiCount;
}

bool HasItem()
{
    return m_bHasItem.test_and_set();
}

void SetItemFlagBack()
{
    m_bHasItem.clear();
}

private:
long trycount() const
{
    long lHeadIterator = m_lHeadIterator;
    long lTailIterator = m_lTailIterator;

    if ( lTailIterator > lHeadIterator )
            return m_uiCount - lTailIterator + lHeadIterator;

    // This has a bug where it returns 0 if the queue is full.
    return lHeadIterator - lTailIterator;
}

private:
std::atomic<long> m_lHeadIterator; // enqueue index
std::atomic<long> m_lTailIterator; // dequeue index
_TyData **m_queue; // array of pointers to the data
long m_uiCount; // size of the array
std::atomic_flag m_lEventSet = ATOMIC_FLAG_INIT; // flag used to decide whether the item flag should be set
std::atomic_flag m_bHasItem = ATOMIC_FLAG_INIT; // flag indicating whether an item has been enqueued
};

#endif //INCLUDED_UTILS_LFRINGQUEUE

Test:

/*
 * File:   main.cpp
 * Author: Peng
 * Created on February 22, 2014, 9:55 PM
 */

#include "lfringqueue.h"

#include <iostream>
#include <sstream>
#include <string>
#include <thread>
#include <chrono>
#include <random>
#include <stdio.h>

#include <boost/thread/thread.hpp>
#include <boost/lockfree/queue.hpp>
#include <boost/atomic.hpp>

const long NUM_DATA = 10;
const int NUM_ENQUEUE_THREAD = 1;
const int NUM_DEQUEUE_THREAD = 1;
const long NUM_ITEM = 1000000;

using namespace std;
class Data
{
public:
Data( int i = 0 ) : m_iData(i)
{
    stringstream ss;
    ss << i;
    m_szDataString = ss.str();
    //sprintf( m_szDataString, "%d", i);
}

bool operator< ( const Data & aData) const
{
    if ( m_iData < aData.m_iData)
        return true;
    else
        return false;
}

int& GetData()
{
    return m_iData;
}

private:
int m_iData;
string m_szDataString;
//char m_szDataString[MAX_DATA_SIZE];
};

Data DataArray[NUM_DATA];

constexpr long size = 0.5 * NUM_DATA;
lfringqueue < Data, 1000> LockFreeQueue;
boost::lockfree::queue<Data*> BoostQueue(1000);

// Since the searched number may not be found, the function returns a boolean.
bool BinarySearchNumberInSortedArray( Data datas[], int iStart, int iEnd, int SearchedNum, int &iFound )
{
    if ( iEnd - iStart <= 1 )
    {
        if ( datas[iStart].GetData() == SearchedNum )
        {
            iFound = iStart;
            return true;
        }
        else if ( datas[iEnd].GetData() == SearchedNum )
        {
            iFound = iEnd;
            return true;
        }
        else
            return false;
    }

int mid = 0.5 * ( iStart + iEnd );

if ( datas[mid].GetData() == SearchedNum )
{
    iFound = mid;
    return true;
}

if ( datas[mid].GetData() > SearchedNum )
{
    if ( mid - 1 >= 0)
        return BinarySearchNumberInSortedArray ( datas, iStart, mid - 1, SearchedNum, iFound);
    else
        return false;
}
else
{
    if ( mid + 1 <= iEnd )
        return BinarySearchNumberInSortedArray ( datas, mid + 1, iEnd, SearchedNum, iFound);
    else
        return false;
}

}
bool GenerateRandomNumber_FindPointerToTheNumber_EnQueue()
{
    std::uniform_int_distribution<int> dis(1, NUM_DATA);
    default_random_engine engine{};

    for ( long i = 0; i < NUM_ITEM; i++ )
    {
        int x = dis ( engine );

        int iFoundIndex;
        if ( BinarySearchNumberInSortedArray(DataArray, 0, NUM_DATA - 1, x, iFoundIndex ) )
        {
            Data* pData = &DataArray[iFoundIndex];
            LockFreeQueue.enqueue( pData );
            //BoostQueue.push( pData );
        }
    }

    return true;
}
bool Dequeue()
{
    Data *pData;

    for ( long i = 0; i < NUM_ITEM; i ++)
    {
        while (  LockFreeQueue.dequeue( &pData ) );
        //while (  BoostQueue.pop( pData ) ) ;
    }

    return true;
}

int main(int argc, char** argv)
{
    for ( int i = 1; i < NUM_DATA + 1; i++ )
    {
        Data data(i);
        DataArray[i-1] = data;
    }

std::thread PublishThread[NUM_ENQUEUE_THREAD]; 
std::thread ConsumerThread[NUM_DEQUEUE_THREAD];
std::chrono::duration<double> elapsed_seconds;

for ( int i = 0; i < NUM_ENQUEUE_THREAD;  i++ )
{
    PublishThread[i] = std::thread( GenerateRandomNumber_FindPointerToTheNumber_EnQueue ); 
}

auto start = std::chrono::high_resolution_clock::now();
for ( int i = 0; i < NUM_DEQUEUE_THREAD; i++ )
{
    ConsumerThread[i] = std::thread{ Dequeue};
}

for ( int i = 0; i < NUM_DEQUEUE_THREAD; i++ )
{
    ConsumerThread[i].join();
}   

auto end = std::chrono::high_resolution_clock::now();
elapsed_seconds = end - start;
std::cout << "Enqueue and Dequeue 1 million item in:" << elapsed_seconds.count() << std::endl;


for ( int i = 0; i < NUM_ENQUEUE_THREAD; i++ )
{
    PublishThread[i].join();
}
         
return 0;

}

————————————————
Copyright notice: this is an original article by the CSDN blogger 玄冬Wong, released under the CC 4.0 BY-SA license. Please include a link to the original source and this notice when reposting.
Original link: https://blog.csdn.net/wag2765/article/details/84793371
