Parallel STL traversal: for_each (TBB-backed) and OpenMP parallel



In applications such as image processing, we often need to iterate over matrices or large numbers of STL objects, so parallelizing the traversal matters a great deal for algorithm speed.
Besides OpenCV's **parallel_for_** function, which can traverse ordinary STL containers such as vector in parallel (see https://blog.csdn.net/weixin_41469272/article/details/126617752), this article introduces two other approaches: TBB and OpenMP.

OMP parallel

Installing OpenMP

sudo apt install libomp-dev

OpenMP examples

1) OMP Hello World

OpenMP is one of the simplest parallel tools to use: just put #pragma omp parallel in front of the code you want to parallelize.

      #pragma omp parallel
      {
         // every thread executes the code inside these braces
      }

Note: the C++ code below sometimes uses C-style idioms.
References: https://blog.csdn.net/ab0902cd/article/details/108770396
https://blog.csdn.net/zhongkejingwang/article/details/40350027
omp_test.cpp

#include <stdio.h>    /* printf */
#include <omp.h>

int main(){
    printf("The output:\n");
    #pragma omp parallel     /* define multi-threaded section */
    {
        printf("Hello World\n");
    }
    /* Resume serial section */
    printf("Done\n");
}
g++ omp_test.cpp -fopenmp -o omptest
./omptest

Result:

The output:
Hello World
Hello World
Hello World
Hello World
Hello World
Hello World
Hello World
Hello World
Done
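
The parallel region is executed once by every thread in the team, so the eight "Hello World" lines above mean the runtime defaulted to eight threads (typically the number of hardware threads). The count can be changed with the OMP_NUM_THREADS environment variable or a num_threads clause.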

2) OMP parallel for

To parallelize a for loop, put #pragma omp parallel for in front of the for statement:

#include <stdio.h>
#include <omp.h>

int main(int argc, char *argv[]) {
  int length = 6;
  float *buf = new float[length];
  #pragma omp parallel for num_threads(3)
  for(int i = 0; i < length; i++) {
    int tid = omp_get_thread_num();

    printf("i:%d is handled on thread %d\n", i, tid);
    buf[i] = i;
  }
  delete[] buf;   // release the buffer allocated above
}

The num_threads clause specifies how many threads to use.
Result

i:0 is handled on thread 0
i:1 is handled on thread 0
i:4 is handled on thread 2
i:5 is handled on thread 2
i:2 is handled on thread 1
i:3 is handled on thread 1

3) Official OMP example
#include <stdlib.h>   //malloc and free
#include <stdio.h>    //printf
#include <omp.h>      //OpenMP

// Very small values for this simple illustrative example
#define ARRAY_SIZE 8     //Size of arrays whose elements will be added together.
#define NUM_THREADS 4    //Number of threads to use for vector addition.

/*
 *  Classic vector addition using OpenMP default data decomposition.
 *
 *  Compile using gcc like this:
 *      gcc -o va-omp-simple VA-OMP-simple.c -fopenmp
 *
 *  Execute:
 *      ./va-omp-simple
 */
int main (int argc, char *argv[])
{
    // elements of arrays a and b will be added
    // and placed in array c
    int * a;
    int * b;
    int * c;

    int n = ARRAY_SIZE;                 // number of array elements
    int n_per_thread;                   // elements per thread
    int total_threads = NUM_THREADS;    // number of threads to use
    int i;                              // loop index

    // allocate space for the arrays
    a = (int *) malloc(sizeof(int)*n);
    b = (int *) malloc(sizeof(int)*n);
    c = (int *) malloc(sizeof(int)*n);

    // initialize arrays a and b with consecutive integer values
    // as a simple example
    for(i=0; i<n; i++) {
        a[i] = i;
    }
    for(i=0; i<n; i++) {
        b[i] = i;
    }

    // Additional work to set the number of threads.
    // We hard-code to 4 for illustration purposes only.
    omp_set_num_threads(total_threads);

    // determine how many elements each thread will work on
    n_per_thread = n/total_threads;

    // Compute the vector addition.
    // Here is where the 4 threads are specifically 'forked' to
    // execute in parallel. This is directed by the pragma and
    // thread forking is compiled into the resulting executable.
    // Here we use a 'static schedule' so each thread works on
    // a 2-element chunk of the original 8-element arrays.
    #pragma omp parallel for shared(a, b, c) private(i) schedule(static, n_per_thread)
    for(i=0; i<n; i++) {
        c[i] = a[i]+b[i];
        // Which thread am I? Show who works on what for this small example
        printf("Thread %d works on element %d\n", omp_get_thread_num(), i);
    }

    // Check for correctness (only plausible for small vector size)
    // A test we would eventually leave out
    printf("i\ta[i]\t+\tb[i]\t=\tc[i]\n");
    for(i=0; i<n; i++) {
        printf("%d\t%d\t\t%d\t\t%d\n", i, a[i], b[i], c[i]);
    }

    // clean up memory
    free(a);  free(b);  free(c);

    return 0;
}

Result:
(screenshot of the output omitted in the original post)
Here, the variables listed in shared() are shared by all threads, while each variable in private() is private to every thread.
schedule() selects how loop iterations are distributed over threads; the default is static.
The kinds differ as follows:
schedule(kind [, chunk_size])

kind:
• static: Iterations are divided into chunks of size chunk_size. Chunks are assigned to threads in the team in round-robin fashion in order of thread number.
• dynamic: Each thread executes a chunk of iterations then requests another chunk until no chunks remain to be distributed.
• guided: Each thread executes a chunk of iterations then requests another chunk until no chunks remain to be assigned. The chunk sizes start large and shrink to the indicated chunk_size as chunks are scheduled.
• auto: The decision regarding scheduling is delegated to the compiler and/or runtime system.
• runtime: The schedule and chunk size are taken from the run-sched-var ICV

static: OpenMP assigns each thread chunk_size iterations at a time; the assignment is fixed up front, in thread order, following the order of the loop.
dynamic: the assignment is decided at run time; a thread that finishes its chunk requests the next one, so finished threads are reused instead of sitting idle.
guided: the chunk size is proportional to the remaining iterations divided by the number of threads, so it starts large and shrinks toward chunk_size (default 1) as iterations are handed out.
runtime: defers the scheduling decision to run time via the run-sched-var ICV; this kind takes no chunk_size (not tested here).
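
The difference between the kinds is easiest to see when iteration costs are uneven. Below is a minimal sketch (my addition, not from the original post) in which the first two iterations are deliberately slow; with schedule(dynamic, 1), idle threads keep grabbing the remaining fast iterations, whereas schedule(static) would leave them pinned to whichever thread they were pre-assigned to.

schedule_demo.cpp

#include <omp.h>
#include <stdio.h>
#include <unistd.h>   // usleep

int main() {
    // Iterations 0 and 1 take ~200 ms, the rest ~10 ms. With
    // schedule(dynamic, 1) each thread fetches one iteration at a time,
    // so the cheap iterations migrate to idle threads; swap in
    // schedule(static) and the total run time grows noticeably.
    #pragma omp parallel for num_threads(4) schedule(dynamic, 1)
    for (int i = 0; i < 8; i++) {
        usleep(i < 2 ? 200000 : 10000);
        printf("iteration %d ran on thread %d\n", i, omp_get_thread_num());
    }
    return 0;
}

Build with g++ schedule_demo.cpp -fopenmp -o schedule_demo, as before.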

References:
https://blog.csdn.net/gengshenghong/article/details/7000979
https://blog.csdn.net/yiguagua/article/details/107053043

4) Traversing a map with OMP

About the "invalid controlling predicate" error:
OpenMP cannot parallelize a for loop whose termination condition uses "!=" or "==", because it cannot determine the trip count. That is why the code below drives an index loop and advances a shared iterator by hand (a race-free alternative is sketched after the results).

#include <iostream>
#include <map>
#include <string>
#include <mutex>
#include <cstdlib>   // atoi
#include <ctime>     // clock
#include <omp.h>

using namespace std;

int main()
{
  map<int,int> mii;
  map<int, string> mis;
  for (int i = 0; i < 50; i++) {mis[i] = to_string(i);}

  clock_t start,end;
  start = clock();

#if 1
  mutex m;
  auto it = mis.begin();
  #pragma omp parallel for num_threads(3) shared(it)
  //Error: "!=" can not be used in omp: invalid controlling predicate
  for (int i = 0; i < (int)mis.size(); i++)
  {
    int tid = omp_get_thread_num();
    m.lock();
    mii[it->first] = atoi(it->second.c_str());
    cout << "Thread " << tid << " handle " << it->first << endl;
    it++;        // the iterator is shared by all threads: advance it under the lock
    m.unlock();
  }

#else
  for (auto it : mis)
  {
    int tid = omp_get_thread_num();
    mii[it.first] = atoi(it.second.c_str());
    cout << "Thread " << tid << " handle " << it.first << endl;
  }
#endif

  end = clock();
  cout<<"time = "<<double(end-start)/CLOCKS_PER_SEC<<"s"<<endl;


  for (auto it = mii.begin(); it != mii.end(); it++)
  {
    cout << "it->first: " << it->first << " it->second: " << it->second << endl;
  }
}

Result:

With OMP: time = 0.000877s
Without parallelism: time = 0.001862s

(Caveat: clock() measures CPU time summed over all threads rather than wall-clock time, so such comparisons are rough; the TBB example below uses std::chrono::steady_clock, which is the safer choice.)
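
Since the "!=" restriction is what forces the shared-iterator workaround above, a common alternative is to snapshot the map's iterators into a random-access vector first. The sketch below is my own addition, not from the original post: OpenMP then splits a plain index range, and no mutex is needed because each thread writes only to its own slot.

#include <cstdlib>   // atoi
#include <map>
#include <string>
#include <vector>

int main() {
    std::map<int, std::string> mis;
    for (int i = 0; i < 50; i++) mis[i] = std::to_string(i);

    // Snapshot the iterators once, serially.
    std::vector<std::map<int, std::string>::iterator> its;
    for (auto it = mis.begin(); it != mis.end(); ++it) its.push_back(it);

    // Each thread reads through its own iterator copies and writes to
    // disjoint vector slots, so no lock is required.
    std::vector<int> values(its.size());
    #pragma omp parallel for num_threads(3)
    for (int i = 0; i < (int)its.size(); i++)
        values[i] = atoi(its[i]->second.c_str());

    return 0;
}

The snapshot costs one serial pass, so it only pays off when the per-element work dominates.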

Installing and using TBB

Intel's oneTBB and the g++ version constrain each other, which makes installation somewhat fiddly.
Tool versions chosen for the tests below:
TBB: v2020.0
GCC: 9.4

Installing GCC 9

sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt-get update
sudo apt-get install gcc-9 g++-9

Installing TBB

wget https://github.com/oneapi-src/oneTBB/archive/refs/tags/v2020.0.tar.gz
tar zxvf v2020.0.tar.gz
cd oneTBB-2020.0

cp build/linux.gcc.inc build/linux.gcc-9.inc
Edit lines 15-16 of build/linux.gcc-9.inc:
CPLUS ?= g++-9
CONLY ?= gcc-9 

#build
make compiler=gcc-9 stdver=c++17 -j20 DESTDIR=install tbb_build_prefix=build

#***************************** TBB examples build *****************************************
#build test code:
g++-9 -std=c++17 -I ~/Download/softpackages/oneTBB/install/include/ -L/home/niebaozhen/Download/softpackages/oneTBB/install/lib/ std_for_each.cpp -ltbb -Wl,-rpath=/home/niebaozhen/Download/softpackages/oneTBB/install/lib/

Reference: https://blog.csdn.net/weixin_32207065/article/details/112270765

Tips:
From v2021.1.1 on, oneTBB is built with CMake, but those releases do not support gcc 9/10,
while gcc >= 9 is exactly what is needed to use TBB through the parallel STL; C++17 is the recommended standard.

Build commands for v2021.1.1:

#tbb version >= v2021.1.1: cmake employed, however,
#libc++9&10 are incompatible with TBB version > 2021.xx

mkdir build install
cd build
cmake -DCMAKE_INSTALL_PREFIX=../install/ -DCMAKE_CXX_STANDARD=17 -DCMAKE_CXX_COMPILER=/usr/bin/g++-9 -DTBB_TEST=OFF ..
make -j30
make install
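
To sanity-check the installed library independently of the parallel STL, a tiny program can call TBB directly. This is a minimal sketch of my own (adjust the -I/-L/-rpath paths to your install prefix); tbb::parallel_for over an integer range is part of the public oneTBB API.

quick_tbb_check.cpp

#include <tbb/parallel_for.h>
#include <cstdio>

int main() {
    // Runs the lambda once for every index in [0, 8), spread across
    // TBB's worker threads.
    tbb::parallel_for(0, 8, [](int i) {
        std::printf("index %d\n", i);
    });
    return 0;
}

Build with g++-9 -std=c++17 quick_tbb_check.cpp -ltbb plus the include/library paths shown above.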

Using TBB

std_for_each.cpp

#include <iostream>
#include <unistd.h>
#include <map>
#include <algorithm>
#include <chrono>

// NOTE: with __MUTEX__ left at 0, the concurrent mii[i] writes below are a
// data race under execution::par; set it to 1 for a correct parallel run.
#define __MUTEX__ 0
#if __MUTEX__
#include <mutex>
#endif

#if __GNUC__ >= 9
#include <execution>
#endif

using namespace std; 


int main ()
{
  //cout << "gnu version: " << __GNUC__ << endl;
  int a[] = {0,1,3,4,5,6,7,8,9};
  
  map<int, int> mii;
  #if __MUTEX__
  mutex m;
  #endif  
 
  auto tt1 = chrono::steady_clock::now();

  #if __GNUC__ >= 9
  for_each(execution::par, begin(a), std::end(a), [&](int i) {
  #else
  for_each(begin(a), std::end(a), [&](int i) {
  #endif
    #if __MUTEX__
    lock_guard<mutex> guard(m);
    #endif
    mii[i] = i*2+1;
    //sleep(1);
    //cout << "Sleep one second" << endl;
  }); 

  auto tt2 = chrono::steady_clock::now();
  auto dt = chrono::duration_cast<chrono::duration<double>>(tt2 - tt1);

  cout << "time = " << dt.count() << "s" <<endl;

  for(auto it = mii.begin(); it != mii.end(); it++) {
    cout << "mii[" << it->first << "] = " << it->second << endl;
  }   
}

build:

g++ std_for_each.cpp    # with a default g++ older than 9, this builds the serial fallback
or:
g++-9 -std=c++17  -I ~/Download/softpackages/oneTBB/install/include/ -L/home/niebaozhen/Download/softpackages/oneTBB/install/lib/ std_for_each.cpp -ltbb -Wl,-rpath=/home/niebaozhen/Download/softpackages/oneTBB/install/lib/

Result:
(screenshot of the output omitted in the original post)
As the timings show, when the per-element work is light, parallelism actually costs more time; when each element carries more work (sleep is used here to simulate it), the parallel version shows its advantage.
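
To reproduce the heavy-work case with this program, uncomment the sleep(1) line (and the cout below it) and rebuild with the TBB command line: the serial loop then takes about one second per element, while under execution::par the total drops to roughly the element count divided by the number of worker threads.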

Testing OpenCV parallel_for_ on a map

#include <iostream>
#include <opencv2/core.hpp>
#include <mutex>
#include <map>
#include <string>
#include <ctime>

using namespace std;
using namespace cv;

map<int, string> mis;
map<int, string>::iterator it;
mutex m;

void fun (const Range& range)
{
  //cout << "test*******" << endl;
  for (int i = range.start; i < range.end; i++) {
    m.lock();
    cout << "it->first: " << it->first << " it->second: " << it->second << endl;
    it++;   // the iterator is shared by all threads: advance it under the lock
    m.unlock();
  }
}

void parallel()
{
  parallel_for_(cv::Range(0, (int)mis.size()), &fun);
}

void oneline()
{
  for (auto it : mis) {
    m.lock();
    cout << "it.first: " << it.first << " it.second: " << it.second << endl;
    m.unlock();
  }
}

int main ()
{
  for (int i = 0; i < 50; i++) {mis[i] = to_string(i);}
  it = mis.begin();

  clock_t start,end;
  start = clock();

#if 0
  parallel();
#else
  oneline();
#endif
  end = clock();
  cout<<"time = "<<double(end-start)/CLOCKS_PER_SEC<<"s"<<endl;

  return 0;
}

build:

g++ parallel_for_.cpp `pkg-config --libs --cflags opencv`

Result:
Parallel: time = 0.002147s
Serial: time = 0.000168s

Conclusion: the shared iterator is the likely culprit for the slowdown: every access is serialized through the mutex, so parallelism only adds scheduling overhead. A sketch without the shared iterator follows.
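
Here is a sketch of that variant (my addition, not from the original post): copy the map into a random-access vector once, and let each cv::Range index into it directly. The cout is still serialized by the mutex, but the iterator contention is gone.

#include <iostream>
#include <map>
#include <mutex>
#include <string>
#include <utility>
#include <vector>
#include <opencv2/core.hpp>

int main() {
    std::map<int, std::string> mis;
    for (int i = 0; i < 50; i++) mis[i] = std::to_string(i);

    // One serial copy into contiguous storage; each range then indexes
    // into it directly instead of chasing a shared iterator.
    std::vector<std::pair<int, std::string>> v(mis.begin(), mis.end());

    std::mutex m;
    cv::parallel_for_(cv::Range(0, (int)v.size()), [&](const cv::Range& r) {
        for (int i = r.start; i < r.end; i++) {
            std::lock_guard<std::mutex> g(m);   // only the printing is serialized
            std::cout << v[i].first << " -> " << v[i].second << std::endl;
        }
    });
    return 0;
}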

OpenCV Mat.forEach traversal test

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/pcl_base.h>

#include <opencv2/imgproc.hpp>
#include <opencv2/opencv.hpp>

#include <omp.h>
#include <mutex>
#include <cmath>   // isnan
#include <ctime>   // clock

using namespace pcl;
using namespace std;

// With __NOTHING__ defined, every loop body below compiles away, so the
// timings measure pure traversal overhead.
#define __NOTHING__
using namespace cv;

PointCloud<PointXYZI>::Ptr dcp(new PointCloud<PointXYZI>());

int main()
{
  Mat img = imread("img.png", IMREAD_ANYDEPTH);

  clock_t start,end;

  start = clock();
  pcl::PointXYZI point;
  // collapse(2) splits both loops across one thread team; a second nested
  // "parallel for" would spawn nested teams instead, which is rarely wanted.
  #pragma omp parallel for collapse(2) private(point)
  for (int i = 0; i < img.rows; i++) {
    for (int j = 0; j < img.cols; j++) {
    #ifndef __NOTHING__
      float val = img.at<uchar>(i, j) / 5000.0f;   // float division, not integer

      if (val <= 0 || isnan(val)) {/* cout <<"val is unavailable"*/; continue; }

      point.x = (320 - j) / 500.0f / val * 10;
      point.y = (240 - i) / 500.0f / val * 10;
      point.z = 10;
      point.intensity = val;

      dcp->push_back(point);   // NB: would need a lock if this body were enabled
    #endif
    }
  }
  end = clock();
  cout<<"0000000000time = "<<double(end-start)/CLOCKS_PER_SEC<<"s"<<endl;

  dcp->clear();
  start = clock();
  for (int i = 0; i < img.rows; i++) {
    for (int j = 0; j < img.cols; j++) {
    #ifndef __NOTHING__
      float val = img.at<uchar>(i, j) / 5000.0f;
      if (val <= 0 || isnan(val)) {/* cout <<"val is unavailable"*/; continue; }

      pcl::PointXYZI point;
      point.x = (320 - j) / 500.0f / val * 10;
      point.y = (240 - i) / 500.0f / val * 10;
      point.z = 10;
      point.intensity = val;

      dcp->push_back(point);
    #endif
    }
  }
  end = clock();
  cout<<"1111111111time = "<<double(end-start)/CLOCKS_PER_SEC<<"s"<<endl;

  dcp->clear();
  start = clock();
  mutex m;
  // NB: forEach<float> assumes the Mat actually stores floats (CV_32F).
  img.forEach<float>([&m] (float &val, const int *position) {
    #ifndef __NOTHING__
    pcl::PointXYZI point;
    //return in forEach skips this element and continues with the next one
    val /= 5000;
    if (val <= 0 || isnan(val)) {/* cout <<"val is unavailable"*/; return; }

    // position[0] is the row index, position[1] the column index
    point.x = (320 - position[1]) / 500.0f / val * 10;
    point.y = (240 - position[0]) / 500.0f / val * 10;
    point.z = 10;
    point.intensity = val;

    m.lock();
    dcp->push_back(point);
    m.unlock();
    #endif
  });

  end = clock();
  cout<<"222222222time = "<<double(end-start)/CLOCKS_PER_SEC<<"s"<<endl;


  //int n = dcp->points.size();
  //cout << "points num: " << n << endl;
  //for (int i = 0; i < n; i++) {
  //  cout << dcp->points[i].x << " " << dcp->points[i].y << " " << dcp->points[i].z << endl;
  //}
}

build:
g++ pcl_new_test.cpp `pkg-config --cflags pcl_ros` `pkg-config --libs --cflags opencv`
result:

0000000000time = 0.000607s
1111111111time = 0.000622s
222222222time = 0.001397s
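
Note that because __NOTHING__ is defined, all three loop bodies compile away, so these timings compare bare traversal and dispatch overhead only; forEach's extra time is presumably its internal parallel dispatch. clock() summing CPU time across threads also penalizes the parallel variants.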
