Trying Out the GPU Remote Blob API in OpenVINO 2020r3

I have been doing a lot of GPU-related work lately, so it is a good time to revisit OpenVINO 2020r3.

 

In real OpenVINO applications, quite a few architectures run a pure-GPU pipeline: video decode, preprocessing, inference, post-processing and video encode are all handled by the GPU. The key to optimizing this kind of application is to keep all data in GPU memory as it flows through the pipeline, rather than shuttling intermediate results back and forth between CPU memory and GPU memory. The reason is simple: camera hardware keeps getting cheaper and more capable, so the resolution of the captured video keeps going up. 1080p is just the starting point; 4K and 8K streams are increasingly common, often in YUV422 or YUV444. If the GPU decodes the stream and the raw frames are then copied into CPU memory at full resolution, the copy threads alone will saturate the CPU and the system bus, leaving little room for anything else. So the whole inference pipeline has to keep all of its data in GPU memory (decode → preprocess → infer → post-process → encode, end to end on the GPU), with the CPU doing nothing more than controlling the GPU and parsing the inference results.

 

To keep the application portable across platforms, the actual pre- and post-processing is usually written in OpenCL. That in turn requires OpenVINO to expose an API that lets an OpenCL cl::Buffer be used directly as the input of the OpenVINO GPU inference engine.

 

A quick look at the OpenVINO documentation shows that OV has provided exactly this interface since 2020r1; it is called the GPU RemoteBlob API: https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_supported_plugins_GPU_RemoteBlob_API.html. The basic principle is to let OV's clDNN kernels and your own data-processing OpenCL kernels share the same OpenCL context; once all the cl kernels live under the same parent context they become siblings, and sharing a cl buffer between them becomes possible. :)

 

Let's try this out on OV 2020r3.

The basic plan for the verification code is:

  1. After following the OpenVINO installation guide, demo_squeezenet_download_convert_run.bat is normally run to verify the setup by classifying car.png with SqueezeNet. I borrow the code flow of classification_sample_async and the converted FP16 SqueezeNet IR model to build a reference flow: the official GPU inference path, producing a baseline result. (While porting, the dependencies on the format_reader and gflags_nothreads_static projects from the OV samples are removed and the image is loaded with OpenCV instead, which will make it easier to move to other platforms later; a sketch of such a loader follows this list.)
  2. Remove the part of the reference flow that sets the input blob and instead use the RemoteBlob API to make a shared clBuffer the IE input, copying the image data read by OpenCV into that clBuffer. Run inference again and compare the result with the reference flow. If everything works, the two outputs should be identical.
  3. Fill the clBuffer with zeros and run inference once more; if the sharing really works, this result should now be wrong (a sketch of this step follows the RemoteBlob inference code below).
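
The loadjpg() helper and the global jpg image used by the code below are not shown in the original listing. As a rough illustration of what they could look like (a hypothetical sketch, not the project's actual helper), OpenCV reads the image as BGR and resizes it to the network's input width and height:

		#include <opencv2/opencv.hpp>

		cv::Mat jpg;   // global BGR image consumed by the inference code below

		// hypothetical loader: read the file with OpenCV and resize it to the network
		// input size, so that jpg.data can be copied straight into the input blob
		void loadjpg(const char* filename, size_t width, size_t height)
		{
			cv::Mat img = cv::imread(filename, cv::IMREAD_COLOR);      // 8-bit BGR
			if (img.empty()) return;                                   // leaves jpg.data == NULL
			cv::resize(img, jpg, cv::Size((int)width, (int)height));   // match input W x H
		}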

 

Standard GPU inference code

		string FLAGS_d = "GPU";
		string FLAGS_m = "C:\\OpenVINO\\openvino_models\\ir\\public\\squeezenet1.1\\FP16\\squeezenet1.1.xml";
		string labelFileName = "C:\\OpenVINO\\openvino_models\\ir\\public\\squeezenet1.1\\FP16\\squeezenet1.1.labels";
		string FLAGS_i = "C:\\Program Files (x86)\\IntelSWTools\\openvino\\deployment_tools\\demo\\car.png";
		int FLAGS_nt = 10;

		cout << "starting" << endl;
		const Version *IEversion;
		IEversion = GetInferenceEngineVersion();
		cout << "InferenceEngine: API version " << IEversion->apiVersion.major << "." << IEversion->apiVersion.minor << endl;
		cout << "InferenceEngine: Build : " << IEversion->buildNumber << endl << endl;

		// --------------------------- 1. Load inference engine -------------------------------------
		cout << "Creating Inference Engine" << endl;

		Core ie;
		// -----------------------------------------------------------------------------------------------------

		// --------------------------- 2. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
		cout << "Loading network files" << endl;

		/** Read network model **/
		CNNNetwork network = ie.ReadNetwork(FLAGS_m);
		cout << "network layer count: " << network.layerCount() << endl;
		// -----------------------------------------------------------------------------------------------------

		// --------------------------- 3. Configure input & output ---------------------------------------------

		// --------------------------- Prepare input blobs -----------------------------------------------------
		cout << "Preparing input blobs" << endl;

		/** Taking information about all topology inputs **/
		InputsDataMap inputInfo(network.getInputsInfo());
		if (inputInfo.size() != 1) throw std::logic_error("Sample supports topologies with 1 input only");

		auto inputInfoItem = *inputInfo.begin();

		/** Specifying the precision and layout of input data provided by the user.
		 * This should be called before load of the network to the device **/
		inputInfoItem.second->setPrecision(Precision::U8);
		inputInfoItem.second->setLayout(Layout::NCHW);

		//cout << FLAGS_i << endl;
		loadjpg(FLAGS_i.c_str(), inputInfoItem.second->getTensorDesc().getDims()[3],
			inputInfoItem.second->getTensorDesc().getDims()[2]);

		if (jpg.data == NULL)
		{
			cout << "Valid input images were not found!" << endl;
		}

		/** Setting batch size to 1 **/
		network.setBatchSize(1);
		size_t batchSize = network.getBatchSize();
		cout << "Batch size is " << std::to_string(batchSize) << endl;


		// --------------------------- 4. Loading model to the device ------------------------------------------
		cout << "Loading model to the device: " << FLAGS_d << endl;
		ExecutableNetwork executable_network = ie.LoadNetwork(network, FLAGS_d);
		// -----------------------------------------------------------------------------------------------------

		// --------------------------- 5. Create infer request -------------------------------------------------
		cout << "Create infer request" << endl;
		InferRequest inferRequest_regular = executable_network.CreateInferRequest();
		// -----------------------------------------------------------------------------------------------------

		// --------------------------- 6. Prepare input: rearrange the BGR image read by OpenCV into NCHW layout --------------------------------------------------------
		for (auto & item : inputInfo) {
			Blob::Ptr inputBlob = inferRequest_regular.GetBlob(item.first);
			SizeVector dims = inputBlob->getTensorDesc().getDims();
			/** Fill input tensor with images. First b channel, then g and r channels **/
			size_t num_channels = dims[1];
			size_t image_size = dims[3] * dims[2];

			MemoryBlob::Ptr minput = as<MemoryBlob>(inputBlob);
			if (!minput) {
				cout << "We expect MemoryBlob from inferRequest_regular, but by fact we were not able to cast inputBlob to MemoryBlob" << endl;
				return 1;
			}
			// locked memory holder should be alive all time while access to its buffer happens
			auto minputHolder = minput->wmap();

			auto data = minputHolder.as<PrecisionTrait<Precision::U8>::value_type *>();
			unsigned char* pixels = (unsigned char*)(jpg.data);

			cout << "image_size = " << image_size << endl;
			/** Iterate over all pixel in image (b,g,r) **/
			for (size_t pid = 0; pid < image_size; pid++) {
				/** Iterate over all channels **/
				for (size_t ch = 0; ch < num_channels; ++ch) {
					/**          [images stride + channels stride + pixel id ] all in bytes            **/
					data[ch * image_size + pid] = pixels[pid*num_channels + ch];
				}
			}
		}


		// --------------------------- 7. Do inference ---------------------------------------------------------
#if 0
		// asynchronous inference path
		size_t numIterations = 1;
		size_t curIteration = 0;
		std::condition_variable condVar;

		inferRequest_regular.SetCompletionCallback(
			[&] {
			curIteration++;
			cout << "Completed " << curIteration << " async request execution" << endl;
			if (curIteration < numIterations) {
				/* here a user can read output containing inference results and put new input
				   to repeat async request again */
				inferRequest_regular.StartAsync();
			}
			else {
				/* continue sample execution after last Asynchronous inference request execution */
				condVar.notify_one();
			}
		});

		/* Start async request for the first time */
		cout << "Start inference (" << numIterations << " asynchronous executions)" << endl;
		inferRequest_regular.StartAsync();

		/* Wait all repetitions of the async request */
		std::mutex mutex;
		std::unique_lock<std::mutex> lock(mutex);
		condVar.wait(lock, [&] { return curIteration == numIterations; });
#else
		// synchronous inference path
		/* Start sync request */
		cout << "Start inference " << endl;
		inferRequest_regular.Infer();
#endif

		// -----------------------------------------------------------------------------------------------------

		// --------------------------- 8. Process output -------------------------------------------------------
		cout << "Processing output blobs" << endl;
		OutputsDataMap outputInfo(network.getOutputsInfo());
		if (outputInfo.size() != 1) throw std::logic_error("Sample supports topologies with 1 output only");
		Blob::Ptr outputBlob_regular = inferRequest_regular.GetBlob(outputInfo.begin()->first);

		/** Validating -nt value **/
		const size_t resultsCnt = outputBlob_regular->size() / batchSize;
		if (FLAGS_nt > resultsCnt || FLAGS_nt < 1) {
			cout << "-nt " << FLAGS_nt << " is not available for this network (-nt should be less than " \
				<< resultsCnt + 1 << " and more than 0)\n            will be used maximal value : " << resultsCnt << endl;
			FLAGS_nt = resultsCnt;
		}

		// print the top-N (top-10 here) inference results using the label file
		/** Read labels from file (e.g. AlexNet.labels) **/
		//std::string labelFileName = fileNameNoExt(FLAGS_m) + ".labels";
		std::vector<std::string> labels;

		std::ifstream inputFile;
		inputFile.open(labelFileName, std::ios::in);
		if (inputFile.is_open()) {
			std::string strLine;
			while (std::getline(inputFile, strLine)) {
				//trim(strLine);
				labels.push_back(strLine);
			}
		}

		std::vector<std::string> validImageNames = { "car.png" };
		ClassificationResult classificationResult(outputBlob_regular, validImageNames,
			batchSize, FLAGS_nt,
			labels);
		classificationResult.print();

 

GPU RemoteBlob API inference code


		// inference using remote blob
		auto inf_req_shared = executable_network.CreateInferRequest();

		// obtain the RemoteContext pointer from the executable network object
		auto cldnn_context = executable_network.GetContext();
		cl_context ctx = std::dynamic_pointer_cast<gpu::ClContext>(cldnn_context)->get();

		cl::Context _context;
		cl::Device _device;
		cl::CommandQueue _queue;
		// user-supplied context handle: create our own OpenCL context and device on top of the cldnn context
		_context = cl::Context(ctx, true);
		_device = cl::Device(_context.getInfo<CL_CONTEXT_DEVICES>()[0].get(), true);

		// create a command queue on the shared context
		cl_command_queue_properties props = CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE;
		_queue = cl::CommandQueue(_context, _device, props);

		// prepare the input data
		auto dims = network.getInputsInfo().begin()->second->getTensorDesc().getDims();
		size_t imSize = dims[1] * dims[2] * dims[3];
		cout << "imSize = " << imSize << " dims[1]=" << dims[1] << " dims[2]=" << dims[2] << " dims[3]=" << dims[3] << endl << endl;

		size_t num_channels = dims[1];
		size_t image_size = dims[3] * dims[2];

		// prepare input image data: convert the BGR data in the OpenCV Mat into NCHW layout
		/** Iterate over all pixel in image (b,g,r) **/
		unsigned char *ImageBuffer;
		ImageBuffer = (unsigned char *)malloc(imSize);
		unsigned char* pixels = (unsigned char*)(jpg.data);
		for (size_t pid = 0; pid < image_size; pid++) {
			/** Iterate over all channels **/
			for (size_t ch = 0; ch < num_channels; ++ch) {
				/**          [images stride + channels stride + pixel id ] all in bytes            **/
				ImageBuffer[ch * image_size + pid] = pixels[pid*num_channels + ch];
				//set input data to 0
				//ImageBuffer[ch * image_size + pid] = 0;
			}
		}

		// create the shared cl buffer and, on success, upload the NCHW data into it
		cl_int err;
		cl::Buffer shared_buffer(_context, CL_MEM_READ_WRITE, imSize, NULL, &err);
		{
			void *buffer = ImageBuffer;
			_queue.enqueueWriteBuffer(shared_buffer, true, 0, imSize, buffer);
		}

		Blob::Ptr shared_blob = gpu::make_shared_blob(network.getInputsInfo().begin()->second->getTensorDesc(), cldnn_context,
			shared_buffer);

		// set the shared cl buffer as the IE input blob
		inf_req_shared.SetBlob(network.getInputsInfo().begin()->first, shared_blob);

		// run inference
		inf_req_shared.Infer();
		auto outputBlob_shared = inf_req_shared.GetBlob(network.getOutputsInfo().begin()->first);

		free(ImageBuffer);

		// print the top-10 inference results and their labels
		cout << "Processing output shared blobs" << endl;
		ClassificationResult classificationResult_shared(outputBlob_shared, validImageNames,
			batchSize, FLAGS_nt,
			labels);
		classificationResult_shared.print();
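
Step 3 of the plan is only hinted at above (the commented-out line that zeroes ImageBuffer). A minimal sketch of that step, assuming the _queue, shared_buffer and inf_req_shared objects created above, zeroes the shared buffer directly and runs inference again:

		// zero out the shared cl buffer and re-run inference; the result should now be wrong
		unsigned char *zeroBuffer = (unsigned char *)calloc(imSize, 1);
		_queue.enqueueWriteBuffer(shared_buffer, true, 0, imSize, zeroBuffer);
		free(zeroBuffer);

		inf_req_shared.Infer();
		auto outputBlob_zero = inf_req_shared.GetBlob(network.getOutputsInfo().begin()->first);

		// print the top-10 results for the zero-filled input; they should no longer match car.png
		ClassificationResult classificationResult_zero(outputBlob_zero, validImageNames,
			batchSize, FLAGS_nt, labels);
		classificationResult_zero.print();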

 

Running it shows that the output blobs and the output shared blobs give identical results, so this approach basically works. :)
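
Note that the test above still fills the shared buffer with enqueueWriteBuffer from host memory, which is fine for validation but defeats the purpose in a real pure-GPU pipeline; there, the buffer would be written by your own OpenCL preprocessing kernel running in the shared context. A hypothetical sketch of what that could look like, reusing the _context, _queue, shared_buffer and imSize objects from the code above (the fill_gray kernel is just a stand-in for a real preprocessing kernel):

		// hypothetical example: a user kernel writing directly into the shared cl::Buffer
		// inside the context obtained from OpenVINO, instead of uploading from host memory
		const char* kernel_src =
			"__kernel void fill_gray(__global uchar* dst) {  \n"
			"    dst[get_global_id(0)] = (uchar)128;         \n"
			"}                                               \n";
		cl::Program program(_context, kernel_src, true);      // built in the shared context
		cl::Kernel fill_gray(program, "fill_gray");
		fill_gray.setArg(0, shared_buffer);
		_queue.enqueueNDRangeKernel(fill_gray, cl::NullRange, cl::NDRange(imSize));
		_queue.finish();
		// shared_buffer now holds GPU-produced data and can be fed to the infer request
		// through the same gpu::make_shared_blob() / SetBlob() path as before,
		// with no copy back to CPU memory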

 

Finally, here is the complete project, for reference:

https://gitee.com/tisandman/cl_ov_sharing

 

PS: The Remote Blob API currently only has a C++ interface, so if your program is written in C you will need some sort of workaround. Also, the reference code in the official API documentation is actually a set of fragments cut from openvino-master\inference-engine\tests\functional\plugin\gpu\remote_blob_tests\cldnn_remote_blob_tests.cpp in the OpenVINO source tree; some variable declarations and initializations are missing, which makes the official documentation harder to follow than it should be. The complete code is not shipped in the OpenVINO release package either, so to understand it you have to check out the full openvino project yourself, which is not very friendly to OV beginners like me.

 
