MNN Learning Notes (5): Deploying a Caffe Object Detection Model

1. Model Conversion

First, download the Caffe model from:

https://github.com/C-Aniruddh/realtime_object_recognition

Then convert the Caffe model to an MNN model:

./MNNConvert -f CAFFE --modelFile MobileNetSSD_deploy.caffemodel --prototxt MobileNetSSD_deploy.prototxt --MNNModel mobilenetssd.mnn --bizCode MNN

2. Model Deployment

First, initialization: load the model and create an interpreter, set the scheduling parameters, set the backend parameters, create a session, and configure the image preprocessing.

int MobilenetSSD::Init(const char* root_path) {
	std::cout << "start Init." << std::endl;
	// Load the converted model and create the interpreter
	std::string model_file = std::string(root_path) + "/mobilenetssd.mnn";
	mobilenetssd_interpreter_ = std::unique_ptr<MNN::Interpreter>(MNN::Interpreter::createFromFile(model_file.c_str()));
	if (nullptr == mobilenetssd_interpreter_) {
		std::cout << "load model failed." << std::endl;
		return 10000;
	}

	// Scheduling parameters: CPU backend with 4 threads
	MNN::ScheduleConfig schedule_config;
	schedule_config.type = MNN_FORWARD_CPU;
	schedule_config.numThread = 4;

	// Backend parameters: high precision, high power mode
	MNN::BackendConfig backend_config;
	backend_config.precision = MNN::BackendConfig::Precision_High;
	backend_config.power = MNN::BackendConfig::Power_High;
	schedule_config.backendConfig = &backend_config;

	mobilenetssd_sess_ = mobilenetssd_interpreter_->createSession(schedule_config);

	// Image preprocessor: RGBA source -> RGB, bicubic filtering,
	// with per-channel mean subtraction and normalization
	MNN::CV::Matrix trans;
	trans.setScale(1.0f, 1.0f);
	MNN::CV::ImageProcess::Config img_config;
	img_config.filterType = MNN::CV::BICUBIC;
	::memcpy(img_config.mean, meanVals_, sizeof(meanVals_));
	::memcpy(img_config.normal, normVals_, sizeof(normVals_));
	img_config.sourceFormat = MNN::CV::RGBA;
	img_config.destFormat = MNN::CV::RGB;
	pretreat_data_ = std::shared_ptr<MNN::CV::ImageProcess>(MNN::CV::ImageProcess::create(img_config));
	pretreat_data_->setMatrix(trans);

	// Bind the input tensor and resize the session to the expected input dims
	std::string input_name = "data";
	input_tensor_ = mobilenetssd_interpreter_->getSessionInput(mobilenetssd_sess_, input_name.c_str());
	mobilenetssd_interpreter_->resizeTensor(input_tensor_, dims_);
	mobilenetssd_interpreter_->resizeSession(mobilenetssd_sess_);

	initialized_ = true;

	std::cout << "end Init." << std::endl;
	return 0;
}
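Init() references several members (meanVals_, normVals_, dims_) that are declared elsewhere in the class. A plausible sketch of those declarations, assuming the common Caffe MobileNet-SSD preprocessing of (pixel - 127.5) / 127.5 and a 300x300 input; verify the actual values against your own model:

```cpp
#include <vector>

// Hypothetical declarations for the members used by Init() above.
// The mean/scale values follow the usual Caffe MobileNet-SSD convention,
// mapping each channel to roughly [-1, 1]; confirm against your training setup.
struct MobilenetSSDConfig {
    float meanVals_[3] = {127.5f, 127.5f, 127.5f};
    float normVals_[3] = {1.0f / 127.5f, 1.0f / 127.5f, 1.0f / 127.5f};
    std::vector<int> dims_ = {1, 3, 300, 300};  // NCHW: batch 1, 3 channels, 300x300
};
```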

Next come data input, model inference, and post-processing of the output.

The data-reading code follows reference [3], which describes in detail how to read image data with OpenCV; this is only one of several possible ways to feed the model.

int MobilenetSSD::Detect(const cv::Mat& img_src, std::vector<ObjectInfo>* objects) {
	std::cout << "start detect." << std::endl;
	if (!initialized_) {
		std::cout << "model uninitialized." << std::endl;
		return 10000;
	}
	if (img_src.empty()) {
		std::cout << "input empty." << std::endl;
		return 10001;
	}

	int width = img_src.cols;
	int height = img_src.rows;

	// Preprocess: resize to the network input size, then let ImageProcess
	// handle format conversion, mean subtraction, and normalization
	cv::Mat img_resized;
	cv::resize(img_src, img_resized, inputSize_);
	uint8_t* data_ptr = GetImage(img_resized);
	pretreat_data_->convert(data_ptr, inputSize_.width, inputSize_.height, 0, input_tensor_);

	// Inference
	mobilenetssd_interpreter_->runSession(mobilenetssd_sess_);
	std::string output_name = "detection_out";
	MNN::Tensor* output_tensor = mobilenetssd_interpreter_->getSessionOutput(mobilenetssd_sess_, output_name.c_str());

	// Copy the output tensor to host memory
	MNN::Tensor output_host(output_tensor, output_tensor->getDimensionType());
	output_tensor->copyToHostTensor(&output_host);

	// Postprocess: each row holds [label, score, xmin, ymin, xmax, ymax],
	// with box coordinates normalized to [0, 1]
	auto output_ptr = output_host.host<float>();
	for (int i = 0; i < output_host.height(); ++i) {
		int index = i * output_host.width();
		ObjectInfo object;
		object.name_ = class_names[int(output_ptr[index + 0])];
		object.score_ = output_ptr[index + 1];
		object.location_.x = output_ptr[index + 2] * width;
		object.location_.y = output_ptr[index + 3] * height;
		object.location_.width = output_ptr[index + 4] * width - object.location_.x;
		object.location_.height = output_ptr[index + 5] * height - object.location_.y;

		objects->push_back(object);
	}

	std::cout << "end detect." << std::endl;
	return 0;
}
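The post-processing loop above scales SSD's normalized corner coordinates back to pixels and converts them to an (x, y, width, height) rectangle. That arithmetic can be isolated as a small pure function (DecodeBox and Box are illustrative names, not part of the original code):

```cpp
// Sketch of the box decode from the loop above: SSD outputs corner
// coordinates normalized to [0, 1]; multiply by the original image size
// and convert (xmin, ymin, xmax, ymax) into (x, y, width, height).
struct Box { float x, y, width, height; };

Box DecodeBox(const float* row, int img_w, int img_h) {
    Box b;
    b.x = row[2] * img_w;             // xmin in pixels
    b.y = row[3] * img_h;             // ymin in pixels
    b.width  = row[4] * img_w - b.x;  // xmax - xmin
    b.height = row[5] * img_h - b.y;  // ymax - ymin
    return b;
}
```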

The full code has been uploaded to GitHub:

https://github.com/MirrorYuChen/mnn_example/tree/master/src/object/mobilenetssd

If you find it useful, please give it a star~

References:

[1] https://github.com/alibaba/MNN

[2] https://github.com/lqian/light-LPR

[3] https://blog.csdn.net/abcd740181246/article/details/90143848
