Running the YOLOv3 Object Detection Code (C++ Version)

Paper links: [YOLO] [YOLOv2/YOLO9000] [YOLOv3] [YOLOv4]
Download for the YOLO-series weights and configuration files: https://github.com/AlexeyAB/darknet
Code walkthrough: [Deep Learning based Object Detection using YOLOv3 with OpenCV ( Python / C++ )] [Chinese translation]
Code download: there is a [C++ codebase][reference blog] that can run four networks (YOLOv3, YOLOv4, YOLO-Fastest, and YOLObile); you only need to change a parameter in the main function. Highly recommended.

Environment: i7 9700 + VS2017 + OpenCV 4.4.0. (The OpenCV version should not be too old: with OpenCV 3.4.9 I could only run YOLOv3 and YOLO-Fastest; the other two failed at the network-loading step, presumably because that version does not support YOLOv4's activation function.)

Link: https://pan.baidu.com/s/1EJRMypMR0SSEGGjCpyYskg
Extraction code: 560s

A possible error: opencv440d.dll cannot be found. Go to the `bin` directory of your OpenCV installation, e.g. E:\opencv4.4.0\build\x64\vc15\bin, and copy opencv440d.dll to C:\Windows\System32 (alternatively, add that `bin` directory to the PATH environment variable).

Model visualization site: https://netron.app/

1. Network Output

YOLOv3 outputs three feature maps at different scales, which is what gives it the ability to detect small objects.

this->net.forward(outs, this->net.getUnconnectedOutLayersNames());

outs holds three matrices (one per output scale); each bounding box corresponds to a row vector of 85 elements. Taking the row marked by the red box as an example:
First 4 elements: the normalized box geometry, i.e. center x, center y, width and height (the actual pixel coordinates and box size still need to be computed from these, following the formulas in the paper)
5th element: the probability that this box contains an object (my rough understanding is that, as in YOLOv1, this is trained to predict the IoU between the predicted box and the ground truth)
Last 80 elements: the confidence/score for each of the 80 classes


2. Code Annotation Notes

main_yolo.cpp
#include "yolo.h"

YOLO::YOLO(Net_config config)
{
	cout << "Net use " << config.netname << endl;
	this->confThreshold = config.confThreshold; // confidence threshold: filters out boxes unlikely to contain an object
	this->nmsThreshold = config.nmsThreshold;   // non-maximum suppression threshold: avoids multiple boxes on the same object
	this->inpWidth = config.inpWidth;
	this->inpHeight = config.inpHeight;
	strcpy_s(this->netname, config.netname.c_str());

	//load names of classes: read the class names from coco.names
	ifstream ifs(config.classesFile.c_str());
	string line;
	while (getline(ifs, line)) this->classes.push_back(line);


	//load the network
	this->net = readNetFromDarknet(config.modelConfiguration, config.modelWeights);
	this->net.setPreferableBackend(DNN_BACKEND_OPENCV);  //Opencv
	this->net.setPreferableTarget(DNN_TARGET_CPU);     //CPU; change to DNN_TARGET_OPENCL to use the GPU (untested here, as my machine has no discrete GPU)
}

void YOLO::postprocess(Mat& frame, const vector<Mat>& outs)   // Remove the bounding boxes with low confidence using non-maxima suppression
{
	vector<int> classIds;
	vector<float> confidences;
	vector<Rect> boxes;

	for (size_t i = 0; i < outs.size(); ++i)
	{
		// Scan through all the bounding boxes output from the network and keep only the
		// ones with high confidence scores. Assign the box's class label as the class
		// with the highest score for the box.
		// Each row of outs[i] has 85 elements: the first four are center_x, center_y,
		// width and height; the fifth is the confidence that the box contains an object;
		// the remaining 80 are the per-class confidences (for the classes defined in coco.names).
		float* data = (float*)outs[i].data;
		for (int j = 0; j < outs[i].rows; ++j, data += outs[i].cols)
		{
			Mat scores = outs[i].row(j).colRange(5, outs[i].cols); // take the last 80 elements of the row, i.e. the per-class confidences
			Point classIdPoint;
			double confidence;
			// Get the value and location of the maximum score
			minMaxLoc(scores, 0, &confidence, 0, &classIdPoint);
			if (confidence > this->confThreshold)
			{
				int centerX = (int)(data[0] * frame.cols);
				int centerY = (int)(data[1] * frame.rows);
				int width = (int)(data[2] * frame.cols);
				int height = (int)(data[3] * frame.rows);
				int left = centerX - width / 2;
				int top = centerY - height / 2;

				classIds.push_back(classIdPoint.x);  // record the class
				confidences.push_back((float)confidence);  // record that class's confidence
				boxes.push_back(Rect(left, top, width, height)); // record the bounding box
			}
		}
	}

	// Perform non maximum suppression to eliminate redundant overlapping boxes with
	// lower confidences
	vector<int> indices;
	NMSBoxes(boxes, confidences, this->confThreshold, this->nmsThreshold, indices);
	for (size_t i = 0; i < indices.size(); ++i)
	{
		int idx = indices[i];
		Rect box = boxes[idx];
		this->drawPred(classIds[idx], confidences[idx], box.x, box.y,
			box.x + box.width, box.y + box.height, frame);
	}
}

void YOLO::drawPred(int classId, float conf, int left, int top, int right, int bottom, Mat& frame)   // Draw the predicted bounding box
{
	//Draw a rectangle displaying the bounding box
	rectangle(frame, Point(left, top), Point(right, bottom), Scalar(0, 0, 255), 3);

	//Get the label for the class name and its confidence
	string label = format("%.2f", conf);
	if (!this->classes.empty())
	{
		CV_Assert(classId < (int)this->classes.size());
		label = this->classes[classId] + ":" + label;
	}

	//Display the label at the top of the bounding box
	int baseLine;
	Size labelSize = getTextSize(label, FONT_HERSHEY_SIMPLEX, 0.5, 1, &baseLine);
	top = max(top, labelSize.height);
	//rectangle(frame, Point(left, top - int(1.5 * labelSize.height)), Point(left + int(1.5 * labelSize.width), top + baseLine), Scalar(0, 255, 0), FILLED);
	putText(frame, label, Point(left, top), FONT_HERSHEY_SIMPLEX, 0.75, Scalar(0, 255, 0), 1);
}

void YOLO::detect(Mat& frame)
{
	// Convert the input image frame into the network input type (a blob): pixel values are scaled from 0~255 to 0~1 and the image is resized to Size(this->inpWidth, this->inpHeight)
	Mat blob;
	blobFromImage(frame, blob, 1 / 255.0, Size(this->inpWidth, this->inpHeight), Scalar(0, 0, 0), true, false);
	// set the network input
	this->net.setInput(blob);
	vector<Mat> outs;
	//Runs the forward pass to get the output of the output layers
	this->net.forward(outs, this->net.getUnconnectedOutLayersNames());
	// drop low-confidence boxes and apply non-maximum suppression
	this->postprocess(frame, outs);

	vector<double> layersTimes;
	double freq = getTickFrequency() / 1000;

	//Put efficiency information. The function getPerfProfile returns the
	//overall time for inference(t) and the timings for each of the layers(in layersTimes)
	double t = net.getPerfProfile(layersTimes) / freq;
	string label = format("%s Inference time : %.2f ms", this->netname, t);
	putText(frame, label, Point(0, 30), FONT_HERSHEY_SIMPLEX, 1, Scalar(0, 0, 255), 2);
	//imwrite(format("%s_out.jpg", this->netname), frame);
}

int main()
{
	YOLO yolo_model(yolo_nets[0]);
	string imgpath = "person.jpg"; 
	Mat srcimg = imread(imgpath);
	yolo_model.detect(srcimg);

	static const string kWinName = "Deep learning object detection in OpenCV";
	namedWindow(kWinName, WINDOW_NORMAL);
	imshow(kWinName, srcimg);
	waitKey(0);  // wait for a key press, otherwise the window closes immediately
	destroyAllWindows();
}

3. Detection Results

The detection speed is acceptable; it would probably be faster on a GPU.
