Object Tracking Practice Notes 5

Compiling YOLOv3 with OpenCV on Ubuntu

    I plan to add some detection and recognition algorithms to my tracker, and the obvious first choice is YOLOv3. My reference is Deep Learning based Object Detection using YOLOv3 with OpenCV ( Python / C++ ). Starting from the C++ demo implementation of YOLOv3 on OpenCV 4.0.0 / OpenCV 3.4.2, I made a few small changes and added a CMakeLists.txt file so the project can be built with CMake.

1. Install OpenCV 3.4.2


    OpenCV supports YOLOv3 only from version 3.4.2 onward, so I recommend installing OpenCV 3.4.2; see Notes 2 of this series for the installation steps. Running YOLOv3 through OpenCV is also faster than Darknet under the same configuration.
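    To confirm which OpenCV version your toolchain actually picks up, a small check program like the sketch below can help (the CV_VERSION macros come from OpenCV's core headers; the file name and the exact check are just my choice):

// check_version.cpp: print the OpenCV version and warn if it is too old for YOLOv3
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    std::cout << "OpenCV version: " << CV_VERSION << std::endl;
    // The DNN module can load YOLOv3 only from 3.4.2 onward
    bool tooOld = (CV_VERSION_MAJOR < 3) ||
                  (CV_VERSION_MAJOR == 3 && CV_VERSION_MINOR < 4) ||
                  (CV_VERSION_MAJOR == 3 && CV_VERSION_MINOR == 4 && CV_VERSION_REVISION < 2);
    if (tooOld)
    {
        std::cout << "This OpenCV build is too old for YOLOv3." << std::endl;
        return 1;
    }
    return 0;
}

    Compile it with, e.g., g++ check_version.cpp -o check_version `pkg-config --cflags --libs opencv` and run ./check_version.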

2. Download the program and model files


    The program files are on GitHub. The link points to a folder; to download a single folder from GitHub you can use DownGit.
    The model files can be downloaded with the following commands:

sudo chmod a+x getModels.sh
./getModels.sh

Or download them directly with:

wget https://pjreddie.com/media/files/yolov3.weights
wget https://github.com/pjreddie/darknet/blob/master/cfg/yolov3.cfg?raw=true -O ./yolov3.cfg
wget https://github.com/pjreddie/darknet/blob/master/data/coco.names?raw=true -O ./coco.names

Important: put the downloaded yolov3.weights file (the pre-trained network weights), the yolov3.cfg file (the network configuration), and the coco.names file (which contains the 80 class names used in the COCO dataset) in the same folder as the OpenCV program and the test images and videos (my folder is named yolo), to avoid path errors.
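If you are unsure whether the paths are right, a small sanity check like the sketch below can catch a missing file before readNetFromDarknet fails (the file names match the ones used in the program below; everything else is illustrative):

// check_files.cpp: verify that the model files sit in the current working directory
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    const std::vector<std::string> required = {"yolov3.cfg", "yolov3.weights", "coco.names"};
    bool ok = true;
    for (const std::string& name : required)
    {
        std::ifstream f(name.c_str());
        if (!f.is_open())
        {
            std::cout << "Missing file: " << name << std::endl;
            ok = false;
        }
    }
    std::cout << (ok ? "All model files found." : "Fix the paths before running the detector.") << std::endl;
    return ok ? 0 : 1;
}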

3. Modify the cpp program


Mainly following that blogger's C++ demo of YOLOv3 on OpenCV 4.0.0 / OpenCV 3.4.2, replace the contents of the downloaded object_detection_yolo.cpp with the following:

//YOLOv3 on OpenCV

#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <opencv2/dnn/shape_utils.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <iostream>
#include <fstream>
 
using namespace cv;
using namespace dnn;
using namespace std; 

// Remove the bounding boxes with low confidence using non-maxima suppression
void postprocess(cv::Mat& frame, std::vector<cv::Mat>& outs);
 
// Get the names of the output layers
std::vector<cv::String> getOutputsNames(const cv::dnn::Net& net);
 
// Draw the predicted bounding box
void drawPred(int classId, float conf, int left, int top, int right, int bottom, cv::Mat& frame);
 
// Initialize the parameters
float confThreshold = 0.5; // Confidence threshold
float nmsThreshold = 0.4;  // Non-maximum suppression threshold
int inpWidth = 416;        // Width of network's input image
int inpHeight = 416;       // Height of network's input image
 
static const char* about =
"This sample uses You only look once (YOLO)-Detector (https://arxiv.org/abs/1612.08242) to detect objects on camera/video/image.\n"
"Models can be downloaded here: https://pjreddie.com/darknet/yolo/\n"
"Default network is 416x416.\n"
"Class names can be downloaded here: https://github.com/pjreddie/darknet/tree/master/data\n";
 
static const char* params =
"{ help         | false              | ./yolo_opencv -source=../data/3.avi }" 
"{ source       | bird.jpg           | image or video for detection        }" 
"{ device       | 0                  | video for detection                 }"
"{ save         | false              | save result                         }";
 //Changing false after help to true prints the help message
 //Changing bird.jpg after source to run.mp4 runs detection on a video; leaving source empty opens the computer's camera
 //The 0 after device selects the built-in camera or another device such as a plugged-in USB camera
 //Changing false after save to true saves the detection result
 
std::vector<std::string> classes;
 
int main(int argc, char** argv)
{
    cv::CommandLineParser parser(argc, argv, params);
    // Load names of classes
    std::string classesFile = "coco.names"; // I keep the three files and the .cpp program in the same folder, so a plain file name is enough
    std::ifstream classNamesFile(classesFile.c_str());
    if (classNamesFile.is_open())
    {
        std::string className = "";
        while (std::getline(classNamesFile, className))
            classes.push_back(className);
    }
    else{
        std::cout<<"can not open classNamesFile"<<std::endl;
    }
     // again, the files sit in the same folder as the .cpp program, so plain file names work
    // Give the configuration and weight files for the model
    cv::String modelConfiguration = "yolov3.cfg";
    cv::String modelWeights = "yolov3.weights";
 
    // Load the network
    cv::dnn::Net net = cv::dnn::readNetFromDarknet(modelConfiguration, modelWeights);
    std::cout<<"Read Darknet..."<<std::endl;
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);
   
  
    cv::String  outputFile = "yolo_out_cpp.avi";  // name of the saved detection video
    std::string str;
    cv::VideoCapture cap;
    double frame_count = 0;  // stays 0 when reading from a camera

    if (parser.get<bool>("help"))
    {
        std::cout << about << std::endl;
        parser.printMessage();
        return 0;
    }

    if (parser.get<cv::String>("source").empty())
    {
        int cameraDevice = parser.get<int>("device");
        cap = cv::VideoCapture(cameraDevice);
        if (!cap.isOpened())
        {
            std::cout << "Couldn't find camera: " << cameraDevice << std::endl;
            return -1;
        }
    }
    else
    {
        str=parser.get<cv::String>("source");
        cap.open(str);
        if (!cap.isOpened())
        {
            std::cout << "Couldn't open image or video: " << parser.get<cv::String>("video") << std::endl;
            return -1;
        }
        frame_count=cap.get(cv::CAP_PROP_FRAME_COUNT);
        std::cout<<"frame_count:"<<frame_count<<std::endl;
    }

    // Get the video writer initialized to save the output video
    cv::VideoWriter video;
    if (parser.get<bool>("save"))
    {
        if(frame_count>1)
        {
            video.open(outputFile, cv::VideoWriter::fourcc('M','J','P','G'), 28, cv::Size(cap.get(cv::CAP_PROP_FRAME_WIDTH),cap.get(cv::CAP_PROP_FRAME_HEIGHT)));
        }
        else
        {
            str.replace(str.end()-4, str.end(), "_yolo_out.jpg");
            outputFile = str;
        }
        
    }


// Process frames.
std::cout <<"Processing..."<<std::endl;
cv::Mat frame;
while (1)
{
    // get frame from the video
    cap >> frame;
 
    // Stop the program if reached end of video
    if (frame.empty()) {
        std::cout << "Done processing !!!" << std::endl;
        if(parser.get<bool>("save"))
        std::cout << "Output file is stored as " << outputFile << std::endl;
        std::cout << "Please press Esc to quit!" << std::endl;
        // wait for Esc, then leave the loop; continuing with an empty frame would make imshow throw
        while(cv::waitKey(0)!=27) {}
        break;
    }
 
    //show frame
    cv::imshow("frame",frame);
 
    // Create a 4D blob from a frame.
    cv::Mat blob;
    cv::dnn::blobFromImage(frame, blob, 1/255.0, cv::Size(inpWidth, inpHeight), cv::Scalar(0,0,0), true, false);
     
    //Sets the input to the network
    net.setInput(blob);
     
    // Runs the forward pass to get output of the output layers
    std::vector<cv::Mat> outs;
    net.forward(outs, getOutputsNames(net));
     
    // Remove the bounding boxes with low confidence
    postprocess(frame, outs);
     
    // Put efficiency information. The function getPerfProfile returns the 
    // overall time for inference(t) and the timings for each of the layers(in layersTimes)
    std::vector<double> layersTimes;
    double freq = cv::getTickFrequency() / 1000;
    double t = net.getPerfProfile(layersTimes) / freq;
    std::string label = cv::format("Inference time for a frame : %.2f ms", t);
    cv::putText(frame, label, cv::Point(0, 15), cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(0, 0, 255));
     
    // Write the frame with the detection boxes
    cv::Mat detectedFrame;
    frame.convertTo(detectedFrame, CV_8U);
    //show detectedFrame
    cv::imshow("detectedFrame",detectedFrame);
    //save result
    if(parser.get<bool>("save"))
    {
        if(frame_count>1)
        {
            video.write(detectedFrame);
           
        }    
        else 
        {
            cv::imwrite(outputFile, detectedFrame);
        } 
    }
 
    if(cv::waitKey(10)==27)
    {
        break;
    }
  
}
    std::cout<<"Esc..."<<std::endl;
    return 0;
}
 
 
// Get the names of the output layers
std::vector<cv::String> getOutputsNames(const cv::dnn::Net& net)
{
    static std::vector<cv::String> names;
    if (names.empty())
    {
        //Get the indices of the output layers, i.e. the layers with unconnected outputs
        std::vector<int> outLayers = net.getUnconnectedOutLayers();
         
        //get the names of all the layers in the network
        std::vector<cv::String> layersNames = net.getLayerNames();
         
        // Get the names of the output layers in names
        names.resize(outLayers.size());
        for (size_t i = 0; i < outLayers.size(); ++i)
        names[i] = layersNames[outLayers[i] - 1];
    }
    return names;
}
 
 
// Remove the bounding boxes with low confidence using non-maxima suppression
void postprocess(cv::Mat& frame, std::vector<cv::Mat>& outs)
{
    std::vector<int> classIds;
    std::vector<float> confidences;
    std::vector<cv::Rect> boxes;
     
    for (size_t i = 0; i < outs.size(); ++i)
    {
        // Scan through all the bounding boxes output from the network and keep only the
        // ones with high confidence scores. Assign the box's class label as the class
        // with the highest score for the box.
        float* data = (float*)outs[i].data;
        for (int j = 0; j < outs[i].rows; ++j, data += outs[i].cols)
        {
            cv::Mat scores = outs[i].row(j).colRange(5, outs[i].cols);
            cv::Point classIdPoint;
            double confidence;
            // Get the value and location of the maximum score
            cv::minMaxLoc(scores, 0, &confidence, 0, &classIdPoint);
 
            if (confidence > confThreshold)
            {
                int centerX = (int)(data[0] * frame.cols);
                int centerY = (int)(data[1] * frame.rows);
                int width = (int)(data[2] * frame.cols);
                int height = (int)(data[3] * frame.rows);
                int left = centerX - width / 2;
                int top = centerY - height / 2;
                 
                classIds.push_back(classIdPoint.x);
                confidences.push_back((float)confidence);
                boxes.push_back(cv::Rect(left, top, width, height));
            }
        }
    }
 
    
    // Perform non maximum suppression to eliminate redundant overlapping boxes with
    // lower confidences
    std::vector<int> indices;
    cv::dnn::NMSBoxes(boxes, confidences, confThreshold, nmsThreshold, indices);
    for (size_t i = 0; i < indices.size(); ++i)
    {
        int idx = indices[i];
        cv::Rect box = boxes[idx];
        drawPred(classIds[idx], confidences[idx], box.x, box.y,
                 box.x + box.width, box.y + box.height, frame);
    }
}
 
// Draw the predicted bounding box
void drawPred(int classId, float conf, int left, int top, int right, int bottom, cv::Mat& frame)
{
    //Draw a rectangle displaying the bounding box
    cv::rectangle(frame, cv::Point(left, top), cv::Point(right, bottom), cv::Scalar(0, 0, 255));
     
    //Get the label for the class name and its confidence
    std::string label = cv::format("%.2f", conf);
    if (!classes.empty())
    {
        CV_Assert(classId < (int)classes.size());
        label = classes[classId] + ":" + label;
    }
    else
    {
        std::cout<<"classes is empty..."<<std::endl;
    }
     
    //Display the label at the top of the bounding box
    int baseLine;
    cv::Size labelSize = cv::getTextSize(label, cv::FONT_HERSHEY_SIMPLEX, 0.5, 1, &baseLine);
    top = std::max(top, labelSize.height);
    cv::putText(frame, label, cv::Point(left, top), cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(255,255,255));
}

4. Compile


To make the build easier I wrote a CMakeLists.txt file, following Contrib: CMake example. Its contents are:

cmake_minimum_required(VERSION 2.8)
project(object_detection_yolo)

set(FILES_SRC
    object_detection_yolo.cpp
)

find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
add_executable(object_detection_yolo ${FILES_SRC})
target_link_libraries(object_detection_yolo ${OpenCV_LIBS})

After that you can compile.
Method 1: create a build folder and let CMake generate the Makefile automatically:

cd yolo
mkdir build
cd build
cmake ..
make
cp object_detection_yolo ../
cd ..
./object_detection_yolo

If you want to change the program again, delete the build folder and the copied object_detection_yolo binary and repeat the steps above.

Method 2: write a Makefile yourself directly in the yolo folder, with the following contents:

USE_MULTI_THREAD=1

LDFLAGS= `pkg-config --libs opencv` -lstdc++ -lopentracker
CXXFLAGS = -Wall -std=c++0x `pkg-config --cflags opencv`

ifeq ($(USE_MULTI_THREAD), 1)
CXXFLAGS+= -DUSE_MULTI_THREAD
LDFLAGS+= -pthread
endif

all: object_detection_yolo.bin

object_detection_yolo.bin: object_detection_yolo.o
	$(CC) -o $@ $^ $(LDFLAGS) 

%.o: %.c
	$(CC) -c -o $@ $< $(CFLAGS)

%.o: %.cc
	$(CXX) -c -o $@ $< $(CXXFLAGS)

%.o: %.cpp
	$(CXX) -c -o $@ $< $(CXXFLAGS)

.PHONY: clean
clean:
	rm -rf ./.d *.o *.bin *.so *.a */*.o */*.bin

Change the targets to whatever file types you need to generate.

cd yolo
make
./object_detection_yolo.bin

The generated files look messier this way, but you don't need to delete and recreate the build folder every time, and you can define the generated file types yourself.
