YOLOv8 + TensorRT 8.6.1.6 + Win10 + Qt 5.9.9 Inference Deployment (Packaged as a DLL)

Since the integration is with the company side, where everything is done in Qt, I had no choice but to package the project as a DLL and then call it from Qt.

Versions:

  • CUDA 11.8
  • CuDNN 8.9
  • TensorRT 8.6.1.6
  • QT 5.9.9
  • VS2019 (but the compiler is MSVC2017)
  • OpenCV 4.9.0

Step 1: Model Conversion

I won't go into installing CUDA and friends here; just remember that the versions must match!!!

Then add TensorRT, OpenCV, etc. to the system environment variables (my OpenCV is a custom build; the prebuilt binaries from the official site should probably work too).

Remember to check that the CUDA environment variables are actually set.

Next comes converting the .pt model to ONNX and then to an engine. Here I followed 部署实战 | 手把手教你在Windows下用TensorRT部署YOLOv8_yolov8 tensorrt部署-CSDN博客:

export the ONNX model with ultralytics' export, then convert it to an .engine file with TensorRT's bundled trtexec tool.
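The two steps can be sketched roughly as follows (the model and file names are placeholders, and `--fp16` is optional; adjust everything to your own setup):

```shell
# Export the .pt weights to ONNX with the ultralytics CLI
# (run in a Python environment that has ultralytics installed)
yolo export model=yolov8n.pt format=onnx

# Build a TensorRT engine from the ONNX model with trtexec
trtexec --onnx=yolov8n.onnx --saveEngine=yolov8n.engine --fp16
```

Note that the engine file is specific to the GPU and TensorRT version it was built with, so it must be regenerated on the deployment machine.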

Step 2: Exporting the DLL

Here I used the configuration method from Yolov5训练自己的数据集+TensorRT加速+Qt部署_tensorrt yolov5-CSDN博客,

but the concrete implementation is completely different (the code base differs from that article's, and so do the output interfaces...).

2.1 Environment Setup

First, create a new Dynamic-Link Library project in VS2019 and set up the environment.

Create a new property sheet in the Property Manager.

In the TensorRT sheet, configure the include directories and library directories.

Linker -> Input: configure the additional dependencies.

The same for OpenCV:

include directories, library directories

additional dependencies

Then just reuse the ready-made CUDA property sheet at C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\extras\visual_studio_integration\MSBuildExtensions\CUDA 11.8.props (adjust to your own install location).

2.2 File Setup

GitHub - FeiYull/TensorRT-Alpha: 🔥🔥🔥TensorRT for YOLOv8、YOLOv8-Pose、YOLOv8-Seg、YOLOv8-Cls、YOLOv7、YOLOv6、YOLOv5、YOLONAS......🚀🚀🚀CUDA IS ALL YOU NEED.🍎🍎🍎

Clone the TensorRT-Alpha repository and arrange the files from its yolov8 and utils folders as shown here (pch.h and framework.h under Header Files are auto-generated by the project).

The two files under Resource Files come from the F:\TensorRT\samples\common folder.

Mind the placement; as long as it builds without errors you're fine.

Right-click the project -> Build Dependencies -> Build Customizations.

Then select the .cu files, right-click Properties, and set Item Type to CUDA C/C++.

For precompiled headers I chose Not Using Precompiled Headers (otherwise it seems to cause problems).

stdafx.h

#pragma once
// stdafx.h : include file for standard system include files,
// or project-specific include files that are used frequently,
// but are changed infrequently
//


#define WIN32_LEAN_AND_MEAN             // Exclude rarely-used stuff from Windows headers
// Windows header files:
#include <windows.h>

#include "targetver.h"

// TODO: reference additional headers your program requires here

targetver.h 

#pragma once

// Including SDKDDKVer.h defines the highest available Windows platform.

// If you wish to build your application for a previous Windows platform, include WinSDKVer.h and
// set the _WIN32_WINNT macro to the platform you wish to support before including SDKDDKVer.h.

#include <SDKDDKVer.h>
2.3 Building the DLL

Here I simply expose plain functions as the interface, without wrapping them in a class (too much hassle).

Since the real application uses a camera, the batch size is hard-coded to 1, and a lot of the code has been hacked around that assumption (if you need something else, look at TensorRT-Alpha's app_yolov8.cpp).

The number of classes, their colors, and so on are modified in utils.h.

ov_yolov8.h

#pragma once

#define OV_YOLOV8_API __declspec(dllexport)

#include <iostream>
#include <string>
#include <vector>
#include <algorithm>
#include <random>
#include <opencv2/opencv.hpp>    //opencv header file
#include <chrono>
#include"../utils/yolo.h"
#include"yolov8.h"


// output struct definition
typedef struct {
    float prob;
    cv::Rect rect;
    int classid;
}Object;

enum InputType {
    IMAGE,
    VIDEO,
    CAMERA
};

// Note: these globals (and `param` below) are defined in the header. That is
// fine while it is only included by ov_yolov8.cpp, but it would violate the
// one-definition rule if this header were included from several source files.
int total_batches = 0;
int delay_time = 1;
bool is_show = false;
bool is_save = false;

extern "C"
{
    utils::InitParameter param;

    OV_YOLOV8_API YOLOV8* LoadDetectModel(const std::string& model_path, const std::string& file_path, const int& type, const int& cameraID);
    OV_YOLOV8_API bool YoloDetectInfer(const cv::Mat& src, std::vector<Object>& vecObj, YOLOV8* model);

    void _setParameters(utils::InitParameter& initParameters);
    void _task(YOLOV8* yolo, const utils::InitParameter& param, std::vector<cv::Mat>& imgsBatch, const int& delayTime, const int& batchi,
        const bool& isShow, const bool& isSave, std::vector<Object>& vecObj);
}
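The header above hard-codes `dllexport`. A more conventional pattern is to export when building the DLL and import when consuming it, switched by a project-defined macro (here `OV_YOLOV8_EXPORTS`, an assumed name you would set only in the DLL project's preprocessor settings):

```cpp
#include <cassert>

// Sketch of the usual export/import macro pattern; OV_YOLOV8_EXPORTS is an
// assumption, not part of the original project.
#ifdef _WIN32
  #ifdef OV_YOLOV8_EXPORTS
    #define OV_YOLOV8_API __declspec(dllexport)   // building the DLL itself
  #else
    #define OV_YOLOV8_API __declspec(dllimport)   // consuming the DLL (e.g. from Qt)
  #endif
#else
  #define OV_YOLOV8_API                           // no-op on non-Windows builds
#endif

// Example function tagged with the macro, just to show it in use:
OV_YOLOV8_API int ov_api_check() { return 1; }
```

With this pattern the same header can be shipped unmodified to the Qt side, instead of keeping two copies with different `__declspec` attributes.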

ov_yolov8.cpp

I modified _task and utils::show here because I wanted to display the FPS; if you want this feature you also need to adjust the related functions in utils.cpp and utils.h.

vecObj holds each box's position, class id and confidence. Since the batch is 1, I just crudely take box = objectss[0].

#include "../pch.h"
#include "ov_yolov8.h"

void _setParameters(utils::InitParameter& initParameters)
{
	initParameters.class_names = utils::dataSets::ship;
	initParameters.num_class = 6; // ship dataset (6 classes)
	initParameters.batch_size = 1;
	initParameters.dst_h = 1024;
	initParameters.dst_w = 1024;
	initParameters.input_output_names = { "images",  "output0" };
	initParameters.conf_thresh = 0.25f;
	initParameters.iou_thresh = 0.45f;
	initParameters.save_path = "";
}

void _task(YOLOV8* yolo, const utils::InitParameter& param, std::vector<cv::Mat>& imgsBatch, 
	const int& delayTime, const int& batchi,const bool& isShow, const bool& isSave, std::vector<utils::Box>& box)
{
	auto beforeTime = std::chrono::steady_clock::now();
	
	utils::DeviceTimer d_t0; yolo->copy(imgsBatch);	      float t0 = d_t0.getUsedTime();
	utils::DeviceTimer d_t1; yolo->preprocess(imgsBatch);  float t1 = d_t1.getUsedTime();
	utils::DeviceTimer d_t2; yolo->infer();				  float t2 = d_t2.getUsedTime();
	utils::DeviceTimer d_t3; yolo->postprocess(imgsBatch); float t3 = d_t3.getUsedTime();
	sample::gLogInfo <<
		//"copy time = " << t0 / param.batch_size << "; "
		"preprocess time = " << t1 / param.batch_size << "; "
		"infer time = " << t2 / param.batch_size << "; "
		"postprocess time = " << t3 / param.batch_size << std::endl;
	
	std::vector<std::vector<utils::Box>> objectss = yolo->getObjectss();
	box = objectss[0];

	utils::show(objectss, param.class_names, delayTime, imgsBatch, beforeTime);
	if (isSave)
		utils::save(yolo->getObjectss(), param.class_names, param.save_path, imgsBatch, param.batch_size, batchi);
	yolo->reset();
}

YOLOV8* LoadDetectModel(const std::string& model_path, const std::string& file_path,const int& type, const int& cameraID)
{   
    // set utils params //
	_setParameters(param);
	// source
	utils::InputStream source;
	// path
	std::string video_path;
	std::string image_path;
	// camera id
	int camera_id = 0;
	switch (type)
	{
	case InputType::IMAGE:
		source = utils::InputStream::IMAGE;
		image_path = file_path;
		break;
	case InputType::VIDEO:
		source = utils::InputStream::VIDEO;
		video_path = file_path;
		break;
	case InputType::CAMERA:
		source = utils::InputStream::CAMERA;
		camera_id = 0;
		break;
	default:
		break;
	}
	// input params
	int size = 1024; // w or h
	int batch_size = 1;
	cv::VideoCapture capture;

	// set input params
	if (!utils::setInputStream(source, image_path, video_path, 
		camera_id, capture, total_batches, delay_time, param))
	{
		sample::gLogError << "read the input data errors!" << std::endl;
		return nullptr;
	}

	// build and read model 
	YOLOV8* model = new YOLOV8(param);
	std::vector<unsigned char> trt_file = utils::loadModel(model_path);
	if (trt_file.empty())
	{
		sample::gLogError << "trt_file is empty!" << std::endl;
		delete model;   // avoid leaking the half-built model
		return nullptr;
	}
	// init model
	if (!model->init(trt_file))
	{
		sample::gLogError << "initEngine() failed!" << std::endl;
		delete model;
		return nullptr;
	}
	model->check();
	return model;
}


bool YoloDetectInfer(const cv::Mat& src, std::vector<Object>& vecObj, YOLOV8* model)
{   
	std::vector<cv::Mat> imgs_batch;
	imgs_batch.reserve(param.batch_size);
	imgs_batch.emplace_back(src.clone());

	std::vector<utils::Box> box;
	_task(model, param, imgs_batch, delay_time, 0, is_show, is_save, box);

	for (auto b : box) 
	{
		Object tmp;
		tmp.classid = b.label;
		tmp.prob = b.confidence;

		int x = (int)b.left;             // cv::Rect expects the top-left corner,
		int y = (int)b.top;              // not the box center
		int w = (int)(b.right - b.left);
		int h = (int)(b.bottom - b.top);
		tmp.rect = cv::Rect(x, y, w, h);

		vecObj.emplace_back(tmp);
	}
	return true;
}
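The Box-to-rect conversion above, and the IoU test that sits behind `iou_thresh`, can be sketched in isolation. The `Box`/`Rect` structs below are stand-ins mirroring tensorrt-alpha's corner format (`left, top, right, bottom`), not the real library types:

```cpp
#include <algorithm>
#include <cassert>

// Stand-in for utils::Box: corner format.
struct Box { float left, top, right, bottom; };
// Stand-in for cv::Rect: x,y is the TOP-LEFT corner, plus width/height.
struct Rect { int x, y, w, h; };

// Convert a corner-format box to an x,y,w,h rect.
Rect toRect(const Box& b) {
    return { (int)b.left, (int)b.top,
             (int)(b.right - b.left), (int)(b.bottom - b.top) };
}

// Intersection-over-union of two boxes, the quantity compared
// against iou_thresh during non-maximum suppression.
float iou(const Box& a, const Box& b) {
    float ix = std::max(0.f, std::min(a.right, b.right) - std::max(a.left, b.left));
    float iy = std::max(0.f, std::min(a.bottom, b.bottom) - std::max(a.top, b.top));
    float inter = ix * iy;
    float areaA = (a.right - a.left) * (a.bottom - a.top);
    float areaB = (b.right - b.left) * (b.bottom - b.top);
    return inter / (areaA + areaB - inter);
}
```

For example, two 10x10 boxes offset by (5, 5) overlap in a 5x5 patch, giving IoU = 25 / 175 ≈ 0.143, below the 0.45 `iou_thresh` set above, so both detections survive NMS.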

The utils.cpp part with FPS added:

void utils::show(const std::vector<std::vector<utils::Box>>& objectss, const std::vector<std::string>& classNames,
	const int& cvDelayTime, std::vector<cv::Mat>& imgsBatch, std::chrono::steady_clock::time_point start)
{
	std::string windows_title = "infer result";
	if (!imgsBatch[0].empty())
	{
		cv::namedWindow(windows_title, cv::WINDOW_NORMAL | cv::WINDOW_KEEPRATIO);  // allow window resize(Linux)

		int max_w = 960;
		int max_h = 540;
		if (imgsBatch[0].rows > max_h || imgsBatch[0].cols > max_w)
		{
			cv::resizeWindow(windows_title, max_w, imgsBatch[0].rows * max_w / imgsBatch[0].cols);
		}
	}

	// vis
	cv::Scalar color = cv::Scalar(0, 255, 0);
	cv::Point bbox_points[1][4];
	const cv::Point* bbox_point0[1] = { bbox_points[0] };
	int num_points[] = { 4 };
	for (size_t bi = 0; bi < imgsBatch.size(); bi++)
	{
		if (!objectss.empty())
		{
			for (auto& box : objectss[bi])
			{
				if (classNames.size() == 91) // coco91
				{
					color = Colors::color91[box.label];
				}
				if (classNames.size() == 80) // coco80
				{
					color = Colors::color80[box.label];
				}
				if (classNames.size() == 20) // voc20
				{
					color = Colors::color20[box.label];
				}
				if (classNames.size() == 6) // ship
				{
					color = Colors::color6[box.label];
				}
				cv::rectangle(imgsBatch[bi], cv::Point(box.left, box.top), cv::Point(box.right, box.bottom), color, 2, cv::LINE_AA);
				cv::String det_info = classNames[box.label] + " " + cv::format("%.4f", box.confidence);
				bbox_points[0][0] = cv::Point(box.left, box.top);
				bbox_points[0][1] = cv::Point(box.left + det_info.size() * 11, box.top);
				bbox_points[0][2] = cv::Point(box.left + det_info.size() * 11, box.top - 15);
				bbox_points[0][3] = cv::Point(box.left, box.top - 15);
				cv::fillPoly(imgsBatch[bi], bbox_point0, num_points, 1, color);
				cv::putText(imgsBatch[bi], det_info, bbox_points[0][0], cv::FONT_HERSHEY_DUPLEX, 0.6, cv::Scalar(255, 255, 255), 1, cv::LINE_AA);

				if (!box.land_marks.empty()) // for facial landmarks
				{
					for (auto& pt : box.land_marks)
					{
						cv::circle(imgsBatch[bi], pt, 1, cv::Scalar(255, 255, 255), 1, cv::LINE_AA, 0);
					}
				}
			}
		}
		auto afterTime = std::chrono::steady_clock::now();
		double duration_second = std::chrono::duration<double, std::milli>(afterTime - start).count() / 1000;
		// calculate FPS
		cv::String fps_info = "FPS:" + cv::format("%.2f", imgsBatch.size() / duration_second);
		cv::putText(imgsBatch[bi], fps_info, cv::Point(100, 100), cv::FONT_HERSHEY_DUPLEX, 2, cv::Scalar(255, 0, 0), 1, cv::LINE_AA);

		cv::imshow(windows_title, imgsBatch[bi]);
		char c = cv::waitKey(cvDelayTime);
		if (c == 27) {//ESC
			break;
		}
	}
}
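The FPS arithmetic in the modified `show()` boils down to the following standalone sketch (`fps` is a hypothetical helper, not part of utils.cpp): the `count()` of a `std::milli` duration divided by 1000 gives seconds, and frames divided by seconds gives FPS.

```cpp
#include <chrono>
#include <cassert>

// Frames-per-second over a measured wall-clock interval.
double fps(std::chrono::steady_clock::time_point start,
           std::chrono::steady_clock::time_point end,
           int frames) {
    // milliseconds as double, then converted to seconds
    double seconds =
        std::chrono::duration<double, std::milli>(end - start).count() / 1000.0;
    return frames / seconds;
}
```

Because the timer starts before `copy/preprocess/infer/postprocess` and stops inside `show()`, this is end-to-end throughput per batch, not pure inference speed.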

As you can see, only LoadDetectModel and YoloDetectInfer are actually exposed to the outside.

Then just build! The .lib and .dll end up in the Release folder.
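A minimal plain-C++ consumer of the two exported functions might look like this. File names are placeholders, and the sketch needs the DLL plus TensorRT and OpenCV available at runtime, so treat it as an illustration rather than a tested program:

```cpp
#include <cstdio>
#include <vector>
#include <opencv2/opencv.hpp>
#include "ov_yolov8.h"

int main() {
    // type 0 = IMAGE; the input stream is given before the engine so that
    // setInputStream() can fix the size/batch parameters first.
    YOLOV8* model = LoadDetectModel("yolov8n.engine", "test.jpg", 0, 0);
    if (!model) return 1;

    cv::Mat img = cv::imread("test.jpg");
    std::vector<Object> objs;
    if (YoloDetectInfer(img, objs, model)) {
        for (const auto& o : objs)
            std::printf("class %d conf %.2f\n", o.classid, o.prob);
    }
    delete model;
    return 0;
}
```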

Step 3: Deploying to Qt

3.1 Environment Setup

As long as the system environment is configured correctly, this step basically causes no problems (that is, Step 1: the system environment variables! "DLL not found" or undefined-symbol errors are usually caused by this!).

Copy everything from the include directories you configured earlier into the include folder shown below (adding them one by one by hand works too), without changing their directory structure!

For example, if the include directories contained F:\TensorRT\samples\common, copy its contents straight into include; don't create a common subfolder under include.

Then copy all the .lib files from the previous step into lib (the ones you added to the linker yourself; the system-provided ones don't need to go there).

Then configure the environment in the Qt .pro file (this is my configuration):

QT       += core gui

greaterThan(QT_MAJOR_VERSION, 4): QT += widgets

CONFIG += c++17

# The following define makes your compiler emit warnings if you use
# any Qt feature that has been marked deprecated (the exact warnings
# depend on your compiler). Please consult the documentation of the
# deprecated API in order to know how to port your code away from it.
DEFINES += QT_DEPRECATED_WARNINGS

# You can also make your code fail to compile if it uses deprecated APIs.
# In order to do so, uncomment the following line.
# You can also select to disable deprecated APIs only up to a certain version of Qt.
#DEFINES += QT_DISABLE_DEPRECATED_BEFORE=0x060000    # disables all the APIs deprecated before Qt 6.0.0

SOURCES += \
    main.cpp \
    widget.cpp

HEADERS += \
    ov_yolov8.h \
    widget.h

FORMS += \
    widget.ui

# Default rules for deployment.
qnx: target.path = /tmp/$${TARGET}/bin
else: unix:!android: target.path = /opt/$${TARGET}/bin
!isEmpty(target.path): INSTALLS += target

LIBS += -L$$PWD/lib/ -lopencv_world490
LIBS += -L$$PWD/lib/ -lyolov8_deploy_dll
LIBS += -L$$PWD/lib/ -lcudadevrt
LIBS += -L$$PWD/lib/ -lcudart
LIBS += -L$$PWD/lib/ -lnvinfer
LIBS += -L$$PWD/lib/ -lnvinfer_plugin
LIBS += -L$$PWD/lib/ -lnvonnxparser
LIBS += -L$$PWD/lib/ -lnvparsers

INCLUDEPATH += $$PWD/include
DEPENDPATH += $$PWD/include
INCLUDEPATH += $$PWD/include/opencv2
DEPENDPATH += $$PWD/include/opencv2

3.2 Writing the Qt Program

At first I wrote the Qt program in VS2019; later I found it also worked after moving it to Qt Creator, and since that's what the company uses, I developed with it.

Because the program requires loading the input stream first (to determine the input size), then reading the .engine file, and only then running inference, I ended up with a pile of buttons (someone please optimize this).

And unfortunately, since I don't have a camera on hand, the camera option couldn't be tested; please strip it out yourself.

The file layout is very simple.

ov_yolov8.h

#pragma once

#define OV_YOLOV8_API __declspec(dllimport)   // import side: this copy of the header is consumed by the Qt app

#include <iostream>
#include <string>
#include <vector>
#include <algorithm>
#include <random>
#include <opencv2/opencv.hpp>    //opencv header file
#include <utils/yolo.h>
#include <src/yolov8.h>

using namespace cv;
using namespace std;


// output struct definition
typedef struct {
    float prob;
    cv::Rect rect;
    int classid;
}Object;

extern "C"
{
    OV_YOLOV8_API YOLOV8* LoadDetectModel(const std::string& model_path, const std::string& file_path, const int& type, const int& cameraID);
    OV_YOLOV8_API bool YoloDetectInfer(const cv::Mat& src, std::vector<Object>& vecObj, YOLOV8* model);

}

widget.h 

#pragma once

#include <QtWidgets/QWidget>
#include <QFileDialog>
#include <QMessageBox>

#include "ov_yolov8.h"
#include "ui_widget.h"

class Widget : public QWidget
{
    Q_OBJECT

public:
    Widget(QWidget *parent = nullptr);
    ~Widget();

    enum InputType {
        IMAGE,
        VIDEO,
        CAMERA
    };

private slots:
    void on_openModel_clicked();
    void on_openFile_clicked();
    void on_close_clicked();
    void on_inputStream_activated(int index);

private:
    Ui::Widget ui;
    YOLOV8* model = nullptr;   // initialized so on_close_clicked() is safe before loading

    int type;
    string file_path;
    int camera_id = 0;
};

widget.cpp

#include "widget.h"

Widget::Widget(QWidget *parent): QWidget(parent)
{
    ui.setupUi(this);

    QStringList strList;
    strList << "image" << "video" << "camera";
    ui.inputStream->addItems(strList);
    ui.inputStream->setCurrentIndex(-1);

    ui.openModel->setEnabled(false);
    ui.openFile->setEnabled(false);
    ui.cameraID->setValidator(new QRegExpValidator(QRegExp("[0-9]+$")));

}

Widget::~Widget()
{
}

void Widget::on_openModel_clicked()
{
    QString efilename = QFileDialog::getOpenFileName(this, "open model", "./model", "*.engine");
    string eName_Detect = efilename.toStdString();

    model = LoadDetectModel(eName_Detect,this->file_path,this->type,this->camera_id);
    if (model != nullptr)
    {
        ui.lineEdit->setText("load model");
        ui.openModel->setEnabled(false);
        ui.openFile->setEnabled(true);
        ui.inputStream->setEnabled(false);
    }
}

void Widget::on_openFile_clicked()
{
    QString filename = QFileDialog::getOpenFileName(this, "open infer file", "./video", "*.mp4");
    // open the video
    VideoCapture capture(filename.toStdString());

    // 检测推理
    vector<Object> vecObj = {};

    // check it opened successfully
    if (!capture.isOpened())
    {
        cout << "fail!" << endl;
        return;
    }

    // print the video parameters: width, height, fps
    cout << "width = " << capture.get(CAP_PROP_FRAME_WIDTH) << endl;
    cout << "height =" << capture.get(CAP_PROP_FRAME_HEIGHT) << endl;
    cout << "fps = " << capture.get(CAP_PROP_FPS) << endl;

    try
    {
        Mat frame;
        while (true)
        {
            capture.read(frame);				// read a video frame
            if (frame.empty()) {
                break;
            }
            bool InferDetectflag = YoloDetectInfer(frame, vecObj, model);
            if (InferDetectflag == false)return;
        }
    }
    catch (const std::exception& e)
    {
        ui.lineEdit->setText("wrong!");
    }
    capture.release();
    destroyAllWindows();
}

void Widget::on_close_clicked()
{
    if (model != nullptr)
    {
        delete model;      // release the engine so a new one can be loaded
        model = nullptr;
        ui.openModel->setEnabled(true);
        ui.inputStream->setEnabled(true);
        ui.openFile->setEnabled(false);
    }
}

void Widget::on_inputStream_activated(int index)
{
    QString filename;
    string Name_Detect;
    switch (index) {
    default:
        break;
    case InputType::IMAGE:
        filename = QFileDialog::getOpenFileName(this, "open input file", "./video", "*.jpg");
        Name_Detect = filename.toStdString();
        this->type = 0;
        this->file_path = Name_Detect;
        ui.openModel->setEnabled(true);
        break;
    case InputType::VIDEO:
        filename = QFileDialog::getOpenFileName(this, "open input file", "./video", "*.mp4");
        Name_Detect = filename.toStdString();
        this->type = 1;
        this->file_path = Name_Detect;
        ui.openModel->setEnabled(true);
        break;
    case InputType::CAMERA:
        QMessageBox::information(this, "information", "please enter camera id");
        this->camera_id = ui.cameraID->text().toInt();
        ui.openModel->setEnabled(true);
        break;
    }
}


widget.ui

<?xml version="1.0" encoding="UTF-8"?>
<ui version="4.0">
 <class>Widget</class>
 <widget class="QWidget" name="Widget">
  <property name="geometry">
   <rect>
    <x>0</x>
    <y>0</y>
    <width>800</width>
    <height>600</height>
   </rect>
  </property>
  <property name="windowTitle">
   <string>Widget</string>
  </property>
  <widget class="QLineEdit" name="lineEdit">
   <property name="geometry">
    <rect>
     <x>50</x>
     <y>190</y>
     <width>431</width>
     <height>31</height>
    </rect>
   </property>
   <property name="readOnly">
    <bool>true</bool>
   </property>
  </widget>
  <widget class="QPushButton" name="openFile">
   <property name="geometry">
    <rect>
     <x>140</x>
     <y>150</y>
     <width>75</width>
     <height>23</height>
    </rect>
   </property>
   <property name="text">
    <string>openFile</string>
   </property>
  </widget>
  <widget class="QPushButton" name="openModel">
   <property name="geometry">
    <rect>
     <x>50</x>
     <y>150</y>
     <width>75</width>
     <height>23</height>
    </rect>
   </property>
   <property name="text">
    <string>openModel</string>
   </property>
  </widget>
  <widget class="QPushButton" name="close">
   <property name="geometry">
    <rect>
     <x>230</x>
     <y>150</y>
     <width>91</width>
     <height>21</height>
    </rect>
   </property>
   <property name="text">
    <string>close</string>
   </property>
  </widget>
  <widget class="QComboBox" name="inputStream">
   <property name="geometry">
    <rect>
     <x>50</x>
     <y>110</y>
     <width>87</width>
     <height>22</height>
    </rect>
   </property>
   <property name="editable">
    <bool>false</bool>
   </property>
  </widget>
  <widget class="QLineEdit" name="cameraID">
   <property name="geometry">
    <rect>
     <x>150</x>
     <y>110</y>
     <width>331</width>
     <height>21</height>
    </rect>
   </property>
   <property name="text">
    <string>Please enter the camera id</string>
   </property>
  </widget>
 </widget>
 <resources/>
 <connections/>
</ui>

OK, very nice!
