Compiling and Installing TensorRT 8.6, OpenCV 4.2, and protobuf 3.11.4 on the AutoDL Platform

  • The general workflow for accelerating object-detection model inference with TensorRT is:
    • 1. Download an open-source detection model from GitHub (e.g., YOLOv5 or YOLOv8) and fine-tune it on your own dataset
    • 2. Convert the model (pt -> onnx -> .trtmodel), optionally quantizing to FP16 or INT8 (see the export sketch after this list)
    • 3. Rewrite the pre- and post-processing in CUDA
    • 4. Optimize the post-processing NMS and softmax
    • 5. Run the stages as a multi-threaded pipeline
    • 6. Depending on the business requirements, apply confidence thresholds early, skip frames, shrink the input size, and so on
  • Besides CUDA and cuDNN, this workflow generally also requires installing TensorRT and OpenCV.
  • Here we compile and install TensorRT 8.6 and OpenCV 4.2 on the AutoDL cloud platform, then use the installed TensorRT to run the tensorRT_Pro project.
  • For renting a GPU instance on AutoDL, see the separate guide on renting GPUs from the AutoDL platform.
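
Step 2's pt -> onnx half boils down to a single torch.onnx.export call. The sketch below is a hedged illustration and not part of the original workflow: the ResNet-18 stand-in, the file names, the input size, and the opset are all assumptions (for YOLOv5 specifically, the repo's own export.py wraps the same call and is usually the easier route).

# Hedged sketch of the pt -> onnx export (stand-in model and paths).
import torch
import torchvision

# Stand-in network; swap in your fine-tuned detector loaded from its .pt file.
model = torchvision.models.resnet18().eval()

dummy = torch.zeros(1, 3, 224, 224)  # NCHW dummy input at the network's input size
torch.onnx.export(
    model, (dummy,), "model.onnx",
    input_names=["images"], output_names=["output"],
    opset_version=11,
    dynamic_axes={"images": {0: "batch"}, "output": {0: "batch"}},  # dynamic batch
)

The onnx -> .trtmodel half is covered below, both via the TensorRT Python API and via tensorRT_Pro's TRT::compile.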

I. Installing TensorRT 8.6

Once you have rented a server on the AutoDL platform, CUDA and cuDNN come pre-installed.

You can check them with the following commands:

# Check the CUDA version (11.3)
(base) root@autodl-container-adbc11ae52-f2ebff02:~/autodl-tmp# nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Mon_May__3_19:15:13_PDT_2021
Cuda compilation tools, release 11.3, V11.3.109
Build cuda_11.3.r11.3/compiler.29920130_0


# Check the cuDNN version (8.2.0)
(base) root@autodl-container-adbc11ae52-f2ebff02:~/autodl-tmp# cat /usr/include/cudnn_version.h | grep CUDNN_MAJOR -A 2 
#define CUDNN_MAJOR 8
#define CUDNN_MINOR 2
#define CUDNN_PATCHLEVEL 0

Now we can install TensorRT:

# 1. Register an account (required for downloading) at:
https://developer.nvidia.com/login


# The system is Ubuntu 20.04
(base) root@autodl-container-adbc11ae52-f2ebff02:~/autodl-tmp# cat /etc/issue
Ubuntu 20.04.4 LTS \n \l

# 2. TensorRT download page
https://developer.nvidia.com/nvidia-tensorrt-8x-download

[Screenshot: TensorRT 8.x download page]

# 3. Upload the tar package to the server
(base) root@autodl-container-adbc11ae52-f2ebff02:~/apps# ll
total 73220
drwxr-xr-x 2 root root       72 Aug 13 16:25 ./
drwx------ 1 root root     4096 Aug 13 16:25 ../
-rw-r--r-- 1 root root 62554112 May 13 16:25 TensorRT-8.6.1.6.Linux.x86_64-gnu.cuda-11.8.tar.gz

# 4. Extract the package
(base) root@autodl-container-adbc11ae52-f2ebff02:~/apps# tar -zxvf TensorRT-8.6.1.6.Linux.x86_64-gnu.cuda-11.8.tar.gz

# 5. Add the environment variables
(base) root@autodl-container-adbc11ae52-f2ebff02:~/apps/TensorRT-8.6.1.6/lib# pwd
/root/autodl-fs/apps/TensorRT-8.6.1.6/lib


(base) root@autodl-container-adbc11ae52-f2ebff02:~/apps/TensorRT-8.6.1.6/lib# vim  /root/.bashrc
# append the following two lines:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/root/autodl-fs/apps/TensorRT-8.6.1.6/lib
export PATH=$PATH:/root/autodl-fs/apps/TensorRT-8.6.1.6/bin

# Reload so the changes take effect
(base) root@autodl-container-adbc11ae52-f2ebff02:~/apps/TensorRT-8.6.1.6/lib# source /root/.bashrc

# 6. Create a Python environment
# Create a virtual environment
conda create -n tensorrt python=3.8

# Activate the environment
conda activate tensorrt

# Install the required packages
(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/apps# pip install pycuda
(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/apps# pip install onnx==1.16.0
(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/apps# pip install onnxruntime==1.16.0
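
Before moving on, here is a quick, hedged sanity check (not in the original steps) that the three packages import cleanly and can see the GPU:

# Hedged smoke test for pycuda / onnx / onnxruntime.
import pycuda.autoinit           # creates a CUDA context on GPU 0
import pycuda.driver as cuda
import onnx
import onnxruntime as ort

print(cuda.Device(0).name())     # e.g. NVIDIA GeForce RTX 2080 Ti
print(onnx.__version__, ort.__version__)
print(ort.get_device())          # "CPU" here, since the CPU build of onnxruntime was installed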


# Install the TensorRT wheel. This folder contains wheels for several Python versions;
# since the virtual environment uses Python 3.8, install the cp38 wheel.
(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/apps/TensorRT-8.6.1.6/python# pwd
/root/apps/TensorRT-8.6.1.6/python

(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/apps/TensorRT-8.6.1.6/python# pip install tensorrt-8.6.1-cp38-none-linux_x86_64.whl 

# Install the graphsurgeon wheel
# graphsurgeon is used to transform TensorFlow graphs: it can locate nodes in a graph and modify, add, or remove them.
(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/apps/TensorRT-8.6.1.6/graphsurgeon# pip install graphsurgeon-0.4.6-py2.py3-none-any.whl 

# Install the onnx_graphsurgeon wheel
(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/apps/TensorRT-8.6.1.6/onnx_graphsurgeon# pip install onnx_graphsurgeon-0.3.12-py2.py3-none-any.whl


# The uff package converts trained models from various frameworks into a common (UFF) format.
(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/apps/TensorRT-8.6.1.6/uff# pip install uff-0.6.9-py2.py3-none-any.whl


# Verify the installation
(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/apps/TensorRT-8.6.1.6/python# python
Python 3.8.10 (default, Jun  4 2021, 15:09:15) 
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import uff
>>> uff.__version__

# `import uff` fails with a protobuf version error, so reinstall a pinned protobuf:
(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/apps/TensorRT-8.6.1.6/uff# pip uninstall protobuf
(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/apps/TensorRT-8.6.1.6/uff# pip install protobuf==3.20.3

# Next it fails with ModuleNotFoundError: No module named 'tensorflow'
(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/apps/TensorRT-8.6.1.6/uff# pip install tensorflow   # installs a suitable recent version by default


# Verify once more
(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/apps/TensorRT-8.6.1.6/python# python
Python 3.8.10 (default, Jun  4 2021, 15:09:15) 
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import uff
>>> uff.__version__
'0.6.9'
>>> import tensorrt as trt
>>> trt.__version__
'8.6.1'
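
With the wheel verified, converting an ONNX model into an engine from Python looks roughly like the hedged sketch below (TensorRT 8.6 API; "model.onnx", the output file name, and the 1 GB workspace are assumptions, not values from this article):

# Hedged sketch: build a serialized TensorRT engine from an ONNX file.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:              # assumed input file
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GB
# config.set_flag(trt.BuilderFlag.FP16)          # enable FP16 quantization if desired

engine_bytes = builder.build_serialized_network(network, config)
assert engine_bytes is not None, "engine build failed"
with open("model.trtmodel", "wb") as f:
    f.write(engine_bytes)

This is the same onnx -> trtmodel step that tensorRT_Pro's TRT::compile performs in C++ in part IV.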

II. Installing OpenCV 4.2

# 1. Update the package index and install the basic tools
sudo apt update && sudo apt install -y cmake g++ wget unzip
 
# Install the dependencies
sudo apt-get install build-essential libgtk2.0-dev libavcodec-dev libavformat-dev libjpeg-dev libswscale-dev libtiff5-dev
sudo apt-get install pkg-config


# 2. Get OpenCV and opencv_contrib from GitHub (the two versions must match)
https://github.com/opencv


# Unzip both archives:
(base) root@autodl-container-adbc11ae52-f2ebff02:~/apps# unzip opencv-4.2.0.zip
(base) root@autodl-container-adbc11ae52-f2ebff02:~/apps# unzip opencv_contrib-4.2.0.zip

# Move the opencv_contrib-4.2.0 folder into opencv-4.2.0:
(base) root@autodl-container-adbc11ae52-f2ebff02:~/apps# mv opencv_contrib-4.2.0/ opencv-4.2.0/


# 3. Build OpenCV from source with CMake (the build is slow; be patient)
# Create a "build" directory inside opencv-4.2.0
(base) root@autodl-container-adbc11ae52-f2ebff02:~/apps# cd opencv-4.2.0
(base) root@autodl-container-adbc11ae52-f2ebff02:~/apps/opencv-4.2.0# mkdir ./build

(base) root@autodl-container-adbc11ae52-f2ebff02:~/apps/opencv-4.2.0# cd build/


# Configure with the following command:
cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=/root/autodl-fs/apps/opencv-4.2.0/opencv_contrib-4.2.0/modules ..

# The build may stop with missing-file errors like the one below (the xfeatures2d module tries to download these files at configure time, and the download often fails):
/root/autodl-fs/apps/opencv-4.2.0/opencv_contrib-4.2.0/modules/xfeatures2d/src/boostdesc.cpp:654:20: fatal error: boostdesc_bgm.i: No such file or directory
  654 |           #include "boostdesc_bgm.i"


# For this error, see: https://github.com/opencv/opencv_contrib/issues/1301
# All the missing files can be downloaded from the link below:
# Baidu Netdisk link: https://pan.baidu.com/s/1n08ztDj2mqjsgDEtPnksCg
# Extraction code: x4mw

# Upload the downloaded files to the server and move ALL of them into the directory below
(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/autodl-fs/apps# mv boostdesc_bgm.i  ./opencv-4.2.0/opencv_contrib-4.2.0/modules/xfeatures2d/src/
......

# 4. If no errors occurred, compile and install:
(base) root@autodl-container-adbc11ae52-f2ebff02:~/apps/opencv-4.2.0/build# make -j7
(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/apps/opencv-4.2.0/build# make install

# 5. Test and verify
(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:/usr/local/lib/pkgconfig# pkg-config opencv --modversion
4.2.0

# If no version is printed and you instead get a "No package 'opencv' found" error, fix it as follows.
# (When configuring OpenCV 4.x you can also pass -D OPENCV_GENERATE_PKGCONFIG=YES, which generates an opencv4.pc for you; the steps below create the file by hand.)
# (1) Create the pkgconfig directory:
cd /usr/local/lib
sudo mkdir pkgconfig
cd pkgconfig

# (2) Put the following into the new opencv.pc file:
(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:/usr/local/lib/pkgconfig# sudo vim opencv.pc
prefix=/usr/local
exec_prefix=${prefix}
includedir=${prefix}/include
libdir=${exec_prefix}/lib
 
Name: opencv
Description: The opencv library
Version: 4.2.0
Cflags: -I${includedir}/opencv4
Libs: -L${libdir} -lopencv_shape -lopencv_stitching -lopencv_objdetect -lopencv_superres -lopencv_videostab -lopencv_calib3d -lopencv_features2d -lopencv_highgui -lopencv_videoio -lopencv_imgcodecs -lopencv_video -lopencv_photo -lopencv_ml -lopencv_imgproc -lopencv_flann  -lopencv_core

# (3) Save and close, then point PKG_CONFIG_PATH at the directory:
export  PKG_CONFIG_PATH=/usr/local/lib/pkgconfig

# Verify again:
(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:/usr/local/lib/pkgconfig# pkg-config opencv --modversion
4.2.0


# 6. Verify with the sample program that ships with OpenCV
(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:/usr/local/lib/pkgconfig# cd /root/autodl-fs/apps/opencv-4.2.0/samples/cpp/example_cmake/

(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/autodl-fs/apps/opencv-4.2.0/samples/cpp/example_cmake# ll
total 11
drwxr-xr-x 1 root root 4096 Dec 20  2019 ./
drwxr-xr-x 1 root root 4096 Dec 20  2019 ../
-rw-r--r-- 1 root root 1024 Dec 20  2019 CMakeLists.txt
-rw-r--r-- 1 root root 1119 Dec 20  2019 example.cpp
-rw-r--r-- 1 root root  281 Dec 20  2019 Makefile
(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/autodl-fs/apps/opencv-4.2.0/samples/cpp/example_cmake# cmake .

(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/autodl-fs/apps/opencv-4.2.0/samples/cpp/example_cmake# make

# Seeing "Built with OpenCV 4.2.0" means the sample compiled and linked successfully
(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/autodl-fs/apps/opencv-4.2.0/samples/cpp/example_cmake# ./opencv_example 
Built with OpenCV 4.2.0

III. Installing protobuf 3.11.4

# 1. Remove the old version
(base) root@autodl-container-adbc11ae52-f2ebff02:~/autodl-fs/apps/protobuf-3.11.4# which protoc
/usr/bin/protoc
(base) root@autodl-container-adbc11ae52-f2ebff02:~/autodl-fs/apps/protobuf-3.11.4# rm -rf /usr/bin/protoc

# 2. Clone the repository together with its submodules
$ git clone -b v3.11.4 https://github.com/protocolbuffers/protobuf.git --recurse-submodules
$ cd protobuf
# Check which version was checked out
$ git show
commit d0bfd5221182da1a7cc280f3337b5e41a89539cf (HEAD, tag: v3.11.4)
Author: Rafi Kamal <rafikamal@google.com>
Date:   Fri Feb 14 12:13:20 2020 -0800
......

# 3. Build (the build is slow; be patient)
$ mkdir build
$ cd build
$ cmake -DCMAKE_INSTALL_PREFIX="/usr/local/protobuf" ../cmake
$ make -j7
$ make install

# 4. Configure the environment after the build
$ ldconfig
$ vim ~/.bashrc
# after adding the protobuf paths, the two lines look like this:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/root/autodl-fs/apps/TensorRT-8.6.1.6/lib:/usr/local/protobuf/lib
export PATH=$PATH:/root/autodl-fs/apps/TensorRT-8.6.1.6/bin:/usr/local/protobuf/bin
$ source ~/.bashrc

# 5. Check the version
(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/autodl-fs/apps/protobuf# protoc --version
libprotoc 3.11.4
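
As an optional, hedged smoke test (not part of the original steps), you can compile a throwaway .proto with the newly built protoc and round-trip a message through the Python protobuf runtime installed earlier; demo.proto and the Point message are made up for illustration:

# Hedged smoke test: exercise protoc and the Python protobuf runtime.
import importlib
import pathlib
import subprocess
import sys

pathlib.Path("demo.proto").write_text(
    'syntax = "proto3";\n'
    "message Point {\n"
    "  int32 x = 1;\n"
    "  int32 y = 2;\n"
    "}\n"
)
subprocess.run(["protoc", "--python_out=.", "demo.proto"], check=True)

sys.path.insert(0, ".")
demo_pb2 = importlib.import_module("demo_pb2")

p = demo_pb2.Point(x=1, y=2)
q = demo_pb2.Point.FromString(p.SerializeToString())
assert (q.x, q.y) == (1, 2)
print("protoc and the Python runtime agree:", (q.x, q.y))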

IV. Running the tensorRT_Pro Project with the Installed TensorRT

[Screenshot: the tensorRT_Pro project]

1. Download the project

git clone https://github.com/shouxieai/tensorRT_Pro.git

2. Edit the CMakeLists.txt configuration file

# The configuration after editing:
cmake_minimum_required(VERSION 2.6)
project(pro)

option(CUDA_USE_STATIC_CUDA_RUNTIME OFF)
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_BUILD_TYPE Debug)
set(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/workspace)

# To enable Python support, set the Python paths
set(HAS_PYTHON ON)
set(PythonRoot "/root/miniconda3/envs/tensorrt")
set(PythonName "python3.8")

# If you have a different GPU, set the gencode for your card's compute capability; see: https://developer.nvidia.com/zh-cn/cuda-gpus#compute
#set(CUDA_GEN_CODE "-gencode=arch=compute_75,code=sm_75")

# If your OpenCV is not found automatically, you can specify its directory yourself
set(OpenCV_DIR   "/usr/local/lib")

set(CUDA_TOOLKIT_ROOT_DIR     "/usr/local/cuda-11.3")
set(CUDNN_DIR    "/usr")
set(TENSORRT_DIR "/root/autodl-fs/apps/TensorRT-8.6.1.6")


# A specific protobuf version is required, so its path is set explicitly here
set(PROTOBUF_DIR "/usr/local/protobuf")
......

3. Run the lesson1.py example in the workspace directory to generate lesson1.onnx

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()

        self.conv = nn.Conv2d(1, 1, 3, stride=1, padding=1, bias=True)
        self.conv.weight.data.fill_(0.3)
        self.conv.bias.data.fill_(0.2)

    def forward(self, x):
        x = self.conv(x)
        return x.view(int(x.size(0)), -1)
        # return x.view(-1, int(x.numel() // x.size(0)))

model = Model().eval()

"""
>>> conv.weight.data.fill_(0.3)
tensor([[[[0.3000, 0.3000, 0.3000],
          [0.3000, 0.3000, 0.3000],
          [0.3000, 0.3000, 0.3000]]]])
          
>>> conv.bias.data.fill_(0.2)
tensor([0.2000])

>>> x
tensor([[[[1., 1., 1.],
          [1., 1., 1.],
          [1., 1., 1.]]]])

>>> conv(x)
tensor([[[[1.4000, 2.0000, 1.4000],
          [2.0000, 2.9000, 2.0000],
          [1.4000, 2.0000, 1.4000]]]], grad_fn=<ConvolutionBackward0>)

x.view(int(x.size(0)), -1) gives:
tensor([[1.4000, 2.0000, 1.4000, 2.0000, 2.9000, 2.0000, 1.4000, 2.0000, 1.4000]],
       grad_fn=<ViewBackward0>)
"""

x = torch.full((1, 1, 3, 3), 1.0)
y = model(x)
print(y)

# Export the ONNX file
torch.onnx.export(
    model, (x, ), "lesson1.onnx", verbose=True
)
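
For real models you will usually also want named inputs/outputs and a dynamic batch dimension, so that TensorRT can build optimization profiles over the batch size. Here is a hedged variant of the same export call (the tensor names and opset are assumptions, not taken from tensorRT_Pro):

# Hedged variant: explicit I/O names plus a dynamic batch dimension.
torch.onnx.export(
    model, (x, ), "lesson1.onnx",
    input_names=["input"], output_names=["output"],
    opset_version=11,
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    verbose=True,
)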

4. Then un-comment the relevant code in application/app_lesson.cpp

......
    
static void lesson1(){

    /** Compile the model: onnx -> trtmodel **/
    TRT::compile(
         TRT::Mode::FP32,            /** mode: fp32 / fp16 / int8 **/
         1,                          /** max batch size           **/
         "lesson1.onnx",             /** onnx file (input)        **/
         "lesson1.fp32.trtmodel"     /** trt model file (output)  **/
     );

    /** Load the compiled engine **/
    auto infer = TRT::load_infer("lesson1.fp32.trtmodel");

    /** Set the input values **/
    infer->input(0)->set_to(1.0f);

    /** Run inference **/
    infer->forward();

    /** Fetch the engine output and print it **/
    auto out = infer->output(0);
    INFO("out.shape = %s", out->shape_string());
    for(int i = 0; i < out->channel(); ++i)
        INFO("%f", out->at<float>(0, i));
}    

......

int app_lesson(){

    iLogger::set_log_level(iLogger::LogLevel::Verbose);
    // test_tensor3();
    lesson1();
    // lesson2();
    // lesson3();
    // lesson_cache1frame();
    return 0;
}

5. Build the project

# Build tensorRT_Pro with CMake
(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/autodl-tmp/tensorRT_Pro-main# mkdir build

(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/autodl-tmp/tensorRT_Pro-main# cd build

(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/autodl-tmp/tensorRT_Pro-main/build# cmake ..

# Building the lesson target also runs it; the printed values match the Python script's output:
(tensorrt) root@autodl-container-adbc11ae52-f2ebff02:~/autodl-tmp/tensorRT_Pro-main/build# make lesson -j8

......
# [100%] Built target pro
# Scanning dependencies of target lesson
# [2024-08-18 15:13:39][info][trt_builder.cpp:474]:Compile FP32 Onnx Model 'lesson1.onnx'.
[2024-08-18 15:14:11][warn][trt_builder.cpp:33]:NVInfer: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
[2024-08-18 15:14:11][info][trt_builder.cpp:558]:Input shape is -1 x 1 x 3 x 3
[2024-08-18 15:14:11][info][trt_builder.cpp:559]:Set max batch size = 1
[2024-08-18 15:14:11][info][trt_builder.cpp:560]:Set max workspace size = 1024.00 MB
[2024-08-18 15:14:12][info][trt_builder.cpp:561]:Base device: [ID 0]<NVIDIA GeForce RTX 2080 Ti>[arch 7.5][GMEM 10.35 GB/10.75 GB]
[2024-08-18 15:14:12][info][trt_builder.cpp:564]:Network has 1 inputs:
[2024-08-18 15:14:12][info][trt_builder.cpp:570]:      0.[input] shape is -1 x 1 x 3 x 3
[2024-08-18 15:14:12][info][trt_builder.cpp:576]:Network has 1 outputs:
[2024-08-18 15:14:12][info][trt_builder.cpp:581]:      0.[5] shape is -1 x 9
[2024-08-18 15:14:12][info][trt_builder.cpp:585]:Network has 2 layers:
[2024-08-18 15:14:12][verbo][trt_builder.cpp:610]:  >>> 0.  Convolution        -1 x 1 x 3 x 3    -> -1 x 1 x 3 x 3    channel: 1, kernel: 3 x 3, padding: 1 x 1, stride: 1 x 1, dilation: 1 x 1, group: 1
[2024-08-18 15:14:12][verbo][trt_builder.cpp:610]:  *** 1.  Shuffle            -1 x 1 x 3 x 3    -> -1 x 9            
[2024-08-18 15:14:12][info][trt_builder.cpp:652]:Building engine...
[2024-08-18 15:14:13][info][trt_builder.cpp:672]:Build done 983 ms !
[2024-08-18 15:14:13][warn][trt_builder.cpp:33]:NVInfer: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
[2024-08-18 15:14:13][warn][trt_builder.cpp:33]:NVInfer: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[2024-08-18 15:14:13][info][app_lesson.cpp:85]:out.shape = 1 x 9
[2024-08-18 15:14:13][info][app_lesson.cpp:87]:1.400000
[2024-08-18 15:14:13][info][app_lesson.cpp:87]:2.000000
[2024-08-18 15:14:13][info][app_lesson.cpp:87]:1.400000
[2024-08-18 15:14:13][info][app_lesson.cpp:87]:2.000000
[2024-08-18 15:14:13][info][app_lesson.cpp:87]:2.900000
[2024-08-18 15:14:13][info][app_lesson.cpp:87]:2.000000
[2024-08-18 15:14:13][info][app_lesson.cpp:87]:1.400000
[2024-08-18 15:14:13][info][app_lesson.cpp:87]:2.000000
[2024-08-18 15:14:13][info][app_lesson.cpp:87]:1.400000
[100%] Built target lesson

6. The compiled executable is placed in the workspace directory, so it can be run again from there

[Screenshot: running the compiled executable from the workspace directory]
