Deploying YOLOX with TensorRT

1. Installing Ubuntu and the NVIDIA driver

See my blog post below for details. If you plan to use an NVIDIA GPU, make sure to install the graphics driver on the bare-metal system.

https://blog.csdn.net/qq_43515934/article/details/123470400?spm=1001.2014.3001.5502

2. Installing TensorRT

See my blog post below for details, and remember to install pycuda along the way:

https://blog.csdn.net/qq_43515934/article/details/123897435?spm=1001.2014.3001.5502

If you have an RTX 30-series GPU, refer to this post instead; the one above is generic but slower:

https://blog.csdn.net/qq_43515934/article/details/123951927?spm=1001.2014.3001.5501

3. Installing YOLOX

See my blog post for details:

https://blog.csdn.net/qq_43515934/article/details/123610689?spm=1001.2014.3001.5502

4. Installing torch2trt

Location: /home/zhangqi/Documents/Library

git clone https://github.com/NVIDIA-AI-IOT/torch2trt
cd torch2trt
python setup.py install

5. Preparing the engine file

Location: /home/zhangqi/Documents/Library/YOLOX

Modify the source file trt.py according to your device:

    parser.add_argument(
        "-w", '--workspace', type=int, default=32, help='max workspace size in detect'
    )

The default of 32 corresponds to a 4 GB workspace (the flag is used as a power of two, i.e. 1 << 32 bytes); adjust it to fit the GPU memory you actually have.
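As a sanity check on the numbers: the flag is an exponent, so the actual byte budget is 1 << workspace. This mirrors how YOLOX's trt.py passes the value on to torch2trt; treat that as an assumption and verify against your copy of trt.py.

```python
def workspace_bytes(exponent: int) -> int:
    # The --workspace flag is an exponent: the budget is 2**exponent bytes.
    return 1 << exponent

# 28 -> 0.25 GiB, 30 -> 1.0 GiB, 32 -> 4.0 GiB
for w in (28, 30, 32):
    print(f"workspace {w}: {workspace_bytes(w) / 2**30} GiB")
```

So dropping the flag from 32 to 30 cuts the build-time memory budget from 4 GiB to 1 GiB, which is often enough to get the conversion through on a smaller card.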

Generate the engine:

python3 tools/trt.py -f exps/default/yolox_x.py -c pre_model/yolox_x.pth

A larger workspace lets TensorRT explore more optimization tactics and produce a better-optimized engine, so raise the value inside trt.py (the default 32) when your GPU memory allows. On my machine, yolox_x only builds with workspace 32 and max 3, while yolox_nano works with workspace 64 and max 64; the right parameters differ per model and per machine.

6. Running the demo

Location: /home/zhangqi/Documents/Library/YOLOX/demo/TensorRT/cpp

First, edit CMakeLists.txt:

# cuda
include_directories(/data/cuda/cuda-10.2/cuda/include)
link_directories(/data/cuda/cuda-10.2/cuda/lib64)
# cudnn
include_directories(/data/cuda/cuda-10.2/cudnn/v8.0.4/include)
link_directories(/data/cuda/cuda-10.2/cudnn/v8.0.4/lib64)
# tensorrt
include_directories(/data/cuda/cuda-10.2/TensorRT/v7.2.1.6/include)
link_directories(/data/cuda/cuda-10.2/TensorRT/v7.2.1.6/lib)

Change all of these paths to wherever CUDA, cuDNN, and TensorRT are installed on your own machine, for example:

# include and link dirs of cuda and tensorrt, you need adapt them if yours are different
# cuda
include_directories(/usr/local/cuda/cuda-10.2/cuda/include)
link_directories(/usr/local/cuda/cuda-10.2/cuda/lib64)
# cudnn
include_directories(/usr/cuda/cuda-10.2/cudnn/v8.0.4/include)
link_directories(/usr/cuda/cuda-10.2/cudnn/v8.0.4/lib64)
# tensorrt
include_directories(/usr/cuda/cuda-10.2/TensorRT/v7.2.1.6/include)
link_directories(/usr/cuda/cuda-10.2/TensorRT/v7.2.1.6/lib)

If you can't find the paths and don't want to hunt for them, simply comment these lines out; the build will then fall back to the default install locations:

# cuda
# include_directories(/data/cuda/cuda-10.2/cuda/include)
# link_directories(/data/cuda/cuda-10.2/cuda/lib64)
# cudnn
# include_directories(/data/cuda/cuda-10.2/cudnn/v8.0.4/include)
# link_directories(/data/cuda/cuda-10.2/cudnn/v8.0.4/lib64)
# tensorrt
# include_directories(/data/cuda/cuda-10.2/TensorRT/v7.2.1.6/include)
# link_directories(/data/cuda/cuda-10.2/TensorRT/v7.2.1.6/lib)

Build:

mkdir build
cd build
cmake ..
make

Common errors and fixes

Error 1

fatal error: crt/host_defines.h: No such file or directory
  147 | #include "crt/host_defines.h"
      |          ^~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [CMakeFiles/yolox.dir/build.make:63: CMakeFiles/yolox.dir/yolox.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:76: CMakeFiles/yolox.dir/all] Error 2
make: *** [Makefile:84: all] Error 2

This means the default CUDA path in CMakeLists.txt is wrong.

First, search for the missing file:

updatedb
locate host_defines.h

Result:

/home/zhangqi/anaconda3/lib/python3.9/site-packages/nvidia/cuda_runtime/include/host_defines.h
/usr/local/cuda-11.1/targets/x86_64-linux/include/host_defines.h
/usr/local/cuda-11.1/targets/x86_64-linux/include/crt/host_defines.h
/usr/local/cuda-11.6/targets/x86_64-linux/include/host_defines.h

Only one of these results sits in a crt folder, so /usr/local/cuda-11.1/targets/x86_64-linux/include/ is the CUDA include directory. Update CMakeLists.txt accordingly:

# cuda
include_directories(/usr/local/cuda-11.1/targets/x86_64-linux/include)
link_directories(/usr/local/cuda-11.1/targets/x86_64-linux/lib)
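The locate-and-pick step above can be automated. Here is a minimal sketch (the helper name find_cuda_include is mine, not part of YOLOX) that scans candidate include roots for crt/host_defines.h:

```python
from pathlib import Path

def find_cuda_include(candidates):
    """Return the first candidate directory that contains crt/host_defines.h,
    i.e. a usable CUDA include root for CMakeLists.txt."""
    for root in candidates:
        if (Path(root) / "crt" / "host_defines.h").is_file():
            return str(root)
    return None

# Feed it the directories reported by `locate host_defines.h`, e.g.:
# find_cuda_include(["/usr/local/cuda-11.6/targets/x86_64-linux/include",
#                    "/usr/local/cuda-11.1/targets/x86_64-linux/include"])
```

Whatever directory it returns is the one to put in include_directories (and its sibling lib directory in link_directories).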
Error 2

error: looser throw specifier for ‘virtual void Logger::log(nvinfer1::ILogger::Severity, const char*)’
  239 |     void log(Severity severity, const char* msg) override
      |          ^~~

This is a version mismatch in the demo's bundled header. In logging.h, change line 239 from

void log(Severity severity, const char* msg) override

to

void log(Severity severity, nvinfer1::AsciiChar const* msg) noexcept

and the build will succeed.
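If you rebuild from a fresh checkout often, the one-line edit can be scripted. A minimal sketch, assuming your logging.h still carries the old signature (patch_logging_header is a hypothetical helper, not part of the repo):

```python
from pathlib import Path

OLD_SIG = "void log(Severity severity, const char* msg) override"
NEW_SIG = "void log(Severity severity, nvinfer1::AsciiChar const* msg) noexcept"

def patch_logging_header(path):
    """Swap the pre-TensorRT-8 log() signature for the noexcept variant.

    Returns True if the file was changed, False if it was already patched.
    """
    header = Path(path)
    text = header.read_text()
    if OLD_SIG not in text:
        return False
    header.write_text(text.replace(OLD_SIG, NEW_SIG))
    return True
```

Run it once on demo/TensorRT/cpp/logging.h before invoking cmake; a second run is a no-op.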

Run

Location: home/Documents/Library/YOLOX/demo/TensorRT/cpp/build

./yolox ../model_trt.engine -i ../../../../assets/dog.jpg
blob image

Common error

[04/05/2022-10:18:56] [E] [TRT] 3: [runtime.cpp::deserializeCudaEngine::37] Error Code 3: API Usage Error (Parameter check failed at: runtime/api/runtime.cpp::deserializeCudaEngine::37, condition: (blob) != nullptr
)
yolox: /home/zhangqi/Documents/Library/YOLOX/demo/TensorRT/cpp/yolox.cpp:493: int main(int, char**): Assertion `engine != nullptr' failed.
Aborted (core dumped)

The engine path ../model_trt.engine was mistyped, so model_trt.engine could not be found; correct the path.
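A tiny pre-flight check gives a friendlier message than the deserializeCudaEngine assertion. Sketch only; require_engine is my own name, not part of the demo:

```python
import os

def require_engine(path):
    """Fail early with a readable error instead of TensorRT's nullptr assert."""
    if not os.path.isfile(path):
        raise FileNotFoundError(
            f"engine file not found: {path!r}; check the path passed to ./yolox")
    return path
```

The same idea works as a shell `test -f` guard in front of the ./yolox invocation if you prefer to stay out of Python.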
