Building ONNX Runtime's C++ TensorRT Version from Source
I. Source downloads
ONNX Runtime source on GitHub:
https://github.com/microsoft/onnxruntime
TensorRT 7.2.2.3:
https://developer.nvidia.com/nvidia-tensorrt-7x-download
II. Steps
1. Base environment setup
- Ubuntu 18.04
- CMake 3.20.3
- Python 3.6.9 (usually preinstalled with the system; 3.8 also works)
  Note: building ONNX Runtime's TensorRT support requires TensorRT 7.2.2.3, which supports Python 2.7 and 3.4-3.8; 3.9 and later are not supported yet.
- TensorRT 7.2.2.3
- The official onnxruntime-tensorrt build instructions
- NVIDIA driver
- CUDA Toolkit 11.0 Download
- cuDNN v8.0.5 (November 9th, 2020), for CUDA 11.0
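Before installing anything, it helps to confirm the toolchain versions match the list above; a quick check, assuming the base packages are already installed:

```shell
# Verify the base environment (versions should match the prerequisite list)
nvidia-smi          # confirms the NVIDIA driver is loaded
cmake --version     # expect 3.20.x
python3 --version   # expect 3.6-3.8
gcc --version       # the compiler that will build onnxruntime
```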
2. Installation and configuration
# Run the CUDA installer directly
sudo sh ./cuda_11.0.2_450.51.05_linux.run
# Install cuDNN (tar-file method: copy headers and libraries into the CUDA tree)
tar -zxvf cudnn-11.0-linux-x64-v8.0.5.39.tgz
sudo cp -rf cuda/include/cudnn* /usr/local/cuda/include/
sudo cp -rf cuda/lib64/libcudnn* /usr/local/cuda/lib64/
sudo chmod a+r /usr/local/cuda/include/cudnn*
sudo chmod a+r /usr/local/cuda/lib64/libcudnn*
# Alternatively, install cuDNN from the Debian packages
sudo dpkg -i libcudnn8_8.0.5.39-1+cuda11.0_amd64.deb
sudo dpkg -i libcudnn8-dev_8.0.5.39-1+cuda11.0_amd64.deb
sudo dpkg -i libcudnn8-samples_8.0.5.39-1+cuda11.0_amd64.deb
# Install TensorRT 7.2.2.3
tar -zxvf TensorRT-7.2.2.3.Ubuntu-18.04.x86_64-gnu.cuda-11.0.cudnn8.0.tar.gz -C /home/
# After extracting, follow section 4.5 "Tar File Installation" of the official guide:
TensorRT Tar File Installation
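The tar-file installation in the guide above boils down to a few steps; a sketch, assuming the extraction location used here (the wheel filenames vary by Python version, so globs are used — check the actual names inside your archive):

```shell
cd /home/TensorRT-7.2.2.3
# Make the TensorRT libraries visible to the loader for this session
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/TensorRT-7.2.2.3/lib
# Install the Python bindings matching your interpreter (cp36 for Python 3.6)
cd python && sudo pip3 install tensorrt-*-cp36-none-linux_x86_64.whl
# Optionally install the UFF and graphsurgeon wheels shipped in the archive
cd ../uff && sudo pip3 install uff-*-py2.py3-none-any.whl
cd ../graphsurgeon && sudo pip3 install graphsurgeon-*-py2.py3-none-any.whl
```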
3. Set environment variables
# vim /etc/profile
export PATH=$PATH:/usr/local/cuda-11.0/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.0/lib64
export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/cuda-11.0/lib64
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/TensorRT-7.2.2.3/lib
export PATH=$PATH:/home/TensorRT-7.2.2.3
# Save and exit, then reload the profile
source /etc/profile
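A quick sanity check that the variables took effect, assuming the paths set above:

```shell
nvcc --version                                 # should report the CUDA 11.0 compiler
echo $LD_LIBRARY_PATH | tr ':' '\n'            # cuda-11.0 and TensorRT lib dirs listed
ls /home/TensorRT-7.2.2.3/lib/libnvinfer.so*   # TensorRT runtime library is in place
```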
III. Building from source
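build.sh lives in the repository root, and ONNX Runtime pulls in its dependencies as git submodules, so the clone must be recursive; a sketch:

```shell
# Clone the repository together with its submodules
git clone --recursive https://github.com/microsoft/onnxruntime.git
cd onnxruntime
# If it was already cloned without --recursive, fetch the submodules afterwards:
# git submodule update --init --recursive
```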
./build.sh --build_shared_lib --skip_tests --config Release --use_cuda --cudnn_home /usr/local/cuda/ --cuda_home /usr/local/cuda --use_tensorrt --tensorrt_home /home/TensorRT-7.2.2.3/
(screenshot: build output of onnxruntime_test)
The finished libraries are placed in onnxruntime/build/Linux/Release.
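To confirm that TensorRT was actually linked into the build, inspecting the produced shared libraries is a quick check (the exact library filenames depend on the ONNX Runtime version, so adjust to what `ls` shows):

```shell
cd onnxruntime/build/Linux/Release
ls *.so*                                           # shared libraries from --build_shared_lib
ldd libonnxruntime.so* | grep -E 'nvinfer|cudart'  # TensorRT and CUDA runtime linkage
```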
IV. Testing
Not shown here for now; the test code will be published later on my personal GitHub.
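Until then, linking a C++ program against the freshly built library can be sketched as follows (main.cpp, the clone location, and the in-repo include layout are assumptions for illustration):

```shell
ORT_ROOT=$HOME/onnxruntime   # assumed clone location; adjust to yours
# Compile a C++ source that uses the ONNX Runtime C++ API against the built library
g++ -std=c++14 main.cpp \
    -I$ORT_ROOT/include/onnxruntime/core/session \
    -L$ORT_ROOT/build/Linux/Release \
    -lonnxruntime \
    -Wl,-rpath,$ORT_ROOT/build/Linux/Release \
    -o ort_test
./ort_test
```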