Installing and Verifying TensorRT 7.0 on Ubuntu

Installation environment

Ubuntu 18.04
TensorRT-7.0.0.11
CUDA 10.0
cuDNN 7.6.5
  • Installing the TensorRT 7.0 release

    Tar File Installation
    This section contains instructions for installing TensorRT from a tar file.
    About this task

    Note: Before issuing the following commands, you’ll need to replace 7.x.x.x with your specific TensorRT version. The following commands are examples.

    Procedure

    1. Install the following dependencies, if not already present:
      CUDA 10.0
      cuDNN 7.6.5
      Python 3 (Optional)
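
      Before moving on, it can help to confirm which CUDA and cuDNN versions are actually installed. A minimal check sketch (my own, not part of the NVIDIA guide; the header paths are assumptions for a default install):

      import re
      import subprocess

      # CUDA toolkit version reported by nvcc, e.g. "... release 10.0, V10.0.130"
      print(subprocess.check_output(["nvcc", "--version"]).decode().strip().splitlines()[-1])

      # cuDNN version from the header (location depends on how cuDNN was installed)
      for header in ("/usr/include/cudnn.h", "/usr/local/cuda/include/cudnn.h"):
          try:
              text = open(header).read()
          except IOError:
              continue
          parts = [re.search(r"#define CUDNN_%s (\d+)" % key, text).group(1)
                   for key in ("MAJOR", "MINOR", "PATCHLEVEL")]
          print("cuDNN", ".".join(parts), "from", header)
          break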

    2. Download the TensorRT tar file that matches the Linux distribution you are using.

    3. Choose where you want to install TensorRT. This tar file will install everything into a subdirectory called TensorRT-7.x.x.x.

    4. Unpack the tar file.

      version="7.x.x.x"  
      os="<os>"  
      arch=$(uname -m)      
      cuda="cuda-x.x"   
      cudnn="cudnn7.x"
      tar xzvf TensorRT-${version}.${os}.${arch}-gnu.${cuda}.${cudnn}.tar.gz  
      

      Where:

      • 7.x.x.x is your TensorRT version
      • <os> is:
        Ubuntu-16.04
        Ubuntu-18.04
        CentOS-7.6
      • cuda-x.x is your CUDA version (10.0 for the setup in this post).
      • cudnn7.x is your cuDNN version (7.6, i.e. 7.6.5 here).

      This directory will have sub-directories like lib, include, data, etc.
      ls TensorRT-${version}
      bin  data  doc  graphsurgeon  include  lib  python  samples  targets  TensorRT-Release-Notes.pdf  uff
      
    5. Add the absolute path to the TensorRT lib directory to the environment variable LD_LIBRARY_PATH:

      export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<TensorRT-${version}/lib>
      
    6. Install the Python TensorRT wheel file.

      cd TensorRT-${version}/python
      

      If using Python 2.7:

      sudo pip2 install tensorrt-*-cp27-none-linux_x86_64.whl
      

      If using Python 3.x:

      sudo pip3 install tensorrt-*-cp3x-none-linux_x86_64.whl
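
      To confirm the wheel went into the active Python environment, a quick import check (my own sketch, not an official step):

      import tensorrt

      # Should print the version that matches the wheel, e.g. 7.0.0.11
      print(tensorrt.__version__)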
      
    7. Install the Python UFF wheel file. This is only required if you plan to use TensorRT with TensorFlow.

      cd TensorRT-${version}/uff
      

      If using Python 2.7:

      sudo pip2 install uff-0.6.9-py2.py3-none-any.whl
      

      If using Python 3.x:

      sudo pip3 install uff-0.6.9-py2.py3-none-any.whl
      

      In either case, check the installation with:

      which convert-to-uff
      
    8. Install the Python graphsurgeon wheel file.

      cd TensorRT-${version}/graphsurgeon
      

      If using Python 2.7:

      sudo pip2 install graphsurgeon-0.4.5-py2.py3-none-any.whl
      

      If using Python 3.x:

      sudo pip3 install graphsurgeon-0.4.5-py2.py3-none-any.whl
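
      To confirm both the uff and graphsurgeon wheels are visible to Python, a small sketch (my own, not an official step) that reads the installed package metadata:

      import pkg_resources

      # Prints the installed wheel versions, e.g. uff 0.6.9 and graphsurgeon 0.4.5
      for name in ("uff", "graphsurgeon"):
          print(name, pkg_resources.get_distribution(name).version)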
      
    9. Verify the installation:

      a. Ensure that the installed files are located in the correct directories. For example, run the tree -d command to check whether all supported installed files are in place in the lib, include, data, etc… directories.
      b. Build and run one of the shipped samples, for example, sampleMNIST in the installed directory. You should be able to compile and execute the sample without additional settings. For more information, see the “Hello World” For TensorRT (sampleMNIST).
      c. The Python samples are in the samples/python directory; a minimal Python sanity check is also sketched below.
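
      As an end-to-end check of the Python API and the GPU, the following sketch (my own, not one of the shipped samples; it assumes the TensorRT 7 Python API) builds a trivial single-layer engine:

      import tensorrt as trt

      TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
      # TensorRT 7 networks are normally created in explicit-batch mode
      EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

      with trt.Builder(TRT_LOGGER) as builder, builder.create_network(EXPLICIT_BATCH) as network:
          # A one-layer network: input -> ReLU -> output, with an MNIST-sized input
          data = network.add_input("data", trt.float32, (1, 1, 28, 28))
          relu = network.add_activation(data, trt.ActivationType.RELU)
          network.mark_output(relu.get_output(0))

          builder.max_workspace_size = 1 << 20  # 1 MiB is enough for this toy network
          engine = builder.build_cuda_engine(network)
          print("engine built:", engine is not None)

      If this prints "engine built: True", the Python bindings, CUDA, and the driver are working together.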

  • Errors when building and running the TensorRT sample (sampleMNIST):

    ~/Downloads/TensorRT/TensorRT-7.0.0.11/samples/sampleMNIST$ make
    ../Makefile.config:7: CUDA_INSTALL_DIR variable is not specified, using /usr/local/cuda by default, use CUDA_INSTALL_DIR=<cuda_directory> to change.
    ../Makefile.config:10: CUDNN_INSTALL_DIR variable is not specified, using $CUDA_INSTALL_DIR by default, use CUDNN_INSTALL_DIR=<cudnn_directory> to change.
    make: Nothing to be done for 'all'.
    

    Solution: point the sample Makefiles at the CUDA and cuDNN install locations by adding the variables to ~/.bashrc, then reload the file:

    vim ~/.bashrc

    # tensorrt cuda and cudnn
    export CUDA_INSTALL_DIR=/usr/local/cuda
    export CUDNN_INSTALL_DIR=/usr/local/cuda
    source ~/.bashrc
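
    To verify the variables are visible in the shell where make runs, a quick check (my own sketch):

    import os

    # Both should print /usr/local/cuda once the shell has picked up ~/.bashrc
    for var in ("CUDA_INSTALL_DIR", "CUDNN_INSTALL_DIR"):
        print(var, "=", os.environ.get(var, "<not set>"))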
    

    ~/Downloads/TensorRT/TensorRT-7.0.0.11/samples/sampleMNIST$ make -j8
    make: Nothing to be done for 'all'.
    

    Solution: the previous build left stale targets behind, so clean and rebuild:

    make clean
    make -j8
    

    ~/Downloads/TensorRT/TensorRT-7.0.0.11/bin$ ./sample_mnist
    ./sample_mnist: error while loading shared libraries: libnvinfer.so.7: cannot open shared object file: No such file or directory
    
    

    Solution: add the TensorRT lib directory to LD_LIBRARY_PATH in ~/.bashrc, then reload the file:

    vim ~/.bashrc
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/xxx/Downloads/TensorRT/TensorRT-7.0.0.11/lib
    source ~/.bashrc
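
    To confirm the dynamic loader can now find the library, a small check (my own sketch; it relies on LD_LIBRARY_PATH being set before Python starts):

    import ctypes

    # Raises OSError if libnvinfer.so.7 is still not on the loader path
    ctypes.CDLL("libnvinfer.so.7")
    print("libnvinfer.so.7 loaded")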
    

    ~/Downloads/TensorRT/TensorRT-7.0.0.11/bin$ ./sample_mnist
    &&&& RUNNING TensorRT.sample_mnist # ./sample_mnist
    [07/14/2020-11:43:08] [I] Building and running a GPU inference engine for MNIST
    [07/14/2020-11:43:10] [I] [TRT] Detected 1 inputs and 1 output network tensors.
    [07/14/2020-11:43:10] [W] [TRT] Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
    Could not find 6.pgm in data directories:
        data/mnist/
        data/samples/mnist/
    &&&& FAILED
    

    Solution: the MNIST test images have not been downloaded yet; fetch them with the script shipped in the data directory:

    cd ~/Downloads/TensorRT/TensorRT-7.0.0.11/data/mnist
    python download_pgms.py
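
    Afterwards data/mnist should contain the digit images (0.pgm through 9.pgm). A quick check (my own sketch, assuming the same install path as above):

    import glob
    import os

    data_dir = os.path.expanduser("~/Downloads/TensorRT/TensorRT-7.0.0.11/data/mnist")
    pgms = sorted(os.path.basename(p) for p in glob.glob(os.path.join(data_dir, "*.pgm")))
    print(len(pgms), "PGM files:", pgms)

    Re-running ./sample_mnist from the bin directory should then find the images and complete successfully.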
    

