Running a Compiled PyTorch Model in a C++ Environment


1. Preparation

1.1 First, convert the PyTorch model to a TorchScript model

Tracing is the recommended conversion method here. (Tracing records the operations executed on an example input; if the model's forward pass contains data-dependent control flow, torch.jit.script is the safer choice.)

import torch
from SSD.Build_ssd import SSD
from Configs import _C as cfg

# Initialize the model first
model = SSD(cfg)
model.eval()
# Load the trained weights
model.load_state_dict(torch.load('XXXpkl'))

# Build an example input
sample_input = torch.rand((1, 3, 320, 320))

# Pass an instance of the model and the example input to torch.jit.trace
traced_script_module = torch.jit.trace(model, sample_input)
# Save the traced model
traced_script_module.save("model.pt")

At this point the model has been saved as a TorchScript model, i.e. the model.pt file.

1.2 To load the model in C++, the application must depend on the PyTorch C++ API (LibTorch).

After downloading LibTorch, extract the archive. It unpacks into a libtorch/ directory that typically contains bin/, include/, lib/, and share/.

Then create a new folder (example-app in this walkthrough) to hold the project source and the model file.
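
Before wiring up the real model, it can save debugging time to compile a tiny sanity check confirming that LibTorch links correctly and that CUDA is visible. A minimal sketch (a separate source file, built the same way as the example in section 2):

#include <torch/torch.h>
#include <iostream>

// Quick sanity check: does LibTorch link, and is CUDA visible?
int main() {
    torch::Tensor t = torch::rand({2, 3});
    std::cout << t << std::endl;
    std::cout << "CUDA available: " << std::boolalpha
              << torch::cuda::is_available() << std::endl;
    return 0;
}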

2. Building

2.1 Write the .cpp file

As a check, the model is fed an all-ones tensor as input.

#include <torch/torch.h>
#include <torch/script.h>
#include <iostream>
#include <memory>
#include <vector>
#include <string>

int main(int argc, const char* argv[]) {
    // Load the TorchScript model
    torch::jit::script::Module module;
    try {
        module = torch::jit::load("/home/XXX/libtorch-1.2/example-app(复件)/model.pt");
    } catch (const c10::Error& e) {
        std::cerr << "error loading the model" << std::endl;
        return -1;
    }

    // Move the model to the GPU
    module.to(at::kCUDA);

    // Inference only, so disable gradient tracking
    torch::NoGradGuard no_grad;

    // Build the input: a 1x3x320x320 all-ones tensor on the GPU
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(torch::ones({1, 3, 320, 320}).to(at::kCUDA));

    // Forward pass
    torch::jit::IValue output = module.forward(inputs);
    std::cout << output << std::endl;

    return 0;
}
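
An SSD's forward() usually returns more than one tensor, in which case the output IValue printed above is a tuple. The sketch below shows one way to unpack it; the (scores, boxes) layout is an assumption about this particular SSD.forward() and may need adjusting:

#include <torch/script.h>
#include <iostream>

// Hedged sketch: unpack the IValue returned by module.forward().
// The (scores, boxes) element order is an assumption about SSD.forward();
// adjust the indices to match the actual return value.
void print_output(const torch::jit::IValue& output) {
    if (output.isTuple()) {
        auto elems = output.toTuple()->elements();
        torch::Tensor scores = elems[0].toTensor().to(at::kCPU);
        torch::Tensor boxes  = elems[1].toTensor().to(at::kCPU);
        std::cout << "scores: " << scores.sizes()
                  << ", boxes: " << boxes.sizes() << std::endl;
    } else if (output.isTensor()) {
        // Single-tensor output: move to the CPU before printing
        std::cout << output.toTensor().to(at::kCPU) << std::endl;
    }
}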

2.2 Write CMakeLists.txt

cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(example-app)

# Point CMake at the extracted LibTorch; this can also be supplied on the
# command line with -DCMAKE_PREFIX_PATH, as in section 2.3
set(CMAKE_PREFIX_PATH /home/super/libtorch-1.2/libtorch)

find_package(Torch REQUIRED)

add_executable(example-app example-app.cpp)
target_link_libraries(example-app ${TORCH_LIBRARIES})

set_property(TARGET example-app PROPERTY CXX_STANDARD 11)

2.3 Build

Open a terminal in the example-app folder and run:

mkdir build
cd build

# Note: keep the trailing two dots, since CMakeLists.txt is in the parent
# directory; it is safest to copy this command verbatim
cmake -DCMAKE_PREFIX_PATH=/home/super/libtorch-1.2/libtorch ..

Output:

-- The C compiler identification is GNU 7.4.0
-- The CXX compiler identification is GNU 7.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
-- Found CUDA: /usr/local/cuda-10.0 (found version "10.0") 
-- Caffe2: CUDA detected: 10.0
-- Caffe2: CUDA nvcc is: /usr/local/cuda-10.0/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda-10.0
-- Caffe2: Header version is: 10.0
-- Found CUDNN: /usr/local/cuda-10.0/include  
-- Found cuDNN: v7.5.0  (include: /usr/local/cuda-10.0/include, library: /usr/local/cuda-10.0/lib64/libcudnn.so)
-- Autodetected CUDA architecture(s):  7.5 7.5
-- Added CUDA NVCC flags for: -gencode;arch=compute_75,code=sm_75
-- Found torch: /home/super/libtorch-1.2/libtorch/lib/libtorch.so  
-- Configuring done
-- Generating done
-- Build files have been written to: /home/super/libtorch-1.2/example-app(复件)/build

Then, still in the build directory, run:

make

Output:

Scanning dependencies of target example-app
[ 50%] Building CXX object CMakeFiles/example-app.dir/example-app.cpp.o
[100%] Linking CXX executable example-app
[100%] Built target example-app


3. Running

./example-app


The output matches the Python side, so the conversion succeeded.
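
Once the values match, a natural next step is to measure inference latency on the C++ side. A rough sketch, reusing the model path and 1x3x320x320 input shape from the example above (both are assumptions, not requirements):

#include <torch/torch.h>
#include <torch/script.h>
#include <chrono>
#include <iostream>
#include <vector>

// Rough latency measurement for the traced model.
int main() {
    torch::jit::script::Module module = torch::jit::load("model.pt");
    module.to(at::kCUDA);

    torch::NoGradGuard no_grad;
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(torch::ones({1, 3, 320, 320}).to(at::kCUDA));

    // Warm-up pass so one-off CUDA initialization is not counted
    module.forward(inputs);

    const int runs = 100;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < runs; ++i) {
        torch::jit::IValue out = module.forward(inputs);
        // Copy one result back to the CPU; CUDA ops on a stream execute in
        // order, so this blocks until the forward pass has actually finished
        torch::Tensor first = out.isTuple()
            ? out.toTuple()->elements()[0].toTensor()
            : out.toTensor();
        first.to(at::kCPU);
    }
    auto end = std::chrono::steady_clock::now();

    std::cout << "avg forward: "
              << std::chrono::duration<double, std::milli>(end - start).count() / runs
              << " ms" << std::endl;
    return 0;
}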

