TensorRT pitfall diary: converting a trained model to ONNX and then to an engine
What TensorRT is used for will not be covered here.
Before using TensorRT for acceleration, you first need a trained model; the trained model is then converted to ONNX, and the ONNX model is converted to an engine.
1. Converting the trained model to ONNX
Only the PyTorch-to-ONNX conversion is shown here; there are plenty of tutorials online for other frameworks.
import torch

# Load the trained model (best.pt must contain the full model object, not just a state_dict)
model = torch.load('best.pt')
model.eval()

input_names = ['input']
output_names = ['output']
# Dummy input matching the model's expected input shape (N, C, H, W)
x = torch.randn(1, 3, 32, 32, requires_grad=True)

torch.onnx.export(model, x, 'flame.onnx',
                  input_names=input_names,
                  output_names=output_names,
                  verbose=True)
Run the script and the ONNX file is exported.
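As an optional sanity check before handing the file to TensorRT (this step is not in the original workflow and assumes the onnx package is installed), you can load the exported model and run the ONNX checker:

import onnx

# Load and validate the exported model; raises an exception if the graph is malformed
onnx_model = onnx.load('flame.onnx')
onnx.checker.check_model(onnx_model)
# Optional: print a human-readable version of the graph for inspection
print(onnx.helper.printable_graph(onnx_model.graph))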
2. Converting the ONNX model to an engine
You can use the trtexec tool that ships with TensorRT to convert the ONNX model to an engine directly.
Go to the bin directory under the TensorRT installation path and you will find trtexec. On Ubuntu it is typically located at:
/usr/src/tensorrt/bin
Run trtexec -h to view the help output (a sample conversion command is shown after the help output below):
=== Model Options ===
--uff=<file> UFF model
--onnx=<file> ONNX model
--model=<file> Caffe model (default = no model, random weights used)
--deploy=<file> Caffe prototxt file
--output=<name>[,<name>]* Output names (it can be specified multiple times); at least one output is required for UFF and Caffe
--uffInput=<name>,X,Y,Z Input blob name and its dimensions (X,Y,Z=C,H,W), it can be specified multiple times; at least one is required for UFF models
--uffNHWC Set if inputs are in the NHWC layout instead of NCHW (use X,Y,Z=H,W,C order in --uffInput)
=== Build Options ===
--maxBatch Set max batch size and build an implicit batch engine (default = 1)
--explicitBatch Use explicit batch sizes when building the engine (default = implicit)
--minShapes=spec Build with dynamic shapes using a profile with the min shapes provided
--optShapes=spec Build with dynamic shapes using a profile with the opt shapes provided
--maxShapes=spec Build with dynamic shapes using a profile with the max shapes provided
--minShapesCalib=spec
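For reference, a minimal conversion command might look like the following. The file names are only examples; --saveEngine writes the serialized engine to disk, and --fp16 optionally enables half-precision builds if the GPU supports it:

# Example: build an engine from the ONNX file exported above (file names are placeholders)
/usr/src/tensorrt/bin/trtexec --onnx=flame.onnx --saveEngine=flame.engine --fp16

If the ONNX model was exported with dynamic input shapes, you would also supply an optimization profile, e.g. --minShapes=input:1x3x32x32 --optShapes=input:1x3x32x32 --maxShapes=input:4x3x32x32, where the tensor name must match the input name used during ONNX export.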