1. Export the model to ONNX
Two lines were changed from the official export script: both the dummy input and the model are loaded on the CPU. Otherwise export fails with a device-mismatch error (the input tensor is on one device and the model on another).
Note the opset_version = 10 setting: a value that is too high or too low makes the conversion fail. With opset_version = 11 the subsequent TensorRT conversion failed; changing it to 10 fixed the problem.
"""Exports a pytorch *.pt model to *.onnx format
Usage:
import torch
$ export PYTHONPATH="$PWD" && python models/onnx_export.py --weights ./weights/yolov5s.pt --img 640 --batch 1
"""
import argparse
import onnx
from models.common import *
if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', type=str, default='weights/best_can.pt', help='weights path')
    parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='image size')
    parser.add_argument('--batch-size', type=int, default=1, help='batch size')
    parser.add_argument('--device', default='cpu', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    opt = parser.parse_args()
    print(opt)
    device = torch_utils.select_device(opt.device)

    # Parameters
    f = opt.weights.replace('.pt', '.onnx')  # onnx filename
    # img = torch.zeros((opt.batch_size, 3, *opt.img_size))  # image size, (1, 3, 320, 192) iDetection
    img = torch.zeros((opt.batch_size, 3, opt.img_size[0], opt.img_size[1]), device=device)  # dummy input on the same device as the model

    # Load pytorch model
    google_utils.attempt_download(opt.weights)
    model = torch.load(opt.weights, map_location=device)['model']
    model.eval()
    model.fuse()

    # Export to onnx
    model.model[-1].export = True  # set Detect() layer export=True
    _ = model(img)  # dry run
    torch.onnx.export(model, img, f, verbose=False, opset_version=10, input_names=['images'],
                      output_names=['output'])  # output_names=['classes', 'boxes']

    # Check onnx model
    model = onnx.load(f)  # load onnx model
    onnx.checker.check_model(model)  # check onnx model
    print(onnx.helper.printable_graph(model.graph))  # print a human readable representation of the graph
    print('Export complete. ONNX model saved to %s\nView with https://github.com/lutzroeder/netron' % f)
2. To reduce errors in later steps, convert the ONNX model to a simplified one with onnx-simplifier
pip install onnx-simplifier
python -m onnxsim best.onnx best_sim.onnx
3. After conversion you can inspect the network structure (e.g. with Netron), or run inference with the simplified ONNX model to check that its outputs look correct.