In practice, a trained model often needs a different input size at deployment time than the one used during training. The YOLO-series models, for example, are usually trained on 640x640 images, but for deployment I want the model to take 1920x1080 frames (adjusted to 1920x1088, since the input size must be a multiple of 32). So how do we export the corresponding ONNX model from the .pt file? How do we export it with a static batch, and how with a dynamic batch?
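The reason 1080 is adjusted to 1088 is that YOLOv8's maximum stride is 32, so both height and width must be divisible by 32. A quick sketch of the rounding (my own helper function, not part of Ultralytics):

def round_up_to_multiple(x, base=32):
    # round x up to the nearest multiple of base (32 = the network's maximum stride)
    return ((x + base - 1) // base) * base

print(round_up_to_multiple(1920), round_up_to_multiple(1080))  # -> 1920 1088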
First, my environment:
Ultralytics YOLOv8.0.53 Python-3.9.13 torch-1.12.1+cu116 CPU
ONNX: starting export with onnx 1.13.0
In YOLOv8, the official authors have already written the export code; it looks like this:
output_names = ['output0', 'output1'] if isinstance(self.model, SegmentationModel) else ['output0']
dynamic = self.args.dynamic
if dynamic:
    dynamic = {'images': {0: 'batch', 2: 'height', 3: 'width'}}  # shape(1,3,640,640)
    if isinstance(self.model, SegmentationModel):
        dynamic['output0'] = {0: 'batch', 1: 'anchors'}  # shape(1,25200,85)
        dynamic['output1'] = {0: 'batch', 2: 'mask_height', 3: 'mask_width'}  # shape(1,32,160,160)
    elif isinstance(self.model, DetectionModel):
        dynamic['output0'] = {0: 'batch', 1: 'anchors'}  # shape(1,25200,85)

torch.onnx.export(
    self.model.cpu() if dynamic else self.model,  # --dynamic only compatible with cpu
    self.im.cpu() if dynamic else self.im,
    f,
    verbose=False,
    opset_version=self.args.opset or get_latest_opset(),
    do_constant_folding=True,  # WARNING: DNN inference with torch>=1.12 may require do_constant_folding=False
    input_names=['images'],
    output_names=output_names,
    dynamic_axes=dynamic or None)

# Checks
model_onnx = onnx.load(f)  # load onnx model
# onnx.checker.check_model(model_onnx)  # check onnx model

# Simplify
if self.args.simplify:
    try:
        import onnxsim
        LOGGER.info(f'{prefix} simplifying with onnxsim {onnxsim.__version__}...')
        # subprocess.run(f'onnxsim {f} {f}', shell=True)
        model_onnx, check = onnxsim.simplify(model_onnx)
        assert check, 'Simplified ONNX model could not be validated'
    except Exception as e:
        LOGGER.info(f'{prefix} simplifier failure: {e}')

# Metadata
for k, v in self.metadata.items():
    meta = model_onnx.metadata_props.add()
    meta.key, meta.value = k, str(v)

onnx.save(model_onnx, f)
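For reference, this export path is normally reached through the Ultralytics API rather than by calling torch.onnx.export yourself. A minimal sketch of a dynamic export (whether imgsz accepts a [height, width] list exactly like this may depend on the Ultralytics version):

from ultralytics import YOLO

model = YOLO('yolov8n-seg.pt')
# dynamic=True enables the dynamic_axes branch shown above;
# simplify=True runs onnxsim on the exported graph.
model.export(format='onnx', imgsz=[1088, 1920], dynamic=True, simplify=True)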
If we export with a dynamic batch, we would like the input to be batch x 3 x 1920 x 1088 and the output to be batch x 3 x number1 x number2, where number1 and number2 should be determined automatically. However, when exporting with the code above, the model's input comes out as expected, but the output does not:
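One way to see this is to print the input and output shapes recorded in the exported graph; a minimal sketch (the filename yolov8n-seg.onnx is a placeholder for your own export):

import onnx

model = onnx.load('yolov8n-seg.onnx')

def dims(value_info):
    # each dimension is either a fixed size (dim_value) or a symbolic name (dim_param)
    return [d.dim_param or d.dim_value for d in value_info.type.tensor_type.shape.dim]

for inp in model.graph.input:
    print('input :', inp.name, dims(inp))
for out in model.graph.output:
    print('output:', out.name, dims(out))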
To meet the requirement, we therefore cannot export with a dynamic batch; we have to use a static-batch export instead, as follows:
import torch
import onnx
import onnxsim

# This snippet is placed inside the official export code above, where self.model is available.
# Set batch as needed; it is usually 1, but 10 is used here for demonstration.
x = torch.randn([10, 3, 1920, 1088])
print(x.shape)
torch.onnx.export(self.model, x, "Net.onnx",
                  input_names=['images'],
                  output_names=['output0', 'output1'],
                  verbose=False)
model_onnx1 = onnx.load("Net.onnx")  # load the exported onnx model
# onnxsim.simplify is required here, otherwise the output shapes are not fully resolved.
model_onnx1, check = onnxsim.simplify(model_onnx1)
onnx.save(model_onnx1, "Net.onnx")
Result:
Console output during export:
ONNX: starting export with onnx 1.13.0...
torch.Size([10, 3, 1920, 1088])
11
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
(the same warning is printed 12 times in total)
ONNX: simplifying with onnxsim 0.4.17...
ONNX: export success 27.7s, saved as yolov8n-seg-test.onnx (13.1 MB)
Export complete (29.1s)
Results saved to D:\workspace\yolov8\ultralytics
Predict: yolo predict task=segment model=yolov8n-seg-test.onnx imgsz=640
Validate: yolo val task=segment model=yolov8n-seg-test.onnx imgsz=640 data=coco.yaml
Visualize: https://netron.app
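Finally, to confirm that Net.onnx now has fully static input and output shapes, you can run a dummy tensor through it with onnxruntime (a sketch, assuming onnxruntime is installed):

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession('Net.onnx', providers=['CPUExecutionProvider'])
dummy = np.random.randn(10, 3, 1920, 1088).astype(np.float32)
outputs = session.run(None, {'images': dummy})
for meta, out in zip(session.get_outputs(), outputs):
    print(meta.name, out.shape)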