YOLOv5 ONNX export: convert error with --grid

Version used

  • https://github.com/ultralytics/yolov5/tree/v5.0

Install onnx

torch.onnx.export(model, img, f, verbose=False, opset_version=12, input_names=['images'],
                          output_names=['classes', 'boxes'] if y is None else ['output'],
                          dynamic_axes={'images': {0: 'batch', 2: 'height', 3: 'width'},  # size(1,3,640,640)
                                        'output': {0: 'batch', 2: 'y', 3: 'x'}} if opt.dynamic else None)

Since opset_version=12 is passed to torch.onnx.export, install a matching version of onnx.

:~/Documents/pachong/yolov5$ pip install onnx==1.12.0
Collecting onnx==1.12.0
  Downloading onnx-1.12.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (13.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 13.1/13.1 MB 32.9 kB/s eta 0:00:00
Requirement already satisfied: numpy>=1.16.6 in /home/pdd/anaconda3/envs/yolo/lib/python3.7/site-packages (from onnx==1.12.0) (1.21.6)
Requirement already satisfied: typing-extensions>=3.6.2.1 in /home/pdd/anaconda3/envs/yolo/lib/python3.7/site-packages (from onnx==1.12.0) (4.4.0)
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', ConnectionResetError(104, '连接被对方重设'))': /simple/protobuf/
Collecting protobuf<=3.20.1,>=3.12.2
  Downloading protobuf-3.20.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl (1.0 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.0/1.0 MB 26.1 kB/s eta 0:00:00
Installing collected packages: protobuf, onnx
  Attempting uninstall: protobuf
    Found existing installation: protobuf 3.20.3
    Uninstalling protobuf-3.20.3:
      Successfully uninstalled protobuf-3.20.3
  Attempting uninstall: onnx
    Found existing installation: onnx 1.13.0
    Uninstalling onnx-1.13.0:
      Successfully uninstalled onnx-1.13.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorboard 2.10.1 requires protobuf<3.20,>=3.9.2, but you have protobuf 3.20.1 which is incompatible.
Successfully installed onnx-1.12.0 protobuf-3.20.1

Export the model

  • python ./models/export.py --weights ./weights/yolov5s.pt --img 640 --batch 1

When given a 640x640 input image, the model outputs the following 3 tensors.

// https://medium.com/axinc-ai/yolov5-the-latest-model-for-object-detection-b13320ec516b
(1, 3, 80, 80, 85) # anchor 0
(1, 3, 40, 40, 85) # anchor 1
(1, 3, 20, 20, 85) # anchor 2
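
As a quick sanity check, the exported output shapes can be read back with onnxruntime. A minimal sketch, assuming onnxruntime is installed and the model was saved to the default ./weights/yolov5s.onnx path:

# Sanity check: print the output shapes of the exported graph (exported WITHOUT --grid).
# Assumes `pip install onnxruntime`; the path below matches the default export location.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession('./weights/yolov5s.onnx')
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)  # same size as the dry-run input
for out in sess.run(None, {'images': dummy}):
    print(out.shape)  # expect (1,3,80,80,85), (1,3,40,40,85), (1,3,20,20,85)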


With --grid, the YOLOv5 output grids are flattened and concatenated into a single output (for a 640x640 input, shape (1, 25200, 85)).

  • python ./models/export.py --weights ./weights/yolov5s.pt --img 640 --batch 1 --grid


Where --grid takes effect

    # Input
    img = torch.zeros(opt.batch_size, 3, *opt.img_size).to(device)  # image size(1,3,320,192) iDetection

    # Update model
    for k, m in model.named_modules():
        m._non_persistent_buffers_set = set()  # pytorch 1.6.0 compatibility
        if isinstance(m, models.common.Conv):  # assign export-friendly activations
            if isinstance(m.act, nn.Hardswish):
                m.act = Hardswish()
            elif isinstance(m.act, nn.SiLU):
                m.act = SiLU()
        # elif isinstance(m, models.yolo.Detect):
        #     m.forward = m.forward_export  # assign forward (optional)
    model.model[-1].export = not opt.grid  # set Detect() layer grid export
    y = model(img)  # dry run
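
For intuition, the effect of this flag can be checked directly in Python. A minimal sketch, assuming the v5.0 repo's attempt_load helper is importable and ./weights/yolov5s.pt exists (this is not part of export.py):

# Minimal sketch: see what Detect returns depending on the export flag.
# Assumes the v5.0 repo is on PYTHONPATH and ./weights/yolov5s.pt exists.
import torch
from models.experimental import attempt_load

model = attempt_load('./weights/yolov5s.pt', map_location='cpu')
img = torch.zeros(1, 3, 640, 640)

with torch.no_grad():
    model.model[-1].export = False        # what --grid gives you: decode + concatenate
    pred, feats = model(img)
    print(pred.shape)                     # torch.Size([1, 25200, 85])

    model.model[-1].export = True         # without --grid: raw per-level grids
    y = model(img)
    print([t.shape for t in y])           # (1,3,80,80,85), (1,3,40,40,85), (1,3,20,20,85)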


Detect's class attribute export


class Detect(nn.Module):
    stride = None  # strides computed during build
    export = False  # onnx export

    def __init__(self, nc=80, anchors=(), ch=()):  # detection layer
        super(Detect, self).__init__()
        self.nc = nc  # number of classes
        self.no = nc + 5  # number of outputs per anchor
        self.nl = len(anchors)  # number of detection layers
        self.na = len(anchors[0]) // 2  # number of anchors
        self.grid = [torch.zeros(1)] * self.nl  # init grid
        a = torch.tensor(anchors).float().view(self.nl, -1, 2)
        self.register_buffer('anchors', a)  # shape(nl,na,2)
        self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2))  # shape(nl,1,na,1,1,2)
        self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch)  # output conv
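
For reference, the grid that forward later adds to the xy predictions is just a meshgrid of cell indices with shape (1, 1, ny, nx, 2). An illustrative standalone sketch (the actual helper is Detect._make_grid in models/yolo.py and may differ slightly):

# Illustrative standalone version of the grid used by Detect (shape (1,1,ny,nx,2)).
import torch

def make_grid(nx=20, ny=20):
    yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)])
    return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float()

g = make_grid(4, 3)
print(g.shape)      # torch.Size([1, 1, 3, 4, 2])
print(g[0, 0, 0])   # first row of cell offsets: [[0.,0.],[1.,0.],[2.,0.],[3.,0.]]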

Other details

class Focus(nn.Module):
    # Focus wh information into c-space
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups
        super(Focus, self).__init__()
        self.conv = Conv(c1 * 4, c2, k, s, p, g, act)
        # self.contract = Contract(gain=2)

    # def forward(self, x):  # x(b,c,w,h) -> y(b,4c,w/2,h/2)
    #     return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))
    #     # return self.conv(self.contract(x))

    def forward(self, x):  # x(b,c,w,h) -> y(b,4c,w/2,h/2)
        if torch.onnx.is_in_onnx_export():  # TODO
            a, b = x[..., ::2, :].transpose(-2, -1), x[..., 1::2, :].transpose(-2, -1)
            c = torch.cat([a[..., ::2, :], b[..., ::2, :], a[..., 1::2, :], b[..., 1::2, :]], 1).transpose(-2, -1)
            return self.conv(c)
        else:
            return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))
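
The ONNX-export branch above is just a transpose-based rewrite of the same space-to-depth slicing (presumably to avoid strided slicing along the last axis in the exported graph). A quick standalone check that the two paths produce identical tensors (illustrative, not part of the repo):

# Check that the ONNX-friendly rewrite matches the original Focus slicing.
import torch

x = torch.randn(1, 3, 8, 8)

# original strided slicing
ref = torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1)

# transpose-based rewrite used during ONNX export
a, b = x[..., ::2, :].transpose(-2, -1), x[..., 1::2, :].transpose(-2, -1)
alt = torch.cat([a[..., ::2, :], b[..., ::2, :], a[..., 1::2, :], b[..., 1::2, :]], 1).transpose(-2, -1)

print(torch.equal(ref, alt))  # True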

ONNX Simplifier

  • Note: I did not actually end up using this, because after installing it the ONNX export ran into problems again
  • https://github.com/daquexian/onnx-simplifier
  • pip install onnx-simplifier -i https://pypi.tuna.tsinghua.edu.cn/simple
  • python -m onnxsim ./weights/yolov5s.onnx ./weights/yolov5.onnx


Export with ONNX Simplifier fails when --grid is used: https://github.com/ultralytics/yolov5/issues/2558

python ./models/export.py --weights ./weights/best.pt --img 640 --batch 1 (reference: https://www.cnblogs.com/ryzemagic/p/17089528.html)

CG

First of all, onnx-simplifier needs to be installed with pip install onnx-simplifier; then the simplification code is:

    # ONNX export
    try:
        import onnx
        from onnxsim import simplify

        print('\nStarting ONNX export with onnx %s...' % onnx.__version__)
        f = opt.weights.replace('.pt', '.onnx')  # filename
        torch.onnx.export(model, img, f, verbose=False, opset_version=12, input_names=['images'],
                          output_names=['output'] if y is None else ['output'])

        # Checks
        onnx_model = onnx.load(f)  # load onnx model
        model_simp, check = simplify(onnx_model)
        assert check, "Simplified ONNX model could not be validated"
        onnx.save(model_simp, f)
        # print(onnx.helper.printable_graph(onnx_model.graph))  # print a human readable model
        print('ONNX export success, saved as %s' % f)
    except Exception as e:
        print('ONNX export failure: %s' % e)

class Detect(nn.Module):
    stride = None  # strides computed during build
    export = False  # onnx export

    def __init__(self, nc=80, anchors=(), ch=()):  # detection layer
        super(Detect, self).__init__()
        self.nc = nc  # number of classes
        self.no = nc + 5  # number of outputs per anchor
        self.nl = len(anchors)  # number of detection layers
        self.na = len(anchors[0]) // 2  # number of anchors
        self.grid = [torch.zeros(1)] * self.nl  # init grid
        a = torch.tensor(anchors).float().view(self.nl, -1, 2)
        self.register_buffer('anchors', a)  # shape(nl,na,2)
        self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2))  # shape(nl,1,na,1,1,2)
        self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch)  # output conv

    def forward(self, x):
        # x = x.copy()  # for profiling
        z = []  # inference output
        self.training |= self.export
        for i in range(self.nl):
            x[i] = self.m[i](x[i])  # conv
            bs, _, ny, nx = x[i].shape  # x(bs,255,20,20) to x(bs,3,20,20,85)
            x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()

            if not self.training:  # inference
                if self.grid[i].shape[2:4] != x[i].shape[2:4]:
                    self.grid[i] = self._make_grid(nx, ny).to(x[i].device)

                y = x[i].sigmoid()
                y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i]  # xy
                y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh
                z.append(y.view(bs, -1, self.no))

        return x if self.training else (torch.cat(z, 1), x)
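
With --grid, the xy/wh decoding above is baked into the graph, so the first ONNX output is already (1, 25200, 85) in input-image pixels; post-processing reduces to confidence filtering plus NMS. A minimal sketch of that post-processing (assumes onnxruntime and a letterboxed 640x640 input; NMS itself is left to e.g. cv2.dnn.NMSBoxes):

# Minimal post-processing sketch for the --grid export: (1, 25200, 85) -> candidate boxes.
# Assumes onnxruntime and an already letterboxed/normalized 640x640 input.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession('./weights/yolov5s.onnx')
img = np.random.rand(1, 3, 640, 640).astype(np.float32)   # placeholder input
pred = sess.run(None, {'images': img})[0][0]               # first output, batch 0 -> (25200, 85)

scores = pred[:, 4:5] * pred[:, 5:]                        # objectness * class scores
keep = scores.max(axis=1) > 0.25                           # confidence threshold
xywh = pred[keep, :4]                                      # cx, cy, w, h in input pixels
xyxy = np.concatenate([xywh[:, :2] - xywh[:, 2:4] / 2,
                       xywh[:, :2] + xywh[:, 2:4] / 2], axis=1)
print(xyxy.shape)                                          # boxes to feed into NMS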

Typical load-time errors seen with this exported model (the first two appear to come from OpenCV's DNN module, the last from ONNX Runtime):

error: (-2:Unspecified error) Can't create layer "onnx_node!Range_288" of type "Range" in function 'getLayerInstance'

error: (-2:Unspecified error) Can't create layer "onnx_node!ScatterND_378" of type "ScatterND" in function 'getLayerInstance'

what(): Load model from /home/pdd/Documents/yolov5-5.0/weights/yolov5.onnx failed:Node (Mul_918) Op (Mul) [ShapeInferenceError] Incompatible dimensions

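The ShapeInferenceError above only shows up when the runtime loads the model; the same check can be run offline with the onnx package to locate the offending node before deployment. A minimal sketch (strict_mode is available in recent onnx releases; the path is illustrative):

# Reproduce shape-inference problems offline instead of at load time.
import onnx
from onnx import shape_inference

model = onnx.load('./weights/yolov5.onnx')
onnx.checker.check_model(model)                         # structural validity
shape_inference.infer_shapes(model, strict_mode=True)   # raises on incompatible dimensions
print('shape inference passed')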
