ONNX pitfall notes

Pitfall 1: multiple inputs

Multiple inputs have to be packed into a single tuple for torch.onnx.export.

My first attempt at the export call raised:

 TypeError: forward() takes 2 positional arguments but 3 were given

Corrected, the inputs are wrapped so that the args tuple lines up with forward()'s parameters:
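Here is a minimal sketch of the pattern, assuming (as the error message suggests) a model whose forward() takes a single tuple of tensors; the model and tensor names are placeholders rather than the original code. torch.onnx.export unpacks the args tuple into forward()'s positional arguments, so a tuple of two tensors becomes two arguments; wrapping it in one more tuple passes it as a single argument.

    import torch
    import torch.nn as nn

    class MultiInputNet(nn.Module):      # placeholder model, not the original network
        def forward(self, inputs):       # forward(self, inputs): 2 positional arguments
            x, y = inputs                # expects ONE argument holding both tensors
            return x + y

    model = MultiInputNet().eval()
    a = torch.randn(1, 3, 224, 224)      # placeholder inputs
    b = torch.randn(1, 3, 224, 224)

    # Unpacked to forward(a, b) -> "forward() takes 2 positional arguments but 3 were given"
    # torch.onnx.export(model, (a, b), "model.onnx")

    # Wrapping once more passes (a, b) as the single `inputs` argument:
    torch.onnx.export(model, ((a, b),), "model.onnx", opset_version=13)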

Pitfall 2: adaptive pooling layers

ONNX export of operator adaptive pooling, since output_size is not constant

See this article for reference:

自适应池化层快速转换为池化层 (Quickly converting an adaptive pooling layer to a regular pooling layer), by 无情的阅读机器 on CSDN
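The gist, per that title, is to turn the adaptive layer into a plain pooling layer once the input size is fixed. A minimal sketch with assumed sizes (the 8x8 input and 2x2 output below are placeholders): when the input spatial size is known and divisible by the output size, an AdaptiveAvgPool2d is equivalent to an AvgPool2d whose stride and kernel are computed from the two sizes, and the plain layer exports without complaint.

    import torch
    import torch.nn as nn

    in_h, in_w = 8, 8        # known, fixed input size of the feature map (assumed)
    out_h, out_w = 2, 2      # the output_size the adaptive layer was using (assumed)

    stride = (in_h // out_h, in_w // out_w)
    kernel = (in_h - (out_h - 1) * stride[0], in_w - (out_w - 1) * stride[1])

    adaptive = nn.AdaptiveAvgPool2d((out_h, out_w))           # breaks the export
    fixed = nn.AvgPool2d(kernel_size=kernel, stride=stride)   # exports fine

    x = torch.randn(1, 256, in_h, in_w)
    assert torch.allclose(adaptive(x), fixed(x), atol=1e-6)   # identical when sizes divide evenly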

Pitfall 3: grid_sampler

The cause is that the ONNX exporter does not support F.grid_sample (the grid_sampler op) at this opset:

Exporting the operator grid_sampler to ONNX opset version 13 is not supported. 

The fix is to swap the operator:

Replace grid_sample with bilinear_grid_sample from mmcv.

The source is reproduced below; paste it directly into your project:

import torch
import torch.nn.functional as F


def bilinear_grid_sample(im, grid, align_corners=False):
    """Bilinear grid sampling built from ONNX-exportable ops.

    im:   feature map of shape (N, C, H, W)
    grid: sampling grid of shape (N, Hg, Wg, 2) with coordinates in [-1, 1]
    """
    n, c, h, w = im.shape
    gn, gh, gw, _ = grid.shape
    assert n == gn

    x = grid[:, :, :, 0]
    y = grid[:, :, :, 1]

    # Map normalized coordinates in [-1, 1] to pixel coordinates
    if align_corners:
        x = ((x + 1) / 2) * (w - 1)
        y = ((y + 1) / 2) * (h - 1)
    else:
        x = ((x + 1) * w - 1) / 2
        y = ((y + 1) * h - 1) / 2

    x = x.view(n, -1)
    y = y.view(n, -1)

    # Integer corners surrounding each sampling point
    x0 = torch.floor(x).long()
    y0 = torch.floor(y).long()
    x1 = x0 + 1
    y1 = y0 + 1

    # Bilinear interpolation weights of the four corners
    wa = ((x1 - x) * (y1 - y)).unsqueeze(1)
    wb = ((x1 - x) * (y - y0)).unsqueeze(1)
    wc = ((x - x0) * (y1 - y)).unsqueeze(1)
    wd = ((x - x0) * (y - y0)).unsqueeze(1)

    # Apply grid_sample's default zero padding
    im_padded = F.pad(im, pad=[1, 1, 1, 1], mode='constant', value=0)
    padded_h = h + 2
    padded_w = w + 2
    # Shift point positions to account for the padding
    x0, x1, y0, y1 = x0 + 1, x1 + 1, y0 + 1, y1 + 1

    # Clip coordinates to the padded image size. The scalar tensors are created
    # on the same device as the input, otherwise torch.where fails with a
    # CPU/GPU mismatch when the model runs on the GPU.
    device = im.device
    x0 = torch.where(x0 < 0, torch.tensor(0, device=device), x0)
    x0 = torch.where(x0 > padded_w - 1, torch.tensor(padded_w - 1, device=device), x0)
    x1 = torch.where(x1 < 0, torch.tensor(0, device=device), x1)
    x1 = torch.where(x1 > padded_w - 1, torch.tensor(padded_w - 1, device=device), x1)
    y0 = torch.where(y0 < 0, torch.tensor(0, device=device), y0)
    y0 = torch.where(y0 > padded_h - 1, torch.tensor(padded_h - 1, device=device), y0)
    y1 = torch.where(y1 < 0, torch.tensor(0, device=device), y1)
    y1 = torch.where(y1 > padded_h - 1, torch.tensor(padded_h - 1, device=device), y1)

    # Flatten the spatial dims so the corner values can be gathered by index
    im_padded = im_padded.view(n, c, -1)

    x0_y0 = (x0 + y0 * padded_w).unsqueeze(1).expand(-1, c, -1)
    x0_y1 = (x0 + y1 * padded_w).unsqueeze(1).expand(-1, c, -1)
    x1_y0 = (x1 + y0 * padded_w).unsqueeze(1).expand(-1, c, -1)
    x1_y1 = (x1 + y1 * padded_w).unsqueeze(1).expand(-1, c, -1)

    # Gather the four corner values and blend them with the bilinear weights
    Ia = torch.gather(im_padded, 2, x0_y0)
    Ib = torch.gather(im_padded, 2, x0_y1)
    Ic = torch.gather(im_padded, 2, x1_y0)
    Id = torch.gather(im_padded, 2, x1_y1)

    return (Ia * wa + Ib * wb + Ic * wc + Id * wd).reshape(n, c, gh, gw)

If you then hit RuntimeError: view size is not compatible with input tensor's size and stride,

it means x and y are not contiguous in memory: view() requires the tensor's elements to be stored at consecutive addresses, and the slicing of grid can leave them non-contiguous.

Adding .contiguous() before .view makes them contiguous and fixes it:

    x = x.contiguous().view(n, -1)

    y = y.contiguous().view(n, -1)
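
With the function pasted into the project, the change at the call site is a one-line swap. A sketch with placeholder tensors (feat and grid are illustrative names; bilinear_grid_sample is the function defined above):

    import torch
    import torch.nn.functional as F

    feat = torch.randn(1, 64, 32, 32)           # (N, C, H, W) feature map
    grid = torch.rand(1, 16, 16, 2) * 2 - 1     # (N, Hg, Wg, 2), coordinates in [-1, 1]

    # Original call that blocks the export:
    # out = F.grid_sample(feat, grid, mode='bilinear', align_corners=False)

    # ONNX-friendly replacement with matching arguments:
    out = bilinear_grid_sample(feat, grid, align_corners=False)

    # Optional sanity check before exporting; the two should closely agree
    # for bilinear mode with zero padding.
    ref = F.grid_sample(feat, grid, mode='bilinear', align_corners=False)
    print(torch.allclose(out, ref, atol=1e-5))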
