Fixing the onnx.export warning: "WARNING: The shape inference of prim::Constant type is missing..."

When converting a PyTorch model to ONNX, the following warning is reported:

```
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
```

By testing the model module by module, I found the warning is caused by the `F.interpolate` function (switching to the deprecated `F.upsample` triggers it as well). Two solutions are given below:

Method 1: Replace the upsampling with a transposed convolution (deconvolution). (This is only a guess; I have not tried it.)

Method 2: Add `opset_version=11` to the `onnx.export` call. (This is the method I used myself; a sketch is shown below.)
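
For reference, here is a minimal, hypothetical sketch of method 2. The model, layer sizes, tensor shapes, and output file name are made up for illustration; the relevant parts are the `F.interpolate` call that triggers the warning and the `opset_version=11` argument passed to `torch.onnx.export`:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpsampleNet(nn.Module):
    """Hypothetical minimal model: one conv layer followed by 2x bilinear
    upsampling via F.interpolate (the op that caused the warning here)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)

    def forward(self, x):
        x = self.conv(x)
        # With the default opset, exporting this op can emit the
        # prim::Constant shape-inference warning.
        return F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)

model = UpsampleNet().eval()
dummy_input = torch.randn(1, 3, 64, 64)

# Method 2: export with opset_version=11, which lowers the upsampling to the
# opset-11 Resize op and silenced the warning in my case.
torch.onnx.export(
    model,
    dummy_input,
    "upsample_net.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)
```

Method 1 would instead mean replacing the `F.interpolate` call in `forward` with an `nn.ConvTranspose2d` layer; as noted above, I have not verified that this avoids the warning.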
