Torch to ONNX acceleration: errors hit (and fixed) when converting a complex PyTorch network to ONNX


RuntimeError: Exporting the operator grid_sampler to ONNX opset version 9 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.

This means that opset version 9 is too low: the grid_sampler operator is not supported at that version. The fix is to set opset_version to a higher value in the torch.onnx.export arguments; in my case the highest supported version was 16 (you can check in \torch\onnx\symbolic_helper.py).

https://github.com/onnx/onnx/blob/main/docs/Operators.md lists the opset versions supported by each operator. There you can see that grid_sampler (ONNX GridSample) only exists from opset 16 onward, so torch.onnx.export must be called with at least opset_version=16.

# around line 835
_default_onnx_opset_version = 9
_onnx_main_opset = 14
_onnx_stable_opsets = [7, 8, 9, 10, 11, 12, 13]
_export_onnx_opset_version = _default_onnx_opset_version
Traceback (most recent call last):
  File "D:\soft\anaconda\envs\project\lib\site-packages\IPython\core\interactiveshell.py", line 3457, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-35-1befd64b13a9>", line 8, in <module>
    output_names=renderer_out_names
  File "D:\soft\anaconda\envs\project\lib\site-packages\torch\onnx\__init__.py", line 320, in export
    custom_opsets, enable_onnx_checker, use_external_data_format)
  File "D:\soft\anaconda\envs\project\lib\site-packages\torch\onnx\utils.py", line 111, in export
    custom_opsets=custom_opsets, use_external_data_format=use_external_data_format)
  File "D:\soft\anaconda\envs\project\lib\site-packages\torch\onnx\utils.py", line 729, in _export
    dynamic_axes=dynamic_axes)
  File "D:\soft\anaconda\envs\project\lib\site-packages\torch\onnx\utils.py", line 501, in _model_to_graph
    module=module)
  File "D:\soft\anaconda\envs\project\lib\site-packages\torch\onnx\utils.py", line 216, in _optimize_graph
    graph = torch._C._jit_pass_onnx(graph, operator_export_type)
  File "D:\soft\anaconda\envs\project\lib\site-packages\torch\onnx\__init__.py", line 373, in _run_symbolic_function
    return utils._run_symbolic_function(*args, **kwargs)
  File "D:\soft\anaconda\envs\project\lib\site-packages\torch\onnx\utils.py", line 1028, in _run_symbolic_function
    symbolic_fn = _find_symbolic_in_registry(domain, op_name, opset_version, operator_export_type)
  File "D:\soft\anaconda\envs\project\lib\site-packages\torch\onnx\utils.py", line 982, in _find_symbolic_in_registry
    return sym_registry.get_registered_op(op_name, domain, opset_version)
  File "D:\soft\anaconda\envs\project\lib\site-packages\torch\onnx\symbolic_registry.py", line 125, in get_registered_op
    raise RuntimeError(msg)
RuntimeError: Exporting the operator grid_sampler to ONNX opset version 9 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.

RuntimeError: Unsupported: ONNX export of instance_norm for unknown channel size.

After raising the opset version, this error appears. It means the channel size of the instance_norm input is unknown.

What follows mostly records the tracing process; the actual fixes come later. The goal is to find the root cause: although the error message says instance_norm is the problem, the reported location is inside torch.nn.functional rather than my own code, so I had to trace back step by step to whichever part of my code actually triggers it.
There were many detours along the way, because the direct cause cannot be pinned down in one step. For example, the error here mentions instance norm, which narrows the search to the instance-norm-related parts of my code; but testing showed that the instance norm alone was fine, and the failure was actually triggered by an earlier problem that had raised no error of its own.
On top of that, several problems were entangled: fixing one exposed another, and different versions conflicted with each other, so just locating the root cause took a long time.

TL;DR:

After switching to opset 16 to fix the grid_sampler problem above, the export failed with "channel size of instance_norm is unknown". Inspecting the intermediate ONNX graph showed that an earlier resize/interpolate operation is to blame: the output size of the ONNX Resize is unknown in the graph, which in turn makes the channel size of the instance_norm input unknown. With opset 11 this problem does not occur, but then the grid_sampler operator is undefined. So there are two ways to fix it:

  1. Add support for the grid_sampler operator under opset 11
  2. Under opset 16, modify the ONNX Resize operation so that it outputs an explicit shape

The detailed walkthrough below is just one way of approaching this; if you run into other problems, you can trace them back step by step in the same manner.
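For option 1, PyTorch allows registering a symbolic function for an op that has no exporter at a given opset. The sketch below is only illustrative: `custom::grid_sampler` is a hypothetical custom-domain op (not a standard ONNX operator), so the target runtime must supply a matching plugin, and `_maybe_get_const` is a private torch helper.

```python
import torch
from torch.onnx import register_custom_op_symbolic
from torch.onnx.symbolic_helper import _maybe_get_const

def grid_sampler_symbolic(g, input, grid, mode, padding_mode, align_corners):
    # pull the constant attribute values out of the graph nodes
    mode = _maybe_get_const(mode, "i")
    padding_mode = _maybe_get_const(padding_mode, "i")
    align_corners = _maybe_get_const(align_corners, "b")
    # emit a custom-domain node; "custom::grid_sampler" is an assumption,
    # your deployment runtime needs an implementation for it
    return g.op("custom::grid_sampler", input, grid,
                mode_i=mode, padding_mode_i=padding_mode,
                align_corners_i=int(align_corners))

# register it for opset 11 so export(opset_version=11) finds a symbolic function
register_custom_op_symbolic("::grid_sampler", grid_sampler_symbolic, 11)
```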

WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.

Traceback (most recent call last):
  File "D:\soft\anaconda\envs\project\lib\site-packages\IPython\core\interactiveshell.py", line 3457, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-36-94c4200ff6e0>", line 8, in <module>
    output_names=renderer_out_names
  File "D:\soft\anaconda\envs\project\lib\site-packages\torch\onnx\__init__.py", line 320, in export
    custom_opsets, enable_onnx_checker, use_external_data_format)
  File "D:\soft\anaconda\envs\project\lib\site-packages\torch\onnx\utils.py", line 111, in export
    custom_opsets=custom_opsets, use_external_data_format=use_external_data_format)
  File "D:\soft\anaconda\envs\project\lib\site-packages\torch\onnx\utils.py", line 729, in _export
    dynamic_axes=dynamic_axes)
  File "D:\soft\anaconda\envs\project\lib\site-packages\torch\onnx\utils.py", line 501, in _model_to_graph
    module=module)
  File "D:\soft\anaconda\envs\project\lib\site-packages\torch\onnx\utils.py", line 216, in _optimize_graph
    graph = torch._C._jit_pass_onnx(graph, operator_export_type)
  File "D:\soft\anaconda\envs\project\lib\site-packages\torch\onnx\__init__.py", line 373, in _run_symbolic_function
    return utils._run_symbolic_function(*args, **kwargs)
  File "D:\soft\anaconda\envs\project\lib\site-packages\torch\onnx\utils.py", line 1032, in _run_symbolic_function
    return symbolic_fn(g, *inputs, **attrs)
  File "D:\soft\anaconda\envs\project\lib\site-packages\torch\onnx\symbolic_helper.py", line 172, in wrapper
    return fn(g, *args, **kwargs)
  File "D:\soft\anaconda\envs\project\lib\site-packages\torch\onnx\symbolic_opset9.py", line 1395, in instance_norm
    raise RuntimeError("Unsupported: ONNX export of instance_norm for unknown "
RuntimeError: Unsupported: ONNX export of instance_norm for unknown channel size.


489 defined in (%489 : Float(*, *, *, *, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%478, %flow_module.spade_layer_1.conv_1.weight, %flow_module.spade_layer_1.conv_1.bias) # D:\soft\anaconda\envs\project\lib\site-packages\torch\nn\modules\conv.py:443:0
  %425 : Tensor? = prim::Constant()
  %426 : Tensor? = prim::Constant()
  %427 : Float(*, *, *, *, strides=[61440, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Resize[coordinate_transformation_mode="asymmetric", cubic_coeff_a=-0.75, mode="nearest", nearest_mode="floor"](%326, %425, %426, %424) # D:\soft\anaconda\envs\project\lib\site-packages\torch\nn\functional.py:3712:0
  # After deleting the perceptual part and replacing the downsample directly with torch.nn.functional.interpolate, the result is still the same
  %391 : Long(2, strides=[1], device=cpu) = onnx::Cast[to=7](%384)
  %392 : Long(4, strides=[1], device=cpu) = onnx::Concat[axis=0](%390, %391)
  %393 : Tensor? = prim::Constant()
  %394 : Tensor? = prim::Constant()
  %395 : Float(*, *, *, *, strides=[61440, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Resize[coordinate_transformation_mode="asymmetric", cubic_coeff_a=-0.75, mode="nearest", nearest_mode="floor"](%294, %393, %394, %392) # D:\soft\anaconda\envs\project\lib\site-packages\torch\nn\functional.py:3712:0
  %396 : NoneType = prim::Constant()
  %397 : NoneType = prim::Constant()
  %398 : NoneType = prim::Constant()
  %399 : NoneType = prim::Constant()
Renderer(
  (flow_module): DenseFlowNetwork(
    (conv1): Conv2d(6, 32, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
    (conv1_bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv1_relu): ReLU()
    (conv2): Conv2d(32, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    (conv2_bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv2_relu): ReLU()
    (spade_layer_1): SPADE(
      (conv_1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (conv_2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (leaky_relu): LeakyReLU(negative_slope=0.2)
      (spade_layer_1): SPADELayer(
        (instance_norm): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (conv1): Conv2d(15, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (gamma): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (beta): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (spade_layer_2): SPADELayer(
        (instance_norm): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (conv1): Conv2d(15, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (gamma): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (beta): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
    )
    (spade_layer_2): SPADE(
      (conv_1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (conv_2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (leaky_relu): LeakyReLU(negative_slope=0.2)
      (spade_layer_1): SPADELayer(
        (instance_norm): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (conv1): Conv2d(15, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (gamma): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (beta): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (spade_layer_2): SPADELayer(
        (instance_norm): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (conv1): Conv2d(15, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (gamma): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (beta): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
    )
    (pixel_shuffle_1): PixelShuffle(upscale_factor=2)
    (spade_layer_4): SPADE(
      (conv_1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (conv_2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (leaky_relu): LeakyReLU(negative_slope=0.2)
      (spade_layer_1): SPADELayer(
        (instance_norm): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (conv1): Conv2d(15, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (gamma): Conv2d(256, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (beta): Conv2d(256, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (spade_layer_2): SPADELayer(
        (instance_norm): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (conv1): Conv2d(15, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (gamma): Conv2d(256, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (beta): Conv2d(256, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
    )
    (conv_4): Conv2d(64, 2, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
    (conv_5): Sequential(
      (0): Conv2d(64, 32, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
      (1): ReLU()
      (2): Conv2d(32, 1, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
      (3): Sigmoid()
    )
  )
  (translation): TranslationNetwork(
    (audio_encoder): Sequential(
      (0): Conv2d(
        (conv_block): Sequential(
          (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (act): ReLU()
      )
      (1): Conv2d(
        (conv_block): Sequential(
          (0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (act): ReLU()
      )
      (2): Conv2d(
        (conv_block): Sequential(
          (0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (act): ReLU()
      )
      (3): Conv2d(
        (conv_block): Sequential(
          (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(3, 1), padding=(1, 1))
          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (act): ReLU()
      )
      (4): Conv2d(
        (conv_block): Sequential(
          (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (act): ReLU()
      )
      (5): Conv2d(
        (conv_block): Sequential(
          (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (act): ReLU()
      )
      (6): Conv2d(
        (conv_block): Sequential(
          (0): Conv2d(64, 128, kernel_size=(3, 3), stride=(3, 3), padding=(1, 1))
          (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (act): ReLU()
      )
      (7): Conv2d(
        (conv_block): Sequential(
          (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (act): ReLU()
      )
      (8): Conv2d(
        (conv_block): Sequential(
          (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (act): ReLU()
      )
      (9): Conv2d(
        (conv_block): Sequential(
          (0): Conv2d(128, 256, kernel_size=(3, 3), stride=(3, 2), padding=(1, 1))
          (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (act): ReLU()
      )
      (10): Conv2d(
        (conv_block): Sequential(
          (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (act): ReLU()
      )
      (11): Conv2d(
        (conv_block): Sequential(
          (0): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1))
          (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (act): ReLU()
      )
      (12): Conv2d(
        (conv_block): Sequential(
          (0): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1))
          (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (act): ReLU()
      )
    )
    (conv1): Conv2d(18, 32, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), bias=False)
    (conv1_bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv1_relu): ReLU()
    (conv2): Conv2d(32, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
    (conv2_bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv2_relu): ReLU()
    (spade_1): SPADE(
      (conv_1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (conv_2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (leaky_relu): LeakyReLU(negative_slope=0.2)
      (spade_layer_1): SPADELayer(
        (instance_norm): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (gamma): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (beta): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (spade_layer_2): SPADELayer(
        (instance_norm): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (gamma): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (beta): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
    )
    (adain_1): AdaIN(
      (conv_1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (conv_2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (leaky_relu): LeakyReLU(negative_slope=0.2)
      (adain_layer_1): AdaINLayer(
        (InstanceNorm2d): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (mlp_shared): Sequential(
          (0): Linear(in_features=512, out_features=128, bias=True)
          (1): ReLU()
        )
        (mlp_gamma): Linear(in_features=128, out_features=256, bias=True)
        (mlp_beta): Linear(in_features=128, out_features=256, bias=True)
      )
      (adain_layer_2): AdaINLayer(
        (InstanceNorm2d): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (mlp_shared): Sequential(
          (0): Linear(in_features=512, out_features=128, bias=True)
          (1): ReLU()
        )
        (mlp_gamma): Linear(in_features=128, out_features=256, bias=True)
        (mlp_beta): Linear(in_features=128, out_features=256, bias=True)
      )
    )
    (pixel_suffle_1): PixelShuffle(upscale_factor=2)
    (spade_2): SPADE(
      (conv_1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (conv_2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (leaky_relu): LeakyReLU(negative_slope=0.2)
      (spade_layer_1): SPADELayer(
        (instance_norm): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (conv1): Conv2d(32, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (gamma): Conv2d(256, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (beta): Conv2d(256, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (spade_layer_2): SPADELayer(
        (instance_norm): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (conv1): Conv2d(32, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (gamma): Conv2d(256, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (beta): Conv2d(256, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
    )
    (adain_2): AdaIN(
      (conv_1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (conv_2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (leaky_relu): LeakyReLU(negative_slope=0.2)
      (adain_layer_1): AdaINLayer(
        (InstanceNorm2d): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (mlp_shared): Sequential(
          (0): Linear(in_features=512, out_features=128, bias=True)
          (1): ReLU()
        )
        (mlp_gamma): Linear(in_features=128, out_features=64, bias=True)
        (mlp_beta): Linear(in_features=128, out_features=64, bias=True)
      )
      (adain_layer_2): AdaINLayer(
        (InstanceNorm2d): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (mlp_shared): Sequential(
          (0): Linear(in_features=512, out_features=128, bias=True)
          (1): ReLU()
        )
        (mlp_gamma): Linear(in_features=128, out_features=64, bias=True)
        (mlp_beta): Linear(in_features=128, out_features=64, bias=True)
      )
    )
    (spade_4): SPADE(
      (conv_1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (conv_2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (leaky_relu): LeakyReLU(negative_slope=0.2)
      (spade_layer_1): SPADELayer(
        (instance_norm): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (conv1): Conv2d(3, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (gamma): Conv2d(256, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (beta): Conv2d(256, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (spade_layer_2): SPADELayer(
        (instance_norm): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
        (conv1): Conv2d(3, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (gamma): Conv2d(256, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (beta): Conv2d(256, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
    )
    (leaky_relu): LeakyReLU(negative_slope=0.01)
    (conv_last): Conv2d(64, 3, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), bias=False)
    (Sigmoid): Sigmoid()
  )
  (perceptual): PerceptualLoss(
    (model): _PerceptualNetwork(
      (network): Sequential(
        (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): ReLU(inplace=True)
        (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (3): ReLU(inplace=True)
        (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (6): ReLU(inplace=True)
        (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (8): ReLU(inplace=True)
        (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (11): ReLU(inplace=True)
        (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (13): ReLU(inplace=True)
        (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (15): ReLU(inplace=True)
        (16): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (17): ReLU(inplace=True)
        (18): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        (19): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (20): ReLU(inplace=True)
        (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (22): ReLU(inplace=True)
        (23): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (24): ReLU(inplace=True)
        (25): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (26): ReLU(inplace=True)
        (27): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (29): ReLU(inplace=True)
        (30): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (31): ReLU(inplace=True)
        (32): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (33): ReLU(inplace=True)
        (34): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (35): ReLU(inplace=True)
        (36): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      )
    )
    (criterion): L1Loss()
  )
)

So the reported problem likely lies in instance norm. Searching my network, several places do use nn.InstanceNorm2d: in AdaINLayer and SPADELayer.

Convert the sub-networks separately and see:

DenseFlowNetwork contains both interpolate and instance norm; converting it fails with the same error as converting the whole Renderer to ONNX.
TranslationNetwork contains no interpolate, so I tried converting just TranslationNetwork to see what it reports: it runs through cleanly, which shows that:

The instance norm in TranslationNetwork converts to ONNX without problems, so instance norm itself is fine; the problem actually lies in interpolate.

Having narrowed the problem down to this sub-network, continue the analysis there.

DenseFlowNetwork()

  %199 : NoneType = prim::Constant()
  %200 : Long(4, strides=[1], device=cpu) = onnx::Shape(%108)
  %201 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={0}]()
  %202 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={0}]()
  %203 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={2}]()
  %204 : Long(2, strides=[1], device=cpu) = onnx::Slice(%200, %202, %203, %201)
  %205 : Long(2, strides=[1], device=cpu) = onnx::Cast[to=7](%198)
  %206 : Long(4, strides=[1], device=cpu) = onnx::Concat[axis=0](%204, %205)
  %207 : Tensor? = prim::Constant()
  %208 : Tensor? = prim::Constant()
  %209 : Float(*, *, *, *, strides=[61440, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Resize[coordinate_transformation_mode="asymmetric", cubic_coeff_a=-0.75, mode="nearest", nearest_mode="floor"](%108, %207, %208, %206) # D:\soft\anaconda\envs\project\lib\site-packages\torch\nn\functional.py:3712:0
  %210 : NoneType = prim::Constant()
  %211 : NoneType = prim::Constant()
  %212 : NoneType = prim::Constant()
  %213 : NoneType = prim::Constant()
  %214 : Bool(requires_grad=0, device=cpu) = onnx::Constant[value={1}]()
  %215 : Double(requires_grad=0, device=cpu) = onnx::Constant[value={0.1}]()

Tracing the inputs of %209, i.e. (%108, %207, %208, %206): %207 and %208 both have an undetermined type (Tensor?), while %108 can be mapped back to a concrete variable in my code. Following that variable shows it is used in an interpolate call, which confirms that interpolate is the source of the problem; the two values with undetermined type should be inputs constructed inside interpolate.

Try a newer torch, which ships a newer opset

Upgrading torch from 1.10 to 1.12.1 raises the maximum opset from 14 to 16; exporting with opset 16 gives exactly the same result:

%209 : Tensor? = prim::Constant() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3938:0

From the location given in that message, the responsible function can be traced to the following spot in torch.nn.functional:

 if input.dim() == 3 and mode == "nearest":
     return torch._C._nn.upsample_nearest1d(input, output_size, scale_factors)
 if input.dim() == 4 and mode == "nearest":
     return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors)
 if input.dim() == 5 and mode == "nearest":
     return torch._C._nn.upsample_nearest3d(input, output_size, scale_factors)

The problem originates from:
torch._C._nn.upsample_nearest2d(input, output_size, scale_factors)
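One user-side workaround for this, sketched here under the assumption that the feature-map size at the call site is fixed, is to pass an explicit integer `size=` to interpolate rather than a `scale_factor` or a size computed from another tensor's shape; concrete Python ints become constants in the traced graph, so the exported onnx::Resize carries an explicit target shape.

```python
import torch
import torch.nn.functional as F

def downsample_fixed(x, out_h, out_w):
    # A concrete (int, int) size is baked into the traced graph as a constant,
    # so onnx::Resize gets an explicit output shape and downstream ops
    # (instance_norm here) see known channel/spatial dimensions.
    return F.interpolate(x, size=(int(out_h), int(out_w)), mode="nearest")

x = torch.randn(1, 15, 128, 128)  # shapes assumed for illustration
y = downsample_fixed(x, 64, 64)
```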

The graph:

graph(%0 : Float(1, 3, 3, 128, 128, strides=[147456, 49152, 16384, 128, 1], requires_grad=0, device=cpu),
      %1 : Float(1, 3, 3, 128, 128, strides=[147456, 49152, 16384, 128, 1], requires_grad=0, device=cpu),
      %T_driving_sketch : Float(1, 5, 3, 128, 128, strides=[245760, 49152, 16384, 128, 1], requires_grad=0, device=cpu),
      %conv1.weight : Float(32, 6, 7, 7, strides=[294, 49, 7, 1], requires_grad=1, device=cpu),
      %conv1.bias : Float(32, strides=[1], requires_grad=1, device=cpu),
      %conv1_bn.weight : Float(32, strides=[1], requires_grad=1, device=cpu),
      %conv1_bn.bias : Float(32, strides=[1], requires_grad=1, device=cpu),
      %conv1_bn.running_mean : Float(32, strides=[1], requires_grad=0, device=cpu),
      %conv1_bn.running_var : Float(32, strides=[1], requires_grad=0, device=cpu),
      %conv1_bn.num_batches_tracked : Long(requires_grad=0, device=cpu),
      %conv2.weight : Float(256, 32, 3, 3, strides=[288, 9, 3, 1], requires_grad=1, device=cpu),
      %conv2.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %conv2_bn.weight : Float(256, strides=[1], requires_grad=1, device=cpu),
      %conv2_bn.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %conv2_bn.running_mean : Float(256, strides=[1], requires_grad=0, device=cpu),
      %conv2_bn.running_var : Float(256, strides=[1], requires_grad=0, device=cpu),
      %conv2_bn.num_batches_tracked : Long(requires_grad=0, device=cpu),
      %spade_layer_1.conv_1.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_1.conv_1.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_1.conv_2.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_1.conv_2.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_1.conv1.weight : Float(256, 15, 3, 3, strides=[135, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_1.conv1.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_1.gamma.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_1.gamma.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_1.beta.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_1.beta.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_2.conv1.weight : Float(256, 15, 3, 3, strides=[135, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_2.conv1.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_2.gamma.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_2.gamma.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_2.beta.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_2.beta.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_2.conv_1.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_2.conv_1.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_2.conv_2.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_2.conv_2.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_1.conv1.weight : Float(256, 15, 3, 3, strides=[135, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_1.conv1.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_1.gamma.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_1.gamma.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_1.beta.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_1.beta.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_2.conv1.weight : Float(256, 15, 3, 3, strides=[135, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_2.conv1.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_2.gamma.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_2.gamma.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_2.beta.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_2.beta.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_4.conv_1.weight : Float(64, 64, 3, 3, strides=[576, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_4.conv_1.bias : Float(64, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_4.conv_2.weight : Float(64, 64, 3, 3, strides=[576, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_4.conv_2.bias : Float(64, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_1.conv1.weight : Float(256, 15, 3, 3, strides=[135, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_1.conv1.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_1.gamma.weight : Float(64, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_1.gamma.bias : Float(64, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_1.beta.weight : Float(64, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_1.beta.bias : Float(64, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_2.conv1.weight : Float(256, 15, 3, 3, strides=[135, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_2.conv1.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_2.gamma.weight : Float(64, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_2.gamma.bias : Float(64, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_2.beta.weight : Float(64, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_2.beta.bias : Float(64, strides=[1], requires_grad=1, device=cpu),
      %conv_4.weight : Float(2, 64, 7, 7, strides=[3136, 49, 7, 1], requires_grad=1, device=cpu),
      %conv_4.bias : Float(2, strides=[1], requires_grad=1, device=cpu),
      %conv_5.0.weight : Float(32, 64, 7, 7, strides=[3136, 49, 7, 1], requires_grad=1, device=cpu),
      %conv_5.0.bias : Float(32, strides=[1], requires_grad=1, device=cpu),
      %conv_5.2.weight : Float(1, 32, 7, 7, strides=[1568, 49, 7, 1], requires_grad=1, device=cpu),
      %conv_5.2.bias : Float(1, strides=[1], requires_grad=1, device=cpu)):
  %71 : Long(device=cpu) = onnx::Constant[value={0}]()
  %72 : Long(device=cpu) = onnx::Constant[value={0}]()
  %73 : Long(device=cpu) = onnx::Constant[value={9223372036854775807}]()
  %74 : Long(device=cpu) = onnx::Constant[value={1}]()
  %75 : Long(device=cpu) = onnx::Constant[value={1}]()
  %76 : Long(device=cpu) = onnx::Constant[value={0}]()
  %77 : Float(1, 3, 128, 128, strides=[245760, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Gather[axis=1](%T_driving_sketch, %76) # D:\selfcode.py:218:0
  %78 : Long(device=cpu) = onnx::Constant[value={0}]()
  %79 : Long(device=cpu) = onnx::Constant[value={0}]()
  %80 : Long(device=cpu) = onnx::Constant[value={9223372036854775807}]()
  %81 : Long(device=cpu) = onnx::Constant[value={1}]()
  %82 : Long(device=cpu) = onnx::Constant[value={1}]()
  %83 : Long(device=cpu) = onnx::Constant[value={1}]()
  %84 : Float(1, 3, 128, 128, strides=[245760, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Gather[axis=1](%T_driving_sketch, %83) # D:\selfcode.py:218:0
  %85 : Long(device=cpu) = onnx::Constant[value={0}]()
  %86 : Long(device=cpu) = onnx::Constant[value={0}]()
  %87 : Long(device=cpu) = onnx::Constant[value={9223372036854775807}]()
  %88 : Long(device=cpu) = onnx::Constant[value={1}]()
  %89 : Long(device=cpu) = onnx::Constant[value={1}]()
  %90 : Long(device=cpu) = onnx::Constant[value={2}]()
  %91 : Float(1, 3, 128, 128, strides=[245760, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Gather[axis=1](%T_driving_sketch, %90) # D:\selfcode.py:218:0
  %92 : Long(device=cpu) = onnx::Constant[value={0}]()
  %93 : Long(device=cpu) = onnx::Constant[value={0}]()
  %94 : Long(device=cpu) = onnx::Constant[value={9223372036854775807}]()
  %95 : Long(device=cpu) = onnx::Constant[value={1}]()
  %96 : Long(device=cpu) = onnx::Constant[value={1}]()
  %97 : Long(device=cpu) = onnx::Constant[value={3}]()
  %98 : Float(1, 3, 128, 128, strides=[245760, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Gather[axis=1](%T_driving_sketch, %97) # D:\selfcode.py:218:0
  %99 : Long(device=cpu) = onnx::Constant[value={0}]()
  %100 : Long(device=cpu) = onnx::Constant[value={0}]()
  %101 : Long(device=cpu) = onnx::Constant[value={9223372036854775807}]()
  %102 : Long(device=cpu) = onnx::Constant[value={1}]()
  %103 : Long(device=cpu) = onnx::Constant[value={1}]()
  %104 : Long(device=cpu) = onnx::Constant[value={4}]()
  %105 : Float(1, 3, 128, 128, strides=[245760, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Gather[axis=1](%T_driving_sketch, %104) # D:\selfcode.py:218:0
  %106 : Tensor[] = prim::ListConstruct(%77, %84, %91, %98, %105)
  %107 : Long(device=cpu) = onnx::Constant[value={1}]()
  %driving_sketch : Float(1, 15, 128, 128, strides=[245760, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Concat[axis=1](%77, %84, %91, %98, %105) # D:\selfcode.py:218:0
  %109 : Long(device=cpu) = onnx::Constant[value={0}]()
  %110 : Long(device=cpu) = onnx::Constant[value={0}]()
  %111 : Long(device=cpu) = onnx::Constant[value={9223372036854775807}]()
  %112 : Long(device=cpu) = onnx::Constant[value={1}]()
  %113 : Long(device=cpu) = onnx::Constant[value={1}]()
  %114 : Long(device=cpu) = onnx::Constant[value={0}]()
  %115 : Float(1, 3, 128, 128, strides=[147456, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Gather[axis=1](%0, %114) # D:\selfcode.py:224:0
  %116 : Long(device=cpu) = onnx::Constant[value={1}]()
  %117 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={1}]() # D:\selfcode.py:225:0
  %118 : Float(1, 1, 3, 128, 128, strides=[147456, 49152, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Unsqueeze(%115, %117) # D:\selfcode.py:225:0
  %119 : Long(5, strides=[1], device=cpu) = onnx::Constant[value=-1  1 -1 -1 -1 [ CPULongType{5} ]]()
  %120 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %121 : Long(5, strides=[1], device=cpu) = onnx::Constant[value=-1  1 -1 -1 -1 [ CPULongType{5} ]]() # D:\selfcode.py:225:0
  %122 : Long(1, strides=[1], device=cpu) = onnx::Shape(%121) # D:\selfcode.py:225:0
  %123 : Long(5, device=cpu) = onnx::ConstantOfShape[value={1}](%122) # D:\selfcode.py:225:0
  %124 : Long(device=cpu) = onnx::Constant[value={-1}]() # D:\selfcode.py:225:0
  %125 : Long(5, strides=[1], device=cpu) = onnx::Mul(%123, %124) # D:\selfcode.py:225:0
  %126 : Bool(5, strides=[1], device=cpu) = onnx::Equal(%121, %125) # D:\selfcode.py:225:0
  %127 : Long(5, strides=[1], device=cpu) = onnx::Where(%126, %123, %121) # D:\selfcode.py:225:0
  %ref_img : Float(1, 1, 3, 128, 128, strides=[147456, 49152, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Expand(%118, %127) # D:\selfcode.py:225:0
  %129 : Long(device=cpu) = onnx::Constant[value={0}]()
  %130 : Long(device=cpu) = onnx::Constant[value={0}]()
  %131 : Float(1, 3, 128, 128, strides=[49152, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Gather[axis=0](%ref_img, %130) # D:\selfcode.py:226:0
  %132 : Tensor[] = prim::ListConstruct(%131)
  %133 : Long(device=cpu) = onnx::Constant[value={0}]()
  %ref_img.3 : Float(1, 3, 128, 128, strides=[49152, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Concat[axis=0](%131) # D:\selfcode.py:226:0
  %135 : Long(device=cpu) = onnx::Constant[value={0}]()
  %136 : Long(device=cpu) = onnx::Constant[value={0}]()
  %137 : Long(device=cpu) = onnx::Constant[value={9223372036854775807}]()
  %138 : Long(device=cpu) = onnx::Constant[value={1}]()
  %139 : Long(device=cpu) = onnx::Constant[value={1}]()
  %140 : Long(device=cpu) = onnx::Constant[value={0}]()
  %141 : Float(1, 3, 128, 128, strides=[147456, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Gather[axis=1](%1, %140) # D:\selfcode.py:228:0
  %142 : Long(device=cpu) = onnx::Constant[value={1}]()
  %143 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={1}]() # D:\selfcode.py:229:0
  %144 : Float(1, 1, 3, 128, 128, strides=[147456, 49152, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Unsqueeze(%141, %143) # D:\selfcode.py:229:0
  %145 : Long(5, strides=[1], device=cpu) = onnx::Constant[value=-1  1 -1 -1 -1 [ CPULongType{5} ]]()
  %146 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %147 : Long(5, strides=[1], device=cpu) = onnx::Constant[value=-1  1 -1 -1 -1 [ CPULongType{5} ]]() # D:\selfcode.py:229:0
  %148 : Long(1, strides=[1], device=cpu) = onnx::Shape(%147) # D:\selfcode.py:229:0
  %149 : Long(5, device=cpu) = onnx::ConstantOfShape[value={1}](%148) # D:\selfcode.py:229:0
  %150 : Long(device=cpu) = onnx::Constant[value={-1}]() # D:\selfcode.py:229:0
  %151 : Long(5, strides=[1], device=cpu) = onnx::Mul(%149, %150) # D:\selfcode.py:229:0
  %152 : Bool(5, strides=[1], device=cpu) = onnx::Equal(%147, %151) # D:\selfcode.py:229:0
  %153 : Long(5, strides=[1], device=cpu) = onnx::Where(%152, %149, %147) # D:\selfcode.py:229:0
  %ref_sketch : Float(1, 1, 3, 128, 128, strides=[147456, 49152, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Expand(%144, %153) # D:\selfcode.py:229:0
  %155 : Long(device=cpu) = onnx::Constant[value={0}]()
  %156 : Long(device=cpu) = onnx::Constant[value={0}]()
  %157 : Float(1, 3, 128, 128, strides=[49152, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Gather[axis=0](%ref_sketch, %156) # D:\selfcode.py:230:0
  %158 : Tensor[] = prim::ListConstruct(%157)
  %159 : Long(device=cpu) = onnx::Constant[value={0}]()
  %160 : Float(1, 3, 128, 128, strides=[49152, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Concat[axis=0](%157) # D:\selfcode.py:230:0
  %161 : Tensor[] = prim::ListConstruct(%ref_img.3, %160)
  %162 : Long(device=cpu) = onnx::Constant[value={1}]()
  %input : Float(1, 6, 128, 128, strides=[98304, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Concat[axis=1](%ref_img.3, %160) # D:\selfcode.py:233:0
  %164 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %165 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 3  3 [ CPULongType{2} ]]()
  %166 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %167 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %168 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 0  0 [ CPULongType{2} ]]()
  %169 : Long(device=cpu) = onnx::Constant[value={1}]()
  %170 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %171 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %172 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %173 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %input.3 : Float(1, 32, 128, 128, strides=[524288, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[7, 7], pads=[3, 3, 3, 3], strides=[1, 1]](%input, %conv1.weight, %conv1.bias) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\modules\conv.py:454:0
  %175 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %176 : Double(device=cpu) = onnx::Constant[value={0.1}]()
  %177 : Double(device=cpu) = onnx::Constant[value={1e-05}]()
  %178 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %input.7 : Float(1, 32, 128, 128, strides=[524288, 16384, 128, 1], requires_grad=1, device=cpu) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%input.3, %conv1_bn.weight, %conv1_bn.bias, %conv1_bn.running_mean, %conv1_bn.running_var) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:2439:0
  %input.11 : Float(1, 32, 128, 128, strides=[524288, 16384, 128, 1], requires_grad=1, device=cpu) = onnx::Relu(%input.7) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:1457:0
  %181 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 2  2 [ CPULongType{2} ]]()
  %182 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %183 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %184 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %185 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 0  0 [ CPULongType{2} ]]()
  %186 : Long(device=cpu) = onnx::Constant[value={1}]()
  %187 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %188 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %189 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %190 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %input.15 : Float(1, 256, 64, 64, strides=[1048576, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[2, 2]](%input.11, %conv2.weight, %conv2.bias) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\modules\conv.py:454:0
  %192 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %193 : Double(device=cpu) = onnx::Constant[value={0.1}]()
  %194 : Double(device=cpu) = onnx::Constant[value={1e-05}]()
  %195 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %input.19 : Float(1, 256, 64, 64, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%input.15, %conv2_bn.weight, %conv2_bn.bias, %conv2_bn.running_mean, %conv2_bn.running_var) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:2439:0
  %input.23 : Float(1, 256, 64, 64, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::Relu(%input.19) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:1457:0
  %198 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 64  64 [ CPULongType{2} ]]()
  %199 : NoneType = prim::Constant()
  %200 : Long(4, strides=[1], device=cpu) = onnx::Shape(%driving_sketch) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %201 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={0}]() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %202 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={0}]() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %203 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={2}]() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %204 : Long(2, strides=[1], device=cpu) = onnx::Slice(%200, %202, %203, %201) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %205 : Long(2, strides=[1], device=cpu) = onnx::Cast[to=7](%198) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %206 : Long(4, strides=[1], device=cpu) = onnx::Concat[axis=0](%204, %205) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %207 : Tensor? = prim::Constant() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %208 : Tensor? = prim::Constant() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %input.27 : Float(*, *, *, *, strides=[61440, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Resize[coordinate_transformation_mode="asymmetric", cubic_coeff_a=-0.75, mode="nearest", nearest_mode="floor"](%driving_sketch, %207, %208, %206) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %210 : NoneType = prim::Constant()
  %211 : NoneType = prim::Constant()
  %212 : NoneType = prim::Constant()
  %213 : NoneType = prim::Constant()
  %214 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %215 : Double(device=cpu) = onnx::Constant[value={0.1}]()
  %216 : Double(device=cpu) = onnx::Constant[value={1e-05}]()
  %217 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %218 : Float(256, strides=[1], device=cpu) = onnx::Constant[value=<Tensor>]() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:2484:0
  %219 : Float(256, strides=[1], device=cpu) = onnx::Constant[value=<Tensor>]() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:2484:0
  %220 : Float(1, 256, 64, 64, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::InstanceNormalization[epsilon=1.0000000000000001e-05](%input.23, %218, %219) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:2484:0
  %221 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %222 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %223 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %224 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %225 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 0  0 [ CPULongType{2} ]]()
  %226 : Long(device=cpu) = onnx::Constant[value={1}]()
  %227 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %228 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %229 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %230 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %input.31 : Float(*, *, *, *, strides=[1048576, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%input.27, %spade_layer_1.spade_layer_1.conv1.weight, %spade_layer_1.spade_layer_1.conv1.bias) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\modules\conv.py:454:0
  %232 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %233 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %234 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %235 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %236 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 0  0 [ CPULongType{2} ]]()
  %237 : Long(device=cpu) = onnx::Constant[value={1}]()
  %238 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %239 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %240 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %241 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %242 : Float(*, *, *, *, strides=[1048576, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%input.31, %spade_layer_1.spade_layer_1.gamma.weight, %spade_layer_1.spade_layer_1.gamma.bias) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\modules\conv.py:454:0
  %243 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %244 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %245 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %246 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %247 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 0  0 [ CPULongType{2} ]]()
  %248 : Long(device=cpu) = onnx::Constant[value={1}]()
  %249 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %250 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %251 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %252 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %253 : Float(*, *, *, *, strides=[1048576, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%input.31, %spade_layer_1.spade_layer_1.beta.weight, %spade_layer_1.spade_layer_1.beta.bias) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\modules\conv.py:454:0
  %254 : Float(*, *, *, *, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::Mul(%220, %242) # D:\selfcode.py:82:0
  %255 : Long(device=cpu) = onnx::Constant[value={1}]()
  %256 : Float(*, *, *, *, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::Add(%220, %254) # D:\selfcode.py:82:0
  %257 : Long(device=cpu) = onnx::Constant[value={1}]()
  %input.35 : Float(*, *, *, *, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::Add(%256, %253) # D:\selfcode.py:82:0
  %259 : Double(device=cpu) = onnx::Constant[value={0.2}]()
  %input.39 : Float(*, *, *, *, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::LeakyRelu[alpha=0.20000000000000001](%input.35) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:1633:0
  %261 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %262 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %263 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %264 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %265 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 0  0 [ CPULongType{2} ]]()
  %266 : Long(device=cpu) = onnx::Constant[value={1}]()
  %267 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %268 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %269 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %270 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %input.43 : Float(*, *, *, *, strides=[1048576, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%input.39, %spade_layer_1.conv_1.weight, %spade_layer_1.conv_1.bias) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\modules\conv.py:454:0
  %272 : NoneType = prim::Constant()
  %273 : NoneType = prim::Constant()
  %274 : NoneType = prim::Constant()
  %275 : NoneType = prim::Constant()
  %276 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %277 : Double(device=cpu) = onnx::Constant[value={0.1}]()
  %278 : Double(device=cpu) = onnx::Constant[value={1e-05}]()
  %279 : Bool(device=cpu) = onnx::Constant[value={1}]()
  return ()

A standalone model that contains only torch.nn.functional.interpolate does not trigger the error either:

import torch
import torch.nn as nn

class test_interpolate(nn.Module):
    def __init__(self):
        super(test_interpolate, self).__init__()

    def forward(self, x):
        x = nn.functional.interpolate(x, size=(64, 64), mode='nearest')
        x = nn.functional.softmax(x, dim=1)
        return x
Exported graph: graph(%interin : Float(1, 3, 224, 224, strides=[150528, 50176, 224, 1], requires_grad=0, device=cpu),
      %onnx::Concat_12 : Long(2, strides=[1], requires_grad=0, device=cpu)):
  %onnx::Slice_2 : Long(4, strides=[1], device=cpu) = onnx::Shape[onnx_name="Shape_0"](%interin) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %onnx::Slice_3 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={0}, onnx_name="Constant_1"]() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %onnx::Slice_4 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={0}, onnx_name="Constant_2"]() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %onnx::Slice_5 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={2}, onnx_name="Constant_3"]() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %onnx::Concat_6 : Long(2, strides=[1], device=cpu) = onnx::Slice[onnx_name="Slice_4"](%onnx::Slice_2, %onnx::Slice_4, %onnx::Slice_5, %onnx::Slice_3) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %onnx::Resize_8 : Long(4, strides=[1], device=cpu) = onnx::Concat[axis=0, onnx_name="Concat_5"](%onnx::Concat_6, %onnx::Concat_12) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %onnx::Resize_9 : Tensor? = prim::Constant() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %onnx::Resize_10 : Tensor? = prim::Constant() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %interout : Float(*, *, *, *, strides=[12288, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Resize[coordinate_transformation_mode="asymmetric", cubic_coeff_a=-0.75, mode="nearest", nearest_mode="floor", onnx_name="Resize_6"](%interin, %onnx::Resize_9, %onnx::Resize_10, %onnx::Resize_8) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  return (%interout)

The original traced graph (aten ops, before ONNX conversion):

graph(%x.1 : Float(1, 3, 224, 224, strides=[150528, 50176, 224, 1], requires_grad=0, device=cpu)):
  %3 : int = prim::Constant[value=64]() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %4 : int = prim::Constant[value=64]() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %5 : int[] = prim::ListConstruct(%3, %4)
  %6 : NoneType = prim::Constant()
  %x : Float(1, 3, 64, 64, strides=[12288, 4096, 64, 1], requires_grad=0, device=cpu) = aten::upsample_nearest2d(%x.1, %5, %6) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %8 : int = prim::Constant[value=1]() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:1834:0
  %9 : NoneType = prim::Constant()
  %10 : Float(1, 3, 64, 64, strides=[12288, 4096, 64, 1], requires_grad=0, device=cpu) = aten::softmax(%x, %8, %9) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:1834:0
  return (%10)

The graph after _optimize_graph:

graph(%x.1 : Float(1, 3, 224, 224, strides=[150528, 50176, 224, 1], requires_grad=0, device=cpu)):
  %1 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 64  64 [ CPULongType{2} ]]()
  %2 : Long(4, strides=[1], device=cpu) = onnx::Shape(%x.1) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %3 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={0}]() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %4 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={0}]() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %5 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={2}]() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %6 : Long(2, strides=[1], device=cpu) = onnx::Slice(%2, %4, %5, %3) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %7 : Long(2, strides=[1], device=cpu) = onnx::Cast[to=7](%1) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %8 : Long(4, strides=[1], device=cpu) = onnx::Concat[axis=0](%6, %7) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %9 : Tensor? = prim::Constant() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %10 : Tensor? = prim::Constant() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %x : Float(*, *, *, *, strides=[12288, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Resize[coordinate_transformation_mode="asymmetric", cubic_coeff_a=-0.75, mode="nearest", nearest_mode="floor"](%x.1, %9, %10, %8) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %12 : Float(*, *, *, *, strides=[12288, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Softmax[axis=1](%x) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:1834:0
  return (%12)

This shows that even when every onnx::Resize output shape is *, the export does not necessarily fail.
So the error is not caused by interpolate or InstanceNorm on its own: either one used alone exports fine, but the failure appears once both are present.
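The "both present" combination can be isolated into a minimal module for experimenting. This is a hypothetical repro sketch (the channel count and sizes are assumptions), not the original network:

```python
import torch
import torch.nn as nn

class InterpThenInstanceNorm(nn.Module):
    """Minimal combination of interpolate + InstanceNorm2d, mirroring the failing pattern."""
    def __init__(self, channels=3):
        super().__init__()
        # affine=True so gamma/beta weights appear in the graph, like the SPADE layers above
        self.inorm = nn.InstanceNorm2d(channels, affine=True)

    def forward(self, x):
        x = nn.functional.interpolate(x, size=(64, 64), mode='nearest')
        return self.inorm(x)

m = InterpThenInstanceNorm().eval()
out = m(torch.randn(1, 3, 128, 128))
print(out.shape)  # torch.Size([1, 3, 64, 64])
```

Exporting a module like this (rather than the full network) makes it much faster to bisect which operator pairing breaks the conversion.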
So let's go back and look at why InstanceNorm triggers the error. Because the export fails, only the original graph is available; there is no complete post-_optimize_graph graph, only a partial dump:

graph(%0 : Float(1, 3, 3, 128, 128, strides=[147456, 49152, 16384, 128, 1], requires_grad=0, device=cpu),
      %1 : Float(1, 3, 3, 128, 128, strides=[147456, 49152, 16384, 128, 1], requires_grad=0, device=cpu),
      %T_driving_sketch : Float(1, 5, 3, 128, 128, strides=[245760, 49152, 16384, 128, 1], requires_grad=0, device=cpu),
      %conv1.weight : Float(32, 6, 7, 7, strides=[294, 49, 7, 1], requires_grad=1, device=cpu),
      %conv1.bias : Float(32, strides=[1], requires_grad=1, device=cpu),
      %conv1_bn.weight : Float(32, strides=[1], requires_grad=1, device=cpu),
      %conv1_bn.bias : Float(32, strides=[1], requires_grad=1, device=cpu),
      %conv1_bn.running_mean : Float(32, strides=[1], requires_grad=0, device=cpu),
      %conv1_bn.running_var : Float(32, strides=[1], requires_grad=0, device=cpu),
      %conv1_bn.num_batches_tracked : Long(requires_grad=0, device=cpu),
      %conv2.weight : Float(256, 32, 3, 3, strides=[288, 9, 3, 1], requires_grad=1, device=cpu),
      %conv2.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %conv2_bn.weight : Float(256, strides=[1], requires_grad=1, device=cpu),
      %conv2_bn.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %conv2_bn.running_mean : Float(256, strides=[1], requires_grad=0, device=cpu),
      %conv2_bn.running_var : Float(256, strides=[1], requires_grad=0, device=cpu),
      %conv2_bn.num_batches_tracked : Long(requires_grad=0, device=cpu),
      %spade_layer_1.conv_1.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_1.conv_1.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_1.conv_2.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_1.conv_2.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_1.conv1.weight : Float(256, 15, 3, 3, strides=[135, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_1.conv1.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_1.gamma.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_1.gamma.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_1.beta.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_1.beta.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_2.conv1.weight : Float(256, 15, 3, 3, strides=[135, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_2.conv1.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_2.gamma.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_2.gamma.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_2.beta.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_1.spade_layer_2.beta.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_2.conv_1.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_2.conv_1.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_2.conv_2.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_2.conv_2.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_1.conv1.weight : Float(256, 15, 3, 3, strides=[135, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_1.conv1.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_1.gamma.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_1.gamma.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_1.beta.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_1.beta.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_2.conv1.weight : Float(256, 15, 3, 3, strides=[135, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_2.conv1.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_2.gamma.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_2.gamma.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_2.beta.weight : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_2.spade_layer_2.beta.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_4.conv_1.weight : Float(64, 64, 3, 3, strides=[576, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_4.conv_1.bias : Float(64, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_4.conv_2.weight : Float(64, 64, 3, 3, strides=[576, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_4.conv_2.bias : Float(64, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_1.conv1.weight : Float(256, 15, 3, 3, strides=[135, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_1.conv1.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_1.gamma.weight : Float(64, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_1.gamma.bias : Float(64, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_1.beta.weight : Float(64, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_1.beta.bias : Float(64, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_2.conv1.weight : Float(256, 15, 3, 3, strides=[135, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_2.conv1.bias : Float(256, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_2.gamma.weight : Float(64, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_2.gamma.bias : Float(64, strides=[1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_2.beta.weight : Float(64, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=1, device=cpu),
      %spade_layer_4.spade_layer_2.beta.bias : Float(64, strides=[1], requires_grad=1, device=cpu),
      %conv_4.weight : Float(2, 64, 7, 7, strides=[3136, 49, 7, 1], requires_grad=1, device=cpu),
      %conv_4.bias : Float(2, strides=[1], requires_grad=1, device=cpu),
      %conv_5.0.weight : Float(32, 64, 7, 7, strides=[3136, 49, 7, 1], requires_grad=1, device=cpu),
      %conv_5.0.bias : Float(32, strides=[1], requires_grad=1, device=cpu),
      %conv_5.2.weight : Float(1, 32, 7, 7, strides=[1568, 49, 7, 1], requires_grad=1, device=cpu),
      %conv_5.2.bias : Float(1, strides=[1], requires_grad=1, device=cpu)):
  %71 : Long(device=cpu) = onnx::Constant[value={0}]()
  %72 : Long(device=cpu) = onnx::Constant[value={0}]()
  %73 : Long(device=cpu) = onnx::Constant[value={9223372036854775807}]()
  %74 : Long(device=cpu) = onnx::Constant[value={1}]()
  %75 : Long(device=cpu) = onnx::Constant[value={1}]()
  %76 : Long(device=cpu) = onnx::Constant[value={0}]()
  %77 : Float(1, 3, 128, 128, strides=[245760, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Gather[axis=1](%T_driving_sketch, %76) # D:\selfcode.py:225:0
  %78 : Long(device=cpu) = onnx::Constant[value={0}]()
  %79 : Long(device=cpu) = onnx::Constant[value={0}]()
  %80 : Long(device=cpu) = onnx::Constant[value={9223372036854775807}]()
  %81 : Long(device=cpu) = onnx::Constant[value={1}]()
  %82 : Long(device=cpu) = onnx::Constant[value={1}]()
  %83 : Long(device=cpu) = onnx::Constant[value={1}]()
  %84 : Float(1, 3, 128, 128, strides=[245760, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Gather[axis=1](%T_driving_sketch, %83) # D:\selfcode.py:225:0
  %85 : Long(device=cpu) = onnx::Constant[value={0}]()
  %86 : Long(device=cpu) = onnx::Constant[value={0}]()
  %87 : Long(device=cpu) = onnx::Constant[value={9223372036854775807}]()
  %88 : Long(device=cpu) = onnx::Constant[value={1}]()
  %89 : Long(device=cpu) = onnx::Constant[value={1}]()
  %90 : Long(device=cpu) = onnx::Constant[value={2}]()
  %91 : Float(1, 3, 128, 128, strides=[245760, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Gather[axis=1](%T_driving_sketch, %90) # D:\selfcode.py:225:0
  %92 : Long(device=cpu) = onnx::Constant[value={0}]()
  %93 : Long(device=cpu) = onnx::Constant[value={0}]()
  %94 : Long(device=cpu) = onnx::Constant[value={9223372036854775807}]()
  %95 : Long(device=cpu) = onnx::Constant[value={1}]()
  %96 : Long(device=cpu) = onnx::Constant[value={1}]()
  %97 : Long(device=cpu) = onnx::Constant[value={3}]()
  %98 : Float(1, 3, 128, 128, strides=[245760, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Gather[axis=1](%T_driving_sketch, %97) # D:\selfcode.py:225:0
  %99 : Long(device=cpu) = onnx::Constant[value={0}]()
  %100 : Long(device=cpu) = onnx::Constant[value={0}]()
  %101 : Long(device=cpu) = onnx::Constant[value={9223372036854775807}]()
  %102 : Long(device=cpu) = onnx::Constant[value={1}]()
  %103 : Long(device=cpu) = onnx::Constant[value={1}]()
  %104 : Long(device=cpu) = onnx::Constant[value={4}]()
  %105 : Float(1, 3, 128, 128, strides=[245760, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Gather[axis=1](%T_driving_sketch, %104) # D:\selfcode.py:225:0
  %106 : Tensor[] = prim::ListConstruct(%77, %84, %91, %98, %105)
  %107 : Long(device=cpu) = onnx::Constant[value={1}]()
  %driving_sketch : Float(1, 15, 128, 128, strides=[245760, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Concat[axis=1](%77, %84, %91, %98, %105) # D:\selfcode.py:225:0
  %109 : Long(device=cpu) = onnx::Constant[value={0}]()
  %110 : Long(device=cpu) = onnx::Constant[value={0}]()
  %111 : Long(device=cpu) = onnx::Constant[value={9223372036854775807}]()
  %112 : Long(device=cpu) = onnx::Constant[value={1}]()
  %113 : Long(device=cpu) = onnx::Constant[value={1}]()
  %114 : Long(device=cpu) = onnx::Constant[value={0}]()
  %115 : Float(1, 3, 128, 128, strides=[147456, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Gather[axis=1](%0, %114) # D:\selfcode.py:231:0
  %116 : Long(device=cpu) = onnx::Constant[value={1}]()
  %117 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={1}]() # D:\selfcode.py:232:0
  %118 : Float(1, 1, 3, 128, 128, strides=[147456, 49152, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Unsqueeze(%115, %117) # D:\selfcode.py:232:0
  %119 : Long(5, strides=[1], device=cpu) = onnx::Constant[value=-1  1 -1 -1 -1 [ CPULongType{5} ]]()
  %120 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %121 : Long(5, strides=[1], device=cpu) = onnx::Constant[value=-1  1 -1 -1 -1 [ CPULongType{5} ]]() # D:\selfcode.py:232:0
  %122 : Long(1, strides=[1], device=cpu) = onnx::Shape(%121) # D:\selfcode.py:232:0
  %123 : Long(5, device=cpu) = onnx::ConstantOfShape[value={1}](%122) # D:\selfcode.py:232:0
  %124 : Long(device=cpu) = onnx::Constant[value={-1}]() # D:\selfcode.py:232:0
  %125 : Long(5, strides=[1], device=cpu) = onnx::Mul(%123, %124) # D:\selfcode.py:232:0
  %126 : Bool(5, strides=[1], device=cpu) = onnx::Equal(%121, %125) # D:\selfcode.py:232:0
  %127 : Long(5, strides=[1], device=cpu) = onnx::Where(%126, %123, %121) # D:\selfcode.py:232:0
  %ref_img : Float(1, 1, 3, 128, 128, strides=[147456, 49152, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Expand(%118, %127) # D:\selfcode.py:232:0
  %129 : Long(device=cpu) = onnx::Constant[value={0}]()
  %130 : Long(device=cpu) = onnx::Constant[value={0}]()
  %131 : Float(1, 3, 128, 128, strides=[49152, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Gather[axis=0](%ref_img, %130) # D:\selfcode.py:233:0
  %132 : Tensor[] = prim::ListConstruct(%131)
  %133 : Long(device=cpu) = onnx::Constant[value={0}]()
  %ref_img.3 : Float(1, 3, 128, 128, strides=[49152, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Concat[axis=0](%131) # D:\selfcode.py:233:0
  %135 : Long(device=cpu) = onnx::Constant[value={0}]()
  %136 : Long(device=cpu) = onnx::Constant[value={0}]()
  %137 : Long(device=cpu) = onnx::Constant[value={9223372036854775807}]()
  %138 : Long(device=cpu) = onnx::Constant[value={1}]()
  %139 : Long(device=cpu) = onnx::Constant[value={1}]()
  %140 : Long(device=cpu) = onnx::Constant[value={0}]()
  %141 : Float(1, 3, 128, 128, strides=[147456, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Gather[axis=1](%1, %140) # D:\selfcode.py:235:0
  %142 : Long(device=cpu) = onnx::Constant[value={1}]()
  %143 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={1}]() # D:\selfcode.py:236:0
  %144 : Float(1, 1, 3, 128, 128, strides=[147456, 49152, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Unsqueeze(%141, %143) # D:\selfcode.py:236:0
  %145 : Long(5, strides=[1], device=cpu) = onnx::Constant[value=-1  1 -1 -1 -1 [ CPULongType{5} ]]()
  %146 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %147 : Long(5, strides=[1], device=cpu) = onnx::Constant[value=-1  1 -1 -1 -1 [ CPULongType{5} ]]() # D:\selfcode.py:236:0
  %148 : Long(1, strides=[1], device=cpu) = onnx::Shape(%147) # D:\selfcode.py:236:0
  %149 : Long(5, device=cpu) = onnx::ConstantOfShape[value={1}](%148) # D:\selfcode.py:236:0
  %150 : Long(device=cpu) = onnx::Constant[value={-1}]() # D:\selfcode.py:236:0
  %151 : Long(5, strides=[1], device=cpu) = onnx::Mul(%149, %150) # D:\selfcode.py:236:0
  %152 : Bool(5, strides=[1], device=cpu) = onnx::Equal(%147, %151) # D:\selfcode.py:236:0
  %153 : Long(5, strides=[1], device=cpu) = onnx::Where(%152, %149, %147) # D:\selfcode.py:236:0
  %ref_sketch : Float(1, 1, 3, 128, 128, strides=[147456, 49152, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Expand(%144, %153) # D:\selfcode.py:236:0
  %155 : Long(device=cpu) = onnx::Constant[value={0}]()
  %156 : Long(device=cpu) = onnx::Constant[value={0}]()
  %157 : Float(1, 3, 128, 128, strides=[49152, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Gather[axis=0](%ref_sketch, %156) # D:\selfcode.py:237:0
  %158 : Tensor[] = prim::ListConstruct(%157)
  %159 : Long(device=cpu) = onnx::Constant[value={0}]()
  %160 : Float(1, 3, 128, 128, strides=[49152, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Concat[axis=0](%157) # D:\selfcode.py:237:0
  %161 : Tensor[] = prim::ListConstruct(%ref_img.3, %160)
  %162 : Long(device=cpu) = onnx::Constant[value={1}]()
  %input : Float(1, 6, 128, 128, strides=[98304, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Concat[axis=1](%ref_img.3, %160) # D:\selfcode.py:240:0
  %164 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %165 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 3  3 [ CPULongType{2} ]]()
  %166 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %167 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %168 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 0  0 [ CPULongType{2} ]]()
  %169 : Long(device=cpu) = onnx::Constant[value={1}]()
  %170 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %171 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %172 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %173 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %input.3 : Float(1, 32, 128, 128, strides=[524288, 16384, 128, 1], requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[7, 7], pads=[3, 3, 3, 3], strides=[1, 1]](%input, %conv1.weight, %conv1.bias) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\modules\conv.py:454:0
  %175 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %176 : Double(device=cpu) = onnx::Constant[value={0.1}]()
  %177 : Double(device=cpu) = onnx::Constant[value={1e-05}]()
  %178 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %input.7 : Float(1, 32, 128, 128, strides=[524288, 16384, 128, 1], requires_grad=1, device=cpu) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%input.3, %conv1_bn.weight, %conv1_bn.bias, %conv1_bn.running_mean, %conv1_bn.running_var) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:2439:0
  %input.11 : Float(1, 32, 128, 128, strides=[524288, 16384, 128, 1], requires_grad=1, device=cpu) = onnx::Relu(%input.7) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:1457:0
  %181 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 2  2 [ CPULongType{2} ]]()
  %182 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %183 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %184 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %185 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 0  0 [ CPULongType{2} ]]()
  %186 : Long(device=cpu) = onnx::Constant[value={1}]()
  %187 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %188 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %189 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %190 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %input.15 : Float(1, 256, 64, 64, strides=[1048576, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[2, 2]](%input.11, %conv2.weight, %conv2.bias) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\modules\conv.py:454:0
  %192 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %193 : Double(device=cpu) = onnx::Constant[value={0.1}]()
  %194 : Double(device=cpu) = onnx::Constant[value={1e-05}]()
  %195 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %input.19 : Float(1, 256, 64, 64, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%input.15, %conv2_bn.weight, %conv2_bn.bias, %conv2_bn.running_mean, %conv2_bn.running_var) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:2439:0
  %input.23 : Float(1, 256, 64, 64, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::Relu(%input.19) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:1457:0
  %198 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 64  64 [ CPULongType{2} ]]()
  %199 : NoneType = prim::Constant()
  %200 : Long(4, strides=[1], device=cpu) = onnx::Shape(%driving_sketch) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %201 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={0}]() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %202 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={0}]() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %203 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={2}]() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %204 : Long(2, strides=[1], device=cpu) = onnx::Slice(%200, %202, %203, %201) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %205 : Long(2, strides=[1], device=cpu) = onnx::Cast[to=7](%198) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %206 : Long(4, strides=[1], device=cpu) = onnx::Concat[axis=0](%204, %205) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %207 : Tensor? = prim::Constant() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %208 : Tensor? = prim::Constant() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %input.27 : Float(*, *, *, *, strides=[61440, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Resize[coordinate_transformation_mode="asymmetric", cubic_coeff_a=-0.75, mode="nearest", nearest_mode="floor"](%driving_sketch, %207, %208, %206) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %210 : NoneType = prim::Constant()
  %211 : NoneType = prim::Constant()
  %212 : NoneType = prim::Constant()
  %213 : NoneType = prim::Constant()
  %214 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %215 : Double(device=cpu) = onnx::Constant[value={0.1}]()
  %216 : Double(device=cpu) = onnx::Constant[value={1e-05}]()
  %217 : Bool(device=cpu) = onnx::Constant[value={1}]()
  return ()

The above is the graph printed when the export reaches the first InstanceNorm — no problem so far. Continuing until just before the second InstanceNorm, %220 below is the first InstanceNorm:

%218 : Float(256, strides=[1], device=cpu) = onnx::Constant[value=<Tensor>]() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:2484:0
  %219 : Float(256, strides=[1], device=cpu) = onnx::Constant[value=<Tensor>]() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:2484:0
  %220 : Float(1, 256, 64, 64, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::InstanceNormalization[epsilon=1.0000000000000001e-05](%input.23, %218, %219) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:2484:0
  %221 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %222 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %223 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %224 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %225 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 0  0 [ CPULongType{2} ]]()
  %226 : Long(device=cpu) = onnx::Constant[value={1}]()
  %227 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %228 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %229 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %230 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %input.31 : Float(*, *, *, *, strides=[1048576, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%input.27, %spade_layer_1.spade_layer_1.conv1.weight, %spade_layer_1.spade_layer_1.conv1.bias) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\modules\conv.py:454:0
  %232 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %233 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %234 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %235 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %236 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 0  0 [ CPULongType{2} ]]()
  %237 : Long(device=cpu) = onnx::Constant[value={1}]()
  %238 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %239 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %240 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %241 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %242 : Float(*, *, *, *, strides=[1048576, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%input.31, %spade_layer_1.spade_layer_1.gamma.weight, %spade_layer_1.spade_layer_1.gamma.bias) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\modules\conv.py:454:0
  %243 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %244 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %245 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %246 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %247 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 0  0 [ CPULongType{2} ]]()
  %248 : Long(device=cpu) = onnx::Constant[value={1}]()
  %249 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %250 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %251 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %252 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %253 : Float(*, *, *, *, strides=[1048576, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%input.31, %spade_layer_1.spade_layer_1.beta.weight, %spade_layer_1.spade_layer_1.beta.bias) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\modules\conv.py:454:0
  %254 : Float(*, *, *, *, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::Mul(%220, %242) # D:\selfcode.py:89:0
  %255 : Long(device=cpu) = onnx::Constant[value={1}]()
  %256 : Float(*, *, *, *, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::Add(%220, %254) # D:\selfcode.py:89:0
  %257 : Long(device=cpu) = onnx::Constant[value={1}]()
  %input.35 : Float(*, *, *, *, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::Add(%256, %253) # D:\selfcode.py:89:0
  %259 : Double(device=cpu) = onnx::Constant[value={0.2}]()
  %input.39 : Float(*, *, *, *, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::LeakyRelu[alpha=0.20000000000000001](%input.35) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:1633:0
  %261 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %262 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %263 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 1  1 [ CPULongType{2} ]]()
  %264 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %265 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 0  0 [ CPULongType{2} ]]()
  %266 : Long(device=cpu) = onnx::Constant[value={1}]()
  %267 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %268 : Bool(device=cpu) = onnx::Constant[value={0}]()
  %269 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %270 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %input.43 : Float(*, *, *, *, strides=[1048576, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%input.39, %spade_layer_1.conv_1.weight, %spade_layer_1.conv_1.bias) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\modules\conv.py:454:0
  %272 : NoneType = prim::Constant()
  %273 : NoneType = prim::Constant()
  %274 : NoneType = prim::Constant()
  %275 : NoneType = prim::Constant()
  %276 : Bool(device=cpu) = onnx::Constant[value={1}]()
  %277 : Double(device=cpu) = onnx::Constant[value={0.1}]()
  %278 : Double(device=cpu) = onnx::Constant[value={1e-05}]()
  %279 : Bool(device=cpu) = onnx::Constant[value={1}]()
  return ()

The three inputs of the first InstanceNorm all have fully known shapes:

%input.23 : Float(1, 256, 64, 64, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::Relu(%input.19) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:1457:0
%218 : Float(256, strides=[1], device=cpu) = onnx::Constant[value=<Tensor>]() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:2484:0
%219 : Float(256, strides=[1], device=cpu) = onnx::Constant[value=<Tensor>]() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:2484:0

%220 : Float(1, 256, 64, 64, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::InstanceNormalization[epsilon=1.0000000000000001e-05](%input.23, %218, %219) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:2484:0

The second InstanceNorm fails because it takes %input.43 as input, and the shape of %input.43 is all *:

%input.27 : Float(*, *, *, *, strides=[61440, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Resize[coordinate_transformation_mode="asymmetric", cubic_coeff_a=-0.75, mode="nearest", nearest_mode="floor"](%driving_sketch, %207, %208, %206) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
%input.31 : Float(*, *, *, *, strides=[1048576, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%input.27, %spade_layer_1.spade_layer_1.conv1.weight, %spade_layer_1.spade_layer_1.conv1.bias) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\modules\conv.py:454:0
%253 : Float(*, *, *, *, strides=[1048576, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%input.31, %spade_layer_1.spade_layer_1.beta.weight, %spade_layer_1.spade_layer_1.beta.bias) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\modules\conv.py:454:0
%256 : Float(*, *, *, *, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::Add(%220, %254) # D:\selfcode.py:89:0
%input.35 : Float(*, *, *, *, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::Add(%256, %253) # D:\selfcode.py:89:0
%input.39 : Float(*, *, *, *, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::LeakyRelu[alpha=0.20000000000000001](%input.35) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:1633:0
%input.43 : Float(*, *, *, *, strides=[1048576, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%input.39, %spade_layer_1.conv_1.weight, %spade_layer_1.conv_1.bias) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\modules\conv.py:454:0

Tracing upward layer by layer, the * originates at the Resize node and propagates all the way down, so the InstanceNorm ends up with an input of unknown (*) shape. How can this be fixed?
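One common workaround (a sketch, not taken from the original export script — the `FixedResize` wrapper here is hypothetical): pass concrete Python ints to `F.interpolate` rather than a size computed from another tensor, so the tracer can fold the target size into constants instead of emitting dynamic Resize inputs.

```python
import torch
import torch.nn.functional as F

class FixedResize(torch.nn.Module):
    """Hypothetical wrapper: interpolate to a hard-coded size so the
    exported Resize node sees constant target dims, not dynamic ones."""
    def forward(self, x):
        # plain ints (not values read from another tensor's .shape)
        # keep the target size static during tracing
        return F.interpolate(x, size=(64, 64), mode="nearest")

m = FixedResize().eval()
out = m(torch.randn(1, 15, 128, 128))
print(tuple(out.shape))  # (1, 15, 64, 64)
```

Whether this removes the * in your graph depends on how the size was computed in the original code; if it is derived from a runtime tensor, the exporter has no choice but to keep it dynamic.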

With opset=13, the export warns: WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.

With opset=11 (stopping before grid_sampler for now, just to see whether Resize and InstanceNorm export cleanly), the two Tensor? inputs are gone and no warning is printed; where the ? used to be, there are now concrete values:

%205 : Float(0, strides=[1], device=cpu) = onnx::Constant[value=[ CPUFloatType{0} ]]() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %206 : Float(0, strides=[1], device=cpu) = onnx::Constant[value=[ CPUFloatType{0} ]]() # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %input.27 : Float(*, *, *, *, strides=[61440, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Resize[coordinate_transformation_mode="asymmetric", cubic_coeff_a=-0.75, mode="nearest", nearest_mode="floor"](%driving_sketch, %205, %206, %204) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:3910:0
  %208 : NoneType = prim::Constant()

Moreover, the downstream ops that take %input.27 as input are no longer all-*; the channel dimension is now known:

%input.31 : Float(*, 256, *, *, strides=[1048576, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%input.27, %spade_layer_1.spade_layer_1.conv1.weight, %spade_layer_1.spade_layer_1.conv1.bias) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\modules\conv.py:454:0
%240 : Float(*, 256, *, *, strides=[1048576, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%input.31, %spade_layer_1.spade_layer_1.gamma.weight, %spade_layer_1.spade_layer_1.gamma.bias) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\modules\conv.py:454:0
%251 : Float(*, 256, *, *, strides=[1048576, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%input.31, %spade_layer_1.spade_layer_1.beta.weight, %spade_layer_1.spade_layer_1.beta.bias) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\modules\conv.py:454:0
%252 : Float(*, 256, *, *, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::Mul(%218, %240) # D:\selfcode.py:89:0
%254 : Float(*, 256, *, *, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::Add(%218, %252) # D:\selfcode.py:89:0
%input.35 : Float(*, 256, *, *, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::Add(%254, %251) # D:\selfcode.py:89:0
%input.39 : Float(*, 256, *, *, strides=[1048576, 4096, 64, 1], requires_grad=1, device=cpu) = onnx::LeakyRelu[alpha=0.20000000000000001](%input.35) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\functional.py:1633:0
%input.43 : Float(*, 256, *, *, strides=[1048576, 4096, 64, 1], requires_grad=0, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%input.39, %spade_layer_1.conv_1.weight, %spade_layer_1.conv_1.bias) # D:\soft\anaconda\envs\tensorrt\lib\site-packages\torch\nn\modules\conv.py:454:0

Feel free to discuss or ask questions in the comments.
