Fixing the TensorRT view and ../builder/Network.cpp::addPoolingNd problems

I. The Problem

      While accelerating some models with TensorRT (through torch2trt) in a project, the conversion failed with the error shown in the screenshot (omitted here).
      The error essentially says that the input to a view operation has size 512x7x7 while the declared output has size 512x36, so the input and output element counts do not match. Setting breakpoints in the model code shows nothing useful; you have to dig into the torch2trt converter code itself.
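This kind of mismatch typically comes from a classifier head that flattens conv features with view() into a Linear layer sized for one specific feature-map resolution. The following is a minimal, hypothetical sketch (the layer sizes are chosen only to reproduce a 512x7x7 vs 512x36 style mismatch; it is not the project's actual model):

import torch
import torch.nn as nn

class Head(nn.Module):
    """Hypothetical head whose Linear layer is sized for a 6x6 (=36) feature map."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512 * 36, 10)      # expects flattened 512x6x6 features

    def forward(self, x):
        # The view() hard-codes the element count; with a 7x7 feature map the
        # input has 512*49 elements and cannot be reshaped to 512*36.
        x = x.view(x.size(0), 512 * 36)
        return self.fc(x)

head = Head()
head(torch.randn(1, 512, 6, 6))                # OK: matches the hard-coded size
# head(torch.randn(1, 512, 7, 7))              # fails: 512x7x7 cannot be viewed as 512x36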

II. Troubleshooting

  1. Open view.py in the torch2trt source (the converter that handles Tensor.view) and add prints to compare the input's shape before and after it is turned into a TensorRT tensor:
def convert_view(ctx):
    # Torch tensor that view() was called on
    input = ctx.method_args[0]
    print('before: input.shape =', input.shape)                  # shape on the PyTorch side
    input_trt = add_missing_trt_tensors(ctx.network, [input])[0]
    print('after:  input_trt.shape =', input_trt.shape)          # shape of the TensorRT tensor
    output = ctx.method_return
    # Implement the reshape in TensorRT with a shuffle layer (batch dim is dropped)
    layer = ctx.network.add_shuffle(input_trt)
    layer.reshape_dims = tuple(output.shape[1:])
    output._trt = layer.get_output(0)

Then run the conversion again and look at the debug output (screenshot omitted).
The shapes printed before and after the call differ, and they line up with the sizes in the error message. It looks as if add_missing_trt_tensors is changing the data's shape.
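For completeness, these debug prints fire while torch2trt traces the model. The conversion itself is launched roughly as below; this is only a sketch, and get_model() and the 224x224 input are placeholders, not the project's actual values:

import torch
from torch2trt import torch2trt

# An already-trained module, moved to the GPU and set to eval mode (get_model() is a placeholder)
model = get_model().cuda().eval()

# The example input's resolution is what every converter (including convert_view
# above) sees, so it has to be consistent with the network's view()/Linear sizes.
x = torch.randn(1, 3, 224, 224).cuda()

model_trt = torch2trt(model, [x])   # tracing the model triggers convert_view and the prints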

  2. Open torch2trt.py, locate add_missing_trt_tensors, and step through it. The data takes the second branch, i.e. the elif hasattr(t, "_trt") case (screenshot omitted).

So the problem seems to lie with t._trt. Add a print at the top of the loop to show both shapes, the Torch tensor's and that of its cached _trt tensor:

def add_missing_trt_tensors(network, tensors):
    """Creates missing TensorRT tensors as constants and attaches them to the Torch Tensors"""
    trt_tensors = [None] * len(tensors)

    dtype = check_torch_dtype(*tensors)

    for i, t in enumerate(tensors):
        # Debug print: compare the Torch tensor's shape with its cached _trt tensor's shape
        # (guarded with hasattr so scalars without _trt do not crash the print)
        if hasattr(t, '_trt'):
            print('add_missing  i = {}, t.shape = {}, t._trt.shape = {}'.format(i, t.shape, tuple(t._trt.shape)))
        trt_tensor = None

        # GET TRT TENSOR (OR CREATE TRT CONSTANT)

        # get tensor w/ _trt
        # or... add constant for scalar primitive
        if isinstance(t, float) or isinstance(t, int):
            ...  # omitted in the original post
        elif hasattr(t, "_trt"):
            trt_tensor = t._trt
        # or... add constant for leaf tensor w/o _trt
        else:
            ...  # omitted in the original post
        assert trt_tensor is not None
        trt_tensors[i] = trt_tensor

    return trt_tensors

The output looks like this (screenshot omitted):
For the earlier tensors, t.shape and t._trt.shape agree, but further into the model they diverge. Since the code then simply executes trt_tensor = t._trt, the mismatched TensorRT tensor is the one that gets used, and that is where the problem comes from.
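If you want to catch this kind of divergence before the final error, a small diagnostic like the one below can be dropped next to the debug print. It is only a debugging aid sketched here (not part of torch2trt), and it assumes the cached _trt shape excludes the batch dimension, which is how the implicit-batch converters behave:

def warn_on_trt_shape_mismatch(t, name='tensor'):
    """Debug helper (not part of torch2trt): warn when a cached _trt tensor's
    shape no longer matches the Torch tensor it is attached to."""
    if hasattr(t, '_trt'):
        torch_shape = tuple(t.shape)
        trt_shape = tuple(t._trt.shape)   # TensorRT Dims, assumed to exclude the batch dim
        if torch_shape[1:] != trt_shape:
            print('WARNING: {} mismatch: torch {} vs _trt {}'.format(name, torch_shape, trt_shape))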

III. The Fix

  1. After some experimentation, the cause turned out to be that the input image resolution was too large for this model, presumably because it no longer produces the feature-map size that the network's view()/Linear stage expects. Shrinking the input resolution avoids the problem. If the resolution is still wrong, then even with the view error gone you can run into the ../builder/Network.cpp::addPoolingNd error from the title (second screenshot, omitted).

  2. Changing the input resolution to 196 resolved both problems. A quick way to check a candidate resolution before converting is sketched below.
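As a sketch (the helper name and its parameters are hypothetical, and 196 is simply the value that worked here), one can verify that a candidate resolution runs through the float model before handing it to torch2trt:

import torch

def check_input_resolution(model, resolution, in_channels=3):
    """Hypothetical helper: run the float model once at the candidate resolution,
    so every view()/pooling stage is exercised before the torch2trt conversion."""
    x = torch.randn(1, in_channels, resolution, resolution).cuda()
    try:
        model(x)
        print('resolution {} is consistent with the model'.format(resolution))
        return True
    except RuntimeError as e:
        print('resolution {} fails: {}'.format(resolution, e))
        return False

# model = get_model().cuda().eval()      # placeholder for the project's model
# check_input_resolution(model, 196)     # 196 is the value reported above

If a resolution passes this check but the conversion still fails, the mismatch is being introduced on the torch2trt side, as in the walk-through above.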
