
When building onnxruntime from source, compilation fails with:

/home/zxq/cxx/onnxruntime/cmake/external/onnx-tensorrt/builtin_op_importers.cpp:766:12: error: ‘class nvinfer1::IDeconvolutionLayer’ has no member named ‘setDilationNd’
     layer->setDilationNd(dilations);
            ^~~~~~~~~~~~~
/home/zxq/cxx/onnxruntime/cmake/external/onnx-tensorrt/builtin_op_importers.cpp: In function ‘onnx2trt::NodeImportResult onnx2trt::{anonymous}::importGemm(onnx2trt::IImporterContext*, const onnx::NodeProto&, std::vector<onnx2trt::TensorOrWeights>&)’:
/home/zxq/cxx/onnxruntime/cmake/external/onnx-tensorrt/builtin_op_importers.cpp:1250:18: error: ‘class nvinfer1::IShuffleLayer’ has no member named ‘setZeroIsPlaceholder’
         squeeze->setZeroIsPlaceholder(false);
                  ^~~~~~~~~~~~~~~~~~~~
/home/zxq/cxx/onnxruntime/cmake/external/onnx-tensorrt/builtin_op_importers.cpp: In function ‘onnx2trt::NodeImportResult onnx2trt::{anonymous}::importGRU(onnx2trt::IImporterContext*, const onnx::NodeProto&, std::vector<onnx2trt::TensorOrWeights>&)’:
/home/zxq/cxx/onnxruntime/cmake/external/onnx-tensorrt/builtin_op_importers.cpp:1536:20: error: ‘class nvinfer1::IShuffleLayer’ has no member named ‘setZeroIsPlaceholder’
         unsqueeze->setZeroIsPlaceholder(false);
                    ^~~~~~~~~~~~~~~~~~~~
/home/zxq/cxx/onnxruntime/cmake/external/onnx-tensorrt/builtin_op_importers.cpp: In function ‘onnx2trt::NodeImportResult onnx2trt::{anonymous}::importLSTM(onnx2trt::IImporterContext*, const onnx::NodeProto&, std::vector<onnx2trt::TensorOrWeights>&)’:
/home/zxq/cxx/onnxruntime/cmake/external/onnx-tensorrt/builtin_op_importers.cpp:2051:22: error: ‘class nvinfer1::IShuffleLayer’ has no member named ‘setZeroIsPlaceholder’
         reshapeBias->setZeroIsPlaceholder(false);
                      ^~~~~~~~~~~~~~~~~~~~
/home/zxq/cxx/onnxruntime/cmake/external/onnx-tensorrt/builtin_op_importers.cpp: In function ‘onnx2trt::NodeImportResult onnx2trt::{anonymous}::importRNN(onnx2trt::IImporterContext*, const onnx::NodeProto&, std::vector<onnx2trt::TensorOrWeights>&)’:
/home/zxq/cxx/onnxruntime/cmake/external/onnx-tensorrt/builtin_op_importers.cpp:3202:22: error: ‘class nvinfer1::IShuffleLayer’ has no member named ‘setZeroIsPlaceholder’
         reshapeBias->setZeroIsPlaceholder(false);
                      ^~~~~~~~~~~~~~~~~~~~
/home/zxq/cxx/onnxruntime/cmake/external/onnx-tensorrt/builtin_op_importers.cpp: In function ‘onnx2trt::NodeImportResult onnx2trt::{anonymous}::importTRT_Shuffle(onnx2trt::IImporterContext*, const onnx::NodeProto&, std::vector<onnx2trt::TensorOrWeights>&)’:
/home/zxq/cxx/onnxruntime/cmake/external/onnx-tensorrt/builtin_op_importers.cpp:4219:12: error: ‘class nvinfer1::IShuffleLayer’ has no member named ‘setZeroIsPlaceholder’
     layer->setZeroIsPlaceholder(zeroIsPlaceholder);
            ^~~~~~~~~~~~~~~~~~~~
external/onnx-tensorrt/CMakeFiles/nvonnxparser_static.dir/build.make:103: recipe for target 'external/onnx-tensorrt/CMakeFiles/nvonnxparser_static.dir/builtin_op_importers.cpp.o' failed
make[2]: *** [external/onnx-tensorrt/CMakeFiles/nvonnxparser_static.dir/builtin_op_importers.cpp.o] Error 1
CMakeFiles/Makefile2:2581: recipe for target 'external/onnx-tensorrt/CMakeFiles/nvonnxparser_static.dir/all' failed
make[1]: *** [external/onnx-tensorrt/CMakeFiles/nvonnxparser_static.dir/all] Error 2
Makefile:165: recipe for target 'all' failed
make: *** [all] Error 2
Traceback (most recent call last):
  File "/home/zxq/cxx/onnxruntime/tools/ci_build/build.py", line 1986, in <module>
    sys.exit(main())
  File "/home/zxq/cxx/onnxruntime/tools/ci_build/build.py", line 1921, in main
    build_targets(args, cmake_path, build_dir, configs, num_parallel_jobs, args.target)
  File "/home/zxq/cxx/onnxruntime/tools/ci_build/build.py", line 1007, in build_targets
    run_subprocess(cmd_args, env=env)
  File "/home/zxq/cxx/onnxruntime/tools/ci_build/build.py", line 528, in run_subprocess
    return run(*args, cwd=cwd, capture_stdout=capture_stdout, shell=shell, env=my_env)
  File "/home/zxq/cxx/onnxruntime/tools/python/util/run.py", line 41, in run
    completed_process = subprocess.run(
  File "/home/zxq/anaconda3/lib/python3.8/subprocess.py", line 512, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/usr/local/bin/cmake', '--build', '/home/zxq/cxx/onnxruntime/build/Linux/Release', '--config', 'Release']' returned non-zero exit status 2.

Cause:

The error messages show that the failure comes from onnxruntime's third-party dependency external/onnx-tensorrt.

The onnx-tensorrt submodule shipped with onnxruntime rel-1.7.2 requires TensorRT 7.2.2, but the installed version is 7.0.0, so the TensorRT version is too old.

TensorRT 7.0.0 was installed because the existing CUDA installation is 10.0, and TensorRT 7.0.0 is the newest release that supports CUDA 10.0.
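Before rebuilding, it is worth confirming which TensorRT version is actually installed. The version is recorded as macros in NvInferVersion.h; a minimal sketch of extracting it with awk is shown below (the header path and the macro values here are a stand-in copy written to /tmp for illustration; on a real system, read the header from your TensorRT install's include directory instead):

```shell
# Stand-in copy of the version macros from NvInferVersion.h; on a real
# system this header lives under the TensorRT install's include/ directory.
cat > /tmp/NvInferVersion.h <<'EOF'
#define NV_TENSORRT_MAJOR 7
#define NV_TENSORRT_MINOR 0
#define NV_TENSORRT_PATCH 0
EOF

# Concatenate MAJOR.MINOR.PATCH into a single version string.
awk '/#define NV_TENSORRT_(MAJOR|MINOR|PATCH)/ {v = v $3 "."} END {print substr(v, 1, length(v)-1)}' /tmp/NvInferVersion.h
# prints 7.0.0
```

If the printed version is older than what the onnx-tensorrt submodule expects, the build errors above are the symptom to expect.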

Solutions:

(1) Download a version of onnxruntime that matches the installed TensorRT 7.0.0. However, downloading the source again is slow and costs too much time.

(2) Upgrade CUDA to 11.0. CUDA 11.0 supports the latest TensorRT 7.2.3; reinstall CUDA and TensorRT by following a tutorial.
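After upgrading CUDA and TensorRT, onnxruntime can be rebuilt against the new TensorRT install. A minimal sketch of the build invocation, run from the root of the onnxruntime checkout; the install paths below are assumptions for this sketch and should be adjusted to your layout:

```shell
# Rebuild onnxruntime with the TensorRT execution provider, pointing the
# build at the upgraded toolkits. /usr/local/cuda and /usr/local/TensorRT
# are placeholder paths for this sketch.
./build.sh --config Release \
    --parallel \
    --use_tensorrt \
    --cuda_home /usr/local/cuda \
    --cudnn_home /usr/local/cuda \
    --tensorrt_home /usr/local/TensorRT
```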

 
