[python] Converting an ONNX model to a TensorRT engine under TensorRT 8

Background

I recently fixed the bug where my Python code couldn't run the .trt files generated by trtexec.exe: it turned out to be a conflict between TensorRT and the CUDA bundled with the PyTorch in my environment, and uninstalling PyTorch and reinstalling the CPU-only build solved it. But while poking at the problem I pretty much broke the environment, and afterwards the engines generated by trtexec.exe simply gave up and output nothing but NaN. Fine then, I'll generate the engine from Python instead. The trouble is that most of the onnx-to-tensorrt code you can find online targets TensorRT 7 or earlier, and the snippets I tried before produced nothing usable. So today, following the official sample code, I'm writing down how to convert an ONNX model to an engine in Python under TensorRT 8.

Reference

The official sample code on GitHub

Simple workflow

You don't actually need everything in the official sample; picking out the following part is enough:

import tensorrt as trt
import os

EXPLICIT_BATCH = 1 << (int)(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
TRT_LOGGER = trt.Logger()


def get_engine(onnx_file_path, engine_file_path=""):
    """Attempts to load a serialized engine if available, otherwise builds a new TensorRT engine and saves it."""

    def build_engine():
        """Takes an ONNX file and creates a TensorRT engine to run inference with"""
        with trt.Builder(TRT_LOGGER) as builder, builder.create_network(
            EXPLICIT_BATCH
        ) as network, builder.create_builder_config() as config, trt.OnnxParser(
            network, TRT_LOGGER
        ) as parser, trt.Runtime(
            TRT_LOGGER
        ) as runtime:
            # NOTE: max_workspace_size is deprecated since TensorRT 8.4 in favour of
            # config.set_memory_pool_limit (see the sketch after the script), but it
            # still works and only emits a DeprecationWarning.
            config.max_workspace_size = 1 << 32  # 4GB
            builder.max_batch_size = 1  # deprecated in TRT 8 and ignored for explicit-batch networks
            # Parse model file
            if not os.path.exists(onnx_file_path):
                print("ONNX file {} not found, please export your model to ONNX first.".format(onnx_file_path))
                exit(0)
            print("Loading ONNX file from path {}...".format(onnx_file_path))
            with open(onnx_file_path, "rb") as model:
                print("Beginning ONNX file parsing")
                if not parser.parse(model.read()):
                    print("ERROR: Failed to parse the ONNX file.")
                    for error in range(parser.num_errors):
                        print(parser.get_error(error))
                    return None

            # # The actual yolov3.onnx is generated with batch size 64. Reshape input to batch size 1
            # network.get_input(0).shape = [1, 3, 608, 608]

            print("Completed parsing of ONNX file")
            print("Building an engine from file {}; this may take a while...".format(onnx_file_path))
            plan = builder.build_serialized_network(network, config)
            engine = runtime.deserialize_cuda_engine(plan)
            print("Completed creating Engine")
            with open(engine_file_path, "wb") as f:
                f.write(plan)
            return engine

    if os.path.exists(engine_file_path):
        # If a serialized engine exists, use it instead of building an engine.
        print("Reading engine from file {}".format(engine_file_path))
        with open(engine_file_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
            return runtime.deserialize_cuda_engine(f.read())
    else:
        return build_engine()


def main():
    """Create a TensorRT engine for ONNX-based YOLOv3-608 and run inference."""

    # Try to load a previously generated YOLOv3-608 network graph in ONNX format:
    onnx_file_path = "model.onnx"
    engine_file_path = "model.trt"

    get_engine(onnx_file_path, engine_file_path)


if __name__ == "__main__":
    main()
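
A side note on the workspace setting: from TensorRT 8.4 on, config.max_workspace_size is deprecated in favour of set_memory_pool_limit, which is exactly what the DeprecationWarning in the log below complains about. A minimal sketch of the replacement, assuming TensorRT >= 8.4 (I kept the deprecated attribute in the script above so it matches my log):

import tensorrt as trt

TRT_LOGGER = trt.Logger()
builder = trt.Builder(TRT_LOGGER)
config = builder.create_builder_config()

# TensorRT >= 8.4: cap the builder workspace through the memory-pool API
# instead of the deprecated config.max_workspace_size attribute.
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 32)  # 4GB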

After copying this out I trimmed a few things for my own model. The ONNX of the reference model apparently wasn't exported with batch size 1, so the sample included a step that reshapes the input to batch 1 (the commented-out lines above); my model file already is batch 1, so I deleted it. I didn't need dynamic input sizes, so I didn't look into how to add them; a hedged sketch of the usual approach follows the log below. Running the script from PyCharm failed, apparently because of a GPU memory allocation problem; I'm not sure whether the 1 << 32 workspace size is a bit reckless. Yet typing the same command into PyCharm's terminal worked, which is rather baffling:

(mypytorch) PS F:\DeepStereo\AppleShow2> python onnx2trt.py      
onnx2trt.py:20: DeprecationWarning: Use set_memory_pool_limit instead.
  config.max_workspace_size = 1 << 32  # 4GB
Loading ONNX file from path G:\jupyter\Model_Zoo\resources_iter10_modify\crestereo_combined_iter10_240x320.onnx...
Beginning ONNX file parsing
[06/16/2022-16:59:16] [TRT] [W] onnx2trt_utils.cpp:365: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Completed parsing of ONNX file
Building an engine from file G:\jupyter\Model_Zoo\resources_iter10_modify\crestereo_combined_iter10_240x320.onnx; this may take a while...
[06/16/2022-17:03:52] [TRT] [W] TensorRT was linked against cuBLAS/cuBLAS LT 11.8.0 but loaded cuBLAS/cuBLAS LT 11.3.1
[06/16/2022-17:07:42] [TRT] [W] TensorRT was linked against cuBLAS/cuBLAS LT 11.8.0 but loaded cuBLAS/cuBLAS LT 11.3.1
[06/16/2022-17:07:43] [TRT] [W] TensorRT was linked against cuBLAS/cuBLAS LT 11.8.0 but loaded cuBLAS/cuBLAS LT 11.3.1
Completed creating Engine
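
For the record on dynamic input sizes: they are normally handled by attaching an optimization profile to the builder config before building. I haven't verified this against my own model, so treat it as a sketch; the input name "input" and the min/opt/max shapes are made-up placeholders for whatever your network actually declares. The snippet would slot into build_engine() right before build_serialized_network:

# Hedged sketch: register a shape range for a dynamic-shape input.
# "input" and the three shapes below are hypothetical placeholders --
# substitute your network's real input name and the range you need.
profile = builder.create_optimization_profile()
profile.set_shape(
    "input",
    (1, 3, 240, 320),  # min: smallest shape the engine must accept
    (1, 3, 240, 320),  # opt: shape TensorRT tunes kernels for
    (1, 3, 480, 640),  # max: largest shape the engine must accept
)
config.add_optimization_profile(profile)
# ...then call builder.build_serialized_network(network, config) as before.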
