Converting ONNX to TensorRT 7

My environment: Colab, Python 3.7, TensorRT 7.2.3.4, CUDA 11.0.
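
Before running the conversion, you can quickly confirm that the Colab runtime sees the expected TensorRT build (a minimal check, assuming tensorrt is already installed in the runtime):

import tensorrt as trt
print(trt.__version__)  # should print 7.2.3.4 for this setup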

import os
import tensorrt as trt
import sys

TRT_LOGGER = trt.Logger()
model_path = '/content/drive/tensorrt/onnx/gan512.onnx'
engine_file_path = "/content/drive/tensorrt/onnx/gan512.trt"
EXPLICIT_BATCH = 1 << (int)(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)  # network uses an explicit batch dimension (required by the ONNX parser)

with trt.Builder(TRT_LOGGER) as builder, builder.create_network(EXPLICIT_BATCH) as network, trt.OnnxParser(network, TRT_LOGGER) as parser:
    builder.max_workspace_size = 1 << 28  # 256 MiB of builder scratch memory
    builder.max_batch_size = 1
    print(network)
    if not os.path.exists(model_path):
        print('ONNX file {} not found.'.format(model_path))
        exit(0)
    print('Loading ONNX file from path {}...'.format(model_path))
    with open(model_path, 'rb') as model:
        print('Beginning ONNX file parsing')
        if not parser.parse(model.read()):
            print('ERROR: Failed to parse the ONNX file.')
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            sys.exit(1)
    # Without the following two lines, the generated engine is None
    last_layer = network.get_layer(network.num_layers - 1)
    network.mark_output(last_layer.get_output(0))

    network.get_input(0).shape = [1, 3, 512, 512]  # remember to change this to your own input size
    print('Completed parsing of ONNX file')
    engine = builder.build_cuda_engine(network)
    if engine is None:
        print('ERROR: Failed to build the engine.')
        sys.exit(1)
    with open(engine_file_path, "wb") as f:
        f.write(engine.serialize())
        print('Saved TRT engine successfully!')
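
Note: the builder-level max_workspace_size / max_batch_size attributes and build_cuda_engine are deprecated in TensorRT 7 (and removed in TensorRT 8). A rough IBuilderConfig-based equivalent, meant only as a sketch that would replace those lines inside the same with trt.Builder(...) block above, is:

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28  # 256 MiB of builder scratch memory
    engine = builder.build_engine(network, config)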

For TensorRT inference, refer to: https://blog.csdn.net/ZHOUYONGXYZ/article/details/111872518

https://blog.csdn.net/weixin_42476942/article/details/113757059
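
For completeness, here is a minimal inference sketch for the engine saved above. It assumes pycuda is installed, that the model has exactly one input and one output, and it follows the buffer-allocation pattern used in the TensorRT Python samples; the random input is only a placeholder.

import numpy as np
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # creates a CUDA context on import

TRT_LOGGER = trt.Logger()
engine_file_path = "/content/drive/tensorrt/onnx/gan512.trt"

# Deserialize the engine produced by the conversion script
with open(engine_file_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate page-locked host buffers and device buffers for every binding
inputs, outputs, bindings = [], [], []
stream = cuda.Stream()
for binding in engine:
    size = trt.volume(engine.get_binding_shape(binding))
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    host_mem = cuda.pagelocked_empty(size, dtype)
    device_mem = cuda.mem_alloc(host_mem.nbytes)
    bindings.append(int(device_mem))
    if engine.binding_is_input(binding):
        inputs.append((host_mem, device_mem))
    else:
        outputs.append((host_mem, device_mem))

# Placeholder input matching the shape fixed at build time (1, 3, 512, 512)
image = np.random.rand(1, 3, 512, 512).astype(np.float32)
np.copyto(inputs[0][0], image.ravel())

# Host -> device copy, run the engine, device -> host copy
cuda.memcpy_htod_async(inputs[0][1], inputs[0][0], stream)
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
cuda.memcpy_dtoh_async(outputs[0][0], outputs[0][1], stream)
stream.synchronize()

print('flattened output size:', outputs[0][0].shape)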
