YOLO TensorRT acceleration: loading and running inference with the converted .trt/.engine model file

0x00 Convert the model to ONNX with export.py in the yolo folder
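A typical invocation might look like the following (a sketch only: this assumes a YOLOv5-style export.py, the exact flags depend on your repo version, and best.pt is a placeholder for your weights):

python export.py --weights best.pt --include onnx --imgsz 480 640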

0x01 Convert the ONNX model to a .trt file

Reference blog: "Several problems encountered with TensorRT acceleration" (tensorRT加速遇到的若干问题)
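Besides trtexec (roughly trtexec --onnx=best.onnx --saveEngine=best32.trt), the conversion can also be done in Python. Below is a minimal sketch using the TensorRT 8.x ONNX parser API, not taken from the referenced blog; the file names best.onnx/best32.trt are assumptions matching the rest of this post:

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# ONNX models require an explicit-batch network
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("best.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse the ONNX model")
config = builder.create_builder_config()
config.max_workspace_size = 1 << 30  # 1 GiB; TRT 8.x attribute (newer versions use set_memory_pool_limit)
serialized_engine = builder.build_serialized_network(network, config)
with open("best32.trt", "wb") as f:
    f.write(serialized_engine)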

0x02 detect.py in the yolo folder can already load and run .trt/.engine models, but how do you load and run inference from a standalone .py file? The code is given directly below.

import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit
import time
import cv2
import numpy as np
import torch
np.bool = np.bool_  # compatibility shim: NumPy 1.24 removed the np.bool alias that older TensorRT/pycuda code may still reference

with open("best32.trt", "rb") as f:
    engine_bytes = f.read()
runtime = trt.Runtime(trt.Logger(trt.Logger.WARNING))
engine = runtime.deserialize_cuda_engine(engine_bytes)
context = engine.create_execution_context()

# load the image
image_path = "left_2.jpg"
input_image = cv2.imread(image_path)  # load the image with OpenCV
# resize and reorder channels to match the model input
input_image = cv2.resize(input_image, (640, 480))  # resize to the model input size (cv2 takes (w, h))
input_image = cv2.cvtColor(input_image, cv2.COLOR_BGR2RGB)  # BGR -> RGB
input_image = np.transpose(input_image, (2, 0, 1))  # HWC -> CHW
input_image = input_image / 255.0  # normalize to [0, 1]
# add the batch dimension
input_tensor = np.expand_dims(input_image, axis=0)
input_tensor = np.ascontiguousarray(input_tensor, dtype=np.float32)

output = np.empty([1, 18900, 6], dtype=np.float32)  # host buffer for the detection output

# allocate device memory
d_input = cuda.mem_alloc(1 * input_tensor.nbytes)
d_output = cuda.mem_alloc(1 * output.nbytes)

bindings = [int(d_input), int(d_output)]
stream = cuda.Stream()

def predict(preprocessed_images):  # result gets copied into `output`
    # transfer input data to the device
    cuda.memcpy_htod_async(d_input, preprocessed_images, stream)
    # execute the model
    context.execute_async_v2(bindings, stream.handle, None)
    # transfer predictions back to the host
    cuda.memcpy_dtoh_async(output, d_output, stream)
    # synchronize the stream
    stream.synchronize()
    # free device memory (note: this makes predict() single-use)
    d_input.free()
    d_output.free()
    return output

t0 = time.time()
pred = predict(input_tensor)
print(pred)
print(pred.shape)

t1 = time.time()
print(f'One frame spends time = ({t1 - t0:.3f}s)')


Running this code produces the error:
[04/11/2024-13:29:04] [TRT] [E] 3: [executionContext.cpp::enqueueInternal::622] Error Code 3: API Usage Error (Parameter check failed at: runtime/api/executionContext.cpp::enqueueInternal::622, condition: bindings[x] || nullBindingOK
)
or
condition: binding[x] != nullptr
and the printed output is all zeros:

[04/11/2024-13:29:04] [TRT] [E] 3: [executionContext.cpp::enqueueInternal::622] Error Code 3: API Usage Error (Parameter check failed at: runtime/api/executionContext.cpp::enqueueInternal::622, condition: bindings[x] || nullBindingOK
)
[[[0. 0. 0. 0. 0. 0.]
  [0. 0. 0. 0. 0. 0.]
  [0. 0. 0. 0. 0. 0.]
  ...
  [0. 0. 0. 0. 0. 0.]
  [0. 0. 0. 0. 0. 0.]
  [0. 0. 0. 0. 0. 0.]]]
(1, 18900, 6)
One frame spends time = (0.036s)

Process finished with exit code 0

Reference blog:
"TensorRT inference hits condition: binding[x] != nullptr, output all zeros" (TensorRT推理过程出现condition: binding[x] != nullptr,output全0)

The root cause: execute_async_v2 expects one valid device pointer for every binding the engine exposes, but the bindings list above supplies only two, so the remaining bindings are null.

In the code:

with open("best32.trt", "rb") as f:
    engine_bytes = f.read()
runtime = trt.Runtime(trt.Logger(trt.Logger.WARNING))
engine = runtime.deserialize_cuda_engine(engine_bytes)
context = engine.create_execution_context()

we append the following snippet to inspect the engine's bindings:

for binding in engine:  # iterates over binding names (TensorRT 8.x binding API)
    dims = engine.get_binding_shape(binding)
    size = trt.volume(dims)
    print("The size of binding is", size)
    print("The dimension of binding is", dims)
    print(binding)
    print("input = ", engine.binding_is_input(binding))
    print("dtype =", trt.nptype(engine.get_binding_dtype(binding)))

The output shows:

The size of binding is 921600
The dimension of binding is (1, 3, 480, 640)
images
input =  True
dtype = <class 'numpy.float32'>
The size of binding is 86400
The dimension of binding is (1, 3, 60, 80, 6)
onnx::Sigmoid_456
input =  False
dtype = <class 'numpy.float32'>
The size of binding is 21600
The dimension of binding is (1, 3, 30, 40, 6)
onnx::Sigmoid_509
input =  False
dtype = <class 'numpy.float32'>
The size of binding is 5400
The dimension of binding is (1, 3, 15, 20, 6)
onnx::Sigmoid_562
input =  False
dtype = <class 'numpy.float32'>
The size of binding is 113400
The dimension of binding is (1, 18900, 6)
output
input =  False
dtype = <class 'numpy.float32'>

From this dump we can see the engine has a single input of shape (1, 3, 480, 640) but four outputs, not just the (1, 18900, 6) tensor we allocated for; the three onnx::Sigmoid_* bindings are the raw per-scale YOLO detection heads.
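As an aside, this dump contains everything needed to size the buffers without hard-coding shapes. A minimal sketch (not from the original post) using the same TensorRT 8.x binding API:

host_bufs, dev_bufs, bindings = [], [], []
for binding in engine:
    shape = engine.get_binding_shape(binding)
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    host = np.empty(trt.volume(shape), dtype=dtype)  # flat host buffer for this binding
    dev = cuda.mem_alloc(host.nbytes)                # matching device buffer
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

Built this way, the bindings list always matches the engine's binding order. The hand-written fix below does the same thing explicitly.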
The original code from the start of this post,

image_path = "left_2.jpg"
input_image = cv2.imread(image_path)  # load the image with OpenCV
# resize and reorder channels to match the model input
input_image = cv2.resize(input_image, (640, 480))  # resize to the model input size (cv2 takes (w, h))
input_image = cv2.cvtColor(input_image, cv2.COLOR_BGR2RGB)  # BGR -> RGB
input_image = np.transpose(input_image, (2, 0, 1))  # HWC -> CHW
input_image = input_image / 255.0  # normalize to [0, 1]
# add the batch dimension
input_tensor = np.expand_dims(input_image, axis=0)
input_tensor = np.ascontiguousarray(input_tensor, dtype=np.float32)
output = np.empty([1, 18900, 6], dtype=np.float32)  # host buffer for the detection output

# allocate device memory
d_input = cuda.mem_alloc(1 * input_tensor.nbytes)
d_output = cuda.mem_alloc(1 * output.nbytes)

bindings = [int(d_input), int(d_output)]

therefore needs to be modified: device memory must also be allocated for the other three outputs. Here is the complete corrected code.

import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit
import time
import cv2
import numpy as np
import torch
np.bool = np.bool_  # compatibility shim: NumPy 1.24 removed the np.bool alias that older TensorRT/pycuda code may still reference

with open("best32.trt", "rb") as f:
    engine_bytes = f.read()
runtime = trt.Runtime(trt.Logger(trt.Logger.WARNING))
engine = runtime.deserialize_cuda_engine(engine_bytes)
context = engine.create_execution_context()

# for binding in engine:
#     dims = engine.get_binding_shape(binding)
#     size = trt.volume(dims)
#     print("The size of binding is", size)
#     print("The dimension of binding is", dims)
#     print(binding)
#     print("input = ", engine.binding_is_input(binding))
#     print("dtype =", trt.nptype(engine.get_binding_dtype(binding)))

# load the image
image_path = "left_2.jpg"
input_image = cv2.imread(image_path)  # load the image with OpenCV
# resize and reorder channels to match the model input
input_image = cv2.resize(input_image, (640, 480))  # resize to the model input size (cv2 takes (w, h))
input_image = cv2.cvtColor(input_image, cv2.COLOR_BGR2RGB)  # BGR -> RGB
input_image = np.transpose(input_image, (2, 0, 1))  # HWC -> CHW
input_image = input_image / 255.0  # normalize to [0, 1]
# add the batch dimension
input_tensor = np.expand_dims(input_image, axis=0)
input_tensor = np.ascontiguousarray(input_tensor, dtype=np.float32)
# print(input_tensor.shape)
# print(input_tensor)

output = np.empty([1, 18900, 6], dtype=np.float32)       # "output" binding (final detections)
output1 = np.empty([1, 3, 60, 80, 6], dtype=np.float32)  # onnx::Sigmoid_456
output2 = np.empty([1, 3, 30, 40, 6], dtype=np.float32)  # onnx::Sigmoid_509
output3 = np.empty([1, 3, 15, 20, 6], dtype=np.float32)  # onnx::Sigmoid_562


# allocate device memory
d_input = cuda.mem_alloc(1 * input_tensor.nbytes)
d_output = cuda.mem_alloc(1 * output.nbytes)
d1_output = cuda.mem_alloc(1 * output1.nbytes)
d2_output = cuda.mem_alloc(1 * output2.nbytes)
d3_output = cuda.mem_alloc(1 * output3.nbytes)
# bindings must follow the engine's binding order: images, Sigmoid_456, Sigmoid_509, Sigmoid_562, output
bindings = [int(d_input), int(d1_output), int(d2_output), int(d3_output), int(d_output)]
stream = cuda.Stream()

def predict(preprocessed_images):  # result gets copied into `output`
    # transfer input data to the device
    cuda.memcpy_htod_async(d_input, preprocessed_images, stream)
    # execute the model
    context.execute_async_v2(bindings, stream.handle, None)
    # transfer predictions back; we only need the d_output result, so the
    # three intermediate head outputs are never copied to the host
    cuda.memcpy_dtoh_async(output, d_output, stream)
    # synchronize the stream
    stream.synchronize()
    # free all device buffers (note: this makes predict() single-use)
    d_input.free()
    d_output.free()
    d1_output.free()
    d2_output.free()
    d3_output.free()
    return output

t0 = time.time()

pred = predict(input_tensor)

print(pred)
print(pred.shape)

# afterwards, pred can go straight into the non_max_suppression function from detect.py (see the sketch below)
# for drawing/saving results, reuse detect.py's `for i, det in enumerate(pred):` loop

t1 = time.time()
print(f'One frame spends time = ({t1 - t0:.3f}s)')
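For completeness, here is a minimal post-processing sketch. It assumes YOLOv5's utils.general module is importable from your working directory; the conf_thres/iou_thres values are just the usual defaults, so tune them for your model:

import torch
from utils.general import non_max_suppression  # the same helper detect.py uses

pred_t = torch.from_numpy(pred)  # (1, 18900, 6) raw predictions
det = non_max_suppression(pred_t, conf_thres=0.25, iou_thres=0.45)[0]  # (n, 6): xyxy, conf, cls
print(det)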

