Deploying DeepStream on Jetson

1. Overall structure

.
├── samples
│   ├── configs
│   │   └── deepstream-app
│   ├── models
│   │   ├── Primary_Detector
│   │   ├── Primary_Detector_Nano
│   │   ├── Secondary_CarColor
│   │   ├── Secondary_CarMake
│   │   ├── Secondary_VehicleTypes
│   │   └── Segmentation
│   │       ├── industrial
│   │       └── semantic
│   └── streams
└── sources
    ├── apps
    │   ├── apps-common
    │   │   ├── includes
    │   │   └── src
    │   └── sample_apps
    │       ├── deepstream-app
    │       ├── deepstream-dewarper-test
    │       │   └── csv_files
    │       ├── deepstream-gst-metadata-test
    │       ├── deepstream-image-decode-test
    │       ├── deepstream-infer-tensor-meta-test
    │       ├── deepstream-nvof-test
    │       ├── deepstream-perf-demo
    │       ├── deepstream-segmentation-test
    │       ├── deepstream-test1
    │       ├── deepstream-test2
    │       ├── deepstream-test3
    │       ├── deepstream-test4
    │       ├── deepstream-test5
    │       │   └── configs
    │       └── deepstream-user-metadata-test
    ├── gst-plugins
    │   ├── gst-dsexample
    │   │   └── dsexample_lib
    │   ├── gst-nvinfer
    │   ├── gst-nvmsgbroker
    │   └── gst-nvmsgconv
    ├── includes
    ├── libs
    │   ├── amqp_protocol_adaptor
    │   ├── azure_protocol_adaptor
    │   │   ├── device_client
    │   │   └── module_client
    │   ├── kafka_protocol_adaptor
    │   ├── nvdsinfer
    │   ├── nvdsinfer_customparser
    │   └── nvmsgconv
    ├── objectDetector_FasterRCNN
    │   └── nvdsinfer_custom_impl_fasterRCNN
    ├── objectDetector_SSD
    │   └── nvdsinfer_custom_impl_ssd
    ├── objectDetector_Yolo
    │   └── nvdsinfer_custom_impl_Yolo
    └── tools
        └── nvds_logger

Version correspondence

Jetpack 5.1.1 [L4T 35.3.1]

The Jetson Orin NX runs JetPack SDK 5.1.1, which corresponds to L4T 35.3.1. The page below lists the features supported by this release:

JetPack SDK 5.1.1 | NVIDIA Developer


DeepStream supported versions

This page lists the dependency versions supported on Jetson devices, along with documentation on how to deploy:

Quickstart Guide — DeepStream documentation 6.4 documentation (nvidia.com)

Platform correspondence table

Convert the YOLOv7 .pt file to an ONNX file, then compile and invoke it:

[YOLOv7 deployment with DeepStream — CSDN blog](https://blog.csdn.net/m0_73702795/article/details/134714606)

Official documentation

marcoslucianops/DeepStream-Yolo: NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models (github.com)

Converting the YOLOv7 .pt file to ONNX

Copy utils/export_yoloV7.py from DeepStream-Yolo into the original YOLOv7 project. export_yoloV7.py imports YOLOv7's own modules, so it must be placed in the yolov7 root directory.

Running python3 export_yoloV7.py -w safety_hemlt.pt produces safety_hemlt.onnx.
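If the model will later serve multiple input sources (see the multi-source section below), the export script can also produce a dynamic batch axis. A hedged sketch — the --dynamic and -s flags are assumptions about DeepStream-Yolo's utils/export_yoloV7.py, so confirm them with --help:

#static export (batch fixed at 1), as above
python3 export_yoloV7.py -w safety_hemlt.pt
#assumed flags: --dynamic exports a dynamic batch axis, -s sets the input size
python3 export_yoloV7.py -w safety_hemlt.pt --dynamic -s 640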


Compilation

Open the DeepStream-Yolo-master/nvdsinfer_custom_impl_Yolo folder and adjust the Makefile to match the machine's CUDA version. Compiling produces libnvdsinfer_custom_impl_Yolo.so, as sketched below.
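A minimal sketch of the compile step, assuming DeepStream 6.2 on Jetson with JetPack 5.x (CUDA 11.4 — substitute your own version from nvcc --version):

cd DeepStream-Yolo-master
#the Makefile reads CUDA_VER; it must match the installed CUDA toolkit
export CUDA_VER=11.4
make -C nvdsinfer_custom_impl_Yolo clean
make -C nvdsinfer_custom_impl_Yolo
#result: nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so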


Configuration files

Open deepstream_app_config.txt and change the primary-gie group to point at the YOLOv7 config file.

Open config_infer_primary_yolov7.txt and update onnx-file: move the previously exported ONNX file and labels.txt into this folder, then replace the onnx-file value with the new ONNX filename. The two edits look roughly like the excerpt below.
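An illustrative excerpt (filenames are the ones produced above; adjust paths to your layout):

#deepstream_app_config.txt — point the primary-gie group at the YOLOv7 infer config
[primary-gie]
enable=1
config-file=config_infer_primary_yolov7.txt

#config_infer_primary_yolov7.txt — reference the newly exported model
onnx-file=safety_hemlt.onnx
labelfile-path=labels.txt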


Run

deepstream-app -c deepstream_app_config.txt

On the first run, DeepStream builds an engine file to enable TensorRT acceleration.


Problem

When installing libssl3 the package could not be found; the link below resolves it:

[install openssl 3.0 ubuntu 20.04-掘金 (juejin.cn)](https://juejin.cn/s/install openssl 3.0 ubuntu 20.04)

Python bindings

They can be installed following the tutorial below:

Jetson Nano (10): DeepStream 6.0.1 + Python API installation — CSDN blog

Prebuilt pyds wheels can be downloaded here:

Releases · NVIDIA-AI-IOT/deepstream_python_apps (github.com)
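Pick the wheel matching your DeepStream and Python versions; installation is then a single pip command. A sketch, assuming the v1.1.8 aarch64 wheel:

pip3 install ./pyds-1.1.8-py3-none-linux_aarch64.whl
#quick sanity check
python3 -c "import pyds; print(pyds.__file__)"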

Official tutorial

deepstream_python_apps/bindings at v1.1.8 · NVIDIA-AI-IOT/deepstream_python_apps (github.com)
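The bindings can also be compiled on the device. A rough sketch following that bindings README, assuming DeepStream 6.3 and Python 3.8 — verify the cmake options against the README for your tag:

git clone --branch v1.1.8 https://github.com/NVIDIA-AI-IOT/deepstream_python_apps.git
cd deepstream_python_apps/bindings
mkdir build && cd build
cmake .. -DPYTHON_MAJOR_VERSION=3 -DPYTHON_MINOR_VERSION=8 \
         -DDS_VERSION=6.3 -DDS_PATH=/opt/nvidia/deepstream/deepstream
make -j$(nproc)
pip3 install ./pyds-*.whl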

Replacing the official model with the community YOLOv7 example (online tutorials are scarce)

Modify it based on the open-source GitHub project below:

zhouyuchong/yolov5-deepstream-python: yolov5-deepstream-python (github.com)

deepstream python yolov5 usage notes — CSDN blog

Build the same file layout as the test example.


YOLOv7 config file: deepstream-yolov7-config.txt

It references the YOLOv7 ONNX and engine files, the compiled libnvdsinfer_custom_impl_Yolo.so, and the labels.txt class file.

Official example YOLOv7 config file:

DeepStream-Yolo/config_infer_primary_yoloV7.txt at master · marcoslucianops/DeepStream-Yolo (github.com)

#yolov7
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
network-mode=0
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
batch-size=1
network-type=0
cluster-mode=2
symmetric-padding=1
maintain-aspect-ratio=0
parse-bbox-func-name=NvDsInferParseYolo #model-specific bounding-box parsing function; must match the model
engine-create-func-name=NvDsInferYoloCudaEngineGet
#####################NEED TO MODIFY THE CORRECT PATH##################################################

custom-lib-path=/opt/nvidia/deepstream/deepstream-6.2/sources/DeepStream-Yolo-master/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
model-engine-file=/opt/nvidia/deepstream/deepstream-6.2/sources/DeepStream-Yolo-master/model_b1_gpu0_fp32.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-6.2/sources/DeepStream-Yolo-master/labels.txt
onnx-file=/opt/nvidia/deepstream/deepstream-6.2/sources/DeepStream-Yolo-master/yolov7.onnx

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300

For a YOLOv5 model, the parsing function would instead be parse-bbox-func-name=NvDsInferParseCustomYoloV5; see:

Python bindings example for parse-bbox-func-name — Intelligent Video Analytics / DeepStream SDK — NVIDIA Developer Forums

Run script: deepstream-yolov7.py

Two places need to be modified (both marked with comments below): the ctypes.cdll.LoadLibrary path and the pgie config-file-path.

#yolov7
#!/usr/bin/env python3

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

import sys
# import keyboard
sys.path.append('../')
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call
import ctypes

import pyds

ctypes.cdll.LoadLibrary('/opt/nvidia/deepstream/deepstream-6.2/sources/DeepStream-Yolo-master/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so')  # Possibly unnecessary — the script also runs with this line commented out. This differs slightly from the official sample code.

PGIE_CLASS_ID_VEHICLE = 0
PGIE_CLASS_ID_BICYCLE = 1
PGIE_CLASS_ID_PERSON = 2
PGIE_CLASS_ID_ROADSIGN = 3


def osd_sink_pad_buffer_probe(pad,info,u_data):
    frame_number=0
    #Initializing object counter with 0.
    num_rects=0
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return Gst.PadProbeReturn.OK  # a pad probe should return a PadProbeReturn value

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.glist_get_nvds_frame_meta()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            #frame_meta = pyds.glist_get_nvds_frame_meta(l_frame.data)
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        

        # Acquiring a display meta object. The memory ownership remains in
        # the C code so downstream plugins can still access it. Otherwise
        # the garbage collector will claim it when this probe function exits.
        display_meta=pyds.nvds_acquire_display_meta_from_pool(batch_meta)

        frame_number=frame_meta.frame_num
        num_rects = frame_meta.num_obj_meta
        l_obj=frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                #obj_meta=pyds.glist_get_nvds_object_meta(l_obj.data)
                obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            # set bbox color in rgba
            print(obj_meta.class_id, obj_meta.obj_label, obj_meta.confidence)
            # set the border width in pixel
            obj_meta.rect_params.border_width=0
            obj_meta.rect_params.has_bg_color=1
            obj_meta.rect_params.bg_color.set(0.0, 0.5, 0.3, 0.4)

            try: 
                l_obj=l_obj.next
            except StopIteration:
                break

        display_meta.num_labels = 1
        py_nvosd_text_params = display_meta.text_params[0]
        # Setting display text to be shown on screen
        # Note that the pyds module allocates a buffer for the string, and the
        # memory will not be claimed by the garbage collector.
        # Reading the display_text field here will return the C address of the
        # allocated string. Use pyds.get_string() to get the string content.
        py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={}".format(frame_number, num_rects)

        # Now set the offsets where the string should appear
        py_nvosd_text_params.x_offset = 10
        py_nvosd_text_params.y_offset = 12

        # Font , font-color and font-size
        py_nvosd_text_params.font_params.font_name = "Serif"
        py_nvosd_text_params.font_params.font_size = 10
        # set(red, green, blue, alpha); set to White
        py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

        # Text background color
        py_nvosd_text_params.set_bg_clr = 1
        # set(red, green, blue, alpha); set to Black
        py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
        # Using pyds.get_string() to get display_text as string
        # print(pyds.get_string(py_nvosd_text_params.display_text))
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
        try:
            l_frame=l_frame.next
        except StopIteration:
            break
			
    return Gst.PadProbeReturn.OK	



def main(args):
    # Check input arguments
    if len(args) != 2:
        sys.stderr.write("usage: %s <media file or uri>\n" % args[0])
        sys.exit(1)

    # Standard GStreamer initialization
    GObject.threads_init()
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    # Source element for reading from the file
    print("Creating Source \n ")
    source = Gst.ElementFactory.make("filesrc", "file-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")

    # Since the data format in the input file is elementary h264 stream,
    # we need a h264parser
    print("Creating H264Parser \n")
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    if not h264parser:
        sys.stderr.write(" Unable to create h264 parser \n")

    # Use nvdec_h264 for hardware accelerated decode on GPU
    print("Creating Decoder \n")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    # Use nvinfer to run inferencing on decoder's output,
    # behaviour of inferencing is set through config file
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    # Use convertor to convert from NV12 to RGBA as required by nvosd
    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    # Create OSD to draw on the converted RGBA buffer
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")

    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    # Finally render the osd output
    if is_aarch64():
        transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")

    print("Creating EGLSink \n")
    sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
    if not sink:
        sys.stderr.write(" Unable to create egl sink \n")

    print("Playing file %s " %args[1])
    source.set_property('location', args[1])
    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', 1)
    streammux.set_property('batched-push-timeout', 4000000)
    pgie.set_property('config-file-path', "deepstream-yolov7-config.txt") # change the config file path here if needed

    print("Adding elements to Pipeline \n")
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(sink)
    if is_aarch64():
        pipeline.add(transform)

    # we link the elements together
    # file-source -> h264-parser -> nvh264-decoder ->
    # nvinfer -> nvvidconv -> nvosd -> video-renderer
    print("Linking elements in the Pipeline \n")
    source.link(h264parser)
    h264parser.link(decoder)

    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        sys.stderr.write(" Unable to get the sink pad of streammux \n")
    srcpad = decoder.get_static_pad("src")
    if not srcpad:
        sys.stderr.write(" Unable to get source pad of decoder \n")
    srcpad.link(sinkpad)
    streammux.link(pgie)
    pgie.link(nvvidconv)
    nvvidconv.link(nvosd)
    if is_aarch64():
        nvosd.link(transform)
        transform.link(sink)
    else:
        nvosd.link(sink)

    # create an event loop and feed gstreamer bus mesages to it
    #GObject.timeout_add_seconds(5, pipeline_pause(pipeline))
    loop = GObject.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect ("message", bus_call, loop)

    # Lets add probe to get informed of the meta data generated, we add probe to
    # the sink pad of the osd element, since by that time, the buffer would have
    # had got all the metadata.
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")

    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    print("Starting pipeline \n")
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()

    except:
        pass
    # cleanup
    pipeline.set_state(Gst.State.NULL)

if __name__ == '__main__':
    sys.exit(main(sys.argv))

Related model files

#These files were produced earlier and must match the config file
libnvdsinfer_custom_impl_Yolo.so #shared library produced by the compile step
model_b1_gpu0_fp32.engine #the model's engine file
labels.txt #the model's class label file
yolov7.onnx #the ONNX file

Exporting the pipeline graph

test1: single-file input

1. In the import section of the .py file you intend to run, add:

import os
os.environ["GST_DEBUG_DUMP_DOT_DIR"] = "/tmp"
os.putenv('GST_DEBUG_DUMP_DOT_DIR', '/tmp')

2. Finally, just before the loop is started, add Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, "pipeline"), as shown below:

osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, "pipeline")

# start play back and listen to events
print("Starting pipeline \n")
pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
except:
    pass
# cleanup
pipeline.set_state(Gst.State.NULL)

test3: multi-camera input

1. The first step is the same as above.

2. The second step differs slightly, but it also goes after the loop is created:

    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect ("message", bus_call, loop)
    pgie_src_pad=pgie.get_static_pad("src")
    if not pgie_src_pad:
        sys.stderr.write(" Unable to get src pad \n")
    else:
        if not disable_probe:
            pgie_src_pad.add_probe(Gst.PadProbeType.BUFFER, pgie_src_pad_buffer_probe, 0)
            Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, "pipeline_rtsp")
            # perf callback function to print fps every 5 sec
            GLib.timeout_add(5000, perf_data.perf_print_callback)

pipeline_rtsp

Running DeepStream from Python with multi-camera output

Multi-source input problem

With multiple input sources, the YOLOv7 example model is a static model. The original engine file is unusable; DeepStream automatically converts the ONNX model to engine format, but the parameter mismatch remains:

0:00:04.255570198 1080512     0x15396b60 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1920> [UID = 1]: Backend has maxBatchSize 1 whereas 2 has been requested
0:00:04.255776278 1080512     0x15396b60 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2097> [UID = 1]: deserialized backend context :/opt/nvidia/deepstream/deepstream-6.3/sources/deepstream_python_apps/apps/deepstream_python_yolov7_rtsp/model_b2_gpu0_fp32.engine failed to match config params, trying rebuild
0:00:04.279077297 1080512     0x15396b60 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: Trying to create engine from model files

0:09:16.163450777 1078615     0x3897ab60 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1920> [UID = 1]: Backend has maxBatchSize 1 whereas 2 has been requested
0:09:16.163478169 1078615     0x3897ab60 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2052> [UID = 1]: deserialized backend context :/opt/nvidia/deepstream/deepstream-6.3/sources/deepstream_python_apps/apps/deepstream_python_yolov7_rtsp/model_b2_gpu0_fp32.engine failed to match config params
0:09:16.395646670 1078615     0x3897ab60 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2108> [UID = 1]: build backend context failed
0:09:16.395698062 1078615     0x3897ab60 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1282> [UID = 1]: generate backend failed, check config file settings
0:09:16.395739790 1078615     0x3897ab60 WARN                 nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:09:16.395751789 1078615     0x3897ab60 WARN                 nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary-inference> error: Config file path: deepstream-yolov7-config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

**PERF:  {'stream0': 0.0, 'stream1': 0.0} 

Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(898): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: deepstream-yolov7-config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app


Switching back to a single input source runs fine. To use multiple input sources, the static model's batch_size must be changed; the link below covers how to modify the engine's batch_size.

Using trtexec to generate an engine file from an ONNX works error with two RTSP input source - Jetson & Embedded Systems / Jetson Orin Nano - NVIDIA Developer Forums
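Following that thread, one workaround is to build the engine yourself with trtexec from a dynamic ONNX model, pinning the batch range explicitly. A hedged sketch — the tensor name input is an assumption; check the real input name in netron:

/usr/src/tensorrt/bin/trtexec \
    --onnx=yolov7_dynamic.onnx \
    --minShapes=input:1x3x640x640 \
    --optShapes=input:2x3x640x640 \
    --maxShapes=input:4x3x640x640 \
    --saveEngine=model_b2_gpu0_fp32.engine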

Multi-source input problem: solution

With multiple input sources, the YOLOv7 ONNX model is static: the original engine is unusable, and DeepStream automatically re-converts the ONNX model into a multi-source engine, but this fails because the static ONNX model has its internal batch dimension hard-coded to 1. The ONNX network structure can be inspected at https://netron.app/.

For the parameter settings when converting a .pt model to ONNX, see: Introduction to model deployment (3): PyTorch to ONNX in detail — 知乎 (zhihu.com)
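The key export argument is dynamic_axes. A minimal runnable sketch of the torch.onnx.export call (TinyNet is a stand-in for the real YOLOv7 model; the tensor names are illustrative):

import torch

class TinyNet(torch.nn.Module):
    #stand-in model so the sketch runs on its own
    def forward(self, x):
        return torch.nn.functional.avg_pool2d(x, 2)

model = TinyNet().eval()
dummy = torch.zeros(1, 3, 640, 640)  #batch x channels x height x width
torch.onnx.export(
    model, dummy, "yolov7_dynamic.onnx",
    opset_version=12,
    input_names=["input"],
    output_names=["output"],
    #dim 0 becomes a symbolic "batch" instead of being burned in as 1
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)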

Static ONNX model

In netron, the static ONNX model's input dimensions show as 1×3×640×640.

Dynamic ONNX model

In netron, the dynamic ONNX model's input dimensions show as batch×3×640×640.

Notes on the generated engine file

A converted dynamic ONNX model supports engine conversion for multiple input sources. Experimentally, batch-size in deepstream-yolov7-config.txt has no effect on the number of input sources; it only changes how many frames the model infers at once. Whether the engine must be regenerated depends on the model's precision, classes, and input sources. Once the ONNX model has been built for n input sources, the resulting engine supports any number of sources up to n.
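To illustrate how the engine filename encodes these settings (values here are illustrative):

#deepstream-yolov7-config.txt (excerpt)
batch-size=2      #frames inferred per batch — not the number of sources
network-mode=0    #0=FP32, 1=INT8, 2=FP16; encoded in the engine filename
model-engine-file=model_b2_gpu0_fp32.engine   #b2 = built for max batch 2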

Problem: differing ONNX outputs from the .pt conversion make the generated engine's inference results unparseable

Converting the .pt file with the official export tool and inspecting the ONNX structure in netron shows three outputs; DeepStream's bundled parser can therefore only parse engine files with three outputs. DeepStream-Yolo/utils/export_yoloV7.py at master · marcoslucianops/DeepStream-Yolo (github.com)
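Besides netron, the output count can be checked programmatically. A small sketch using the onnx Python package (the three output names depend on the export script):

import onnx

m = onnx.load("safety_hemlt.onnx")
#a DeepStream-Yolo export should list three outputs; a single fused
#output tensor will not be parsed by NvDsInferParseYolo
for out in m.graph.output:
    dims = [d.dim_param or d.dim_value for d in out.type.tensor_type.shape.dim]
    print(out.name, dims)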


Solution (with deepstream-python no detection boxes appear; still unresolved)

DeepStream multi-model parallel inference (DeepStream 6.1.1 and later)

NVIDIA provides a tutorial, but it is incomplete and has many bugs: NVIDIA-AI-IOT/deepstream_parallel_inference_app: A project demonstrating how to use nvmetamux to run multiple models in parallel (github.com)

Download the repository

apt install git-lfs
git lfs install --skip-repo
git clone https://github.com/NVIDIA-AI-IOT/deepstream_parallel_inference_app.git

Generate the inference engines

apt-get install -y libjson-glib-dev libgstrtspserver-1.0-dev
#downloads several dependencies; very slow
/opt/nvidia/deepstream/deepstream/samples/triton_backend_setup.sh


## Only DeepStream 6.1.1 GA needs the metamux plugin library copied. Skip this copy command if the DeepStream version is above 6.1.1 GA.
cp tritonclient/sample/gst-plugins/gst-nvdsmetamux/libnvdsgst_metamux.so /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_metamux.so


## set power model and boost CPU/GPU/EMC clocks
nvpmodel -m 0 && jetson_clocks

#build the models
cd tritonserver/
./build_engine.sh

Build and run

cd tritonclient/sample/
source build.sh
./apps/deepstream-parallel-infer/deepstream-parallel-infer -c configs/apps/bodypose_yolo_lpr/source4_1080p_dec_parallel_infer.yml

Runtime error

Running it produces an error, probably because some engine files failed to generate, so the models referenced in the config file cannot be found:

NVDSMETAMUX_CFG_PARSER: Group 'user-configs' ignored
** ERROR: <create_primary_gie_bin:120>: Failed to create 'primary_gie'
** ERROR: <create_primary_gie_bin:185>: create_primary_gie_bin failed
** ERROR: <create_parallel_infer_bin:1199>: create_parallel_infer_bin failed
creating parallel infer bin failed

Solution

Go into bodypose_yolo_win1 and change source4_1080p_dec_parallel_infer.yml into source4_1080p_dec_parallel_infer1.yml, mainly altering the model inference config files it references. After testing it runs, but no window is displayed — only the video is saved, and the saved video contains no inference result boxes.

#branch1 yolo
#branch2 bodypose
#
#
#
application:
  enable-perf-measurement: 1
  perf-measurement-interval-sec: 5
  ##gie-kitti-output-dir=streamscl

tiled-display:
  enable: 1
  rows: 2
  columns: 2
  width: 1280
  height: 720
  gpu-id: 0
  #(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
  #(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
  #(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
  #(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
  #(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
  nvbuf-memory-type: 0
  #which source should be shown, -1 means showing all.
  show-source: -1

source:
  csv-file-path: sources_4.csv
  #csv-file-path: sources_4_different_source.csv
  #csv-file-path: sources_4_rtsp.csv

sink0:
  enable: 0
  #Type - 1=FakeSink 2=EglSink 3=File 7=nv3dsink (Jetson only)
  type: 2
  sync: 1
  source-id: 0
  gpu-id: 0
  nvbuf-memory-type: 0

sink1:
  enable: 1
  type: 3
  #1=mp4 2=mkv
  container: 1
  #1=h264 2=h265
  codec: 1
  #encoder type 0=Hardware 1=Software
  enc-type: 1
  sync: 1
  #iframeinterval=10
  bitrate: 2000000
  #H264 Profile - 0=Baseline 2=Main 4=High
  #H265 Profile - 0=Main 1=Main10
  profile: 0
  output-file: out.mp4
  source-id: 0

sink2:
  enable: 0
  #Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
  type: 4
  #1=h264 2=h265
  codec: 1
  #encoder type 0=Hardware 1=Software
  enc-type: 1
  sync: 0
  bitrate: 4000000
  #H264 Profile - 0=Baseline 2=Main 4=High
  #H265 Profile - 0=Main 1=Main10
  profile: 0
  # set below properties in case of RTSPStreaming
  rtsp-port: 8554
  udp-port: 5400

sink3:
  enable: 0
  #Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
  type: 6
  msg-conv-config: dstest5_msgconv_sample_config.yml
  #(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
  #(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
  #(256): PAYLOAD_RESERVED - Reserved type
  #(257): PAYLOAD_CUSTOM   - Custom schema payload
  msg-conv-payload-type: 0
  msg-broker-proto-lib: /opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
  #Provide your msg-broker-conn-str here
  msg-broker-conn-str: <host>;<port>;<topic>
  topic: <topic>
  #Optional:
  #msg-broker-config: ../../deepstream-test4/cfg_kafka.txt
 
sink4:
  enable: 0
  #Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
  type: 6
  msg-conv-config: dstest5_msgconv_sample_config.yml
  #(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
  #(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
  #(256): PAYLOAD_RESERVED - Reserved type
  #(257): PAYLOAD_CUSTOM   - Custom schema payload
  msg-conv-payload-type: 0
  msg-conv-msg2p-new-api: 0
  #Frame interval at which payload is generated
  msg-conv-frame-interval: 30
  msg-broker-proto-lib: /opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
  #Provide your msg-broker-conn-str here
  msg-broker-conn-str: 10.23.136.84;9092
  topic: dstest
  disable-msgconv: 1
  #Optional:
  #msg-broker-config: ../../deepstream-test4/cfg_kafka.txt


# sink type = 6 by default creates msg converter + broker.
# To use multiple brokers use this group for converter and use
# sink type = 6 with disable-msgconv :  1
message-converter:
  enable: 0
  msg-conv-config: dstest5_msgconv_sample_config.yml
  #(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
  #(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
  #(256): PAYLOAD_RESERVED - Reserved type
  #(257): PAYLOAD_CUSTOM   - Custom schema payload
  msg-conv-payload-type: 0
  # Name of library having custom implementation.
  msg-conv-msg2p-lib: /opt/nvidia/deepstream/deepstream/lib/libnvds_msgconv.so
  # Id of component in case only selected message to parse.
  #msg-conv-comp-id: <val>
  
osd:
  enable: 1
  gpu-id: 0
  border-width: 1
  text-size: 15
  #value changed
  text-color: 1;1;1;1
  text-bg-color: 0.3;0.3;0.3;1
  font: Serif
  show-clock: 0
  clock-x-offset: 800
  clock-y-offset: 820
  clock-text-size: 12
  clock-color: 1;0;0;0
  nvbuf-memory-type: 0

streammux:
  gpu-id: 0
  ##Boolean property to inform muxer that sources are live
  live-source: 0
  buffer-pool-size: 4
  batch-size: 4
  ##time out in usec, to wait after the first buffer is available
  ##to push the batch even if the complete batch is not formed
  batched-push-timeout: 400000
  ## Set muxer output width and height
  width: 1920
  height: 1080
  ##Enable to maintain aspect ratio wrt source, and allow black borders, works
  ##along with width, height properties
  enable-padding: 0
  nvbuf-memory-type: 0

primary-gie0:
  enable: 1
  #(0): nvinfer; (1): nvinferserver
  plugin-type: 0
  gpu-id: 0
  #input-tensor-meta: 1
  batch-size: 4
  #Required by the app for OSD, not a plugin property
  bbox-border-color0: 1;0;0;1
  bbox-border-color1: 0;1;1;1
  bbox-border-color2: 0;0;1;1
  bbox-border-color3: 0;1;0;1
  #interval: 0
  gie-unique-id: 1
  nvbuf-memory-type: 0
  config-file: ../../yolov4/config_yolov4_infer.txt
  # config-file: ../../yolov4/config_yolov4_inferserver.txt

branch0:
  ## pgie's id
  pgie-id: 1
  ## select sources by sourceid
  src-ids: 0;1;2

tracker0:
  enable: 0
  cfg-file-path: tracker0.yml

nvds-analytics0:
  enable: 0
  cfg-file-path: analytics0.txt  
  
secondary-gie0:
  enable: 0
  ##support multiple sgie.
  cfg-file-path: secondary-gie0.yml

primary-gie1:
  enable: 1
  #(0): nvinfer; (1): nvinferserver
  plugin-type: 0
  gpu-id: 0
  #input-tensor-meta: 1
  batch-size: 4
  #Required by the app for OSD, not a plugin property
  bbox-border-color0: 1;0;0;1
  bbox-border-color1: 0;1;1;1
  bbox-border-color2: 0;0;1;1
  bbox-border-color3: 0;1;0;1
  interval: 0
  gie-unique-id: 2
  nvbuf-memory-type: 0
  #config-file: ../../bodypose2d/config_body2_inferserver.txt
  # config-file: ../../bodypose2d/config_body2_infer.txt
  config-file: ../../yolov4/config_yolov4_infer.txt

branch1:
  ## pgie's id
  pgie-id: 2
  ## select sources by sourceid
  src-ids: 1;2;3

tracker1:
  enable: 0      
  cfg-file-path: tracker1.yml
  
nvds-analytics1:
  enable: 0
  cfg-file-path: analytics1.txt

secondary-gie1:
  enable: 0
  ##support multiple sgie
  cfg-file-path: secondary-gie1.yml  

meta-mux:
  enable: 1
  config-file: ../../metamux/config_metamux0.txt


tests:
  file-loop: 0

Install graphviz

Download graphviz from the official site https://graphviz.org/download/ and extract it.

cd graphviz-11.0.0
sudo ./configure
sudo make
sudo make install
#check the installation with dot -V
dot -V


Generate the dot graph and convert it to PNG

Set GST_DEBUG_DUMP_DOT_DIR to the directory where the dot file should be saved.

#run deepstream and save the dot file
sudo GST_DEBUG_DUMP_DOT_DIR=apps GST_DEBUG=3 ./apps/deepstream-parallel-infer/deepstream-parallel-infer -c configs/apps/bodypose_yolo_win1/source4_1080p_dec_parallel_infer1.yml

#convert the dot file to PNG
sudo dot -Tpng pipeline.dot -o pipeline_graph.png