DS: Setting up the Python environment

Prerequisite: make sure the DeepStream environment itself has been fully installed first.

1 Download the deepstream_python_apps source and copy it into place

On the NVIDIA DeepStream documentation site, find the "Python Sample Apps and Bindings Source Details" page (Python Sample Apps and Bindings Source Details — DeepStream 6.1.1 Release documentation (nvidia.com)), follow the link to NVIDIA-AI-IOT/deepstream_python_apps: DeepStream SDK Python bindings and sample applications (github.com), and download the deepstream_python_apps source. Copy the source into the DS tree; I copied it to /opt/nvidia/deepstream/deepstream-6.1/sources/. Note that the downloaded source version must match the DeepStream SDK version. You should then have the path /opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/

Alternatively, run the following commands in order (again, the deepstream_python_apps version you clone must match your DeepStream SDK):

cd /opt/nvidia/deepstream/deepstream/sources
export GIT_SSL_NO_VERIFY=true 
git clone https://gitee.com/qst1998/deepstream_python_apps.git  # note: the version must match the DeepStream SDK
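
It can help to confirm the installed DeepStream version before picking which version of deepstream_python_apps to check out. A minimal sketch, assuming the standard install layout, which ships a plain-text version file under /opt/nvidia/deepstream/deepstream:

# Minimal sketch: print the installed DeepStream version (assumes the
# standard install layout with its plain-text `version` file)
from pathlib import Path

print(Path("/opt/nvidia/deepstream/deepstream/version").read_text().strip())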
2 Download the gst-python and pybind11 sources used in 3rdparty

The following commands can be used as a reference:

# Running the following commands in order works fine
cd deepstream_python_apps/3rdparty
rm -rf *
git clone https://gitee.com/qst1998/gst-python.git
cd gst-python
git checkout 1a8f48a
cd ..
git clone https://gitee.com/qst1998/pybind11.git
cd pybind11
git checkout 3b1dbeb
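
To double-check that both 3rdparty checkouts ended up on the pinned commits, a minimal sketch (run from the 3rdparty directory) prints each repository's HEAD:

# Minimal sketch: print the HEAD commit of each 3rdparty checkout so it
# can be compared against the commits pinned above
import subprocess

for repo in ("gst-python", "pybind11"):
    head = subprocess.check_output(
        ["git", "rev-parse", "--short", "HEAD"], cwd=repo, text=True
    ).strip()
    print(repo, head)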
3 Build and install gst-python

Run the following four commands:

cd ../gst-python
./autogen.sh
make
make install

However, ./autogen.sh failed with "No package 'pygobject-3.0' found", which made configure fail. Searching around suggested that the following is needed:

sudo apt install python-gi-dev

But running sudo apt install python-gi-dev itself failed with "E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution)", so, following the hint, I ran:

sudo apt --fix-broken install

After sudo apt --fix-broken install, it printed a pile of Error and Failed messages... I never figured out why.

Then, running sudo apt-get install python-gi-dev once more completed without any errors.

sudo apt-get install python-gi-dev
# Running the above prints something like:
Preparing to unpack .../python-gi-dev_3.36.0-1_amd64.deb ...
Unpacking python-gi-dev (3.36.0-1) ...
Setting up python-gi-dev (3.36.0-1) ...
# After that everything should be fine
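
Since the original configure failure was pkg-config not finding pygobject-3.0, a minimal check that it is now visible:

# Minimal sketch: confirm pkg-config can now see pygobject-3.0, which is
# exactly what ./autogen.sh failed to find earlier
import subprocess

print(subprocess.check_output(
    ["pkg-config", "--modversion", "pygobject-3.0"], text=True).strip())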

With python-gi-dev installed, continue:

cd gst-python
git checkout 5343aeb
./autogen.sh PYTHON=python3

Problem: running ./autogen.sh PYTHON=python3 failed again with "error: Python libs not found. Windows requires Python modules to be explicitly linked to libpython. configure failed". The fix, apparently, is to make Python 3.8 the interpreter that gets picked up. Originally, typing python as root printed version 3.7.1, which is the Python of my conda base environment; the conda base python seems to shadow /usr/bin/python. So edit ~/.bashrc (vim ~/.bashrc, then i), comment out the conda initialization lines with #, and run the following:

# For reference
# Check whether the /usr/bin/python symlink points at the installed python
ll /usr/bin/ -il | grep python
# If it does not, repoint it
rm -f /usr/bin/python
ln -s /usr/bin/python3.8 /usr/bin/python
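
As a quick sanity check, running the following under the bare python command should now report the intended interpreter (a minimal sketch):

# Minimal sketch: show which interpreter `python` now resolves to; with
# the symlink above this should report a 3.8.x version
import sys

print(sys.executable)
print(sys.version)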

Then run:

cd ./gst-python
sudo ./autogen.sh

and the error no longer appears. [Screenshot of the successful ./autogen.sh output omitted.]

Following the hint at the end of that output, the next step is make:

make

This failed again, with "error: '/home/mec/anaconda3/lib/libgobject-2.0.la' is not a valid libtool archive".

!!! Probably still a Python version problem. Fed up with it, I just reinstalled Python 3.8 from scratch, following steps 1-5 of the CSDN post "ubuntu20.4安装python3.8" by sir.K. Details:

# Install Python 3.8 and point the python/pip symlinks at it
# 1. Update the package index and install the prerequisites
sudo apt update
sudo apt install software-properties-common
# 2. Add the deadsnakes PPA to your system's sources list
sudo add-apt-repository ppa:deadsnakes/ppa
# When prompted, press Enter to continue
# Press [ENTER] to continue or Ctrl-c to cancel adding it.
# 3. Once the repository is enabled, install Python 3.8
sudo apt install python3.8
# 4. Verify the installation succeeded
python3.8 --version
# 5. Create the python symlink
# Check whether /usr/bin/python points at the installed python
ll /usr/bin/ -il | grep python
# If it does not, repoint it
rm -f /usr/bin/python
ln -s /usr/bin/python3.8 /usr/bin/python

(I am not sure whether make errored again at this point.)

(Running sudo make afterwards completed without errors.)

Then continue with:

cd ../gst-python
sudo ./autogen.sh
sudo make
sudo make install
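
With gst-python installed, it is worth verifying that the GStreamer Python bindings now load under python3; a minimal check:

# Minimal sketch: verify gi and the GStreamer bindings import cleanly
# under python3 after the gst-python install
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
print(Gst.version_string())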
4 Compile and install the Python bindings
cd /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/bindings
mkdir build
cd build
cmake ..  -DPYTHON_MAJOR_VERSION=3 -DPYTHON_MINOR_VERSION=8
make
# Install the pyds wheel matching your version (mine is 1.1.3)
pip3 install ./pyds-1.1.3*.whl   

These steps should all run through smoothly.
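
Once the wheel is installed, a minimal import check confirms that pyds is usable:

# Minimal sketch: confirm the freshly installed pyds binding imports,
# and show where it was installed
import pyds

print(pyds.__file__)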

5 Run the sample Python app
cd apps/deepstream-test1
python3 deepstream_test_1.py <input .h264 file>

Here, <input .h264 file> can be, for example, ../../../../samples/streams/sample_720p.h264, i.e.:

python3 deepstream_test_1.py ../../../../samples/streams/sample_720p.h264

At runtime this reported "ModuleNotFoundError: No module named 'gi'". I tried installing gi, but pip3 install gi kept failing. The real cause was that my environment had been set up under root, so the fix was to sudo -s first and then run the sample program.

Note that python3 deepstream_test_1.py ../../../../samples/streams/sample_720p.h264 must be run locally; over a remote SSH session it hangs after "Start pipeline" with an "Unsupported … …" message and cannot continue.

#!/usr/bin/env python3
# sources/deepstream_python_apps/apps/deepstream-test1/deepstream_test_1.py
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

import sys

# To draw a pipeline graph, insert the relevant code here (see the sketch after this listing)

sys.path.append('../')
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GLib, Gst
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call

import pyds
import time

PGIE_CLASS_ID_VEHICLE = 0
PGIE_CLASS_ID_BICYCLE = 1
PGIE_CLASS_ID_PERSON = 2
PGIE_CLASS_ID_ROADSIGN = 3

"""需要推理的数据buffer即帧数据或目标数据。
   推理结果统计。
   显示的标签及显示位置颜色等"""
def osd_sink_pad_buffer_probe(pad,info,u_data):
    frame_number=0
    #Initializing object counter with 0.
    # print("info\n", info)
    # print()
    # print("pad\n", pad)
    # print()
    # print("u_data\n",u_data)
    # time.sleep(10)
    obj_counter = {
        PGIE_CLASS_ID_VEHICLE:0,
        PGIE_CLASS_ID_PERSON:0,
        PGIE_CLASS_ID_BICYCLE:0,
        PGIE_CLASS_ID_ROADSIGN:0
    }
    num_rects=0

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.glist_get_nvds_frame_meta()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            #frame_meta = pyds.glist_get_nvds_frame_meta(l_frame.data)
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        frame_number=frame_meta.frame_num
        num_rects = frame_meta.num_obj_meta
        l_obj=frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                #obj_meta=pyds.glist_get_nvds_object_meta(l_obj.data)
                obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            obj_counter[obj_meta.class_id] += 1
            obj_meta.rect_params.border_color.set(0.0, 0.0, 1.0, 0.0)
            try: 
                l_obj=l_obj.next
            except StopIteration:
                break

        # Acquiring a display meta object. The memory ownership remains in
        # the C code so downstream plugins can still access it. Otherwise
        # the garbage collector will claim it when this probe function exits.
        display_meta=pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 1
        py_nvosd_text_params = display_meta.text_params[0]
        # Setting display text to be shown on screen
        # Note that the pyds module allocates a buffer for the string, and the
        # memory will not be claimed by the garbage collector.
        # Reading the display_text field here will return the C address of the
        # allocated string. Use pyds.get_string() to get the string content.
        py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={} Vehicle_count={} Person_count={}".format(frame_number, num_rects, obj_counter[PGIE_CLASS_ID_VEHICLE], obj_counter[PGIE_CLASS_ID_PERSON])

        # Now set the offsets where the string should appear
        py_nvosd_text_params.x_offset = 10
        py_nvosd_text_params.y_offset = 12

        # Font , font-color and font-size
        py_nvosd_text_params.font_params.font_name = "Serif"
        py_nvosd_text_params.font_params.font_size = 10
        # set(red, green, blue, alpha); set to White
        py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

        # Text background color
        py_nvosd_text_params.set_bg_clr = 1
        # set(red, green, blue, alpha); set to Black
        py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
        # Using pyds.get_string() to get display_text as string
        print(pyds.get_string(py_nvosd_text_params.display_text))
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
        try:
            l_frame=l_frame.next
        except StopIteration:
            break
			
    return Gst.PadProbeReturn.OK	


"""
"""
def main(args):
    # Check input arguments
    # print("args\n", args)
    # time.sleep(100)   # seconds
    # if len(args) != 2:
    #     sys.stderr.write("usage: %s <media file or uri>\n" % args[0])
    #     sys.exit(1)
    if len(args) == 1:
        # If only one argument was given, append the video source to run detection on as the second argument
        args.append('../../../../samples/streams/sample_720p.h264')


    # Standard GStreamer initialization
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("=======Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    # Source element for reading from the file
    print("=======Creating Source \n ")
    source = Gst.ElementFactory.make("filesrc", "file-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")

    # Since the data format in the input file is elementary h264 stream,
    # we need a h264parser
    print("=======Creating H264Parser \n")
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    if not h264parser:
        sys.stderr.write(" Unable to create h264 parser \n")

    # Use nvdec_h264 for hardware accelerated decode on GPU
    print("=======Creating Decoder \n")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    # Use nvinfer to run inferencing on decoder's output,
    # behaviour of inferencing is set through config file
    # The name argument ("primary-inference") is optional and may be omitted
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    # If a secondary inference stage exists, create a second nvinfer element following the same pattern as pgie above, e.g.:
    # sgie = Gst.ElementFactory.make("nvinfer", "second-inference")
    # if not sgie:
    #     sys.stderr.write(" Unable to create sgie \n")


    # Use convertor to convert from NV12 to RGBA as required by nvosd
    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    # Create OSD to draw on the converted RGBA buffer
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    
    if is_aarch64():
        transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")

    # Finally render the osd output
    # Sink options: 1=Fakesink, 2=EGL (nveglglessink),
    #               3=Filesink, 4=RTSP, 5=Overlay (Jetson only)
    print("=======Creating EGLSink \n")
    sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
    if not sink:
        sys.stderr.write(" Unable to create egl sink \n")

    print("Playing file %s " %args[1])
    # Read the video from the source path passed in
    source.set_property('location', args[1])
    # Set the streammux output resolution
    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)

    # The nvstreammux plugin forms a batch of frames from multiple input
    # sources. When connecting a source to nvstreammux (the muxer), a new
    # pad must be requested from the muxer using gst_element_get_request_pad()
    # and the pad template "sink_%u".
    # Set batch-size, i.e. the maximum number of video sources supported
    streammux.set_property('batch-size', 1)
    # Timeout (in microseconds; 1000000 us = 1 s) to wait after the first
    # buffer is available before pushing the batch even if it is incomplete
    streammux.set_property('batched-push-timeout', 4000000)
    # Set the primary inference model config
    pgie.set_property('config-file-path', "dstest1_pgie_config.txt")
    # Set the secondary inference model config
    # sgie.set_property('config-file-path', "dstest1_sgie1_config.txt")  

    print("Adding elements to Pipeline \n")
    # Add the elements to the pipeline (the order here does not matter)
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)
    # pipeline.add(sgie)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(sink)
    if is_aarch64():
        pipeline.add(transform)

    # we link the elements together
    # file-source -> h264-parser -> nvh264-decoder ->
    # nvinfer -> nvvidconv -> nvosd -> video-renderer
    # Link the elements together in the pipeline (here the order matters)
    print("Linking elements in the Pipeline \n")
    # Link source to the h264parser
    source.link(h264parser)
    # Link h264parser to the decoder
    h264parser.link(decoder) 

    # After decoding, the decoded data comes out on the decoder's srcpad;
    # the video can be pulled from the decoder's "src" pad
    # ("src" takes the stream after the decoder; "sink" would take it
    # before the decoder)
    srcpad = decoder.get_static_pad("src") 
    if not srcpad:
        sys.stderr.write(" Unable to get source pad of decoder \n")

    
    # Request the initial sink pad, sink_num, from streammux; num is the
    # index of the video source, normally starting at 0 (max 4294967295,
    # i.e. 32 bits).
    # Two elements are linked directly through pads: data flows from one
    # element's src to another element's sink.
    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        sys.stderr.write(" Unable to get the sink pad of streammux \n")
    
    
    # Feed srcpad into streammux's sinkpad
    srcpad.link(sinkpad)
    # Feed streammux into the primary model, here a detection model
    streammux.link(pgie)
    # Feed the primary model's detections into the secondary model
    # pgie.link(sgie)  

    # Link pgie to nvvidconv, which converts NV12 to RGBA as required by nvosd
    pgie.link(nvvidconv)  # sgie.link(nvvidconv)
    # Link nvvidconv to the OSD display
    nvvidconv.link(nvosd)
    if is_aarch64():  # on Jetson (aarch64) boards
        nvosd.link(transform)
        transform.link(sink)
    else:  # on x86
        nvosd.link(sink)

    # create an event loop and feed gstreamer bus messages to it
    loop = GLib.MainLoop()   # the main event loop
    bus = pipeline.get_bus() # get the bus from the pipeline
    bus.add_signal_watch()   # emit bus messages as signals
    # Connect the bus to the main loop, with bus_call as the message handler
    bus.connect("message", bus_call, loop)

    # Lets add probe to get informed of the meta data generated, we add probe to
    # the sink pad of the osd element, since by that time, the buffer would have
    # had got all the metadata.
    # Two elements are connected through pads: data flows from one
    # element's src (output) pad into the next element's sink (input) pad.
    
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")

    # Attach the per-buffer (i.e. per-frame) handler to osdsinkpad
    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    # start play back and listen to events
    print("Starting pipeline \n")
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    # cleanup: release resources when done
    pipeline.set_state(Gst.State.NULL)

if __name__ == '__main__':
    sys.exit(main(sys.argv))
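
The comment near the top of the file mentions inserting code to draw a pipeline graph. A minimal sketch using GStreamer's built-in dot-file dump; note that GST_DEBUG_DUMP_DOT_DIR must be set before Gst.init() is called, and the directory and file names here are just examples:

# Minimal sketch: dump the pipeline topology as a Graphviz .dot file.
# GST_DEBUG_DUMP_DOT_DIR must be set before Gst.init() runs.
import os
os.environ["GST_DEBUG_DUMP_DOT_DIR"] = "/tmp"  # any writable directory

# ... then, inside main(), after the pipeline has been built (e.g. right
# after pipeline.set_state(Gst.State.PLAYING)):
#     Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, "ds-test1")
# Render the result with: dot -Tpng /tmp/ds-test1.dot -o ds-test1.png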


For the plugins involved and their input/output interfaces, see https://blog.csdn.net/weixin_44077524/article/details/124609700
