deepstream python

Git repo: https://github.com/NVIDIA-AI-IOT/deepstream_python_apps

Overview:

Python accesses DeepStream's C/C++ libraries through Python bindings; the bindings are built with the third-party library pybind11.

pybind11 is a lightweight header-only library that exposes C++ types in Python.
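
As a quick illustration of what the bindings expose, here is a minimal sketch of a buffer-probe callback that walks the batch metadata from Python. It assumes a working pyds install and a DeepStream pipeline that attaches NvDsBatchMeta; the probe itself is illustrative, not copied from a specific sample:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    # NvDsBatchMeta and friends are C structs exposed to Python via pybind11.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        print("frame", frame_meta.frame_num, "objects", frame_meta.num_obj_meta)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK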

Environment setup:

First install the build dependencies; see: deepstream_python_apps/bindings at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub

Building the bindings:


The main steps are:

3.1.1 Quick build (x86-ubuntu-20.04 | python 3.8 | Deepstream 6.1)

cd deepstream_python_apps/bindings
mkdir build
cd build
cmake ..
make

4.1 Installing the pip wheel

apt install libgirepository1.0-dev libcairo2-dev
pip3 install ./pyds-1.1.2-py3-none*.whl
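
Once the wheel is installed, a quick sanity check (a tiny sketch, not part of the official samples) is to import the module and initialize GStreamer:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

Gst.init(None)
print("pyds loaded from:", pyds.__file__)
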
Test steps:

python3 deepstream_test_1.py ../../../../samples/streams/sample_720p.h264
 

runtime_source_add_delete

This application demonstrates how to:
* Add and delete sources at runtime.
* Use a uridecodebin so that any type of input (e.g. RTSP/File), any GStreamer
  supported container format, and any codec can be used as input.
* Configure the stream-muxer to generate a batch of frames and infer on the
  batch for better resource utilization (see the property sketch after this list).
* Configure the tracker (referred to as nvtracker in this sample) using
  config file dstest_tracker_config.txt
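
The stream-muxer batching mentioned above comes down to a handful of element properties. A minimal sketch, using standard nvstreammux property names (the values here are placeholders, not the ones from the sample's source):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# nvstreammux collects one frame per source into a single batched buffer.
streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
streammux.set_property("width", 1920)                     # batch output resolution
streammux.set_property("height", 1080)
streammux.set_property("batch-size", 4)                   # typically MAX_NUM_SOURCES
streammux.set_property("batched-push-timeout", 4000000)   # microseconds to wait for a full batch
streammux.set_property("live-source", 1)                  # set when inputs are live (e.g. RTSP)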

def add_sources(data):
    ...
    # Once the maximum number of sources is reached, start deleting sources
    # (returning False stops this timer).
    if g_num_sources == MAX_NUM_SOURCES:
        GObject.timeout_add_seconds(10, delete_sources, g_source_bin_list)
        return False
    ...
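
The matching delete path looks roughly like this. A hedged sketch: it follows the sample's structure (stop the source bin, flush and release its streammux sink pad, remove the bin from the pipeline), but it assumes the sample's module-level pipeline, streammux, g_source_bin_list, g_source_enabled and g_num_sources variables and simplifies the bookkeeping:

import random
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def stop_release_source(source_id):
    # Stop the source bin, flush and release its streammux sink pad,
    # then drop the bin out of the pipeline.
    state_return = g_source_bin_list[source_id].set_state(Gst.State.NULL)
    if state_return == Gst.StateChangeReturn.SUCCESS:
        sinkpad = streammux.get_static_pad("sink_%u" % source_id)
        sinkpad.send_event(Gst.Event.new_flush_stop(False))
        streammux.release_request_pad(sinkpad)
        pipeline.remove(g_source_bin_list[source_id])

def delete_sources(data):
    global g_num_sources
    # Pick one still-enabled source at random and release it.
    source_id = random.choice(
        [i for i, enabled in enumerate(g_source_enabled) if enabled])
    g_source_enabled[source_id] = False
    stop_release_source(source_id)
    g_num_sources -= 1
    # Returning False stops the 10-second timer once all sources are gone.
    return g_num_sources > 0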

def main():
    ...
    # Add a new source every 10 seconds.
    GObject.timeout_add_seconds(10, add_sources, g_source_bin_list)
    ...
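
For completeness, a sketch of how a new source can be built and attached while the pipeline is already PLAYING: a uridecodebin with a pad-added callback that links into a requested streammux sink pad. This shows the pattern only; the sample's own source-creation helper does more (ghost pads, error handling):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def cb_newpad(decodebin, pad, streammux_sinkpad):
    # Link only the decoded video pad into the stream-muxer.
    caps = pad.get_current_caps()
    if caps and caps.get_structure(0).get_name().startswith("video"):
        pad.link(streammux_sinkpad)

def add_source(pipeline, streammux, uri, source_id):
    # uridecodebin accepts files, RTSP, etc. and picks demuxer/decoder automatically.
    source_bin = Gst.ElementFactory.make("uridecodebin", "source-bin-%02d" % source_id)
    source_bin.set_property("uri", uri)

    # Request a new sink pad on nvstreammux for this source.
    sinkpad = streammux.get_request_pad("sink_%u" % source_id)
    source_bin.connect("pad-added", cb_newpad, sinkpad)

    pipeline.add(source_bin)
    # The pipeline is already PLAYING, so bring the new bin up to the same state.
    source_bin.sync_state_with_parent()
    return source_bin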

deepstream-imagedata-multistream

The purpose of this example:
* Access imagedata in a multistream source
* Modify the images in-place. Changes made to the buffer will be reflected downstream,
  but changing the color format or resolution, or applying numpy transpose operations, is not permitted.

* Make a copy of the image, modify it and save to a file. These changes are made on the copy  
  of the image and will not be seen downstream.

* Extract the stream metadata, imagedata, which contains useful information about the
  frames in the batched buffer.
* Annotate detected objects within a certain confidence interval.
* Use OpenCV to draw bboxes on the image and save it to file.
* Use multiple sources in the pipeline.
* Use a uridecodebin so that any type of input (e.g. RTSP/File), any GStreamer
  supported container format, and any codec can be used as input.
* Configure the stream-muxer to generate a batch of frames and infer on the
  batch for better resource utilization.

The items marked in red are the features specific to this example; the rest appear in the other samples as well. To summarize: 1. Modify the source buffer in place, e.g. draw on it with OpenCV, but do not change the buffer's color format, resolution, etc. 2. Make a copy of the source buffer, then modify and save the copy; this does not affect the source buffer.
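
Both patterns from the summary fit into a single tiler/OSD sink-pad probe. A minimal sketch, assuming the pipeline converts frames to RGBA before this point (as the sample does with nvvideoconvert + a capsfilter); the drawing and file naming are illustrative:

import cv2
import numpy as np
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def tiler_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

        # 1. In-place modification: this numpy array maps the RGBA frame in the
        #    batched buffer, so drawing here is visible downstream. Changing its
        #    color format / resolution (or transposing it) is not allowed.
        frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        cv2.rectangle(frame, (50, 50), (200, 200), (0, 0, 255, 0), 2)

        # 2. Work on a copy: changes to the copy never reach the pipeline.
        frame_copy = np.array(frame, copy=True, order="C")
        frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)
        cv2.imwrite("frame_%d.jpg" % frame_meta.frame_num, frame_copy)

        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK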
