Testing the deepstream_tlt_apps samples on the NVIDIA Jetson TX2

deepstream_tlt_apps repository: https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps

Set up the environment and download the resources by following the official GitHub README (an external network connection is required to download the models).
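
For reference, a minimal setup sketch (the clone URL is the repository above; the DeepStream install path and CUDA version are assumptions for a JetPack 4.x image and must match your board):

# clone and build the sample application against the local DeepStream SDK
git clone https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps.git
cd deepstream_tlt_apps
export DS_SRC_PATH=/opt/nvidia/deepstream/deepstream-5.0   # assumed DeepStream install path
export CUDA_VER=10.2                                       # assumed JetPack CUDA version
make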

1. Run the frcnn sample:

./deepstream-custom -c pgie_frcnn_tlt_config.txt -i $DS_SRC_PATH/samples/streams/sample_720p.h264
 

Result:

Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Now playing: pgie_frcnn_tlt_config.txt
Opening in BLOCKING MODE 
Opening in BLOCKING MODE 
0:00:00.206507907 11350   0x559f4ef700 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 3 output network tensors.
0:00:25.414269236 11350   0x559f4ef700 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1748> [UID = 1]: serialize cuda engine to file: /home/nvidia/Public/deepstream_tlt_apps/models/frcnn/faster_rcnn_resnet10.etlt_b1_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 4
0   INPUT  kFLOAT input_image     3x272x480       
1   OUTPUT kFLOAT proposal        300x4x1         
2   OUTPUT kFLOAT dense_regress_td/BiasAdd 300x16x1x1      
3   OUTPUT kFLOAT dense_class_td/Softmax 300x5x1x1       

0:00:25.446320761 11350   0x559f4ef700 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:pgie_frcnn_tlt_config.txt sucessfully
Running...
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
NvMMLiteOpen : Block : BlockType = 4 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
H264: Profile = 66, Level = 0 
End of stream
Returned, stopping playback
Deleting pipeline
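
The first run spends roughly 25 seconds building the TensorRT engine and, as the log shows, serializes it to models/frcnn/faster_rcnn_resnet10.etlt_b1_gpu0_fp16.engine. To avoid rebuilding it on every run, the generated file can be referenced from the pgie config via the nvinfer model-engine-file property; a minimal snippet (assuming the standard nvinfer config format used by these samples, path taken from the log above):

[property]
model-engine-file=/home/nvidia/Public/deepstream_tlt_apps/models/frcnn/faster_rcnn_resnet10.etlt_b1_gpu0_fp16.engine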
To display the video output, add the -d parameter.
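
For example (the same frcnn command as above, with on-screen display enabled):

./deepstream-custom -c pgie_frcnn_tlt_config.txt -i $DS_SRC_PATH/samples/streams/sample_720p.h264 -d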

2. Run the yolov3 sample:

./deepstream-custom -c pgie_yolov3_tlt_config.txt -i $DS_SRC_PATH/samples/streams/sample_720p.h264

Error output:
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Now playing: pgie_yolov3_tlt_config.txt
Opening in BLOCKING MODE 
Opening in BLOCKING MODE 
0:00:00.213974058 11548   0x558b573d00 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: Validator error: FirstDimTile_2: Unsupported operation _BatchTilePlugin_TRT
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:01.777201415 11548   0x558b573d00 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
Segmentation fault (core dumped)
 

The other samples fail with the same error; the issue is still being worked on.
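
A likely cause, noted here as an assumption to verify against the upstream README: YOLOv3 (and several of the other TLT models) rely on TensorRT OSS plugins such as BatchTilePlugin_TRT, which the stock libnvinfer_plugin shipped with JetPack does not register, so the UFF parser rejects the FirstDimTile_2 node. The usual remedy is to build the open-source plugin library and swap it in. A rough sketch (the branch name, cmake flags and library version suffix below are assumptions; match them to your installed TensorRT):

# build the TensorRT OSS plugins natively on the TX2 (GPU_ARCHS=62 for TX2)
git clone -b <branch-matching-your-TensorRT-version> https://github.com/NVIDIA/TensorRT.git
cd TensorRT && git submodule update --init --recursive
mkdir -p build && cd build
cmake .. -DGPU_ARCHS=62 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu -DTRT_OUT_DIR=`pwd`/out
make nvinfer_plugin -j$(nproc)
# back up the stock plugin and install the rebuilt one (replace 7.x.y with your version)
sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.x.y /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.x.y.bak
sudo cp out/libnvinfer_plugin.so.7.x.y /usr/lib/aarch64-linux-gnu/
sudo ldconfig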

 
