Edge AI Application on Qualcomm Robotics RB5 — Age, Gender, and Emotion Estimation

This solution uses a webcam and the Qualcomm Robotics RB5 Edge AI board to build a real-time input-stream analysis and computer vision system. It provides an end-to-end application that can be quickly integrated into your project.

The source code is available here:

https://github.com/quic/sample-apps-for-robotics-platforms/tree/master/RB5/linux_kernel_5_x/AI-ML-apps/AI_Age_Gender_Emotion_Solution

Required Hardware

  1. An Ubuntu 20.04 PC
  2. Qualcomm Robotics RB5 development kit: https://developer.qualcomm.com/qualcomm-robotics-rb5-kit
  3. A USB camera
  4. A monitor

The application implements the following vision-based face detection solutions:

  1. Age detection
  2. Gender detection
  3. Emotion detection

This application supports age prediction for the face detection use case using SNPE. Refer to DesignDetails.md for implementation details and the steps to integrate the models into the application.

Environment Setup

1. Prerequisites

2. x86 Host Setup

2.1 Install the Qualcomm Neural Processing SDK (SNPE SDK)

Download: Qualcomm Neural Processing SDK for AI Tools & Resources Archive - Qualcomm Developer Network

The Qualcomm SNPE SDK provides tools for model conversion (ONNX to DLC), model quantization, and execution. Follow the steps in the SDK documentation to install it: Snapdragon Neural Processing Engine SDK: Main Page.
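
As a rough sketch of that workflow (tool names are from the SNPE 1.x toolchain; the model and file names below are placeholders, so check the SDK documentation for the exact options):

# Convert an ONNX model to a DLC container (placeholder file names)
snpe-onnx-to-dlc --input_network centerface.onnx --output_path centerface.dlc
# Quantize the DLC for the DSP runtime, using a list of raw calibration inputs
snpe-dlc-quantize --input_dlc centerface.dlc --input_list input_list.txt --output_dlc centerface_quantized.dlc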

3. Install an RTSP Streaming Server

This section demonstrates how to prepare a test video and use live555 as the RTSP streaming server. The RTSP stream simulates video captured by an IP camera. All installation steps are performed on the x86 host.

3.1 Prepare a Test Video

Prepare a test video. If it is in MP4, MKV, or another container format, it must be converted to a raw H.264 stream. The following steps demonstrate how to convert an MP4 file to raw H.264:

wget https://test-videos.co.uk/vids/bigbuckbunny/mp4/h264/1080/Big_Buck_Bunny_1080_10s_1MB.mp4

sudo apt install ffmpeg

ffmpeg -i Big_Buck_Bunny_1080_10s_1MB.mp4 -f h264 -vcodec libx264 Big_Buck_Bunny_1080_10s_1MB.264
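
To sanity-check the conversion, ffprobe (installed together with ffmpeg) should report a raw h264 video stream:

ffprobe -hide_banner Big_Buck_Bunny_1080_10s_1MB.264
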
3.2 Install the Live555 Server
wget http://www.live555.com/liveMedia/public/live555-latest.tar.gz
tar -zxvf live555-latest.tar.gz
cd live/
./genMakefiles linux-64bit
make -j4
cd ..

Copy the test video to the mediaServer folder:

cp Big_Buck_Bunny_1080_10s_1MB.264 ./live/mediaServer
cd ./live/mediaServer
./live555MediaServer

rtsp://192.168.4.111:8554/ is the RTSP URL:
  • "192.168.4.111" is the RTSP server IP address
  • 8554 is the default port
  • The video file name under the mediaServer folder completes the URL

In this case, the URL "rtsp://192.168.4.111:8554/Big_Buck_Bunny_1080_10s_1MB.264" is the video address.
3.3 Verify the Live555 Server

Download and install the VLC media player on a Windows desktop (VLC: Official site - Free multimedia solutions for all OS! - VideoLAN).

Launch the VLC player, choose "Media -> Open Network Stream", and enter the RTSP URL rtsp://192.168.4.111:8554/Big_Buck_Bunny_1080_10s_1MB.264.

Click Play to test whether the Live555 server works properly. Note: make sure the network address is reachable.
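
Alternatively, the stream can be verified from the Linux host itself with ffplay, which ships with ffmpeg (same URL as above):

ffplay rtsp://192.168.4.111:8554/Big_Buck_Bunny_1080_10s_1MB.264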

4. Deploy the Qualcomm SNPE SDK Libraries to the Target Device

Download the Qualcomm Neural Processing SDK (SNPE SDK) from: Qualcomm Neural Processing SDK for AI Tools & Resources Archive - Qualcomm Developer Network

Windows

cd snpe-1.68.0\snpe-1.68.0.3932
adb push lib\aarch64-ubuntu-gcc7.5\. /usr/lib/
adb push lib\aarch64-ubuntu-gcc7.5\libsnpe_dsp_domains_v2.so /usr/lib/rfsa/adsp/
adb push lib\dsp\. /usr/lib/rfsa/adsp/
adb push bin\aarch64-ubuntu-gcc7.5\snpe-net-run /usr/bin/

Linux

cd snpe-1.68.0/snpe-1.68.0.3932/
adb push lib/aarch64-ubuntu-gcc7.5/* /usr/lib/
adb push lib/aarch64-ubuntu-gcc7.5/libsnpe_dsp_domains_v2.so /usr/lib/rfsa/adsp/
adb push lib/dsp/* /usr/lib/rfsa/adsp/
adb push bin/aarch64-ubuntu-gcc7.5/snpe-net-run /usr/bin/

Verify SNPE version

adb shell
chmod +x /usr/bin/snpe-net-run
snpe-net-run --version
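
If snpe-net-run cannot find the libraries that were just pushed, pointing the loader and the DSP runtime at them usually helps; a minimal sketch, assuming the locations used in the push commands above:

# On the target (inside adb shell)
export LD_LIBRARY_PATH=/usr/lib:$LD_LIBRARY_PATH
export ADSP_LIBRARY_PATH="/usr/lib/rfsa/adsp;/system/lib/rfsa/adsp;/dsp"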

5. Install OpenCV 4.5.5

Download the OpenCV 4.5.5 source code from:

https://codeload.github.com/opencv/opencv/tar.gz/refs/tags/4.5.5

adb shell
wget https://codeload.github.com/opencv/opencv/tar.gz/refs/tags/4.5.5 -O opencv-4.5.5.tar.gz
tar  -zxvf opencv-4.5.5.tar.gz
cd ./opencv-4.5.5
Install dependencies:
apt install build-essential cmake unzip git pkg-config
apt install libjpeg-dev libpng-dev libtiff-dev
apt-get install libjsoncpp-dev libjson-glib-dev libgflags-dev libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libgstreamer-plugins-bad1.0-dev gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav gstreamer1.0-tools gstreamer1.0-x gstreamer1.0-alsa gstreamer1.0-gl gstreamer1.0-gtk3 gstreamer1.0-qt5 gstreamer1.0-pulseaudio
apt install libjasper-dev
apt-get install libeigen3-dev

If you get an error about libjasper-dev being unavailable, follow these steps:

wget http://ports.ubuntu.com/ubuntu-ports/pool/main/j/jasper/libjasper-dev_1.900.1-debian1-2.4ubuntu1.3_arm64.deb
dpkg -i libjasper-dev_1.900.1-debian1-2.4ubuntu1.3_arm64.deb

wget http://ports.ubuntu.com/ubuntu-ports/pool/main/j/jasper/libjasper1_1.900.1-debian1-2.4ubuntu1.3_arm64.deb
dpkg -i libjasper1_1.900.1-debian1-2.4ubuntu1.3_arm64.deb

Otherwise (if libjasper-dev installed successfully), continue with:

apt install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
apt install libxvidcore-dev libx264-dev

OpenCV's highgui module depends on the GTK library for GUI operations. Install GTK:

apt install libgtk-3-dev

Install the math optimization libraries (ATLAS and gfortran):

apt install libatlas-base-dev gfortran

Build and Install

mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
 -D CMAKE_INSTALL_PREFIX=/usr/local/opencv4.5 \
 -D OPENCV_ENABLE_NONFREE=ON \
 -D OPENCV_GENERATE_PKGCONFIG=YES \
 -D WITH_QT=ON \
 -D WITH_OPENGL=ON \
 -D BUILD_EXAMPLES=OFF \
 -D INSTALL_PYTHON_EXAMPLES=OFF \
 ..
make -j8
make install      
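
To confirm the installed build is discoverable by other projects (OPENCV_GENERATE_PKGCONFIG=YES was set above; the pkgconfig path under the install prefix is an assumption):

export PKG_CONFIG_PATH=/usr/local/opencv4.5/lib/pkgconfig:$PKG_CONFIG_PATH
pkg-config --modversion opencv4   # should print 4.5.5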

Configure, Build, and Run the Application

1. Clone the repository with the following commands

adb shell
cd /home/
git clone https://github.com/quic/sample-apps-for-robotics-platforms.git
cd sample-apps-for-robotics-platforms/RB5/linux_kernel_5_x/AI-ML-apps/AI_Age_Gender_Emotion_Solution/

2. Update the application configuration

The configuration for all solutions is described in the data/config.json file. Update this file to select the desired solution, model configuration, and input/output streams. The application can take an RTSP or camera stream as input and can write its output to an MP4 file or an HDMI monitor.

Table 1-1 shows all the configuration items:

Input configuration

| Key               | Value  | Description                                  |
|-------------------|--------|----------------------------------------------|
| input-config-name | string | Name of the input config                     |
| stream-type       | string | Input stream type: camera or rtsp            |
| stream-width      | int    | Width of the input stream                    |
| stream-height     | int    | Height of the input stream                   |
| SkipFrame         | int    | Number of frames to skip                     |
| camera-url        | string | RTSP stream path if the input stream is rtsp |

Model configuration

| Key            | Value  | Description                  |
|----------------|--------|------------------------------|
| model-name     | string | Name of the model            |
| model-path     | string | Path of the dlc file         |
| label-path     | string | Path of the label file       |
| runtime        | string | SNPE runtime (GPU, CPU, DSP) |
| nms-threshold  | float  | NMS threshold                |
| conf-threshold | float  | Confidence threshold         |
| labels         | int    | Number of labels             |
| input-layers   | string | Name of the input layers     |
| output-layers  | string | Name of the output layers    |
| output-tensors | string | Name of the output tensors   |

Solution configuration

| Key               | Value  | Description                                                                  |
|-------------------|--------|------------------------------------------------------------------------------|
| solution-name     | string | Name of the solution                                                          |
| model-name        | string | Name of the model configuration to be used                                    |
| input-config-name | string | Name of the input configuration to be used                                    |
| Enable            | bool   | 1 to enable and 0 to disable the solution                                     |
| output-type       | string | "filesink" to save the output as MP4; "wayland" to display on an HDMI monitor |
| output-path       | string | Path of the output file; used if output-type is filesink                      |

Example 1: Configuration for camera input with output on an HDMI monitor

{
    "input-configs":[
        {
            "input-config-name":"camera",
            "stream-type":"camera",
            "stream-width":1280,
            "stream-height":720,
            "SkipFrame":1,
            "fps-n":30,
            "fps-d":1
        }
    ],
    "model-configs":[
        {
            "model-name":"face-detect",
            "model-type":"centerface",
            "model-path":"../models/centerface_quantized.dlc",
            "runtime":"DSP",
            "nms-threshold":0.3,
            "conf-threshold":0.5,
            "grids":25200,
            "input-layers":[
                "input.1"
            ],
            "output-layers":[
                "Neuron_42",
                "Conv2d_40",
                "Conv2d_41",
                "Conv2d_42"
            ],
            "output-tensors":[
                "537",
                "538",
                "539",
                "540"
            ],
            "global-threshold":0.2
        },
 
        {
            "model-name":"age",
            "model-type":"googlenet",
            "model-path":"../models/age_caffe_quant.dlc",
            "runtime":"DSP",
            "nms-threshold":0.5,
            "conf-threshold":0.5,
            "grids":25200,
            "input-layers":[
                "data"
            ],
            "output-layers":[
                "prob"
            ],
            "output-tensors":[
                "prob"
            ],
            "global-threshold":0.2
        },
        {
            "model-name":"gender",
            "model-type":"gendernet",
            "model-path":"../models/gender_googlenet_quant.dlc",
            "runtime":"DSP",
            "nms-threshold":0.5,
            "conf-threshold":0.5,
            "grids":25200,
            "input-layers":[
                "input"
            ],
            "output-layers":[
                "loss3/loss3"
            ],
            "output-tensors":[
                "loss3/loss3_Y"
            ],
            "global-threshold":0.2
        },
        {
            "model-name":"emotion",
            "model-type":"FERplus",
            "model-path":"../models/Emotion1.dlc",
            "runtime":"DSP",
            "nms-threshold":0.5,
            "conf-threshold":0.5,
            "grids":25200,
            "input-layers":[
                "input.1"
            ],
            "output-layers":[
                "Gemm_26"
            ],
            "output-tensors":[
                "94"
            ],
            "global-threshold":0.2
        }
 
    ],
    "solution-configs":[
        {
            "solution-name":"face-detection",
            "model-name":["face-detect","age","gender","emotion"],
            "input-config-name":"camera",
            "Enable":1,
            "output-type":"wayland",
            "output-path":"/root/video.mp4"
        }
    ]
}
 

Example 2: Configuration for an RTSP input stream and output on the device

"input-configs":[
    {
        "input-config-name":"rtsp3",
        "stream-type":"rtsp",
        "camera-url":"rtsp://10.147.243.253:8554/crack_video.264",
        "SkipFrame":1
    }
],
"model-configs":[
        {
            "model-name":"face-detect",
            "model-type":"centerface",
            "model-path":"../models/centerface_quantized.dlc",
            "runtime":"DSP",
            "nms-threshold":0.3,
            "conf-threshold":0.5,
            "grids":25200,
            "input-layers":[
                "input.1"
            ],
            "output-layers":[
                "Neuron_42",
                "Conv2d_40",
                "Conv2d_41",
                "Conv2d_42"
            ],
            "output-tensors":[
                "537",
                "538",
                "539",
                "540"
            ],
            "global-threshold":0.2
        },
 
        {
            "model-name":"age",
            "model-type":"googlenet",
            "model-path":"../models/age_caffe_quant.dlc",
            "runtime":"DSP",
            "nms-threshold":0.5,
            "conf-threshold":0.5,
            "grids":25200,
            "input-layers":[
                "data"
            ],
            "output-layers":[
                "prob"
            ],
            "output-tensors":[
                "prob"
            ],
            "global-threshold":0.2
        },
        {
            "model-name":"gender",
            "model-type":"gendernet",
            "model-path":"../models/gender_googlenet_quant.dlc",
            "runtime":"DSP",
            "nms-threshold":0.5,
            "conf-threshold":0.5,
            "grids":25200,
            "input-layers":[
                "input"
            ],
            "output-layers":[
                "loss3/loss3"
            ],
            "output-tensors":[
                "loss3/loss3_Y"
            ],
            "global-threshold":0.2
        },
        {
            "model-name":"emotion",
            "model-type":"FERplus",
            "model-path":"../models/Emotion1.dlc",
            "runtime":"DSP",
            "nms-threshold":0.5,
            "conf-threshold":0.5,
            "grids":25200,
            "input-layers":[
                "input.1"
            ],
            "output-layers":[
                "Gemm_26"
            ],
            "output-tensors":[
                "94"
            ],
            "global-threshold":0.2
        }
 
    ],
    "solution-configs":[
        {
            "solution-name":"face-detection",
            "model-name":["face-detect","age","gender","emotion"],
            "input-config-name":"camera",
            "Enable":1,
            "output-type":"wayland",
            "output-path":"/root/video.mp4"
        }
    ]
}
 
Use model-name and input-config-name to select the model and the input stream, respectively.

3. Model integration

Push the model to the models directory of the application and update the config.json file with the model's output layers and output tensors. To find the output layer and output tensor nodes, open the model in the Netron app and click on the Conv layers. In centerface.onnx, the output nodes are onnx::536, 538, 539, and 540.

In centerface.dlc, the corresponding output layers and output tensors are 536 (Conv2d_39), 538 (Conv2d_40), 539 (Conv2d_41), and 540 (Conv2d_42).
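
If the Netron GUI is not at hand, the ONNX output node names can also be read directly from the graph with the onnx Python package (assuming it is installed on the host):

python3 -c "import onnx; m = onnx.load('centerface.onnx'); print([o.name for o in m.graph.output])"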

  "model-configs":[
      {
          "model-name":"model-name", --> Add model name here. It should match with the model name in solution config
          "model-type":"model type", --> Select type of the model.
          "model-path":"../models/model.dlc", --> Path of the quantized model
          "label-path":"../data/label.txt", --> Path to the label file
          "runtime":"DSP", 
          "labels":85, --> Update label here.
          "grids":25200,
          "nms-threshold":0.5,
          "conf-threshold":0.4,
          "input-layers":[
              "images" --> Open the model in netron.app and get the input-layers names.
          ],
          "output-layers":[ --> Refer the steps given above to know the output-layers and output-tensors
              "Conv_271",
              "Conv_305",
              "Conv_339"
          ],
          "output-tensors":[
              "443",
              "496",
              "549"
          ],
          "global-threshold":0.2
      },
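
The layer and tensor names inside a converted DLC can also be listed with the SDK's snpe-dlc-info tool, as a cross-check against what Netron shows (model path as in the examples above):

snpe-dlc-info -i ../models/centerface_quantized.dlc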

4. Execute the application

4.1 Build the application

adb shell
cd sample-apps-for-robotics-platforms/RB5/linux_kernel_5_x/AI-ML-apps/AI_Age_Gender_Emotion_Solution/
mkdir build 
cd build
cmake -DSNPE_SDK_BASE_DIR=<SDK Directory Path>/snpe-1.68.0.3932 ..
make -j8

4.2 Run the application

To display the output on a monitor, connect a monitor to the device with an HDMI cable, then follow the instructions below to enable Weston:

export XDG_RUNTIME_DIR=/run/user/root
cd build
./out/main -c ../data/config.json

Verify the Results

If the output type is filesink, check the directory specified by output-path for the saved file. Otherwise, check the output on the monitor connected over HDMI.
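
For a filesink output, one way to inspect the result is to pull the file to the host over adb and play it there (the file name follows the example output-path above; ffplay ships with ffmpeg):

adb pull /root/video.mp4 .
ffplay video.mp4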

Author: Liao Yangyang, Qualcomm engineer

For more Qualcomm developer resources and technical questions, visit the Qualcomm Developer Forum.
