ArduPilot Development --- Jetson Orin Nano, Part 1

0. Basic Concepts

  • Official site: https://www.nvidia.com/en-us/

  • Developers

  • Documentation

  • Getting Started

  • Jetson Developer Kits

  • User Guide

  • Forum
    For asking questions and sharing projects.

  • Blog
    For case studies. To ask questions, please use the forum instead.

  • As a beginner, which of Developers, Documentation, Getting Started, and User Guide should you read first?
    Beginners should start with Getting Started!
    Then read the User Guide.
    During development, consult Documentation as needed.
    Look through the use cases for development inspiration.
    For questions and troubleshooting, go to the forum.
    For written and video tutorials, browse Tech.
    For downloads, head to Download.

  • Jetson Orin Nano developer kit overview

  • A few useful navigation entry points
    Here you can find most of what you need: resources, tutorials, community, related projects, case studies, downloads, and so on.

    • The Resources section on the Developers page
    • The Community section under Developers ~> Jetson developer kits
  • Tutorials and video tutorials
    For the Jetson Orin Nano developer kit, navigate to the tutorials as follows:
    Developers
    ~> Browse by Solution Areas
    ~> Autonomous Machines
    ~> Jetson developer kits
    ~> Community
    ~> TechForum

  • What is the JetPack SDK?
    JetPack provides a complete development environment for building AI applications.
    JetPack contains the following components (modules):

    • An Ubuntu system
      This Ubuntu system consists of the modules below. The Ubuntu and Jetson Linux versions vary with the JetPack version.
      (1) Jetson Linux
      (2) the Linux kernel
      (3) the bootloader
      (4) the Ubuntu file system
    • GPU-accelerated computer vision libraries;
    • SDKs for AI computing, such as CUDA, TensorRT, cuDNN, and VPI;
    • sample programs and documentation.
  • And what is NVIDIA SDK Manager?
    As the name suggests, it is a tool for managing NVIDIA SDKs. For example, it can be used to install the JetPack SDK or an operating system onto a developer kit.

    • Works with the different families of NVIDIA developer kits;
    • Can install different operating systems onto a developer kit;
    • Configures a development environment quickly and conveniently;
    • A central location for all your software development needs.
    • Coordination of SDKs, library and driver combinations, and any needed compatibilities or dependencies.
    • Notifications for software updates keep your system up-to-date with the latest releases.
  • SD card image
    An image file used for the initial OS installation on a developer kit; in essence, it is JetPack.

  • Pre-configured SD card image
    A customized image file for the initial OS installation, which can come with packages such as ROS and OpenCV pre-installed.

  • What onboard AI platforms does Jetson offer?
    For example, the Jetson Orin Nano developer kit, among others…

  • The Jetson ecosystem
    Vendors partnering with the Jetson platform, compatible sensors, hardware devices…

  • Indoor VSLAM with ROS + an Intel RealSense camera
    Jetson Isaac ROS Visual SLAM Tutorial

1. OS Installation (Flashing)

1.1 Flashing with an SD Card

  • Reference: Getting Started
  • In brief: write the SD card image onto an SD card, then boot the developer kit from the card to perform the initial OS installation (a command-line sketch follows below).
  • In essence: this installs the JetPack SDK onto the Jetson board via the SD card; the JetPack SDK includes an Ubuntu system adapted to NVIDIA developer kits.
  • This is the same process, with the same result, as "Installing JetPack by SD Card Image" later in this article.
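
For reference, the image can be written from a Linux host with standard tools; a minimal sketch, where the image file name and /dev/sdX are placeholders (verify the device node with lsblk first, since writing to the wrong disk destroys its contents):

lsblk                                                                 # identify the SD card device node
sudo dd if=sd-blob.img of=/dev/sdX bs=1M status=progress conv=fsync   # write the (unzipped) image
sync                                                                  # flush buffers before removing the card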

1.2 Flashing with an SSD

1.3 How to Install Vanilla Ubuntu?

How do you install vanilla Ubuntu instead of Jetson Linux?
To be continued…

1.4 Expanding Storage

sudo apt install gparted
sudo gparted

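If the board is headless, the same expansion can be done over SSH; a rough sketch, assuming the root filesystem is partition 1 of /dev/mmcblk0 (an assumption; check yours with lsblk):

sudo apt install cloud-guest-utils   # provides growpart
sudo growpart /dev/mmcblk0 1         # grow partition 1 into the free space
sudo resize2fs /dev/mmcblk0p1        # grow the ext4 filesystem to match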

2. JetPack SDK

2.1 Concepts

  • JetPack overview
  • JetPack documentation
  • The JetPack SDK provides a complete development environment for building AI applications.
  • JetPack contains the following components (modules):
    • An Ubuntu system
      This Ubuntu system consists of the modules below. The Ubuntu and Jetson Linux versions vary with the JetPack version.
      • Jetson Linux
      • the Linux kernel
      • the bootloader
      • the Ubuntu file system
    • GPU-accelerated computer vision libraries;
    • SDKs for AI computing, such as CUDA, TensorRT, cuDNN, and VPI;
    • sample programs and documentation;
  • Difference between full JetPack and runtime JetPack:
    Full JetPack includes samples and documentation, which helps development work, so it suits development environments. Runtime JetPack omits the samples and documentation, is leaner, and suits production environments (the corresponding apt meta-packages are sketched at the end of this section).
  • How the Ubuntu system, the Linux kernel, Jetson Linux, and the L4T version relate
    Ubuntu, Red Hat, Fedora, and so on are Linux distributions.
    Linux is an open-source operating-system kernel, and Ubuntu is one of the operating systems built on that kernel. The often-quoted "Linux version" really refers to the kernel version.
    Ubuntu = Linux kernel + desktop environment.
    Jetson Linux is NVIDIA's Linux for Jetson boards, in essence a Linux kernel customized for the hardware model.
    L4T is short for Linux for Tegra; since the Jetson series uses the Tegra architecture, L4T can be understood as the Linux operating system customized for Jetson, concretely a customized Ubuntu.
    Therefore the L4T version refers to the Jetson Linux version.
  • JetPack Archive release notes
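
As a footnote to the full/runtime distinction above, the split maps onto apt meta-packages; a minimal sketch, assuming the package names from NVIDIA's package-management documentation (nvidia-jetpack-runtime being the leaner variant):

sudo apt install nvidia-jetpack            # full JetPack: runtime plus dev tools, samples, docs
sudo apt install nvidia-jetpack-runtime    # runtime JetPack: runtime libraries only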

2.2 Installation and Upgrades

  • Installing the complete JetPack with apt
    Jetson Linux 36.2 must already be installed on the developer kit.
    Reference: Package Management Tool
    For example:
    sudo apt show nvidia-jetpack
    sudo apt update
    sudo apt install nvidia-jetpack
    
  • Selectively installing some JetPack packages with apt
    An Ubuntu system must already be installed on the developer kit.
    Reference: JetPack Debian Packages on Host
    For example:
    sudo apt-get install cuda-toolkit-12-2 cuda-cross-aarch64-12-2 nvsci libnvvpi3 vpi3-dev vpi3-cross-aarch64-l4t python3.9-vpi3 vpi3-samples vpi3-python-src nsight-systems-2023.4.3 nsight-graphics-for-embeddedlinux-2023.3.0.0
    
  • List of JetPack Debian packages
    Reference: List of JetPack Debian Packages
  • Installing JetPack from an SD card image
    Ubuntu does not need to be installed beforehand, because JetPack includes an Ubuntu system.
    Reference: Installing JetPack by SD Card Image
  • Installing JetPack with NVIDIA SDK Manager
    Reference: Installing JetPack by NVIDIA SDK Manager
  • Upgrading
    Upgrading from JetPack 5 to JetPack 6 with apt is not supported.
    Use NVIDIA SDK Manager, or uninstall and reinstall, to upgrade (a version check is sketched below).
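
Before picking an upgrade path, it helps to check what is currently installed; a minimal sketch, relying on the /etc/nv_tegra_release file that Jetson Linux ships:

head -n 1 /etc/nv_tegra_release                       # L4T (Jetson Linux) version string
apt show nvidia-jetpack 2>/dev/null | grep Version    # installed JetPack meta-package version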

2.3 Related Links

3. NVIDIA SDK Manager

Reference: NVIDIA Software Development Kit (SDK) Manager
NVIDIA SDK Manager, as the name suggests, is a tool for managing NVIDIA SDKs; for example, it can be used to install the JetPack SDK or an operating system onto a developer kit. Its uses include:

  • Works with the different families of NVIDIA developer kits;
  • Can install different operating systems onto a developer kit;
  • Configures a development environment quickly and conveniently;
  • A central location for all your software development needs.
  • Coordination of SDKs, library and driver combinations, and any needed compatibilities or dependencies.
  • Notifications for software updates keep your system up-to-date with the latest releases.

To be continued…

4. SSH Remote Login, File Transfer, and VNC Remote Desktop

  • SSH (Secure Shell, also Secure Socket Shell) is a Unix command interface and protocol for securely accessing remote machines; network administrators use it widely to manage networks and other services remotely.
  • Purpose here: from Windows, log in to the onboard computer's shell over the network (e.g., with PuTTY), and transfer files between the local Windows machine and the remote Linux machine.
    Common remote-login tools include PuTTY, OpenSSH, Xshell, and others. Which is best?
    First recommendation: MobaXterm. Pros: the Home edition is free, it can run SSH over USB or LAN, and it has an IP scanner. Con: the Professional edition is paid.
    Second recommendation: PuTTY. Pros: open source and simple. Con: SSH over LAN only.
    MobaXterm website
    PuTTY website

4.1 SSH with MobaXterm

  • SSH over a USB connection
    • Download and install MobaXterm.
    • Connect the PC to the Jetson Orin Nano developer kit over USB.
    • Open MobaXterm.
    • Session ~> SSH ~> Remote host is the fixed address 192.168.55.1 ~> OK
  • SSH over a LAN
    • Connect the PC and the Jetson Orin Nano developer kit to the same LAN via Ethernet or WiFi.
    • Find the Jetson Orin Nano's IP address. Depending on the situation, there are three ways:
      • Open MobaXterm ~> Tools ~> Network scanner: a device showing a green check mark in the SSH column accepts SSH.
      • Log in to the router and look it up.
      • If the Jetson Orin Nano has a display attached, run the ifconfig command.
    • Open MobaXterm.
    • Session ~> SSH ~> Remote host: the Jetson Orin Nano's IP address ~> OK
  • If you are running the factory image from the Yahboom SSD, the account is jetson and the password is yahboom. (A plain command-line login is sketched below.)
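
For reference, the same logins also work from any terminal with an OpenSSH client; a minimal sketch, where the username jetson matches the Yahboom image above (substitute your own):

ssh jetson@192.168.55.1      # over the USB device-mode connection
ssh jetson@<board-lan-ip>    # over the LAN, using the address found above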

Notes

  • The ll command is not recognized in MobaXterm's SSH window? Folder names are not shown in blue?
    Run source ~/.bashrc.

4.2 VNC Remote Desktop with MobaXterm + x11vnc

What can VNC do? VNC (Virtual Network Computing) works like remote-desktop tools on Windows such as Sunlogin (向日葵): it provides remote desktop control.

  • Install and configure x11vnc
    Reference: https://blog.csdn.net/jiexijihe945/article/details/126222983
  • Start the VNC service
    • (1) First establish an SSH connection with MobaXterm.
    • (2) Start the VNC service:
      sudo systemctl start x11vnc
      
    • (3) Check the VNC status:
      systemctl status x11vnc
      
  • Remote desktop in MobaXterm
    USB connection: IP 192.168.55.1
    LAN connection: IP of the developer kit
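
To have VNC come up on every boot instead of starting it by hand, a small sketch assuming the same x11vnc systemd unit configured above:

sudo systemctl enable x11vnc    # start automatically at boot
systemctl is-enabled x11vnc     # verify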

4.3 File Transfer

Drag files from Windows into the blue-boxed area (MobaXterm's file browser pane). Command-line alternatives are sketched below.

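Alternatively, transfers work from any terminal with the standard OpenSSH tools; the paths and username here are examples:

scp myfile.txt jetson@192.168.55.1:/home/jetson/         # push a file to the board
scp jetson@192.168.55.1:/home/jetson/log.txt .           # pull a file back
rsync -av --progress data/ jetson@192.168.55.1:~/data/   # sync a whole directory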

4.4 SSH from VSCode and Opening a Remote Directory

  • The remote Linux host must have openssh-server installed:
    sudo apt-get install openssh-server
  • In VSCode on Windows, do the following (a sample SSH config entry is sketched after this list):
    • Install the Remote - SSH extension.
    • Connect for the first time.
    • Review the configuration.
    • Connect.
    • Open the working directory.
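
The Remote - SSH extension reads the standard OpenSSH config file, so a host alias saves retyping; a minimal sketch in which the alias orin-nano is an example (on Windows the file is %USERPROFILE%\.ssh\config):

cat >> ~/.ssh/config <<'EOF'
Host orin-nano
    HostName 192.168.55.1
    User jetson
EOF
ssh orin-nano    # now works from a terminal and appears in VSCode's host list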

5. Installing and Using jtop

https://www.yahboom.com/build.html?id=6453&cid=590
In short, jtop comes from the jetson-stats Python package; see the sketch below.
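
A minimal install sketch, assuming pip3 is available; jtop is the monitor bundled with the jetson-stats package, and it needs a re-login or reboot before the first run:

sudo apt install python3-pip
sudo pip3 install -U jetson-stats
sudo reboot    # or log out and back in, so the jtop service starts
jtop           # launch the monitor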

6. Installing Intel RealSense SDK 2.0

Reference: https://blog.csdn.net/weixin_43321489/article/details/138284219
Disconnect the D455 camera from the developer kit first.

# Update the system. Best skipped; it can affect other packages.
#sudo apt-get update && sudo apt-get upgrade && sudo apt-get dist-upgrade
# Install build tools
sudo apt-get install git wget cmake build-essential
# Install dependency libraries
sudo apt-get install git libssl-dev libusb-1.0-0-dev pkg-config libgtk-3-dev 
# If you want to build the SDK examples with OpenGL support, install these dependencies
sudo apt-get install libglfw3-dev libgl1-mesa-dev libglu1-mesa-dev
# Download the source; pick a suitable version
cd ~/shd
git clone https://github.com/IntelRealSense/librealsense.git
cd librealsense
# RealSense permissions script
./scripts/setup_udev_rules.sh
# Build
mkdir build && cd build
cmake .. -DBUILD_EXAMPLES=true -DCMAKE_BUILD_TYPE=release -DBUILD_WITH_CUDA=true
make -j$(($(nproc)-1))
# Install; the default prefix is /usr/local
sudo make install

Verify the installation:

rs-depth
realsense-viewer
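
Once the camera is reconnected, another quick check is listing the devices the SDK sees (rs-enumerate-devices is installed alongside the other SDK tools):

rs-enumerate-devices | head    # lists detected RealSense devices and streams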

7. Installing ViSP

References: Installing ViSP on Ubuntu; Installing ViSP on Jetson

  • Install some required third-party libraries:
# Build tools
sudo apt-get install build-essential cmake-curses-gui git wget
# OpenCV
sudo apt-get install libopencv-dev
# X11
sudo apt-get install libx11-dev
# Matrix computation
sudo apt-get install liblapack-dev libblas-dev libeigen3-dev libopenblas-dev libatlas-base-dev libgsl-dev 
# Video and image acquisition
sudo apt-get install libv4l-dev libzbar-dev libpthread-stubs0-dev nlohmann-json3-dev
# USB video streams: libdc1394-dev or libdc1394-22-dev. Follow apt's hints to pick the *-dev or *-22-dev variant; if installing one would remove opencv or other packages, it is incompatible, so install the other one.
# sudo apt-get install libdc1394-22-dev

Note: if the OpenCV version available through apt is too old, you can build OpenCV from source instead (see "Installing ViSP on Jetson"). This walkthrough uses OpenCV built from source. Next, build and install MAVSDK, which ViSP can use for its robot module:

cd ~/shd
sudo apt-get update
sudo apt-get install build-essential cmake git
git clone -b v1.4 https://github.com/mavlink/MAVSDK.git
cd MAVSDK
git branch
git submodule update --init --recursive
cmake -DCMAKE_BUILD_TYPE=Release -Bbuild/default -H.
sudo cmake --build build/default --target install


  • Download the ViSP source and build it:
echo "export VISP_WS=$HOME/shd/visp-ws" >> ~/.bashrc
source ~/.bashrc
mkdir -p $VISP_WS
cd ~/shd/visp-ws
git clone https://github.com/lagadic/visp.git
mkdir -p ~/shd/visp-ws/visp-build
cd ~/shd/visp-ws/visp-build
cmake ../visp -DUSE_BLAS/LAPACK=GSL
make -j$(nproc)

Set the VISP_DIR environment variable:

echo "export VISP_DIR=$VISP_WS/visp-build" >> ~/.bashrc
source ~/.bashrc

Verify the build:

cd ~/shd/visp-ws/visp-build/tutorial/image
./tutorial-image-display-scaled-auto

For reference, the cmake configuration output:

jetson@unbutu:~/shd/visp-ws/visp-build$ cmake ../visp -DUSE_BLAS/LAPACK=GSL
-- Detected processor: aarch64
-- Could NOT find Python3 (missing: Python3_LIBRARIES Python3_INCLUDE_DIRS Development) (found version "3.9.5")
-- Required CMake version for Python bindings is 3.19.0,
  but you have 3.16.3.
  Python bindings generation will be deactivated.

-- Could NOT find MKL (missing: MKL_LIBRARIES MKL_INCLUDE_DIRS MKL_INTERFACE_LIBRARY MKL_SEQUENTIAL_LAYER_LIBRARY MKL_CORE_LIBRARY)
-- Found OpenBLAS libraries: /usr/lib/aarch64-linux-gnu/liblapack.so
-- Found OpenBLAS include: /usr/include/aarch64-linux-gnu
-- Found Atlas (include: /usr/include/aarch64-linux-gnu, library: /usr/lib/aarch64-linux-gnu/libatlas.so)
-- Could NOT find OpenGL (missing: OPENGL_opengl_LIBRARY OPENGL_glx_LIBRARY OPENGL_INCLUDE_DIR)
-- Try C99 flag = [-std=gnu99]
-- Performing Test C99_FLAG_DETECTED
-- Performing Test C99_FLAG_DETECTED - Success
-- mavsdk found
-- v4l2 found
-- Could NOT find FTIITSDK (missing: FTIITSDK_FT_LIBRARY FTIITSDK_COMMUNICATION_LIBRARY FTIITSDK_INCLUDE_DIR)
-- openmp found
-- eigen3 found
Package libusb was not found in the pkg-config search path.
Perhaps you should add the directory containing `libusb.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libusb' found
-- libusb_1 found
-- realsense2 found
-- threads found
-- xml2 found
-- Found CUDA: /usr/local/cuda-11.4 (found suitable exact version "11.4")
-- opencv found
-- zlib found
-- x11 found
-- jpeg found
-- png found
-- zbar found
-- Found CUDA: /usr/local/cuda-11.4 (found version "11.4")
-- tensorrt found
-- Performing Test HAVE_FUNC_ISNAN
-- Performing Test HAVE_FUNC_ISNAN - Success
-- Performing Test HAVE_FUNC_STD_ISNAN
-- Performing Test HAVE_FUNC_STD_ISNAN - Success
-- Performing Test HAVE_FUNC__ISNAN
-- Performing Test HAVE_FUNC__ISNAN - Failed
-- Performing Test HAVE_FUNC_ISINF
-- Performing Test HAVE_FUNC_ISINF - Success
-- Performing Test HAVE_FUNC_STD_ISINF
-- Performing Test HAVE_FUNC_STD_ISINF - Success
-- Performing Test HAVE_FUNC_ROUND
-- Performing Test HAVE_FUNC_ROUND - Success
-- Performing Test HAVE_FUNC_STD_ROUND
-- Performing Test HAVE_FUNC_STD_ROUND - Success
-- Performing Test HAVE_FUNC_ERFC
-- Performing Test HAVE_FUNC_ERFC - Success
-- Performing Test HAVE_FUNC_STD_ERFC
-- Performing Test HAVE_FUNC_STD_ERFC - Success
-- Performing Test HAVE_FUNC_ISFINITE
-- Performing Test HAVE_FUNC_ISFINITE - Success
-- Performing Test HAVE_FUNC_STD_ISFINITE
-- Performing Test HAVE_FUNC_STD_ISFINITE - Success
-- Performing Test HAVE_FUNC__FINITE
-- Performing Test HAVE_FUNC__FINITE - Failed
-- Performing Test HAVE_FUNC_LOG1P
-- Performing Test HAVE_FUNC_LOG1P - Success
-- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE)
-- Could NOT find JNI (missing: JAVA_AWT_LIBRARY JAVA_JVM_LIBRARY JAVA_INCLUDE_PATH JAVA_INCLUDE_PATH2 JAVA_AWT_INCLUDE_PATH)
--
-- ==========================================================
-- General configuration information for ViSP 3.6.1
--
--   Version control:               v3.6.0-880-g659128117
--
--   Platform:
--     Timestamp:                   2024-05-13T15:47:41Z
--     Host:                        Linux 5.10.104-tegra aarch64
--     CMake:                       3.16.3
--     CMake generator:             Unix Makefiles
--     CMake build tool:            /usr/bin/make
--     Configuration:               Release
--
--   System information:
--     Number of CPU logical cores: 6
--     Number of CPU physical cores: 1
--     Total physical memory (in MiB): 6480
--     OS name:                     Linux
--     OS release:                  5.10.104-tegra
--     OS version:                  #1 SMP PREEMPT Sun Mar 19 07:55:28 PDT 2023
--     OS platform:                 aarch64
--     CPU name:                    Unknown family
--     Is the CPU 64-bit?           yes
--     Does the CPU have FPU?       no
--     CPU optimization:            NEON
--
--   C/C++:
--     Built as dynamic libs?:      yes
--     C++ Compiler:                /usr/bin/c++  (ver 9.4.0)
--     C++ flags (Release):         -Wall -Wextra -fopenmp -pthread -std=c++17 -fvisibility=hidden -fPIC -O3 -DNDEBUG
--     C++ flags (Debug):           -Wall -Wextra -fopenmp -pthread -std=c++17 -fvisibility=hidden -fPIC -g
--     C Compiler:                  /usr/bin/cc
--     C flags (Release):           -Wall -Wextra -fopenmp -pthread -std=c++17 -fvisibility=hidden -fPIC -O3 -DNDEBUG
--     C flags (Debug):             -Wall -Wextra -fopenmp -pthread -std=c++17 -fvisibility=hidden -fPIC -g
--     Linker flags (Release):
--     Linker flags (Debug):
--     Use cxx standard:            17
--
--   ViSP modules:
--     To be built:                 core gui imgproc io java_bindings_generator klt me sensor ar blob robot visual_features vs vision detection mbt tt tt_mi
--     Disabled:                    -
--     Disabled by dependency:      -
--     Unavailable:                 java
--
--   Python 3:
--     Interpreter:                 /usr/bin/python3.9 (ver 3.9.5)
--
--   Java:
--     ant:                         no
--     JNI:                         no
--
--   Python3 bindings:              no
--     Requirements:
--       Python version > 3.7.0:    python not found or too old (3.9.5)
--       Python in Virtual environment or conda:
--                                  ok
--       Pybind11 found:            failed
--       CMake > 3.19.0:            failed (3.16.3)
--       C++ standard > 201703L:    ok (201703L)
--
--   Build options:
--     Build deprecated:            yes
--     Build with moment combine:   no
--
--   OpenCV:
--     Version:                     4.9.0
--     Modules:                     calib3d core dnn features2d flann gapi highgui imgcodecs imgproc ml objdetect photo stitching video videoio alphamat aruco bgsegm bioinspired ccalib cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev datasets dnn_objdetect dnn_superres dpm face fuzzy hdf hfs img_hash intensity_transform line_descriptor mcc optflow phase_unwrapping plot quality rapid reg rgbd saliency shape signal stereo structured_light superres surface_matching text tracking videostab wechat_qrcode xfeatures2d ximgproc xobjdetect xphoto
--     OpenCV dir:                  /usr/local/lib/cmake/opencv4
--
--   Mathematics:
--     Blas/Lapack:                 yes
--     \- Use MKL:                  no
--     \- Use OpenBLAS:             no
--     \- Use Atlas:                no
--     \- Use Netlib:               no
--     \- Use GSL:                  yes (ver 2.5)
--     \- Use Lapack (built-in):    no
--     Use Eigen3:                  yes (ver 3.3.7)
--     Use OpenCV:                  yes (ver 4.9.0)
--
--   Simulator:
--     Ogre simulator:
--     \- Use Ogre3D:               no
--     \- Use OIS:                  no
--     Coin simulator:
--     \- Use Coin3D:               no
--     \- Use SoWin:                no
--     \- Use SoXt:                 no
--     \- Use SoQt:                 no
--     \- Use Qt5:                  no
--     \- Use Qt4:                  no
--     \- Use Qt3:                  no
--
--   Media I/O:
--     Use JPEG:                    yes (ver 80)
--     Use PNG:                     yes (ver 1.6.37)
--     \- Use ZLIB:                 yes (ver 1.2.11)
--     Use OpenCV:                  yes (ver 4.9.0)
--     Use stb_image (built-in):    yes (ver 2.27.0)
--     Use TinyEXR (built-in):      yes (ver 1.0.2)
--     Use simdlib (built-in):      yes
--
--   Real robots:
--     Use Afma4:                   no
--     Use Afma6:                   no
--     Use Franka:                  no
--     Use Viper650:                no
--     Use Viper850:                no
--     Use ur_rtde:                 no
--     Use Kinova Jaco:             no
--     Use aria (Pioneer):          no
--     Use PTU46:                   no
--     Use Biclops PTU:             no
--     Use Flir PTU SDK:            no
--     Use MAVSDK:                  yes (ver 1.4.18)
--     Use Parrot ARSDK:            no
--     \-Use ffmpeg:                no
--     Use Virtuose:                no
--     Use qbdevice (built-in):     yes (ver 2.6.0)
--     Use takktile2 (built-in):    yes (ver 1.0.0)
--     Use pololu (built-in):       yes (ver 1.0.0)
--
--   GUI:
--     Use X11:                     yes
--     Use GTK:                     no
--     Use OpenCV:                  yes (ver 4.9.0)
--     Use GDI:                     no
--     Use Direct3D:                no
--
--   Cameras:
--     Use DC1394-2.x:              no
--     Use CMU 1394:                no
--     Use V4L2:                    yes (ver 1.18.0)
--     Use directshow:              no
--     Use OpenCV:                  yes (ver 4.9.0)
--     Use FLIR Flycapture:         no
--     Use Basler Pylon:            no
--     Use IDS uEye:                no
--
--   RGB-D sensors:
--     Use Realsense:               no
--     Use Realsense2:              yes (ver 2.55.1)
--     Use Occipital Structure:     no
--     Use Kinect:                  no
--     \- Use libfreenect:          no
--     \- Use libusb-1:             yes (ver 1.0.23)
--     \- Use std::thread:          yes
--     Use PCL:                     no
--     \- Use VTK:                  no
--
--   F/T sensors:
--     Use atidaq (built-in):       no
--     Use comedi:                  no
--     Use IIT SDK:                 no
--
--   Mocap:
--     Use Qualisys:                no
--     Use Vicon:                   no
--
--   Detection:
--     Use zbar:                    yes (ver 0.23)
--     Use dmtx:                    no
--     Use AprilTag (built-in):     yes (ver 3.1.1)
--     \- Use AprilTag big family:  no
--
--   Misc:
--     Use Clipper (built-in):      yes (ver 6.4.2)
--     Use pugixml (built-in):      yes (ver 1.9.0)
--     Use libxml2:                 yes (ver 2.9.10)
--     Use json (nlohmann):         no
--
--   Optimization:
--     Use OpenMP:                  yes
--     Use std::thread:             yes
--     Use pthread:                 yes
--     Use pthread (built-in):      no
--     Use simdlib (built-in):      yes
--
--   DNN:
--     Use CUDA Toolkit:            yes (ver 11.4)
--     Use TensorRT:                yes (ver 8.5.2.2)
--
--   Documentation:
--     Use doxygen:                 no
--     \- Use mathjax:              no
--
--   Tests and samples:
--     Use catch2 (built-in):       yes (ver 2.13.7)
--     Tests:                       yes
--     Apps:                        yes
--     Demos:                       yes
--     Examples:                    yes
--     Tutorials:                   yes
--     Dataset found:               no
--
--   Library dirs:
--     Eigen3 include dir:          /usr/lib/cmake/eigen3
--     OpenCV dir:                  /usr/local/lib/cmake/opencv4
--
--   Install path:                  /usr/local
--
-- ==========================================================
-- Configuring done
-- Generating done
-- Build files have been written to: /home/jetson/shd/visp-ws/visp-build
jetson@unbutu:~/shd/visp-ws/visp-build$

8. Installing VSCode

Download vscode.deb (pick the Arm64 build for Jetson).
Copy vscode.deb to Ubuntu with MobaXterm.
Install VSCode: sudo dpkg -i vscode.deb
Open VSCode: type code on the command line.
Stop files from replacing the current editor tab: search Settings for workbench.editor.enablePreview and untick it.

9. ViSP: Grabbing D455 Camera Frames via the RealSense SDK

Run the ViSP example:

cd ~/shd/visp-ws/visp-build/tutorial/grabber
./tutorial-grabber-realsense --seqname I%04d.pgm --record 1

Note: --seqname and --record must be used together, otherwise an "invalid directory" error is reported.

10. ViSP: Grabbing D455 Camera Frames via OpenCV

Running the tutorial-grabber-opencv example fails. The tentative diagnosis is that the kernel-patching step was skipped during the RealSense SDK installation; a sketch of that step follows.
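
If that diagnosis is correct, the librealsense source tree ships a patch script for Jetson L4T kernels; a sketch, run from the chapter 6 checkout with the camera unplugged (expect kernel modules to be rebuilt, and reboot afterwards):

cd ~/shd/librealsense
./scripts/patch-realsense-ubuntu-L4T.sh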

11. ViSP: Calibrating the D455 Camera Intrinsics

Reference: 8.4 Camera intrinsic calibration

  • Capture a suitable number of images; 10 to 15 is ideal.
cd ~/shd/visp-ws/visp-build/tutorial/grabber
mkdir ~/shd/visp-ws/cali-cam-d455
./tutorial-grabber-realsense --seqname ~/shd/visp-ws/cali-cam-d455/chessboard-%02d.jpg --record 1

This also generates a sample calibration file, ~/shd/visp-ws/cali-cam-d455/camera.xml. Its parameters are not correct; delete this sample file before calibrating.

  • Calibrate:
cd ~/shd/visp-ws/cali-cam-d455
cp ~/shd/visp-ws/visp-build/example/calibration/calibrate-camera ./
cp ~/shd/visp-ws/visp-build/example/calibration/default-chessboard.cfg ./
rm camera.xml
./calibrate-camera default-chessboard.cfg

If the calibration program fails with an error at this point, the cause is that no third-party library such as MKL, OpenBLAS, Atlas, GSL, or Netlib is being used to accelerate the matrix computations. After cmake ../visp -DUSE_BLAS/LAPACK=GSL completes, make sure the configuration summary reports GSL as the Blas/Lapack backend.
OpenBLAS, Atlas, and Netlib were all tried without success; GSL works; MKL was not tried.
For installing and selecting MKL, OpenBLAS, Atlas, GSL, or Netlib, see: 8.1 Basic matrix and vector operations

  • Install GSL, a third-party library that can accelerate matrix computations:
sudo apt install libgsl-dev
  • Select GSL for matrix acceleration, then rebuild ViSP, or build only the calibrate-camera example:
cd ~/shd/visp-ws/visp-build
rm -rf *
cmake ../visp -DUSE_BLAS/LAPACK=GSL
make -j$(nproc) calibrate-camera
or
make -j$(nproc)
  • Successful calibration:
cd ~/shd/visp-ws/cali-cam-d455
rm calibrate-camera camera.xml
cp ~/shd/visp-ws/visp-build/example/calibration/calibrate-camera ./
./calibrate-camera default-chessboard.cfg

The calibration results px, py, u0, v0 are saved in ~/shd/visp-ws/cali-cam-d455/camera.xml:

<?xml version="1.0"?>
<root>
  <!--This file stores intrinsic camera parameters used
   in the vpCameraParameters Class of ViSP available
   at https://visp.inria.fr/download/ .
   It can be read with the parse method of
   the vpXmlParserCamera class.-->
  <camera>
    <!--Name of the camera-->
    <name>Camera</name>
    <!--Size of the image on which camera calibration was performed-->
    <image_width>640</image_width>
    <image_height>480</image_height>
    <!--Intrinsic camera parameters computed for each projection model-->
    <model>
      <!--Projection model type-->
      <type>perspectiveProjWithoutDistortion</type>
      <!--Pixel ratio-->
      <px>398.84641712787538</px>
      <py>398.68682503969529</py>
      <!--Principal point-->
      <u0>322.99756859139262</u0>
      <v0>248.6004154301479</v0>
    </model>
    <model>
      <!--Projection model type-->
      <type>perspectiveProjWithDistortion</type>
      <!--Pixel ratio-->
      <px>388.57370857483153</px>
      <py>388.41874618866353</py>
      <!--Principal point-->
      <u0>325.65955938183913</u0>
      <v0>248.12836717904835</v0>
      <!--Undistorted to distorted distortion parameter-->
      <kud>-0.039473814232162549</kud>
      <!--Distorted to undistorted distortion parameter-->
      <kdu>0.040460699526659552</kdu>
    </model>
    <!--Additional information-->
    <additional_information>
      <date>2024/05/13 11:40:55</date>
      <nb_calibration_images>7</nb_calibration_images>
      <calibration_pattern_type>Chessboard</calibration_pattern_type>
      <board_size>9x6</board_size>
      <square_size>0.025</square_size>
      <global_reprojection_error>
        <without_distortion>0.191072</without_distortion>
        <with_distortion>0.105808</with_distortion>
      </global_reprojection_error>
      <camera_poses>
        <cMo>-0.1224468879  -0.09066443142  0.2851836326  0.396193237  0.0252419678  0.05725736687</cMo>
        <cMo>-0.1122749815  -0.07131476187  0.2850666327  0.07173676775  -0.4428100689  0.03359819038</cMo>
        <cMo>-0.1252028762  -0.09066211225  0.2824452077  0.1124871165  -0.555598502  0.05762187366</cMo>
        <cMo>-0.1741429871  -0.05962608505  0.3464753725  0.2196880035  -0.3503487077  -0.1767877418</cMo>
        <cMo>-0.02636216889  -0.0568963793  0.4076841021  0.01830059873  0.6156466408  0.210153884</cMo>
        <cMo>-0.06688647124  -0.1001443402  0.2733689325  0.2880379529  0.01325446236  0.4713165184</cMo>
        <cMo>-0.1100997782  -0.1000982268  0.3432905219  -0.07095469824  0.2366782342  0.393682507</cMo>
        <cMo_dist>-0.1242547143  -0.09017472147  0.2748890938  0.3974270231  0.0157942535  0.05879537619</cMo_dist>
        <cMo_dist>-0.1142069932  -0.0709658145  0.2755640618  0.07116784978  -0.4471194347  0.03401523573</cMo_dist>
        <cMo_dist>-0.126982605  -0.09026828346  0.2720565051  0.1118477587  -0.5628451697  0.05778465003</cMo_dist>
        <cMo_dist>-0.1761208319  -0.05913361779  0.3334596217  0.2209953356  -0.3609393418  -0.1755910688</cMo_dist>
        <cMo_dist>-0.02911523232  -0.05641192436  0.3968714019  0.0171518882  0.6106532922  0.2093655367</cMo_dist>
        <cMo_dist>-0.06871420006  -0.09981702989  0.2645287482  0.2850990662  0.009511734891  0.4721361553</cMo_dist>
        <cMo_dist>-0.1126448284  -0.09979328033  0.3330543328  -0.07206900533  0.220845779  0.3932979184</cMo_dist>
      </camera_poses>
    </additional_information>
  </camera>
</root>

Sample calibration output:

jetson@unbutu:~/shd/visp-ws/cali-cam-d455$ ./calibrate-camera default-chessboard.cfg
Settings from config file: default-chessboard.cfg
grid width : 9
grid height: 6
square size: 0.025
pattern    : CHESSBOARD
input seq  : chessboard-%02d.jpg
tempo      : 1

Settings from command line options:
Ouput xml file : camera.xml
Camera name    : Camera
Initialize camera parameters with default values
Camera parameters used for initialization:
Camera parameters for perspective projection without distortion:
  px = 600       py = 600
  u0 = 320       v0 = 240

Process frame: chessboard-04.jpg, grid detection status: 1, image used as input data
Process frame: chessboard-05.jpg, grid detection status: 1, image used as input data
Process frame: chessboard-06.jpg, grid detection status: 1, image used as input data
Process frame: chessboard-07.jpg, grid detection status: 1, image used as input data
Process frame: chessboard-08.jpg, grid detection status: 1, image used as input data
Process frame: chessboard-09.jpg, grid detection status: 1, image used as input data
Process frame: chessboard-10.jpg, grid detection status: 1, image used as input data

Calibration without distortion in progress on 7 images...
Camera parameters for perspective projection without distortion:
  px = 398.8464171       py = 398.686825
  u0 = 322.9975686       v0 = 248.6004154

Image chessboard-04.jpg reprojection error: 0.2249227586
Image chessboard-05.jpg reprojection error: 0.1894938163
Image chessboard-06.jpg reprojection error: 0.1602951323
Image chessboard-07.jpg reprojection error: 0.1386196645
Image chessboard-08.jpg reprojection error: 0.148481482
Image chessboard-09.jpg reprojection error: 0.2398637129
Image chessboard-10.jpg reprojection error: 0.2111191561

Global reprojection error: 0.191072441
Camera parameters without distortion successfully saved in "camera.xml"


Calibration with distortion in progress on 7 images...
Camera parameters for perspective projection with distortion:
  px = 388.5737086       py = 388.4187462
  u0 = 325.6595594       v0 = 248.1283672
  kud = -0.03947381423
  kdu = 0.04046069953

Image chessboard-04.jpg reprojection error: 0.1195661908
Image chessboard-05.jpg reprojection error: 0.1193498118
Image chessboard-06.jpg reprojection error: 0.1057798103
Image chessboard-07.jpg reprojection error: 0.08989710063
Image chessboard-08.jpg reprojection error: 0.09226062422
Image chessboard-09.jpg reprojection error: 0.1115839283
Image chessboard-10.jpg reprojection error: 0.09794506681

Global reprojection error: 0.1058083044

This tool computes the line fitting error (mean distance error) on image points extracted from the raw distorted image.
chessboard-04.jpg line 1 fitting error on distorted points: 0.180956872 ; on undistorted points: 0.02950599891
chessboard-04.jpg line 2 fitting error on distorted points: 0.1228108594 ; on undistorted points: 0.01468993169
chessboard-04.jpg line 3 fitting error on distorted points: 0.07217472377 ; on undistorted points: 0.01824309706
chessboard-04.jpg line 4 fitting error on distorted points: 0.02996954372 ; on undistorted points: 0.02391568145
chessboard-04.jpg line 5 fitting error on distorted points: 0.02206760241 ; on undistorted points: 0.02048522567
chessboard-04.jpg line 6 fitting error on distorted points: 0.1117242205 ; on undistorted points: 0.06897936988

This tool computes the line fitting error (mean distance error) on image points extracted from the undistorted image (vpImageTools::undistort()).
chessboard-04.jpg undistorted image, line 1 fitting error: 0.05715642892
chessboard-04.jpg undistorted image, line 2 fitting error: 0.03421786
chessboard-04.jpg undistorted image, line 3 fitting error: 0.02696133557
chessboard-04.jpg undistorted image, line 4 fitting error: 0.02182783737
chessboard-04.jpg undistorted image, line 5 fitting error: 0.01848111712
chessboard-04.jpg undistorted image, line 6 fitting error: 0.07628415665

This tool computes the line fitting error (mean distance error) on image points extracted from the raw distorted image.
chessboard-05.jpg line 1 fitting error on distorted points: 0.07842274598 ; on undistorted points: 0.04073278419
chessboard-05.jpg line 2 fitting error on distorted points: 0.02894332818 ; on undistorted points: 0.03381049647
chessboard-05.jpg line 3 fitting error on distorted points: 0.03171134573 ; on undistorted points: 0.03028086031
chessboard-05.jpg line 4 fitting error on distorted points: 0.08958890743 ; on undistorted points: 0.08256816471
chessboard-05.jpg line 5 fitting error on distorted points: 0.07121212097 ; on undistorted points: 0.05129840474
chessboard-05.jpg line 6 fitting error on distorted points: 0.09545491062 ; on undistorted points: 0.03808806659

This tool computes the line fitting error (mean distance error) on image points extracted from the undistorted image (vpImageTools::undistort()).
chessboard-05.jpg undistorted image, line 1 fitting error: 0.03642440367
chessboard-05.jpg undistorted image, line 2 fitting error: 0.03209987361
chessboard-05.jpg undistorted image, line 3 fitting error: 0.0281838577
chessboard-05.jpg undistorted image, line 4 fitting error: 0.08309325865
chessboard-05.jpg undistorted image, line 5 fitting error: 0.04991418802
chessboard-05.jpg undistorted image, line 6 fitting error: 0.0292585384

This tool computes the line fitting error (mean distance error) on image points extracted from the raw distorted image.
chessboard-06.jpg line 1 fitting error on distorted points: 0.05561697668 ; on undistorted points: 0.0662133449
chessboard-06.jpg line 2 fitting error on distorted points: 0.05927166434 ; on undistorted points: 0.02312656332
chessboard-06.jpg line 3 fitting error on distorted points: 0.02796679038 ; on undistorted points: 0.03354572908
chessboard-06.jpg line 4 fitting error on distorted points: 0.05702547089 ; on undistorted points: 0.05405971444
chessboard-06.jpg line 5 fitting error on distorted points: 0.0584693254 ; on undistorted points: 0.04685735151
chessboard-06.jpg line 6 fitting error on distorted points: 0.03431135262 ; on undistorted points: 0.03052371163

This tool computes the line fitting error (mean distance error) on image points extracted from the undistorted image (vpImageTools::undistort()).
chessboard-06.jpg undistorted image, line 1 fitting error: 0.06296087909
chessboard-06.jpg undistorted image, line 2 fitting error: 0.04294818457
chessboard-06.jpg undistorted image, line 3 fitting error: 0.05168819921
chessboard-06.jpg undistorted image, line 4 fitting error: 0.0556851262
chessboard-06.jpg undistorted image, line 5 fitting error: 0.05056370979
chessboard-06.jpg undistorted image, line 6 fitting error: 0.04425359694

This tool computes the line fitting error (mean distance error) on image points extracted from the raw distorted image.
chessboard-07.jpg line 1 fitting error on distorted points: 0.06787258358 ; on undistorted points: 0.04541973465
chessboard-07.jpg line 2 fitting error on distorted points: 0.05187927974 ; on undistorted points: 0.03845123335
chessboard-07.jpg line 3 fitting error on distorted points: 0.03739709323 ; on undistorted points: 0.03099194338
chessboard-07.jpg line 4 fitting error on distorted points: 0.03075713401 ; on undistorted points: 0.02766735756
chessboard-07.jpg line 5 fitting error on distorted points: 0.03687559036 ; on undistorted points: 0.03707115456
chessboard-07.jpg line 6 fitting error on distorted points: 0.05090725306 ; on undistorted points: 0.03441411736

This tool computes the line fitting error (mean distance error) on image points extracted from the undistorted image (vpImageTools::undistort()).
chessboard-07.jpg undistorted image, line 1 fitting error: 0.04775173663
chessboard-07.jpg undistorted image, line 2 fitting error: 0.04753474708
chessboard-07.jpg undistorted image, line 3 fitting error: 0.03332362112
chessboard-07.jpg undistorted image, line 4 fitting error: 0.02953622602
chessboard-07.jpg undistorted image, line 5 fitting error: 0.02953730482
chessboard-07.jpg undistorted image, line 6 fitting error: 0.0242064968

This tool computes the line fitting error (mean distance error) on image points extracted from the raw distorted image.
chessboard-08.jpg line 1 fitting error on distorted points: 0.02993187956 ; on undistorted points: 0.03104789079
chessboard-08.jpg line 2 fitting error on distorted points: 0.03084897981 ; on undistorted points: 0.0233375363
chessboard-08.jpg line 3 fitting error on distorted points: 0.04125608468 ; on undistorted points: 0.03998473937
chessboard-08.jpg line 4 fitting error on distorted points: 0.04926663155 ; on undistorted points: 0.02938769729
chessboard-08.jpg line 5 fitting error on distorted points: 0.06683096073 ; on undistorted points: 0.03742220759
chessboard-08.jpg line 6 fitting error on distorted points: 0.08041216846 ; on undistorted points: 0.03831622505

This tool computes the line fitting error (mean distance error) on image points extracted from the undistorted image (vpImageTools::undistort()).
chessboard-08.jpg undistorted image, line 1 fitting error: 0.03055637355
chessboard-08.jpg undistorted image, line 2 fitting error: 0.02533248836
chessboard-08.jpg undistorted image, line 3 fitting error: 0.0340083296
chessboard-08.jpg undistorted image, line 4 fitting error: 0.03352991706
chessboard-08.jpg undistorted image, line 5 fitting error: 0.03463057217
chessboard-08.jpg undistorted image, line 6 fitting error: 0.02816502124

This tool computes the line fitting error (mean distance error) on image points extracted from the raw distorted image.
chessboard-09.jpg line 1 fitting error on distorted points: 0.109933998 ; on undistorted points: 0.03800707602
chessboard-09.jpg line 2 fitting error on distorted points: 0.07021286545 ; on undistorted points: 0.03656703468
chessboard-09.jpg line 3 fitting error on distorted points: 0.03532988443 ; on undistorted points: 0.04492092964
chessboard-09.jpg line 4 fitting error on distorted points: 0.07517576316 ; on undistorted points: 0.04855954434
chessboard-09.jpg line 5 fitting error on distorted points: 0.1202276102 ; on undistorted points: 0.0481914362
chessboard-09.jpg line 6 fitting error on distorted points: 0.1427617537 ; on undistorted points: 0.04338898154

This tool computes the line fitting error (mean distance error) on image points extracted from the undistorted image (vpImageTools::undistort()).
chessboard-09.jpg undistorted image, line 1 fitting error: 0.03528129462
chessboard-09.jpg undistorted image, line 2 fitting error: 0.03577755561
chessboard-09.jpg undistorted image, line 3 fitting error: 0.04094647121
chessboard-09.jpg undistorted image, line 4 fitting error: 0.04725962358
chessboard-09.jpg undistorted image, line 5 fitting error: 0.04740320542
chessboard-09.jpg undistorted image, line 6 fitting error: 0.03478607793

This tool computes the line fitting error (mean distance error) on image points extracted from the raw distorted image.
chessboard-10.jpg line 1 fitting error on distorted points: 0.04819638292 ; on undistorted points: 0.04329188635
chessboard-10.jpg line 2 fitting error on distorted points: 0.02611449413 ; on undistorted points: 0.02228242963
chessboard-10.jpg line 3 fitting error on distorted points: 0.0326439773 ; on undistorted points: 0.0323901009
chessboard-10.jpg line 4 fitting error on distorted points: 0.06324649554 ; on undistorted points: 0.04140267524
chessboard-10.jpg line 5 fitting error on distorted points: 0.121824865 ; on undistorted points: 0.04853700941
chessboard-10.jpg line 6 fitting error on distorted points: 0.1242757093 ; on undistorted points: 0.03170569318

This tool computes the line fitting error (mean distance error) on image points extracted from the undistorted image (vpImageTools::undistort()).
chessboard-10.jpg undistorted image, line 1 fitting error: 0.04519885523
chessboard-10.jpg undistorted image, line 2 fitting error: 0.02678937315
chessboard-10.jpg undistorted image, line 3 fitting error: 0.02729525637
chessboard-10.jpg undistorted image, line 4 fitting error: 0.03924111661
chessboard-10.jpg undistorted image, line 5 fitting error: 0.03910924634
chessboard-10.jpg undistorted image, line 6 fitting error: 0.04191180749

Found camera with name: "Camera"
Camera parameters without distortion successfully saved in "camera.xml"

Estimated pose using vpPoseVector format: [tx ty tz tux tuy tuz] with translation in meter and rotation in rad
Estimated pose on input data extracted from chessboard-04.jpg: -0.1242547143  -0.09017472147  0.2748890938  0.3974270231  0.0157942535  0.05879537619
Estimated pose on input data extracted from chessboard-05.jpg: -0.1142069932  -0.0709658145  0.2755640618  0.07116784978  -0.4471194347  0.03401523573
Estimated pose on input data extracted from chessboard-06.jpg: -0.126982605  -0.09026828346  0.2720565051  0.1118477587  -0.5628451697  0.05778465003
Estimated pose on input data extracted from chessboard-07.jpg: -0.1761208319  -0.05913361779  0.3334596217  0.2209953356  -0.3609393418  -0.1755910688
Estimated pose on input data extracted from chessboard-08.jpg: -0.02911523232  -0.05641192436  0.3968714019  0.0171518882  0.6106532922  0.2093655367
Estimated pose on input data extracted from chessboard-09.jpg: -0.06871420006  -0.09981702989  0.2645287482  0.2850990662  0.009511734891  0.4721361553
Estimated pose on input data extracted from chessboard-10.jpg: -0.1126448284  -0.09979328033  0.3330543328  -0.07206900533  0.220845779  0.3932979184

Camera calibration succeeded. Results are savec in "camera.xml"
jetson@unbutu:~/shd/visp-ws/cali-cam-d455$

12. ViSP + D455: Blob Tracking

Since images can currently only be acquired through the RealSense SDK, the example source code needs to be modified.

  • Copy tutorial-blob-tracker-live-firewire.cpp to tutorial-blob-tracker-live-d455.cpp.
    Change tutorial-blob-tracker-live-d455.cpp to:
//! \example tutorial-blob-tracker-live-d455.cpp
#include <visp3/core/vpConfig.h>
#include <visp3/blob/vpDot2.h>
#include <visp3/gui/vpDisplayGDI.h>
#include <visp3/gui/vpDisplayOpenCV.h>
#include <visp3/gui/vpDisplayX.h>

#include <visp3/core/vpImage.h>
#include <visp3/core/vpXmlParserCamera.h>
#include <visp3/sensor/vpRealSense.h>
#include <visp3/sensor/vpRealSense2.h>

int main()
{
  int opt_fps = 30;
  unsigned int opt_width = 640;
  unsigned int opt_height = 480;
  vpImage<vpRGBa> Irgba;
  vpImage<unsigned char> I; // Create a gray level image container
#ifdef VISP_HAVE_REALSENSE2
  std::cout << "SDK        : Realsense 2" << std::endl;
  vpRealSense2 g;
  rs2::config config;
  config.disable_stream(RS2_STREAM_DEPTH);
  config.disable_stream(RS2_STREAM_INFRARED);
  config.enable_stream(RS2_STREAM_COLOR, opt_width, opt_height, RS2_FORMAT_RGBA8, opt_fps);
  g.open(config);
#else
  std::cout << "SDK        : Realsense 1" << std::endl;
  vpRealSense g;
  g.setStreamSettings(rs::stream::color, vpRealSense::vpRsStreamParams(opt_width, opt_height, rs::format::rgba8, 60));
  g.open();
#endif
  g.acquire(Irgba);
  vpImageConvert::convert(Irgba, I);


#if defined(VISP_HAVE_X11)
  vpDisplayX d(I, 0, 0, "Camera view");
#elif defined(VISP_HAVE_GDI)
  vpDisplayGDI d(I, 0, 0, "Camera view");
#elif defined(HAVE_OPENCV_HIGHGUI)
  vpDisplayOpenCV d(I, 0, 0, "Camera view");
#endif

  //! [Construction]
  vpDot2 blob;
  //! [Construction]
  //! [Setting]
  blob.setGraphics(true);
  blob.setGraphicsThickness(2);
  //! [Setting]

  vpImagePoint germ;
  bool init_done = false;

  while (1) {
    try {
      g.acquire(Irgba);
      vpImageConvert::convert(Irgba, I);

      vpDisplay::display(I);

      if (!init_done) {
        vpDisplay::displayText(I, vpImagePoint(10, 10), "Click in the blob to initialize the tracker", vpColor::red);
        if (vpDisplay::getClick(I, germ, false)) {
          //! [Init]
          blob.initTracking(I, germ);
          //! [Init]
          init_done = true;
        }
      }
      else {
        //! [Track]
        blob.track(I);
        //! [Track]
      }

      vpDisplay::flush(I);
    }
    catch (...) {
      init_done = false;
    }
  }
}

  • In the CMakeLists.txt of the same directory, declare tutorial-blob-tracker-live-d455.cpp for compilation:
# set the list of source files
set(tutorial_cpp
  tutorial-blob-auto-tracker.cpp
  tutorial-blob-tracker-live-firewire.cpp
  tutorial-blob-tracker-live-v4l2.cpp
  tutorial-blob-tracker-live-d455.cpp
  )
  • Build:
cd ~/shd/visp-ws/visp-build
cmake ../visp -DUSE_BLAS/LAPACK=GSL
make -j6 tutorial-blob-tracker-live-d455 
  • Click a blob with the mouse to initialize the tracker.
  • Result:
    The blob auto-initialization example works the same way and is not repeated here.

13. ViSP + D455: Keypoint (Feature Point) Tracking

  • Source: tutorial-klt-tracker-live-d455.cpp
/*! \example tutorial-klt-tracker-live-d455.cpp */
#include <visp3/core/vpConfig.h>
#ifdef VISP_HAVE_MODULE_SENSOR
#include <visp3/sensor/vpV4l2Grabber.h>
#endif
#include <visp3/core/vpImageConvert.h>
#include <visp3/gui/vpDisplayOpenCV.h>
#include <visp3/io/vpVideoReader.h>
#include <visp3/klt/vpKltOpencv.h>

#if defined(VISP_HAVE_REALSENSE) || defined(VISP_HAVE_REALSENSE2)
#include <visp3/core/vpImage.h>
#include <visp3/core/vpXmlParserCamera.h>
#include <visp3/sensor/vpRealSense.h>
#include <visp3/sensor/vpRealSense2.h>
#endif

int main(int argc, const char *argv [])
{
#if (defined(HAVE_OPENCV_HIGHGUI) && (defined(VISP_HAVE_REALSENSE) || defined(VISP_HAVE_REALSENSE2))) && defined(HAVE_OPENCV_IMGPROC) && defined(HAVE_OPENCV_VIDEO)
  try {
    bool opt_init_by_click = false;

    for (int i = 0; i < argc; i++) {
      if (std::string(argv[i]) == "--init-by-click")
        opt_init_by_click = true;
      else if (std::string(argv[i]) == "--help") {
        std::cout << "Usage: " << argv[0] << " [--init-by-click] [--device <camera device>] [--help]" << std::endl;
        return EXIT_SUCCESS;
      }
    }

    //! [Acquire]
    int opt_fps = 30;
    unsigned int opt_width = 640;
    unsigned int opt_height = 480;
    vpImage<vpRGBa> Irgba;
    vpImage<unsigned char> I; // Create a gray level image container
  #ifdef VISP_HAVE_REALSENSE2
    std::cout << "SDK        : Realsense 2" << std::endl;
    vpRealSense2 g;
    rs2::config config;
    config.disable_stream(RS2_STREAM_DEPTH);
    config.disable_stream(RS2_STREAM_INFRARED);
    config.enable_stream(RS2_STREAM_COLOR, opt_width, opt_height, RS2_FORMAT_RGBA8, opt_fps);
    g.open(config);
  #else
    std::cout << "SDK        : Realsense 1" << std::endl;
    vpRealSense g;
    g.setStreamSettings(rs::stream::color, vpRealSense::vpRsStreamParams(opt_width, opt_height, rs::format::rgba8, 60));
    g.open();
  #endif
    g.acquire(Irgba);
    vpImageConvert::convert(Irgba, I);
    //! [Acquire]


    cv::Mat cvI;
    vpImageConvert::convert(I, cvI);

    // Display initialisation
    vpDisplayOpenCV d(I, 0, 0, "Klt tracking");
    vpDisplay::display(I);
    vpDisplay::flush(I);

    vpKltOpencv tracker;
    // Set tracker parameters
    tracker.setMaxFeatures(200);
    tracker.setWindowSize(10);
    tracker.setQuality(0.01);
    tracker.setMinDistance(15);
    tracker.setHarrisFreeParameter(0.04);
    tracker.setBlockSize(9);
    tracker.setUseHarris(1);
    tracker.setPyramidLevels(3);

    // Initialise the tracking
    if (opt_init_by_click) {
      vpMouseButton::vpMouseButtonType button;
      std::vector<cv::Point2f> guess;
      vpImagePoint ip;
      do {
        vpDisplay::displayText(I, 10, 10, "Left click to select a point, right to start tracking", vpColor::red);
        if (vpDisplay::getClick(I, ip, button, false)) {
          if (button == vpMouseButton::button1) {
            guess.push_back(cv::Point2f((float)ip.get_u(), (float)ip.get_v()));
            vpDisplay::displayText(I, 10, 10, "Left click to select a point, right to start tracking", vpColor::red);
            vpDisplay::displayCross(I, ip, 12, vpColor::green);
          }
        }
        vpDisplay::flush(I);
        vpTime::wait(20);
      } while (button != vpMouseButton::button3);
      tracker.initTracking(cvI, guess);
    }
    else {
      tracker.initTracking(cvI);
    }

    while (1) {
      g.acquire(Irgba);
      vpImageConvert::convert(Irgba, I);

      vpDisplay::display(I);

      vpImageConvert::convert(I, cvI);
      tracker.track(cvI);

      tracker.display(I, vpColor::red);
      vpDisplay::displayText(I, 10, 10, "Click to quit", vpColor::red);
      vpDisplay::flush(I);
      if (vpDisplay::getClick(I, false))
        break;
    }
  }
  catch (const vpException &e) {
    std::cout << "Catch an exception: " << e << std::endl;
    return EXIT_FAILURE;
  }
#else
  (void)argc;
  (void)argv;
#endif
  return EXIT_SUCCESS;
}

  • Update CMakeLists.txt, build, and run.
  • Error:
    The message asks you to install libgtk2.0-dev and pkg-config and then rebuild OpenCV.
  • Install libgtk2.0-dev and pkg-config:
sudo apt-get install libgtk2.0-dev pkg-config
  • Rebuild OpenCV. No need to uninstall first; building and installing again updates it in place.
  • Run (no need to rebuild ViSP):
cd ~/shd/visp-ws/visp-build/tutorial/tracking/keypoint
./tutorial-klt-tracker-live-d455
  • Result
    For the example with manually clicked keypoints, see tutorial-klt-tracker.cpp.

14. ViSP + D455: Edge Tracking

Edge tracking includes edge detection; in a video stream the edges are in motion, hence the name moving-edge tracking.

  • tutorial-me-line-tracker-d455.cpp
//! \example tutorial-me-line-tracker-d455.cpp
#include <visp3/core/vpConfig.h>
#ifdef VISP_HAVE_MODULE_SENSOR
//! [camera headers]
#if defined(VISP_HAVE_REALSENSE) || defined(VISP_HAVE_REALSENSE2)
#include <visp3/core/vpImage.h>
#include <visp3/core/vpXmlParserCamera.h>
#include <visp3/sensor/vpRealSense.h>
#include <visp3/sensor/vpRealSense2.h>
#endif
//! [camera headers]
#endif
//! [display headers]
#include <visp3/gui/vpDisplayGDI.h>
#include <visp3/gui/vpDisplayOpenCV.h>
#include <visp3/gui/vpDisplayX.h>
//! [display headers]
//! [me line headers]
#include <visp3/me/vpMeLine.h>
//! [me line headers]

#if defined(HAVE_OPENCV_VIDEOIO)
#include <opencv2/videoio.hpp>
#endif

int main()
{
#if (defined(VISP_HAVE_REALSENSE) || defined(VISP_HAVE_REALSENSE2)||defined(VISP_HAVE_DC1394) || defined(VISP_HAVE_CMU1394) || defined(VISP_HAVE_V4L2) || defined(HAVE_OPENCV_VIDEOIO))
  try {

    //! [1st image Acquire]
    int opt_fps = 30;
    unsigned int opt_width = 640;
    unsigned int opt_height = 480;
    vpImage<vpRGBa> Irgba;
    vpImage<unsigned char> I; // Create a gray level image container
  #ifdef VISP_HAVE_REALSENSE2
    std::cout << "SDK        : Realsense 2" << std::endl;
    vpRealSense2 g;
    rs2::config config;
    config.disable_stream(RS2_STREAM_DEPTH);
    config.disable_stream(RS2_STREAM_INFRARED);
    config.enable_stream(RS2_STREAM_COLOR, opt_width, opt_height, RS2_FORMAT_RGBA8, opt_fps);
    g.open(config);
  #else
    std::cout << "SDK        : Realsense 1" << std::endl;
    vpRealSense g;
    g.setStreamSettings(rs::stream::color, vpRealSense::vpRsStreamParams(opt_width, opt_height, rs::format::rgba8, 60));
    g.open();
  #endif
    g.acquire(Irgba);
    vpImageConvert::convert(Irgba, I);
    //! [1st image Acquire]

    //! [display container]
#if defined(VISP_HAVE_X11)
    vpDisplayX d(I, 0, 0, "Camera view");
#elif defined(VISP_HAVE_GDI)
    vpDisplayGDI d(I, 0, 0, "Camera view");
#elif defined(HAVE_OPENCV_HIGHGUI)
    vpDisplayOpenCV d(I, 0, 0, "Camera view");
#else
    std::cout << "No image viewer is available..." << std::endl;
#endif
    //! [display container]
    //! [display image]
    vpDisplay::display(I);
    vpDisplay::flush(I);
    //! [display image]

    //! [me container]
    vpMe me;
    me.setRange(25);
    me.setLikelihoodThresholdType(vpMe::NORMALIZED_THRESHOLD);
    me.setThreshold(20);
    me.setSampleStep(10);
    //! [me container]

    //! [me line container]
    vpMeLine line;
    line.setMe(&me);
    line.setDisplay(vpMeSite::RANGE_RESULT);
    line.initTracking(I);
    //! [me line container]

    //! [loop]
    while (1) {
      g.acquire(Irgba);
      vpImageConvert::convert(Irgba, I);
      vpDisplay::display(I);
      line.track(I);
      line.display(I, vpColor::red);
      vpDisplay::flush(I);
    }
    //! [loop]
  }
  catch (const vpException &e) {
    std::cout << "Catch an exception: " << e << std::endl;
  }
#endif
}

  • Result

15. ViSP + D455: Model-Based Generic Detection and Tracking with an RGB Camera

  • tutorial-mb-generic-tracker-d455.cpp
//! \example tutorial-mb-generic-tracker-d455.cpp
#include <visp3/core/vpIoTools.h>
#include <visp3/gui/vpDisplayGDI.h>
#include <visp3/gui/vpDisplayOpenCV.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/io/vpImageIo.h>
//! [Include]
#include <visp3/mbt/vpMbGenericTracker.h>
//! [Include]
//! [camera headers]
#if defined(VISP_HAVE_REALSENSE) || defined(VISP_HAVE_REALSENSE2)
#include <visp3/core/vpImage.h>
#include <visp3/core/vpXmlParserCamera.h>
#include <visp3/sensor/vpRealSense.h>
#include <visp3/sensor/vpRealSense2.h>
#endif
//! [camera headers]

int main(int argc, char **argv)
{
#if defined(VISP_HAVE_OPENCV)
  try {
    std::string opt_modelname = "model/hyl/mybox.cao";
    int opt_tracker = 2; // Hybrid tracker

    for (int i = 0; i < argc; i++) {
      if (std::string(argv[i]) == "--model")
        opt_modelname = std::string(argv[i + 1]);
      else if (std::string(argv[i]) == "--tracker")
        opt_tracker = atoi(argv[i + 1]);
      else if (std::string(argv[i]) == "--help" || std::string(argv[i]) == "-h") {
        std::cout << "\nUsage: " << argv[0]
          << " [--video <video name>] [--model <model name>]"
          " [--tracker <0=egde|1=keypoint|2=hybrid>] [--help] [-h]\n"
          << std::endl;
        return EXIT_SUCCESS;
      }
    }
    std::string parentname = vpIoTools::getParent(opt_modelname);
    std::string objectname = vpIoTools::getNameWE(opt_modelname);

    if (!parentname.empty())
      objectname = parentname + "/" + objectname;

    std::cout << "Tracker requested config files: " << objectname << ".[init, cao]" << std::endl;
    std::cout << "Tracker optional config files: " << objectname << ".[ppm]" << std::endl;

    //! [Image]
    vpImage<unsigned char> I;
    vpCameraParameters cam;
    //! [Image]
    //! [cMo]
    vpHomogeneousMatrix cMo;
    //! [cMo]

    //! [1st image Acquire]
    int opt_fps = 30;
    unsigned int opt_width = 640;
    unsigned int opt_height = 480;
    vpImage<vpRGBa> Irgba;
#ifdef VISP_HAVE_REALSENSE2
    std::cout << "SDK        : Realsense 2" << std::endl;
    vpRealSense2 g;
    rs2::config config;
    config.disable_stream(RS2_STREAM_DEPTH);
    config.disable_stream(RS2_STREAM_INFRARED);
    config.enable_stream(RS2_STREAM_COLOR, opt_width, opt_height, RS2_FORMAT_RGBA8, opt_fps);
    g.open(config);
#else
    std::cout << "SDK        : Realsense 1" << std::endl;
    vpRealSense g;
    g.setStreamSettings(rs::stream::color, vpRealSense::vpRsStreamParams(opt_width, opt_height, rs::format::rgba8, 60));
    g.open();
#endif
    g.acquire(Irgba);
    vpImageConvert::convert(Irgba, I);
    //! [1st image Acquire]

    vpDisplay *display = nullptr;
#if defined(VISP_HAVE_X11)
    display = new vpDisplayX;
#elif defined(VISP_HAVE_GDI)
    display = new vpDisplayGDI;
#elif defined(HAVE_OPENCV_HIGHGUI)
    display = new vpDisplayOpenCV;
#endif
    display->init(I, 100, 100, "Model-based tracker");

    //! [Constructor]
    vpMbGenericTracker tracker;
    if (opt_tracker == 0)
      tracker.setTrackerType(vpMbGenericTracker::EDGE_TRACKER);
#if defined(VISP_HAVE_MODULE_KLT) && defined(HAVE_OPENCV_IMGPROC) && defined(HAVE_OPENCV_VIDEO)
    else if (opt_tracker == 1)
      tracker.setTrackerType(vpMbGenericTracker::KLT_TRACKER);
    else
      tracker.setTrackerType(vpMbGenericTracker::EDGE_TRACKER | vpMbGenericTracker::KLT_TRACKER);
#else
    else {
      std::cout << "klt and hybrid model-based tracker are not available since visp_klt module is not available. "
        "In CMakeGUI turn visp_klt module ON, configure and build ViSP again."
        << std::endl;
      return EXIT_FAILURE;
    }
#endif
    //! [Constructor]

    //! [Set parameters]
    if (opt_tracker == 0 || opt_tracker == 2) {
      vpMe me;
      me.setMaskSize(5);
      me.setMaskNumber(180);
      me.setRange(8);
      me.setLikelihoodThresholdType(vpMe::NORMALIZED_THRESHOLD);
      me.setThreshold(20);
      me.setMu1(0.5);
      me.setMu2(0.5);
      me.setSampleStep(4);
      tracker.setMovingEdge(me);
      // Print moving-edges settings
      me = tracker.getMovingEdge();
      me.print();
    }

#if defined(VISP_HAVE_MODULE_KLT) && defined(HAVE_OPENCV_IMGPROC) && defined(HAVE_OPENCV_VIDEO)
    if (opt_tracker == 1 || opt_tracker == 2) {
      vpKltOpencv klt_settings;
      klt_settings.setMaxFeatures(300);
      klt_settings.setWindowSize(5);
      klt_settings.setQuality(0.015);
      klt_settings.setMinDistance(8);
      klt_settings.setHarrisFreeParameter(0.01);
      klt_settings.setBlockSize(3);
      klt_settings.setPyramidLevels(3);
      tracker.setKltOpencv(klt_settings);
      tracker.setKltMaskBorder(5);
    }
#endif

    //! [Set camera parameters]
    cam.initPersProjWithoutDistortion(398.846, 398.686, 322.997, 248.6004);
    //! [Set camera parameters]
    tracker.setCameraParameters(cam);
    //! [Set parameters]

    //! [Load cao]
    tracker.loadModel(objectname + ".cao");
    //! [Load cao]
    //! [Set display]
    tracker.setDisplayFeatures(true);
    //! [Set display]
    //! [Init]
    tracker.initClick(I, objectname + ".init", true);
    //! [Init]



    while (1) {
      g.acquire(Irgba);
      vpImageConvert::convert(Irgba, I);
      vpDisplay::display(I);
      //! [Track]
      tracker.track(I);
      //! [Track]
      //! [Get pose]
      tracker.getPose(cMo);
      //! [Get pose]
      //! [Display]
      tracker.getCameraParameters(cam);
      tracker.display(I, cMo, cam, vpColor::red, 2);
      //! [Display]
      vpDisplay::displayFrame(I, cMo, cam, 0.025, vpColor::none, 3);
      vpDisplay::displayText(I, 10, 10, "A click to exit...", vpColor::red);
      vpDisplay::flush(I);

      if (vpDisplay::getClick(I, false))
        break;
    }
    vpDisplay::getClick(I);
    //! [Cleanup]
    delete display;
    //! [Cleanup]
  }
  catch (const vpException &e) {
    std::cout << "Catch a ViSP exception: " << e << std::endl;
    return EXIT_FAILURE;
  }
#else
  (void)argc;
  (void)argv;
  std::cout << "Install OpenCV and rebuild ViSP to use this example." << std::endl;
#endif
  return EXIT_SUCCESS;
}

  • Result:
    Note: the coordinate frame drawn in the view is the frame used when building the xx.cao file; it is the tracked object's body frame, i.e., one of the frames involved in computing the cMo matrix!
  • Conclusions:
    • Occluding the object makes detection fail!
    • Shaking the camera quickly makes detection inaccurate!

16. ViSP + D455: Model-Based Generic Detection and Tracking with an RGBD Camera

  • First check the usage help:
    ./tutorial-mb-generic-tracker-rgbd-realsense-d455 --help
  • Edit the intrinsics in mybox_depth.xml and mybox.xml
    You can calibrate the intrinsics yourself or use the factory ones. The factory intrinsics can be viewed in the Intel RealSense Viewer application, looked up in the official documentation, or queried through the SDK:
    vpCameraParameters cam_color =
      realsense.getCameraParameters(RS2_STREAM_COLOR, vpCameraParameters::perspectiveProjWithoutDistortion);
    vpCameraParameters cam_depth =
      realsense.getCameraParameters(RS2_STREAM_DEPTH, vpCameraParameters::perspectiveProjWithoutDistortion);
    
    D455 factory intrinsics at 640x480:
      <!--Pixel ratio-->
      <px>390.212</px>
      <py>390.212</py>
      <!--Principal point-->
      <u0>317.638</u0>
      <v0>242.057</v0>
    
  • Edit the 3D model file
  • tutorial-mb-generic-tracker-rgbd-realsense-d455.cpp
    Identical to the example tutorial-mb-generic-tracker-rgbd-realsense.cpp.
  • Run:

./tutorial-mb-generic-tracker-rgbd-realsense-d455 --model_color model/hyl/mybox.cao --use_edges 1 --use_klt 1 --use_depth 1 --config_color model/hyl/mybox.xml --config_depth model/hyl/mybox_depth.xml

  • Result
    Note: the coordinate frame drawn in the view is the frame used when building the xx.cao file; it is the tracked object's body frame, i.e., one of the frames involved in computing the cMo matrix!
  • Discussion
    Even small movements cause large detection errors. Possible reasons:
    1) The KLT and edge-tracker parameters are not well suited. How to tune them? See here.
    2) Inaccurate camera intrinsics?
    3) An inaccurate 3D model?
  • Conclusions
    Not very practical for target-tracking tasks during highly dynamic flight;
    The example computes the pose cMo; it can detect the tracked object's body frame, so this example can indeed compute the cMo matrix!

17. ViSP + D455: AprilTag Tracking, Enabling Camera Pose Estimation

Some notes:

  • You can use RGB information alone, or RGB plus depth information, which is more robust.

  • Tag detection computes the tag's pose relative to the camera, which replaces manual point selection as the way to initialize the tracker;

  • The tag has to be stuck onto the tracked object, and a single face is enough. Which face? That depends on the tag coordinate frame assumed by the detection algorithm and on the frame in which the 3D points of the xxx.cao file are defined; read on;

  • How to create and print your own AprilTag:
    tag36_11_00000-120x120.pdf
    Print an AprilTag marker

  • Tag size: the side length of the AprilTag's outer black square.

  • The frame defined in xxx.cao and the frame defined by the tag detection algorithm must agree, i.e., the detector's tag frame must be the one used to define the 3D points in xxx.cao!
    For example, suppose a cuboid has a length, width, and height of 18.5 cm, 13 cm, and 9 cm, the tag is stuck onto the cuboid, and the detection algorithm defines the tag frame on the pattern. Then the cuboid's 3D points in the xxx.cao file should be defined in that frame, i.e.:

    // Creates a cube.cao file in your current directory
    // cuboidLength, cuboidWidth, cuboidHigh : cuboid dimensions in meters
    void createCaoFile2(double cuboidLength, double cuboidWidth, double cuboidHigh)
    {
      std::ofstream fileStream;
      fileStream.open("cube.cao", std::ofstream::out | std::ofstream::trunc);
      fileStream << "V1\n";
      fileStream << "# 3D Points\n";
      fileStream << "8                  # Number of points\n";
      fileStream << cuboidLength / 2 << " " << cuboidWidth / 2 << " " << 0 << "    # Point 0: (X, Y, Z)\n";
      fileStream << cuboidLength / 2 << " " << -cuboidWidth / 2 << " " << 0 << "    # Point 1\n";
      fileStream << -cuboidLength / 2 << " " << -cuboidWidth / 2 << " " << 0 << "    # Point 2\n";
      fileStream << -cuboidLength / 2 << " " << cuboidWidth / 2 << " " << 0 << "    # Point 3\n";
      fileStream << -cuboidLength / 2 << " " << cuboidWidth / 2 << " " << -cuboidHigh << "    # Point 4\n";
      fileStream << -cuboidLength / 2 << " " << -cuboidWidth / 2 << " " << -cuboidHigh << "    # Point 5\n";
      fileStream << cuboidLength / 2 << " " << -cuboidWidth / 2 << " " << -cuboidHigh << "    # Point 6\n";
      fileStream << cuboidLength / 2 << " " << cuboidWidth / 2 << " " << -cuboidHigh << "    # Point 7\n";
      fileStream << "# 3D Lines\n";
      fileStream << "0                  # Number of lines\n";
      fileStream << "# Faces from 3D lines\n";
      fileStream << "0                  # Number of faces\n";
      fileStream << "# Faces from 3D points\n";
      fileStream << "6                  # Number of faces\n";
      fileStream << "4 0 3 2 1          # Face 0: [number of points] [index of the 3D points]...\n";
      fileStream << "4 1 2 5 6\n";
      fileStream << "4 4 7 6 5\n";
      fileStream << "4 0 7 4 3\n";
      fileStream << "4 5 2 3 4\n";
      fileStream << "4 0 1 6 7          # Face 5\n";
      fileStream << "# 3D cylinders\n";
      fileStream << "0                  # Number of cylinders\n";
      fileStream << "# 3D circles\n";
      fileStream << "0                  # Number of circles\n";
      fileStream.close();
    }
    

    Note: which corner Point 0 sits on is not fixed; the same goes for the other points.

  • tutorial-mb-generic-tracker-apriltag-rs2-d455.cpp

//! \example tutorial-mb-generic-tracker-apriltag-rs2.cpp
#include <fstream>
#include <ios>
#include <iostream>

#include <visp3/detection/vpDetectorAprilTag.h>
#include <visp3/gui/vpDisplayGDI.h>
#include <visp3/gui/vpDisplayOpenCV.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/mbt/vpMbGenericTracker.h>
#include <visp3/sensor/vpRealSense2.h>

typedef enum { state_detection, state_tracking, state_quit } state_t;

// Creates a cube.cao file in your current directory
// cubeEdgeSize : size of cube edges in meters
void createCaoFile(double cubeEdgeSize)
{
  std::ofstream fileStream;
  fileStream.open("cube.cao", std::ofstream::out | std::ofstream::trunc);
  fileStream << "V1\n";
  fileStream << "# 3D Points\n";
  fileStream << "8                  # Number of points\n";
  fileStream << cubeEdgeSize / 2 << " " << cubeEdgeSize / 2 << " " << 0 << "    # Point 0: (X, Y, Z)\n";
  fileStream << cubeEdgeSize / 2 << " " << -cubeEdgeSize / 2 << " " << 0 << "    # Point 1\n";
  fileStream << -cubeEdgeSize / 2 << " " << -cubeEdgeSize / 2 << " " << 0 << "    # Point 2\n";
  fileStream << -cubeEdgeSize / 2 << " " << cubeEdgeSize / 2 << " " << 0 << "    # Point 3\n";
  fileStream << -cubeEdgeSize / 2 << " " << cubeEdgeSize / 2 << " " << -cubeEdgeSize << "    # Point 4\n";
  fileStream << -cubeEdgeSize / 2 << " " << -cubeEdgeSize / 2 << " " << -cubeEdgeSize << "    # Point 5\n";
  fileStream << cubeEdgeSize / 2 << " " << -cubeEdgeSize / 2 << " " << -cubeEdgeSize << "    # Point 6\n";
  fileStream << cubeEdgeSize / 2 << " " << cubeEdgeSize / 2 << " " << -cubeEdgeSize << "    # Point 7\n";
  fileStream << "# 3D Lines\n";
  fileStream << "0                  # Number of lines\n";
  fileStream << "# Faces from 3D lines\n";
  fileStream << "0                  # Number of faces\n";
  fileStream << "# Faces from 3D points\n";
  fileStream << "6                  # Number of faces\n";
  fileStream << "4 0 3 2 1          # Face 0: [number of points] [index of the 3D points]...\n";
  fileStream << "4 1 2 5 6\n";
  fileStream << "4 4 7 6 5\n";
  fileStream << "4 0 7 4 3\n";
  fileStream << "4 5 2 3 4\n";
  fileStream << "4 0 1 6 7          # Face 5\n";
  fileStream << "# 3D cylinders\n";
  fileStream << "0                  # Number of cylinders\n";
  fileStream << "# 3D circles\n";
  fileStream << "0                  # Number of circles\n";
  fileStream.close();
}

// Creates a cube.cao file in your current directory
// cuboidLength, cuboidWidth, cuboidHigh : cuboid dimensions in meters
void createCaoFile2(double cuboidLength, double cuboidWidth, double cuboidHigh)
{
  std::ofstream fileStream;
  fileStream.open("cube.cao", std::ofstream::out | std::ofstream::trunc);
  fileStream << "V1\n";
  fileStream << "# 3D Points\n";
  fileStream << "8                  # Number of points\n";
  fileStream << cuboidLength / 2 << " " << cuboidWidth / 2 << " " << 0 << "    # Point 0: (X, Y, Z)\n";
  fileStream << cuboidLength / 2 << " " << -cuboidWidth / 2 << " " << 0 << "    # Point 1\n";
  fileStream << -cuboidLength / 2 << " " << -cuboidWidth / 2 << " " << 0 << "    # Point 2\n";
  fileStream << -cuboidLength / 2 << " " << cuboidWidth / 2 << " " << 0 << "    # Point 3\n";
  fileStream << -cuboidLength / 2 << " " << cuboidWidth / 2 << " " << -cuboidHigh << "    # Point 4\n";
  fileStream << -cuboidLength / 2 << " " << -cuboidWidth / 2 << " " << -cuboidHigh << "    # Point 5\n";
  fileStream << cuboidLength / 2 << " " << -cuboidWidth / 2 << " " << -cuboidHigh << "    # Point 6\n";
  fileStream << cuboidLength / 2 << " " << cuboidWidth / 2 << " " << -cuboidHigh << "    # Point 7\n";
  fileStream << "# 3D Lines\n";
  fileStream << "0                  # Number of lines\n";
  fileStream << "# Faces from 3D lines\n";
  fileStream << "0                  # Number of faces\n";
  fileStream << "# Faces from 3D points\n";
  fileStream << "6                  # Number of faces\n";
  fileStream << "4 0 3 2 1          # Face 0: [number of points] [index of the 3D points]...\n";
  fileStream << "4 1 2 5 6\n";
  fileStream << "4 4 7 6 5\n";
  fileStream << "4 0 7 4 3\n";
  fileStream << "4 5 2 3 4\n";
  fileStream << "4 0 1 6 7          # Face 5\n";
  fileStream << "# 3D cylinders\n";
  fileStream << "0                  # Number of cylinders\n";
  fileStream << "# 3D circles\n";
  fileStream << "0                  # Number of circles\n";
  fileStream.close();
}

#if defined(VISP_HAVE_APRILTAG)
state_t detectAprilTag(const vpImage<unsigned char> &I, vpDetectorAprilTag &detector, double tagSize,
  const vpCameraParameters &cam, vpHomogeneousMatrix &cMo)
{
  std::vector<vpHomogeneousMatrix> cMo_vec;

  // Detection
  bool ret = detector.detect(I, tagSize, cam, cMo_vec);

  // Display camera pose
  for (size_t i = 0; i < cMo_vec.size(); i++) {
    vpDisplay::displayFrame(I, cMo_vec[i], cam, tagSize / 2, vpColor::none, 3);
  }

  vpDisplay::displayText(I, 40, 20, "State: waiting tag detection", vpColor::red);

  if (ret && detector.getNbObjects() > 0) { // if tag detected, we pick the first one
    cMo = cMo_vec[0];
    return state_tracking;
  }

  return state_detection;
}
#endif // #if defined(VISP_HAVE_APRILTAG)

state_t track(const vpImage<unsigned char> &I, vpMbGenericTracker &tracker, double projection_error_threshold,
  vpHomogeneousMatrix &cMo)
{
  vpCameraParameters cam;
  tracker.getCameraParameters(cam);

  // Track the object
  try {
    tracker.track(I);
  }
  catch (...) {
    return state_detection;
  }

  tracker.getPose(cMo);

  // Detect tracking error
  double projection_error = tracker.computeCurrentProjectionError(I, cMo, cam);
  if (projection_error > projection_error_threshold) {
    return state_detection;
  }

  // Display
  tracker.display(I, cMo, cam, vpColor::red, 2);
  vpDisplay::displayFrame(I, cMo, cam, 0.025, vpColor::none, 3);
  vpDisplay::displayText(I, 40, 20, "State: tracking in progress", vpColor::red);
  {
    std::stringstream ss;
    ss << "Features: edges " << tracker.getNbFeaturesEdge() << ", klt " << tracker.getNbFeaturesKlt();
    vpDisplay::displayText(I, 60, 20, ss.str(), vpColor::red);
  }

  return state_tracking;
}

state_t track(std::map<std::string, const vpImage<unsigned char> *> mapOfImages,
#if defined(VISP_HAVE_PCL) && defined(VISP_HAVE_PCL_COMMON)
  std::map<std::string, pcl::PointCloud<pcl::PointXYZ>::ConstPtr> mapOfPointclouds,
#else
  std::map<std::string, const std::vector<vpColVector> *> mapOfPointclouds,
  std::map<std::string, unsigned int> mapOfWidths, std::map<std::string, unsigned int> mapOfHeights,
#endif
  const vpImage<unsigned char> &I_gray, const vpImage<unsigned char> &I_depth,
  const vpHomogeneousMatrix &depth_M_color, vpMbGenericTracker &tracker, double projection_error_threshold,
  vpHomogeneousMatrix &cMo)
{
  vpCameraParameters cam_color, cam_depth;
  tracker.getCameraParameters(cam_color, cam_depth);

  // Track the object
  try {
#if defined(VISP_HAVE_PCL) && defined(VISP_HAVE_PCL_COMMON)
    tracker.track(mapOfImages, mapOfPointclouds);
#else
    tracker.track(mapOfImages, mapOfPointclouds, mapOfWidths, mapOfHeights);
#endif
  }
  catch (...) {
    return state_detection;
  }

  tracker.getPose(cMo);

  // Detect tracking error
  double projection_error = tracker.computeCurrentProjectionError(I_gray, cMo, cam_color);
  if (projection_error > projection_error_threshold) {
    return state_detection;
  }

  // Display
  tracker.display(I_gray, I_depth, cMo, depth_M_color * cMo, cam_color, cam_depth, vpColor::red, 3);
  vpDisplay::displayFrame(I_gray, cMo, cam_color, 0.05, vpColor::none, 3);
  vpDisplay::displayFrame(I_depth, depth_M_color * cMo, cam_depth, 0.05, vpColor::none, 3);

  return state_tracking;
}

int main(int argc, const char **argv)
{
  //! [Macro defined]
#if defined(VISP_HAVE_APRILTAG) && defined(VISP_HAVE_REALSENSE2) && defined(VISP_HAVE_MODULE_MBT)
  //! [Macro defined]

  vpDetectorAprilTag::vpAprilTagFamily opt_tag_family = vpDetectorAprilTag::TAG_36h11;
  double opt_tag_size = 0.08;
  float opt_quad_decimate = 1.0;
  int opt_nthreads = 1;
  double opt_cube_length = 0.185; // 18.5 cm by default
  double opt_cube_width = 0.15;   // 15 cm by default
  double opt_cube_high = 0.09;    // 9 cm by default
#ifdef VISP_HAVE_OPENCV
  bool opt_use_texture = false;
#endif
  bool opt_use_depth = false;
  double opt_projection_error_threshold = 40.;

#if !(defined(VISP_HAVE_X11) || defined(VISP_HAVE_GDI) || defined(VISP_HAVE_OPENCV))
  bool display_off = true;
#else
  bool display_off = false;
#endif

  for (int i = 1; i < argc; i++) {
    if (std::string(argv[i]) == "--tag_size" && i + 1 < argc) {
      opt_tag_size = atof(argv[i + 1]);
    }
    else if (std::string(argv[i]) == "--quad_decimate" && i + 1 < argc) {
      opt_quad_decimate = (float)atof(argv[i + 1]);
    }
    else if (std::string(argv[i]) == "--nthreads" && i + 1 < argc) {
      opt_nthreads = atoi(argv[i + 1]);
    }
    else if (std::string(argv[i]) == "--display_off") {
      display_off = true;
    }
    else if (std::string(argv[i]) == "--tag_family" && i + 1 < argc) {
      opt_tag_family = (vpDetectorAprilTag::vpAprilTagFamily)atoi(argv[i + 1]);
    }
    else if (std::string(argv[i]) == "--cuboid_size3" && i + 3 <= argc) {
      opt_cube_length = atof(argv[i + 1]);
      opt_cube_width = atof(argv[i + 2]);
      opt_cube_high = atof(argv[i + 3]);
#ifdef VISP_HAVE_OPENCV
    }
    else if (std::string(argv[i]) == "--texture") {
      opt_use_texture = true;
#endif
    }
    else if (std::string(argv[i]) == "--depth") {
      opt_use_depth = true;
    }
    else if (std::string(argv[i]) == "--projection_error" && i + 1 < argc) {
      opt_projection_error_threshold = atof(argv[i + 1]);
    }
    else if (std::string(argv[i]) == "--help" || std::string(argv[i]) == "-h") {
      std::cout << "Usage: " << argv[0]
        << " [--cuboid_size3 <length width high in m>] [--tag_size <size in m>]"
        " [--quad_decimate <decimation>] [--nthreads <nb>]"
        " [--tag_family <0: TAG_36h11, 1: TAG_36h10, 2: TAG_36ARTOOLKIT, "
        " 3: TAG_25h9, 4: TAG_25h7, 5: TAG_16h5>]";
#if (defined(VISP_HAVE_X11) || defined(VISP_HAVE_GDI) || defined(VISP_HAVE_OPENCV))
      std::cout << " [--display_off]";
#endif
      std::cout << " [--texture] [--depth] [--projection_error <30 - 100>] [--help]" << std::endl;
      return EXIT_SUCCESS;
    }
  }

  createCaoFile2(opt_cube_length,opt_cube_width,opt_cube_high);

  try {
    //! [Construct grabber]
    vpRealSense2 realsense;
    rs2::config config;
    int width = 640, height = 480, stream_fps = 30;
    config.enable_stream(RS2_STREAM_COLOR, width, height, RS2_FORMAT_RGBA8, stream_fps);
    config.enable_stream(RS2_STREAM_DEPTH, width, height, RS2_FORMAT_Z16, stream_fps);
    config.disable_stream(RS2_STREAM_INFRARED);
    realsense.open(config);

    vpCameraParameters cam_color, cam_depth;
    vpHomogeneousMatrix depth_M_color;
    cam_color = realsense.getCameraParameters(RS2_STREAM_COLOR, vpCameraParameters::perspectiveProjWithoutDistortion);
    if (opt_use_depth) {
      cam_depth = realsense.getCameraParameters(RS2_STREAM_DEPTH, vpCameraParameters::perspectiveProjWithoutDistortion);
      depth_M_color = realsense.getTransformation(RS2_STREAM_COLOR, RS2_STREAM_DEPTH);
    }

    vpImage<vpRGBa> I_color(height, width);
    vpImage<unsigned char> I_gray(height, width);
    vpImage<unsigned char> I_depth(height, width);
    vpImage<uint16_t> I_depth_raw(height, width);
    std::map<std::string, vpHomogeneousMatrix> mapOfCameraTransformations;
    std::map<std::string, const vpImage<unsigned char> *> mapOfImages;
#if defined(VISP_HAVE_PCL) && defined(VISP_HAVE_PCL_COMMON)
    std::map<std::string, pcl::PointCloud<pcl::PointXYZ>::ConstPtr> mapOfPointclouds;
    pcl::PointCloud<pcl::PointXYZ>::Ptr pointcloud(new pcl::PointCloud<pcl::PointXYZ>());
#else
    std::map<std::string, const std::vector<vpColVector> *> mapOfPointclouds;
    std::map<std::string, unsigned int> mapOfWidths, mapOfHeights;
    std::vector<vpColVector> pointcloud;
#endif

    std::map<std::string, vpHomogeneousMatrix> mapOfCameraPoses;

    std::cout << "Cube size3: " << opt_cube_length << ", " << opt_cube_width<< ", " << opt_cube_high<< std::endl;
    std::cout << "AprilTag size: " << opt_tag_size << std::endl;
    std::cout << "AprilTag family: " << opt_tag_family << std::endl;
    std::cout << "Camera parameters:" << std::endl;
    std::cout << "  Color:\n" << cam_color << std::endl;
    if (opt_use_depth)
      std::cout << "  Depth:\n" << cam_depth << std::endl;
    std::cout << "Detection: " << std::endl;
    std::cout << "  Quad decimate: " << opt_quad_decimate << std::endl;
    std::cout << "  Threads number: " << opt_nthreads << std::endl;
    std::cout << "Tracker: " << std::endl;
    std::cout << "  Use edges  : 1" << std::endl;
    std::cout << "  Use texture: "
#ifdef VISP_HAVE_OPENCV
      << opt_use_texture << std::endl;
#else
      << " na" << std::endl;
#endif
    std::cout << "  Use depth  : " << opt_use_depth << std::endl;
    std::cout << "  Projection error: " << opt_projection_error_threshold << std::endl;

    // Construct display
    vpDisplay *d_gray = nullptr;
    vpDisplay *d_depth = nullptr;

    if (!display_off) {
#ifdef VISP_HAVE_X11
      d_gray = new vpDisplayX(I_gray, 50, 50, "Color stream");
      if (opt_use_depth)
        d_depth = new vpDisplayX(I_depth, 80 + I_gray.getWidth(), 50, "Depth stream");
#elif defined(VISP_HAVE_GDI)
      d_gray = new vpDisplayGDI(I_gray);
      if (opt_use_depth)
        d_depth = new vpDisplayGDI(I_depth);
#elif defined(HAVE_OPENCV_HIGHGUI)
      d_gray = new vpDisplayOpenCV(I_gray);
      if (opt_use_depth)
        d_depth = new vpDisplayOpenCV(I_depth);
#endif
    }

    // Initialize AprilTag detector
    vpDetectorAprilTag detector(opt_tag_family);
    detector.setAprilTagQuadDecimate(opt_quad_decimate);
    detector.setAprilTagNbThreads(opt_nthreads);

    // Prepare MBT
    std::vector<int> trackerTypes;
#if defined(VISP_HAVE_OPENCV) && defined(HAVE_OPENCV_IMGPROC) && defined(HAVE_OPENCV_VIDEO)
    if (opt_use_texture)
      trackerTypes.push_back(vpMbGenericTracker::EDGE_TRACKER | vpMbGenericTracker::KLT_TRACKER);
    else
#endif
      trackerTypes.push_back(vpMbGenericTracker::EDGE_TRACKER);

    if (opt_use_depth)
      trackerTypes.push_back(vpMbGenericTracker::DEPTH_DENSE_TRACKER);

    vpMbGenericTracker tracker(trackerTypes);
    // edges
    vpMe me;
    me.setMaskSize(5);
    me.setMaskNumber(180);
    //! [Range]
    me.setRange(12);
    //! [Range]
    me.setLikelihoodThresholdType(vpMe::NORMALIZED_THRESHOLD);
    me.setThreshold(20);
    me.setMu1(0.5);
    me.setMu2(0.5);
    me.setSampleStep(4);
    tracker.setMovingEdge(me);

#if defined(VISP_HAVE_OPENCV) && defined(HAVE_OPENCV_IMGPROC) && defined(HAVE_OPENCV_VIDEO)
    if (opt_use_texture) {
      vpKltOpencv klt_settings;
      klt_settings.setMaxFeatures(300);
      klt_settings.setWindowSize(5);
      klt_settings.setQuality(0.015);
      klt_settings.setMinDistance(8);
      klt_settings.setHarrisFreeParameter(0.01);
      klt_settings.setBlockSize(3);
      klt_settings.setPyramidLevels(3);
      tracker.setKltOpencv(klt_settings);
      tracker.setKltMaskBorder(5);
    }
#endif

    if (opt_use_depth) {
      // camera calibration params
      tracker.setCameraParameters(cam_color, cam_depth);
      // model definition
      tracker.loadModel("cube.cao", "cube.cao");
      mapOfCameraTransformations["Camera2"] = depth_M_color;
      tracker.setCameraTransformationMatrix(mapOfCameraTransformations);
      tracker.setAngleAppear(vpMath::rad(70), vpMath::rad(70));
      tracker.setAngleDisappear(vpMath::rad(80), vpMath::rad(80));
    }
    else {
      // camera calibration params
      tracker.setCameraParameters(cam_color);
      // model definition
      tracker.loadModel("cube.cao");
      tracker.setAngleAppear(vpMath::rad(70));
      tracker.setAngleDisappear(vpMath::rad(80));
    }
    tracker.setDisplayFeatures(true);

    vpHomogeneousMatrix cMo;
    state_t state = state_detection;

    // wait for a tag detection
    while (state != state_quit) {
      if (opt_use_depth) {
#if defined(VISP_HAVE_PCL) && defined(VISP_HAVE_PCL_COMMON)
        realsense.acquire((unsigned char *)I_color.bitmap, (unsigned char *)I_depth_raw.bitmap, nullptr, pointcloud, nullptr);
#else
        realsense.acquire((unsigned char *)I_color.bitmap, (unsigned char *)I_depth_raw.bitmap, &pointcloud, nullptr,
          nullptr);
#endif
        vpImageConvert::convert(I_color, I_gray);
        vpImageConvert::createDepthHistogram(I_depth_raw, I_depth);
        vpDisplay::display(I_gray);
        vpDisplay::display(I_depth);

        mapOfImages["Camera1"] = &I_gray;
        mapOfImages["Camera2"] = &I_depth;
#if defined(VISP_HAVE_PCL) && defined(VISP_HAVE_PCL_COMMON)
        mapOfPointclouds["Camera2"] = pointcloud;
#else
        mapOfPointclouds["Camera2"] = &pointcloud;
        mapOfWidths["Camera2"] = width;
        mapOfHeights["Camera2"] = height;
#endif
      }
      else {
        realsense.acquire(I_gray);
        vpDisplay::display(I_gray);
      }

      if (state == state_detection) {
        state = detectAprilTag(I_gray, detector, opt_tag_size, cam_color, cMo);

        // Initialize the tracker with the result of the detection
        if (state == state_tracking) {
          if (opt_use_depth) {
            mapOfCameraPoses["Camera1"] = cMo;
            mapOfCameraPoses["Camera2"] = depth_M_color * cMo;
            tracker.initFromPose(mapOfImages, mapOfCameraPoses);
          }
          else {
            tracker.initFromPose(I_gray, cMo);
          }
        }
      }

      if (state == state_tracking) {
        if (opt_use_depth) {
#if defined(VISP_HAVE_PCL) && defined(VISP_HAVE_PCL_COMMON)
          state = track(mapOfImages, mapOfPointclouds, I_gray, I_depth, depth_M_color, tracker,
            opt_projection_error_threshold, cMo);
#else
          state = track(mapOfImages, mapOfPointclouds, mapOfWidths, mapOfHeights, I_gray, I_depth, depth_M_color,
            tracker, opt_projection_error_threshold, cMo);
#endif
        }
        else {
          state = track(I_gray, tracker, opt_projection_error_threshold, cMo);
        }
        {
          std::stringstream ss;
          ss << "Features: edges " << tracker.getNbFeaturesEdge() << ", klt " << tracker.getNbFeaturesKlt()
            << ", depth " << tracker.getNbFeaturesDepthDense();
          vpDisplay::displayText(I_gray, I_gray.getHeight() - 30, 20, ss.str(), vpColor::red);
        }
      }

      vpDisplay::displayText(I_gray, 20, 20, "Click to quit...", vpColor::red);
      if (vpDisplay::getClick(I_gray, false)) { // exit
        state = state_quit;
      }
      vpDisplay::flush(I_gray);

      if (opt_use_depth) {
        vpDisplay::displayText(I_depth, 20, 20, "Click to quit...", vpColor::red);
        if (vpDisplay::getClick(I_depth, false)) { // exit
          state = state_quit;
        }
        vpDisplay::flush(I_depth);
      }
    }

    if (!display_off) {
      delete d_gray;
      if (opt_use_depth)
        delete d_depth;
    }
  }
  catch (const vpException &e) {
    std::cerr << "Catch an exception: " << e.getMessage() << std::endl;
  }

  return EXIT_SUCCESS;
#else
  (void)argc;
  (void)argv;
#ifndef VISP_HAVE_APRILTAG
  std::cout << "ViSP is not build with Apriltag support" << std::endl;
#endif
#ifndef VISP_HAVE_REALSENSE2
  std::cout << "ViSP is not build with librealsense2 support" << std::endl;
#endif
  std::cout << "Install missing 3rd parties, configure and build ViSP to run this tutorial" << std::endl;
#endif
  return EXIT_SUCCESS;
}

  • Run
    First check the usage:

    ./tutorial-mb-generic-tracker-apriltag-rs2-d455 --help

    Edge features only:

    ./tutorial-mb-generic-tracker-apriltag-rs2-d455 --tag_size 0.12 --cuboid_size3 0.185 0.13 0.09

    Edge, keypoint (KLT), and depth features:

    ./tutorial-mb-generic-tracker-apriltag-rs2-d455 --tag_size 0.12 --cuboid_size3 0.185 0.13 0.09 --texture --depth

    For the default values of the other parameters, read the code or see here.

  • Result
    [image]
    [image]
    [image]

  • Conclusions

    • The tracker initializes automatically;
    • Adding depth information improves robustness;
    • Only works for regular shapes such as cylinders, boxes, and spheres, i.e. scenes for which an xxx.cao model can be written (a cylinder sketch follows below);
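
Since the conclusion above limits the approach to shapes expressible in a .cao file, here is a hedged sketch of a cylinder generator in the same spirit as createCaoFile2() above. In the .cao format a cylinder is described by the indices of its two axis end points followed by the radius; treat the exact layout as an assumption to check against the ViSP CAD-model tutorial.

#include <fstream>

// Writes a cylinder.cao file: two points on the revolution axis, then
// one cylinder line "p1_index p2_index radius". Dimensions in meters.
void createCaoFileCylinder(double height, double radius)
{
  std::ofstream f("cylinder.cao", std::ofstream::out | std::ofstream::trunc);
  f << "V1\n";
  f << "# 3D Points\n";
  f << "2                  # Number of points (axis end points)\n";
  f << "0 0 0              # Point 0\n";
  f << "0 0 " << -height << "    # Point 1\n";
  f << "# 3D Lines\n";
  f << "0                  # Number of lines\n";
  f << "# Faces from 3D lines\n";
  f << "0                  # Number of faces\n";
  f << "# Faces from 3D points\n";
  f << "0                  # Number of faces\n";
  f << "# 3D cylinders\n";
  f << "1                  # Number of cylinders\n";
  f << "0 1 " << radius << "    # axis point indices and radius\n";
  f << "# 3D circles\n";
  f << "0                  # Number of circles\n";
  f.close();
}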

18. visp-d455: object tracking and pose estimation with ViSP + the MegaPose deep-learning method

Reference: ViSP basics - MegaPose pose estimation

Steps

  • Install the third-party json library and verify it works (see also the sanity-check sketch after this list). Details: Installing the client. Commands:
    sudo apt-get install nlohmann-json3-dev
    cd visp-build
    cmake ../visp
    grep "json" ViSP-third-party.txt
    
    If the output shows "To be built: core dnn_tracker", the installation succeeded.
  • Install the MegaPose server
    • MegaPose's Python environment may conflict with the system's own Python environment, so install conda first. Miniconda is enough; see here for details. Commands:
      mkdir -p ~/shd/miniconda3
      # Browse https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh to pick a version; the line below downloads the latest.
      wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh -O ~/shd/miniconda3/miniconda.sh
      bash ~/shd/miniconda3/miniconda.sh -b -u -p ~/shd/miniconda3
      rm -rf ~/shd/miniconda3/miniconda.sh
      # Initialize bash. Note: this makes every ". ~/.bashrc" enter the base env; to avoid that, append "conda deactivate" to the end of ~/.bashrc.
      ~/shd/miniconda3/bin/conda init bash
      . ~/.bashrc
      
      
  • Confirm that the conda directory is on the environment PATH. [image]
  • Verify conda:
    conda --version
    conda activate base
    
  • To uninstall conda:
    1) Remove the directory: rm -rf ~/shd/miniconda3
    2) Clean the conda lines out of ~/.bashrc;
    3) Remove the conda directories from PATH;
  • Pre-install configuration for megapose
    cd ~/shd/visp-ws/visp/script/megapose_server
    ls
    vi megapose_variables.json
    
    megapose_variables.json is the MegaPose server's configuration file and is worth attention; its usage is described below. The defaults also work, in which case megapose_variables.json needs no editing.
    megapose_variables.json
  • Install megapose
    cd $VISP_WS/visp/script/megapose_server
    python install.py
    
    conda failed to create the megapose environment and reported errors.
    problem1
    [image]
    Cause: the channels currently in use do not provide these packages.
    Fix: take the package torchvision==0.12.0 as an example; the other packages are handled the same way.
    1) Search https://anaconda.org/ for torchvision and locate version 0.12.0 for the aarch64 architecture. Open it:
    [image]
    This shows torchvision==0.12.0 is available from the KumaTea channel.
    2) Add KumaTea to ~/.condarc:
    conda config --add channels KumaTea
    vi ~/.condarc
    vi ~/shd/visp-ws/visp/script/megapose_server/megapose_environment.yml
    problem2
    The two packages geckodriver and firefox, however, have no linux-aarch64 builds on https://anaconda.org/.
    Fix:
    ~/shd/visp-ws/visp/script/megapose_server/megapose_environment.yml shows that these two are merely dependencies of megapose; remove them from megapose_environment.yml and, if needed, install them later with pip.
    Of the two, firefox is the Firefox browser and geckodriver is its WebDriver; in practice neither needs to be installed at all!
    Installation succeeded:
    [image]
  • Verify
    conda activate megapose
    python -m megapose_server.run -h
    
    Error 1:
    [image]
    Fix:
    cp ~/shd/visp-ws/visp/script/megapose_server/megapose_variables_final.json /home/jetson/shd/miniconda3/envs/megapose/lib/python3.9/site-packages/megapose_server/
    
    Error 2:
    [image]
    Fix (see https://blog.csdn.net/weixin_45942927/article/details/134028773):
    strings /lib/aarch64-linux-gnu/libstdc++.so.6 | grep GLIBCXX
    
    This shows that /lib/aarch64-linux-gnu/libstdc++.so.6 lacks GLIBCXX_3.4.29, so find a libstdc++.so.6 on the system that does have GLIBCXX_3.4.29:
    sudo find / -name "libstdc++.so.6*"
    
    A usable file turns up: /home/jetson/shd/miniconda3/envs/megapose/lib/libstdc++.so.6.0.32
    Create a new symlink:
    cd /lib/aarch64-linux-gnu
    sudo rm -f libstdc++.so.6
    sudo cp /home/jetson/shd/miniconda3/envs/megapose/lib/libstdc++.so.6.0.32 ./
    sudo ln -s libstdc++.so.6.0.32 libstdc++.so.6
    
    The MegaPose server now runs:
    [image]
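
Independent of the grep check in the json install step above, a hedged way to confirm that nlohmann-json itself is usable is a two-line parse test; the file name json_check.cpp and the build line are illustrative.

// json_check.cpp -- minimal sanity check that nlohmann-json is installed.
// Build (paths may vary): g++ -std=c++11 json_check.cpp -o json_check
#include <nlohmann/json.hpp>
#include <iostream>

int main()
{
  nlohmann::json j = nlohmann::json::parse(R"({"tracker":"megapose","ok":true})");
  std::cout << j["tracker"] << " " << j["ok"] << std::endl; // prints: "megapose" true
  return 0;
}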

19. visp-d455: AprilTag detection for pose estimation

Reference: Tutorial: AprilTag marker detection
Class: vpDetectorAprilTag
Prerequisites:

  • the camera intrinsics
  • the real-world size of the AprilTag

Output:
For every detected tag you get (see the read-out sketch after this list):

  • tag id
  • polygon: the quadrilateral formed by the 4 corner points
  • bounding box
  • center of gravity of the tag
  • 3D pose of the tag
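
As a hedged illustration of reading those outputs back, the sketch below assumes detector.detect(I, tagSize, cam, cMo_vec) has already been called, as in the listing that follows; the helper name printDetections is made up for this note.

#include <visp3/core/vpRect.h>
#include <visp3/detection/vpDetectorAprilTag.h>
#include <iostream>

// Read out the per-tag results listed above.
void printDetections(vpDetectorAprilTag &detector,
                     const std::vector<vpHomogeneousMatrix> &cMo_vec)
{
  std::vector<int> ids = detector.getTagsId();
  for (size_t i = 0; i < detector.getNbObjects(); i++) {
    std::cout << "tag id: " << ids[i] << "\n";
    std::cout << "center of gravity: " << detector.getCog(i) << "\n";
    vpRect bbox = detector.getBBox(i);                          // bounding box
    std::cout << "bbox: " << bbox.getWidth() << " x " << bbox.getHeight() << "\n";
    std::vector<vpImagePoint> corners = detector.getPolygon(i); // 4 corners
    std::cout << "corners: " << corners.size() << "\n";
    std::cout << "cMo:\n" << cMo_vec[i] << std::endl;           // 3D pose
  }
}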

Steps:

//! \example tutorial-apriltag-detector-live-rgbd-realsense.cpp
#include <visp3/core/vpConfig.h>
#ifdef VISP_HAVE_MODULE_SENSOR
#include <visp3/sensor/vpRealSense2.h>
#endif
//! [Include]
#include <visp3/detection/vpDetectorAprilTag.h>
//! [Include]
#include <visp3/core/vpImageConvert.h>
#include <visp3/gui/vpDisplayGDI.h>
#include <visp3/gui/vpDisplayOpenCV.h>
#include <visp3/gui/vpDisplayX.h>
#include <visp3/vision/vpPose.h>

int main(int argc, const char **argv)
{
//! [Macro defined]
#if defined(VISP_HAVE_APRILTAG) && defined(VISP_HAVE_REALSENSE2) // outermost guard: requires AprilTag support and the RealSense SDK
  //! [Macro defined]
// application parameters: definition and defaults
  vpDetectorAprilTag::vpAprilTagFamily tagFamily = vpDetectorAprilTag::TAG_36h11;
  vpDetectorAprilTag::vpPoseEstimationMethod poseEstimationMethod = vpDetectorAprilTag::HOMOGRAPHY_VIRTUAL_VS;
  double tagSize = 0.053;
  float quad_decimate = 1.0;
  int nThreads = 1;
  bool display_tag = false;
  int color_id = -1;
  unsigned int thickness = 2;
  bool align_frame = false;
  bool opt_verbose = false;

#if !(defined(VISP_HAVE_X11) || defined(VISP_HAVE_GDI) || defined(VISP_HAVE_OPENCV))
  bool display_off = true;
  std::cout << "Warning: There is no 3rd party (X11, GDI or openCV) to dislay images..." << std::endl;
#else
  bool display_off = false;
#endif
// end of application parameters

// command-line argument parsing
  for (int i = 1; i < argc; i++) {
    if (std::string(argv[i]) == "--pose_method" && i + 1 < argc) {
      poseEstimationMethod = (vpDetectorAprilTag::vpPoseEstimationMethod)atoi(argv[i + 1]);
    }
    else if (std::string(argv[i]) == "--tag_size" && i + 1 < argc) {
      tagSize = atof(argv[i + 1]);
    }
    else if (std::string(argv[i]) == "--quad_decimate" && i + 1 < argc) {
      quad_decimate = (float)atof(argv[i + 1]);
    }
    else if (std::string(argv[i]) == "--nthreads" && i + 1 < argc) {
      nThreads = atoi(argv[i + 1]);
    }
    else if (std::string(argv[i]) == "--display_tag") {
      display_tag = true;
    }
    else if (std::string(argv[i]) == "--display_off") {
      display_off = true;
    }
    else if (std::string(argv[i]) == "--color" && i + 1 < argc) {
      color_id = atoi(argv[i + 1]);
    }
    else if (std::string(argv[i]) == "--thickness" && i + 1 < argc) {
      thickness = (unsigned int)atoi(argv[i + 1]);
    }
    else if (std::string(argv[i]) == "--tag_family" && i + 1 < argc) {
      tagFamily = (vpDetectorAprilTag::vpAprilTagFamily)atoi(argv[i + 1]);
    }
    else if (std::string(argv[i]) == "--z_aligned") {
      align_frame = true;
    }
    else if (std::string(argv[i]) == "--verbose" || std::string(argv[i]) == "-v") {
      opt_verbose = true;
    }
    else if (std::string(argv[i]) == "--help" || std::string(argv[i]) == "-h") {
      std::cout << "Usage: " << argv[0]
        << " [--tag_size <tag_size in m> (default: 0.053)]"
        " [--quad_decimate <quad_decimate> (default: 1)]"
        " [--nthreads <nb> (default: 1)]"
        " [--pose_method <method> (0: HOMOGRAPHY, 1: HOMOGRAPHY_VIRTUAL_VS, "
        " 2: DEMENTHON_VIRTUAL_VS, 3: LAGRANGE_VIRTUAL_VS, "
        " 4: BEST_RESIDUAL_VIRTUAL_VS, 5: HOMOGRAPHY_ORTHOGONAL_ITERATION) (default: 0)]"
        " [--tag_family <family> (0: TAG_36h11, 1: TAG_36h10 (DEPRECATED), 2: TAG_36ARTOOLKIT (DEPRECATED),"
        " 3: TAG_25h9, 4: TAG_25h7 (DEPRECATED), 5: TAG_16h5, 6: TAG_CIRCLE21h7, 7: TAG_CIRCLE49h12,"
        " 8: TAG_CUSTOM48h12, 9: TAG_STANDARD41h12, 10: TAG_STANDARD52h13) (default: 0)]"
        " [--display_tag] [--z_aligned]";
#if (defined(VISP_HAVE_X11) || defined(VISP_HAVE_GDI) || defined(VISP_HAVE_OPENCV))
      std::cout << " [--display_off] [--color <color id>] [--thickness <line thickness>]";
#endif
      std::cout << " [--verbose,-v] [--help,-h]" << std::endl;
      return EXIT_SUCCESS;
    }
  }
// end of argument parsing

// main logic
  try {
    //! [Construct grabber]
    std::cout << "Use Realsense 2 grabber" << std::endl;
    vpRealSense2 g;
    rs2::config config;
    unsigned int width = 640, height = 480; // stream resolution in pixels
    config.enable_stream(RS2_STREAM_COLOR, static_cast<int>(width), static_cast<int>(height), RS2_FORMAT_RGBA8, 30);
    config.enable_stream(RS2_STREAM_DEPTH, static_cast<int>(width), static_cast<int>(height), RS2_FORMAT_Z16, 30);
    config.enable_stream(RS2_STREAM_INFRARED, static_cast<int>(width), static_cast<int>(height), RS2_FORMAT_Y8, 30);

    vpImage<unsigned char> I; // grayscale image
    vpImage<vpRGBa> I_color(height, width); // RGBA image; 'a' is the alpha channel: 0 = fully transparent, 255 = opaque
    vpImage<uint16_t> I_depth_raw(height, width);
    vpImage<vpRGBa> I_depth; // RGB rendering of the depth map; an RGB image is used so near points can be drawn red and far points blue, which is easier to read

    g.open(config);
    const float depth_scale = g.getDepthScale();
    std::cout << "I_color: " << I_color.getWidth() << " " << I_color.getHeight() << std::endl;
    std::cout << "I_depth_raw: " << I_depth_raw.getWidth() << " " << I_depth_raw.getHeight() << std::endl;

    // grab images via the SDK into I_color and I_depth_raw
    rs2::align align_to_color = RS2_STREAM_COLOR;
    g.acquire(reinterpret_cast<unsigned char *>(I_color.bitmap), reinterpret_cast<unsigned char *>(I_depth_raw.bitmap),
              nullptr, nullptr, &align_to_color);

    std::cout << "Read camera parameters from Realsense device" << std::endl;
    vpCameraParameters cam;
    cam = g.getCameraParameters(RS2_STREAM_COLOR, vpCameraParameters::perspectiveProjWithoutDistortion); // query camera parameters, including the intrinsics
    //! [Construct grabber]

    std::cout << cam << std::endl;
    std::cout << "poseEstimationMethod: " << poseEstimationMethod << std::endl;
    std::cout << "tagFamily: " << tagFamily << std::endl;
    std::cout << "nThreads : " << nThreads << std::endl;
    std::cout << "Z aligned: " << align_frame << std::endl;

    vpImage<vpRGBa> I_color2 = I_color;
    vpImage<float> depthMap; // note the difference from I_depth_raw: float here vs. uint16_t there
    vpImageConvert::createDepthHistogram(I_depth_raw, I_depth); // build the RGB depth image I_depth: near points red, far points blue, more intuitive to read

    vpDisplay *d1 = nullptr;
    vpDisplay *d2 = nullptr;
    vpDisplay *d3 = nullptr;
    if (!display_off) {
#ifdef VISP_HAVE_X11
      d1 = new vpDisplayX(I_color, 100, 30, "Pose from Homography");
      d2 = new vpDisplayX(I_color2, I_color.getWidth() + 120, 30, "Pose from RGBD fusion");
      d3 = new vpDisplayX(I_depth, 100, I_color.getHeight() + 70, "Depth");
#elif defined(VISP_HAVE_GDI)
      d1 = new vpDisplayGDI(I_color, 100, 30, "Pose from Homography");
      d2 = new vpDisplayGDI(I_color2, I_color.getWidth() + 120, 30, "Pose from RGBD fusion");
      d3 = new vpDisplayGDI(I_depth, 100, I_color.getHeight() + 70, "Depth");
#elif defined(HAVE_OPENCV_HIGHGUI)
      d1 = new vpDisplayOpenCV(I_color, 100, 30, "Pose from Homography");
      d2 = new vpDisplayOpenCV(I_color2, I_color.getWidth() + 120, 30, "Pose from RGBD fusion");
      d3 = new vpDisplayOpenCV(I_depth, 100, I_color.getHeight() + 70, "Depth");
#endif
    }

    //! [Create AprilTag detector]
    vpDetectorAprilTag detector(tagFamily);
    //! [Create AprilTag detector]

    //! [AprilTag detector settings]  // detector configuration
    detector.setAprilTagQuadDecimate(quad_decimate);
    detector.setAprilTagPoseEstimationMethod(poseEstimationMethod);
    detector.setAprilTagNbThreads(nThreads);
    detector.setDisplayTag(display_tag, color_id < 0 ? vpColor::none : vpColor::getColor(color_id), thickness); // tag overlay display settings
    detector.setZAlignedWithCameraAxis(align_frame);
    //! [AprilTag detector settings]
    std::vector<double> time_vec;

    // main loop: detect continuously
    for (;;) {
      double t = vpTime::measureTimeMs();

      //! [Acquisition]
      g.acquire(reinterpret_cast<unsigned char *>(I_color.bitmap),
                reinterpret_cast<unsigned char *>(I_depth_raw.bitmap), nullptr, nullptr, &align_to_color);
      //! [Acquisition]

      I_color2 = I_color;
      vpImageConvert::convert(I_color, I);
      vpImageConvert::createDepthHistogram(I_depth_raw, I_depth);

      depthMap.resize(I_depth_raw.getHeight(), I_depth_raw.getWidth());

// parallelize the depth-map loop below with OpenMP when available
#ifdef VISP_HAVE_OPENMP
#pragma omp parallel for
#endif

      // build the depth map: per-pixel depth values
      for (int i = 0; i < static_cast<int>(I_depth_raw.getHeight()); i++) {
        for (int j = 0; j < static_cast<int>(I_depth_raw.getWidth()); j++) {
          if (I_depth_raw[i][j]) {
            float Z = I_depth_raw[i][j] * depth_scale; // scale factor from raw sensor units to meters
            depthMap[i][j] = Z; // depthMap is the real depth map in meters; I_depth_raw holds raw (scaled) sensor values, not meters
          }
          else {
            depthMap[i][j] = 0;
          }
        }
      }
      // end of depth-map computation

      // display
      vpDisplay::display(I_color);
      vpDisplay::display(I_color2);
      vpDisplay::display(I_depth);

      // detection
      std::vector<vpHomogeneousMatrix> cMo_vec; // tag poses are stored in cMo_vec
      detector.detect(I, tagSize, cam, cMo_vec); // note: detection takes the grayscale image I; the RGB images are only used for display

      // display the grayscale-based result: each tag's body frame as seen from the camera
      // Display camera pose for each tag
      for (size_t i = 0; i < cMo_vec.size(); i++) {
        vpDisplay::displayFrame(I_color, cMo_vec[i], cam, tagSize / 2, vpColor::none, 3); // draw the detected tag's body frame (axes) in the camera view
      }
      // end of grayscale-based display

      // display the depth-based result: each tag's body frame as seen from the camera
      //! [Pose from depth map]
      std::vector<std::vector<vpImagePoint> > tags_corners = detector.getPolygon(); // 4 corner points of each detection
      std::vector<int> tags_id = detector.getTagsId(); // tag id of each detection
      std::map<int, double> tags_size;
      tags_size[-1] = tagSize; // Default tag size
      std::vector<std::vector<vpPoint> > tags_points3d = detector.getTagsPoints3D(tags_id, tags_size);
      for (size_t i = 0; i < tags_corners.size(); i++) {
        vpHomogeneousMatrix cMo;
        double confidence_index;
        if (vpPose::computePlanarObjectPoseFromRGBD(depthMap, tags_corners[i], cam, tags_points3d[i], cMo,
                                                    &confidence_index)) {
          if (confidence_index > 0.5) { // axis color encodes the confidence range
            vpDisplay::displayFrame(I_color2, cMo, cam, tagSize / 2, vpColor::none, 3);
          }
          else if (confidence_index > 0.25) {
            vpDisplay::displayFrame(I_color2, cMo, cam, tagSize / 2, vpColor::orange, 3);
          }
          else {
            vpDisplay::displayFrame(I_color2, cMo, cam, tagSize / 2, vpColor::red, 3);
          }
          std::stringstream ss;
          ss << "Tag id " << tags_id[i] << " confidence: " << confidence_index;
          vpDisplay::displayText(I_color2, 35 + i * 15, 20, ss.str(), vpColor::red);

          if (opt_verbose) {
            std::cout << "cMo[" << i << "]: \n" << cMo_vec[i] << std::endl;
            std::cout << "cMo[" << i << "] using depth: \n" << cMo << std::endl;
          }
        }
      }
      //! [Pose from depth map]
      // end of depth-based display

      vpDisplay::displayText(I_color, 20, 20, "Pose from homography + VVS", vpColor::red);
      vpDisplay::displayText(I_color2, 20, 20, "Pose from RGBD fusion", vpColor::red);
      vpDisplay::displayText(I_color, 35, 20, "Click to quit.", vpColor::red);
      t = vpTime::measureTimeMs() - t;
      time_vec.push_back(t);

      std::stringstream ss;
      ss << "Detection time: " << t << " ms for " << detector.getNbObjects() << " tags";
      vpDisplay::displayText(I_color, 50, 20, ss.str(), vpColor::red);

      if (vpDisplay::getClick(I_color, false)) // check for a mouse click
        break;

      vpDisplay::flush(I_color);
      vpDisplay::flush(I_color2);
      vpDisplay::flush(I_depth);
    }
    // end of main loop

    std::cout << "Benchmark loop processing time" << std::endl;
    std::cout << "Mean / Median / Std: " << vpMath::getMean(time_vec) << " ms"
      << " ; " << vpMath::getMedian(time_vec) << " ms"
      << " ; " << vpMath::getStdev(time_vec) << " ms" << std::endl;

    if (!display_off) {
      delete d1;
      delete d2;
      delete d3;
    }

  }
  // end of main logic
  catch (const vpException &e) {
    std::cerr << "Catch an exception: " << e.getMessage() << std::endl;
  }

  return EXIT_SUCCESS;
#else
  (void)argc;
  (void)argv;
#ifndef VISP_HAVE_APRILTAG
  std::cout << "Enable Apriltag support, configure and build ViSP to run this tutorial" << std::endl;
#else
  std::cout << "Install librealsense 3rd party, configure and build ViSP again to use this example" << std::endl;
#endif
#endif
  return EXIT_SUCCESS;
}

  • Argument reference
    • --pose_method
      Pose-estimation method; default 1
      0: HOMOGRAPHY - pose computed from the homography matrix
      1: HOMOGRAPHY_VIRTUAL_VS
      2: DEMENTHON_VIRTUAL_VS
      3: LAGRANGE_VIRTUAL_VS
      4: BEST_RESIDUAL_VIRTUAL_VS
      5: HOMOGRAPHY_ORTHOGONAL_ITERATION
    • --tag_size
      Tag size in m; default 0.053
      The side length of the AprilTag's outer black square
    • --quad_decimate
      Default 1
      Decimation factor for the quad-detection stage;
      detection can run on a lower-resolution image, which improves speed at the cost of accuracy.
    • --tag_family
      Tag family; default 0, TAG_36h11
      0: TAG_36h11
      1: TAG_36h10 (DEPRECATED)
      2: TAG_36ARTOOLKIT (DEPRECATED)
      3: TAG_25h9
      4: TAG_25h7 (DEPRECATED)
      5: TAG_16h5
      6: TAG_CIRCLE21h7
      7: TAG_CIRCLE49h12
      8: TAG_CUSTOM48h12
      9: TAG_STANDARD41h12
      10: TAG_STANDARD52h13
    • --thickness
      Line thickness used for drawing; default 2
    • --nthreads
      Number of threads to create
    • --display_tag
      Default false
    • --display_off
      Disable all image display; default false
    • --color
      Default -1
    • --z_aligned
      Whether to compute the pose with the tag's z-axis aligned with the camera frame's z-axis
      Flag to get a pose where z-axis is aligned with the camera frame.
      detector.setZAlignedWithCameraAxis(align_frame);
      Default false
    • --verbose
      Default false
      Print the computed cMo to the console.
    • --help

--display_tag, --color, and --thickness are all arguments to the setDisplayTag() function.

  • Run
./tutorial-apriltag-detector-live-rgbd-realsense-d455 --help
./tutorial-apriltag-detector-live-rgbd-realsense-d455 --tag_size 0.12 --verbose
  • Walkthrough

    • Display
      Roughly four steps:
      1) declare the vpDisplay pointers
      vpDisplay *d1 = nullptr;
      vpDisplay *d2 = nullptr;
      vpDisplay *d3 = nullptr;
      2) instantiate them
      d1 = new vpDisplayOpenCV(I_color, 100, 30, "Pose from Homography");
      d2 = new vpDisplayOpenCV(I_color2, I_color.getWidth() + 120, 30, "Pose from RGBD fusion");
      d3 = new vpDisplayOpenCV(I_depth, 100, I_color.getHeight() + 70, "Depth");
      3) overlay drawing on the images: text, frames (lines)
      vpDisplay::displayText(I_color, 20, 20, "Pose from homography + VVS", vpColor::red);
      vpDisplay::displayText(I_color2, 20, 20, "Pose from RGBD fusion", vpColor::red);
      vpDisplay::displayFrame(I_color, cMo_vec[i], cam, tagSize / 2, vpColor::none, 3);
      4) flush
      vpDisplay::flush(I_color);
      vpDisplay::flush(I_color2);
      vpDisplay::flush(I_depth);

    • Depth display
      The depth map can be converted to an RGB image in which near points are drawn red and far points blue, which is easier to read. As in:
      vpImage<uint16_t> I_depth_raw(height, width); // raw depth map from the camera
      vpImage<vpRGBa> I_depth; // RGB depth image: red = near, blue = far
      vpImageConvert::createDepthHistogram(I_depth_raw, I_depth);
      [image]

    • Raw depth image vs. the usable depth map
      Raw depth image: vpImage<uint16_t> I_depth_raw(height, width);
      Usable depth map: vpImage<float> depthMap;
      I_depth_raw is uint16_t so that no precision is lost; its values are scaled and not in meters.
      depthMap is in meters; the two are related by:
      float Z = I_depth_raw[i][j] * depth_scale;
      depthMap[i][j] = Z;

    • Detection steps (condensed into a standalone sketch after this list)
      Roughly three main steps:
      1) construct an AprilTag detector
      vpDetectorAprilTag detector(tagFamily);
      2) configure the detector
      detector.setAprilTagQuadDecimate(quad_decimate);
      detector.setAprilTagPoseEstimationMethod(poseEstimationMethod);
      detector.setAprilTagNbThreads(nThreads);
      detector.setDisplayTag(display_tag, color_id < 0 ? vpColor::none : vpColor::getColor(color_id), thickness);
      detector.setZAlignedWithCameraAxis(align_frame);
      3) feed in each frame; the results are stored in cMo_vec
      std::vector<vpHomogeneousMatrix> cMo_vec;
      detector.detect(I, tagSize, cam, cMo_vec); // the AprilTag detector's input is the grayscale image I; RGB is only used for display

  • Result
    [image]
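
The three detection steps summarized in the walkthrough above condense into the following standalone sketch; the image file tag.png and the intrinsic values are placeholders to replace with your own data.

// Minimal standalone AprilTag detection on a single image.
#include <visp3/core/vpCameraParameters.h>
#include <visp3/detection/vpDetectorAprilTag.h>
#include <visp3/io/vpImageIo.h>
#include <iostream>

int main()
{
  vpImage<unsigned char> I;
  vpImageIo::read(I, "tag.png"); // grayscale input image (placeholder name)

  vpCameraParameters cam;
  cam.initPersProjWithoutDistortion(390.2, 390.2, 317.6, 242.1); // D455-like values

  vpDetectorAprilTag detector(vpDetectorAprilTag::TAG_36h11); // 1) construct
  detector.setAprilTagQuadDecimate(1.0);                      // 2) configure
  detector.setAprilTagNbThreads(1);

  std::vector<vpHomogeneousMatrix> cMo_vec;                   // 3) detect
  detector.detect(I, 0.053 /* tag size in m */, cam, cMo_vec);
  for (size_t i = 0; i < cMo_vec.size(); i++)
    std::cout << "cMo[" << i << "]:\n" << cMo_vec[i] << std::endl;
  return 0;
}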

20++. For more, see the follow-up post

ardupilot development --- Jetson Orin Nano, Part 2
