[SLAM] ORB_SLAM3: A Detailed Record of Initial Setup and Debugging

Preface

Official ORB_SLAM3 source: https://github.com/UZ-SLAMLab/ORB_SLAM3

‼️ Note: if you are building with ROS, see issue https://github.com/UZ-SLAMLab/ORB_SLAM3/issues/442, or simply use the fork at https://gitee.com/kin_zhang/ORB_SLAM3/tree/fix%2Fros_build/

This is a beginner-style exploration record: I mainly wanted to run ORB_SLAM3 on my own dataset, with no prior experience, so the whole write-up stays at an introductory level. All the references and explanations I used are given in this preface or along the way. The goal was simply to get it running on my own hardware and see the results. (And then I discovered, emmm, that the IMU is a pitfall 🙃)


0. Building

I tried building both inside an 18.04 Docker container and natively on 20.04. Since the Pangolin display could not be forwarded out of Docker (only the ROS master node was reachable), I ended up building natively in the end.

If you go through ROS, then after installing ROS you basically only need to install Pangolin for visualization.

  • Install it as follows (the commands come from the Pangolin README):

    # Clone Pangolin along with its submodules
    git clone --recursive https://github.com/stevenlovegrove/Pangolin.git
    cd Pangolin
    # Install dependencies (as described above, or your preferred method)
    ./scripts/install_prerequisites.sh recommended
    # Configure and build
    cmake -B build
    cmake --build build
    # GIVE ME THE PYTHON STUFF!!! (Check the output to verify selected python version)
    cmake --build build -t pypangolin_pip_install
    

A few modifications are then needed on Ubuntu 20.04; the easiest route is to clone the fork directly:

git clone -b feat/20.04 https://github.com/kin-zhang/ORB_SLAM3.git
cd ORB_SLAM3
chmod +x build.sh
./build.sh
# ---- remember to replace PATH with your own path!
export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:PATH/ORB_SLAM3/Examples/ROS
chmod +x build_ros.sh
./build_ros.sh

# ---- examples 
export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:/home/kin/workspace/ORB_SLAM3/Examples_old/ROS
chmod +x build_ros.sh
./build_ros.sh

The changes involved:

  1. The OpenCV version check needs to be adjusted
  2. Compilation must use the C++14 standard
  3. Sophus needs a make install, or must be included from the CMakeLists

After that everything is OK, and you can test directly with the provided datasets, or with my Aliyun drive link (which apparently cannot be shared, emmm).

Then just run it (three terminals needed):

roscore
rosrun ORB_SLAM3 Stereo_Inertial Vocabulary/ORBvoc.txt Examples_old/Stereo-Inertial/EuRoC.yaml true
rosbag play --pause V1_02_medium.bag /cam0/image_raw:=/camera/left/image_raw /cam1/image_raw:=/camera/right/image_raw /imu0:=/imu

Screenshot of the run:


The rest of this post records how to run ORB_SLAM3 on your own dataset.

1. Camera Calibration

Since this is my own dataset, the cameras had to be calibrated by hand; the original repo actually ships a PDF that explains this quite well. The following uses monocular without IMU as the example (because it is the simplest…).

Intrinsics

Under Examples there are sample yaml files for each sensor configuration (monocular, monocular+IMU, RGB-D, RGB-D+IMU, stereo, stereo+IMU). Open Examples_old/Monocular/EuRoC.yaml:

For monocular, only the following intrinsics need to be calibrated:

# Camera calibration and distortion parameters (OpenCV) 
Camera.fx: 458.654
Camera.fy: 457.296
Camera.cx: 367.215
Camera.cy: 248.375

Camera.k1: -0.28340811
Camera.k2: 0.07395907
Camera.p1: 0.00019359
Camera.p2: 1.76187114e-05

Camera.width: 752
Camera.height: 480

# Camera frames per second 
Camera.fps: 20.0

The intrinsics are the focal length (fx, fy) and the principal point (cx, cy), plus the distortion model coefficients (k1, k2, p1, p2, and for fisheye models k3, k4).

  1. Calibration with ROS (see the camera_calibration wiki):

    rosrun camera_calibration cameracalibrator.py --size 7x10 --square 0.5 image:=/camera/image_raw --no-service-check
    
  2. Calibration with MATLAB (see e.g. CSDN tutorials)

In either case you end up with an intrinsic matrix of this form (see the references for the underlying theory):

$$K=\left(\begin{array}{ccc}f_{x} & s & x_{0} \\ 0 & f_{y} & y_{0} \\ 0 & 0 & 1\end{array}\right)$$

Note that the matrix sometimes needs to be transposed depending on the tool (OpenCV follows the convention in the formula above).
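To make the convention concrete, here is a minimal NumPy sketch (using the EuRoC values from the yaml above) of how the intrinsics and plumb_bob distortion coefficients map a 3D point in camera coordinates to a pixel:

```python
import numpy as np

# Intrinsics and plumb_bob distortion from the EuRoC yaml above
fx, fy, cx, cy = 458.654, 457.296, 367.215, 248.375
k1, k2, p1, p2 = -0.28340811, 0.07395907, 0.00019359, 1.76187114e-05

def project(point_cam):
    """Project a 3D point (camera coordinates) to a pixel using the
    pinhole model with radial-tangential (plumb_bob) distortion."""
    x = point_cam[0] / point_cam[2]
    y = point_cam[1] / point_cam[2]
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.array([fx * x_d + cx, fy * y_d + cy])

# A point on the optical axis lands exactly on the principal point
print(project(np.array([0.0, 0.0, 2.0])))  # -> [367.215 248.375]
```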

For example, my dataset's calibration file gives the following, and from reading the bag I know the fps is also 20 Hz:

image_width: 1024
image_height: 768
camera_name: stereo_right_Flir_BFSU3
camera_matrix: !!opencv-matrix
  rows: 3
  cols: 3
  dt: f
  data: [ 6.04964966e+02, 0., 5.17844666e+02,
          0., 6.04625610e+02, 3.89209320e+02,
          0., 0., 1. ]
distortion_model: plumb_bob
distortion_coefficients: !!opencv-matrix
  rows: 1
  cols: 4
  dt: f
  data: [ -9.58003029e-02, 8.74120295e-02, 2.08094658e-04, -1.08567670e-04 ]
rectification_matrix: !!opencv-matrix
  rows: 3
  cols: 3
  dt: f
  data: [ 9.99987543e-01, 4.83056623e-03, 1.24577642e-03,
          -4.83735651e-03, 9.99973118e-01, 5.50653692e-03,
          -1.21914328e-03, -5.51249459e-03, 9.99984086e-01 ]

Copy EuRoC.yaml to a new KIN.yaml and fill in the corresponding values:

# Camera calibration and distortion parameters (OpenCV) 
Camera.fx: 604.964
Camera.fy: 604.625
Camera.cx: 517.844
Camera.cy: 389.2093

Camera.k1: -0.0958
Camera.k2: 0.08741
Camera.p1: 2.08094658e-04
Camera.p2: -1.08567670e-04

Camera.width: 1024
Camera.height: 768

# Camera frames per second 
Camera.fps: 20.0
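The mapping from the OpenCV-style calibration file to these yaml keys is mechanical; a small sketch (pure Python, with the row-major data lists from the calibration file shown earlier):

```python
# camera_matrix data, row-major, from the calibration file above
K = [6.04964966e+02, 0., 5.17844666e+02,
     0., 6.04625610e+02, 3.89209320e+02,
     0., 0., 1.]
# plumb_bob coefficient order is (k1, k2, p1, p2)
D = [-9.58003029e-02, 8.74120295e-02, 2.08094658e-04, -1.08567670e-04]

params = {
    "Camera.fx": K[0], "Camera.fy": K[4],
    "Camera.cx": K[2], "Camera.cy": K[5],
    "Camera.k1": D[0], "Camera.k2": D[1],
    "Camera.p1": D[2], "Camera.p2": D[3],
}
for key, value in params.items():
    print(f"{key}: {value}")
```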

Extrinsics

Extrinsics mainly need changing for stereo / dual-camera setups.

For example, my calibration gives the left and right camera extrinsics relative to the IMU (given time, you could also calibrate the c1-to-c2 transform directly rather than going through the body frame):

# left camera c1
# extrinsics from the sensor (reference) to bodyimu (target)
quaternion_sensor_bodyimu: !!opencv-matrix
  rows: 1
  cols: 4
  dt: f
  data: [0.501677, 0.491365, -0.508060, 0.498754]
translation_sensor_bodyimu: !!opencv-matrix
  rows: 1
  cols: 3
  dt: f
  data: [0.066447, -0.019381, -0.077907]

# right camera c2
# extrinsics from the sensor (reference) to bodyimu (target)
quaternion_sensor_bodyimu: !!opencv-matrix
  rows: 1
  cols: 4
  dt: f
  data: [0.495420, 0.501199, -0.503827, 0.499516]
translation_sensor_bodyimu: !!opencv-matrix
  rows: 1
  cols: 3
  dt: f
  data: [-0.093388, -0.017886, -0.078768]

Then paste these straight into MATLAB to get T_c1_c2 (mapping points from the c2 frame into the c1 frame):

% quat2tform expects quaternions in [w x y z] order
T_c1_b = trvec2tform([0.066447, -0.019381, -0.077907])*quat2tform([0.501677, 0.491365, -0.508060, 0.498754])
T_c2_b = trvec2tform([-0.093388, -0.017886, -0.078768])*quat2tform([0.495420, 0.501199, -0.503827, 0.499516])
disp(T_c1_b*inv(T_c2_b))  % = T_c1_c2

This yields a 4x4 matrix; paste it into the yaml under the Stereo folder (and of course replace both cameras' intrinsics as well):

Stereo.ThDepth: 60.0
Stereo.T_c1_c2: !!opencv-matrix
  rows: 4
  cols: 4
  dt: f
  data: [1.0071,0.0073,-0.00140,0.0798,
         -0.0051,1.0161,0.0164,-0.0002,
         -0.0020,-0.0091,1.0019,0.0003,
         0,0,0,1.0000]
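As a cross-check, the same composition can be done in plain NumPy. This is a sketch under one assumption: the quaternions are in the [w, x, y, z] order that quat2tform expects (if your tool exports [x, y, z, w], reorder first). Normalizing the quaternions also keeps the rotation block of the result orthonormal, which the raw matrix above is slightly not.

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a quaternion in [w, x, y, z] order
    (the ordering MATLAB's quat2tform expects)."""
    w, x, y, z = q / np.linalg.norm(q)  # normalize against rounding
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def to_tform(q, t):
    """Build a 4x4 homogeneous transform from quaternion + translation."""
    T = np.eye(4)
    T[:3, :3] = quat_to_rot(np.asarray(q, dtype=float))
    T[:3, 3] = t
    return T

T_c1_b = to_tform([0.501677, 0.491365, -0.508060, 0.498754],
                  [0.066447, -0.019381, -0.077907])
T_c2_b = to_tform([0.495420, 0.501199, -0.503827, 0.499516],
                  [-0.093388, -0.017886, -0.078768])
T_c1_c2 = T_c1_b @ np.linalg.inv(T_c2_b)
print(np.round(T_c1_c2, 4))
```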

2. Running

Monocular

For monocular, updating the camera parameters is basically all that is needed; see the references for the remaining ORB-SLAM parameters.

Because my bags were recorded with image type sensor_msgs/CompressedImage, the images first need to be republished as raw:

roscore
rosrun image_transport republish compressed in:=/stereo/frame_right/image_raw raw out:=/camera/image_raw

Then play the bag:

rosbag play --pause 20220216_MCR_normal.bag /imu:=/stim300/imu/data_raw

Then run the algorithm:

rosrun ORB_SLAM3 Mono Vocabulary/ORBvoc.txt Examples_old/Monocular/KIN.yaml

Screenshot of the run:

Stereo / dual camera

For stereo, both cameras' intrinsics must be set, together with the homogeneous transformation matrix from camera c1 to c2.

Otherwise it is much like the monocular case; only the executable name and the republished topics change:

roscore
rosrun image_transport republish compressed in:=/stereo/frame_left/image_raw raw out:=/camera/left/image_raw
rosrun image_transport republish compressed in:=/stereo/frame_right/image_raw raw out:=/camera/right/image_raw

Then play the bag:

rosbag play --pause 20220216_MCR_normal.bag /imu:=/stim300/imu/data_raw

Then run the algorithm:

rosrun ORB_SLAM3 Stereo Vocabulary/ORBvoc.txt Examples_old/Stereo/KIN.yaml false

Screenshot of the run:

With the IMU

It runs, but keeps resetting, emmm; at first I suspected I kept getting the extrinsics wrong. After some more digging, I now think I know the cause: when recording my bags I did not yet know that IMU initialization needs time (and motion).

roscore
rosrun image_transport republish compressed in:=/stereo/frame_left/image_raw raw out:=/camera/left/image_raw
rosrun image_transport republish compressed in:=/stereo/frame_right/image_raw raw out:=/camera/right/image_raw

Then play the bag:

rosbag play --pause 20220216_MCR_normal.bag /stim300/imu/data_raw:=/imu

Then run the algorithm:

rosrun ORB_SLAM3 Stereo_Inertial Vocabulary/ORBvoc.txt Examples_old/Stereo-Inertial/HKUST.yaml false

The IMU keeps resetting, hitting this check in the tracking code:

if (!mCurrentFrame.mpImuPreintegrated || !mLastFrame.mpImuPreintegrated)
{
    cout << "not IMU meas" << endl;
    return;
}

Possible causes:

  • The coordinate frame convention in ROS differs from the usual camera convention

Running ORB-SLAM3 with ZED2 always appears error. · Issue #620 · stereolabs/zed-ros-wrapper

The camera/IMU transform is in the standard ROS frame: X Forward, Y left, Z Top.

However, the transform between the two cameras works fine, so this is probably not the cause.

  • The bag does not give the IMU enough initialization time; initialization needs both motion and stillness (links below):

the imu initialization need motion, you can try to move the imu at first till it print the “end VBA2”.

Do you have enough motion in all directions? Take a look at using ORB-SLAM3 with e.g. the EuRoC dataset MH_01_easy.bag; it seems like motion initially is important for the initialization of ORB-SLAM3 to work (from an issue)


3. Algorithm Parameters

# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1000

# ORB Extractor: Scale factor between levels in the scale pyramid 	
ORBextractor.scaleFactor: 1.2

# ORB Extractor: Number of levels in the scale pyramid	
ORBextractor.nLevels: 8

# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast			
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7
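The fallback behavior described in the comments can be sketched as follows (hypothetical code; detect stands in for running FAST on one grid cell at a given threshold):

```python
def extract_cell(detect, ini_th=20, min_th=7):
    """Two-stage FAST extraction for one grid cell, as the comments
    describe: try the strict threshold first, then fall back."""
    corners = detect(ini_th)      # impose iniThFAST first
    if not corners:               # nothing passed the strict threshold
        corners = detect(min_th)  # retry with the permissive minThFAST
    return corners

# Toy stand-in: "responses" of candidate corners in one low-contrast cell
responses = [5, 9, 12]
detect = lambda th: [r for r in responses if r >= th]
print(extract_cell(detect))  # -> [9, 12] via the minThFAST fallback
```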

Since everything was indoors I barely tuned these, though I may look into the details later. It ran quite well, and if you want the pose you can simply publish it from the node and use the system as a localizer.


If this post helped, a like is appreciated 😆; positive feedback keeps these open records coming, hhh
