Paper reading notes: (2015, IJRR) Keyframe-based visual–inertial odometry using nonlinear optimization

These notes cover how a keyframe-based visual-inertial SLAM system uses tightly coupled nonlinear optimization to cope with visual outliers and to calibrate the camera extrinsics online. The main topics are the mathematics of marginalization, the keyframe selection strategy, and the keyframe marginalization procedure, along with the reprojection error, the IMU model, and how marginalization is carried out in practice.

This is arguably a must-read paper for sliding-window VIO; it explains marginalization in great detail.

 

paper: http://in.ruc.edu.cn/wp-content/uploads/2021/01/Keyframe-Based-Visual-Inertial-Odometry-Using-Nonlinear-Optimization.pdf

1. Introduction / Contributions

For a VIO system, tight coupling with nonlinear optimization greatly improves accuracy when the visual measurements contain outliers;

It provides a method for online calibration of the camera extrinsics;

The details are described thoroughly, which makes re-implementation convenient.

2. Batch Visual SLAM with Inertial Terms

Coordinate frame definitions:

The states to be estimated are as follows:
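From the paper (notation only approximated here), the robot state at each time step stacks position, orientation, velocity and the two IMU biases, and the full estimated state additionally contains the landmark positions and the camera extrinsics:

```latex
% sketch of the estimated states; frames and indices follow the paper loosely
x_R^k = \left[\, {}_{W}r_S^{\top},\; q_{WS}^{\top},\; {}_{W}v^{\top},\; b_g^{\top},\; b_a^{\top} \,\right]^{\top},
\qquad
x = \{x_R^k\}_k \;\cup\; \{\, {}_{W}l_j \,\}_j \;\cup\; \{\, T_{SC_i} \,\}_i
```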

Cost Function: 
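The cost in the paper sums weighted reprojection errors over cameras i, frames k and visible landmarks j, plus weighted IMU error terms between consecutive frames; schematically (the W matrices are the corresponding information matrices):

```latex
J(x) \;=\; \sum_{i}\sum_{k}\sum_{j \in \mathcal{J}(i,k)}
  {e_r^{i,j,k}}^{\top} W_r^{i,j,k}\, e_r^{i,j,k}
\;+\; \sum_{k} {e_s^{k}}^{\top} W_s^{k}\, e_s^{k}
```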

a. Reprojection Error Formulation

For the reprojection error and its Jacobian, see Chapters 4 and 7 of Gao Xiang's book (14 Lectures on Visual SLAM).
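For reference, the reprojection error compares the keypoint measurement with the landmark projected through the camera model, using the current pose and extrinsics estimates (a sketch; h_i denotes the projection model of camera i):

```latex
e_r^{i,j,k} \;=\; z^{i,j,k} - h_i\!\left( T_{C_i S}\, T_{S W}^{k}\; {}_{W}l_j \right)
```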

b. IMU Kinematics and Bias Model

(See the paper for details.)
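As a quick reminder of the model used in the paper, the gyroscope bias follows a random walk driven by white noise, while the accelerometer bias follows a bounded random walk (a first-order Gauss-Markov process with time constant τ); roughly:

```latex
\dot{b}_g = w_{b_g}, \qquad
\dot{b}_a = -\tfrac{1}{\tau}\, b_a + w_{b_a}
```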

c. Formulation of the IMU Measurement Error Term
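Loosely speaking, the IMU error term between frames k and k+1 is the manifold difference between the state predicted by integrating the IMU measurements starting from x_R^k and the estimated state x_R^{k+1}, weighted by the information matrix obtained from the propagated covariance (a sketch only, not the exact formulation in the paper):

```latex
e_s^{k} \;=\; \hat{x}_R^{k+1}\!\left(x_R^{k},\, z_{\mathrm{IMU}}^{k \to k+1}\right) \;\boxminus\; x_R^{k+1},
\qquad
W_s^{k} = \left(P_{\hat{x}}^{k+1}\right)^{-1}
```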

3. Frontend Overview

3.1 Keypoint Detection, Matching, and Variable Initialization

Keypoints and descriptors: an SSE-optimized Harris corner detector combined with BRISK descriptors.

Matching steps (sketched in code after the list):

    a. Use the IMU to roughly predict the pose of the current frame and collect the map points that should be visible from it;

    b. 3D-to-2D matching: map points whose depth has converged are matched directly against the current frame, and the matches are used to compute the absolute pose of the current frame via RANSAC;

    c. 2D-to-2D matching: keypoints in the current frame that have no associated map point are matched against keypoints with unconverged depth in all active keyframes (any previous frames available) -> these matches are triangulated to estimate 3D positions -> points with low position uncertainty are added to the map -> RANSAC estimates the relative pose between the current frame and the most recent frame (how this pose is computed and what it is used for is not clear to me).
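The steps above can be summarized as the following Python-flavoured pseudocode; all helper names (predict_pose_with_imu, match_3d_to_2d, ransac_pnp, and so on) are hypothetical placeholders, not functions from the paper or from OKVIS:

```python
# Rough frontend sketch; helpers are hypothetical and would need real implementations.
def process_frame(frame, local_map, active_keyframes, imu_measurements):
    # a. Predict the current pose from the IMU and gather the map points
    #    that should be visible from that predicted pose.
    predicted_pose = predict_pose_with_imu(active_keyframes[-1].pose, imu_measurements)
    visible = local_map.points_visible_from(predicted_pose)

    # b. 3D-to-2D: match depth-converged map points to the current keypoints,
    #    then compute the absolute pose of the frame with RANSAC (PnP).
    converged = [p for p in visible if p.depth_converged]
    matches_3d2d = match_3d_to_2d(converged, frame.keypoints)
    frame.pose = ransac_pnp(matches_3d2d, initial_guess=predicted_pose)

    # c. 2D-to-2D: match still-unassociated keypoints against unconverged
    #    keypoints in the active keyframes, triangulate them, and keep only
    #    the ones with low position uncertainty as new map points; RANSAC
    #    also yields a relative pose to the most recent frame.
    unassociated = [kp for kp in frame.keypoints if kp.landmark is None]
    matches_2d2d = match_2d_to_2d(unassociated, active_keyframes)
    for m in matches_2d2d:
        point, uncertainty = triangulate(m, frame.pose)
        if uncertainty < UNCERTAINTY_THRESHOLD:
            local_map.add(point)
    relative_pose = ransac_relative_pose(matches_2d2d)

    return frame, matches_3d2d, matches_2d2d
```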

3.2 Keyframe Selection

The current frame is treated as a keyframe when the number of 3D-to-2D matches falls below 50% of the detected keypoints, or the number of 2D-to-2D matches falls below 20% of the detected keypoints.
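A minimal, runnable sketch of that heuristic, assuming the match counts are already available (the function and threshold names are mine, not the paper's):

```python
def is_keyframe(num_detected: int, num_matches_3d2d: int, num_matches_2d2d: int,
                ratio_3d2d: float = 0.5, ratio_2d2d: float = 0.2) -> bool:
    """Flag the current frame as a keyframe when tracking quality degrades.

    The frame becomes a keyframe if 3D-to-2D matches fall below 50% of the
    detected keypoints, or 2D-to-2D matches fall below 20% of them.
    """
    if num_detected == 0:
        return True  # nothing detected: conservatively insert a keyframe
    return (num_matches_3d2d < ratio_3d2d * num_detected or
            num_matches_2d2d < ratio_2d2d * num_detected)


# usage example: 150 < 0.5 * 400, so this frame would become a keyframe
print(is_keyframe(num_detected=400, num_matches_3d2d=150, num_matches_2d2d=120))  # True
```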

4. Keyframes and Marginalization

4.1 Mathematical Formulation of Marginalization in Nonlinear Optimization

A very detailed reference blog post on sliding-window marginalization: https://blog.csdn.net/qq_34213260/article/details/120359990#t17

On "first-estimate Jacobians" (FEJ), see: (2011, ICRA) Motion tracking with fixed-lag smoothing: Algorithm and consistency analysis.
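For reference, marginalizing a set of states x_μ out of the Gauss-Newton system H δx = b (x_λ are the states that remain in the window) amounts to taking the Schur complement of the corresponding blocks:

```latex
\begin{bmatrix} H_{\mu\mu} & H_{\mu\lambda} \\ H_{\lambda\mu} & H_{\lambda\lambda} \end{bmatrix}
\begin{bmatrix} \delta x_{\mu} \\ \delta x_{\lambda} \end{bmatrix}
=
\begin{bmatrix} b_{\mu} \\ b_{\lambda} \end{bmatrix}
\quad\Longrightarrow\quad
H^{*} = H_{\lambda\lambda} - H_{\lambda\mu} H_{\mu\mu}^{-1} H_{\mu\lambda},
\qquad
b^{*} = b_{\lambda} - H_{\lambda\mu} H_{\mu\mu}^{-1} b_{\mu}
```

H* and b* then act as a prior on x_λ in subsequent optimizations; since the problem is nonlinear, the Jacobians used to build this prior stay evaluated at the linearization point of the marginalized states, which is the "first-estimate Jacobian" issue referenced above.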

4.2 Marginalization Applied to Keyframe-Based Visual-Inertial SLAM

The choice of which states to marginalize follows two principles: 1. keep the H matrix from growing too large; 2. preserve the sparsity of the H matrix.

First, some of the older IMU bias and velocity states are marginalized out.

For a frame that is not a keyframe, its landmark observations are dropped, and its pose, IMU biases and velocity are marginalized out:

 

For a keyframe that slides out of the window, its pose, IMU biases and velocity are marginalized out, together with the landmarks that are not observed by the newest keyframe; a sketch of this decision logic is given below.
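A minimal Python sketch of this drop/marginalize decision, using hypothetical Frame containers of my own; it only encodes the policy described above, not the actual OKVIS bookkeeping:

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    frame_id: int
    is_keyframe: bool
    landmark_ids: set = field(default_factory=set)  # landmarks observed by this frame

def states_to_marginalize(oldest: Frame, newest_keyframe: Frame):
    """Decide what to remove when `oldest` slides out of the optimization window.

    Non-keyframes: drop their landmark observations, marginalize pose/speed/biases.
    Keyframes: marginalize pose/speed/biases plus the landmarks that the newest
    keyframe does not observe (keeps H small and sparse).
    """
    marg_states = {f"pose_{oldest.frame_id}", f"speed_bias_{oldest.frame_id}"}
    if not oldest.is_keyframe:
        dropped_observations = oldest.landmark_ids          # simply discarded
        marg_landmarks = set()
    else:
        dropped_observations = set()
        marg_landmarks = oldest.landmark_ids - newest_keyframe.landmark_ids
    return marg_states, marg_landmarks, dropped_observations

# usage example
old_kf = Frame(0, True, {1, 2, 3})
new_kf = Frame(7, True, {2, 3, 4})
print(states_to_marginalize(old_kf, new_kf))
# ({'pose_0', 'speed_bias_0'}, {1}, set())
```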

 

Paper abstract:

Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate Visual-Inertial Odometry or Simultaneous Localization and Mapping (SLAM). While historically the problem has been addressed with filtering, advancements in visual estimation suggest that non-linear optimization offers superior accuracy, while still tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, thus ensuring real-time operation, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual-inertial hardware that accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and monocular version of our algorithm with and without online extrinsics estimation is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly-coupled filtering-based visual-inertial odometry. While our approach declaredly demands more computation, we show its superior performance in terms of accuracy.