[Review] Visual-LiDAR Fusion SLAM | Traditional Methods | Latest as of 2020

A review of visual and LiDAR SLAM

A Review of Visual-LiDAR Fusion based Simultaneous Localization and Mapping

The paper first introduces the basic principles of SLAM, then visual SLAM, then LiDAR SLAM, and finally the fusion of the two.

Section 2: SLAM
  • 1. The probabilistic formulation of SLAM
  • 2. The graph-based SLAM framework
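As a sketch of the two formulations above (notation assumed here, not taken verbatim from the paper): the probabilistic view estimates the joint posterior over the trajectory and the map, and graph-based SLAM turns this into a nonlinear least-squares problem over constraint errors.

```latex
% Full SLAM posterior over poses x_{1:T} and map m,
% given measurements z_{1:T} and controls u_{1:T}:
p(x_{1:T}, m \mid z_{1:T}, u_{1:T})

% Graph-based SLAM: nodes are poses, edges are constraints;
% e_{ij} is the error of edge (i,j), \Omega_{ij} its information matrix:
x^{*} = \operatorname*{argmin}_{x} \sum_{(i,j)} e_{ij}(x_i, x_j)^{\top}\, \Omega_{ij}\, e_{ij}(x_i, x_j)
```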
Section 3: V-SLAM
  • 1. All of these visual SLAM approaches are prone to failure under changing illumination or in low-texture environments.
Section 4: LiDAR-Based SLAM
  • 1. The mainstream LiDAR-based solution is scan matching, followed by graph optimization.
  • 2. Each node represents a sensor measurement; each edge represents a constraint derived from the observations.
  • 3. Occupancy grid maps and particle filters.
  • 4. Loop-closure detection and global optimization.
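The scan matching mentioned above is typically some variant of the iterative closest point (ICP) algorithm. A minimal 2D point-to-point sketch (the function name and the brute-force matching are illustrative, not from the paper):

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Minimal 2D point-to-point ICP: aligns src onto dst.
    Assumes a good initial guess (identity here) and full overlap."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        cur = src @ R.T + t
        # nearest-neighbour correspondences (brute force, O(N*M))
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        nn = dst[d.argmin(axis=1)]
        # closed-form rigid fit (Kabsch) on the matched pairs
        mu_s, mu_d = cur.mean(0), nn.mean(0)
        H = (cur - mu_s).T @ (nn - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        dt = mu_d - dR @ mu_s
        R, t = dR @ R, dR @ t + dt   # compose the incremental update
    return R, t
```

Real LiDAR SLAM implementations replace the brute-force matching with k-d trees, add outlier rejection, and often use point-to-plane or distribution-based metrics instead of point-to-point.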
Section 5: LiDAR-Camera Fusion
  • 1. Even though V-SLAM can deliver accurate results, it still has shortcomings, e.g., scale drift in monocular SLAM, limited stereo depth-estimation accuracy, and the difficulty of dense outdoor RGB-D reconstruction.

  • 2. The main advantage of LiDAR is that it is very accurate in ranging and mapping.

  • 3. Fusing the two can greatly improve SLAM performance, but calibration and fusion are difficult.

  • 4. Calibration

    • Camera-LiDAR extrinsics: traditional checkerboard calibration [62]; CNN-based (online) calibration [63].
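What extrinsic calibration estimates is the rigid transform between the two sensors. In the standard pinhole model (notation assumed, not from the paper), a LiDAR point maps into the image via the extrinsics (R, t) and the camera intrinsics K:

```latex
% X_L: 3D point in the LiDAR frame; (u, v): its pixel coordinates;
% s: projective depth; K: camera intrinsics; (R, t): LiDAR-to-camera extrinsics
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \left( R\, X_L + t \right)
```

Checkerboard methods like [62] recover (R, t) by observing the same board in both sensors; CNN methods like [63] regress or refine it online.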
  • 5. Fusion to improve V-SLAM:

    • [67] LiDAR provides depth values, while vision performs motion estimation and mapping.
    • [68] A direct visual SLAM method that exploits LiDAR depth values.
    • However, since camera resolution is far higher than LiDAR resolution, many pixels are left without depth information.
    • [69] addresses this resolution mismatch: after computing the geometric transformation between the two sensors, Gaussian process regression interpolates the missing depth values, so the LiDAR is only used to initialize the depth of detected features, much like an RGB-D sensor.
    • [70] fuses monocular vision with a 1D LiDAR; the scheme achieves effective drift correction on very low-cost hardware.
    • [71] combines vision and inertial sensing for state estimation, with LiDAR used for obstacle detection and river-boundary mapping.
    • However, the point cloud may contain occluded points, which degrades accuracy.
    • [72] proposes a direct SLAM method that addresses this with an occluded-point detector and a coplanar-point detector.
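The depth-association step behind [67] and [68] amounts to projecting the LiDAR cloud into the image and keeping the nearest return per pixel. A sketch under assumed calibration (function name, K, R, t are illustrative; real systems also interpolate between sparse returns, as in [69]):

```python
import numpy as np

def lidar_depth_for_pixels(pts_lidar, K, R, t, img_w, img_h):
    """Project LiDAR points into the image and build a sparse depth map.
    K, R, t are assumed known from extrinsic calibration."""
    pc = pts_lidar @ R.T + t            # LiDAR frame -> camera frame
    pc = pc[pc[:, 2] > 0]               # keep points in front of the camera
    uv = pc @ K.T                       # pinhole projection (homogeneous)
    px = np.round(uv[:, :2] / uv[:, 2:3]).astype(int)
    depth = np.full((img_h, img_w), np.nan)
    inside = (px[:, 0] >= 0) & (px[:, 0] < img_w) & \
             (px[:, 1] >= 0) & (px[:, 1] < img_h)
    for (u, v), z in zip(px[inside], pc[inside, 2]):
        # keep the closest return per pixel (crude occlusion handling)
        if np.isnan(depth[v, u]) or z < depth[v, u]:
            depth[v, u] = z
    return depth
```

Keeping only the closest return per pixel is exactly the kind of crude occlusion handling that [72] improves on with explicit occluded-point detection.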
  • 6. Fusion to improve LiDAR SLAM:

    • In many visual-LiDAR SLAM systems, LiDAR scan matching performs motion estimation while the camera performs feature detection.
    • [73] strengthens the weak loop-closing capability of LiDAR-based SLAM by combining scan matching with a visual loop-closure scheme based on ORB features.
    • [74] fuses 3D LiDAR SLAM with visual keyframe bag-of-words loop detection; moreover, the LiDAR-camera fusion allows the iterative closest point (ICP) step to be refined.
    • [75] uses visual information to predict the rigid transformation, which bootstraps a generalized ICP framework.
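For context on the last item: generalized ICP replaces the point-to-point metric with a distribution-to-distribution one. Stated here from general knowledge of Segal et al.'s formulation (not from the review itself), each residual d_i between matched points a_i and b_i is weighted by the combined local covariances:

```latex
% T: current rigid transform estimate; C_i^A, C_i^B: local surface
% covariances at the matched points a_i (source) and b_i (target)
d_i = b_i - T\, a_i, \qquad
T^{*} = \operatorname*{argmin}_{T} \sum_i d_i^{\top}
  \left( C_i^{B} + R\, C_i^{A} R^{\top} \right)^{-1} d_i
```

The visual prediction in [75] supplies the initial T, which this iterative minimization then refines.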
  • 7. Concurrent visual-LiDAR fusion:

    • [76] uses visual and laser measurements by running a SLAM pipeline for each modality in parallel and coupling their data; the final trajectory and map are then obtained by using the residuals of both modalities in the optimization stage.
    • [77, V-LOAM] The system refines motion estimation and corrects drift with high-frequency visual odometry and low-frequency LiDAR odometry.
    • [78] is probably the tightest fusion available so far: graph optimization is performed with a cost function that accounts for both laser and feature constraints, and the robot pose can be estimated from both laser data and map data. In addition, a 2.5D map is built to accelerate loop-closure detection.
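The graph optimization in [78] boils down to a weighted least-squares problem over the pose graph. A toy 1D example (all numbers invented) showing how a loop-closure edge pulls drifting odometry back into agreement:

```python
import numpy as np

# Toy 1D pose graph: 4 poses. Odometry edges measure x_j - x_i = 1.0
# (accumulating drift); one loop-closure edge measures x_3 - x_0 = 2.7
# with higher information weight. Edges are (i, j, measurement, weight).
edges = [(0, 1, 1.0, 1.0), (1, 2, 1.0, 1.0), (2, 3, 1.0, 1.0),
         (0, 3, 2.7, 10.0)]
n = 4
A, b = [], []
for i, j, z, w in edges:
    row = np.zeros(n)
    row[j], row[i] = 1.0, -1.0          # residual: (x_j - x_i) - z
    A.append(np.sqrt(w) * row)
    b.append(np.sqrt(w) * z)
# gauge constraint: anchor x_0 at 0 with a large weight
A.append(np.array([1.0, 0.0, 0.0, 0.0]) * 1e3)
b.append(0.0)
x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
```

Running this, the chained odometry alone would put x_3 at 3.0, but the heavily weighted loop closure pulls it to roughly 2.71, redistributing the correction evenly over the intermediate poses; in a real system each edge's weight is the information matrix of a laser or visual-feature constraint.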

62. Kassir, A.; Peynot, T. Reliable automatic camera-laser calibration. In Australasian Conference on Robotics and
Automation (ACRA 2010); Wyeth, G., Upcroft, B., Eds.; ARAA: Brisbane, Australia, 2010.
63. Park, K.; Kim, S.; Sohn, K. High-Precision Depth Estimation Using Uncalibrated LiDAR and Stereo Fusion.
IEEE Trans. Intell. Transp. Syst. 2019, 1–15. [CrossRef]
64. Sun, F.; Zhou, Y.; Li, C.; Huang, Y. Research on active SLAM with fusion of monocular vision and laser range
data. In Proceedings of the 2010 8th World Congress on Intelligent Control and Automation, Jinan, China,
7–9 July 2010; pp. 6550–6554.
65. Xu, Y.; Ou, Y.; Xu, T. SLAM of Robot based on the Fusion of Vision and LIDAR. In Proceedings of the 2018
IEEE International Conference on Cyborg and Bionic Systems (CBS), Shenzhen, China, 25–27 October 2018;
pp. 121–126.
66. Guillén, M.; García, S.; Barea, R.; Bergasa, L.; Molinos, E.; Arroyo, R.; Romera, E.; Pardo, S. A Multi-Sensorial
Simultaneous Localization and Mapping (SLAM) System for Low-Cost Micro Aerial Vehicles in GPS-Denied
Environments. Sensors 2017, 17, 802. [CrossRef]
67. Graeter, J.; Wilczynski, A.; Lauer, M. Limo: Lidar-monocular visual odometry. In Proceedings of
the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain,
1–5 October 2018; pp. 7872–7879.
68. Shin, Y.S.; Park, Y.S.; Kim, A. Direct visual SLAM using sparse depth for camera-lidar system. In Proceedings
of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia,
21–25 May 2018; pp. 1–8.
69. De Silva, V.; Roche, J.; Kondoz, A. Fusion of LiDAR and camera sensor data for environment sensing in
driverless vehicles. arXiv 2018, arXiv:1710.06230.
70. Zhang, Z.; Zhao, R.; Liu, E.; Yan, K.; Ma, Y. Scale Estimation and Correction of the Monocular Simultaneous
Localization and Mapping (SLAM) Based on Fusion of 1D Laser Range Finder and Vision Data. Sensors
2018, 18, 1948. [CrossRef] [PubMed]
71. Scherer, S.; Rehder, J.; Achar, S.; Cover, H.; Chambers, A.; Nuske, S.; Singh, S. River Mapping From a Flying
Robot: State Estimation, River Detection, and Obstacle Mapping. Auton. Robots 2012, 33. [CrossRef]
72. Huang, K.; Xiao, J.; Stachniss, C. Accurate Direct Visual-Laser Odometry with Explicit Occlusion Handling
and Plane Detection. In Proceedings of the IEEE International Conference on Robotics and Automation
(ICRA), Montreal, Canada, 20–24 May 2019; pp. 1295–1301. [CrossRef]
73. Liang, X.; Chen, H.; Li, Y.; Liu, Y. Visual laser-SLAM in large-scale indoor environments. In Proceedings
of the 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO), Qingdao, China,
3–7 December 2016; pp. 19–24. [CrossRef]
74. Zhu, Z.; Yang, S.; Dai, H.; Li, G. Loop Detection and Correction of 3D Laser-Based SLAM with Visual
Information. In Proceedings of the 31st International Conference on Computer Animation and Social Agents,
Caine, USA, 21–23 October 2018; pp. 53–58. [CrossRef]
75. Pandey, G.; McBride, J.; Savarese, S.; Eustice, R. Visually bootstrapped generalized ICP. In Proceedings
of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011;
pp. 2660–2667. [CrossRef]
76. Seo, Y.; Chou, C. A Tight Coupling of Vision-Lidar Measurements for an Effective Odometry. In Proceedings
of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 1118–1123. [CrossRef]
77. Zhang, J.; Singh, S. Visual-lidar Odometry and Mapping: Low-drift, Robust, and Fast. In Proceedings of the
2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015;
Volume 2015. [CrossRef]
78. Jiang, G.; Lei, Y.; Jin, S.; Tian, C.; Ma, X.; Ou, Y. A Simultaneous Localization and Mapping (SLAM) Framework for 2.5D Map Building Based on Low-Cost LiDAR and Vision Fusion. Appl. Sci. 2019, 9, 2105. [CrossRef]
