A Comprehensive Collection of SLAM Papers and Open-Source Code (LiDAR + Visual + Fusion)

This post summarizes the representative algorithm papers and open-source code for LiDAR, visual, and fusion SLAM, covering LiDAR SLAM, visual SLAM, and LiDAR-visual fusion SLAM, with links and references that span the field from basic theory to practical applications.

1. Representative visual SLAM algorithms: papers and open-source code

(Image: summary table of representative visual SLAM algorithms with paper and code links.)

2. Representative LiDAR SLAM algorithms: papers and open-source code

(Image: summary table of representative LiDAR SLAM algorithms with paper and code links.)

3. Representative LiDAR-visual fusion SLAM algorithm papers

(Image: summary table of representative LiDAR-visual fusion SLAM algorithm papers.)

LiDAR-visual-IMU-GPS fusion SLAM: theory and code walkthrough: https://mp.weixin.qq.com/s/CEJPWHVAnKsLepqP3lSAbg

References

[1] CADENA C, CARLONE L, CARRILLO H, et al. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age[J]. IEEE Transactions on Robotics, 2016, 32(6):1309-1332.

[2] YOUNES G, ASMAR D, SHAMMAS E. A survey on non-filter-based monocular visual SLAM systems[J]. Robotics and Autonomous Systems, 2016.

[3] STRASDAT H, MONTIEL J M, DAVISON A J. Visual SLAM: Why filter?[J]. Image and Vision Computing, 2012, 30(2):65-77.

[4] DAVISON A J. Real-time simultaneous localization and mapping with a single camera[C]// Proceedings Ninth IEEE International Conference on Computer Vision. IEEE, 2003.

[5] DAVISON A J, REID I D, MOLTON N D, et al. MonoSLAM: Real-time single camera SLAM[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(6):1052-1067.

[6] CIVERA J, DAVISON A J, MONTIEL J M M. Inverse depth parametrization for monocular SLAM[J]. IEEE Transactions on Robotics, 2008, 24(5):932-945.

[7] KUMMERLE R, GRISETTI G, STRASDAT H, et al. g2o: A general framework for graph optimization[C]// IEEE International Conference on Robotics and Automation. IEEE, 2011.

[8] POLOK L. Incremental block Cholesky factorization for nonlinear least squares in robotics[C]// IFAC Proceedings Volumes, 2013:172-178.

[9] KLEIN G, MURRAY D W. Parallel tracking and mapping for small AR workspaces[C]// IEEE and ACM International Symposium on Mixed and Augmented Reality. ACM, 2008.

[10] KLEIN G, MURRAY D W. Improving the agility of keyframe-based SLAM[C]// European Conference on Computer Vision. Springer-Verlag, 2008.

[11] KLEIN G, MURRAY D W. Parallel tracking and mapping on a camera phone[C]// IEEE International Symposium on Mixed and Augmented Reality, 2009:83-86.

[12] KLEIN G, MURRAY D W. PTAM-GPL[EB/OL]. 2013. https://github.com/Oxford-PTAM/PTAM-GPL.

[13] ENGEL J, SCHÖPS T, CREMERS D. LSD-SLAM: Large-scale direct monocular SLAM[C]// European Conference on Computer Vision. Springer, Cham, 2014.

[14] ENGEL J, STUCKLER J, CREMERS D. Large-scale direct SLAM with stereo cameras[C]// 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2015.

[15] ENGEL J, SCHÖPS T, CREMERS D. LSD-SLAM: Large-scale direct monocular SLAM[EB/OL]. 2014. https://github.com/tum-vision/lsd_slam.

[16] FORSTER C, PIZZOLI M, SCARAMUZZA D. SVO: Fast semi-direct monocular visual odometry[C]// IEEE International Conference on Robotics and Automation. IEEE, 2014.

[17] FORSTER C, ZHANG Z, GASSNER M, et al. SVO: Semidirect visual odometry for monocular and multicamera systems[J]. IEEE Transactions on Robotics, 2017, 33(2):249-265.

[18] FORSTER C, PIZZOLI M, SCARAMUZZA D. SVO[EB/OL]. 2014. https://github.com/uzh-rpg/rpg_svo.

[19] MUR-ARTAL R, MONTIEL J M M, TARDOS J D. ORB-SLAM: A versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31(5):1147-1163.

[20] MUR-ARTAL R, TARDOS J D. ORB-SLAM2: An open-source SLAM system for monocular, stereo and RGB-D cameras[J]. IEEE Transactions on Robotics, 2017, 33(5):1255-1262.

[21] MUR-ARTAL R, TARDOS J D, MONTIEL J M M, et al. ORB-SLAM2[EB/OL]. 2016. https://github.com/raulmur/ORB_SLAM2.

[22] ENGEL J, KOLTUN V, CREMERS D. Direct sparse odometry[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(3):611-625.

[23] MATSUKI H, VON STUMBERG L, USENKO V, et al. Omnidirectional DSO: Direct sparse odometry with fisheye cameras[J]. IEEE Robotics and Automation Letters, 2018.

Open-source LiDAR-visual fusion semantic SLAM performs simultaneous localization and mapping using data from both a LiDAR and a camera. It combines the high-precision maps produced by the LiDAR with the semantic information provided by the camera, so the system can localize itself and recognize objects in the environment at the same time; releasing the source code lets researchers and developers understand and apply these algorithms more easily.

The main advantage of this approach is that it exploits the complementary strengths of the two sensors, improving both the accuracy of the map and the quality of the reconstruction. The LiDAR supplies precise map structure and range measurements, while the camera supplies richer semantic information; fusing the two data streams allows the high-precision LiDAR geometry to be refined and the semantic cues from the images to be attached to it, yielding a more accurate and complete map.

Open-source code also gives researchers free access to these algorithms, accelerating research progress and technology adoption. The code can be customized and modified to fit different application scenarios and hardware platforms, and sharing it fosters exchange and collaboration between academia and industry, driving the development and application of SLAM.

In short, open-source LiDAR-visual fusion semantic SLAM fuses high-precision mapping with rich semantic information during simultaneous localization and mapping; sharing the source code promotes the development and spread of the technique, gives researchers and developers better tools and resources, and pushes SLAM into a wider range of application domains.
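To make the fusion step above concrete, the sketch below shows the core operation shared by most LiDAR-visual semantic pipelines: projecting LiDAR points into a semantically segmented camera image and attaching a per-point class label. This is a minimal, hypothetical example rather than code from any of the systems listed above; the extrinsic transform `T_cam_lidar`, the intrinsic matrix `K`, and the synthetic segmentation map are all assumed placeholders.

```python
import numpy as np

def label_lidar_points(points_lidar, seg_image, K, T_cam_lidar):
    """Attach a semantic label to each LiDAR point by projecting it
    into a segmented camera image.

    points_lidar : (N, 3) XYZ points in the LiDAR frame
    seg_image    : (H, W) integer array of per-pixel class ids
    K            : (3, 3) camera intrinsic matrix
    T_cam_lidar  : (4, 4) rigid transform from LiDAR frame to camera frame
    Returns an (N,) label array; -1 marks points that do not project into the image.
    """
    h, w = seg_image.shape
    n = points_lidar.shape[0]

    # Transform points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    labels = np.full(n, -1, dtype=int)
    in_front = pts_cam[:, 2] > 0.1  # keep only points in front of the camera
    if not np.any(in_front):
        return labels

    # Pinhole projection: u = fx*X/Z + cx, v = fy*Y/Z + cy.
    proj = (K @ pts_cam[in_front].T).T
    uv = (proj[:, :2] / proj[:, 2:3]).round().astype(int)

    # Keep only projections that land inside the image, then read the label.
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = seg_image[uv[valid, 1], uv[valid, 0]]
    return labels


if __name__ == "__main__":
    # Toy example with synthetic data: identity extrinsics and simple intrinsics.
    rng = np.random.default_rng(0)
    points = rng.uniform([-2, -2, 1], [2, 2, 10], size=(1000, 3))
    seg = rng.integers(0, 5, size=(480, 640))        # fake segmentation map
    K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
    T = np.eye(4)                                    # LiDAR and camera coincide
    labels = label_lidar_points(points, seg, K, T)
    print("labelled points:", np.sum(labels >= 0), "of", len(labels))
```

A real system would follow this with label fusion across scans and semantics-aware optimization or loop closure, but the projection and label lookup shown here is where the LiDAR geometry and the camera semantics actually meet.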