Broad area: 3D reconstruction and visual SLAM
1. MIT Aeronautics and Astronautics Laboratory (MIT AeroAstro)
- Research areas: pose estimation and navigation, path planning, control and decision making, machine learning and reinforcement learning
- Lab homepage: MIT AeroAstro
- Publications: http://acl.mit.edu/publications
- Key researchers:
Kasra Khosoussi (SLAM graph optimization): Google Scholar
- Representative work:
Paper: SLAM with objects using a nonparametric pose graph
GitHub: objectSLAM
Object-level SLAM: Mu B, Liu S Y, Paull L, et al. SLAM with objects using a nonparametric pose graph. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016
Object-level SLAM: Ok K, Liu K, Frey K, et al. Robust Object-based SLAM for High-speed Autonomous Navigation. IEEE International Conference on Robotics and Automation (ICRA), 2019
SLAM graph optimization: Khosoussi K, Giamou M, Sukhatme G, Huang S, Dissanayake G, How J P. Reliable Graphs for SLAM. International Journal of Robotics Research (IJRR), 2019 (a toy pose-graph sketch follows this list)
Active SLAM with reachability-map exploration: Grasa A, Bonatti F, How J P. Exploration with Active SLAM and Reachability Maps. IEEE Transactions on Robotics (T-RO), 2020
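The graph-optimization entries above (e.g. Reliable Graphs for SLAM) treat SLAM as least-squares estimation over a graph of poses connected by relative-pose constraints. As a loose, hypothetical illustration of that formulation, and not the method of any paper cited here, the sketch below solves a toy 1D pose graph with numpy; every edge value is made up.

```python
import numpy as np

# Toy 1D pose graph: 4 poses on a line, odometry edges between consecutive
# poses plus one loop-closure edge. Each edge (i, j, z) encodes the constraint
# "pose_j - pose_i should equal z".
edges = [
    (0, 1, 1.0),   # odometry
    (1, 2, 1.1),   # odometry (slightly biased)
    (2, 3, 1.0),   # odometry
    (0, 3, 3.0),   # loop closure
]
n_poses = 4

# In 1D the problem is linear, so one least-squares solve replaces the usual
# iterative Gauss-Newton loop: minimize sum_k ((x_j - x_i) - z_k)^2.
A = np.zeros((len(edges) + 1, n_poses))
b = np.zeros(len(edges) + 1)
for k, (i, j, z) in enumerate(edges):
    A[k, i], A[k, j], b[k] = -1.0, 1.0, z
A[-1, 0] = 1.0        # gauge constraint: anchor the first pose at 0
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print("optimized poses:", x)
```

In a real SLAM back end the poses live on SE(2) or SE(3) and the same objective is minimized iteratively, but the structure of the problem is the one shown here.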
2. Stanford Vision and Learning Lab (SVL)
- Research areas: visual SLAM, robot perception, 3D reconstruction, computer vision
- Lab homepage: SVL Stanford
- Key researchers:
Davide Scaramuzza
Personal homepage: https://rpg.ifi.uzh.ch/teams/davidescaramuzza/
Cyrill Stachniss
Personal homepage: https://www6.informatik.uni-ulm.de/homepage/stachniss/
- Representative work:
Paper: Real-Time Visual SLAM for Mobile Robots with Inertial Fusion
GitHub: VINS-Mono
ORB-SLAM: Mur-Artal R, Montiel J M M, Tardós J D. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Transactions on Robotics, 2015
Robust visual-inertial odometry (VIO): Zhou Y, Li J. Robust Visual-Inertial Odometry Using Stereo and RGB-D Cameras. IEEE Transactions on Robotics, 2018
Dense 3D reconstruction: Shin H, Lee J. Dense Mapping for Mobile Robots Using RGB-D Sensors. IEEE International Conference on Robotics and Automation (ICRA), 2015
Stereo SLAM system: Oliveira A, Scaramuzza D. Stereo Vision SLAM with Keyframe Relocalization. IEEE Transactions on Robotics, 2016
3. Computer Vision Group, Technical University of Munich (TUM), Germany
- Research areas: 3D reconstruction, robot vision, deep learning, visual SLAM, and more
- Lab homepage: vision.in.tum.de/resear
- Publications: https://vision.in.tum.de/public
- GitHub: https://github.com/tum-vision
- Key researchers:
Jakob Engel (author of LSD-SLAM and DSO): personal homepage, Google Scholar
- Representative work:
DSO: Engel J, Koltun V, Cremers D. Direct sparse odometry[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(3): 611-625. (Code: https://github.com/JakobEngel/dso)
LSD-SLAM: Engel J, Schöps T, Cremers D. LSD-SLAM: Large-scale direct monocular SLAM[C]//European Conference on Computer Vision. Springer, Cham, 2014: 834-849. (Code: https://github.com/tum-vision/lsd_slam) (a minimal direct-alignment sketch follows this list)
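DSO and LSD-SLAM are direct methods: they estimate camera motion by minimizing photometric error on raw pixel intensities instead of matching features. The sketch below is a heavily simplified, hypothetical 1D analogue of that idea: it recovers the shift between two intensity profiles by Gauss-Newton on the photometric residual. It is only meant to show the objective; it is not the DSO or LSD-SLAM pipeline.

```python
import numpy as np

# 1D direct alignment: recover the shift t between a reference intensity
# profile and a shifted copy by minimizing the photometric error.
def intensity(x):
    return np.sin(x) + 0.5 * np.sin(3 * x)

true_t = 0.7
xs = np.linspace(1.0, 11.0, 500)
I_ref = intensity(xs)                    # "reference image"

def I_cur(x):                            # "current image": reference shifted by true_t
    return intensity(x - true_t)

t = 0.0                                  # initial guess for the shift
for _ in range(20):
    r = I_cur(xs + t) - I_ref            # photometric residuals
    eps = 1e-4                           # numerical image gradient at x + t
    J = (I_cur(xs + t + eps) - I_cur(xs + t - eps)) / (2 * eps)
    dt = -np.sum(J * r) / np.sum(J * J)  # Gauss-Newton step for one parameter
    t += dt
    if abs(dt) < 1e-9:
        break

print(f"estimated shift: {t:.4f}, ground truth: {true_t}")
```

The real systems apply the same idea per pixel in 2D, with depth and a full SE(3) pose in place of the single shift parameter.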
4. Computer Vision and Geometry Group, ETH Zurich, Switzerland
- Research areas: localization, 3D reconstruction, semantic segmentation, robot vision
- Lab homepage: http://www.cvg.ethz.ch/index.php
- Publications: http://www.cvg.ethz.ch/publication
- Key researchers:
Johannes L. Schönberger: personal homepage, Google Scholar
- Representative papers and work:
Visual semantic odometry: Lianos K N, Schönberger J L, Pollefeys M, et al. VSO: Visual semantic odometry[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 234-250.
Visual semantic localization: Semantic Visual Localization, CVPR 2018
Author's PhD thesis: Robust Methods for Accurate and Efficient 3D Modeling from Unstructured Imagery, 2018
Large-scale outdoor mapping: Bârsan I A, Liu P, Pollefeys M, et al. Robust dense mapping for large-scale dynamic environments[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 7510-7517.
Code: https://github.com/AndreiBarsan/DynSLAM
Author's thesis: Bârsan I A. Simultaneous localization and mapping in dynamic scenes. ETH Zurich, Department of Computer Science, 2017.
5. Carnegie Mellon University Robotics Institute (CMU RI)
- Research areas: multi-sensor SLAM, autonomous driving, computer vision, visual SLAM
- Lab homepage: CMU Robotics Institute
- Key researchers:
Martial Hebert: personal homepage
Simon Lucey: personal homepage
- Representative papers and work:
Direct sparse odometry: Engel J, Koltun V, Cremers D. Direct Sparse Odometry. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2018.
Object-based SLAM: Chen C, Hebert M. Object-Based SLAM Using Multiple-View Stereo. IEEE Transactions on Robotics, 2017.
Wang, W., and Lucey, S. Neural Scene Flow Fields for Dynamic 3D Scene Reconstruction, IEEE International Conference on Computer Vision (ICCV), 2021.
6. State Key Laboratory of CAD&CG, Zhejiang University
- Research areas: SfM/SLAM, 3D reconstruction, augmented reality
- Lab homepage: http://www.zjucvg.net/
- GitHub: https://github.com/zju3dv
- Key researchers: Guofeng Zhang, Hujun Bao (authors of the works below)
- Representative papers and work:
- ICE-BA: Liu H, Chen M, Zhang G, et al. ICE-BA: Incremental, consistent and efficient bundle adjustment for visual-inertial SLAM[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 1974-1982. (Code: https://github.com/zju3dv/EIBA) (a minimal bundle-adjustment sketch follows this list)
- RK-SLAM: Liu H, Zhang G, Bao H. Robust keyframe-based monocular SLAM for augmented reality[C]//2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 2016: 1-10. (Project page: http://www.zjucvg.net/rkslam/rkslam.html)
- RD-SLAM: Tan W, Liu H, Dong Z, et al. Robust monocular SLAM in dynamic environments[C]//2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 2013: 209-218.
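ICE-BA above is an incremental bundle-adjustment back end. To show what the underlying bundle-adjustment objective looks like, here is a minimal synthetic two-view example that jointly refines one camera pose and a few 3D points by minimizing pixel reprojection error with scipy. It is a generic BA sketch under made-up intrinsics and noise, not ICE-BA's incremental algorithm.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])                   # made-up pinhole intrinsics

points_gt = rng.uniform([-1, -1, 4], [1, 1, 6], (8, 3))          # synthetic 3D points
pose0 = (np.zeros(3), np.zeros(3))                               # (rotation vector, translation)
pose1 = (np.array([0.0, 0.05, 0.0]), np.array([0.5, 0.0, 0.0]))

def project(rotvec, t, X):
    """Pinhole projection of 3D points X (N,3): x_cam = R @ X + t, then apply K."""
    Xc = Rotation.from_rotvec(rotvec).apply(X) + t
    uv = (K @ Xc.T).T
    return uv[:, :2] / uv[:, 2:3]

# Noisy pixel observations of every point in both views.
obs = [project(r, t, points_gt) + rng.normal(0.0, 0.5, (len(points_gt), 2))
       for r, t in (pose0, pose1)]

def residuals(params):
    # Camera 0 is held fixed as the gauge; the global scale is left
    # unconstrained (the usual monocular ambiguity), which the solver tolerates.
    r1, t1 = params[:3], params[3:6]
    X = params[6:].reshape(-1, 3)
    res0 = project(np.zeros(3), np.zeros(3), X) - obs[0]
    res1 = project(r1, t1, X) - obs[1]
    return np.concatenate([res0.ravel(), res1.ravel()])

# Start from perturbed ground truth and jointly refine pose 1 and the points.
x0 = np.concatenate([pose1[0] + 0.02, pose1[1] + 0.05,
                     (points_gt + rng.normal(0.0, 0.1, points_gt.shape)).ravel()])
sol = least_squares(residuals, x0)
print("RMS reprojection error (px):", np.sqrt(np.mean(sol.fun ** 2)))
```

Visual-inertial systems such as ICE-BA add IMU factors to the same least-squares problem and exploit its sparsity for incremental updates.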
Sub-areas: underwater 3D reconstruction, NeRF, underwater visual SLAM
1. Berkeley AI Research Lab (BAIR), UC Berkeley, USA
- Research areas: NeRF, underwater 3D reconstruction, computer vision
- Lab homepage: Berkeley AI Research Lab (BAIR)
- Key researchers:
Pieter Abbeel: EECS at UC Berkeley
- Representative work:
Paper: NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis (the volume-rendering step is sketched after this list)
GitHub: NeRF
Mildenhall, B., and Srinivasan, P. P. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis arXiv, 2020
Zhang, W., and Matusik, W. DeepNeRF: Towards Deep Neural Radiance Fields for High-Quality View Synthesis arXiv, 2021
Martin-Brualla, R., and Matusik, W. Multiview Neural Surface Reconstruction IEEE Transactions on Visualization and Computer Graphics, 2020
Li, Z., and Yu, F. NeRF-W: Neural Radiance Fields for Wide-Field View Synthesis CVPR, 2021
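At the core of NeRF is a volume-rendering quadrature: densities and colors predicted along a camera ray are composited into a single pixel color. The sketch below implements only that compositing step, with made-up densities and colors standing in for the outputs of the NeRF MLP; it is an illustration of the rendering equation from the paper, not a NeRF implementation.

```python
import numpy as np

# Composite per-sample densities sigma_i and colors c_i along one ray.
t = np.linspace(2.0, 6.0, 64)                        # sample depths along the ray
sigma = 50.0 * np.exp(-((t - 4.0) ** 2) / 0.05)      # density bump: a "surface" near t = 4
c = np.tile(np.array([0.8, 0.3, 0.1]), (t.size, 1))  # constant RGB color per sample

delta = np.diff(t, append=t[-1] + 1e10)              # spacing between samples
alpha = 1.0 - np.exp(-sigma * delta)                 # per-segment opacity
T = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))   # accumulated transmittance
weights = T * alpha                                  # contribution of each sample

rgb = weights @ c                                    # composited pixel color
depth = weights @ t                                  # expected ray termination depth
print("rendered RGB:", rgb, "expected depth:", float(depth))
```

Training a NeRF then amounts to fitting the MLP so that pixels rendered this way match the input photographs.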
2. Robotics and Multi-Perception Lab, Hong Kong University of Science and Technology (HKUST)
- Research areas: 3D reconstruction, autonomous driving
- Lab homepage: Robotics and Multi-Perception Lab
- Publications: Robotics and Multi-Perception Lab
- Key researchers:
Prof. Ming Liu; main research: dynamic environment modeling, deep learning for robotics
- Representative work:
Paper: Tightly Coupled 3D Lidar Inertial Odometry and Mapping
Paper PDF: 1904.06993 (arXiv)
Code: LIO-mapping
Paper: Probabilistic End-to-End Vehicle Navigation in Complex Dynamic Environments with Multimodal Sensor Fusion
Paper PDF: cai2020raliros.pdf
Paper: Vision-Based Autonomous Car Racing Using Deep Imitative Reinforcement Learning
Paper PDF: Vision-Based Trajectory Planning via Imitation Learning for Autonomous Vehicles
3. Institute of Computer Graphics and Vision, Graz University of Technology, Austria
- Research areas: AR/VR, robot vision, machine learning, object recognition and 3D reconstruction
- Lab homepage: tugraz.at/institutes/ic
- Key researchers and representative work:
- Prof. Friedrich Fraundorfer: team homepage, Google Scholar
- Visual Odometry, Part I: The First 30 Years and Fundamentals
- Visual Odometry, Part II: Matching, Robustness, Optimization, and Applications (a minimal two-view visual-odometry sketch follows this list)
- Schenk F, Fraundorfer F. RESLAM: A real-time robust edge-based SLAM system[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 154-160. (Code: https://github.com/fabianschenk/RESLAM)
- Prof. Dieter Schmalstieg: team homepage, Google Scholar
- Textbook: Augmented Reality: Principles and Practice
- Arth C, Pirchheim C, Ventura J, et al. Instant outdoor localization and SLAM initialization from 2.5D maps[J]. IEEE Transactions on Visualization and Computer Graphics, 2015, 21(11): 1309-1318.
- Hachiuma R, Pirchheim C, Schmalstieg D, et al. DetectFusion: Detecting and Segmenting Both Known and Unknown Dynamic Objects in Real-time SLAM[J]. arXiv preprint arXiv:1907.09127, 2019.
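The two visual-odometry tutorial parts above cover the classical feature-based two-view pipeline. Below is a minimal sketch of one such step with OpenCV: ORB feature matching, RANSAC essential-matrix estimation, and relative-pose recovery. The image filenames and the intrinsic matrix K are placeholders, and this is a generic textbook illustration rather than code from any of the cited works.

```python
import cv2
import numpy as np

# Placeholder intrinsics and input frames; replace with your own calibration/images.
K = np.array([[718.856, 0.0, 607.193],
              [0.0, 718.856, 185.216],
              [0.0, 0.0, 1.0]])
img0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Detect and match ORB features between the two frames.
orb = cv2.ORB_create(2000)
kp0, des0 = orb.detectAndCompute(img0, None)
kp1, des1 = orb.detectAndCompute(img1, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des0, des1), key=lambda m: m.distance)

pts0 = np.float32([kp0[m.queryIdx].pt for m in matches])
pts1 = np.float32([kp1[m.trainIdx].pt for m in matches])

# Essential matrix with RANSAC, then cheirality check to pick the valid (R, t).
E, inliers = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts0, pts1, K, mask=inliers)
print("relative rotation:\n", R)
print("unit translation direction:", t.ravel())
```

The translation is recovered only up to scale, which is why monocular VO pipelines add triangulation, local bundle adjustment, or an extra sensor to fix it.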
4. California Institute of Technology AI & Robotics Lab (Caltech AIRL)
- Research areas: NeRF, 3D reconstruction, view synthesis, deep learning
- Lab homepage: Caltech AIRL
- Key researchers:
Pieter Abbeel
- Representative papers and work:
Mildenhall, B., Srinivasan, P. P., Tancik, M., et al. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis, ECCV, 2020.
Zhang, R., and Abbeel, P. NeRF++: An Extended Neural Radiance Field for Unconstrained Scene Representation, arXiv, 2021.
Tancik, M., and Abbeel, P. Learned 3D Scene Decomposition with Neural Radiance Fields, IEEE International Conference on Computer Vision (ICCV), 2021.
5. Marine Robotics Group, MIT, USA
- Research areas: navigation and mapping for underwater and ground mobile robots
- Lab homepage: marinerobotics.mit.edu/
- Key researchers:
Prof. John Leonard: Google Scholar
- Representative papers and work: https://marinerobotics.mit.edu/
Object-oriented SLAM: Finman R, Paull L, Leonard J J. Toward object-based place recognition in dense RGB-D maps[C]//ICRA Workshop on Visual Place Recognition in Changing Environments, Seattle, WA, 2015.
Extending KinectFusion: Whelan T, Kaess M, Fallon M, et al. Kintinuous: Spatially extended KinectFusion[J]. 2012.
Semantic SLAM with probabilistic data association: Doherty K, Fourie D, Leonard J. Multimodal semantic SLAM with probabilistic data association[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 2419-2425. (a toy data-association illustration follows)
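Probabilistic data association, as in the Doherty et al. paper above, replaces hard measurement-to-landmark assignments with soft association weights. The toy below computes such weights for a single measurement under Gaussian likelihoods and a uniform prior; all numbers are invented, and this only illustrates the basic idea rather than the paper's multimodal algorithm.

```python
import numpy as np

# Soft association of one landmark measurement to candidate landmarks.
landmarks = np.array([[2.0, 1.0], [2.5, 1.2], [8.0, 3.0]])  # candidate positions
z = np.array([2.3, 1.1])                                    # observed position
sigma = 0.3                                                 # measurement std-dev

sq_dist = np.sum((landmarks - z) ** 2, axis=1)
log_lik = -0.5 * sq_dist / sigma ** 2                       # isotropic Gaussian log-likelihood
weights = np.exp(log_lik - log_lik.max())                   # numerically stable normalization
weights /= weights.sum()                                    # posterior under a uniform prior
print("association weights:", weights)
```

In the SLAM back end these weights then scale the corresponding factors, so an ambiguous measurement contributes to several landmark hypotheses instead of committing to one.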