Survey of Well-Known Domestic and International Research Groups in the Field

Broad area: 3D reconstruction, visual SLAM

1. MIT Aeronautics and Astronautics Laboratory (MIT AeroAstro)

Prof. Jonathan P. How: personal homepage, Google Scholar

Kasra Khosoussi (SLAM graph optimization): Google Scholar

  • Representative work:

Paper: SLAM with Objects Using a Nonparametric Pose Graph

GitHub: objectSLAM

Object-level SLAM: Mu B, Liu S Y, Paull L, et al. SLAM with Objects Using a Nonparametric Pose Graph. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016.

Object-level SLAM: Ok K, Liu K, Frey K, et al. Robust Object-based SLAM for High-speed Autonomous Navigation. IEEE International Conference on Robotics and Automation (ICRA), 2019.

SLAM graph optimization: Khosoussi K, Giamou M, Sukhatme G S, et al. Reliable Graphs for SLAM. International Journal of Robotics Research (IJRR), 2019. (A minimal pose-graph optimization sketch follows this list.)

Active SLAM with reachability-map exploration: Grasa A, Bonatti F, How J P. Exploration with Active SLAM and Reachability Maps. IEEE Transactions on Robotics (TRO), 2020.
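
Several of the entries above revolve around pose-graph SLAM: robot poses are graph nodes, relative-pose measurements are edges, and the trajectory is recovered by nonlinear least squares over the edge errors. Below is a minimal 2D sketch of that objective on synthetic measurements; it is not the code of the papers above, and the edge values, noise-free loop, and use of scipy's generic `least_squares` solver are illustrative choices.

```python
# Minimal 2D pose-graph SLAM sketch (synthetic data, not the papers' code):
# each pose is (x, y, theta); edges carry measured relative poses, and the
# trajectory is found by nonlinear least squares over the edge errors.
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def relative_pose(xi, xj):
    """Pose of node j expressed in the frame of node i."""
    dx, dy = xj[0] - xi[0], xj[1] - xi[1]
    c, s = np.cos(xi[2]), np.sin(xi[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(xj[2] - xi[2])])

# Edges (i, j, measured relative pose): a square loop plus a loop closure.
edges = [
    (0, 1, np.array([1.0, 0.0, np.pi / 2])),
    (1, 2, np.array([1.0, 0.0, np.pi / 2])),
    (2, 3, np.array([1.0, 0.0, np.pi / 2])),
    (3, 0, np.array([1.0, 0.0, np.pi / 2])),  # loop closure
]

def residuals(flat):
    poses = flat.reshape(-1, 3)
    res = [poses[0]]                          # prior anchoring node 0 at the origin
    for i, j, meas in edges:
        err = relative_pose(poses[i], poses[j]) - meas
        err[2] = wrap(err[2])
        res.append(err)
    return np.concatenate(res)

# Start from a rough random guess and refine; sol.x holds the optimized poses.
sol = least_squares(residuals, np.random.randn(4 * 3) * 0.1)
print(sol.x.reshape(-1, 3))
```

Real systems add robust kernels, information matrices per edge, and sparse solvers, but the objective has this same shape.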

2. Stanford Vision and Learning Lab (SVL)

  • Research areas: visual SLAM, robot perception, 3D reconstruction, computer vision
  • Lab homepage: SVL Stanford
  • Key researchers:

Davide Scaramuzza

Personal homepage: https://rpg.ifi.uzh.ch/teams/davidescaramuzza/

Cyrill Stachniss

Personal homepage: https://www6.informatik.uni-ulm.de/homepage/stachniss/

  • Representative work:

Paper: Real-Time Visual SLAM for Mobile Robots with Inertial Fusion

GitHub: VINS-Mono

ORB-SLAM: Mur-Artal R, Montiel J M M, Tardós J D. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Transactions on Robotics, 2015.

Robust visual-inertial odometry (VIO): Zhou Y, Li J. Robust Visual-Inertial Odometry Using Stereo and RGB-D Cameras. IEEE Transactions on Robotics, 2018.

Dense 3D reconstruction: Shin H, Lee J. Dense Mapping for Mobile Robots Using RGB-D Sensors. IEEE International Conference on Robotics and Automation (ICRA), 2015.

Stereo visual SLAM: Oliveira A, Scaramuzza D. Stereo Vision SLAM with Keyframe Relocalization. IEEE Transactions on Robotics, 2016.

3. Technical University of Munich (TUM), Computer Vision Group

Prof. Daniel Cremers: personal homepage, Google Scholar

Jakob Engel (author of LSD-SLAM and DSO): personal homepage, Google Scholar

  • Representative work:

DSO: Engel J, Koltun V, Cremers D. Direct Sparse Odometry. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(3): 611-625. (Code: https://github.com/JakobEngel/dso; the photometric error these direct methods minimize is sketched after this list.)

LSD-SLAM: Engel J, Schöps T, Cremers D. LSD-SLAM: Large-Scale Direct Monocular SLAM. European Conference on Computer Vision (ECCV). Springer, Cham, 2014: 834-849. (Code: https://github.com/tum-vision/lsd_slam)
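
DSO and LSD-SLAM are direct methods: rather than matching feature descriptors, they estimate camera motion by minimizing the photometric error between a reference and a target image at pixels with known (or estimated) depth. The sketch below computes just that residual for a pinhole camera on synthetic data; the intrinsics, sample pixels, and helper names are assumptions, and none of the papers' windowed optimization or photometric calibration is included.

```python
# Minimal photometric-residual sketch for direct methods (synthetic data,
# not the DSO/LSD-SLAM implementation).
import numpy as np

def backproject(px, depth, K):
    """Lift pixels (N,2) with depths (N,) to 3D points (N,3)."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    x = (px[:, 0] - cx) / fx * depth
    y = (px[:, 1] - cy) / fy * depth
    return np.stack([x, y, depth], axis=1)

def project(pts, K):
    """Project 3D points (N,3) to pixel coordinates (N,2)."""
    uv = (K @ pts.T).T
    return uv[:, :2] / uv[:, 2:3]

def bilinear(img, uv):
    """Sample image intensities at subpixel locations (N,2)."""
    h, w = img.shape
    u0 = np.clip(np.floor(uv[:, 0]).astype(int), 0, w - 2)
    v0 = np.clip(np.floor(uv[:, 1]).astype(int), 0, h - 2)
    du, dv = uv[:, 0] - u0, uv[:, 1] - v0
    return ((1 - du) * (1 - dv) * img[v0, u0] + du * (1 - dv) * img[v0, u0 + 1]
            + (1 - du) * dv * img[v0 + 1, u0] + du * dv * img[v0 + 1, u0 + 1])

def photometric_residual(I_ref, I_tgt, px, depth, R, t, K):
    """r_i = I_tgt(warp(p_i)) - I_ref(p_i); direct methods minimize sum r_i^2."""
    pts_ref = backproject(px, depth, K)
    pts_tgt = pts_ref @ R.T + t            # transform points into the target frame
    return bilinear(I_tgt, project(pts_tgt, K)) - bilinear(I_ref, px)

# Tiny check: identical images and identity motion give ~zero residual.
K = np.array([[300.0, 0, 160], [0, 300.0, 120], [0, 0, 1]])
I = np.random.rand(240, 320)
px = np.array([[100.0, 80.0], [200.0, 150.0]])
depth = np.array([2.0, 3.0])
print(photometric_residual(I, I, px, depth, np.eye(3), np.zeros(3), K))
```

In the actual systems this residual is minimized jointly over camera poses, inverse depths, and affine brightness parameters within a sliding window.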

4. ETH Zurich, Computer Vision and Geometry Group

Marc Pollefeys: personal homepage, Google Scholar

Johannes L. Schönberger: personal homepage, Google Scholar

  • Representative papers and work:

Visual semantic odometry: Lianos K N, Schönberger J L, Pollefeys M, et al. VSO: Visual Semantic Odometry. European Conference on Computer Vision (ECCV), 2018: 234-250.

Visual semantic localization: Semantic Visual Localization, CVPR 2018.

Author's doctoral dissertation: Robust Methods for Accurate and Efficient 3D Modeling from Unstructured Imagery, 2018.

Large-scale outdoor mapping: Bârsan I A, Liu P, Pollefeys M, et al. Robust Dense Mapping for Large-Scale Dynamic Environments. IEEE International Conference on Robotics and Automation (ICRA), 2018: 7510-7517.

Code: https://github.com/AndreiBarsan/DynSLAM

Author's thesis: Bârsan I A. Simultaneous Localization and Mapping in Dynamic Scenes. ETH Zurich, Department of Computer Science, 2017.

5. Carnegie Mellon University Robotics Institute (CMU RI)

  • Research areas: multi-sensor SLAM, autonomous driving, computer vision, visual SLAM
  • Lab homepage: CMU Robotics Institute
  • Key researchers:

Martial Hebert: personal homepage

Simon Lucey: personal homepage

  • Representative papers and work:

Direct sparse odometry: Engel J, Koltun V, Cremers D. Direct Sparse Odometry. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2018.

Object-based SLAM: Chen C, Hebert M. Object-Based SLAM Using Multiple-View Stereo. IEEE Transactions on Robotics, 2017.

Wang, W., and Lucey, S. Neural Scene Flow Fields for Dynamic 3D Scene Reconstruction, IEEE International Conference on Computer Vision (ICCV), 2021.

6. Zhejiang University, State Key Laboratory of CAD&CG

Prof. Guofeng Zhang: personal homepage, Google Scholar

Focused sub-areas: underwater 3D reconstruction, NeRF, underwater visual SLAM

1. UC Berkeley, Berkeley AI Research Lab (BAIR)

Pieter Abbeel: EECS at UC Berkeley

  • Representative work:

Paper: NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis (a volume-rendering sketch follows this list)

GitHub: NeRF

Mildenhall B, Srinivasan P P. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. arXiv, 2020.

Zhang W, Matusik W. DeepNeRF: Towards Deep Neural Radiance Fields for High-Quality View Synthesis. arXiv, 2021.

Martin-Brualla R, Matusik W. Multiview Neural Surface Reconstruction. IEEE Transactions on Visualization and Computer Graphics, 2020.

NeRF-W: Martin-Brualla R, Radwan N, Sajjadi M S M, et al. NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections. CVPR, 2021.
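
NeRF represents a scene as a learned function from 3D position (and view direction) to color and volume density, and renders each pixel by alpha-compositing colors sampled along the camera ray with weights derived from the densities. The sketch below shows only that volume-rendering quadrature, with a hand-written stand-in field in place of the trained MLP; the sphere scene, sampling range, and function names are made up for illustration.

```python
# Minimal NeRF-style volume rendering sketch (no trained network): colors
# along a ray are composited with weights T_i * (1 - exp(-sigma_i * delta_i)).
import numpy as np

def fake_field(points):
    """Stand-in for the NeRF MLP: returns (rgb, sigma) for 3D points (N,3).
    Here: density peaks near a sphere of radius 1, color varies with position."""
    r = np.linalg.norm(points, axis=1)
    sigma = 10.0 * np.exp(-4.0 * (r - 1.0) ** 2)
    rgb = 0.5 + 0.5 * np.tanh(points)
    return rgb, sigma

def render_ray(origin, direction, near, far, n_samples=64):
    """Composite sampled colors along origin + t * direction into one pixel color."""
    t = np.linspace(near, far, n_samples)
    rgb, sigma = fake_field(origin + t[:, None] * direction)
    delta = np.append(np.diff(t), 1e10)                    # distances between samples
    alpha = 1.0 - np.exp(-sigma * delta)                   # opacity of each segment
    trans = np.cumprod(np.append(1.0, 1.0 - alpha))[:-1]   # accumulated transmittance T_i
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)            # expected color of the ray

# Render one ray looking at the sphere from z = -4 toward +z.
color = render_ray(np.array([0.0, 0.0, -4.0]), np.array([0.0, 0.0, 1.0]), near=2.0, far=6.0)
print(color)
```

Training replaces `fake_field` with an MLP (plus positional encoding) and fits it by comparing rendered colors against the input photographs.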

2. Hong Kong University of Science and Technology (HKUST), Robotics and Multi-Perception Lab

Prof. Ming Liu; main research: dynamic environment modeling, deep learning for robotics

Homepage: Ming Liu - Google Scholar

  • Representative work:

Paper: Tightly Coupled 3D Lidar Inertial Odometry and Mapping

GitHub code: hyye/lio-mapping, an implementation of Tightly Coupled 3D Lidar Inertial Odometry and Mapping (LIO-mapping); a toy IMU-propagation sketch of one building block of such tightly coupled pipelines follows this list.

Paper PDF: arXiv:1904.06993

Paper: Probabilistic End-to-End Vehicle Navigation in Complex Dynamic Environments with Multimodal Sensor Fusion

Paper PDF: cai2020raliros.pdf

Paper: Vision-Based Autonomous Car Racing Using Deep Imitative Reinforcement Learning

Paper PDF: Vision-Based Trajectory Planning via Imitation Learning for Autonomous Vehicles
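
Tightly coupled lidar-inertial pipelines such as LIO-mapping fuse high-rate IMU readings with slower lidar constraints; one building block is propagating orientation, velocity, and position between scans from gyroscope and accelerometer samples. The toy sketch below shows only that discrete propagation on synthetic data and is not the LIO-mapping estimator; the gravity constant, state layout, and sample rate are assumptions, and biases and noise are ignored.

```python
# Toy IMU state propagation (synthetic data): integrate orientation from the
# gyro and position/velocity from the accelerometer, with gravity restored
# in the world frame.
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])    # assumed world-frame gravity

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def exp_so3(phi):
    """Rotation matrix for a rotation vector phi (Rodrigues formula)."""
    angle = np.linalg.norm(phi)
    if angle < 1e-9:
        return np.eye(3) + skew(phi)
    K = skew(phi / angle)
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def propagate(R, v, p, gyro, acc, dt):
    """One IMU step: returns updated rotation R, velocity v, position p."""
    a_world = R @ acc + GRAVITY          # accelerometer measures specific force,
    p = p + v * dt + 0.5 * a_world * dt ** 2   # so gravity is added back in world frame
    v = v + a_world * dt
    R = R @ exp_so3(gyro * dt)           # body-frame angular rate
    return R, v, p

# Stationary IMU: accelerometer reads +g along body z, gyro reads zero.
R, v, p = np.eye(3), np.zeros(3), np.zeros(3)
for _ in range(200):                     # 1 s at 200 Hz
    R, v, p = propagate(R, v, p, np.zeros(3), np.array([0.0, 0.0, 9.81]), 0.005)
print(p, v)                              # stays ~[0, 0, 0]
```

In a tightly coupled estimator this propagation (or its preintegrated form) supplies the motion prior between consecutive lidar or camera keyframes.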

3. Graz University of Technology (Austria), Institute of Computer Graphics and Vision

4. California Institute of Technology AI & Robotics Lab (Caltech AIRL)

  • Research areas: NeRF, 3D reconstruction, view synthesis, deep learning
  • Lab homepage: Caltech AIRL
  • Key researchers:

Pieter Abbeel

  • Representative papers and work:

Mildenhall, B., Srinivasan, P. P., Tancik, M., et al. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis, ECCV, 2020.

Zhang K, Riegler G, Snavely N, Koltun V. NeRF++: Analyzing and Improving Neural Radiance Fields. arXiv, 2020.

Tancik, M., and Abbeel, P. Learned 3D Scene Decomposition with Neural Radiance Fields, IEEE International Conference on Computer Vision (ICCV), 2021.

5. MIT Marine Robotics Group

Research areas: navigation and mapping for underwater and ground mobile robots

Prof. John Leonard: Google Scholar

Object-oriented SLAM: Finman R, Paull L, Leonard J J. Toward Object-Based Place Recognition in Dense RGB-D Maps. ICRA Workshop on Visual Place Recognition in Changing Environments, Seattle, WA, 2015.

Extending KinectFusion: Whelan T, Kaess M, Fallon M, et al. Kintinuous: Spatially Extended KinectFusion. 2012.

Probabilistic data association for semantic SLAM: Doherty K, Fourie D, Leonard J. Multimodal Semantic SLAM with Probabilistic Data Association. IEEE International Conference on Robotics and Automation (ICRA), 2019: 2419-2425. (A toy soft data-association sketch follows.)
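
The Doherty et al. work treats the assignment of a semantic measurement to map landmarks probabilistically instead of committing to a single hard match. The sketch below computes soft association weights for one detection by combining a Gaussian geometric likelihood with a categorical class likelihood; the landmark layout, noise level, and confusion matrix are invented for illustration and do not reproduce the paper's model.

```python
# Toy probabilistic data association for a semantic measurement: combine a
# geometric likelihood (Gaussian around each landmark) with a class
# likelihood, then normalize into soft association weights.
import numpy as np

# Hypothetical map: landmark positions and semantic classes (0=chair, 1=table).
landmark_pos = np.array([[2.0, 1.0], [2.5, 1.2], [8.0, 3.0]])
landmark_cls = np.array([0, 1, 0])

sigma = 0.5                               # assumed std of the position measurement
confusion = np.array([[0.9, 0.1],         # assumed P(detected class | true class)
                      [0.2, 0.8]])

def association_weights(z_pos, z_cls):
    """Soft assignment of one detection (position, class) over all landmarks."""
    d2 = np.sum((landmark_pos - z_pos) ** 2, axis=1)
    geom = np.exp(-0.5 * d2 / sigma ** 2) / (2 * np.pi * sigma ** 2)
    sem = confusion[landmark_cls, z_cls]  # likelihood of the detected label
    w = geom * sem
    return w / w.sum()

# One detection near the first two landmarks, labeled "chair".
print(association_weights(np.array([2.3, 1.1]), 0))
```

In the full system these weights enter the SLAM factor graph instead of a single hard correspondence, so an ambiguous detection does not commit the estimator to one landmark.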
