Robot Learning: Notable Mobile Robot Labs in China and Abroad

This site collects the official websites of many robotics labs in China and abroad:

Academic Institutions - Robotics Engineering - Wuhan Business University - Subject Service Platform (hnlat.com)

Parts of this post are adapted from:

【泡泡图灵智库】A Roundup of Notable SLAM Labs Worldwide, Part 1/3

【泡泡图灵智库】A Roundup of Notable SLAM Labs Worldwide, Part 2/3

China:

1. Northeastern University

School of Information Science and Engineering: Prof. Zheng Fang

Research interests: environment perception and autonomous navigation for mobile robots

Homepage: Zheng Fang

College of Robot Science and Engineering

Prof. Guofeng Tong's team, Robotics and Artificial Intelligence Lab: Northeastern University "985 Project" Artificial Intelligence and Robotics Laboratory

2. Zhejiang University

Prof. Guofeng Zhang, focusing on visual SLAM

3. Hong Kong University of Science and Technology

Prof. Shaojie Shen's team

Authors of VINS-Mono

4. Hunan University

5. National University of Defense Technology

NuBot Research Team: https://nubot.trustie.net/organizations/23

Abroad:

1. Robotics Institute, Carnegie Mellon University, USA

Research interests: robot perception and structure; service, transportation, manufacturing, and field robotics

Institute homepage: https://www.ri.cmu.edu/

Field Robotics Center homepage: https://frc.ri.cmu.edu/

Publications: https://www.ri.cmu.edu/pubs/

👦 Michael Kaess's personal homepage: https://www.cs.cmu.edu/~kaess/

👦 Sebastian Scherer's personal homepage: https://www.ri.cmu.edu/ri-faculty/sebastian-scherer/

📜 Kaess M, Ranganathan A, Dellaert F. iSAM: Incremental smoothing and mapping[J]. IEEE Transactions on Robotics, 2008, 24(6): 1365-1378.

📜 Hsiao M, Westman E, Zhang G, et al. Keyframe-based dense planar SLAM[C]//2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017: 5110-5117.

📜 Kaess M. Simultaneous localization and mapping with infinite planes[C]//2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015: 4605-4611.

2. Contextual Robotics Institute, University of California San Diego, USA

Research interests: multi-modal environment understanding, semantic navigation, autonomous information acquisition

Lab homepage: https://existentialrobotics.org/index.html

Publications: https://existentialrobotics.org/pages/publications.html

👦 Nikolay Atanasov's personal homepage: https://natanaso.github.io/

    Slides for his robot state estimation and perception course:

    https://natanaso.github.io/ece276a2019/schedule.html

📜 Classic semantic SLAM paper: Bowman S L, Atanasov N, Daniilidis K, et al. Probabilistic data association for semantic slam[C]//2017 IEEE international conference on robotics and automation (ICRA). IEEE, 2017: 1722-1729.

📜 Localization and mapping with instance mesh models: Feng Q, Meng Y, Shan M, et al. Localization and Mapping using Instance-specific Mesh Models[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 4985-4991.

📜 Event-camera VIO: Zihao Zhu A, Atanasov N, Daniilidis K. Event-based visual inertial odometry[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 5391-5399.

3. Robot Perception and Navigation Group, University of Delaware, USA

Research interests: SLAM, VINS, semantic localization and mapping, etc.

Lab homepage: https://sites.udel.edu/robot/

Publications: https://sites.udel.edu/robot/publications/

GitHub: https://github.com/rpng?page=2

👦 Prof. Guoquan Huang's homepage: http://udel.edu/~ghuang/index.html

📜 Geneva P, Eckenhoff K, Lee W, et al. Openvins: A research platform for visual-inertial estimation[C]//IROS 2019 Workshop on Visual-Inertial Navigation: Challenges and Applications, Macau, China. IROS 2019.

    Code: https://github.com/rpng/open_vins

📜 Huai Z, Huang G. Robocentric visual-inertial odometry[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 6319-6326.

    Code: https://github.com/rpng/R-VIO

4. Aerospace Controls Laboratory, MIT, USA

Research interests: pose estimation and navigation, path planning, control and decision-making, machine learning and reinforcement learning

Lab homepage: http://acl.mit.edu/

Publications: http://acl.mit.edu/publications (the lab's theses can also be found here)

👦 Prof. Jonathan P. How's personal homepage: http://www.mit.edu/people/jhow/

👦 Kasra Khosoussi (SLAM graph optimization) Google Scholar: https://scholar.google.com/citations?user=SRCCuo0AAAAJ&hl=zh-CN&oi=sra

📜 Object-level SLAM: Mu B, Liu S Y, Paull L, et al. Slam with objects using a nonparametric pose graph[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 4602-4609. (Code: https://github.com/BeipengMu/objectSLAM)

📜 Object-based SLAM for navigation: Ok K, Liu K, Frey K, et al. Robust Object-based SLAM for High-speed Autonomous Navigation[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 669-675.

📜 SLAM graph optimization: Khosoussi K, Giamou M, Sukhatme G, et al. Reliable Graphs for SLAM[J]. International Journal of Robotics Research (IJRR), 2019.

5. SPARK Lab, MIT, USA

Research interests: environment perception for mobile robots

Lab homepage: http://web.mit.edu/sparklab/

👦 Prof. Luca Carlone's personal homepage: https://lucacarlone.mit.edu/

📜 Classic SLAM survey: Cadena C, Carlone L, Carrillo H, et al. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age[J]. IEEE Transactions on Robotics, 2016, 32(6): 1309-1332.

📜 On-manifold preintegration for VIO: Forster C, Carlone L, Dellaert F, et al. On-Manifold Preintegration for Real-Time Visual-Inertial Odometry[J]. IEEE Transactions on Robotics, 2016, 33(1): 1-21.

📜 Open-source semantic SLAM: Rosinol A, Abate M, Chang Y, et al. Kimera: an Open-Source Library for Real-Time Metric-Semantic Localization and Mapping[J]. arXiv preprint arXiv:1910.02490, 2019.

Code: https://github.com/MIT-SPARK/Kimera

6. Marine Robotics Group, MIT, USA

Research interests: navigation and mapping for underwater and ground mobile robots

Lab homepage: https://marinerobotics.mit.edu/ (part of MIT CSAIL)

👦 Prof. John Leonard's Google Scholar: https://scholar.google.com/citations?user=WPe7vWwAAAAJ&hl=zh-CN&authuser=1&oi=ao

Publications: https://marinerobotics.mit.edu/biblio

📜 Object-based place recognition: Finman R, Paull L, Leonard J J. Toward object-based place recognition in dense rgb-d maps[C]//ICRA Workshop Visual Place Recognition in Changing Environments, Seattle, WA. 2015.

📜 Extending KinectFusion: Whelan T, Kaess M, Fallon M, et al. Kintinuous: Spatially extended kinectfusion[J]. 2012.

📜 Probabilistic data association for semantic SLAM: Doherty K, Fourie D, Leonard J. Multimodal semantic slam with probabilistic data association[C]//2019 international conference on robotics and automation (ICRA). IEEE, 2019: 2419-2425.

7. Multiple Autonomous Robotic Systems (MARS) Laboratory, University of Minnesota, USA

Research interests: visual, LiDAR, and inertial navigation systems; large-scale 3D modeling and localization on mobile devices

Lab homepage: http://mars.cs.umn.edu/index.php

Publications: http://mars.cs.umn.edu/publications.php

👦 Stergios I. Roumeliotis's personal homepage: https://www-users.cs.umn.edu/~stergios/

📜 VIO on mobile devices: Wu K, Ahmed A, Georgiou G A, et al. A Square Root Inverse Filter for Efficient Vision-aided Inertial Navigation on Mobile Devices[C]//Robotics: Science and Systems. 2015, 2. (Project page: http://mars.cs.umn.edu/research/sriswf.php )

📜 Large-scale semi-dense 3D mapping on mobile devices: Guo C X, Sartipi K, DuToit R C, et al. Resource-aware large-scale cooperative three-dimensional mapping using multiple mobile devices[J]. IEEE Transactions on Robotics, 2018, 34(5): 1349-1369.

    Project page: http://mars.cs.umn.edu/research/semi_dense_mapping.php

📜 VIO-related research: http://mars.cs.umn.edu/research/vins_overview.php

8. Vijay Kumar Lab, University of Pennsylvania, USA

Research interests: autonomous micro aerial vehicles

Lab homepage: https://www.kumarrobotics.org/

Publications: https://www.kumarrobotics.org/publications/

Videos of research results: https://www.youtube.com/user/KumarLabPenn/videos

📜 Semi-dense VIO for quadrotors: Liu W, Loianno G, Mohta K, et al. Semi-Dense Visual-Inertial Odometry and Mapping for Quadrotors with SWAP Constraints[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 1-6.

📜 Semantic data association: Liu X, Chen S W, Liu C, et al. Monocular Camera Based Fruit Counting and Mapping with Semantic Data Association[J]. IEEE Robotics and Automation Letters, 2019, 4(3): 2296-2303.

9. Srikumar Ramalingam (School of Computing, University of Utah, USA)

Research interests: 3D reconstruction, semantic segmentation, visual SLAM, image-based localization, deep neural networks

👦 Srikumar Ramalingam's personal homepage: https://www.cs.utah.edu/~srikumar/

📜 Point-plane SLAM: Taguchi Y, Jian Y D, Ramalingam S, et al. Point-plane SLAM for hand-held 3D sensors[C]//2013 IEEE international conference on robotics and automation. IEEE, 2013: 5182-5189.

📜 Point-line localization: Ramalingam S, Bouaziz S, Sturm P. Pose estimation using both points and lines for geo-localization[C]//2011 IEEE International Conference on Robotics and Automation. IEEE, 2011: 4716-4723.

📜 Hybrid 2D-3D localization: Ataer-Cansizoglu E, Taguchi Y, Ramalingam S. Pinpoint SLAM: A hybrid of 2D and 3D simultaneous localization and mapping for RGB-D sensors[C]//2016 IEEE international conference on robotics and automation (ICRA). IEEE, 2016: 1300-1307.

10. Frank Dellaert (Institute for Robotics and Intelligent Machines, Georgia Tech, USA)

Research interests: SLAM, spatio-temporal reconstruction from images

👦 Personal homepage: https://www.cc.gatech.edu/~dellaert/FrankDellaert/Frank_Dellaert/Frank_Dellaert.html

📜 Factor graphs: Dellaert F. Factor graphs and GTSAM: A hands-on introduction[R]. Georgia Institute of Technology, 2012. (GTSAM code: http://borg.cc.gatech.edu/ )
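To make the factor-graph formulation behind libraries like GTSAM concrete, here is a minimal sketch in plain Python (not GTSAM's actual API; all measurements are hypothetical toy values): a 1D pose chain with a prior, two odometry factors, and a slightly conflicting loop closure, solved by linear least squares via the normal equations.

```python
# Minimal 1D pose-graph optimization sketch (hypothetical toy data).
# Each factor is a linear constraint  a . x = b  with unit information;
# stacking them gives A x = b, solved via the normal equations A^T A x = A^T b.

def solve(ata, atb):
    """Solve a small dense linear system by Gauss-Jordan elimination."""
    n = len(atb)
    m = [row[:] + [atb[i]] for i, row in enumerate(ata)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col and m[r][col]:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

# Factors: (coefficient row over the poses [x0, x1, x2], measurement)
factors = [
    ([1, 0, 0], 0.0),    # prior: x0 = 0
    ([-1, 1, 0], 1.0),   # odometry: x1 - x0 = 1.0
    ([0, -1, 1], 1.0),   # odometry: x2 - x1 = 1.0
    ([-1, 0, 1], 2.2),   # loop closure: x2 - x0 = 2.2 (conflicts slightly)
]

n = 3
ata = [[sum(a[i] * a[j] for a, _ in factors) for j in range(n)] for i in range(n)]
atb = [sum(a[i] * b for a, b in factors) for i in range(n)]
x = solve(ata, atb)
print(x)  # least-squares compromise between odometry and the loop closure
```

Real systems differ mainly in scale and nonlinearity: poses live on SE(3), so factors are relinearized at each iterate, and GTSAM additionally exploits the sparsity of the factor graph when factorizing the normal equations.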

📜 Distributed multi-robot SLAM: Cunningham A, Wurm K M, Burgard W, et al. Fully distributed scalable smoothing and mapping with robust multi-robot data association[C]//2012 IEEE International Conference on Robotics and Automation. IEEE, 2012: 1093-1100.

📜 Choudhary S, Trevor A J B, Christensen H I, et al. SLAM with object discovery, modeling and mapping[C]//2014 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2014: 1018-1025.

11. Yipu Zhao (Intelligent Vision and Automation Laboratory, Georgia Tech, USA)

Research interests: visual SLAM, 3D reconstruction, multi-object tracking

👦 Personal homepage: https://sites.google.com/site/zhaoyipu/home?authuser=0

📜 Zhao Y, Smith J S, Karumanchi S H, et al. Closed-Loop Benchmarking of Stereo Visual-Inertial SLAM Systems: Understanding the Impact of Drift and Latency on Tracking Accuracy[J]. arXiv preprint arXiv:2003.01317, 2020.

📜 Zhao Y, Vela P A. Good feature selection for least squares pose optimization in VO/VSLAM[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 1183-1189. (Code: https://github.com/ivalab/FullResults_GoodFeature )

📜 Zhao Y, Vela P A. Good line cutting: Towards accurate pose tracking of line-assisted VO/VSLAM[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 516-531. (Code: https://github.com/ivalab/GF_PL_SLAM )

12. IntRoLab (Intelligent, Interactive, Integrated, Interdisciplinary Robot Lab), Université de Sherbrooke, Canada

Research interests: hardware and software design for mobile robots

Lab homepage: https://introlab.3it.usherbrooke.ca/

📜 LiDAR and visual dense reconstruction: Labbé M, Michaud F. RTAB-Map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation[J]. Journal of Field Robotics, 2019, 36(2): 416-446.

Code: https://github.com/introlab/rtabmap

Project page: http://introlab.github.io/rtabmap/

13. Robotics and Embodied AI Lab, Université de Montréal, Canada

Research interests: SLAM, uncertainty modeling

Lab homepage: http://montrealrobotics.ca/

👦 Prof. Liam Paull's personal homepage: https://liampaull.ca/index.html

📜 Mu B, Liu S Y, Paull L, et al. Slam with objects using a nonparametric pose graph[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 4602-4609. (Code: https://github.com/BeipengMu/objectSLAM)

📜 Murthy Jatavallabhula K, Iyer G, Paull L. gradSLAM: Dense SLAM meets Automatic Differentiation[J]. arXiv preprint arXiv:1910.10672, 2019. (Code: https://github.com/montrealrobotics/gradSLAM )

14. Robotics and Perception Group, University of Zurich, Switzerland

Research interests: environment perception and navigation for mobile robots and drones, VI-SLAM, event cameras

Lab homepage: http://rpg.ifi.uzh.ch/index.html

Publications: http://rpg.ifi.uzh.ch/publications.html

GitHub: https://github.com/uzh-rpg

📜 Forster C, Pizzoli M, Scaramuzza D. SVO: Fast semi-direct monocular visual odometry[C]//2014 IEEE international conference on robotics and automation (ICRA). IEEE, 2014: 15-22.

📜 VO/VIO trajectory evaluation toolbox rpg_trajectory_evaluation: https://github.com/uzh-rpg/rpg_trajectory_evaluation
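As background on what such evaluation tools compute, here is a sketch of the absolute trajectory error (ATE) metric in plain Python on hypothetical toy data (not the toolbox's actual interface): translation-only alignment by removing the mean offset, then the RMSE over per-pose position errors.

```python
import math

# Sketch of absolute trajectory error (ATE) with translation-only alignment.
# Real tools such as rpg_trajectory_evaluation also estimate rotation (and
# optionally scale) via Umeyama alignment before computing the error.

def ate_rmse(gt, est):
    """RMSE of position error after removing the mean offset per axis."""
    n, dim = len(gt), len(gt[0])
    offset = [sum(e[k] - g[k] for g, e in zip(gt, est)) / n for k in range(dim)]
    sq = 0.0
    for g, e in zip(gt, est):
        sq += sum((e[k] - offset[k] - g[k]) ** 2 for k in range(dim))
    return math.sqrt(sq / n)

# Hypothetical 2D trajectories: a constant offset plus one drifting pose.
gt  = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
est = [(0.5, 0.1), (1.5, 0.1), (2.5, 0.1), (3.6, 0.1)]

print(round(ate_rmse(gt, est), 4))  # → 0.0433
```

Note that the constant offset contributes nothing after alignment; only the one drifting pose (and the compensation it forces on the others) shows up in the error.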

📜 Event camera project page: http://rpg.ifi.uzh.ch/research_dvs.html

👦 Davide Scaramuzza's homepage: http://rpg.ifi.uzh.ch/people_scaramuzza.html

👦 Zichao Zhang's homepage: https://www.ifi.uzh.ch/en/rpg/people/zichao.html

15. Computer Vision and Geometry Group, ETH Zurich, Switzerland

Research interests: localization, 3D reconstruction, semantic segmentation, robot vision

Lab homepage: http://www.cvg.ethz.ch/index.php

Publications: http://www.cvg.ethz.ch/publications/

📜 Visual semantic odometry: Lianos K N, Schonberger J L, Pollefeys M, et al. Vso: Visual semantic odometry[C]//Proceedings of the European conference on computer vision (ECCV). 2018: 234-250.

📜 Visual semantic localization: CVPR 2018, Semantic Visual Localization

    Author's PhD thesis: Robust Methods for Accurate and Efficient 3D Modeling from Unstructured Imagery, 2018

📜 Large-scale outdoor mapping: Bârsan I A, Liu P, Pollefeys M, et al. Robust dense mapping for large-scale dynamic environments[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 7510-7517. (Code: https://github.com/AndreiBarsan/DynSLAM )

    Author's thesis: Barsan I A. Simultaneous localization and mapping in dynamic scenes[D]. ETH Zurich, Department of Computer Science, 2017.

👦 Marc Pollefeys's personal homepage: http://people.inf.ethz.ch/pomarc/index.html

👦 Johannes L. Schönberger's personal homepage: https://demuc.de/

👦 Cesar Dario Cadena Lerma's personal homepage: Cesar Cadena's Homepage

16. Dyson Robotics Lab, Imperial College London, UK

Research interests: robot vision for scene and object understanding, robot manipulation

Lab homepage: https://www.imperial.ac.uk/dyson-robotics-lab/

Publications: https://www.imperial.ac.uk/dyson-robotics-lab/publications/

Representative work: MonoSLAM, CodeSLAM, ElasticFusion, KinectFusion

📜 ElasticFusion: Whelan T, Leutenegger S, Salas-Moreno R, et al. ElasticFusion: Dense SLAM without a pose graph[C]. Robotics: Science and Systems, 2015. (Code: https://github.com/mp3guy/ElasticFusion )

📜 SemanticFusion: McCormac J, Handa A, Davison A, et al. Semanticfusion: Dense 3d semantic mapping with convolutional neural networks[C]//2017 IEEE International Conference on Robotics and automation (ICRA). IEEE, 2017: 4628-4635. (Code: https://github.com/seaun163/semanticfusion )

📜 CodeSLAM: Bloesch M, Czarnowski J, Clark R, et al. CodeSLAM: learning a compact, optimisable representation for dense visual SLAM[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 2560-2568.

👦 Andrew Davison's Google Scholar: https://scholar.google.com/citations?user=A0ae1agAAAAJ&hl=zh-CN&oi=ao

👦 Andrew Davison: Research

17. Department of Engineering Science, University of Oxford, UK

Research interests: SLAM, object tracking, structure from motion, scene augmentation, mobile robot motion planning, navigation and mapping, etc.

Homepage: http://www.robots.ox.ac.uk/

Active Vision Laboratory: http://www.robots.ox.ac.uk/ActiveVision/

Oxford Robotics Institute: https://ori.ox.ac.uk/

Publications:

    Active Vision Laboratory: http://www.robots.ox.ac.uk/ActiveVision/Publications/index.html

    Robotics Institute: https://ori.ox.ac.uk/publications/papers/

📜 PTAM: Klein G, Murray D. Parallel tracking and mapping for small AR workspaces[C]//2007 6th IEEE and ACM international symposium on mixed and augmented reality. IEEE, 2007: 225-234.

📜 RobotCar dataset: https://robotcar-dataset.robots.ox.ac.uk/

👦 David Murray's Google Scholar: https://scholar.google.com.hk/citations?hl=zh-CN&user=O5QreiwAAAAJ

👦 Maurice Fallon's homepage: https://ori.ox.ac.uk/ori-people/maurice-fallon/

Some PhD theses can be found here: https://ora.ox.ac.uk/

Visual Geometry Group (VGG), University of Oxford, the group behind the classic VGG network architecture: Visual Geometry Group - University of Oxford

18. Computer Vision Group, Technical University of Munich, Germany

Research interests: 3D reconstruction, robot vision, deep learning, visual SLAM, etc.

Lab homepage: https://vision.in.tum.de/research/vslam

Publications: https://vision.in.tum.de/publications

Representative work: DSO, LDSO, LSD-SLAM, DVO-SLAM

📜 DSO: Engel J, Koltun V, Cremers D. Direct sparse odometry[J]. IEEE transactions on pattern analysis and machine intelligence, 2017, 40(3): 611-625. (Code: https://github.com/JakobEngel/dso )

📜 LSD-SLAM: Engel J, Schöps T, Cremers D. LSD-SLAM: Large-scale direct monocular SLAM[C]//European conference on computer vision. Springer, Cham, 2014: 834-849. (Code: https://github.com/tum-vision/lsd_slam )

GitHub: https://github.com/tum-vision

👦 Prof. Daniel Cremers's personal homepage: https://vision.in.tum.de/members/cremers

👦 Jakob Engel (author of LSD-SLAM and DSO), personal homepage: https://jakobengel.github.io/

19. Embodied Vision Group, Max Planck Institute for Intelligent Systems, Germany

Research interests: autonomous environment understanding, navigation, and object manipulation for intelligent agents

Lab homepage: https://ev.is.tuebingen.mpg.de/

👦 Head Jörg Stückler (formerly at TUM), personal homepage: https://ev.is.tuebingen.mpg.de/person/jstueckler

Publications: https://ev.is.tuebingen.mpg.de/publications

📜 Kasyanov A, Engelmann F, Stückler J, et al. Keyframe-based visual-inertial online SLAM with relocalization[C]//2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017: 6662-6669.

📜 Strecke M, Stuckler J. EM-Fusion: Dynamic Object-Level SLAM with Probabilistic Data Association[C]//Proceedings of the IEEE International Conference on Computer Vision. 2019: 5865-5874.

📜 Usenko V, Demmel N, Schubert D, et al. Visual-Inertial Mapping with Non-Linear Factor Recovery[J]. IEEE Robotics and Automation Letters (RA-L), 2020, 5.

20. Autonomous Intelligent Systems Lab, University of Freiburg, Germany

Research interests: multi-robot navigation and cooperation, environment modeling and state estimation

Lab homepage: http://ais.informatik.uni-freiburg.de/index_en.php

Publications: http://ais.informatik.uni-freiburg.de/publications/index_en.php (theses can also be found here)

👦 Wolfram Burgard's Google Scholar: https://scholar.google.com/citations?user=zj6FavAAAAAJ&hl=zh-CN&oi=ao

Open datasets: http://aisdatasets.informatik.uni-freiburg.de/

📜 RGB-D SLAM: Endres F, Hess J, Sturm J, et al. 3-D mapping with an RGB-D camera[J]. IEEE transactions on robotics, 2013, 30(1): 177-187. (Code: https://github.com/felixendres/rgbdslam_v2 )

📜 SLAM across seasons: Naseer T, Ruhnke M, Stachniss C, et al. Robust visual SLAM across seasons[C]//2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2015: 2529-2535.

📜 PhD thesis: Robust Graph-Based Localization and Mapping, 2015

📜 PhD thesis: Discovering and Leveraging Deep Multimodal Structure for Reliable Robot Perception and Localization, 2019

📜 PhD thesis: Robot Localization and Mapping in Dynamic Environments, 2019

21. Cyrill Stachniss (Photogrammetry and Robotics Lab, University of Bonn, Germany)

Research interests: probabilistic robotics, SLAM, autonomous navigation, visual and LiDAR perception, scene analysis and classification, unmanned aerial vehicles

Lab homepage: https://www.ipb.uni-bonn.de/

👦 Personal homepage: https://www.ipb.uni-bonn.de/people/cyrill-stachniss/

Publications: https://www.ipb.uni-bonn.de/publications/

Open-source code: https://github.com/PRBonn

📜 IROS 2019 LiDAR-based semantic SLAM: Chen X, Milioto A, Palazzolo E, et al. SuMa++: Efficient LiDAR-based semantic SLAM[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 4530-4537. (Code: https://github.com/PRBonn/semantic_suma/ )

Prof. Stachniss's public SLAM course: https://space.bilibili.com/16886998/channel/detail?cid=118821

Another autonomous intelligent systems lab at the University of Bonn: http://www.ais.uni-bonn.de/research.html

22. SLAM Lab, Robotics, Perception and Real-Time Group, University of Zaragoza, Spain

Research interests: visual SLAM, object SLAM, non-rigid SLAM, robotics, augmented reality

Lab homepage: http://robots.unizar.es/slamlab/

Publications: http://robots.unizar.es/slamlab/?extra=3 (this list seems out of date; see the group members' Google Scholar profiles below for the latest papers)

👦 J. M. M. Montiel's Google Scholar: https://scholar.google.com/citations?user=D99JRxwAAAAJ&hl=zh-CN&oi=sra

📜 Mur-Artal R, Tardós J D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras[J]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262.

📜 Gálvez-López D, Salas M, Tardós J D, et al. Real-time monocular object slam[J]. Robotics and Autonomous Systems, 2016, 75: 435-449.

📜 Strasdat H, Montiel J M M, Davison A J. Real-time monocular SLAM: Why filter?[C]//2010 IEEE International Conference on Robotics and Automation. IEEE, 2010: 2657-2664.

📜 Zubizarreta J, Aguinaga I, Montiel J M M. Direct sparse mapping[J]. arXiv preprint arXiv:1904.06577, 2019.

📜 Elvira R, Tardós J D, Montiel J M M. ORBSLAM-Atlas: a robust and accurate multi-map system[J]. arXiv preprint arXiv:1908.11585, 2019.

23. Machine Perception and Intelligent Robotics Group (MAPIR), University of Malaga, Spain

Research interests: autonomous robots, artificial olfaction, computer vision

Lab homepage: http://mapir.uma.es/mapirwebsite/index.php/topics-2.html

Publications: http://mapir.isa.uma.es/mapirwebsite/index.php/publications-menu-home.html

📜 Gomez-Ojeda R, Moreno F A, Zuñiga-Noël D, et al. PL-SLAM: a stereo SLAM system through the combination of points and line segments[J]. IEEE Transactions on Robotics, 2019, 35(3): 734-746. (Code: https://github.com/rubengooj/pl-slam )

👦 Francisco-Angel Moreno's personal homepage: http://mapir.isa.uma.es/mapirwebsite/index.php/people/199-francisco-moreno-due%C3%B1as

👦 Ruben Gomez-Ojeda (point-line SLAM) Google Scholar: https://scholar.google.com/citations?user=7jne0V4AAAAJ&hl=zh-CN&oi=sra

📜 Gomez-Ojeda R, Briales J, Gonzalez-Jimenez J. PL-SVO: Semi-direct Monocular Visual Odometry by combining points and line segments[C]//Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on. IEEE, 2016: 4211-4216. (Code: https://github.com/rubengooj/pl-svo )

📜 Gomez-Ojeda R, Gonzalez-Jimenez J. Robust stereo visual odometry through a probabilistic combination of points and line segments[C]//2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016: 2521-2526. (Code: https://github.com/rubengooj/stvo-pl )

📜 Gomez-Ojeda R, Zuñiga-Noël D, Moreno F A, et al. PL-SLAM: a Stereo SLAM System through the Combination of Points and Line Segments[J]. arXiv preprint arXiv:1705.09479, 2017. (Code: https://github.com/rubengooj/pl-slam )

24. Alejo Concha (Oculus VR; formerly University of Zaragoza, Spain)

Research interests: SLAM, monocular dense reconstruction, sensor fusion

👦 Personal homepage: https://sites.google.com/view/alejoconcha/

GitHub: https://github.com/alejocb

📜 IROS 2015, monocular planar reconstruction: DPPTAM: Dense piecewise planar tracking and mapping from a monocular sequence (Code: https://github.com/alejocb/dpptam )

📜 IROS 2017, open-source RGB-D SLAM: RGBDTAM: A Cost-Effective and Accurate RGB-D Tracking and Mapping System (Code: https://github.com/alejocb/rgbdtam )

📜 ICRA 2016: Visual-inertial direct SLAM

📜 ICRA 2014: Using Superpixels in Monocular SLAM

📜 RSS 2014: Manhattan and Piecewise-Planar Constraints for Dense Monocular Mapping

25. Institute of Computer Graphics and Vision, Graz University of Technology, Austria

Research interests: AR/VR, robot vision, machine learning, object recognition and 3D reconstruction

Institute homepage: https://www.tugraz.at/institutes/icg/home/

👦 Prof. Friedrich Fraundorfer's team page: https://www.tugraz.at/institutes/icg/research/team-fraundorfer/

📜 Visual Odometry: Part I: The First 30 Years and Fundamentals

📜 Visual Odometry: Part II: Matching, Robustness, Optimization, and Applications

📜 Schenk F, Fraundorfer F. RESLAM: A real-time robust edge-based SLAM system[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 154-160. (Code: https://github.com/fabianschenk/RESLAM )

👦 Prof. Dieter Schmalstieg's team page: https://www.tugraz.at/institutes/icg/research/team-fraundorfer/

📜 Augmented reality textbook: Augmented Reality: Principles and Practice

📜 Arth C, Pirchheim C, Ventura J, et al. Instant outdoor localization and slam initialization from 2.5 d maps[J]. IEEE transactions on visualization and computer graphics, 2015, 21(11): 1309-1318.

📜 Hachiuma R, Pirchheim C, Schmalstieg D, et al. DetectFusion: Detecting and Segmenting Both Known and Unknown Dynamic Objects in Real-time SLAM[J]. arXiv preprint arXiv:1907.09127, 2019.

26. Mobile Robots Laboratory, Poznan University of Technology, Poland

Research interests: SLAM, robot motion planning, control

Lab homepage: http://lrm.put.poznan.pl/

GitHub: https://github.com/LRMPUT

📜 Wietrzykowski J. On the representation of planes for efficient graph-based slam with high-level features[J]. Journal of Automation Mobile Robotics and Intelligent Systems, 2016, 10. (Code: https://github.com/LRMPUT/PlaneSLAM )

📜 Wietrzykowski J, Skrzypczyński P. PlaneLoc: Probabilistic global localization in 3-D using local planar features[J]. Robotics and Autonomous Systems, 2019. (Code: https://github.com/LRMPUT/PlaneLoc )

📜 PUTSLAM: http://lrm.put.poznan.pl/putslam/

27. Alexander Vakhitov (Samsung AI Center, Moscow)

Research interests: SLAM, geometric vision

👦 Personal homepage: https://alexandervakhitov.github.io/

📜 Point-line SLAM: ICRA 2017, PL-SLAM: Real-time monocular visual SLAM with points and lines

📜 Point-line localization: Pumarola A, Vakhitov A, Agudo A, et al. Relative localization for aerial manipulation with PL-SLAM[M]//Aerial Robotic Manipulation. Springer, Cham, 2019: 239-248.

📜 Learned line segments: IEEE Access 2019, Learnable line segment descriptor for visual SLAM (Code: https://github.com/alexandervakhitov/lld-slam )

28. University of Zaragoza, Spain

José Neira Parra, Grupo de Robótica y Tiempo Real (Robotics and Real-Time Group)

👦 Personal homepage: José Neira Homepage
