SLAM collection (updating)

A fairly comprehensive resource: the SLAM technology community (slamcn)

http://slamcn.org/index.php/%E9%A6%96%E9%A1%B5

 

A curated list of awesome SLAM tutorials, projects and communities.  https://github.com/kanster/awesome-slam

===============================================================

SLAM resource roundup

  SLAM video courses and short PPT tutorials; books; SLAM papers (surveys, common methods, visual SLAM, laser SLAM, loop closure); OpenSLAM

  SLAM datasets    SLAM researcher groups    SLAM researchers

 

===================================Visual SLAM============================================================

 

The Future of Real-Time SLAM and "Deep Learning vs SLAM"                                                  --Tombone's Computer Vision Blog, Wednesday, January 13, 2016 (if the original post is unreachable, read a repost). This post is a good way to trace the progress of visual SLAM; pay attention to the researchers it mentions, their institutions, and their specific research projects.

 

====The Future of Real-Time SLAM: 18th December 2015 (ICCV Workshop)

 Andrew Davison and Stefan Leutenegger: 15 years of vision-based SLAM and where we are now
 Christian Kerl: Dense continuous-time tracking and mapping
 Jakob Engel: Semi-dense SLAM
 Torsten Sattler: The challenges of large-scale localisation and mapping
 Raúl Mur Artal: Should we still do sparse feature based SLAM?
 Simon Lynen: Google Project Tango: SLAM and dense mapping for end-users
 Stefan Leutenegger representing Tom Whelan: Map-centric SLAM with ElasticFusion

 

 

==================================Courses==================================================================

== Robotics / mobile-robotics video courses (international) ==

Autonome Intelligente Systeme

CS 287: Advanced Robotics, Fall 2012, University of California at Berkeley, Dept. of Electrical Engineering & Computer Sciences
Introduction to Mobile Robotics - SS 2012
SLAM video tutorials (non-commercial use only). Link:  Password: wz65

ETH Zurich's mobile robotics course: http://www.asl.ethz.ch/education/master/mobile_robotics

http://www.asl.ethz.ch/education/lectures/autonomous_mobile_robots/spring-2018.html      

 

==Machine learning==

 

Stanford open course: Machine Learning (Andrew Ng). Link: http://pan.baidu.com/s/1pJSzxpT Password: 68eu

 

 

========Photogrammetry ==========

 

 ========Vision==========

 

Learning Based Methods in Vision

 

 

=======================================Books=============================================

Probabilistic Robotics     Link: http://pan.baidu.com/s/1o6MOiJw Password: iqcf

Home:  http://www.probabilistic-robotics.org/   Errata: http://probabilistic-robotics.informatik.uni-freiburg.de/errata.html

 

Multiple View Geometry in Computer Vision Second Edition  

Robotics Vision and Control        

 

The book runs through almost all of robotics via MATLAB, and every chapter has corresponding code.

Peter Corke of Queensland University of Technology, Australia, is a leading figure in machine vision, and his book Robotics, Vision and Control is a classic text in the field.

It ships with a companion MATLAB toolbox in two parts: one for robotics and one for vision.

The source code is open and free to download: http://petercorke.com/Toolbox_software.html

 

=======================================Papers==============================================================

 

Basic SLAM methods:

Filtering framework:       Kalman filters: EKF, UKF, EIF, etc.

                      Particle filters: PF, RBPF, FastSLAM 1.0 / 2.0, MCL

Graph-optimization framework:    Graph-SLAM.   Tools: g2o
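To make the filtering framework above concrete, here is a minimal 1D Kalman filter predict/update cycle; the noise magnitudes `q` and `r` are illustrative assumptions, not tied to any real robot or sensor:

```python
# Minimal 1D Kalman filter sketch: the belief is a Gaussian (mean x, variance p).
# Noise magnitudes q (motion) and r (measurement) are made-up illustration values.

def kf_predict(x, p, u, q):
    """Motion step: shift the mean by control u, inflate variance by q."""
    return x + u, p + q

def kf_update(x, p, z, r):
    """Measurement step: fuse observation z (variance r) into the belief."""
    k = p / (p + r)                      # Kalman gain
    return x + k * (z - x), (1.0 - k) * p

x, p = 0.0, 1.0                          # initial belief
x, p = kf_predict(x, p, u=1.0, q=0.1)    # commanded to move 1 unit
x, p = kf_update(x, p, z=1.2, r=0.2)     # observed position 1.2
# after the update the variance shrinks and the mean lands between 1.0 and 1.2
```

The EKF variants listed above follow the same two-step loop, with the linear shift and gain replaced by Jacobians of nonlinear motion and observation models.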

 

Open-source algorithms:

              Laser: GMapping, Karto SLAM, scan matching

              Visual: MonoSLAM (SceneLib, Davison, C++),   EKF mono-SLAM (inverse-depth observation model, MATLAB)

                        PTAM, SVO, ORB-SLAM

                        RGBD-SLAM v2;   Gao Xiang's "Let's do SLAM together" tutorial series (Kinect; essential reading for visual-SLAM beginners)

Loop-closure detection

         

Open-source code roundup

     OpenSLAM              https://www.openslam.org/

      

//======

Papers I once downloaded (partially read). For free sharing only, no commercial use. Link: http://pan.baidu.com/s/1ntW7mch Password: c6j1

If you are new to the field, start with survey and foundational papers, then skim the literature of the sub-direction you plan to work in, and finally focus on resolving the questions raised by your literature survey.

 

=======================================Datasets==============================================================

 

The Robotics Data Set Repository (Radish for short) provides a collection of standard robotics data sets. Herein you will find:

  • Logs of odometry, laser and sonar data taken from real robots.
  • Logs of all sorts of sensor data taken from simulated robots.
  • Environment maps generated by robots.
  • Environment maps generated by hand (i.e., re-touched floor-plans).

-----reposted from

 

 

 

  • SLAM benchmarking.  http://kaspar.informatik.uni-freiburg.de/~slamEvaluation/datasets.php
  • KITTI SLAM dataset.  http://www.cvlibs.net/datasets/kitti/eval_odometry.php. Includes monocular vision, stereo vision, Velodyne, and POS trajectories
  • OpenSLAM. https://www.openslam.org/links.html
  • CMU Visual Localization Data Set: Dataset collected using the Navlab 11 equipped with IMU, GPS, Lidars and cameras.
  • NYU RGB-D Dataset: Indoor dataset captured with a Microsoft Kinect that provides semantic labels.
  • TUM RGB-D Dataset: Indoor dataset captured with Microsoft Kinect and high-accuracy motion capturing.
  • New College Dataset: 30 GB of data for 6 D.O.F. navigation and mapping (metric or topological) using vision and/or laser.
  • The Rawseeds Project: Indoor and outdoor datasets with GPS, odometry, stereo, omnicam and laser measurements for visual, laser-based, omnidirectional, sonar and multi-sensor SLAM evaluation.
  • Victoria Park Sequence: Widely used sequence for evaluating laser-based SLAM. Trees serve as landmarks, detection code is included.
  • Malaga Dataset 2009 and Malaga Dataset 2013: Dataset with GPS, Cameras and 3D laser information, recorded in the city of Malaga, Spain.
  • Ford Campus Vision and Lidar Dataset: Dataset collected by a Ford F-250 pickup, equipped with IMU, Velodyne and Ladybug.
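Several of the entries above ship ground-truth trajectories in simple text formats. For the KITTI odometry benchmark, each line of a poses file is 12 numbers forming a row-major 3x4 rigid transform [R|t]; a small dependency-free parsing sketch:

```python
# Parse one line of a KITTI odometry ground-truth pose file.
# Each line holds 12 floats: a row-major 3x4 rigid transform [R|t]
# mapping the current camera frame to the first frame.

def parse_kitti_pose(line):
    v = [float(x) for x in line.split()]
    if len(v) != 12:
        raise ValueError("expected 12 values per pose line")
    rows = [v[0:4], v[4:8], v[8:12]]              # the 3x4 matrix, row by row
    t = (rows[0][3], rows[1][3], rows[2][3])      # translation column
    return rows, t

# identity rotation, translation (5, 0, 2)
rows, t = parse_kitti_pose("1 0 0 5 0 1 0 0 0 0 1 2")
```

Chaining the translations of successive poses gives the trajectory used for the odometry evaluation plots.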

 

------reposted from
1. TUM dataset
Everyone who uses it knows this one: an RGB-D dataset with many sequences, shipped with ground-truth trajectories and an error-evaluation script (written in Python, which also includes some useful functions).
Some sequences are very easy (the xyz and 360 series), but some are quite hard (the various SLAM scenes).
Because its target scenario is robot rescue (though you can hardly tell), the scenes are rather empty; often the Kinect depth covers little more than the floor, so it demands quite a lot of robustness from the visual algorithm.
URL: http://vision.in.tum.de/data/datasets/rgbd-dataset
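The TUM ground-truth files hold one pose per line in the format `timestamp tx ty tz qx qy qz qw`, with `#` starting a comment. A stripped-down sketch of the RMSE core of ATE evaluation — the dataset's official script additionally associates nearby timestamps and rigidly aligns the two trajectories, both of which this sketch omits:

```python
import math

# Stripped-down ATE-RMSE sketch for TUM-format trajectory files
# ("timestamp tx ty tz qx qy qz qw" per line, '#' starts a comment).
# Timestamp association and rigid alignment, which the official
# evaluation performs, are deliberately left out.

def load_tum(path):
    """Map timestamp -> (tx, ty, tz), ignoring the quaternion part."""
    poses = {}
    with open(path) as f:
        for line in f:
            if line.startswith('#') or not line.strip():
                continue
            v = line.split()
            poses[float(v[0])] = tuple(float(x) for x in v[1:4])
    return poses

def ate_rmse(gt, est):
    """RMSE of translational error over timestamps present in both."""
    errs = [sum((g - e) ** 2 for g, e in zip(gt[t], est[t]))
            for t in gt if t in est]
    return math.sqrt(sum(errs) / len(errs))
```

Skipping the alignment step makes the number frame-dependent, which is why the real script aligns first; this is only meant to show the error metric itself.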


2. MRPT
Forum member SLAM_xian has posted the address: see that thread.
It contains data from multiple sensors, including stereo, laser, etc.
MRPT itself is a development toolkit for robotics (which I have not used myself); interested readers may want to try it.

3. KITTI
Forum member zhengshunkai posted the address: see that thread.
A famous outdoor dataset: stereo, with ground truth. The scenes are large and so is the data volume (so I cannot afford it on my metered connection...). If you work on outdoor SLAM, do try this dataset; even if you don't want to, the reviewers will make you.

4. Oxford datasets
These contain several FabMap-related datasets for validating loop-closure detection algorithms. Outdoor scenes. Ground-truth loop closures are provided (reportedly hand-labeled; that takes patience).
URL: http://www.robots.ox.ac.uk/~mobile/wikisite/pmwiki/pmwiki.php?n=Main.Datasets#userconsent#
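As a toy illustration of appearance-based loop-closure scoring: FabMap's actual model is probabilistic, over co-occurrences of visual words, but the simplest flavor of the idea is just comparing bag-of-visual-words histograms. The histograms and frame names below are made up:

```python
import math

# Toy loop-closure score: cosine similarity between bag-of-visual-words
# histograms. Histograms and keyframe names are invented for illustration;
# FabMap's real model is probabilistic, not a plain cosine comparison.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

query = [3, 0, 1, 2]                        # BoW histogram of the current frame
keyframes = {"frame_10": [3, 0, 1, 2],      # a revisited place (same appearance)
             "frame_42": [0, 5, 0, 1]}      # a different place
best = max(keyframes, key=lambda k: cosine(query, keyframes[k]))
```

A real system would threshold the score and geometrically verify the match before accepting the loop closure.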

5. ICL-NUIM dataset
Another one from Imperial College: an RGB-D dataset, indoor-oriented, providing ground truth and odometry.
URL: http://www.doc.ic.ac.uk/%7Eahanda/VaFRIC/iclnuim.html

6. NYUv2 dataset
An RGB-D dataset with semantic labels, originally built for recognition but also usable for SLAM. Its distinguishing feature is a training set (1400+ hand-labeled images, apparently annotated by hired workers) plus a large number of video sequences.
URL: http://cs.nyu.edu/silberman/datasets/nyu_depth_v2.html (access seems to be broken; no idea whether it will be fixed)


7. KOS 3D scan dataset
A laser-scanning dataset.
URL: http://kos.informatik.uni-osnabrueck.de/3Dscans/

 --------------

 

======================================Researchers=============================================================

SLAM researchers QQ group:   254787961

 

SRI International is a nonprofit corporation committed to technology research and development   http://www.ai.sri.com/

Marc Pollefeys  https://www.inf.ethz.ch/personal/marc.pollefeys/

ZEESHAN ZIA

Jakob Engel

Ronald Parr

  • LSPI: Fast and efficient reinforcement learning with linear value function approximation for MDPs and multi agent systems.
  • DP-SLAM: Fast, accurate, truly simultaneous localization and mapping without landmarks.
  • Textured Occupancy Grids: Monocular Localization without Features. We provide some 3D data sets using a variety of sensors.

Andrew Davison

Dr. Thomas Whelan: research focuses on real-time dense visual SLAM and, on a broader scale, general robotic perception.

 

Probabilistic Robotics: About the Authors

Sebastian Thrun is Associate Professor in the Computer Science Department at Stanford University and Director of the Stanford AI Lab. 

Wolfram Burgard is Associate Professor and Head of the Autonomous Intelligent Systems Research Lab in the Department of Computer Science at the University of Freiburg. 

Dieter Fox is Associate Professor and Director of the Robotics and State Estimation Lab in the Department of Computer Science and Engineering at the University of Washington.

 

===============================Companies===================================

 

Google Project Tango:   https://www.google.com/atap/project-tango/

SLAMTEC (laser SLAM):   http://www.slamtec.com/en

………………………………

 

//From kint_zhao  http://blog.csdn.net/zyh821351004/article/details/50081713   

 
