SLAM research websites and useful sites — source code + test data

1. http://openslam.org/

The simultaneous localization and mapping (SLAM) problem has been intensively studied in the robotics community. Many techniques have been proposed, but only a few are available to the community as implementations. The goal of OpenSLAM.org is to give SLAM researchers a platform on which to publish their algorithms. OpenSLAM.org provides every interested SLAM researcher with a Subversion (svn) repository and a small webpage to publish and promote their work. In the repository, only the authors have full access to the files; other users are restricted to read-only access. OpenSLAM.org is not intended as a repository for the day-to-day development of early SLAM implementations; published algorithms should have a certain degree of robustness.

OpenSLAM.org does not require authors to give away the copyright to their code. We only require that algorithms are provided as source code and that authors allow users to use and modify the source code for their own research. Any commercial application, redistribution, etc., must be arranged between users and authors individually.

2. http://cvpr.in.tum.de/data/datasets/rgbd-dataset

RGB-D SLAM Dataset and Benchmark

Contact: Jürgen Sturm

We provide a large dataset containing RGB-D data and ground-truth data, with the goal of establishing a novel benchmark for the evaluation of visual odometry and visual SLAM systems. Our dataset contains the color and depth images of a Microsoft Kinect sensor along with the ground-truth trajectory of the sensor. The data was recorded at full frame rate (30 Hz) and sensor resolution (640×480). The ground-truth trajectory was obtained from a high-accuracy motion-capture system with eight high-speed tracking cameras (100 Hz). We also provide the accelerometer data from the Kinect. Finally, we propose an evaluation criterion for measuring the quality of the estimated camera trajectory of visual SLAM systems.
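The evaluation criterion referred to here is commonly reported as absolute trajectory error (ATE): after rigidly aligning the estimated trajectory to the ground truth, take the RMSE of the per-pose translational differences. A minimal sketch, assuming the estimated and ground-truth positions are already time-associated as N×3 arrays (the benchmark's own tools additionally handle timestamp matching):

```python
import numpy as np

def align_horn(est, gt):
    """Rigidly align estimated positions (Nx3) to ground truth (Nx3)
    via Horn's closed-form method (SVD / Kabsch). Returns aligned est."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g
    U, _, Vt = np.linalg.svd(E.T @ G)                 # H = E^T G
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T           # rotation est -> gt frame
    return E @ R.T + mu_g

def ate_rmse(est, gt):
    """Absolute trajectory error: RMSE of translational differences
    after rigid alignment (poses assumed time-associated)."""
    aligned = align_horn(est, gt)
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))
```

With a trajectory that differs from ground truth only by a rigid transform, the ATE is (numerically) zero, since the alignment absorbs the transform.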

3. http://vision.ia.ac.cn/Students/gzp/monocularslam.html

Monocular SLAM

Research in monocular SLAM is mainly based on EKF (Extended Kalman Filter) SLAM approaches.
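At the core of EKF SLAM is one predict/update cycle per frame over a state vector that stacks the camera (or robot) pose with all landmark positions, so the covariance captures pose–landmark correlations. A minimal, generic sketch of that cycle (the motion model `f`, measurement model `h`, and their Jacobians `F`, `H` are hypothetical placeholders supplied by the caller):

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F, H, Q, R):
    """One EKF predict/update cycle.
    x, P : state mean and covariance (in EKF SLAM: pose + landmarks)
    u, z : control input and measurement
    f, h : motion and measurement models; F, H: their Jacobians
    Q, R : process and measurement noise covariances."""
    # Predict: propagate mean through the motion model, covariance via Jacobian
    x_pred = f(x, u)
    Fx = F(x, u)
    P_pred = Fx @ P @ Fx.T + Q
    # Update: fuse the measurement via the Kalman gain
    Hx = H(x_pred)
    S = Hx @ P_pred @ Hx.T + R          # innovation covariance
    K = P_pred @ Hx.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hx) @ P_pred
    return x_new, P_new
```

For example, with a 1-D robot at x[0] and a single landmark at x[1], a relative measurement z = x[1] − x[0] pulls the posterior toward the observed separation while keeping the covariance positive.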

4. http://www.cvpapers.com/rr.html

Computer Vision Algorithm Implementations

5. http://www.mrpt.org/
The Mobile Robot Programming Toolkit — a very good resource.

6. Andrew J. Davison — relevant to my own research direction:
implementation and research of SLAM based on monocular vision.

ORB-SLAM2 is a visual SLAM (simultaneous localization and mapping) system that supports monocular cameras. It relies mainly on feature recognition for tracking and autonomous navigation, but it can also be combined with other sensors, such as an IMU.

ORB-SLAM2 can be combined with the YOLO object detection algorithm to perform real-time object detection within a SLAM system. By fusing YOLO's detection results with ORB-SLAM2's visual information, objects in the environment can be localized and tracked. This combination is useful in many applications, such as robot navigation and augmented reality.

To use YOLO with ORB-SLAM2, you need to do the following:

1. Install YOLO: first, install the YOLO object detection algorithm and make sure it is correctly installed and configured.
2. Integrate YOLO with ORB-SLAM2: the key step is fusing YOLO's detection results with ORB-SLAM2's visual information. You will need to modify the ORB-SLAM2 source code so that it receives and processes YOLO's output; the concrete implementation details depend on the versions of ORB-SLAM2 and YOLO you use.
3. Run the integrated system: once the integration is complete, you can run SLAM from a monocular camera while performing object detection at the same time. You can access the SLAM system's pose estimates through ORB-SLAM2's API, and obtain object positions and class labels from YOLO's detections.

In summary, by integrating YOLO's detections into a modified ORB-SLAM2, a monocular camera can perform SLAM and object detection simultaneously; the specific implementation details depend on the versions of ORB-SLAM2 and YOLO involved.
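One common fusion scheme for the monocular case (where detections carry no depth) is geometric association: project the SLAM map points into the current image and assign those that fall inside a YOLO bounding box to that object, giving it a rough 3D position. A minimal sketch; the `T_cw` pose, map points, and box format are stand-ins for whatever the real ORB-SLAM2 and YOLO interfaces of your versions provide:

```python
import numpy as np

def project(T_cw, K, points_w):
    """Project world-frame map points into the image.
    T_cw: 4x4 world-to-camera pose; K: 3x3 pinhole intrinsics."""
    pts_h = np.hstack([points_w, np.ones((len(points_w), 1))])
    pts_c = (T_cw @ pts_h.T).T[:, :3]       # world -> camera frame
    in_front = pts_c[:, 2] > 0              # keep points in front of the camera
    uv = (K @ pts_c.T).T
    uv = uv[:, :2] / uv[:, 2:3]             # perspective division
    return uv, in_front

def locate_objects(T_cw, K, points_w, boxes):
    """Associate YOLO boxes with SLAM map points: map points projecting
    inside a box give a rough 3D position (median) for that object.
    boxes: list of (label, (x1, y1, x2, y2)) in pixel coordinates."""
    uv, valid = project(T_cw, K, points_w)
    objects = []
    for label, (x1, y1, x2, y2) in boxes:
        inside = valid & (uv[:, 0] >= x1) & (uv[:, 0] <= x2) \
                       & (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
        if inside.any():
            objects.append((label, np.median(points_w[inside], axis=0)))
    return objects
```

Taking the median of the associated map points makes the position estimate robust to a few background points that happen to project inside the box; a real integration would also filter by reprojection depth and track associations over time.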
