ORB-SLAM2 Setup and Usage, Part 1: Installation, Compilation, and Offline Dataset Testing With and Without ROS

System: Ubuntu 16.04 LTS

ROS version: Kinetic Kame

ORB-SLAM2 can be compiled with or without ROS. The non-ROS build runs without ROS installed and is mainly used for offline dataset playback, while the ROS build targets online operation with a live camera.

This post covers the configuration of both modes; for the ROS build it also explains how to run with a monocular camera and with an RGB-D camera.

Reference posts:

https://blog.csdn.net/u014709760/article/details/85253525

https://blog.csdn.net/learning_tortosie/article/details/79881165

https://www.cnblogs.com/shang-slam/p/6733322.html

https://blog.csdn.net/weixin_39272225/article/details/79886805

 

Table of Contents

1. Preparation

2. Installing the Libraries

2.1 Installing Pangolin

2.2 Installing OpenCV

3. Installing ORB-SLAM2

3.1 Non-ROS Installation and Testing

3.1.1 Non-ROS Monocular Test

3.1.2 Non-ROS RGB-D Test

3.2 ROS Installation

3.2.2 Running a Monocular Camera under ROS

3.2.3 Running an RGB-D Camera under ROS

1. Preparation

Install some common tools:

sudo apt-get update
sudo apt-get install git
sudo apt-get install cmake
# vim is optional, but handy for editing text files
sudo apt-get install vim

Install the dependencies for Pangolin, the library ORB-SLAM2 uses for visualization and its user interface:

sudo apt-get install libglew-dev libpython2.7-dev

Install the dependencies for OpenCV:

sudo apt-get install build-essential
sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev

Install the Eigen3 library:

sudo apt-get install libeigen3-dev

2. Installing the Libraries

2.1 Installing Pangolin

Clone Pangolin (into any directory you like):

git clone https://github.com/stevenlovegrove/Pangolin.git

From the directory containing the clone, build and install it:

cd Pangolin
mkdir build
cd build
cmake ..
# on a low-spec machine, drop -j or limit it (e.g. make -j2)
make -j
sudo make install

2.2 Installing OpenCV

Download an OpenCV release. Two notes here.

First, if you have already installed ROS you can skip this step, since ROS ships its own OpenCV packages.

Second, most tutorials report compatibility problems with OpenCV 3, so an OpenCV 2.4.xx release is recommended. The 2.4.13.6 source is available on GitHub: https://github.com/opencv/opencv/tree/2.4.13.6
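If you do build OpenCV 2.4.13.6 from source, the usual out-of-tree CMake build looks roughly like this; the archive URL and flags are a sketch based on GitHub's standard tag-archive layout, not taken from the original post:

```shell
# Sketch: fetch and build OpenCV 2.4.13.6 from source (paths illustrative)
wget https://github.com/opencv/opencv/archive/2.4.13.6.tar.gz
tar xzf 2.4.13.6.tar.gz
cd opencv-2.4.13.6
mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/usr/local ..
make -j4
sudo make install
```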

3. Installing ORB-SLAM2

A note up front: the non-ROS build does not depend on ROS. If you want both builds, do the non-ROS build in 3.1 first, since build_ros.sh links against libORB_SLAM2.so, which is produced by build.sh; the environment setup described in 3.2 is still needed before building the ROS version.

3.1 Non-ROS Installation and Testing

Clone the source (into any directory you like):

git clone https://github.com/raulmur/ORB_SLAM2.git ORB_SLAM2

From the directory containing the clone, build it:

cd ORB_SLAM2
chmod +x build.sh
./build.sh

3.1.1 Non-ROS Monocular Test

First download a dataset (any path works; I put it in the ORB-SLAM2 directory): https://vision.in.tum.de/data/datasets/rgbd-dataset/download

From the ORB-SLAM2 directory, run the monocular example:

./Examples/Monocular/mono_tum Vocabulary/ORBvoc.txt Examples/Monocular/TUM1.yaml rgbd_dataset_freiburg1_desk/

The first argument is the example executable.

The second argument is the ORB vocabulary.

The third argument is the camera parameter file. If you later use your own data, this is the file to modify. Also make sure the dataset and the parameter file match: rgbd_dataset_freiburg1_desk contains the digit 1, so it pairs with TUM1.yaml.

The fourth argument is the dataset path; here it sits in the current directory.
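For reference, the camera parameter file is an OpenCV-style YAML. An abridged sketch of TUM1-style settings follows; the intrinsics shown are the commonly published freiburg1 values and may differ slightly from your copy of TUM1.yaml:

```yaml
%YAML:1.0

# Camera intrinsics and distortion (freiburg1-style values, illustrative)
Camera.fx: 517.306408
Camera.fy: 516.469215
Camera.cx: 318.643040
Camera.cy: 255.313989
Camera.k1: 0.262383
Camera.k2: -0.953104
Camera.p1: -0.005358
Camera.p2: 0.002628
Camera.k3: 1.163314
Camera.fps: 30.0
Camera.RGB: 1          # 1: color images are RGB, 0: BGR

# ORB feature extractor settings
ORBextractor.nFeatures: 1000
ORBextractor.scaleFactor: 1.2
ORBextractor.nLevels: 8
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7
```

When adapting the file to your own camera, the Camera.* entries are the ones to replace.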

3.1.2 Non-ROS RGB-D Test

This again uses the official dataset.

First change into the dataset directory:

cd rgbd_dataset_freiburg1_desk

Create a file named associate.py, paste in the following code, and save it:

#!/usr/bin/python
# Software License Agreement (BSD License)
#
# Copyright (c) 2013, Juergen Sturm, TUM
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above
#    copyright notice, this list of conditions and the following
#    disclaimer in the documentation and/or other materials provided
#    with the distribution.
#  * Neither the name of TUM nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
# Requirements: 
# sudo apt-get install python-argparse

"""
The Kinect provides the color and depth images in an un-synchronized way. This means that the set of time stamps from the color images do not intersect with those of the depth images. Therefore, we need some way of associating color images to depth images.

For this purpose, you can use the ''associate.py'' script. It reads the time stamps from the rgb.txt file and the depth.txt file, and joins them by finding the best matches.
"""

import argparse
import sys
import os
import numpy


def read_file_list(filename):
    """
    Reads a trajectory from a text file. 
    
    File format:
    The file format is "stamp d1 d2 d3 ...", where stamp denotes the time stamp (to be matched)
    and "d1 d2 d3.." is arbitary data (e.g., a 3D position and 3D orientation) associated to this timestamp. 
    
    Input:
    filename -- File name
    
    Output:
    dict -- dictionary of (stamp,data) tuples
    
    """
    file = open(filename)
    data = file.read()
    lines = data.replace(","," ").replace("\t"," ").split("\n") 
    list = [[v.strip() for v in line.split(" ") if v.strip()!=""] for line in lines if len(line)>0 and line[0]!="#"]
    list = [(float(l[0]),l[1:]) for l in list if len(l)>1]
    return dict(list)

def associate(first_list, second_list,offset,max_difference):
    """
    Associate two dictionaries of (stamp,data). As the time stamps never match exactly, we aim 
    to find the closest match for every input tuple.
    
    Input:
    first_list -- first dictionary of (stamp,data) tuples
    second_list -- second dictionary of (stamp,data) tuples
    offset -- time offset between both dictionaries (e.g., to model the delay between the sensors)
    max_difference -- search radius for candidate generation

    Output:
    matches -- list of matched tuples ((stamp1,data1),(stamp2,data2))
    
    """
    # copy the keys into lists so the script also runs under Python 3
    first_keys = list(first_list.keys())
    second_keys = list(second_list.keys())
    potential_matches = [(abs(a - (b + offset)), a, b) 
                         for a in first_keys 
                         for b in second_keys 
                         if abs(a - (b + offset)) < max_difference]
    potential_matches.sort()
    matches = []
    for diff, a, b in potential_matches:
        if a in first_keys and b in second_keys:
            first_keys.remove(a)
            second_keys.remove(b)
            matches.append((a, b))
    
    matches.sort()
    return matches

if __name__ == '__main__':
    
    # parse command line
    parser = argparse.ArgumentParser(description='''
    This script takes two data files with timestamps and associates them   
    ''')
    parser.add_argument('first_file', help='first text file (format: timestamp data)')
    parser.add_argument('second_file', help='second text file (format: timestamp data)')
    parser.add_argument('--first_only', help='only output associated lines from first file', action='store_true')
    parser.add_argument('--offset', help='time offset added to the timestamps of the second file (default: 0.0)',default=0.0)
    parser.add_argument('--max_difference', help='maximally allowed time difference for matching entries (default: 0.02)',default=0.02)
    args = parser.parse_args()

    first_list = read_file_list(args.first_file)
    second_list = read_file_list(args.second_file)

    matches = associate(first_list, second_list,float(args.offset),float(args.max_difference))    

    if args.first_only:
        for a,b in matches:
            print("%f %s"%(a," ".join(first_list[a])))
    else:
        for a,b in matches:
            print("%f %s %f %s"%(a," ".join(first_list[a]),b-float(args.offset)," ".join(second_list[b])))
            
        

This Python script generates a file that pairs each color image with the depth image whose timestamp is closest.

Generate the association file in the terminal:

python associate.py rgb.txt depth.txt > associations.txt
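The nearest-timestamp matching that associate.py performs can be illustrated on toy data (the timestamps and filenames below are made up for demonstration):

```python
# Toy illustration of associate.py's greedy nearest-timestamp matching.
def associate(first, second, offset=0.0, max_difference=0.02):
    first_keys = list(first.keys())
    second_keys = list(second.keys())
    # Every (rgb, depth) pair whose timestamps are close enough is a candidate.
    candidates = sorted((abs(a - (b + offset)), a, b)
                        for a in first_keys for b in second_keys
                        if abs(a - (b + offset)) < max_difference)
    matches = []
    for _, a, b in candidates:  # greedily keep the closest unused pairs
        if a in first_keys and b in second_keys:
            first_keys.remove(a)
            second_keys.remove(b)
            matches.append((a, b))
    return sorted(matches)

rgb = {1.000: "rgb/1.png", 1.033: "rgb/2.png"}
depth = {1.005: "depth/1.png", 1.031: "depth/2.png"}
print(associate(rgb, depth))  # [(1.0, 1.005), (1.033, 1.031)]
```

Note that 1.033 pairs with 1.031 rather than 1.005: the smallest time differences win first, and each timestamp is used at most once.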

Back in the ORB-SLAM2 directory, run the RGB-D example:

./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt Examples/RGB-D/TUM1.yaml rgbd_dataset_freiburg1_desk/ rgbd_dataset_freiburg1_desk/associations.txt 

Note that the file paths differ from the monocular case, and there is a fifth argument: the associations.txt file just generated.

3.2 ROS Installation

Create a catkin workspace and its src folder (any path works; here it goes in the home directory):

mkdir -p ~/catkin_ws/src

Build the workspace:

cd ~/catkin_ws
catkin_make

Put the ORB-SLAM2 source folder under src.

Add the environment variable, replacing PATH with the actual location:

export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:PATH/ORB_SLAM2/Examples/ROS

Open ~/.bashrc:

gedit ~/.bashrc

Append the following line and save, replacing USERNAME with your user name:

source /home/USERNAME/catkin_ws/devel/setup.sh

Reload it in the terminal:

source ~/.bashrc

From the ORB-SLAM2 directory, build the ROS version:

chmod +x build_ros.sh
./build_ros.sh

A possible error:

undefined reference to symbol '_ZN5boost6system15system_categoryEv'

Fix:

edit Examples/ROS/ORB_SLAM2/CMakeLists.txt and add the following two lines:

find_package(Boost COMPONENTS filesystem system REQUIRED) # add below the other find_package calls
set(LIBS 
${OpenCV_LIBS} 
${EIGEN3_LIBS}
${Pangolin_LIBRARIES}
${Boost_LIBRARIES} # this is the added line
${PROJECT_SOURCE_DIR}/../../../Thirdparty/DBoW2/lib/libDBoW2.so
${PROJECT_SOURCE_DIR}/../../../Thirdparty/g2o/lib/libg2o.so
${PROJECT_SOURCE_DIR}/../../../lib/libORB_SLAM2.so
)

Save and rebuild; compilation should now succeed. Running live with a camera still requires a few extra dependencies, described next.

3.2.2 Running a Monocular Camera under ROS

Change into the workspace's src directory and install usb_cam:

cd ~/catkin_ws/src
git clone https://github.com/bosch-ros-pkg/usb_cam.git usb_cam

Then return to the catkin_ws directory, rebuild, and re-source the environment:

cd ~/catkin_ws/
catkin_make
source ~/catkin_ws/devel/setup.bash

The following command should now show the camera feed:

roslaunch usb_cam usb_cam-test.launch

This displays the built-in camera. To use an external USB camera instead, edit usb_cam/launch/usb_cam-test.launch and change videoX (X is the device index) in the line below to match your device. Typically video0 is the built-in camera and video1 the external one.

<param name="video_device" value="/dev/video1" />

If the image appears, the test succeeded.

Your own camera, however, needs intrinsic calibration, so here is the calibration procedure under ROS.

With the camera feed open as above, start the calibration window:

rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.025 image:=/usb_cam/image_raw camera:=/usb_cam

You need a printed checkerboard calibration target. --size 8x6 means a board with 8x6 interior corners, and --square 0.025 means each square is 2.5 cm on a side; adjust both to your target. A printable board is available here: http://wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration?action=AttachFile&do=view&target=check-108.pdf

Move the board through every part of the camera's field of view, and remember to tilt it as well. Once the X, Y, Size, and Skew bars all turn green, click the CALIBRATE button and wait while the computation runs in the background. When it finishes, the other two buttons light up as well; click COMMIT.

The intrinsics are saved to ~/.ros/camera_info/head_camera.yaml. Copy this file into the ORB-SLAM2 folder and rewrite it in the format of Examples/Monocular/TUM1.yaml. For the field-by-field correspondence, refer to the layout of OpenCV's intrinsic matrix; in practice the value ranges make the mapping easy to spot.
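That correspondence can be sketched in a few lines of Python. This assumes the usual ROS calibration output, where camera_matrix is a flat row-major 3x3 array and plumb_bob distortion is [k1, k2, p1, p2, k3]; the helper name and the sample numbers are illustrative, not from the calibration output:

```python
# Sketch: map ROS calibration output (camera_matrix K, row-major 3x3,
# plus plumb_bob distortion D = [k1, k2, p1, p2, k3]) onto the keys
# an ORB-SLAM2 settings file uses. Helper name is illustrative.
def ros_calib_to_orbslam(K, D):
    return {
        "Camera.fx": K[0], "Camera.fy": K[4],  # focal lengths on the diagonal
        "Camera.cx": K[2], "Camera.cy": K[5],  # principal point, last column
        "Camera.k1": D[0], "Camera.k2": D[1],  # radial distortion
        "Camera.p1": D[2], "Camera.p2": D[3],  # tangential distortion
        "Camera.k3": D[4],
    }

# Example with made-up numbers in the typical freiburg1 range:
params = ros_calib_to_orbslam(
    [517.3, 0.0, 318.6, 0.0, 516.5, 255.3, 0.0, 0.0, 1.0],
    [0.26, -0.95, -0.005, 0.003, 1.16],
)
for key, value in params.items():
    print("%s: %f" % (key, value))
```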

One more change is needed: in Examples/ROS/ORB_SLAM2/src/ros_mono.cc, change the subscribed topic /camera/image_raw to /usb_cam/image_raw.

Then recompile.

Now the monocular mode runs online (C920.yaml is my own camera's parameter file):

rosrun ORB_SLAM2 Mono Vocabulary/ORBvoc.txt C920.yaml 

3.2.3 Running an RGB-D Camera under ROS

My RGB-D camera is an ASUS Xtion PRO LIVE; the steps are as follows.

Install OpenNI2:

sudo apt-get install ros-kinetic-openni2-camera
sudo apt-get install ros-kinetic-openni2-launch

Launch OpenNI2:

roslaunch openni2_launch openni2.launch

Now the RGB-D mode runs online (again, the last argument is my own parameter file; calibrate if you do not have one):

rosrun ORB_SLAM2 RGBD Vocabulary/ORBvoc.txt Examples/ROS/ORB_SLAM2/Asus.yaml 

 
