Installing ORB-SLAM3 on Ubuntu and Running the RGBD TUM Dataset

Contents

1. Downloading the ORB-SLAM3 Source Code

2. Environment Setup

2.1 C++11 or C++0x Compiler

2.2 Installing Pangolin

2.3 Installing OpenCV

2.4 Installing Eigen3

2.5 Installing Python

3. Installing ORB-SLAM3

4. Running the RGBD TUM Dataset

4.1 Downloading the association script associate.py

4.2 Running ORB-SLAM3


1. Downloading the ORB-SLAM3 Source Code

Download the ORB-SLAM3 source code from GitHub:

ORB-SLAM3 GitHub repository

2. Environment Setup

2.1 C++11 or C++0x Compiler

First, the gcc and g++ toolchain needs to be installed (you can check beforehand whether it is already present). Open a terminal and run:

sudo apt-get install gcc
sudo apt-get install g++

Test the C++ compiler

Create a .cpp file (enter touch test.cpp in the terminal), open it, and type in:

#include <iostream>
using namespace std;
int main()
{
    cout << "HELLO WORLD!" << endl;
    return 0;
}

Open a terminal in the directory containing test.cpp and enter:

g++ test.cpp -o test.out

The compilation above produces a test.out file. Run it directly (for example with ./test.out, or by dragging the file onto the terminal and pressing Enter) and it prints: HELLO WORLD!

This confirms that the C++ build environment on Ubuntu is working.

2.2 Installing Pangolin

2.2.1 Install the dependencies

Enter the following commands in a terminal, one at a time:

sudo apt-get install libglew-dev
sudo apt-get install cmake
sudo apt-get install libpython2.7-dev
sudo apt-get install ffmpeg libavcodec-dev libavutil-dev libavformat-dev libswscale-dev libavdevice-dev
sudo apt-get install libdc1394-22-dev libraw1394-dev
sudo apt-get install libjpeg-dev libpng16-dev libtiff5-dev libopenexr-dev
sudo apt install libegl1-mesa-dev libwayland-dev libxkbcommon-dev wayland-protocols

Note: if some dependencies fail to install, try switching to a different APT mirror (commonly used mirrors include Aliyun, Tsinghua, and USTC). After switching mirrors, run sudo apt-get update to refresh the package lists.
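
For example, a minimal sketch of switching to the Tsinghua mirror (this assumes the default archive.ubuntu.com entries in /etc/apt/sources.list; adapt the mirror host to your preference):

# back up the current source list, then point it at the Tsinghua mirror
sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
sudo sed -i 's/archive.ubuntu.com/mirrors.tuna.tsinghua.edu.cn/g' /etc/apt/sources.list
# refresh the package lists from the new mirror
sudo apt-get update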

2.2.2 Download Pangolin

Download the source code from GitHub:

Pangolin GitHub source

After extracting it, open the Pangolin folder and open a terminal there:

mkdir build
cd build # you can skip this step by opening the terminal directly inside the build folder
cmake ..
sudo make
sudo make install

2.2.3 Test Pangolin

cd build/examples/HelloPangolin
cmake .
make
./HelloPangolin

If the installation succeeded, the HelloPangolin demo window (a rendered colored cube) opens.

2.3 Installing OpenCV

2.3.1 Download OpenCV

Download it from the official OpenCV website: OpenCV download page

Click Sources and download the version you need (OpenCV 3.4 is used as the example here).

After downloading and extracting, enter the folder and run the following in a terminal:

mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/usr/local ..
# after cmake succeeds and generates the Makefiles, compile with make (see the parallel-build tip after this block)
sudo make
# after the build succeeds, install
sudo make install
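
OpenCV takes quite a while to compile; an optional tweak is to run the make step above in parallel instead:

# use all CPU cores to speed up the build (optional)
sudo make -j$(nproc)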

2.3.2 Configure the OpenCV environment

sudo gedit /etc/ld.so.conf.d/opencv.conf
# open opencv.conf and append the following line at the end of the file
/usr/local/lib
# save, then make the library path take effect
sudo ldconfig
# edit the bash environment file
sudo gedit /etc/bash.bashrc
# append at the end of the file
PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig
export PKG_CONFIG_PATH
# save, then make the configuration take effect
source /etc/bash.bashrc
# update the locate database
sudo updatedb

Check your OpenCV version:

pkg-config --modversion opencv
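
As a further sanity check that pkg-config can find the new installation (test_opencv.cpp here is just a hypothetical file that includes an OpenCV header), the same tool supplies the compiler and linker flags:

# compile a small program against the installed OpenCV 3.x
g++ test_opencv.cpp -o test_opencv $(pkg-config --cflags --libs opencv)
# note: OpenCV 4.x names its pkg-config module "opencv4" instead of "opencv"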

2.4 Installing Eigen3

2.4.1 Download from the Eigen website

Official download link: Eigen 3.4.0 download (tar.gz)

2.4.2 Install Eigen

Open a terminal in the Eigen folder:

mkdir build
cd build
sudo cmake ..
sudo make install

sudo cp -r /usr/local/include/eigen3/Eigen /usr/local/include
# copy the Eigen headers up into the default include path

Note: many programs include Eigen with #include <Eigen/Dense>. Because Eigen installs its headers under /usr/local/include/eigen3, the compiler cannot find them in its default search path and builds sometimes fail, so the headers are copied up into /usr/local/include.
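
Alternatively, instead of copying the headers, you can point the compiler at the eigen3 directory when building (a sketch; your_program.cpp is a hypothetical file that includes Eigen):

# alternative to copying: add the eigen3 directory to the include path
g++ your_program.cpp -o your_program -I /usr/local/include/eigen3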

2.4.3 Test Eigen

Create test_Eigen.cpp, open it, and enter:

#include <iostream>
#include <Eigen/Dense>   // found directly because the headers were copied to /usr/local/include

using namespace Eigen;
using namespace std;

int main()
{
    // build a 4-dimensional double vector and print it
    Vector4d test1;
    test1 << 1, 2, 3, 4;
    cout << "test=\n" << test1 << endl;
    cout << "Test Success!" << endl;
    return 0;
}

Compile and run it in a terminal:

g++ test_Eigen.cpp -o test_Eigen
./test_Eigen

2.5 Installing Python

Enter in a terminal:

sudo apt install libpython2.7-dev

3. Installing ORB-SLAM3

Open a terminal in the ORB_SLAM3 folder and run:

chmod +x build.sh # make build.sh executable
./build.sh # automated build script; it compiles each component listed in the script in turn

If no errors are reported, the installation succeeded.
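
As an optional sanity check (assuming the usual layout of the ORB_SLAM3 repository), the main build artifacts should now exist:

# the core library and the RGB-D example used below should both be present
ls lib/libORB_SLAM3.so
ls Examples/RGB-D/rgbd_tum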

4. Running the RGBD TUM Dataset

Dataset download page: https://vision.in.tum.de/data/datasets/rgbd-dataset/download

Extract it into the ORB_SLAM3 folder; rgbd_dataset_freiburg1_desk is used as the example here.
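
For reference, a sketch of downloading and extracting the sequence from the command line (the exact file URL is an assumption based on the download page above; check it in a browser if the request fails):

cd ORB_SLAM3
# URL assumed from the TUM RGB-D download page; verify it if the download fails
wget https://vision.in.tum.de/rgbd/dataset/freiburg1/rgbd_dataset_freiburg1_desk.tgz
tar -xzf rgbd_dataset_freiburg1_desk.tgz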

4.1 Download the association script associate.py

(It reads the timestamps from the rgb.txt and depth.txt files and joins them by finding the best matches.)

Download page: associate.py download

Alternatively, create an associate.py yourself and paste in the code below. (Place the file in the ORB_SLAM3/Examples/RGB-D directory.)

import argparse
import sys
import os
import numpy


def read_file_list(filename):
    """
    Reads a trajectory from a text file. 
    
    File format:
    The file format is "stamp d1 d2 d3 ...", where stamp denotes the time stamp (to be matched)
    and "d1 d2 d3.." is arbitary data (e.g., a 3D position and 3D orientation) associated to this timestamp. 
    
    Input:
    filename -- File name
    
    Output:
    dict -- dictionary of (stamp,data) tuples
    
    """
    file = open(filename)
    data = file.read()
    lines = data.replace(","," ").replace("\t"," ").split("\n")
    #if remove_bounds:
    #    lines = lines[100:-100]
    list = [[v.strip() for v in line.split(" ") if v.strip()!=""] for line in lines if len(line)>0 and line[0]!="#"]
    list = [(float(l[0]),l[1:]) for l in list if len(l)>1]
    return dict(list)

def associate(first_list, second_list,offset,max_difference):
    """
    Associate two dictionaries of (stamp,data). As the time stamps never match exactly, we aim 
    to find the closest match for every input tuple.
    
    Input:
    first_list -- first dictionary of (stamp,data) tuples
    second_list -- second dictionary of (stamp,data) tuples
    offset -- time offset between both dictionaries (e.g., to model the delay between the sensors)
    max_difference -- search radius for candidate generation

    Output:
    matches -- list of matched tuples ((stamp1,data1),(stamp2,data2))
    
    """
    first_keys = list(first_list.keys())
    second_keys = list(second_list.keys())
    potential_matches = [(abs(a - (b + offset)), a, b) 
                         for a in first_keys 
                         for b in second_keys 
                         if abs(a - (b + offset)) < max_difference]
    potential_matches.sort()
    matches = []
    for diff, a, b in potential_matches:
        if a in first_keys and b in second_keys:
            first_keys.remove(a)
            second_keys.remove(b)
            matches.append((a, b))
    
    matches.sort()
    return matches

if __name__ == '__main__':
    
    # parse command line
    parser = argparse.ArgumentParser(description='''
    This script takes two data files with timestamps and associates them   
    ''')
    parser.add_argument('first_file', help='first text file (format: timestamp data)')
    parser.add_argument('second_file', help='second text file (format: timestamp data)')
    parser.add_argument('--first_only', help='only output associated lines from first file', action='store_true')
    parser.add_argument('--offset', help='time offset added to the timestamps of the second file (default: 0.0)',default=0.0)
    parser.add_argument('--max_difference', help='maximally allowed time difference for matching entries (default: 0.02)',default=0.02)
    args = parser.parse_args()

    first_list = read_file_list(args.first_file)
    second_list = read_file_list(args.second_file)

    matches = associate(first_list, second_list,float(args.offset),float(args.max_difference))    

    if args.first_only:
        for a,b in matches:
            print("%f %s"%(a," ".join(first_list[a])))
    else:
        for a,b in matches:
            print("%f %s %f %s"%(a," ".join(first_list[a]),b-float(args.offset)," ".join(second_list[b])))
            
        

Open a terminal in the ORB_SLAM3 folder and run:

python3 ./Examples/RGB-D/associate.py ./rgbd_dataset_freiburg1_desk/rgb.txt ./rgbd_dataset_freiburg1_desk/depth.txt > ./rgbd_dataset_freiburg1_desk/associations.txt

This produces associations.txt.
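
To check the result (the timestamps will differ for you), each line of associations.txt should hold an RGB timestamp and image path followed by the matched depth timestamp and image path:

head -n 3 rgbd_dataset_freiburg1_desk/associations.txt
# expected format of every line:
# <rgb_timestamp> rgb/<rgb_timestamp>.png <depth_timestamp> depth/<depth_timestamp>.png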

4.2 Run ORB-SLAM3

Open a terminal in the ORB_SLAM3 folder:

./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt Examples/RGB-D/TUM1.yaml rgbd_dataset_freiburg1_desk rgbd_dataset_freiburg1_desk/associations.txt

It should now run successfully.

Note: TUM1.yaml holds the camera intrinsics. Make sure the intrinsics file matches the dataset you downloaded: the "freiburg1" in rgbd_dataset_freiburg1_desk means TUM1.yaml is the right one (freiburg2 and freiburg3 sequences use TUM2.yaml and TUM3.yaml respectively).
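
As a quick way to see which intrinsics a settings file provides (assuming the parameter names follow the ORB-SLAM3 convention, e.g. Camera.fx / Camera.fy / Camera.cx / Camera.cy):

# print the camera-related entries of the settings file
grep "^Camera" Examples/RGB-D/TUM1.yaml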
