Deploying and Running ManhattanSLAM Myself

I deployed ManhattanSLAM on my laptop running Ubuntu 18.04, on which I had previously installed ORB-SLAM3.

git clone https://gitee.com/maxibooksiyi/ManhattanSLAM.git

Then run:

cd ManhattanSLAM
chmod +x build.sh
./build.sh

The build failed because tinyply could not be found. This was expected; I had seen the same issue in other people's blog posts.

GitHub - ddiakopoulos/tinyply: C++11 ply 3d mesh format importer & exporter

git clone -b 2.3.2 https://github.com/ddiakopoulos/tinyply

Change line 17 of tinyply's CMakeLists.txt so that the shared library (libtinyply.so) is built, then compile and install:

cd tinyply
mkdir build
cd build
cmake ..
make
sudo make install

Then I ran ./build.sh again.

I deleted the build folder and recompiled. This time the compilation seemed to freeze partway through; the errors were all caused by running out of memory.
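When a C++ build freezes and dies partway through like this, it is usually the OOM killer terminating g++ because make launched too many parallel jobs. A possible workaround (assumption: build.sh invokes `make -j` internally, as ORB-SLAM-style build scripts usually do) is to cap the job count based on available RAM; the 2 GB-per-job constant below is a rough guess, not something stated anywhere:

```python
# Heuristic sketch: pick a "make -j" value from the available memory,
# assuming each C++ compile job needs about 2 GB of RAM (a rough guess).

def suggest_jobs(mem_available_kb, kb_per_job=2 * 1024 * 1024):
    """Return a parallel-job count that should avoid OOM, at least 1."""
    return max(1, mem_available_kb // kb_per_job)

# With 8 GB available (e.g. read from MemAvailable in /proc/meminfo),
# run "make -j4" instead of an unbounded "make -j".
print("make -j%d" % suggest_jobs(8 * 1024 * 1024))
```

Alternatively, simply rerunning the build lets make resume from the already-compiled object files, which is why repeated runs can eventually finish.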

I then ran ./build.sh once more.

Scrolling back up through the output, I could still find some useful information.

This seems to be a common error; see the CSDN post "【在ROS下编译ORB_SLAM2遇到错误】pangolin could not be found because dependency Eigen3 could not be found".

Those posts all say it is caused by the Pangolin version being too high; see also the Zhihu post "ORB-SLAM2编译时出现找不到Eigen库的问题".

Indeed, checking my notes from installing ORB-SLAM3, I had installed Pangolin v0.6, which is too new; it probably needs to be downgraded to v0.5.

It seems ORB-SLAM3 can also use Pangolin v0.5; see the CSDN post "【算法】跑ORB-SLAM3遇到的问题、解决方法、效果展示(环境:Ubuntu18.04+ROS melodic)".

First, uninstall the previously installed Pangolin v0.6: go to its build folder and run sudo make uninstall.

If it has already been uninstalled, running sudo make uninstall in the build folder again will just report that certain files do not exist.

git clone -b v0.5 https://gitee.com/maxibooksiyi/Pangolin.git
cd Pangolin
mkdir build
cd build 
cmake ..
make
sudo make install

After rerunning ./build.sh, only this error seemed to remain:

Following that reference, I modified the CMakeLists.txt: I simply changed the path of libtinyply.so to an absolute path, without moving the tinyply folder anywhere for now.

After changing the CMakeLists.txt this way and running ./build.sh again, it could indeed link libtinyply.so, and the build completed 100%!!! Of course, for easier deployment later, tinyply could be moved into ManhattanSLAM's Thirdparty folder.
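For reference, the kind of change involved looks roughly like this. This is a sketch, not the project's exact CMakeLists.txt, and /usr/local/lib is only the default `make install` location on Ubuntu; check where libtinyply.so actually landed on your machine.

```cmake
# Sketch: link ManhattanSLAM against the installed tinyply shared library
# via an absolute path, instead of a path the linker cannot resolve.
target_link_libraries(${PROJECT_NAME}
    /usr/local/lib/libtinyply.so   # adjust to wherever "sudo make install" put it
)
```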

Here is the content of associate.py:

#!/usr/bin/python
# Software License Agreement (BSD License)
#
# Copyright (c) 2013, Juergen Sturm, TUM
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above
#    copyright notice, this list of conditions and the following
#    disclaimer in the documentation and/or other materials provided
#    with the distribution.
#  * Neither the name of TUM nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
# Requirements: 
# sudo apt-get install python-argparse

"""
The Kinect provides the color and depth images in an un-synchronized way. This means that the set of time stamps from the color images do not intersect with those of the depth images. Therefore, we need some way of associating color images to depth images.

For this purpose, you can use the ''associate.py'' script. It reads the time stamps from the rgb.txt file and the depth.txt file, and joins them by finding the best matches.
"""

import argparse
import sys
import os
import numpy
def read_file_list(filename):
    """
    Reads a trajectory from a text file. 
    
    File format:
    The file format is "stamp d1 d2 d3 ...", where stamp denotes the time stamp (to be matched)
    and "d1 d2 d3.." is arbitary data (e.g., a 3D position and 3D orientation) associated to this timestamp. 
    
    Input:
    filename -- File name
    
    Output:
    dict -- dictionary of (stamp,data) tuples
    
    """
    file = open(filename)
    data = file.read()
    lines = data.replace(","," ").replace("\t"," ").split("\n") 
    list = [[v.strip() for v in line.split(" ") if v.strip()!=""] for line in lines if len(line)>0 and line[0]!="#"]
    list = [(float(l[0]), l[1:]) for l in list if len(l) > 1]
    return dict(list)

def associate(first_list, second_list, offset, max_difference):
    """
    Associate two dictionaries of (stamp,data). As the time stamps never match exactly, we aim 
    to find the closest match for every input tuple.
    
    Input:
    first_list -- first dictionary of (stamp,data) tuples
    second_list -- second dictionary of (stamp,data) tuples
    offset -- time offset between both dictionaries (e.g., to model the delay between the sensors)
    max_difference -- search radius for candidate generation

    Output:
    matches -- list of matched tuples ((stamp1,data1),(stamp2,data2))
    
    """
    first_keys = list(first_list.keys())  # list() so that .remove() also works under Python 3
    second_keys = list(second_list.keys())
    potential_matches = [(abs(a - (b + offset)), a, b) 
                         for a in first_keys 
                         for b in second_keys 
                         if abs(a - (b + offset)) < max_difference]
    potential_matches.sort()
    matches = []
    for diff, a, b in potential_matches:
        if a in first_keys and b in second_keys:
            first_keys.remove(a)
            second_keys.remove(b)
            matches.append((a, b))
    
    matches.sort()
    return matches


if __name__ == '__main__':
    
    # parse command line
    parser = argparse.ArgumentParser(description='''
    This script takes two data files with timestamps and associates them   
    ''')
    parser.add_argument('first_file', help='first text file (format: timestamp data)')
    parser.add_argument('second_file', help='second text file (format: timestamp data)')
    parser.add_argument('--first_only', help='only output associated lines from first file', action='store_true')
    parser.add_argument('--offset', help='time offset added to the timestamps of the second file (default: 0.0)',default=0.0)
    parser.add_argument('--max_difference', help='maximally allowed time difference for matching entries (default: 0.02)',default=0.02)
    args = parser.parse_args()

    first_list = read_file_list(args.first_file)
    second_list = read_file_list(args.second_file)

    matches = associate(first_list, second_list, float(args.offset), float(args.max_difference))

    if args.first_only:
        for a,b in matches:
            print("%f %s"%(a," ".join(first_list[a])))
    else:
        for a,b in matches:
            print("%f %s %f %s" % (a, " ".join(first_list[a]), b - float(args.offset), " ".join(second_list[b])))

You can download it here: Computer Vision Group - Useful tools for the RGB-D benchmark

Open a terminal in the folder containing associate.py and run the following command:

python associate.py /home/maxi/下载/rgbd_dataset_freiburg1_xyz/rgb.txt /home/maxi/下载/rgbd_dataset_freiburg1_xyz/depth.txt > associations.txt

After this command finishes, an associations.txt file appears in the same folder as associate.py.
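To make concrete what associate.py computes, here is a self-contained toy version of its greedy nearest-timestamp matching; the timestamp values below are made up for illustration, not taken from the dataset:

```python
# Toy version of associate.py's matching: sort all candidate pairs by time
# difference and greedily keep the closest pairs within max_difference.

def associate(first, second, offset=0.0, max_difference=0.02):
    candidates = sorted(
        (abs(a - (b + offset)), a, b)
        for a in first for b in second
        if abs(a - (b + offset)) < max_difference
    )
    first, second = set(first), set(second)
    matches = []
    for _, a, b in candidates:
        if a in first and b in second:  # each timestamp is matched at most once
            first.remove(a)
            second.remove(b)
            matches.append((a, b))
    return sorted(matches)

rgb_stamps = [1.000, 1.033, 1.066]    # made-up RGB timestamps (seconds)
depth_stamps = [1.001, 1.034, 1.100]  # made-up depth timestamps

# 1.066 has no depth frame within 0.02 s, so only two pairs are matched.
print(associate(rgb_stamps, depth_stamps))
```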

Open a terminal in the ManhattanSLAM directory and run the following command:

./Example/manhattan_slam Vocabulary/ORBvoc.txt Example/TUM1.yaml /home/maxi/下载/rgbd_dataset_freiburg1_xyz /home/maxi/下载/associations.txt

It really ran, which felt pretty great!!!

This may also be the first time I have run an RGB-D SLAM system.

For how to download TUM datasets, see my earlier blog post in my SLAM wiki collection (the one with many SLAM articles).

Basically, the whole deployment goes smoothly if you follow that blog post, and for running it, reading the README carefully makes things clear; it is just basic TUM-dataset usage. You do need to manually install tinyply 2.3.2 and Pangolin v0.5, and change the libtinyply.so path in ManhattanSLAM's CMakeLists.txt to a path that can actually be found. https://blog.csdn.net/weixin_50508111/article/details/125421143

I then downloaded another dataset.

/home/maxi/下载/rgbd_dataset_freiburg3_structure_notexture_near

python associate.py /home/maxi/下载/rgbd_dataset_freiburg3_structure_notexture_near/rgb.txt /home/maxi/下载/rgbd_dataset_freiburg3_structure_notexture_near/depth.txt > associations.txt

./Example/manhattan_slam Vocabulary/ORBvoc.txt Example/TUM1.yaml /home/maxi/下载/rgbd_dataset_freiburg3_structure_notexture_near /home/maxi/下载/associations.txt

This dataset's scene is genuinely tough and a real test, and ManhattanSLAM still ran quite well on it.
