Summary of previous articles:
Multi-sensor fusion SLAM:
Open-source framework tests
1. Tixiao Shan's latest work LVI-SAM (LIO-SAM + VINS-Mono), a visual-LiDAR-inertial odometry SLAM framework: environment setup and getting it running
2. LiDAR SLAM framework study: A-LOAM---introduction and demo
8. LiDAR SLAM framework study: LeGO-LOAM---framework introduction and running demo
11. LiDAR-inertial SLAM framework study: LIO-SAM---framework introduction and running demo
12. LiDAR SLAM framework study: livox-loam installation and running datasets
22. Running the official datasets with the HKU MaRS Lab R3LIVE framework
24. Running the official datasets with the HKU MaRS Lab FAST-LIO2 framework
27. Running the EuRoC dataset with the open-source VIO-SLAM framework VINS-Mono
Real-vehicle tests
7. LiDAR SLAM framework study: A-LOAM---indoor mapping with a RoboSense 16-line LiDAR
9. LiDAR SLAM framework study: LeGO-LOAM---outdoor mapping with a RoboSense 16-line LiDAR, comparison with other frameworks, bag recording and data saving
13. LiDAR SLAM framework study: using the Livox Mid-70 LiDAR and running frameworks outdoors in real time
16. LiDAR-inertial SLAM framework study: configuring our own sensors to run LIO-SAM outdoors in real time
18. Rough outdoor comparison of several SLAM frameworks (A-LOAM, LeGO-LOAM, LIO-SAM, livox-loam)
26. Running self-recorded datasets (Mid-70 LiDAR and SBG Ellipse-N INS) with the HKU MaRS Lab FAST-LIO2 framework
Calibration
14. LiDAR-inertial SLAM framework study: IMU intrinsic calibration
15. LiDAR-inertial SLAM framework study: IMU-LiDAR extrinsic calibration (1)
20. LiDAR-visual-inertial SLAM framework study: camera intrinsic calibration
21. LiDAR-visual-inertial SLAM framework study: camera-LiDAR extrinsic calibration (1)
Open-source framework study
3. LiDAR SLAM framework study: A-LOAM---project code walkthrough---1. project files (excluding the main source code)
4. LiDAR SLAM framework study: A-LOAM---project code walkthrough---2. scanRegistration.cpp--front-end LiDAR processing and feature extraction
5. LiDAR SLAM framework study: A-LOAM---project code walkthrough---3. laserOdometry.cpp--front-end LiDAR odometry and coarse pose estimation
6. LiDAR SLAM framework study: A-LOAM---project code walkthrough---4. laserMapping.cpp--back-end mapping and refined frame pose estimation (optimization)
10. LiDAR SLAM framework study: LeGO-LOAM---algorithm principles, improvements, and project code
17. LiDAR-inertial SLAM framework study: IMU and IMU pre-integration
19. LiDAR-inertial SLAM framework study: project code walkthrough---code structure and explanation of some files
23. LiDAR-inertial SLAM framework study: LIO-SAM project code walkthrough---background knowledge
Starting from the post below, the series moves into the Localization and Navigation part:
25. The difference between Mapping and Localization in SLAM, and some thoughts
Planning
1. Path planning---2D path-planning simulation with gmapping + amcl + map_server + move_base
2. Path planning---2D path planning on a real vehicle with gmapping + amcl + map_server + move_base
Global Localization
1. Global localization---testing the open-source localization framework LIO-SAM_based_relocalization on self-recorded datasets
2. Global localization---testing the open-source localization framework livox-relocalization on self-recorded datasets
3. Global localization---LIO-SAM mapping and localization under RTK global constraints (1)
Grid maps:
28. Generating 2D and 3D occupancy grid maps from 3D point clouds, online and offline
RealSense D435 Driver Installation
Install the SDK
1. Download the source
git clone https://github.com/IntelRealSense/librealsense
cd librealsense
It is recommended to use a slightly older SDK version; I used librealsense-2.36.0, and librealsense-2.45.0 is another option.
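If you want to pin one of those versions, you can check out the matching release tag right after cloning; a minimal sketch, assuming the usual v-prefixed tag names in the librealsense repository:
git tag -l              # list the available release tags
git checkout v2.36.0    # or v2.45.0; tag names assumed, verify against the list above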
2. Install dependencies
sudo apt-get install libudev-dev pkg-config libgtk-3-dev
sudo apt-get install libusb-1.0-0-dev pkg-config
sudo apt-get install libglfw3-dev
sudo apt-get install libssl-dev
3. Install the udev permission rules
In the librealsense folder, run:
sudo cp config/99-realsense-libusb.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules && udevadm trigger
If you uninstall and reinstall later, the 99-realsense-libusb.rules file needs to be removed first.
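A minimal sketch of that cleanup step, assuming the rules file was copied to the default location above:
sudo rm /etc/udev/rules.d/99-realsense-libusb.rules
sudo udevadm control --reload-rules && udevadm trigger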
4. Build and install
mkdir build
cd build
cmake ../ -DBUILD_EXAMPLES=true
make
sudo make install
Check the result with the realsense-viewer tool:
realsense-viewer
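As an extra check that the SDK detects the camera, the command-line tools built alongside the examples can be used; assuming they were built and installed:
rs-enumerate-devices    # lists connected RealSense devices with serial number and firmware version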
Install RealSense-ROS
sudo apt-get install ros-melodic-rgbd-launch
git clone https://github.com/IntelRealSense/realsense-ros.git
git clone https://github.com/pal-robotics/ddynamic_reconfigure.git
cd ~/catkin_ws && catkin_make
After the build completes, test with the following command:
roslaunch realsense2_camera demo_pointcloud.launch
It is recommended to use slightly older versions of these two packages; I used ddynamic_reconfigure-0.2.2 with realsense-ros-2.3.2 (ddynamic_reconfigure-0.3.2 with realsense-ros-2.3.2 also works).
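To pin those versions before running catkin_make, the corresponding tags can be checked out in each cloned package; a minimal sketch, assuming both repositories were cloned into ~/catkin_ws/src and that the tag names match the release numbers:
cd ~/catkin_ws/src/realsense-ros && git checkout 2.3.2         # assumed tag name, check with git tag -l
cd ~/catkin_ws/src/ddynamic_reconfigure && git checkout 0.2.2  # assumed tag name, check with git tag -l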
Start the camera node
roslaunch realsense2_camera rs_rgbd.launch
Error:
/opt/ros/melodic/lib/nodelet/nodelet: symbol lookup error: /home/qjs/code/D435_ws/devel/lib//librealsense2_camera.so: undefined symbol: _ZN20ddynamic_reconfigure19DDynamicReconfigure16registerVariableIiEEvRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEET_RKN5boost8functionIFvSA_EEES9_SA_SA_S9_
[camera/realsense2_camera_manager-2] process has died [pid 27469, exit code 127, cmd /opt/ros/melodic/lib/nodelet/nodelet manager __name:=realsense2_camera_manager __log:=/home/qjs/.ros/log/1eca8a82-e8c4-11ec-807e-70b5e831e2ce/camera-realsense2_camera_manager-2.log].
log file: /home/qjs/.ros/log/1eca8a82-e8c4-11ec-807e-70b5e831e2ce/camera-realsense2_camera_manager-2*.log
Locate the duplicate library:
sudo find / -name librealsense2_camera.so
Delete librealsense2_camera.so from one of the locations found, keeping only one copy.
Warning:
10/06 21:55:35,555 WARNING [140685336372992] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: Resource temporarily unavailable, number: 11
This can be ignored; the camera works fine.
Installing Dr. Gao Xiang's ORBSLAM2_with_pointcloud_map
Install the required dependency libraries
sudo apt-get install cmake gcc g++ git vim
sudo apt-get install libglew-dev
sudo apt-get install libboost-dev libboost-thread-dev
sudo apt-get install libboost-filesystem-dev
sudo apt-get install libpython2.7-dev
sudo apt-get install build-essential
Install the Pangolin and Eigen libraries; versions 0.5 and 3.2 respectively are recommended, as they are less likely to cause build errors.
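One way to obtain the pinned sources, assuming the v0.5 tag on the Pangolin GitHub repository and the 3.2.10 tag on the Eigen GitLab repository:
git clone https://github.com/stevenlovegrove/Pangolin.git && cd Pangolin && git checkout v0.5 && cd ..   # tag name assumed
git clone https://gitlab.com/libeigen/eigen.git && cd eigen && git checkout 3.2.10 && cd ..              # tag name assumed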
cd Pangolin
mkdir build
cd build
cmake ..
make
sudo make install
cd eigen
mkdir build
cd build
cmake ..
make
sudo make install
Build the non-ROS version
Download the modified code.
In the g2o_with_orbslam2 main folder:
mkdir build
cd build
cmake ..
make -j8
sudo make install
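Since this installs shared libraries (typically into /usr/local/lib), refreshing the dynamic linker cache afterwards can avoid stale-library issues:
sudo ldconfig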
Delete the three build folders: ORB_SLAM2_modified/build, ORB_SLAM2_modified/Thirdparty/DBoW2/build, and ORB_SLAM2_modified/Thirdparty/g2o/build.
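A minimal sketch of that cleanup, run from inside ORB_SLAM2_modified:
rm -rf build Thirdparty/DBoW2/build Thirdparty/g2o/build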
Open a terminal in the ORB_SLAM2_modified folder:
cd Thirdparty/DBoW2
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make
Open a terminal in the ORB_SLAM2_modified folder again:
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j8
Dataset test
./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt Examples/RGB-D/TUM1.yaml datasets/rgbd_dataset_freiburg1_xyz datasets/rgbd_dataset_freiburg1_xyz/association.txt
./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt Examples/RGB-D/TUM1.yaml datasets/rgbd_dataset_freiburg1_room datasets/rgbd_dataset_freiburg1_room/association.txt
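These TUM sequences need an association.txt pairing RGB and depth timestamps. If your download does not include one, it can be generated with the associate.py script from the TUM RGB-D benchmark tools (the script is assumed to have been downloaded separately into the dataset folder):
cd datasets/rgbd_dataset_freiburg1_xyz
python associate.py rgb.txt depth.txt > association.txt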
The saved point cloud is in the main folder, named vslam.pcd.
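A quick way to inspect that cloud, assuming the PCL command-line tools are installed (sudo apt-get install pcl-tools):
pcl_viewer vslam.pcd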
Build the ROS version
Go to the ORB_SLAM2_modified folder and open ~/.bashrc:
gedit ~/.bashrc
Add your ROS package path at the end of the file and save (replace /your/path with your own directory):
export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:/your/path/ORB_SLAM2_modified/Examples/ROS
After saving, run in the terminal:
source ~/.bashrc
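To confirm the path was picked up:
echo $ROS_PACKAGE_PATH    # should now include .../ORB_SLAM2_modified/Examples/ROS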
Configure the ROS environment
sudo gedit /opt/ros/melodic/setup.bash
export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:/your/path/ORB_SLAM2_modified/Examples/ROS
After saving, run in the terminal:
source /opt/ros/melodic/setup.bash
Delete the files in ORB_SLAM2_modified/Examples/ROS/ORB_SLAM2/build.
Then build:
chmod +x build_ros.sh
./build_ros.sh
ROS error: ModuleNotFoundError: No module named 'rospkg'
pip install rospkg
Boost library issue:
Add -lboost_system to the linked libraries in ~/catkin_ws/src/ORB_SLAM2/Examples/ROS/ORB_SLAM2/CMakeLists.txt.
If a g2o-related error appears, it is most likely a conflict with the g2o that ships with ROS; uninstall it:
sudo apt-get remove ros-melodic-libg2o
Check the camera intrinsics
roslaunch realsense2_camera rs_rgbd.launch
rostopic echo /camera/color/camera_info
header:
  seq: 1469
  stamp:
    secs: 1654853975
    nsecs: 751267910
  frame_id: "camera_color_optical_frame"
height: 480
width: 640
distortion_model: "plumb_bob"
D: [0.0, 0.0, 0.0, 0.0, 0.0]
K: [611.2393798828125, 0.0, 319.9852600097656, 0.0, 610.677490234375, 242.4569854736328, 0.0, 0.0, 1.0]
R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
P: [611.2393798828125, 0.0, 319.9852600097656, 0.0, 0.0, 610.677490234375, 242.4569854736328, 0.0, 0.0, 0.0, 1.0, 0.0]
binning_x: 0
binning_y: 0
roi:
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: False
The K shown in the terminal holds the camera intrinsics, laid out row by row as K = [fx, 0, cx; 0, fy, cy; 0, 0, 1], so fx = 611.239, fy = 610.677, cx = 319.985, cy = 242.457. The D435i baseline is 50 mm.
Modify the parameters accordingly to obtain a new D435i.yaml:
%YAML:1.0
#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------
# Camera calibration and distortion parameters (OpenCV)
Camera.fx: 611.239380
Camera.fy: 610.677490
Camera.cx: 319.985260
Camera.cy: 242.456985
Camera.k1: 0.0
Camera.k2: 0.0
Camera.p1: 0.0
Camera.p2: 0.0
Camera.k3: 0.0
Camera.width: 640
Camera.height: 480
# Camera frames per second
Camera.fps: 30.0
# IR projector baseline times fx (approx.)
# bf = baseline (in meters) * fx; the D435i baseline is 50 mm
Camera.bf: 50.0
# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1
# Close/Far threshold, in multiples of the baseline.
ThDepth: 40.0
# Depthmap values factor
DepthMapFactor: 1000.0
#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------
# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1000
# ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2
# ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8
# ORB Extractor: Fast threshold
# The image is divided into a grid. At each cell, FAST corners are extracted imposing a minimum response.
# First we impose iniThFAST. If no corners are detected, we impose the lower value minThFAST.
# You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7
#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500
PointCloudMapping.Resolution: 0.01
meank: 50
thresh: 2.0
Run
roslaunch realsense2_camera rs_rgbd.launch
rosrun ORB_SLAM2 RGBD Vocabulary/ORBvoc.txt Examples/RGB-D/D435i.yaml
Record a bag
rosbag record -o 20220611.bag /camera/color/image_raw /camera/aligned_depth_to_color/image_raw
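To replay the recording later for offline testing (the -o option makes rosbag append a timestamp to the given prefix, so substitute the actual file name):
rosbag play <recorded_bag_name>.bag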
Reference blogs
Final result