LiDAR-Visual SLAM

1. MYNT EYE camera + VINS-Fusion

1.1 MYNT SDK

Standard version (S series): https://mynt-eye-s-sdk-docs-zh-cn.readthedocs.io/zh_CN/latest/src/sdk/install_ubuntu_src.html

Depth version (D series): https://mynt-eye-d-sdk.readthedocs.io/zh_CN/latest/index.html
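
A minimal build sketch for the D-series SDK and its ROS wrapper, assuming the repository location and Makefile targets described in the docs above (targets and paths may differ across SDK versions):

# Clone and build the MYNT EYE D SDK plus its ROS wrapper (assumed layout/targets)
cd ~/tools
git clone https://github.com/slightech/MYNT-EYE-D-SDK.git
cd MYNT-EYE-D-SDK
make init   # install build dependencies
make all    # build the SDK libraries and samples
make ros    # build the ROS wrapper (mynteye_wrapper_d)
source ./wrappers/ros/devel/setup.zsh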

1.2 Integrating VINS-Fusion

MYNT hosts VINS-Fusion as a sample project on GitHub [2]. There are two branches: docker_feat requires Docker, while master does not (master is used here).

The official docs describe the Docker-based workflow.

Alternative: download and build the master branch, see https://blog.csdn.net/CSDN_XCS/article/details/91039481

Since I have already built HKUST's original VINS-Fusion, only part of it needs to be overwritten and rebuilt; the two versions of VINS-Fusion are otherwise identical.

# cd ~/catkin_ws   # run from the root of the existing catkin workspace
echo "\nConfiguring and rebuilding VINS-Fusion/vins_estimator from MYNT-EYE-VINS-FUSION-Samples..."

# out-of-tree build of the sample's vins_estimator into the existing workspace
mkdir -p build/VINS-Fusion/vins_estimator_mynteyed
cd build/VINS-Fusion/vins_estimator_mynteyed
cmake ../../../src/MYNT-EYE-VINS-FUSION-Samples/vins_estimator -DCMAKE_INSTALL_PREFIX=../../../install -DCATKIN_DEVEL_PREFIX=../../../devel -DCMAKE_BUILD_TYPE=Release
make -j8
cd ../../../
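
After the rebuild, source the workspace so the vins launch files resolve to the rebuilt nodes (a small sketch, assuming the ~/catkin_ws workspace above):

# make the rebuilt vins_estimator visible to roslaunch
source ~/catkin_ws/devel/setup.zsh   # or setup.bash, depending on your shell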

1.3 Running VINS-Fusion on the MYNT® EYE D SDK

1) Run the mynteye node

cd /home/whu/tools/MYNT-EYE-D-SDK
source ./wrappers/ros/devel/setup.zsh
roslaunch mynteye_wrapper_d vins_fusion.launch

# roslaunch mynt_eye_ros_wrapper vins_fusion.launch #standard

2) In another terminal, run the vins node from the MYNT-EYE-VINS-FUSION-Samples project

# depth version (D series)
roslaunch vins mynteye-d-stereo-imu.launch

# standard version (S series)
roslaunch vins mynteye-s-stereo-imu.launch

1.4 Running from a bag offline

1) Record bag data; see /home/whu/tools/MYNT-EYE-D-SDK/tools/README.md. Make sure initialization is sufficient, with enough translational motion.

roslaunch mynteye_wrapper_d vins_fusion.launch

rosbag record -o mynteye.bag /mynteye/left/image_color /mynteye/imu/data_raw /mynteye/right/image_color
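
An optional sanity check (sketch) that the recorded bag actually contains the three topics before replaying it:

# list topics and message counts in the recorded bag
rosbag info mynteye.bag   # adjust to the timestamped file name that rosbag record -o actually produced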

2) Run

roslaunch vins mynteye-d-stereo-imu.launch
# optional loop-closure node
rosrun loop_fusion loop_fusion_node /home/whu/tools/MYNT-EYE-VINS-FUSION-Samples/vins_estimator/../config/mynteye-d/mynt_stereo_imu_config.yaml
# another terminal
rosbag play mynteye.bag

2. Hardware integration: MYNT EYE D1000-IR-120/Color + Velodyne VLP-16

2.1 Hardware structure design

The black bag is the portable battery pack.

2.2 Camera intrinsic and camera-IMU extrinsic calibration with Kalibr

Installation: Kalibr calibration tool, joint camera + IMU calibration (MYNT EYE camera)

Kalibr calibration of stereo intrinsic/extrinsic parameters and IMU extrinsics
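
A minimal Kalibr sketch, assuming an AprilGrid target file (april_6x6.yaml), an IMU noise file (imu.yaml), and calibration bags recorded from the MYNT topics; all file names are placeholders:

# stereo intrinsic/extrinsic calibration (placeholder bag/target names)
kalibr_calibrate_cameras --bag cams.bag \
  --topics /mynteye/left/image_color /mynteye/right/image_color \
  --models pinhole-radtan pinhole-radtan \
  --target april_6x6.yaml

# camera-IMU extrinsic calibration, reusing the camchain produced above
kalibr_calibrate_imu_camera --bag cam_imu.bag \
  --cam camchain-cams.yaml \
  --imu imu.yaml \
  --target april_6x6.yaml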

2.3 LiDAR-camera extrinsic calibration with Autoware

1) Autoware official installation tutorial

Installation pitfalls (these change between versions):

a. Workaround when installing python3-colcon-common-extensions fails (see the pip-based sketch after this list):

https://colcon.readthedocs.io/en/released/user/installation.html

b. Downloading the zip from a browser is recommended; git clone is often too slow and fails.

c. Building an older version from source is recommended (Version 1.10 or older).
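
If the apt package fails, the colcon documentation linked above also describes a pip-based install; a minimal sketch, assuming pip3 is available:

# install colcon and its common extensions via pip instead of apt
sudo pip3 install -U colcon-common-extensions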

After installation, run:

cd ~/tools/autoware/ros
./run

Autoware test demo

2) Method 1: manually select planes using the Calibration Tool Kit in the GUI (see the LiDAR-camera joint calibration survey [2])

Steps:

Start the sensor drivers and record the data:

# camera driver (source the D SDK workspace first)
cd ~/tools/MYNT-EYE-D-SDK
source ./wrappers/ros/devel/setup.zsh
roslaunch mynteye_wrapper_d display.launch
# LiDAR driver (separate terminal)
roslaunch velodyne_pointcloud VLP16_points.launch


# a folder with enough free space
cd '/media/whu/Research/07data'
# throttle the camera rate to reduce the data volume
rosrun topic_tools throttle messages /mynteye/left/image_color 20.0 /left &
# record the throttled image topic ...
rosbag record -o camera_lidar_calibra.bag /left /velodyne_points
# ... or record the full-rate image topic directly
rosbag record -o camera_lidar_calibra.bag /mynteye/left/image_color /velodyne_points

Play back the data, remapping /velodyne_points to /points_raw:

rosbag play '/home/whu/camera_lidar_calibra.bag' /velodyne_points:=/points_raw
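
An optional check (sketch) that the remap took effect and both topics are flowing while the bag is playing:

# confirm the remapped point cloud and the image topic are being published
rostopic hz /points_raw
rostopic hz /left   # or /mynteye/left/image_color, depending on which topic was recorded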

[1] autoware-wiki-calibration

[2] Survey of LiDAR-camera joint calibration (detailed steps based on Autoware)

[3] Introduction to self-driving car systems (22): LiDAR-camera joint calibration in practice with Autoware

3) Method 2: use a launch file to pick 9 pairs of 2D-3D correspondence points; this is more automated

https://github.com/Autoware-AI/utilities/tree/master/autoware_camera_lidar_calibrator

http://s1nh.org/post/calib-velodyne-camera/

Computing the parameters

Start the launch file, then pick 9 pairs of 2D-3D correspondence points in the image and the point cloud in turn to obtain the extrinsic calibration parameters:

# autoware.ai 
roslaunch autoware_camera_lidar_calibrator camera_lidar_calibration.launch intrinsics_file:='/home/chenshoubin/data/calibration/20191008_1022_autoware_camera_calibration_from_factory.yaml'    image_src:=/left

rviz

rosbag play data/calibration/t03_camera_lidar_calibra_2019-09-29-17-58-05.bag -r 3

Verifying the parameters

In rviz, choose Panels -> Add New Panel -> ImageViewerPlugin, then select the Image Topic and Point Topic in the new panel.

# 1st terminal
roslaunch src/autoware/utilities/runtime_manager/scripts/calibration_publisher.launch file:='/home/chenshoubin/data/calibration/20191009_manaul_autoware_lidar_camera_calibration.yaml'   image_topic_src:=/left

# 2nd terminal
rosrun points2image points2image _points_node:=/velodyne_points

# 3rd terminal
rviz

# 4th terminal
rosbag play '/home/chenshoubin/data/calibration/t03_camera_lidar_calibra_2019-09-29-17-58-05.bag' -r 1

Note: saving raised an error saying a cv2-related object had no write method; reinstalling opencv-python fixed it: sudo pip3 install opencv-python

 

3. LiDAR-Visual Odometry (LVO)

3.1 Record data

# source the MYNT SDK setup.zsh first
roslaunch mynteye_wrapper_d vins_fusion.launch

# another terminal; make sure the Velodyne network/IP is configured
roslaunch velodyne_pointcloud VLP16_points.launch

# another terminal
rosbag record -o velo_mynteye.bag /mynteye/imu/data_raw /mynteye/left/image_color /mynteye/right/image_color /velodyne_points
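
Before recording, it can help to confirm that all four topics are actually publishing (optional sketch):

# quick check that the camera, IMU and LiDAR drivers are up
rostopic hz /velodyne_points
rostopic hz /mynteye/imu/data_raw
rostopic hz /mynteye/left/image_color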

3.2 Loop detection with vision

Installing DBoW3

problem: /usr/bin/ld: /home/whu/tools/DBow3/install/lib/libDBoW3.a(Vocabulary.cpp.o): relocation R_X86_64_32S against `_ZTVN5DBoW310VocabularyE' can not be used when making a shared object; recompile with -fPIC

solution: add to DBoW3/CMakeLists.txt:

set(CMAKE_CXX_FLAGS "-fPIC")  
set(CMAKE_C_FLAGS "-fPIC")
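
An alternative (sketch) is to pass the flag at configure time instead of editing CMakeLists.txt, assuming an out-of-source build directory alongside the install prefix used above:

# rebuild DBoW3 with position-independent code enabled via the CMake cache
cd ~/tools/DBow3/build
cmake .. -DCMAKE_POSITION_INDEPENDENT_CODE=ON -DCMAKE_INSTALL_PREFIX=../install
make -j8 && make install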

3.3 Run lv-slam

roslaunch lv_slam global_graph_kitti.launch res_dir:='/home/whu/data/WHUKylinHandheldLidarVisual3D/velo_mynteye_2019-10-14-22-16-59.bag_lvo' seq:=04

rosbag play --clock '/home/whu/data/WHUKylinHandheldLidarVisual3D/velo_mynteye_2019-10-14-22-16-59.bag'

rosservice call /global_graph/dump "destination: '/home/whu/data/WHUKylinHandheldLidarVisual3D/velo_mynteye_2019-10-14-22-16-59.bag_lvo/data/dump'"

5. Installing some third-party libraries/software

5.1 evo

evo

# after cloning the repo, install from source
pip install .
# if using python3
pip3 install .
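
Typical usage for evaluating the trajectories produced above (a sketch; file names and formats are placeholders):

# plot a trajectory stored in TUM format
evo_traj tum traj_est.txt --plot
# absolute pose error against a ground-truth trajectory in KITTI format
evo_ape kitti ground_truth.txt traj_est.txt -va --plot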

 
