Running VINS-Mono with the MYNT EYE depth camera on Ubuntu 18.04 (solving the problem of no trajectory appearing in rviz)

I have just started working on IMU + VSLAM over the past couple of days, so I am writing down the learning process here for later reference. I also hope it can help others. For reference only; if anything is wrong, please point it out!

Step 1: Download the official MYNT EYE depth camera SDK (you can use the link I provided), then build it:
cd <sdk>
make init
make ros
make all
echo "source ~/MYNT-EYE-D-SDK/wrappers/ros/devel/setup.bash" >> ~/.bashrc
source ~/.bashrc

At this point the SDK is built.
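A quick sanity check that the ROS wrapper was built and sourced correctly (a minimal sketch using standard ROS tools; the package name is the one used by the launch command in Step 4):

rospack find mynteye_wrapper_d   # should print the wrapper package path if the build and sourcing succeeded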
To use the camera, you need to configure its calibration parameters, all of which can be obtained through the SDK.

Step 2: Get the camera and IMU parameters.
cd MYNT-EYE-D-SDK
./samples/_output/bin/get_imu_params
./samples/_output/bin/get_img_params

After running the commands above, two parameter files are generated in the bin/ directory, containing the camera and IMU parameters respectively, which you can refer to later.
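A quick way to locate the generated files (a sketch based on the note above; the exact file names may differ between SDK versions):

cd ~/MYNT-EYE-D-SDK
ls ./samples/_output/bin/   # the two generated parameter files should appear here next to the sample executables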

Step 3: Install VINS-Mono and configure the parameter files

VINS-Mono source code: download link
Note: the following steps assume the catkin workspace has already been created and the downloaded VINS-Mono has been placed under src/. I had already done this in advance, so the steps below go straight into building.

cd  ~/Downloads/vins-mono/catkin_ws    //your vins-mono work-space
catkin_make
source  devel/setup.bash
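If the build succeeds, the three VINS-Mono packages should now be discoverable after sourcing (a quick check; the package names are the ones used in the launch file below):

rospack find feature_tracker
rospack find vins_estimator
rospack find pose_graph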

1. Create a new file named mynteye.launch under vins-mono/catkin_ws/src/VINS-Mono-master/vins_estimator/launch.
The contents of mynteye.launch are as follows:

<launch>
    <arg name="config_path" default = "$(find feature_tracker)/../config/mynteye/mynteye_config.yaml" />
	  <arg name="vins_path" default = "$(find feature_tracker)/../config/../" />
    
    <node name="feature_tracker" pkg="feature_tracker" type="feature_tracker" output="screen">
        <param name="config_file" type="string" value="$(arg config_path)" />
        <param name="vins_folder" type="string" value="$(arg vins_path)" />
    </node>

    <node name="vins_estimator" pkg="vins_estimator" type="vins_estimator" output="screen">
       <param name="config_file" type="string" value="$(arg config_path)" />
       <param name="vins_folder" type="string" value="$(arg vins_path)" />
    </node>

    <node name="pose_graph" pkg="pose_graph" type="pose_graph" output="screen">
        <param name="config_file" type="string" value="$(arg config_path)" />
        <param name="visualization_shift_x" type="int" value="0" />
        <param name="visualization_shift_y" type="int" value="0" />
        <param name="skip_cnt" type="int" value="0" />
        <param name="skip_dis" type="double" value="0" />
    </node>

</launch>

Copy the contents above as-is; no modification is needed, since config_path already points to the mynteye/mynteye_config.yaml file created in the next step.
2. Under the /vins-mono/catkin_ws/src/VINS-Mono-master/config/ folder, create a new folder named mynteye, and inside it create a new file named mynteye_config.yaml.
The contents of mynteye_config.yaml are as follows:

%YAML:1.0

#common parameters
imu_topic: "/mynteye/imu/data_raw"  # replace with your IMU topic
image_topic: "/mynteye/left/image_color"  # replace with your camera topic
output_path: "/home/shuchun/Downloads/vins-mono/catkin_ws/src/VINS-Mono/config/output_path/" # replace with your path (see the note below this config)

#camera calibration 
model_type: PINHOLE
camera_name: camera
image_width: 1280   # replace with your camera parameters (obtained in Step 2)
image_height: 640   # replace with your camera parameters
distortion_parameters:   # replace with your distortion parameters
   k1: -0.266278
   k2: 0.0527945
   p1: -0.000182013
   p2: 0.000422317
projection_parameters:   # replace with your camera intrinsics
   fx: 365.75
   fy: 373.236
   cx: 357.402
   cy: 241.418

# Extrinsic parameter between IMU and Camera. 
# If you are using the MYNT EYE depth camera, you can set this to 0; R and t can then be obtained from the MYNT EYE SDK
estimate_extrinsic: 0   # 0  Have an accurate extrinsic parameters. We will trust the following imu^R_cam, imu^T_cam, don't change it.
                        # 1  Have an initial guess about extrinsic parameters. We will optimize around your initial guess.
                        # 2  Don't know anything about extrinsic parameters. You don't need to give R,T. We will try to calibrate it. Do some rotation movement at beginning.                        
#If you choose 0 or 1, you should write down the following matrix.
#Rotation from camera frame to imu frame, imu^R_cam
extrinsicRotation: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [-0.00646620000000000, -0.99994994000000004, -0.00763565000000000, 0.99997908999999996, -0.00646566000000000, -0.00009558000000000, 0.00004620000000000, -0.00763611000000000, 0.99997084000000003]
#Translation from camera frame to imu frame, imu^T_cam
extrinsicTranslation: !!opencv-matrix
   rows: 3
   cols: 1
   dt: d
   data: [0.00533646000000000, -0.04302922000000000, 0.02303124000000000]

#feature tracker parameters
max_cnt: 150            # max feature number in feature tracking
min_dist: 30            # min distance between two features 
freq: 10                # frequency (Hz) at which to publish the tracking result. At least 10 Hz for good estimation. If set to 0, the frequency will be the same as the raw image 
F_threshold: 1.0        # ransac threshold (pixel)
show_track: 1           # publish tracking image as topic
equalize: 1             # if the image is too dark or too bright, turn on equalization to find enough features
fisheye: 0              # if using a fisheye lens, turn this on. A circular mask will be loaded to remove noisy edge points

#optimization parameters
max_solver_time: 0.04  # max solver iteration time (ms), to guarantee real time
max_num_iterations: 8   # max solver iterations, to guarantee real time
keyframe_parallax: 10.0 # keyframe selection threshold (pixel)

#imu parameters       The more accurate parameters you provide, the better performance
acc_n: 0.08          # accelerometer measurement noise standard deviation. #0.2   0.04
gyr_n: 0.004         # gyroscope measurement noise standard deviation.     #0.05  0.004
acc_w: 0.00004         # accelerometer bias random walk noise standard deviation.  #0.02
gyr_w: 2.0e-6       # gyroscope bias random walk noise standard deviation.     #4.0e-5
g_norm: 9.81007     # gravity magnitude

#loop closure parameters
loop_closure: 1                    # start loop closure
load_previous_pose_graph: 0        # load and reuse previous pose graph; load from 'pose_graph_save_path'
fast_relocalization: 0             # useful in real-time and large project
pose_graph_save_path: "/home/fish/ws_vins/src/VINS-Mono/config/output_path/" # replace with your path

#unsynchronization parameters
estimate_td: 0                      # online estimate time offset between camera and imu
td: 0.0                             # initial value of time offset. unit: s. read image clock + td = real image clock (IMU clock)

#rolling shutter parameters
rolling_shutter: 0                  # 0: global shutter camera, 1: rolling shutter camera
rolling_shutter_tr: 0               # unit: s. rolling shutter read out time per frame (from data sheet). 

#visualization parameters
save_image: 1                   # save images in the pose graph for visualization purposes; you can disable this by setting 0 
visualize_imu_forward: 0        # output IMU forward propagation to achieve low-latency and high-frequency results
visualize_camera_size: 0.4      # size of camera marker in RVIZ
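One detail worth double-checking: output_path and pose_graph_save_path should point to directories that actually exist, otherwise saving the results or the pose graph may fail. A minimal sketch using the path from this config (replace it with your own):

mkdir -p /home/shuchun/Downloads/vins-mono/catkin_ws/src/VINS-Mono/config/output_path/   # do the same for pose_graph_save_path if it points somewhere else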

Before the demo below, plug in the camera first, or you might forget it later!

Step 4: Start the camera and run VINS-Mono

1) Start the camera

cd  /home/shuchun/Downloads/MYNT-EYE-D-SDK-master  //your file path
source ./wrappers/ros/devel/setup.bash
roslaunch mynteye_wrapper_d mynteye.launch

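Once the wrapper is up, it is worth verifying that the topics the YAML expects are actually being published (a sketch using standard ROS tools; the topic names are the ones from mynteye_config.yaml):

rostopic list | grep mynteye            # the image and IMU topics should show up here
rostopic hz /mynteye/left/image_color   # confirm image frames are arriving
rostopic hz /mynteye/imu/data_raw       # confirm IMU samples are arriving (normally much faster than the images)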
2) Start VINS

cd /home/shuchun/Downloads/vins-mono/catkin_ws   //your workspace
source devel/setup.bash
roslaunch vins_estimator mynteye.launch  //this mynteye.launch is the file we just created

If this step starts normally, the terminal will keep printing the messages published by the camera. If nothing is printed, or errors appear, go back and carefully check the previous steps and think about where things went wrong (if this step is not working, no trajectory will be shown later).
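To narrow down where the pipeline breaks, you can check that all three VINS nodes are running and wired to the camera topics (a sketch with standard ROS tools; the node names come from the launch file above):

rosnode list   # feature_tracker, vins_estimator and pose_graph should all be listed
rqt_graph      # shows whether the camera topics are actually connected to the nodes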
3) Start the ROS visualization tool rviz

cd /home/shuchun/Downloads/vins-mono/catkin_ws   //your workspace
source devel/setup.bash
roslaunch vins_estimator vins_rviz.launch

Why the trajectory does not appear:
At first there is no trajectory. Move the handheld camera with small translations and rotations. Personally, I think this is because it is a monocular system: like the mono mode in ORB-SLAM2, it has no absolute scale and needs to be initialized first. After that the trajectory appears. If it still does not work, close the terminals from steps 2) and 3), restart the terminal from step 2), immediately initialize (translate and rotate the handheld camera), and then start the visualization from step 3).
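Independently of rviz, you can also watch whether the estimator has started producing poses after initialization. The topic name below is my assumption of the usual VINS-Mono output; check the actual names with rostopic list first:

rostopic list | grep vins_estimator   # see what the estimator publishes
rostopic hz /vins_estimator/odometry  # assumed topic; it should start ticking once initialization succeeds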

Remaining issues to be sorted out…
