13-[LVI-SAM] visual_odometry: A First Look

2021SC@SDUSC


The previous chapter covered most of the code and functionality of visual_feature. Although that node was not explained completely, the overall idea of the code should now be clear. The remaining fine-grained algorithmic details involve quite a bit of mathematics, which goes beyond the purpose of this blog, so I will not elaborate on them here; the links are left in place so you can explore them yourself when you have time.

Meanwhile, the other members of our team have not yet finished analyzing their own nodes, so after some discussion I agreed to help another classmate analyze his part. Since he is still working through the contents of the initial folder, I start from the node itself to avoid overlapping analyses.

1. Overview

This node produces odometry from visual feature points. Its interaction with the lidar shows up in two places: first, it consumes visual feature points whose depth has been obtained from the lidar depth map; second, it uses the odometry output by the lidar-IMU imuPreintegration module as its initial value.

The visual_odometry module is the lvi_sam_visual_odometry node shown below, although in the source code the module is actually named visual_estimator. As the figure shows, this node has many inputs and outputs; it can fairly be called the visual core of LVI-SAM. It is connected to the lvi_sam_mapOptmization and lvi_sam_imuPreintegration nodes of the lidar module, as well as to the other visual modules.

[Figure 13-1: node graph of lvi_sam_visual_odometry]

Here is a small ROS-related tidbit I picked up while studying this node. You might wonder why the node's source file name differs from the node name shown at runtime in the graph. In an earlier blog post I wrote that this is because the launch file defines the node name; but if you read the launch file (see below), you will find that it never specifies where the node's source code lives. Only the type parameter names the node. In fact, the type, which you could call the node's unique executable name, is fixed at build time: the CMakeLists file already lists the source files to compile and the executable name for each node (see below).

The launch file:

<launch>

    <arg name="project" default="lvi_sam"/>
    <!-- Lidar odometry param -->
    <rosparam file="$(find lvi_sam)/config/params_lidar.yaml" command="load" />
    <!-- VINS config -->
    <param name="vins_config_file" type="string" value="$(find lvi_sam)/config/params_camera.yaml" />
    
    <!-- Lidar odometry -->
    <node pkg="$(arg project)"      type="$(arg project)_imuPreintegration"   name="$(arg project)_imuPreintegration"    output="screen"     respawn="true"/>
    <node pkg="$(arg project)"      type="$(arg project)_imageProjection"     name="$(arg project)_imageProjection"      output="screen"     respawn="true"/>
    <node pkg="$(arg project)"      type="$(arg project)_featureExtraction"   name="$(arg project)_featureExtraction"    output="screen"     respawn="true"/>
    <node pkg="$(arg project)"      type="$(arg project)_mapOptmization"      name="$(arg project)_mapOptmization"       output="screen"     respawn="true"/>

    <!-- Visual feature and odometry -->
    <node pkg="$(arg project)"      type="$(arg project)_visual_feature"      name="$(arg project)_visual_feature"       output="screen"     respawn="true"/>
    <node pkg="$(arg project)"      type="$(arg project)_visual_odometry"     name="$(arg project)_visual_odometry"      output="screen"     respawn="true"/>
    <node pkg="$(arg project)"      type="$(arg project)_visual_loop"         name="$(arg project)_visual_loop"          output="screen"     respawn="true"/>

    <!-- Image conversion -->
    <node pkg="image_transport" type="republish" name="$(arg project)_republish" args="compressed in:=/camera/image_raw raw out:=/camera/image_raw" output="screen" respawn="true"/>

</launch>

The CMakeLists file (excerpt):

file(GLOB visual_odometry_files
    "src/visual_odometry/visual_estimator/*.cpp"
    "src/visual_odometry/visual_estimator/factor/*.cpp"
    "src/visual_odometry/visual_estimator/initial/*.cpp"
    "src/visual_odometry/visual_estimator/utility/*.cpp"
)

# Visual Odometry
add_executable(${PROJECT_NAME}_visual_odometry ${visual_odometry_files})
target_link_libraries(${PROJECT_NAME}_visual_odometry ${catkin_LIBRARIES} ${PCL_LIBRARIES} ${OpenCV_LIBS} ${CERES_LIBRARIES}) 

2. Inputs and Outputs

Now let us analyze the lvi_sam_visual_odometry node, starting with its inputs and outputs.

a. Inputs

The inputs are: the raw IMU measurements, the odometry produced by IMU preintegration, the feature points whose depth has been recovered from the lidar depth map, and the restart flag.

ros::Subscriber sub_imu     = n.subscribe(IMU_TOPIC,      5000, imu_callback,  ros::TransportHints().tcpNoDelay()); // raw IMU measurements
ros::Subscriber sub_odom    = n.subscribe("odometry/imu", 5000, odom_callback); // odometry produced by IMU preintegration
ros::Subscriber sub_image   = n.subscribe(PROJECT_NAME + "/vins/feature/feature", 1, feature_callback); // feature points with depth from the lidar depth map
ros::Subscriber sub_restart = n.subscribe(PROJECT_NAME + "/vins/feature/restart", 1, restart_callback); // restart flag
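The callbacks behind these subscribers follow the usual VINS pattern: each one only buffers the incoming message under a mutex and wakes the processing thread, leaving the heavy work to process(). Below is a minimal, self-contained sketch of that buffer-and-notify pattern; the MsgBuffer type and its method names are illustrative, not the actual LVI-SAM code.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>

// Minimal sketch of the buffer-and-notify pattern used by VINS-style
// callbacks: each callback only enqueues its message and wakes the
// consumer; all heavy computation happens in the processing thread.
template <typename Msg>
class MsgBuffer {
public:
    // Called from a subscriber callback: enqueue and notify.
    void push(const Msg& m) {
        {
            std::lock_guard<std::mutex> lock(mtx_);
            buf_.push(m);
        }
        cond_.notify_one();  // wake the process() thread
    }

    // Called from the processing thread: block until data arrives.
    Msg pop() {
        std::unique_lock<std::mutex> lock(mtx_);
        cond_.wait(lock, [this] { return !buf_.empty(); });
        Msg m = buf_.front();
        buf_.pop();
        return m;
    }

    std::size_t size() {
        std::lock_guard<std::mutex> lock(mtx_);
        return buf_.size();
    }

private:
    std::queue<Msg> buf_;
    std::mutex mtx_;
    std::condition_variable cond_;
};
```

In the real node, imu_callback, odom_callback, and feature_callback each push into their own buffer of this kind, which is why the subscriber queue sizes above can safely be large.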

b. Outputs

This node publishes many topics, as the figure above also shows, but only a few of them are actually consumed downstream: /vins/odometry/imu_propagate_ros, /vins/odometry/extrinsic, /vins/odometry/keyframe_point, /vins/odometry/keyframe_pose, and /vins/odometry/point_cloud.

pub_latest_odometry     = n.advertise<nav_msgs::Odometry>               (PROJECT_NAME + "/vins/odometry/imu_propagate", 1000);
pub_latest_odometry_ros = n.advertise<nav_msgs::Odometry>               (PROJECT_NAME + "/vins/odometry/imu_propagate_ros", 1000);
pub_path                = n.advertise<nav_msgs::Path>                   (PROJECT_NAME + "/vins/odometry/path", 1000);
pub_odometry            = n.advertise<nav_msgs::Odometry>               (PROJECT_NAME + "/vins/odometry/odometry", 1000);
pub_point_cloud         = n.advertise<sensor_msgs::PointCloud>          (PROJECT_NAME + "/vins/odometry/point_cloud", 1000);
pub_margin_cloud        = n.advertise<sensor_msgs::PointCloud>          (PROJECT_NAME + "/vins/odometry/history_cloud", 1000);
pub_key_poses           = n.advertise<visualization_msgs::Marker>       (PROJECT_NAME + "/vins/odometry/key_poses", 1000);
pub_camera_pose         = n.advertise<nav_msgs::Odometry>               (PROJECT_NAME + "/vins/odometry/camera_pose", 1000);
pub_camera_pose_visual  = n.advertise<visualization_msgs::MarkerArray>  (PROJECT_NAME + "/vins/odometry/camera_pose_visual", 1000);
pub_keyframe_pose       = n.advertise<nav_msgs::Odometry>               (PROJECT_NAME + "/vins/odometry/keyframe_pose", 1000);
pub_keyframe_point      = n.advertise<sensor_msgs::PointCloud>          (PROJECT_NAME + "/vins/odometry/keyframe_point", 1000);
pub_extrinsic           = n.advertise<nav_msgs::Odometry>               (PROJECT_NAME + "/vins/odometry/extrinsic", 1000);

3. The main Function

The content here is much like that of 8-visual_feature: A First Look; only a few functions deserve special attention. As before, the node is initialized first, with its actual name determined by CMakeLists and the launch file. Then the parameters are read in with readParameters(n) and applied. Because the node publishes so many topics, registering the publishers is delegated to a dedicated function, registerPub(n). After the odomRegister object has been created, the relevant topics are subscribed. Finally, the threads are spawned. That is the overall flow of main.

int main(int argc, char **argv)
{
    // initialize the node
    ros::init(argc, argv, "vins");
    ros::NodeHandle n;
    ROS_INFO("\033[1;32m----> Visual Odometry Estimator Started.\033[0m");
    ros::console::set_logger_level(ROSCONSOLE_DEFAULT_NAME, ros::console::levels::Warn);

    readParameters(n); // read the parameters
    estimator.setParameter(); // apply the parameters

    registerPub(n); // register the published topics

    odomRegister = new odometryRegister(n); // create the object, passing the node handle into it

    // subscribe to the relevant topics
    ros::Subscriber sub_imu     = n.subscribe(IMU_TOPIC,      5000, imu_callback,  ros::TransportHints().tcpNoDelay());
    ros::Subscriber sub_odom    = n.subscribe("odometry/imu", 5000, odom_callback);
    ros::Subscriber sub_image   = n.subscribe(PROJECT_NAME + "/vins/feature/feature", 1, feature_callback);
    ros::Subscriber sub_restart = n.subscribe(PROJECT_NAME + "/vins/feature/restart", 1, restart_callback);
    if (!USE_LIDAR)
        sub_odom.shutdown();

    // spawn the main thread of the visual estimator, process()
    std::thread measurement_process{process};
    // distribute the remaining callbacks over spinner threads
    ros::MultiThreadedSpinner spinner(4);
    spinner.spin();

    return 0;
}

a. readParameters()

This function simply fetches vins_config_file and reads the parameters from it. It is similar to its counterpart in 8-visual_feature: A First Look, but the required parameters differ.

void readParameters(ros::NodeHandle &n)
{
    std::string config_file;
    n.getParam("vins_config_file", config_file);
    cv::FileStorage fsSettings(config_file, cv::FileStorage::READ);
    if(!fsSettings.isOpened())
    {
        std::cerr << "ERROR: Wrong path to settings" << std::endl;
    }

    fsSettings["project_name"] >> PROJECT_NAME;
    std::string pkg_path = ros::package::getPath(PROJECT_NAME);

    fsSettings["imu_topic"] >> IMU_TOPIC;

    fsSettings["use_lidar"] >> USE_LIDAR;
    fsSettings["align_camera_lidar_estimation"] >> ALIGN_CAMERA_LIDAR_COORDINATE;

    SOLVER_TIME = fsSettings["max_solver_time"];
    NUM_ITERATIONS = fsSettings["max_num_iterations"];
    MIN_PARALLAX = fsSettings["keyframe_parallax"];
    MIN_PARALLAX = MIN_PARALLAX / FOCAL_LENGTH;

    ACC_N = fsSettings["acc_n"];
    ACC_W = fsSettings["acc_w"];
    GYR_N = fsSettings["gyr_n"];
    GYR_W = fsSettings["gyr_w"];
    G.z() = fsSettings["g_norm"];
    ROW = fsSettings["image_height"];
    COL = fsSettings["image_width"];
    ROS_INFO("Image dimension: ROW: %f COL: %f ", ROW, COL);

    /** whether the IMU-camera extrinsics are provided **/
    ESTIMATE_EXTRINSIC = fsSettings["estimate_extrinsic"];
    if (ESTIMATE_EXTRINSIC == 2) // not provided
    {
        ROS_INFO("have no prior about extrinsic param, calibrate extrinsic param");
        RIC.push_back(Eigen::Matrix3d::Identity());
        TIC.push_back(Eigen::Vector3d::Zero());
        EX_CALIB_RESULT_PATH = pkg_path + "/config/extrinsic_parameter.csv";

    }
    else 
    {
        if ( ESTIMATE_EXTRINSIC == 1) // provided but inaccurate
        {
            ROS_INFO(" Optimize extrinsic param around initial guess!");
            EX_CALIB_RESULT_PATH = pkg_path + "/config/extrinsic_parameter.csv";
        }
        if (ESTIMATE_EXTRINSIC == 0) // accurate
            ROS_INFO(" Fix extrinsic param.");

        /** read the initial R and t and store them in their vectors **/
        cv::Mat cv_R, cv_T;
        fsSettings["extrinsicRotation"] >> cv_R;
        fsSettings["extrinsicTranslation"] >> cv_T;
        Eigen::Matrix3d eigen_R;
        Eigen::Vector3d eigen_T;
        cv::cv2eigen(cv_R, eigen_R);
        cv::cv2eigen(cv_T, eigen_T);
        Eigen::Quaterniond Q(eigen_R);
        eigen_R = Q.normalized(); // normalize
        RIC.push_back(eigen_R);
        TIC.push_back(eigen_T);
        ROS_INFO_STREAM("Extrinsic_R : " << std::endl << RIC[0]);
        ROS_INFO_STREAM("Extrinsic_T : " << std::endl << TIC[0].transpose());
        
    } 

    INIT_DEPTH = 5.0;
    BIAS_ACC_THRESHOLD = 0.1;
    BIAS_GYR_THRESHOLD = 0.1;

    /** IMU-camera time calibration **/
    TD = fsSettings["td"];
    ESTIMATE_TD = fsSettings["estimate_td"];
    if (ESTIMATE_TD)
        ROS_INFO_STREAM("Unsynchronized sensors, online estimate time offset, initial td: " << TD);
    else
        ROS_INFO_STREAM("Synchronized sensors, fix time offset: " << TD);

    ROLLING_SHUTTER = fsSettings["rolling_shutter"];
    if (ROLLING_SHUTTER)
    {
        TR = fsSettings["rolling_shutter_tr"];
        ROS_INFO_STREAM("rolling shutter camera, read out time per line: " << TR);
    }
    else
    {
        TR = 0;
    }
    
    fsSettings.release();
    usleep(100);
}
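One line worth pausing on is MIN_PARALLAX = MIN_PARALLAX / FOCAL_LENGTH. The config value keyframe_parallax is given in pixels, while the estimator measures parallax on the normalized image plane, so the pixel threshold is converted by dividing by the (virtual) focal length. A small pixel displacement $\Delta u$ maps to the normalized plane roughly as:

```latex
\text{parallax}_{\text{normalized}} \approx \frac{\Delta u\,[\text{px}]}{f},
\qquad f = \text{FOCAL\_LENGTH}
```

This is why the keyframe-selection threshold used internally is a small dimensionless number rather than the pixel value from the config file.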

b. estimator.setParameter()

This function does two things: it copies the camera-IMU extrinsics (and the time-offset parameter td) into the estimator, and it sets the square-root information matrix of the visual measurement residuals.

void Estimator::setParameter()
{
    for (int i = 0; i < NUM_OF_CAM; i++)
    {
        tic[i] = TIC[i];
        ric[i] = RIC[i];
    }
    /** square-root information matrix of the visual measurement residuals **/
    f_manager.setRic(ric);
    ProjectionFactor::sqrt_info = FOCAL_LENGTH / 1.5 * Matrix2d::Identity();
    ProjectionTdFactor::sqrt_info = FOCAL_LENGTH / 1.5 * Matrix2d::Identity();
    td = TD;
}
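The sqrt_info assignment above encodes an assumed image noise of 1.5 pixels, following the usual VINS-Mono convention. If the reprojection residual carries isotropic pixel noise with standard deviation $\sigma = 1.5$ px expressed through a virtual focal length $f$, then:

```latex
\Sigma = \left(\frac{1.5}{f}\right)^{2} I_{2},
\qquad
\Sigma^{-1} = \left(\frac{f}{1.5}\right)^{2} I_{2},
\qquad
\Sigma^{-1/2} = \frac{f}{1.5}\, I_{2}
```

The last term, $\Sigma^{-1/2}$ with $f = \text{FOCAL\_LENGTH}$, is exactly the FOCAL_LENGTH / 1.5 * Identity that the code assigns, so the optimizer weights each visual residual by the inverse of its assumed noise.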

c. registerPub()

void registerPub(ros::NodeHandle &n)
{
    /** advertise the published topics **/
    pub_latest_odometry     = n.advertise<nav_msgs::Odometry>               (PROJECT_NAME + "/vins/odometry/imu_propagate", 1000);
    pub_latest_odometry_ros = n.advertise<nav_msgs::Odometry>               (PROJECT_NAME + "/vins/odometry/imu_propagate_ros", 1000);
    pub_path                = n.advertise<nav_msgs::Path>                   (PROJECT_NAME + "/vins/odometry/path", 1000);
    pub_odometry            = n.advertise<nav_msgs::Odometry>               (PROJECT_NAME + "/vins/odometry/odometry", 1000);
    pub_point_cloud         = n.advertise<sensor_msgs::PointCloud>          (PROJECT_NAME + "/vins/odometry/point_cloud", 1000);
    pub_margin_cloud        = n.advertise<sensor_msgs::PointCloud>          (PROJECT_NAME + "/vins/odometry/history_cloud", 1000);
    pub_key_poses           = n.advertise<visualization_msgs::Marker>       (PROJECT_NAME + "/vins/odometry/key_poses", 1000);
    pub_camera_pose         = n.advertise<nav_msgs::Odometry>               (PROJECT_NAME + "/vins/odometry/camera_pose", 1000);
    pub_camera_pose_visual  = n.advertise<visualization_msgs::MarkerArray>  (PROJECT_NAME + "/vins/odometry/camera_pose_visual", 1000);
    pub_keyframe_pose       = n.advertise<nav_msgs::Odometry>               (PROJECT_NAME + "/vins/odometry/keyframe_pose", 1000);
    pub_keyframe_point      = n.advertise<sensor_msgs::PointCloud>          (PROJECT_NAME + "/vins/odometry/keyframe_point", 1000);
    pub_extrinsic           = n.advertise<nav_msgs::Odometry>               (PROJECT_NAME + "/vins/odometry/extrinsic", 1000);

    /** visualization settings for the camera pose marker **/
    cameraposevisual.setScale(1);
    cameraposevisual.setLineWidth(0.05);
    /** visualization settings for the keyframe axes **/
    keyframebasevisual.setScale(0.1);
    keyframebasevisual.setLineWidth(0.01);
}

d. The odomRegister Class

This class, found in the initial folder, converts odometry from the lidar frame to the coordinate frame of the VINS image frames. It originally also published some information, but the author commented that code out, presumably leftover debug output from development. As a result, the class no longer really needs the NodeHandle passed in as a parameter; only the coordinate-conversion functionality remains. For more on this part, see my teammate's blog.
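Conceptually, the conversion applies a fixed lidar-to-camera extrinsic rotation to each incoming odometry orientation. Below is a minimal, hand-rolled sketch of that idea; it is not the actual odometryRegister code, and the Quat type, the function names, and the right-multiplication convention are illustrative assumptions.

```cpp
#include <cassert>
#include <cmath>

// Minimal quaternion type (w, x, y, z) with the Hamilton product.
struct Quat {
    double w, x, y, z;
};

// Hamilton product: rotation a followed (in the local frame) by rotation b.
Quat multiply(const Quat& a, const Quat& b) {
    return {
        a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
        a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
        a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
        a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w
    };
}

// Re-express an odometry orientation given in the lidar frame in the
// camera frame by right-multiplying with the fixed lidar->camera
// extrinsic rotation: q_world_cam = q_world_lidar * q_lidar_cam.
Quat lidarToCamera(const Quat& q_world_lidar, const Quat& q_lidar_cam) {
    return multiply(q_world_lidar, q_lidar_cam);
}
```

The real class additionally handles the position, the world-frame alignment, and timestamp lookup, but the core of the frame change is a fixed-extrinsic composition like the one above.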

