fast-lio2 code analysis

The code structure is clear. At the top level there are two folders: one is fast-lio itself, and the other adds scan-context loop closure detection and pose-graph optimization on top of it.

fast-lio

This is mainly the implementation of the FAST-LIO2 paper, including the forward processing pipeline and the ikd-Tree implementation.

 

1. Start with the CMakeLists.txt to see the code structure:

# Defines the ROOT_DIR macro used in the code
add_definitions(-DROOT_DIR=\"${CMAKE_CURRENT_SOURCE_DIR}/\")

# Detect the number of CPU cores to decide whether to enable multi-core processing, leaving one core spare
if(CMAKE_SYSTEM_PROCESSOR MATCHES "(x86)|(X86)|(amd64)|(AMD64)" )
  include(ProcessorCount)
  ProcessorCount(N)
  message("Processer number:  ${N}")
  if(N GREATER 4)  
    add_definitions(-DMP_EN)
    add_definitions(-DMP_PROC_NUM=3)
    message("core for MP: 3")
  elseif(N GREATER 3)
    add_definitions(-DMP_EN)
    add_definitions(-DMP_PROC_NUM=2)
    message("core for MP: 2")
  else()
    add_definitions(-DMP_PROC_NUM=1)
  endif()
else()
  add_definitions(-DMP_PROC_NUM=1)
endif() 
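A minimal, self-contained sketch (not code from the repo) of how the MP_EN and MP_PROC_NUM macros defined above would typically be consumed: they gate an OpenMP parallel loop over the per-point work and cap its thread count.

#include <omp.h>
#include <vector>
#include <cstdio>

int main()
{
#ifdef MP_EN
    omp_set_num_threads(MP_PROC_NUM);     // thread count chosen by the CMake logic above
#endif
    std::vector<double> residual(100000);
#ifdef MP_EN
    #pragma omp parallel for
#endif
    for (int i = 0; i < (int)residual.size(); i++)
        residual[i] = 0.5 * i;            // stands in for per-point work (e.g. residual evaluation)
    std::printf("max threads: %d\n", omp_get_max_threads());
    return 0;
}

Compile with, e.g., g++ -fopenmp -DMP_EN -DMP_PROC_NUM=3 to reproduce the behaviour selected by the CMake branch above.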

# Depends on OpenMP, PythonLibs and the matplotlib-cpp headers (MATPLOTLIB_CPP_INCLUDE_DIRS) for plotting

# A custom message, Pose6D.msg, is defined
add_message_files(
  FILES
  Pose6D.msg
)

# The main source files are
src/laserMapping.cpp 
include/ikd-Tree/ikd_Tree.cpp 
src/preprocess.cpp

 Pose6D.msg:

The preintegrated IMU values within a lidar scan:

float64    offset_time  # time offset of this IMU measurement w.r.t. the first lidar point
float64[3] acc       # the preintegrated total acceleration (global frame) at the Lidar origin
float64[3] gyr       # the unbiased angular velocity (body frame) at the Lidar origin
float64[3] vel       # the preintegrated velocity (global frame) at the Lidar origin
float64[3] pos       # the preintegrated position (global frame) at the Lidar origin
float64[9] rot       # the preintegrated rotation (global frame) at the Lidar origin
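A small, ROS-free sketch of how these fields can be packed and read back, assuming (as the repo's helper code suggests) that rot stores the 3x3 rotation row-major; the struct below only mirrors the message for illustration.

#include <Eigen/Core>
#include <array>
#include <cstdio>

struct Pose6D {                                  // mirrors the .msg layout above
    double offset_time;                          // offset w.r.t. the first lidar point [s]
    std::array<double, 3> acc, gyr, vel, pos;
    std::array<double, 9> rot;                   // 3x3 rotation, assumed row-major
};

int main()
{
    Pose6D p{};
    p.offset_time = 0.005;                                    // 5 ms into the scan
    Eigen::Matrix3d R = Eigen::Matrix3d::Identity();
    for (int i = 0; i < 9; ++i) p.rot[i] = R(i / 3, i % 3);   // pack row-major

    Eigen::Map<const Eigen::Matrix<double, 3, 3, Eigen::RowMajor>> R_back(p.rot.data());
    std::printf("offset_time = %.3f s, trace(R) = %.1f\n", p.offset_time, R_back.trace());
    return 0;
}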

The main entry point is in src/laserMapping.cpp; the other two .cpp files are used by it as libraries.

main() program flow:

ROS node initialization -> parameter reading -> parameter and pointer initialization -> reading the lidar-IMU extrinsic matrices -> IMU integration parameter setup (e.g. measurement covariances) -> Kalman filter setup (iteration tolerance, maximum iteration count, the iterated Kalman filter model, etc.) -> log initialization
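A hedged sketch of the "read the lidar-IMU extrinsic matrices" step above (the parameter names "mapping/extrinsic_T" and "mapping/extrinsic_R" follow the repo's config convention but are assumptions here): the extrinsics are read from the ROS parameter server as flat vectors and mapped into Eigen types.

#include <ros/ros.h>
#include <Eigen/Core>
#include <vector>

int main(int argc, char **argv)
{
    ros::init(argc, argv, "extrinsic_demo");
    ros::NodeHandle nh;

    std::vector<double> extrinT(3, 0.0), extrinR(9, 0.0);
    nh.param<std::vector<double>>("mapping/extrinsic_T", extrinT, std::vector<double>(3, 0.0));
    nh.param<std::vector<double>>("mapping/extrinsic_R", extrinR, std::vector<double>(9, 0.0));

    Eigen::Vector3d Lidar_T_wrt_IMU(extrinT[0], extrinT[1], extrinT[2]);
    Eigen::Matrix3d Lidar_R_wrt_IMU;
    Lidar_R_wrt_IMU << extrinR[0], extrinR[1], extrinR[2],
                       extrinR[3], extrinR[4], extrinR[5],
                       extrinR[6], extrinR[7], extrinR[8];   // row-major fill

    ROS_INFO_STREAM("extrinsic T: " << Lidar_T_wrt_IMU.transpose());
    return 0;
}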

1. After the lidar type is determined, the node subscribes with standard_pcl_cbk() and imu_cbk().

time_buffer is a queue of lidar timestamps; processing is driven by the lidar time.

void standard_pcl_cbk(const sensor_msgs::PointCloud2::ConstPtr &msg) // standard PointCloud2 (e.g. Velodyne) callback
{
    mtx_buffer.lock();
    scan_count ++;
    double preprocess_start_time = omp_get_wtime(); // effectively the current time, used to measure preprocessing duration
    if (msg->header.stamp.toSec() < last_timestamp_lidar)  // detect an abnormal (backward) lidar timestamp
    {
        ROS_ERROR("lidar loop back, clear buffer");
        lidar_buffer.clear();
    }

    PointCloudXYZI::Ptr  ptr(new PointCloudXYZI());
    p_pre->process(msg, ptr);      // lidar preprocessing, produces the feature cloud
    lidar_buffer.push_back(ptr);   // preprocessed lidar data
    time_buffer.push_back(msg->header.stamp.toSec());  // time_buffer is a queue of timestamps keyed to the lidar
    last_timestamp_lidar = msg->header.stamp.toSec();
    s_plot11[scan_count] = omp_get_wtime() - preprocess_start_time; // preprocessing time, for plotting
    mtx_buffer.unlock();
    sig_buffer.notify_all();  // notify: wake up the waiting thread
}


void imu_cbk(const sensor_msgs::Imu::ConstPtr &msg_in)
{
    publish_count ++;
    // cout<<"IMU got at: "<<msg_in->header.stamp.toSec()<<endl;
    sensor_msgs::Imu::Ptr msg(new sensor_msgs::Imu(*msg_in));

    if (abs(timediff_lidar_wrt_imu) > 0.1 && time_sync_en) // timediff_lidar_wrt_imu is only used with Livox lidars
    {
        msg->header.stamp = \
        ros::Time().fromSec(timediff_lidar_wrt_imu + msg_in->header.stamp.toSec());
    }

    double timestamp = msg->header.stamp.toSec(); // compensated IMU timestamp; only Livox lidars need this compensation

    mtx_buffer.lock();

    if (timestamp < last_timestamp_imu)  // check that IMU timestamps are monotonic; detect jumps
    {
        ROS_WARN("imu loop back, clear buffer");
        imu_buffer.clear();
    }

    last_timestamp_imu = timestamp;  // latest IMU time

    imu_buffer.push_back(msg);  // push the message into the queue
    mtx_buffer.unlock();
    sig_buffer.notify_all();  // notify: wake up the waiting thread
}

The lidar point-cloud callback invokes the preprocessing class and outputs the feature cloud.

Then the ROS main loop starts. A signal handler is registered here: a terminal normally ends the process by sending a signal, and when one is received all threads are woken up so the program can exit.
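A hedged sketch of that signal handler (the actual SigHandle in laserMapping.cpp should be very close to this): set an exit flag and wake every thread waiting on the buffer condition variable so the main loop can stop cleanly.

#include <csignal>
#include <condition_variable>
#include <ros/ros.h>

bool flg_exit = false;
std::condition_variable sig_buffer;

void SigHandle(int sig)
{
    flg_exit = true;                 // checked at the top of the main loop
    ROS_WARN("catch sig %d", sig);
    sig_buffer.notify_all();         // wake threads blocked waiting for sensor data
}

// registered in main() with:  signal(SIGINT, SigHandle);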

2. First, look at the definition of the measurement group:

It contains the current lidar frame and a queue of IMU data.

struct MeasureGroup     // Lidar data and IMU data for the current process
{
    MeasureGroup()
    {
        lidar_beg_time = 0.0;
        this->lidar.reset(new PointCloudXYZI());
    };
    double lidar_beg_time;
    PointCloudXYZI::Ptr lidar;
    deque<sensor_msgs::Imu::ConstPtr> imu;
};

3. Next, data synchronization: bool sync_packages(MeasureGroup &meas)

// This handles the buffered data: the IMU messages that fall within one lidar scan are taken out of the cache queues, time-aligned, and stored in meas
bool sync_packages(MeasureGroup &meas)
{
    if (lidar_buffer.empty() || imu_buffer.empty()) {
        return false;
    }

    /*** push a lidar scan ***/
    if(!lidar_pushed)   // defaults to false at startup; indicates whether a lidar frame has already been pushed into the measurement
    {
        meas.lidar = lidar_buffer.front();   // push the front of the lidar queue into the measurement
        if(meas.lidar->points.size() <= 1)  // make sure the pushed lidar frame actually contains points
        {
            lidar_buffer.pop_front();
            return false;
        }
        meas.lidar_beg_time = time_buffer.front(); // the lidar time comes from the front of time_buffer, which stores the lidar timestamps
        // the header stamp is the frame-start time (driver dependent); the frame-end time is derived from the last point's offset time
        lidar_end_time = meas.lidar_beg_time + meas.lidar->points.back().curvature / double(1000);
        lidar_pushed = true;  // flag: a lidar measurement has been extracted
    }

    if (last_timestamp_imu < lidar_end_time) // if even the newest IMU timestamp is earlier than the frame-end time, skip this frame for now
    {
        return false;
    }

    /*** push imu data, and pop from imu buffer ***/
    double imu_time = imu_buffer.front()->header.stamp.toSec(); // initialize imu_time with the oldest IMU message
    meas.imu.clear();
    while ((!imu_buffer.empty()) && (imu_time < lidar_end_time))
    {
        imu_time = imu_buffer.front()->header.stamp.toSec(); // start from the oldest IMU message
        if(imu_time > lidar_end_time) break;     // otherwise this IMU message is added to the measurement
        meas.imu.push_back(imu_buffer.front());  
        imu_buffer.pop_front();  // pop the IMU message that was just pushed into the measurement
    }
    // at this point the measurement holds the current lidar frame plus the IMU data up to the frame-end time

    lidar_buffer.pop_front(); // pop the processed data
    time_buffer.pop_front();
    lidar_pushed = false;  // reset, so the next lidar frame refreshes the frame-start and frame-end times
    return true;
}

This synchronization packs the lidar frame into the measurement and collects the IMU messages between the frame-start and frame-end times into the measurement as well.
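A tiny worked example of the frame-end timestamp computed above (numbers made up): the last point's curvature field holds its offset time in milliseconds, so the frame-end time is the frame-start time plus that offset divided by 1000.

#include <cstdio>

int main()
{
    double lidar_beg_time = 1612.400;   // [s], taken from time_buffer.front()
    double last_offset_ms = 99.8;       // [ms], curvature of the last point in the scan
    double lidar_end_time = lidar_beg_time + last_offset_ms / 1000.0;
    std::printf("lidar_end_time = %.4f s\n", lidar_end_time);   // 1612.4998 s
    return 0;
}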

4. The lidar preprocessing used above; a short detour into the preprocessing code:

Preprocessing is done through the instance shared_ptr<Preprocess> p_pre(new Preprocess()). It is only used inside the lidar callback; before the callbacks are registered, the preprocessing parameters are read from the parameter server and set (a hedged sketch of that wiring is shown below).
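A hedged sketch of that wiring (the helper name make_preprocessor is hypothetical, and the parameter names only follow the repo's config convention); it would be called from main() before the subscribers are created.

#include <ros/ros.h>
#include <memory>
#include "preprocess.h"   // Preprocess and the LID_TYPE enum shown below

// Called from main() after ros::init(), before the subscribers are registered.
std::shared_ptr<Preprocess> make_preprocessor(ros::NodeHandle &nh)
{
    std::shared_ptr<Preprocess> p_pre(new Preprocess());

    bool   feature_enabled;
    int    lidar_type, point_filter_num;
    double blind;
    nh.param<bool>("feature_extract_enable", feature_enabled, false);  // default: no feature extraction
    nh.param<int>("preprocess/lidar_type", lidar_type, VELO16);
    nh.param<double>("preprocess/blind", blind, 0.01);                 // blind-zone radius
    nh.param<int>("point_filter_num", point_filter_num, 2);            // keep every n-th point

    p_pre->set(feature_enabled, lidar_type, blind, point_filter_num);  // same setter shown below
    return p_pre;
}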

preprocess.h/cpp

#define IS_VALID(a)  ((abs(a)>1e8) ? true : false)  // macro to check whether a value is valid

// enums describing the lidar types and the point feature categories

enum LID_TYPE{AVIA = 1, VELO16, OUST64}; //{1, 2, 3}

enum Feature{Nor, Poss_Plane, Real_Plane, Edge_Jump, Edge_Plane, Wire, ZeroPoint};

enum Surround{Prev, Next};

enum E_jump{Nr_nor, Nr_zero, Nr_180, Nr_inf, Nr_blind};

void Preprocess::set(bool feat_en, int lid_type, double bld, int pfilt_num)
{
  feature_enabled = feat_en;
  lidar_type = lid_type;
  blind = bld;
  point_filter_num = pfilt_num; // sets the preprocessing parameters: feature flag, lidar type, blind radius, filter interval
}

// for mechanical (spinning) lidars
void Preprocess::process(const sensor_msgs::PointCloud2::ConstPtr &msg, PointCloudXYZI::Ptr &pcl_out)
{
  switch (lidar_type)
  {
  case OUST64:
    oust64_handler(msg);
    break;

  case VELO16:
    velodyne_handler(msg);
    break;
  
  default:
    printf("Error LiDAR Type");
    break;
  }
  *pcl_out = pl_surf; // output the extracted surface points
}


// converts the point cloud to a ROS message type, but never publishes it
void Preprocess::pub_func(PointCloudXYZI &pl, const ros::Time &ct)
{
  pl.height = 1; pl.width = pl.size();
  sensor_msgs::PointCloud2 output;
  pcl::toROSMsg(pl, output);
  output.header.frame_id = "livox";
  output.header.stamp = ct;
}
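Before reading velodyne_handler below: the fields it relies on (x, y, z, intensity, per-point time, ring) come from a custom PCL point type registered for the Velodyne driver. A sketch of what that type looks like, with names assumed to match preprocess.h:

#include <pcl/point_types.h>
#include <pcl/register_point_struct.h>

namespace velodyne_ros {
  struct EIGEN_ALIGN16 Point {
      PCL_ADD_POINT4D;          // x, y, z (+ padding)
      float    intensity;
      float    time;            // per-point offset time from the driver, if available
      uint16_t ring;            // which scan ring (line) the point belongs to
      EIGEN_MAKE_ALIGNED_OPERATOR_NEW
  };
}
POINT_CLOUD_REGISTER_POINT_STRUCT(velodyne_ros::Point,
    (float, x, x)(float, y, y)(float, z, z)
    (float, intensity, intensity)(float, time, time)(std::uint16_t, ring, ring))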

void Preprocess::velodyne_handler(const sensor_msgs::PointCloud2::ConstPtr &msg)
{
    pl_surf.clear();
    pl_corn.clear();
    pl_full.clear(); // clear the surface, corner and full clouds

    pcl::PointCloud<velodyne_ros::Point> pl_orig;
    pcl::fromROSMsg(*msg, pl_orig);
    int plsize = pl_orig.points.size();
    pl_surf.reserve(plsize); // reserve according to the original cloud size

    bool is_first[MAX_LINE_NUM];
    double yaw_fp[MAX_LINE_NUM]={0};     // yaw of first scan point
    double omega_l=3.61;       // scan angular velocity: at 10 Hz the scanner turns 360 deg in 0.1 s
    float yaw_last[MAX_LINE_NUM]={0.0};  // yaw of last scan point
    float time_last[MAX_LINE_NUM]={0.0}; // last offset time

    if (pl_orig.points[plsize - 1].time > 0) // the driver provides per-point timestamps
    {
      given_offset_time = true;  // offset time is given
    }
    else
    {
      given_offset_time = false;
      memset(is_first, true, sizeof(is_first)); // initialize the flags
      double yaw_first = atan2(pl_orig.points[0].y, pl_orig.points[0].x) * 57.29578; // 180/PI = 57.29
      double yaw_end  = yaw_first;     // yaw of the first point in this frame
      int layer_first = pl_orig.points[0].ring;    // ring of the first point in this frame
      for (uint i = plsize - 1; i > 0; i--)
      {
        if (pl_orig.points[i].ring == layer_first)
        {
          yaw_end = atan2(pl_orig.points[i].y, pl_orig.points[i].x) * 57.29578; // yaw of a point on the same ring
          break;
        }
      } // so yaw_end is the yaw of the last point that lies on the same ring as the first point
    }

    if(feature_enabled)  // feature extraction enabled by parameter
    {
      for (int i = 0; i < N_SCANS; i++)
      {
        pl_buff[i].clear();
        pl_buff[i].reserve(plsize);
      }
      
      for (int i = 0; i < plsize; i++)
      {
        PointType added_pt;
        added_pt.normal_x = 0; // normal
        added_pt.normal_y = 0;
        added_pt.normal_z = 0;
        int layer  = pl_orig.points[i].ring;
        if (layer >= N_SCANS) continue;  // drop rings beyond the configured N_SCANS; if the real lidar differs, only N_SCANS rings are used
        added_pt.x = pl_orig.points[i].x;
        added_pt.y = pl_orig.points[i].y;
        added_pt.z = pl_orig.points[i].z;
        added_pt.intensity = pl_orig.points[i].intensity;
        added_pt.curvature = pl_orig.points[i].time / 1000.0; // units: ms; the curvature field stores each point's time, similar to LeGO-LOAM

        if (!given_offset_time) // no per-point timestamp: reconstruct the offset time from the yaw angle
        {
          double yaw_angle = atan2(added_pt.y, added_pt.x) * 57.2957;
          if (is_first[layer]) // is_first starts as true and is cleared once the ring's first point has been handled
          {
            // printf("layer: %d; is first: %d", layer, is_first[layer]);
              yaw_fp[layer]=yaw_angle;    // record the yaw of this ring's first point
              is_first[layer]=false;
              added_pt.curvature = 0.0;    // offset time 0: the first point of a ring
              yaw_last[layer]=yaw_angle;
              time_last[layer]=added_pt.curvature;  // time_last for this ring starts at 0
              continue;
          }

          if (yaw_angle <= yaw_fp[layer]) // still within the same revolution: interpolate the time from the yaw difference at constant angular velocity
          {
            added_pt.curvature = (yaw_fp[layer]-yaw_angle) / omega_l;
          }
          else  // the yaw has wrapped past the first point: add a full revolution before interpolating
          {
            added_pt.curvature = (yaw_fp[layer]-yaw_angle+360.0) / omega_l;
          }
          //time_last[layer] = 0 
          if (added_pt.curvature < time_last[layer])  added_pt.curvature+=360.0/omega_l;

          yaw_last[layer] = yaw_angle; // remember this point's yaw
          time_last[layer]=added_pt.curvature;
        }

        pl_buff[layer].points.push_back(added_pt); // split the frame by ring; each ring is stored in pl_buff
      }
      }

      for (int j = 0; j < N_SCANS; j++)
      {
        PointCloudXYZI &pl = pl_buff[j]; // the whole j-th ring, not a single point
        int linesize = pl.size(); // number of points on this ring
        if (linesize < 2) continue;
        vector<orgtype> &types = typess[j];
        types.clear();
        types.resize(linesize); // re-allocate
        linesize--;
        for (uint i = 0; i < linesize; i++)
        {
          types[i].range = sqrt(pl[i].x * pl[i].x + pl[i].y * pl[i].y);  // planar (xy) range, used for the blind-zone check
          vx = pl[i].x - pl[i + 1].x;
          vy = pl[i].y - pl[i + 1].y;
          vz = pl[i].z - pl[i + 1].z;
          types[i].dista = vx * vx + vy * vy + vz * vz; // squared distance to the next point
        }
        types[linesize].range = sqrt(pl[linesize].x * pl[linesize].x + pl[linesize].y * pl[linesize].y);
        give_feature(pl, types); // classify the points of each ring
      }
    }
    else // feature extraction disabled (the default)
    {
      for (int i = 0; i < plsize; i++)
      {
        PointType added_pt;
        // cout<<"!!!!!!"<<i<<" "<<plsize<<endl;
        
        added_pt.normal_x = 0;
        added_pt.normal_y = 0;
        added_pt.normal_z = 0;
        added_pt.x = pl_orig.points[i].x;
        added_pt.y = pl_orig.points[i].y;
        added_pt.z = pl_orig.points[i].z;
        added_pt.intensity = pl_orig.points[i].intensity;
        added_pt.curvature = pl_orig.points[i].time / 1000.0; // requires the driver to provide per-point timestamps; the time is stored in the curvature field

        if (!given_offset_time) // no offset time given by the driver
        {
          int layer = pl_orig.points[i].ring;
          double yaw_angle = atan2(added_pt.y, added_pt.x) * 57.2957;

          if (is_first[layer]) // first point of this ring: record its yaw
          {
            // printf("layer: %d; is first: %d", layer, is_first[layer]);
              yaw_fp[layer]=yaw_angle;
              is_first[layer]=false;
              added_pt.curvature = 0.0;
              yaw_last[layer]=yaw_angle;
              time_last[layer]=added_pt.curvature;
              continue;
          }

          // compute offset time
          if (yaw_angle <= yaw_fp[layer])
          {
            added_pt.curvature = (yaw_fp[layer]-yaw_angle) / omega_l;
          } // time compensation based on the yaw difference
          else
          {
            added_pt.curvature = (yaw_fp[layer]-yaw_angle+360.0) / omega_l;
          }

          if (added_pt.curvature < time_last[layer])  added_pt.curvature+=360.0/omega_l;

          // added_pt.curvature = pl_orig.points[i].t;

          yaw_last[layer] = yaw_angle;
          time_last[layer]=added_pt.curvature;
        }

        // if(i==(plsize-1))  printf("index: %d layer: %d, yaw: %lf, offset-time: %lf, condition: %d\n", i, layer, yaw_angle, added_pt.curvature, prints);
        if (i % point_filter_num == 0)  // keep every point_filter_num-th point
        {
          if(added_pt.x*added_pt.x+added_pt.y*added_pt.y+added_pt.z*added_pt.z > blind) // keep points outside the blind zone (the squared range is compared with blind here)
          {
            pl_surf.points.push_back(added_pt);
            // printf("time mode: %d time: %d \n", given_offset_time, pl_orig.points[i].t);
          }
        }
      }
    }

    
    // pub_func(pl_surf, pub_full, msg->header.stamp);
    // pub_func(pl_surf, pub_surf, msg->header.stamp);
    // pub_func(pl_surf, pub_corn, msg->header.stamp);
}

Feature extraction is disabled by default: the raw lidar cloud goes in, and pl_surf is the cloud handed to the main program.
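A self-contained sketch of the yaw-based time interpolation used above when the driver provides no per-point timestamps: a point's offset time is how far (in yaw) the scanner has rotated since the first point of its ring, divided by the assumed constant angular rate omega_l (in deg/ms).

#include <cstdio>

int main()
{
    const double omega_l   = 3.61;    // [deg/ms], as in velodyne_handler (~360 deg per 100 ms)
    const double yaw_first = 170.0;   // yaw of the ring's first point [deg]
    const double yaw_point = -130.0;  // yaw of the current point [deg]

    double dt_ms = (yaw_point <= yaw_first)
                 ? (yaw_first - yaw_point) / omega_l           // same revolution
                 : (yaw_first - yaw_point + 360.0) / omega_l;  // wrapped past the first point
    std::printf("offset time ~= %.2f ms\n", dt_ms);            // stored in point.curvature
    return 0;
}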

5. The main ROS loop, i.e. the overall SLAM pipeline

    // main flow of the ROS loop
    signal(SIGINT, SigHandle);
    ros::Rate rate(5000);
    bool status = ros::ok();
    while (status)
    {
        if (flg_exit) break;
        ros::spinOnce();
        if(sync_packages(Measures))
        {
            if (flg_reset)
            {
                ROS_WARN("reset when rosbag play back");
                p_imu->Reset();
                flg_reset = false;
                Measures.imu.clear();
                continue;
            }

            double t0,t1,t2,t3,t4,t5,match_start, solve_start, svd_time;

            match_time = 0;
            kdtree_search_time = 0.0;
            solve_time = 0;
            solve_const_H_time = 0;
            svd_time   = 0;
            t0 = omp_get_wtime();

            p_imu->Process(Measures, kf, feats_undistort);
            state_point = kf.get_x();
            pos_lid = state_point.pos + state_point.rot * state_point.offset_T_L_I;

            if (feats_undistort->empty() || (feats_undistort == NULL))
            {
                first_lidar_time = Measures.lidar_beg_time;
                p_imu->first_lidar_time = first_lidar_time;
                // cout<<"FAST-LIO not ready"<<endl;
                continue;
            }

            flg_EKF_inited = (Measures.lidar_beg_time - first_lidar_time) < INIT_TIME ? \
                            false : true;
            /*** Segment the map in lidar FOV ***/
            lasermap_fov_segment();

            /*** downsample the feature points in a scan ***/
            downSizeFilterSurf.setInputCloud(feats_undistort);
            downSizeFilterSurf.filter(*feats_down_body);
            t1 = omp_get_wtime();
            feats_down_size = feats_down_body->points.size();
            /*** initialize the map kdtree ***/
            if(ikdtree.Root_Node == nullptr)
            {
                if(feats_down_size > 5)
                {
                    ikdtree.set_downsample_param(filter_size_map_min);
                    feats_down_world->resize(feats_down_size);
                    for(int i = 0; i < feats_down_size; i++)
                    {
                        pointBodyToWorld(&(feats_down_body->points[i]), &(feats_down_world->points[i]));
                    }
                    ikdtree.Build(feats_down_world->points);
                }
                continue;
            }
            int featsFromMapNum = ikdtree.validnum();
            kdtree_size_st = ikdtree.size();

            // cout<<"[ mapping ]: In num: "<<feats_undistort->points.size()<<" downsamp "<<feats_down_size<<" Map num: "<<featsFromMapNum<<"effect num:"<<effct_feat_num<<endl;

            /*** ICP and iterated Kalman filter update ***/
            normvec->resize(feats_down_size);
            feats_down_world->resize(feats_down_size);

            V3D ext_euler = SO3ToEuler(state_point.offset_R_L_I);
            fout_pre<<setw(20)<<Measures.lidar_beg_time - first_lidar_time<<" "<<euler_cur.transpose()<<" "<< state_point.pos.transpose()<<" "<<ext_euler.transpose() << " "<<state_point.offset_T_L_I.transpose()<< " " << state_point.vel.transpose() \
            <<" "<<state_point.bg.transpose()<<" "<<state_point.ba.transpose()<<" "<<state_point.grav<< endl;

            if(0) // If you need to see map point, change to "if(1)"
            {
                PointVector ().swap(ikdtree.PCL_Storage);
                ikdtree.flatten(ikdtree.Root_Node, ikdtree.PCL_Storage, NOT_RECORD);
                featsFromMap->clear();
                featsFromMap->points = ikdtree.PCL_Storage;
            }

            pointSearchInd_surf.resize(feats_down_size);
            Nearest_Points.resize(feats_down_size);
            int  rematch_num = 0;
            bool nearest_search_en = true; //

            t2 = omp_get_wtime();

            /*** iterated state estimation ***/
            double t_update_start = omp_get_wtime();
            double solve_H_time = 0;
            kf.update_iterated_dyn_share_modified(LASER_POINT_COV, solve_H_time);
            state_point = kf.get_x();
            euler_cur = SO3ToEuler(state_point.rot);
            pos_lid = state_point.pos + state_point.rot * state_point.offset_T_L_I;
            geoQuat.x = state_point.rot.coeffs()[0];
            geoQuat.y = state_point.rot.coeffs()[1];
            geoQuat.z = state_point.rot.coeffs()[2];
            geoQuat.w = state_point.rot.coeffs()[3];

            double t_update_end = omp_get_wtime();

            /******* Publish odometry *******/
            publish_odometry(pubOdomAftMapped);

            /*** add the feature points to map kdtree ***/
            t3 = omp_get_wtime();
            map_incremental();
            t5 = omp_get_wtime();

            /******* Publish points *******/
            publish_path(pubPath);
            if (scan_pub_en || pcd_save_en)      publish_frame_world(pubLaserCloudFull);
            if (scan_pub_en && scan_body_pub_en) {
              publish_frame_body(pubLaserCloudFull_body);
              publish_frame_lidar(pubLaserCloudFull_lidar);
            }
            // publish_effect_world(pubLaserCloudEffect);
            // publish_map(pubLaserCloudMap);

            /*** Debug variables ***/
            if (runtime_pos_log)
            {
                frame_num ++;
                kdtree_size_end = ikdtree.size();
                aver_time_consu = aver_time_consu * (frame_num - 1) / frame_num + (t5 - t0) / frame_num;
                aver_time_icp = aver_time_icp * (frame_num - 1)/frame_num + (t_update_end - t_update_start) / frame_num;
                aver_time_match = aver_time_match * (frame_num - 1)/frame_num + (match_time)/frame_num;
                aver_time_incre = aver_time_incre * (frame_num - 1)/frame_num + (kdtree_incremental_time)/frame_num;
                aver_time_solve = aver_time_solve * (frame_num - 1)/frame_num + (solve_time + solve_H_time)/frame_num;
                aver_time_const_H_time = aver_time_const_H_time * (frame_num - 1)/frame_num + solve_time / frame_num;
                T1[time_log_counter] = Measures.lidar_beg_time;
                s_plot[time_log_counter] = t5 - t0;
                s_plot2[time_log_counter] = feats_undistort->points.size();
                s_plot3[time_log_counter] = kdtree_incremental_time;
                s_plot4[time_log_counter] = kdtree_search_time;
                s_plot5[time_log_counter] = kdtree_delete_counter;
                s_plot6[time_log_counter] = kdtree_delete_time;
                s_plot7[time_log_counter] = kdtree_size_st;
                s_plot8[time_log_counter] = kdtree_size_end;
                s_plot9[time_log_counter] = aver_time_consu;
                s_plot10[time_log_counter] = add_point_size;
                time_log_counter ++;
                printf("[ mapping ]: time: IMU + Map + Input Downsample: %0.6f ave match: %0.6f ave solve: %0.6f  ave ICP: %0.6f  map incre: %0.6f ave total: %0.6f icp: %0.6f construct H: %0.6f \n",t1-t0,aver_time_match,aver_time_solve,t3-t1,t5-t3,aver_time_consu,aver_time_icp, aver_time_const_H_time);
                ext_euler = SO3ToEuler(state_point.offset_R_L_I);
                fout_out << setw(20) << Measures.lidar_beg_time - first_lidar_time << " " << euler_cur.transpose() << " " << state_point.pos.transpose()<< " " << ext_euler.transpose() << " "<<state_point.offset_T_L_I.transpose()<<" "<< state_point.vel.transpose() \
                <<" "<<state_point.bg.transpose()<<" "<<state_point.ba.transpose()<<" "<<state_point.grav<<" "<<feats_undistort->points.size()<<endl;
                dump_lio_state_to_log(fp);
            }
        }

        status = ros::ok();
        rate.sleep();
    }

6. IMU-based state propagation and the iterated Kalman filter

First, a look at the IMU processing.

To be continued.
