SLAM Technology in Autonomous Driving and Robotics -- Chapter 7 (7.4-7.5) -- 3D SLAM

Contents

7.4 Feature-Based LiDAR Odometry

7.4.1 Feature Extraction

7.4.2 Feature Extraction Based on LiDAR Scan Lines

7.4.3 Code Implementation of Feature Extraction -- feature_extraction.cc

7.4.4 Implementation of Feature-Based LiDAR Odometry

7.5 Loosely Coupled LIO System

7.5.1 Coordinate Frames

7.5.2 Motion and Observation Equations of the Loosely Coupled LIO System

7.5.3 Data Preparation for the Loosely Coupled LIO System

7.5.4 Main Workflow of the Loosely Coupled LIO System


7.4 Feature-Based LiDAR Odometry

7.4.1 Feature Extraction

        Feature-based LiDAR odometry (also called indirect LiDAR odometry) first extracts a few simple features from the point cloud and then registers those feature points, applying different registration strategies according to the nature of each feature type to obtain better results. The LOAM family, including LOAM and its later variants LeGO-LOAM, A-LOAM, and F-LOAM, is a widely used set of open-source solutions in the autonomous driving industry and the basis of many LIO systems.
        What kind of features should be extracted from a multi-beam LiDAR?
        1. The features should reflect the characteristics of the point cloud.
        2. The feature points should be usable with ICP- or NDT-style registration.
        3. Feature extraction should not consume too much computation.
        4. The whole system should run on the same computing hardware; we do not want the extra latency of shuttling data back and forth between the CPU and the GPU.

7.4.2 Feature Extraction Based on LiDAR Scan Lines

        A multi-beam LiDAR carries scan-line (ring) information of its own. Besides the x, y, z coordinates, each point also provides:
        1. which scan line the point comes from;
        2. the ordering of points along the same scan line;
        3. with some drivers, the polar angle, the measurement time, and other per-point fields.

        From this we compute a curvature for each point along its scan line and use this scan-line curvature to decide whether the point is a corner (edge) point or a planar point, as formalized below.
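
        As a concrete reference for the implementation that follows, the curvature proxy computed in the code below for point j on a scan line uses its five in-line neighbors on each side:

c_{j}=\left\| \sum_{k=-5,\,k\neq 0}^{5}\left(p_{j+k}-p_{j}\right) \right\|^{2}

        Points with large c_{j} lie on sharp structures and are taken as corner points; points with small c_{j} are taken as planar points.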

7.4.3 Code Implementation of Feature Extraction -- feature_extraction.cc

        Purpose of the function: extract feature points from the input cloud, classify them into corner points and planar points, and write them into two CloudPtr outputs.
        The data here come from a 16-beam LiDAR. Each point is first assigned to its scan line, and its curvature is computed from the five points before and after it on that line. Features are then selected per sector of the current scan line, with the 360° scan divided into 6 sectors. Within each sector the points are sorted by curvature, up to 20 corner points are picked, and the remaining points are treated as planar points.

//
// Created by xiang on 2022/7/21.
//
#include "ch7/loam-like/feature_extraction.h"
#include <glog/logging.h>

namespace sad {

void FeatureExtraction::Extract(FullCloudPtr pc_in, CloudPtr pc_out_edge, CloudPtr pc_out_surf) {
    int num_scans = 16;
    std::vector<CloudPtr> scans_in_each_line;  // 分线数的点云
    for (int i = 0; i < num_scans; i++) {
        scans_in_each_line.emplace_back(new PointCloudType);
    }

    for (auto &pt : pc_in->points) {
        assert(pt.ring >= 0 && pt.ring < num_scans);
        PointType p;
        p.x = pt.x;
        p.y = pt.y;
        p.z = pt.z;
        p.intensity = pt.intensity;

        scans_in_each_line[pt.ring]->points.emplace_back(p);
    }

    // 处理曲率
    for (int i = 0; i < num_scans; i++) {
        if (scans_in_each_line[i]->points.size() < 131) {
            continue;
        }

        std::vector<IdAndValue> cloud_curvature;  // 每条线对应的曲率
        int total_points = scans_in_each_line[i]->points.size() - 10;
        for (int j = 5; j < (int)scans_in_each_line[i]->points.size() - 5; j++) {
            // 两头留一定余量,采样周围10个点取平均值
            double diffX = scans_in_each_line[i]->points[j - 5].x + scans_in_each_line[i]->points[j - 4].x +
                           scans_in_each_line[i]->points[j - 3].x + scans_in_each_line[i]->points[j - 2].x +
                           scans_in_each_line[i]->points[j - 1].x - 10 * scans_in_each_line[i]->points[j].x +
                           scans_in_each_line[i]->points[j + 1].x + scans_in_each_line[i]->points[j + 2].x +
                           scans_in_each_line[i]->points[j + 3].x + scans_in_each_line[i]->points[j + 4].x +
                           scans_in_each_line[i]->points[j + 5].x;
            double diffY = scans_in_each_line[i]->points[j - 5].y + scans_in_each_line[i]->points[j - 4].y +
                           scans_in_each_line[i]->points[j - 3].y + scans_in_each_line[i]->points[j - 2].y +
                           scans_in_each_line[i]->points[j - 1].y - 10 * scans_in_each_line[i]->points[j].y +
                           scans_in_each_line[i]->points[j + 1].y + scans_in_each_line[i]->points[j + 2].y +
                           scans_in_each_line[i]->points[j + 3].y + scans_in_each_line[i]->points[j + 4].y +
                           scans_in_each_line[i]->points[j + 5].y;
            double diffZ = scans_in_each_line[i]->points[j - 5].z + scans_in_each_line[i]->points[j - 4].z +
                           scans_in_each_line[i]->points[j - 3].z + scans_in_each_line[i]->points[j - 2].z +
                           scans_in_each_line[i]->points[j - 1].z - 10 * scans_in_each_line[i]->points[j].z +
                           scans_in_each_line[i]->points[j + 1].z + scans_in_each_line[i]->points[j + 2].z +
                           scans_in_each_line[i]->points[j + 3].z + scans_in_each_line[i]->points[j + 4].z +
                           scans_in_each_line[i]->points[j + 5].z;
            IdAndValue distance(j, diffX * diffX + diffY * diffY + diffZ * diffZ);
            cloud_curvature.push_back(distance);
        }

        // 对每个区间选取特征,把360度分为6个区间
        for (int j = 0; j < 6; j++) {
            int sector_length = (int)(total_points / 6);
            int sector_start = sector_length * j;
            int sector_end = sector_length * (j + 1) - 1;
            if (j == 5) {
                sector_end = total_points - 1;
            }

            std::vector<IdAndValue> sub_cloud_curvature(cloud_curvature.begin() + sector_start,
                                                        cloud_curvature.begin() + sector_end);

            ExtractFromSector(scans_in_each_line[i], sub_cloud_curvature, pc_out_edge, pc_out_surf);
        }
    }
}

void FeatureExtraction::ExtractFromSector(const CloudPtr &pc_in, std::vector<IdAndValue> &cloud_curvature,
                                          CloudPtr &pc_out_edge, CloudPtr &pc_out_surf) {
    // 按曲率排序
    std::sort(cloud_curvature.begin(), cloud_curvature.end(),
              [](const IdAndValue &a, const IdAndValue &b) { return a.value_ < b.value_; });

    int largest_picked_num = 0;
    int point_info_count = 0;

    /// 按照曲率最大的开始搜,选取曲率最大的角点
    std::vector<int> picked_points;  // 标记被选中的角点,角点附近的点都不会被选取
    for (int i = cloud_curvature.size() - 1; i >= 0; i--) {
        int ind = cloud_curvature[i].id_;
        if (std::find(picked_points.begin(), picked_points.end(), ind) == picked_points.end()) {
            if (cloud_curvature[i].value_ <= 0.1) {
                break;
            }

            largest_picked_num++;
            picked_points.push_back(ind);

            if (largest_picked_num <= 20) {
                pc_out_edge->push_back(pc_in->points[ind]);
                point_info_count++;
            } else {
                break;
            }

            for (int k = 1; k <= 5; k++) {
                double diffX = pc_in->points[ind + k].x - pc_in->points[ind + k - 1].x;
                double diffY = pc_in->points[ind + k].y - pc_in->points[ind + k - 1].y;
                double diffZ = pc_in->points[ind + k].z - pc_in->points[ind + k - 1].z;
                if (diffX * diffX + diffY * diffY + diffZ * diffZ > 0.05) {
                    break;
                }
                picked_points.push_back(ind + k);
            }
            for (int k = -1; k >= -5; k--) {
                double diffX = pc_in->points[ind + k].x - pc_in->points[ind + k + 1].x;
                double diffY = pc_in->points[ind + k].y - pc_in->points[ind + k + 1].y;
                double diffZ = pc_in->points[ind + k].z - pc_in->points[ind + k + 1].z;
                if (diffX * diffX + diffY * diffY + diffZ * diffZ > 0.05) {
                    break;
                }
                picked_points.push_back(ind + k);
            }
        }
    }

    /// 选取曲率较小的平面点
    for (int i = 0; i <= (int)cloud_curvature.size() - 1; i++) {
        int ind = cloud_curvature[i].id_;
        if (std::find(picked_points.begin(), picked_points.end(), ind) == picked_points.end()) {
            pc_out_surf->push_back(pc_in->points[ind]);
        }
    }
}

}  // namespace sad

7.4.4 Implementation of Feature-Based LiDAR Odometry

        The processing flow is similar to the NDT-based odometry. The main difference is that there are two local maps, one storing corner points and one storing planar points. During registration against the local maps, corner points use point-to-line residuals and planar points use point-to-plane residuals. The two kinds of residuals, together with their Jacobians, are stacked into one Gauss-Newton problem to solve for the pose.
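
        Written out explicitly (these expressions follow the Jacobian blocks assembled in the code below), for a pose (R, t), a feature point q in the scan, a line fitted to the nearest corner-map neighbors with point p_{0} and unit direction d, and a plane fitted to the nearest surface-map neighbors with normal n and offset d_{n}:

e_{edge}=d^{\wedge}\left(Rq+t-p_{0}\right),\quad \frac{\partial e_{edge}}{\partial \delta\theta}=-d^{\wedge}Rq^{\wedge},\quad \frac{\partial e_{edge}}{\partial \delta t}=d^{\wedge}

e_{surf}=n^{\top}\left(Rq+t\right)+d_{n},\quad \frac{\partial e_{surf}}{\partial \delta\theta}=-n^{\top}Rq^{\wedge},\quad \frac{\partial e_{surf}}{\partial \delta t}=n^{\top}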

//
// Created by xiang on 2022/7/25.
//

#include "ch7/loam-like/loam_like_odom.h"
#include "ch7/loam-like/feature_extraction.h"
#include "common/lidar_utils.h"
#include "common/math_utils.h"
#include "common/point_cloud_utils.h"

#include <execution>

namespace sad {

LoamLikeOdom::LoamLikeOdom(LoamLikeOdom::Options options)
    : options_(options), feature_extraction_(new FeatureExtraction), global_map_(new PointCloudType()) {
    if (options_.display_realtime_cloud_) {
        viewer_ = std::make_shared<PCLMapViewer>(0.1);
    }

    kdtree_edge_.SetEnableANN();
    kdtree_surf_.SetEnableANN();
}

void LoamLikeOdom::ProcessPointCloud(FullCloudPtr cloud) {
    LOG(INFO) << "processing frame " << cnt_frame_++;
    // step 1. 提特征
    CloudPtr current_edge(new PointCloudType), current_surf(new PointCloudType);
    feature_extraction_->Extract(cloud, current_edge, current_surf);

    if (current_edge->size() < options_.min_edge_pts_ || current_surf->size() < options_.min_surf_pts_) {
        LOG(ERROR) << "not enough edge/surf pts: " << current_edge->size() << "," << current_surf->size();
        return;
    }

    LOG(INFO) << "edge: " << current_edge->size() << ", surf: " << current_surf->size();

    if (local_map_edge_ == nullptr || local_map_surf_ == nullptr) {
        // 首帧特殊处理
        local_map_edge_ = current_edge;
        local_map_surf_ = current_surf;

        kdtree_edge_.BuildTree(local_map_edge_);
        kdtree_surf_.BuildTree(local_map_surf_);

        edges_.emplace_back(current_edge);
        surfs_.emplace_back(current_surf);
        return;
    }

    /// 与局部地图配准
    SE3 pose = AlignWithLocalMap(current_edge, current_surf);
    CloudPtr scan_world(new PointCloudType);
    pcl::transformPointCloud(*ConvertToCloud<FullPointType>(cloud), *scan_world, pose.matrix());

    CloudPtr edge_world(new PointCloudType), surf_world(new PointCloudType);
    pcl::transformPointCloud(*current_edge, *edge_world, pose.matrix());
    pcl::transformPointCloud(*current_surf, *surf_world, pose.matrix());

    if (IsKeyframe(pose)) {
        LOG(INFO) << "inserting keyframe";
        last_kf_pose_ = pose;
        last_kf_id_ = cnt_frame_;

        // 重建local map
        edges_.emplace_back(edge_world);
        surfs_.emplace_back(surf_world);

        if (edges_.size() > options_.num_kfs_in_local_map_) {
            edges_.pop_front();
        }
        if (surfs_.size() > options_.num_kfs_in_local_map_) {
            surfs_.pop_front();
        }

        local_map_surf_.reset(new PointCloudType);
        local_map_edge_.reset(new PointCloudType);

        for (auto& s : edges_) {
            *local_map_edge_ += *s;
        }
        for (auto& s : surfs_) {
            *local_map_surf_ += *s;
        }

        local_map_surf_ = VoxelCloud(local_map_surf_, 1.0);
        local_map_edge_ = VoxelCloud(local_map_edge_, 1.0);

        LOG(INFO) << "insert keyframe, surf pts: " << local_map_surf_->size()
                  << ", edge pts: " << local_map_edge_->size();

        kdtree_surf_.BuildTree(local_map_surf_);
        kdtree_edge_.BuildTree(local_map_edge_);

        *global_map_ += *scan_world;
    }

    LOG(INFO) << "current pose: " << pose.translation().transpose() << ", "
              << pose.so3().unit_quaternion().coeffs().transpose();

    if (viewer_ != nullptr) {
        viewer_->SetPoseAndCloud(pose, scan_world);
    }
}

bool LoamLikeOdom::IsKeyframe(const SE3& current_pose) {
    if ((cnt_frame_ - last_kf_id_) > 30) {
        return true;
    }

    // 只要与上一帧相对运动超过一定距离或角度,就记关键帧
    SE3 delta = last_kf_pose_.inverse() * current_pose;
    return delta.translation().norm() > options_.kf_distance_ ||
           delta.so3().log().norm() > options_.kf_angle_deg_ * math::kDEG2RAD;
}

SE3 LoamLikeOdom::AlignWithLocalMap(CloudPtr edge, CloudPtr surf) {
    // 这部分的ICP需要自己写
    SE3 pose;
    if (estimated_poses_.size() >= 2) {
        // 从最近两个pose来推断
        SE3 T1 = estimated_poses_[estimated_poses_.size() - 1];
        SE3 T2 = estimated_poses_[estimated_poses_.size() - 2];
        pose = T1 * (T2.inverse() * T1);
    }

    int edge_size = edge->size();
    int surf_size = surf->size();

    // 我们来写一些并发代码
    for (int iter = 0; iter < options_.max_iteration_; ++iter) {
        std::vector<bool> effect_surf(surf_size, false);
        std::vector<Eigen::Matrix<double, 1, 6>> jacob_surf(surf_size);  // 点面的残差是1维的
        std::vector<double> errors_surf(surf_size);

        std::vector<bool> effect_edge(edge_size, false);
        std::vector<Eigen::Matrix<double, 3, 6>> jacob_edge(edge_size);  // 点线的残差是3维的
        std::vector<Vec3d> errors_edge(edge_size);

        std::vector<int> index_surf(surf_size);
        std::iota(index_surf.begin(), index_surf.end(), 0);  // 填入
        std::vector<int> index_edge(edge_size);
        std::iota(index_edge.begin(), index_edge.end(), 0);  // 填入

        // gauss-newton 迭代
        // 最近邻,角点部分
        if (options_.use_edge_points_) {
            std::for_each(std::execution::par_unseq, index_edge.begin(), index_edge.end(), [&](int idx) {
                Vec3d q = ToVec3d(edge->points[idx]);
                Vec3d qs = pose * q;

                // 检查最近邻
                std::vector<int> nn_indices;

                kdtree_edge_.GetClosestPoint(ToPointType(qs), nn_indices, 5);
                effect_edge[idx] = false;

                if (nn_indices.size() >= 3) {
                    std::vector<Vec3d> nn_eigen;
                    for (auto& n : nn_indices) {
                        nn_eigen.emplace_back(ToVec3d(local_map_edge_->points[n]));
                    }

                    // point-to-line residual: fit a line to the nearest neighbors
                    Vec3d d, p0;
                    if (!math::FitLine(nn_eigen, p0, d, options_.max_line_distance_)) {
                        return;
                    }

                    Vec3d err = SO3::hat(d) * (qs - p0);
                    if (err.norm() > options_.max_line_distance_) {
                        return;
                    }

                    effect_edge[idx] = true;

                    // build residual
                    Eigen::Matrix<double, 3, 6> J;
                    J.block<3, 3>(0, 0) = -SO3::hat(d) * pose.so3().matrix() * SO3::hat(q);
                    J.block<3, 3>(0, 3) = SO3::hat(d);

                    jacob_edge[idx] = J;
                    errors_edge[idx] = err;
                }
            });
        }

        /// 最近邻,平面点部分
        if (options_.use_surf_points_) {
            std::for_each(std::execution::par_unseq, index_surf.begin(), index_surf.end(), [&](int idx) {
                Vec3d q = ToVec3d(surf->points[idx]);
                Vec3d qs = pose * q;

                // 检查最近邻
                std::vector<int> nn_indices;

                kdtree_surf_.GetClosestPoint(ToPointType(qs), nn_indices, 5);
                effect_surf[idx] = false;

                if (nn_indices.size() == 5) {
                    std::vector<Vec3d> nn_eigen;
                    for (auto& n : nn_indices) {
                        nn_eigen.emplace_back(ToVec3d(local_map_surf_->points[n]));
                    }

                    // 点面残差
                    Vec4d n;
                    if (!math::FitPlane(nn_eigen, n)) {
                        return;
                    }

                    double dis = n.head<3>().dot(qs) + n[3];
                    if (fabs(dis) > options_.max_plane_distance_) {
                        return;
                    }

                    effect_surf[idx] = true;

                    // build residual
                    Eigen::Matrix<double, 1, 6> J;
                    J.block<1, 3>(0, 0) = -n.head<3>().transpose() * pose.so3().matrix() * SO3::hat(q);
                    J.block<1, 3>(0, 3) = n.head<3>().transpose();

                    jacob_surf[idx] = J;
                    errors_surf[idx] = dis;
                }
            });
        }

        // 累加Hessian和error,计算dx
        // 原则上可以用reduce并发,写起来比较麻烦,这里写成accumulate
        double total_res = 0;
        int effective_num = 0;

        Mat6d H = Mat6d::Zero();
        Vec6d err = Vec6d::Zero();

        for (const auto& idx : index_surf) {
            if (effect_surf[idx]) {
                H += jacob_surf[idx].transpose() * jacob_surf[idx];
                err += -jacob_surf[idx].transpose() * errors_surf[idx];
                effective_num++;
                total_res += errors_surf[idx] * errors_surf[idx];
            }
        }

        for (const auto& idx : index_edge) {
            if (effect_edge[idx]) {
                H += jacob_edge[idx].transpose() * jacob_edge[idx];
                err += -jacob_edge[idx].transpose() * errors_edge[idx];
                effective_num++;
                total_res += errors_edge[idx].norm();
            }
        }

        if (effective_num < options_.min_effective_pts_) {
            LOG(WARNING) << "effective num too small: " << effective_num;
            return pose;
        }

        Vec6d dx = H.inverse() * err;
        pose.so3() = pose.so3() * SO3::exp(dx.head<3>());
        pose.translation() += dx.tail<3>();

        // 更新
        LOG(INFO) << "iter " << iter << " total res: " << total_res << ", eff: " << effective_num
                  << ", mean res: " << total_res / effective_num << ", dxn: " << dx.norm();

        if (dx.norm() < options_.eps_) {
            LOG(INFO) << "converged, dx = " << dx.transpose();
            break;
        }
    }

    estimated_poses_.emplace_back(pose);
    return pose;
}

void LoamLikeOdom::SaveMap(const std::string& path) {
    if (global_map_ && global_map_->empty() == false) {
        sad::SaveCloudToFile(path, *global_map_);
    }
}

}  // namespace sad

        Refinements of the feature extraction, such as separating out the ground or ceiling and handling them on their own, or subdividing the cloud by scan-line curvature into sharp corners, less sharp corners, flat planar points, less flat planar points, and so on, can further improve the registration quality.

7.5 Loosely Coupled LIO System

        In a loosely coupled LIO system, a state estimator computes the vehicle's own state: the IMU and the wheel odometry provide inertial and velocity observations, while the point-cloud matching result provides the pose observation. In loosely coupled estimation, point-cloud registration and the Kalman filter remain relatively decoupled. This section implements a simple loosely coupled LIO system built from the ESKF and the incremental NDT odometry.

7.5.1 Coordinate Frames

        Three coordinate frames are involved: the world frame (W), the IMU frame (I), and the LiDAR frame (L). The point cloud is transformed into the IMU frame, and the whole system performs localization and mapping centered on the IMU. A LiDAR point is converted into the IMU frame by:

p_{I} = T_{IL} p_{L} = R_{IL} p_{L} + t_{IL}

        In this way no additional extrinsic parameters enter the observation model, which keeps the Jacobians from becoming more complicated.
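
        A minimal sketch of this conversion is shown below, assuming the Sophus SE3 type and PCL's transformPointCloud that are already used elsewhere in this chapter; the function name is only for illustration:

#include <pcl/common/transforms.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <sophus/se3.hpp>

// Transform a scan from the LiDAR frame into the IMU frame: p_I = T_IL * p_L.
// T_IL is the LiDAR-to-IMU extrinsic (illustrative sketch only).
inline void TransformScanToImuFrame(const pcl::PointCloud<pcl::PointXYZI>& scan_L,
                                    pcl::PointCloud<pcl::PointXYZI>& scan_I,
                                    const Sophus::SE3d& T_IL) {
    // apply the 4x4 matrix of T_IL to every point of the scan
    pcl::transformPointCloud(scan_L, scan_I, T_IL.matrix());
}

        In the code in Section 7.5.4, LooselyLIO::Align() performs exactly this step on the undistorted scan using the stored extrinsic TIL_.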

7.5.2 Motion and Observation Equations of the Loosely Coupled LIO System

        The motion equation is the same as (3.47) in Chapter 3. The pose output by the LiDAR odometry is treated directly as an observation of the state variables R and p. Written in the abstract form z = h(x), the translation part is:

p_{LO}=p

        and the rotation part is written as:

\delta \theta =Log(R^{\top}R_{LO})

        The observation Jacobian of the rotation observation with respect to the error-state variable \delta \theta is the identity I, and the residual is defined as:

r_{R} =-Log(R^{\top}R_{LO})
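
        Assuming the 18-dimensional error state from Chapter 3 ordered as [\delta p, \delta v, \delta \theta, \delta b_{g}, \delta b_{a}, \delta g] (this ordering is an assumption carried over from the ESKF there), the pose observation above touches only the position and rotation blocks, so its observation Jacobian is:

H=\begin{bmatrix} I_{3} & 0_{3} & 0_{3} & 0_{3} & 0_{3} & 0_{3}\\ 0_{3} & 0_{3} & I_{3} & 0_{3} & 0_{3} & 0_{3} \end{bmatrix}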

7.5.3 Data Preparation for the Loosely Coupled LIO System

        The implementation consists of three parts:
        1. Synchronizing IMU data with LiDAR data. The LiDAR typically runs at 10 Hz and the IMU at around 100 Hz, so the roughly 10 IMU readings between two consecutive LiDAR scans are processed together.
        2. Motion compensation of the LiDAR scan, which requires a pose source covering the scan period; the ESKF prediction at each IMU reading can be used.
        3. Taking the predicted pose from the ESKF, handing it to the odometry algorithm, and feeding the registered pose back into the ESKF.
        The CloudConvert class converts point clouds of various formats into PCL point clouds:

#pragma once

#include <livox_ros_driver/CustomMsg.h>
#include <pcl_conversions/pcl_conversions.h>

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

#include "common/point_types.h"

namespace sad {

/**
 * 预处理雷达点云
 *
 * 将Velodyne, ouster, avia等数据转到FullCloud
 * 该类由MessageSync类持有,负责将收到的雷达消息与IMU同步并预处理后,再交给LO/LIO算法
 */
class CloudConvert {
   public:
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW

    enum class LidarType {
        AVIA = 1,  // 大疆的固态雷达
        VELO32,    // Velodyne 32线
        OUST64,    // ouster 64线
    };

    CloudConvert() = default;
    ~CloudConvert() = default;

    /**
     * 处理livox avia 点云
     * @param msg
     * @param pcl_out
     */
    void Process(const livox_ros_driver::CustomMsg::ConstPtr &msg, FullCloudPtr &pcl_out);

    /**
     * 处理sensor_msgs::PointCloud2点云
     * @param msg
     * @param pcl_out
     */
    void Process(const sensor_msgs::PointCloud2::ConstPtr &msg, FullCloudPtr &pcl_out);

    /// 从YAML中读取参数
    void LoadFromYAML(const std::string &yaml);

   private:
    void AviaHandler(const livox_ros_driver::CustomMsg::ConstPtr &msg);
    void Oust64Handler(const sensor_msgs::PointCloud2::ConstPtr &msg);
    void VelodyneHandler(const sensor_msgs::PointCloud2::ConstPtr &msg);

    FullPointCloudType cloud_full_, cloud_out_;  // 输出点云
    LidarType lidar_type_ = LidarType::AVIA;     // 雷达类型
    int point_filter_num_ = 1;                   // 跳点
    int num_scans_ = 6;                          // 扫描线数
    float time_scale_ = 1e-3;                    // 雷达点的时间字段与秒的比例
};
}  // namespace sad

        From then on we only need to work with FullCloudPtr. Next, the IMU data are synchronized with the point clouds by the MessageSync class:

//
// Created by xiang on 22-9-9.
//

#ifndef SLAM_IN_AUTO_DRIVING_MEASURE_SYNC_H
#define SLAM_IN_AUTO_DRIVING_MEASURE_SYNC_H

#include "cloud_convert.h"
#include "common/imu.h"
#include "common/point_types.h"

#include <glog/logging.h>
#include <deque>

namespace sad {

/// IMU 数据与雷达同步
struct MeasureGroup {
    MeasureGroup() { this->lidar_.reset(new FullPointCloudType()); };

    double lidar_begin_time_ = 0;   // 雷达包的起始时间
    double lidar_end_time_ = 0;     // 雷达的终止时间
    FullCloudPtr lidar_ = nullptr;  // 雷达点云
    std::deque<IMUPtr> imu_;        // 上一时时刻到现在的IMU读数
};

/**
 * 将激光数据和IMU数据同步
 */
class MessageSync {
   public:
    using Callback = std::function<void(const MeasureGroup &)>;

    MessageSync(Callback cb) : callback_(cb), conv_(new CloudConvert) {}

    /// 初始化
    void Init(const std::string &yaml);

    /// 处理IMU数据
    void ProcessIMU(IMUPtr imu) {
        double timestamp = imu->timestamp_;
        if (timestamp < last_timestamp_imu_) {
            LOG(WARNING) << "imu loop back, clear buffer";
            imu_buffer_.clear();
        }

        last_timestamp_imu_ = timestamp;
        imu_buffer_.emplace_back(imu);
    }

    /**
     * 处理sensor_msgs::PointCloud2点云
     * @param msg
     */
    void ProcessCloud(const sensor_msgs::PointCloud2::ConstPtr &msg) {
        if (msg->header.stamp.toSec() < last_timestamp_lidar_) {
            LOG(ERROR) << "lidar loop back, clear buffer";
            lidar_buffer_.clear();
        }

        FullCloudPtr cloud(new FullPointCloudType());
        conv_->Process(msg, cloud);
        lidar_buffer_.push_back(cloud);
        time_buffer_.push_back(msg->header.stamp.toSec());
        last_timestamp_lidar_ = msg->header.stamp.toSec();

        Sync();
    }

    /// 处理Livox点云
    void ProcessCloud(const livox_ros_driver::CustomMsg::ConstPtr &msg) {
        if (msg->header.stamp.toSec() < last_timestamp_lidar_) {
            LOG(WARNING) << "lidar loop back, clear buffer";
            lidar_buffer_.clear();
        }

        last_timestamp_lidar_ = msg->header.stamp.toSec();
        FullCloudPtr ptr(new FullPointCloudType());
        conv_->Process(msg, ptr);

        if (ptr->empty()) {
            return;
        }

        lidar_buffer_.emplace_back(ptr);
        time_buffer_.emplace_back(last_timestamp_lidar_);

        Sync();
    }

   private:
    /// 尝试同步IMU与激光数据,成功时返回true
    bool Sync();

    Callback callback_;                             // 同步数据后的回调函数
    std::shared_ptr<CloudConvert> conv_ = nullptr;  // 点云转换
    std::deque<FullCloudPtr> lidar_buffer_;         // 雷达数据缓冲
    std::deque<IMUPtr> imu_buffer_;                 // imu数据缓冲
    double last_timestamp_imu_ = -1.0;              // 最近imu时间
    double last_timestamp_lidar_ = 0;               // 最近lidar时间
    std::deque<double> time_buffer_;
    bool lidar_pushed_ = false;
    MeasureGroup measures_;
    double lidar_end_time_ = 0;
};

}  // namespace sad

#endif  // SLAM_IN_AUTO_DRIVING_MEASURE_SYNC_H

        In MessageSync::Sync(), the IMU readings whose timestamps fall between the start and end times of the LiDAR scan are collected into the measurement group, and the callback is then invoked to process the synchronized data.

//
// Created by xiang on 23-2-2.
//
#include "measure_sync.h"

namespace sad {

bool MessageSync::Sync() {
    if (lidar_buffer_.empty() || imu_buffer_.empty()) {
        return false;
    }

    if (!lidar_pushed_) {
        measures_.lidar_ = lidar_buffer_.front();
        measures_.lidar_begin_time_ = time_buffer_.front();

        lidar_end_time_ = measures_.lidar_begin_time_ + measures_.lidar_->points.back().time / double(1000);

        measures_.lidar_end_time_ = lidar_end_time_;
        lidar_pushed_ = true;
    }

    if (last_timestamp_imu_ < lidar_end_time_) {
        return false;
    }

    double imu_time = imu_buffer_.front()->timestamp_;
    measures_.imu_.clear();
    while ((!imu_buffer_.empty()) && (imu_time < lidar_end_time_)) {
        imu_time = imu_buffer_.front()->timestamp_;
        if (imu_time > lidar_end_time_) {
            break;
        }
        measures_.imu_.push_back(imu_buffer_.front());
        imu_buffer_.pop_front();
    }

    lidar_buffer_.pop_front();
    time_buffer_.pop_front();
    lidar_pushed_ = false;

    if (callback_) {
        callback_(measures_);
    }

    return true;
}

void MessageSync::Init(const std::string& yaml) { conv_->LoadFromYAML(yaml); }

}  // namespace sad

7.5.4 Main Workflow of the Loosely Coupled LIO System

        The system holds an ESKF object and a MessageSync object; the synchronized point-cloud and IMU data are delivered via callback_(measures_) to LooselyLIO::ProcessMeasurements. If the IMU has not been initialized yet, a static initialization is performed; otherwise the IMU data are used to predict the state, the predicted states are used to undistort the point cloud, and the undistorted cloud is finally registered.

void LooselyLIO::ProcessMeasurements(const MeasureGroup &meas) {
    LOG(INFO) << "call meas, imu: " << meas.imu_.size() << ", lidar pts: " << meas.lidar_->size();
    measures_ = meas;

    if (imu_need_init_) {
        // 初始化IMU系统
        TryInitIMU();
        return;
    }

    // 利用IMU数据进行状态预测
    Predict();

    // 对点云去畸变
    Undistort();

    // 配准
    Align();
}

         State prediction with the IMU data:

void LooselyLIO::Predict() {
    imu_states_.clear();
    imu_states_.emplace_back(eskf_.GetNominalState());

    /// 对IMU状态进行预测
    for (auto &imu : measures_.imu_) {
        eskf_.Predict(*imu);
        imu_states_.emplace_back(eskf_.GetNominalState());
    }
}

        Undistortion: motion compensation using the poses predicted from the IMU.
        Motion compensation removes the distortion of the scan caused by the vehicle's own motion. Most LiDARs scan at 10-20 Hz, i.e. 0.1 s or 0.05 s per scan; if the vehicle moves at 20 m/s, the vehicle pose at the start and at the end of one scan may differ by 1-2 m.
        To perform motion compensation we need the LiDAR pose at any time t \in (0, t_{s}) within the scan period t_{s}. Denote the IMU pose at time t by T_{WI}(t) and at the scan end time by T_{WI}(t_{s}), and let T_{IL} be the extrinsic between the IMU and the LiDAR. The motion-compensation transform for a point p measured at time t can then be written as:

p' = T_{LI} T_{IW}(t_{s}) T_{WI}(t) T_{IL} p

        That is, all points are transformed into the LiDAR pose at the end of the scan.
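
        The per-point pose T_{WI}(t) is obtained by interpolating between the two ESKF-predicted IMU states that bracket t. A minimal sketch of such an interpolation, assuming this is the behavior of the math::PoseInterp helper used with imu_states_ in the code below, is, for t_{i} \le t \le t_{i+1} and s=(t-t_{i})/(t_{i+1}-t_{i}):

R(t)=R_{i}\,Exp\left(s\,Log\left(R_{i}^{\top}R_{i+1}\right)\right),\quad p(t)=(1-s)\,p_{i}+s\,p_{i+1}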

void LooselyLIO::Undistort() {
    auto cloud = measures_.lidar_;
    auto imu_state = eskf_.GetNominalState();  // 最后时刻的状态
    SE3 T_end = SE3(imu_state.R_, imu_state.p_);

    if (options_.save_motion_undistortion_pcd_) {
        sad::SaveCloudToFile("./data/ch7/before_undist.pcd", *cloud);
    }

    /// 将所有点转到最后时刻状态上
    std::for_each(std::execution::par_unseq, cloud->points.begin(), cloud->points.end(), [&](auto &pt) {
        SE3 Ti = T_end;
        NavStated match;

        // 根据pt.time查找时间,pt.time是该点打到的时间与雷达开始时间之差,单位为毫秒
        math::PoseInterp<NavStated>(
            measures_.lidar_begin_time_ + pt.time * 1e-3, imu_states_, [](const NavStated &s) { return s.timestamp_; },
            [](const NavStated &s) { return s.GetSE3(); }, Ti, match);

        Vec3d pi = ToVec3d(pt);
        Vec3d p_compensate = TIL_.inverse() * T_end.inverse() * Ti * TIL_ * pi;

        pt.x = p_compensate(0);
        pt.y = p_compensate(1);
        pt.z = p_compensate(2);
    });
    scan_undistort_ = cloud;

    if (options_.save_motion_undistortion_pcd_) {
        sad::SaveCloudToFile("./data/ch7/after_undist.pcd", *cloud);
    }
}

        The undistorted cloud is first transformed into the IMU frame, converted to the plain point type, and downsampled with a voxel filter. The LiDAR data are then processed: the predicted pose is taken from the ESKF and used as the initial guess for the incremental NDT, and the pose obtained after registration is fed back into the ESKF as an observation for the update.

void LooselyLIO::Align() {
    FullCloudPtr scan_undistort_trans(new FullPointCloudType);
    pcl::transformPointCloud(*scan_undistort_, *scan_undistort_trans, TIL_.matrix());
    scan_undistort_ = scan_undistort_trans;

    auto current_scan = ConvertToCloud<FullPointType>(scan_undistort_);

    // voxel 滤波
    pcl::VoxelGrid<PointType> voxel;
    voxel.setLeafSize(0.5, 0.5, 0.5);
    voxel.setInputCloud(current_scan);

    CloudPtr current_scan_filter(new PointCloudType);
    voxel.filter(*current_scan_filter);

    /// 处理首帧雷达数据
    if (flg_first_scan_) {
        SE3 pose;
        inc_ndt_lo_->AddCloud(current_scan_filter, pose);
        flg_first_scan_ = false;
        return;
    }

    /// 从EKF中获取预测pose,放入LO,获取LO位姿,最后合入EKF
    SE3 pose_predict = eskf_.GetNominalSE3();
    inc_ndt_lo_->AddCloud(current_scan_filter, pose_predict, true);
    pose_of_lo_ = pose_predict;
    eskf_.ObserveSE3(pose_of_lo_, 1e-2, 1e-2);

    if (options_.with_ui_) {
        // 放入UI
        ui_->UpdateScan(current_scan, eskf_.GetNominalSE3());  // 转成Lidar Pose传给UI
        ui_->UpdateNavState(eskf_.GetNominalState());
    }
    frame_num_++;
}
