LidarSlam (1): Overall Algorithm Flow

Contents

1. Algorithm Overview

2. CheckFrames

3. ExtractKeypoints

4. ComputeEgoMotion

5. ComputeSensorConstraints

6. Localization

7. EstimateOverlap

8. CheckMotionLimits

9. CheckKeyFrame

10. UpdateMapsUsingTworld


1. Algorithm Overview

This article introduces the open-source LiDAR SLAM project developed by Kitware. The source code is available here:

KEU-ComputerVision / Slam · GitLab

This SLAM algorithm is designed for lidar sensors and belongs to the LOAM family of algorithms. The source code has three main parts: lib_slam contains the core algorithm, paraview_wrapping wraps lib_slam for the ParaView framework, and ros_wrapping wraps lib_slam for the ROS framework. We first analyze the lib_slam algorithm; its entry point is the Slam class.

The lidar SLAM algorithm consists of three consecutive steps:

(1) Keypoint extraction.

This step extracts keypoints from the point cloud. The lidar scan lines are projected onto the XY plane and scaled according to their vertical angle. The curvature of each point is then computed, and points are classified into two keypoint categories: edge keypoints with high curvature and planar keypoints with low curvature.

(2) Ego-motion estimation

This step computes the motion between two consecutive lidar frames. A constant-velocity model is used, so the motion can be parameterized as a translation and a rotation, and the within-frame motion of each point is obtained by interpolation according to its timestamp. Because lidar point clouds are sparse, point-to-point matching between two consecutive frames is not feasible; instead, the point-to-line and point-to-plane matching scheme from the LOAM paper is used to build the residuals. The rotation (R) and translation (T) are updated by minimizing the distances from points to their matched lines and planes. Since the pose transformation is nonlinear, the Levenberg-Marquardt (LM) algorithm is used for the optimization (a minimal residual sketch follows this list).

(3) Localization

This step refines the frame-to-frame transform obtained in the previous step and adds the refined point cloud to the map.
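
As a minimal, self-contained sketch of the LOAM-style residuals mentioned in step (2), the snippet below computes point-to-line and point-to-plane distances with Eigen. It is only an illustration of the residual geometry, not the project's EstimatePose implementation, and the function names are made up for this example.

#include <Eigen/Dense>
#include <cmath>
#include <iostream>

// Distance from point p to the line passing through a with direction d
double PointToLineDistance(const Eigen::Vector3d& p,
                           const Eigen::Vector3d& a,
                           const Eigen::Vector3d& d)
{
  return (p - a).cross(d).norm() / d.norm();
}

// Distance from point p to the plane passing through a with normal n
double PointToPlaneDistance(const Eigen::Vector3d& p,
                            const Eigen::Vector3d& a,
                            const Eigen::Vector3d& n)
{
  return std::abs((p - a).dot(n)) / n.norm();
}

int main()
{
  // A candidate pose maps a keypoint of the current frame into the target frame;
  // the LM optimizer adjusts this pose to shrink such residuals.
  Eigen::Isometry3d pose = Eigen::Isometry3d::Identity();
  Eigen::Vector3d keypoint(1.0, 0.5, 0.2);
  Eigen::Vector3d transformed = pose * keypoint;

  std::cout << "point-to-line residual:  "
            << PointToLineDistance(transformed, Eigen::Vector3d(0, 0, 0), Eigen::Vector3d(1, 0, 0)) << "\n"
            << "point-to-plane residual: "
            << PointToPlaneDistance(transformed, Eigen::Vector3d(0, 0, 0), Eigen::Vector3d(0, 0, 1)) << std::endl;
  return 0;
}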

The algorithm uses three coordinate frames:

(1) LIDAR {L}: the lidar frame, with its origin at the lidar center. Each lidar scan is expressed in this frame. The transform between the LIDAR frame and the BASE frame is fixed.

(2) BASE {B}: the frame of the body carrying the lidar (e.g. a car). All other sensors (GPS, IMU) must be transformed into this frame so that all measurements share a common spatial frame.

(3) WORLD {W}: the world frame, used to express multiple moving bodies in a common frame. The output trajectory is the pose of the body's BASE frame in the world frame.
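
Assuming the fixed LIDAR-to-BASE calibration is known, a point measured in {L} is expressed in {W} by chaining the two transforms. The sketch below only illustrates this composition; the variable names and numeric values are made up.

#include <Eigen/Geometry>
#include <iostream>

int main()
{
  // Fixed extrinsic calibration: LIDAR frame {L} expressed in the BASE frame {B}
  Eigen::Isometry3d T_B_L = Eigen::Isometry3d::Identity();
  T_B_L.translation() = Eigen::Vector3d(0.5, 0.0, 1.2);  // e.g. lidar mounted 1.2 m above the base origin

  // SLAM output: BASE frame {B} expressed in the WORLD frame {W} (the published trajectory)
  Eigen::Isometry3d T_W_B = Eigen::Isometry3d::Identity();
  T_W_B.rotate(Eigen::AngleAxisd(0.7854, Eigen::Vector3d::UnitZ()));  // ~45 deg heading
  T_W_B.translation() = Eigen::Vector3d(10.0, 2.0, 0.0);

  // A raw lidar point in {L} becomes a world point by chaining the transforms
  Eigen::Vector3d p_L(3.0, 0.0, 0.0);
  Eigen::Vector3d p_W = T_W_B * T_B_L * p_L;
  std::cout << "point in WORLD: " << p_W.transpose() << std::endl;
  return 0;
}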

The constructor of the Slam class is shown below:

Slam::Slam()
{
  // Allocate a default Keypoint Extractor for device 0
  this->KeyPointsExtractors[0] = std::make_shared<SpinningSensorKeypointExtractor>();

  // Allocate maps
  for (auto k : this->UsableKeypoints)
    this->LocalMaps[k] = std::make_shared<RollingGrid>();

  // Set default maps parameters
  if (this->UseKeypoints[EDGE])
    this->InitMap(EDGE);
  if (this->UseKeypoints[INTENSITY_EDGE])
    this->InitMap(INTENSITY_EDGE);
  if (this->UseKeypoints[PLANE])
    this->InitMap(PLANE);
  if (this->UseKeypoints[BLOB])
    this->InitMap(BLOB);

  // Reset SLAM internal state
  this->Reset();
}

The constructor creates the lidar keypoint extractor, allocates a local map for each keypoint type, initializes the default map parameters, and resets the SLAM internal state.

The AddFrames member function of the Slam class is the main entry point for processing lidar frames. It performs the following steps:

(1) Check the input point clouds, via CheckFrames

(2) Set the initial pose (this->Tworld)

(3) Extract keypoints, via ExtractKeypoints

(4) Compute the ego-motion between frames, via ComputeEgoMotion

(5) If data from other sensors are available, compute the corresponding sensor constraints, via ComputeSensorConstraints

(6) Localization: refine the current frame pose and optionally remove motion distortion from the point cloud, via Localization

(7) Estimate the overlap and check the motion limits, via EstimateOverlap and CheckMotionLimits

(8) Check whether the current frame is a keyframe, via CheckKeyFrame

(9) Update the landmark absolute poses, via UpdateAbsolutePose of the LandmarkManager class

(10) Add the keypoints to the current maps, via UpdateMapsUsingTworld

(11) Log the current frame state: pose, covariance matrix and keypoints

(12) Update the IMU with the new pose and refine the IMU bias

2. CheckFrames

The input is an array of point clouds, because the SLAM algorithm supports multi-lidar setups. The function checks whether the point clouds are empty: it returns true as long as at least one of them is non-empty, and false if all of them are empty. It also returns false if the current point clouds have the same timestamp as the previously processed ones.

// Check that the current input frames can be used by the SLAM algorithm. Multi-lidar setups are supported, so the parameter is an array of point clouds
bool Slam::CheckFrames(const std::vector<PointCloud::Ptr>& frames)
{
  // Check input frames and return if they are all empty
  bool allFramesEmpty = true;
  for (unsigned int i = 0; i < frames.size(); ++i)
  {
    if (frames[i] && !frames[i]->empty())
      allFramesEmpty = false;
    else
      PRINT_WARNING("SLAM input frame " << i << " is an empty pointcloud : frame ignored.");
  }
  // If at least one frame contains data, processing continues; if all point clouds are empty, return false
  if (allFramesEmpty)
  {
    PRINT_ERROR("SLAM input only contains empty pointclouds : exiting.");
    return false;
  }

  // Skip frames if it has the same timestamp as previous ones (will induce problems in extrapolation)
  // At this point CurrentFrames still holds the previous frames; it is only reassigned after this function returns
  if (frames[0]->header.stamp == this->CurrentFrames[0]->header.stamp)
  {
    // The current point cloud has the same timestamp as the previous one, so this frame is skipped
    PRINT_ERROR("SLAM frames have the same timestamp (" << frames[0]->header.stamp << ") as previous ones : frames ignored.");
    return false;
  }

  // Check frame dropping
  for (unsigned int i = 0; i < frames.size(); ++i)
  {
    unsigned int droppedFrames = frames[i]->header.seq - this->PreviousFramesSeq[i] - 1;
    if ((this->PreviousFramesSeq[i] > 0) && (droppedFrames > 0))
      PRINT_WARNING(droppedFrames << " frame(s)" << (frames.size() > 1 ? " from LiDAR device " + std::to_string(i) : "") << " were dropped by SLAM\n");
    this->PreviousFramesSeq[i] = frames[i]->header.seq;
  }

  return true;
}

3. ExtractKeypoints

This function relies on the SpinningSensorKeypointExtractor class to extract keypoints from the point clouds: the curvature of each point is computed within a neighborhood, and points are classified as edge or planar keypoints according to their curvature.
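
Before looking at Slam::ExtractKeypoints below, here is a rough, standalone sketch of the curvature idea (a simplified LOAM-style criterion, not the actual SpinningSensorKeypointExtractor code; thresholds and names are illustrative only):

#include <Eigen/Dense>
#include <vector>
#include <iostream>

// Simplified LOAM-style curvature: sum of differences to the 2*k neighbors on the same scan line.
// High curvature -> edge candidate, low curvature -> plane candidate.
double Curvature(const std::vector<Eigen::Vector3d>& scanLine, int i, int k = 5)
{
  Eigen::Vector3d diff = Eigen::Vector3d::Zero();
  for (int j = -k; j <= k; ++j)
    if (j != 0)
      diff += scanLine[i + j] - scanLine[i];
  return diff.squaredNorm() / scanLine[i].squaredNorm();
}

int main()
{
  // Fake scan line: a flat wall followed by a slanted wall, meeting in a corner
  std::vector<Eigen::Vector3d> scanLine;
  for (int i = 0; i < 30; ++i)
    scanLine.emplace_back(0.1 * i, (i < 15) ? 5.0 : 5.0 + 0.5 * (i - 15), 0.0);

  const double edgeThreshold = 0.01, planeThreshold = 0.001;
  for (int i = 5; i < 25; ++i)
  {
    double c = Curvature(scanLine, i);
    if (c > edgeThreshold)
      std::cout << "point " << i << " : edge keypoint (c = " << c << ")\n";
    else if (c < planeThreshold)
      std::cout << "point " << i << " : plane keypoint (c = " << c << ")\n";
  }
  return 0;
}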

// Extract keypoints and transform the point clouds from the LIDAR frame to the BASE frame
// This function mainly calls methods of the SpinningSensorKeypointExtractor class
void Slam::ExtractKeypoints()
{
  PRINT_VERBOSE(2, "========== Keypoints extraction ==========");

  // Current keypoints become previous ones
  this->PreviousRawKeypoints = this->CurrentRawKeypoints;

  // Extract keypoints from each input cloud
  std::map<Keypoint, std::vector<PointCloud::Ptr>> keypoints;
  for (const auto& frame: this->CurrentFrames)
  {
    // If the frame is empty, ignore it
    if (frame->empty())
      continue;

    // Get keypoints extractor to use for this LiDAR device
    int lidarDevice = frame->front().device_id;
    // Check if KE exists
    if (!this->KeyPointsExtractors.count(lidarDevice))
    {
      // If KE does not exist but we are only using a single KE, use default one
      if (this->KeyPointsExtractors.size() == 1)
      {
        PRINT_WARNING("Input frame comes from LiDAR device " << lidarDevice
                    << " but no keypoints extractor has been set for this device : using default extractor for device 0.");
        lidarDevice = 0;
      }
      // Otherwise ignore frame
      else
      {
        PRINT_ERROR("Input frame comes from LiDAR device " << lidarDevice
                    << " but no keypoints extractor has been set for this device : ignoring frame.");
        continue;
      }
    }
    KeypointExtractorPtr& ke = this->KeyPointsExtractors[lidarDevice];
    ke->Enable(this->UsableKeypoints);
    // Extract keypoints from this frame
    ke->ComputeKeyPoints(frame);
    for (auto k : this->UsableKeypoints)
      keypoints[k].push_back(ke->GetKeypoints(k));
  }

  // Merge all keypoints extracted from different frames together
  for (auto k : this->UsableKeypoints)
    // Transform the point clouds of all lidar devices into a common frame
    // Arguments: (1) the point clouds to transform; (2) true to transform to the world frame, false for the BASE frame; (3) whether to undistort the motion
    // Here the clouds are transformed to the BASE frame, without undistortion
    this->CurrentRawKeypoints[k] = this->AggregateFrames(keypoints[k], false, false);

  if (this->Verbosity >= 2)
  {
    std::cout << "Extracted features : ";
    for (auto k : this->UsableKeypoints)
      std::cout << this->CurrentRawKeypoints[k]->size() << " " << Utils::Plural(KeypointTypeNames.at(k)) << " ";
    std::cout << std::endl;
  }
}

4. ComputeEgoMotion

This function computes the relative motion between two consecutive frames. An initial pose is first obtained from external measurements (e.g. IMU) or from a constant-velocity model, and is then refined by optimization.

this->EgoMotion selects the method used to compute the frame-to-frame motion:

NONE: the current frame pose equals the previous one, i.e. no relative motion is assumed. Fast, but accuracy degrades when the motion is large.

MOTION_EXTRAPOLATION: the velocity is computed from the two previous frames and the current pose is obtained by linear extrapolation. Fast, suitable for nearly constant, continuous motion.

REGISTRATION: the relative motion is computed by registering the current frame onto the previous one. Computationally expensive, but it does not rely on a constant-velocity model and gives relatively accurate results.

MOTION_EXTRAPOLATION_AND_REGISTRATION: the constant-velocity model is used first, then the estimate is fine-tuned by registering the current frame onto the previous one. Expensive but accurate.

EXTERNAL: the motion is computed from other measurements (e.g. IMU); if they are unavailable, this falls back to NONE.

EXTERNAL_OR_MOTION_EXTRAPOLATION: external measurements are used when available; otherwise the constant-velocity model is used (a minimal extrapolation sketch follows this list).
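
Before the code, here is a minimal sketch of the constant-velocity extrapolation used by MOTION_EXTRAPOLATION. The project relies on its own LinearInterpolation helper, which is not reproduced here; this sketch assumes a simple lerp of the translation plus slerp of the rotation, extrapolated to the query time.

#include <Eigen/Geometry>
#include <iostream>

// Interpolate/extrapolate a pose at time t from two timed poses (T0, t0) and (T1, t1).
// Translation is lerped, rotation is slerped; t > t1 extrapolates with a constant-velocity assumption.
Eigen::Isometry3d InterpolatePose(const Eigen::Isometry3d& T0, const Eigen::Isometry3d& T1,
                                  double t, double t0, double t1)
{
  double alpha = (t - t0) / (t1 - t0);
  Eigen::Quaterniond q0(T0.linear()), q1(T1.linear());
  Eigen::Isometry3d T = Eigen::Isometry3d::Identity();
  T.linear() = q0.slerp(alpha, q1).toRotationMatrix();
  T.translation() = (1.0 - alpha) * T0.translation() + alpha * T1.translation();
  return T;
}

int main()
{
  // Two previous scan poses 0.1 s apart, moving 1 m along x per scan
  Eigen::Isometry3d T0 = Eigen::Isometry3d::Identity();
  Eigen::Isometry3d T1 = Eigen::Isometry3d::Identity();
  T1.translation() = Eigen::Vector3d(1.0, 0.0, 0.0);

  // Predict the pose of the next scan at t = 0.2 s (alpha = 2 -> extrapolation)
  Eigen::Isometry3d Tnext = InterpolatePose(T0, T1, 0.2, 0.0, 0.1);
  // The ego-motion prior Trelative would then be Tworld.inverse() * Tnext
  std::cout << "extrapolated translation: " << Tnext.translation().transpose() << std::endl;
  return 0;
}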

void Slam::ComputeEgoMotion()
{
  PRINT_VERBOSE(2, "========== Ego-Motion ==========");

  if (this->LogStates.empty())
  {
    PRINT_WARNING("The LogStates is empty : cannot use them to compute ego motion")
    return;
  }

  // Reset ego-motion
  this->Trelative = Eigen::Isometry3d::Identity();

  bool externalAvailable = false;
  // Linearly extrapolate previous motion to estimate new pose
  // First check whether external pose measurements (e.g. from an IMU) are available
  if (!this->LogStates.empty() &&
      (this->EgoMotion == EgoMotionMode::EXTERNAL ||
       this->EgoMotion == EgoMotionMode::EXTERNAL_OR_MOTION_EXTRAPOLATION))
  {
    if (this->PoseHasData())
    {
      ExternalSensors::PoseMeasurement synchPreviousPoseMeas; // Virtual measure with synchronized timestamp and calibration applied
      if (this->PoseManager->ComputeSynchronizedMeasureBase(this->LogStates.back().Time, synchPreviousPoseMeas))
      {
        ExternalSensors::PoseMeasurement synchPoseMeas; // Virtual measure with synchronized timestamp and calibration applied
        if (this->PoseManager->ComputeSynchronizedMeasureBase(this->CurrentTime, synchPoseMeas))
        {
          this->Trelative = synchPreviousPoseMeas.Pose.inverse() * synchPoseMeas.Pose;
          externalAvailable = true;
          PRINT_VERBOSE(3, "Prior pose computed using external poses supplied");
        }
      }
    }
    else
      PRINT_WARNING("External poses are empty : cannot use them to compute ego motion")
  }

  // Linearly extrapolate previous motion to estimate new pose
  // Use the constant-velocity model
  if (this->LogStates.size() >= 2 && (
      (this->EgoMotion == EgoMotionMode::MOTION_EXTRAPOLATION ||
       this->EgoMotion == EgoMotionMode::MOTION_EXTRAPOLATION_AND_REGISTRATION) ||
      (this->EgoMotion == EgoMotionMode::EXTERNAL_OR_MOTION_EXTRAPOLATION &&
       !externalAvailable)))
  {
    // Estimate new Tworld with a constant velocity model
    auto itSt = this->LogStates.end();

    const double t1 = (--itSt)->Time;
    const Eigen::Isometry3d& T1 = itSt->Isometry;
    const double t0 = (--itSt)->Time;
    const Eigen::Isometry3d& T0 = itSt->Isometry;
    if (std::abs((this->CurrentTime - t1) / (t1 - t0)) > this->MaxExtrapolationRatio)
      PRINT_WARNING("Unable to extrapolate scan pose from previous motion : extrapolation time is too far.")
    else
    {
      Eigen::Isometry3d nextTworldEstimation = LinearInterpolation(T0, T1, this->CurrentTime, t0, t1);
      this->Trelative = this->Tworld.inverse() * nextTworldEstimation;
    }
  }

  // Fine-tune the pose estimate by optimization
  // Refine Trelative estimation by registering current frame on previous one
  if (this->EgoMotion == EgoMotionMode::REGISTRATION ||
      this->EgoMotion == EgoMotionMode::MOTION_EXTRAPOLATION_AND_REGISTRATION)
  {
    // kd-tree to process fast nearest neighbor
    // among the keypoints of the previous pointcloud
    IF_VERBOSE(3, Utils::Timer::Init("EgoMotion : build KD tree"));

    // Build a new submap for PreviousRawKeypoints
    Maps previousKeypoints;
    this->InitSubMaps(previousKeypoints);
    // Reduce the leaf size to get all the keypoints from the previous frame
    for (auto k : this->UsableKeypoints)
      previousKeypoints[k]->SetLeafSize(0.05);
    for (auto k : this->UsableKeypoints)
      previousKeypoints[k]->Add(this->PreviousRawKeypoints[k]);

    // The iteration is not directly on Keypoint types
    // because of openMP behaviour which needs int iteration on MSVC
    int nbKeypointTypes = static_cast<int>(this->UsableKeypoints.size());
    // Build a kd-tree for previousKeypoints maps
    #pragma omp parallel for num_threads(std::min(this->NbThreads, nbKeypointTypes))
    for (int i = 0; i < nbKeypointTypes; ++i)
    {
      Keypoint k = static_cast<Keypoint>(this->UsableKeypoints[i]);
      if (!previousKeypoints[k]->IsSubMapKdTreeValid())
        previousKeypoints[k]->BuildSubMapKdTree();
    }

    if (this->Verbosity >= 2)
    {
      std::cout << "Keypoints extracted from previous frame : ";
      for (auto k : this->UsableKeypoints)
        std::cout << previousKeypoints[k]->GetSubMapKdTree().GetInputCloud()->size() << " " << Utils::Plural(KeypointTypeNames.at(k)) << " ";
      std::cout << std::endl;
    }

    IF_VERBOSE(3, Utils::Timer::StopAndDisplay("EgoMotion : build KD tree"));
    IF_VERBOSE(3, Utils::Timer::Init("Ego-Motion : whole ICP-LM loop"));

    // Set localization parameters which do not have a setter
    this->EgoMotionParams.MatchingParams.NbThreads = static_cast<unsigned int>(this->NbThreads);
    this->EgoMotionParams.MatchingParams.SingleEdgePerRing = true;
    // ICP - Levenberg-Marquardt loop to update Trelative
    this->EstimatePose(this->CurrentRawKeypoints, previousKeypoints,
                       this->EgoMotionParams, this->Trelative,
                       this->EgoMotionMatchingResults);

    IF_VERBOSE(3, Utils::Timer::StopAndDisplay("Ego-Motion : whole ICP-LM loop"));
    if (this->Verbosity >= 2)
    {
      std::cout << "Matched keypoints: " << this->TotalMatchedKeypoints << " (";
      for (auto k : this->UsableKeypoints)
        std::cout << this->EgoMotionMatchingResults[k].NbMatches() << " " << Utils::Plural(KeypointTypeNames.at(k)) << " ";
      std::cout << ")" << std::endl;
    }
  }

  // Print EgoMotion results
  SET_COUT_FIXED_PRECISION(3);
  PRINT_VERBOSE(2, "Estimated Ego-Motion (motion since last frame):\n"
                   " translation = [" << this->Trelative.translation().transpose() << "] m\n"
                   " rotation    = [" << Utils::Rad2Deg(Utils::RotationMatrixToRPY(this->Trelative.linear())).transpose() << "] °");
  RESET_COUT_FIXED_PRECISION;
}

5. ComputeSensorConstraints

This function adds constraints from sensors other than the lidar, including wheel odometry, IMU and camera landmarks. These constraints only act between two consecutive frames.

void Slam::ComputeSensorConstraints()
{
  if (this->WheelOdomManager && this->WheelOdomManager->CanBeUsedLocally() &&
      this->WheelOdomManager->ComputeConstraint(this->CurrentTime))
    PRINT_VERBOSE(3, "Wheel odometry constraint added")
  if (this->GravityManager && this->GravityManager->CanBeUsedLocally() &&
      this->GravityManager->ComputeConstraint(this->CurrentTime))
    PRINT_VERBOSE(3, "IMU gravity constraint added")
  if (this->LmCanBeUsedLocally())
  {
    for (auto& idLm : this->LandmarksManagers)
    {
      PRINT_VERBOSE(3, "Checking state of tag #" << idLm.first)
      if (idLm.second.ComputeConstraint(this->CurrentTime))
        PRINT_VERBOSE(3, "\t Adding constraint for tag #" << idLm.first)
    }
  }
  if (this->PoseManager && this->PoseManager->CanBeUsedLocally())
  {
    if (!this->LogStates.empty())
    {
      // Update last pose information because the external pose constraint is relative
      // The last pose logged corresponds to last lidar frame
      this->PoseManager->SetPrevLidarTime(this->LogStates.back().Time);
      this->PoseManager->SetPrevPoseTransform(this->LogStates.back().Isometry);
      if (this->PoseManager->ComputeConstraint(this->CurrentTime))
        PRINT_VERBOSE(3, "External pose constraint added")
    }
  }
}

6. Localization

The relative pose computed in the previous step is first composed into the world frame, local sub-maps (kd-trees) are built, and the pose is then refined by iterative nonlinear optimization.

// Compute the pose of the current frame in the world frame from the relative pose computed in the previous step,
// and register it to the global map using keypoint matching
void Slam::Localization()
{
  this->LocalizationUncertainty = LocalOptimizer::RegistrationError();
  this->Valid = true;
  PRINT_VERBOSE(2, "========== Localization ==========");

  // Integrate the relative motion to the world transformation
  // Store previous tworld for next iteration
  this->Tworld = this->Tworld * this->Trelative;

  // Init undistorted keypoints clouds from raw points
  // The iteration is not directly on Keypoint types
  // because of openMP behaviour which needs int iteration on MSVC
  int nbKeypointTypes = static_cast<int>(this->UsableKeypoints.size());
  #pragma omp parallel for num_threads(std::min(this->NbThreads, nbKeypointTypes))
  for (int i = 0; i < nbKeypointTypes; ++i)
  {
    Keypoint k = static_cast<Keypoint>(this->UsableKeypoints[i]);
    this->CurrentUndistortedKeypoints[k].reset(new PointCloud);
    *this->CurrentUndistortedKeypoints[k] = *this->CurrentRawKeypoints[k];
  }

  // Init and run undistortion if required
  if (this->Undistortion)
  {
    IF_VERBOSE(3, Utils::Timer::Init("Localization : initial undistortion"));
    if (this->Undistortion != UndistortionMode::EXTERNAL)
    {
      // Init the within frame motion interpolator time bounds
      this->InitUndistortion();
      // Undistort keypoints clouds
      this->RefineUndistortion();
    }
    else if (this->PoseHasData())
    {
      // Get synchronized point pose relatively to frame
      // CurrentUndistortedKeypoints are represented in base frame
      for (auto k : this->UsableKeypoints)
        this->UndistortWithPoseMeasurement(this->CurrentUndistortedKeypoints[k], this->CurrentTime);
    }

    IF_VERBOSE(3, Utils::Timer::StopAndDisplay("Localization : initial undistortion"));
  }

  // Get keypoints from maps and build kd-trees for fast nearest neighbors search
  IF_VERBOSE(3, Utils::Timer::Init("Localization : map keypoints extraction"));

  // The iteration is not directly on Keypoint types
  // because of openMP behaviour which needs int iteration on MSVC
  nbKeypointTypes = static_cast<int>(this->UsableKeypoints.size());
  #pragma omp parallel for num_threads(std::min(this->NbThreads, nbKeypointTypes))
  for (int i = 0; i < nbKeypointTypes; ++i)
  {
    Keypoint k = static_cast<Keypoint>(this->UsableKeypoints[i]);
    // Check the current frame contains not null number of k type keypoints
    if (this->CurrentUndistortedKeypoints[k] && this->CurrentUndistortedKeypoints[k]->empty())
      continue;
    // If the map has been updated, the KD-tree needs to be updated
    if (!this->LocalMaps[k]->IsSubMapKdTreeValid())
    {
      // If maps are fixed, we can build a single KD-tree
      // of the entire map to avoid rebuilding it again
      if (this->MapUpdate == MappingMode::NONE)
        this->LocalMaps[k]->BuildSubMapKdTree();

      // Otherwise, we only extract the local sub maps
      // to build a local and smaller KD-tree
      else
      {
        if (this->LocalMaps[k]->IsTimeThreshold())
        {
          IF_VERBOSE(3, Utils::Timer::Init("Localization : clearing old points"));
          this->LocalMaps[k]->ClearPoints(this->CurrentTime);
          IF_VERBOSE(3, Utils::Timer::StopAndDisplay("Localization : clearing old points"));
        }
        // Estimate current keypoints bounding box
        PointCloud currWorldKeypoints;
        pcl::transformPointCloud(*this->CurrentUndistortedKeypoints[k], currWorldKeypoints, this->Tworld.matrix());
        Eigen::Vector4f minPoint, maxPoint;
        pcl::getMinMax3D(currWorldKeypoints, minPoint, maxPoint);

        // Build submap of all points lying in this bounding box
        // Moving objects are rejected but the constraint is removed
        // if less than half the number of current keypoints are extracted from the map
        this->LocalMaps[k]->BuildSubMapKdTree(minPoint.head<3>().array(), maxPoint.head<3>().array(), currWorldKeypoints.size() / 2);
      }
    }
  }

  if (this->Verbosity >= 2)
  {
    std::cout << "Keypoints extracted from map : ";
    for (auto k : this->UsableKeypoints)
    {
      std::cout << this->LocalMaps[k]->GetSubMapKdTree().GetInputCloud()->size()
                << " " << Utils::Plural(KeypointTypeNames.at(k)) << " ";
    }
    std::cout << std::endl;
  }

  IF_VERBOSE(3, Utils::Timer::StopAndDisplay("Localization : map keypoints extraction"));
  IF_VERBOSE(3, Utils::Timer::Init("Localization : whole ICP-LM loop"));

  // Set localization parameters which do not have a setter
  this->LocalizationParams.MatchingParams.NbThreads = static_cast<unsigned int>(this->NbThreads);
  this->LocalizationParams.MatchingParams.SingleEdgePerRing = false;
  // ICP - Levenberg-Marquardt loop to update Tworld
  this->LocalizationUncertainty = this->EstimatePose(this->CurrentUndistortedKeypoints,
                                                     this->LocalMaps,
                                                     this->LocalizationParams,
                                                     this->Tworld,
                                                     this->LocalizationMatchingResults);
  this->Valid = this->LocalizationUncertainty.Valid;

  if (!this->Valid)
  {
    // Reset state to previous one to avoid instability
    this->Tworld = this->LogStates.empty() ? Eigen::UnalignedIsometry3d::Identity() : this->LogStates.back().Isometry;

    if (this->Undistortion)
      this->WithinFrameMotion.SetTransforms(Eigen::Isometry3d::Identity(), Eigen::Isometry3d::Identity());
  }

  IF_VERBOSE(3, Utils::Timer::StopAndDisplay("Localization : whole ICP-LM loop"));

  // Optionally print localization optimization summary
  if (this->Verbosity >= 2)
  {
    SET_COUT_FIXED_PRECISION(3);
    std::cout << "Matched keypoints: " << this->TotalMatchedKeypoints << " (";
    for (auto k : this->UsableKeypoints)
      std::cout << this->LocalizationMatchingResults[k].NbMatches() << " " << Utils::Plural(KeypointTypeNames.at(k)) << " ";

    std::cout << ")"
              << "\nPosition uncertainty    = " << this->LocalizationUncertainty.PositionError    << " m"
              << " (along [" << this->LocalizationUncertainty.PositionErrorDirection.transpose()    << "])"
              << "\nOrientation uncertainty = " << this->LocalizationUncertainty.OrientationError << " °"
              << " (along [" << this->LocalizationUncertainty.OrientationErrorDirection.transpose() << "])"
              << std::endl;
    RESET_COUT_FIXED_PRECISION;
  }
}

7. EstimateOverlap

All frames of the current input are transformed into the world frame and their overlap ratio with the keypoint sub-maps is computed. LCP (Largest Common Pointset) evaluates the quality of an ICP result by computing the overlap ratio between two point clouds (a rough illustration follows).
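
Confidence::LCPEstimator itself is not shown in this article; the snippet below is only a rough, brute-force illustration of the LCP idea, assuming the overlap is taken as the fraction of frame points that have a map point within a small radius (a real implementation would query the map kd-trees and sub-sample the frame):

#include <Eigen/Dense>
#include <vector>
#include <iostream>

// Rough LCP-like overlap: fraction of frame points having a map point within maxDist.
double EstimateOverlap(const std::vector<Eigen::Vector3d>& framePoints,
                       const std::vector<Eigen::Vector3d>& mapPoints,
                       double maxDist)
{
  if (framePoints.empty())
    return 0.0;
  int nbInliers = 0;
  for (const auto& p : framePoints)
  {
    for (const auto& m : mapPoints)
    {
      if ((p - m).norm() < maxDist)
      {
        ++nbInliers;
        break;
      }
    }
  }
  return static_cast<double>(nbInliers) / framePoints.size();
}

int main()
{
  // Toy data: map points on a 1 m grid, frame shifted by 5 cm -> overlap close to 1
  std::vector<Eigen::Vector3d> map, frame;
  for (int x = 0; x < 10; ++x)
    for (int y = 0; y < 10; ++y)
    {
      map.emplace_back(x, y, 0.0);
      frame.emplace_back(x + 0.05, y, 0.0);
    }
  std::cout << "overlap = " << EstimateOverlap(frame, map, 0.1) << std::endl;
  return 0;
}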

// Compute the overlap ratio between the current frame and the keypoint sub-maps
void Slam::EstimateOverlap()
{
  // Aggregate all input points into WORLD coordinates
  PointCloud::Ptr aggregatedPoints = this->GetRegisteredFrame();

  // Keep only the maps to use
  Maps mapsToUse;
  for (auto k : this->UsableKeypoints)
  {
    if (this->LocalMaps[k]->IsSubMapKdTreeValid())
      mapsToUse[k] = this->LocalMaps[k];
  }

  // Compute LCP like estimator
  // (see http://geometry.cs.ucl.ac.uk/projects/2014/super4PCS/ for more info)
  this->OverlapEstimation = Confidence::LCPEstimator(aggregatedPoints, mapsToUse, this->OverlapSamplingRatio, this->NbThreads);
  PRINT_VERBOSE(3, "Overlap : " << this->OverlapEstimation << ", estimated on : "
                                << static_cast<int>(aggregatedPoints->size() * this->OverlapSamplingRatio) << " points.");
}

8. CheckMotionLimits

This function computes the pose change over a time period to check whether the motion complies with the velocity and acceleration limits. this->VelocityLimits is an Eigen::Array2f whose first component is the translational velocity limit and whose second component is the rotational velocity limit. this->AccelerationLimits is also an Eigen::Array2f, with the translational acceleration limit in the first component and the rotational acceleration limit in the second. The acceleration is computed in the code as Eigen::Array2f acceleration = (velocity - this->PreviousVelocity) / deltaTime.

The check uses a time window (this->TimeWindowDuration) ending at the current frame: the function searches for the logged pose that best matches the start of this window, then computes the translational and rotational velocities over the window.

// Check whether the current pose complies with the motion limits
void Slam::CheckMotionLimits()
{
  int nPoses = this->LogStates.size();
  if (nPoses == 0)
    return;

  // Extract number of poses to comply with the required window time, and relative time duration.
  double deltaTime = this->CurrentTime - this->LogStates.back().Time;
  double nextDeltaTime = FLT_MAX;
  // Index of the last pose that defines the window starting bound
  // The window ends on current pose
  auto startIt = this->LogStates.end();
  auto beforeBegin = this->LogStates.begin();
  --beforeBegin;
  --startIt;
  // If the time between current pose and the last pose is l.t. TimeWindowDuration, look for the window's starting bound pose
  if (deltaTime < this->TimeWindowDuration)
  {
    // Search an interval containing TimeWindowDuration : [deltaTime, nextDeltaTime]
    while (startIt != beforeBegin)
    {
      deltaTime = nextDeltaTime;
      nextDeltaTime = this->CurrentTime - startIt->Time;
      if (nextDeltaTime >= this->TimeWindowDuration)
        break;
      --startIt;
    }

    // If startIt lays before first element, no interval containing TimeWindowDuration was found, the oldest logged pose is taken
    if (startIt == beforeBegin)
    {
      PRINT_WARNING("Not enough logged trajectory poses to get the required time window to estimate velocity, using a smaller time window of "
                    << nextDeltaTime << "s")
      startIt = this->LogStates.begin();
    }

    // Choose which bound of the interval is the best window's starting bound
    else if (std::abs(deltaTime - this->TimeWindowDuration) < std::abs(nextDeltaTime - this->TimeWindowDuration))
      ++startIt;

    // Actualize deltaTime with the best startIndex
    deltaTime = this->CurrentTime - startIt->Time;
  }
  // If the time between current pose and the last pose is g.t. TimeWindowDuration, take the last pose as window's starting bound
  else
    PRINT_WARNING("The required time window is too short to estimate velocity, using motion since last pose")

  this->ComplyMotionLimits = true;

  // Compute transform between the two pose bounds of the window
  Eigen::Isometry3d TWindow = startIt->Isometry.inverse() * this->Tworld;

  // Compute angular part
  // NOTE : It is not possible to detect an angle greater than PI,
  // the detectable velocity and acceleration are limited on deltaTime
  // Rotation angle in [0, 2pi]
  float angle = Eigen::AngleAxisd(TWindow.linear()).angle();
  // Rotation angle in [0, pi]
  if (angle > M_PI)
    angle = 2 * M_PI - angle;
  angle = Utils::Rad2Deg(angle);
  // Compute linear part
  float distance = TWindow.translation().norm();

  // Compute velocity
  Eigen::Array2f velocity = {distance / deltaTime, angle / deltaTime};
  SET_COUT_FIXED_PRECISION(3);
  // Print local velocity
  PRINT_VERBOSE(3, "Velocity     = " << std::setw(6) << velocity[0] << " m/s,  "
                                     << std::setw(6) << velocity[1] << " °/s")
  RESET_COUT_FIXED_PRECISION;

  if (this->NbrFrameProcessed >= 2)
  {
    // Compute local acceleration in BASE
    Eigen::Array2f acceleration = (velocity - this->PreviousVelocity) / deltaTime;
    // Print local acceleration
    SET_COUT_FIXED_PRECISION(3);
    PRINT_VERBOSE(3, "Acceleration = " << std::setw(6) << acceleration[0] << " m/s2, "
                                       << std::setw(6) << acceleration[1] << " °/s2")
    RESET_COUT_FIXED_PRECISION;

    // Check velocity compliance
    bool complyVelocityLimits = (velocity < this->VelocityLimits).all();
    // Check acceleration compliance
    bool complyAccelerationLimits = (acceleration.abs() < this->AccelerationLimits).all();

    // Set ComplyMotionLimits
    this->ComplyMotionLimits = complyVelocityLimits && complyAccelerationLimits;
  }

  this->PreviousVelocity = velocity;

  if (!this->ComplyMotionLimits)
    PRINT_WARNING("The pose does not comply with the motion limitations. Lidar SLAM may have failed.")
}

9. CheckKeyFrame

The current frame is marked as a keyframe if any of the following conditions is met:

(1) the number of points in the current maps is smaller than 10 times the minimum number of matched keypoints (this->MinNbMatchedKeypoints);

(2) the translation since the last keyframe exceeds the threshold (this->KfDistanceThreshold);

(3) the rotation since the last keyframe exceeds the threshold (this->KfAngleThreshold);

(4) the current frame observes a landmark (tag) that has not been seen for some time, which forces a keyframe and therefore a constraint in the pose graph optimization.

// Check whether the current frame is a keyframe
bool Slam::CheckKeyFrame()
{
  // Compute motion since last keyframe
  Eigen::Isometry3d motionSinceLastKf = this->KfLastPose.inverse() * this->Tworld;
  double transSinceLastKf = motionSinceLastKf.translation().norm();
  double rotSinceLastKf = Eigen::AngleAxisd(motionSinceLastKf.linear()).angle();
  PRINT_VERBOSE(3, "Motion since last keyframe #" << this->KfCounter << ": "
                                                  << transSinceLastKf << " m, "
                                                  << Utils::Rad2Deg(rotSinceLastKf) << " °");
  // Check if current frame is a new keyframe
  // If we don't have enough keyframes yet, the threshold is linearly lowered
  constexpr double MIN_KF_NB = 10.;
  double thresholdCoef = std::min(this->KfCounter / MIN_KF_NB, 1.);
  unsigned int nbMapKpts = 0;
  for (const auto& mapKptsCloud : this->LocalMaps)
    nbMapKpts += mapKptsCloud.second->Size();

  // Mark as keyframe if a new tag was seen after some time
  // This allows to force some keyframes and therefore a constraint in the pose graph optimization
  // if the tag detections are quite sparse
  bool tagRequirement = false;
  for (auto& idLm : this->LandmarksManagers)
  {
    if (idLm.second.NeedsReferencePoseRefresh(this->CurrentTime))
    {
      tagRequirement = true;
      break;
    }
  }

  return nbMapKpts < this->MinNbMatchedKeypoints * 10 ||
         transSinceLastKf >= thresholdCoef * this->KfDistanceThreshold ||
         rotSinceLastKf >= Utils::Deg2Rad(thresholdCoef * this->KfAngleThreshold) ||
         tagRequirement;
}

10. UpdateMapsUsingTworld

This function adds the keypoints of the current frame to the maps.

void Slam::UpdateMapsUsingTworld()
{
  PRINT_VERBOSE(3, "Adding new keyframe #" << this->KfCounter);

  // Transform keypoints to WORLD coordinates
  for (auto k : this->UsableKeypoints)
    this->CurrentWorldKeypoints[k] = this->GetKeypoints(k, true);

  // Add registered points to map
  // The iteration is not directly on Keypoint types
  // because of openMP behaviour which needs int iteration on MSVC
  int nbKeypointTypes = static_cast<int>(this->UsableKeypoints.size());
  #pragma omp parallel for num_threads(std::min(this->NbThreads, nbKeypointTypes))
  for (int i = 0; i < nbKeypointTypes; ++i)
  {
    Keypoint k = static_cast<Keypoint>(this->UsableKeypoints[i]);
    // Add not fixed points
    this->LocalMaps[k]->Add(this->CurrentWorldKeypoints[k], false);
  }
}