Reading the Source of the 3D SLAM Algorithm SegMap (Part 3): The Back End

In SLAM, even the most precise front end introduces small errors. Even something as strong as LeGO-LOAM's feature matching occasionally falls into a local optimum; if you cannot see it, that is only because the scale is not yet large enough. A classic small-scene mapping algorithm like gmapping essentially carries only the front end's odometry error model, so it copes poorly with bumpy ground and easily drifts in large scenes. Large-scale mapping therefore needs a supervisor that keeps reconciling the trajectory accumulated so far. That supervisor is the SLAM back end, a topic packed with terms such as nonlinear optimization, pose graphs, and graph optimization, and SegMap is no exception.

Enough rambling. Let's step into the SegMapper::segMatchThread function, which revolves entirely around loop closure. Loop closure splits into two parts: detection and post-processing.

I. Detecting Loop Closures

Loop detection again goes through SegMatchWorker::processLocalMap, which in turn works with the SegMatch class; all of the key functions come from there. Whether a loop closure succeeded is reported through the loop_closure parameter of the function below, and newcomers curious about the & and * in its signature can use it to study how references and pointers are applied (a minimal sketch of the same idiom follows the function).

const PairwiseMatches& SegMatch::recognize(const PairwiseMatches& predicted_matches,
                                           const unsigned int track_id,
                                           const laser_slam::Time timestamp_ns,
                                           laser_slam::RelativePose* loop_closure) {

  BENCHMARK_BLOCK("SM.Worker.Recognition");

  std::unique_ptr<CorrespondenceRecognizer>& recognizer = recognizers_[track_id];
  recognizer->recognize(predicted_matches);

  // Assume that the matches in the first cluster are true positives. Return in case recognition
  // was unsuccessful.
  const std::vector<PairwiseMatches>& candidate_clusters =
      recognizer->getCandidateClusters();
  if (candidate_clusters.empty()) {
    last_filtered_matches_ = PairwiseMatches();
    return last_filtered_matches_;
  } else {
    last_filtered_matches_ = candidate_clusters.front();
  }

  // If desired, return the loop-closure.
  if (loop_closure != nullptr) {
    BENCHMARK_BLOCK("SM.Worker.Recognition.GetLoopClosure");
    loops_timestamps_.push_back(timestamp_ns);

    // Find the trajectory poses to be linked by the loop-closure.
    // For each segment, find the timestamp of the closest segmentation pose.
    std::vector<Time> source_segmentation_times;
    std::vector<Time> target_segmentation_times;
    std::vector<Id> source_track_ids;
    std::vector<Id> target_track_ids;
    std::vector<Segment> source_segments;
    std::vector<Segment> target_segments;
    for (const auto& match : last_filtered_matches_) {
      Segment segment;
      CHECK(segmented_source_clouds_[track_id].findValidSegmentById(match.ids_.first, &segment));
      source_segmentation_times.push_back(findTimeOfClosestSegmentationPose(segment));
      source_segments.push_back(segment);
      source_track_ids.push_back(segment.track_id);

      CHECK(segmented_target_cloud_.findValidSegmentById(match.ids_.second, &segment));
      target_segmentation_times.push_back(findTimeOfClosestSegmentationPose(segment));
      target_segments.push_back(segment);
      target_track_ids.push_back(segment.track_id);
    }

    const Id source_track_id = findMostOccuringElement(source_track_ids);
    const Id target_track_id = findMostOccuringElement(target_track_ids);
    LOG(INFO) << "Found a loop between source_track_id " << source_track_id << " target_track_id " <<
        target_track_id;

    const Time target_most_occuring_time = findMostOccuringElement(target_segmentation_times);

    Time source_track_time_ns, target_track_time_ns;
    if (source_track_id != target_track_id) {
      // Get the head of the source trajectory.
      Time trajectory_last_time_ns = segmentation_poses_[source_track_id].rbegin()->first;
      Time start_time_of_head_ns;

      if (trajectory_last_time_ns > params_.min_time_between_segment_for_matches_ns) {
        start_time_of_head_ns = trajectory_last_time_ns - params_.min_time_between_segment_for_matches_ns;
      } else {
        start_time_of_head_ns = 0u;
      }

      Trajectory head_poses;

      for (const auto pose: segmentation_poses_[source_track_id]) {
        if (pose.first > start_time_of_head_ns) {
          head_poses.emplace(pose.first, pose.second);
        }
      }

      // Get a window over the target trajectory.
      const Time half_window_size_ns = 180000000000u;
      const Time window_max_value_ns = target_most_occuring_time + half_window_size_ns;
      Time window_min_value_ns;
      if (target_most_occuring_time > half_window_size_ns) {
        window_min_value_ns = target_most_occuring_time - half_window_size_ns;
      } else {
        window_min_value_ns = 0u;
      }
      Trajectory poses_in_window;
      for (const auto& pose: segmentation_poses_[target_track_id]) {
        if (pose.first >= window_min_value_ns &&
            pose.first <=  window_max_value_ns) {

          // Compute center of segments.
          PclPoint segments_center;
          for (const auto& segment: target_segments) {
            segments_center.z += segment.getLastView().centroid.z;
          }
          segments_center.z /= double(target_segments.size());

          // Check that pose lies below the segments center of mass.
          if (!params_.check_pose_lies_below_segments ||
              pose.second.getPosition()(2) < segments_center.z) {
            poses_in_window.emplace(pose.first, pose.second);
          }
        }
      }

      source_track_time_ns =  findTimeOfClosestPose(head_poses, source_segments);
      target_track_time_ns =  findTimeOfClosestPose(poses_in_window, target_segments);
    } else {
      // Split the trajectory into head and tail.
      Time trajectory_last_time_ns = segmentation_poses_[source_track_id].rbegin()->first;
      CHECK_GT(trajectory_last_time_ns, params_.min_time_between_segment_for_matches_ns);
      Time start_time_of_head_ns = trajectory_last_time_ns -
          params_.min_time_between_segment_for_matches_ns;

      Trajectory tail_poses, head_poses;

      for (const auto pose: segmentation_poses_[source_track_id]) {
        if (pose.first < start_time_of_head_ns) {
          tail_poses.emplace(pose.first, pose.second);
        } else {
          head_poses.emplace(pose.first, pose.second);
        }
      }

      source_track_time_ns =  findTimeOfClosestPose(head_poses, source_segments);
      target_track_time_ns =  findTimeOfClosestPose(tail_poses, target_segments);
    }

    loop_closure->time_a_ns = target_track_time_ns;
    loop_closure->time_b_ns = source_track_time_ns;
    loop_closure->track_id_a = target_track_id;
    loop_closure->track_id_b = source_track_id;

    // Again, assume that the best transformation is the correct one.
    CHECK_GT(recognizer->getCandidateTransformations().size(), 0u);
    SE3 w_T_a_b = fromApproximateTransformationMatrix(
        recognizer->getCandidateTransformations().front());
    loop_closure->T_a_b = w_T_a_b;

    // Save the loop closure.
    loop_closures_.push_back(*loop_closure);
  }

  return last_filtered_matches_;
}
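
On the & and * mentioned above: recognize takes its input by const reference to avoid a copy, and the optional loop-closure output by raw pointer, so a caller can pass nullptr to skip that work entirely, as the if (loop_closure != nullptr) branch shows. Here is a minimal sketch of the same idiom, with hypothetical names:

#include <iostream>
#include <vector>

// Hypothetical sketch of the output-parameter idiom used by recognize():
// inputs are passed by const reference (no copy), and the optional output
// is a raw pointer so the caller may pass nullptr to skip the extra work.
struct Result { double value = 0.0; };

void process(const std::vector<double>& input, Result* result) {
  double sum = 0.0;
  for (double v : input) sum += v;
  if (result != nullptr) {  // fill the output only if the caller asked for it
    result->value = sum;
  }
}

int main() {
  std::vector<double> data{1.0, 2.0, 3.0};
  process(data, nullptr);   // caller does not need the result
  Result r;
  process(data, &r);        // caller receives the result through the pointer
  std::cout << r.value << std::endl;
  return 0;
}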

The matching process is described in the recognizers folder. At initialization, every candidate pair comes from predicted_matches, which processLocalMap builds through two stages, findMatches and filterMatches. After each call, segMatchThread moves on to the next worker round-robin:

    track_id = (track_id + 1u) % laser_slam_workers_.size();

Recognition then proceeds in four steps:

1. findMatches uses a random-forest approach, implemented in OpenCvRandomForest::findCandidates. Its first stage runs a kNN (k-nearest-neighbour) search to gather the candidate set for the second stage, and the Eigen API usage here is worth studying. Note that the kNN search relies on the Nabo library: the inputs are one-dimensional feature vectors, and the nearest-neighbour lookup runs over the feature matrix (sketched below). The second stage subtracts the feature vectors of the two clouds and thresholds the norm of the difference to judge whether they could be the same cloud. This resembles a loop-detection trick in visual SLAM where two images are subtracted directly and the norm of the residual is taken; images, however, are badly affected by lighting, while point clouds are comparatively stable. Pairs that might match are passed on to the next step.
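
As a hedged illustration of that first stage (a minimal sketch with illustrative names, not SegMap's exact code), the libnabo kNN interface looks like this:

#include <Eigen/Dense>
#include <nabo/nabo.h>

// Hedged sketch of the libnabo kNN lookup (names are illustrative, not
// SegMap's): each column of `features` is one segment's feature vector and
// `query` is the feature vector of the segment we want candidates for.
void findCandidateFeatures(const Eigen::MatrixXf& features,
                           const Eigen::VectorXf& query, const int k) {
  // Build a kd-tree over the feature matrix (one point per column).
  Nabo::NNSearchF* nns = Nabo::NNSearchF::createKDTreeLinearHeap(features);

  Eigen::VectorXi indices(k);  // column indices of the k nearest features
  Eigen::VectorXf dists2(k);   // squared distances in feature space
  nns->knn(query, indices, dists2, k);

  // Stage two would now subtract each candidate's feature vector from the
  // query and threshold the norm of the difference, e.g.:
  // float diff_norm = (features.col(indices(0)) - query).norm();
  delete nns;
}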

2. filterMatches checks that the ids and timestamps line up; this part is brief (a rough sketch of the idea follows).
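
A rough, hypothetical sketch of the idea (the CandidateMatch struct and its field names are my own illustration, not SegMap's types); the threshold plays the same role as the min_time_between_segment_for_matches_ns parameter seen in the recognize() code above:

#include <vector>

// Hypothetical stand-in for a candidate match carrying observation times.
struct CandidateMatch {
  long source_time_ns;  // when the source segment was observed
  long target_time_ns;  // when the target segment was observed
};

// Keep only matches whose segments are separated by at least min_dt_ns, so a
// trajectory cannot "close a loop" against its own immediate past.
std::vector<CandidateMatch> filterByTime(
    const std::vector<CandidateMatch>& candidates, const long min_dt_ns) {
  std::vector<CandidateMatch> filtered;
  for (const auto& match : candidates) {
    if (match.source_time_ns - match.target_time_ns >= min_dt_ns) {
      filtered.push_back(match);
    }
  }
  return filtered;
}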

3. recognize comes in two variants: the geometric-similarity-based GeometricConsistency method and the consistency-graph-based ConsistencyGraph method, both of which inherit from the base class CorrespondenceRecognizer.

①. GeometricConsistencyRecognizer

It uses pcl::Correspondences from the PCL library to describe the consistency between the two clouds; each correspondence holds the indices of the corresponding points in the source and target clouds. Its initial feature extraction is somewhat ICP-like, collecting the centroids together with a pseudo squared distance derived from the match confidence:

for (size_t i = 0u; i < predicted_matches.size(); ++i) {
    // First centroid.
    PclPoint first_centroid = predicted_matches.at(i).getCentroids().first;
    first_cloud->push_back(first_centroid);
    // Second centroid.
    PclPoint second_centroid = predicted_matches.at(i).getCentroids().second;
    second_cloud->push_back(second_centroid);
    float squared_distance = 1.0 - predicted_matches.at(i).confidence_;
    correspondences->push_back(pcl::Correspondence(i, i, squared_distance));
}

It then uses PCL's GeometricConsistencyGrouping directly; see http://docs.pointclouds.org/trunk/classpcl_1_1_geometric_consistency_grouping.html#details for the details. The main idea is again a norm comparison, and the resulting clusters are sorted from best to worst; std::transform is used to apply the conversion to the whole batch of results in one call (a hedged usage sketch follows).
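
A hedged sketch of that PCL call (parameter values are illustrative; first_cloud, second_cloud, and correspondences are the objects built in the snippet above):

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/recognition/cg/geometric_consistency.h>

// Sketch of PCL's geometric consistency grouping on the centroid clouds.
void clusterMatches(pcl::PointCloud<pcl::PointXYZ>::Ptr first_cloud,
                    pcl::PointCloud<pcl::PointXYZ>::Ptr second_cloud,
                    pcl::CorrespondencesPtr correspondences) {
  pcl::GeometricConsistencyGrouping<pcl::PointXYZ, pcl::PointXYZ> grouping;
  grouping.setGCSize(0.4);     // consensus set resolution, in metres
  grouping.setGCThreshold(4);  // minimum cluster size to accept
  grouping.setInputCloud(first_cloud);
  grouping.setSceneCloud(second_cloud);
  grouping.setModelSceneCorrespondences(correspondences);

  // Each output entry is one geometrically consistent cluster together with
  // the 6-DoF transformation aligning its correspondences.
  std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f>>
      transformations;
  std::vector<pcl::Correspondences> clustered_correspondences;
  grouping.recognize(transformations, clustered_correspondences);
}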

②. GraphBasedGeometricConsistencyRecognizer

  typedef boost::adjacency_list<boost::vecS, boost::vecS, boost::undirectedS> ConsistencyGraph;

This is an undirected graph stored as an adjacency list, and the method is an interesting one: each candidate match becomes a vertex, geometrically consistent pairs of matches are joined by edges, and the recognizer extracts the largest cluster of mutually consistent matches, i.e. a maximum clique of this graph (a minimal sketch follows); refer to the paper for the exact procedure. It is used as a singleton: only one such object exists in the whole program, and it is static rather than belonging to any class instance.
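
A minimal sketch of how such a consistency graph can be built (the MatchCentroids struct and the distance test are my own illustrative assumptions, not SegMap's exact code):

#include <cmath>
#include <vector>
#include <boost/graph/adjacency_list.hpp>
#include <Eigen/Dense>

typedef boost::adjacency_list<boost::vecS, boost::vecS, boost::undirectedS>
    ConsistencyGraph;

// Hypothetical container for one candidate match: the centroid of a segment
// in the local cloud and the centroid of its counterpart in the target map.
struct MatchCentroids {
  Eigen::Vector3f source;
  Eigen::Vector3f target;
};

// Two matches are taken to be consistent when the distance between their
// source centroids equals (within `resolution`) the distance between their
// target centroids, since a rigid transform preserves distances. The largest
// set of mutually consistent matches is then a maximum clique of this graph.
ConsistencyGraph buildConsistencyGraph(
    const std::vector<MatchCentroids>& matches, const float resolution) {
  ConsistencyGraph graph(matches.size());  // one vertex per candidate match
  for (size_t i = 0u; i + 1u < matches.size(); ++i) {
    for (size_t j = i + 1u; j < matches.size(); ++j) {
      const float d_source = (matches[i].source - matches[j].source).norm();
      const float d_target = (matches[i].target - matches[j].target).norm();
      if (std::abs(d_source - d_target) <= resolution) {
        boost::add_edge(i, j, graph);  // edge = geometrically consistent pair
      }
    }
  }
  return graph;
}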

4. getCandidateClusters fetches the clustered candidate matches, and the best cluster is taken for the subsequent computation. A quick plug here for emplace_back: it is cheaper than push_back because it constructs the element in place and saves the cost of the copy constructor, behaving much like push_back(std::move(...)), as the comparison below shows.
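
A small comparison, using pcl::Correspondence since that is the element type seen above (the argument values are arbitrary):

#include <pcl/correspondence.h>
#include <vector>

int main() {
  std::vector<pcl::Correspondence> correspondences;
  // push_back constructs a temporary Correspondence, then copies/moves it in:
  correspondences.push_back(pcl::Correspondence(0, 0, 0.5f));
  // emplace_back forwards the arguments and constructs the element in place,
  // skipping the temporary entirely:
  correspondences.emplace_back(0, 0, 0.5f);
  return 0;
}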

The rest is easier to follow: the pose transformation between the two detected clouds becomes the reference for the back-end correction, and the timestamps of the trajectory keyframes at the head and tail of the loop are recorded. Once a loop is successfully detected, control jumps to the loop-closure handling function, i.e. the back-end update.

II. Updating the Graph Structure

SegMap's graph structure is likewise a factor graph. Updating it requires the loop_closure obtained earlier; its relative pose yields the new constraint to be added, in a procedure broadly similar to LeGO-LOAM's.

void IncrementalEstimator::processLoopClosure(const RelativePose& loop_closure) {
  std::lock_guard<std::recursive_mutex> lock(full_class_mutex_);

  if (loop_closure.track_id_a == loop_closure.track_id_b) {
    CHECK_LT(loop_closure.time_a_ns, loop_closure.time_b_ns) << "Loop closure has invalid time.";
  }
  CHECK_GE(loop_closure.time_a_ns, laser_tracks_[loop_closure.track_id_a]->getMinTime()) <<
      "Loop closure has invalid time.";
  CHECK_LE(loop_closure.time_a_ns, laser_tracks_[loop_closure.track_id_a]->getMaxTime()) <<
      "Loop closure has invalid time.";
  CHECK_GE(loop_closure.time_b_ns, laser_tracks_[loop_closure.track_id_b]->getMinTime()) <<
      "Loop closure has invalid time.";
  CHECK_LE(loop_closure.time_b_ns, laser_tracks_[loop_closure.track_id_b]->getMaxTime()) <<
      "Loop closure has invalid time.";

  RelativePose updated_loop_closure = loop_closure;

  // Convert the reference frame of the loop closure transformation.
  // When applying the transformation w_T_a_b to the source cloud, it will align it with the
  // target cloud.
  // The coarse alignment w_T_a_b comes from the point-cloud transform found during
  // loop-closure matching; applied to the source cloud (track b) it aligns it with
  // the target cloud (track a) in the world frame. The corrected world pose of b is
  // thus w_T_a_b * T_w_b, and expressing the constraint in frame a gives
  // a_T_a_b = T_w_a^-1 * w_T_a_b * T_w_b. track_id_a and track_id_b index the
  // target and source trajectories respectively.
  SE3 w_T_a_b = loop_closure.T_a_b;
  SE3 T_w_a = laser_tracks_[loop_closure.track_id_a]->evaluate(loop_closure.time_a_ns);
  SE3 T_w_b = laser_tracks_[loop_closure.track_id_b]->evaluate(loop_closure.time_b_ns);
  SE3 a_T_a_b = T_w_a.inverse() * w_T_a_b * T_w_b;
  updated_loop_closure.T_a_b = a_T_a_b;

  // Apply an ICP step if desired.
  // Refine the coarse alignment into an accurate point-cloud transform via ICP.
  if (params_.do_icp_step_on_loop_closures)
  {
    // Get the initial guess.
    PointMatcher::TransformationParameters initial_guess =
        updated_loop_closure.T_a_b.getTransformationMatrix().cast<float>();

    LOG(INFO) << "Creating the submaps for loop closure ICP.";
    Clock clock;
    DataPoints sub_map_a;
    DataPoints sub_map_b;
    laser_tracks_[updated_loop_closure.track_id_a]->buildSubMapAroundTime(
        loop_closure.time_a_ns, params_.loop_closures_sub_maps_radius, &sub_map_a);
    laser_tracks_[updated_loop_closure.track_id_b]->buildSubMapAroundTime(
        loop_closure.time_b_ns, params_.loop_closures_sub_maps_radius, &sub_map_b);
    clock.takeTime();
    LOG(INFO) << "Took " << clock.getRealTime() << " ms to create loop closures sub maps.";

    LOG(INFO) << "Creating loop closure ICP.";
    clock.start();
    PointMatcher::TransformationParameters icp_solution = icp_.compute(sub_map_b, sub_map_a,
                                                                       initial_guess);
    clock.takeTime();
    LOG(INFO) << "Took " << clock.getRealTime() <<
        " ms to compute the icp_solution for the loop closure.";

    updated_loop_closure.T_a_b = convertTransformationMatrixToSE3(icp_solution);
  }

  LOG(INFO) << "Creating loop closure factor.";

  NonlinearFactorGraph new_factors, new_associations_factors;
  Expression<SE3> exp_T_w_b(laser_tracks_[loop_closure.track_id_b]->getValueExpression(
      updated_loop_closure.time_b_ns));
  Expression<SE3> exp_T_w_a(laser_tracks_[loop_closure.track_id_a]->getValueExpression(
      updated_loop_closure.time_a_ns));
  Expression<SE3> exp_T_a_w(kindr::minimal::inverse(exp_T_w_a));
  Expression<SE3> exp_relative(kindr::minimal::compose(exp_T_a_w, exp_T_w_b));
  // The noise model typically encodes the expected matching error.
  ExpressionFactor<SE3> new_factor(loop_closure_noise_model_, updated_loop_closure.T_a_b,
                                   exp_relative);
  new_factors.push_back(new_factor);

  ExpressionFactor<SE3> new_association_factor(first_association_noise_model_,
                                               updated_loop_closure.T_a_b, exp_relative);

  new_associations_factors.push_back(new_association_factor);

  LOG(INFO) << "Estimating the trajectories.";
  std::vector<unsigned int> affected_worker_ids;  // Ids of the two trajectories involved.
  affected_worker_ids.push_back(loop_closure.track_id_a);
  affected_worker_ids.push_back(loop_closure.track_id_b);
  Values new_values;
  // Check the constraints between the trajectories, removing redundant ones so
  // that only constraints not yet added remain.
  Values result = estimateAndRemove(new_factors, new_associations_factors,
                                    new_values, affected_worker_ids,
                                    updated_loop_closure.time_b_ns);

  LOG(INFO) << "Updating the trajectories after LC.";
  for (auto& track: laser_tracks_) {
    track->updateFromGTSAMValues(result);
  }
  LOG(INFO) << "Updating the trajectories after LC done.";
}

To sum up SegMap: the overall pipeline feels somewhat elaborate. Its key novelty, segmatch, works well when mapping structured environments such as buildings and roads, and the back-end loop detection has a high success rate; in environments dense with grass and trees, however, both segmentation and recognition degrade. Compared with LeGO-LOAM, SegMap is better suited to scenes with many distinct "things" in them, for example lots of parked vehicles, which plays to the strengths of segmentation. Its front end is more traditional and not as precise as LeGO-LOAM's point/line/plane feature extraction, yet the overall mapping quality of the two is comparable: both are good but leave room for improvement. That is my direct impression from practice. In terms of getting started, LeGO-LOAM is also far easier to install and build than SegMap, so I suggest first watching both systems map the same scene, tuning their parameters, and only then deciding which one to adopt.
