Reading the Source Code of Cartographer, a GraphSLAM Implementation

GraphSLAM is one of the mainstream mapping approaches today. Its idea is to abstract the robot's poses and the map features into a graph: the pose at each time step becomes a vertex, and the constraints between poses induced by observations become edges, so that mapping is the process of optimizing the graph's vertices and edges.

The rough GraphSLAM pipeline is:

1. Initialize the pose at every time step from the motion model alone, ignoring the measurements.

2. From the sequence of controls, measurements, and pose estimates, construct the information matrix and information vector via linearization.

3. Reduce the dimensionality of the information matrix and information vector: marginalize out all map features until only the pose variables remain.

4. Compute the posterior over all robot poses and all map features (steps 2-4 are illustrated by the toy sketch below).
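To make steps 2-4 concrete, here is a toy 1-D example of my own (not Cartographer code): three poses joined by two odometry constraints are assembled into an information matrix Omega and an information vector xi, and the posterior mean is recovered as mu = Omega^-1 * xi.

#include <Eigen/Dense>
#include <iostream>

// Toy GraphSLAM in 1-D: poses x0, x1, x2 and two relative "odometry"
// measurements. Each constraint adds blocks to the information matrix
// Omega and the information vector xi (step 2); solving Omega * mu = xi
// recovers the posterior mean over all poses (step 4).
int main() {
  Eigen::Matrix3d omega = Eigen::Matrix3d::Zero();
  Eigen::Vector3d xi = Eigen::Vector3d::Zero();

  omega(0, 0) += 1e6;  // strong prior anchoring x0 at 0 (fixes the gauge)

  const double z01 = 1.0, z12 = 1.2;  // measured relative motions
  const double w = 1.0;               // information (inverse variance)

  // Constraint x1 - x0 = z01, i.e. minimize w * (x1 - x0 - z01)^2.
  omega(0, 0) += w; omega(1, 1) += w; omega(0, 1) -= w; omega(1, 0) -= w;
  xi(0) -= w * z01; xi(1) += w * z01;

  // Constraint x2 - x1 = z12.
  omega(1, 1) += w; omega(2, 2) += w; omega(1, 2) -= w; omega(2, 1) -= w;
  xi(1) -= w * z12; xi(2) += w * z12;

  const Eigen::Vector3d mu = omega.ldlt().solve(xi);
  std::cout << mu.transpose() << std::endl;  // approximately [0, 1.0, 2.2]
}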

Cartographer is a laser-based GraphSLAM mapping system; let's read its source code. We start from the cartographer_ros package: its CMakeLists.txt shows that the main function lives in cartographer_node_main.cc:

int main(int argc, char** argv) {
  google::InitGoogleLogging(argv[0]);
  google::ParseCommandLineFlags(&argc, &argv, true);

  CHECK(!FLAGS_configuration_directory.empty())
      << "-configuration_directory is missing.";
  CHECK(!FLAGS_configuration_basename.empty())
      << "-configuration_basename is missing.";

  ::ros::init(argc, argv, "cartographer_node");
  ::ros::start();

  ::cartographer_ros::ScopedRosLogSink ros_log_sink;
  ::cartographer_ros::Run();
  ::ros::shutdown();
}

Here cartographer_ros::Run() is called, which starts the loop driving the whole mapping process:

void Run() {
  auto file_resolver =
      carto::common::make_unique<carto::common::ConfigurationFileResolver>(
          std::vector<string>{FLAGS_configuration_directory});
  const string code =
      file_resolver->GetFileContentOrDie(FLAGS_configuration_basename);
  carto::common::LuaParameterDictionary lua_parameter_dictionary(
      code, std::move(file_resolver), nullptr);

  Node node(CreateNodeOptions(&lua_parameter_dictionary));
  node.Initialize();
  node.SpinForever();
}

This constructs a Node object. SpinForever() calls ros::spin() to run the callback loop, while Initialize() sets up a number of options: whether the trajectory builder is 2D or 3D, whether IMU and odometry data are used, the storage structure for submaps, and the topics to publish. Here is a fragment of Initialize():

  CHECK_EQ(kTrajectoryBuilderId, map_builder_.AddTrajectoryBuilder());
  sensor_collator_.AddTrajectory(
      kTrajectoryBuilderId, expected_sensor_identifiers,
      [this](const int64 timestamp, std::unique_ptr<SensorData> sensor_data) {
        HandleSensorData(timestamp, std::move(sensor_data));
      });

  submap_list_publisher_ =
      node_handle_.advertise<::cartographer_ros_msgs::SubmapList>(
          kSubmapListTopic, kLatestOnlyPublisherQueueSize);
  submap_query_server_ = node_handle_.advertiseService(
      kSubmapQueryServiceName, &Node::HandleSubmapQuery, this);

  if (options_.map_builder_options.use_trajectory_builder_2d()) {
    occupancy_grid_publisher_ =
        node_handle_.advertise<::nav_msgs::OccupancyGrid>(
            kOccupancyGridTopic, kLatestOnlyPublisherQueueSize,
            true /* latched */);
    occupancy_grid_thread_ =
        std::thread(&Node::SpinOccupancyGridThreadForever, this);
  }

  scan_matched_point_cloud_publisher_ =
      node_handle_.advertise<sensor_msgs::PointCloud2>(
          kScanMatchedPointCloudTopic, kLatestOnlyPublisherQueueSize);

  finish_trajectory_server_ = node_handle_.advertiseService(
      kFinishTrajectoryServiceName, &Node::HandleFinishTrajectory, this);

  wall_timers_.push_back(node_handle_.createWallTimer(
      ::ros::WallDuration(options_.submap_publish_period_sec),
      &Node::PublishSubmapList, this));
  wall_timers_.push_back(node_handle_.createWallTimer(
      ::ros::WallDuration(options_.pose_publish_period_sec),
      &Node::PublishPoseAndScanMatchedPointCloud, this));

In the first statement of this snippet, CHECK_EQ asserts that AddTrajectoryBuilder() returns kTrajectoryBuilderId. map_builder_ is implemented in map_builder.cc of the mapping package:

MapBuilder::MapBuilder(const proto::MapBuilderOptions& options,
                       std::deque<TrajectoryNode::ConstantData>* constant_data)
    : options_(options), thread_pool_(options.num_background_threads()) {
  if (options.use_trajectory_builder_2d()) {
    sparse_pose_graph_2d_ = common::make_unique<mapping_2d::SparsePoseGraph>(
        options_.sparse_pose_graph_options(), &thread_pool_, constant_data);
    sparse_pose_graph_ = sparse_pose_graph_2d_.get();
  }
  if (options.use_trajectory_builder_3d()) {
    sparse_pose_graph_3d_ = common::make_unique<mapping_3d::SparsePoseGraph>(
        options_.sparse_pose_graph_options(), &thread_pool_, constant_data);
    sparse_pose_graph_ = sparse_pose_graph_3d_.get();
  }
}

From map_builder.lua we learn that the thread pool holds four threads. common::make_unique is not itself a smart pointer but a factory function that returns a std::unique_ptr; here it constructs the sparse pose graph.
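common::make_unique lives in cartographer/common/make_unique.h and is essentially C++14's std::make_unique backported to C++11; a minimal sketch of the non-array overload looks like this:

#include <memory>
#include <utility>

// Minimal sketch of the make_unique factory (non-array overload only):
// perfectly forward the arguments to T's constructor and wrap the raw
// pointer in a std::unique_ptr.
template <typename T, typename... Args>
std::unique_ptr<T> make_unique(Args&&... args) {
  return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
}

The sparse pose graph constructed here is declared in sparse_pose_graph.h: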

class SparsePoseGraph {
 public:
  // A "constraint" as in the paper by Konolige, Kurt, et al. "Efficient sparse
  // pose adjustment for 2d mapping." Intelligent Robots and Systems (IROS),
  // 2010 IEEE/RSJ International Conference on (pp. 22--29). IEEE, 2010.
  struct Constraint2D {
    struct Pose {
      transform::Rigid2d zbar_ij;
      Eigen::Matrix<double, 3, 3> sqrt_Lambda_ij;
    };

    // Submap index.
    int i;

    // Scan index.
    int j;

    // Pose of the scan 'j' relative to submap 'i'.
    Pose pose;

    // Differentiates between intra-submap (where scan 'j' was inserted into
    // submap 'i') and inter-submap constraints (where scan 'j' was not inserted
    // into submap 'i').
    enum Tag { INTRA_SUBMAP, INTER_SUBMAP } tag;
  };

.......
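To see how zbar_ij and sqrt_Lambda_ij are used: each constraint contributes a weighted residual between the measured relative pose and the relative pose implied by the current estimates. A sketch with a hypothetical helper of mine (not library code):

// Hypothetical helper: the residual of one Constraint2D::Pose. It
// compares the observed relative pose zbar_ij against the relative pose
// implied by the current estimates of submap i and scan j, whitened by
// the square root information matrix sqrt_Lambda_ij.
Eigen::Vector3d ConstraintResidual(const transform::Rigid2d& submap_i_pose,
                                   const transform::Rigid2d& scan_j_pose,
                                   const transform::Rigid2d& zbar_ij,
                                   const Eigen::Matrix3d& sqrt_Lambda_ij) {
  const transform::Rigid2d error =
      zbar_ij.inverse() * submap_i_pose.inverse() * scan_j_pose;
  const Eigen::Vector3d residual(error.translation().x(),
                                 error.translation().y(),
                                 error.rotation().angle());
  return sqrt_Lambda_ij * residual;
}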

Back in Initialize(): the following lines set up various publishers and services, which we will skip. SensorCollator is the sensor collator (it merges the per-sensor queues and delivers the data sorted by timestamp; despite the earlier mistranslation as "calibrator", it does not calibrate anything), found in sensor_collator.h:

template <typename SensorDataType>
class SensorCollator {
 public:
  using Callback = std::function<void(int64, std::unique_ptr<SensorDataType>)>;

  SensorCollator() {}

  SensorCollator(const SensorCollator&) = delete;
  SensorCollator& operator=(const SensorCollator&) = delete;

  // Adds a trajectory to produce sorted sensor output for. Calls 'callback'
  // for each collated sensor data.
  void AddTrajectory(const int trajectory_id,
                     const std::unordered_set<string>& expected_sensor_ids,
                     const Callback callback) {
    for (const auto& sensor_id : expected_sensor_ids) {
      const auto queue_key = SensorCollatorQueueKey{trajectory_id, sensor_id};
      queue_.AddQueue(queue_key, [callback](std::unique_ptr<Value> value) {
        callback(value->timestamp, std::move(value->sensor_data));
      });
      queue_keys_[trajectory_id].push_back(queue_key);
    }
  }
.....

Node::HandleSensorData dispatches each piece of incoming sensor data to the matching handler for processing. Take the IMU as an example:

void Node::AddImu(const int64 timestamp, const string& frame_id,
                  const proto::Imu& imu) {
  const carto::common::Time time = carto::common::FromUniversal(timestamp);
  try {
    const Rigid3d sensor_to_tracking =
        LookupToTrackingTransformOrThrow(time, frame_id);
    CHECK(sensor_to_tracking.translation().norm() < 1e-5)
        << "The IMU frame must be colocated with the tracking frame. "
           "Transforming linear acceleration into the tracking frame will "
           "otherwise be imprecise.";
    map_builder_.GetTrajectoryBuilder(kTrajectoryBuilderId)
        ->AddImuData(time,
                     sensor_to_tracking.rotation() *
                         carto::transform::ToEigen(imu.linear_acceleration()),
                     sensor_to_tracking.rotation() *
                         carto::transform::ToEigen(imu.angular_velocity()));
  } catch (const tf2::TransformException& ex) {
    LOG(WARNING) << ex.what();
  }
}

It calls MapBuilder::GetTrajectoryBuilder and then GlobalTrajectoryBuilder::AddImuData, which lives in global_trajectory_builder.cc; as we can see, the call chain goes straight down to LocalTrajectoryBuilder::AddImuData:

void LocalTrajectoryBuilder::AddImuData(
    const common::Time time, const Eigen::Vector3d& linear_acceleration,
    const Eigen::Vector3d& angular_velocity) {
  CHECK(options_.use_imu_data()) << "An unexpected IMU packet was added.";

  InitializePoseTracker(time);
  pose_tracker_->AddImuLinearAccelerationObservation(time, linear_acceleration);
  pose_tracker_->AddImuAngularVelocityObservation(time, angular_velocity);

  transform::Rigid3d pose_estimate;
  kalman_filter::PoseCovariance unused_covariance_estimate;
  pose_tracker_->GetPoseEstimateMeanAndCovariance(time, &pose_estimate,
                                                  &unused_covariance_estimate);

  // Log a warning if the backpack inclination exceeds 20 degrees. In these
  // cases, it's very likely that 2D SLAM will fail.
  const Eigen::Vector3d gravity_direction =
      Eigen::Quaterniond(pose_estimate.rotation()) * Eigen::Vector3d::UnitZ();
  const double inclination = std::acos(gravity_direction.z());
  constexpr double kMaxInclination = common::DegToRad(20.);
  LOG_IF_EVERY_N(WARNING, inclination > kMaxInclination, 1000)
      << "Max inclination exceeded: " << common::RadToDeg(inclination) << " > "
      << common::RadToDeg(kMaxInclination);
}

AddImuData essentially initializes the Kalman filter (a UKF wrapped in PoseTracker) that tracks the robot pose, then feeds it the IMU data to update the prediction and the posterior. Odometry data (if we use odometry) is handled the same way; looking at LocalTrajectoryBuilder::AddOdometerPose, the approach resembles navigation's robot_pose_ekf: odometry and IMU readings are passed into the filter one after another, each triggering an update.
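For the odometry side, AddOdometerPose follows the same pattern as AddImuData; roughly like this (a sketch reconstructed from the pattern above, so the exact body may differ):

// Sketch of LocalTrajectoryBuilder::AddOdometerPose, following the same
// pattern as AddImuData (treat this as a sketch, not a verbatim quote):
// the odometry pose is just another observation fed to the UKF.
void LocalTrajectoryBuilder::AddOdometerPose(
    const common::Time time, const transform::Rigid3d& pose,
    const kalman_filter::PoseCovariance& covariance) {
  if (pose_tracker_ == nullptr) {
    // Without a first IMU message the UKF is not initialized yet.
    LOG(INFO) << "PoseTracker not yet initialized.";
    return;
  }
  pose_tracker_->AddOdometerPoseObservation(time, pose, covariance);
}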

Besides the pose sensors, let's see how the laser data is handled:

void Node::AddHorizontalLaserFan(const int64 timestamp, const string& frame_id,
                                 const proto::LaserScan& laser_scan) {
  const carto::common::Time time = carto::common::FromUniversal(timestamp);
  try {
    const Rigid3d sensor_to_tracking =
        LookupToTrackingTransformOrThrow(time, frame_id);
    const carto::sensor::LaserFan laser_fan = carto::sensor::ToLaserFan(
        laser_scan, options_.horizontal_laser_min_range,
        options_.horizontal_laser_max_range,
        options_.horizontal_laser_missing_echo_ray_length);

    const auto laser_fan_3d = carto::sensor::TransformLaserFan3D(
        carto::sensor::ToLaserFan3D(laser_fan),
        sensor_to_tracking.cast<float>());
    map_builder_.GetTrajectoryBuilder(kTrajectoryBuilderId)
        ->AddHorizontalLaserFan(time, laser_fan_3d);
  } catch (const tf2::TransformException& ex) {
    LOG(WARNING) << ex.what();
  }
}

LaserFan is defined as follows; ToLaserFan simply converts the laser scan into a point cloud, with the range limits taken from the lua file.

struct LaserFan {
  Eigen::Vector2f origin;
  PointCloud2D point_cloud;
  PointCloud2D missing_echo_point_cloud;
};
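Conceptually, ToLaserFan does something like the following standalone sketch (plain vectors in place of the proto type; it assumes PointCloud2D is a std::vector<Eigen::Vector2f>, and the exact range handling is my reading of the code):

#include <cmath>
#include <vector>
#include <Eigen/Core>

// Standalone sketch of the LaserScan-to-LaserFan conversion: beams
// within [min_range, max_range] become points, beams beyond max_range
// become fixed-length "missing echo" rays, and beams below min_range
// are dropped.
LaserFan ToLaserFanSketch(const std::vector<float>& ranges,
                          const float angle_min, const float angle_increment,
                          const float min_range, const float max_range,
                          const float missing_echo_ray_length) {
  LaserFan fan;
  fan.origin = Eigen::Vector2f::Zero();
  float angle = angle_min;
  for (const float range : ranges) {
    const Eigen::Vector2f direction(std::cos(angle), std::sin(angle));
    angle += angle_increment;
    if (range < min_range) continue;  // too close: discard the reading
    if (range <= max_range) {
      fan.point_cloud.push_back(range * direction);
    } else {
      // Beyond max_range: record free space along a fixed-length ray.
      fan.missing_echo_point_cloud.push_back(missing_echo_ray_length *
                                             direction);
    }
  }
  return fan;
}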

Control then passes to GlobalTrajectoryBuilder::AddHorizontalLaserFan, shown below. After calling into LocalTrajectoryBuilder, it also calls AddScan on the sparse pose graph (a pose graph, not a sparse matrix); this is where constraints are added to the information matrix.

void GlobalTrajectoryBuilder::AddHorizontalLaserFan(
    const common::Time time, const sensor::LaserFan3D& laser_fan) {
  std::unique_ptr<LocalTrajectoryBuilder::InsertionResult> insertion_result =
      local_trajectory_builder_.AddHorizontalLaserFan(time, laser_fan);
  if (insertion_result != nullptr) {
    sparse_pose_graph_->AddScan(
        insertion_result->time, insertion_result->tracking_to_tracking_2d,
        insertion_result->laser_fan_in_tracking_2d,
        insertion_result->pose_estimate_2d,
        kalman_filter::Project2D(insertion_result->covariance_estimate),
        insertion_result->submaps, insertion_result->matching_submap,
        insertion_result->insertion_submaps);
  }
}

Let's take the first part first, the LocalTrajectoryBuilder::AddHorizontalLaserFan function:

std::unique_ptr<LocalTrajectoryBuilder::InsertionResult>
LocalTrajectoryBuilder::AddHorizontalLaserFan(
    const common::Time time, const sensor::LaserFan3D& laser_fan) {
  // Initialize pose tracker now if we do not ever use an IMU.
  if (!options_.use_imu_data()) {
    InitializePoseTracker(time);
  }

  if (pose_tracker_ == nullptr) {
    // Until we've initialized the UKF with our first IMU message, we cannot
    // compute the orientation of the laser scanner.
    LOG(INFO) << "PoseTracker not yet initialized.";
    return nullptr;
  }

  transform::Rigid3d pose_prediction;
  kalman_filter::PoseCovariance covariance_prediction;
  pose_tracker_->GetPoseEstimateMeanAndCovariance(time, &pose_prediction,
                                                  &covariance_prediction);

  // Computes the rotation without yaw, as defined by GetYaw().
  const transform::Rigid3d tracking_to_tracking_2d =
      transform::Rigid3d::Rotation(
          Eigen::Quaterniond(Eigen::AngleAxisd(
              -transform::GetYaw(pose_prediction), Eigen::Vector3d::UnitZ())) *
          pose_prediction.rotation());

  const sensor::LaserFan laser_fan_in_tracking_2d =
      BuildProjectedLaserFan(tracking_to_tracking_2d.cast<float>(), laser_fan);

  if (laser_fan_in_tracking_2d.point_cloud.empty()) {
    LOG(WARNING) << "Dropped empty horizontal laser point cloud.";
    return nullptr;
  }

  transform::Rigid3d pose_observation;
  kalman_filter::PoseCovariance covariance_observation;
  ScanMatch(time, pose_prediction, tracking_to_tracking_2d,
            laser_fan_in_tracking_2d, &pose_observation,
            &covariance_observation);

  kalman_filter::PoseCovariance covariance_estimate;
  pose_tracker_->GetPoseEstimateMeanAndCovariance(
      time, &scan_matcher_pose_estimate_, &covariance_estimate);

  // Remove the untracked z-component which floats around 0 in the UKF.
  const auto translation = scan_matcher_pose_estimate_.translation();
  scan_matcher_pose_estimate_ = transform::Rigid3d(
      transform::Rigid3d::Vector(translation.x(), translation.y(), 0.),
      scan_matcher_pose_estimate_.rotation());

  const transform::Rigid3d tracking_2d_to_map =
      scan_matcher_pose_estimate_ * tracking_to_tracking_2d.inverse();
  last_pose_estimate_ = {
      time,
      {pose_prediction, covariance_prediction},
      {pose_observation, covariance_observation},
      {scan_matcher_pose_estimate_, covariance_estimate},
      scan_matcher_pose_estimate_,
      sensor::TransformPointCloud(
          sensor::ToPointCloud(laser_fan_in_tracking_2d.point_cloud),
          tracking_2d_to_map.cast<float>())};

  const transform::Rigid2d pose_estimate_2d =
      transform::Project2D(tracking_2d_to_map);
  if (motion_filter_.IsSimilar(time, transform::Embed3D(pose_estimate_2d))) {
    return nullptr;
  }

  const mapping::Submap* const matching_submap =
      submaps_.Get(submaps_.matching_index());
  std::vector<const mapping::Submap*> insertion_submaps;
  for (int insertion_index : submaps_.insertion_indices()) {
    insertion_submaps.push_back(submaps_.Get(insertion_index));
  }
  submaps_.InsertLaserFan(TransformLaserFan(laser_fan_in_tracking_2d,
                                            pose_estimate_2d.cast<float>()));

  return common::make_unique<InsertionResult>(InsertionResult{
      time, &submaps_, matching_submap, insertion_submaps,
      tracking_to_tracking_2d, tracking_2d_to_map, laser_fan_in_tracking_2d,
      pose_estimate_2d, covariance_estimate});
}

The first half is again the Kalman filter producing the pose prior. LocalTrajectoryBuilder::BuildProjectedLaserFan then returns the laser data after voxel-grid filtering; the implementation details of VoxelFilter are in voxel_filter.cc.
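The idea behind the voxel filter is simple enough to sketch (conceptual only, again assuming PointCloud2D is a std::vector<Eigen::Vector2f>):

#include <cmath>
#include <cstdint>
#include <unordered_set>
#include <vector>
#include <Eigen/Core>

using PointCloud2D = std::vector<Eigen::Vector2f>;

// Conceptual voxel-grid filter: keep at most one point per grid cell of
// edge length voxel_size (the real implementation is in voxel_filter.cc).
PointCloud2D VoxelFilterSketch(const PointCloud2D& cloud,
                               const float voxel_size) {
  std::unordered_set<std::uint64_t> occupied_voxels;
  PointCloud2D result;
  for (const Eigen::Vector2f& point : cloud) {
    // Integer cell coordinates of the point.
    const auto ix = static_cast<std::uint32_t>(
        static_cast<std::int32_t>(std::lround(point.x() / voxel_size)));
    const auto iy = static_cast<std::uint32_t>(
        static_cast<std::int32_t>(std::lround(point.y() / voxel_size)));
    // Pack both cell coordinates into a single hash key.
    const std::uint64_t key = (static_cast<std::uint64_t>(ix) << 32) | iy;
    if (occupied_voxels.insert(key).second) {
      result.push_back(point);  // first point to land in this voxel wins
    }
  }
  return result;
}

With the filtered scan in hand, LocalTrajectoryBuilder::ScanMatch is called next: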

void LocalTrajectoryBuilder::ScanMatch(
    common::Time time, const transform::Rigid3d& pose_prediction,
    const transform::Rigid3d& tracking_to_tracking_2d,
    const sensor::LaserFan& laser_fan_in_tracking_2d,
    transform::Rigid3d* pose_observation,
    kalman_filter::PoseCovariance* covariance_observation) {
  const ProbabilityGrid& probability_grid =
      submaps_.Get(submaps_.matching_index())->probability_grid;
  transform::Rigid2d pose_prediction_2d =
      transform::Project2D(pose_prediction * tracking_to_tracking_2d.inverse());
  // The online correlative scan matcher will refine the initial estimate for
  // the Ceres scan matcher.
  transform::Rigid2d initial_ceres_pose = pose_prediction_2d;
  sensor::AdaptiveVoxelFilter adaptive_voxel_filter(
      options_.adaptive_voxel_filter_options());
  const sensor::PointCloud2D filtered_point_cloud_in_tracking_2d =
      adaptive_voxel_filter.Filter(laser_fan_in_tracking_2d.point_cloud);
  if (options_.use_online_correlative_scan_matching()) {
    real_time_correlative_scan_matcher_.Match(
        pose_prediction_2d, filtered_point_cloud_in_tracking_2d,
        probability_grid, &initial_ceres_pose);
  }

  transform::Rigid2d tracking_2d_to_map;
  kalman_filter::Pose2DCovariance covariance_observation_2d;
  ceres::Solver::Summary summary;
  ceres_scan_matcher_.Match(
      transform::Project2D(scan_matcher_pose_estimate_ *
                           tracking_to_tracking_2d.inverse()),
      initial_ceres_pose, filtered_point_cloud_in_tracking_2d, probability_grid,
      &tracking_2d_to_map, &covariance_observation_2d, &summary);

  CHECK(pose_tracker_ != nullptr);

  *pose_observation = transform::Embed3D(tracking_2d_to_map);
  // This covariance is used for non-yaw rotation and the fake height of 0.
  constexpr double kFakePositionCovariance = 1.;
  constexpr double kFakeOrientationCovariance = 1.;
  *covariance_observation =
      kalman_filter::Embed3D(covariance_observation_2d, kFakePositionCovariance,
                             kFakeOrientationCovariance);
  pose_tracker_->AddPoseObservation(
      time, (*pose_observation) * tracking_to_tracking_2d,
      *covariance_observation);
}

This function is the key to matching the laser data. First the laser points are run through the adaptive voxel filter; then, optionally (off by default), the online correlative scan matcher refines the initial estimate; finally CeresScanMatcher::Match locally optimizes the pose against the submap:

void CeresScanMatcher::Match(const transform::Rigid2d& previous_pose,
                             const transform::Rigid2d& initial_pose_estimate,
                             const sensor::PointCloud2D& point_cloud,
                             const ProbabilityGrid& probability_grid,
                             transform::Rigid2d* const pose_estimate,
                             kalman_filter::Pose2DCovariance* const covariance,
                             ceres::Solver::Summary* const summary) const {
  double ceres_pose_estimate[3] = {initial_pose_estimate.translation().x(),
                                   initial_pose_estimate.translation().y(),
                                   initial_pose_estimate.rotation().angle()};
  ceres::Problem problem;
  CHECK_GT(options_.occupied_space_cost_functor_weight(), 0.);
  problem.AddResidualBlock(
      new ceres::AutoDiffCostFunction<OccupiedSpaceCostFunctor, ceres::DYNAMIC,
                                      3>(
          new OccupiedSpaceCostFunctor(
              options_.occupied_space_cost_functor_weight() /
                  std::sqrt(static_cast<double>(point_cloud.size())),
              point_cloud, probability_grid),
          point_cloud.size()),
      nullptr, ceres_pose_estimate);
  CHECK_GT(options_.previous_pose_translation_delta_cost_functor_weight(), 0.);
  problem.AddResidualBlock(
      new ceres::AutoDiffCostFunction<TranslationDeltaCostFunctor, 2, 3>(
          new TranslationDeltaCostFunctor(
              options_.previous_pose_translation_delta_cost_functor_weight(),
              previous_pose)),
      nullptr, ceres_pose_estimate);
  CHECK_GT(options_.initial_pose_estimate_rotation_delta_cost_functor_weight(),
           0.);
  problem.AddResidualBlock(
      new ceres::AutoDiffCostFunction<RotationDeltaCostFunctor, 1,
                                      3>(new RotationDeltaCostFunctor(
          options_.initial_pose_estimate_rotation_delta_cost_functor_weight(),
          ceres_pose_estimate[2])),
      nullptr, ceres_pose_estimate);

  ceres::Solve(ceres_solver_options_, &problem, summary);

  *pose_estimate = transform::Rigid2d(
      {ceres_pose_estimate[0], ceres_pose_estimate[1]}, ceres_pose_estimate[2]);

  ceres::Covariance::Options options;
  ceres::Covariance covariance_computer(options);
  std::vector<std::pair<const double*, const double*>> covariance_blocks;
  covariance_blocks.emplace_back(ceres_pose_estimate, ceres_pose_estimate);
  CHECK(covariance_computer.Compute(covariance_blocks, &problem));
  double ceres_covariance[3 * 3];
  covariance_computer.GetCovarianceBlock(ceres_pose_estimate,
                                         ceres_pose_estimate, ceres_covariance);
  *covariance = Eigen::Map<kalman_filter::Pose2DCovariance>(ceres_covariance);
  *covariance *= options_.covariance_scale();
}

Here some nonlinear optimization with Ceres Solver shows up; clearly I still have some learning to do... A typical optimization problem consists of constraints, an iteration method, and termination thresholds. Looking at the problem being constructed, three residual blocks are added to the Problem: the occupied-space cost functor over the laser points, a translation-delta cost, and a rotation-delta cost. The default values of their weights are all in the lua file:

ceres_scan_matcher = {
    occupied_space_cost_functor_weight = 20.,
    previous_pose_translation_delta_cost_functor_weight = 1.,
    initial_pose_estimate_rotation_delta_cost_functor_weight = 1e2,
    covariance_scale = 2.34e-4,
    ceres_solver_options = {
      use_nonmonotonic_steps = true,
      max_num_iterations = 50,
      num_threads = 1,
    },
  },

The optimum we solve for is ceres_pose_estimate. Having obtained the pose mean and its covariance matrix here, the locally optimized pose is handed back.
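As an example of what these residual blocks look like inside, here is a translation-delta functor consistent with the <2, 3> residual/parameter sizes above (close in spirit to the real TranslationDeltaCostFunctor, but written from the signatures here, so treat it as a sketch):

// Sketch of a translation-delta cost functor: 2 residuals over the
// 3-parameter pose block (x, y, theta), penalizing drift of the
// translation away from previous_pose.
class TranslationDeltaCostFunctor {
 public:
  TranslationDeltaCostFunctor(const double scaling_factor,
                              const transform::Rigid2d& previous_pose)
      : scaling_factor_(scaling_factor),
        x_(previous_pose.translation().x()),
        y_(previous_pose.translation().y()) {}

  template <typename T>
  bool operator()(const T* const pose, T* residual) const {
    residual[0] = scaling_factor_ * (pose[0] - x_);
    residual[1] = scaling_factor_ * (pose[1] - y_);
    return true;
  }

 private:
  const double scaling_factor_;
  const double x_;
  const double y_;
};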

Next comes the second part of GlobalTrajectoryBuilder::AddHorizontalLaserFan, the SparsePoseGraph::AddScan function:

void SparsePoseGraph::AddScan(
    common::Time time, const transform::Rigid3d& tracking_to_pose,
    const sensor::LaserFan& laser_fan_in_pose, const transform::Rigid2d& pose,
    const kalman_filter::Pose2DCovariance& covariance,
    const mapping::Submaps* submaps,
    const mapping::Submap* const matching_submap,
    const std::vector<const mapping::Submap*>& insertion_submaps) {
  const transform::Rigid3d optimized_pose(GetLocalToGlobalTransform(*submaps) *
                                          transform::Embed3D(pose));

  common::MutexLocker locker(&mutex_);
  const int j = trajectory_nodes_.size();
  CHECK_LT(j, std::numeric_limits<int>::max());

  constant_node_data_->push_back(mapping::TrajectoryNode::ConstantData{
      time, laser_fan_in_pose,
      Compress(sensor::LaserFan3D{Eigen::Vector3f::Zero(), {}, {}, {}}),
      submaps, transform::Rigid3d(tracking_to_pose)});
  trajectory_nodes_.push_back(mapping::TrajectoryNode{
      &constant_node_data_->back(), optimized_pose,
  });
  trajectory_connectivity_.Add(submaps);

  if (submap_indices_.count(insertion_submaps.back()) == 0) {
    submap_indices_.emplace(insertion_submaps.back(),
                            static_cast<int>(submap_indices_.size()));
    submap_states_.emplace_back();
    submap_states_.back().submap = insertion_submaps.back();
    submap_states_.back().trajectory = submaps;
    CHECK_EQ(submap_states_.size(), submap_indices_.size());
  }
  const mapping::Submap* const finished_submap =
      insertion_submaps.front()->finished_probability_grid != nullptr
          ? insertion_submaps.front()
          : nullptr;

  // Make sure we have a sampler for this trajectory.
  if (!global_localization_samplers_[submaps]) {
    global_localization_samplers_[submaps] =
        common::make_unique<common::FixedRatioSampler>(
            options_.global_sampling_ratio());
  }
  AddWorkItem(
      std::bind(std::mem_fn(&SparsePoseGraph::ComputeConstraintsForScan), this,
                j, submaps, matching_submap, insertion_submaps, finished_submap,
                pose, covariance));
}

It is not hard to see that AddScan feeds the loop-closure optimization of the global map. Once the local insertion has finished, a std::unique_ptr<LocalTrajectoryBuilder::InsertionResult> insertion_result is produced, and its contents are exactly AddScan's parameters. Note the TrajectoryConnectivity::Add function and the use of the map container's emplace member: emplace constructs the new element in place, avoiding copy and move operations, and the element is only inserted if no existing element in the container has the same key.
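A tiny toy to illustrate the emplace semantics (not from the source):

#include <cassert>
#include <map>
#include <string>

int main() {
  std::map<int, std::string> m;
  m.emplace(1, "first");                       // constructed in place
  const auto result = m.emplace(1, "second");  // key 1 already present
  assert(!result.second);      // not inserted...
  assert(m.at(1) == "first");  // ...and the old value is untouched
}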

Next, look at SparsePoseGraph::ComputeConstraintsForScan. It first calls SparsePoseGraph::GrowSubmapTransformsAsNeeded, which allocates submap transforms according to the number of submaps; in effect it computes the transform between the first and the second submap. It then calls ConstraintBuilder::MaybeAddConstraint:

void ConstraintBuilder::MaybeAddConstraint(
    const int submap_index, const mapping::Submap* const submap,
    const int scan_index, const sensor::PointCloud2D* const point_cloud,
    const transform::Rigid2d& initial_relative_pose) {
  if (initial_relative_pose.translation().norm() >
      options_.max_constraint_distance()) {
    return;
  }
  if (sampler_.Pulse()) {
    common::MutexLocker locker(&mutex_);
    CHECK_LE(scan_index, current_computation_);
    constraints_.emplace_back();
    auto* const constraint = &constraints_.back();
    ++pending_computations_[current_computation_];
    const int current_computation = current_computation_;
    ScheduleSubmapScanMatcherConstructionAndQueueWorkItem(
        submap_index, submap->finished_probability_grid,
        [=]() EXCLUDES(mutex_) {
          ComputeConstraint(submap_index, submap, scan_index,
                            nullptr, /* scan_trajectory */
                            nullptr, /* submap_trajectory */
                            false,   /* match_full_submap */
                            nullptr, /* trajectory_connectivity */
                            point_cloud, initial_relative_pose, constraint);
          FinishComputation(current_computation);
        });
  }
}

Here it calls the function with the very long name below; its job is to run the constraint computation through the thread pool, first constructing the submap's scan matcher via ConstructSubmapScanMatcher if it does not exist yet (the idiom is sketched after the listing).

void ConstraintBuilder::ScheduleSubmapScanMatcherConstructionAndQueueWorkItem(
    const int submap_index, const ProbabilityGrid* const submap,
    const std::function<void()> work_item) {
  if (submap_scan_matchers_[submap_index].fast_correlative_scan_matcher !=
      nullptr) {
    thread_pool_->Schedule(work_item);
  } else {
    submap_queued_work_items_[submap_index].push_back(work_item);
    if (submap_queued_work_items_[submap_index].size() == 1) {
      thread_pool_->Schedule(
          std::bind(std::mem_fn(&ConstraintBuilder::ConstructSubmapScanMatcher),
                    this, submap_index, submap));
    }
  }
}
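The pattern here is worth extracting: work items that need a scan matcher are queued until its construction, which is scheduled exactly once (when the queue grows from empty to one element), completes and flushes the backlog. A generic sketch of this queue-until-ready idiom (hypothetical and simplified, without the locking and the thread pool the real code uses):

#include <functional>
#include <map>
#include <vector>

// Hypothetical sketch of the queue-until-ready idiom behind
// ScheduleSubmapScanMatcherConstructionAndQueueWorkItem.
struct DeferredWork {
  std::map<int, bool> ready;                                 // per-submap flag
  std::map<int, std::vector<std::function<void()>>> queued;  // pending items

  void Submit(const int submap_index, std::function<void()> work) {
    if (ready[submap_index]) {
      work();  // resource exists (real code schedules this on a thread pool)
      return;
    }
    queued[submap_index].push_back(std::move(work));
    if (queued[submap_index].size() == 1) {
      Construct(submap_index);  // schedule construction exactly once
    }
  }

  void Construct(const int submap_index) {
    // ... build the scan matcher for this submap here ...
    ready[submap_index] = true;
    for (auto& work : queued[submap_index]) work();  // flush the backlog
    queued[submap_index].clear();
  }
};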

Then we jump to FastCorrelativeScanMatcher::Match and FastCorrelativeScanMatcher::MatchFullSubmap in fast_correlative_scan_matcher.cc: the former matches within a limited search window, the latter against the full submap. Both ultimately call MatchWithSearchParameters:

bool FastCorrelativeScanMatcher::MatchWithSearchParameters(
    SearchParameters search_parameters,
    const transform::Rigid2d& initial_pose_estimate,
    const sensor::PointCloud2D& point_cloud, float min_score, float* score,
    transform::Rigid2d* pose_estimate) const {
  CHECK_NOTNULL(score);
  CHECK_NOTNULL(pose_estimate);

  const Eigen::Rotation2Dd initial_rotation = initial_pose_estimate.rotation();
  const sensor::PointCloud2D rotated_point_cloud =
      sensor::TransformPointCloud2D(
          point_cloud,
          transform::Rigid2d::Rotation(initial_rotation).cast<float>());
  const std::vector<sensor::PointCloud2D> rotated_scans =
      GenerateRotatedScans(rotated_point_cloud, search_parameters);
  const std::vector<DiscreteScan> discrete_scans = DiscretizeScans(
      limits_, rotated_scans,
      Eigen::Translation2f(initial_pose_estimate.translation().x(),
                           initial_pose_estimate.translation().y()));
  search_parameters.ShrinkToFit(discrete_scans, limits_.cell_limits());

  const std::vector<Candidate> lowest_resolution_candidates =
      ComputeLowestResolutionCandidates(discrete_scans, search_parameters);
  const Candidate best_candidate = BranchAndBound(
      discrete_scans, search_parameters, lowest_resolution_candidates,
      precomputation_grid_stack_->max_depth(), min_score);
  if (best_candidate.score > min_score) {
    *score = best_candidate.score;
    *pose_estimate = transform::Rigid2d(
        {initial_pose_estimate.translation().x() + best_candidate.x,
         initial_pose_estimate.translation().y() + best_candidate.y},
        initial_rotation * Eigen::Rotation2Dd(best_candidate.orientation));
    return true;
  }
  return false;
}

This first rotates the point cloud by the initial rotation estimate, then GenerateRotatedScans produces one rotated copy of the scan per angular search step; DiscretizeScans maps the laser points onto the map grid, ShrinkToFit tightens the search bounds, ComputeLowestResolutionCandidates enumerates all feasible solutions at the lowest resolution, and BranchAndBound then computes the best solution; if its score exceeds the minimum threshold, it is accepted as the best pose estimate.
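GenerateRotatedScans is simple enough to sketch with the library calls already seen above (simplified; the parameter names are mine):

// Simplified sketch of GenerateRotatedScans: produce one copy of the
// point cloud per angular step of the search window.
std::vector<sensor::PointCloud2D> GenerateRotatedScansSketch(
    const sensor::PointCloud2D& point_cloud, const double angle_step,
    const int num_steps_per_side) {
  std::vector<sensor::PointCloud2D> rotated_scans;
  for (int i = -num_steps_per_side; i <= num_steps_per_side; ++i) {
    rotated_scans.push_back(sensor::TransformPointCloud2D(
        point_cloud,
        transform::Rigid2d::Rotation(i * angle_step).cast<float>()));
  }
  return rotated_scans;
}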

It is worth noting that the search in FastCorrelativeScanMatcher::BranchAndBound is recursive and quite brute-force: a candidate's score at a coarse resolution is an upper bound for all the finer candidates it contains, so the highest-scoring coarse candidate is expanded first, its neighborhood refined at the next resolution, and any subtree whose bound cannot beat the current best score is pruned, spreading step by step until the whole search window has been covered.
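The recursion is easier to see stripped of the grid details; a simplified, hypothetical sketch (expand stands in for the real code's step down to the next finer precomputation grid):

#include <algorithm>
#include <functional>
#include <limits>
#include <vector>

// Depth-first branch and bound over scan-match candidates. At depth > 0
// a candidate's score is an upper bound on all finer candidates inside
// it, which is what makes pruning safe.
struct Candidate {
  double x_offset = 0, y_offset = 0, orientation = 0;
  float score = 0;
  bool operator<(const Candidate& other) const { return score < other.score; }
};

using Expander =
    std::function<std::vector<Candidate>(const Candidate&, int depth)>;

Candidate BranchAndBound(std::vector<Candidate> candidates, const int depth,
                         const float min_score, const Expander& expand) {
  if (candidates.empty()) {
    Candidate none;
    none.score = -std::numeric_limits<float>::infinity();
    return none;
  }
  if (depth == 0) {
    // Leaf level: scores are exact, so the best-scoring candidate wins.
    return *std::max_element(candidates.begin(), candidates.end());
  }
  Candidate best;
  best.score = min_score;  // solutions that cannot beat min_score are rejected
  std::sort(candidates.rbegin(), candidates.rend());  // best bound first
  for (const Candidate& candidate : candidates) {
    // This bound cannot beat the best exact score found so far: prune.
    // The list is sorted, so every remaining candidate is pruned too.
    if (candidate.score <= best.score) break;
    best = std::max(best, BranchAndBound(expand(candidate, depth - 1),
                                         depth - 1, best.score, expand));
  }
  return best;
}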

These wizards' code has STL functions and objects flying everywhere; all that flexing makes my head hurt... time to dig out Effective STL again... orz
