Reading the SegMap 3D SLAM Source Code (Part 2): The Front End

The previous post gave an overview of SegMap's overall pipeline and modules; in this one we look at the SLAM front end.

I. The Matcher

SegMatch is SegMap's biggest innovation: it uses point-cloud segmentation to give the map semantic structure, which makes loop closure more reliable. The paper is available at https://arxiv.org/pdf/1609.07720.pdf

// SegMatch objects.
segmatch_ros::SegMatchWorkerParams segmatch_worker_params_;
segmatch_ros::SegMatchWorker segmatch_worker_;
static constexpr double kSegMatchSleepTime_s = 0.01;

Naturally, segmapper also loads SegMatch's parameters while loading its own:

segmatch_worker_params_ = segmatch_ros::getSegMatchWorkerParams(nh_, ns);
segmatch_worker_params_.world_frame = params_.world_frame;

segmapper calls SegMatchWorker's init function. Moving over to the segmatch_ros package: SegMatchWorker serves as the interface class to segmatch and defines a series of segmatch initialization objects. This is also where the rabbit hole of type aliases opens up; common.hpp is full of typedefs that are worth revisiting from time to time while reading.

typedef segmatch::PointExtended MapPoint;
typedef pcl::PointCloud<MapPoint> MapCloud;

SegMatch::init constructs the descriptors, the segmenter, and the classifier, which we will explore in turn. It is worth noting that all of them are returned as std::unique_ptr; like the deprecated auto_ptr it cannot be copy-constructed, yet it can still be used as a function return value (the returned pointer is moved).

descriptors_ = std::unique_ptr<Descriptors>(new Descriptors(params.descriptors_params));
segmenter_ = segmenter_factory.create();
classifier_ = std::unique_ptr<OpenCvRandomForest>(
    new OpenCvRandomForest(params.classifier_params));
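
As a quick illustration of this property (my own snippet, not SegMap code):

#include <memory>

// Returning a unique_ptr from a factory is fine: the return value is
// moved, not copied.
std::unique_ptr<int> make_value() {
  auto p = std::make_unique<int>(42);
  return p;  // implicitly moved on return
}

int main() {
  auto p = make_value();
  // auto q = p;          // error: unique_ptr has no copy constructor
  auto q = std::move(p);  // ownership must be transferred explicitly
  return 0;
}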

1. The descriptors, located in the descriptors package. They include the C++ interface to the convolutional neural network, a descriptor that extracts ESF descriptors (a shape descriptor for 3D point clouds), and a feature-vector extractor.

2. The segmenters, located in the segmenters package. Segmenter is the base class for the various segmenters; the implementations derived from it are Euclidean clustering, smoothness-constrained segmentation, and incremental region-growing segmentation. Euclidean clustering is the familiar one: points within a distance threshold are simply grouped into one entity (see the sketch right below). Smoothness-constrained segmentation finds smoothly connected regions in the cloud; it uses only local surface normals and point connectivity, can be implemented with nearest-neighbor queries, and strikes a balance between under- and over-segmentation. The incremental region-growing method first voxelizes the cloud into a set of point clusters, then computes descriptors and extracts feature vectors from those clusters.
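
As a point of reference, here is a minimal sketch of Euclidean clustering with PCL's EuclideanClusterExtraction (my own example; the thresholds are hypothetical, not SegMap's configured values):

#include <vector>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

// Group all points lying within a distance threshold into clusters.
std::vector<pcl::PointIndices> euclideanCluster(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud) {
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(
      new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(cloud);

  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.2);   // points closer than 0.2 m form one entity
  ec.setMinClusterSize(100);     // drop clusters that are too small
  ec.setMaxClusterSize(15000);   // or too large
  ec.setSearchMethod(tree);
  ec.setInputCloud(cloud);

  std::vector<pcl::PointIndices> cluster_indices;
  ec.extract(cluster_indices);
  return cluster_indices;
}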

Per the paper, SegMap actually segments the cloud with the incremental method; its main flow is as follows:

template<typename ClusteredPointT, typename PolicyName>
void IncrementalSegmenter<ClusteredPointT, PolicyName>::segment(
    const PointNormals& normals, const std::vector<bool>& is_point_modified, ClusteredCloud& cloud,
    PointsNeighborsProvider<ClusteredPointT>& points_neighbors_provider,
    SegmentedCloud& segmented_cloud, std::vector<Id>& cluster_ids_to_segment_ids,
    std::vector<std::pair<Id, Id>>& renamed_segments) {
  BENCHMARK_BLOCK("SM.Worker.Segmenter");
  renamed_segments.clear();

  // Grow incrementally on top of the existing segmentation; the initial
  // segmentation is a cylindrical region around the robot position,
  // bounded by a radius.
  PartialClusters partial_clusters(cluster_ids_to_segment_ids.size());
  for (size_t i = 0u; i < partial_clusters.size(); i++) {
    partial_clusters[i].partial_clusters_set->partial_clusters_indices.insert(i);
    partial_clusters[i].partial_clusters_set->segment_id = cluster_ids_to_segment_ids[i];
  }

  // Find old clusters and new partial clusters.
  growRegions(normals, is_point_modified, cluster_ids_to_segment_ids, cloud,
              points_neighbors_provider, partial_clusters, renamed_segments);

  // Assign cluster indices to the segmentation result and write them
  // back into the cloud.
  const size_t num_clusters = assignClusterIndices(partial_clusters);
  writeClusterIndicesToCloud(partial_clusters, cloud);

  // Extract the valid segments and add them to the segmented cloud.
  addSegmentsToSegmentedCloud(cloud, partial_clusters, num_clusters, cluster_ids_to_segment_ids,
                              segmented_cloud);
}

3. The classifier, implemented by opencv_random_forest. A random forest is a classifier built from randomly generated decision trees: each tree is constructed from a random subset of a large sample, and the features of the training set are learned over many such decisions. Here its job is to decide, from the difference between two feature vectors, whether two point-cloud segments belong to the same object (a sketch follows below). We will look at the concrete implementation in the back-end part.
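
To make the idea concrete, here is a minimal sketch of training an OpenCV random forest on feature-difference vectors (my own example with placeholder data and dimensions, not the actual opencv_random_forest code):

#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>

int main() {
  // Each row holds |f_a - f_b| for one candidate segment pair;
  // label 1 = same object, 0 = different objects. Placeholder data only.
  cv::Mat features(100, 7, CV_32F);  // 100 hypothetical pairs, 7-D features
  cv::Mat labels(100, 1, CV_32S);
  cv::randu(features, 0.0, 1.0);
  cv::randu(labels, 0, 2);

  // Train the forest: each tree is built from a random subset of the samples.
  cv::Ptr<cv::ml::RTrees> forest = cv::ml::RTrees::create();
  forest->setMaxDepth(10);
  forest->setTermCriteria(
      cv::TermCriteria(cv::TermCriteria::MAX_ITER, 100, 0.0));  // 100 trees
  forest->train(cv::ml::TrainData::create(features, cv::ml::ROW_SAMPLE, labels));

  // The majority vote over all trees decides whether a pair matches.
  float vote = forest->predict(features.row(0));
  return vote >= 0.0f ? 0 : 1;
}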

II. The Incremental Estimator

// Incremental estimator.
std::shared_ptr<laser_slam::IncrementalEstimator> incremental_estimator_;

Strictly speaking, the incremental estimator is an ever-accumulating nonlinear optimization structure that serves the algorithm's back end, but it also supplies the front end with the robot's current pose estimate at all times:

Pose current_pose = incremental_estimator_->getCurrentPose(track_id);
{
  std::lock_guard<std::mutex> map_lock(local_maps_mutexes_[track_id]);
  local_maps_[track_id].updatePoseAndAddPoints(new_points, current_pose);
}

Its constructor initializes not only the factor graph and the noise models, but also the LaserTrack utility class, which belongs to the pose estimator covered next.
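
The pattern is roughly the following (a minimal sketch of incremental estimation with gtsam's ISAM2; the keys, poses, and sigmas are hypothetical, not SegMap's actual setup):

#include <gtsam/geometry/Pose3.h>
#include <gtsam/linear/NoiseModel.h>
#include <gtsam/nonlinear/ISAM2.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/slam/PriorFactor.h>

int main() {
  gtsam::ISAM2 isam;
  gtsam::NonlinearFactorGraph new_factors;
  gtsam::Values new_values;

  // First scan: anchor the graph with a prior factor (cf. registerPrior).
  auto prior_noise = gtsam::noiseModel::Isotropic::Sigma(6, 0.01);
  new_factors.add(gtsam::PriorFactor<gtsam::Pose3>(0, gtsam::Pose3(), prior_noise));
  new_values.insert(0, gtsam::Pose3());

  // Later scans: add relative (odometry/ICP) factors (cf. estimate).
  auto odom_noise = gtsam::noiseModel::Isotropic::Sigma(6, 0.1);
  gtsam::Pose3 delta(gtsam::Rot3(), gtsam::Point3(1.0, 0.0, 0.0));
  new_factors.add(gtsam::BetweenFactor<gtsam::Pose3>(0, 1, delta, odom_noise));
  new_values.insert(1, delta);

  // Incrementally re-optimize, then read back the current pose estimate.
  isam.update(new_factors, new_values);
  gtsam::Pose3 current_pose = isam.calculateEstimate<gtsam::Pose3>(1);
  return 0;
}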

III. The Pose Estimator

This part lives in the laser_worker package; the main flow is in LaserSlamWorker::scanCallback. Before diving in, a few key typedefs need to be understood:

//common.hpp
template<template<typename, typename> class Container, typename Type>
using Aligned = Container<Type, Eigen::aligned_allocator<Type> >;

typedef Aligned<std::vector, Pose> PoseVector;
typedef Aligned<std::vector, RelativePose> RelativePoseVector;

//laser_track.hpp
typedef std::map<Time, SE3> Trajectory;
typedef curves::DiscreteSE3Curve CurveType;
CurveType trajectory_;
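
The Aligned alias exists because Pose and SE3 contain fixed-size Eigen members, which must be stored through Eigen::aligned_allocator. A minimal illustration (my own, with a hypothetical MyPose type):

#include <vector>
#include <Eigen/Core>

// A pose type with a fixed-size Eigen member, which imposes an alignment
// requirement on heap allocations.
struct MyPose {
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
  Eigen::Matrix4d T_w;
};

// The same alias as in common.hpp: inject Eigen's aligned allocator.
template<template<typename, typename> class Container, typename Type>
using Aligned = Container<Type, Eigen::aligned_allocator<Type> >;

int main() {
  Aligned<std::vector, MyPose> poses;  // the pattern behind PoseVector
  poses.push_back(MyPose());
  return 0;
}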

The main flow of LaserSlamWorker::scanCallback is as follows:

void LaserSlamWorker::scanCallback(const sensor_msgs::PointCloud2& cloud_msg_in)
{
  std::lock_guard<std::recursive_mutex> lock_scan_callback(scan_callback_mutex_);
  if (!lock_scan_callback_) {
    // Listen for the odom->sensor tf; only if the robot has traveled far
    // enough is the scan processed and the local map updated.
    if (tf_listener_.waitForTransform(params_.odom_frame, params_.sensor_frame,
                                      cloud_msg_in.header.stamp, ros::Duration(kTimeout_s))) {
      // Get the tf transform.
      tf::StampedTransform tf_transform;
      tf_listener_.lookupTransform(params_.odom_frame, params_.sensor_frame,
                                   cloud_msg_in.header.stamp, tf_transform);

      bool process_scan = false;
      SE3 current_pose;

      if (!last_pose_set_) {
        process_scan = true;
        last_pose_set_ = true;
        last_pose_ = tfTransformToPose(tf_transform).T_w;
      } else {
        current_pose = tfTransformToPose(tf_transform).T_w;
        float dist_m = distanceBetweenTwoSE3(current_pose, last_pose_);
        if (dist_m > params_.minimum_distance_to_add_pose) {
          process_scan = true;
          last_pose_ = current_pose;
        }
      }
      
      // Start processing the scan.
      if (process_scan) {
        // Convert input cloud to laser scan.
        // This LaserScan bundles the pointmatcher data type with a timestamp.
        LaserScan new_scan;
        new_scan.scan = PointMatcher_ros::rosMsgToPointMatcherCloud<float>(cloud_msg_in);
        new_scan.time_ns = rosTimeToCurveTime(cloud_msg_in.header.stamp.toNSec());

        // Process the new scan and build the new values and factors.
        gtsam::NonlinearFactorGraph new_factors;
        gtsam::Values new_values;
        bool is_prior;
        
        // When odometry information is used, the pose passed in is simply
        // the odom tf.
        if (params_.use_odometry_information) {
          laser_track_->processPoseAndLaserScan(tfTransformToPose(tf_transform), new_scan,
                                                &new_factors, &new_values, &is_prior);
        }
        // Without odometry, extrapolate the new pose from the previous and
        // current poses on the trajectory.
        else {
          Pose new_pose;

          Time new_pose_time_ns = tfTransformToPose(tf_transform).time_ns;

          if (laser_track_->getNumScans() > 2u) {
            Pose current_pose = laser_track_->getCurrentPose();

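            // Constant-velocity assumption: evaluate the trajectory at the time
            // mirrored around the current pose (previous_pose_time =
            // 2*t_current - t_new) and replay that relative motion forward.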
            if (current_pose.time_ns > new_pose_time_ns - current_pose.time_ns) {
              Time previous_pose_time = current_pose.time_ns -
                  (new_pose_time_ns - current_pose.time_ns);
              if (previous_pose_time >= laser_track_->getMinTime() &&
                  previous_pose_time <= laser_track_->getMaxTime()) {
                SE3 previous_pose = laser_track_->evaluate(previous_pose_time);
                new_pose.T_w = last_pose_sent_to_laser_track_.T_w *
                    previous_pose.inverse()  * current_pose.T_w ;
                new_pose.T_w = SE3(SO3::fromApproximateRotationMatrix(
                    new_pose.T_w.getRotation().getRotationMatrix()), new_pose.T_w.getPosition());
              }
            }
          }

          new_pose.time_ns = new_pose_time_ns;
          laser_track_->processPoseAndLaserScan(new_pose, new_scan,
                                                &new_factors, &new_values, &is_prior);

          last_pose_sent_to_laser_track_ = new_pose;
        }

        // Process the new values and factors.
        gtsam::Values result;
        // is_prior means this is the first scan, i.e. the trajectory is
        // still empty.
        if (is_prior) {
          result = incremental_estimator_->registerPrior(new_factors, new_values, worker_id_);
        } else {
          result = incremental_estimator_->estimate(new_factors, new_values, new_scan.time_ns);
        }

        // Update the trajectory.
        laser_track_->updateFromGTSAMValues(result);

        // Adjust the correction between the world and odom frames.
        Pose current_pose = laser_track_->getCurrentPose();
        SE3 T_odom_sensor = tfTransformToPose(tf_transform).T_w;
        SE3 T_w_sensor = current_pose.T_w;
        SE3 T_w_odom = T_w_sensor * T_odom_sensor.inverse();

        Eigen::Matrix<float, Eigen::Dynamic, Eigen::Dynamic> matrix;

        // TODO resize needed?
        matrix.resize(4, 4);
        matrix = T_w_odom.getTransformationMatrix().cast<float>();
        
        // world_to_odom_ is the world(map)->odom transform published by the
        // tf thread; it serves as the localization estimate during mapping.
        {
          std::lock_guard<std::recursive_mutex> lock_world_to_odom(world_to_odom_mutex_);
          world_to_odom_ = PointMatcher_ros::eigenMatrixToStampedTransform<float>(
              matrix, params_.world_frame, params_.odom_frame, cloud_msg_in.header.stamp);
        }

        publishTrajectories();

        // Get the last cloud in world frame.
        DataPoints new_fixed_cloud;
        // Get the local map after the pose correction.
        laser_track_->getLocalCloudInWorldFrame(laser_track_->getMaxTime(), &new_fixed_cloud);

        PointCloud new_fixed_cloud_pcl = lpmToPcl(new_fixed_cloud);
        // Remove the ground.
        if (params_.remove_ground_from_local_map) {
          const double robot_height_m = current_pose.T_w.getPosition()(2);
          PointCloud new_fixed_cloud_no_ground;
          for (size_t i = 0u; i < new_fixed_cloud_pcl.size(); ++i) {
            if (new_fixed_cloud_pcl.points[i].z > robot_height_m -
                params_.ground_distance_to_robot_center_m) {
              new_fixed_cloud_no_ground.push_back(new_fixed_cloud_pcl.points[i]);
            }
          }
          new_fixed_cloud_no_ground.width = 1;
          new_fixed_cloud_no_ground.height = new_fixed_cloud_no_ground.points.size();
          new_fixed_cloud_pcl = new_fixed_cloud_no_ground;
        }
        
        // Add this scan to the full point cloud map built so far.
        if (params_.create_filtered_map) {
          if (new_fixed_cloud_pcl.size() > 0u) {
            std::lock_guard<std::recursive_mutex> lock(local_map_mutex_);
            if (local_map_.size() > 0u) {
              local_map_ += new_fixed_cloud_pcl;
            } else {
              local_map_ = new_fixed_cloud_pcl;
            }
            // local_map_queue_ is how the segmatch thread keeps receiving
            // the map built so far.
            local_map_queue_.push_back(new_fixed_cloud_pcl);
          }
        }
      }
    } else {
      ROS_WARN_STREAM("[SegMapper] Timeout while waiting between " + params_.odom_frame  +
                      " and " + params_.sensor_frame  + ".");
    }
  }
}

The flow above calls the key function LaserTrack::processPoseAndLaserScan, whose main flow is:

void LaserTrack::processPoseAndLaserScan(const Pose& pose, const LaserScan& in_scan,
                                         gtsam::NonlinearFactorGraph* newFactors,
                                         gtsam::Values* newValues,
                                         bool* is_prior) {
  std::lock_guard<std::recursive_mutex> lock(full_laser_track_mutex_);

  Clock scan_matching_clock;

  LaserScan scan = in_scan;

  // Apply the input filters.
  input_filters_.apply(scan.scan);

  // Save the pose measurement.
  // pose_measurements_ stores the history of pose measurements.
  pose_measurements_.push_back(pose);

  // Compute the relative pose measurement, extend the trajectory and
  // compute the ICP transformations.
  // If the trajectory is empty (first scan), initialize the key and timestamp.
  if (trajectory_.isEmpty())
  {
    scan.key = extendTrajectory(scan.time_ns, getPoseMeasurement(scan.time_ns));

    // Update the pose key and save the scan.
    setPoseKey(scan.time_ns, scan.key);
    laser_scans_.push_back(scan);

    if (newFactors != NULL)
    {
      Pose prior_pose = pose;
      // Add a prior on the first key.
      if (params_.force_priors) {
        prior_pose.T_w = SE3(SE3::Rotation(1.0, 0.0, 0.0, 0.0),
                             SE3::Position(0.0,
                                           kDistanceBetweenPriorPoses_m * laser_track_id_, 0.0));
      }

      newFactors->push_back(makeMeasurementFactor(prior_pose, prior_noise_model_));
    }
    if (is_prior != NULL)
    {
      *is_prior = true;
    }
  }
  // In the regular case, estimate the current pose from the previous
  // trajectory node's time and pose.
  else
  {
    // Evaluate the pose measurement at the last trajectory node.
    SE3 last_pose_measurement = getPoseMeasurement(trajectory_.getMaxTime());

    // Evaluate the pose measurement at the new trajectory node.
    SE3 new_pose_measurement = getPoseMeasurement(scan.time_ns);

    // Create the relative pose measurement.
    RelativePose relative_measurement;
    // T_a_b is the transform between a pair of associated poses.
    relative_measurement.T_a_b = last_pose_measurement.inverse()*new_pose_measurement;
    relative_measurement.time_a_ns = trajectory_.getMaxTime();
    relative_measurement.key_a = getPoseKey(trajectory_.getMaxTime());
    relative_measurement.time_b_ns = scan.time_ns;

    // Extend the trajectory with the new node position.
    scan.key =  extendTrajectory(scan.time_ns, trajectory_.evaluate(trajectory_.getMaxTime()) *
                                 relative_measurement.T_a_b);

    // Update the pose key and save the scan.
    setPoseKey(scan.time_ns, scan.key);
    laser_scans_.push_back(scan);

    // Complete and save the relative_measurement.
    relative_measurement.key_b = scan.key;
    odometry_measurements_.push_back(relative_measurement);

    // Compute the ICP transformations.
    if (params_.use_icp_factors) {
      computeICPTransformations();
    }

    scan_matching_clock.takeTime();
    scan_matching_times_.emplace(scan.time_ns, scan_matching_clock.getRealTime());

    if (newFactors != NULL) {
      // Add the odometry and ICP factors.
      if (params_.use_odom_factors) {
        newFactors->push_back(makeRelativeMeasurementFactor(relative_measurement,
                                                            odometry_noise_model_));
      }

      if (params_.use_icp_factors) {
        newFactors->push_back(makeRelativeMeasurementFactor(
            icp_transformations_[icp_transformations_.size()-1u], icp_noise_model_));
      }
    }
    if (is_prior != NULL) {
      *is_prior = false;
    }
  }

  if (newValues != NULL) {
    newValues->insert(scan.key, pose.T_w);
  }
}

The relative pose between frames is obtained by ICP matching, which appears to use the point-to-plane algorithm from the pointmatcher (libpointmatcher) package directly. That package is dedicated to ICP and must be installed before building segmap. Since I have not tracked down pointmatcher's C++ implementation, I will leave those details aside for now...
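
For completeness, invoking ICP through libpointmatcher looks roughly like this (a minimal sketch; in practice the point-to-plane error minimizer is selected through a YAML configuration rather than the default chain used here):

#include <pointmatcher/PointMatcher.h>

typedef PointMatcher<float> PM;

// Align a reading cloud onto a reference cloud; returns a 4x4 transform.
PM::TransformationParameters align(const PM::DataPoints& reading,
                                   const PM::DataPoints& reference) {
  PM::ICP icp;
  icp.setDefault();  // default filter/matcher/minimizer chain
  // A YAML config (icp.loadFromYaml(stream)) can select
  // PointToPlaneErrorMinimizer instead of the default point-to-point one.
  return icp(reading, reference);
}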

Also, in laser_track you can find appendOdometryFactors, appendICPFactors, and appendLoopClosureFactors. These append factors, each with its noise model, to the factor graph: the odometry error model (see Probabilistic Robotics) and the ICP matching error model, enabled or not depending on the parameter configuration file.
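
As a sketch of what such a noise model looks like in gtsam (the sigma values here are hypothetical; the real ones come from the configuration files):

#include <gtsam/base/Vector.h>
#include <gtsam/linear/NoiseModel.h>

int main() {
  // 6-DoF diagonal noise model: rotation sigmas first, then translation,
  // following gtsam's Pose3 tangent-space convention.
  gtsam::Vector6 sigmas;
  sigmas << 0.01, 0.01, 0.01,   // rotation noise (rad)
            0.05, 0.05, 0.05;   // translation noise (m)
  auto odometry_noise = gtsam::noiseModel::Diagonal::Sigmas(sigmas);
  auto icp_noise = gtsam::noiseModel::Diagonal::Sigmas(2.0 * sigmas);
  return 0;
}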

According to the author's design, the front end runs in the segMatchThread and publishTfThread threads. The pose published via tf is the result of the updated pose estimate, while the segmentation and matching performed by segMatchThread mostly work together with the incremental estimator to serve the back end's global matching. How loop closures are detected and then handled will be covered in detail in the back-end part.
