[Apollo 7.0] Perception Module (3): Object Tracking in LiDAR Perception -- lidar_obstacle_tracking -- multi_target_tracker

Module Initialization

step 1: dag

The dag file dag_streaming_perception_lidar.dag declares the component RecognitionComponent,
including its config file path and input channel:

components {
    class_name : "RecognitionComponent"
    config {
      name: "RecognitionComponent"
      config_file_path: "/apollo/modules/perception/production/conf/perception/lidar/recognition_conf.pb.txt"
      readers {
        channel: "/perception/inner/DetectionObjects"
      }
    }
}

step 2: RecognitionComponent.cc

During initialization, RecognitionComponent loads the parameter file recognition_conf.pb.txt,
located at /production/conf/perception/lidar/.

step 3: lidar_obstacle_tracking.cc

The InitAlgorithmPlugin() from step 2 initializes LidarObstacleTracking,
which in turn loads the config file lidar_obstacle_tracking.conf,
located at /production/data/perception/lidar/models/lidar_obstacle_pipline/velodyne64/.
This file names the multi-target tracking and fused classification algorithms:

multi_target_tracker: "MlfEngine"
fusion_classifier: "FusedClassifier"

BaseMultiTargetTracker* multi_target_tracker =
    BaseMultiTargetTrackerRegisterer::GetInstanceByName(
        config.multi_target_tracker());
CHECK_NOTNULL(multi_target_tracker);
// multi-target tracking algorithm: MlfEngine
multi_target_tracker_.reset(multi_target_tracker);
MultiTargetTrackerInitOptions tracker_init_options;
ACHECK(multi_target_tracker_->Init(tracker_init_options))
    << "lidar multi_target_tracker init error";

BaseClassifier* fusion_classifier =
    BaseClassifierRegisterer::GetInstanceByName(config.fusion_classifier());
CHECK_NOTNULL(fusion_classifier);
// fused classification algorithm: FusedClassifier
fusion_classifier_.reset(fusion_classifier);
ClassifierInitOptions fusion_classifier_init_options;
ACHECK(fusion_classifier_->Init(fusion_classifier_init_options))
    << "lidar classifier init error";

step 4: mlf_engine.cc

The initialization of MlfEngine loads the config file mlf_engine.conf,
located at /production/data/perception/lidar/models/multi_lidar_fusion/.
This file holds the MlfEngine algorithm parameters.

step 5: fused_classifier.cc

The initialization of FusedClassifier loads the config file fused_classifier.conf,
located at /production/data/perception/lidar/models/fused_classifier/.

Algorithm Flow

The entry point of lidar_obstacle_tracking is LidarObstacleTracking::Process():

LidarProcessResult LidarObstacleTracking::Process(
    const LidarObstacleTrackingOptions& options, LidarFrame* frame) {
  const auto& sensor_name = options.sensor_name;

  PERF_FUNCTION_WITH_INDICATOR(sensor_name);

  PERF_BLOCK_START();
  MultiTargetTrackerOptions tracker_options;
  // run multi-target tracking
  if (!multi_target_tracker_->Track(tracker_options, frame)) {
    return LidarProcessResult(LidarErrorCode::TrackerError,
                              "Fail to track objects.");
  }
  PERF_BLOCK_END_WITH_INDICATOR(sensor_name, "tracker");

  ClassifierOptions fusion_classifier_options;
  // run fused classification
  if (!fusion_classifier_->Classify(fusion_classifier_options, frame)) {
    return LidarProcessResult(LidarErrorCode::ClassifierError,
                              "Fail to fuse object types.");
  }
  PERF_BLOCK_END_WITH_INDICATOR(sensor_name, "type_fusion");

  return LidarProcessResult(LidarErrorCode::Succeed);
}

Multi-Target Tracking: MlfEngine

This algorithm takes the point-cloud frames produced by obstacle detection and matches and filters the objects detected across consecutive frames. Its main job is to estimate object velocity; it also re-estimates each object's center, size, and heading, smoothing the data over time.

bool MlfEngine::Track(const MultiTargetTrackerOptions& options,
                      LidarFrame* frame) {
  // 0. modify objects timestamp if necessary
  // if using each frame's timestamp,
  // record it as each object's latest tracked time
  if (use_frame_timestamp_) {
    for (auto& object : frame->segmented_objects) {
      object->latest_tracked_time = frame->timestamp;
    }
  }

  // 1. add global offset to pose (only when no track exists)
  if (foreground_track_data_.empty() && background_track_data_.empty()) {
    // translation from the world frame to the lidar (local) frame
    global_to_local_offset_ = -frame->lidar2world_pose.translation();
  }
  sensor_to_local_pose_ = frame->lidar2world_pose;
  sensor_to_local_pose_.pretranslate(global_to_local_offset_);

  // 2. split fg and bg objects, and transform to tracked objects
  // convert objects into tracked objects, compute each one's histogram
  // feature, and split foreground from background
  SplitAndTransformToTrackedObjects(frame->segmented_objects,
                                    frame->sensor_info);

  // 3. assign tracked objects to tracks
  // matching stage
  MlfTrackObjectMatcherOptions match_options;
  // track foreground objects: matched objects go into the track cache,
  // unmatched ones start new tracks
  TrackObjectMatchAndAssign(match_options, foreground_objects_, "foreground",
                            &foreground_track_data_);
  // track background objects
  TrackObjectMatchAndAssign(match_options, background_objects_, "background",
                            &background_track_data_);

  // 4. state filter in tracker if is main sensor
  bool is_main_sensor =
      (main_sensor_.find(frame->sensor_info.name) != main_sensor_.end());
  if (is_main_sensor) {
    // filter the tracked objects to estimate velocity
    TrackStateFilter(foreground_track_data_, frame->timestamp);
    TrackStateFilter(background_track_data_, frame->timestamp);
  }

  // 5. track to object if is main sensor
  frame->tracked_objects.clear();
  if (is_main_sensor) {
    // write the tracked objects back into the main sensor's frame
    CollectTrackedResult(frame);
  }

  // 6. remove stale data
  // drop vanished objects, keeping invisible tracks for at most 0.3 s
  RemoveStaleTrackData("foreground", frame->timestamp, &foreground_track_data_);
  RemoveStaleTrackData("background", frame->timestamp, &background_track_data_);

  AINFO << "MlfEngine publish objects: " << frame->tracked_objects.size()
        << " sensor_name: " << frame->sensor_info.name
        << " at timestamp: " << frame->timestamp;
  return true;
}

Its main processing stages are:

  • extract the new frame's foreground and background objects (the point-cloud objects)
  • match the point-cloud objects against the tracked objects
  • filter the tracked objects
  • write the tracked objects back into the new frame

Processing the new frame's obstacles: SplitAndTransformToTrackedObjects

This step splits frame->segmented_objects into foreground_objects_ and background_objects_.
Note: the foreground/background distinction is decided earlier, in the detection stage.
The function also computes a histogram over each object's points via TrackedObject::ComputeShapeFeatures().
Histogram computation:

  1. find the minimum and maximum point coordinates
  2. from the min/max, derive the increment for 10 steps
  3. distribute the points into the 10 histogram bins by that increment
  4. compute each bin's share of the total point count
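The four steps above can be sketched as a small standalone function. This is a hypothetical illustration over one coordinate axis, not Apollo's actual TrackedObject::ComputeShapeFeatures:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical sketch: a 10-bin normalized histogram over one coordinate
// axis, mirroring the four steps described above.
std::vector<float> ComputeHistogram(const std::vector<float>& coords,
                                    int bins = 10) {
  std::vector<float> hist(bins, 0.0f);
  if (coords.empty()) return hist;
  // 1. min/max of the coordinates
  auto [min_it, max_it] = std::minmax_element(coords.begin(), coords.end());
  float min_v = *min_it, max_v = *max_it;
  // 2. per-bin increment derived from the range
  float step = (max_v - min_v) / static_cast<float>(bins);
  for (float v : coords) {
    // 3. assign each point to a bin (clamp the max value into the last bin)
    int idx = (step > 0.0f)
                  ? std::min(static_cast<int>((v - min_v) / step), bins - 1)
                  : 0;
    hist[idx] += 1.0f;
  }
  // 4. normalize by the total point count
  for (float& h : hist) h /= static_cast<float>(coords.size());
  return hist;
}
```

Normalizing by the point count makes the feature comparable between objects of different sizes, which matters when it is later used as a matching distance.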

Matching point-cloud objects to tracked objects: TrackObjectMatchAndAssign

The matching method lives in the class MlfTrackObjectMatcher.
Its initialization loads the config file mlf_track_object_matcher.conf,
located at /production/data/perception/lidar/models/multi_lidar_fusion/.
The file assigns separate matching methods to foreground and background objects:

foreground_mathcer_method: "MultiHmBipartiteGraphMatcher"
background_matcher_method: "GnnBipartiteGraphMatcher"
bound_value: 100
max_match_distance: 4.0

The foreground matcher is based on the Hungarian algorithm, while the background matcher uses greedy nearest-neighbor matching.
Before matching, an association matrix (association_mat) must be computed:

common::SecureMat<float>* association_mat = matcher->cost_matrix();
association_mat->Resize(tracks.size(), objects.size());
// build the association matrix between point-cloud objects and tracked
// objects; each entry is the "distance" between a track and an object
ComputeAssociateMatrix(tracks, objects, association_mat);
// run the matching on the association matrix
matcher->Match(matcher_options, assignments, unassigned_tracks,
               unassigned_objects);

The association matrix holds the pairwise distance between each point-cloud object and each tracked object.
The maximum match distance is max_match_distance: 4.0.
The distance is a weighted sum of seven feature distances: location, direction, bbox size, point count, histogram, centroid shift, and bbox IoU.

//------ compute the 7 feature distances, weight each, and accumulate into distance ------//

if (weights->at(0) > delta) {
  // location distance
  distance +=
      weights->at(0) * LocationDistance(latest_object, track->predict_.state,
                                        object, time_diff);
}
if (weights->at(1) > delta) {
  // direction distance
  distance +=
      weights->at(1) * DirectionDistance(latest_object, track->predict_.state,
                                         object, time_diff);
}
if (weights->at(2) > delta) {
  // bbox size distance
  distance +=
      weights->at(2) * BboxSizeDistance(latest_object, track->predict_.state,
                                        object, time_diff);
}
if (weights->at(3) > delta) {
  // point count distance
  distance +=
      weights->at(3) * PointNumDistance(latest_object, track->predict_.state,
                                        object, time_diff);
}
if (weights->at(4) > delta) {
  // histogram distance
  distance +=
      weights->at(4) * HistogramDistance(latest_object, track->predict_.state,
                                         object, time_diff);
}
if (weights->at(5) > delta) {
  // centroid shift distance
  distance += weights->at(5) * CentroidShiftDistance(latest_object,
                                                     track->predict_.state,
                                                     object, time_diff);
}
if (weights->at(6) > delta) {
  // bbox IoU distance
  distance += weights->at(6) *
              BboxIouDistance(latest_object, track->predict_.state, object,
                              time_diff, background_object_match_threshold_);
}
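One of these terms, the histogram distance, can be sketched as an L1 difference between the shape histograms computed earlier. This is an assumption for illustration, not Apollo's exact HistogramDistance formula:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical sketch: L1 distance between two normalized shape histograms.
// Identical shapes give 0; disjoint distributions approach 2.
float HistogramL1Distance(const std::vector<float>& a,
                          const std::vector<float>& b) {
  assert(a.size() == b.size());
  float d = 0.0f;
  for (size_t i = 0; i < a.size(); ++i) {
    d += std::fabs(a[i] - b[i]);
  }
  return d;
}
```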

After matching, the matched objects are pushed into each track's cache for the subsequent filtering step.

// 1. for assignments, push the object into the track_data cache
for (auto& pair : assignments) {
  const size_t track_id = pair.first;
  const size_t object_id = pair.second;
  tracks->at(track_id)->PushTrackedObjectToCache(objects[object_id]);
}
// 2. for unassigned_objects, create new tracks
for (auto& id : unassigned_objects) {
  MlfTrackDataPtr track_data = MlfTrackDataPool::Instance().Get();
  tracker_->InitializeTrack(track_data, objects[id]);
  tracks->push_back(track_data);
}

Filtering the tracked objects: TrackStateFilter

This function pulls the cached objects out of each track, filters them, and then updates the track.

void MlfEngine::TrackStateFilter(const std::vector<MlfTrackDataPtr>& tracks,
                                 double frame_timestamp) {
  std::vector<TrackedObjectPtr> objects;
  for (auto& track_data : tracks) {
    // move the cached tracked objects into objects, then clear the cache
    track_data->GetAndCleanCachedObjectsInTimeInterval(&objects);
    for (auto& obj : objects) {
      // update the track data from the point-cloud object
      tracker_->UpdateTrackDataWithObject(track_data, obj);
    }
    if (objects.empty()) {
      // no new object this frame: update the track data from the timestamp alone
      tracker_->UpdateTrackDataWithoutObject(frame_timestamp, track_data);
    }
  }
}

Filtering has two stages, motion and shape, each running its own algorithm:
the motion filter mainly estimates the object's velocity,
while the shape filter re-estimates its center, size, and heading.

void MlfTracker::UpdateTrackDataWithObject(MlfTrackDataPtr track_data,
                                           TrackedObjectPtr new_object) {
  // 1. state filter and store belief in new_object
  // run each registered filter over the tracked object
  for (auto& filter : filters_) {
    filter->UpdateWithObject(filter_options_, track_data, new_object);
  }
  // 2. push new_object to track_data
  track_data->PushTrackedObjectToTrack(new_object);
  track_data->is_current_state_predicted_ = false;
}
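The motion stage can be sketched as velocity estimation from centroid displacement with a fixed smoothing gain, in the spirit of a very simplified Kalman update. This SimpleMotionFilter is a hypothetical stand-in, not Apollo's actual motion filter:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch of the motion stage: estimate velocity from the
// centroid displacement between consecutive frames and blend it with the
// previous estimate to smooth measurement noise.
struct SimpleMotionFilter {
  double px = 0.0, py = 0.0;  // last observed centroid
  double vx = 0.0, vy = 0.0;  // smoothed velocity estimate
  double last_time = -1.0;    // timestamp of the last update
  double gain = 0.5;          // weight of the new measurement

  void Update(double x, double y, double timestamp) {
    if (last_time >= 0.0) {
      double dt = timestamp - last_time;
      if (dt > 0.0) {
        // measured velocity from centroid displacement
        double mx = (x - px) / dt;
        double my = (y - py) / dt;
        // blend with the previous estimate to smooth noise
        vx = (1.0 - gain) * vx + gain * mx;
        vy = (1.0 - gain) * vy + gain * my;
      }
    }
    px = x;
    py = y;
    last_time = timestamp;
  }
};
```

The smoothing is what makes the published velocity stable even though per-frame centroid jitter would make the raw finite difference noisy.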

PushTrackedObjectToTrack appends the latest tracked object to the track's history.

// latest tracked time of the current object
double timestamp = obj->object_ptr->latest_tracked_time;
if (history_objects_.find(timestamp) == history_objects_.end()) {
  // record the object in the history
  auto pair = std::make_pair(timestamp, obj);
  history_objects_.insert(pair);
  sensor_history_objects_[obj->sensor_info.name].insert(pair);
}

Writing the tracked objects back into the new frame: CollectTrackedResult

  auto latest_iter = history_objects_.rbegin();
  const double latest_time = latest_iter->first;
  const auto& latest_object = latest_iter->second;
  // copy the latest tracked object into object
  latest_object->ToObject(object);
  // predict object
  double time_diff = timestamp - latest_time;
  double offset_x = time_diff * object->velocity(0) + local_to_global_offset(0);
  double offset_y = time_diff * object->velocity(1) + local_to_global_offset(1);
  double offset_z = time_diff * object->velocity(2) + local_to_global_offset(2);
  // the polygon vertices, center, bbox size, etc. are then recomputed with these offsets

Removing vanished objects: RemoveStaleTrackData

The final step of multi-target tracking removes vanished objects from the track records.
reserved_invisible_time_ = 0.3: an object that stays invisible is remembered for at most 0.3 seconds.

void MlfEngine::RemoveStaleTrackData(const std::string& name, double timestamp,
                                     std::vector<MlfTrackDataPtr>* tracks) {
  size_t pos = 0;
  for (size_t i = 0; i < tracks->size(); ++i) {
    if (tracks->at(i)->latest_visible_time_ + reserved_invisible_time_ >=
        timestamp) {
      // keep the track if latest visible time + reserved invisible time >=
      // the current frame timestamp; otherwise it is dropped
      if (i != pos) {
        tracks->at(pos) = tracks->at(i);
      }
      ++pos;
    }
  }
  AINFO << "MlfEngine: " << name << " remove stale tracks, from "
        << tracks->size() << " to " << pos;
  tracks->resize(pos);
}
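The hand-rolled compaction loop above is the classic remove-erase pattern. The same logic over bare timestamps (a hypothetical stand-in for the MlfTrackDataPtr vector) reads:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Sketch of the same stale-track removal using std::remove_if: keep entries
// whose latest visible time plus the reserved invisible time is still at or
// beyond the current frame timestamp, and drop the rest in place.
std::size_t RemoveStale(std::vector<double>* latest_visible_times,
                        double timestamp, double reserved_invisible_time) {
  auto it = std::remove_if(
      latest_visible_times->begin(), latest_visible_times->end(),
      [&](double t) { return t + reserved_invisible_time < timestamp; });
  latest_visible_times->erase(it, latest_visible_times->end());
  return latest_visible_times->size();
}
```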