Apollo Object Tracking

HM Object Tracker

from https://github.com/YannZyl/Apollo-Note

The HM object tracker tracks the obstacles produced by detection. In general, it forms and updates track lists by associating the current detections with the existing track lists, deletes old track lists when they no longer persist, and spawns new track lists when new detections are identified. The motion state of each updated track list is estimated after association. In the HM object tracker, the Hungarian algorithm is used for detection-to-track association, and a Robust Kalman Filter is used for motion estimation.

The above is the official Apollo documentation's description of the HM object tracker. Its meaning is fairly clear; the main tracking pipeline can be broken into:

  • Preprocessing (lidar → local ENU coordinate transform, tracked-object construction, saving the tracked targets)
  • Kalman filtering to predict each tracked object's current position and velocity (Kalman stage 1: Predict)
  • Hungarian-algorithm matching to associate detected objects with tracked objects
  • Kalman filtering to update each tracked object's position and velocity with the associated detection (Kalman stage 2: Update)

The entry point into HM object tracking is, as before, LidarProcessSubnode::OnPointCloud:

/// file in apollo/modules/perception/obstacle/onboard/lidar_process_subnode.cc
void LidarProcessSubnode::OnPointCloud(const sensor_msgs::PointCloud2& message) {
  /// call hdmap to get ROI
  ...
  /// call roi_filter
  ...
  /// call segmentor
  ...
  /// call object builder
  ...
  /// call tracker
  if (tracker_ != nullptr) {
    TrackerOptions tracker_options;
    tracker_options.velodyne_trans = velodyne_trans;
    tracker_options.hdmap = hdmap;
    tracker_options.hdmap_input = hdmap_input_;
    if (!tracker_->Track(objects, timestamp_, tracker_options,
                         &(out_sensor_objects->objects))) {
      ...
    }
  }
}

In this part there are three easily confused classes: Object, TrackedObject, and ObjectTrack. Their differences in one place:

  • Object: the common object class used across all modules. It holds the object's raw point cloud, polygon contour, class label, classification confidence, orientation, length/width/height, velocity, and so on.
  • TrackedObject: wraps Object and records the attributes of a tracked object, additionally holding its barycenter, center, velocity, acceleration, direction, and similar state.
  • ObjectTrack: wraps TrackedObject and is the actual tracking solution. It contains not only the object being tracked (the TrackedObject) but also the functions that filter it and predict its motion.

So the tracking process wraps each raw Object into a TrackedObject to create a trackable target, and then builds an ObjectTrack around that TrackedObject; the functions inside ObjectTrack then track the TrackedObject it owns.
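
A minimal sketch of that wrapping chain, using the constructors that appear in the code excerpts below (this is a hypothetical fragment for illustration, not a complete Apollo function; the surrounding setup is omitted):

// hypothetical fragment illustrating the Object -> TrackedObject -> ObjectTrack chain
std::shared_ptr<Object> object(new Object());                       // raw detection from the segmenter
std::shared_ptr<TrackedObject> tracked(new TrackedObject(object));  // per-frame tracking wrapper
ObjectTrackPtr track(new ObjectTrack(tracked));                     // long-lived track with filter state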

/* /base/object.h  */
// Only the attributes are listed; see the source for the full implementation.
struct alignas(16) Object {
  // object id per frame
  int id = 0;
  // point cloud of the object
  pcl_util::PointCloudPtr cloud;
  // convex hull of the object
  PolygonDType polygon;

  // oriented boundingbox information
  // main direction
  Eigen::Vector3d direction = Eigen::Vector3d(1, 0, 0);
  // the yaw angle, theta = 0.0 <=> direction = (1, 0, 0)
  double theta = 0.0;
  // ground center of the object (cx, cy, z_min)
  Eigen::Vector3d center = Eigen::Vector3d::Zero();
  // size of the oriented bbox, length is the size in the main direction
  double length = 0.0;
  double width = 0.0;
  double height = 0.0;
  // shape feature used for tracking
  std::vector<float> shape_features;

  // foreground score/probability
  float score = 0.0;
  // foreground score/probability type
  ScoreType score_type = ScoreType::SCORE_CNN;

  // Object classification type.
  ObjectType type = ObjectType::UNKNOWN;
  // Probability of each type, used for track type.
  std::vector<float> type_probs;

  // fg/bg flag
  bool is_background = false;

  // tracking information
  int track_id = 0;
  Eigen::Vector3d velocity = Eigen::Vector3d::Zero();
  // age of the tracked object
  double tracking_time = 0.0;
  double latest_tracked_time = 0.0;
  double timestamp = 0.0;

  // stable anchor_point during time, e.g., barycenter
  Eigen::Vector3d anchor_point;

  // noise covariance matrix for uncertainty of position and velocity
  Eigen::Matrix3d position_uncertainty;
  Eigen::Matrix3d velocity_uncertainty;

  // modeling uncertainty from sensor level tracker
  Eigen::Matrix4d state_uncertainty = Eigen::Matrix4d::Identity();
  // Tailgating (trajectory of objects)
  std::vector<Eigen::Vector3d> drops;
  // CIPV
  bool b_cipv = false;
  // local lidar track id
  int local_lidar_track_id = -1;
  // local radar track id
  int local_radar_track_id = -1;
  // local camera track id
  int local_camera_track_id = -1;

  // local lidar track ts
  double local_lidar_track_ts = -1;
  // local radar track ts
  double local_radar_track_ts = -1;
  // local camera track ts
  double local_camera_track_ts = -1;

  // sensor particular supplements, default nullptr
  RadarSupplementPtr radar_supplement = nullptr;
  CameraSupplementPtr camera_supplement = nullptr;
};
/* /lidar/tracker/tracked_object.h  */
// Only the attributes are listed; see the source for the full implementation.
struct TrackedObject {
  /* NEED TO NOTICE: All the states of track would be collected mainly based on
   * the states of tracked object. Thus, update tracked object's state when you
   * update the state of track !!! */
  // cloud
  // store transformed object before tracking
  std::shared_ptr<Object> object_ptr;

  // barycenter
  Eigen::Vector3f barycenter;

  // bbox
  Eigen::Vector3f center;
  Eigen::Vector3f size;
  Eigen::Vector3f direction;
  Eigen::Vector3f lane_direction;

  // states
  Eigen::Vector3f anchor_point;
  Eigen::Vector3f velocity;
  Eigen::Matrix3f velocity_uncertainty;
  Eigen::Vector3f acceleration;

  // class type
  ObjectType type;

  // association distance
  // range from 0 to association_score_maximum
  float association_score = 0.0f;
};  // struct TrackedObject
/* apollo/modules/perception/obstacle/lidar/tracker/hm_tracker/object_track.h
   Only the attributes are listed; see the source for the full implementation.
*/
class ObjectTrack {
 public:
  explicit ObjectTrack(std::shared_ptr<TrackedObject> obj);
  ~ObjectTrack();

  // @brief set track cached history size maximum
  // @params[IN] track_cached_history_size_maximum: track cached history size
  // maximum
  // @return true if set successfully, otherwise return false
  static bool SetTrackCachedHistorySizeMaximum(
      const int track_cached_history_size_maximum);

  // @brief set acceleration noise maximum
  // @params[IN] acceleration_noise_maximum: acceleration noise maximum
  // @return true if set successfully, otherwise return false
  static bool SetAccelerationNoiseMaximum(
      const double acceleration_noise_maximum);

  // @brief set speed noise maximum
  // @params[IN] speed noise maximum: speed noise maximum
  // @return true if set successfully, otherwise return false
  static bool SetSpeedNoiseMaximum(const double speed_noise_maximum);

  // @brief get next available track id
  // @return next available track id
  static int GetNextTrackId();

  // @brief predict the state of track
  // @params[IN] time_diff: time interval for predicting
  // @return predicted states of track
  Eigen::VectorXf Predict(const double time_diff);

  // @brief update track with object
  // @params[IN] new_object: recent detected object for current updating
  // @params[IN] time_diff: time interval from last updating
  // @return nothing
  void UpdateWithObject(std::shared_ptr<TrackedObject>* new_object,
                        const double time_diff);

  // @brief update track without object
  // @params[IN] time_diff: time interval from last updating
  // @return nothing
  void UpdateWithoutObject(const double time_diff);

  // @brief update track without object with given predicted state
  // @params[IN] predict_state: given predicted state of track
  // @params[IN] time_diff: time interval from last updating
  // @return nothing
  void UpdateWithoutObject(const Eigen::VectorXf& predict_state,
                           const double time_diff);

 protected:
  // @brief smooth velocity over track history
  // @params[IN] new_object: new detected object for updating
  // @params[IN] time_diff: time interval from last updating
  // @return nothing
  void SmoothTrackVelocity(const std::shared_ptr<TrackedObject>& new_object,
                           const double time_diff);

  // @brief smooth orientation over track history
  // @return nothing
  void SmoothTrackOrientation();

  // @brief check whether track is static or not
  // @params[IN] new_object: new detected object just updated
  // @params[IN] time_diff: time interval between last two updating
  // @return true if track is static, otherwise return false
  bool CheckTrackStaticHypothesis(const std::shared_ptr<Object>& new_object,
                                  const double time_diff);

  // @brief sub strategy of checking whether track is static or not via
  // considering the velocity angle change
  // @params[IN] new_object: new detected object just updated
  // @params[IN] time_diff: time interval between last two updating
  // @return true if track is static, otherwise return false
  bool CheckTrackStaticHypothesisByVelocityAngleChange(
      const std::shared_ptr<Object>& new_object, const double time_diff);

 private:
  ObjectTrack();

 public:
  // algorithm setup
  static tracker_config::ModelConfigs::FilterType s_filter_method_;
  BaseFilter* filter_;

  // basic info
  int idx_;
  int age_;
  int total_visible_count_;
  int consecutive_invisible_count_;
  double period_;

  std::shared_ptr<TrackedObject> current_object_;

  // history
  std::deque<std::shared_ptr<TrackedObject>> history_objects_;

  // states
  // NEED TO NOTICE: All the states would be collected mainly based on states
  // of tracked object. Thus, update tracked object when you update the state
  // of track !!!!!
  bool is_static_hypothesis_;
  Eigen::Vector3f belief_anchor_point_;
  Eigen::Vector3f belief_velocity_;
  Eigen::Matrix3f belief_velocity_uncertainty_;
  Eigen::Vector3f belief_velocity_accelaration_;

 private:
  // global setup
  static int s_track_idx_;
  static int s_track_cached_history_size_maximum_;
  static double s_speed_noise_maximum_;
  static double s_acceleration_noise_maximum_;

  DISALLOW_COPY_AND_ASSIGN(ObjectTrack);
};  // class ObjectTrack

Step 1. Preprocessing

/// file in apollo/modules/perception/obstacle/lidar/tracker/hm_tracker/hm_tracker.cc
bool HmObjectTracker::Track(const std::vector<ObjectPtr>& objects,
                            double timestamp, const TrackerOptions& options,
                            std::vector<ObjectPtr>* tracked_objects) {
  // A. track setup
  if (!valid_) {
    valid_ = true;
    return Initialize(objects, timestamp, options, tracked_objects);
  }
  // B. preprocessing
  // B.1 transform given pose to local one
  TransformPoseGlobal2Local(&velo2world_pose);
  // B.2 construct objects for tracking
  std::vector<TrackedObjectPtr> transformed_objects;
  ConstructTrackedObjects(objects, &transformed_objects, velo2world_pose, options);
  ...
}

bool HmObjectTracker::Initialize(const std::vector<ObjectPtr>& objects,
                                 const double& timestamp,
                                 const TrackerOptions& options,
                                 std::vector<ObjectPtr>* tracked_objects) {
  global_to_local_offset_ = Eigen::Vector3d(
      -velo2world_pose(0, 3), -velo2world_pose(1, 3), -velo2world_pose(2, 3));
  // B. preprocessing
  // B.1 coordinate transformation
  TransformPoseGlobal2Local(&velo2world_pose);
  // B.2 construct tracked objects
  std::vector<TrackedObjectPtr> transformed_objects;
  ConstructTrackedObjects(objects, &transformed_objects, velo2world_pose, options);
  // C. create tracks
  CreateNewTracks(transformed_objects, unassigned_objects);
  time_stamp_ = timestamp;
  // D. collect tracked results
  CollectTrackedResults(tracked_objects);
  return true;
}

The preprocessing stage consists of two parts: A. track setup and B. preprocessing. Track setup builds tracking targets from the detected objects; it runs only the first time Track is called, after which the existing tracks merely need updating. Setup is fairly simple and mainly covers:

  1. Coordinate transformation of the detected objects (original lidar frame → lidar-local ENU frame, which has a fixed orientation).
  2. Creating a tracked object for every detection and adding it to the track list.
  3. Recording the objects currently being tracked.

As the code above shows, the two parts overlap heavily, so here we only walk through the track-setup function Initialize.

(1) The first step is the coordinate transformation.

Note the translation vector global_to_local_offset_: it is the negated translation component of velo2world_trans, the transform from the lidar frame to the world frame. As the HD-map ROI filter section explained, the local ENU frame and the world frame differ only by a translation, with no rotation. So the negated translation component of that transform is exactly the translation that maps world coordinates into the lidar-local ENU frame: P_local = P_world + global_to_local_offset_.

/// file in apollo/modules/perception/obstacle/lidar/tracker/hm_tracker/hm_tracker.cc
void HmObjectTracker::TransformPoseGlobal2Local(Eigen::Matrix4d* pose) {
  (*pose)(0, 3) += global_to_local_offset_(0);
  (*pose)(1, 3) += global_to_local_offset_(1);
  (*pose)(2, 3) += global_to_local_offset_(2);
}

The TransformPoseGlobal2Local code above therefore yields a transform matrix velo2world_pose whose translation component has been cancelled out, leaving only the rotation component. What is this matrix for? Simple: it is the transform from the lidar frame to the lidar-local ENU frame.
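
A standalone sketch of this offset trick (Eigen only; the concrete pose values are made up for illustration). On the frame where global_to_local_offset_ was captured the translation cancels exactly; on later frames the remaining translation is the vehicle's displacement since initialization, which is exactly what the local ENU frame should contain:

#include <Eigen/Dense>
#include <iostream>

int main() {
  // lidar -> world pose: a rotation plus a large world translation.
  Eigen::Matrix4d velo2world_pose = Eigen::Matrix4d::Identity();
  velo2world_pose.topLeftCorner<3, 3>() =
      Eigen::AngleAxisd(0.3, Eigen::Vector3d::UnitZ()).toRotationMatrix();
  velo2world_pose.topRightCorner<3, 1>() =
      Eigen::Vector3d(442345.1, 4427056.7, 30.2);  // illustrative UTM-scale values

  // Captured once at initialization: the negated translation part.
  Eigen::Vector3d global_to_local_offset =
      -velo2world_pose.topRightCorner<3, 1>();

  // TransformPoseGlobal2Local: add the offset to the translation column.
  velo2world_pose.topRightCorner<3, 1>() += global_to_local_offset;

  // The translation column is now zero, so what remains is the pure
  // rotation mapping lidar coordinates into the local ENU frame.
  std::cout << velo2world_pose << std::endl;
  return 0;
}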

(2) The second step creates tracked objects from the objects detected by the CNN.

That is, each Object gets wrapped into a TrackedObject. Let's first look at what a TrackedObject holds:

Field                                   Notes
ObjectPtr object_ptr                    pointer to the wrapped Object
Eigen::Vector3f barycenter              barycenter: the mean of the xyz coordinates of all the object's points
Eigen::Vector3f center                  center: computed from the 4 bbox corner points plus the mean height
Eigen::Vector3f velocity                velocity, predicted by the Kalman filter
Eigen::Matrix3f velocity_uncertainty    velocity uncertainty
Eigen::Vector3f acceleration            acceleration
ObjectType type                         object type: pedestrian, bicycle, vehicle, etc.
float association_score                 association distance, from 0 to association_score_maximum

As the table shows, TrackedObject wraps Object and adds only a small amount of extra information such as velocity and acceleration.

/// file in apollo/modules/perception/obstacle/lidar/tracker/hm_tracker/hm_tracker.cc
void HmObjectTracker::ConstructTrackedObjects(
    const std::vector<ObjectPtr>& objects,
    std::vector<TrackedObjectPtr>* tracked_objects, const Eigen::Matrix4d& pose,
    const TrackerOptions& options) {
  int num_objects = objects.size();
  tracked_objects->clear();
  tracked_objects->resize(num_objects);
  for (int i = 0; i < num_objects; ++i) {
    ObjectPtr obj(new Object());
    obj->clone(*objects[i]);
    (*tracked_objects)[i].reset(new TrackedObject(obj));      // create new TrackedObject with object
    // Computing shape feature
    if (use_histogram_for_match_) {
      ComputeShapeFeatures(&((*tracked_objects)[i]));         // compute shape feature
    }
    // Transforming all tracked objects
    TransformTrackedObject(&((*tracked_objects)[i]), pose);   // transform coordinates from lidar frame to local ENU frame
    // Setting barycenter as anchor point of tracked objects
    Eigen::Vector3f anchor_point = (*tracked_objects)[i]->barycenter;
    (*tracked_objects)[i]->anchor_point = anchor_point;
    // Getting lane direction of tracked objects
    pcl_util::PointD query_pt;                                // anchor point converted back to world coordinates for the HD-map query
    query_pt.x = anchor_point(0) - global_to_local_offset_(0);
    query_pt.y = anchor_point(1) - global_to_local_offset_(1);
    query_pt.z = anchor_point(2) - global_to_local_offset_(2);
    Eigen::Vector3d lane_dir;
    if (!options.hdmap_input->GetNearestLaneDirection(query_pt, &lane_dir)) {
      lane_dir = (pose * Eigen::Vector4d(1, 0, 0, 0)).head(3);  // fall back to the lidar x-axis in the local ENU frame
    }
    (*tracked_objects)[i]->lane_direction = lane_dir.cast<float>();
  }
}

ConstructTrackedObjects creates the tracked objects from the detected objects. The process is straightforward:

  • For each object in vector<ObjectPtr>& objects, create the corresponding TrackedObject and compute its shape feature. The computation is simple: find the minimum and maximum of the object's points along the x, y and z axes, split each axis range into 10 equal bins, and project each point's x, y and z coordinates into those bins. The resulting feature [x_bins, y_bins, z_bins] has 30 dimensions in total and, normalized by the number of points, gives the final shape feature (see the sketch after this list).
  • TransformTrackedObject transforms the tracked object's direction, center, raw point cloud, polygon corner points, barycenter, etc. from the lidar frame to the local ENU frame.
  • Query the HD map with the anchor point's world coordinates to get the nearest lane direction.
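
A standalone re-implementation sketch of that 30-d shape feature (10 bins per axis, normalized by point count). It is written against a plain point list for illustration and is not Apollo's actual ComputeShapeFeatures source:

#include <Eigen/Dense>
#include <algorithm>
#include <vector>

std::vector<float> ComputeShapeFeatureSketch(
    const std::vector<Eigen::Vector3f>& cloud, const int bin_count = 10) {
  std::vector<float> feature(3 * bin_count, 0.0f);
  if (cloud.empty()) return feature;
  // Per-axis min/max over the whole cloud.
  Eigen::Vector3f min_pt = cloud.front();
  Eigen::Vector3f max_pt = cloud.front();
  for (const auto& pt : cloud) {
    min_pt = min_pt.cwiseMin(pt);
    max_pt = max_pt.cwiseMax(pt);
  }
  // Project each point's x, y, z into its per-axis histogram.
  for (const auto& pt : cloud) {
    for (int axis = 0; axis < 3; ++axis) {
      const float range = max_pt(axis) - min_pt(axis);
      int bin = range > 0.0f
                    ? static_cast<int>((pt(axis) - min_pt(axis)) / range * bin_count)
                    : 0;
      bin = std::min(bin, bin_count - 1);  // the max point falls into the last bin
      feature[axis * bin_count + bin] += 1.0f;
    }
  }
  // Normalize by point count -> [x_bins, y_bins, z_bins], 30-d for bin_count = 10.
  for (float& v : feature) v /= static_cast<float>(cloud.size());
  return feature;
}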

(3) Build a track around each newly created TrackedObject, formally starting the tracking (wrapping it in an ObjectTrack).

/// file in apollo/modules/perception/obstacle/lidar/tracker/hm_tracker/hm_tracker.cc
void HmObjectTracker::CreateNewTracks(
    const std::vector<TrackedObjectPtr>& new_objects,
    const std::vector<int>& unassigned_objects) {
  // Create new tracks for objects without matched tracks
  for (size_t i = 0; i < unassigned_objects.size(); i++) {
    int obj_id = unassigned_objects[i];
    ObjectTrackPtr track(new ObjectTrack(new_objects[obj_id]));
    object_tracks_.AddTrack(track);
  }
}

Meanwhile, the CollectTrackedResults function saves the objects currently being tracked into the output vector, in world coordinates. That code is simple, so it is not reproduced here; a rough sketch of the idea follows.
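
As a hypothetical sketch (not the Apollo source) of what collecting results "in world coordinates" involves, the stored offset just has to be removed again, inverting P_local = P_world + global_to_local_offset_ from above:

#include <Eigen/Dense>
#include <vector>

// Hypothetical helper: convert tracked positions from the lidar-local ENU
// frame back to world coordinates, i.e. P_world = P_local - global_to_local_offset.
std::vector<Eigen::Vector3d> CollectPositionsInWorld(
    const std::vector<Eigen::Vector3d>& local_positions,
    const Eigen::Vector3d& global_to_local_offset) {
  std::vector<Eigen::Vector3d> world_positions;
  world_positions.reserve(local_positions.size());
  for (const auto& p_local : local_positions) {
    world_positions.push_back(p_local - global_to_local_offset);
  }
  return world_positions;
}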

Step 2. Kalman Filtering: Predicting the Tracked Object's State (Kalman Stage 1: Predict)

In the preprocessing stage, each detected Object was wrapped into a TrackedObject, and an ObjectTrack was created around it holding the object to be tracked. The work in this stage is to run the Kalman filter over each TrackedObject and predict its motion.

First, a brief review of the basic Kalman filter equations, to make what follows easier to understand.

A system has a state equation and an observation equation. The observation equation covers the quantities we can observe from the outside, here for instance the position and velocity of the vehicle's barycenter in xyz. The state equation describes the internal state of the whole system; it includes the observable quantities (such as that position and velocity) but may also include hidden quantities whose physical meaning we may not even be able to define. The observed quantities are therefore a subset of the state quantities. We have:

State equation: $X_t = A_{t,t-1} X_{t-1} + W_t$, where $W_t \sim N(0, Q)$

Observation equation: $Z_t = C_t X_t + V_t$, where $V_t \sim N(0, R)$

The Kalman filter runs in two stages, Predict and Update:

  • Predict stage
    • Use the optimal estimate $X_{t-1}$ from time t-1 to predict the current state, $X_{t,t-1} = A_{t,t-1} X_{t-1}$; this $X_{t,t-1}$ is the a priori (predicted) state estimate.
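
A standalone sketch of this Predict stage for a constant-velocity state x = [position; velocity] (6-d), matching the state equation above plus the standard covariance prediction $P_{t,t-1} = A P_{t-1} A^T + Q$. The state layout and noise values are illustrative assumptions, not Apollo's exact KalmanFilter implementation:

#include <Eigen/Dense>

struct KalmanPredictSketch {
  Eigen::VectorXd x = Eigen::VectorXd::Zero(6);                // [px, py, pz, vx, vy, vz]
  Eigen::MatrixXd P = Eigen::MatrixXd::Identity(6, 6);         // state covariance
  Eigen::MatrixXd Q = Eigen::MatrixXd::Identity(6, 6) * 0.01;  // process noise (assumed value)

  // X_{t,t-1} = A_{t,t-1} X_{t-1};  P_{t,t-1} = A P_{t-1} A^T + Q
  void Predict(const double time_diff) {
    Eigen::MatrixXd A = Eigen::MatrixXd::Identity(6, 6);
    A.topRightCorner(3, 3) = Eigen::Matrix3d::Identity() * time_diff;  // position += velocity * dt
    x = A * x;
    P = A * P * A.transpose() + Q;
  }
};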