[Paper Reading] Online multi-sensor calibration based on moving object tracking


Abstract

propose an online calibration method based on detection and tracking of moving objects.

resource inexpensive solution

The method consists of calibration-agnostic track-to-track association, computationally lightweight decalibration detection, and graph-based rotation calibration.

1. Introduction

DATMO: detection and tracking of moving objects

sensor calibration consists of finding the intrinsic, extrinsic and temporal parameters.

online approaches use information from the environment during regular system operation, thus enabling long-term robustness of the autonomous system.

Online calibration methods can be divided into feature-based and motion-based methods.

In this paper, we leverage the current state of the art in DATMO and propose an online calibration method based on it. Our motivation is to enable decalibration detection and recalibration based on information already present in an autonomous system pipeline, without adding significant computational overhead.

Our method provides a full pipeline which includes:

  1. DATMO algorithm for each sensor modality
  2. track-to-track association based on a calibration invariant measure
  3. efficient decalibration detection
  4. a graph-based calibration handling multiple heterogeneous sensors simultaneously

Refer to Figure 1.


Our method only estimates rotational component of the extrinsic calibration.

Our method assumes that translational calibration is obtained using either target-based or sensor-specific methods.

2. Proposed Method

2.1 Object Detection

radars provide a list of detected objects with the following measured information: range, azimuth angle, range rate, and radar cross-section (RCS). Radar returns arrive as clusters.

Lidar’s and camera’s raw data… use the MEGVII network based on sparse 3D convolution, which is currently the best-performing method for object detection on the nuScenes challenge.

for object detection from images, use CenterNet; it does not provide velocity information.

use the network weights trained on the KITTI dataset and determine the range scale factor by comparing CenterNet detections to MEGVII detections.

2.2 Tracking of moving objects

associate detections across time frames and estimate the objects’ states, which are later used as inputs for the subsequent steps.
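The paper does not pin down a specific tracker here; as an illustrative sketch (function name and noise values are my own assumptions), a per-object constant-velocity Kalman filter is a common choice for this step:

```python
import numpy as np

def cv_kalman_step(x, P, z, dt, q=1.0, r=0.5):
    """One predict/update cycle of a constant-velocity Kalman filter.
    State x = [px, py, vx, vy]; measurement z = [px, py]."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                 # position += velocity * dt
    Q = q * np.eye(4)                      # simplified process noise
    H = np.array([[1., 0., 0., 0.],
                  [0., 1., 0., 0.]])       # only position is observed
    R = r * np.eye(2)

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

The resulting state estimates (position and velocity per object) are exactly what the later association and calibration steps consume.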

2.3 Track-to-track association

observe two criteria for each track-pair candidate over their common history:

  1. mean of the velocity norm difference
  2. mean of the position norm difference

A track pair is associated only if both criteria stay below predefined thresholds.
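The key property making these criteria calibration-agnostic is that vector norms are invariant to the unknown rotation between sensor frames. A minimal sketch (names and threshold values are illustrative, not the paper's):

```python
import numpy as np

def tracks_match(pos_a, vel_a, pos_b, vel_b, pos_thresh=1.0, vel_thresh=0.5):
    """Calibration-invariant track-to-track association test.

    pos_*, vel_*: (N, 3) arrays of time-synchronized samples over the
    tracks' common history, each in its own sensor frame. Norms do not
    change under the unknown rotation between the frames, so they can
    be compared before any extrinsic calibration is known.
    """
    dv = np.abs(np.linalg.norm(vel_a, axis=1) - np.linalg.norm(vel_b, axis=1))
    dp = np.abs(np.linalg.norm(pos_a, axis=1) - np.linalg.norm(pos_b, axis=1))
    # Both mean differences must stay below their thresholds
    return dv.mean() < vel_thresh and dp.mean() < pos_thresh
```

Note this assumes the translational offset between sensors is small relative to object range, consistent with the paper's assumption that translation is calibrated separately.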

time-varying SE(3) transform

2.4 Decalibration detection

propose a computationally inexpensive decalibration detection methods, which is based on the data already present in the system.

form a 3×3 data matrix…

When criterion (12) surpasses a predefined threshold, the system proceeds to the complete graph-based sensor calibration. The magnitude of the smallest detectable decalibration is limited by the predefined threshold and by the horizon defined by the time window Tw. A longer horizon enables detection of smaller calibration changes, but with slower convergence.
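The exact form of criterion (12) is in the paper; a plausible sketch of the idea accumulates the 3×3 data matrix from corresponding vector pairs, extracts the best-fit rotation (the Kabsch/Wahba SVD solution), and measures its angular deviation from the stored calibration (all names here are my own):

```python
import numpy as np

def rotation_from_data_matrix(B):
    """Best-fit rotation from a 3x3 data matrix B = sum_i a_i b_i^T,
    maximizing tr(R^T B) over SO(3) via SVD (Kabsch/Wahba solution)."""
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U @ Vt))     # guard against reflections
    return U @ np.diag([1., 1., d]) @ Vt

def decalibration_angle(B, R_calib):
    """Angle (rad) between the rotation implied by the accumulated
    data matrix and the currently stored extrinsic calibration."""
    R = rotation_from_data_matrix(B)
    cos = (np.trace(R_calib.T @ R) - 1.0) / 2.0
    return np.arccos(np.clip(cos, -1.0, 1.0))
```

This is computationally cheap because the data matrix is updated incrementally with one outer product per associated sample, and only a 3×3 SVD is needed per check.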

2.5 Graph-based extrinsic calibration

graph-based optimization

to ensure and speed up the convergence, we use the results of the previous step as an initialization.

one sensor is chosen as an anchor and aligned with Fe for convenience. We then search for the poses of the other sensors with respect to the anchor sensor by minimizing criteria (13)(14)

total least squares approach

if a sensor does not have a direct link with the anchor sensor, obtain R_ij by multiplying the corresponding series of rotation matrices along the path between the i-th and j-th sensors. This approach enables the estimation of all parameters in a single optimization, while ensuring consistency between sensor transforms.
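Chaining rotations along a path in the sensor graph is a plain matrix product; a minimal sketch (data layout and names are my own assumptions):

```python
import numpy as np

def chain_rotations(rotations, path):
    """Compose pairwise rotations along a path in the sensor graph.

    rotations: dict {(i, j): R_ij}, where R_ij maps frame j into frame i.
    path: sensor indices from the anchor to the target, e.g. [0, 1, 2].
    The reverse edge is available implicitly, since R_ji = R_ij^T for
    rotation matrices.
    """
    R = np.eye(3)
    for i, j in zip(path, path[1:]):
        if (i, j) in rotations:
            R = R @ rotations[(i, j)]
        else:
            R = R @ rotations[(j, i)].T    # only the reverse edge exists
    return R
```

Optimizing all edges jointly, then reading chained transforms off the graph, is what keeps the pairwise estimates mutually consistent.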

3. Experimental results

real-world data -> nuScenes dataset

3.1 Experimental setup

1000 scenes, each 20 s long

a roof-mounted 3D lidar, 5 radars, and 6 cameras; the experiments focus only on the top lidar, front radar, and front camera, which share a common FOV.

speed of ego vehicle: stationary for the first 5 s; 40 km/h afterwards.

17 moving vehicles

8 stationary vehicles in the detectable area for all the sensors.

3.2 Results

In comparison to the camera, lidar provided significantly more detections, with frequent false positives, which were successfully filtered by setting a threshold on the detection scores.

MEGVII network detects and classifies the same object as both car and truck…

the radar provides many false positives and multiple detections of the same vehicle.

RCS is not a reliable measure for vehicle classification…

the success rate for each sensor pair was as follows:

  • lidar-radar 93%

  • lidar-camera 94%

  • radar-camera 94%

Average time to associate two tracks is 1.5 s for every sensor combination. Decalibration did not lead to any noticeable difference in the association results.

criterion for each sensor pair is below 1° throughout the scene.

artificial decalibration of 3° in the yaw angle…

We can notice a significant increase in the criterion for the sensor pairs involving the camera, while the criterion for lidar-radar remained the same.

which sensor changed its orientation can be assessed by simply comparing the sensor-pairwise criteria.

3.3 Comparison with odometry-based calibration

tested against the SRRG method from [36]

4. Conclusion

  1. proposed an online multi-sensor calibration method based on detection and tracking of moving objects.
  2. works on a moving platform without relying on a known target; does not assume a constant and known sensor calibration.
  3. proposed track-to-track association…
  4. graph-based optimization

limited to rotation calibration only. Nevertheless, it was able to estimate rotation parameters with an approximate error of 0.2° from a 20 s-long scene.

Comprehension

Object association uses the correlation of position and velocity. We previously worked on the late-fusion part, associating objects by their positions in the WGS-84 frame; that method requires a manually set threshold. Once associated, we obtain several matched points and then perform the 3D-to-2D transformation.
For the decalibration part: the idea is that once tracks become associated, the associations are recorded continuously; when they break apart (possibly due to occlusion or the target leaving the sensing range), the accumulated association data is used to compute the calibration, i.e., de-association triggers the calibration computation. The intent is probably to accumulate more matched data; in theory, the more uniformly the data is distributed over the FOV, the higher the calibration accuracy.
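The 3D-to-2D step mentioned above is a standard pinhole projection; a minimal sketch with an illustrative intrinsic matrix (the values of K are hypothetical, not from any real sensor):

```python
import numpy as np

def project_points(K, R, t, pts3d):
    """Project 3D points (N, 3) from a reference frame into pixel
    coordinates using intrinsics K and extrinsics (R, t)."""
    cam = pts3d @ R.T + t                  # reference -> camera frame
    uvw = cam @ K.T                        # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]        # perspective divide

# Illustrative pinhole intrinsics (focal 800 px, principal point 320,240)
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])
```

With matched 3D points from one sensor and 2D detections from the camera, the residual between projected and detected image positions is what a position-based association threshold would be applied to.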

Words

| Word | Definition |
| --- | --- |
| pipeline | 管道;流水线 |
| ego-motion | 自运动 |
| rotational | 旋转的 |
| heterogeneous | 异质的 |
| leverage | 利用 v.;杠杆作用 n. |
| exteroceptive | 外感受的 |
| stationary | 静止的 |
| thrust | 要旨;推力 v. n. |
| FOV | Field of View,视场 |
| degradation | 退化 |
| mutually | 相互地 |
| substantial | 重大的 |
| association | 关联 |
| conservative | 保守的 |
| compromise | 妥协,折中 |
| loose | 宽松的,松动的 |
| trivial | 琐碎的,细小的 |
| discrete | 离散的 |
| accommodate | 容纳 |
| disturbance | n. 干扰,扰乱 |
| coincide | 重合 |
| convergence | 收敛 |
| magnitude | 大小,量级 |
| paradigm | 范式 |
| isotropic | 各向同性的 |
| parentheses | 括号 |
| nevertheless | 尽管如此 |