Paper Overview
- 2019-10-19, "Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles". Paper link:
Abstract
- The paper proposes a new multi-sensor framework for perception fusion in autonomous driving. The framework combines an encoder-decoder fully convolutional neural network (FCNx) with the classical Extended Kalman Filter (EKF), a nonlinear state-estimation method.
- The fusion system uses camera, LiDAR, and radar sensors.
- The system runs in real time on an embedded computer.
- Sensor fusion algorithms generally fall into two categories: state-estimator-based methods (e.g., Kalman filters, particle filters) and machine-learning-based methods (e.g., deep neural networks, Bayesian inference, maximum-likelihood estimation).
Literature Review: Perception Fusion Systems
Fusion architectures:
- [1] Xiao, L.; Wang, R.; Dai, B.; Fang, Y.; Liu, D.; Wu, T. Hybrid conditional random field based camera-LIDAR fusion for road detection. Inf. Sci. 2018, 432, 543–558.
- [2] Xiao, L.; Dai, B.; Liu, D.; Hu, T.; Wu, T. Crf based road detection with multi-sensor fusion. In Proceedings of the 2015 IEEE Intelligent Vehicles Symposium (IV), Seoul, Korea, 28 June–1 July 2015; pp. 192–198.
- [3] Broggi, A. Robust real-time lane and road detection in critical shadow conditions. In Proceedings of the International Symposium on Computer Vision-ISCV, Coral Gables, FL, USA, 21–23 November 1995; pp. 353–358.
- [4] Teichmann, M.; Weber, M.; Zoellner, M.; Cipolla, R.; Urtasun, R. Multinet: Real-time joint semantic reasoning for autonomous driving. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 1013–1020.
- [5] Sobh, I.; Amin, L.; Abdelkarim, S.; Elmadawy, K.; Saeed, M.; Abdeltawab, O.; Gamal, M.; El Sallab, A. End-to-End multi-modal sensors fusion system for urban automated driving. In Proceedings of the 2018 NIPS MLITS Workshop: Machine Learning for Intelligent Transportation Systems, Montreal, QC, Canada, 3–8 December 2018.
- [6] Aeberhard, M.; Kaempchen, N. High-level sensor data fusion architecture for vehicle surround environment perception. In Proceedings of the 8th International Workshop on Intelligent Transportation (WIT 2011), Hamburg, Germany, 22–23 March 2011.
Camera + LiDAR + radar:
- [1] Garcia, F.; Martin, D.; De La Escalera, A.; Armingol, J.M. Sensor fusion methodology for vehicle detection. IEEE Intell. Transp. Syst. Mag. 2017, 9, 123–133.
- [2] Nada, D.; Bousbia-Salah, M.; Bettayeb, M. Multi-sensor data fusion for wheelchair position estimation with unscented Kalman Filter. Int. J. Autom. Comput. 2018, 15, 207–217.
- [3] Jagannathan, S.; Mody, M.; Jones, J.; Swami, P.; Poddar, D. Multi-sensor fusion for Automated Driving: Selecting model and optimizing on Embedded platform. In Proceedings of the Autonomous Vehicles and Machines 2018, Burlingame, CA, USA, 28 January–2 February 2018; pp. 1–5.
MMW radar + camera:
- [1] Wang, X.; Xu, L.; Sun, H.; Xin, J.; Zheng, N. On-road vehicle detection and tracking using MMW radar and monovision fusion. IEEE Trans. Intell. Transp. Syst. 2016, 17, 2075–2084.
Sensor Fusion Algorithm Overview
Multi-sensor fusion algorithm flowchart:
- Camera + LiDAR fusion handles high-resolution object classification, localization, and road semantic segmentation; feature-level fusion of RGBD inputs (D = LiDAR depth) through the FCNx network.
- LiDAR and radar handle object detection and tracking; object-level fusion via the Extended Kalman Filter.
Camera + LiDAR fusion
FCNx network structure:
Segments the environment into drivable and non-drivable regions.
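The paper's FCNx architecture and weights are not reproduced in these notes. Purely to illustrate the encoder-decoder shape (pool the RGBD input down, upsample back, then a per-pixel 1x1 classification), here is a toy numpy sketch; the layer sizes and random weights are assumptions, not the paper's design:

```python
import numpy as np

def avg_pool2x2(x):
    """Encoder stage: 2x2 average pooling, halves spatial resolution.
    x has shape (C, H, W) with H, W even."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def upsample2x(x):
    """Decoder stage: nearest-neighbour upsampling, doubles resolution."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fcn_drivable_mask(rgbd, w, b):
    """Toy encoder-decoder: RGBD in -> per-pixel drivable/non-drivable out.
    rgbd: (4, H, W) array (RGB + LiDAR depth channel).
    w, b: weights of a 1x1 'classifier' convolution (illustrative only)."""
    feat = avg_pool2x2(avg_pool2x2(rgbd))      # encoder: 1/4 resolution
    feat = upsample2x(upsample2x(feat))        # decoder: back to full res
    logits = np.tensordot(w, feat, axes=([0], [0])) + b  # 1x1 conv
    return logits > 0                          # binary drivable mask

rng = np.random.default_rng(0)
rgbd = rng.random((4, 8, 8))
mask = fcn_drivable_mask(rgbd, w=rng.standard_normal(4), b=0.0)
print(mask.shape)  # (8, 8): one drivable/non-drivable label per pixel
```

The real FCNx learns its encoder and decoder weights end to end; the point here is only that input and output share the same spatial resolution.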
Radar + LiDAR fusion
- Radar filtering: cell-averaging CFAR (CA-CFAR) algorithm.
- LiDAR filtering: ROI cropping + voxel-grid filtering + ground-plane segmentation with RANSAC.
- Extended Kalman Filter (EKF): state vector (px, py, vx, vy), constant-velocity motion model; measurement Jacobian:
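CA-CFAR declares a detection when a cell's power exceeds a multiple of the average power in its surrounding training cells. A minimal 1-D numpy sketch; the window sizes and threshold factor are illustrative assumptions, not values from the paper:

```python
import numpy as np

def ca_cfar(power, num_train=8, num_guard=2, scale=3.0):
    """1-D cell-averaging CFAR: a cell is a detection when its power
    exceeds `scale` times the mean of the surrounding training cells
    (guard cells adjacent to the cell under test are excluded)."""
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    half = num_train // 2 + num_guard
    for i in range(half, n - half):
        window = np.r_[power[i - half : i - num_guard],
                       power[i + num_guard + 1 : i + half + 1]]
        detections[i] = power[i] > scale * window.mean()
    return detections

# A flat noise floor with one strong radar return at cell 20:
power = np.ones(40)
power[20] = 10.0
hits = ca_cfar(power)
print(np.flatnonzero(hits))  # -> [20]
```

Because the threshold adapts to the local noise estimate, the false-alarm rate stays roughly constant as the noise floor varies, which is the point of CFAR over a fixed threshold.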
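The notes end before the Jacobian itself. For a constant-velocity EKF fusing radar, a common measurement model (assumed here, not quoted from the paper) is range, bearing, and range rate; its Jacobian with respect to (px, py, vx, vy) can be written analytically and checked against finite differences:

```python
import numpy as np

def h_radar(x):
    """Radar measurement model for CV state x = (px, py, vx, vy):
    range, bearing, range rate (an assumed, standard choice)."""
    px, py, vx, vy = x
    rho = np.hypot(px, py)
    return np.array([rho, np.arctan2(py, px), (px * vx + py * vy) / rho])

def jacobian_h(x):
    """Analytic Jacobian dh/dx of the radar measurement model."""
    px, py, vx, vy = x
    rho2 = px * px + py * py
    rho = np.sqrt(rho2)
    cross = (vx * py - vy * px) / rho2**1.5
    return np.array([
        [px / rho,    py / rho,    0.0,      0.0],
        [-py / rho2,  px / rho2,   0.0,      0.0],
        [py * cross, -px * cross,  px / rho, py / rho],
    ])

# Verify the analytic Jacobian with central finite differences:
x = np.array([4.0, 3.0, 1.0, -0.5])
eps = 1e-6
num = np.column_stack([
    (h_radar(x + eps * e) - h_radar(x - eps * e)) / (2 * eps)
    for e in np.eye(4)
])
print(np.allclose(jacobian_h(x), num, atol=1e-6))  # -> True
```

The EKF needs this linearization because range, bearing, and range rate are nonlinear in the state; the prediction step under the CV model remains linear.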