#One Paper a Week 1# [Data Fusion] Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles

Paper Overview

  • 2019-10-19, Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles. Article link:
  • A real-time hybrid multi-sensor fusion framework for perception in autonomous vehicles

Abstract

  • The paper proposes a new multi-sensor framework for perception fusion in autonomous driving. The framework combines an encoder-decoder fully convolutional neural network (FCNx) with the classical Extended Kalman Filter (EKF), a nonlinear state-estimation method.
  • The fusion system uses camera, LiDAR, and radar sensors.
  • It runs in real time on an embedded computer.
  • Sensor fusion algorithms generally fall into two categories: state-estimation-based methods (Kalman filters, particle filters, etc.) and machine-learning-based methods (deep neural networks, Bayesian inference, maximum-likelihood estimation).
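As a minimal illustration of the first category, the snippet below fuses two noisy measurements of the same range with a scalar Kalman-style update. This is a generic sketch, not the paper's implementation; the sensor readings and variances are made-up values.

```python
def fuse(z1, var1, z2, var2):
    """Fuse two scalar measurements by inverse-variance weighting
    (equivalent to a scalar Kalman measurement update)."""
    k = var1 / (var1 + var2)      # Kalman gain: trust the less noisy sensor more
    mean = z1 + k * (z2 - z1)     # fused estimate
    var = (1 - k) * var1          # fused variance, smaller than either input
    return mean, var

# Hypothetical readings: radar says 10.2 m (var 0.5), LiDAR says 10.0 m (var 0.1)
mean, var = fuse(10.2, 0.5, 10.0, 0.1)
# mean is approximately 10.033, var approximately 0.083
```

Note that the fused variance is below both input variances, which is the basic payoff of state-estimation fusion.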

Literature Review of Perception Fusion Systems

Fusion Architectures

  • [1] Xiao, L.; Wang, R.; Dai, B.; Fang, Y.; Liu, D.; Wu, T. Hybrid conditional random field based camera-LIDAR fusion for road detection. Inf. Sci. 2018, 432, 543–558.
  • [2] Xiao, L.; Dai, B.; Liu, D.; Hu, T.; Wu, T. CRF based road detection with multi-sensor fusion. In Proceedings of the 2015 IEEE Intelligent Vehicles Symposium (IV), Seoul, Korea, 28 June–1 July 2015; pp. 192–198.
  • [3] Broggi, A. Robust real-time lane and road detection in critical shadow conditions. In Proceedings of the International Symposium on Computer Vision-ISCV, Coral Gables, FL, USA, 21–23 November 1995; pp. 353–358.
  • [4] Teichmann, M.; Weber, M.; Zoellner, M.; Cipolla, R.; Urtasun, R. Multinet: Real-time joint semantic reasoning for autonomous driving. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 1013–1020.
  • [5] Sobh, I.; Amin, L.; Abdelkarim, S.; Elmadawy, K.; Saeed, M.; Abdeltawab, O.; Gamal, M.; El Sallab, A. End-to-end multi-modal sensors fusion system for urban automated driving. In Proceedings of the 2018 NIPS MLITS Workshop: Machine Learning for Intelligent Transportation Systems, Montreal, QC, Canada, 3–8 December 2018.
  • [6] Aeberhard, M.; Kaempchen, N. High-level sensor data fusion architecture for vehicle surround environment perception. In Proceedings of the 8th International Workshop on Intelligent Transportation (WIT 2011), Hamburg, Germany, 22–23 March 2011.
    Camera + LiDAR + Radar
  • [1] Garcia, F.; Martin, D.; De La Escalera, A.; Armingol, J.M. Sensor fusion methodology for vehicle detection. IEEE Intell. Transp. Syst. Mag. 2017, 9, 123–133.
  • [2] Nada, D.; Bousbia-Salah, M.; Bettayeb, M. Multi-sensor data fusion for wheelchair position estimation with unscented Kalman filter. Int. J. Autom. Comput. 2018, 15, 207–217.
  • [3] Jagannathan, S.; Mody, M.; Jones, J.; Swami, P.; Poddar, D. Multi-sensor fusion for automated driving: Selecting model and optimizing on embedded platform. In Proceedings of the Autonomous Vehicles and Machines 2018, Burlingame, CA, USA, 28 January–2 February 2018; pp. 1–5.
    Millimeter-wave Radar + Camera
  • [1] Wang, X.; Xu, L.; Sun, H.; Xin, J.; Zheng, N. On-road vehicle detection and tracking using MMW radar and monovision fusion. IEEE Trans. Intell. Transp. Syst. 2016, 17, 2075–2084.

Sensors

Sensor mounting layout on the autonomous vehicle (figure)

Fusion Algorithm Overview

Multi-sensor fusion algorithm flowchart:

  • Camera + LiDAR fusion performs high-resolution object classification, localization, and road semantic segmentation; feature-level fusion of an RGBD input (D = LiDAR depth) through the FCNx network.
  • LiDAR and radar sensors perform object detection and tracking; object-level fusion with an Extended Kalman Filter.
    Multi-sensor fusion flowchart (figure)
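The feature-level branch above needs the LiDAR depth projected into the camera image to form the RGBD input. A minimal sketch of that projection, assuming a pinhole camera model with hypothetical intrinsics (`fx`, `fy`, `cx`, `cy` are made-up values, and the points are already expressed in the camera frame):

```python
def project_lidar_to_image(points, fx, fy, cx, cy, width, height):
    """Project 3-D LiDAR points (camera frame: x right, y down, z forward)
    onto the image plane to build a sparse depth channel D for RGBD fusion."""
    depth = [[0.0] * width for _ in range(height)]
    for x, y, z in points:
        if z <= 0:                    # point is behind the camera plane
            continue
        u = int(fx * x / z + cx)      # pinhole projection, column index
        v = int(fy * y / z + cy)      # pinhole projection, row index
        if 0 <= u < width and 0 <= v < height:
            # keep the nearest return if several points hit the same pixel
            if depth[v][u] == 0.0 or z < depth[v][u]:
                depth[v][u] = z
    return depth

# Hypothetical intrinsics and a single point 10 m straight ahead:
d = project_lidar_to_image([(0.0, 0.0, 10.0)], fx=500, fy=500, cx=320, cy=240,
                           width=640, height=480)
# the point lands at the principal point (u=320, v=240) with depth 10.0
```

A real pipeline would first apply the extrinsic LiDAR-to-camera transform and typically densify or normalize the sparse depth channel before feeding it to the network.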

Camera + LiDAR Fusion

FCNx network architecture:
The network segments the environment into drivable and non-drivable regions.
FCNx fully convolutional network architecture for object detection and road segmentation (figure)
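The encoder-decoder structure behind such a segmentation network can be illustrated with a toy example (this is only the structural idea, not the paper's FCNx): an encoder downsamples a score map, and a decoder upsamples back to input resolution before thresholding into a drivable mask.

```python
def max_pool_2x2(grid):
    """Encoder step: 2x2 max pooling halves the spatial resolution."""
    h, w = len(grid), len(grid[0])
    return [[max(grid[i][j], grid[i][j + 1],
                 grid[i + 1][j], grid[i + 1][j + 1])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

def upsample_2x(grid):
    """Decoder step: nearest-neighbor upsampling restores the resolution."""
    out = []
    for row in grid:
        wide = [v for v in row for _ in range(2)]
        out.append(wide)
        out.append(list(wide))
    return out

# Toy 4x4 "drivability score" map: encode to 2x2, decode back to 4x4,
# then threshold into a drivable (1) / non-drivable (0) mask.
scores = [[0.1, 0.2, 0.8, 0.9],
          [0.1, 0.3, 0.7, 0.9],
          [0.0, 0.1, 0.6, 0.8],
          [0.0, 0.2, 0.7, 0.9]]
decoded = upsample_2x(max_pool_2x2(scores))
mask = [[1 if v > 0.5 else 0 for v in row] for row in decoded]
```

A real decoder uses learned transposed convolutions and skip connections rather than nearest-neighbor copies, but the resolution round trip is the same.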

Radar + LiDAR Fusion

  • Radar filtering: cell-averaging CFAR (CA-CFAR) algorithm
  • LiDAR filtering: ROI cropping + voxel-grid filtering + RANSAC ground-plane segmentation
  • Extended Kalman Filter (EKF): state vector (px, py, vx, vy), constant-velocity motion model; measurement Jacobian:
    EKF Jacobian matrix (figure)
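For reference, assuming the radar measures range, bearing, and range rate (the common setup for this kind of EKF; the paper's exact measurement model may differ), the measurement Jacobian for the state (px, py, vx, vy) can be sketched as:

```python
import math

def radar_jacobian(px, py, vx, vy):
    """Jacobian of the radar measurement h(x) = (range, bearing, range rate)
    with respect to the constant-velocity state (px, py, vx, vy)."""
    rho2 = px * px + py * py
    rho = math.sqrt(rho2)
    if rho < 1e-6:                       # guard against division by zero
        raise ValueError("state too close to the sensor origin")
    rho3 = rho2 * rho
    return [
        [px / rho,   py / rho,   0.0, 0.0],   # d(range)/d(state)
        [-py / rho2, px / rho2,  0.0, 0.0],   # d(bearing)/d(state)
        [py * (vx * py - vy * px) / rho3,     # d(range rate)/d(state)
         px * (vy * px - vx * py) / rho3,
         px / rho,   py / rho],
    ]

H = radar_jacobian(1.0, 0.0, 1.0, 0.0)
# for this state the Jacobian reduces to [[1,0,0,0],[0,1,0,0],[0,0,1,0]]
```

The first two rows are linearized geometry (range and bearing depend only on position); the third row couples position and velocity through the range rate, which is what lets radar Doppler observations correct the velocity estimate.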

Model Test Results

Comparison of the FCNx architecture with the FCN8 and U-Net architectures (figure)

Comparison of the architecture's performance with the FCN8 and U-Net networks (table)

