ADAS-Related Collection

Software/Dataset


Information Gain Based Active Reconstruction Framework


src="https://www.youtube.com/embed/ZcJcsoGGqbA" allowfullscreen="" frameborder="0" height="225" width="300">

The Information Gain Based Active Reconstruction Framework is a modular, robot-agnostic software package for performing next-best-view planning for volumetric object reconstruction using a range sensor. Our implementation can be easily adapted to any mobile robot equipped with any camera-based range sensor (e.g., a stereo camera or a structured-light sensor) to iteratively observe an object and generate a volumetric map and a point cloud model. The algorithm lets the user define the information gain metric for choosing the next best view; many formulations of these metrics are evaluated and compared in our ICRA paper. The framework is released as an open-source, ROS-compatible package for autonomous 3D reconstruction tasks.
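As a rough illustration of the idea, the sketch below scores each candidate view by the entropy of the voxels it would observe and picks the maximizer. All names here, including the `cast_rays` helper and the map layout, are hypothetical stand-ins, not the package's actual API.

```python
import numpy as np

def voxel_entropy(p):
    """Shannon entropy of a voxel's occupancy probability p."""
    p = np.clip(p, 1e-6, 1.0 - 1e-6)
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def information_gain(view, occupancy, cast_rays):
    """Sum the entropy of the voxels visible from a candidate view.
    `cast_rays` (assumed helper) returns the indices of voxels traversed
    by the sensor's rays before they hit an occupied voxel."""
    return sum(voxel_entropy(occupancy[v]) for v in cast_rays(view, occupancy))

def next_best_view(candidates, occupancy, cast_rays):
    """Choose the candidate view with maximal expected information gain."""
    return max(candidates,
               key=lambda v: information_gain(v, occupancy, cast_rays))
```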

Download the code from GitHub.

Check out a video of the system in action on YouTube.


References
ICRA2016_Isler

S. Isler, R. Sabzevari, J. Delmerico, D. Scaramuzza

An Information Gain Formulation for Active Volumetric 3D Reconstruction

IEEE International Conference on Robotics and Automation (ICRA), Stockholm, 2016.

PDF YouTube Software


Fisheye and Catadioptric Synthetic Datasets for Visual Odometry


src="http://www.youtube.com/embed/6KXBoprGaR0" allowfullscreen="" frameborder="0" height="225" width="300">

We provide two synthetic scenes (a vehicle moving in a city, and a flying robot hovering in a confined room). For each scene, three different optics were used (perspective, fisheye, and catadioptric) with the same sensor, keeping the image resolution constant. The datasets were generated in Blender using a custom omnidirectional camera model, which we release as an open-source patch for Blender.
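For intuition about what the non-perspective optics do, here is a minimal sketch of one common fisheye mapping, the equidistant model (r = f·θ). The actual camera model in the Blender patch is the custom omnidirectional model described on the research page, so treat this as illustrative only.

```python
import numpy as np

def project_equidistant(point_cam, f, cx, cy):
    """Project a 3D point in the camera frame with the equidistant
    fisheye model: image radius r = f * theta, where theta is the
    angle between the ray and the optical axis."""
    x, y, z = point_cam
    theta = np.arctan2(np.hypot(x, y), z)  # angle from optical axis
    phi = np.arctan2(y, x)                 # azimuth around the axis
    r = f * theta                          # equidistant mapping
    return cx + r * np.cos(phi), cy + r * np.sin(phi)
```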

Download the datasets from here.


References
ICRA16_Zhang

Z. Zhang, H. Rebecq, C. Forster, D. Scaramuzza

Benefit of Large Field-of-View Cameras for Visual Odometry

IEEE International Conference on Robotics and Automation (ICRA), Stockholm, 2016.

PDF YouTube Research page (datasets and software)



Indoor Dataset of Quadrotor with Down-Looking Camera


This dataset contains recordings of the raw images, IMU measurements, and ground-truth poses of a quadrotor flying a circular trajectory in an office-sized environment.

Download dataset


REMODE: Real-time, Probabilistic, Monocular, Dense Reconstruction


src="http://www.youtube.com/embed/QTKd5UWCG0Q" allowfullscreen="" frameborder="0" height="225" width="300">

REMODE is a novel method to estimate dense and accurate depth maps from a single moving camera. A probabilistic depth measurement is carried out in real time on a per-pixel basis, and the computed uncertainty is used to reject erroneous estimations and provide live feedback on the reconstruction progress. REMODE uses a novel approach to depth map computation that combines Bayesian estimation with recent developments in convex optimization for image processing. In the reference paper below, we demonstrate that our method outperforms state-of-the-art techniques in terms of accuracy, while exhibiting high efficiency in memory usage and computing power. Our CUDA-based implementation runs at 50 Hz on a laptop computer and is released as open-source software (code here).
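The per-pixel recursion can be pictured with the toy update below, which fuses each new depth measurement (with a variance derived from the triangulation geometry) into a running Gaussian estimate. REMODE's actual filter additionally models an outlier (uniform) component, so this is only the Gaussian part of the story.

```python
def update_depth_filter(mu, sigma2, z, tau2):
    """Fuse a new per-pixel depth measurement z (variance tau2) into the
    running Gaussian estimate (mu, sigma2): product of two Gaussians."""
    s2 = 1.0 / (1.0 / sigma2 + 1.0 / tau2)
    m = s2 * (mu / sigma2 + z / tau2)
    return m, s2

def is_converged(sigma2, threshold=1e-4):
    """A pixel's depth is accepted once its uncertainty is small enough;
    the threshold here is an arbitrary placeholder."""
    return sigma2 < threshold
```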

Download the code from GitHub.


References
ICRA2014_Pizzoli

M. Pizzoli, C. Forster, D. Scaramuzza

REMODE: Probabilistic, Monocular Dense Reconstruction in Real Time

IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, 2014.

PDF YouTube Software


SVO: Semi-direct Visual Odometry


src="http://www.youtube.com/embed/2YnIMfw6bJY" allowfullscreen="" frameborder="0" height="225" width="300">

SVO is a semi-direct, monocular visual odometry algorithm that is precise, robust, and faster than current state-of-the-art methods. The semi-direct approach eliminates the need for costly feature extraction and robust matching techniques for motion estimation. SVO operates directly on pixel intensities, which results in subpixel precision at high frame rates. A probabilistic mapping method that explicitly models outlier and depth uncertainty is used to estimate 3D points, which results in fewer outliers and more reliable points. Precise, high-frame-rate motion estimation brings increased robustness in scenes with little, repetitive, or high-frequency texture. The algorithm is applied to micro-aerial-vehicle state estimation in GPS-denied environments: it runs at 55 frames per second on the onboard embedded computer, at more than 400 frames per second on an i7 consumer laptop, and at more than 70 frames per second on a smartphone-class computer (e.g., Odroid or Samsung Galaxy phones).
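The "direct" part of the pipeline can be sketched as follows: motion is found by minimizing photometric residuals of a sparse set of tracked 3D points, rather than by matching descriptors. Here `project` and `intensities_at` are assumed helpers, and a real implementation would minimize over small patches with Gauss-Newton on SE(3).

```python
import numpy as np

def photometric_residuals(T_cur_ref, points_ref, img_ref, img_cur,
                          project, intensities_at):
    """Intensity differences between reference pixels and their
    reprojections in the current image; sparse image alignment
    minimizes the sum of squares of these residuals over T_cur_ref."""
    uv_ref = project(np.eye(4), points_ref)  # pixels in the reference frame
    uv_cur = project(T_cur_ref, points_ref)  # reprojected into current frame
    return intensities_at(img_cur, uv_cur) - intensities_at(img_ref, uv_ref)
```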

Download the code from GitHub.


Reference
ICRA2014_Forster

C. Forster, M. Pizzoli, D. Scaramuzza

SVO: Fast Semi-Direct Monocular Visual Odometry

IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, 2014.

PDF YouTube Software


ROS Driver and Calibration Tool for the Dynamic Vision Sensor (DVS)


The RPG DVS ROS package allows using the Dynamic Vision Sensor (DVS) within the Robot Operating System (ROS). It also contains a calibration tool for intrinsic and stereo calibration using a blinking pattern.

The code with instructions on how to use it is hosted on GitHub.
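A minimal rospy listener might look like the sketch below; the `/dvs/events` topic name and the `dvs_msgs/EventArray` layout follow the package's conventions, but should be checked against your installed version.

```python
#!/usr/bin/env python
import rospy
from dvs_msgs.msg import EventArray  # message type provided by the package

def on_events(msg):
    # Each event carries pixel coordinates, a timestamp, and a polarity.
    on = sum(1 for e in msg.events if e.polarity)
    rospy.loginfo("%d events (%d ON, %d OFF)",
                  len(msg.events), on, len(msg.events) - on)

if __name__ == "__main__":
    rospy.init_node("dvs_event_listener")
    rospy.Subscriber("/dvs/events", EventArray, on_events)
    rospy.spin()
```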

Authors: Elias Mueggler, Basil Huber, Luca Longinotti, Tobi Delbruck

References

E. Mueggler, B. Huber, D. Scaramuzza. Event-based, 6-DOF Pose Tracking for High-Speed Maneuvers. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Chicago, 2014. [ PDF ]

A. Censi, J. Strubel, C. Brandli, T. Delbruck, D. Scaramuzza. Low-latency Localization by Active LED Markers Tracking using a Dynamic Vision Sensor. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Tokyo, 2013. [ PDF ]

P. Lichtsteiner, C. Posch, T. Delbruck. A 128x128 120dB 15us Latency Asynchronous Temporal Contrast Vision Sensor. IEEE Journal of Solid-State Circuits, 43(2):566-576, Feb. 2008. [ PDF ]



A Monocular Pose Estimation System based on Infrared LEDs


src="http://www.youtube.com/embed/8Ui3MoOxcPQ" allowfullscreen="" frameborder="0" height="225" width="300">

Mutual localization is a fundamental component for multi-robot missions. Our monocular pose estimation system consists of multiple infrared LEDs and a camera with an infrared-pass filter. The LEDs are attached to the robot that we want to track, while the observing robot is equipped with the camera.

The code with instructions on how to use it is hosted on GitHub.


Reference

M. Faessler, E. Mueggler, K. Schwabe, D. Scaramuzza. A Monocular Pose Estimation System based on Infrared LEDs. IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, 2014. [ PDF ]



Torque Control of a KUKA youBot Arm


src="http://www.youtube.com/embed/OMZ1XVXErKY" allowfullscreen="" frameborder="0" height="225" width="300">

Existing control schemes for the KUKA youBot arm, such as directly controlling joint positions or velocities, are not suited for close tracking of end effector trajectories. A torque controller, based on the dynamical model of the youBot arm, was implemented to overcome this limitation. Complementary to the controller, a framework to automatically generate trajectories was developed.
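The standard computed-torque law on which such a controller is built looks like the sketch below; `M`, `C`, and `g` stand in for the arm's identified inertia, Coriolis/centrifugal, and gravity terms, and the gains are placeholders rather than the thesis' actual values.

```python
import numpy as np

def computed_torque(q, dq, q_des, dq_des, ddq_des, M, C, g, Kp, Kd):
    """Computed-torque (inverse dynamics) control:
    tau = M(q) (ddq_des + Kd (dq_des - dq) + Kp (q_des - q)) + C(q, dq) + g(q).
    M, C, g are assumed callables from the arm's dynamical model."""
    e, de = q_des - q, dq_des - dq
    v = ddq_des + Kd @ de + Kp @ e  # PD-stabilized reference acceleration
    return M(q) @ v + C(q, dq) + g(q)
```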

The code with instructions on how to use it is hosted on GitHub. Details are provided in the Master Thesis of Benjamin Keiser.

Authors: Benjamin Keiser, Matthias Faessler, Elias Mueggler

Reference

B. Keiser, E. Mueggler, M. Faessler, D. Scaramuzza. Torque Control of a KUKA youBot Arm. Master Thesis, University of Zurich, September 2013. [ PDF ]



Dataset: Air-Ground Matching of Airborne images with Google Street View data


Matching airborne images to ground-level ones is a challenging problem: extreme changes in viewpoint and scale occur between the aerial Micro Aerial Vehicle (MAV) images and the ground-level images, in addition to the challenges already present in the ground visual search algorithms used in UGV applications, such as illumination changes, lens distortion, seasonal variation of the vegetation, and scene changes between the query and the database images.

Our dataset consists of image data captured with a small quadrocopter flying in the streets of Zurich (up to 15 meters above the ground) along a 2 km path, and includes: (1) aerial MAV images, (2) ground-level Google Street View images, (3) a ground-truth confusion matrix, and (4) GPS data (geotags) for every database image.

Download dataset.

Authors: Andras Majdik and Yves Albers-Schoenberg


Reference

A. L. Majdik, D. Verda, Y. Albers-Schoenberg, D. Scaramuzza. Air-ground Matching: Appearance-based GPS-denied Urban Localization of Micro Aerial Vehicles. Journal of Field Robotics, 2015. [ PDF ]

A. L. Majdik, D. Verda, Y. Albers-Schoenberg, D. Scaramuzza. Micro Air Vehicle Localization and Position Tracking from Textured 3D Cadastral Models. IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, 2014. [ PDF ]

A. Majdik, Y. Albers-Schoenberg, D. Scaramuzza. MAV Urban Localization from Google Street View Data. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013. [ PDF ] [ PPT ]


Perspective 3-Point (P3P) Algorithm


The Perspective-Three-Point (P3P) problem aims at determining the position and orientation of a camera in the world reference frame from three 2D-3D point correspondences. Most solutions attempt to first solve for the position of the points in the camera reference frame, and then compute the point-aligning transformation between the camera and the world frame. In contrast, this work proposes a novel closed-form solution to the P3P problem, which computes the aligning transformation directly in a single stage, without the intermediate derivation of the points in the camera frame. This is made possible by introducing intermediate camera and world reference frames, and expressing their relative position and orientation using only two parameters. The projection of a world point into the parametrized camera pose then leads to two conditions and finally a quartic equation for finding up to four solutions for the parameter pair. A subsequent back-substitution directly leads to the corresponding camera poses with respect to the world reference frame. Its superior computational efficiency makes it particularly suitable for the RANSAC outlier-rejection step, which is always recommended before applying PnP or non-linear optimization of the final solution.
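As an illustration of the intended use inside RANSAC, the sketch below draws minimal 3-point samples, calls a P3P solver, and keeps the pose with the most reprojection inliers; `solve_p3p` and `reproject` are assumed interfaces, not the released C/C++ API.

```python
import numpy as np

def ransac_p3p(pts_3d, pts_2d, solve_p3p, reproject, iters=200, thresh=2.0):
    """RANSAC with a P3P minimal solver. `solve_p3p` returns up to four
    candidate poses from three 2D-3D correspondences; `reproject` maps
    3D points to pixels under a pose (both assumed helpers)."""
    best_pose, best_inliers = None, 0
    for _ in range(iters):
        idx = np.random.choice(len(pts_3d), 3, replace=False)
        for pose in solve_p3p(pts_3d[idx], pts_2d[idx]):
            err = np.linalg.norm(reproject(pose, pts_3d) - pts_2d, axis=1)
            inliers = int((err < thresh).sum())
            if inliers > best_inliers:
                best_pose, best_inliers = pose, inliers
    return best_pose, best_inliers
```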

Download C/C++ code

Author: Laurent Kneip


Reference

L. Kneip, D. Scaramuzza, R. Siegwart. A Novel Parameterization of the Perspective-Three-Point Problem for a Direct Computation of Absolute Camera Position and Orientation. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, USA, 2011. [ PDF ]


OCamCalib: Omnidirectional Camera Calibration Toolbox for Matlab


An Omnidirectional Camera Calibration Toolbox for Matlab (Windows, macOS, and Linux) for catadioptric and fisheye cameras with a field of view of up to 195 degrees.
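The toolbox's calibration is built around a polynomial mapping between image radius and ray direction. The sketch below shows back-projection in that spirit, with the coefficient convention simplified and affine distortion terms omitted, so consult the toolbox documentation for the exact model.

```python
import numpy as np

def cam2world(u, v, poly_coeffs, center):
    """Back-project a pixel to a unit ray with a Scaramuzza-style
    polynomial model: ray = (u', v', f(rho)) with
    f(rho) = a0 + a1*rho + a2*rho^2 + ... and rho = |(u', v')|.
    poly_coeffs = [a0, a1, a2, ...] (simplified convention)."""
    up, vp = u - center[0], v - center[1]
    rho = np.hypot(up, vp)
    z = np.polyval(poly_coeffs[::-1], rho)  # evaluate a0 + a1*rho + ...
    ray = np.array([up, vp, z])
    return ray / np.linalg.norm(ray)
```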

Code, tutorials, and datasets can be found here.

Author: Davide Scaramuzza


Reference

D. Scaramuzza, A. Martinelli, R. Siegwart. A Toolbox for Easily Calibrating Omnidirectional Cameras. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Beijing, China, October 2006. [ PDF ]

D. Scaramuzza, A. Martinelli, R. Siegwart. A Flexible Technique for Accurate Omnidirectional Camera Calibration and Structure from Motion. IEEE International Conference on Computer Vision Systems (ICVS), New York, USA, January 2006. [ PDF ]


ADAS design is an important component of modern automotive safety technology: by sensing the driver's behavior and the driving environment as comprehensively as possible, it improves the driver's awareness of the driving environment and raises overall driving safety. The design of an ADAS must satisfy a series of standards, including the following:

1. Safety standards: an ADAS exists to improve driving safety, so safety standards are the most important part of the design process. They focus on the system's stability, reliability, and availability, and require that the vehicle maintain a degree of safety even under fault conditions.

2. Performance standards: an ADAS should reflect the driver's current driving state as precisely as possible, so performance standards focus on the system's reaction speed and accuracy, and additionally require a degree of adaptivity and learning capability.

3. Data security standards: the data an ADAS collects is critical and must be covered by privacy protections. Data security standards govern the entire pipeline of data collection, transmission, processing, and storage, ensuring the data cannot be leaked, tampered with, or misused.

4. Multi-channel standards: multi-channel data is a key technique for improving an ADAS's reliability and accuracy. These standards require the system to support multi-stream data processing, multi-source data fusion, and multi-level task scheduling while preserving real-time behavior and accuracy.

5. Hardware standards: hardware standards ensure the system's reliability and durability. They require high-quality components and materials, and a design general and compatible enough to fit different vehicle models and scenarios.

In short, the maturity of ADAS design standards is an important driver of modern automotive safety technology, and standardized ADAS design is key to making such systems reliable and stable. As technology advances, ADAS design standards will continue to be updated and refined, providing a strong foundation for the automotive industry and the development of intelligent driving.