Lane Detection

from https://towardsdatascience.com/finding-lane-lines-simple-pipeline-for-lane-detection-d02b62e7572b

Identifying lanes on the road is a task that every human driver performs constantly: it keeps the vehicle within the constraints of its lane, and it is just as critical for an autonomous vehicle. A very simple lane detection pipeline can be built with basic computer vision techniques. This article describes such a pipeline using Python and OpenCV.

Note that this pipeline comes with its own limitations (described below) and can be improved. Improvements will be described in further articles.

Lane Detection Pipeline:

  1. Convert the original image to grayscale.
  2. Darken the grayscale image (this helps reduce contrast from discoloured patches of the road).
  3. Convert the original image to HLS colour space.
  4. Isolate yellow from HLS to get a yellow mask (for yellow lane markings).
  5. Isolate white from HLS to get a white mask (for white lane markings).
  6. Bitwise-OR the yellow and white masks to get a combined mask.
  7. Bitwise-AND the combined mask with the darkened image.
  8. Apply a slight Gaussian blur.
  9. Apply the Canny edge detector (adjust the thresholds by trial and error) to get edges.
  10. Define a region of interest. This helps weed out unwanted edges detected by the Canny edge detector.
  11. Retrieve Hough lines.
  12. Consolidate and extrapolate the Hough lines and draw them on the original image.

Original Test Images


Convert to grayscale

Converting the original image to grayscale has its benefits. We have to find yellow and white lane markings, and converting to grayscale increases the contrast of the lanes with respect to the road.

Original Images to Grayscale
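
A minimal sketch of this step, assuming the image is loaded in RGB channel order (e.g. with matplotlib.image.imread; use cv2.COLOR_BGR2GRAY instead for images read with cv2.imread):

    import cv2
    import matplotlib.image as mpimg

    image = mpimg.imread("test_image.jpg")          # hypothetical test image, RGB order
    gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)  # single-channel grayscale image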

Darken the grayscale image

This is done with the intent of reducing the contrast of discoloured patches of the road.

Darkened grayscale images
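
One way to darken the grayscale image is to scale every pixel by a factor below 1; the factor 0.8 below is an assumed value, not one taken from the article:

    import cv2

    # Multiply each pixel by 0.8 and add 0; results are clipped to the uint8 range.
    darkened = cv2.convertScaleAbs(gray, alpha=0.8, beta=0.0)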

Convert original image to HLS colour space

The original images are in RGB, but it is worth exploring other colour spaces such as HSV and HLS. When viewed side by side, it is easy to see that the HLS colour space gives better colour contrast against the road. This may help with colour selection and, in turn, lane detection.

RGB vs HSV vs HLS
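
Continuing the earlier sketch (and again assuming the source image is in RGB order), the conversion is a one-liner:

    import cv2

    hls = cv2.cvtColor(image, cv2.COLOR_RGB2HLS)  # channels are ordered H, L, S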

Colour Selection

Here we use OpenCV’s inRange to get a mask of pixels that fall between threshold values. The threshold ranges are found by trial and error.

For yellow mask:

  1. Hue values between 10 and 40 were used.
  2. A higher saturation range (100–255) was used to avoid picking up yellow from the hills.

For white mask:

  1. A higher lightness range (200–255) was used for the white mask.

We bitwise-OR both masks to get the combined mask.

The images below show the combined mask bitwise-ANDed with the darkened image.
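
A sketch of the colour selection, continuing the earlier snippets and using the hue/saturation/lightness ranges quoted above; the remaining channel bounds (0 and 255) are assumptions, and note that OpenCV stores 8-bit hue in the range 0–179:

    import cv2
    import numpy as np

    # Yellow: hue 10-40, saturation 100-255 (other bounds assumed to span the full range).
    yellow_mask = cv2.inRange(hls,
                              np.array([10, 0, 100], dtype=np.uint8),
                              np.array([40, 255, 255], dtype=np.uint8))

    # White: lightness 200-255 (hue and saturation assumed unconstrained).
    white_mask = cv2.inRange(hls,
                             np.array([0, 200, 0], dtype=np.uint8),
                             np.array([179, 255, 255], dtype=np.uint8))

    combined_mask = cv2.bitwise_or(yellow_mask, white_mask)
    # Keep only the masked pixels of the darkened grayscale image.
    masked = cv2.bitwise_and(darkened, darkened, mask=combined_mask)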

Gaussian Blur

Gaussian blur (Gaussian smoothing) is a pre-processing step used to reduce image noise (or to smooth the image). We use it so that the later edge detection discards many weak edges and keeps only the most prominent ones.

OpenCV’s GaussianBlur expects an odd kernel size for blurring. After trying a few values, we used 7.

Gaussian Blur
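
With the kernel size of 7 mentioned above (sigma left at 0 so OpenCV derives it from the kernel size), the step looks like this:

    import cv2

    blurred = cv2.GaussianBlur(masked, (7, 7), 0)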

Apply Canny Edge Detection

Now we apply Canny edge detection to the Gaussian-blurred images. Canny edge detection is an algorithm that detects edges based on gradient changes. Note that although the first step of Canny edge detection is image smoothing with a default kernel size of 5, we still applied an explicit Gaussian blur in the previous step. The other steps in Canny edge detection include:

  • Finding Intensity Gradient of the Image
  • Non-maximum Suppression
  • Hysteresis Thresholding

This link provides a very good explanation of Canny edge detection.
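
A sketch of this step; the low/high thresholds of 50 and 150 are assumed values of the kind one would tune by trial and error, not figures from the article:

    import cv2

    # Pixels with gradient above 150 become strong edges; those between 50 and 150
    # are kept only if connected to a strong edge (hysteresis thresholding).
    edges = cv2.Canny(blurred, 50, 150)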

Select Region of Interest

Even after applying Canny edge detection, many detected edges are still not lanes. The region of interest is a polygon that defines the area of the image from which we keep edges.

Note that the coordinate origin of the image is its top-left corner: row coordinates increase from top to bottom, and column coordinates increase from left to right.

The assumption here is that the camera stays in a fixed position and the lanes are flat, so that we can “guess” the region of interest.

Region of Interest applied to Canny Edge Detected Images.
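
A sketch of the region-of-interest masking; the trapezoid vertices below are illustrative guesses that depend on the camera mounting, not values from the article:

    import cv2
    import numpy as np

    def region_of_interest(edge_img, vertices):
        """Keep only the edges that fall inside the given polygon."""
        mask = np.zeros_like(edge_img)
        cv2.fillPoly(mask, vertices, 255)
        return cv2.bitwise_and(edge_img, mask)

    rows, cols = edges.shape[:2]
    # Trapezoid roughly covering the lane ahead (hypothetical proportions).
    vertices = np.array([[(int(cols * 0.10), rows),
                          (int(cols * 0.45), int(rows * 0.60)),
                          (int(cols * 0.55), int(rows * 0.60)),
                          (int(cols * 0.95), rows)]], dtype=np.int32)
    roi_edges = region_of_interest(edges, vertices)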

Hough Transform Line Detection

The Hough transform is a technique that finds lines by identifying all the points that lie on them. It does this by representing a line as a single point in parameter space, while each image point becomes a line (Cartesian parameterisation) or a sinusoid (polar parameterisation) in that space. If multiple such lines/sinusoids pass through the same point in parameter space, we can deduce that the corresponding image points lie on the same line.

A line in polar coordinates; each sinusoid represents a point, and their intersection represents the line.

More information can be found here.

The Hough lines found in the region-of-interest images are then drawn on the original images.

Hough Lines
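
A sketch using OpenCV’s probabilistic Hough transform; all of the parameter values below are assumptions that would need tuning:

    import cv2
    import numpy as np

    lines = cv2.HoughLinesP(roi_edges,
                            rho=1,               # distance resolution in pixels
                            theta=np.pi / 180,   # angular resolution in radians
                            threshold=20,        # minimum number of votes
                            minLineLength=20,
                            maxLineGap=300)

    line_img = image.copy()
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(line_img, (int(x1), int(y1)), (int(x2), int(y2)), (255, 0, 0), 2)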

Consolidation and extrapolation of the Hough lines

We need to trace the complete lane markings. For this, the first thing we need is to distinguish the left lane from the right lane. There is an easy way of identifying them:

  • Left lane: as the column coordinate increases, the row coordinate decreases, so the gradient (slope) must be negative.
  • Right lane: as the column coordinate increases, the row coordinate increases, so the gradient must be positive.
  • We ignore vertical lines.

After identifying the left-lane and right-lane Hough lines, we consolidate and extrapolate them (see the sketch after this list). There are two things we do:

  1. Many line segments are detected for each lane, so we average them.
  2. Some lines are only partially detected, so we extrapolate them.
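
A minimal sketch of this consolidation, assuming `lines` is the (non-empty) output of cv2.HoughLinesP; the 0.6 horizon fraction is an assumed choice matching the region of interest above:

    import numpy as np

    def average_lane_lines(lines, img_height):
        """Split segments by slope sign, average slope/intercept per side,
        and extrapolate each lane from the image bottom up to a horizon line."""
        left_fits, right_fits = [], []
        for x1, y1, x2, y2 in lines[:, 0]:
            if x1 == x2:
                continue  # ignore vertical segments
            slope = (y2 - y1) / (x2 - x1)
            intercept = y1 - slope * x1
            (left_fits if slope < 0 else right_fits).append((slope, intercept))

        def make_points(fits):
            if not fits:
                return None
            slope, intercept = np.mean(fits, axis=0)
            y_bottom, y_top = img_height, int(img_height * 0.6)  # assumed horizon
            x_bottom = int((y_bottom - intercept) / slope)
            x_top = int((y_top - intercept) / slope)
            return (x_bottom, y_bottom, x_top, y_top)

        return make_points(left_fits), make_points(right_fits)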

Applying the Pipeline to Videos

Now let’s apply this pipeline to videos.
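
A sketch of running the pipeline frame by frame with OpenCV’s VideoCapture/VideoWriter; the file names and the `lane_detection_pipeline` function are hypothetical placeholders standing in for the steps described above:

    import cv2

    cap = cv2.VideoCapture("road_clip.mp4")       # hypothetical input clip
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = None

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Note: frames from VideoCapture are BGR; convert with cv2.cvtColor
        # if the pipeline expects RGB input.
        annotated = lane_detection_pipeline(frame)
        if writer is None:
            h, w = annotated.shape[:2]
            writer = cv2.VideoWriter("road_clip_annotated.mp4", fourcc, 25.0, (w, h))
        writer.write(annotated)

    cap.release()
    if writer is not None:
        writer.release()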

The pipeline works quite well for straight lanes.

 

But this doesn’t work well for curved lanes.

 

Shortcomings

  • Hough lines based on straight lines do not work well for curved roads/lanes.
  • Getting the hyperparameters right takes a lot of trial and error. The region of interest also assumes that the camera stays in the same location and that the lanes are flat, so there is guesswork or hard-coding involved in choosing the polygon vertices.
  • In general, many roads do not have lane markings, and this approach will not work there.

Future Improvements

  • Instead of straight lines, it would be beneficial to fit higher-degree curves, which would work better on curved roads.
  • Even when using information from previous frames, simply averaging lanes might not be a good strategy; perhaps a weighted average or some form of priority value would work better.

The code is available on GitHub.

# Author: Mohamed Aly <malaa@caltech.edu>
# Date: 10/7/2010

============================================================================
REAL TIME LANE DETECTOR SOFTWARE
============================================================================
This package contains source code and dataset that implements the work in
the paper [1].

=========
Contents
=========
src/: contains the C/C++ source files
|_ CameraInfo.conf: contains the camera calibration info
|_ CameraInfoOpt.*: contain gengetopt files for parsing the camera info files
|_ cmdline.*: contains gengetopt files for parsing command lines
|_ InversePerspectiveMapping.*: code for obtaining the IPM of an image
|_ LaneDetector.*: code for the bulk of the algorithm, including Hough
   Transforms, Spline fitting, Post processing, ...
|_ Lanes.conf: the typical configuration file for lane detection
|_ main.*: code for the main binary
|_ Makefile: the Make file
|_ mcv.*: contain utility functions
|_ ranker.h: code for obtaining the median of a vector
|_ run.sh: Shell script for running the detector on the four clips in the
   Caltech Lanes Dataset

matlab/: contains the Matlab source files
|_ ccvCheckMergeSplines.m: checks if two splines are matching
|_ ccvEvalBezSpline.m: returns points on a spline given its control points
|_ ccvGetLaneDetectionStats.m: computes stats from detections and ground truth
|_ ccvLabel.m: handles the ground truth labels
|_ ccvReadLaneDetectionResultsFile.m: reads a detection file output from the
   binary file LaneDetector32/64
|_ Stats.m: computes stats for the detections on the Caltech Lanes Dataset
   and its ground truth labels

==============
Prerequisites
==============
1. OpenCV 2.0 or higher http://sourceforge.net/projects/opencvlibrary/
3. (Optional) Gengetopt http://www.gnu.org/software/gengetopt/

===========
Compiling
===========
Unzip the archive somewhere, let's say ~/lane-detector:

  unzip lane-detector.zip -d ~/lane-detector
  cd ~/lane-detector/src
  make release

This will generate LaneDetector32 or LaneDetector64 depending on your system.

======================
Caltech Lanes Dataset
======================
To view the lane detector in action, you can download the Caltech Lanes
Dataset available at http://www.vision.caltech.edu/malaa/datasets/caltech-lanes

===========
Running
===========
To run the detector on the Caltech Lanes dataset, which might be in
~/caltech-lanes/

  cd ~/lane-detector/
  ln -s ~/caltech-lanes/ clips
  cd ~/lane-detector/src/
  bash run.sh

This will create the results files inside ~/caltech-lanes/*/list.txt_results.txt

To view the statistics of the results, open Matlab and run the file:

  cd ~/lane-detector/matlab/
  matlab&
  >> Stats

======================
Command line options
======================
LinePerceptor 1.0

Detects lanes in street images.

Usage: LinePerceptor [OPTIONS]... [FILES]...

  -h, --help                 Print help and exit
  -V, --version              Print version and exit

Basic options:
  --lanes-conf=STRING        Configuration file for lane detection
                             (default=`Lanes.conf')
  --stoplines-conf=STRING    Configuration file for stopline detection
                             (default=`StopLines.conf')
  --no-stoplines             Don't detect stop lines (default=on)
  --no-lanes                 Don't detect lanes (default=off)
  --camera-conf=STRING       Configuration file for the camera parameters
                             (default=`CameraInfo.conf')
  --list-file=STRING         Text file containing a list of images, one per line
  --list-path=STRING         Path where the image files are located; this is
                             just appended at the front of each line in
                             --list-file (default=`')
  --image-file=STRING        The path to an image

Debugging options:
  --wait=INT                 Number of milliseconds to show the detected lanes.
                             Put 0 for infinite, i.e. waits for keypress.
                             (default=`0')
  --show                     Show the detected lines (default=off)
  --step                     Step through each image (needs a keypress) or fall
                             through (waits for --wait msecs) (default=off)
  --show-lane-numbers        Show the lane numbers on the output image
                             (default=off)
  --output-suffix=STRING     Suffix of images and results (default=`_results')
  --save-images              Export all images with detected lanes by appending
                             --output-suffix + '.png' to each input image
                             (default=off)
  --save-lanes               Export all detected lanes to a text file by
                             appending --output-suffix + '.txt' to --list-file
                             (default=off)
  --debug                    Show debugging information and images (default=off)

===========
References
===========
[1] Mohamed Aly, Real time Detection of Lane Markers in Urban Streets, IEEE
    Intelligent Vehicles Symposium, Eindhoven, The Netherlands, June 2008.