For a self-driving car to travel safely on public roads, it must obey the same traffic rules as any other vehicle, and the most basic of these is to stay inside its lane, following the lane lines. The car records images of the road ahead with a camera, then runs those images through a processing pipeline that marks the lane ahead and constrains where the car may drive. The earlier chapter "Finding Lane Lines" marked lane lines with Canny edge detection and the Hough transform; the weakness of that approach is that it can only mark straight lines and cannot handle curved roads. This chapter presents a more robust lane-marking method.
The lane-marking method in this chapter can be summarized in the following steps:
- Remove image distortion
- Extract the lane lines with the Sobel operator
- Transform the image to a bird's-eye view
- Fit polynomials to the detected lane lines and compute the lane curvature to obtain smooth lanes
- Apply the inverse transform to the smoothed lanes to get the marked lane
The sections below describe each step and give its source code and output.
Removing image distortion
A camera projects a 3D scene onto a 2D plane, and this projection introduces distortion. If we extracted lanes from distorted images directly, a straight lane might be taken for a curved one and we would compute a wrong curvature, so the first step is to remove the distortion the camera introduces. A chessboard pattern is high-contrast black and white and therefore easy to detect, and the OpenCV library has built-in chessboard-processing functions that can be used directly for removing image distortion.
The principle is: given a set of chessboard photos, exploit the board's high contrast to find the inner corners, producing a list of detected corner points; then construct a second list of ideal points of the same size. Comparing the detected-corner list against the constructed list yields the camera's distortion parameters, which are then used to undistort images.
Finding the chessboard corners
```python
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt

# Prepare object points, like (0,0,0), (1,0,0), (2,0,0) ...., (8,5,0)
objp = np.zeros((6*9, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

# Arrays to store object points and image points from all the images.
objpoints = []  # 3d points in real world space
imgpoints = []  # 2d points in image plane.

# Make a list of calibration images
images = sorted(glob.glob('camera_cal/cali*.jpg'))

# Step through the list and search for chessboard corners
for idx, fname in enumerate(images):
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the chessboard corners
    ret, corners = cv2.findChessboardCorners(gray, (9, 6), None)
    # If found, add object points, image points
    if ret == True:
        objpoints.append(objp)
        imgpoints.append(corners)
        # Draw and save the corners
        cv2.drawChessboardCorners(img, (9, 6), corners, ret)
        write_name = 'camera_cal/corners_found' + str(idx) + '.jpg'
        cv2.imwrite(write_name, img)

img = cv2.imread('camera_cal/calibration3.jpg')
dst = cv2.imread('camera_cal/corners_found4.jpg')
# Convert from OpenCV's BGR order to RGB for matplotlib display
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
dst = cv2.cvtColor(dst, cv2.COLOR_BGR2RGB)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 10))
ax1.imshow(img)
ax1.set_title('Original Image', fontsize=30)
ax2.imshow(dst)
ax2.set_title('Found Corners Image', fontsize=30)
```
Undistorting the image
```python
import pickle
%matplotlib inline

# Test undistortion on an image
img = cv2.imread('camera_cal/calibration1.jpg')

def undistort_image(img, objpoints, imgpoints):
    # Do camera calibration given object points and image points
    img_size = (img.shape[1], img.shape[0])
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img_size, None, None)
    dst = cv2.undistort(img, mtx, dist, None, mtx)
    return mtx, dist, dst

mtx, dist, dst = undistort_image(img, objpoints, imgpoints)
cv2.imwrite('camera_cal/test_undist.jpg', dst)

# Save the camera calibration result for later use (we won't worry about rvecs / tvecs)
dist_pickle = {}
dist_pickle["mtx"] = mtx
dist_pickle["dist"] = dist
pickle.dump(dist_pickle, open("wide_dist_pickle.p", "wb"))

# Visualize undistortion (convert BGR -> RGB for matplotlib display)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
dst = cv2.cvtColor(dst, cv2.COLOR_BGR2RGB)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 10))
```