Project Log: Lane Line Detection Based on ROS

2020.09.06

Task: porting the traditional OpenCV method to ROS

I chose Chen Guang's advanced lane line detection method as the basis for the port. The overall logic:
First, create a subscriber that receives the data published by the camera and uses cv_bridge to convert the ROS image messages into OpenCV-format data.
Second, create a publisher that publishes the detection results as a specific message type, as sketched below.
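A minimal sketch of that skeleton (the node name, topic names, and the idea of republishing the annotated frame are placeholders here, not the ones used in the full node below):

import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()
pub = None

def on_image(msg):
    frame = bridge.imgmsg_to_cv2(msg, "bgr8")         # ROS image message -> OpenCV BGR image
    # ... run the lane detection pipeline on `frame` ...
    pub.publish(bridge.cv2_to_imgmsg(frame, "bgr8"))  # OpenCV image -> ROS image message

if __name__ == '__main__':
    rospy.init_node('lane_detector')                                      # placeholder node name
    pub = rospy.Publisher('/lane_detection/result', Image, queue_size=1)  # placeholder topic
    rospy.Subscriber('/camera/image_raw', Image, on_image)                # placeholder topic
    rospy.spin()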
Difficulties of the traditional method
(1) It is highly sensitive to lighting, brightness changes, and worn lane markings.
(2) When turning at an intersection the camera cannot see a lane line ahead, so the green overlay region jumps around unpredictably.
(3) The program keeps exiting with the np.polyfit error "expected non-empty vector for x".
Planned improvements
(1) When no lane line is detected (e.g. at an intersection), return from the callback immediately and let spin() wait for the next frame.
(2) Before fitting a quadratic with np.polyfit(), check that the input vectors are non-empty (see the sketch after this list).
(3) Overlay the FPS and the vehicle's distance from the lane center on the image.
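For item (2), a minimal sketch of the guard (safe_polyfit is a hypothetical helper; the actual check ends up inside fit_polynomial below):

import numpy as np

def safe_polyfit(ys, xs, deg=2):
    # np.polyfit raises TypeError: "expected non-empty vector for x" on empty input,
    # which is exactly the error that kills the node at intersections
    if len(xs) == 0 or len(ys) == 0:
        return None  # caller should skip this frame and wait for the next one
    return np.polyfit(ys, xs, deg)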

Problem record

#!/usr/bin/env python
# coding=utf-8
import os
import cv2
import matplotlib.pyplot as plt
import numpy as np
from moviepy.editor import VideoFileClip
import glob
import time
import math
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError

#################################################################
# Step 3 : Warp image based on src_points and dst_points (perspective transform)
#################################################################
# The type of src_points & dst_points should be like
# np.float32([ [0,0], [100,200], [200, 300], [300,400]])
def warpImage(image, src_points, dst_points):
    image_size = (image.shape[1], image.shape[0])
    M = cv2.getPerspectiveTransform(src_points, dst_points)  # projection matrix from src_points to dst_points
    Minv = cv2.getPerspectiveTransform(dst_points, src_points)  # inverse projection matrix
    # flags=cv2.INTER_LINEAR: fill pixels far from the camera by linear interpolation
    warped_image = cv2.warpPerspective(image, M, image_size, flags=cv2.INTER_LINEAR)
    return warped_image, M, Minv
#################################################################
# Step 4 : Extract lane lines
#################################################################
def hlsLSelect(img, thresh=(220, 255)):  # threshold the L (lightness) channel of HLS to extract white lane lines
    hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)  # hls.shape = (1080, 1920, 3)
    l_channel = hls[:,:,1]  # l_channel.shape = (1080, 1920)
    # rescale the channel so its maximum maps to 255
    l_channel = 255*(l_channel/np.max(l_channel))
    binary_output = np.zeros_like(l_channel)  # empty (black) image
    binary_output[(l_channel > thresh[0]) & (l_channel <= thresh[1])] = 1  # light up the pixels inside the threshold range
    ###cv2.imshow("whitelane",binary_output)  
    ###cv2.waitKey(0) 
    return binary_output
def labBSelect(img, thresh=(195, 255)):  # threshold the b channel of the Lab image to extract yellow lane lines
    # 1) Convert to LAB color space
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2Lab)
    lab_b = lab[:,:,2]
    # don't normalize if there are no yellows in the image
    if np.max(lab_b) > 100:  # observed value here: 162
        lab_b = 255*(lab_b/np.max(lab_b))  # rescale so the maximum maps to 255
    # 2) Apply a threshold to the b channel
    #print(lab_b)
    binary_output = np.zeros_like(lab_b)
    binary_output[((lab_b > thresh[0]) & (lab_b <= thresh[1]))] = 1
    ###cv2.imshow("yellolane",binary_output)
    ###cv2.waitKey(0)
    # 3) Return a binary image of threshold result
    return binary_output
#################################################################
# Step 5 : Detect lane lines through moving windows (sliding-window search)
#################################################################
def find_lane_pixels(binary_warped, nwindows, margin, minpix):
    # Create an output image to draw on and visualize the result
    out_img = np.dstack((binary_warped, binary_warped, binary_warped))
    # Take a histogram of the bottom half of the image: the per-column sum of white pixels
    # (shape[0] = 1080, // is integer division, and the image origin is the top-left corner)
    histogram = np.sum(binary_warped[binary_warped.shape[0]//2:,:], axis=0)
    # Find the peak of the left and right halves of the histogram
    # These will be the starting point for the left and right lines
    midpoint = np.int(histogram.shape[0]//2)  # midpoint = 960
    leftx_base = np.argmax(histogram[:midpoint])  # left-line start: column with the largest sum in [0, 960)
    rightx_base = np.argmax(histogram[midpoint:]) + midpoint  # right-line start

    # Set height of windows - based on nwindows above and image shape
    window_height = np.int(binary_warped.shape[0]//nwindows)
    # Identify the x and y positions of all nonzero pixels in the image
    nonzero = binary_warped.nonzero()
    nonzeroy = np.array(nonzero[0])
    nonzerox = np.array(nonzero[1])
    # Current positions to be updated later for each window in nwindows
    leftx_current = leftx_base
    rightx_current = rightx_base

    # Create empty lists to receive left and right lane pixel indices
    left_lane_inds = []
    right_lane_inds = []

    # Step through the windows one by one
    for window in range(nwindows):
        # Identify window boundaries in x and y (and right and left)
        win_y_low = binary_warped.shape[0] - (window+1)*window_height
        win_y_high = binary_warped.shape[0] - window*window_height
        win_xleft_low = leftx_current - margin  # margin is half the sliding-window width
        win_xleft_high = leftx_current + margin
        win_xright_low = rightx_current - margin
        win_xright_high = rightx_current + margin
        
        # Draw the windows on the visualization image
        cv2.rectangle(out_img,(win_xleft_low,win_y_low),
        (win_xleft_high,win_y_high),(0,255,0), 2) 
        cv2.rectangle(out_img,(win_xright_low,win_y_low),
        (win_xright_high,win_y_high),(0,255,0), 2) 
        #cv2.imshow("out_img", out_img)
        #cv2.waitKey(0)
        
        # Identify the nonzero pixels in x and y within the window 
        good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & 
        (nonzerox >= win_xleft_low) &  (nonzerox < win_xleft_high)).nonzero()[0]
        good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & 
        (nonzerox >= win_xright_low) &  (nonzerox < win_xright_high)).nonzero()[0]
        #print(good_left_inds)
        #print(good_right_inds)
        # Append these indices to the lists
        left_lane_inds.append(good_left_inds)
        right_lane_inds.append(good_right_inds)
        
        # If you found > minpix pixels, recenter next window on their mean position
        if len(good_left_inds) > minpix:
            leftx_current = np.int(np.mean(nonzerox[good_left_inds]))
        if len(good_right_inds) > minpix:        
            rightx_current = np.int(np.mean(nonzerox[good_right_inds]))

    # Concatenate the arrays of indices (previously was a list of lists of pixels)
    try:
        left_lane_inds = np.concatenate(left_lane_inds)
        right_lane_inds = np.concatenate(right_lane_inds)
    except ValueError:
        # Avoids an error if the above is not implemented fully
        pass

    # Extract left and right line pixel positions
    leftx = nonzerox[left_lane_inds]
    lefty = nonzeroy[left_lane_inds] 
    rightx = nonzerox[right_lane_inds]
    righty = nonzeroy[right_lane_inds]

    return leftx, lefty, rightx, righty, out_img

def fit_polynomial(binary_warped, nwindows=9, margin=100, minpix=50):
    # Find our lane pixels first
    leftx, lefty, rightx, righty, out_img = find_lane_pixels(
        binary_warped, nwindows, margin, minpix)
    print('leftx:', leftx)
    print('lefty:', lefty)
    print('rightx:', rightx)
    print('righty:', righty)
    # Fit a second order polynomial to each using `np.polyfit`
    if len(leftx) == 0 or len(rightx) == 0:
        # No lane pixels found: return zero sentinels so the caller can skip this frame
        out_img = 0
        left_fit = np.array([0, 0, 0])
        right_fit = np.array([0, 0, 0])
        ploty = 0
        return out_img, left_fit, right_fit, ploty
    left_fit = np.polyfit(lefty, leftx, 2)
    right_fit = np.polyfit(righty, rightx, 2)

    # Generate x and y values for plotting
    ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] )
    #try:
    #    left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
    #    right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
    #except TypeError:
    #    # Avoids an error if `left` and `right_fit` are still none or incorrect
    #    print('The function failed to fit a line!')
    #    left_fitx = 1*ploty**2 + 1*ploty
    #    right_fitx = 1*ploty**2 + 1*ploty

    ## Visualization ##
    # Colors in the left and right lane regions
    out_img[lefty, leftx] = [255, 0, 0]
    out_img[righty, rightx] = [0, 0, 255]

    # Plots the left and right polynomials on the lane lines
    #plt.plot(left_fitx, ploty, color='yellow')
    #plt.plot(right_fitx, ploty, color='yellow')
    print('left shape:', left_fit.shape)
    print('left type:', type(left_fit))
    return out_img, left_fit, right_fit, ploty
#################################################################
# Step 6 : Track lane lines based on the latest lane line result
#################################################################
def fit_poly(img_shape, leftx, lefty, rightx, righty):
    # Fit a second order polynomial to each line with np.polyfit()
    left_fit = np.polyfit(lefty, leftx, 2)
    right_fit = np.polyfit(righty, rightx, 2)
    # Generate x and y values for plotting
    ploty = np.linspace(0, img_shape[0]-1, img_shape[0])
    ### TO-DO: Calc both polynomials using ploty, left_fit and right_fit ###
    left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
    right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
    
    return left_fitx, right_fitx, ploty, left_fit, right_fit

def search_around_poly(binary_warped, left_fit, right_fit):
    # HYPERPARAMETER: width of the margin around the previous polynomial to search
    margin = 60

    # Grab activated pixels
    nonzero = binary_warped.nonzero()
    nonzeroy = np.array(nonzero[0])
    nonzerox = np.array(nonzero[1])
    
    # Set the search area to within +/- margin of the previous polynomial
    left_lane_inds = ((nonzerox > (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy + 
                    left_fit[2] - margin)) & (nonzerox < (left_fit[0]*(nonzeroy**2) + 
                    left_fit[1]*nonzeroy + left_fit[2] + margin)))
    right_lane_inds = ((nonzerox > (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy + 
                    right_fit[2] - margin)) & (nonzerox < (right_fit[0]*(nonzeroy**2) + 
                    right_fit[1]*nonzeroy + right_fit[2] + margin)))
    
    # Again, extract left and right line pixel positions
    leftx = nonzerox[left_lane_inds]
    lefty = nonzeroy[left_lane_inds] 
    rightx = nonzerox[right_lane_inds]
    righty = nonzeroy[right_lane_inds]

    # Fit new polynomials
    left_fitx, right_fitx, ploty, left_fit, right_fit = fit_poly(binary_warped.shape, leftx, lefty, rightx, righty)
    
    ## Visualization ##
    # Create an image to draw on and an image to show the selection window
    out_img = np.dstack((binary_warped, binary_warped, binary_warped))*255
    window_img = np.zeros_like(out_img)
    # Color in left and right line pixels
    out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0]
    out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255]

    # Generate a polygon to illustrate the search window area
    # And recast the x and y points into usable format for cv2.fillPoly()
    left_line_window1 = np.array([np.transpose(np.vstack([left_fitx-margin, ploty]))])
    left_line_window2 = np.array([np.flipud(np.transpose(np.vstack([left_fitx+margin, 
                              ploty])))])
    left_line_pts = np.hstack((left_line_window1, left_line_window2))
    right_line_window1 = np.array([np.transpose(np.vstack([right_fitx-margin, ploty]))])
    right_line_window2 = np.array([np.flipud(np.transpose(np.vstack([right_fitx+margin, 
                              ploty])))])
    right_line_pts = np.hstack((right_line_window1, right_line_window2))

    # Draw the lane onto the warped blank image
    cv2.fillPoly(window_img, np.int_([left_line_pts]), (0,255, 0))
    cv2.fillPoly(window_img, np.int_([right_line_pts]), (0,255, 0))
    result = cv2.addWeighted(out_img, 1, window_img, 0.3, 0)
    
    # Plot the polynomial lines onto the image
    #plt.plot(left_fitx, ploty, color='yellow')
    #plt.plot(right_fitx, ploty, color='yellow')
    ## End visualization steps ##
    
    return result, left_fit, right_fit, ploty

#################################################################
# Step 7 : CalculateDistFromCenter
#################################################################
def CalculateDistFromCenter(binary_image, left_fit, right_fit):
    img_size = (binary_image.shape[1], binary_image.shape[0])
    dist_from_center = 0.0
    # assume the camera is centered in the vehicle
    ###camera_pos = img_size[1] / 2
    if right_fit is not None:
        if left_fit is not None:
            # the camera sits at the image center, which is also the vehicle center
            camera_pos = img_size[0] / 2

            # find where the right and left lanes intersect the bottom of the frame
            # x coordinates of the left/right lane lines at the bottom of the image
            left_lane_pix = np.polyval(left_fit, binary_image.shape[0])
            right_lane_pix = np.polyval(right_fit, binary_image.shape[0])
            # x coordinate of the midpoint between the two lane lines
            center_of_lane_pix = (left_lane_pix + right_lane_pix) / 2
            # offset of the camera (vehicle center) from the lane center,
            # assuming a 3.7 m lane width spanning 1280 px
            dist_from_center = (camera_pos - center_of_lane_pix) * 3.7/1280
            #print(dist_from_center, 'm')

    return  dist_from_center

#################################################################
# Step 8 : Draw lane line result on undistorted image (inverse perspective transform)
#################################################################
def drawing(undist, bin_warped, color_warp, left_fitx, right_fitx, ploty, Minv):
    # Create an image to draw the lines on
    warp_zero = np.zeros_like(bin_warped).astype(np.uint8)
    color_warp = np.dstack((warp_zero, warp_zero, warp_zero))

    # Recast the x and y points into usable format for cv2.fillPoly()
    pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
    pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
    pts = np.hstack((pts_left, pts_right))

    # Draw the lane onto the warped blank image
    cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))

    # Warp the blank back to original image space using inverse perspective matrix (Minv)
    newwarp = cv2.warpPerspective(color_warp, Minv, (undist.shape[1], undist.shape[0])) 
    # Combine the result with the original image
    result = cv2.addWeighted(undist, 1, newwarp, 0.3, 0)
    return result

#################################################################
# Step 9 : show text
#################################################################
def overlay_text_on_image (image, dist_from_center, fps):
    
    new_img = np.copy(image)
    
    font = cv2.FONT_HERSHEY_SIMPLEX
    font_color = (255,255,255)
    num_format = '{:04.2f}'

    text = 'FPS: ' + str(fps)
    cv2.putText(new_img, text, (40,70), font, 1.5, font_color, 2, cv2.LINE_AA)
        
    direction = 'left'
    if dist_from_center > 0:
        direction = 'right'
    abs_dist = abs(dist_from_center)
    text = 'Vehicle is ' + num_format.format(abs_dist) + ' m ' + direction + ' of center'
    cv2.putText(new_img, text, (40,120), font, 1.5, font_color, 2, cv2.LINE_AA)
    
    return new_img

###-------------------------------------------------------------------------------------------###
# Four corners of the trapezoid in the source image, clockwise from the top-left corner
src = np.float32([[603, 342], [727, 342], [1150, 720], [225, 720]])
wrap_offset = 150
# Four corners of the rectangle in the warped (bird's-eye) image
dst = np.float32([[225+wrap_offset, 0], [1150-wrap_offset, 0], [1150-wrap_offset, 720], [225+wrap_offset, 720]])

bridge = CvBridge()
detected = False
# Keep the last fit across frames so the tracking branch has a valid prior
left_fit = np.array([0, 0, 0])
right_fit = np.array([0, 0, 0])
ploty = []


def callbackFunc(image):
    rospy.loginfo('receive frame success')
    global bridge, src, dst, detected, left_fit, right_fit, ploty
    try:
        undistort_image = bridge.imgmsg_to_cv2(image, "bgr8")
    except CvBridgeError as e:
        print(e)
        return

    start = time.time()
    # Step 3 - perspective transform
    warp_image, M, Minv = warpImage(undistort_image, src, dst)
    ###cv2.imshow("warp_image",warp_image)
    ###cv2.waitKey(0)
    # Step 4 - extract lane lines
    hlsL_binary = hlsLSelect(warp_image)
    labB_binary = labBSelect(warp_image, (205, 255))
    combined_binary = np.zeros_like(hlsL_binary)
    combined_binary[(hlsL_binary == 1) | (labB_binary == 1)] = 1
    ###cv2.imshow("combined_binary",combined_binary)
    ###cv2.waitKey(0)
    if detected == False:  # Step 5 - detect lane lines with sliding windows
        out_img, left_fit, right_fit, ploty = fit_polynomial(combined_binary, nwindows=9, margin=80, minpix=40)
        ###cv2.imshow("out_img",out_img)
        ###cv2.waitKey(0)
        if left_fit[0] == 0 and left_fit[1] == 0:  # fit failed: no lane pixels found
            detected = False
            return
        else:
            detected = True
    else:  # Step 6 - track lane lines using the previous frame's fit
        track_result, left_fit, right_fit, ploty = search_around_poly(combined_binary, left_fit, right_fit)
        if len(left_fit) > 0 and len(right_fit) > 0:
            detected = True
        else:
            detected = False
            return

    end = time.time()
    fps = math.floor(1 / (end - start))

    # step 7-CalculateDistFromCenter
    dist_from_center = CalculateDistFromCenter(warp_image, left_fit, right_fit)

    # Step 8 - inverse perspective transform and drawing
    left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
    right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
    result = drawing(undistort_image, combined_binary, warp_image, left_fitx, right_fitx, ploty, Minv)

    #step 9-show text 
    result = overlay_text_on_image (result, dist_from_center, fps)
    
    cv2.imshow("result",result)
    cv2.waitKey(40)
         
def msg_subscriber():
    rospy.init_node('msg_subscriber', anonymous=True)
    rospy.Subscriber('/miivii_gmsl_ros/camera3', Image, callbackFunc)
    rospy.spin()

if __name__ == '__main__':
    msg_subscriber()

(1) In the lane-tracking function, computing left_lane_inds raises an array index out-of-range error (a guard sketch follows below).
(2) The color-space thresholds for lane extraction have not been tuned yet.
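For problem (1), a minimal guard sketch, assuming the index error comes from calling search_around_poly with an empty or all-zero prior fit; falling back to the sliding-window search is one way to recover (safe_track is a hypothetical wrapper):

import numpy as np

def safe_track(binary_warped, left_fit, right_fit):
    # A prior fit is usable only if it has all three polynomial coefficients
    # and is not the all-zero sentinel that fit_polynomial returns on failure
    usable = (left_fit is not None and right_fit is not None
              and len(left_fit) == 3 and len(right_fit) == 3
              and np.any(left_fit) and np.any(right_fit))
    if not usable:
        return fit_polynomial(binary_warped)  # re-detect from scratch
    return search_around_poly(binary_warped, left_fit, right_fit)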

2020.09.07

Task: lane detection with deep learning

1. AdamShan has ported MaybeShewill-CV/lanenet-lane-detection to a ROS version.

2. There are two versions of the weight file: one released by the author MaybeShewill and one downloaded from the web; a TensorFlow-model tutorial explains how to load the model.

3. Before running, modify the weight path in the launch file: it must contain only the prefix that precedes the .index / .data / .meta suffixes (see the sketch below).
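For reference, a sketch of restoring a TF 1.x checkpoint by prefix, assuming the LaneNet graph has already been built in the session (the path is a placeholder):

import tensorflow as tf

saver = tf.train.Saver()
with tf.Session() as sess:
    # Pass only the common prefix; TensorFlow resolves the
    # .index / .data-00000-of-00001 / .meta files from it
    saver.restore(sess, '/path/to/weights/lanenet.ckpt')  # placeholder path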

Problem record

1. Because of version mismatches in tensorflow and related packages, running the program produces a stream of deprecation warnings ("... is deprecated, use ... instead"). I fixed them one by one as instructed but never finished them all, and gave up on the rest.

2. As shown in the figure; not solved yet.

2020.09.08

Task: NVIDIA JetPack lane detection sample; Autoware

1. Watched the sample video: it draws points on the lane lines, sometimes sparse and sometimes dense, and also puts points on roadside guardrails and cars. The results are poor, so I will not use it.

2. Autoware is not installed yet.

2020.09.09

Task: install Autoware

1. The overall installation follows the official tutorial; for installing qt5 I followed tutorial 1 and tutorial 2.

2. For installing and configuring eigen I followed a tutorial; for the step of downloading the installation package I followed another tutorial.

2020.09.19

Problem record

1. Autoware has not been installed successfully yet.

2. Reviewed Autoware's perception and path-planning framework: it does not run lane detection on camera data; the camera is only used for object detection.

Results

1. Got the ultra-fast-lane-detection project from the paper running: reproduced the original Ultra-Fast-Lane-Detection project on the MiiVii domain controller (Xavier-based).

Plans

2020.10.15

Pain lasts so long, like a snail inching along with endless patience; joy is so brief, like a rabbit's tail flicking across an autumn meadow.

The day before National Day I flashed JetPack 4.4 and finally got the TensorRT-accelerated lane detection project running. The delight quickly cooled: darknet_ros would not compile because of an OpenCV version problem, so I reworked OpenCV in every way I could ("MiiVii domain controller (Xavier): configuring ROS with OpenCV 3.x.x"). After getting darknet_ros to run, I found that the domain controller's bundled GMSL camera driver would not compile either. Yesterday I rested for half a day and summarized the current predicament:
There are four tasks under JetPack 4.4: (1) bring up the GMSL camera; (2) run the demo that detects pedestrians, vehicles, and bicycles; (3) detect traffic signs; (4) detect lane lines.
(1) and (2) depend on dynamic libraries shipped with the flashed image, so they must use the stock OpenCV 4.1.1. Yesterday I verified that (4) runs under both OpenCV 3.2.0 and 4.1.1. After digging through some material I found that (3) darknet_ros only runs properly under OpenCV 3.x.x and has many problems under 4.x.x. Last night on GitHub I found people trying to build darknet_ros under 4.x.x ("Opencv4, now working #202"); the conclusion there is to replace the darknet under darknet_ros/ with AlexeyAB/darknet, which compiles under OpenCV 2/3/4. So I switched OpenCV back to 4.1.1 and tried to build AlexeyAB/darknet.

2020.10.16

AlexeyAB/darknet compiled and ran successfully, but after replacing the darknet under darknet_ros with it, catkin_make failed.
Replaced the darknet under darknet_ros with kunaltyagi/darknet and modified the relevant files following "OpenCV4 compilation success", but catkin_make still failed.
