Finding Lane Lines on the Road

The goals / steps of this project are the following:

  • Make a pipeline that finds lane lines on the road
  • Reflect on your work in a written report

Reflection

1. Describe your pipeline. As part of the description, explain how you modified the draw_lines() function.

We first import the required packages and load a test image. Note that matplotlib reads images as RGB, whereas cv2.imread returns them in BGR order.

import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline

image = cv2.imread('test.jpg')
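
Because the image here is loaded with cv2.imread, its channels are in BGR order, so plt.imshow would display swapped colours. A minimal sketch of the conversion before displaying (image_rgb is just an illustrative name):

# cv2.imread returns BGR; convert to RGB so matplotlib shows the expected colours.
image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.figure()
plt.imshow(image_rgb)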

The pipeline mainly contains the following steps:

1). Build a filter to find the white and yellow parts:

Lane lines are typically white or yellow. Hence, we first extract all the white and yellow parts of the image with the filter_colors function, thresholding in the RGB, HSV, and HLS color spaces.

def filter_colors(image):
    # Filter the white and yellow lines in images.
    # To reinforce the performance, we use both HLS and HSV to find the yellow and white lines.

    # white BGR
    lower_white = np.array([200, 200, 200])
    upper_white = np.array([255, 255, 255])
    white_mask = cv2.inRange(image, lower_white, upper_white)
    white_RGB_image = cv2.bitwise_and(image, image, mask=white_mask)

    # yellow hsv
    hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
    lower_yellow = np.array([90,100,100])
    upper_yellow = np.array([110,255,255])
    # lower_yellow = np.array([20,100,100])
    # upper_yellow = np.array([30,255,255])

    yellow_mask = cv2.inRange(hsv, lower_yellow, upper_yellow)
    yellow_hsv_image = cv2.bitwise_and(image, image, mask=yellow_mask)

    hsv_image = weighted_img(white_RGB_image, 1., yellow_hsv_image, 1., 0.)

    hls = cv2.cvtColor(image, cv2.COLOR_RGB2HLS)

    # white hls
    lower = np.uint8([  0, 200,   0])
    upper = np.uint8([255, 255, 255])

    white_hls_mask = cv2.inRange(hls, lower, upper)
    white_hls_image = cv2.bitwise_and(image, image, mask=white_hls_mask)

    # yellow hls
    lower = np.uint8([ 10,   0, 100])
    upper = np.uint8([ 40, 255, 255])
    yellow_hls_mask = cv2.inRange(hls, lower, upper)
    yellow_hls_image = cv2.bitwise_and(image, image, mask=yellow_hls_mask)

    hls_image = weighted_img(white_hls_image, 1., yellow_hls_image, 1., 0.)

    final_image = weighted_img(hls_image, 1., hsv_image, 1., 0.)

    return final_image

kernel_size = 5  # used for the Gaussian blur in the next step
filtered_image = filter_colors(image)
plt.figure()
plt.imshow(filtered_image)
(Figure: the white and yellow parts extracted from the image)
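
The weighted_img helper used above is not shown in this write-up. Judging from how it is called, it presumably just blends two images with cv2.addWeighted; a minimal sketch under that assumption:

def weighted_img(img1, alpha, img2, beta, gamma):
    # Blend two same-sized images: result = img1*alpha + img2*beta + gamma.
    return cv2.addWeighted(img1, alpha, img2, beta, gamma)
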
2). Convert the image to grayscale and remove noise

cv2.GaussianBlur is used to remove noise from the grayscale image. Removing the noise is important; the next step shows the difference it makes.

gray = cv2.cvtColor(filtered_image, cv2.COLOR_RGB2GRAY)
blur_gray = cv2.GaussianBlur(gray,(kernel_size, kernel_size),0)
(Figure: grayscale image after Gaussian blur)
3). Canny edge detection

In this step, we use the Canny edge detector to find all edges in the image. With noise, the detected edges are a real mess; without noise, the lane line edges are very clear.

# Canny and apply on original image
# Define our parameters for Canny and apply
low_threshold = 50
high_threshold = 150

edges = cv2.Canny(image, low_threshold, high_threshold)
plt.figure()
plt.imshow(edges)
(Figure: Canny edge detection on the original image)
edges = cv2.Canny(blur_gray, low_threshold, high_threshold)
plt.figure()
plt.imshow(edges)
(Figure: Canny edge detection after noise removal)
4). Region of interest

In the previous step, we found that most of the detected edges are not lane line edges, and that the lane line edges always lie inside a trapezoid. Hence, we define a trapezoid to restrict the region of interest.

# Next we'll create a masked edges image using cv2.fillPoly()
mask = np.zeros_like(edges)   
ignore_mask_color = 255  

# This time we are defining a four sided polygon to mask
imshape = image.shape
vertices = np.array([[(imshape[1]*0.15, imshape[0]*0.95),
                      (imshape[1]*0.45, imshape[0]*0.6),
                      (imshape[1]*0.55, imshape[0]*0.6),
                      (imshape[1]*0.9,  imshape[0]*0.95)]], dtype=np.int32)


cv2.fillPoly(mask, vertices, ignore_mask_color)
masked_edges = cv2.bitwise_and(edges, mask)
plt.figure()
plt.imshow(masked_edges)
(Figure: region of interest, a trapezoid)
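
Since the same masking has to run on every video frame, it can be convenient to wrap this step in a small helper; a sketch (the name region_of_interest is only illustrative, and it assumes a single-channel edge image):

def region_of_interest(img, vertices):
    # Keep only the pixels of a single-channel image that lie inside the polygon.
    mask = np.zeros_like(img)
    cv2.fillPoly(mask, vertices, 255)
    return cv2.bitwise_and(img, mask)
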
5). Filter edges and generate line segments with the HoughLinesP function

Not all edges inside the trapezoid are useful. We use HoughLinesP with specific parameters to discard short and spurious segments.

# Define the Hough transform parameters
# Make a blank the same size as our image to draw on
rho = 3 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 15     # minimum number of votes (intersections in Hough grid cell)
min_line_length = 5 #minimum number of pixels making up a line
max_line_gap = 25    # maximum gap in pixels between connectable line segments
line_image = np.copy(image)*0 # creating a blank to draw lines on

# Run Hough on edge detected image
# Output "lines" is an array containing endpoints of detected line segments
lines = cv2.HoughLinesP(masked_edges, rho, theta, threshold, np.array([]),min_line_length, max_line_gap)
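
cv2.HoughLinesP returns an array of shape (N, 1, 4) holding the endpoints of each detected segment, or None when nothing passes the thresholds. The raw segments can be visualized before any extrapolation; a minimal sketch (raw_lines_img is just an illustrative copy, so the blank line_image above stays untouched):

raw_lines_img = np.copy(line_image)   # blank canvas for the raw segments
if lines is not None:
    for line in lines:
        x1, y1, x2, y2 = line[0]      # each entry has shape (1, 4)
        cv2.line(raw_lines_img, (int(x1), int(y1)), (int(x2), int(y2)), [255, 0, 0], 2)
plt.figure()
plt.imshow(raw_lines_img)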

6). Find the segments that belong to the lane lines

We handle the left and right lane lines separately.
In image coordinates (where y grows downward), segments belonging to the right lane have a slope greater than 0.5, while segments belonging to the left lane have a slope less than -0.5.



# Iterate over the output "lines" and draw lines on a blank image
left_lane_lines_x = []
left_lane_lines_y = []
right_lane_lines_x = []
right_lane_lines_y = []
    
x_size = image.shape[1]
y_size = image.shape[0]
for line in lines:
    x1, y1, x2, y2 = line[0]
    if abs(x1-x2) == 0:
        slope = float("inf")
    else:
        slope = (y2 - y1)/(x2 - x1)

    if slope > 0.5 and x1 > x_size/2 and x2 > x_size/2:
        right_lane_lines_x.append(x1)
        right_lane_lines_x.append(x2)
        right_lane_lines_y.append(y1)
        right_lane_lines_y.append(y2)
    elif slope < -0.5 and x1 < x_size/2 and x2 < x_size/2:
        left_lane_lines_x.append(x1)
        left_lane_lines_x.append(x2)
        left_lane_lines_y.append(y1)
        left_lane_lines_y.append(y2)
7). Fit the lane line and generate the start and end points of the fitted line

We fit a straight line through the collected points with np.polyfit, which gives the slope and intercept. From these we compute the start and end points and then draw the line on the image.

r_m, r_b = np.polyfit(right_lane_lines_x, right_lane_lines_y, 1)
l_m, l_b = np.polyfit(left_lane_lines_x, left_lane_lines_y, 1)
        
y1 = image.shape[0]
y2 = image.shape[0] * (1 - 0.35)


r_x1 = (y1 - r_b) / r_m
r_x2 = (y2 - r_b) / r_m
    
l_x1 = (y1 - l_b) / l_m
l_x2 = (y2 - l_b) / l_m
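
Note that np.polyfit with degree 1 returns the coefficients highest order first, i.e. (slope, intercept) for y = m*x + b, and the endpoint computation above simply inverts that fit to recover x at the two chosen y values. A tiny example with made-up points:

# Hypothetical points: np.polyfit(x, y, 1) gives [slope, intercept] of the best-fit line.
m, b = np.polyfit([100, 200, 300], [540, 450, 360], 1)
x_at_bottom = (540 - b) / m   # invert y = m*x + b to recover x at a chosen y
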
8). Draw the lane lines on the image

color = [255,0,0]
thickness = 10
cv2.line(image, (int(r_x1), y1), (int(r_x2), int(y2)), color, thickness)
cv2.line(image, (int(l_x1), y1), (int(l_x2), int(y2)), color, thickness)

color_edges = np.dstack((edges, edges, edges)) 

lines_edges = cv2.addWeighted(image, 1, line_image, 1, 0) 
plt.figure()
plt.imshow(lines_edges)
(Figure: the fitted lane lines drawn on the image)

In order to draw a single line for the left and right lanes, I modified the draw_lines() function with the following steps:

a. Handle the left and right segments separately, based on
  i. the absolute value of the slope being larger than 0.5
 ii. left-lane segments lying in the left half of the image and right-lane segments lying in the right half
b. Fit the collected points with np.polyfit.
c. Smooth the slope and intercept with a queue of size 10. The idea is that the slope and intercept should change little over the past 10 frames. The experimental results are very good: after smoothing, the drawn lines no longer jitter from frame to frame.
def draw_lines(img, lines, color=[255, 0, 0], thickness=10):
    """
    NOTE: this is the function you might want to use as a starting point once you want to 
    average/extrapolate the line segments you detect to map out the full
    extent of the lane (going from the result shown in raw-lines-example.mp4
    to that shown in P1_example.mp4).  
    
    Think about things like separating line segments by their 
    slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
    line vs. the right line.  Then, you can average the position of each of 
    the lines and extrapolate to the top and bottom of the lane.
    
    This function draws `lines` with `color` and `thickness`.    
    Lines are drawn on the image inplace (mutates the image).
    If you want to make the lines semi-transparent, think about combining
    this function with the weighted_img() function below
    """
#     for line in lines:
#         for x1,y1,x2,y2 in line:
#             cv2.line(img, (x1, y1), (x2, y2), color, thickness)
      
    left_lane_lines_x = []
    left_lane_lines_y = []
    right_lane_lines_x = []
    right_lane_lines_y = []

    x_size = img.shape[1]
    y_size = img.shape[0]
    for line in lines:
        if len(line[0])>3:
            x1, y1, x2, y2 = line[0]
        else:
            continue
        if abs(x1-x2) == 0:
            slope = float("inf")
        else:
            slope = (y2 - y1)/(x2 - x1)

        if slope > 0.5 and x1 > x_size/2 and x2 > x_size/2:
            right_lane_lines_x.append(x1)
            right_lane_lines_x.append(x2)
            right_lane_lines_y.append(y1)
            right_lane_lines_y.append(y2)
        elif slope < -0.5 and x1 < x_size/2 and x2 < x_size/2:
            left_lane_lines_x.append(x1)
            left_lane_lines_x.append(x2)
            left_lane_lines_y.append(y1)
            left_lane_lines_y.append(y2)
    right_exist = False
    left_exist = False
    if right_lane_lines_x != [] and right_lane_lines_y!= []:
        r_m, r_b = np.polyfit(right_lane_lines_x, right_lane_lines_y, 1)
        right_exist = True
        
    if left_lane_lines_x != [] and left_lane_lines_y!= []:
        l_m, l_b = np.polyfit(left_lane_lines_x, left_lane_lines_y, 1)
        left_exist = True
    
    # Smooth the slope and intercept over recent frames to make the lane line detection robust.
    if left_exist:
        if left_q_m.size() < 10:
            left_q_m.put(l_m)
        else:
            left_q_m.get()
            left_q_m.put(l_m)

        if left_q_b.size() < 10:
            left_q_b.put(l_b)
        else:
            left_q_b.get()
            left_q_b.put(l_b)
            
    if right_exist:      
        if right_q_m.size() < 10:
            right_q_m.put(r_m)
        else:
            right_q_m.get()
            right_q_m.put(r_m)

        if right_q_b.size() < 10:
            right_q_b.put(r_b)
        else:
            right_q_b.get()
            right_q_b.put(r_b)
    
    # Use the smoothed (averaged) slope and intercept; skip a side entirely if its
    # queues are still empty, since np.mean([]) would give NaN and cv2.line would fail.
    y1 = img.shape[0]
    y2 = int(img.shape[0] * (1 - 0.35))

    if right_q_m.size() > 0 and right_q_b.size() > 0:
        r_m = right_q_m.avg()
        r_b = right_q_b.avg()
        r_x1 = (y1 - r_b) / r_m
        r_x2 = (y2 - r_b) / r_m
        cv2.line(img, (int(r_x1), y1), (int(r_x2), y2), color, thickness)

    if left_q_m.size() > 0 and left_q_b.size() > 0:
        l_m = left_q_m.avg()
        l_b = left_q_b.avg()
        l_x1 = (y1 - l_b) / l_m
        l_x2 = (y2 - l_b) / l_m
        cv2.line(img, (int(l_x1), y1), (int(l_x2), y2), color, thickness)

The queue is defined as follows. (The Queue class from Python's standard-library queue module, or collections.deque(maxlen=10), would also work.)

class Queue:
    """
    Define a queue class for smoothing the slope and bias of the lane line.
    Smoothing is necessary for this task. 
    """
    def __init__(self):
        self.items = []

    def isEmpty(self):
        return self.items == []
    
    def Empty(self):
        # Drain all stored values.
        while not self.isEmpty():
            self.items.pop()
        return self.items
    
    def put(self, item):
        self.items.insert(0,item)
        
    def avg(self):
        return np.mean(self.items)

    def get(self):
        return self.items.pop()

    def size(self):
        return len(self.items)
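
The four smoothing queues referenced in draw_lines (left_q_m, left_q_b, right_q_m, right_q_b) are presumably created once, before the video is processed, for example:

# One queue per smoothed quantity, shared across all frames of a video.
left_q_m, left_q_b = Queue(), Queue()
right_q_m, right_q_b = Queue(), Queue()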

2. Identify potential shortcomings with your current pipeline

Shortcoming:
In challenge.mp4, the algorithm works well on almost all frames except frames 113 and 114. When I run the pipeline on those two frames individually, it works fine; however, it still fails on them within the video. I spent two days trying to improve the performance on those two frames, without success.

Potential shortcomings:

  1. The test videos are limited, and the algorithm may fail in many other situations, for instance at night or in rain, snow, or a sandstorm.

  2. Many of the parameters are fixed, which is a significant limitation. With the development of machine learning, could these parameters be learned automatically?

3. Suggest possible improvements to your pipeline

I would really like to test the algorithm on an S-bend road, and I hope the length of the annotated lane line can be made adaptive. We currently fit the lane line with linear regression; it would be more reasonable to fit it as a curve.
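
As one possible direction, the same np.polyfit call can fit a second-order polynomial instead of a straight line, which would follow gentle curves better. A minimal sketch with hypothetical points (fitting x as a function of y so the curve can be evaluated at chosen rows):

ys = np.array([540, 500, 460, 420, 380])   # hypothetical lane-pixel rows
xs = np.array([200, 230, 265, 305, 350])   # hypothetical lane-pixel columns
coeffs = np.polyfit(ys, xs, 2)             # [a, b, c] for x = a*y**2 + b*y + c
plot_ys = np.linspace(ys.min(), ys.max(), num=50)
plot_xs = np.polyval(coeffs, plot_ys)      # points along the fitted curve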

The complete code is available at:
https://github.com/fighting41love/Udacity_Lane_line_detection/blob/master/P1.ipynb
