Seok Lee blog

Self Driving Car

self_driving

Summary:

Simulation result #1 (path following):

Watch the video

Simulation result #2 (classification):

Watch the video 2

Submission note:

Path following functionality is not working as expected when the camera is enabled. The issue does not occur when the camera is disabled.

To address potential latency issues, I adjusted the rospy rates for tl_detector to 5Hz and waypoint_updater to 15Hz. Additionally, I implemented a strategy to process only one image every three frames for classification. Unfortunately, these measures did not resolve the problem.
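For reference, here is a minimal sketch of the rate change, assuming a node that drives its processing from a rospy.Rate loop (the real tl_detector may be callback-driven; process_traffic_lights is a hypothetical method name):

    import rospy

    def tl_detector_loop(detector):
        # Hypothetical main loop: run at 5 Hz to cut CPU load while the
        # camera streams; waypoint_updater uses the same pattern at 15 Hz
        rate = rospy.Rate(5)
        while not rospy.is_shutdown():
            detector.process_traffic_lights()  # assumed method name
            rate.sleep()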

Troubleshooting the issue has been time-consuming. I’ve also reached out to Udacity to inquire about potential changes to the simulator.

Until a resolution is found, I can demonstrate the following:

• Path following without camera: the path-following algorithm works as expected when the camera is disabled. (See Simulation Result #1)
• Traffic light detection: the classifier accurately detects red stop signals at steady state in manual mode. (See Simulation Result #2)

Simulation environment:

Procedure

    def get_light_state(self, light):
        """Determines the current color of the traffic light

        Args:
            light (TrafficLight): light to classify

        Returns:
            int: ID of traffic light color (specified in styx_msgs/TrafficLight)

        """
        #print(light.state)
        #return light.state  # test that the vehicle stops at the closest light when it turns red
        if not self.has_image:
            self.prev_light_loc = None  # TODO: confirm what prev_light_loc is used for
            return False  # False compares equal to 0, i.e. RED
        elif 0 < self.dist_2closest < 80:
            # Only classify when the vehicle is within 80 m of the closest light
            data_collection = 0  # set to 1 to save camera frames as training data
            cv_image = self.bridge.imgmsg_to_cv2(self.camera_image, "bgr8")
            if data_collection:
                # Save the frame, labeled with the simulator's ground-truth state
                path = os.getcwd()
                image_folder = path + '/sim_data'
                timestamp = datetime.utcnow().strftime('%Y_%m_%d_%H_%M_%S_%f')[:-3]
                image_filename = os.path.join(image_folder, timestamp + '_' + str(light.state))
                cv2.imwrite('{}.jpg'.format(image_filename), cv_image)
                print('closest_light_location:' + str(light.pose.pose.position.x))
                print('simulation(given)_classification:' + str(light.state))
            # Classify once and log the result
            state = self.light_classifier.get_classification(cv_image)
            print('model classification return to tl_detector:' + str(state))
            return state
        else:
            return 4  # UNKNOWN
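For reference, the integer IDs used above come from the constants defined in styx_msgs/TrafficLight, which is why the unknown case returns 4 and False (equal to 0) reads as red:

    # Color IDs defined in styx_msgs/TrafficLight
    RED = 0
    YELLOW = 1
    GREEN = 2
    UNKNOWN = 4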
To prepare training labels from the Bosch dataset, the following script walks the extracted image folder and writes a filename-to-color CSV (the color is embedded in each filename between the first '_' and '.png', sometimes with a 'left'/'straight' arrow suffix):

    import os
    import csv

    def strip_end(text, suffix):
        if not text.endswith(suffix):
            return text
        return text[:len(text) - len(suffix)]

    # "w" creates the CSV ("r+" would fail if the file doesn't exist yet)
    with open("csv_write_test_00.csv", "w") as f:
        w = csv.writer(f, delimiter=',', lineterminator='\n')
        for path, dirs, files in os.walk(r"C:\Users\SLEE194\Downloads\Bosch-TL-Dataset-master\data\tl-extract-train"):
            for filename in files:
                # Extract the color label from the filename and strip any
                # arrow suffix so e.g. 'redleft' becomes 'red'
                start = filename.find('_') + 1
                end = filename.find('.png')
                color = filename[start:end]
                color = strip_end(color, 'left')
                color = strip_end(color, 'straight')
                w.writerow([filename, color])
  
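Each row pairs a filename with its color label; a hypothetical file named 00123_redleft.png, for example, would yield the row (00123_redleft.png, red) once the 'left' suffix is stripped.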

However, classification of the traffic lights in the simulation doesn't work well.

The options I implemented are:

a. (Intermediate method) Use OpenCV color and circle detection to detect only red circles, so the vehicle stops at a red light. For every other traffic light color, set the traffic light state to unknown. This algorithm is much lighter, so it may keep working while the camera is turned on for classification.

#from styx_msgs.msg import TrafficLightArray, TrafficLight
import rospy
import keras
import tensorflow as tf
import numpy as np
from tensorflow.contrib.layers import flatten
from sensor_msgs.msg import Image
import cv2
from keras.models import load_model
import h5py
from keras import backend as K
from datetime import datetime
import os

class TLClassifier(object):
    def __init__(self):
        self.model = None  # placeholder for a CNN model (unused by the OpenCV path)
        self.idx = 0  # frame counter used to skip classification on most frames
        self.light_state = 4  # unknown
    def get_classification(self, image):
        """Determines the color of the traffic light in the image

        Args:
            image (cv::Mat): image containing the traffic light

        Returns:
            int: ID of traffic light color (specified in styx_msgs/TrafficLight)
        """
        print('camera shoot: ' + str(self.idx))

        if self.idx == 0:
            # Threshold for red in HSV; red wraps around the hue axis, so two
            # ranges are combined into one mask
            bgr_image = cv2.medianBlur(image, 3)
            hsv_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
            lower_red_hue_range = cv2.inRange(hsv_image, (0, 100, 100), (10, 255, 255))
            upper_red_hue_range = cv2.inRange(hsv_image, (160, 100, 100), (179, 255, 255))

            red_hue_image = cv2.addWeighted(lower_red_hue_range, 1.0, upper_red_hue_range, 1.0, 0.0)
            red_hue_image = cv2.GaussianBlur(red_hue_image, (9, 9), sigmaX=2, sigmaY=2)

            # Use the Hough transform to detect circles in the combined threshold
            # image (keyword arguments, so the values land on the intended
            # parameters of the Python binding)
            circles_r = cv2.HoughCircles(red_hue_image, cv2.HOUGH_GRADIENT, 1,
                                         red_hue_image.shape[0] / 8.0,
                                         param1=100, param2=20,
                                         minRadius=1, maxRadius=1)

            if circles_r is None or len(circles_r[0, :]) < 4:
                self.light_state = 4  # unknown
            else:
                print('circles_r_detected: ' + str(len(circles_r[0, :])))
                self.light_state = 0  # red

                # Save the thresholded frame for later inspection
                path = os.getcwd()
                image_folder = path + '/sim_data'
                timestamp = datetime.utcnow().strftime('%Y_%m_%d_%H_%M_%S_%f')[:-3]
                image_filename = os.path.join(
                    image_folder,
                    timestamp + '_' + str(self.light_state) + '_' + str(len(circles_r[0, :])))
                cv2.imwrite('{}.jpg'.format(image_filename), red_hue_image)
        else:
            # Reuse the cached state for the next three frames to cut latency
            print('no classification on this frame: ' + str(self.idx))

        self.idx = (self.idx + 1) % 4
        return self.light_state

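Note the trade-off in the frame-skipping logic above: only the first frame of each four-frame cycle is actually classified, and the cached state is returned for the others, so the reported color can lag by a few frames in exchange for lower average latency.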
b. Check a You Only Look Once (YOLO) Keras implementation with the Bosch dataset for classification

Self Driving Car capstone

References:

How a robot works:

ROS_overview

Nomenclature (Node,topics,messages,package):

ROS_Nodes

Project overview:

project_overview

self driving car nodes:

1.tl_detector (traffic light detection)

tl_detector

2.waypoint_updater (update target velocity based on traffic light/obstacle data)

waypoint_updater

3.twist_controller (responsible for controlling the vehicle)

twist_controller

  1. Classifier
    • Framework: a. ** revised step 1 (image detection using OpenCV) as an intermediate solution:
    • As mentioned before, the CNN trained on the Bosch dataset does not classify the captured simulation images correctly: the mismatch between the training images and the simulator's camera images leads to wrong predictions (classifications).
    • Automatically cropping the image within a certain range as the vehicle approaches the nearest traffic signal was an option, but path following does not work while the camera is on, so manually driving and collecting data is painful. I therefore looked for the simplest workable method.
    • I used a method that detects a red circle (only) once the vehicle is within 80 m of the nearest traffic signal.

    ** Future improvement: use YOLO to classify the light, with a custom training set built from either the Bosch dataset or annotated simulation images.

b. Step 2 (integration into the tl_classifier node): embed the classification algorithm and transmit the signal color as its output, as sketched below.
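A minimal sketch of that wiring, assuming the standard capstone layout in which tl_detector imports the classifier and publishes the stop-line waypoint index of an upcoming red light on /traffic_waypoint (class and method names here are illustrative):

    import rospy
    from std_msgs.msg import Int32
    from light_classification.tl_classifier import TLClassifier

    class TLDetectorIntegration(object):
        def __init__(self):
            self.light_classifier = TLClassifier()
            self.upcoming_red_light_pub = rospy.Publisher(
                '/traffic_waypoint', Int32, queue_size=1)

        def publish_light(self, cv_image, stop_line_wp):
            # Publish the stop-line waypoint index when the classifier sees
            # red (state 0), otherwise -1 so the planner keeps going
            state = self.light_classifier.get_classification(cv_image)
            wp = stop_line_wp if state == 0 else -1
            self.upcoming_red_light_pub.publish(Int32(wp))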

  1. Things to note:
    • Path following doesn't work when the camera is on. The rospy rate in tl_detector was reduced to 5 Hz, but no luck.
    • To mitigate the latency when the camera is on, I added a method that uses only one image out of three for classification, but no luck.
    • Troubleshooting and mitigating the latency took a long time. I asked Udacity to check whether the simulator changed at some point.
    • Until that discussion with Udacity happens, what I can demonstrate for the final submission is:
      • With the camera off, path following works.
      • At steady state, the classifier detects the red stop signal.

Before classification/detection: original

After classification/detection: opencvdetection opencvdetection

  1. Tips:
    • rostopic list: look up all topics ros_topic_list
    • rostopic info /current_pose: check the message type on the "/current_pose" topic ros_topic_list_info
    • rosmsg info geometry_msgs/PoseStamped: check the message contents ros_msg_list
  2. Issues: no_ros_msg