This is the project repo for the final project of the Udacity Self-Driving Car Nanodegree: Programming a Real Self-Driving Car. For more information about the project, see the project introduction here.
| Name | Email |
|---|---|
| M Usman Afzal | [email protected] |
| Patrick Sai Ho Poon | [email protected] |
| Deepesh Dongre | [email protected] |
| James Neville | [email protected] |
| Abhinav Eramally | [email protected] |
Please use one of the two installation options: native or Docker.
- Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.
- If using a Virtual Machine to install Ubuntu, use the following configuration as a minimum:
  - 2 CPU
  - 2 GB system memory
  - 25 GB of free hard drive space
The Udacity-provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using it.
- Follow these instructions to install ROS:
  - ROS Kinetic if you have Ubuntu 16.04.
  - ROS Indigo if you have Ubuntu 14.04.
- Use this option to install the SDK on a workstation that already has ROS installed: One Line SDK Install (binary)
- Download the Udacity Simulator.
Build the docker container

```bash
docker build . -t capstone
```

Run the docker container

```bash
docker run -p 4567:4567 -v $PWD:/capstone -v /tmp/log:/root/.ros/ --rm -it capstone
```

To set up port forwarding, please refer to the instructions from term 2.
- Clone the project repository

```bash
git clone https://github.com/udacity/CarND-Capstone.git
```

- Install python dependencies

```bash
cd CarND-Capstone
pip install -r requirements.txt
```

- Make and run styx

```bash
cd ros
catkin_make
source devel/setup.sh
roslaunch launch/styx.launch
```

- Run the simulator
- Download the training bag that was recorded on the Udacity self-driving car.
- Unzip the file

```bash
unzip traffic_light_bag_file.zip
```

- Play the bag file

```bash
rosbag play -l traffic_light_bag_file/traffic_light_training.bag
```

- Launch your project in site mode

```bash
cd CarND-Capstone/ros
roslaunch launch/site.launch
```

- Confirm that traffic light detection works on real-life images
A ROS architecture was chosen for the project. It consists of nodes that communicate via a messaging service. Information about the car's state (its current position, velocity, and camera readings) and controls (steering, braking, and throttle) is captured and shared among the different nodes. This information is used in Perception, Planning, and Control.
Camera readings are processed by the traffic light classifier, a neural network that detects traffic lights. The result of the classifier, the vehicle's current position, and a set of base waypoints are passed to the traffic light detector, which predicts the next traffic light and determines whether the car should come to a stop at a red light.
The output of the traffic light detector triggers acceleration or deceleration in the waypoint updater, which publishes to the DBW (drive-by-wire) node that drives the vehicle.
Waypoint Updater - the hub of waypoint planning. In the `WaypointUpdater` class, the base waypoints (output of the waypoint loader), the vehicle's position, and the traffic waypoint from the traffic light detector are repeatedly used to drive the vehicle forward. `decelerate_waypoints` is called when a red traffic light is detected and the vehicle needs to come to a stop.
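The deceleration step can be sketched as follows. This is a minimal sketch, not the project's actual implementation: waypoints are simplified to plain `(x, y, velocity)` tuples instead of ROS `Waypoint` messages, and `MAX_DECEL` is a hypothetical deceleration limit. Target velocities are tapered with `v = sqrt(2 * a * d)` so the car reaches zero velocity at the stop waypoint:

```python
import math

MAX_DECEL = 0.5  # m/s^2, assumed comfortable deceleration limit (hypothetical)

def distance(waypoints, i, j):
    """Cumulative Euclidean distance along the path from waypoint i to j."""
    total = 0.0
    for k in range(i, j):
        (x1, y1, _), (x2, y2, _) = waypoints[k], waypoints[k + 1]
        total += math.hypot(x2 - x1, y2 - y1)
    return total

def decelerate_waypoints(waypoints, stop_idx):
    """Return waypoints with target velocities tapered to zero at stop_idx."""
    result = []
    for i, (x, y, v) in enumerate(waypoints):
        dist = distance(waypoints, i, max(stop_idx, i))
        # v = sqrt(2 * a * d): the speed that lets the car stop at stop_idx
        vel = math.sqrt(2 * MAX_DECEL * dist)
        if vel < 1.0:
            vel = 0.0  # snap very small speeds to a full stop
        # never exceed the original target speed for this waypoint
        result.append((x, y, min(vel, v)))
    return result
```

Waypoints past `stop_idx` get zero velocity, so the vehicle stays stopped until the light changes and fresh waypoints are published.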
Traffic Light Detection - picks up the current position, the base waypoints, the given traffic light array, and the traffic light status. The traffic light classifier returns the current traffic light status (colour), which is processed in the pipeline and triggers the waypoint updater to bring the vehicle to a stop at a red light. The traffic light classification model is built using TensorFlow. The classification output choices are: Red, Green, Yellow, and Off.
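As an illustration of how the classifier's four outputs could gate the stop decision (the label mapping and confidence threshold below are assumptions for the sketch, not values taken from the project):

```python
# Hypothetical label indices for the four classification outputs.
LABELS = {0: "Red", 1: "Green", 2: "Yellow", 3: "Off"}
CONFIDENCE_THRESHOLD = 0.5  # assumed minimum score to trust a detection

def classify(scores):
    """Map per-class scores to a light state; fall back to 'Off' if unsure."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    if scores[best] < CONFIDENCE_THRESHOLD:
        return "Off"
    return LABELS[best]

def should_stop(scores):
    """Only a confident Red prompts the waypoint updater to stop the car."""
    return classify(scores) == "Red"
```

Treating a low-confidence detection as "Off" errs on the side of not braking for phantom lights; the real pipeline may handle uncertainty differently.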
Drive-By-Wire (DBW) Node - responsible for steering the car with a twist controller that manages throttle, brake, and steering actions with a PID controller, and communicates with the simulator.
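A minimal sketch of the PID step such a twist controller relies on (the gains and actuator limits here are illustrative, not the project's tuned values):

```python
class PID:
    """Minimal PID controller with output clamping to actuator limits."""

    def __init__(self, kp, ki, kd, mn=float("-inf"), mx=float("inf")):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.min, self.max = mn, mx
        self.int_val = 0.0      # accumulated (integrated) error
        self.last_error = 0.0   # for the derivative term

    def step(self, error, sample_time):
        # Integrate the error and estimate its rate of change.
        self.int_val += error * sample_time
        derivative = (error - self.last_error) / sample_time
        self.last_error = error
        val = self.kp * error + self.ki * self.int_val + self.kd * derivative
        # Clamp to the actuator's range (e.g. throttle in [0, 1]).
        return min(max(val, self.min), self.max)
```

For example, a throttle command could be produced as `pid.step(target_velocity - current_velocity, sample_time)` with the output clamped to `[0, 1]`, while braking and steering would use their own controllers and limits.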



