This is the final project of a Mobile and Embedded Systems course: making an autonomous car drive along a track using vision processing. The track is made of parallel strips of blue painter's tape (for ease of processing), and other elements along the track, such as stop signs (red paper on the floor) and stop lights (colored LEDs), increase the difficulty of the scenario. The project is heavily inspired by raja_961's Autonomous Lane Keeping Car project, which used a Raspberry Pi; our car uses a similar setup but with a BeagleBone Black, a webcam mounted at the front of the car, a USB WiFi adapter to remotely access the BeagleBone, and a USB hub to plug both into the BeagleBone.

Variable Selection
In our PD steering control loop, the "error" measurement was the angle offset from the desired path (as determined by the vision processing), measured in degrees. Since our output was a range of PWM values from 6 to 9, we scaled the proportional term so that the car would reach its maximum turning value at an error of about 45 degrees. We chose a P coefficient of 0.085 and initially set the derivative coefficient to 65% of P as an arbitrary guess. After further testing, we significantly decreased the derivative coefficient to 10% of P, as the higher value was smoothing the turns into nonexistence. We also set the camera resolution to 160x120 px as a balance between maintaining image quality and reducing the number of pixels that had to be processed each frame.

Handling Stop Signs and Traffic Lights
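The loop above can be sketched as follows. This is a minimal illustration, not our actual code: the function and variable names are ours, and we assume a PWM duty cycle centered at 7.5 (the midpoint of the 6-9 range) with the PD output clamped to that range.

```python
# PD steering sketch: maps an angular error (degrees, from vision
# processing) to a steering PWM duty cycle in [6, 9].
KP = 0.085        # proportional gain, from our tuning
KD = 0.1 * KP     # derivative gain, reduced to 10% of KP after testing

PWM_CENTER = 7.5  # assumed straight-ahead duty cycle (midpoint of range)
PWM_MIN, PWM_MAX = 6.0, 9.0

def steering_pwm(error_deg, prev_error_deg, dt):
    """One iteration of the PD loop; dt is the frame period in seconds."""
    derivative = (error_deg - prev_error_deg) / dt
    control = KP * error_deg + KD * derivative
    # Clamp so the steering saturates at the servo's limits.
    return max(PWM_MIN, min(PWM_MAX, PWM_CENTER + control))
```

With the derivative term at 65% of P, large frame-to-frame error changes canceled most of the proportional correction, which is why the turns were being smoothed away; dropping KD to 10% of KP restored them.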
We used the following process to handle the stop light:
First, we scan each frame returned by the camera in HSV space, looking for the particular red that makes up the red stop light. To handle noise, we require that a certain percentage of the frame (over 0.6% of the image for the red light, and 1% for the green light) be made up of this red before triggering our traffic-light logic. If we have detected the stop light, we stop the motors of the car and continuously repeat the above process, looking for the green light instead. Once we have found a green light, the rest of the code continues to run and the motors are turned back on to accelerate forward.
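The detection step can be sketched like this. The helper names are ours and the frame is assumed to already be in HSV (in the real loop, OpenCV's cv2.cvtColor converts each webcam frame); the HSV bounds are the ones listed at the end of this section.

```python
import numpy as np

RED_FRACTION = 0.006    # red light must cover > 0.6% of the frame
GREEN_FRACTION = 0.01   # green light must cover > 1% of the frame

def color_fraction(hsv, lower, upper):
    """Fraction of pixels whose (H, S, V) values all fall in [lower, upper]."""
    lower, upper = np.array(lower), np.array(upper)
    mask = np.all((hsv >= lower) & (hsv <= upper), axis=-1)
    return mask.mean()

def sees_red_light(hsv):
    return color_fraction(hsv, (140, 20, 100), (179, 80, 255)) > RED_FRACTION

def sees_green_light(hsv):
    return color_fraction(hsv, (40, 50, 84), (90, 255, 255)) > GREEN_FRACTION
```

In practice the masking is done with cv2.inRange, which produces the same per-pixel test; requiring a minimum fraction of matching pixels is what keeps isolated noisy pixels from triggering the stop.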
Similarly, to ensure our vehicle stopped at stop signs, we tested multiple boundary and threshold values to find HSV values for which the vehicle's camera would reliably detect the red stop sign. We eventually settled on hue values between 140 and 200, saturation values between 90 and 255, and brightness values between 200 and 250. If more than 0.003% of the camera's image fell between those boundary values, we considered the view to contain a stop sign. We stopped the vehicle for 2 seconds at the first stop sign and stopped it entirely at the second. Since we encounter the traffic light before either stop sign, we do not begin checking for stop signs until after we have crossed the traffic light.
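A sketch of that check, with our own names (the gating flag and function are illustrative, not the original code). Note that 0.003% is a fraction of 0.00003 of the frame's pixels, far lower than the light thresholds, since the sign lies flat on the floor and occupies few pixels when first visible.

```python
import numpy as np

STOP_LOWER = np.array((140, 90, 200))   # hue, saturation, brightness lower bounds
STOP_UPPER = np.array((200, 255, 250))  # upper bounds, from our tuning
STOP_FRACTION = 0.00003                 # 0.003% of the image

def sees_stop_sign(hsv, passed_traffic_light):
    """Stop-sign check is disabled until the car has crossed the traffic light."""
    if not passed_traffic_light:
        return False
    mask = np.all((hsv >= STOP_LOWER) & (hsv <= STOP_UPPER), axis=-1)
    return mask.mean() > STOP_FRACTION
```

Gating on the traffic light matters because the red of the stop light and the red of the stop sign overlap in hue, so checking for signs too early would false-trigger on the light.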
The HSV bounds that we used for each object are listed below:
Red stop light: Lower (140, 20, 100), Upper (179, 80, 255)
Green light: Lower (40, 50, 84), Upper (90, 255, 255)
Stop sign: Lower (140, 90, 200), Upper (200, 255, 250)

Plots