Abstract
The field of autonomous vehicle research is growing rapidly, spurred in large part by the Congressional mandate for the military to use unmanned vehicles. In conjunction with this mandate, DARPA sponsored the Urban Challenge, a competition to create fully autonomous vehicles capable of operating in urban settings. An essential capability of autonomous vehicles, especially in urban locations, is the ability to perceive their environment. The research presented in this thesis is directed toward providing an autonomous vehicle with real-time data that efficiently and compactly represents its forward environment as it navigates an urban area. The information extracted from the environment consists of stop line locations, lane information, and obstacle locations, obtained using a single camera and a LIDAR scanner. A binary road/non-road mask is first segmented from each camera frame. From the road information in the mask, the vehicle's current travel lane is detected using a minimum distance transform and tracked between frames. Stop lines and obstacles are detected from the non-road information in the mask: stop lines using a variation of vertical profiling, and obstacles using shape descriptors. A laser rangefinder is used in conjunction with the camera, in a primitive form of sensor fusion, to create a list of obstacles in the forward environment. Obstacle boundaries, lane points, and stop line centers are then translated from image coordinates to UTM coordinates using a homography transform created during the camera calibration procedure; a novel system for rapid camera calibration was also implemented. Algorithms investigated during the development phase of the project are included in the text to explain design decisions and to provide direction for researchers who will continue work in this field. The results were promising: the system performed these tasks fairly accurately at a rate of about 20 frames per second on an Intel Core 2 Duo processor with 2 GB of RAM.
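As a minimal sketch of the coordinate-translation step described above, the following Python/NumPy code applies a precomputed 3x3 homography H, assumed to relate image pixels to ground-plane coordinates as produced by the calibration procedure. The function name image_to_ground and the NumPy formulation are illustrative assumptions, not the thesis implementation.

    import numpy as np

    def image_to_ground(H, pixel_points):
        """Map N x 2 pixel coordinates to ground-plane coordinates
        using a 3x3 homography H (assumed to come from calibration)."""
        pts = np.asarray(pixel_points, dtype=float)
        ones = np.ones((pts.shape[0], 1))
        homog = np.hstack([pts, ones])         # homogeneous coords [u, v, 1]
        mapped = homog @ H.T                   # apply the projective transform
        return mapped[:, :2] / mapped[:, 2:3]  # normalize by the scale term

    # Hypothetical usage: ground-plane offsets (in meters) are added to the
    # vehicle's current UTM position to obtain absolute UTM coordinates.
    # local_xy = image_to_ground(H, stop_line_centers)
    # utm_xy = local_xy + vehicle_utm_position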
Degree
MS
College and Department
Ira A. Fulton College of Engineering and Technology; Electrical and Computer Engineering
Rights
http://lib.byu.edu/about/copyright/
BYU ScholarsArchive Citation
Greco, Christopher Richard, "Real-Time Forward Urban Environment Perception for an Autonomous Ground Vehicle Using Computer Vision and LIDAR" (2008). Theses and Dissertations. 1344.
https://scholarsarchive.byu.edu/etd/1344
Date Submitted
2008-03-17
Document Type
Thesis
Handle
http://hdl.lib.byu.edu/1877/etd2314
Keywords
autonomous vehicle, computer vision, image processing, DARPA Urban Challenge, robot, camera, LIDAR, sensor fusion
Language
English