Abstract
Ground robotic vehicles are used in many different applications, many of which involve tele-operation of the robot. This allows the robot to be deployed in locations that are too difficult or unsafe for humans to access. The ability of a ground robot to autonomously navigate to a desired location, without a priori map information and without using GPS, would make robotic vehicles useful in many of these situations and would free the operator to focus on other, more important tasks. The purpose of this research is to develop algorithms that enable a ground robot to autonomously navigate to a user-selected location. The goal is selected from the robot's video feed, and the robot drives to the goal location while avoiding obstacles. The method uses a monocular camera to measure the locations of the goal and landmarks, and it is validated in simulation and through experiments on an iRobot Packbot platform. A novel goal-based robocentric mapping algorithm is derived in Chapter 3. The map is created using an extended Kalman filter (EKF) that tracks the position of the goal, along with other available landmarks surrounding the robot, as the robot drives toward the goal. The mapping is robocentric, meaning that a local map is created in the robot-body frame. A unique state definition for the goal and additional landmarks is presented that improves the estimate of the goal location. An improved 3D model is derived that allows the robot to drive on non-flat terrain while estimating the positions of the goal and other landmarks. The observability and consistency of the proposed method are shown in Chapter 4. The visual tracking algorithm is explained in Chapter 5. This tracker is used with the EKF to improve tracking performance and to allow objects to be tracked even after leaving the camera's field of view for significant periods of time.
This problem presents a difficult challenge for visual tracking because of the drastic change in the apparent size of the goal object as the robot approaches it. The tracking method is validated through experiments in real-world scenarios. The planning and control method is derived in Chapter 6. A Model Predictive Control (MPC) formulation is designed that explicitly handles the sensor constraints of a monocular camera rigidly mounted to the vehicle. The MPC uses an observability-based cost function to drive the robot along a path that minimizes the position error of the goal in the robot-body frame, and it avoids obstacles while driving to the goal. Conditions are given that guarantee the robot will arrive within a specified distance of the goal. The entire system is implemented on an iRobot Packbot, and experiments are conducted and presented in Chapter 7. The methods described in this work are shown to run on actual hardware, allowing the robot to arrive at a user-selected goal in real-world scenarios.
Degree
PhD
College and Department
Ira A. Fulton College of Engineering and Technology; Mechanical Engineering
Rights
http://lib.byu.edu/about/copyright/
BYU ScholarsArchive Citation
Ferrin, Jeffrey L., "Autonomous Goal-Based Mapping and Navigation Using a Ground Robot" (2016). Theses and Dissertations. 6190.
https://scholarsarchive.byu.edu/etd/6190
Date Submitted
2016-12-01
Document Type
Dissertation
Handle
http://hdl.lib.byu.edu/1877/etd9004
Keywords
GPS-denied localization, vision-based navigation, MPC, robocentric mapping
Language
English