Abstract

This paper describes methods for tracking a user-defined point in a robot's visual field as the robot drives forward. This tracking allows the robot to keep itself directed at the point and thereby drive to it. I develop and present two new multi-scale algorithms for tracking arbitrary points between two frames of video, as well as through a video sequence. Rather than the traditional image pyramid, the multi-scale algorithms use a data structure called an integral image (also known as a summed area table). The first algorithm uses edge detection to track the movement of the tracking point between frames; the second uses a modified version of the Moravec operator for the same purpose. Both algorithms track the user-specified point very quickly: implemented on a conventional desktop computer, tracking proceeds at a rate of at least 20 frames per second.
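The integral image mentioned in the abstract can be sketched briefly. This is a generic illustration of the summed-area-table idea, not the thesis's implementation: each table entry holds the sum of all pixels above and to the left of it, so the sum over any rectangle can then be read off with four lookups.

```python
# Minimal sketch of an integral image (summed area table).
# All names here are illustrative, not taken from the thesis.

def build_integral_image(img):
    """img: 2-D list of pixel intensities. Returns a same-size table
    whose entry (r, c) is the sum of img over the rectangle from
    (0, 0) to (r, c) inclusive."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for r in range(h):
        row_sum = 0
        for c in range(w):
            row_sum += img[r][c]
            # Add the running sum of this row to the total above it.
            ii[r][c] = row_sum + (ii[r - 1][c] if r > 0 else 0)
    return ii

def region_sum(ii, r0, c0, r1, c1):
    """Sum of pixels in the rectangle (r0, c0)..(r1, c1) inclusive,
    computed in O(1) time from four table lookups."""
    total = ii[r1][c1]
    if r0 > 0:
        total -= ii[r0 - 1][c1]
    if c0 > 0:
        total -= ii[r1][c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1][c0 - 1]  # add back the doubly subtracted corner
    return total
```

Because any rectangular sum costs a constant four lookups regardless of its size, region statistics at multiple scales can be computed without building a pyramid of downsampled images, which is what makes the structure attractive for multi-scale tracking.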

Degree

MS

College and Department

Physical and Mathematical Sciences; Computer Science

Rights

http://lib.byu.edu/about/copyright/

Date Submitted

2004-10-11

Document Type

Thesis

Handle

http://hdl.lib.byu.edu/1877/etd564

Keywords

computer, machine vision, machine learning, video, point tracking, visual servoing, integral image, summed area table, minimal edit distance, Moravec operator
