Abstract

In high-speed visual servoing applications, latency introduced by the recognition algorithm can significantly degrade response time. Hardware acceleration allows recognition algorithms to be applied directly during the raster scan from the image sensor, thereby removing virtually all video processing latency. This paper examines one such method, along with an analysis of the design decisions made to optimize it for high-speed airborne object tracking tests for the US military. Designing test equipment for defense use involves working around unique challenges that arise when many details are deemed classified or highly sensitive. Designing a tracking system without exact figures for the speed, mass, distance, or nature of the objects being tracked requires a flexible control system that can be easily tuned after installation. To further improve accuracy and allow rapid tuning to an as-yet-undisclosed set of parameters, a machine-learning-powered auto-tuner is developed and implemented as a control loop optimizer.
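
As a rough illustration of the streaming approach described above (a minimal Python sketch under assumed values, not the thesis's actual FPGA pipeline), the example below accumulates a bright-target centroid pixel by pixel in raster order, so the measurement is ready as soon as the final pixel arrives and no frame buffer is needed. The threshold and frame width are illustrative assumptions.

# Minimal sketch (assumed values, not from the thesis): detect a bright target
# while pixels arrive in raster order, so no frame buffer is required and the
# centroid is available as soon as the last pixel has been clocked in.

THRESHOLD = 200   # assumed 8-bit brightness threshold
WIDTH = 640       # assumed frame width in pixels

def stream_centroid(pixel_stream):
    """Consume grayscale pixels in raster order; return (x, y) centroid of bright pixels."""
    sum_x = sum_y = count = 0
    for i, value in enumerate(pixel_stream):
        if value >= THRESHOLD:
            sum_x += i % WIDTH   # column of this pixel
            sum_y += i // WIDTH  # row of this pixel
            count += 1
    if count == 0:
        return None              # no target detected in this frame
    return sum_x / count, sum_y / count

In hardware, each loop iteration corresponds to one accumulation per pixel clock, which is why recognition performed during the raster scan adds essentially no latency beyond the scan itself.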

Degree

MS

College and Department

Ira A. Fulton College of Engineering and Technology; Electrical and Computer Engineering

Rights

http://lib.byu.edu/about/copyright/

Date Submitted

2019-07-01

Document Type

Thesis

Handle

http://hdl.lib.byu.edu/1877/etd10922

Keywords

FPGA vision, visual servoing, low latency video processing, robotic vision

Language

English

Included in

Engineering Commons
