Keywords: structure from motion, geotechnical, UAV, computer vision
Structure from motion (SfM) computer vision is a relatively new technology that allows engineers to reconstruct a three-dimensional (3D) model of a scene using two-dimensional digital photographs captured from a single, moving camera. SfM computer vision provides an economical and user-friendly alternative to other 3D scene-capture and modeling tools such as light detection and ranging (LiDAR). Although the resolution and accuracy of laser-based modeling methods are generally superior to those of vision-based modeling methods, the economic advantages of the latter may make it a useful and practical alternative for many geotechnical engineering applications. While other engineering disciplines have investigated the potential usefulness of SfM computer vision for years, its application to geotechnical engineering remains largely unexplored. Researchers are currently applying this technology to select full-scale geotechnical field experiments and assessing its potential usefulness as a high-resolution instrumentation and monitoring tool. This paper presents preliminary computer vision results and findings from these studies. The field experiments, as well as the hardware and software used to develop 3D SfM computer models of the experiments, are summarized. The developed 3D models are presented, and displacements measured in the models are compared against ground truth to evaluate accuracy. Observed advantages and limitations of SfM computer vision are discussed, and several potentially useful applications of the technology in geotechnical engineering are listed.
BYU ScholarsArchive Citation
Palmer, L.; Franke, Kevin W.; Martin, R. Abraham; Sines, B. E.; Rollins, Kyle M.; and Hedengren, John, "The Application and Accuracy of Structure from Motion Computer Vision Models with Full-Scale Geotechnical Field Tests" (2015). Faculty Publications. 1693.
Ira A. Fulton College of Engineering and Technology
Copyright ASCE, 2016, Author's submitted version of this article.