Abstract
Using recently popularized invertible neural networks, we predict future video frames from complex dynamic scenes. Our invertible linear embedding (ILE) demonstrates successful learning, prediction, and latent state inference. In contrast to other approaches, ILE does not use any explicit reconstruction loss or simplistic pixel-space assumptions. Instead, it leverages invertibility to optimize the likelihood of image sequences exactly, albeit indirectly. Experiments and comparisons against state-of-the-art methods on synthetic and natural image sequences demonstrate the robustness of our approach. A discussion of future work explores the opportunities our method might provide to other fields in which the accurate analysis and forecasting of non-linear dynamic systems is essential.
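The exact-likelihood claim rests on the change-of-variables formula for invertible maps: if z = f(x) is invertible, then log p_X(x) = log p_Z(f(x)) + log |det ∂f/∂x|, so the model can be trained on image likelihood directly, without a pixel-space reconstruction loss. The sketch below is only an illustration of that general principle with a toy invertible linear map and a standard-normal prior; the map W, the dimensionality, and the prior are assumptions for demonstration and are not the thesis's actual architecture.

```python
import numpy as np

# Toy illustration (not the thesis's ILE architecture): exact log-likelihood of an
# observation under an invertible linear map, via the change-of-variables formula
#   log p_X(x) = log p_Z(W x) + log |det W|
rng = np.random.default_rng(0)
d = 4                                  # toy "frame" dimensionality (assumption)
W = rng.standard_normal((d, d))        # invertible linear map (full rank almost surely)
x = rng.standard_normal(d)             # a toy flattened observation

z = W @ x                              # latent code
log_prior = -0.5 * (z @ z + d * np.log(2 * np.pi))   # standard-normal prior on z
_, logdet = np.linalg.slogdet(W)                      # Jacobian log-determinant
log_likelihood = log_prior + logdet
print(log_likelihood)
```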
Degree
MS
College and Department
Physical and Mathematical Sciences; Computer Science
Rights
http://lib.byu.edu/about/copyright/
BYU ScholarsArchive Citation
Pottorff, Robert Thomas, "Video Prediction with Invertible Linear Embeddings" (2019). Theses and Dissertations. 7577.
https://scholarsarchive.byu.edu/etd/7577
Date Submitted
2019-06-01
Document Type
Thesis
Handle
http://hdl.lib.byu.edu/1877/etd10917
Keywords
system identification, invertible neural networks, Hammerstein-Wiener, video, extrapolation
Language
English