LIVEcut: Learning-based Interactive Video Segmentation by Evaluation of Multiple Propagated Cues
video segmentation, interactive, learning-based
Video sequences contain many cues that may be used to segment objects in them, such as color, gradient, color adjacency, shape, temporal coherence, camera and object motion, and easily-trackable points. This paper introduces LIVEcut, a novel method for interactively selecting objects in video sequences by extracting and leveraging as much of this information as possible. Using a graph-cut optimization framework, LIVEcut propagates the selection forward frame by frame, allowing the user to correct any mistakes along the way. Enhanced methods of extracting many of the features are provided. In order to use the most accurate information from the various potentially-conflicting features, each feature is automatically weighted locally based on its estimated accuracy on the previous, implicitly-validated frame. Feature weights are further updated by learning from the user corrections required in the previous frame. The effectiveness of LIVEcut is shown through timing comparisons to other interactive methods, accuracy comparisons to unsupervised methods, and qualitatively through selections on various video sequences.
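The accuracy-based weighting described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the pixel-list data layout, and the simple fraction-correct accuracy measure are all assumptions made for clarity. Each cue produces a per-pixel foreground probability; cues are weighted by how well they predicted the previous, user-validated frame, and the weighted probabilities would then feed a graph-cut data term.

```python
# Hypothetical sketch of per-cue weighting from the previous validated frame.
# Names and the weighting formula are illustrative, not from the paper.

def cue_weights(prev_labels, prev_cue_probs):
    """Weight each cue by its accuracy on the previous, validated frame.

    prev_labels: list of 0/1 ground-truth labels (1 = foreground).
    prev_cue_probs: one list of foreground probabilities per cue.
    Returns normalized weights, one per cue.
    """
    weights = []
    for probs in prev_cue_probs:
        # Fraction of pixels where thresholding the cue matched the labels.
        correct = sum(1 for p, y in zip(probs, prev_labels)
                      if (p >= 0.5) == (y == 1))
        weights.append(correct / len(prev_labels))
    total = sum(weights)
    return [w / total for w in weights]

def fused_foreground_prob(cue_probs, weights):
    """Per-pixel weighted combination of the cues' foreground probabilities."""
    n_pixels = len(cue_probs[0])
    return [sum(w * probs[i] for w, probs in zip(weights, cue_probs))
            for i in range(n_pixels)]
```

In this sketch a cue that segmented the previous frame accurately dominates the fused probability for the current frame, while an unreliable cue (e.g. color in a scene where foreground and background colors overlap) is automatically down-weighted.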
Original Publication Citation
B. Price, B. Morse, and S. Cohen, "LIVEcut: Learning-based interactive video segmentation by evaluation of multiple propagated cues," in IEEE International Conference on Computer Vision (ICCV), pp. 779-786, 2009.
BYU ScholarsArchive Citation
Morse, Bryan S.; Price, Brian L.; and Cohen, Scott, "LIVEcut: Learning-based Interactive Video Segmentation by Evaluation of Multiple Propagated Cues" (2009). Faculty Publications. 121.
Physical and Mathematical Sciences
Copyright Use Information
© 2009 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.