This thesis introduces methods for adding user-guided, depth-aware effects to images captured with a consumer-grade stereo camera, requiring only minimal user interaction. In particular, we present methods for depth-of-field, highlighted depth-of-field, haze, and image relighting. Unlike many prior methods for adding such effects, we do not assume a pre-existing scene model or require extensive user guidance to create one, nor do we require multiple input images. We also do not require specialized rigs or other equipment such as light-field camera arrays or active lighting. Instead, we use only an easily portable and affordable consumer-grade stereo camera. Depth is computed from a stereo image pair using an extended version of PatchMatch Stereo designed to compute not only image disparities but also normals for visible surfaces. We also introduce a pipeline for rendering multiple effects in the order they would occur physically; each effect can be added, removed, or adjusted in the pipeline without requiring the user to reapply subsequent effects. Individually or in combination, these effects can enhance the sense of depth or structure in images and provide increased artistic control. Our interface also allows editing the stereo pair together in a fashion that preserves stereo consistency, or the effects can be applied to a single image only, thus leveraging the advantages of stereo acquisition even to produce a single photograph.
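The pipeline idea described above can be illustrated with a minimal Python sketch. This is not the thesis's actual implementation; the class, effect names, fixed ordering, and the exponential haze model below are all illustrative assumptions chosen to show how effects could be applied in a fixed physical order while remaining individually togglable.

```python
import numpy as np

# Hypothetical sketch (names and ordering are assumptions, not the thesis's API):
# effects are applied in a fixed, physically motivated order, and any effect can
# be added, removed, or adjusted and the result re-rendered without the user
# redoing the other effects.
class EffectsPipeline:
    ORDER = ["relighting", "depth_of_field", "haze"]  # assumed physical order

    def __init__(self, image, depth):
        self.image = image.astype(np.float64)  # H x W x 3 color image
        self.depth = depth.astype(np.float64)  # H x W per-pixel depth
        self.effects = {}                      # name -> callable(image, depth)

    def set_effect(self, name, fn):
        self.effects[name] = fn

    def remove_effect(self, name):
        self.effects.pop(name, None)

    def render(self):
        out = self.image
        for name in self.ORDER:                # always the same physical order
            fn = self.effects.get(name)
            if fn is not None:
                out = fn(out, self.depth)
        return out

# Example effect: depth-dependent haze that blends each pixel toward white
# using an exponential transmittance model (an illustrative choice).
def haze(image, depth, density=0.1):
    t = np.exp(-density * depth)[..., None]    # transmittance falls with depth
    return t * image + (1.0 - t) * 255.0
```

Because `render` always walks the same fixed order, toggling one effect and re-rendering automatically recomposes the remaining effects in their correct physical sequence.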
College and Department
Physical and Mathematical Sciences; Computer Science
BYU ScholarsArchive Citation
Abbott, Joshua E., "Interactive Depth-Aware Effects for Stereo Image Editing" (2013). All Theses and Dissertations. 3712.
Keywords
depth-aware effects, stereo, surface normals, normal calculation, quotient image, image-based relighting, depth-of-field, highlighted depth-of-field, haze, disparity, depth pipeline, PatchMatch Stereo