Abstract
The goal of learning transfer is to apply knowledge gained from one problem to a separate, related problem. Transformation learning is a proposed approach to computational learning transfer that focuses on modeling high-level transformations well suited for transfer. By using a high-level representation of transferable data, transformation learning facilitates both shallow (intra-domain) and deep (inter-domain) transfer scenarios. Transformations can be discovered in data by using manifold learning to order data instances according to the transformations they represent. For high-dimensional data representable with coordinate systems, such as images and sounds, data instances can be decomposed into small sub-instances based on their coordinates. Coordinate-based transformation models trained on these sub-instances can effectively approximate transformations from very small amounts of input data compared to the naive transformation modeling approach, and they are also well suited for deep transfer scenarios.
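To make the coordinate-based decomposition concrete, the sketch below shows one plausible reading of the idea, not the thesis's actual method: an image is split into coordinate-indexed sub-instances (a normalized position plus a small pixel neighborhood), a simple linear model is fit on those sub-instances to approximate a transformation observed in a single source/target pair, and the learned model is then reapplied to a new image. The function name `to_sub_instances`, the toy brightness-shift transformation, and the linear least-squares model are all illustrative assumptions.

```python
import numpy as np

def to_sub_instances(image, k=1):
    """Decompose an image into coordinate-indexed sub-instances:
    each sub-instance is (normalized row, normalized col, flattened
    (2k+1)x(2k+1) neighborhood). Illustrative assumption, not the
    thesis's exact decomposition."""
    h, w = image.shape
    padded = np.pad(image, k, mode="edge")
    coords, patches = [], []
    for r in range(h):
        for c in range(w):
            patches.append(padded[r:r + 2 * k + 1, c:c + 2 * k + 1].ravel())
            coords.append((r / h, c / w))
    return np.array(coords), np.array(patches)

# Toy example: model a brightness-shift transformation from a single
# source/target image pair using coordinate-based sub-instances.
rng = np.random.default_rng(0)
src = rng.random((16, 16))
dst = np.clip(src + 0.2, 0.0, 1.0)   # the "transformation" to approximate

coords, patches = to_sub_instances(src)
X = np.hstack([coords, patches, np.ones((len(coords), 1))])  # features + bias
y = dst.ravel()                      # transformed center pixels

# Linear least-squares model of the transformation at the sub-instance level.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Apply the learned transformation to a new image from the same domain
# (a shallow-transfer-style reuse of the model).
new_src = rng.random((16, 16))
nc, npatch = to_sub_instances(new_src)
Xn = np.hstack([nc, npatch, np.ones((len(nc), 1))])
pred = (Xn @ w).reshape(new_src.shape)
print("mean abs error:", np.abs(pred - np.clip(new_src + 0.2, 0.0, 1.0)).mean())
```

Because each sub-instance carries only a coordinate and a small neighborhood, the model is trained on many examples extracted from very little input data, which is the property the abstract attributes to coordinate-based transformation models.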
Degree
MS
College and Department
Physical and Mathematical Sciences; Computer Science
Rights
http://lib.byu.edu/about/copyright/
BYU ScholarsArchive Citation
Wilson, Christopher R., "Transformation Learning: Modeling Transferable Transformations In High-Dimensional Data" (2010). Theses and Dissertations. 2334.
https://scholarsarchive.byu.edu/etd/2334
Date Submitted
2010-05-25
Document Type
Thesis
Handle
http://hdl.lib.byu.edu/1877/etd3600
Keywords
transformation learning, learning transfer, machine learning
Language
English