Abstract
Neural Processes (NPs) are a class of regression models that learn a map from a set of input-output pairs to a distribution over functions. NPs are computationally tractable and provide a number of benefits over traditional nonlinear regression models. Despite these benefits, there are two main domains where NPs fail, and this thesis presents extensions of the Neural Process to both. The first is the extension of Neural Processes to graph and network data, which we call Graph Neural Processes (GNPs). A Graph Neural Process is a Neural Process that operates on graph data: it takes spectral information from the graph Laplacian as input and outputs a distribution over values. We demonstrate Graph Neural Processes on edge value imputation and discuss the benefits and drawbacks of the method for other application areas. The second extension concerns the fundamental training mechanism. NPs are traditionally trained with maximum likelihood, a probabilistic technique. We show that there are desirable classes of problems on which NPs fail to learn, and that this drawback is overcome by using approximations of the Wasserstein distance. We give experimental justification for our method and demonstrate its performance. These Wasserstein Neural Processes (WNPs) maintain the benefits of traditional NPs while being able to approximate new classes of function mappings.
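To make the two ideas in the abstract concrete, here is a minimal sketch of how spectral inputs for a GNP-style model might be built from the normalized graph Laplacian. It is an illustration under stated assumptions (numpy and networkx, a `laplacian_spectral_features` helper, and the karate club graph as a toy example), not the implementation used in the thesis.

```python
# Illustrative sketch only: spectral node features from the normalized graph Laplacian.
import numpy as np
import networkx as nx

def laplacian_spectral_features(G: nx.Graph, k: int = 8) -> np.ndarray:
    """Return, for each node, its coordinates in the first k non-trivial
    eigenvectors of the normalized Laplacian (a common spectral embedding)."""
    L = nx.normalized_laplacian_matrix(G).toarray()
    _, eigvecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    # Skip the trivial eigenvector (eigenvalue ~ 0) and keep the next k.
    return eigvecs[:, 1:k + 1]

# Hypothetical usage: spectral features of the endpoints of observed edges serve as
# the NP "context", and the model outputs a distribution over unobserved edge values.
G = nx.karate_club_graph()
X = laplacian_spectral_features(G, k=8)
u, v = 0, 2
edge_input = np.concatenate([X[u], X[v]])   # one candidate edge representation
```

Similarly, the abstract's "approximations of the Wasserstein distance" can be illustrated with the sliced Wasserstein distance, one standard sample-based approximation; whether this matches the particular estimator used for WNPs in the thesis is an assumption of this sketch.

```python
# Illustrative sketch only: sliced Wasserstein distance between two sample sets.
import numpy as np

def sliced_wasserstein(x: np.ndarray, y: np.ndarray, n_projections: int = 50,
                       seed: int = 0) -> float:
    """Average 1-D Wasserstein-1 distance of x and y projected onto random directions.
    Assumes x and y have the same number of samples and the same feature dimension."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)
        # In 1-D, Wasserstein-1 between equal-size empirical distributions is the
        # mean absolute difference of the sorted projections.
        px, py = np.sort(x @ theta), np.sort(y @ theta)
        total += np.abs(px - py).mean()
    return total / n_projections
```

In a training loop, a distance of this kind would replace the maximum-likelihood objective: model samples and target samples are compared directly, which is what allows learning in settings where a likelihood-based loss gives no useful signal.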
Degree
MS
College and Department
Physical and Mathematical Sciences; Computer Science
Rights
https://lib.byu.edu/about/copyright/
BYU ScholarsArchive Citation
Carr, Andrew Newberry, "Geometric Extensions of Neural Processes" (2020). Theses and Dissertations. 8394.
https://scholarsarchive.byu.edu/etd/8394
Date Submitted
2020-05-18
Document Type
Thesis
Handle
http://hdl.lib.byu.edu/1877/etd11146
Keywords
optimal transport, neural processes, graph laplacian, non-linear regression
Language
English