Abstract

When Stochastic Gradient Descent (SGD) is used to train Artificial Neural Networks, gradient variance arises from two sources: changes in the network's weights between successive batch-gradient estimates, and differences in the input values across batches. Architectural traits such as skip-connections and batch-normalization allow much deeper networks to be trained by reducing each type of variance and improving the conditioning of the network gradient with respect to both the weights and the input. It is still unclear to what degree each property is responsible for these dramatic stability improvements when training deep networks. This thesis summarizes previous findings related to gradient conditioning in each case, demonstrates efficient methods by which each can be measured independently, and investigates the contribution each makes to the stability and speed of SGD in various architectures as network depth increases.
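
The following is a minimal sketch, not taken from the thesis, of how the two variance sources might be probed separately: it holds the weights fixed while resampling batches, then holds a batch fixed while taking one SGD step. The toy model, synthetic data, and the helper flat_grad are illustrative assumptions, not the thesis's actual measurement methods.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
    loss_fn = nn.MSELoss()

    def flat_grad(x, y):
        """Gradient of the loss on batch (x, y), flattened to one vector."""
        loss = loss_fn(model(x), y)
        grads = torch.autograd.grad(loss, model.parameters())
        return torch.cat([g.reshape(-1) for g in grads])

    # Source 1: batch-to-batch variance. Hold the weights fixed and measure
    # how the gradient varies across independently sampled batches.
    batch_grads = []
    for _ in range(50):
        x, y = torch.randn(64, 10), torch.randn(64, 1)
        batch_grads.append(flat_grad(x, y))
    batch_grads = torch.stack(batch_grads)
    batch_variance = batch_grads.var(dim=0).sum().item()

    # Source 2: step-to-step change. Hold one batch fixed and measure how
    # much its gradient changes after a single SGD step moves the weights.
    x, y = torch.randn(64, 10), torch.randn(64, 1)
    g_before = flat_grad(x, y)
    lr = 0.1
    with torch.no_grad():
        offset = 0
        for p in model.parameters():
            n = p.numel()
            p -= lr * g_before[offset:offset + n].reshape(p.shape)
            offset += n
    step_change = (flat_grad(x, y) - g_before).norm().item()

    print(f"batch-to-batch gradient variance (trace): {batch_variance:.4f}")
    print(f"gradient change after one SGD step (L2): {step_change:.4f}")

Repeating the same measurements on networks with and without skip-connections or batch-normalization, and at increasing depth, is one hedged way to picture the kind of comparison the abstract describes.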

Degree

MS

College and Department

Physical and Mathematical Sciences; Mathematics

Rights

https://lib.byu.edu/about/copyright/

Date Submitted

2022-08-04

Document Type

Thesis

Handle

http://hdl.lib.byu.edu/1877/etd12491

Keywords

gradient conditioning, gradient step-consistency, gradient batch-dissonance, gradient whitening, gradient confusion, gradient coherence, gradient diversity

Language

English
