Abstract

We investigate the coherent gradient hypothesis and show that coherence measurements differ between real and random data regardless of the network's initialization. We introduce "diffs," a proposed element-wise approximation of coherence, and investigate their properties. We study how coherence is affected by increasing the width of simple fully-connected networks. We then prune those fully-connected networks and find that sparse networks outperform dense networks with the same number of nonzero parameters. In addition, we show that it is possible to increase the performance of a sparse network by scaling up the size of the dense parent network it is derived from. Finally, we apply our pruning methods to ResNet50 and ViT and find that diff-based pruning can be competitive with other methods.
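The abstract does not define coherence or "diffs" precisely, but a minimal sketch may help fix ideas. The snippet below assumes one common formulation of gradient coherence (the squared norm of the mean per-example gradient relative to the mean squared per-example gradient norm) and an analogous per-parameter score as a stand-in for the thesis's "diffs"; the model, loss function, and data tensors are placeholders.

```python
import torch

def per_example_gradients(model, loss_fn, xs, ys):
    """Collect the flattened gradient of the loss for each example separately."""
    grads = []
    for x, y in zip(xs, ys):
        model.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        g = torch.cat([p.grad.flatten() for p in model.parameters()])
        grads.append(g.clone())
    return torch.stack(grads)  # shape: (num_examples, num_params)

def coherence(grads):
    """Scalar coherence: squared norm of the mean gradient divided by the
    mean squared norm of the per-example gradients (one common formulation)."""
    mean_grad = grads.mean(dim=0)
    return mean_grad.norm() ** 2 / grads.norm(dim=1).pow(2).mean()

def elementwise_scores(grads):
    """A per-parameter analogue (illustrative stand-in for 'diffs'): compare
    the squared mean of each coordinate to the mean of its squares."""
    return grads.mean(dim=0) ** 2 / (grads.pow(2).mean(dim=0) + 1e-12)
```

Under this reading, a high scalar coherence means per-example gradients point in similar directions, and the per-parameter scores could serve as a ranking criterion for pruning, with low-scoring weights removed first.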

Degree

MS

College and Department

Physical and Mathematical Sciences; Mathematics

Rights

https://lib.byu.edu/about/copyright/

Date Submitted

2024-04-29

Document Type

Thesis

Handle

http://hdl.lib.byu.edu/1877/etd13183

Keywords

neural network, pruning, generalization, gradient coherence

Language

English
