Keywords
artificial neural networks, predictive model, training sets, speed training, generalization
Abstract
Artificial neural networks provide an effective empirical predictive model for pattern classification. However, using complex neural networks to learn very large training sets is often problematic, imposing prohibitive time constraints on the training process. We present four practical methods for dramatically decreasing training time through dynamic stochastic sample presentation, a technique we call speed training. These methods are shown to retain generalization accuracy across a diverse collection of real-world data sets. In particular, the SET technique achieves a training speedup of 4278% on a large OCR database with no detectable loss in generalization.
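This record does not detail the four speed-training methods or the SET technique, so the following is only an illustrative sketch of the general idea of dynamic stochastic sample presentation: during backpropagation, each sample is presented for a weight update with a probability tied to its current error, so well-learned samples are mostly skipped. The update-probability rule, network size, and hyperparameters here are assumptions for illustration, not the authors' algorithm.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_speed(X, y, hidden=8, epochs=2000, lr=0.5, floor=0.05):
        """One-hidden-layer backprop where each sample is presented
        stochastically: the chance of an update grows with the sample's
        current output error (an assumed rule, not the paper's), so
        well-learned samples are mostly skipped. `floor` keeps every
        sample occasionally presented."""
        n, d = X.shape
        W1 = rng.normal(0.0, 0.5, (d, hidden))
        W2 = rng.normal(0.0, 0.5, (hidden, 1))
        for _ in range(epochs):
            for i in rng.permutation(n):
                x = X[i:i + 1]                 # shape (1, d)
                h = sigmoid(x @ W1)            # forward pass
                out = sigmoid(h @ W2)
                err = y[i] - out[0, 0]
                # Stochastic presentation: probabilistically skip
                # samples whose current error is already small.
                if rng.random() > max(abs(err), floor):
                    continue
                # Standard backprop update for the presented sample.
                delta_out = err * out * (1.0 - out)
                delta_h = (delta_out @ W2.T) * h * (1.0 - h)
                W2 += lr * h.T @ delta_out
                W1 += lr * x.T @ delta_h
        return W1, W2

    # Toy usage: learn XOR, then print the network's outputs.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)
    W1, W2 = train_speed(X, y)
    print(sigmoid(sigmoid(X @ W1) @ W2).round(2))

The speedup in such schemes comes from spending backprop updates mainly on samples the network still gets wrong, rather than cycling uniformly through the whole training set each epoch.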
Original Publication Citation
Andersen, T. L., Martinez, T. R., and Rimer, M. E., "Speed Training: Improving the Rate of Backpropagation Learning through Stochastic Sample Presentation", Proceedings of the International Joint Conference on Neural Networks IJCNN'01, pp. 2661-2666, 2001.
BYU ScholarsArchive Citation
Andersen, Timothy L.; Martinez, Tony R.; and Rimer, Michael E., "Speed Training: Improving the Rate of Backpropagation Learning through Stochastic Sample Presentation" (2001). Faculty Publications. 1092.
https://scholarsarchive.byu.edu/facpub/1092
Document Type
Peer-Reviewed Article
Publication Date
2001-07-19
Permanent URL
http://hdl.lib.byu.edu/1877/2441
Publisher
IEEE
Language
English
College
Physical and Mathematical Sciences
Department
Computer Science
Copyright Status
© 2001 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Copyright Use Information
http://lib.byu.edu/about/copyright/