lazy training, overfit, generalization, neural networks
Multi-layer backpropagation, like most learning algorithms capable of creating complex decision surfaces, is prone to overfitting. We present a novel approach, called lazy training, for reducing overfitting in multi-layer networks. Lazy training consistently reduces the generalization error of optimized neural networks by more than half on a large OCR dataset and on several real-world problems from the UCI machine learning repository. Here, lazy training is shown to be effective in a multi-layered adaptive learning system, reducing the error of an optimized backpropagation network in a speech recognition system by 50.0% on the TIDIGITS corpus.
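The abstract does not spell out the mechanism; as an illustrative sketch only (an assumption about the idea, not the paper's exact algorithm), lazy training is commonly described as backpropagating error only for patterns the network currently misclassifies, so correctly classified patterns do not pull the weights further and overfitting is curbed. A minimal single-layer version of that idea:

```python
import math
import random

def lazy_train(inputs, targets, epochs=200, lr=0.5, seed=0):
    """Sketch of a lazy training rule on a single sigmoid unit:
    weights are updated only for patterns that are currently
    misclassified; correct patterns contribute zero error."""
    random.seed(seed)
    n = len(inputs[0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            net = sum(wi * xi for wi, xi in zip(w, x)) + b
            o = 1.0 / (1.0 + math.exp(-net))
            pred = 1 if o >= 0.5 else 0
            if pred == t:
                continue  # lazy step: no error backpropagated
            err = (t - o) * o * (1.0 - o)  # standard delta otherwise
            for i in range(n):
                w[i] += lr * err * x[i]
            b += lr * err
    return w, b

def classify(w, b, x):
    # Sigmoid output >= 0.5 is equivalent to net input >= 0.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

# Hypothetical usage on a toy separable problem (logical AND):
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
T = [0, 0, 0, 1]
w, b = lazy_train(X, T)
preds = [classify(w, b, x) for x in X]
```

Because updates stop for any pattern once it is classified correctly, training halts at the first separating boundary rather than continuing to sharpen outputs toward 0/1 targets, which is the intuition behind the reduced overfitting.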
Original Publication Citation
Rimer, M., Martinez, T. R., and Wilson, D. R., "Improving Speech Recognition Learning through Lazy Training", Proceedings of the IEEE International Joint Conference on Neural Networks IJCNN'02, pp. 2568-2573, 2002.
BYU ScholarsArchive Citation
Martinez, Tony R.; Rimer, Michael E.; and Wilson, D. Randall, "Improving Speech Recognition Learning through Lazy Training" (2002). All Faculty Publications. 1071.
Physical and Mathematical Sciences
© 2002 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.