Keywords

lazy training, overfit, generalization, neural networks

Abstract

Multi-layer backpropagation, like most learning algorithms that can create complex decision surfaces, is prone to overfitting. We present a novel approach, called lazy training, for reducing overfitting in multi-layer networks. Lazy training consistently reduces the generalization error of optimized neural networks by more than half on a large OCR dataset and on several real-world problems from the UCI machine learning database repository. Here, lazy training is shown to be effective in a multi-layered adaptive learning system, reducing the error of an optimized backpropagation network in a speech recognition system by 50.0% on the TIDIGITS corpus.

Original Publication Citation

Rimer, M., Martinez, T. R., and Wilson, D. R., "Improving Speech Recognition Learning through Lazy Training", Proceedings of the IEEE International Joint Conference on Neural Networks IJCNN'02, pp. 2568-2573, 2002.

Document Type

Peer-Reviewed Article

Publication Date

2002-05-17

Permanent URL

http://hdl.lib.byu.edu/1877/2427

Publisher

IEEE

Language

English

College

Physical and Mathematical Sciences

Department

Computer Science
