Keywords

backpropagation, learning algorithms, softprop, generalization, lazy training

Abstract

Multi-layer backpropagation, like many learning algorithms that can create complex decision surfaces, is prone to overfitting. Softprop, a novel learning approach presented here, is reminiscent of the softmax explore-exploit search heuristic from Q-learning. It fits the problem while delaying settling into error minima to achieve better generalization and more robust learning. This is accomplished by blending standard SSE optimization with lazy training, a new objective function well suited to learning classification tasks, to form a more stable learning model. Over several machine learning data sets, softprop reduces classification error by 17.1 percent and the variance in results by 38.6 percent over standard SSE minimization.
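
The paper's exact update rule is not reproduced in this record; the following is a minimal Python sketch, under the assumption that softprop forms a weighted blend of the standard SSE error signal and a lazy-training-style error signal (one that generates error only when the target output fails to beat the highest competing output by a margin). The function name softprop_error and the alpha and margin parameters are illustrative, not taken from the paper.

```python
import numpy as np

def softprop_error(outputs, target_idx, alpha=0.5, margin=0.1):
    """Blended per-pattern error signal (hypothetical sketch).

    outputs    : 1-D array of network output activations, one per class
    target_idx : index of the correct class
    alpha      : blend factor; 1.0 -> pure SSE, 0.0 -> pure lazy-style error
    margin     : confidence margin used by the lazy-training-style term
    """
    # SSE-style error: push the target output toward 1 and all others toward 0.
    targets = np.zeros_like(outputs)
    targets[target_idx] = 1.0
    sse_error = targets - outputs

    # Lazy-training-style error: produce an error signal only when the target
    # output does not exceed the highest competing output by `margin`.
    competitors = np.delete(outputs, target_idx)
    highest_competitor = competitors.max()
    lazy_error = np.zeros_like(outputs)
    if outputs[target_idx] < highest_competitor + margin:
        # Nudge the target output up and the strongest competitor down.
        deficit = (highest_competitor + margin) - outputs[target_idx]
        masked = np.where(np.arange(len(outputs)) == target_idx, -np.inf, outputs)
        offender = int(np.argmax(masked))
        lazy_error[target_idx] = deficit
        lazy_error[offender] = -deficit

    # Softprop-style blend of the two error signals.
    return alpha * sse_error + (1.0 - alpha) * lazy_error
```

In this sketch, annealing alpha from SSE-dominated toward lazy-dominated (or vice versa) over training would mirror the explore-exploit flavor described in the abstract; the paper's actual blending schedule may differ.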

Original Publication Citation

Rimer, M., and Martinez, T. R., "Softprop: Softmax Neural Network Backpropagation Learning", Proceedings of the IEEE International Joint Conference on Neural Networks IJCNN'04, pp. 979-984, 2004.

Document Type

Peer-Reviewed Article

Publication Date

2004-07-29

Permanent URL

http://hdl.lib.byu.edu/1877/2440

Publisher

IEEE

Language

English

College

Physical and Mathematical Sciences

Department

Computer Science
