Keywords

backpropagation, learning algorithms, interactive training

Abstract

Backpropagation, similar to most high-order learning algorithms, is prone to overfitting. We address this issue by introducing interactive training (IT), a logical extension to backpropagation training that employs interaction among multiple networks. This method is based on the theory that centralized control is more effective for learning in deep problem spaces in a multi-agent paradigm. IT methods allow networks to work together to form more complex systems while not restraining their individual ability to specialize. Lazy training, an implementation of IT that minimizes misclassification error, is presented. Lazy training discourages overfitting and is conducive to higher accuracy in multiclass problems than standard backpropagation. Experiments on a large, real-world OCR data set have shown interactive training to significantly increase generalization accuracy, from 97.86% to 99.11%. These results are supported by theoretical and conceptual extensions from algorithmic to interactive training models.

Original Publication Citation

Rimer, M., Andersen, T., and Martinez, T. R., "Lazy Training: Improving Backpropagation Learning through Network Interaction", Proceedings of the IEEE International Joint Conference on Neural Networks IJCNN'01, pp. 2007-2012, 2001.

Document Type

Peer-Reviewed Article

Publication Date

2001-07-19

Permanent URL

http://hdl.lib.byu.edu/1877/2429

Publisher

IEEE

Language

English

College

Physical and Mathematical Sciences

Department

Computer Science
