backpropagation, learning algorithms, interactive training
Backpropagation, like most high-order learning algorithms, is prone to overfitting. We address this issue by introducing interactive training (IT), a logical extension to backpropagation training that employs interaction among multiple networks. This method is based on the theory that centralized control is more effective for learning in deep problem spaces in a multi-agent paradigm. IT methods allow networks to work together to form more complex systems without restraining their individual ability to specialize. Lazy training, an implementation of IT that minimizes misclassification error, is presented. Lazy training discourages overfitting and achieves higher accuracy on multiclass problems than standard backpropagation. Experiments on a large, real-world OCR data set show that interactive training significantly increases generalization accuracy, from 97.86% to 99.11%. These results are supported by theoretical and conceptual extensions from algorithmic to interactive training models.
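To make the idea concrete, here is a minimal sketch of a "lazy" update rule in Python. It assumes one plausible reading of the abstract: error is minimized only on misclassified samples, so correctly classified patterns contribute no weight update. The linear classifier, perceptron-style correction, and all function names are illustrative assumptions, not the paper's actual network or update rule.

```python
import numpy as np

def lazy_train_epoch(W, X, y, lr=0.1):
    """One 'lazy' training epoch over a linear multiclass model.

    Hypothetical sketch: a sample triggers a weight update only when
    the current model misclassifies it. Correct samples are skipped,
    which is one way to minimize misclassification error directly
    rather than squared output error.
    """
    for xi, yi in zip(X, y):
        pred = int(np.argmax(W @ xi))      # current model's decision
        if pred != yi:                     # lazy: update only on errors
            W[yi] += lr * xi               # pull true class closer
            W[pred] -= lr * xi             # push wrong winner away
    return W

def accuracy(W, X, y):
    """Fraction of samples whose argmax score matches the label."""
    return float(np.mean(np.argmax(X @ W.T, axis=1) == y))
```

On linearly separable data, a few such epochs drive training accuracy toward 1.0 while leaving already-correct samples untouched, mirroring the abstract's claim that networks are free to specialize without being forced to chase residual error on solved patterns.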
Original Publication Citation
Rimer, M., Andersen, T., and Martinez, T. R., "Lazy Training: Improving Backpropagation Learning through Network Interaction", Proceedings of the IEEE International Joint Conference on Neural Networks IJCNN'01, pp. 2007-2012, 2001.
BYU ScholarsArchive Citation
Andersen, Timothy L.; Martinez, Tony R.; and Rimer, Michael E., "Lazy Training: Improving Backpropagation Learning through Network Interaction" (2001). All Faculty Publications. 1090.
Physical and Mathematical Sciences
Copyright Use Information
© 2001 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.