Keywords

perceptrons, generalization, SLP, weight averaging

Abstract

SLPs (single layer perceptrons) often exhibit reasonable generalization performance on many problems of interest. However, due to the well-known limitations of SLPs, very little effort has been made to improve their performance. This paper proposes a method for improving the performance of SLPs called "wagging" (weight averaging). This method involves training several different SLPs on the same training data and then averaging their weights to obtain a single SLP. The performance of the wagged SLP is compared with that of other, more complex learning algorithms (backpropagation, C4.5, IBL, MML, etc.) on 15 data sets from real-world problem domains. Surprisingly, the wagged SLP has better average generalization performance than any of the other learning algorithms on the problems tested. This result is explained and analyzed. The analysis examines the performance characteristics of the standard delta rule training algorithm for SLPs and the correlation between training and test set scores as training progresses.
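
The sketch below illustrates the wagging idea described in the abstract: train several SLPs on the same data and average their weights into one network. It is a minimal illustration only; the delta-rule details, hyperparameters (learning rate, epoch count, number of networks), and the toy dataset are assumptions for demonstration and are not taken from the paper.

```python
import numpy as np

def train_slp(X, y, epochs=50, lr=0.1, rng=None):
    """Train one single-layer perceptron with a delta-rule style update; return its weight vector."""
    if rng is None:
        rng = np.random.default_rng()
    n_features = X.shape[1]
    w = rng.normal(scale=0.1, size=n_features + 1)   # +1 for the bias weight
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])    # append a constant bias input of 1
    for _ in range(epochs):
        for xi, ti in zip(Xb, y):
            out = 1.0 / (1.0 + np.exp(-xi @ w))      # sigmoid output of the SLP
            w += lr * (ti - out) * xi                # delta-rule weight update
    return w

def wag(X, y, n_nets=10, seed=0):
    """Train several SLPs on the same data and average their weights ("wagging")."""
    rng = np.random.default_rng(seed)
    weights = [train_slp(X, y, rng=np.random.default_rng(int(rng.integers(1 << 31))))
               for _ in range(n_nets)]
    return np.mean(weights, axis=0)                  # weights of the single wagged SLP

def predict(w, X):
    """Threshold the wagged SLP's net input to get class labels."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return (Xb @ w > 0).astype(int)

if __name__ == "__main__":
    # Toy linearly separable data, purely for demonstration.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    w_avg = wag(X, y)
    print("training accuracy of wagged SLP:", (predict(w_avg, X) == y).mean())
```

Each SLP differs only in its random initial weights (and the order-dependent updates that follow), so averaging their weight vectors yields a single network of the same size as any one of them.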

Original Publication Citation

Andersen, T. L. and Martinez, T. R., "The Little Neuron that Could", Proceedings of the IEEE International Joint Conference on Neural Networks IJCNN'99, CD paper #191, 1999.

Document Type

Peer-Reviewed Article

Publication Date

1999-07-16

Permanent URL

http://hdl.lib.byu.edu/1877/2444

Publisher

IEEE

Language

English

College

Physical and Mathematical Sciences

Department

Computer Science
