Keywords
perceptron networks, binary trees, genetic algorithm
Abstract
This paper presents a new method for training multi-layer perceptron networks called DMP1 (Dynamic Multilayer Perceptron 1). The method is based upon a divide-and-conquer approach which builds networks in the form of binary trees, dynamically allocating nodes and layers as needed. The individual nodes of the network are trained using a genetic algorithm. The method is capable of handling real-valued inputs, and a proof is given concerning the convergence properties of the basic model. Simulation results show that DMP1 performs favorably in comparison with other learning algorithms.
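The details of DMP1 are given in the paper itself; purely as a hypothetical sketch of the general flavor the abstract describes (a threshold node whose weights are trained by a simple genetic algorithm, with child nodes allocated dynamically only where a node's split still mixes the two classes), one might write something like the following. All function names, GA parameters, and the tree representation here are illustrative assumptions, not the published algorithm.

```python
import random

def perceptron_output(weights, x):
    # Threshold unit: last weight is the bias term.
    s = weights[-1] + sum(w * xi for w, xi in zip(weights, x))
    return 1 if s > 0 else 0

def fitness(weights, data):
    # GA fitness: number of correctly classified examples.
    return sum(1 for x, y in data if perceptron_output(weights, x) == y)

def ga_train_node(data, dim, pop_size=30, generations=60, seed=0):
    # Train one node's weight vector with a simple elitist GA
    # (one-point crossover plus Gaussian mutation) -- an assumed setup.
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim + 1)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: -fitness(w, data))
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, dim + 1)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:  # mutate one gene
                i = rng.randrange(dim + 1)
                child[i] += rng.gauss(0, 0.5)
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda w: fitness(w, data))

def majority(data):
    ones = sum(y for _, y in data)
    return 1 if ones * 2 >= len(data) else 0

def grow(data, dim, depth=0, max_depth=4):
    # Dynamically allocate nodes: stop at pure subsets or the depth limit,
    # otherwise train a node and recurse on each side of its split.
    labels = {y for _, y in data}
    if len(labels) == 1 or depth == max_depth:
        return {"leaf": majority(data)}
    w = ga_train_node(data, dim, seed=depth)
    sides = {s: [(x, y) for x, y in data
                 if perceptron_output(w, x) == s] for s in (0, 1)}
    if not sides[0] or not sides[1]:
        # Node failed to separate anything; stop growing here.
        return {"leaf": majority(data)}
    return {"w": w,
            0: grow(sides[0], dim, depth + 1, max_depth),
            1: grow(sides[1], dim, depth + 1, max_depth)}

def predict(node, x):
    # Route the input down the binary tree to a leaf label.
    while "leaf" not in node:
        node = node[perceptron_output(node["w"], x)]
    return node["leaf"]
```

For instance, grown on the XOR problem (which a single perceptron cannot solve), the tree allocates child nodes under the root until each leaf's training subset is pure or the depth limit is reached.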
Original Publication Citation
Andersen, T. and Martinez, T. R., "A Provably Convergent Dynamic Training Method for Multi-layer Perceptron Networks", Proceedings of the 2nd International Symposium on Neuroinformatics and Neurocomputers, pp. 77-84, 1995.
BYU ScholarsArchive Citation
Andersen, Timothy L. and Martinez, Tony R., "A Provably Convergent Dynamic Training Method for Multi-layer Perceptron Networks" (1995). Faculty Publications. 1158.
https://scholarsarchive.byu.edu/facpub/1158
Document Type
Peer-Reviewed Article
Publication Date
1995-09-23
Permanent URL
http://hdl.lib.byu.edu/1877/2411
Publisher
IEEE
Language
English
College
Physical and Mathematical Sciences
Department
Computer Science
Copyright Status
© 1995 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Copyright Use Information
http://lib.byu.edu/about/copyright/