Keywords

perceptron networks, binary trees, genetic algorithm

Abstract

This paper presents a new method for training multi-layer perceptron networks called DMP1 (Dynamic Multilayer Perceptron 1). The method is based upon a divide and conquer approach which builds networks in the form of binary trees, dynamically allocating nodes and layers as needed. The individual nodes of the network are trained using a genetic algorithm. The method is capable of handling real-valued inputs, and a proof is given concerning the convergence properties of the basic model. Simulation results show that DMP1 performs favorably in comparison with other learning algorithms.
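The abstract's description of training individual nodes with a genetic algorithm can be made concrete with a small sketch. The following is a minimal illustration, not the authors' DMP1 implementation: the population size, selection scheme, mutation scale, and toy dataset are all assumptions chosen only to show how a genetic algorithm might evolve the weights of a single threshold node over real-valued inputs.

```python
# Minimal sketch: evolving one perceptron node's weights with a genetic
# algorithm. Illustrative only; not the DMP1 procedure from the paper.
import random


def perceptron_output(weights, bias, x):
    """Threshold unit: output 1 if the weighted sum plus bias is positive."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0


def fitness(individual, data):
    """Fraction of training examples the candidate node classifies correctly."""
    weights, bias = individual
    correct = sum(1 for x, target in data
                  if perceptron_output(weights, bias, x) == target)
    return correct / len(data)


def mutate(individual, scale=0.1):
    """Gaussian perturbation of each weight and the bias (assumed operator)."""
    weights, bias = individual
    return ([w + random.gauss(0, scale) for w in weights],
            bias + random.gauss(0, scale))


def crossover(parent_a, parent_b):
    """Uniform crossover: each weight is taken from either parent at random."""
    wa, ba = parent_a
    wb, bb = parent_b
    weights = [random.choice(pair) for pair in zip(wa, wb)]
    return (weights, random.choice((ba, bb)))


def train_node(data, n_inputs, pop_size=30, generations=100):
    """Evolve a population of (weights, bias) candidates for a single node."""
    population = [([random.uniform(-1, 1) for _ in range(n_inputs)],
                   random.uniform(-1, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda ind: fitness(ind, data), reverse=True)
        survivors = population[:pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=lambda ind: fitness(ind, data))


if __name__ == "__main__":
    # Toy linearly separable problem with real-valued inputs (assumed data).
    data = [((0.1, 0.2), 0), ((0.9, 0.8), 1), ((0.2, 0.9), 1), ((0.8, 0.1), 0)]
    weights, bias = train_node(data, n_inputs=2)
    print("training accuracy:", fitness((weights, bias), data))
```

In the DMP1 scheme described above, a node trained in this way would become one element of a dynamically grown binary tree; the sketch covers only the per-node training step.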

Original Publication Citation

Andersen, T. and Martinez, T. R., "A Provably Convergent Dynamic Training Method for Multi-layer Perceptron Networks", Proceedings of the 2nd International Symposium on Neuroinformatics and Neurocomputers, pp. 77-84, 1995.

Document Type

Peer-Reviewed Article

Publication Date

1995-09-23

Permanent URL

http://hdl.lib.byu.edu/1877/2411

Publisher

IEEE

Language

English

College

Physical and Mathematical Sciences

Department

Computer Science
