Abstract

This work presents a new set of general methods for improving neural network accuracy on classification tasks, grouped under the label of classification-based methods. The central theme of these approaches is to provide problem representations and error functions that improve classification accuracy more directly than conventional learning algorithms and error functions do. The CB1 algorithm attempts to maximize classification accuracy by selectively backpropagating error only on misclassified training patterns. CB2 incorporates a sliding error threshold into the CB1 algorithm, interpolating between the behavior of CB1 and standard error backpropagation as training progresses in order to avoid prematurely saturated network weights. CB3 learns a confidence threshold for each combination of training pattern and output class, modeling an error function on the performance of the network as it trains in order to avoid local overfitting and premature weight saturation. PL1 is a point-wise local binning algorithm used to calibrate a learning model to output more accurate posterior probabilities; it is used to improve the reliability of classification-based networks while retaining their higher degree of classification accuracy. These approaches are shown to be robust to a variety of learning parameter settings and to achieve better classification accuracy than standard approaches on a variety of applications, such as OCR and speech recognition.
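The core idea behind CB1 — backpropagating error only on misclassified training patterns — can be illustrated with a minimal sketch. The code below trains a simple softmax classifier, but skips the gradient update for any pattern the model already classifies correctly; this is an illustrative toy, not the dissertation's implementation, and the function names and data are hypothetical.

```python
# Illustrative sketch of CB1-style selective error backpropagation:
# a softmax regression model that backpropagates cross-entropy error
# only on the training patterns it currently misclassifies.
import numpy as np

def train_cb1(X, y, n_classes, lr=0.1, epochs=500, seed=0):
    """Train a linear softmax classifier, updating weights only on errors."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(X.shape[1], n_classes))
    b = np.zeros(n_classes)
    for _ in range(epochs):
        logits = X @ W + b
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        wrong = probs.argmax(axis=1) != y      # CB1: select misclassified patterns
        if not wrong.any():
            break                              # every pattern classified correctly
        grad = probs[wrong].copy()             # dLoss/dlogits for cross-entropy
        grad[np.arange(wrong.sum()), y[wrong]] -= 1.0
        W -= lr * X[wrong].T @ grad / wrong.sum()
        b -= lr * grad.mean(axis=0)
    return W, b

def predict(W, b, X):
    """Return the predicted class index for each row of X."""
    return (X @ W + b).argmax(axis=1)
```

Because correctly classified patterns contribute no error, training pressure is concentrated on the decision boundary rather than on driving outputs toward saturated target values.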

Degree

PhD

College and Department

Physical and Mathematical Sciences; Computer Science

Rights

http://lib.byu.edu/about/copyright/

Date Submitted

2007-09-05

Document Type

Dissertation

Handle

http://hdl.lib.byu.edu/1877/etd2094

Keywords

machine learning, artificial neural networks, back-propagation, classification, objective functions, learning algorithms
