The following work presents a new set of general methods for improving neural network accuracy on classification tasks, grouped under the label of classification-based methods. The central theme of these approaches is to provide problem representations and error functions that improve classification accuracy more directly than conventional learning algorithms and error functions do. The CB1 algorithm attempts to maximize classification accuracy by selectively backpropagating error only on misclassified training patterns. CB2 adds a sliding error threshold to CB1, interpolating between the behavior of CB1 and standard error backpropagation as training progresses in order to avoid prematurely saturated network weights. CB3 learns a confidence threshold for each combination of training pattern and output class, modeling an error function on the network's performance as it trains in order to avoid local overfitting and premature weight saturation. PL1 is a point-wise local binning algorithm used to calibrate a learned model to output more accurate posterior probabilities. This algorithm is used to improve the reliability of classification-based networks while retaining their higher degree of classification accuracy. These approaches are demonstrated to be robust to a variety of learning parameter settings and to yield better classification accuracy than standard approaches on a variety of applications, such as OCR and speech recognition.
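The core idea behind CB1, as summarized above, is that error is backpropagated only for misclassified patterns: when the target class already wins the output comparison, no weight update is made. The following is a minimal illustrative sketch of such a selective error signal, not the thesis's actual implementation; the function name, the use of raw output activations, and the choice to penalize only outputs that meet or exceed the target are assumptions made for this example.

```python
import numpy as np

def cb1_style_error(outputs, target_idx):
    """Sketch of a CB1-style selective error signal (illustrative, not the
    thesis code). Returns a per-output error vector that is all zeros when
    the pattern is already classified correctly, so no error would be
    backpropagated for it."""
    err = np.zeros_like(outputs)
    t = outputs[target_idx]
    # Highest competing (non-target) output activation.
    competitor = np.max(np.delete(outputs, target_idx))
    if t <= competitor:
        # Misclassified (or tied): push the target output up toward the
        # top competitor, and push down each output that beats the target.
        err[target_idx] = competitor - t
        for j in range(len(outputs)):
            if j != target_idx and outputs[j] >= t:
                err[j] = t - outputs[j]
    return err
```

In this sketch, a correctly classified pattern (target output strictly highest) contributes zero error, which is what lets CB1-style training focus its weight updates on the patterns the network still gets wrong.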
College and Department
Physical and Mathematical Sciences; Computer Science
BYU ScholarsArchive Citation
Rimer, Michael Edwin, "Improving Neural Network Classification Training" (2007). Theses and Dissertations. 1194.
Keywords
machine learning, artificial neural networks, back-propagation, classification, objective functions, learning algorithms