Keywords
learning algorithm, generalization, bias
Abstract
In order to be useful, a learning algorithm must be able to generalize well when faced with inputs not previously presented to the system. A bias is necessary for any generalization, and as shown by several researchers in recent years, no bias can lead to strictly better generalization than any other when summed over all possible functions or applications. This paper provides examples to illustrate this fact, but also explains how a bias or learning algorithm can be “better” than another in practice when the probability of the occurrence of functions is taken into account. It shows how domain knowledge and an understanding of the conditions under which each learning algorithm performs well can be used to increase the probability of accurate generalization, and identifies several of the conditions that should be considered when attempting to select an appropriate bias for a particular problem.
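The abstract's central claim, that no bias generalizes better than any other when averaged over all possible functions, can be illustrated with a small sketch (not from the paper itself): enumerate every Boolean function on two inputs, hold one input out, and compare two arbitrary biases. The learner names (`always_zero`, `majority`) are hypothetical choices for illustration.

```python
from itertools import product

# All input vectors for n = 2 Boolean inputs; hold the last one out for testing.
inputs = list(product([0, 1], repeat=2))
train, test = inputs[:3], inputs[3:]

# Every possible target function: 2^(2^2) = 16 mappings from input to output.
all_functions = [dict(zip(inputs, outs))
                 for outs in product([0, 1], repeat=len(inputs))]

def mean_test_accuracy(learner):
    """Average accuracy of `learner` on the held-out input, summed over all targets."""
    total = 0.0
    for f in all_functions:
        train_data = {x: f[x] for x in train}
        correct = sum(learner(train_data, x) == f[x] for x in test)
        total += correct / len(test)
    return total / len(all_functions)

# Two different biases: always predict 0, or predict the majority training label.
always_zero = lambda train_data, x: 0
majority = lambda train_data, x: int(2 * sum(train_data.values()) > len(train_data))

# Averaged over all functions, both biases score exactly 50% on unseen inputs.
print(mean_test_accuracy(always_zero), mean_test_accuracy(majority))  # → 0.5 0.5
```

The outputs match because, over the uniform distribution of target functions, the label of an unseen input is independent of the training data, which is precisely why a bias can only be "better" once the probability of each function occurring in practice is taken into account.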
Original Publication Citation
Wilson, D. R. and Martinez, T. R., "Bias and the Probability of Generalization", Proceedings of the 1997 International Conference on Intelligent Information Systems, pp. 108-114, 1997.
BYU ScholarsArchive Citation
Martinez, Tony R. and Wilson, D. Randall, "Bias and the Probability of Generalization" (1997). Faculty Publications. 1138.
https://scholarsarchive.byu.edu/facpub/1138
Document Type
Peer-Reviewed Article
Publication Date
1997-12-10
Permanent URL
http://hdl.lib.byu.edu/1877/2415
Publisher
IEEE
Language
English
College
Physical and Mathematical Sciences
Department
Computer Science
Copyright Status
© 1997 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Copyright Use Information
http://lib.byu.edu/about/copyright/