Keywords
task localization, Bayesian, machine learning
Abstract
In a reinforcement learning task library system for Multiple Goal Markov Decision Processes (MGMDPs), localization in the task space allows the agent to determine whether a given task is already in its library, so that it can exploit previously learned experience. Task localization in MGMDPs can be accomplished through a Bayesian approach; however, a naive approach fails when the rewards are not normally distributed. This can be overcome through our Bayesian Task Localization Technique (BTLT).
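The naive Bayesian approach mentioned above can be sketched as follows: model each library task's rewards with a Gaussian, then compute the posterior probability that newly observed rewards were generated by each stored task. All names, the Gaussian reward model, and the example library here are illustrative assumptions, not the BTLT algorithm from the paper; this is precisely the normality assumption the abstract says can fail.

```python
import math

def gaussian_loglik(rewards, mu, sigma):
    """Log-likelihood of observed rewards under an assumed Gaussian model."""
    return sum(
        -0.5 * math.log(2 * math.pi * sigma ** 2)
        - (r - mu) ** 2 / (2 * sigma ** 2)
        for r in rewards
    )

def localize(observed_rewards, library, prior=None):
    """Posterior over library tasks given observed rewards.

    `library` maps a task name to hypothetical (mu, sigma) reward
    parameters; `prior` defaults to uniform over the library.
    """
    names = list(library)
    if prior is None:
        prior = {n: 1.0 / len(names) for n in names}
    log_post = {
        n: math.log(prior[n]) + gaussian_loglik(observed_rewards, *library[n])
        for n in names
    }
    # Normalize in log space for numerical stability.
    m = max(log_post.values())
    unnorm = {n: math.exp(lp - m) for n, lp in log_post.items()}
    z = sum(unnorm.values())
    return {n: u / z for n, u in unnorm.items()}

# Hypothetical library of two previously learned tasks.
library = {"taskA": (1.0, 0.5), "taskB": (5.0, 0.5)}
posterior = localize([0.9, 1.2, 1.1], library)
```

With rewards clustered near 1.0, the posterior concentrates on `taskA`; when the true reward distribution is far from Gaussian, this likelihood model can mislocalize, which motivates the more robust treatment in the paper.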
Original Publication Citation
James L. Carroll and Kevin D. Seppi. "A Bayesian Technique for Task Localization in Multiple Goal Markov Decision Processes." In Proceedings of the International Conference on Machine Learning and Applications, Louisville, Kentucky, 2004.
BYU ScholarsArchive Citation
Carroll, James and Seppi, Kevin, "A Bayesian Technique for Task Localization in Multiple Goal Markov Decision Processes" (2004). Faculty Publications. 1034.
https://scholarsarchive.byu.edu/facpub/1034
Document Type
Peer-Reviewed Article
Publication Date
2004-12-18
Permanent URL
http://hdl.lib.byu.edu/1877/2597
Publisher
IEEE
Language
English
College
Physical and Mathematical Sciences
Department
Computer Science
Copyright Status
© 2004 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Copyright Use Information
http://lib.byu.edu/about/copyright/