Keywords
task localization, Bayesian, machine learning
In a reinforcement learning task library system for Multiple Goal Markov Decision Processes (MGMDPs), localization in the task space allows the agent to determine whether a given task is already in its library, so that it can exploit previously learned experience. Task localization in MGMDPs can be accomplished through a Bayesian approach; however, a naive approach fails when the rewards are not normally distributed. This can be overcome through our Bayesian Task Localization Technique (BTLT).
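As a rough illustration of the Bayesian localization idea, the sketch below computes a posterior over which library task generated a set of observed reward samples. The task names, the (mean, std) reward models, and the normal-likelihood assumption are all illustrative inventions for this sketch, not the paper's BTLT; the normal assumption is exactly the one the abstract notes can fail.

```python
import math

# Illustrative sketch only: Bayesian task localization as posterior
# inference over a library of tasks, each modeled (as an assumption of
# this sketch, not the paper's method) by a Normal reward distribution.

def normal_log_likelihood(samples, mean, std):
    """Log-likelihood of reward samples under a Normal(mean, std) model."""
    return sum(
        -0.5 * math.log(2 * math.pi * std ** 2)
        - (x - mean) ** 2 / (2 * std ** 2)
        for x in samples
    )

def localize(samples, library, prior=None):
    """Return posterior P(task | samples) for each task in the library."""
    tasks = list(library)
    if prior is None:  # uniform prior over library tasks by default
        prior = {t: 1.0 / len(tasks) for t in tasks}
    log_post = {
        t: math.log(prior[t]) + normal_log_likelihood(samples, *library[t])
        for t in tasks
    }
    # Normalize via log-sum-exp for numerical stability.
    m = max(log_post.values())
    z = sum(math.exp(v - m) for v in log_post.values())
    return {t: math.exp(v - m) / z for t, v in log_post.items()}

# Hypothetical library of previously learned tasks: (reward mean, std).
library = {"task_A": (1.0, 0.5), "task_B": (-1.0, 0.5)}
posterior = localize([0.9, 1.1, 0.8], library)
```

With the observed rewards clustered near 1.0, the posterior concentrates on `task_A`, so the agent would reuse that task's learned policy rather than learn from scratch.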
Original Publication Citation
James L. Carroll and Kevin D. Seppi. "A Bayesian Technique for Task Localization in Multiple Goal Markov Decision Processes." In Proceedings of the International Conference on Machine Learning and Applications, Louisville, Kentucky, 2004.
BYU ScholarsArchive Citation
Carroll, James and Seppi, Kevin, "A Bayesian Technique for Task Localization in Multiple Goal Markov Decision Processes" (2004). All Faculty Publications. 1034.
Physical and Mathematical Sciences
Copyright Use Information
© 2004 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.