Keywords

task localization, Bayesian, machine learning

Abstract

In a reinforcement learning task-library system for Multiple Goal Markov Decision Processes (MGMDPs), localization in the task space allows the agent to determine whether a given task is already in its library so that it can exploit previously learned experience. Task localization in MGMDPs can be accomplished through a Bayesian approach; however, a trivial approach fails when the rewards are not normally distributed. This can be overcome through our Bayesian Task Localization Technique (BTLT).
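The sketch below is a minimal illustration (not the authors' BTLT) of Bayesian task localization: given rewards sampled from an unknown task, compute a posterior over the tasks already in the library by scoring the observations under each task's reward model. The task names, reward models, and Gaussian likelihood are my assumptions for illustration; the Gaussian model is exactly the naive assumption the abstract notes breaks down when rewards are not normally distributed.

```python
import numpy as np


def gaussian_log_likelihood(observed, mean, std):
    """Log-likelihood of observed rewards under a Gaussian reward model."""
    var = std ** 2
    return np.sum(-0.5 * np.log(2 * np.pi * var) - (observed - mean) ** 2 / (2 * var))


def localize(observed_rewards, library, prior=None):
    """Posterior over library tasks given rewards from an unknown task.

    observed_rewards : array of rewards sampled from the unknown task
    library          : dict mapping task name -> (mean, std) reward model (assumed)
    prior            : optional dict of prior probabilities per task
    """
    names = list(library)
    if prior is None:
        prior = {name: 1.0 / len(names) for name in names}

    log_post = np.array([
        np.log(prior[name]) + gaussian_log_likelihood(observed_rewards, *library[name])
        for name in names
    ])
    # Normalize in log space for numerical stability.
    log_post -= np.max(log_post)
    post = np.exp(log_post)
    post /= post.sum()
    return dict(zip(names, post))


if __name__ == "__main__":
    # Hypothetical two-task library; rewards are drawn from task_A's model,
    # so the posterior should concentrate on task_A.
    library = {"task_A": (1.0, 0.5), "task_B": (-2.0, 1.0)}
    rng = np.random.default_rng(0)
    observed = rng.normal(1.0, 0.5, size=20)
    print(localize(observed, library))
```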

Original Publication Citation

James L. Carroll and Kevin D. Seppi. "A Bayesian Technique for Task Localization in Multiple Goal Markov Decision Processes." In Proceedings of the International Conference on Machine Learning and Applications, Louisville, Kentucky, 2004.

Document Type

Peer-Reviewed Article

Publication Date

2004-12-18

Permanent URL

http://hdl.lib.byu.edu/1877/2597

Publisher

IEEE

Language

English

College

Physical and Mathematical Sciences

Department

Computer Science
