Keywords
reinforcement learning, continuous domain, control
Abstract
We present JoSTLe, an algorithm that performs value iteration on control problems with continuous actions, allowing this useful reinforcement learning technique to be applied to problems where a priori action discretization is inadequate. The algorithm is an extension of a variable resolution technique that works for problems with continuous states and discrete actions. Results are given that indicate that JoSTLe is a promising step toward reinforcement learning in a fully continuous domain.
Original Publication Citation
Christopher K. Monson, David Wingate, Kevin D. Seppi, and Todd S. Peterson. "Variable Resolution Discretization in the Joint Space." In Proceedings of the International Conference on Machine Learning and Applications, Louisville, Kentucky, 2004.
BYU ScholarsArchive Citation
Monson, Christopher K.; Seppi, Kevin; Wingate, David; and Peterson, Todd S., "Variable Resolution Discretization in the Joint Space" (2004). Faculty Publications. 1036.
https://scholarsarchive.byu.edu/facpub/1036
Document Type
Peer-Reviewed Article
Publication Date
2004-12-18
Permanent URL
http://hdl.lib.byu.edu/1877/2607
Publisher
IEEE
Language
English
College
Physical and Mathematical Sciences
Department
Computer Science
Copyright Status
© 2004 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Copyright Use Information
http://lib.byu.edu/about/copyright/