Keywords
equilibrium selection algorithm, reinforcement learning
Abstract
We present an equilibrium selection algorithm for reinforcement learning agents that incrementally adjusts the probability of executing each action based on the desirability of the outcome obtained at the last time step. The algorithm assumes that at least one coordination equilibrium exists and requires that the agents have a heuristic for determining whether the equilibrium was obtained. In deterministic environments with one or more strict coordination equilibria, the algorithm will learn to play an optimal equilibrium as long as the heuristic is accurate. Empirical data demonstrate that the algorithm is also effective in stochastic environments and is able to learn good joint policies when the heuristic’s parameters are estimated during learning, rather than known in advance.
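The abstract describes an action-probability update driven by a desirability heuristic. As a rough illustration, the following minimal Python sketch shows one learning-automata-style version of such an incremental update; the class name, the step-size parameter, and the exact update equations are illustrative assumptions, not the update rule taken from the paper itself.

```python
import random

class IncrementalPolicyAgent:
    """Illustrative sketch (not the paper's exact rule): maintains a
    probability distribution over actions and nudges it toward actions
    whose last outcome the heuristic judged desirable."""

    def __init__(self, n_actions, step_size=0.1):
        self.n = n_actions
        self.alpha = step_size  # hypothetical learning-rate parameter
        self.policy = [1.0 / n_actions] * n_actions

    def act(self):
        # Sample an action from the current probability distribution.
        r, cumulative = random.random(), 0.0
        for a, p in enumerate(self.policy):
            cumulative += p
            if r <= cumulative:
                return a
        return self.n - 1

    def update(self, action, desirable):
        # Shift probability mass toward (or away from) the last action,
        # then renormalize so the policy remains a distribution.
        if desirable:
            self.policy[action] += self.alpha * (1.0 - self.policy[action])
        else:
            self.policy[action] -= self.alpha * self.policy[action]
        total = sum(self.policy)
        self.policy = [p / total for p in self.policy]
```

In a common-interest game, two such agents sharing an accurate desirability heuristic would tend to reinforce joint actions that succeed together, which is the coordination behavior the abstract describes.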
Original Publication Citation
Nancy Fulda and Dan Ventura, "Incremental Policy Learning: An Equilibrium Selection Algorithm for Reinforcement Learning Agents with Common Interests", Proceedings of the International Joint Conference on Neural Networks, pp. 1121-1126, July 2004.
BYU ScholarsArchive Citation
Fulda, Nancy and Ventura, Dan A., "Incremental Policy Learning: An Equilibrium Selection Algorithm for Reinforcement Learning Agents with Common Interests" (2004). Faculty Publications. 432.
https://scholarsarchive.byu.edu/facpub/432
Document Type
Peer-Reviewed Article
Publication Date
2004-07-01
Permanent URL
http://hdl.lib.byu.edu/1877/2524
Publisher
IEEE
Language
English
College
Physical and Mathematical Sciences
Department
Computer Science
Copyright Status
© 2004 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Copyright Use Information
http://lib.byu.edu/about/copyright/