Keywords

equilibrium selection algorithm, reinforcement learning

Abstract

We present an equilibrium selection algorithm for reinforcement learning agents that incrementally adjusts the probability of executing each action based on the desirability of the outcome obtained in the last time step. The algorithm assumes that at least one coordination equilibrium exists and requires that the agents have a heuristic for determining whether or not the equilibrium was obtained. In deterministic environments with one or more strict coordination equilibria, the algorithm will learn to play an optimal equilibrium as long as the heuristic is accurate. Empirical data demonstrate that the algorithm is also effective in stochastic environments and is able to learn good joint policies when the heuristic’s parameters are estimated during learning, rather than known in advance.

Original Publication Citation

Nancy Fulda and Dan Ventura, "Incremental Policy Learning: An Equilibrium Selection Algorithm for Reinforcement Learning Agents with Common Interests", Proceedings of the International Joint Conference on Neural Networks, pp. 1121-1126, July 2004.

Document Type

Peer-Reviewed Article

Publication Date

2004-07-01

Permanent URL

http://hdl.lib.byu.edu/1877/2524

Publisher

IEEE

Language

English

College

Physical and Mathematical Sciences

Department

Computer Science
