Keywords

Q-learning, reinforcement learning, multiagent systems

Abstract

Q-learning is a reinforcement learning algorithm that learns expected utilities for state-action transitions through successive interactions with the environment. The algorithm's simplicity as well as its convergence properties have made it a popular algorithm for study. However, its non-parametric representation of utilities limits its effectiveness in environments with large amounts of perceptual input. For example, in multiagent systems, each agent may need to consider the action selections of its counterparts in order to learn effective behaviors. This creates a joint action space which grows exponentially with the number of agents in the system. In such situations, the Q-learning algorithm quickly becomes intractable. This paper presents a new algorithm, Dynamic Joint Action Perception, which addresses this problem by allowing each agent to dynamically perceive only those joint action distinctions which are relevant to its own payoffs. The result is a smaller joint action space and improved scalability of Q-learning to systems with many agents.
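The exponential growth described in the abstract can be illustrated with a minimal sketch. This is not the authors' Dynamic Joint Action Perception algorithm, only a toy tabular Q-learner whose action space is the full joint action space: with n agents and k actions each, every agent must track k**n joint actions per state, which is the scalability problem the paper targets. All names here are illustrative.

```python
import itertools

def joint_action_space(num_agents, actions_per_agent):
    """All joint actions as tuples: size grows as actions_per_agent ** num_agents."""
    return list(itertools.product(range(actions_per_agent), repeat=num_agents))

def q_update(q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """One standard tabular Q-learning step for a (state, joint-action) pair."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# With 5 agents and 4 individual actions each, the per-state table
# already holds 4**5 = 1024 joint-action entries.
actions = joint_action_space(5, 4)
q = {}
q_update(q, state=0, action=actions[0], reward=1.0, next_state=1, actions=actions)
```

Pruning irrelevant joint-action distinctions, as the paper proposes, shrinks this table and is what restores tractability as the number of agents grows.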

Original Publication Citation

Nancy Fulda and Dan Ventura, "Dynamic Joint Action Perception for Q-Learning Agents", Proceedings of the International Conference on Machine Learning and Applications, pp. 73-78, June 2003.

Document Type

Peer-Reviewed Article

Publication Date

2003-06-01

Permanent URL

http://hdl.lib.byu.edu/1877/2553

Publisher

IEEE

Language

English

College

Physical and Mathematical Sciences

Department

Computer Science
