Keywords

learning agents, optimal behavior, dynamic joint action perception

Abstract

Groups of reinforcement learning agents interacting in a common environment often fail to learn optimal behaviors. Poor performance is particularly common in environments where agents must coordinate with each other to receive rewards and where failed coordination attempts are penalized. This paper studies the effectiveness of the Dynamic Joint Action Perception (DJAP) algorithm on a grid-world rendezvous task with this characteristic. The effects of learning rate, exploration strategy, and training time on algorithm effectiveness are discussed. An analysis of the types of tasks for which DJAP learning is appropriate is also presented.
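The abstract describes a reward structure in which agents are rewarded only for successful coordination and penalized for failed coordination attempts. The following minimal sketch (not taken from the paper) illustrates one hypothetical way such a shared reward could be defined for a two-agent grid-world rendezvous step; the reward values, the "meet" action signal, and the function name are all illustrative assumptions.

    # Illustrative sketch only: a toy shared reward for a two-agent
    # grid-world rendezvous step. Values and signals are hypothetical.
    def rendezvous_reward(pos_a, pos_b, a_meets, b_meets,
                          success_reward=10.0, failure_penalty=-5.0):
        """Return the shared reward for one time step.

        pos_a, pos_b     -- (row, col) grid positions of the two agents
        a_meets, b_meets -- whether each agent chose a 'meet' action
        """
        if a_meets or b_meets:
            if pos_a == pos_b and a_meets and b_meets:
                return success_reward   # both agents meet at the same cell
            return failure_penalty      # a coordination attempt that failed
        return 0.0                      # no attempt: neutral step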

Original Publication Citation

Nancy Fulda and Dan Ventura, "Learning a Rendezvous Task with Dynamic Joint Action Perception", Proceedings of the International Joint Conference on Neural Networks, pp. 627-632, July 2006.

Document Type

Peer-Reviewed Article

Publication Date

2006-07-01

Permanent URL

http://hdl.lib.byu.edu/1877/2525

Publisher

IEEE

Language

English

College

Physical and Mathematical Sciences

Department

Computer Science
