Keywords

autonomous agents, user modeling, agent modeling, action prediction, plan recognition

Abstract

The ability of an agent to adapt online so as to better interact with another agent is a difficult and important problem. The problem becomes even harder when the other agent is a human, since humans learn quickly and behave nondeterministically. In this paper we present a novel method whereby an agent can incrementally learn to predict the actions of another agent (even a human), and thereby learn to interact with that agent more effectively. We take a case-based approach, in which the behavior of the other agent is learned as a set of state-action pairs. We generalize these cases either through continuous k-nearest neighbor or through a modified bounded minimax search. Through our case studies, the technique is empirically shown to require little storage, to learn very quickly, and to be fast and robust in practice. It can accurately predict actions several steps into the future. Our case studies include interactive virtual environments involving mixtures of synthetic agents and humans, with cooperative and/or competitive relationships.
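The case-based prediction scheme the abstract describes can be illustrated with a minimal sketch. The class below is hypothetical (the paper's own formulation, state representation, and distance weighting may differ): observed state-action pairs are stored incrementally as cases, and the other agent's next action is predicted by a distance-weighted k-nearest-neighbor vote over stored states.

```python
import math
from collections import defaultdict

class ActionPredictor:
    """Hypothetical sketch of incremental case-based action prediction.

    Cases are (state, action) pairs; prediction is a distance-weighted
    k-nearest-neighbor vote over the stored states.
    """

    def __init__(self, k=3):
        self.k = k
        self.cases = []  # list of (state_vector, action) pairs

    def observe(self, state, action):
        # Incremental learning: simply store the new case.
        self.cases.append((tuple(state), action))

    def predict(self, state):
        if not self.cases:
            return None
        # Rank stored cases by Euclidean distance to the query state
        # and keep the k nearest.
        nearest = sorted(
            self.cases, key=lambda c: math.dist(c[0], state)
        )[: self.k]
        # Distance-weighted vote over the neighbors' actions.
        votes = defaultdict(float)
        for s, a in nearest:
            votes[a] += 1.0 / (1e-9 + math.dist(s, state))
        return max(votes, key=votes.get)

# Toy usage with invented states and action labels:
p = ActionPredictor(k=2)
p.observe([0.0, 0.0], "flee")
p.observe([1.0, 1.0], "attack")
p.observe([0.9, 1.1], "attack")
print(p.predict([1.0, 0.9]))  # the two nearest cases vote "attack"
```

Because learning is just case storage, the predictor adapts after every single observation, which matches the abstract's claims of fast incremental learning and low storage cost.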

Original Publication Citation

Jonathan Dinerstein, Dan Ventura, and Parris K. Egbert, "Fast and Robust Incremental Action Prediction for Interactive Agents," Computational Intelligence, 21:1, pp. 90-110, (2005).

Document Type

Peer-Reviewed Article

Publication Date

2005-02-01

Permanent URL

http://hdl.lib.byu.edu/1877/2398

Publisher

Wiley-Blackwell

Language

English

College

Physical and Mathematical Sciences

Department

Computer Science
