autonomous agents, user modeling, agent modeling, action prediction, plan recognition
The ability of an agent to adapt on-line so as to better interact with another agent is a difficult and important problem. The problem becomes even harder when the other agent is a human, since humans learn quickly and behave nondeterministically. In this paper we present a novel method by which an agent can incrementally learn to predict the actions of another agent (even a human), and thereby learn to interact with that agent more effectively. We take a case-based approach, in which the behavior of the other agent is learned as a set of state-action pairs. We generalize these cases through either continuous k-nearest neighbor or a modified bounded minimax search. In our case studies, the technique is empirically shown to require little storage, to learn very quickly, and to be fast and robust in practice; it can accurately predict actions several steps into the future. The case studies involve interactive virtual environments with mixtures of synthetic agents and humans in cooperative and/or competitive relationships.
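The case-based idea sketched in the abstract — storing observed (state, action) pairs and generalizing them with continuous k-nearest neighbor — can be illustrated as follows. This is a minimal sketch under assumed details (Euclidean state distance, distance-weighted voting, and the class name `CaseBasedPredictor` are illustrative choices, not the authors' implementation):

```python
import math
from collections import defaultdict

class CaseBasedPredictor:
    """Illustrative sketch of case-based action prediction: observed
    (state, action) cases are stored incrementally, and the other
    agent's next action is predicted by a distance-weighted
    k-nearest-neighbor vote over those cases."""

    def __init__(self, k=3):
        self.k = k
        self.cases = []  # list of (state_tuple, action) pairs

    def observe(self, state, action):
        # Incremental learning: simply append the newly observed case.
        self.cases.append((tuple(state), action))

    def predict(self, state):
        # Rank stored cases by Euclidean distance to the query state
        # (continuous k-NN over the state space).
        if not self.cases:
            return None
        nearest = sorted(self.cases,
                         key=lambda c: math.dist(c[0], state))[:self.k]
        # Distance-weighted vote: closer cases count more.
        votes = defaultdict(float)
        for case_state, action in nearest:
            votes[action] += 1.0 / (math.dist(case_state, state) + 1e-6)
        return max(votes, key=votes.get)
```

For example, after observing that the other agent flees near the origin and attacks near (5, 5), a query close to the origin is predicted as "flee" because its nearest stored cases dominate the weighted vote.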
Original Publication Citation
Jonathan Dinerstein, Dan Ventura, and Parris K. Egbert, "Fast and Robust Incremental Action Prediction for Interactive Agents," Computational Intelligence, 21:1, pp. 90-110, (2005).
BYU ScholarsArchive Citation
Dinerstein, Jonathan; Egbert, Parris K.; and Ventura, Dan A., "Fast and Robust Incremental Action Prediction for Interactive Agents" (2005). All Faculty Publications. 1003.
Physical and Mathematical Sciences
© 2005 Wiley-Blackwell. The definitive version is available at www3.interscience.wiley.com.