Keywords

artificial intelligence, machine learning, virtual agents, lifelike behavior

Abstract

Although many powerful AI and machine learning techniques exist, it remains difficult to quickly create AI for embodied virtual agents that produces visually lifelike behavior. This is important for applications (e.g., games, simulators, interactive displays) where an agent must behave in a manner that appears human-like. We present a novel technique for learning reactive policies that mimic demonstrated human behavior. The user demonstrates the desired behavior by dictating the agent’s actions during an interactive animation. Later, when the agent is to behave autonomously, the recorded data is generalized to form a continuous state-to-action mapping. Combined with an appropriate animation algorithm (e.g., motion capture), the learned policies realize stylized and natural-looking agent behavior. We empirically demonstrate the efficacy of our technique for quickly producing policies which result in lifelike virtual agent behavior.
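As an illustrative aside (not part of the original abstract), the core step described above, generalizing recorded state-action pairs into a continuous state-to-action mapping, can be sketched in code. The snippet below is a minimal, hypothetical example using distance-weighted nearest-neighbor interpolation; the demonstration data, state dimensionality, and interpolation scheme are assumptions for illustration and are not taken from the paper itself.

import numpy as np

class DemonstrationPolicy:
    """Continuous state-to-action mapping learned from demonstrated
    (state, action) pairs via distance-weighted k-nearest neighbors.
    A sketch only; the paper's actual generalization scheme may differ."""

    def __init__(self, states, actions, k=5):
        self.states = np.asarray(states, dtype=float)    # shape (n, state_dim)
        self.actions = np.asarray(actions, dtype=float)  # shape (n, action_dim)
        self.k = k

    def act(self, state):
        """Return an action for an unseen state by blending the actions
        of the k closest demonstrated states."""
        state = np.asarray(state, dtype=float)
        dists = np.linalg.norm(self.states - state, axis=1)
        idx = np.argsort(dists)[: self.k]
        # Inverse-distance weights; small epsilon avoids division by zero.
        w = 1.0 / (dists[idx] + 1e-8)
        w /= w.sum()
        return w @ self.actions[idx]

# Hypothetical usage: states and actions recorded while the user
# dictated the agent's behavior during an interactive animation.
demo_states = np.random.rand(200, 4)    # placeholder demonstration states
demo_actions = np.random.rand(200, 2)   # placeholder demonstrated actions
policy = DemonstrationPolicy(demo_states, demo_actions, k=5)
action = policy.act(np.random.rand(4))  # query the learned reactive policy

In such a scheme, the demonstrated data act as a nonparametric model, so the agent reproduces the demonstrator's style in familiar states while interpolating smoothly in nearby unseen states.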

Original Publication Citation

Jonathan Dinerstein, Parris K. Egbert, and Dan Ventura, "Learning Policies for Embodied Virtual Agents Through Demonstration," Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI).

Document Type

Peer-Reviewed Article

Publication Date

2008-01-01

Permanent URL

http://hdl.lib.byu.edu/1877/2401

Publisher

IJCAI

Language

English

College

Physical and Mathematical Sciences

Department

Computer Science
