Abstract
Language is critical to establishing long-term cooperative relationships among intelligent agents (including people), particularly when the agents' preferences are in conflict. In such scenarios, an agent uses speech to coordinate and negotiate behavior with its partner(s). While recent work has shown that neural language modeling can produce effective speech agents, such algorithms typically accept only previous text as input. However, in relationships among intelligent agents, not all relevant context is expressed in conversation. Thus, in this paper, we propose and analyze an algorithm, called Llumi, that incorporates other forms of context to learn to speak in long-term relationships modeled as repeated games with cheap talk. Llumi combines models of intentionality with neural language modeling techniques to learn speech from data that is relevant to the agent's current context. A user study illustrates that, while imperfect, Llumi does learn context-aware speech in repeated games with cheap talk when partnered with people, including games in which it was not trained. We believe these results are useful in determining how autonomous agents can learn to use speech to facilitate successful human-agent teaming.
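The setting described in the abstract, repeated games with cheap talk, can be made concrete with a small sketch. The code below is not from the thesis and does not implement Llumi; all names (CheapTalkAgent, propose_message, choose_action, play_round) are hypothetical, and these toy agents pick actions at random rather than using models of intentionality or a learned language model. It only illustrates the round structure: each agent sends a non-binding message, then both act and receive payoffs.

```python
import random

# Illustrative sketch only (not from the thesis): one round of a repeated
# game in which each agent sends a cheap-talk message before acting.
class CheapTalkAgent:
    """A toy agent that talks, then acts, in a repeated matrix game."""

    def __init__(self, name, actions):
        self.name = name
        self.actions = actions

    def propose_message(self, history):
        # A real system would condition a language model on the conversation
        # and the game context; here we just announce a randomly chosen plan.
        intended = random.choice(self.actions)
        return f"I plan to play {intended}.", intended

    def choose_action(self, own_message, partner_message, intended):
        # Cheap talk is non-binding: the agent may follow or ignore its message.
        return intended


def play_round(agent_a, agent_b, payoffs, history):
    msg_a, plan_a = agent_a.propose_message(history)
    msg_b, plan_b = agent_b.propose_message(history)
    act_a = agent_a.choose_action(msg_a, msg_b, plan_a)
    act_b = agent_b.choose_action(msg_b, msg_a, plan_b)
    reward_a, reward_b = payoffs[(act_a, act_b)]
    history.append((msg_a, msg_b, act_a, act_b, reward_a, reward_b))
    return reward_a, reward_b


if __name__ == "__main__":
    # A simple 2x2 coordination game: matching actions pays off.
    payoffs = {
        ("A", "A"): (2, 2), ("A", "B"): (0, 0),
        ("B", "A"): (0, 0), ("B", "B"): (1, 1),
    }
    history = []
    alice = CheapTalkAgent("alice", ["A", "B"])
    bob = CheapTalkAgent("bob", ["A", "B"])
    for _ in range(3):
        print(play_round(alice, bob, payoffs, history))
```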
Degree
MS
College and Department
Physical and Mathematical Sciences; Computer Science
Rights
https://lib.byu.edu/about/copyright/
BYU ScholarsArchive Citation
Skaggs, Jonathan Berry, "Language Learning Using Models of Intentionality in Repeated Games with Cheap Talk" (2022). Theses and Dissertations. 9534.
https://scholarsarchive.byu.edu/etd/9534
Date Submitted
2022-05-31
Document Type
Thesis
Handle
http://hdl.lib.byu.edu/1877/etd12365
Keywords
natural language generation, nlp, machine learning, deep learning, natural language processing, chat bot, multi agent systems
Language
English