Mental models and expectation violations in conversational AI interactions
Keywords
Conversational AI, Chatbots, Conversational agents, Engagement
Abstract
Artificial intelligence (AI) is increasingly integrated into many aspects of human life. One prominent form of AI is the conversational agent (CA), such as Siri, Alexa, and the chatbots used for customer service on websites and other information systems. It is widely accepted that humans treat computer systems as social actors. Leveraging this bias, companies sometimes attempt to pass off a CA as a human customer service representative. Beyond the ethical and legal questions surrounding this practice, the benefits and drawbacks of a CA pretending to be human remain unclear due to a lack of research. While more human-like interactions can improve outcomes, users who discover that the CA is not human may react negatively, potentially harming the company's reputation. In this research we use Expectation Violation Theory to explain what happens when users hold high or low expectations of a conversation. We conducted an experiment with 175 participants in which some participants were told they were interacting with a CA while others were told they were interacting with a human. We further divided these groups so that some participants interacted with a CA with low conversational capability while others interacted with one with high conversational capability. The results show that the expectations users form before the interaction change how they evaluate the CA, beyond the actual performance of the CA. These findings provide guidance not only to developers of conversational agents, but also to developers of other technologies whose capabilities users may be uncertain about.
Original Publication Citation
Grimes, G. M., Schuetzler, R. M., & Giboney, J. S. (2021). Mental Models and Expectation Violations in Conversational AI Interactions. Decision Support Systems, 144.
BYU ScholarsArchive Citation
Schuetzler, Ryan M.; Grimes, G. Mark; and Giboney, Justin Scott, "Mental models and expectation violations in conversational AI interactions" (2021). Faculty Publications. 5654.
https://scholarsarchive.byu.edu/facpub/5654
Document Type
Peer-Reviewed Article
Publication Date
2021-05
Permanent URL
http://hdl.lib.byu.edu/1877/8384
Publisher
Decision Support Systems
Language
English
College
Marriott School of Business
Department
Information Systems
Copyright Status
© 2021 Published by Elsevier B.V.
Copyright Use Information
https://lib.byu.edu/about/copyright/