This thesis examines how people accomplish annotation tasks (i.e., labelling data based on its content) while working with an artificial intelligence (AI) system, an arrangement referred to as human-AI teaming. The study reports the results of an interview and observation study of 15 volunteers from the Washington, DC area as they annotated Twitter messages (tweets) about the COVID-19 pandemic. During the interviews, researchers observed the volunteers as they annotated tweets, noting any needs, frustrations, or confusion the volunteers expressed about the task itself or about working with the AI. This research provides the following contributions: 1) an examination of annotation work in a human-AI teaming context; 2) the HATA (human-AI teaming annotation) framework, comprising five key factors that affect how people annotate while working with AI systems: background, task interpretation, training, fatigue, and the annotation system; 3) a set of questions to guide users of the HATA framework as they create or assess their own human-AI annotation teams; 4) design recommendations that give future researchers, designers, and developers guidance on creating a better environment for annotators to work with AI; and 5) implications of the HATA framework when put into practice.
BYU ScholarsArchive Citation
Stevens, Suzanne Ashley, "A Framework for Assessing and Designing Human Annotation Practices in Human-AI Teaming" (2021). Theses and Dissertations. 9128.
Keywords
HATA framework, framework, human-AI teaming, artificial intelligence, collaboration, annotation