Author Date

2024-03-14

Degree Name

BS

Department

Computer Science

College

Physical and Mathematical Sciences

Defense Date

2024-02-26

Publication Date

2024-03-14

First Faculty Advisor

Jacob Crandall

First Faculty Reader

Chris Archibald

Honors Coordinator

Seth Holladay

Keywords

AI, Assumption-Alignment Tracking, AAT, AlegAATr, Explainable AI

Abstract

Autonomous robots are becoming increasingly prevalent in everyday life. For humans to make effective use of automated systems, robots need to understand their limitations across different environment states, recognize when they need user assistance, and communicate their assessments effectively to human users. The ability of AI systems to communicate their decision-making processes is referred to as Explainable AI.

Prior research has examined how a robot should evaluate its proficiency in order to predict how well it can perform a task [1]. This thesis surveys current social science research on how humans explain their own behaviors. These insights are then applied to extend existing work on Assumption-Alignment Tracking (AAT): insights are generated, filtered, and presented to the user in a way that reflects the social science understanding of explanation. The thesis concludes with a discussion of how well these generated insights align with human explanations.
