Abstract

In strategically rich settings where machines and people do not fully share the same preferences, machines must learn to cooperate and compromise with people to establish mutually successful relationships. However, designing machines that cooperate effectively with people in these settings is difficult due to a variety of technical and psychological challenges. To better understand these challenges, we conducted a series of user studies investigating human-human, robot-robot, and human-robot cooperation in a simple yet strategically rich resource-sharing scenario called the Block Dilemma, a game in which players must balance fairness, efficiency, and risk. Both human-human and robot-robot pairs typically learned fair and cooperative solutions over time, but our results show that these solutions tend to differ depending on whether communication is permitted. People followed a less risky but less efficient solution, whereas pairs of robots followed a riskier but more efficient one. This difference between human and machine behavior appears to hinder human-robot cooperation: in our studies, human-robot pairs rarely produced either form of cooperation without communication. These results speak to the need for machine behavior to be better aligned with human behavior. While machines may behave more efficiently and produce better results than people when following their own calculations, they may often facilitate human-machine cooperation better by aligning their behavior with human behavior rather than expecting human behavior to become more efficient.
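To make the fairness, efficiency, and risk trade-off described above concrete, the following is a minimal toy sketch in Python. It is not the Block Dilemma's actual rules, which the abstract does not specify; the strategy names, payoff values, and success probability are illustrative assumptions only. It contrasts a safe turn-taking arrangement (fair, but only one player is paid per round) with a riskier simultaneous-claim arrangement that yields a higher expected payoff.

# Toy illustration (hypothetical; not the actual Block Dilemma rules).
# Two stylized ways a pair can share a contested resource over repeated rounds.

def turn_taking_payoffs(rounds: int, value: float) -> tuple[float, float]:
    """Players alternate; each round exactly one player receives `value`.
    Safe and fair, but only half of the rounds pay off for each player."""
    a = value * ((rounds + 1) // 2)   # player A claims on odd-numbered rounds
    b = value * (rounds // 2)         # player B claims on even-numbered rounds
    return a, b

def simultaneous_payoffs(rounds: int, value: float, p_success: float) -> tuple[float, float]:
    """Both players claim every round; each claim pays off only with
    probability `p_success` (an assumed collision/failure risk)."""
    expected = rounds * value * p_success
    return expected, expected

if __name__ == "__main__":
    rounds, value = 10, 1.0
    safe = turn_taking_payoffs(rounds, value)
    risky = simultaneous_payoffs(rounds, value, p_success=0.8)
    print("turn-taking (safe, less efficient):  ", safe)    # (5.0, 5.0)
    print("simultaneous (risky, more efficient):", risky)   # (8.0, 8.0)

Under these assumed numbers, the simultaneous arrangement yields a higher expected payoff for both players whenever p_success exceeds 0.5, at the cost of round-to-round uncertainty, which mirrors the riskier-but-more-efficient versus safer-but-less-efficient distinction drawn in the abstract.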

Degree

MS

College and Department

Physical and Mathematical Sciences; Computer Science

Rights

https://lib.byu.edu/about/copyright/

Date Submitted

2022-03-21

Document Type

Thesis

Handle

http://hdl.lib.byu.edu/1877/etd12014

Keywords

Cooperation, Block Dilemma, Repeated Games, S#, Communication, Robot

Language

English
