Degree Name
BS
Department
Computer Science
College
Physical and Mathematical Sciences
Defense Date
2024-03-06
Publication Date
2024-03-14
First Faculty Advisor
Christophe Giraud-Carrier
Second Faculty Advisor
Quinn Snell
First Faculty Reader
Carl Hanson
Honors Coordinator
Seth Holladay
Keywords
Deepfake, misinformation, deception, fake videos, generative AI
Abstract
This thesis focuses on deepfakes, a term given to fake videos and images generated by deep learning algorithms and models. Deepfakes pose a considerable threat to society by raising the quality of misinformation while lowering the skill and effort required to produce it. Deepfakes threaten to undermine democratic societies by swaying public opinion through misinformation. While many researchers are working to develop automated tools to combat deepfakes, this thesis used a 10-item, IRB-approved survey to examine whether two separate interventions could improve an individual's ability to recognize deepfakes. Demographic differences in recognizing deepfakes were also explored. The survey results showed that while younger participants responded positively to the interventions, older participants reacted adversely to them. Older participants also performed significantly worse at recognizing deepfakes.
BYU ScholarsArchive Citation
Mumford, Jeremy, "IMPROVING HUMAN RECOGNITION OF DEEPFAKES" (2024). Undergraduate Honors Theses. 346.
https://scholarsarchive.byu.edu/studentpub_uht/346