Abstract

Emotion recognition from facial expressions has been studied extensively for decades, but emotion recognition from vocal expressions remains less well understood. This project builds on previous experimental approaches to create a large audio corpus of acted vocal emotion. With a large enough sample size, both in the number of speakers and the number of recordings per speaker, new hypotheses for differentiating emotions can be explored. Recordings from 131 subjects were collected and made available in an online corpus under a Creative Commons license. Thirteen acoustic features from 120 subjects served as dependent variables in a MANOVA model to differentiate emotions. As a comparison, a simple neural network model was evaluated for its predictive power. Additional recordings intended to exhaust possible ways of expressing emotion were also explored. For each of the four emotions included (anger, fear, happiness, and sadness), this new corpus matches some features reported in previous studies.
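The MANOVA described above can be sketched as follows. This is a minimal illustration of a one-way MANOVA test statistic (Wilks' lambda) with acoustic-feature vectors grouped by emotion; the feature names, values, and group sizes here are invented for the example and are not the corpus data or the dissertation's actual model.

```python
import numpy as np

def wilks_lambda(X, labels):
    """One-way MANOVA test statistic: Wilks' lambda = det(W) / det(W + B),
    where W is the pooled within-group scatter matrix and B the
    between-group scatter matrix. Values near 0 indicate that the group
    (emotion) mean feature vectors differ strongly."""
    X = np.asarray(X, dtype=float)
    grand_mean = X.mean(axis=0)
    p = X.shape[1]
    W = np.zeros((p, p))
    B = np.zeros((p, p))
    for g in np.unique(labels):
        Xg = X[labels == g]
        mg = Xg.mean(axis=0)
        centered = Xg - mg
        W += centered.T @ centered                      # within-group scatter
        d = (mg - grand_mean).reshape(-1, 1)
        B += Xg.shape[0] * (d @ d.T)                    # between-group scatter
    return np.linalg.det(W) / np.linalg.det(W + B)

# Illustrative data: three hypothetical acoustic features (e.g. mean pitch,
# intensity, speech rate) for two acted emotions with shifted group means.
rng = np.random.default_rng(0)
anger = rng.normal([220.0, 65.0, 5.2], 1.0, size=(50, 3))
sadness = rng.normal([180.0, 55.0, 3.1], 1.0, size=(50, 3))
X = np.vstack([anger, sadness])
labels = np.array(["anger"] * 50 + ["sadness"] * 50)
print(wilks_lambda(X, labels))  # small value: the feature means separate the groups
```

In practice the dissertation's model would use all thirteen features and all four emotion groups; a significance test for Wilks' lambda (e.g. via an F approximation) would follow the statistic computed here.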

Degree

PhD

College and Department

Family, Home, and Social Sciences; Psychology

Rights

https://lib.byu.edu/about/copyright/

Date Submitted

2021-04-01

Document Type

Dissertation

Handle

http://hdl.lib.byu.edu/1877/etd11561

Keywords

emotion, vocal emotion, acting, psychology, neural networks, artificial intelligence, voice

Language

English
