Emotion recognition from facial expressions has been thoroughly explored through decades of research, but emotion recognition from vocal expressions remains less well understood. This project builds on previous experimental approaches to create a large audio corpus of acted vocal emotion. With a large enough sample, both in number of speakers and in number of recordings per speaker, new hypotheses for differentiating emotions can be explored. Recordings from 131 subjects were collected and made available in an online corpus under a Creative Commons license. Thirteen acoustic features from 120 of those subjects served as dependent variables in a MANOVA model for differentiating emotions; for comparison, a simple neural network model was evaluated for its predictive power. Additional recordings, intended to exhaust the possible ways of expressing an emotion, are also explored. The new corpus matches some acoustic features reported in previous studies for each of the four emotions included (anger, fear, happiness, and sadness).
College and Department
Family, Home, and Social Sciences
BYU ScholarsArchive Citation
Kowallis, Logan Ricks, "How Many Ways Can You Vocalize Emotion? Introducing an Audio Corpus of Acted Emotion" (2021). Theses and Dissertations. 8921.
Keywords
emotion, vocal emotion, acting, psychology, neural networks, artificial intelligence, voice