Abstract
Most research investigating vocal expressions of emotion has focused on one or more of three questions: whether unique acoustic profiles exist for individual encoded emotions, whether emotion expression is universal across cultures, and how accurately decoders can identify expressed emotions. This dissertation begins to answer a fourth question: whether unique patterns exist in the types of acoustic properties decoders focus on to identify vocalized emotions. Three hypotheses were tested: first, whether acoustic patterns are interpreted idiographically or nomothetically, as reflected in a comparison of individual versus group lens-model identification ratios; second, whether there is a decoder-by-emotion interaction in accuracy scores; and third, whether such an interaction is mediated by the acoustic properties of the vocalized emotions. Results for the first hypothesis indicate no difference between individual and group identification ratios, demonstrating that vocalized emotions are decoded nomothetically. Results for the second hypothesis indicate no significant decoder-by-emotion interaction in accuracy scores, demonstrating that decoders who are generally good (or poor) at identifying some vocalized emotions tend to be generally good (or poor) at identifying all vocalized emotions. There are, however, significant main effects for both emotion and decoder: anger and happiness are decoded more accurately than fear and sadness. Perhaps most importantly, multivariate results for the third hypothesis indicate strong and consistent differences across the four emotions in how they are identified acoustically. Specifically, decoders identify anger primarily by spectral characteristics, fear primarily by fundamental frequency (F0), happiness primarily by rate, and sadness by both intensity and rate. These acoustic mediation differences across emotions are also shown to be nomothetic; that is, they are surprisingly consistent across decoders.
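The dissertation's own feature-extraction procedure is not reproduced here, but as a rough illustration, the four acoustic cue families the abstract names (F0, intensity, rate, and spectral characteristics) can be approximated with standard audio tools. The Python sketch below uses the librosa library; the function name acoustic_profile, the onsets-per-second rate proxy, and the choice of spectral centroid as the spectral summary are illustrative assumptions, not the author's measures.

import librosa
import numpy as np

def acoustic_profile(path):
    # Hypothetical sketch, not the dissertation's method: summarize the four
    # acoustic cue families the abstract names for one vocal recording.
    y, sr = librosa.load(path, sr=None)

    # Fundamental frequency (F0), the cue linked to fear: mean over voiced frames.
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    mean_f0 = float(np.nanmean(f0))  # pyin returns NaN for unvoiced frames

    # Intensity (linked, with rate, to sadness): mean RMS energy in dB.
    rms = librosa.feature.rms(y=y)
    mean_intensity_db = float(np.mean(librosa.amplitude_to_db(rms, ref=np.max)))

    # Rate (linked to happiness and sadness): onsets per second as a crude
    # proxy for speaking/articulation rate.
    onsets = librosa.onset.onset_detect(y=y, sr=sr)
    rate_proxy = float(len(onsets) / librosa.get_duration(y=y, sr=sr))

    # Spectral characteristics (linked to anger): mean spectral centroid as a
    # simple summary of where spectral energy is concentrated.
    centroid = float(np.mean(librosa.feature.spectral_centroid(y=y, sr=sr)))

    return {
        "mean_f0_hz": mean_f0,
        "mean_intensity_db": mean_intensity_db,
        "rate_onsets_per_s": rate_proxy,
        "spectral_centroid_hz": centroid,
    }

A lens-model analysis of the kind the abstract describes would then relate such encoder-side measures to each decoder's emotion judgments; that modeling step is beyond this sketch.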
Degree
PhD
College and Department
Family, Home, and Social Sciences; Psychology
Rights
http://lib.byu.edu/about/copyright/
BYU ScholarsArchive Citation
Lauritzen, Michael Kenneth, "Acoustic Mediation of Vocalized Emotion Identification: Do Decoders Identify Emotions Idiographically or Nomothetically?" (2009). Theses and Dissertations. 1993.
https://scholarsarchive.byu.edu/etd/1993
Date Submitted
2009-12-14
Document Type
Dissertation
Handle
http://hdl.lib.byu.edu/1877/etd3352
Keywords
Emotion, Vocal, Decoding, Acoustic, Properties, Identification
Language
English