Keywords
automatic music generation, emotive response
Abstract
We present a system that generates original music designed to match a target emotion. It creates n-gram models, Hidden Markov Models, and other statistical distributions based on musical selections from a corpus representing a given emotion and uses these models to probabilistically generate new musical selections with similar emotional content. The system produces unique and often remarkably musical selections that tend to match a target emotion, performing the task at a level approaching human competency.
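The paper itself contains no code; as a rough illustration of the kind of corpus-driven, probabilistic generation the abstract describes, the sketch below trains a first-order Markov (bigram) model over pitches from an emotion-labeled corpus and samples a new melody from it. The corpus format, function names, and fallback behavior are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): a first-order Markov
# (bigram) model over pitches, trained on melodies from one emotion label
# and sampled to produce a new pitch sequence.
import random
from collections import defaultdict

def train_bigram_model(melodies):
    """Count pitch-to-pitch transitions across a list of pitch sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for melody in melodies:
        for prev, curr in zip(melody, melody[1:]):
            counts[prev][curr] += 1
    # Normalize counts into transition probabilities.
    model = {}
    for prev, nxt in counts.items():
        total = sum(nxt.values())
        model[prev] = {pitch: c / total for pitch, c in nxt.items()}
    return model

def generate_melody(model, start_pitch, length=16, seed=None):
    """Sample a pitch sequence by walking the transition distribution."""
    rng = random.Random(seed)
    melody = [start_pitch]
    for _ in range(length - 1):
        dist = model.get(melody[-1])
        if not dist:  # dead end: fall back to the starting pitch
            dist = model.get(start_pitch, {start_pitch: 1.0})
        pitches, probs = zip(*dist.items())
        melody.append(rng.choices(pitches, weights=probs, k=1)[0])
    return melody

# Hypothetical emotion-labeled corpus: MIDI pitch numbers from "joyful" pieces.
joyful_corpus = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [60, 64, 67, 72, 67, 64, 60],
]
model = train_bigram_model(joyful_corpus)
print(generate_melody(model, start_pitch=60, length=12, seed=1))
```

The actual system described in the paper combines several such statistical models (n-grams, HMMs, and others) rather than a single bigram chain; this sketch only shows the basic sample-from-learned-distributions idea.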
Original Publication Citation
Kristine Monteith, Tony Martinez and Dan Ventura, "Automatic Generation of Music for Inducing Emotive Response", Proceedings of the International Conference on Computational Creativity, pp. 140-149, 2010.
BYU ScholarsArchive Citation
Martinez, Tony R.; Monteith, Kristine; and Ventura, Dan A., "Automatic Generation of Music for Inducing Emotive Response" (2010). Faculty Publications. 830.
https://scholarsarchive.byu.edu/facpub/830
Document Type
Peer-Reviewed Article
Publication Date
2010-01-01
Permanent URL
http://hdl.lib.byu.edu/1877/2646
Publisher
Computational Creativity
Language
English
College
Physical and Mathematical Sciences
Department
Computer Science
Copyright Status
© 2010 Dan Ventura et al.
Copyright Use Information
http://lib.byu.edu/about/copyright/