Abstract
Procedural generation algorithms can infer rules from a dataset of examples when each example is made up of labeled components. Unfortunately, musical sequences resist inclusion in such datasets because they lack explicit structural semantics. To algorithmically transform a musical sequence into a sequence of labeled components, a segmentation process is needed. We outline a solution to the challenge of musical phrase segmentation that uses grammatical induction algorithms, a class of algorithms that infer a context-free grammar from an input sequence. We study five grammatical induction algorithms on three datasets, one of which is introduced in this work. Additionally, we test how the performance of each algorithm varies when transforming musical sequences using viewpoint combinations. Our experiments show that the algorithm longestFirst achieves the best F1 scores across all three datasets, and that viewpoint combinations which include the duration viewpoint result in the best performance.
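The keywords below name LZ78 among the induction algorithms studied. As a minimal, hypothetical sketch (not the thesis pipeline), the snippet shows how an LZ78-style parse of a symbol sequence already yields a crude segmentation: each emitted phrase boundary can be read as a candidate segment boundary. The toy `melody` encoding and function name are illustrative assumptions.

```python
def lz78_phrases(sequence):
    """Split a symbol sequence into LZ78 phrases.

    Each phrase is the shortest prefix of the remaining input that has not
    been emitted as a phrase before; phrase boundaries can serve as candidate
    segmentation points. Illustrative sketch only, not the thesis pipeline.
    """
    seen = set()      # phrases emitted so far
    phrases = []
    current = ()
    for symbol in sequence:
        current = current + (symbol,)
        if current not in seen:
            seen.add(current)
            phrases.append(current)
            current = ()
    if current:       # trailing symbols that repeat an earlier phrase
        phrases.append(current)
    return phrases


if __name__ == "__main__":
    # A toy "melody" encoded as pitch symbols.
    melody = list("ABABCABCD")
    print(lz78_phrases(melody))
    # [('A',), ('B',), ('A', 'B'), ('C',), ('A', 'B', 'C'), ('D',)]
```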
Degree
MS
College and Department
Physical and Mathematical Sciences; Computer Science
Rights
https://lib.byu.edu/about/copyright/
BYU ScholarsArchive Citation
Perkins, Reed James, "Musical Phrase Segmentation via Grammatical Induction" (2022). Theses and Dissertations. 9426.
https://scholarsarchive.byu.edu/etd/9426
Date Submitted
2022-04-06
Document Type
Thesis
Handle
http://hdl.lib.byu.edu/1877/etd12063
Keywords
grammatical induction, LZ78, Sequitur, context-free grammars, phrase segmentation
Language
English