Keywords
elicited imitation, testing, second language
Abstract
This paper explores the role of machine learning in automating the scoring for one kind of spoken language test: elicited imitation (EI). After sketching the background and rationale for EI testing, we give a brief overview of EI test results that we have collected. To date, the administration and scoring of these tests have been done sequentially and the scoring latency has not been critically important; our goal now is to automate the test. We show how this implies the need for an adaptive capability at run time, and motivate the need for machine learning in the creation of this kind of test. We discuss our sizable store of data from prior EI test administrations. Then we show various experiments that illustrate how this prior information is useful in predicting student performance. We present simulations designed to foreshadow how well the system will be able to adapt on-the-fly to student responses. Finally, we draw conclusions and mention possible future work.
Original Publication Citation
Deryle Lonsdale and Carl Christensen. (2011). Automating the scoring of elicited imitation tests. Proceedings of the ACL-HLT/ICML/ISCA Joint Symposium on Machine Learning in Speech and Language Processing. (5 pages).
BYU ScholarsArchive Citation
Lonsdale, Deryle W. and Christensen, Carl, "Automating the Scoring of Elicited Imitation Tests" (2011). Faculty Publications. 6857.
https://scholarsarchive.byu.edu/facpub/6857
Document Type
Conference Paper
Publication Date
2011
Publisher
Machine Learning in Speech and Language Processing
Language
English
College
Humanities
Department
Linguistics
Copyright Use Information
https://lib.byu.edu/about/copyright/