Keywords
elicited imitation, Japanese oral proficiency, item development, speech recognition
Abstract
This study introduces and evaluates a computerized approach to measuring Japanese L2 oral proficiency. We present a testing and scoring method that uses a type of structured speech called elicited imitation (EI) to evaluate the accuracy of learners' speech productions. Several types of language resources and toolkits are required to develop, administer, and score responses to this test. First, we present a corpus-based test item creation method that produces EI items with targeted linguistic features in a principled and efficient manner. Second, we sketch how we bootstrap a small learner speech corpus to generate a substantially larger corpus of training data for language model construction. Lastly, we show how the newly created test items effectively classify learners according to their L2 speaking ability and illustrate how our scoring method computes a language proficiency metric that correlates well with more traditional human scoring methods.
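To make the abstract's final claim concrete, the sketch below illustrates the general kind of comparison involved: scoring each EI response against its target item and then correlating the automated scores with human ratings. This is a minimal, hypothetical example only; the token-level accuracy function, the example sentences, and the human ratings are all invented stand-ins, not the paper's actual ASR-based scoring pipeline or data.

```python
# Illustrative sketch only: a simple token-accuracy score for EI responses and
# its correlation with hypothetical human ratings. Names and data are invented.
from statistics import mean

def token_accuracy(target: str, recognized: str) -> float:
    """Proportion of target tokens recovered, in order, in the recognized
    output (a longest-common-subsequence stand-in for EI accuracy scoring)."""
    t, r = target.split(), recognized.split()
    dp = [[0] * (len(r) + 1) for _ in range(len(t) + 1)]
    for i, tt in enumerate(t, 1):
        for j, rr in enumerate(r, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if tt == rr else max(dp[i-1][j], dp[i][j-1])
    return dp[len(t)][len(r)] / len(t) if t else 0.0

def pearson(xs, ys) -> float:
    """Pearson correlation between automated scores and human ratings."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical (target item, recognized learner production) pairs,
# pre-segmented into space-delimited tokens for simplicity.
responses = [
    ("きのう ともだち と えいが を みました", "きのう ともだち えいが みました"),
    ("あした がっこう へ いきます", "あした がっこう へ いきます"),
    ("ほん を よんで ください", "ほん ください"),
]
human_ratings = [3, 4, 1]  # hypothetical 0-4 human proficiency ratings

auto_scores = [token_accuracy(t, r) for t, r in responses]
print(auto_scores, pearson(auto_scores, human_ratings))
```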
Original Publication Citation
Hitokazu Matsushita and Deryle Lonsdale (2012). Item Development and Scoring for Japanese Oral Proficiency Testing. In (Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Mehmet Uğur Doğan, Bente Maegaard, Joseph Mariani, Jan Odijk, and Stelios Piperidis, Eds.) Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC '12), European Language Resources Association (ELRA); pp. 2682-2689, ISBN 978-2-9517408-7-7.
BYU ScholarsArchive Citation
Lonsdale, Deryle W. and Matsushita, Hitokazu, "Item Development and Scoring for Japanese Oral Proficiency Testing" (2012). Faculty Publications. 6862.
https://scholarsarchive.byu.edu/facpub/6862
Document Type
Conference Paper
Publication Date
2012
Publisher
European Language Resources Association
Language
English
College
Humanities
Department
Linguistics
Copyright Use Information
https://lib.byu.edu/about/copyright/