Keywords
NLP parsers, TESOL-oriented applications, scoring written compositions, dependency-based shallow parsing, English essay rating
Abstract
To date, traditional NLP parsers have not been widely successful in TESOL-oriented applications, particularly in scoring written compositions. Re-engineering such applications to provide the robustness needed for handling ungrammatical English has proven a formidable obstacle. We discuss the use of a non-traditional parser for rating compositions that attenuates some of these difficulties. Its dependency-based shallow parsing approach provides significant robustness in the face of language learners' ungrammatical compositions. This paper discusses how a corpus of L2 English essays was rated using the parser, and how the automatic evaluations compared to those obtained by manual methods. The types of modifications made to the system are discussed. Limitations of the current system are described, future plans for developing the system are sketched, and further applications beyond English essay rating are mentioned.
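As a rough illustration of the underlying idea (and not the paper's actual system), the sketch below uses spaCy's dependency parser as a stand-in: even a clearly ungrammatical learner sentence still receives a head and a relation label for every token, so surface features can be extracted without a full grammatical parse. The example sentence, the model choice, and the "distinct relations" feature are all illustrative assumptions, not elements of the published system.

# A minimal sketch, assuming spaCy as a stand-in dependency parser.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# A hypothetical ungrammatical L2 learner sentence.
sentence = "Yesterday I am go to library for study my lesson."

doc = nlp(sentence)

# Shallow dependency analysis degrades gracefully: every token still
# gets a head and a dependency label, even where the grammar is off.
for token in doc:
    print(f"{token.text:10s} --{token.dep_}--> {token.head.text}")

# A crude illustrative feature: the number of distinct dependency
# relations used, one of many surface measures a rater might combine.
print("distinct relations:", len({t.dep_ for t in doc}))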
Original Publication Citation
Deryle Lonsdale and Diane Strong-Krause (2003). Automated Rating of ESL Essays. Proceedings of the HLT/NAACL-03 Workshop on Building Educational Applications with Natural Language Processing, Edmonton, Canada; Association for Computational Linguistics; pp. 61-67.
BYU ScholarsArchive Citation
Lonsdale, Deryle W. and Strong-Krause, Diane, "Automated Rating of ESL Essays" (2003). Faculty Publications. 6833.
https://scholarsarchive.byu.edu/facpub/6833
Document Type
Conference Paper
Publication Date
2003
Publisher
Association for Computational Linguistics
Language
English
College
Humanities
Department
Linguistics and English Language
Copyright Use Information
https://lib.byu.edu/about/copyright/