Abstract
The purpose of this study is to explore a potentially more practical approach to direct writing assessment using computer algorithms. Traditional rubric rating (RR) is a common evaluation practice, yet it is highly resource-intensive when performed reliably. This study compared the traditional rubric model of ESL writing assessment, analyzed with many-facet Rasch modeling (MFRM), to comparative judgment (CJ), a newer approach that shows promising results in terms of reliability and validity. We employed two groups of raters, novice and experienced, and used essays that had previously been double-rated, analyzed with MFRM, and selected using fit statistics. We compared the results of the novice and experienced groups against the initial ratings using raw scores, MFRM, and a modern form of CJ: randomly distributed comparative judgment (RDCJ). Results showed that the CJ approach, though not appropriate for all contexts, can be valid and as reliable as RR while requiring less time to develop procedures, train and norm raters, and rate the essays. Additionally, the CJ approach transfers more easily to novel assessment tasks while still providing context-specific scores. Results from this study will not only inform future research but can also help guide ESL programs in determining which rating model best suits their specific needs.
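For readers unfamiliar with CJ, the sketch below illustrates one common way pairwise judgments are converted into scale scores: the Bradley-Terry model, a close relative of the Rasch pairwise model often used in CJ research. This is purely illustrative and is not the RDCJ procedure used in the thesis; the function name, essay identifiers, and judgment counts are all hypothetical.

```python
import math

def bradley_terry(wins, items, iterations=200):
    """Estimate Bradley-Terry scale scores from pairwise judgments.

    wins[(a, b)] is the number of comparisons in which essay `a` was
    judged better than essay `b`. Returns log-scale scores (higher is
    better). Assumes every essay wins and loses at least one comparison,
    otherwise the maximum-likelihood estimate does not exist.
    """
    strength = {item: 1.0 for item in items}
    for _ in range(iterations):
        updated = {}
        for i in items:
            # Total comparisons won by essay i.
            total_wins = sum(w for (a, _), w in wins.items() if a == i)
            denom = 0.0
            for j in items:
                if j == i:
                    continue
                n_ij = wins.get((i, j), 0) + wins.get((j, i), 0)
                if n_ij:
                    denom += n_ij / (strength[i] + strength[j])
            updated[i] = total_wins / denom if denom else strength[i]
        # Fix the scale: normalize so the geometric mean strength is 1.
        gmean = math.exp(sum(math.log(v) for v in updated.values()) / len(updated))
        strength = {k: v / gmean for k, v in updated.items()}
    return {k: math.log(v) for k, v in strength.items()}

# Hypothetical data: 3 raters preferred essay_A over essay_B, 1 the reverse, etc.
judgments = {
    ("essay_A", "essay_B"): 3, ("essay_B", "essay_A"): 1,
    ("essay_A", "essay_C"): 4, ("essay_C", "essay_A"): 1,
    ("essay_B", "essay_C"): 2, ("essay_C", "essay_B"): 2,
}
scores = bradley_terry(judgments, ["essay_A", "essay_B", "essay_C"])
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```

In a CJ workflow, raters only ever decide which of two essays is better; the model above turns those binary decisions into an interval-like scale without rubric descriptors or rater norming, which is the practicality advantage the abstract describes.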
Degree
MA
College and Department
Humanities; Linguistics and English Language
BYU ScholarsArchive Citation
Sims, Maureen Estelle, "Rubric Rating with MFRM vs. Randomly Distributed Comparative Judgment: A Comparison of Two Approaches to Second-Language Writing Assessment" (2018). Theses and Dissertations. 7312.
https://scholarsarchive.byu.edu/etd/7312
Date Submitted
2018-04-01
Document Type
Thesis
Handle
http://hdl.lib.byu.edu/1877/etd9777
Keywords
rubric rating, many-facet Rasch measurement model (MFRM), comparative judgment (CJ), reliability of ESL writing assessment, practicality of ESL writing assessment
Language
English