Abstract

The purpose of this study is to explore a potentially more practical approach to direct writing assessment using computer algorithms. Traditional rubric rating (RR) is a common yet highly resource-intensive evaluation practice when performed reliably. This study compared the traditional rubric model of ESL writing assessment and many-facet Rasch modeling (MFRM) with comparative judgment (CJ), a newer approach that has shown promising results in terms of reliability and validity. We employed two groups of raters, novice and experienced, and used essays that had been previously double-rated, analyzed with MFRM, and selected with fit statistics. We compared the results of the novice and experienced groups against the initial ratings using raw scores, MFRM, and a modern form of CJ: randomly distributed comparative judgment (RDCJ). Results showed that the CJ approach, though not appropriate for all contexts, can be valid and as reliable as RR while requiring less time to develop procedures, train and norm raters, and rate the essays. Additionally, the CJ approach is more easily transferable to novel assessment tasks while still providing context-specific scores. Results from this study will not only inform future studies but can also help guide ESL programs in determining which rating model best suits their specific needs.
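For readers unfamiliar with how comparative judgment yields scores, the sketch below is an illustrative example only and is not taken from the thesis: it fits a Bradley-Terry model (a common way of scaling paired comparisons) to hypothetical win counts between essays, producing logit-scale estimates of essay quality. All data and function names here are assumptions for illustration.

```python
# Illustrative sketch: scaling paired comparative judgments with a
# Bradley-Terry model. Hypothetical data; not the thesis's procedure.
import math

def bradley_terry(wins, n_iter=200):
    """Estimate a strength parameter per essay from directed win counts.

    wins[(a, b)] = number of times essay a was judged better than essay b.
    Uses the standard minorization-maximization (MM) update.
    """
    essays = {e for pair in wins for e in pair}
    strength = {e: 1.0 for e in essays}
    for _ in range(n_iter):
        new = {}
        for i in essays:
            # Total wins for essay i.
            num = sum(w for (a, b), w in wins.items() if a == i)
            # Sum of comparison counts involving i, weighted by current strengths.
            den = 0.0
            for (a, b), w in wins.items():
                if i in (a, b):
                    den += w / (strength[a] + strength[b])
            new[i] = num / den if den > 0 else strength[i]
        # Rescale for identifiability (estimates are defined up to a constant).
        total = sum(new.values())
        strength = {e: s * len(essays) / total for e, s in new.items()}
    # Report on a log (logit-like) scale, as CJ results usually are.
    return {e: math.log(s) for e, s in strength.items()}

# Hypothetical judgments: (winner, loser) -> count of such comparisons.
wins = {("essay_A", "essay_B"): 3, ("essay_B", "essay_A"): 1,
        ("essay_A", "essay_C"): 4, ("essay_C", "essay_B"): 2,
        ("essay_B", "essay_C"): 2}
print(bradley_terry(wins))
```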

Degree

MA

College and Department

Humanities; Linguistics and English Language

Date Submitted

2018-04-01

Document Type

Thesis

Handle

http://hdl.lib.byu.edu/1877/etd9777

Keywords

rubric rating, many-facet Rasch measurement model (MFRM), comparative judgment (CJ), reliability of ESL writing assessment, practicality of ESL writing assessment

Language

English
