Abstract

A validity study can be used to investigate the effectiveness of an exam and to reveal both its strengths and weaknesses. This study investigates the writing portfolio Level Achievement Test (LAT) at the English Language Center (ELC) of Brigham Young University (BYU). The writing portfolios of 251 students at five proficiency levels were rated by 11 raters. Each portfolio consisted of two coursework essays, a self-reflection assignment, and a 30-minute timed essay. Quantitative methods included an analysis with Many-Facet Rasch Model (MFRM) software, FACETS, which looked for anomalies in levels, classes, examinees, raters, writing criteria, and rating scale categories. Qualitative methods involved a rater survey, rater Think Aloud Protocols (TAPs), and rater interviews. The MFRM analysis indicated that the exam has a high degree of validity. The survey and TAPs revealed that although raters followed a similar pattern when rating portfolios, they differed both in the time they took to rate and in the degree to which they favored particular rating criteria; this may explain some of the discrepancies in the MFRM rater analysis. Conclusions from the MFRM analysis, survey, TAPs, and interviews were used to make recommendations for improving the LAT rating process and for strengthening the relationship between LAT rating and classroom teaching and grading.
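
For reference, the many-facet Rasch model that FACETS implements can be sketched in its basic rating-scale form as follows; the notation is the conventional one from the MFRM literature and is not drawn from the thesis itself:

\[
\log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k
\]

Here \(P_{nijk}\) is the probability that examinee \(n\) receives category \(k\) rather than \(k-1\) on writing criterion \(i\) from rater \(j\); \(B_n\) is examinee ability, \(D_i\) criterion difficulty, \(C_j\) rater severity, and \(F_k\) the step difficulty of moving from category \(k-1\) to \(k\) on the rating scale. Under this kind of model, differences in rater severity and in how consistently raters use the scale surface as the anomalies the study examines.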

Degree

MA

College and Department

Humanities; Linguistics and English Language

Rights

http://lib.byu.edu/about/copyright/

Date Submitted

2006-06-29

Document Type

Thesis

Handle

http://hdl.lib.byu.edu/1877/etd1363

Keywords

language testing, Rasch model, Think Aloud Protocols, rater, writing, ESL, validity, teaching, TESOL, portfolio, rating scale, writing criteria

Language

English

