Using Adaptive Comparative Judgment in Writing Assessment: An Investigation of Reliability Among Interdisciplinary Evaluators

Keywords

Adaptive Comparative Judgment (ACJ), assessment, interdisciplinary learning/environment, interdisciplinary evaluators, integrated STEM education, engineering and technology, design thinking, composition, communication

Abstract

Adaptive Comparative Judgment (ACJ) is an assessment method that facilitates holistic, flexible judgments of student work in place of more quantitative or rubric-based methods. This method “requires little training, and has proved very popular with assessors and teachers in several subjects, and in several countries” (Pollitt, 2012, p. 281). This research explores ACJ as a holistic, flexible, interdisciplinary assessment and research tool in the context of an integrated program that combines Design, English Composition, and Communications courses. All technology students at the Polytechnic Institute at Purdue University are required to take each of these three core courses. Given the interdisciplinary nature of the program’s curriculum, this research first explored whether three judges from differing backgrounds could reach an acceptable level of reliability in assessment using only ACJ, without the prerequisites of similar disciplinary backgrounds or significant assessment experience, and without extensive negotiation or other calibration efforts. After establishing acceptable reliability among interdisciplinary judges, analysis was also conducted to investigate differences in student learning between integrated (i.e., interdisciplinary) and non-integrated learning environments. These results suggest that evaluators from various backgrounds can establish acceptable levels of reliability when using ACJ as an alternative to more traditional measures of student learning. This research also suggests that technology students in the integrated/interdisciplinary environment may have demonstrated higher learning gains than their peers, and that further research should control for student differences to add confidence to these findings.
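
For readers unfamiliar with the mechanics behind ACJ, the sketch below illustrates, under stated assumptions, how a set of pairwise “which is better?” judgments can be converted into a rank order and a reliability figure. It is a minimal Python illustration, not the tool or analysis used in this study: it assumes a Bradley-Terry-style fit of item quality and a simplified Scale Separation Reliability (SSR) estimate, and all data and names in it are hypothetical.

```python
# Illustrative sketch only: this is NOT the instrument used in the study.
# It assumes ACJ's usual setup -- judges make pairwise "which is better?"
# decisions, item quality is estimated with a Bradley-Terry-style model,
# and reliability is summarized with a Scale Separation Reliability (SSR)
# approximation. All data and names below are hypothetical.

import math
import random


def fit_bradley_terry(n_items, judgments, iters=500, lr=0.05):
    """Estimate a latent quality score per item from (winner, loser) pairs."""
    theta = [0.0] * n_items
    for _ in range(iters):
        grad = [0.0] * n_items
        for w, l in judgments:
            # Probability the observed winner beats the loser under the current fit.
            p = 1.0 / (1.0 + math.exp(-(theta[w] - theta[l])))
            grad[w] += 1.0 - p
            grad[l] -= 1.0 - p
        theta = [t + lr * g for t, g in zip(theta, grad)]
        mean = sum(theta) / n_items
        theta = [t - mean for t in theta]  # anchor the scale at mean zero
    return theta


def scale_separation_reliability(theta, judgments):
    """Rough SSR: (observed variance - mean squared standard error) / observed variance."""
    n = len(theta)
    info = [0.0] * n
    for w, l in judgments:
        p = 1.0 / (1.0 + math.exp(-(theta[w] - theta[l])))
        info[w] += p * (1.0 - p)
        info[l] += p * (1.0 - p)
    se2 = [1.0 / i if i > 0 else float("inf") for i in info]
    var_theta = sum(t * t for t in theta) / n  # theta is already centered
    mse = sum(se2) / n
    return max(0.0, (var_theta - mse) / var_theta) if var_theta > 0 else 0.0


if __name__ == "__main__":
    random.seed(0)
    true_quality = [random.gauss(0, 1) for _ in range(10)]  # 10 simulated essays
    judgments = []
    for _ in range(300):  # 300 simulated pairwise judgments
        a, b = random.sample(range(10), 2)
        p_a = 1.0 / (1.0 + math.exp(-(true_quality[a] - true_quality[b])))
        judgments.append((a, b) if random.random() < p_a else (b, a))

    theta = fit_bradley_terry(10, judgments)
    rank = sorted(range(10), key=lambda i: -theta[i])
    print("Rank order (best to worst):", rank)
    print("Approximate SSR:", round(scale_separation_reliability(theta, judgments), 3))
```

In practice, ACJ systems also choose which pairs each judge sees adaptively rather than at random, which this sketch omits for simplicity.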

Original Publication Citation

Baniya, S., Mentzer, N., Bartholomew, S. R., Chesley, A., Moon, C., & Sherman, D. (2019). Using Adaptive Comparative Judgment in Writing Assessment: An Investigation of Reliability among Interdisciplinary Evaluators. Journal of Technology Studies, 45(1), 24-35.

Document Type

Peer-Reviewed Article

Publication Date

2019

Permanent URL

http://hdl.lib.byu.edu/1877/8278

Publisher

Journal of Technology Studies

Language

English

College

Ira A. Fulton College of Engineering and Technology

Department

Technology

University Standing at Time of Publication

Assistant Professor
