Abstract
Concept maps provide a way to assess how well students have developed an organized understanding of how the concepts taught in a unit are interrelated and fit together. However, concept maps are challenging to score because of the idiosyncratic ways in which students organize their knowledge (McClure, Sonak, & Suen, 1999). The "construct-a-map" or "C-mapping" task has been shown to capture students' organized understanding: students are given a list of concepts and asked to produce a map showing how those concepts are interrelated. The purpose of this study was twofold: (a) to determine to what extent the restricted C-mapping technique, coupled with the threefold scoring rubric, produced reliable ratings of students' conceptual understanding on two examinations, and (b) to project how the reliability of the mean ratings for individual students would likely vary as a function of the number of raters and rating occasions on the two examinations. Dependable differences in the students' understanding detected by the raters accounted for nearly three-fourths (73%) of the variability in the ratings on one exam and 43% on the other. Rater inconsistencies were higher on one exam and somewhat lower on the other. The person-by-rater interaction was relatively small for one exam and somewhat higher for the other. The rater-by-occasion variance components were zero for both exams. Unexplained variance accounted for 19% of the variability on one exam and 14% on the other. The reliability of the student concept map scores varied across the two examinations: one exam yielded reliabilities of .95 and .93 for relative and absolute decisions, respectively, and the other yielded .88 and .78.
Increasing the number of raters from one to two on a single rating occasion would yield a greater increase in the reliability of the ratings, at a lower cost, than increasing the number of rating occasions. The same pattern holds for both exams.
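The decision-study projection described above follows standard generalizability theory for a persons x raters x occasions design: estimated variance components from the G-study are divided by the planned numbers of raters and occasions to forecast relative (E rho^2) and absolute (Phi) reliability coefficients. The sketch below illustrates the computation with hypothetical variance components (the dissertation's actual estimates are not reported in this abstract); the component names and values are assumptions for illustration only.

```python
# Hypothetical variance components for a persons (p) x raters (r) x
# occasions (o) G-study. These values are illustrative stand-ins,
# NOT the estimates from the dissertation.
vc = {"p": 0.73,    # dependable person (student) differences
      "r": 0.05,    # rater main effect (severity/leniency)
      "o": 0.00,    # occasion main effect
      "pr": 0.04,   # person-by-rater interaction
      "po": 0.03,   # person-by-occasion interaction
      "ro": 0.00,   # rater-by-occasion interaction (zero, as in the abstract)
      "pro_e": 0.15}  # residual / unexplained variance

def g_coefficients(vc, n_r, n_o):
    """Project relative (E rho^2) and absolute (Phi) coefficients
    for a D-study averaging over n_r raters and n_o occasions."""
    # Relative error: only interactions involving persons contribute.
    rel_err = vc["pr"] / n_r + vc["po"] / n_o + vc["pro_e"] / (n_r * n_o)
    # Absolute error additionally includes the rater/occasion main effects.
    abs_err = rel_err + vc["r"] / n_r + vc["o"] / n_o + vc["ro"] / (n_r * n_o)
    e_rho2 = vc["p"] / (vc["p"] + rel_err)
    phi = vc["p"] / (vc["p"] + abs_err)
    return e_rho2, phi

# Compare adding a second rater vs. adding a second occasion.
for n_r, n_o in [(1, 1), (2, 1), (1, 2)]:
    e_rho2, phi = g_coefficients(vc, n_r, n_o)
    print(f"raters={n_r}, occasions={n_o}: E rho^2={e_rho2:.2f}, Phi={phi:.2f}")
```

With components like these, where rater-related variance outweighs occasion-related variance, moving from one rater to two on a single occasion raises the projected coefficients more than adding a second occasion does, which mirrors the pattern reported for both exams.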
Degree
PhD
College and Department
David O. McKay School of Education; Instructional Psychology and Technology
Rights
http://lib.byu.edu/about/copyright/
BYU ScholarsArchive Citation
Jimenez, Laura, "Estimating the Reliability of Concept Map Ratings Using a Scoring Rubric Based on Three Attributes" (2010). Theses and Dissertations. 2284.
https://scholarsarchive.byu.edu/etd/2284
Date Submitted
2010-07-16
Document Type
Dissertation
Handle
http://hdl.lib.byu.edu/1877/etd3863
Keywords
concept maps, reliability, scoring rubric, rating concept maps propositions, Biology, connected understanding, assessment, psychometrics, concept map ratings, Ruiz-Primo
Language
English