Abstract
To improve the feedback an intelligent tutoring system provides, the grading engine needs to do more than simply indicate whether a student's answer is correct. Good feedback must provide actionable information with diagnostic value, which means the grading system must be able to determine what knowledge gap or misconception may have caused the student to answer a question incorrectly. This research evaluated the quality of a rules-based grading engine in an automated online homework system by comparing grading engine scores with manually graded scores. The research sought to improve the grading engine by assessing student understanding using knowledge component research. Comparing both the current and revised grading engine scores against the manually graded scores suggested that the revised rules were an improvement. Better aligning grading engine rules with requisite knowledge components, together with revisions to task instructions, would likely enhance the quality of the feedback provided.
Degree
PhD
College and Department
David O. McKay School of Education; Instructional Psychology and Technology
Rights
http://lib.byu.edu/about/copyright/
BYU ScholarsArchive Citation
Chapman, John Shadrack, "Task-Level Feedback in Interactive Learning Environments Using a Rules-Based Grading Engine" (2016). Theses and Dissertations. 6605.
https://scholarsarchive.byu.edu/etd/6605
Date Submitted
2016-12-01
Document Type
Dissertation
Handle
http://hdl.lib.byu.edu/1877/etd9037
Keywords
knowledge components, diagnostic instructional feedback, data mining
Language
English