Abstract
Developmental Sentence Scoring (DSS) is a standardized language sample analysis procedure that evaluates and scores a child's use of standard American-English grammatical rules within complete sentences. Automated DSS programs have the potential to increase the efficiency of, and reduce the time required for, DSS analysis. The present study examines the accuracy of one automated DSS software program, DSSA 2.0, compared to manual DSS scoring on previously collected language samples from 30 children between the ages of 2;5 and 7;11 (years;months). Additionally, this study seeks to determine the source of error in the automated score by comparing DSSA 2.0 analysis given manually versus automatically assigned grammatical tag input. The overall accuracy of DSSA 2.0 was 86%; the accuracy of individual grammatical category-point value scores varied greatly. No statistically significant difference was found between the two DSSA 2.0 input conditions (manual vs. automated tags), suggesting that the underlying grammatical tagging is not the primary source of error in DSSA 2.0 analysis.
Degree
MS
College and Department
David O. McKay School of Education; Communication Disorders
Rights
http://lib.byu.edu/about/copyright/
BYU ScholarsArchive Citation
Janis, Sarah Elizabeth, "A Comparison of Manual and Automated Grammatical Precoding on the Accuracy of Automated Developmental Sentence Scoring" (2016). Theses and Dissertations. 5892.
https://scholarsarchive.byu.edu/etd/5892
Date Submitted
2016-05-01
Document Type
Thesis
Handle
http://hdl.lib.byu.edu/1877/etd8566
Keywords
developmental sentence scoring, automated language sample analysis, automated developmental sentence scoring
Language
English