Abstract

Developmental Sentence Scoring (DSS) is a standardized language sample analysis procedure that evaluates and scores a child's use of standard American English grammatical rules within complete sentences. Automated DSS programs have the potential to reduce the time required for DSS analysis. The present study examines the accuracy of one automated DSS software program, DSSA 2.0, compared to manual DSS scoring on previously collected language samples from 30 children between the ages of 2;5 and 7;11 (years;months). Additionally, this study seeks to determine the source of error in the automated score by comparing DSSA 2.0 analyses given manually versus automatically assigned grammatical tags as input. The overall accuracy of DSSA 2.0 was 86%; the accuracy of individual grammatical category-point value scores varied greatly. No statistically significant difference was found between the two DSSA 2.0 input conditions (manual vs. automated tags), suggesting that the underlying grammatical tagging is not the primary source of error in DSSA 2.0 analysis.
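
Accuracy figures of this kind are typically computed as point-to-point agreement between automated and manual scores. The following is a minimal sketch only, not the study's actual procedure: the function name and the sentence-level score lists are hypothetical, standing in for hand-scored and DSSA-generated DSS totals aligned sentence by sentence.

# Hypothetical sketch of point-to-point agreement between manual and
# automated DSS scores; the sentence totals below are invented, not
# data from the study.
def percent_agreement(manual, automated):
    """Percentage of sentences where the automated score matches the manual score."""
    if len(manual) != len(automated):
        raise ValueError("score lists must align sentence-by-sentence")
    matches = sum(m == a for m, a in zip(manual, automated))
    return 100.0 * matches / len(manual)

manual_scores = [8, 5, 11, 7, 9]      # hand-scored DSS sentence totals (illustrative)
automated_scores = [8, 5, 10, 7, 9]   # automated sentence totals (illustrative)

print(f"Agreement: {percent_agreement(manual_scores, automated_scores):.0f}%")

Computed over a full set of language samples, an agreement measure along these lines would yield an overall accuracy percentage comparable in form to the 86% reported above.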

Degree

MS

College and Department

David O. McKay School of Education; Communication Disorders

Rights

http://lib.byu.edu/about/copyright/

Date Submitted

2016-05-01

Document Type

Thesis

Handle

http://hdl.lib.byu.edu/1877/etd8566

Keywords

developmental sentence scoring, automated language sample analysis, automated developmental sentence scoring

Language

English
