Automated Feedback and Automated Scoring in the Elementary Grades: Usage, Attitudes, and Associations with Writing Outcomes in a Districtwide Implementation of MI Write

Publication Information

Authors:

  • Joshua Wilson, University of Delaware
  • Yue Huang, University of Delaware
  • Corey Palermo, Measurement Incorporated
  • Gaysha Beard, Red Clay Consolidated School District
  • Charles A. MacArthur, University of Delaware

Pages:

  • 234–276

Keywords:

  • Automated feedback, Automated writing evaluation, Automated essay scoring, Writing assessment, Educational technology

Abstract:

  • This study examined a naturalistic, districtwide implementation of MI Write, an automated writing evaluation (AWE) software program, in elementary schools. We specifically examined the degree to which features of MI Write were implemented, teacher and student attitudes toward MI Write, and whether MI Write usage, together with other predictors such as demographics and writing self-efficacy, explained variability in proximal and distal measures of students’ writing performance. Participants included 1,935 students in Grades 3–5 and 135 writing teachers from 14 elementary schools in a mid-Atlantic school district. Findings indicated that although MI Write was somewhat underutilized, teachers and students held positive attitudes toward the AWE system. MI Write usage had mixed and limited predictive effects on outcomes: The number of essays written had a small predictive effect on state test performance in Grades 3 and 5; gains from revision had a moderate predictive effect on posttest writing quality and a small predictive effect on Grade 5 state test performance. Students’ average AWE scores showed consistently moderate to large predictive effects for all outcomes. Interpreted in light of the underlying architecture of MI Write, these findings have implications for other school districts considering implementing AWE, as well as for the design of AWE systems intended to support the teaching and learning of writing.
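The analysis described in the abstract is a predictive model in which student-level usage metrics, demographics, and self-efficacy explain variability in writing outcomes. The sketch below is a minimal, hypothetical illustration of that kind of multilevel regression (students nested in classrooms); the variable names, simulated data, and coefficients are assumptions for illustration only, not the study's actual model, data, or results.

```python
# Hypothetical sketch: a random-intercept multilevel regression of the kind
# described in the abstract. All names and data here are illustrative
# assumptions, not the study's actual variables or findings.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_classes, n_per_class = 60, 20
n = n_classes * n_per_class

df = pd.DataFrame({
    "classroom": np.repeat(np.arange(n_classes), n_per_class),
    "essays_written": rng.poisson(4, n),        # usage: essays drafted in MI Write
    "revision_gain": rng.normal(0.3, 0.5, n),   # usage: score gain from revising
    "avg_awe_score": rng.normal(3.5, 0.8, n),   # average automated score
    "self_efficacy": rng.normal(0.0, 1.0, n),   # writing self-efficacy scale
    "frl": rng.integers(0, 2, n),               # illustrative demographic indicator
})

# Simulate an outcome that loosely echoes the reported pattern: the average
# AWE score carries the largest weight, usage metrics smaller ones.
class_effect = rng.normal(0, 0.4, n_classes)[df["classroom"]]
df["posttest_quality"] = (
    0.05 * df["essays_written"]
    + 0.30 * df["revision_gain"]
    + 0.60 * df["avg_awe_score"]
    + 0.10 * df["self_efficacy"]
    - 0.15 * df["frl"]
    + class_effect
    + rng.normal(0, 1, n)
)

# Random intercept for classroom captures nesting of students in classes.
model = smf.mixedlm(
    "posttest_quality ~ essays_written + revision_gain + avg_awe_score"
    " + self_efficacy + frl",
    data=df,
    groups=df["classroom"],
)
print(model.fit().summary())
```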