Automated Essay Evaluation for English Language Learners: A Case Study of MY Access.

Author: Hoang, Giang Thi Linh; Kunnan, Antony John
Source: Language Assessment Quarterly; 2016, Vol. 13 Issue 4, p359-376, 18p
Abstract: Computer technology made its way into writing instruction and assessment with spelling and grammar checkers decades ago, but more recently it has done so with automated essay evaluation (AEE) and diagnostic feedback. Although many programs and tools have been developed in the last decade, not enough research has been conducted to support or evaluate the claims of their developers. This study examined the effectiveness of automated writing instructional programs in providing consistent scoring of essays and appropriate feedback to student writers. It focused on the scoring and instructional program MY Access! Home Edition, which includes an error feedback tool called My Editor designed to address these issues. The study combined a quantitative analysis of agreement and correlation with an analysis of content and topic. Participants were 114 English language learners who wrote 147 essays in response to three writing prompts; the essays were graded by trained EFL raters and by MY Access. From this sample, 15 randomly selected essays were also used for an error analysis comparing My Editor's feedback with human annotations to examine its accuracy. The main findings were that MY Access scores correlated only moderately with human ratings. Furthermore, because MY Access scoring is limited to the recognition of content words rather than how those words are organized at the discourse level, it did not detect slightly off-topic essays or plagiarism. Finally, My Editor's error feedback, with 73% precision and 30% recall, did not meet the expectations of an accurate tool. In conclusion, the home edition of MY Access was not found to be useful as an independent instructional tool. These findings give us pause regarding the effectiveness of MY Access. [ABSTRACT FROM AUTHOR]
Database: Complementary Index
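
Note: For readers unfamiliar with the error-detection metrics cited in the abstract, the short Python sketch below shows how precision and recall are conventionally computed from counts of system flags versus human annotations. The counts used here are purely illustrative and are not taken from the study's data.

    def precision_recall(true_positives, false_positives, false_negatives):
        """Compute precision and recall for an error-flagging tool.

        true_positives:  errors flagged by the tool and confirmed by human annotators
        false_positives: tool flags that human annotators did not judge to be errors
        false_negatives: human-annotated errors the tool failed to flag
        """
        precision = true_positives / (true_positives + false_positives)
        recall = true_positives / (true_positives + false_negatives)
        return precision, recall

    # Illustrative counts only (hypothetical, chosen to roughly match the reported 73%/30%):
    p, r = precision_recall(true_positives=30, false_positives=11, false_negatives=70)
    print(f"precision = {p:.2f}, recall = {r:.2f}")  # precision = 0.73, recall = 0.30

In this framing, high precision with low recall means that most errors the tool flags are genuine, but it misses the majority of the errors that human annotators identify.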