Providing Meaningful Feedback for Autograding of Programming Assignments
Authors: Andrew Tjang, Danielle Yucht, Georgiana Haldeman, Stephen A. Bartos, Monica Babes-Vroman, Thu D. Nguyen, Jay V. Shah
Year of publication: 2018
Subject: Class (computer programming); Information retrieval; Computer science; Scale; Social sciences; Education; Networking & telecommunications; Engineering and technology; Programming assignment; Categorization; Computers and education; Electrical engineering, electronic engineering, information engineering; Test suite; Code
Source: SIGCSE
DOI: 10.1145/3159450.3159502
Description: Autograding systems are increasingly being deployed to meet the challenge of teaching programming at scale. We propose a methodology for extending autograders to provide meaningful feedback for incorrect programs. Our methodology starts with the instructor identifying the concepts and skills important to each programming assignment, designing the assignment, and designing a comprehensive test suite. Tests are then applied to code submissions to learn classes of common errors and to produce classifiers that automatically categorize errors in future submissions. The instructor maps the errors to concepts and skills and writes hints that help students find their misconceptions and mistakes. We have applied the methodology to two assignments from our Introduction to Computer Science course. We used submissions from one semester of the class to build classifiers and write hints for observed common errors. We manually validated the automatic error categorization and the potential usefulness of the hints using submissions from a second semester. We found that the hints given for erroneous submissions should be helpful in 96% or more of the cases. Based on these promising results, we have deployed our hints and are currently collecting submissions and feedback from students and instructors.
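The abstract describes a pipeline in which a test suite is run against submissions, common error classes are learned from one semester's labeled data, and each class is mapped to an instructor-written hint. The sketch below is only a minimal illustration of that idea under assumptions not stated in this record: it uses a submission's set of failed tests as its "signature", learns a signature-to-error-class mapping by majority vote over labeled submissions, and looks up a hint for the predicted class. All names here (failure_signature, learn_classifier, HINTS) and the example tests are hypothetical, not the authors' implementation.

```python
# Illustrative sketch only: categorize erroneous submissions by their
# pattern of failed tests, then map each category to an instructor hint.
# Function names, test names, and hints are hypothetical examples.
from collections import Counter, defaultdict


def failure_signature(test_results):
    """test_results: dict mapping test name -> True (pass) / False (fail).
    Returns the sorted tuple of failed test names as the submission's signature."""
    return tuple(sorted(name for name, passed in test_results.items() if not passed))


def learn_classifier(labeled_submissions):
    """Phase 1 (one semester of labeled data): learn a mapping from
    failure signature to error class by majority vote."""
    votes = defaultdict(Counter)
    for results, error_class in labeled_submissions:
        votes[failure_signature(results)][error_class] += 1
    return {sig: counts.most_common(1)[0][0] for sig, counts in votes.items()}


# Instructor-written hints, one per learned error class (hypothetical examples).
HINTS = {
    "off_by_one_loop": "Check your loop bounds: does the loop reach the last element?",
    "unhandled_empty_input": "What should your function return for an empty list?",
}


def feedback_for(classifier, test_results):
    """Phase 2 (later semesters): categorize a new erroneous submission
    and surface the matching hint, if any."""
    error_class = classifier.get(failure_signature(test_results))
    return HINTS.get(error_class, "No specific hint available; review the failing tests.")


if __name__ == "__main__":
    labeled = [
        ({"t_basic": True, "t_last_element": False, "t_empty": True}, "off_by_one_loop"),
        ({"t_basic": True, "t_last_element": True, "t_empty": False}, "unhandled_empty_input"),
    ]
    clf = learn_classifier(labeled)
    new_submission = {"t_basic": True, "t_last_element": False, "t_empty": True}
    print(feedback_for(clf, new_submission))  # prints the off-by-one hint
```

The record does not specify which learning method the authors use to build their classifiers, so the exact-signature lookup above stands in purely for illustration; any classifier over test-outcome features would fit the described workflow.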
Database: OpenAIRE
External link: