How Consistent Are Humans When Grading Programming Assignments?

Author: Messer, Marcus; Brown, Neil C. C.; Kölling, Michael; Shi, Miaojing
Publication Year: 2024
Subject:
Document Type: Working Paper
Description: Providing consistent summative assessment to students is important, as the grades they are awarded affect their progression through university and their future career prospects. While small cohorts are typically assessed by a single assessor, such as the class leader, larger cohorts are often assessed by multiple assessors, which increases the risk of inconsistent grading. To investigate the consistency of human grading of programming assignments, we asked 28 participants to each grade 40 CS1 introductory Java assignments, providing grades and feedback for correctness, code elegance, readability and documentation; the 40 assignments were split into two batches of 20. In the second batch of 20, we duplicated one assignment from the first to analyse the internal consistency of individual assessors. We measured the inter-rater reliability of the groups using Krippendorff's $\alpha$ -- an $\alpha > 0.667$ is recommended before drawing even tentative conclusions from the ratings. Our groups were inconsistent, with an average $\alpha = 0.2$ when grading correctness and an average $\alpha < 0.1$ for code elegance, readability and documentation. To measure the individual consistency of graders, we measured the distance between the grades they awarded for the duplicated assignment in batch one and batch two. Of the 22 participants who did not notice that the assignment was a duplicate, only one awarded the same grades for correctness, code elegance, readability and documentation. The average grade difference was 1.79 for correctness and less than 1.6 for code elegance, readability and documentation. Our results show that the human graders in our study cannot agree on the grade to give a piece of student work and are often individually inconsistent, suggesting that the idea of a ``gold standard'' of human grading might be flawed and highlighting that a shared rubric alone is not enough to ensure consistency.
Database: arXiv
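
A minimal sketch of the kind of inter-rater reliability check the abstract describes, computing Krippendorff's $\alpha$ for ordinal grades. This is not the authors' analysis code: it uses the third-party `krippendorff` PyPI package, and the grade matrix below is hypothetical, since the study's raw ratings are not reproduced here.

# A minimal sketch, assuming the third-party `krippendorff` and `numpy`
# packages (pip install krippendorff numpy). The grades are hypothetical.
import numpy as np
import krippendorff

# Rows = assessors, columns = assignments; np.nan marks an assignment
# that an assessor did not grade. Scores are illustrative ordinal grades.
correctness_grades = np.array([
    [4, 3, 5, 2, np.nan],
    [5, 3, 4, 1, 2],
    [3, 2, 5, 2, 3],
], dtype=float)

# Grades are ordered categories, so the ordinal difference function is used.
alpha = krippendorff.alpha(reliability_data=correctness_grades,
                           level_of_measurement="ordinal")
print(f"Krippendorff's alpha = {alpha:.3f}")

# Krippendorff recommends alpha > 0.667 before drawing even tentative
# conclusions; the study reports group averages of roughly 0.2 or below.
if alpha <= 0.667:
    print("Below the 0.667 threshold: ratings are too inconsistent to rely on.")

The same call can be repeated per rubric dimension (correctness, code elegance, readability, documentation) to mirror the per-dimension averages reported in the abstract.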