Increasing accuracy of automated essay grading by grouping similar graders
Author: | Kaja Zupanc, Zoran Bosnić |
---|---|
Year of publication: | 2018 |
Subject: | Languages & linguistics; Computer science; Social sciences methods; Sociology; Languages and literature; Computers and education; Automated essay evaluation; Artificial intelligence; Cluster analysis; Grading (education); Natural language processing |
Source: | WIMS |
DOI: | 10.1145/3227609.3227645 |
Description: | Automated essay evaluation is a widely used practical solution for replacing time-consuming manual grading of student essays. Automated systems are used in combination with human graders in high-stakes assessments, where grading models are learned from essay datasets scored by different graders. Despite unified grading rules, human graders can unintentionally introduce subjective bias into the scores. Consequently, a grading model has to learn from data that represent a noisy relationship between essay attributes and grades. We propose an approach that uses an explanation methodology and clustering to separate a set of essays into subsets that represent similar graders (see the illustrative sketch below). The results confirm our assumption that learning from the ensemble of separated models can significantly improve the average prediction accuracy on artificial and real-world datasets. |
Database: | OpenAIRE |
External link: |
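
The abstract describes grouping essays by grader similarity via an explanation methodology and clustering, then learning an ensemble of per-group grading models. Below is a minimal, hypothetical Python sketch of that idea, assuming scikit-learn; the "explanation vectors" (per-attribute contributions plus the residual of a global model) are an illustrative stand-in rather than the paper's actual explanation methodology, and the clustering and combination choices are assumptions.

```python
# Minimal sketch of the grouping idea described in the abstract, not the
# authors' implementation. Assumes scikit-learn; the explanation vectors
# below are a hypothetical stand-in for the paper's explanation methodology.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy data: essay attribute vectors and grades from graders with differing bias.
X = rng.normal(size=(300, 10))                         # essay attributes
true_w = rng.normal(size=10)
grader_bias = rng.choice([-1.0, 0.0, 1.0], size=300)   # unknown to the model
y = X @ true_w + grader_bias + rng.normal(scale=0.1, size=300)

# Stand-in "explanation" per essay: each attribute's contribution to a global
# model's prediction, plus the residual, which carries the grader bias.
global_model = Ridge().fit(X, y)
contrib = X * global_model.coef_                        # per-attribute contributions
residual = (y - global_model.predict(X)).reshape(-1, 1)
explanations = np.hstack([contrib, residual])

# Cluster essays into groups that behave like "similar graders".
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(explanations)

# Train one grading model per group: the ensemble of separated models.
models = {k: Ridge().fit(X[labels == k], y[labels == k]) for k in np.unique(labels)}

# Predict a new essay's grade by averaging the group models (one simple choice).
x_new = rng.normal(size=(1, 10))
pred = np.mean([m.predict(x_new)[0] for m in models.values()])
print(round(pred, 3))
```

Averaging the per-group models is only one simple way to combine them; the paper's actual explanation method and combination strategy may differ.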