Towards Automated Evaluation of Handwritten Assessments
Authors: | Vijay Rowtula, Subba Reddy Oota, Jawahar C.V |
---|---|
Year of publication: | 2019 |
Subject: |
Computer science, Artificial intelligence, Natural language processing, Deep learning, Semantic analysis (machine learning), Information systems, Feature (machine learning), Set (abstract data type), Domain (software engineering), Throughput (business) |
Source: | ICDAR |
DOI: | 10.1109/icdar.2019.00075 |
Description: | Automated evaluation of handwritten answers has been a challenging problem for scaling the education system for many years. Speeding up the evaluation remains the major bottleneck for enhancing the throughput of instructors. This paper describes an effective method for automatically evaluating short descriptive handwritten answers from digitized images. Our goal is to evaluate a student's handwritten answer by assigning an evaluation score that is comparable to human-assigned scores. Existing work in this domain has mainly focused on evaluating handwritten essays with handcrafted, non-semantic features. Our contribution is two-fold: 1) we model this problem as a self-supervised, feature-based classification problem, which can fine-tune itself for each question without any explicit supervision; 2) we introduce the use of semantic analysis for auto-evaluation in handwritten text space, combining Information Retrieval and Extraction (IRE) and Natural Language Processing (NLP) methods to derive a set of useful features. We tested our method on three datasets created from various domains, with the help of students of different age groups. Experiments show that our method performs comparably to human evaluators. (An illustrative feature-extraction sketch follows this record.) |
Database: | OpenAIRE |
External link: |
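
The description mentions deriving semantic features from transcribed answers with IRE and NLP methods and feeding them to a per-question classifier. The sketch below is not the authors' implementation; it is a minimal, hypothetical illustration (TF-IDF cosine similarity and keyword recall as stand-in features) of how an answer transcript could be compared against a reference answer before classification.

```python
# Minimal, illustrative sketch (NOT the paper's method): toy semantic-overlap
# features between a transcribed student answer and a reference answer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def semantic_features(student_answer: str, reference_answer: str) -> dict:
    """Derive two toy features for one question: TF-IDF cosine similarity
    and keyword recall against the reference answer."""
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform([reference_answer, student_answer])
    similarity = cosine_similarity(tfidf[0], tfidf[1])[0, 0]

    ref_terms = set(reference_answer.lower().split())
    stu_terms = set(student_answer.lower().split())
    keyword_recall = len(ref_terms & stu_terms) / max(len(ref_terms), 1)

    return {"tfidf_cosine": similarity, "keyword_recall": keyword_recall}


if __name__ == "__main__":
    reference = "Photosynthesis converts light energy into chemical energy in plants."
    student = "Plants use light to make chemical energy through photosynthesis."
    # A real system would feed such features into a per-question classifier;
    # here we simply print them.
    print(semantic_features(student, reference))
```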