Assessment Modeling: Fundamental Pre-training Tasks for Interactive Educational Systems

Author: Choi, Youngduck, Lee, Youngnam, Cho, Junghyun, Baek, Jineon, Shin, Dongmin, Yu, Hangyeol, Shim, Yugeun, Lee, Seewoo, Shin, Jonghun, Bae, Chan, Kim, Byungsoo, Heo, Jaewe
Year of publication: 2019
Subject:
Document type: Working Paper
Description: Like many other domains in Artificial Intelligence (AI), the field of AI in Education (AIEd) contains specific tasks for which labels are scarce and expensive, such as predicting exam scores or review correctness. A common way of circumventing label-scarce problems is to pre-train a model to learn representations of the contents of learning items. However, such methods fail to utilize the full range of available student interaction data and do not model student learning behavior. To this end, we propose Assessment Modeling, a class of fundamental pre-training tasks for general interactive educational systems. An assessment is a feature of student-system interactions that can serve as a pedagogical evaluation; examples include the correctness and timeliness of a student's answer. Assessment Modeling is the prediction of assessments conditioned on the surrounding context of interactions. Although it is natural to pre-train on interactive features available in large amounts, limiting the prediction targets to assessments keeps the tasks relevant to the label-scarce educational problems and reduces less-relevant noise. While the effectiveness of different combinations of assessments remains open for exploration, we suggest Assessment Modeling as a first-order guiding principle for selecting proper pre-training tasks for label-scarce educational problems.
Database: arXiv
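
For a concrete picture of the pre-training objective described in the abstract, below is a minimal sketch: mask the assessment (here, answer correctness) at random positions of a student's interaction sequence and train an encoder to predict it from the surrounding context. The Transformer encoder, the feature set (item ID plus correctness), the masking rate, and all hyperparameters are illustrative assumptions, not the paper's reported configuration.

```python
# Illustrative sketch of an Assessment-Modeling-style pre-training objective:
# mask assessment labels (answer correctness) at random positions of an
# interaction sequence and recover them from the surrounding context.
# Architecture and hyperparameters are assumptions, not the paper's setup.
import torch
import torch.nn as nn

class AssessmentModel(nn.Module):
    def __init__(self, num_items: int, d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, d_model)    # which exercise was attempted
        self.assess_emb = nn.Embedding(3, d_model)           # 0 = incorrect, 1 = correct, 2 = [MASK]
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)                     # predicts P(correct) at each position

    def forward(self, items, assessments):
        h = self.encoder(self.item_emb(items) + self.assess_emb(assessments))
        return self.head(h).squeeze(-1)

# Toy usage: one batch of interaction sequences with ~15% of assessments masked.
batch, seq_len, num_items = 8, 20, 100
items = torch.randint(0, num_items, (batch, seq_len))
labels = torch.randint(0, 2, (batch, seq_len)).float()        # true correctness
mask = torch.rand(batch, seq_len) < 0.15
inputs = labels.long().masked_fill(mask, 2)                    # hide masked labels with the [MASK] id

model = AssessmentModel(num_items)
logits = model(items, inputs)
loss = nn.functional.binary_cross_entropy_with_logits(logits[mask], labels[mask])
loss.backward()
print(f"pre-training loss on masked assessments: {loss.item():.4f}")
```

The same scheme can in principle cover other assessments named in the abstract, such as answer timeliness, by adding further prediction heads over the shared encoder; the specific combination used by the authors is not stated here.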