Human-level Multiple Choice Question Guessing Without Domain Knowledge
Author: | Maria Chang, Sharad Sundararajan, Ravi Tejwani, Patrick Watson, Jae-wook Ahn, Tengfei Ma |
---|---|
Year of publication: | 2018 |
Subject: | Computer science; Deep learning; Data science; Open educational resources; Framing effects; Crowds; Information systems; Domain knowledge; Artificial intelligence; Multiple choice |
Source: | WWW (Companion Volume) |
DOI: | 10.1145/3184558.3186340 |
Description: | The availability of open educational resources (OER) has enabled educators and researchers to access a variety of learning assessments online. OER communities are particularly useful for gathering multiple choice questions (MCQs), which are easy to grade but difficult to design well. To account for this, OERs often rely on crowd-sourced data to validate the quality of MCQs. However, because crowds contain many non-experts and are susceptible to question framing effects, they may produce ratings driven by guessing on the basis of surface-level linguistic features rather than deep topic knowledge. Consumers of OER multiple choice questions (and authors of original multiple choice questions) would benefit from a tool that automatically provides feedback on assessment quality and assesses the degree to which OER MCQs are susceptible to framing effects. This paper describes a model that is trained to use domain-naive strategies to guess which multiple choice answer is correct. The extent to which this model can predict the correct answer to an MCQ indicates that the MCQ is a poor measure of domain-specific knowledge. We describe an integration of this model with a front-end visualizer and MCQ authoring tool. |
Database: | OpenAIRE |
External link: |
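The "domain-naive guessing" idea in the description can be illustrated with a minimal sketch. This is a hypothetical heuristic for exposition only, not the paper's trained model: it picks the answer option with the greatest word overlap with the question stem (breaking ties by option length), exploiting exactly the kind of surface-level linguistic cue the abstract warns about. An MCQ that such a naive guesser answers correctly may be measuring test-wiseness rather than domain knowledge.

```python
# Hypothetical domain-naive MCQ guesser (illustrative only; the paper's
# actual model is a trained predictor, not this heuristic).

def tokens(text):
    """Lowercased whitespace tokens of a string."""
    return set(text.lower().split())

def naive_guess(question, options):
    """Return the index of the option whose tokens overlap most with the
    question stem; ties are broken by option length, another common
    surface cue that leaks the correct answer."""
    stem = tokens(question)
    def score(i):
        return (len(stem & tokens(options[i])), len(options[i]))
    return max(range(len(options)), key=score)

# A deliberately badly designed MCQ: the correct option repeats stem words.
question = "Which planet in the solar system is known as the red planet?"
options = ["The red planet Mars", "Venus", "Jupiter", "Saturn"]
print(naive_guess(question, options))  # → 0: guessable from surface cues
```

If a guesser like this (or the paper's learned model) scores well above chance on a question bank, that is a signal to the MCQ author that the items are susceptible to framing effects and should be revised.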