Automated Scoring of Short-Answer Reading Items: Implications for Constructs
Author: | Xiaoming Xi, Nathan T. Carr |
---|---|
Year of publication: | 2010 |
Subject: |
Linguistics and Language, Language and Linguistics, Short answer, Automation, Test (assessment), Reading (process), Natural language processing, Artificial intelligence, Data mining, Construct (philosophy), Computer science |
Source: | Language Assessment Quarterly. 7:205-218 |
ISSN: | 1543-4311, 1543-4303 |
DOI: | 10.1080/15434300903443958 |
Description: | This article examines how the use of automated scoring procedures for short-answer reading tasks can affect the constructs being assessed. In particular, it highlights ways in which the development of scoring algorithms intended to apply the criteria used by human raters can lead test developers to reexamine and even refine the constructs they wish to assess. The article also points out examples of ways in which attempting to replicate human rating behavior while avoiding incidental construct alteration can pinpoint areas of the automated scoring process requiring further development. The examples discussed in this article illustrate the central point that construct definitions should always guide the development of scoring algorithms, while the process of developing and refining such algorithms demands more rigorous construct definitions and can, in turn, push us to refine our constructs. |
Database: | OpenAIRE |
External link: |