Comparison of Ranking and Rating Scales in Online Peer Assessment
Authors: | Andrew E. Waters, Dmytro Babik, Scott P. Stevens |
---|---|
Year: | 2019 |
Subject: |
Peer assessment, Ranking, Rating scale, Fidelity, Network topology, Reliability, Education, Machine learning, Artificial intelligence |
Source: | LAK |
Description: | This study examines the fidelity of ranking and rating scales in the context of online peer review and assessment. Using Monte Carlo simulation, we demonstrate that rating scales outperform ranking scales in revealing the relative "true" latent quality of peer-assessed artifacts via the observed aggregate peer assessment scores. Our analysis focuses on a simple, single-round peer assessment process and takes into account peer assessment network topology, network size, the number of assessments per artifact, and the correlation statistics used. This methodology makes it possible to separate the effects of the structural components of peer assessment from cognitive effects. |
Database: | OpenAIRE |
External link: |
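The comparison described in the abstract can be illustrated with a minimal Monte Carlo sketch. This is not the authors' actual simulation: the 7-point rating scale, the batch size of 4 for the ranking condition, the Gaussian noise model, and all function names are illustrative assumptions. The sketch draws latent artifact qualities, simulates noisy peer ratings and noisy peer rank-orderings, and measures how well each aggregate score recovers the true quality ordering via Spearman correlation.

```python
import random

def _ranks(values):
    """Return 0-based ranks of values (no tie handling; fine for continuous data)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = float(r)
    return ranks

def _pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the ranks."""
    return _pearson(_ranks(x), _ranks(y))

def simulate(n_artifacts=60, reviews_per_artifact=5, noise_sd=0.5, seed=1):
    """Compare how well aggregate ratings vs. aggregate rankings
    recover the latent quality ordering (all parameters are assumptions)."""
    rng = random.Random(seed)
    true_quality = [rng.gauss(0.0, 1.0) for _ in range(n_artifacts)]

    # Rating condition: each reviewer scores an artifact on a 1..7 scale,
    # perceiving its quality with additive Gaussian noise.
    rating_scores = []
    for q in true_quality:
        obs = [min(7, max(1, round(4 + 1.5 * (q + rng.gauss(0.0, noise_sd)))))
               for _ in range(reviews_per_artifact)]
        rating_scores.append(sum(obs) / len(obs))

    # Ranking condition: each reviewer rank-orders a small random batch;
    # an artifact's aggregate score is its average within-batch position.
    batch_size = 4
    points = [0.0] * n_artifacts
    counts = [0] * n_artifacts
    n_batches = n_artifacts * reviews_per_artifact // batch_size
    for _ in range(n_batches):
        batch = rng.sample(range(n_artifacts), batch_size)
        perceived = {i: true_quality[i] + rng.gauss(0.0, noise_sd) for i in batch}
        for pos, i in enumerate(sorted(batch, key=perceived.get)):
            points[i] += pos      # 0 = worst in batch, batch_size - 1 = best
            counts[i] += 1
    ranking_scores = [points[i] / counts[i] if counts[i] else 0.0
                      for i in range(n_artifacts)]

    return (spearman(true_quality, rating_scores),
            spearman(true_quality, ranking_scores))

rho_rating, rho_ranking = simulate()
```

Comparing `rho_rating` and `rho_ranking` across many seeds, network sizes, and reviews-per-artifact settings mirrors the kind of structural comparison the abstract describes, without modeling any of the cognitive effects the authors deliberately set aside.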