Validating a forced-choice method for eliciting quality-of-reasoning judgments.

Author: Marcoci, Alexandru, Webb, Margaret E., Rowe, Luke, Barnett, Ashley, Primoratz, Tamar, Kruger, Ariel, Karvetski, Christopher W., Stone, Benjamin, Diamond, Michael L., Saletta, Morgan, van Gelder, Tim, Tetlock, Philip E., Dennis, Simon
Source: Behavior Research Methods; Aug 2024, Vol. 56, Issue 5, p4958-4973, 16p
Abstract: In this paper, we investigate the criterion validity of forced-choice comparisons of the quality of written arguments with normative solutions. Across two studies, novices and experts assessing the quality of reasoning through a forced-choice design were both able to choose arguments supporting more accurate solutions (62.2%, SE = 1%, of the time for novices and 74.4%, SE = 1%, for experts) and arguments produced by larger teams (up to 82% of the time for novices and 85% for experts), with high inter-rater reliability: 70.58% agreement (95% CI = 1.18) for novices and 80.98% (95% CI = 2.26) for experts. We also explored two methods for increasing efficiency. We found that the number of comparative judgments needed could be substantially reduced, with little loss of accuracy, by leveraging transitivity and producing quality-of-reasoning assessments with an AVL tree method. Moreover, a regression model trained to predict scores from automatically derived linguistic features of participants' judgments achieved a high correlation with the objective accuracy scores of the arguments in our dataset. Despite the inherent subjectivity involved in evaluating the quality of reasoning, the forced-choice paradigm allows even novice raters to perform beyond chance and can provide a valid, reliable, and efficient method for producing quality-of-reasoning assessments at scale. [ABSTRACT FROM AUTHOR]
Database: Complementary Index
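
The AVL tree method mentioned in the abstract cuts elicitation cost because a balanced binary search tree places each new item with only O(log n) comparisons, so n arguments need roughly O(n log n) forced-choice judgments rather than the O(n^2) required to compare every pair. The sketch below is a simplified illustration, not the authors' implementation: it uses binary insertion into a sorted list, which rests on the same transitivity assumption as the AVL approach, and the `judge` callable standing in for one human forced-choice comparison is a hypothetical interface.

```python
# A minimal sketch, not the paper's implementation: binary insertion into a
# sorted list stands in for the AVL tree, and `judge` is a hypothetical
# callable representing one human forced-choice comparison.
import bisect
import functools

def forced_choice_rank(arguments, judge):
    """Rank arguments from worst to best using O(n log n) forced-choice
    judgments instead of the O(n^2) needed for all pairwise comparisons."""

    @functools.total_ordering
    class Judged:
        def __init__(self, argument):
            self.argument = argument

        def __lt__(self, other):
            # One forced-choice judgment: is the other argument the better one?
            return judge(other.argument, self.argument)

        def __eq__(self, other):
            return self.argument == other.argument

    ranked = []
    for argument in arguments:
        # Binary insertion: about log2(len(ranked)) judgments per item,
        # relying on transitivity of the elicited preferences.
        bisect.insort(ranked, Judged(argument))
    return [j.argument for j in ranked]

# Example with a deterministic stand-in for a human rater (hypothetical
# quality scores; in practice `judge` would be a person's comparison):
scores = {"arg A": 0.9, "arg B": 0.4, "arg C": 0.7}
print(forced_choice_rank(list(scores), lambda a, b: scores[a] > scores[b]))
# -> ['arg B', 'arg C', 'arg A']
```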
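
The abstract's second efficiency method, predicting quality scores from linguistic features of raters' written judgments, can be pictured with the toy model below. This is a hedged sketch only: the record does not specify which features or regression model the authors used, so the surface features and ridge regression here are illustrative placeholder choices.

```python
# A hedged sketch only: the paper's features and model are not given in
# this record, so these surface features and ridge regression are
# illustrative placeholders.
import numpy as np
from sklearn.linear_model import Ridge

def linguistic_features(text):
    """Map one written judgment to a small vector of surface features."""
    tokens = text.split()
    n = len(tokens) or 1
    return [
        len(tokens),                      # length in tokens
        len(set(tokens)) / n,             # type-token ratio (lexical variety)
        sum(len(t) for t in tokens) / n,  # mean word length
        text.count(","),                  # crude clause-density proxy
    ]

def fit_quality_model(judgment_texts, accuracy_scores):
    """Fit a linear model predicting arguments' objective accuracy scores
    from the linguistic features of raters' written judgments."""
    X = np.array([linguistic_features(t) for t in judgment_texts])
    return Ridge(alpha=1.0).fit(X, np.asarray(accuracy_scores))
```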