Aligning Human and LLM Judgments: Insights from EvalAssist on Task-Specific Evaluations and AI-assisted Assessment Strategy Preferences

Author: Ashktorab, Zahra; Desmond, Michael; Pan, Qian; Johnson, James M.; Cooper, Martin Santillan; Daly, Elizabeth M.; Nair, Rahul; Pedapati, Tejaswini; Achintalwar, Swapnaja; Geyer, Werner
Publication year: 2024
Subject:
Document type: Working Paper
Description: Evaluating large language model (LLM) outputs requires users to make critical judgments about which outputs are best across various configurations. This process is costly and time-consuming given the large amounts of data involved. LLMs are increasingly used as evaluators to filter training data, evaluate model performance, or assist human evaluators with detailed assessments. Effective front-end tools are therefore critical to support this evaluation process. Two common approaches for using LLMs as evaluators are direct assessment and pairwise comparison. In our study with machine learning practitioners (n=15), each completing 6 tasks for a total of 131 evaluations, we explore how task-related factors and assessment strategies influence criteria refinement and user perceptions. Findings show that users performed more evaluations with direct assessment by making criteria task-specific, modifying judgments, and changing the evaluator model. We conclude with recommendations for how systems can better support interactions in LLM-assisted evaluations.
Database: arXiv
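The abstract contrasts two LLM-as-judge strategies, direct assessment and pairwise comparison. The following is a minimal illustrative sketch of the two strategies, not the paper's implementation; it assumes a hypothetical judge(prompt) wrapper around an LLM call, and the prompt wording and function names are assumptions for illustration only.

```python
# Sketch of the two assessment strategies named in the abstract.
# judge() is a hypothetical placeholder for a call to an LLM evaluator;
# prompt wording and helper names are illustrative, not taken from the paper.

def judge(prompt: str) -> str:
    """Placeholder for an LLM evaluator call; replace with a real client."""
    raise NotImplementedError

def direct_assessment(output: str, criterion: str) -> str:
    """Rate a single output against one evaluation criterion."""
    prompt = (
        f"Criterion: {criterion}\n"
        f"Output:\n{output}\n\n"
        "Does the output satisfy the criterion? Answer Yes or No, then explain."
    )
    return judge(prompt)

def pairwise_comparison(output_a: str, output_b: str, criterion: str) -> str:
    """Ask which of two candidate outputs better satisfies the criterion."""
    prompt = (
        f"Criterion: {criterion}\n"
        f"Output A:\n{output_a}\n\n"
        f"Output B:\n{output_b}\n\n"
        "Which output better satisfies the criterion? Answer A or B, then explain."
    )
    return judge(prompt)
```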