When Benchmarks are Targets: Revealing the Sensitivity of Large Language Model Leaderboards

Author: Alzahrani, Norah; Alyahya, Hisham Abdullah; Alnumay, Yazeed; Alrashed, Sultan; Alsubaie, Shaykhah; Almushaykeh, Yusef; Mirza, Faisal; Alotaibi, Nouf; Altwairesh, Nora; Alowisheq, Areeb; Bari, M Saiful; Khan, Haidar
Year of publication: 2024
Subject:
Document type: Working Paper
Description: Large Language Model (LLM) leaderboards based on benchmark rankings are regularly used to guide practitioners in model selection. Often, the published leaderboard rankings are taken at face value; we show this is a (potentially costly) mistake. Under existing leaderboards, the relative performance of LLMs is highly sensitive to (often minute) details. We show that for popular multiple-choice question benchmarks (e.g., MMLU), minor perturbations to the benchmark, such as changing the order of choices or the method of answer selection, result in ranking changes of up to 8 positions. We explain this phenomenon by conducting systematic experiments over three broad categories of benchmark perturbations and identifying the sources of this behavior. Our analysis yields several best-practice recommendations, including the advantage of a hybrid scoring method for answer selection. Our study highlights the dangers of relying on simple benchmark evaluations and charts the path for more robust evaluation schemes on existing benchmarks. The code for this paper is available at https://github.com/National-Center-for-AI-Saudi-Arabia/lm-evaluation-harness.
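To illustrate the kind of choice-order perturbation the description refers to, the following is a minimal, hypothetical Python sketch (not taken from the authors' repository); the item, the helper name shuffle_choices, and the scoring loop are illustrative assumptions, showing only how one might reorder the options of an MMLU-style question while tracking the gold answer so a model can be re-scored under each ordering.

    import random

    def shuffle_choices(question, choices, gold_index, seed=0):
        """Return a perturbed copy of a multiple-choice item with its
        options reordered, plus the new position of the gold answer."""
        rng = random.Random(seed)
        order = list(range(len(choices)))
        rng.shuffle(order)
        shuffled = [choices[i] for i in order]
        new_gold = order.index(gold_index)  # where the correct option moved to
        return question, shuffled, new_gold

    # Example: the same (made-up) item under two different choice orderings.
    q = "Which planet is known as the Red Planet?"
    opts = ["Venus", "Mars", "Jupiter", "Saturn"]
    for seed in (0, 1):
        _, shuffled, gold = shuffle_choices(q, opts, gold_index=1, seed=seed)
        print(seed, shuffled, "gold ->", chr(ord("A") + gold))

Re-running a benchmark over such perturbed copies and comparing the resulting leaderboards is one simple way to probe the ranking sensitivity the paper reports.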
Comment: Updated with the ACL 2024 camera-ready version
Database: arXiv