LLM Stability: A detailed analysis with some surprises

Authors: Berk Atil, Alexa Chittams, Liseng Fu, Ferhan Ture, Lixinyu Xu, Breck Baldwin
Year of publication: 2024
Subject:
Document type: Working Paper
Description: LLM (large language model) practitioners commonly notice that outputs can vary for the same inputs, but we have been unable to find work that evaluates LLM stability as the main objective. In our study of 6 deterministically configured LLMs across 8 common tasks with 5 identical runs, we see accuracy variations of up to 10%. In addition, no LLM consistently delivers repeatable accuracy across all tasks. We also show examples of variation that are not normally distributed, and we compare configurations with zero-shot/few-shot prompting and fine-tuned examples. To better quantify what is going on, we introduce metrics focused on stability: TARr@N, the total agreement rate at N runs over raw output, and TARa@N, the total agreement rate over parsed-out answers. We suggest that stability metrics be integrated into leaderboards and research results going forward.
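One plausible reading of the TAR metrics is the fraction of test items for which all N runs produce identical output; the sketch below follows that interpretation and is an assumption, not the paper's reference implementation. The function name `tar_at_n` is hypothetical; the same routine gives TARr@N when fed raw model outputs and TARa@N when fed parsed-out answers.

```python
def tar_at_n(runs):
    """Total agreement rate at N runs (assumed definition).

    runs: list of N lists, each holding one run's output per test item
          (raw strings for TARr@N, parsed answers for TARa@N).
    Returns the fraction of items on which all N runs agree exactly.
    """
    if not runs or not runs[0]:
        raise ValueError("need at least one run with at least one item")
    n_items = len(runs[0])
    # An item counts as "agreed" only if every run produced the same output.
    agreed = sum(1 for outputs in zip(*runs) if len(set(outputs)) == 1)
    return agreed / n_items


# Hypothetical example: 3 runs over 4 items; runs disagree on the last item.
runs = [
    ["yes", "no", "4", "Paris"],
    ["yes", "no", "4", "paris"],
    ["yes", "no", "4", "Paris"],
]
print(tar_at_n(runs))  # 3 of 4 items agree across all runs -> 0.75
```

Comparing TARr@N against TARa@N on the same runs separates surface variation in wording from genuine disagreement in the final answer.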
Database: arXiv