Werewolf Arena: A Case Study in LLM Evaluation via Social Deduction

Authors: Bailis, Suma; Friedhoff, Jane; Chen, Feiyang
Year of publication: 2024
Subject:
Document type: Working Paper
Description: This paper introduces Werewolf Arena, a novel framework for evaluating large language models (LLMs) through the lens of the classic social deduction game, Werewolf. In Werewolf Arena, LLMs compete against each other, navigating the game's complex dynamics of deception, deduction, and persuasion. The framework introduces a dynamic turn-taking system based on bidding, mirroring real-world discussions where individuals strategically choose when to speak. We demonstrate the framework's utility through an arena-style tournament featuring Gemini and GPT models. Our results reveal distinct strengths and weaknesses in the models' strategic reasoning and communication. These findings highlight Werewolf Arena's potential as a challenging and scalable LLM benchmark.
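
The bidding-based turn-taking mentioned in the description can be illustrated with a minimal sketch. This is not the authors' implementation; the agent interface (bid_to_speak, speak), the bid scale, and random tie-breaking are assumptions made for illustration only.

import random

def select_speaker(agents, transcript):
    # Ask each agent for a bid expressing how urgently it wants to speak,
    # then award the floor to the highest bidder (ties broken at random).
    bids = {agent: agent.bid_to_speak(transcript) for agent in agents}
    top = max(bids.values())
    contenders = [a for a, b in bids.items() if b == top]
    return random.choice(contenders)

def run_discussion(agents, rounds):
    # One discussion phase: repeatedly auction the floor and append the
    # winning agent's message to the shared transcript.
    transcript = []
    for _ in range(rounds):
        speaker = select_speaker(agents, transcript)
        transcript.append((speaker.name, speaker.speak(transcript)))
    return transcript

In this sketch, agents that feel strongly about the current discussion can outbid quieter players, approximating the strategic choice of when to speak that the framework models.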
Comment: 13 pages, 10 figures
Database: arXiv