Training Language Models to Win Debates with Self-Play Improves Judge Accuracy
Author: | Arnesen, Samuel; Rein, David; Michael, Julian |
---|---|
Publication year: | 2024 |
Document type: | Working Paper |
Description: | We test the robustness of debate as a method of scalable oversight by training models to debate with data generated via self-play. In a long-context reading comprehension task, we find that language-model-based evaluators answer questions more accurately when judging models optimized to win debates. By contrast, we find no such relationship for consultancy models trained to persuade a judge without an opposing debater present. In quantitative and qualitative comparisons between our debate models and novel consultancy baselines, we find evidence that debate training encourages stronger and more informative arguments, showing promise that it can help provide high-quality supervision for tasks that are difficult to directly evaluate. |
Comment: | 48 pages, 12 figures; code at https://github.com/samuelarnesen/nyu-debate-modeling |
Database: | arXiv |
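The description outlines a debate protocol in which two copies of a model argue for opposing answers and a language-model judge picks a winner. The paper's actual implementation is in the linked repository; the sketch below is only a minimal, hypothetical illustration of a single self-play debate round, in which every name (`DebateRound`, `query_model`, `run_debate`, `judge`) is an assumption of this sketch, not an identifier from the paper or its codebase.

```python
# Hypothetical sketch of one self-play debate round judged by a language model.
# None of these names come from the paper; the real implementation lives at
# https://github.com/samuelarnesen/nyu-debate-modeling.

from dataclasses import dataclass


@dataclass
class DebateRound:
    question: str
    answer_a: str          # answer defended by debater A
    answer_b: str          # answer defended by debater B
    transcript: list[str]  # alternating arguments


def query_model(prompt: str) -> str:
    """Placeholder for a call to a language model (assumed helper)."""
    raise NotImplementedError


def run_debate(question: str, answer_a: str, answer_b: str,
               turns: int = 2) -> DebateRound:
    """Self-play: the same model argues both sides across alternating turns."""
    round_ = DebateRound(question, answer_a, answer_b, [])
    for _ in range(turns):
        for side, answer in (("A", answer_a), ("B", answer_b)):
            prompt = (
                f"Question: {question}\n"
                f"You are debater {side}, defending: {answer}\n"
                "Transcript so far:\n" + "\n".join(round_.transcript) +
                "\nGive your next argument."
            )
            round_.transcript.append(f"{side}: {query_model(prompt)}")
    return round_


def judge(round_: DebateRound) -> str:
    """A separate judge model reads the transcript and picks an answer."""
    verdict = query_model(
        f"Question: {round_.question}\n"
        f"A defends: {round_.answer_a}\nB defends: {round_.answer_b}\n"
        "Transcript:\n" + "\n".join(round_.transcript) +
        "\nWhich answer is correct? Reply with 'A' or 'B'."
    )
    return round_.answer_a if verdict.strip().startswith("A") else round_.answer_b
```

Judge verdicts collected over many such self-play rounds would supply the win/loss signal for optimizing the debaters, which is the training setup the description summarizes.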