Mastering Board Games by External and Internal Planning with Language Models

Authors: Schultz, John; Adamek, Jakub; Jusup, Matej; Lanctot, Marc; Kaisers, Michael; Perrin, Sarah; Hennes, Daniel; Shar, Jeremy; Lewis, Cannada; Ruoss, Anian; Zahavy, Tom; Veličković, Petar; Prince, Laurel; Singh, Satinder; Malmi, Eric; Tomašev, Nenad
Publication year: 2024
Subject:
Document type: Working Paper
Description: While large language models perform well on a range of complex tasks (e.g., text generation, question answering, summarization), robust multi-step planning and reasoning remain a considerable challenge for them. In this paper we show that search-based planning can significantly improve LLMs' playing strength across several board games (Chess, Fischer Random / Chess960, Connect Four, and Hex). We introduce, compare, and contrast two major approaches: in external search, the model guides Monte Carlo Tree Search (MCTS) rollouts and evaluations without calls to an external engine, while in internal search, the model directly generates in-context a linearized tree of potential futures and a resulting final choice. Both build on a language model pre-trained on relevant domain knowledge, capturing the transition and value functions across these games. We find that our pre-training method minimizes hallucinations, as our model is highly accurate regarding state prediction and legal moves. Additionally, both internal and external search indeed improve win rates against state-of-the-art bots, even reaching Grandmaster-level performance in chess while operating on a similar move-count search budget per decision as human Grandmasters. The way we combine search with domain knowledge is not specific to board games, suggesting direct extensions into more general language model inference and training techniques.
Database: arXiv
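The "external search" approach described in the abstract — a model supplying move priors and position evaluations to MCTS, with no external engine — can be sketched in miniature. The code below is an illustrative assumption, not the paper's implementation: the language model is replaced by a hand-written stub (which for this toy game happens to be an exact evaluator), and the game is Nim (take 1–3 stones; whoever takes the last stone wins) rather than chess.

```python
import math

def stub_policy_value(stones):
    """Stand-in for the language model (an assumption for illustration):
    returns move priors and a value in [-1, 1] for the player to move."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    priors = {m: 1.0 / len(moves) for m in moves}   # uniform priors (stub)
    value = -1.0 if stones % 4 == 0 else 1.0        # multiples of 4 are lost in Nim
    return priors, value

class Node:
    def __init__(self, stones):
        self.stones = stones
        self.children = {}   # move -> child Node
        self.N = {}          # per-move visit counts
        self.W = {}          # per-move total backed-up value
        self.priors = {}     # filled on expansion

def mcts(root_stones, simulations=200, c_puct=1.5):
    root = Node(root_stones)
    root.priors, _ = stub_policy_value(root_stones)
    for _ in range(simulations):
        node, path = root, []
        while True:
            if node.stones == 0:          # previous player took the last stone,
                value = -1.0              # so the player to move here has lost
                break
            if not node.priors:           # leaf: expand, evaluate with the stub
                node.priors, value = stub_policy_value(node.stones)
                break
            # PUCT selection: Q plus a prior-weighted exploration bonus
            total_n = sum(node.N.get(m, 0) for m in node.priors) + 1
            def score(m):
                n = node.N.get(m, 0)
                q = node.W.get(m, 0.0) / n if n else 0.0
                return q + c_puct * node.priors[m] * math.sqrt(total_n) / (1 + n)
            move = max(node.priors, key=score)
            path.append((node, move))
            if move not in node.children:
                node.children[move] = Node(node.stones - move)
            node = node.children[move]
        # back up the leaf value, flipping sign at each ply (zero-sum game)
        for parent, move in reversed(path):
            value = -value
            parent.N[move] = parent.N.get(move, 0) + 1
            parent.W[move] = parent.W.get(move, 0.0) + value
    return max(root.N, key=root.N.get)    # play the most-visited move

print(mcts(5))  # → 1: taking one stone leaves a multiple of 4, a lost position
```

In the paper's setting the stub would be replaced by LLM calls for priors and values, and the simulation count corresponds to the per-decision search budget the abstract compares to a human Grandmaster's.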