Multi-Armed Bandits for Minesweeper: Profiting from Exploration-Exploitation Synergy
Author: Lordeiro, Igor Q., Haddad, Diego B., Cardoso, Douglas O.
Year of Publication: 2020
Document Type: Working Paper
DOI: 10.1109/TG.2021.3082909
Description: A popular computer puzzle, the game of Minesweeper requires a mix of both luck and strategy from its human players. Analyzing these aspects more formally, in our research we assessed the feasibility of a novel methodology based on Reinforcement Learning as an adequate approach to the problem presented by this game. For this purpose we employed Multi-Armed Bandit algorithms, carefully adapted to define autonomous computational players that make the best use of certain peculiarities of the game. Experimental evaluation showed that this approach was indeed successful, especially on smaller game boards such as the standard beginner level. Beyond this result, the main contribution of this work is a detailed examination of Minesweeper from a learning perspective, which yielded various original insights that are thoroughly discussed. Comment: To be published in IEEE Transactions on Games (ISSN 2475-1510 / 2475-1502)
Database: arXiv
External Link:
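As a minimal illustration of the exploration-exploitation trade-off named in the title (not the paper's adapted algorithm, whose details are in the full text), an epsilon-greedy multi-armed bandit can be sketched as follows; the mapping of arms to candidate board cells is a hypothetical simplification:

```python
import random

class EpsilonGreedyBandit:
    """Epsilon-greedy multi-armed bandit. Each arm could stand in for a
    candidate Minesweeper cell to probe (illustrative mapping only)."""

    def __init__(self, n_arms, epsilon=0.1, seed=0):
        self.epsilon = epsilon
        self.counts = [0] * n_arms      # pulls per arm
        self.values = [0.0] * n_arms    # running mean reward per arm
        self.rng = random.Random(seed)

    def select_arm(self):
        # Explore uniformly with probability epsilon, otherwise exploit
        # the arm with the highest estimated mean reward.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.counts))
        return max(range(len(self.counts)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incremental running-mean update for the pulled arm.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Toy demo: three arms with Bernoulli rewards; arm 2 is "safest".
true_means = [0.2, 0.5, 0.8]
bandit = EpsilonGreedyBandit(n_arms=3, epsilon=0.1, seed=42)
env_rng = random.Random(1)
for _ in range(2000):
    arm = bandit.select_arm()
    reward = 1.0 if env_rng.random() < true_means[arm] else 0.0
    bandit.update(arm, reward)

best = max(range(3), key=lambda a: bandit.values[a])
```

After enough pulls, the estimated values concentrate on the true means and the agent pulls the best arm most of the time, while the epsilon fraction of random pulls keeps the other arms from being written off prematurely.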