An empirical evaluation of active inference in multi-armed bandits.
Author: | Marković D; Faculty of Psychology, Technische Universität Dresden, 01062 Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, 01062 Dresden, Germany. Electronic address: dimitrije.markovic@tu-dresden.de., Stojić H; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, 10-12 Russell Square, London, WC1B 5EH, United Kingdom; Secondmind, 72 Hills Rd, Cambridge, CB2 1LA, United Kingdom., Schwöbel S; Faculty of Psychology, Technische Universität Dresden, 01062 Dresden, Germany., Kiebel SJ; Faculty of Psychology, Technische Universität Dresden, 01062 Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, 01062 Dresden, Germany. |
---|---|
Language: | English |
Source: | Neural networks : the official journal of the International Neural Network Society [Neural Netw] 2021 Dec; Vol. 144, pp. 229-246. Date of Electronic Publication: 2021 Aug 26. |
DOI: | 10.1016/j.neunet.2021.08.018 |
Abstract: | A key feature of sequential decision making under uncertainty is the need to balance exploiting (choosing the best action according to current knowledge) and exploring (obtaining information about the values of other actions). The multi-armed bandit problem, a classical task that captures this trade-off, has served as a vehicle in machine learning for developing bandit algorithms that have proved useful in numerous industrial applications. The active inference framework, an approach to sequential decision making recently developed in neuroscience for understanding human and animal behaviour, is distinguished by its sophisticated strategy for resolving the exploration-exploitation trade-off. This makes active inference an exciting alternative to already established bandit algorithms. Here we derive an efficient and scalable approximate active inference algorithm and compare it to two state-of-the-art bandit algorithms: Bayesian upper confidence bound and optimistic Thompson sampling (a schematic sketch of these selection rules follows this record). The comparison covers two types of bandit problems: a stationary bandit and a dynamic switching bandit. Our empirical evaluation shows that the active inference algorithm does not produce efficient long-term behaviour in stationary bandits. However, on the more challenging switching bandit problem, active inference performs substantially better than the two state-of-the-art bandit algorithms. The results open exciting avenues for further research in theoretical and applied machine learning, and lend additional credibility to active inference as a general framework for studying human and animal behaviour. Competing Interests: Declaration of Competing Interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. (Copyright © 2021 The Author(s). Published by Elsevier Ltd. All rights reserved.) |
Database: | MEDLINE |
External link: |
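The sketch below illustrates the three arm-selection rules named in the abstract on a simple stationary Bernoulli bandit with Beta posteriors. It is illustrative only, not the paper's implementation: the paper derives an approximate active inference algorithm suited to switching bandits, whereas `active_inference` below is a generic expected-free-energy heuristic. The function names, the Beta-Bernoulli setting, and the utility weight `lam` are assumptions made for this example; the 1 - 1/t quantile schedule for Bayes-UCB follows Kaufmann et al. (2012), and "optimistic" Thompson sampling clips posterior samples at the posterior mean (May et al., 2012).

```python
import numpy as np
from scipy.special import digamma, xlogy
from scipy.stats import beta


def bayes_ucb(a, b, t, rng):
    """Bayes-UCB: pick the arm whose Beta posterior has the highest
    1 - 1/t quantile."""
    return int(np.argmax(beta.ppf(1.0 - 1.0 / t, a, b)))


def optimistic_ts(a, b, t, rng):
    """Optimistic Thompson sampling: posterior samples clipped from below
    at the posterior mean, so a draw never underestimates an arm."""
    draws = rng.beta(a, b)
    means = a / (a + b)
    return int(np.argmax(np.maximum(draws, means)))


def active_inference(a, b, t, rng, lam=2.0):
    """Schematic expected-free-energy rule: minimise negative expected
    utility minus expected information gain about each arm's reward
    probability (closed form for Beta posteriors). `lam` is a hypothetical
    weight trading utility against epistemic value."""
    m = a / (a + b)
    # Entropy of the posterior-predictive Bernoulli distribution.
    h_pred = -(xlogy(m, m) + xlogy(1.0 - m, 1.0 - m))
    # Expected entropy of the reward distribution under the Beta posterior.
    h_exp = digamma(a + b + 1.0) - m * digamma(a + 1.0) - (1.0 - m) * digamma(b + 1.0)
    info_gain = h_pred - h_exp  # mutual information (epistemic value)
    G = -lam * m - info_gain    # expected free energy per arm
    return int(np.argmin(G))


def run(policy, true_probs, n_trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    k = len(true_probs)
    a, b = np.ones(k), np.ones(k)  # Beta(1, 1) priors for each arm
    regret = 0.0
    for t in range(1, n_trials + 1):
        arm = policy(a, b, t, rng)
        reward = float(rng.random() < true_probs[arm])
        a[arm] += reward            # conjugate Beta-Bernoulli update
        b[arm] += 1.0 - reward
        regret += max(true_probs) - true_probs[arm]
    return regret


if __name__ == "__main__":
    probs = [0.3, 0.5, 0.7]
    for name, policy in [("Bayes-UCB", bayes_ucb),
                         ("optimistic TS", optimistic_ts),
                         ("active inference (schematic)", active_inference)]:
        print(f"{name:30s} cumulative regret: {run(policy, probs):.1f}")
```

The paper's central comparison on dynamic switching bandits additionally requires tracking changepoints in the arms' reward probabilities; that machinery is omitted here for brevity, so this sketch only reflects the stationary setting discussed in the abstract.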