Learning in Volatile Environments with the Bayes Factor Surprise
Author: Alireza Modirshanechi, Wulfram Gerstner, Vasiliki Liakoni, Johanni Brea
Language: English
Year of publication: 2019
Subject: Machine Learning (cs.LG); Machine Learning (stat.ML); Applications (stat.AP); Neurons and Cognition (q-bio.NC); Bayesian inference; Bayes factor; surprise; hierarchical model; cognitive neuroscience; reinforcement learning; behavior; learning; computer simulation; algorithms
Description: Surprise-based learning allows agents to rapidly adapt to nonstationary stochastic environments characterized by sudden changes. We show that exact Bayesian inference in a hierarchical model gives rise to a surprise-modulated trade-off between forgetting old observations and integrating them with the new ones. The modulation depends on a probability ratio, which we call the Bayes Factor Surprise, that tests the prior belief against the current belief. We demonstrate that in several existing approximate algorithms, the Bayes Factor Surprise modulates the rate of adaptation to new observations. We derive three novel surprise-based algorithms, one in the family of particle filters, one in the family of variational learning, and one in the family of message passing, that have constant scaling in observation sequence length and particularly simple update dynamics for any distribution in the exponential family. Empirical results show that these surprise-based algorithms estimate parameters better than alternative approximate approaches and reach levels of performance comparable to computationally more expensive algorithms. The Bayes Factor Surprise is related to but different from the Shannon Surprise. In two hypothetical experiments, we make testable predictions for physiological indicators that dissociate the Bayes Factor Surprise from the Shannon Surprise. The theoretical insight of casting various approaches as surprise-based learning, as well as the proposed online algorithms, may be applied to the analysis of animal and human behavior and to reinforcement learning in nonstationary environments.
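The abstract describes the Bayes Factor Surprise as a probability ratio that tests the prior belief against the current belief, and says it modulates a trade-off between forgetting and integrating. A minimal sketch of that idea, for a Gaussian mean-estimation task with known observation variance: the surprise is the ratio of the predictive probability of a new observation under the prior belief to that under the current belief, and the adaptation rate grows with this ratio. The function names, the change-probability parameter `pc`, and the convex combination of natural parameters are illustrative assumptions, not the paper's exact algorithms.

```python
import math

def gauss_pdf(y, mean, var):
    """Density of a Gaussian N(mean, var) evaluated at y."""
    return math.exp(-(y - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def bayes_factor_surprise(y, prior, belief, sigma2):
    """S_BF = P(y | prior belief) / P(y | current belief).
    Each belief is (mean, var) over the hidden mean; the predictive
    marginal of y is then Gaussian with variance var + sigma2."""
    p_prior = gauss_pdf(y, prior[0], prior[1] + sigma2)
    p_current = gauss_pdf(y, belief[0], belief[1] + sigma2)
    return p_prior / p_current

def conjugate_update(y, belief, sigma2):
    """Standard Gaussian conjugate update of (mean, var) after observing y."""
    mean, var = belief
    new_var = 1.0 / (1.0 / var + 1.0 / sigma2)
    new_mean = new_var * (mean / var + y / sigma2)
    return (new_mean, new_var)

def surprise_modulated_update(y, prior, belief, sigma2, pc):
    """Blend 'integrate' (update the current belief) and 'forget'
    (restart from the prior) with weight gamma = m*S / (1 + m*S),
    where m = pc / (1 - pc) and pc is an assumed change probability.
    Mixing in natural-parameter space (mean/var, 1/var) is an
    illustrative simplification."""
    s = bayes_factor_surprise(y, prior, belief, sigma2)
    m = pc / (1.0 - pc)
    gamma = m * s / (1.0 + m * s)
    integ = conjugate_update(y, belief, sigma2)
    reset = conjugate_update(y, prior, sigma2)
    prec = (1 - gamma) / integ[1] + gamma / reset[1]
    mean = ((1 - gamma) * integ[0] / integ[1] + gamma * reset[0] / reset[1]) / prec
    return (mean, 1.0 / prec), s, gamma
```

An expected observation (close to the current belief's mean) yields S_BF well below 1 and a small gamma, so old evidence is kept; an observation better explained by the prior yields S_BF above 1 and gamma near 1, so the belief resets toward the prior, which is the surprise-modulated forgetting the abstract refers to.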
Database: OpenAIRE
External link: