Intrinsic rewards explain context-sensitive valuation in reinforcement learning.
Author: Molinaro G (Department of Psychology, University of California, Berkeley, Berkeley, California, United States of America); Collins AGE (Department of Psychology, University of California, Berkeley; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California, United States of America)
Language: English
Source: PLoS Biology [PLoS Biol] 2023 Jul 17; Vol. 21(7), e3002201. Date of electronic publication: 2023 Jul 17 (print publication: 2023).
DOI: 10.1371/journal.pbio.3002201
Abstract: When observing the outcome of a choice, people are sensitive to the choice's context, such that the experienced value of an option depends on the alternatives: getting $1 when the possibilities were 0 or 1 feels much better than when the possibilities were 1 or 10. Context-sensitive valuation has been documented within reinforcement learning (RL) tasks, in which values are learned from experience through trial and error. Range adaptation, wherein options are rescaled according to the range of values yielded by available options, has been proposed to account for this phenomenon. However, we propose that other mechanisms, reflecting a different theoretical viewpoint, may also explain it. Specifically, we theorize that internally defined goals play a crucial role in shaping the subjective value attributed to any given option. Motivated by this theory, we develop a new "intrinsically enhanced" RL model, which combines extrinsically provided rewards with internally generated signals of goal achievement into a single teaching signal. Across 7 studies (including previously published data sets as well as a novel, preregistered experiment with replication and control studies), we show that the intrinsically enhanced model explains context-sensitive valuation as well as, or better than, range adaptation. Our findings indicate a more prominent role of intrinsic, goal-dependent rewards than previously recognized within formal models of human RL. By integrating internally generated reward signals, standard RL theories should better account for human behavior, including context-sensitive valuation and beyond.

Competing Interests: The authors have declared that no competing interests exist.

Copyright: © 2023 Molinaro, Collins. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
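To make the two candidate mechanisms concrete, here is a minimal sketch in Python contrasting a range-adaptation update with an intrinsically enhanced update. The delta-rule form, the mixing weight `w`, and the binary goal-achievement signal (e.g., "I picked the best available option") are illustrative assumptions drawn from the abstract, not the authors' exact model specification.

```python
# Hedged sketch: two candidate teaching signals for context-sensitive valuation.
# Variable names and the combination rules are illustrative assumptions.

def range_adapted_update(q, reward, r_min, r_max, alpha=0.1):
    """Range adaptation: rescale the outcome by the context's value range
    before a standard delta-rule update."""
    r_scaled = (reward - r_min) / (r_max - r_min) if r_max > r_min else reward
    return q + alpha * (r_scaled - q)

def intrinsically_enhanced_update(q, reward, goal_achieved, w=0.5, alpha=0.1):
    """Intrinsically enhanced RL (assumed form): blend the extrinsic reward
    with a binary, internally generated signal of goal achievement."""
    teaching_signal = (1 - w) * reward + w * float(goal_achieved)
    return q + alpha * (teaching_signal - q)

# Example from the abstract: earning $1 in a {0, 1} context (goal met)
# versus in a {1, 10} context (goal missed).
q_low = intrinsically_enhanced_update(q=0.0, reward=1.0, goal_achieved=True)
q_high = intrinsically_enhanced_update(q=0.0, reward=1.0, goal_achieved=False)
print(q_low, q_high)  # 0.1 vs. 0.05: the same $1 is valued more when it met the goal
```

Under these assumptions, both mechanisms predict that an identical extrinsic outcome is learned about differently depending on the alternatives on offer, which is the behavioral signature the paper compares them on.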
Database: MEDLINE