Using deep reinforcement learning to reveal how the brain encodes abstract state-space representations in high-dimensional environments
Author: | Jeff Cockburn, Yisong Yue, Logan Cross, John P. O'Doherty |
---|---|
Year of publication: | 2020 |
Subject: | Adult; Young Adult; Male; Female; Humans; Brain; Computer science; Computational neuroscience; General Neuroscience; Psychology; Deep learning; Reinforcement learning; Reinforcement; State space; Artificial intelligence; Pattern recognition; Perception; Sensory system; Posterior parietal cortex; Voxel; Magnetic Resonance Imaging; Video games; Psychomotor Performance |
Source: | Neuron. 109(4) |
ISSN: | 1097-4199 |
Description: | Humans possess an exceptional aptitude to efficiently make decisions from high-dimensional sensory observations. However, it is unknown how the brain compactly represents the current state of the environment to guide this process. The deep Q-network (DQN) achieves this by capturing highly nonlinear mappings from multivariate inputs to the values of potential actions. We deployed DQN as a model of brain activity and behavior in participants playing three Atari video games during fMRI. Hidden layers of DQN exhibited a striking resemblance to voxel activity in a distributed sensorimotor network, extending throughout the dorsal visual pathway into posterior parietal cortex. Neural state-space representations emerged from nonlinear transformations of the pixel space bridging perception to action and reward. These transformations reshape axes to reflect relevant high-level features and strip away information about task-irrelevant sensory features. Our findings shed light on the neural encoding of task representations for decision-making in real-world situations. |
Database: | OpenAIRE |
External link: |
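The abstract describes the DQN's core mechanism: a nonlinear mapping from raw pixel observations to the values of candidate actions, Q(s, a), with intermediate hidden-layer representations that the study compares against voxel activity. A minimal sketch of such a forward pass is below, using NumPy with hypothetical dimensions and untrained random weights; the actual network in the paper is a trained convolutional DQN, not this toy two-layer version.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a flattened 84x84 grayscale frame, 4 Atari actions.
N_PIXELS, N_HIDDEN, N_ACTIONS = 84 * 84, 256, 4

# Untrained random weights stand in for a learned DQN; in the study the
# network is trained on the game before its layers are compared with fMRI.
W1 = rng.normal(0.0, 0.01, (N_PIXELS, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.01, (N_HIDDEN, N_ACTIONS))
b2 = np.zeros(N_ACTIONS)

def q_values(frame: np.ndarray) -> np.ndarray:
    """Nonlinear mapping from pixel space to action values, Q(s, a)."""
    # Hidden layer (ReLU): a compact nonlinear re-representation of the
    # pixel input -- the kind of layer the paper relates to voxel activity.
    h = np.maximum(0.0, frame @ W1 + b1)
    # Output layer: one scalar value per candidate action.
    return h @ W2 + b2

frame = rng.random(N_PIXELS)       # stand-in for one game frame
q = q_values(frame)
action = int(np.argmax(q))         # greedy action selection over Q-values
```

The point of the sketch is only the shape of the computation: a high-dimensional sensory input is squeezed through nonlinear hidden layers into a small set of action values, from which the greedy action is read off.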