Author:
Chen, Bo-Wei, Yang, Shih-Hung, Lo, Yu-Chun, Wang, Ching-Fu, Wang, Han-Lin, Hsu, Chen-Yang, Kuo, Yun-Ting, Chen, Jung-Chen, Lin, Sheng-Huang, Pan, Han-Chi, Lee, Sheng-Wei, Yu, Xiao, Qu, Boyi, Kuo, Chao-Hung, Chen, You-Yin, Lai, Hsin-Yi |
Source:
International Journal of Neural Systems; Sep 2020, Vol. 30, Issue 9, 21p
Abstract:
Hippocampal place cells and interneurons in mammals have stable place fields and theta phase precession profiles that encode spatial environmental information. Hippocampal CA1 neurons can represent the animal's location and prospective information about the goal location. Reinforcement learning (RL) algorithms such as Q-learning have been used to build navigation models. However, traditional Q-learning (tQ-learning) restricts the reward to the moment the animal arrives at the goal location, leading to unsatisfactory location accuracy and convergence rates. Therefore, we proposed a revised version of the Q-learning algorithm, dynamical Q-learning (dQ-learning), which assigns the reward function adaptively to improve decoding performance. Firing rate was the input to the neural network of dQ-learning and was used to predict the movement direction, whereas phase precession was the input to the reward function that updates the weights of dQ-learning. Trajectory predictions using dQ- and tQ-learning were compared by the root mean squared error (RMSE) between the actual and predicted rat trajectories. dQ-learning achieved significantly higher prediction accuracy and a faster convergence rate than tQ-learning for all cell types. Moreover, combining place cells and interneurons with theta phase precession further improved the convergence rate and prediction accuracy. The proposed dQ-learning algorithm is a faster and more accurate method for trajectory reconstruction and prediction. [ABSTRACT FROM AUTHOR]
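Illustrative note: the abstract contrasts a goal-only reward (tQ-learning) with an adaptively assigned reward (dQ-learning) and evaluates trajectories by RMSE. The sketch below shows only the generic tabular Q-learning update with a pluggable reward function and the RMSE metric; the reward functions, grid states, and helper names here are hypothetical stand-ins and are not the paper's method, whose dQ-learning uses a neural network driven by firing rates and derives its adaptive reward from theta phase precession.

# Minimal sketch, assuming a 1-D discretized state space; not the paper's implementation.
import numpy as np

def q_update(Q, state, action, next_state, reward, alpha=0.1, gamma=0.9):
    # Standard Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
    return Q

def terminal_reward(state, goal):
    # tQ-style reward: non-zero only when the agent reaches the goal state.
    return 1.0 if state == goal else 0.0

def adaptive_reward(state, goal, n_states):
    # Hypothetical dQ-style stand-in: a graded reward by proximity to the goal,
    # replacing the paper's phase-precession-based adaptive reward, which is not modeled here.
    return 1.0 - abs(state - goal) / (n_states - 1)

def rmse(actual, predicted):
    # Root mean squared error between actual and predicted 2-D trajectories
    # (arrays of shape [timepoints, 2]), as used to compare decoders in the abstract.
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.sqrt(np.mean(np.sum((actual - predicted) ** 2, axis=1)))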
Database:
Complementary Index |