Showing 1 - 5 of 5 for search: '"Dhushan Thevarajah"'
Published in:
Frontiers in Behavioral Neuroscience, Vol 3 (2010)
In learning models of strategic game play, an agent constructs a valuation (action value) over possible future choices as a function of past actions and rewards. Choices are then stochastic functions of these action values. Our goal is to uncover a n…
External link:
https://doaj.org/article/2b0d428a42514d4a8c3f9321a896754c
Author:
Siwei Xie, Abdullahi Abunafeesa, Yong Gu, Mingpo Yang, Xiaochun Wang, Jiahao Tu, Dhushan Thevarajah, Michael Christopher Dorris
Game theory can predict the distribution of choices in aggregate during mixed-strategy games, yet the neural process mediating individual probabilistic choices remains poorly understood. Here, we examined the role of frontal eye field (FEF) in a deci…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::95e1852acf8fb266f876a51fde4427d3
https://doi.org/10.1101/2022.09.18.508403
Author:
Beizhen Zhang, Baijie Xu (徐佰杰), Michael C. Dorris, Dhushan Thevarajah, Mingpo Yang, David Martin Milstein, Yuchen Zhao (赵宇晨), Gongchen Yu (余功臣), Janis Ying Ying Kan
Published in:
Journal of Neurophysiology. 115:741-751
Microsaccades are small-amplitude (typically…
Game theory outlines optimal response strategies during mixed-strategy competitions. The neural processes involved in choosing individual strategic actions, however, remain poorly understood. Here, we tested whether the superior colliculus (SC), a br…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::23295860880c56adbe82d363f8fcffd8
https://europepmc.org/articles/PMC6666345/
Published in:
Frontiers in Behavioral Neuroscience, Vol 3 (2010)
In learning models of strategic game play, an agent constructs a valuation (action value) over possible future choices as a function of past actions and rewards. Choices are then stochastic functions of these action values. Our goal is to uncover a n…