Showing 1 - 10 of 28 for search: '"Nicholas A. Ketz"'
Author:
Bradley M. Robert, Aaron P. Jones, Teagan S. Mullins, Michael C.S. Trumbo, Nicholas A. Ketz, Michael D. Howard, Praveen K. Pilly, Vincent P. Clark
Published in:
Brain Stimulation, Vol 15, Iss 6, Pp 1565-1566 (2022)
External link:
https://doaj.org/article/3d837e8393e84342915a33cf8ffb7832
Author:
Aaron P. Jones, Natalie B. Bryant, Bradley M. Robert, Teagan S. Mullins, Michael C. S. Trumbo, Nicholas A. Ketz, Michael D. Howard, Praveen K. Pilly, Vincent P. Clark
Published in:
Brain Sciences, Vol 13, Iss 3, p 468 (2023)
Previous studies have found a benefit of closed-loop transcranial alternating current stimulation (CL-tACS) matched to ongoing slow-wave oscillations (SWO) during sleep on memory consolidation for words in a paired associates task (PAT). Here, we exa…
External link:
https://doaj.org/article/d7d41e0ff368404d8c9e1e2908b8a6dd
Author:
Praveen K. Pilly, Steven W. Skorheim, Ryan J. Hubbard, Nicholas A. Ketz, Shane M. Roach, Itamar Lerner, Aaron P. Jones, Bradley Robert, Natalie B. Bryant, Arno Hartholt, Teagan S. Mullins, Jaehoon Choe, Vincent P. Clark, Michael D. Howard
Published in:
Frontiers in Neuroscience, Vol 13 (2020)
Targeted memory reactivation (TMR) during slow-wave oscillations (SWOs) in sleep has been demonstrated with sensory cues to achieve about 5–12% improvement in post-nap memory performance on simple laboratory tasks. But prior work has not yet addres…
External link:
https://doaj.org/article/ea47c2a417f94c33820ebb6d4552a619
Author:
Aaron P. Jones, Jaehoon Choe, Natalie B. Bryant, Charles S. H. Robinson, Nicholas A. Ketz, Steven W. Skorheim, Angela Combs, Melanie L. Lamphere, Bradley Robert, Hope A. Gill, Melissa D. Heinrich, Michael D. Howard, Vincent P. Clark, Praveen K. Pilly
Published in:
Frontiers in Neuroscience, Vol 12 (2018)
Sleep is critically important to consolidate information learned throughout the day. Slow-wave sleep (SWS) serves to consolidate declarative memories, a process previously modulated with open-loop non-invasive electrical stimulation, though not alway…
External link:
https://doaj.org/article/4959d746f7404eb9b3feb5e52fdc69de
The Benefits of Closed-Loop Transcranial Alternating Current Stimulation on Subjective Sleep Quality
Author:
Charles S. H. Robinson, Natalie B. Bryant, Joshua W. Maxwell, Aaron P. Jones, Bradley Robert, Melanie Lamphere, Angela Combs, Hussein M. Al Azzawi, Benjamin C. Gibson, Joseph L. Sanguinetti, Nicholas A. Ketz, Praveen K. Pilly, Vincent P. Clark
Published in:
Brain Sciences, Vol 8, Iss 12, p 204 (2018)
Background: Poor sleep quality is a common complaint, affecting over one third of people in the United States. While sleep quality is thought to be related to slow-wave sleep (SWS), there has been little investigation to address whether modulating sl…
External link:
https://doaj.org/article/e144ac21de0f43a1be6040fc8580eff6
Published in:
From Animals to Animats 16, ISBN: 9783031167690
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::24612338cd6f2acdfe21833a8005753d
https://doi.org/10.1007/978-3-031-16770-6_15
Published in:
From Animals to Animats 16, ISBN: 9783031167690
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::dc8607365c33fe6fd13803d184dabd84
https://doi.org/10.1007/978-3-031-16770-6_10
Author:
Eseoghene Ben-Iwhiwhu, Pawel Ladosz, Nicholas A. Ketz, Soheil Kolouri, Jeffrey L. Krichmar, Praveen K. Pilly, Jeffery Dick, Andrea Soltoggio
Published in:
IEEE Transactions on Neural Networks and Learning Systems, 33(5)
In this article, we consider a subclass of partially observable Markov decision process (POMDP) problems which we termed confounding POMDPs. In these types of POMDPs, temporal difference (TD)-based reinforcement learning (RL) algorithms struggle, as…
Flexible planning is necessary for reaching goals and adapting when conditions change. We introduce a biologically plausible path planning model that learns its environment, rapidly adapts to change, and plans efficient routes to goals. Unlike prior…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::2b9348c36925ee02b2057314a2e9da20
https://doi.org/10.1101/2021.09.08.459317
Meta-reinforcement learning (meta-RL) algorithms enable agents to adapt quickly to tasks from few samples in dynamic environments. Such a feat is achieved through dynamic representations in an agent's policy network (obtained via reasoning about task…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::e00469d84c57593dfde86817eb4e46ce