Showing 1 - 10 of 723 for search: '"Gershman, Samuel"'
Does learning of task-relevant representations stop when behavior stops changing? Motivated by recent theoretical advances in machine learning and the intuitive observation that human experts continue to learn from practice even after mastery, we hypothesize…
External link:
http://arxiv.org/abs/2411.03541
A suite of impressive scientific discoveries has been driven by recent advances in artificial intelligence. These almost all result from training flexible algorithms to solve difficult optimization problems specified in advance by teams of domain scientists…
External link:
http://arxiv.org/abs/2408.14508
Adaptive behavior often requires predicting future events. The theory of reinforcement learning prescribes what kinds of predictive representations are useful and how to compute them. This paper integrates these theoretical ideas with work on cognition…
External link:
http://arxiv.org/abs/2402.06590
Authors:
Lu, Qihong, Nguyen, Tan T., Zhang, Qiong, Hasson, Uri, Griffiths, Thomas L., Zacks, Jeffrey M., Gershman, Samuel J., Norman, Kenneth A.
It has been proposed that, when processing a stream of events, humans divide their experiences in terms of inferred latent causes (LCs) to support context-dependent learning. However, when shared structure is present across contexts, it is still unclear…
External link:
http://arxiv.org/abs/2312.08519
Authors:
Binz, Marcel, Alaniz, Stephan, Roskies, Adina, Aczel, Balazs, Bergstrom, Carl T., Allen, Colin, Schad, Daniel, Wulff, Dirk, West, Jevin D., Zhang, Qiong, Shiffrin, Richard M., Gershman, Samuel J., Popov, Ven, Bender, Emily M., Marelli, Marco, Botvinick, Matthew M., Akata, Zeynep, Schulz, Eric
Large language models (LLMs) are being increasingly incorporated into scientific workflows. However, we have yet to fully grasp the implications of this integration. How should the advent of large language models affect the practice of science? …
External link:
http://arxiv.org/abs/2312.03759
We propose that the grokking phenomenon, where the train loss of a neural network decreases much earlier than its test loss, can arise due to a neural network transitioning from lazy training dynamics to a rich, feature-learning regime. To illustrate…
External link:
http://arxiv.org/abs/2310.06110
Exploration is essential in reinforcement learning, particularly in environments where external rewards are sparse. Here we focus on exploration with intrinsic rewards, where the agent transiently augments the external rewards with self-generated intrinsic rewards…
External link:
http://arxiv.org/abs/2305.15277
Author:
Gershman, Samuel J.
The most widely accepted view of memory in the brain holds that synapses are the storage sites of memory, and that memories are formed through associative modification of synapses. This view has been challenged on conceptual and empirical grounds…
External link:
http://arxiv.org/abs/2209.04923
Published in:
Cognition, Vol. 254 (January 2025)