Showing 1 - 10
of 21
for the search: '"Andrew M. Saxe"'
Published in:
eLife, Vol 12 (2023)
Making optimal decisions in the face of noise requires balancing short-term speed and accuracy. But a theory of optimality should account for the fact that short-term speed can influence long-term accuracy through learning. Here, we demonstrate that …
External link:
https://doaj.org/article/0329cbb29ba5425aa9acd571e46162aa
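The speed-accuracy trade-off described in this abstract is commonly formalised with a drift-diffusion model, in which noisy evidence accumulates to a decision threshold. A minimal simulation sketch (all parameter values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def simulate_ddm(drift=0.2, threshold=1.0, dt=0.01, noise=1.0,
                 n_trials=2000, seed=0):
    """Drift-diffusion model: evidence x accumulates with constant drift plus
    Gaussian noise until it crosses +threshold (correct) or -threshold (error).
    Returns mean response time and mean accuracy over trials."""
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        correct.append(x >= threshold)
    return float(np.mean(rts)), float(np.mean(correct))

# Raising the decision threshold trades speed for accuracy:
fast_rt, fast_acc = simulate_ddm(threshold=0.5)
slow_rt, slow_acc = simulate_ddm(threshold=1.5)
```

With a higher threshold, simulated responses are slower but more accurate, which is the short-term trade-off the abstract refers to; how learning reshapes this trade-off over time is the paper's subject, not captured in this sketch.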
Published in:
Neuron.
Human understanding of the world can change rapidly when new information comes to light, such as when a plot twist occurs in a work of fiction. This flexible “knowledge assembly” requires few-shot reorganisation of neural codes for relations among …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::7254ae059c1c5dcae50f18d4a6044fb2
https://doi.org/10.1101/2021.10.21.465374
Memorization and generalization are complementary cognitive processes that jointly promote adaptive behavior. For example, animals should memorize a safe route to a water source and generalize to features that allow them to find new water sources, wi…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::f4fd2d3b87bb459cedfa0e37bf93cab0
https://doi.org/10.1101/2021.10.13.463791
How do neural populations code for multiple, potentially conflicting tasks? Here, we used computational simulations involving neural networks to define “lazy” and “rich” coding solutions to this multitasking problem, which trade off learning …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::be728440880bb9d4d9e03eb34b12a8fa
https://doi.org/10.1101/2021.04.23.441128
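The “lazy” vs “rich” distinction can be illustrated with a toy experiment. One common proxy for the regime is the weight initialisation scale: large initial weights keep learned features close to their starting point, while small initial weights force them to reorganise. A hedged sketch (the architecture, task, and scales below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def feature_movement(init_scale, steps=800, lr=0.05, seed=0):
    """Train a small two-layer tanh network with plain gradient descent on a
    toy regression task; return the relative change of the first-layer
    weights, i.e. how much the hidden 'features' moved during training."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((64, 5))
    y = np.sin(X[:, 0])[:, None]                 # toy scalar target
    W1 = init_scale * rng.standard_normal((5, 32))
    W2 = init_scale * rng.standard_normal((32, 1)) / np.sqrt(32)
    W1_init = W1.copy()
    for _ in range(steps):
        H = np.tanh(X @ W1)                      # hidden features
        err = H @ W2 - y                         # residual of squared loss
        gW2 = H.T @ err / len(X)                 # backprop: output layer
        gW1 = X.T @ ((err @ W2.T) * (1.0 - H**2)) / len(X)  # hidden layer
        W2 -= lr * gW2
        W1 -= lr * gW1
    return np.linalg.norm(W1 - W1_init) / np.linalg.norm(W1_init)

rich = feature_movement(init_scale=0.1)  # small init: features reorganise ("rich")
lazy = feature_movement(init_scale=3.0)  # large init: features barely move ("lazy")
```

In this sketch the small-initialisation run moves its first-layer weights much more, relative to their starting norm, than the large-initialisation run, mirroring the rich/lazy contrast the abstract names.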
Balancing the speed and accuracy of decisions is crucial for survival, but how organisms manage this trade-off during learning is largely unknown. Here, we track this trade-off during perceptual learning in rats and simulated agents. At the start of …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::5323bbb52117d162b538379ff37542c0
https://doi.org/10.1101/2020.09.01.259911
Published in:
Neural Networks
We perform an analysis of the average generalization dynamics of large neural networks trained using gradient descent. We study the practically-relevant "high-dimensional" regime where the number of free parameters in the network is on the order of o…
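The high-dimensional regime this abstract refers to can be illustrated with an over-parameterised linear model trained by gradient descent: with more parameters than training examples, the model can drive training error to (near) zero while test error remains bounded away from it by the noise. The dimensions, noise level, and learning rate below are illustrative assumptions, not the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 40, 400, 100        # more parameters (d) than training samples
teacher = rng.standard_normal(d) / np.sqrt(d)

X_train = rng.standard_normal((n_train, d))
X_test = rng.standard_normal((n_test, d))
noise = 0.3
y_train = X_train @ teacher + noise * rng.standard_normal(n_train)
y_test = X_test @ teacher + noise * rng.standard_normal(n_test)

# Full-batch gradient descent on the squared loss, tracking both errors.
w = np.zeros(d)
lr = 0.01
train_curve, test_curve = [], []
for _ in range(3000):
    grad = X_train.T @ (X_train @ w - y_train) / n_train
    w -= lr * grad
    train_curve.append(np.mean((X_train @ w - y_train) ** 2))
    test_curve.append(np.mean((X_test @ w - y_test) ** 2))
```

Because `d > n_train`, gradient descent from zero converges toward the minimum-norm interpolator: the training error collapses, while the test error stays above the noise floor, which is the train/test gap whose average dynamics the paper analyses.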
Published in:
Journal of Statistical Mechanics: Theory and Experiment
Journal of Statistical Mechanics: Theory and Experiment, IOP Publishing, 2020, 2020 (12), pp.124010. ⟨10.1088/1742-5468/abc61e⟩
Deep neural networks achieve stellar generalisation even when they have enough parameters to easily fit all their training data. We study this phenomenon by analysing the dynamics and the performance of over-parameterised two-layer neural networks in …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::c056d9077390ad4059b597d10abee462
https://hal.sorbonne-universite.fr/hal-03289916/document
Authors:
Panayiota Poirazi, Greg Wayne, Christopher C. Pack, Surya Ganguli, Joel Zylberberg, Pieter R. Roelfsema, Grace W. Lindsay, Blake A. Richards, Walter Senn, Colleen J Gillon, Denis Therien, Philippe Beaudoin, Anna C. Schapiro, Kenneth D. Miller, Archy O. de Berker, Yoshua Bengio, Claudia Clopath, Peter E. Latham, Amelia J. Christensen, João Sacramento, Nikolaus Kriegeskorte, Timothy P. Lillicrap, Rui Ponte Costa, Danijar Hafner, Daniel L. K. Yamins, Benjamin Scellier, Rafal Bogacz, Adam Kepecs, Richard Naud, Friedemann Zenke, Konrad P. Kording, Andrew M. Saxe
Published in:
Richards, B A, Lillicrap, T P, Beaudoin, P, Bengio, Y, Bogacz, R, Christensen, A, Clopath, C, Costa, R P, de Berker, A, Ganguli, S, Gillon, C J, Hafner, D, Kepecs, A, Kriegeskorte, N, Latham, P, Lindsay, G W, Miller, K D, Naud, R, Pack, C C, Poirazi, P, Roelfsema, P, Sacramento, J, Saxe, A, Scellier, B, Schapiro, A C, Senn, W, Wayne, G, Yamins, D, Zenke, F, Zylberberg, J, Therien, D & Kording, K P 2019, ' A deep learning framework for neuroscience ', Nature Neuroscience, vol. 22, no. 11, pp. 1761-1770 . https://doi.org/10.1038/s41593-019-0520-2
Nature Neuroscience, 22(11), 1761-1770. Nature Publishing Group
Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, cognitive and motor tasks. Conversely, artificial intelligence attempts to design computational systems based on the tasks they will have to solve. In …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::3b42f784020df4a8b09aed7fa2d889d1
https://research-information.bris.ac.uk/ws/files/218472922/A_deep_learning_framework_for_neuroscience_vFinal_RC.pdf