Learning probabilistic neural representations with randomly connected circuits
Author: | Gašper Tkačik, Roozbeh Kiani, Ori Maoz, Mohamad Saleh Esteki, Elad Schneidman |
---|---|
Year of publication: | 2020 |
Subject: | neural circuits; population codes; neural coding; cortical computation; sparse nonlinear random projections; learning rules; neuronal plasticity; action potentials; statistical models; probabilistic models; pattern recognition; machine learning; deep learning; unsupervised learning; artificial intelligence; scalability; algorithms; computational neuroscience; biophysics and computational biology; neuroscience |
Source: | Proceedings of the National Academy of Sciences of the United States of America |
ISSN: | 0027-8424 (print); 1091-6490 (online) |
DOI: | 10.1073/pnas.1912804117 |
Description: | Significance: We present a theory of neural circuits’ design and function, inspired by the random connectivity of real neural circuits and the mathematical power of random projections. Specifically, we introduce a family of statistical models for large neural population codes, a straightforward neural circuit architecture that would implement these models, and a biologically plausible learning rule for such circuits. The resulting neural architecture suggests a design principle for neural circuits: namely, that they learn to compute the mathematical surprise of their inputs, given past inputs, without an explicit teaching signal. We applied these models to recordings from large neural populations in monkeys’ visual and prefrontal cortices and showed them to be highly accurate, efficient, and scalable. Abstract: The brain represents and reasons probabilistically about complex stimuli and motor actions using a noisy, spike-based neural code. A key building block for such neural computations, as well as the basis for supervised and unsupervised learning, is the ability to estimate the surprise or likelihood of incoming high-dimensional neural activity patterns. Despite progress in statistical modeling of neural responses and deep learning, current approaches either do not scale to large neural populations or cannot be implemented using biologically realistic mechanisms. Inspired by the sparse and random connectivity of real neuronal circuits, we present a model for neural codes that accurately estimates the likelihood of individual spiking patterns and has a straightforward, scalable, efficient, learnable, and realistic neural implementation. This model’s performance on simultaneously recorded spiking activity of >100 neurons in the monkey visual and prefrontal cortices is comparable with or better than that of state-of-the-art models. Importantly, the model can be learned using a small number of samples and using a local learning rule that utilizes noise intrinsic to neural circuits. Slower, structural changes in random connectivity, consistent with rewiring and pruning processes, further improve the efficiency and sparseness of the resulting neural representations. Our results merge insights from neuroanatomy, machine learning, and theoretical neuroscience to suggest random sparse connectivity as a key design principle for neuronal computation. (A schematic code sketch of this model family follows the record below.) |
Database: | OpenAIRE |
External link: |
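
The description above characterizes the model only verbally. As a rough illustration, the following is a minimal Python sketch of one way such a model could look, assuming (as the abstract suggests) a maximum-entropy-style distribution over binary spike patterns whose features are thresholded sparse random projections, trained by matching feature averages between data and model samples. All names, parameter values, and the Gibbs-sampling training loop are illustrative assumptions, not the authors' published code; in particular, the paper's actual learning rule exploits noise intrinsic to the circuit rather than an explicit sampler.

```python
import numpy as np

rng = np.random.default_rng(seed=0)


def make_sparse_projections(n_neurons, n_proj, in_degree, rng):
    """Each projection unit reads a small random subset of neurons,
    mimicking sparse random connectivity (in_degree is illustrative)."""
    A = np.zeros((n_proj, n_neurons))
    for i in range(n_proj):
        inputs = rng.choice(n_neurons, size=in_degree, replace=False)
        A[i, inputs] = 1.0
    return A


def features(X, A, theta):
    """Thresholded projections h_i(x) = 1[sum_j A_ij x_j >= theta].
    X: (n_samples, n_neurons) array of binary spike patterns."""
    return (X @ A.T >= theta).astype(float)


def gibbs_sample(A, theta, lam, n_neurons, n_sweeps, rng):
    """Single-chain Gibbs sampler for P(x) proportional to
    exp(-sum_i lam_i h_i(x)), with lam acting as energy coefficients."""
    x = rng.integers(0, 2, n_neurons).astype(float)
    samples = np.empty((n_sweeps, n_neurons))
    for t in range(n_sweeps):
        for j in range(n_neurons):
            e = np.empty(2)
            for v in (0, 1):           # energy with x_j clamped to 0 or 1
                x[j] = v
                e[v] = lam @ (A @ x >= theta).astype(float)
            p_one = 1.0 / (1.0 + np.exp(e[1] - e[0]))  # P(x_j = 1 | rest)
            x[j] = float(rng.random() < p_one)
        samples[t] = x
    return samples


def fit(X_data, A, theta, n_iters=30, lr=0.5, n_sweeps=200, rng=rng):
    """Maximum-likelihood gradient ascent: adjust lam until the model's
    feature averages match the data's (a local, sampling-based rule)."""
    lam = np.zeros(A.shape[0])
    h_data = features(X_data, A, theta).mean(axis=0)
    for _ in range(n_iters):
        S = gibbs_sample(A, theta, lam, X_data.shape[1], n_sweeps, rng)
        h_model = features(S, A, theta).mean(axis=0)
        lam += lr * (h_model - h_data)  # gradient of mean log-likelihood
    return lam


# Toy usage: 20 "neurons" with a shared drive that correlates them.
n_neurons, n_proj = 20, 40
common = rng.random((500, 1)) < 0.3
X = ((rng.random((500, n_neurons)) < 0.15) | common).astype(float)
A = make_sparse_projections(n_neurons, n_proj, in_degree=4, rng=rng)
lam = fit(X, A, theta=2.0)
print("learned coefficients (first five):", np.round(lam[:5], 2))
```

In this sign convention the learned coefficients scale the energy of each projection feature, so the update raises the energy of (i.e., suppresses) features the model expresses more often than the data do; once trained, `lam @ features(x, A, theta)` gives an unnormalized surprise score for any new pattern `x`.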