Showing 1 - 10 of 108 for search: '"Park, Il Memming"'
State-space graphical models and the variational autoencoder framework provide a principled apparatus for learning dynamical systems from data. State-of-the-art probabilistic approaches are often able to scale to large problems at the cost of flexibi…
External link:
http://arxiv.org/abs/2403.01371
Neural dynamical systems with stable attractor structures, such as point attractors and continuous attractors, are hypothesized to underlie meaningful temporal behavior that requires working memory. However, working memory may not support useful lear…
External link:
http://arxiv.org/abs/2308.12585
Latent Gaussian process (GP) models are widely used in neuroscience to uncover hidden state evolutions from sequential observations, mainly in neural activity recordings. While latent GP models provide a principled and powerful solution in theory, th…
External link:
http://arxiv.org/abs/2306.01802
Latent variable models have become instrumental in computational neuroscience for reasoning about neural computation. This has fostered the development of powerful offline algorithms for extracting latent neural trajectories from neural recordings. H…
External link:
http://arxiv.org/abs/2305.11278
Published in:
Transactions on Machine Learning Research (2023)
Latent linear dynamical systems with Bernoulli observations provide a powerful modeling framework for identifying the temporal dynamics underlying binary time series data, which arise in a variety of contexts such as binary decision-making and discre…
External link:
http://arxiv.org/abs/2303.02060
Author:
Brinkman, Braden A. W., Yan, Han, Maffei, Arianna, Park, Il Memming, Fontanini, Alfredo, Wang, Jin, La Camera, Giancarlo
Published in:
Applied Physics Reviews (2022) 9(1), 011313
Cortical neurons emit seemingly erratic trains of action potentials, or "spikes," and neural network dynamics emerge from the coordinated spiking activity within neural circuits. These rich dynamics manifest themselves in a variety of patterns, which…
External link:
http://arxiv.org/abs/2110.03025
Author:
Pei, Felix, Ye, Joel, Zoltowski, David, Wu, Anqi, Chowdhury, Raeed H., Sohn, Hansem, O'Doherty, Joseph E., Shenoy, Krishna V., Kaufman, Matthew T., Churchland, Mark, Jazayeri, Mehrdad, Miller, Lee E., Pillow, Jonathan, Park, Il Memming, Dyer, Eva L., Pandarinath, Chethan
Advances in neural recording present increasing opportunities to study neural activity in unprecedented detail. Latent variable models (LVMs) are promising tools for analyzing this rich activity across diverse neural systems and behaviors, as LVMs do…
External link:
http://arxiv.org/abs/2109.04463
We present the class of Hida-Matérn kernels, which is the canonical family of covariance functions over the entire space of stationary Gauss-Markov processes. It extends Matérn kernels by allowing for flexible construction of priors over pr…
External link:
http://arxiv.org/abs/2107.07098
Understanding the nature of representation in neural networks is a goal shared by neuroscience and machine learning. It is therefore exciting that both fields converge not only on shared questions but also on similar approaches. A pressing question i…
External link:
http://arxiv.org/abs/2012.04729
The standard approach to fitting an autoregressive spike train model is to maximize the likelihood for one-step prediction. This maximum likelihood estimation (MLE) often leads to models that perform poorly when generating samples recursively for mor…
External link:
http://arxiv.org/abs/2010.12362