Showing 1 - 10 of 148 for query: '"Dahmen, David"'
Author:
Fischer, Kirsten, Lindner, Javed, Dahmen, David, Ringel, Zohar, Krämer, Michael, Helias, Moritz
A key property of neural networks driving their success is their ability to learn features from data. Understanding feature learning from a theoretical viewpoint is an emerging field with many open questions. In this work we capture finite-width effe…
External link:
http://arxiv.org/abs/2405.10761
Published in:
PRX Life, 2, 013013 (2024)
Recent advancements in measurement techniques have resulted in an increasing amount of data on neural activities recorded in parallel, revealing largely heterogeneous correlation patterns across neurons. Yet, the mechanistic origin of this heterogene…
External link:
http://arxiv.org/abs/2308.00421
Bayesian inference and kernel methods are well established in machine learning. The neural network Gaussian process in particular provides a concept to investigate neural networks in the limit of infinitely wide hidden layers by using kernel and infe…
External link:
http://arxiv.org/abs/2307.16695
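The neural network Gaussian process mentioned in this entry has, for fully connected ReLU networks, a closed-form layer-wise kernel recursion (the arc-cosine kernel). A minimal sketch of that recursion follows; it is not code from the paper, and the function name and variance parameters `sw2`, `sb2` are illustrative choices:

```python
import numpy as np

def nngp_relu_kernel(X, depth=3, sw2=1.0, sb2=0.1):
    """NNGP kernel recursion for a fully connected ReLU network.

    Uses the arc-cosine closed form for the ReLU expectation;
    sw2 and sb2 are the weight and bias variances.
    """
    # Input-layer kernel from the data Gram matrix
    K = sb2 + sw2 * (X @ X.T) / X.shape[1]
    for _ in range(depth):
        d = np.sqrt(np.diag(K))
        # Correlation matrix, clipped for numerical safety
        c = np.clip(K / np.outer(d, d), -1.0, 1.0)
        theta = np.arccos(c)
        # Arc-cosine kernel: closed form of E[relu(f) relu(f')]
        K = sb2 + sw2 / (2 * np.pi) * np.outer(d, d) * (
            np.sin(theta) + (np.pi - theta) * np.cos(theta))
    return K
```

The returned matrix can be plugged into standard Gaussian-process regression to study the infinite-width network as a kernel method.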
Residual networks have significantly better trainability and thus performance than feed-forward networks at large depth. Introducing skip connections facilitates signal propagation to deeper layers. In addition, previous works found that adding a sca…
External link:
http://arxiv.org/abs/2305.07715
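The effect of skip connections and residual-branch scaling described in this entry can be illustrated by tracking activation variance with depth. The sketch below is illustrative only (function and parameter names are not from the paper): with a plain feed-forward chain the He-scaled ReLU branch keeps variance roughly constant, while an unscaled residual sum compounds it roughly as (1 + alpha^2)^depth, which motivates downscaling the residual branch:

```python
import numpy as np

def forward_variance(depth, alpha=1.0, skip=True, width=512, seed=0):
    """Propagate a random input through `depth` ReLU layers and
    return the final mean-square activation.

    skip=True adds the layer output as a residual branch scaled by
    alpha; skip=False uses a plain feed-forward chain.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(width)
    for _ in range(depth):
        W = rng.standard_normal((width, width)) / np.sqrt(width)
        # He-scaled ReLU branch: preserves the second moment on average
        branch = np.maximum(W @ x, 0.0) * np.sqrt(2.0)
        x = x + alpha * branch if skip else branch
    return float(np.mean(x**2))
```

Comparing a small alpha against alpha=1 at the same depth shows the exponential growth that branch scaling suppresses.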
Author:
Merger, Claudia, René, Alexandre, Fischer, Kirsten, Bouss, Peter, Nestler, Sandra, Dahmen, David, Honerkamp, Carsten, Helias, Moritz
One challenge of physics is to explain how collective properties arise from microscopic interactions. Indeed, interactions form the building blocks of almost all physical theories and are described by polynomial terms in the action. The traditional a…
External link:
http://arxiv.org/abs/2304.00599
Many observables of brain dynamics appear to be optimized for computation. Which connectivity structures underlie this fine-tuning? We propose that many of these structures are naturally encoded in the space that more directly relates to network dyna…
External link:
http://arxiv.org/abs/2303.02476
Author:
Fischer, Kirsten, René, Alexandre, Keup, Christian, Layer, Moritz, Dahmen, David, Helias, Moritz
Published in:
Phys. Rev. Research 4, 043143 (2022)
Understanding the functional principles of information processing in deep neural networks continues to be a challenge, in particular for networks with trained and thus non-random weights. To address this issue, we study the mapping between probabilit…
External link:
http://arxiv.org/abs/2202.04925
Author:
Segadlo, Kai, Epping, Bastian, van Meegen, Alexander, Dahmen, David, Krämer, Michael, Helias, Moritz
Understanding capabilities and limitations of different network architectures is of fundamental importance to machine learning. Bayesian inference on Gaussian processes has proven to be a viable approach for studying recurrent and deep networks in th…
External link:
http://arxiv.org/abs/2112.05589
Criticality is deeply related to optimal computational capacity. The lack of a renormalized theory of critical brain dynamics, however, so far limits insights into this form of biological information processing to mean-field results. These methods ne…
External link:
http://arxiv.org/abs/2110.01859
Author:
Nestler, Sandra, Keup, Christian, Dahmen, David, Gilson, Matthieu, Rauhut, Holger, Helias, Moritz
Published in:
Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 17380--17390
Cortical networks are strongly recurrent, and neurons have intrinsic temporal dynamics. This sets them apart from deep feed-forward networks. Despite the tremendous progress in the application of feed-forward networks and their theoretical understand…
External link:
http://arxiv.org/abs/2010.06247