Showing 1 - 10 of 32 for the search: '"RUSCH, T. KONSTANTIN"'
Incorporating equivariance as an inductive bias into deep learning architectures to take advantage of the data symmetry has been successful in multiple applications, such as chemistry and dynamical systems. In particular, roto-translations are crucial …
External link:
http://arxiv.org/abs/2410.17878
Author:
Rusch, T. Konstantin, Rus, Daniela
We propose Linear Oscillatory State-Space models (LinOSS) for efficiently learning on long sequences. Inspired by cortical dynamics of biological neural networks, we base our proposed LinOSS model on a system of forced harmonic oscillators. A stable …
External link:
http://arxiv.org/abs/2410.03943
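The LinOSS abstract above bases the model on a system of forced harmonic oscillators. A minimal sketch of that idea, assuming independent oscillators and a simple symplectic-Euler discretization (the paper's actual scheme and parameterization differ):

```python
import numpy as np

def forced_oscillator_ssm(u, omega, dt=0.1):
    """Sketch of a linear state-space layer built from independent forced
    harmonic oscillators y'' = -omega^2 * y + u(t), discretized with
    symplectic Euler. Hypothetical simplification, not LinOSS itself."""
    T, d = u.shape            # sequence length, number of oscillators
    y = np.zeros(d)           # oscillator positions (hidden state)
    v = np.zeros(d)           # oscillator velocities
    outputs = np.empty((T, d))
    for t in range(T):
        v = v + dt * (-omega**2 * y + u[t])  # velocity update with forcing
        y = y + dt * v                       # position update (symplectic)
        outputs[t] = y
    return outputs

# Usage: drive two oscillators with an impulse at t = 0.
u = np.zeros((100, 2))
u[0] = 1.0
out = forced_oscillator_ssm(u, omega=np.array([1.0, 2.0]))
```

The symplectic update keeps the undriven oscillations bounded for `dt * omega < 2`, which is one way a linear oscillatory recurrence can stay stable over long sequences.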
Sampling-based motion planning methods, while effective in high-dimensional spaces, often suffer from inefficiencies due to irregular sampling distributions, leading to suboptimal exploration of the configuration space. In this paper, we propose an a…
External link:
http://arxiv.org/abs/2410.03909
Discrepancy is a well-known measure for the irregularity of the distribution of a point set. Point sets with small discrepancy are called low-discrepancy and are known to efficiently fill the space in a uniform manner. Low-discrepancy points play a c…
External link:
http://arxiv.org/abs/2405.15059
Author:
Di Giovanni, Francesco, Rusch, T. Konstantin, Bronstein, Michael M., Deac, Andreea, Lackenby, Marc, Mishra, Siddhartha, Veličković, Petar
Graph Neural Networks (GNNs) are the state-of-the-art model for machine learning on graph-structured data. The most popular class of GNNs operate by exchanging information between adjacent nodes, and are known as Message Passing Neural Networks (MPNNs) …
External link:
http://arxiv.org/abs/2306.03589
Coupled oscillators are being increasingly used as the basis of machine learning (ML) architectures, for instance in sequence modeling, graph representation learning and in physical neural networks that are used in analog ML devices. We introduce an …
External link:
http://arxiv.org/abs/2305.08753
Node features of graph neural networks (GNNs) tend to become more similar as network depth increases. This effect is known as over-smoothing, which we axiomatically define as the exponential convergence of suitable similarity measures on …
External link:
http://arxiv.org/abs/2303.10993
We propose a novel multi-scale message passing neural network algorithm for learning the solutions of time-dependent PDEs. Our algorithm possesses both temporal and spatial multi-scale resolution features by incorporating multi-scale sequence models …
External link:
http://arxiv.org/abs/2302.03580
Author:
Rusch, T. Konstantin, Chamberlain, Benjamin P., Mahoney, Michael W., Bronstein, Michael M., Mishra, Siddhartha
We present Gradient Gating (G$^2$), a novel framework for improving the performance of Graph Neural Networks (GNNs). Our framework is based on gating the output of GNN layers with a mechanism for multi-rate flow of message passing information across …
External link:
http://arxiv.org/abs/2210.00513
Author:
Rusch, T. Konstantin, Chamberlain, Benjamin P., Rowbottom, James, Mishra, Siddhartha, Bronstein, Michael M.
We propose Graph-Coupled Oscillator Networks (GraphCON), a novel framework for deep learning on graphs. It is based on discretizations of a second-order system of ordinary differential equations (ODEs), which model a network of nonlinear controlled a…
External link:
http://arxiv.org/abs/2202.02296
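The GraphCON abstract describes discretizing a second-order ODE system whose oscillators are coupled through the graph. A hedged sketch of one time step, with an illustrative tanh neighbour coupling and damping terms (parameter names and the coupling form are assumptions, not the paper's exact scheme):

```python
import numpy as np

def graphcon_step(Y, V, adj, W, dt=0.1, gamma=1.0, alpha=0.5):
    """One semi-implicit Euler step of a graph-coupled oscillator system:
    Y'' = tanh((A_norm @ Y) @ W) - gamma * Y - alpha * Y',
    where A_norm is the degree-normalized adjacency. Sketch only."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    coupling = np.tanh(((adj @ Y) / deg) @ W)   # nonlinear neighbour coupling
    V = V + dt * (coupling - gamma * Y - alpha * V)  # damped, driven velocity
    Y = Y + dt * V                                   # position uses new V
    return Y, V

# Usage: two coupled nodes with 3-dimensional features, rolled out 50 steps.
adj = np.array([[0, 1], [1, 0]], dtype=float)
Y, V = np.ones((2, 3)), np.zeros((2, 3))
for _ in range(50):
    Y, V = graphcon_step(Y, V, adj, np.eye(3))
```

Because the forcing is bounded and the linear part is a damped oscillator, the rollout stays bounded; controlling such dynamics is one route to deep GNNs that avoid the over-smoothing behaviour listed earlier in these results.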