Showing 1 - 10 of 397
for search: '"Burkholz, A."'
Author:
Jacobs, Tom, Burkholz, Rebekka
Sparsifying deep neural networks to reduce their inference cost is an NP-hard problem and difficult to optimize due to its mixed discrete and continuous nature. Yet, as we prove, continuous sparsification already has an implicit bias towards sparsity…
External link:
http://arxiv.org/abs/2408.09966
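For context, the continuous relaxation the snippet refers to can be sketched as follows. This is a minimal illustration, not the paper's method: a binary pruning mask is replaced by per-weight sigmoid gates over learnable scores `s`, so the mask becomes differentiable and hardens toward 0/1 as the temperature `beta` grows.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_forward(w, s, x, beta=5.0):
    """Continuous relaxation of a binary pruning mask: each weight
    w[i, j] is scaled by a gate sigmoid(beta * s[i, j]), where s are
    learnable scores. Large beta pushes gates toward {0, 1}, so the
    effective network becomes sparse."""
    gates = sigmoid(beta * s)
    return (w * gates) @ x

w = np.array([[0.5, -1.0],
              [2.0,  0.1]])
s = np.array([[ 3.0, -3.0],    # positive score: keep; negative: prune
              [ 3.0, -3.0]])
x = np.array([1.0, 1.0])
y = gated_forward(w, s, x, beta=10.0)  # second column effectively pruned
```

With `beta=10.0` the negative-score gates are numerically zero, so the output equals what the hard-masked network would produce.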
The success of iterative pruning methods in achieving state-of-the-art sparse networks has largely been attributed to improved mask identification and an implicit regularization induced by pruning. We challenge this hypothesis and instead posit that…
External link:
http://arxiv.org/abs/2406.02773
Author:
Mustafa, Nimrah, Burkholz, Rebekka
Graph Attention Networks (GATs) are designed to provide flexible neighborhood aggregation that assigns weights to neighbors according to their importance. In practice, however, GATs are often unable to switch off task-irrelevant neighborhood aggregation…
External link:
http://arxiv.org/abs/2406.00418
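The weighting mechanism the snippet describes can be sketched in a few lines. This is a minimal single-node, single-head illustration of GAT-style attention (not the paper's construction): each neighbor is scored with a LeakyReLU of learned source/destination vectors, the scores are softmax-normalized, and features are aggregated with those weights.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def gat_aggregate(h, i, nbrs, a_src, a_dst, slope=0.2):
    """GAT-style aggregation for node i: score each neighbor j with
    LeakyReLU(a_src . h[i] + a_dst . h[j]), softmax the scores over
    the neighborhood, and return the attention-weighted feature sum."""
    scores = np.array([a_src @ h[i] + a_dst @ h[j] for j in nbrs])
    scores = np.where(scores > 0, scores, slope * scores)  # LeakyReLU
    alpha = softmax(scores)
    return alpha @ h[nbrs]

h = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
# Hypothetical attention vectors chosen so neighbor 2 dominates.
out = gat_aggregate(h, 0, [1, 2], np.zeros(2), np.array([10.0, 0.0]))
```

Because softmax weights are always strictly positive, no neighbor's contribution is ever exactly zero, which illustrates why fully switching off an irrelevant neighbor is hard.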
Author:
Hsieh, Ping-Han, Hsiao, Ru-Xiu, Ferenc, Katalin, Mathelier, Anthony, Burkholz, Rebekka, Chen, Chien-Yu, Sandve, Geir Kjetil, Belova, Tatiana, Kuijjer, Marieke Lydia
Paired single-cell sequencing technologies enable the simultaneous measurement of complementary modalities of molecular data at single-cell resolution. Along with the advances in these technologies, many methods based on variational autoencoders have…
External link:
http://arxiv.org/abs/2405.18655
Message Passing Graph Neural Networks are known to suffer from two problems that are sometimes believed to be diametrically opposed: over-squashing and over-smoothing. The former results from topological bottlenecks that hamper the information flow…
External link:
http://arxiv.org/abs/2404.04612
The practical utility of machine learning models in the sciences often hinges on their interpretability. It is common to assess a model's merit for scientific discovery, and thus novel insights, by how well it aligns with already available domain knowledge…
External link:
http://arxiv.org/abs/2403.04805
Author:
Gadhikar, Advait, Burkholz, Rebekka
Learning Rate Rewinding (LRR) has been established as a strong variant of Iterative Magnitude Pruning (IMP) to find lottery tickets in deep overparameterized neural networks. While both iterative pruning schemes couple structure and parameter learning…
External link:
http://arxiv.org/abs/2402.19262
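The iterative pruning loop underlying both IMP and LRR can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's algorithm: one round removes the smallest-magnitude fraction of the weights that are still active, and the retraining between rounds (where LRR rewinds only the learning-rate schedule rather than the weights) is omitted.

```python
import numpy as np

def imp_prune_step(w, mask, frac=0.2):
    """One round of magnitude pruning: among weights still active in
    the binary mask, zero out the smallest-magnitude fraction `frac`.
    Retraining between rounds is omitted in this sketch."""
    active = np.flatnonzero(mask)
    k = int(round(len(active) * frac))
    if k == 0:
        return mask
    # Indices of the k active weights with the smallest magnitude.
    smallest = active[np.argsort(np.abs(w[active]))[:k]]
    new_mask = mask.copy()
    new_mask[smallest] = 0
    return new_mask

w = np.array([0.1, -2.0, 0.5, 3.0, -0.05])
mask = imp_prune_step(w, np.ones(5, dtype=int), frac=0.4)  # prunes 0.1 and -0.05
```

Applying the step repeatedly, with training in between, yields the geometric sparsification schedule typical of lottery-ticket experiments.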
While the expressive power and computational capabilities of graph neural networks (GNNs) have been theoretically studied, their optimization and learning dynamics, in general, remain largely unexplored. Our study undertakes the Graph Attention Network…
External link:
http://arxiv.org/abs/2310.07235
Low-dimensional embeddings and visualizations are an indispensable tool for the analysis of high-dimensional data. State-of-the-art methods, such as tSNE and UMAP, excel in unveiling local structures hidden in high-dimensional data and are therefore routinely…
External link:
http://arxiv.org/abs/2301.13732
Random masks define surprisingly effective sparse neural network models, as has been shown empirically. The resulting sparse networks can often compete with dense architectures and state-of-the-art lottery ticket pruning algorithms, even though they…
External link:
http://arxiv.org/abs/2210.02412