Showing 1 - 10
of 1,782
for search: '"Bordé, P"'
Author:
Loeber, J. Gerard, Platis, Dimitris, Zetterström, Rolf H., Almashanu, Shlomo, Boemer, François, Bonham, James R., Borde, Patricia, Brincat, Ian, Cheillan, David, Dekkers, Eugenie, Dimitrov, Dobry, Fingerhut, Ralph, Franzson, Leifur, Groselj, Urh, Hougaard, David, Knapkova, Maria, Kocova, Mirjana, Kotori, Vjosa, Kozich, Viktor, Kremezna, Anastasiia, Kurkijärvi, Riikka, La Marca, Giancarlo, Mikelsaar, Ruth, Milenkovic, Tatjana, Mitkin, Vyacheslav, Moldovanu, Florentina, Ceglarek, Uta, O'Grady, Loretta, Oltarzewski, Mariusz, Pettersen, Rolf D., Ramadza, Danijela, Salimbayeva, Damilya, Samardzic, Mira, Shamsiddinova, Markhabo, Songailiené, Jurgita, Szatmari, Ildiko, Tabatadze, Nazi, Tezel, Basak, Toromanovic, Alma, Tovmasyan, Irina, Usurelu, Natalia, Vevere, Parsla, Vilarinho, Laura, Vogazianos, Marios, Yahyaoui, Raquel, Zeyda, Maximilian, Schielen, Peter C. J. I.
Neonatal screening (NBS) was initiated in Europe during the 1960s with the screening for phenylketonuria. The panel of screened disorders ("conditions") then gradually expanded, with a boost in the late 1990s with the introduction of tandem mass spectrometry…
External link:
https://ul.qucosa.de/id/qucosa%3A85163
https://ul.qucosa.de/api/qucosa%3A85163/attachment/ATT-0/
Author:
Borde, Haitz Sáez de Ocáriz, Lukoianov, Artem, Kratsios, Anastasis, Bronstein, Michael, Dong, Xiaowen
We propose Scalable Message Passing Neural Networks (SMPNNs) and demonstrate that, by integrating standard convolutional message passing into a Pre-Layer Normalization Transformer-style block instead of attention, we can produce high-performing deep…
External link:
http://arxiv.org/abs/2411.00835
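The SMPNN entry above describes a Pre-Layer Normalization Transformer-style block in which convolutional message passing replaces attention. A minimal NumPy sketch of that block shape follows; all weights, dimensions, and the row-normalized adjacency are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each node's feature vector (last axis).
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def pre_ln_mp_block(x, a_norm, w_msg, w1, w2):
    # Pre-LN residual block: normalize first, then apply the sub-layer.
    # Sub-layer 1: convolutional message passing instead of attention.
    x = x + a_norm @ layer_norm(x) @ w_msg
    # Sub-layer 2: position-wise feed-forward network (ReLU).
    x = x + np.maximum(layer_norm(x) @ w1, 0.0) @ w2
    return x

rng = np.random.default_rng(0)
n, d = 5, 8                                    # toy graph: 5 nodes, 8 features
x = rng.normal(size=(n, d))
adj = (rng.random((n, n)) < 0.4).astype(float)  # random sparse adjacency
np.fill_diagonal(adj, 1.0)                      # add self-loops
a_norm = adj / adj.sum(1, keepdims=True)        # row-normalized aggregation

out = pre_ln_mp_block(
    x, a_norm,
    rng.normal(size=(d, d)) * 0.1,
    rng.normal(size=(d, 4 * d)) * 0.1,
    rng.normal(size=(4 * d, d)) * 0.1,
)
print(out.shape)  # (5, 8): node features preserved through both residuals
```

Because both sub-layers are residual and normalization happens before each sub-layer, blocks of this form can be stacked without the output scale drifting, which is the usual motivation for Pre-LN over Post-LN.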
Author:
Borde, Haitz Sáez de Ocáriz, Kratsios, Anastasis, Law, Marc T., Dong, Xiaowen, Bronstein, Michael
We propose a class of trainable deep learning-based geometries called Neural Spacetimes (NSTs), which can universally represent nodes in weighted directed acyclic graphs (DAGs) as events in a spacetime manifold. While most works in the literature focus…
External link:
http://arxiv.org/abs/2408.13885
Clifford Group Equivariant Neural Networks (CGENNs) leverage Clifford algebras and multivectors as an alternative approach to incorporating group equivariance to ensure symmetry constraints in neural representations. In principle, this formulation generalizes…
External link:
http://arxiv.org/abs/2407.09926
Author:
Lukoianov, Artem, Borde, Haitz Sáez de Ocáriz, Greenewald, Kristjan, Guizilini, Vitor Campagnolo, Bagautdinov, Timur, Sitzmann, Vincent, Solomon, Justin
While 2D diffusion models generate realistic, high-detail images, 3D shape generation methods like Score Distillation Sampling (SDS) built on these 2D diffusion models produce cartoon-like, over-smoothed shapes. To help explain this discrepancy, we s…
External link:
http://arxiv.org/abs/2405.15891
3D scene understanding for robotic applications exhibits a unique set of requirements including real-time inference, object-centric latent representation learning, accurate 6D pose estimation and 3D reconstruction of objects. Current methods for scene…
External link:
http://arxiv.org/abs/2402.16308
Author:
Zhu, Jiacheng, Greenewald, Kristjan, Nadjahi, Kimia, Borde, Haitz Sáez de Ocáriz, Gabrielsson, Rickard Brüel, Choshen, Leshem, Ghassemi, Marzyeh, Yurochkin, Mikhail, Solomon, Justin
Parameter-efficient fine-tuning optimizes large, pre-trained foundation models by updating a subset of parameters; in this class, Low-Rank Adaptation (LoRA) is particularly effective. Inspired by an effort to investigate the different roles of LoRA matrices…
External link:
http://arxiv.org/abs/2402.16842
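The LoRA entry above refers to updating only a low-rank pair of matrices while the pre-trained weight stays frozen. A minimal NumPy sketch of that forward pass, under the common convention that the up-projection is zero-initialized so the adapter starts as a no-op (dimensions and scaling here are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)
d_in, d_out, r, batch = 16, 16, 4, 3
w_frozen = rng.normal(size=(d_out, d_in)) * 0.1  # pre-trained weight, never updated
a = rng.normal(size=(r, d_in)) * 0.1             # trainable down-projection A
b = np.zeros((d_out, r))                         # trainable up-projection B, zero init
x = rng.normal(size=(batch, d_in))

def lora_forward(x, w_frozen, a, b, alpha=1.0, rank=4):
    # Only the low-rank product B @ A (plus a scale) modifies the frozen weight.
    delta = (alpha / rank) * (b @ a)
    return x @ (w_frozen + delta).T

y = lora_forward(x, w_frozen, a, b)
base = x @ w_frozen.T
# With B initialized to zero, the adapted model matches the base model exactly:
print(np.allclose(y, base))  # True
```

Training then adjusts only `a` and `b` (2 * r * d parameters instead of d_out * d_in), which is what makes the method parameter-efficient.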
Mixture-of-Experts (MoEs) can scale up beyond traditional deep learning models by employing a routing strategy in which each input is processed by a single "expert" deep learning model. This strategy allows us to scale up the number of parameters…
External link:
http://arxiv.org/abs/2402.03460
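The MoE entry above describes routing each input to a single expert, so total parameters grow with the number of experts while per-input compute does not. A minimal NumPy sketch of top-1 routing with linear experts; the gating scheme and expert shapes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_experts, batch = 6, 4, 8
gate_w = rng.normal(size=(d, n_experts))                   # gating network weights
experts = [rng.normal(size=(d, d)) * 0.1 for _ in range(n_experts)]
x = rng.normal(size=(batch, d))

def moe_forward(x, gate_w, experts):
    # Top-1 routing: each row of x is processed by exactly one expert.
    choice = (x @ gate_w).argmax(axis=1)   # hard routing decision per input
    out = np.zeros_like(x)
    for e, w in enumerate(experts):
        mask = choice == e
        out[mask] = x[mask] @ w            # only the selected expert runs
    return out, choice

out, choice = moe_forward(x, gate_w, experts)
print(out.shape, choice.shape)  # (8, 6) (8,)
```

Adding experts multiplies the parameter count but leaves the per-input cost at one expert plus the gate, which is the scaling property the snippet alludes to.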
In real-world scenarios, although data entities may possess inherent relationships, the specific graph illustrating their connections might not be directly accessible. Latent graph inference addresses this issue by enabling Graph Neural Networks (GNNs)…
External link:
http://arxiv.org/abs/2311.11891
The inductive bias of a graph neural network (GNN) is largely encoded in its specified graph. Latent graph inference relies on latent geometric representations to dynamically rewire or infer a GNN's graph to maximize the GNN's predictive downstream performance…
External link:
http://arxiv.org/abs/2310.15003