Showing 1 - 10 of 33,499 for search: '"Has, Hamdan"'
Heterogeneous graph neural networks have recently gained attention for long-document summarization, modeling the extraction as a node classification task. Although effective, these models often require external tools or additional machine learning models…
External link:
http://arxiv.org/abs/2410.21315
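A minimal sketch of the extraction-as-node-classification framing described above, in Python (toy graph and stand-in feature names; a plain graph-convolution layer stands in for the paper's heterogeneous architecture):

import numpy as np

def gcn_layer(A, H, W):
    # One graph-convolution step: average neighbor features, project, ReLU.
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    return np.maximum((A @ H / deg) @ W, 0.0)

# Toy document graph: 4 sentence nodes; edges = lexical overlap (assumed).
A = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 1], [0, 0, 1, 1]], float)
H = np.random.randn(4, 8)                       # sentence embeddings (stand-in)
W1, w2 = np.random.randn(8, 8), np.random.randn(8)

scores = 1 / (1 + np.exp(-(gcn_layer(A, H, W1) @ w2)))  # P(extract sentence)
summary = np.argsort(-scores)[:2]               # keep the top-scoring sentences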
Author:
Gonzalez-Cuadra, Daniel, Hamdan, Majd, Zache, Torsten V., Braverman, Boris, Kornjaca, Milan, Lukin, Alexander, Cantu, Sergio H., Liu, Fangli, Wang, Sheng-Tao, Keesling, Alexander, Lukin, Mikhail D., Zoller, Peter, Bylinskii, Alexei
Lattice gauge theories (LGTs) describe a broad range of phenomena in condensed matter and particle physics. A prominent example is confinement, responsible for binding quarks inside hadrons such as protons or neutrons. When quark-antiquark pairs are…
External link:
http://arxiv.org/abs/2410.16558
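For reference, the confinement mentioned above is usually summarized by the static quark-antiquark potential growing linearly at large separation, V(r) \approx \sigma r as r \to \infty (standard textbook form, not taken from this paper): pulling the pair apart costs energy proportional to the length of the gauge-field string connecting them, until creating new pairs becomes energetically favorable.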
Modern high-performance computers are massively parallel; for many PDE applications, spatial parallelism saturates long before the computer's capability is reached. Parallel-in-time methods enable further speedup beyond spatial saturation by solving…
External link:
http://arxiv.org/abs/2409.18792
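As context for parallel-in-time methods, a Parareal-style sketch in Python (a generic member of this family, not necessarily the paper's method), applied to dy/dt = -y with a cheap coarse stepper and an accurate fine stepper:

import numpy as np

def coarse(y, t0, t1):                 # one cheap Euler step
    return y + (t1 - t0) * (-y)

def fine(y, t0, t1, m=20):             # m accurate substeps per slice
    h = (t1 - t0) / m
    for _ in range(m):
        y = y + h * (-y)
    return y

T = np.linspace(0.0, 1.0, 5)           # four time slices
U = [1.0]                              # initial condition y(0) = 1
for t0, t1 in zip(T[:-1], T[1:]):      # serial coarse prediction
    U.append(coarse(U[-1], t0, t1))

for _ in range(3):                     # Parareal corrections
    F = [fine(U[n], T[n], T[n + 1]) for n in range(4)]  # parallelizable part
    Unew = [U[0]]
    for n in range(4):                 # U_{n+1} = G(U_n^new) + F(U_n) - G(U_n)
        Unew.append(coarse(Unew[-1], T[n], T[n + 1])
                    + F[n] - coarse(U[n], T[n], T[n + 1]))
    U = Unew

The expensive fine solves are independent across slices, which is where the speedup beyond spatial saturation comes from.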
Physics, like many scientific disciplines, has long struggled with attracting and retaining a diverse population and fostering inclusivity. While there have been improvements in addressing equity issues within physics, significant challenges remain.
External link:
http://arxiv.org/abs/2409.07724
Recent developments in structured state space models (SSMs), such as Mamba and Mamba2, have outperformed transformers and large language models at small to medium scale while resolving their computational inefficiency. In this work, we introduce the…
External link:
http://arxiv.org/abs/2409.00563
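At their core, structured state space models scan a learned linear recurrence over the sequence; a generic discretized SSM in Python (not Mamba's selective variant):

import numpy as np

def ssm_scan(A, B, C, u):
    # x_{t+1} = A x_t + B u_t ;  y_t = C x_t
    x = np.zeros(A.shape[0])
    ys = []
    for u_t in u:
        x = A @ x + B * u_t
        ys.append(C @ x)
    return np.array(ys)

n = 4                                               # state dimension
A = np.diag(np.exp(-np.linspace(0.1, 1.0, n)))      # stable diagonal dynamics
B, C = np.ones(n), np.random.randn(n)
y = ssm_scan(A, B, C, np.sin(np.linspace(0, 6, 50)))  # toy 1-D input signal

Because the recurrence is linear in the state, it can also be evaluated as a convolution or a parallel scan, which is the source of the efficiency gains over attention.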
Author:
Bazaluk, Bruna, Hamdan, Mosab, Ghaleb, Mustafa, Gismalla, Mohammed S. M., da Silva, Flavio S. Correa, Batista, Daniel Macêdo
The classification of IoT traffic is important for improving the efficiency and security of IoT-based networks. Since state-of-the-art classification methods are based on Deep Learning, most current results require a large amount of data to be…
External link:
http://arxiv.org/abs/2407.19051
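Schematically, Deep-Learning-based traffic classification looks like the following Python sketch (made-up flow features and labels; the paper's concern is reducing how much such data is needed):

import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical per-flow features: [packet_count, mean_size, mean_iat, dst_port]
X = np.random.rand(200, 4)             # stand-in for labeled IoT flows
y = np.random.randint(0, 3, 200)       # toy device/traffic classes

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300)
clf.fit(X, y)                          # needs many labeled flows to generalize
print(clf.predict(X[:5]))              # classify unseen flows the same way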
Author:
Hamdan, Shadi, Güney, Fatma
The choice of representation plays a key role in self-driving. Bird's-eye-view (BEV) representations have shown remarkable performance in recent years. In this paper, we propose to learn object-centric representations in BEV to distill a complex scene…
External link:
http://arxiv.org/abs/2407.15843
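For orientation, a minimal bird's-eye-view rasterization in Python (hand-built occupancy only; the paper learns object-centric BEV features rather than this fixed grid):

import numpy as np

def to_bev(points, extent=20.0, cells=64):
    # Project 3-D points (x forward, y left, z up) onto a top-down grid.
    grid = np.zeros((cells, cells))
    ij = ((points[:, :2] + extent) / (2 * extent) * cells).astype(int)
    ok = (ij >= 0).all(axis=1) & (ij < cells).all(axis=1)
    for i, j in ij[ok]:
        grid[i, j] += 1                # occupancy count per cell
    return grid

pts = np.random.randn(1000, 3) * 5.0   # stand-in for lidar returns
bev = to_bev(pts)                      # 64 x 64 top-down occupancy map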
We present a unified approach to obtaining scaling limits of neural networks using the genus expansion technique from random matrix theory. This approach begins with a novel expansion of neural networks, reminiscent of Butcher series for ODEs…
External link:
http://arxiv.org/abs/2407.08459
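The genus expansion invoked above is, in its standard random-matrix form, an expansion of observables in powers of 1/N organized by surface genus, F(N) = \sum_{g \ge 0} N^{2 - 2g} F_g, where F_g collects contributions from diagrams drawable on a genus-g surface; the planar (g = 0) term dominates as the width N grows.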
Author:
Kornjača, Milan, Hu, Hong-Ye, Zhao, Chen, Wurtz, Jonathan, Weinberg, Phillip, Hamdan, Majd, Zhdanov, Andrii, Cantu, Sergio H., Zhou, Hengyun, Bravo, Rodrigo Araiza, Bagnall, Kevin, Basham, James I., Campo, Joseph, Choukri, Adam, DeAngelo, Robert, Frederick, Paige, Haines, David, Hammett, Julian, Hsu, Ning, Hu, Ming-Guang, Huber, Florian, Jepsen, Paul Niklas, Jia, Ningyuan, Karolyshyn, Thomas, Kwon, Minho, Long, John, Lopatin, Jonathan, Lukin, Alexander, Macrì, Tommaso, Marković, Ognjen, Martínez-Martínez, Luis A., Meng, Xianmei, Ostroumov, Evgeny, Paquette, David, Robinson, John, Rodriguez, Pedro Sales, Singh, Anshuman, Sinha, Nandan, Thoreen, Henry, Wan, Noel, Waxman-Lenz, Daniel, Wong, Tak, Wu, Kai-Hsin, Lopes, Pedro L. S., Boger, Yuval, Gemelke, Nathan, Kitagawa, Takuya, Keesling, Alexander, Gao, Xun, Bylinskii, Alexei, Yelin, Susanne F., Liu, Fangli, Wang, Sheng-Tao
Quantum machine learning has gained considerable attention as quantum technology advances, presenting a promising approach for efficiently learning complex data patterns. Despite this promise, most contemporary quantum methods require significant resources…
External link:
http://arxiv.org/abs/2407.02553
Central to the Transformer architecture's effectiveness is the self-attention mechanism, a function that maps queries, keys, and values into a high-dimensional vector space. However, training the attention weights of queries, keys, and values is non-…
External link:
http://arxiv.org/abs/2405.13901
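The map referenced here is the standard scaled dot-product attention, \mathrm{Attention}(Q, K, V) = \mathrm{softmax}(Q K^\top / \sqrt{d_k}) V; a minimal Python version for reference (toy shapes, single head, no learned projections):

import numpy as np

def attention(Q, K, V):
    # softmax(Q K^T / sqrt(d_k)) V, one row of weights per query position
    S = Q @ K.T / np.sqrt(K.shape[-1])
    P = np.exp(S - S.max(axis=-1, keepdims=True))
    P /= P.sum(axis=-1, keepdims=True)
    return P @ V

Q, K, V = (np.random.randn(5, 8) for _ in range(3))
out = attention(Q, K, V)               # (5, 8) context vectors

Training the projections that produce Q, K, and V is the part the abstract flags as difficult.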