Showing 1 - 10 of 95 for the search: '"Mostafa, Hesham A."'
Training Graph Neural Networks (GNNs) on large graphs presents unique challenges due to the large memory and computing requirements. Distributed GNN training, where the graph is partitioned across multiple machines, is a common approach to training GNNs…
External link:
http://arxiv.org/abs/2406.17611
Author:
Zhao, Jianan, Mostafa, Hesham, Galkin, Mikhail, Bronstein, Michael, Zhu, Zhaocheng, Tang, Jian
Foundation models that can perform inference on any new task without requiring specific training have revolutionized machine learning in vision and language applications. However, applications involving graph-structured data remain a tough challenge for foundation models…
External link:
http://arxiv.org/abs/2405.20445
The floorplanning of Systems-on-a-Chip (SoCs) and of chip sub-systems is a crucial step in the physical design flow, as it determines the optimal shapes and locations of the blocks that make up the system. Simulated Annealing (SA) has been the method…
External link:
http://arxiv.org/abs/2405.05495
Floorplanning for systems-on-a-chip (SoCs) and their sub-systems is a crucial and non-trivial step of the physical design flow. It represents a difficult combinatorial optimization problem. A typical large-scale SoC with 120 partitions generates a search…
External link:
http://arxiv.org/abs/2405.05480
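The Simulated Annealing (SA) approach named in the two floorplanning entries above can be sketched generically. The following is an illustrative sketch only, not the papers' floorplanner: it shows the core SA loop (accept worse moves with probability exp(-delta/T), cool the temperature geometrically) on a toy one-dimensional placement problem.

```python
import math
import random

def simulated_annealing(cost, neighbor, state, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Generic simulated annealing loop (illustrative sketch).

    Accepts a worse candidate with probability exp(-delta / T) and
    cools the temperature geometrically, tracking the best state seen.
    """
    rng = random.Random(seed)
    best = current = state
    t = t0
    for _ in range(steps):
        candidate = neighbor(current, rng)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept worse moves with
        # probability exp(-delta / T) to escape local minima.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= cooling  # geometric cooling schedule
    return best

# Toy "placement": minimize (x - 3)^2 by randomly nudging x.
result = simulated_annealing(
    cost=lambda x: (x - 3.0) ** 2,
    neighbor=lambda x, rng: x + rng.uniform(-0.5, 0.5),
    state=0.0,
)
```

A real floorplanner would replace the toy cost with area/wirelength objectives and the neighbor function with block moves, swaps, and rotations; the annealing skeleton stays the same.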
Foundation models in language and vision have the ability to run inference on any textual and visual inputs thanks to transferable representations, such as a vocabulary of tokens in language. Knowledge graphs (KGs) have different entity and relation…
External link:
http://arxiv.org/abs/2310.04562
Dynamic scene graph generation from a video is challenging due to the temporal dynamics of the scene and the inherent temporal fluctuations of predictions. We hypothesize that capturing long-term temporal dependencies is the key to effective generation…
External link:
http://arxiv.org/abs/2112.09828
Author:
Mostafa, Hesham
We present the Sequential Aggregation and Rematerialization (SAR) scheme for distributed full-batch training of Graph Neural Networks (GNNs) on large graphs. Large-scale training of GNNs has recently been dominated by sampling-based methods and methods…
External link:
http://arxiv.org/abs/2111.06483
Author:
Abu-El-Haija, Sami, Mostafa, Hesham, Nassar, Marcel, Crespi, Valentino, Steeg, Greg Ver, Galstyan, Aram
Published in:
Advances in Neural Information Processing Systems (NeurIPS) 2021
Recent improvements in the performance of state-of-the-art (SOTA) methods for Graph Representational Learning (GRL) have come at the cost of significant computational resource requirements for training, e.g., for calculating gradients via backprop over…
External link:
http://arxiv.org/abs/2111.06312
Many recent works have studied the performance of Graph Neural Networks (GNNs) in the context of graph homophily, a label-dependent measure of connectivity. Traditional GNNs generate node embeddings by aggregating information from a node's neighbors…
External link:
http://arxiv.org/abs/2106.03213
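The neighbor aggregation described in the entry above is the basic message-passing step shared by traditional GNNs. Below is a minimal mean-aggregation layer as an illustrative sketch (not the paper's implementation; real layers add nonlinearities, normalization, and learned aggregation):

```python
import numpy as np

def gnn_layer(features, adj, weight):
    """One message-passing layer: each node averages the features of
    its neighbors (including itself), then applies a linear transform.

    features: (n_nodes, d_in) node feature matrix
    adj:      (n_nodes, n_nodes) binary adjacency matrix
    weight:   (d_in, d_out) learned projection
    """
    # Add self-loops so each node's own features are included.
    adj_hat = adj + np.eye(adj.shape[0])
    # Row-normalize: mean over each node's self-inclusive neighborhood.
    deg = adj_hat.sum(axis=1, keepdims=True)
    aggregated = (adj_hat / deg) @ features
    return aggregated @ weight

# Tiny example: 3 nodes on a path 0-1-2, with 2 features each.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
W = np.eye(2)  # identity projection for readability
out = gnn_layer(X, A, W)
```

Because the layer mixes each node's features with its neighbors', its usefulness depends on whether neighbors share labels, which is exactly the homophily question the entry above studies.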
Convolutional layers are an integral part of many deep neural network solutions in computer vision. Recent work shows that replacing the standard convolution operation with mechanisms based on self-attention leads to improved performance on image classification…
External link:
http://arxiv.org/abs/2012.09904