Showing 1 - 10 of 83 for search: '"Chen, Dexiong"'
We explore the application of Vision Transformer (ViT) for handwritten text recognition. The limited availability of labeled data in this domain poses challenges for achieving high performance relying solely on ViT. Previous transformer-based models …
External link:
http://arxiv.org/abs/2409.08573
Message-passing graph neural networks (GNNs) excel at capturing local relationships but struggle with long-range dependencies in graphs. In contrast, graph transformers (GTs) enable global information exchange but often oversimplify the graph structure …
External link:
http://arxiv.org/abs/2406.03386
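The local-vs-global trade-off in the snippet above can be seen in a toy sketch: with generic mean-aggregation message passing (not the paper's model), information travels only one hop per layer, so distant nodes stay invisible for several rounds.

```python
import numpy as np

# Adjacency of a 4-node path graph: 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

X = np.eye(4)  # one-hot node features

def message_pass(A, X):
    # Mean-aggregate neighbor features, self-loop included
    # (a generic GNN layer, purely illustrative).
    A_hat = A + np.eye(len(A))
    deg = A_hat.sum(axis=1, keepdims=True)
    return A_hat @ X / deg

H1 = message_pass(A, X)
H2 = message_pass(A, H1)
# Node 3 is three hops from node 0, so after two rounds
# node 0 still carries no signal from node 3:
print(H1[0, 3], H2[0, 3])  # 0.0 0.0
```

A graph transformer, by contrast, lets every node attend to every other node in a single layer, which is exactly the global exchange the snippet contrasts with message passing.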
In this paper, we revisit techniques for uncertainty estimation within deep neural networks and consolidate a suite of techniques to enhance their reliability. Our investigation reveals that an integrated application of diverse techniques, spanning …
External link:
http://arxiv.org/abs/2403.00543
Understanding the relationships between protein sequence, structure and function is a long-standing biological challenge with manifold implications from drug design to our understanding of evolution. Recently, protein language models have emerged as …
External link:
http://arxiv.org/abs/2401.14819
Attention-based graph neural networks (GNNs), such as graph attention networks (GATs), have become popular neural architectures for processing graph-structured data and learning node embeddings. Despite their empirical success, these models rely on …
External link:
http://arxiv.org/abs/2305.07580
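For context on the GAT-style attention the snippet above mentions, here is a minimal single-head sketch of how a node weights its neighbors: score each pair with a LeakyReLU of a learned vector applied to the concatenated features, then softmax over the neighborhood. Simplified for illustration (no learned projection, no multi-head); all names here are hypothetical.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def attention_weights(h_i, h_neighbors, a):
    # e_ij = LeakyReLU(a . [h_i || h_j]); alpha_ij = softmax_j(e_ij)
    scores = np.array([leaky_relu(a @ np.concatenate([h_i, h_j]))
                       for h_j in h_neighbors])
    exp = np.exp(scores - scores.max())  # stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(0)
h_i = rng.normal(size=4)            # center node features
neigh = [rng.normal(size=4) for _ in range(3)]  # 3 neighbors
a = rng.normal(size=8)              # attention vector over [h_i || h_j]
alpha = attention_weights(h_i, neigh, a)
print(alpha.sum())  # attention weights sum to 1
```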
We introduce Joint Multidimensional Scaling, a novel approach for unsupervised manifold alignment, which maps datasets from two different domains, without any known correspondences between data instances across the datasets, to a common low-dimensional …
External link:
http://arxiv.org/abs/2207.02968
Frequent and structurally related subgraphs, also known as network motifs, are valuable features of many graph datasets. However, the high computational complexity of identifying motif sets in arbitrary datasets (motif mining) has limited their use …
External link:
http://arxiv.org/abs/2206.01008
Published in:
Pattern Recognition, February 2025, vol. 158
The Transformer architecture has gained growing attention in graph representation learning recently, as it naturally overcomes several limitations of graph neural networks (GNNs) by avoiding their strict structural inductive biases and instead only …
External link:
http://arxiv.org/abs/2202.03036
We show that viewing graphs as sets of node features and incorporating structural and positional information into a transformer architecture is able to outperform representations learned with classical graph neural networks (GNNs). Our model, GraphiT, …
External link:
http://arxiv.org/abs/2106.05667
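The "graph as a set of node features plus positional information" idea in the last snippet can be sketched generically: augment node features with Laplacian-eigenvector positional encodings, then run plain dot-product self-attention over the node set. This is an illustrative construction under common conventions, not GraphiT's actual architecture (which uses kernel-based attention).

```python
import numpy as np

# Toy graph: triangle 0-1-2 with a pendant node 3 attached to 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A                        # combinatorial graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)
pos_enc = eigvecs[:, 1:3]        # skip the constant eigenvector

X = np.eye(4)                    # toy node features
X_in = np.concatenate([X, pos_enc], axis=1)  # features || positions

def self_attention(X):
    # Unparameterized scaled dot-product attention over the node set;
    # the graph structure enters only through the positional encodings.
    scores = X @ X.T / np.sqrt(X.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ X

out = self_attention(X_in)
print(out.shape)  # (4, 6)
```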