Showing 1 - 10 of 216 results for the search: '"Hu, Yulan"'
Large Language Models (LLMs) have achieved impressive results in processing text data, which has sparked interest in applying these models beyond textual data, such as to graphs. In the field of graph learning, there is a growing interest in harnessing …
External link:
http://arxiv.org/abs/2409.20053
Enhancing the conformity of large language models (LLMs) to human preferences remains an ongoing research challenge. Recently, offline approaches such as Direct Preference Optimization (DPO) have gained prominence as attractive options due to offering …
External link:
http://arxiv.org/abs/2409.02118
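For orientation, the DPO objective that this abstract refers to can be sketched in a few lines. This is a minimal, illustrative implementation of the standard DPO loss for a single preference pair, not code from the paper; the function name, arguments, and the worked numbers are assumptions made here.

```python
import math

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair.

    Inputs are sequence-level log-probabilities of the chosen and
    rejected responses under the policy being trained and under a
    frozen reference model; beta scales the implicit reward.
    """
    # Implicit rewards: beta * log-ratio of policy to reference.
    chosen_reward = beta * (policy_logp_chosen - ref_logp_chosen)
    rejected_reward = beta * (policy_logp_rejected - ref_logp_rejected)
    # Negative log-sigmoid of the reward margin.
    margin = chosen_reward - rejected_reward
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A positive margin (chosen response preferred) yields a loss below log(2).
loss = dpo_loss(-10.0, -14.0, -11.0, -13.0, beta=0.5)
```

The loss is minimized offline over a dataset of such pairs, which is what lets DPO dispense with an explicit reward model and online sampling.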
Graph autoencoders (GAEs), as a kind of generative self-supervised learning approach, have shown great potential in recent years. GAEs typically rely on distance-based criteria, such as mean-square error (MSE), to reconstruct the input graph. However …
External link:
http://arxiv.org/abs/2406.17517
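The distance-based reconstruction criterion mentioned in the abstract can be sketched as follows: an MSE loss between the input adjacency matrix and its reconstruction from inner products of node embeddings. This is a generic illustration, not the paper's method; the decoder form (sigmoid of dot products) and names are assumptions.

```python
import math

def mse_reconstruction_loss(adj, z):
    """MSE between an input adjacency matrix `adj` (lists of 0/1) and
    its reconstruction from node embeddings `z`, where edge (i, j) is
    reconstructed as sigmoid(z_i . z_j)."""
    n = len(adj)
    total = 0.0
    for i in range(n):
        for j in range(n):
            score = sum(zi * zj for zi, zj in zip(z[i], z[j]))
            recon = 1.0 / (1.0 + math.exp(-score))  # sigmoid(z_i . z_j)
            total += (adj[i][j] - recon) ** 2
    return total / (n * n)
```

Minimizing this loss pulls the inner-product decoder toward reproducing the observed edges, which is the "distance-based" behavior the abstract contrasts with other criteria.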
Author:
Hu, Yulan, Li, Qingyang, Ouyang, Sheng, Chen, Ge, Chen, Kaihui, Mei, Lijun, Ye, Xucheng, Zhang, Fuzheng, Liu, Yong
Reinforcement Learning from Human Feedback (RLHF) facilitates the alignment of large language models (LLMs) with human preferences, thereby enhancing the quality of generated responses. A critical component of RLHF is the reward model, which is trained …
External link:
http://arxiv.org/abs/2406.16486
Graphs are ubiquitous in real-world scenarios and encompass a diverse range of tasks, from node-, edge-, and graph-level tasks to transfer learning. However, designing specific tasks for each type of graph data is often costly and lacks generalizability …
External link:
http://arxiv.org/abs/2403.14340
Class imbalance in graph data presents significant challenges for node classification. While existing methods, such as SMOTE-based approaches, partially mitigate this issue, they still exhibit limitations in constructing imbalanced graphs. Generative …
External link:
http://arxiv.org/abs/2311.01191
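The SMOTE-based approaches the abstract mentions share a core interpolation step, sketched below on node feature vectors. This is the classical SMOTE idea only; graph-specific variants additionally synthesize edges for the new nodes, which is omitted here, and all names are illustrative.

```python
import random

def smote_interpolate(minority_feats, n_new, k=2, seed=0):
    """SMOTE-style oversampling: each synthetic sample lies on the
    segment between a randomly chosen minority node and one of its
    k nearest minority neighbours (squared Euclidean distance)."""
    rng = random.Random(seed)

    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority_feats)
        neighbours = sorted((f for f in minority_feats if f is not x),
                            key=lambda f: dist2(x, f))[:k]
        nb = rng.choice(neighbours)
        lam = rng.random()  # interpolation coefficient in [0, 1)
        synthetic.append([a + lam * (b - a) for a, b in zip(x, nb)])
    return synthetic
```

Because each synthetic point is a convex combination of two minority samples, the new nodes stay inside the minority class's feature region rather than being arbitrary noise.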
Author:
Hu, Yulan, Ouyang, Sheng, Liu, Jingyu, Chen, Ge, Yang, Zhirui, Wan, Junchen, Zhang, Fuzheng, Wang, Zhongyuan, Liu, Yong
Graph contrastive learning (GCL) has emerged as a representative graph self-supervised method, achieving significant success. The currently prevalent optimization objective for GCL is InfoNCE. Typically, it employs augmentation techniques to obtain t…
External link:
http://arxiv.org/abs/2310.14525
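The InfoNCE objective named in the abstract can be sketched for a single anchor as the negative log-ratio of the positive pair's similarity score against all scores. This is a generic textbook form using cosine similarity on plain Python lists, not the paper's formulation; names and the temperature value are assumptions.

```python
import math

def info_nce(anchor, positive, negatives, tau=0.5):
    """InfoNCE for one anchor:
    -log( exp(sim(a, p)/tau) / (exp(sim(a, p)/tau)
          + sum_k exp(sim(a, n_k)/tau)) ),
    where sim is cosine similarity and tau is the temperature."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    pos = math.exp(cos(anchor, positive) / tau)
    denom = pos + sum(math.exp(cos(anchor, n) / tau) for n in negatives)
    return -math.log(pos / denom)
```

In GCL the positive is usually an augmented view of the same node and the negatives are other nodes in the batch; the temperature controls how sharply hard negatives are penalized.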
Self-Supervised Learning (SSL) has shown significant potential and has garnered increasing interest in graph learning. However, particularly for generative SSL methods, its potential in Heterogeneous Graph Learning (HGL) remains relatively underexplored …
External link:
http://arxiv.org/abs/2310.11102
Published in:
In Journal of Cultural Heritage November-December 2024 70:90-96
Benefiting from the strong ability of pre-trained models, research on Chinese Word Segmentation (CWS) has made great progress in recent years. However, due to their massive computation, large and complex models are incapable of empowering their ability …
External link:
http://arxiv.org/abs/2111.09078