Showing 1 - 10 of 277 for the search: '"FAN, WENQI"'
Sequential recommendation methods are crucial in modern recommender systems for their remarkable capability to understand a user's changing interests based on past interactions. However, a significant challenge faced by current methods (e.g., RNN- or …)
External link:
http://arxiv.org/abs/2409.01192
Author:
Wang, Lionel Z., Ma, Yiming, Gao, Renfei, Guo, Beichen, Li, Zhuoran, Zhu, Han, Fan, Wenqi, Lu, Zexin, Ng, Ka Chung
The advent of large language models (LLMs) has revolutionized online content creation, making it much easier to generate high-quality fake news. This misuse threatens the integrity of our digital environment and ethical standards. Therefore, understanding …
External link:
http://arxiv.org/abs/2408.11871
Deep neural networks (DNNs) have significantly boosted the performance of many challenging tasks. Despite the great development, DNNs have also exposed their vulnerability. Recent studies have shown that adversaries can manipulate the predictions of …
External link:
http://arxiv.org/abs/2408.01715
As one of the most representative DL techniques, Transformer architecture has empowered numerous advanced models, especially the large language models (LLMs) that comprise billions of parameters, becoming a cornerstone in deep learning. Despite the …
External link:
http://arxiv.org/abs/2408.01129
This paper addresses the need for improved precision in existing Retrieval-Augmented Generation (RAG) methods that primarily focus on enhancing recall. We propose a multi-layer knowledge pyramid approach within the RAG framework to achieve a better …
External link:
http://arxiv.org/abs/2407.21276
Author:
Yang, Xihong, Wang, Yiqi, Chen, Jin, Fan, Wenqi, Zhao, Xiangyu, Zhu, En, Liu, Xinwang, Lian, Defu
Deep learning has been widely applied in recommender systems, which has achieved revolutionary progress recently. However, most existing learning-based methods assume that the user and item distributions remain unchanged between the training phase and …
External link:
http://arxiv.org/abs/2407.15620
Recently, graph condensation has emerged as a prevalent technique to improve the training efficiency for graph neural networks (GNNs). It condenses a large graph into a small one such that a GNN trained on this small synthetic graph can achieve comparable …
External link:
http://arxiv.org/abs/2407.11025
Molecular property prediction (MPP) is a fundamental and crucial task in drug discovery. However, prior methods are limited by the requirement for a large number of labeled molecules and their restricted ability to generalize for unseen and new tasks …
External link:
http://arxiv.org/abs/2406.12950
There is a growing interest in utilizing large-scale language models (LLMs) to advance next-generation Recommender Systems (RecSys), driven by their outstanding language understanding and in-context learning capabilities. In this scenario, tokenizing …
External link:
http://arxiv.org/abs/2406.10450
Social recommendation models weave social interactions into their design to provide uniquely personalized recommendation results for users. However, social networks not only amplify the popularity bias in recommendation models, resulting in more frequent …
External link:
http://arxiv.org/abs/2405.16772