Showing 1 - 10 of 142 for search: '"Guo, Guibing"'
Author:
Yang, Enneng, Shen, Li, Wang, Zhenyi, Guo, Guibing, Wang, Xingwei, Cao, Xiaocun, Zhang, Jie, Tao, Dacheng
Model merging-based multitask learning (MTL) offers a promising approach for performing MTL by merging multiple expert models without requiring access to raw training data. However, in this paper, we examine the merged model's representation distribution…
External link:
http://arxiv.org/abs/2410.14389
Author:
Dang, Yizhou, Yang, Enneng, Liu, Yuting, Guo, Guibing, Jiang, Linying, Zhao, Jianzhe, Wang, Xingwei
As an essential branch of recommender systems, sequential recommendation (SR) has received much attention due to its well-consistency with real-world situations. However, the widespread data sparsity issue limits the SR model's performance. Therefore…
External link:
http://arxiv.org/abs/2409.13545
Author:
Liu, Yuting, Zhang, Jinghao, Dang, Yizhou, Liang, Yuliang, Liu, Qiang, Guo, Guibing, Zhao, Jianzhe, Wang, Xingwei
Involving collaborative information in Large Language Models (LLMs) is a promising technique for adapting LLMs for recommendation. Existing methods achieve this by concatenating collaborative features with text tokens into a unified sequence input and…
External link:
http://arxiv.org/abs/2408.10645
Model merging is an efficient empowerment technique in the machine learning community that does not require the collection of raw training data and does not require expensive computation. As model merging becomes increasingly prevalent across various…
External link:
http://arxiv.org/abs/2408.07666
Graph Contrastive Learning (GCL) leverages data augmentation techniques to produce contrasting views, enhancing the accuracy of recommendation systems through learning the consistency between contrastive views. However, existing augmentation methods…
External link:
http://arxiv.org/abs/2408.02691
Author:
Zhao, Chu, Yang, Enneng, Liang, Yuliang, Lan, Pengxiang, Liu, Yuting, Zhao, Jianzhe, Guo, Guibing, Wang, Xingwei
Graph Neural Networks (GNNs)-based recommendation algorithms typically assume that training and testing data are drawn from independent and identically distributed (IID) spaces. However, this assumption often fails in the presence of out-of-distribution…
External link:
http://arxiv.org/abs/2408.00490
Author:
Lan, Pengxiang, Yang, Enneng, Liu, Yuting, Guo, Guibing, Jiang, Linying, Zhao, Jianzhe, Wang, Xingwei
Prompt tuning is a promising method to fine-tune a pre-trained language model without retraining its large-scale parameters. Instead, it attaches a soft prompt to the input text, whereby downstream tasks can be well adapted by merely learning the embeddings…
External link:
http://arxiv.org/abs/2405.11464
Author:
Liu, Yuting, Dang, Yizhou, Liang, Yuliang, Liu, Qiang, Guo, Guibing, Zhao, Jianzhe, Wang, Xingwei
Recently, sign-aware graph recommendation has drawn much attention as it will learn users' negative preferences besides positive ones from both positive and negative interactions (i.e., links in a graph) with items. To accommodate the different semantics…
External link:
http://arxiv.org/abs/2403.08246
Author:
Dang, Yizhou, Liu, Yuting, Yang, Enneng, Guo, Guibing, Jiang, Linying, Wang, Xingwei, Zhao, Jianzhe
Sequential recommendation aims to provide users with personalized suggestions based on their historical interactions. When training sequential models, padding is a widely adopted technique for two main reasons: 1) The vast majority of models can only…
External link:
http://arxiv.org/abs/2403.06372
Recently, the powerful large language models (LLMs) have been instrumental in propelling the progress of recommender systems (RS). However, while these systems have flourished, their susceptibility to security threats has been largely overlooked…
External link:
http://arxiv.org/abs/2402.14836