Author:
Han, Di; Huang, Yifan; Liu, Junmin; Liao, Kai; Lin, Kunling
Subject:
Source:
ACM Transactions on Knowledge Discovery from Data; Apr 2024, Vol. 18, Issue 3, p1-20, 20p
Abstract:
Because the weights of a self-attention model are not affected by the interval between items in a sequence, such a model can describe user interests more accurately and completely, and it is therefore widely used in sequential recommendation. However, mainstream self-attention models focus on the similarity between items when computing the attention weights over user behavioral patterns, and fail to promptly reflect the impact of sudden drifts in user decisions. In this article, we introduce a bias strategy into the self-attention module, referred to as Learning Self-Attention Bias (LSAB), to learn fast-changing user behavioral patterns more accurately. LSAB adjusts the bias arising from the self-attention weights, leading to improved prediction performance in sequential recommendation. In addition, this article designs four types of attention-weight bias that cater to diverse user behavior preferences. In tests on benchmark datasets, each bias strategy in LSAB proved useful for state-of-the-art models and improved their performance by nearly 5% on average. The source code is publicly available at https://gitee.com/kyle-liao/lsab. [ABSTRACT FROM AUTHOR]
Database:
Complementary Index |
External link:
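
A minimal sketch of the general idea described in the abstract: adjusting similarity-based self-attention weights with a learnable additive bias on the attention logits. This is illustrative only; the class name BiasedSelfAttention, the per-position-pair parameter attn_bias, and the zero initialization are assumptions, and the four specific bias types designed in the paper are not reproduced here (see the gitee repository above for the authors' implementation).

```python
import torch
import torch.nn as nn

class BiasedSelfAttention(nn.Module):
    """Self-attention with a learnable additive bias on the attention
    logits. Sketch of the general technique, not the paper's LSAB."""

    def __init__(self, d_model: int, max_len: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # Hypothetical bias: one learnable scalar per (query position,
        # key position) pair, initialized to zero so training starts
        # from standard similarity-based self-attention.
        self.attn_bias = nn.Parameter(torch.zeros(max_len, max_len))
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        n = x.size(1)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        # Similarity-based attention logits, as in standard self-attention.
        logits = torch.matmul(q, k.transpose(-2, -1)) * self.scale
        # Additive learned bias adjusts the similarity-only weights.
        logits = logits + self.attn_bias[:n, :n]
        weights = torch.softmax(logits, dim=-1)
        return torch.matmul(weights, v)

# Usage: a batch of 2 sequences, each with 10 items of dimension 64.
attn = BiasedSelfAttention(d_model=64, max_len=50)
out = attn(torch.randn(2, 10, 64))  # -> shape (2, 10, 64)
```

Because the bias is added before the softmax, a zero-initialized attn_bias leaves the model identical to plain self-attention at the start of training; the bias parameters then learn to reweight position pairs where item similarity alone is a poor predictor, which is one plausible way to account for sudden drifts in user decisions.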