Popis: |
Personalized recommendation systems have been adopted in numerous applications and have achieved satisfactory results. Nowadays, beyond the recommendation itself, it is also becoming important to provide an explanation for the recommendation, since users are more likely to accept a recommendation when they understand why it was made. Traditional approaches to explainable recommendation rely on feature extraction, explanation templates, and social information; however, such template-based explanations cannot be tailored to a user's particular interests. Among other methods, Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Transformers have been introduced into explanation systems to generate natural language explanations. With the development of Natural Language Processing (NLP), Transformer models have demonstrated strong language-modeling capabilities, particularly pre-trained language models such as the Generative Pre-trained Transformer (GPT), GPT-2, GPT-3, and the most recent GPT-4, among others. Predicting the user's next interaction item from a sequence of previously interacted items is another typical recommendation scenario, and existing methods based on RNNs, Long Short-Term Memory (LSTM), and Bidirectional Encoder Representations from Transformers (BERT) have proven to be highly accurate. Explainability in sequence-aware recommendation, however, has not yet been thoroughly researched, even though related work has investigated promising Transformer-based solutions that generate explanations from a user ID token and item ID tokens. Throughout this work, the Sequence-AWare Explainable Recommender system (SAWER) is developed to address this problem across three recommendation tasks: rating prediction, sequence prediction, and explanation generation. Concretely, the single item ID input of the baseline model is expanded into the sequence of the user's historical items. We further develop architectures based on the pre-trained Filter-enhanced MLP (FMLP-Rec) model and the GPT-2 model to improve sequential recommendation and explainability. This study examines the three proposed methods and performs the three recommendation tasks on multiple datasets concurrently in order to determine which method is most effective for SAWER. We conduct exhaustive experiments on four datasets and report the results for each task. Compared with other well-known state-of-the-art (SOTA) algorithms, the proposed SAWER model demonstrates superior performance. |
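Below is a minimal PyTorch sketch of the input expansion described above, not the thesis implementation: all dimensions, the shared user/item vocabulary layout, and the two task heads are illustrative assumptions. It shows how a single user ID token followed by the sequence of historical item ID tokens can be fed to a Transformer encoder, from which rating prediction and next-item (sequence) prediction are read off; the GPT-2-based explanation generation described in the abstract is omitted for brevity.

```python
# Illustrative sketch only; sizes and the shared user/item vocabulary are assumptions.
import torch
import torch.nn as nn

NUM_USERS, NUM_ITEMS, D_MODEL, MAX_HIST = 1000, 5000, 64, 20

class SequenceAwareEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Users and items share one embedding table; item IDs are offset by NUM_USERS.
        self.token_emb = nn.Embedding(NUM_USERS + NUM_ITEMS, D_MODEL)
        self.pos_emb = nn.Embedding(1 + MAX_HIST, D_MODEL)
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.rating_head = nn.Linear(D_MODEL, 1)             # rating prediction
        self.next_item_head = nn.Linear(D_MODEL, NUM_ITEMS)  # next-item (sequence) prediction

    def forward(self, user_ids, item_history):
        # user_ids: (B,), item_history: (B, T) of previously interacted item indices.
        tokens = torch.cat([user_ids.unsqueeze(1), item_history + NUM_USERS], dim=1)
        positions = torch.arange(tokens.size(1), device=tokens.device)
        h = self.encoder(self.token_emb(tokens) + self.pos_emb(positions))
        pooled = h[:, 0]  # contextualised user-token state summarises the history
        return self.rating_head(pooled), self.next_item_head(pooled)

model = SequenceAwareEncoder()
users = torch.randint(0, NUM_USERS, (2,))
history = torch.randint(0, NUM_ITEMS, (2, MAX_HIST))
rating, next_item_logits = model(users, history)
print(rating.shape, next_item_logits.shape)  # torch.Size([2, 1]) torch.Size([2, 5000])
```

In the actual SAWER architectures, the same expanded token sequence would presumably also condition the explanation generator (e.g., the GPT-2-based variant); this sketch only illustrates how the single item ID is replaced by the full interaction history at the input.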