Author:
Qiuyang GU, Chunhua JU, Gongxing WU
Language:
Chinese
Year of publication:
2021
Subject:

Source:
Dianxin kexue, Vol 37, pp 82-98 (2021)
Document type:
article
ISSN:
1000-0801
DOI:
10.11959/j.issn.1000-0801.2021031
Description:
Commonly used linear-structure video recommendation methods suffer from non-personalized results and low accuracy, so there is a pressing need for high-precision personalized video recommendation. A video recommendation method based on the fusion of autoencoders and multi-modal data was presented. The method fuses two modalities, text and vision, for video recommendation. Specifically, it first describes the text data with bag-of-words and TF-IDF features, then fuses these with deep convolutional descriptors extracted from the visual data, so that each video document obtains a multi-modal descriptor, and finally constructs a low-dimensional sparse representation with autoencoders. Experiments on the proposed model were performed using three real data sets. The results show that, compared with single-modal recommendation methods, the recommendations of the proposed method are significantly improved and its performance exceeds that of the reference methods.
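The following is a minimal sketch of the pipeline the abstract describes: TF-IDF text features fused with CNN visual descriptors, compressed by an autoencoder with a sparsity penalty. It is not the authors' implementation; the feature dimensions, network sizes, L1 weight, and the use of precomputed (here random placeholder) visual descriptors are illustrative assumptions.

import numpy as np
import torch
import torch.nn as nn
from sklearn.feature_extraction.text import TfidfVectorizer

# --- Text modality: bag-of-words weighted by TF-IDF ---
docs = ["a cooking tutorial video", "highlights of a football match", "a cooking show episode"]
tfidf = TfidfVectorizer()
text_feats = tfidf.fit_transform(docs).toarray().astype(np.float32)   # (n_videos, vocab_size)

# --- Visual modality: deep convolutional descriptors (placeholder) ---
# In practice these would come from a pretrained CNN applied to key frames;
# random vectors stand in here so the sketch runs on its own.
visual_feats = np.random.rand(len(docs), 2048).astype(np.float32)

# --- Fuse the two modalities into one descriptor per video ---
fused = np.concatenate([text_feats, visual_feats], axis=1)
x = torch.from_numpy(fused)

# --- Autoencoder producing a low-dimensional sparse representation ---
class AutoEncoder(nn.Module):
    def __init__(self, in_dim, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, code_dim), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))
    def forward(self, x):
        code = self.encoder(x)
        return code, self.decoder(code)

model = AutoEncoder(x.shape[1])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    code, recon = model(x)
    # Reconstruction loss plus an L1 penalty on the code to encourage sparsity.
    loss = nn.functional.mse_loss(recon, x) + 1e-3 * code.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Low-dimensional sparse codes; recommendations could then be made by
# ranking videos with cosine similarity between codes (one simple option).
codes = model.encoder(x).detach().numpy()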
Database:
Directory of Open Access Journals
External link:
