Popis: |
This paper develops a new algorithm for more effective cross-modal image-tag relevance learning on large-scale social images, integrating multimodal feature representation, multimodal relevance measurement, and cross-modal relevance fusion. The main contribution of our work is a more principled basis for learning cross-modal relevance among social images, obtained by integrating multimodal image and tag relevance computed from multiple features in different modalities. Experiments on a large collection of public social image data yielded very positive results. |
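
The abstract describes integrating image and tag relevance computed in several modalities; the sketch below illustrates one simple way such cross-modal relevance fusion could look, namely a weighted late fusion of per-modality cosine relevance scores. This is a minimal, assumption-based illustration only: the function names, feature dimensions, and fusion weights are hypothetical and are not taken from the paper.

    # Minimal illustrative sketch (not the authors' implementation): late fusion
    # of per-modality image-tag relevance scores. All names, dimensions, and
    # weights below are hypothetical assumptions for illustration only.
    import numpy as np

    def cosine_relevance(feature_vec, tag_prototype):
        """Relevance of an image feature vector to a tag prototype in one modality."""
        denom = np.linalg.norm(feature_vec) * np.linalg.norm(tag_prototype)
        return float(feature_vec @ tag_prototype / denom) if denom else 0.0

    def fuse_cross_modal_relevance(modal_scores, weights):
        """Weighted fusion of per-modality relevance scores into one cross-modal score."""
        total = sum(weights.values())
        return sum(weights[m] * modal_scores[m] for m in modal_scores) / total

    # Example: visual and textual relevance of one image to a single tag.
    visual_score = cosine_relevance(np.random.rand(128), np.random.rand(128))
    textual_score = cosine_relevance(np.random.rand(300), np.random.rand(300))
    score = fuse_cross_modal_relevance(
        {"visual": visual_score, "textual": textual_score},
        weights={"visual": 0.6, "textual": 0.4},  # assumed weights, e.g. tuned or learned
    )
    print(f"fused cross-modal relevance: {score:.3f}")

In practice the per-modality relevance functions and the fusion weights would be whatever the paper's multimodal relevance measurement and cross-modal relevance fusion steps prescribe; the weighted average above is only a stand-in to make the fusion idea concrete.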