Cross-Modal Saliency Correlation for Image Annotation
Author: Jie Yang, Haoyang Xue, Yun Gu
Year of publication: 2016
Subject: Information retrieval, Crossmodal, Computer Networks and Communications, Computer science, General Neuroscience, Pattern recognition, Computational intelligence, Correlation, Automatic image annotation, Artificial intelligence, Salient, Software
Source: Neural Processing Letters, 45:777-789
ISSN: 1573-773X, 1370-4621
Description: Automatic image annotation is an attractive service for users and administrators of online photo sharing websites. In this paper, we propose an image annotation approach that exploits cross-modal saliency correlation, covering both visual and textual saliency. For textual saliency, a concept graph is first established based on the associations between labels; semantic communities and latent textual saliency are then detected. For visual saliency, we adopt a dual-layer bag-of-words (DL-BoW) model that integrates local features with the salient regions of the image. Experiments on the MIRFlickr and IAPR TC-12 datasets demonstrate that the proposed method outperforms other state-of-the-art approaches.
Database: OpenAIRE
External link:
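The abstract above describes the textual-saliency step only at a high level. Below is a minimal, hypothetical Python sketch of one plausible reading of that step: building a label co-occurrence ("concept") graph from per-image tag lists and scoring labels within detected semantic communities. The function names, the choice of greedy modularity community detection via networkx, and the degree-based saliency score are illustrative assumptions, not the authors' implementation; the fusion with visual saliency from the DL-BoW model is not shown.

```python
# Hypothetical sketch of the textual-saliency step (assumptions noted above).
from collections import Counter
from itertools import combinations

import networkx as nx


def build_concept_graph(tag_lists):
    """Build a graph whose nodes are labels and whose edge weights count
    how often two labels co-occur on the same image."""
    cooccur = Counter()
    for tags in tag_lists:
        for a, b in combinations(sorted(set(tags)), 2):
            cooccur[(a, b)] += 1
    graph = nx.Graph()
    for (a, b), w in cooccur.items():
        graph.add_edge(a, b, weight=w)
    return graph


def textual_saliency(graph):
    """Score each label by its weighted degree, normalised within its
    semantic community (an assumed proxy for latent textual saliency)."""
    communities = nx.community.greedy_modularity_communities(graph, weight="weight")
    scores = {}
    for community in communities:
        degrees = {n: graph.degree(n, weight="weight") for n in community}
        total = sum(degrees.values()) or 1.0
        for node, degree in degrees.items():
            scores[node] = degree / total
    return scores


if __name__ == "__main__":
    # Toy tag lists standing in for MIRFlickr-style annotations.
    tags_per_image = [
        ["sky", "cloud", "sunset"],
        ["sky", "cloud", "mountain"],
        ["dog", "grass", "park"],
        ["dog", "park", "sky"],
    ]
    g = build_concept_graph(tags_per_image)
    print(textual_saliency(g))
```

The normalisation within each community keeps saliency scores comparable across semantic groups of different sizes; any other centrality measure (e.g. PageRank on the weighted graph) could be substituted for the weighted degree used here.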