Hybrid textual-visual relevance learning for content-based image retrieval

Authors: Peiguang Lin, Xiushan Nie, Yilong Yin, Chaoran Cui, Qingfeng Zhu
Year of publication: 2017
Subject:
Source: Journal of Visual Communication and Image Representation. 48:367-374
ISSN: 1047-3203
DOI: 10.1016/j.jvcir.2017.03.011
Description: Learning effective relevance measures plays a crucial role in improving the performance of content-based image retrieval (CBIR) systems. Despite decades of research effort, discovering and incorporating the semantic information of images still poses a formidable challenge to real-world CBIR systems. In this paper, we propose a novel hybrid textual-visual relevance learning method, which mines textual relevance from image tags and combines textual relevance with visual relevance for CBIR. To alleviate the sparsity and unreliability of tags, we first perform tag completion to fill in missing tags and correct noisy ones. Then, we capture users’ semantic cognition of images by representing each image as a probability distribution over the permutations of its tags. Finally, instead of early fusion, a ranking aggregation strategy is adopted to combine textual relevance and visual relevance seamlessly. Extensive experiments on two benchmark datasets verify the promise of our approach.
Database: OpenAIRE
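
The late-fusion step described in the abstract, aggregating a textual and a visual ranking rather than merging features early, can be illustrated with a minimal sketch. The snippet below fuses two rankings of candidate images with a simple Borda count; this is an illustrative stand-in, not the aggregation scheme used in the paper, and the names `aggregate_rankings`, `textual_ranking`, and `visual_ranking` are hypothetical.

```python
# Illustrative sketch of late fusion by rank aggregation (Borda count).
# NOT the paper's aggregation strategy; it only shows the general idea of
# merging a tag-based ranking and a visual-similarity ranking at rank level.

def aggregate_rankings(textual_ranking, visual_ranking):
    """Fuse two rankings (lists of image IDs, best first) by Borda count."""
    candidates = set(textual_ranking) | set(visual_ranking)
    n = len(candidates)
    scores = {img: 0 for img in candidates}
    for ranking in (textual_ranking, visual_ranking):
        for rank, img in enumerate(ranking):
            scores[img] += n - rank  # earlier (better) rank earns more points
    # Sort by fused score (descending), breaking ties by image ID
    return sorted(candidates, key=lambda img: (-scores[img], img))

if __name__ == "__main__":
    textual = ["img3", "img1", "img2"]  # ranking from tag-based relevance
    visual = ["img1", "img3", "img4"]   # ranking from visual similarity
    print(aggregate_rankings(textual, visual))
```

A rank-level fusion like this treats the two relevance channels as independent voters, so images ranked highly by both the textual and the visual measure rise to the top without requiring their scores to be on a common scale.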