Hybrid textual-visual relevance learning for content-based image retrieval
Author: Peiguang Lin, Xiushan Nie, Yilong Yin, Chaoran Cui, Qingfeng Zhu
Publication year: 2017
Subject: Information retrieval; Computer science; Image processing and computer vision; Software engineering; Content-based image retrieval; Relevance learning; Ranking (information retrieval); Signal processing; Media technology; Benchmark (computing); Probability distribution; Artificial intelligence and image processing; Relevance (information retrieval); Computer vision and pattern recognition; Electrical and electronic engineering; Image retrieval
Source: Journal of Visual Communication and Image Representation, 48:367-374
ISSN: 1047-3203
DOI: 10.1016/j.jvcir.2017.03.011
Description: Learning effective relevance measures plays a crucial role in improving the performance of content-based image retrieval (CBIR) systems. Despite decades of research effort, discovering and incorporating the semantic information of images still poses a formidable challenge for real-world CBIR systems. In this paper, we propose a novel hybrid textual-visual relevance learning method, which mines textual relevance from image tags and combines textual and visual relevance for CBIR. To alleviate the sparsity and unreliability of tags, we first perform tag completion to fill in missing tags and correct noisy tags. Then, we capture users' semantic cognition of images by representing each image as a probability distribution over permutations of its tags. Finally, instead of early fusion, a ranking aggregation strategy is adopted to combine textual and visual relevance seamlessly. Extensive experiments on two benchmark datasets verify the promise of our approach.
Database: OpenAIRE
External link:
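
The description above mentions a late-fusion ranking-aggregation step that combines textual and visual relevance. The paper's actual aggregation strategy is not specified in this record, so the following Python sketch only illustrates the general idea with a simple weighted Borda-count fusion of two ranked lists; the function names, weight parameter, and image IDs are illustrative assumptions, not the authors' implementation.

```python
# A minimal, hypothetical sketch of late-fusion rank aggregation: given two ranked
# lists of image IDs for a query (one from tag-based textual relevance, one from
# visual relevance), fuse them with a weighted Borda count.

def borda_scores(ranked_ids):
    """Assign Borda points: the top-ranked item gets n-1 points, the last gets 0."""
    n = len(ranked_ids)
    return {img_id: n - 1 - rank for rank, img_id in enumerate(ranked_ids)}

def aggregate_rankings(textual_ranking, visual_ranking, alpha=0.5):
    """Fuse two ranked lists of image IDs with a weighted Borda count.

    alpha weights the textual ranking; (1 - alpha) weights the visual one.
    Items missing from one list contribute 0 points from that list.
    """
    t_scores = borda_scores(textual_ranking)
    v_scores = borda_scores(visual_ranking)
    candidates = set(textual_ranking) | set(visual_ranking)
    fused = {
        img_id: alpha * t_scores.get(img_id, 0) + (1 - alpha) * v_scores.get(img_id, 0)
        for img_id in candidates
    }
    # Return image IDs sorted by fused score, best first.
    return sorted(candidates, key=lambda img_id: fused[img_id], reverse=True)

if __name__ == "__main__":
    textual = ["img3", "img1", "img5", "img2"]   # ranking from tag-based relevance
    visual = ["img1", "img2", "img3", "img4"]    # ranking from visual similarity
    print(aggregate_rankings(textual, visual, alpha=0.6))
```

Because the fusion operates on rankings rather than raw similarity scores, the two relevance signals do not need to be calibrated to a common scale, which is one common motivation for choosing rank aggregation over early feature-level fusion.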