TranSalNet: Towards perceptually relevant visual saliency prediction
Author: Jianxun Lou, Hanhe Lin, David Marshall, Dietmar Saupe, Hantao Liu
Year of publication: 2022
Subject: FOS: Computer and information sciences; Artificial Intelligence; Computer Vision and Pattern Recognition (cs.CV); Cognitive Neuroscience; Computer Science - Computer Vision and Pattern Recognition; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; ddc:004; Computer Science - Multimedia; Multimedia (cs.MM); Computer Science Applications
Source: Neurocomputing. 494:455-467
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2022.04.080
Description: Visual saliency prediction using transformers - Convolutional neural networks (CNNs) have significantly advanced computational modelling for saliency prediction. However, accurately simulating the mechanisms of visual attention in the human cortex remains an academic challenge. It is critical to integrate properties of human vision into the design of CNN architectures, leading to perceptually more relevant saliency prediction. Due to the inherent inductive biases of CNN architectures, they lack sufficient long-range contextual encoding capacity. This hinders CNN-based saliency models from capturing properties that emulate the viewing behaviour of humans. Transformers have shown great potential in encoding long-range information by leveraging the self-attention mechanism. In this paper, we propose a novel saliency model that integrates transformer components into CNNs to capture long-range contextual visual information. Experimental results show that the transformers provide added value to saliency prediction, enhancing the perceptual relevance of its performance. Our proposed saliency model using transformers has achieved superior results on public benchmarks and competitions for saliency prediction models. The source code of our proposed saliency model TranSalNet is available at: https://github.com/LJOVO/TranSalNet (a minimal illustrative sketch of the CNN-plus-transformer idea follows this record).
Database: OpenAIRE
External link:
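The description above outlines the core idea: a CNN extracts local features, a transformer encoder adds long-range context via self-attention, and a decoder produces the saliency map. The sketch below illustrates that idea only; the module names, channel sizes, layer counts, and the simple decoder are assumptions for illustration and do not reproduce the actual TranSalNet architecture (see the authors' repository for their implementation).

```python
# Illustrative CNN + transformer-encoder saliency model (a minimal sketch, not TranSalNet).
import torch
import torch.nn as nn


class CnnTransformerSaliency(nn.Module):
    def __init__(self, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        # CNN stem: local feature extraction with the usual convolutional locality bias.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, d_model, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Transformer encoder: self-attention over all spatial positions supplies the
        # long-range contextual encoding that plain convolutions lack.
        # (Positional embeddings are omitted here for brevity.)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # Lightweight decoder: upsample the enriched features back to a saliency map.
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
            nn.Conv2d(d_model, 1, 3, padding=1),
            nn.Sigmoid(),  # saliency values in [0, 1]
        )

    def forward(self, x):
        feats = self.cnn(x)                        # (B, C, H', W')
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H'*W', C) token sequence
        tokens = self.transformer(tokens)          # global self-attention over positions
        feats = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.decoder(feats)                 # (B, 1, H, W) saliency map


if __name__ == "__main__":
    model = CnnTransformerSaliency()
    saliency = model(torch.randn(1, 3, 256, 256))
    print(saliency.shape)  # torch.Size([1, 1, 256, 256])
```

The hybrid design keeps the CNN's efficient local feature extraction while the flattened feature tokens let self-attention relate distant image regions, which is the long-range encoding benefit the abstract attributes to transformers.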