Neural networks based visual attention model for surveillance videos
Author: Fahad Fazal Elahi Guraya, Faouzi Alaya Cheikh
Year of publication: 2015
Subject: Artificial neural network, Computer science, Cognitive neuroscience, Image processing and computer vision, Gaze, Artificial intelligence, Salience (neuroscience), Visual attention, Eye tracking, Computer vision, Sensory cue
Source: Neurocomputing. 149:1348-1359
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2014.08.062
Description: In this paper we propose a novel Computational Attention Model (CAM) that fuses bottom-up, top-down and salient-motion visual cues to compute visual salience in surveillance videos. When a system deals with a number of visual features/cues, combining or fusing them is always challenging: there is no commonly agreed, natural way of combining the conspicuity maps obtained from different features (face and motion, for example). The challenge is thus to find the right mix of visual cues that yields a salience map as close as possible to a corresponding gaze map. In the literature, many CAMs have used fixed weights for combining the different visual cues. This is computationally attractive but is a very crude way of combining the cues; furthermore, the weights are typically set in an ad hoc fashion. In this paper we therefore propose a machine learning approach that uses an Artificial Neural Network (ANN) to estimate these weights. The ANN is trained on gaze maps obtained by eye tracking in psychophysical experiments. The learned weights are then used to combine the conspicuities of the different visual cues in our CAM, which is subsequently applied to surveillance videos. The proposed model is designed to consider the important visual cues typically present in surveillance videos and to combine their conspicuities via the ANN. The obtained results are encouraging and show a clear improvement over state-of-the-art CAMs.
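The fusion step described in the abstract can be sketched numerically. The following is a minimal illustration only, under stated assumptions: a single linear fusion layer standing in for the paper's ANN, three synthetic conspicuity maps standing in for the bottom-up, top-down and motion cues, and a synthetic gaze map as the training target. All names and shapes here are hypothetical; the paper's actual network architecture and training data are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three synthetic conspicuity maps (H x W), stand-ins for the bottom-up,
# top-down and salient-motion cue maps of the abstract.
H, W = 16, 16
cues = rng.random((3, H, W))

# A synthetic "gaze map" generated from hidden weights, so that the
# learner has something recoverable (a real gaze map would come from
# eye-tracking experiments).
true_w = np.array([0.6, 0.3, 0.1])
gaze = np.tensordot(true_w, cues, axes=1)

# Learn the fusion weights by minimising per-pixel mean squared error
# between the fused map and the gaze map (gradient descent on a single
# linear layer).
w = np.full(3, 1.0 / 3.0)          # start from fixed equal weights
lr = 0.5
X = cues.reshape(3, -1)            # (3, H*W): one row per cue
y = gaze.ravel()
for _ in range(500):
    pred = w @ X                   # fused salience, flattened
    grad = 2.0 * X @ (pred - y) / y.size
    w -= lr * grad

# Fused salience map using the learned weights.
salience = (w @ X).reshape(H, W)
```

The contrast with fixed-weight fusion is visible in the initial value of `w` (equal weights) versus its trained value, which moves toward the weights implied by the gaze data.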
Database: OpenAIRE
External link: