TENet: Triple Excitation Network for Video Salient Object Detection
Author: Chu Han, Shengfeng He, Guoqiang Han, Sucheng Ren, Xin Yang
Year: 2020
Subject: Ground truth, Boosting (machine learning), Computer science, Object (computer science), Image (mathematics), Feature (computer vision), Salient object detection, Convergence (routing), Computer vision, Artificial intelligence, Plug-in
Source: Computer Vision – ECCV 2020, ISBN 9783030585570, ECCV (5)
Description: In this paper, we propose a simple yet effective approach, named Triple Excitation Network, to reinforce the training of video salient object detection (VSOD) from three aspects: spatial, temporal, and online excitations. These excitation mechanisms are designed in the spirit of curriculum learning and aim to reduce learning ambiguities at the beginning of training by selectively exciting feature activations using ground truth. We then gradually reduce the weight of the ground-truth excitations by a curriculum rate and replace them with a curriculum complementary map for better and faster convergence. In particular, the spatial excitation strengthens feature activations for clear object boundaries, while the temporal excitation imposes motion cues to emphasize spatio-temporal salient regions. Together, the spatial and temporal excitations combat the saliency-shifting problem and the conflict between spatial and temporal features in VSOD. Furthermore, our semi-curriculum learning design enables the first online refinement strategy for VSOD, which excites and boosts saliency responses during testing without re-training. The proposed triple excitations can easily be plugged into different VSOD methods. Extensive experiments demonstrate the effectiveness of all three excitation methods, and the proposed method outperforms state-of-the-art image and video salient object detection methods.
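As a rough illustration of the semi-curriculum excitation described above, the sketch below blends a ground-truth saliency map with the network's own prediction under a decaying curriculum weight and uses the result to gate feature activations. This is a minimal sketch, assuming PyTorch and a linear decay schedule; the function names (`curriculum_excitation`, `excite_features`) and the exact blending and gating rules are illustrative assumptions, not the authors' published formulation.

```python
import torch

def curriculum_excitation(pred_map, gt_map, step, total_steps):
    """Blend ground truth and prediction with a decaying curriculum weight.

    Hypothetical sketch: the linear schedule and convex blend stand in for
    the paper's curriculum rate and curriculum complementary map.
    """
    # Weight on ground truth decays linearly from 1 to 0 over training.
    w = max(0.0, 1.0 - step / total_steps)
    # Early training: excitation is dominated by ground truth (low ambiguity).
    # Late training: excitation is replaced by the network's own map.
    return w * gt_map + (1.0 - w) * pred_map

def excite_features(features, excitation):
    # Gate feature activations with the excitation map, broadcasting the
    # single-channel map across all feature channels.
    return features * (1.0 + excitation)

# Example usage with dummy tensors.
feats = torch.randn(2, 64, 56, 56)                 # backbone features
pred = torch.sigmoid(torch.randn(2, 1, 56, 56))    # predicted saliency map
gt = torch.randint(0, 2, (2, 1, 56, 56)).float()   # ground-truth mask
exc = curriculum_excitation(pred, gt, step=100, total_steps=1000)
excited = excite_features(feats, exc)
```

At test time no ground truth is available, so an online variant in this spirit would set `w = 0` and re-excite the features with the model's own refined prediction, consistent with the paper's claim that saliency responses can be boosted during testing without re-training.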
Database: OpenAIRE
External link: