Attention-guided Temporally Coherent Video Object Matting
Author: Chi Wang, Weiwei Xu, Xuansong Xie, Peiran Ren, Xian-Sheng Hua, Miaomiao Cui, Yunke Zhang, Hujun Bao, Qixing Huang
Year of publication: 2021
Subject: FOS: Computer and information sciences; Computer Science - Computer Vision and Pattern Recognition (cs.CV); computer vision; deep learning; artificial neural networks; feature vectors; pixels; segmentation; artificial intelligence; Computing Methodologies: Image Processing and Computer Vision; Computing Methodologies: Computer Graphics; Computer Science :: Multimedia
Source: ACM Multimedia
Description: This paper proposes a novel deep learning-based video object matting method that achieves temporally coherent matting results. Its key component is an attention-based temporal aggregation module that extends the strengths of image matting networks to video matting. The module computes temporal correlations, in feature space, between pixels that are adjacent along the time axis, which makes the aggregation robust to motion noise. We also design a novel loss term to train the attention weights, which substantially boosts video matting performance. In addition, we show how to solve the trimap generation problem effectively by fine-tuning a state-of-the-art video object segmentation network on a sparse set of user-annotated keyframes. To facilitate training of the video matting and trimap generation networks, we construct a large-scale video matting dataset with 80 training and 28 validation foreground video clips with ground-truth alpha mattes. Experimental results show that our method generates high-quality alpha mattes for a variety of videos featuring appearance change, occlusion, and fast motion. Our code and dataset are available at https://github.com/yunkezhang/TCVOM (10 pages, 6 figures, MM '21 camera-ready).
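The abstract describes the temporal aggregation module only at a high level. As a rough, non-authoritative illustration of what "computing temporal correlations in feature space between pixels adjacent along the time axis" could look like, here is a minimal PyTorch sketch. The function name, the three-frame window, and the spatial search window `window` are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn.functional as F


def temporal_attention_aggregate(feat_prev, feat_cur, feat_next, window=3):
    """Blend per-pixel features from adjacent frames via attention weights.

    A hypothetical sketch, not the authors' implementation.
    feat_*: feature maps of shape (B, C, H, W) for frames t-1, t, t+1.
    window: assumed spatial search radius that lets the attention
            tolerate small motion between frames.
    """
    B, C, H, W = feat_cur.shape
    pad = window // 2

    # Gather a (window x window) neighborhood around every pixel of the
    # two adjacent frames; these act as attention keys and values.
    neighbors = torch.stack([feat_prev, feat_next], dim=1)        # (B, 2, C, H, W)
    patches = F.unfold(neighbors.flatten(0, 1),
                       kernel_size=window, padding=pad)           # (2B, C*w*w, H*W)
    patches = patches.view(B, 2, C, window * window, H * W)
    patches = patches.permute(0, 4, 1, 3, 2)                      # (B, HW, 2, w*w, C)
    patches = patches.reshape(B, H * W, 2 * window * window, C)

    # The current frame's features serve as per-pixel queries.
    query = feat_cur.flatten(2).transpose(1, 2).unsqueeze(2)      # (B, HW, 1, C)

    # Scaled dot-product similarity in feature space, softmax over the
    # temporal neighbors, then a weighted sum to aggregate features.
    logits = (query * patches).sum(-1) / C ** 0.5                 # (B, HW, 2*w*w)
    attn = logits.softmax(dim=-1)
    agg = (attn.unsqueeze(-1) * patches).sum(dim=2)               # (B, HW, C)
    return agg.transpose(1, 2).view(B, C, H, W)
```

The aggregated features could then be fused with `feat_cur` (e.g., by concatenation) before a matting head; per the abstract, the attention weights are additionally supervised by the proposed loss term.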
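The trimap generation step is likewise only summarized. Below is a hedged sketch of fine-tuning a pretrained segmentation network on sparse user-annotated keyframes with a three-class (background / unknown / foreground) head; `vos_net`, the label encoding, and all hyperparameters are illustrative assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn


def finetune_trimap_net(vos_net, keyframes, trimaps, epochs=50, lr=1e-5):
    """Fine-tune a pretrained segmentation network to predict trimaps.

    A hypothetical sketch: vos_net is any segmentation backbone whose
    output head predicts 3 classes; keyframes is a batch of annotated
    frames (N, 3, H, W); trimaps holds per-pixel labels (N, H, W) with
    the assumed encoding {0: background, 1: unknown, 2: foreground}.
    """
    optimizer = torch.optim.Adam(vos_net.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()  # per-pixel 3-class loss
    vos_net.train()
    for _ in range(epochs):
        logits = vos_net(keyframes)        # (N, 3, H, W)
        loss = criterion(logits, trimaps)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return vos_net
```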
Database: OpenAIRE
External link: