Person Foreground Segmentation by Learning Multi-Domain Networks
Author: | Kan Guo, Zhiyuan Liang, Xiaobo Li, Xiaogang Jin, Jianbing Shen |
---|---|
Year: | 2022 |
Subject: | Computer science; pattern recognition; image segmentation; computer graphics and computer-aided design; multi-domain learning; consistency; scalability; image processing, computer-assisted; humans; segmentation; artificial intelligence; algorithms; software |
Source: | IEEE Transactions on Image Processing, 31:585-597 |
ISSN: | 1057-7149 (print); 1941-0042 (electronic) |
DOI: | 10.1109/tip.2021.3097169 |
Description: | Separating the dominant person from a complex background is important for human-centric research and photo-editing applications. Existing segmentation algorithms are either too general to separate the person region accurately or unable to run in real time. In this paper, we introduce a multi-domain learning framework into a novel baseline model to construct Multi-domain TriSeNet networks for real-time single-person image segmentation. We first divide the training data into subdomains based on the characteristics of single-person images, then apply a multi-branch Feature Fusion Module (FFM) to decouple the network into domain-independent and domain-specific layers. To further improve accuracy, we propose a self-supervised learning strategy that uncovers domain relations during training; it transfers domain-specific knowledge by improving predictive consistency among the FFM branches. Moreover, we create a large-scale single-person image segmentation dataset named MSSP20k, which consists of 22,100 pixel-level annotated real-world images. MSSP20k is more complex and challenging than existing public datasets in terms of scalability and variety. Experiments show that the Multi-domain TriSeNet outperforms state-of-the-art approaches on both public and the newly built datasets while running in real time. |
Database: | OpenAIRE |
External link: |
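
The description above outlines a multi-branch Feature Fusion Module that decouples a segmentation network into shared domain-independent layers and per-subdomain domain-specific branches, trained with a cross-branch predictive-consistency objective. The following is a minimal PyTorch sketch of that idea, not the authors' TriSeNet: the module names, channel widths, number of branches, and the choice of an MSE consistency term against the detached branch mean are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of a multi-domain segmentation net:
# a shared, domain-independent encoder feeds several domain-specific branches,
# and a consistency loss encourages the branch predictions to agree.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiDomainSegNet(nn.Module):
    def __init__(self, num_domains: int = 3, channels: int = 64):
        super().__init__()
        # Domain-independent layers shared by every subdomain (assumed depth).
        self.shared = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # One domain-specific branch per training subdomain; each predicts
        # a per-pixel foreground/background logit map.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, 1, 1) for _ in range(num_domains)
        )

    def forward(self, x: torch.Tensor) -> list[torch.Tensor]:
        feats = self.shared(x)
        return [branch(feats) for branch in self.branches]


def consistency_loss(logits: list[torch.Tensor]) -> torch.Tensor:
    """Encourage predictive consistency among branches by pulling each
    branch's foreground probability toward the detached branch mean."""
    probs = [torch.sigmoid(l) for l in logits]
    mean = torch.stack(probs).mean(dim=0).detach()
    return sum(F.mse_loss(p, mean) for p in probs) / len(probs)


if __name__ == "__main__":
    net = MultiDomainSegNet()
    images = torch.randn(2, 3, 128, 128)   # dummy batch
    outputs = net(images)                   # one logit map per branch
    loss = consistency_loss(outputs)        # cross-branch agreement term
    print(loss.item())
```

In training, this consistency term would be added to the usual per-branch supervised segmentation loss; detaching the branch mean keeps each branch from collapsing the shared target toward its own prediction.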