Two-Phase Pseudo Label Densification for Self-training Based Domain Adaptation
Author: | Sanghyun Woo, Inkyu Shin, Fei Pan, In So Kweon |
Year of publication: | 2020 |
Subject: | Domain adaptation; Bootstrapping; Sliding window; Voting; Feature (machine learning); Algorithm; Computer science |
Source: | Computer Vision – ECCV 2020, ISBN: 9783030586003, ECCV (13) |
DOI: | 10.1007/978-3-030-58601-0_32 |
Description: | Recently, deep self-training approaches have emerged as a powerful solution to unsupervised domain adaptation. The self-training scheme involves iterative processing of target data: it generates target pseudo labels and retrains the network on them. However, since only the confident predictions are taken as pseudo labels, existing self-training approaches inevitably produce sparse pseudo labels in practice. This is critical because the resulting insufficient training signal leads to a sub-optimal, error-prone model. To tackle this problem, we propose a novel Two-phase Pseudo Label Densification framework, referred to as TPLD. In the first phase, we use sliding window voting to propagate the confident predictions, exploiting the intrinsic spatial correlations in the images. In the second phase, we perform a confidence-based easy-hard classification. For the easy samples, we employ their full pseudo labels. For the hard ones, we instead adopt adversarial learning to enforce hard-to-easy feature alignment. To ease the training process and avoid noisy predictions, we introduce a bootstrapping mechanism into the original self-training loss. We show that the proposed TPLD can be easily integrated into existing self-training based approaches and significantly improves performance. Combined with the recently proposed CRST self-training framework, we achieve new state-of-the-art results on two standard UDA benchmarks. |
Database: | OpenAIRE |
External link: |
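
The description above mentions the first-phase sliding window voting that densifies sparse pseudo labels. Below is a minimal sketch of that idea for a semantic-segmentation label map, assuming NumPy arrays; the function and parameter names (`sliding_window_vote`, `window`, `min_votes`, `IGNORE`) are hypothetical illustrations, not the authors' released implementation.

```python
# A minimal sketch of sliding-window voting for pseudo-label densification.
# Assumption: sparse pseudo labels come from keeping only confident predictions,
# with all other pixels marked by an ignore value.
import numpy as np

IGNORE = 255  # marker for pixels whose prediction was not confident enough


def sliding_window_vote(pseudo_labels, confidences, num_classes,
                        window=5, min_votes=3):
    """Propagate confident predictions to unlabeled neighboring pixels.

    pseudo_labels: (H, W) int map; IGNORE marks unlabeled pixels.
    confidences:   (H, W) float map of per-pixel prediction confidence.

    Each unlabeled pixel receives the confidence-weighted majority class
    of the labeled pixels inside its local window, exploiting the spatial
    correlation between neighboring pixels in an image.
    """
    H, W = pseudo_labels.shape
    r = window // 2
    dense = pseudo_labels.copy()
    for y in range(H):
        for x in range(W):
            if pseudo_labels[y, x] != IGNORE:
                continue  # already has a confident pseudo label
            y0, y1 = max(0, y - r), min(H, y + r + 1)
            x0, x1 = max(0, x - r), min(W, x + r + 1)
            patch = pseudo_labels[y0:y1, x0:x1]
            conf = confidences[y0:y1, x0:x1]
            valid = patch != IGNORE
            if valid.sum() < min_votes:
                continue  # too few confident neighbors to hold a vote
            # Confidence-weighted vote over the labeled neighbors.
            votes = np.bincount(patch[valid], weights=conf[valid],
                                minlength=num_classes)
            dense[y, x] = votes.argmax()
    return dense
```

Note that the vote always reads from the original sparse map rather than the map being filled, so newly densified labels do not cascade within a single pass.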
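
The description also mentions introducing a bootstrapping mechanism into the self-training loss. The sketch below shows a soft-bootstrapped cross-entropy in the spirit of Reed et al. (2015); whether TPLD uses exactly this form is an assumption, and `beta` is a hypothetical mixing weight.

```python
# A minimal sketch of a soft-bootstrapped cross-entropy term (assumed form,
# not necessarily the exact TPLD loss). The target mixes the possibly noisy
# pseudo label with the model's own current prediction.
import torch
import torch.nn.functional as F


def bootstrapped_ce(logits, pseudo_labels, beta=0.95, ignore_index=255):
    """Soft-bootstrapped cross-entropy over valid pseudo-labeled pixels.

    logits:        (N, C, H, W) raw network outputs.
    pseudo_labels: (N, H, W) long tensor of pseudo labels.
    beta:          weight on the pseudo label vs. the model's prediction.
    """
    probs = F.softmax(logits, dim=1)          # (N, C, H, W)
    log_probs = F.log_softmax(logits, dim=1)  # numerically stable log
    valid = pseudo_labels != ignore_index
    target = pseudo_labels.clone()
    target[~valid] = 0  # placeholder class; these pixels are masked out below
    one_hot = F.one_hot(target, num_classes=logits.size(1))
    one_hot = one_hot.permute(0, 3, 1, 2).float()  # (N, C, H, W)
    # Soft target: beta * pseudo label + (1 - beta) * current prediction,
    # so confident self-predictions can down-weight noisy pseudo labels.
    soft_target = beta * one_hot + (1.0 - beta) * probs
    loss = -(soft_target * log_probs).sum(dim=1)  # (N, H, W)
    return loss[valid].mean()
```

In this formulation gradients flow through both the prediction term and the log-probabilities, which is the standard soft-bootstrap design choice; a variant that detaches `probs` would treat the model's prediction purely as a fixed soft target.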