Three Ways to Improve Semantic Segmentation with Self-Supervised Depth Estimation
Author: | Yuhua Chen, Suman Saha, Adrian Köring, Lukas Hoyer, Dengxin Dai, Luc Van Gool |
---|---|
Year of publication: | 2021 |
Subject: | FOS: Computer and information sciences; Computer Science - Computer Vision and Pattern Recognition (cs.CV); Pattern recognition; Image segmentation; Semantics; Segmentation; Feature (computer vision); Artificial intelligence |
Source: | CVPR |
DOI: | 10.1109/cvpr46437.2021.01098 |
Description: | Training deep networks for semantic segmentation requires large amounts of labeled training data, which presents a major challenge in practice, as labeling segmentation masks is a highly labor-intensive process. To address this issue, we present a framework for semi-supervised semantic segmentation, which is enhanced by self-supervised monocular depth estimation from unlabeled image sequences. In particular, we propose three key contributions: (1) We transfer knowledge from features learned during self-supervised depth estimation to semantic segmentation, (2) we implement a strong data augmentation by blending images and labels using the geometry of the scene, and (3) we utilize the depth feature diversity as well as the level of difficulty of learning depth in a student-teacher framework to select the most useful samples to be annotated for semantic segmentation. We validate the proposed model on the Cityscapes dataset, where all three modules demonstrate significant performance gains, and we achieve state-of-the-art results for semi-supervised semantic segmentation. The implementation is available at https://github.com/lhoyer/improving_segmentation_with_selfsupervised_depth. Comment: CVPR 2021. (A short code sketch of the geometry-based blending idea follows this record.) |
Database: | OpenAIRE |
External link: |
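
The geometry-based augmentation in contribution (2) can be illustrated with a minimal sketch. The abstract describes blending two images and their labels using the geometry of the scene; the sketch below does this with a per-pixel mask derived from comparing the two depth estimates, so that the surface closer to the camera wins at each pixel and labels stay consistent with the blended image. This is an illustration under assumptions, not the paper's implementation (see the linked repository for that): the function name `depth_blend`, the array shapes, and the random toy data are all hypothetical.

```python
import numpy as np

def depth_blend(img_a, img_b, depth_a, depth_b, labels_a, labels_b):
    """Blend two training samples with a depth-derived occlusion mask.

    A pixel is taken from sample A wherever A's estimated scene point is
    closer to the camera than B's; otherwise it comes from B. The labels
    are mixed with the same mask so they match the blended image.
    (Hypothetical sketch; not the authors' code.)
    """
    # Boolean mask: True where A's surface occludes B's according to depth.
    mask = depth_a < depth_b                                  # (H, W)

    blended_img = np.where(mask[..., None], img_a, img_b)     # (H, W, 3)
    blended_labels = np.where(mask, labels_a, labels_b)       # (H, W)
    return blended_img, blended_labels


if __name__ == "__main__":
    # Toy example with random arrays standing in for images, depths, labels.
    H, W = 4, 4
    rng = np.random.default_rng(0)
    img_a, img_b = rng.random((H, W, 3)), rng.random((H, W, 3))
    depth_a, depth_b = rng.random((H, W)), rng.random((H, W))
    labels_a, labels_b = rng.integers(0, 19, (H, W)), rng.integers(0, 19, (H, W))

    x, y = depth_blend(img_a, img_b, depth_a, depth_b, labels_a, labels_b)
    print(x.shape, y.shape)
```

In the setting the abstract describes, the depth maps would presumably come from the self-supervised depth network trained on unlabeled image sequences, and on unlabeled images the labels would be pseudo-labels produced within the student-teacher framework rather than ground-truth annotations.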