Context Encoding for Semantic Segmentation
Author: | Jianping Shi, Ambrish Tyagi, Hang Zhang, Zhongyue Zhang, Xiaogang Wang, Amit Agrawal, Kristin J. Dana |
Year of publication: | 2018 |
Subject: | FOS: Computer and information sciences; Computer Science - Computer Vision and Pattern Recognition (cs.CV); semantic segmentation; image segmentation; contextual image classification; feature extraction; pattern recognition; artificial intelligence; source code |
Source: | CVPR |
DOI: | 10.48550/arxiv.1803.08904 |
Description: | Recent work has made significant progress in improving spatial resolution for pixelwise labeling within the Fully Convolutional Network (FCN) framework by employing dilated/atrous convolution, utilizing multi-scale features, and refining boundaries. In this paper, we explore the impact of global contextual information in semantic segmentation by introducing the Context Encoding Module, which captures the semantic context of scenes and selectively highlights class-dependent featuremaps. The proposed Context Encoding Module significantly improves semantic segmentation results with only marginal extra computational cost over FCN. Our approach achieves new state-of-the-art results: 51.7% mIoU on PASCAL-Context and 85.9% mIoU on PASCAL VOC 2012. Our single model achieves a final score of 0.5567 on the ADE20K test set, which surpasses the winning entry of the COCO-Place Challenge in 2017. In addition, we explore how the Context Encoding Module can improve the feature representation of relatively shallow networks for image classification on the CIFAR-10 dataset. Our 14-layer network achieves an error rate of 3.45%, which is comparable with state-of-the-art approaches that use over 10 times more layers. The source code for the complete system is publicly available. Comment: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018 |
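The description sketches the key idea: an encoding layer summarizes the featuremaps into a global context vector, and a fully connected layer with a sigmoid turns that vector into per-channel scaling factors that highlight class-dependent channels. A minimal NumPy sketch of this channel attention is below; the function name, shapes, and the uniform soft-assignment smoothing are illustrative assumptions, not the authors' exact implementation (the paper learns a smoothing factor per codeword).

```python
import numpy as np

def context_encoding_attention(features, codewords, fc_weight, fc_bias):
    """Hedged sketch of Context Encoding channel attention (illustrative names).

    features:  (C, H, W) convolutional featuremaps
    codewords: (K, C) learned dictionary of visual centers
    fc_weight: (C, C) fully connected layer weights
    fc_bias:   (C,) fully connected layer bias
    Returns channel-wise re-scaled featuremaps with the same shape as input.
    """
    C, H, W = features.shape
    X = features.reshape(C, -1).T                       # (N, C), N = H*W descriptors

    # Soft-assign each descriptor to the K codewords based on squared
    # distance (uniform smoothing here; the paper learns it per codeword).
    residuals = X[:, None, :] - codewords[None, :, :]   # (N, K, C)
    dist2 = (residuals ** 2).sum(-1)                    # (N, K)
    assign = np.exp(-dist2)
    assign /= assign.sum(1, keepdims=True)

    # Aggregate weighted residuals over descriptors and codewords into a
    # single encoded context vector of length C.
    e = (assign[..., None] * residuals).sum(0).sum(0)   # (C,)

    # Fully connected layer + sigmoid -> per-channel scaling factors in (0, 1).
    gamma = 1.0 / (1.0 + np.exp(-(fc_weight @ e + fc_bias)))

    # Selectively highlight class-dependent channels.
    return features * gamma[:, None, None]
```

Because the sigmoid keeps every scaling factor in (0, 1), the module only re-weights channels; it never changes the spatial resolution, which is why the extra cost over a plain FCN is marginal.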
Database: | OpenAIRE |
External link: |