Geometry and context guided refinement for stereo matching
Authors: | Chen Shenglun, Zhi-Hui Wang, Yuxin Yue, Haojie Li, Zhang Hong |
---|---|
Year of publication: | 2020 |
Subject: |
Matching (statistics), fine-tuning, pixel, computer science, image processing and computer vision, networking & telecommunications, context (language use), geometry, engineering and technology, object (computer science), domain (software engineering), upsampling, iterative refinement, signal processing, electrical engineering, electronic engineering, information engineering, artificial intelligence & image processing, computer vision and pattern recognition, electrical and electronic engineering, software |
Source: | IET Image Processing. 14:2652-2659 |
ISSN: | 1751-9667, 1751-9659 |
DOI: | 10.1049/iet-ipr.2019.1636 |
Description: | The disparity refinement phase of existing end-to-end stereo matching networks refines disparity by learning a mapping from the concatenated coarse disparity and corresponding features to a fine disparity. This mapping depends on scene characteristics, such as the disparity distribution and the semantic categories present in the domain, which causes the network to fail on unseen domains. In this paper, we propose a geometry and context guided refinement network (GCGR-Net) containing a Fine Matching module and an Upsampling module. GCGR-Net learns to exploit relationships between pixels to obtain a high-resolution dense disparity map, which is independent of the data's content. The Fine Matching module performs a minimum-range search based on the relationship between possible matching pixel pairs, i.e. the so-called geometry information, to recover the internal structure of objects. The Upsampling module uses context information, the relationship between a central pixel and the pixels in its neighbourhood, to upsample the lower-resolution disparity. The final disparity map is obtained step by step through an iterative refinement model. Experimental results show that our method not only performs well in the training scenarios but also outperforms previous methods on unseen domains without fine-tuning. |
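The Fine Matching idea in the abstract, searching a small disparity range around each pixel's coarse estimate, can be illustrated with a classical analogue. This is a minimal sketch only: the function name, the ±`search_range` window, and the sum-of-absolute-differences patch cost are illustrative assumptions standing in for the network's learned matching cost, not the paper's actual formulation.

```python
import numpy as np

def fine_matching(left, right, disp_coarse, search_range=2, patch=1):
    """Refine a coarse disparity map by a minimum-range search: for each
    pixel, test disparities within +/- search_range of the coarse estimate
    and keep the candidate with the lowest matching cost.
    (A hand-crafted SAD patch cost stands in for a learned cost.)
    """
    H, W = left.shape
    p = patch
    pad_l = np.pad(left, p, mode="edge")   # edge-pad so border pixels
    pad_r = np.pad(right, p, mode="edge")  # have full patches
    refined = disp_coarse.astype(float).copy()
    for y in range(H):
        for x in range(W):
            best_cost, best_d = np.inf, refined[y, x]
            base = int(round(disp_coarse[y, x]))
            for d in range(base - search_range, base + search_range + 1):
                if d < 0 or x - d < 0:
                    continue  # candidate falls outside the right image
                a = pad_l[y:y + 2 * p + 1, x:x + 2 * p + 1]
                b = pad_r[y:y + 2 * p + 1, x - d:x - d + 2 * p + 1]
                cost = np.abs(a - b).sum()  # sum of absolute differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            refined[y, x] = best_d
    return refined
```

On a synthetic pair where the right image is the left shifted by one pixel, the search recovers a disparity of 1 from an all-zero coarse estimate, showing how a small local search can correct a coarse map without any scene-specific learning.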
Database: | OpenAIRE |
External link: |
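The Upsampling module described in the abstract, combining a central pixel's low-resolution neighbourhood with learned context weights, can be sketched in the spirit of learned convex upsampling. All names, the 3×3 neighbourhood, and the softmax normalisation are assumptions for illustration; in the actual network the weights would be predicted by a small CNN rather than supplied directly.

```python
import numpy as np

def context_upsample(disp_lr, weights, factor=2):
    """Upsample a low-resolution disparity map by `factor`, writing each
    high-resolution pixel as a convex combination of the 3x3 low-res
    neighbourhood of its parent pixel.

    disp_lr : (H, W) low-resolution disparity.
    weights : (H*factor, W*factor, 9) raw context scores; softmax over
              the last axis turns them into convex combination weights.
    """
    H, W = disp_lr.shape
    # Softmax over the 9 neighbourhood scores -> non-negative, sum to 1.
    w = np.exp(weights - weights.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)

    # Edge-pad so every low-res pixel has a full 3x3 neighbourhood.
    pad = np.pad(disp_lr, 1, mode="edge")
    out = np.zeros((H * factor, W * factor))
    for i in range(H * factor):
        for j in range(W * factor):
            ci, cj = i // factor, j // factor          # parent low-res pixel
            patch = pad[ci:ci + 3, cj:cj + 3].ravel()  # its 3x3 neighbourhood
            # Disparity scales with image width, hence the factor.
            out[i, j] = factor * np.dot(w[i, j], patch)
    return out
```

With zero (i.e. uniform) weights this reduces to plain neighbourhood averaging; the point of learning the weights is to concentrate them on neighbours from the same surface, so object boundaries stay sharp after upsampling.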