Multiscale and multidirection depth map super resolution with semantic inference
Authors: Dan Xu, Xiaopeng Fan, Debin Zhao, Wen Gao
Language: English
Year of publication: 2023
Source: IET Image Processing, Vol. 17, Iss. 13, pp. 3670-3687 (2023)
Document type: Article
ISSN: 1751-9667, 1751-9659
DOI: 10.1049/ipr2.12877
Description: Abstract: Depth map super resolution has attracted much attention in 3D applications owing to the limitations of depth sensors. The most important characteristic of a depth map is that its objects contain few textures but exhibit clear contours, so an efficient image representation should be directional, multiscale, and anisotropic. Motivated by this, a novel multiscale and multidirection depth map super resolution framework with semantic inference is proposed to improve the quality of depth maps. In this framework, a multiscale and multidirection contour fusion scheme captures and assembles intrinsic geometrical structures in a multiview non-subsampled contourlet transform manner; the scheme not only isolates the discontinuities across contours but also retains the smoothness along them. Semantic inference is then utilized to segment and label the depth map into coplanar object/background-level regions. Furthermore, a semantic-aware label refinement strategy is introduced to correct the occasional inaccurate labels in the label map, so that a target pixel is upscaled only with pixels belonging to the same object or background. Experimental results on benchmark depth map datasets demonstrate that the proposed framework achieves a significant improvement over state-of-the-art algorithms both visually and quantitatively. (An illustrative sketch of these two steps follows the record.)
Database: Directory of Open Access Journals
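The abstract names two components that lend themselves to a small illustration: a non-subsampled (undecimated) multiscale, multidirection decomposition for contour analysis, and a label-map refinement that lets upscaling draw only on pixels of the same object or background. The Python sketch below is a minimal, hypothetical rendering of both ideas, assuming numpy/scipy; the filter choices (difference-of-Gaussians scales, oriented Sobel projections, majority-vote refinement) are illustrative stand-ins, not the authors' contourlet filter bank or semantic inference model.

```python
# Minimal sketch (NOT the paper's implementation): an undecimated
# multiscale + multidirection decomposition in the spirit of a
# non-subsampled contourlet transform, plus a majority-vote label
# refinement. All filter choices are illustrative assumptions.
import numpy as np
from scipy import ndimage

def multiscale_multidirection(depth, n_scales=3, n_dirs=4):
    """Split a depth map into per-scale, per-direction detail bands.

    Non-subsampled: every band keeps the full input resolution.
    Each scale's band-pass detail is projected onto n_dirs filter
    orientations, so contour energy separates by direction.
    """
    bands, approx = [], depth.astype(np.float64)
    for s in range(n_scales):
        smooth = ndimage.gaussian_filter(approx, sigma=2.0 ** s)
        detail = approx - smooth                 # band-pass detail at scale s
        gy = ndimage.sobel(detail, axis=0)       # vertical gradient
        gx = ndimage.sobel(detail, axis=1)       # horizontal gradient
        directional = []
        for d in range(n_dirs):
            theta = np.pi * d / n_dirs           # orientation of this band
            directional.append(np.cos(theta) * gx + np.sin(theta) * gy)
        bands.append(directional)
        approx = smooth                          # coarser approximation next
    return bands, approx                         # directional bands + low-pass

def refine_labels(labels, size=5):
    """Majority-vote cleanup of a semantic label map: a pixel whose
    label disagrees with most of its neighbourhood is re-assigned the
    neighbourhood's majority label."""
    def majority(window):
        return np.bincount(window.astype(np.int64)).argmax()
    return ndimage.generic_filter(labels, majority, size=size)

if __name__ == "__main__":
    depth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))  # toy depth ramp
    depth[:, 32:] += 0.5                                  # a vertical contour
    bands, lowpass = multiscale_multidirection(depth)
    noisy = (depth > 0.8).astype(np.int64)
    noisy[10, 10] ^= 1                                    # plant one wrong label
    print(refine_labels(noisy)[10, 10])                   # vote restores it: 0
```

With the vertical contour in the toy depth map, most detail energy lands in the near-horizontal-gradient bands at each scale, which is the separation-by-direction behaviour the fusion scheme relies on; the label refinement is deliberately the simplest rule that honours the abstract's goal of keeping each upscaled pixel within one object or background region.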