Virtual View Quality Enhancement using Side View Temporal Modelling Information for Free Viewpoint Video
Author: | Manoranjan Paul, D. M. Motiur Rahaman, Nusrat Jahan Shoumy |
Year of publication: | 2018 |
Subject: |
Spatial correlation, pixel, Gaussian mixture modelling, inpainting, view synthesis, rendering (computer graphics), image warping, computer vision, artificial intelligence |
Source: | DICTA |
DOI: | 10.1109/dicta.2018.8615827 |
Description: | A virtual viewpoint video must be synthesised from adjacent reference viewpoints to provide an immersive perceptual 3D viewing experience of a scene. View synthesis techniques suffer from poor rendering quality due to holes created by occlusion in the warping process. Currently, the spatial and temporal correlation of texture images and depth maps is exploited to improve the quality of the final synthesised view. Because spatial correlation is low at edges between foreground and background pixels, spatial-correlation techniques such as inpainting and inverse mapping (IM) cannot fill holes effectively. Conversely, temporal correlation among already-synthesised frames, learned through Gaussian mixture modelling (GMM), fills missing pixels in occluded areas efficiently. However, no frames are available for GMM learning when the user switches view instantly. To address these issues, the proposed view synthesis technique applies GMM to the adjacent reference viewpoint texture images and depth maps to generate a most common frame in a scene (McFIS). The texture McFIS is then warped into the target viewpoint using the depth McFIS, and both warped McFISes are merged. The number of GMM models is then used to refine the pixel intensities of the synthesised view through a weighting factor between the pixel intensities of the merged McFIS and the warped images. This technique provides better pixel correspondence and improves PSNR by 0.58∼0.70 dB compared to the IM technique. |
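The two central steps of the abstract can be illustrated with a minimal sketch: estimating a "most common frame" from a stack of reference-view frames, and blending it with a warped view through a weighting factor. This is not the paper's implementation; the temporal median below stands in for the GMM-learned McFIS, and the scalar `weight` stands in for the per-pixel weighting derived from the number of GMM models (both are illustrative assumptions).

```python
import numpy as np

def most_common_frame(frames):
    # Approximate the McFIS (most common frame in a scene) with the
    # per-pixel temporal median of the reference-view frames. The paper
    # learns this via Gaussian mixture modelling; the median is a
    # simplified stand-in that likewise suppresses transient foreground.
    return np.median(np.stack(frames), axis=0)

def refine_synthesised_view(warped, mcfis, weight):
    # Blend the warped-view pixels with the merged-McFIS pixels.
    # `weight` plays the role of the paper's weighting factor between
    # the two sources (a fixed scalar here, per-pixel in the paper).
    return weight * mcfis + (1.0 - weight) * warped
```

With this toy setup, a transient bright patch appearing in only one of several frames is removed from the McFIS estimate, and holes in the warped view can be softened by blending toward the McFIS.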
Database: | OpenAIRE |
External link: |