Light Field Editing Propagation using 4D Convolutional Neural Networks
Author: | Yuk Ying Chung, Zhibo Chen, Xiaoming Chen, Zhicheng Lu |
---|---|
Year of publication: | 2020 |
Subject: | Image processing; Image editing; Convolutional neural network; Computer vision; Artificial intelligence; Parallax; Consistency |
Source: | VR Workshops |
DOI: | 10.1109/vrw50115.2020.00153 |
Description: | 2D image editing is a well-studied problem. However, 2D image processing techniques cannot be directly applied to the emerging light field image (LFI) because of its particular structural characteristics. Without an editing scheme designed specifically for LFIs, users must manually edit each sub-view of the LFI, a process that is extremely time-consuming and, more importantly, offers no way to guarantee parallax consistency between sub-views. This poster proposes two LFI editing schemes: a direct editing scheme and a deep-learning-based scheme. Both schemes automatically propagate the user’s edits, in particular “augmentation” edits, from the central view to all other sub-views of the LFI. The learning-based scheme employs interleaved spatial-angular convolutions (a 4D CNN) to learn both spatial and angular features effectively, which are then used to assist the augmentation editing. We constructed a preliminary LFI dataset and compared the two proposed schemes. The experimental results show that the learning-based scheme produces higher PSNR (by 0.51 dB) and more pleasing subjective editing results than direct editing. |
Database: | OpenAIRE |
External link: |
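
The description mentions interleaved spatial-angular convolutions but the record does not include code. Below is a minimal sketch, assuming a PyTorch setting, of one common way such interleaving is implemented for light fields: a 2D convolution over the spatial dimensions alternated with a 2D convolution over the angular dimensions. This is not the authors' implementation; the tensor layout `(B, C, U, V, H, W)`, the channel width, and the `SpatialAngularBlock` name are illustrative assumptions.

```python
# Illustrative sketch only; layout and module names are assumptions,
# not taken from the paper or its released code.
import torch
import torch.nn as nn


class SpatialAngularBlock(nn.Module):
    """One interleaved block: a 2D conv over the spatial dims (H, W)
    followed by a 2D conv over the angular dims (U, V)."""

    def __init__(self, channels: int):
        super().__init__()
        self.spatial_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.angular_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, U, V, H, W) -- B batch, C channels,
        # (U, V) angular sub-view grid, (H, W) spatial resolution.
        b, c, u, v, h, w = x.shape

        # Spatial pass: fold the angular grid into the batch dimension.
        xs = x.permute(0, 2, 3, 1, 4, 5).reshape(b * u * v, c, h, w)
        xs = self.act(self.spatial_conv(xs))
        x = xs.reshape(b, u, v, c, h, w).permute(0, 3, 1, 2, 4, 5)

        # Angular pass: fold the spatial grid into the batch dimension.
        xa = x.permute(0, 4, 5, 1, 2, 3).reshape(b * h * w, c, u, v)
        xa = self.act(self.angular_conv(xa))
        x = xa.reshape(b, h, w, c, u, v).permute(0, 3, 4, 5, 1, 2)
        return x


if __name__ == "__main__":
    # Hypothetical 7x7 light field with 16 feature channels and 64x64 sub-views.
    lf = torch.randn(1, 16, 7, 7, 64, 64)
    net = nn.Sequential(*[SpatialAngularBlock(16) for _ in range(3)])
    print(net(lf).shape)  # torch.Size([1, 16, 7, 7, 64, 64])
```

Alternating the two 2D passes is a standard way to approximate a full 4D convolution at a much lower parameter count, which is the usual motivation for interleaving spatial and angular filters when processing light field images.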