Author:
Hao Sheng, Kun Cheng, Xiaokang Jin, Tian Han, Xiaolin Jiang, Changchun Dong
Language:
English
Year of publication:
2023
Subject:

Source:
AIP Advances, Vol 13, Iss 3, Pp 035118-035118-11 (2023)
Document type:
article
ISSN:
2158-3226
DOI:
10.1063/5.0140530
Description:
Compressive light field cameras have attracted notable attention in recent years because they can efficiently remove redundancy from light fields. However, much of the research has concentrated only on reconstructing the entire light field from the compressed samples, ignoring the possibility of extracting information such as depth directly from them. In this paper, we introduce a light field camera configuration with a random color-coded microlens array. For the resulting color-coded light fields, we propose a novel attention-based encoder–decoder network. Specifically, the encoder compresses the coded measurement into a low-dimensional representation that removes most of the redundancy, and the decoder constructs the depth map directly from this latent representation. The attention mechanism enables the network to process spatial and angular features dynamically and effectively, significantly improving performance. Extensive experiments on synthetic and real-world datasets show that our method outperforms state-of-the-art light field depth estimation methods designed for non-coded light fields. To our knowledge, this is the first study to combine color-coded light fields with an attention-based deep learning approach, providing crucial insight into the design of enhanced light field photography systems.
Database:
Directory of Open Access Journals
External link:

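The abstract's pipeline (coded measurement → encoder → attended latent code → decoder → depth map) can be illustrated with a minimal sketch. This is not the paper's network: all shapes, the single linear layers standing in for the convolutional encoder/decoder, and the scalar feature-attention step are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: an H x W color-coded measurement with C channels,
# compressed to a small latent code (the real network is far larger).
H, W, C = 8, 8, 3
LATENT_DIM = 16

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Encoder: one linear projection standing in for the convolutional encoder;
# it maps the flattened coded measurement to a low-dimensional latent code.
W_enc = rng.standard_normal((H * W * C, LATENT_DIM)) * 0.01

# Attention: weights over latent features, a toy stand-in for the
# spatial/angular attention described in the abstract.
W_att = rng.standard_normal((LATENT_DIM, LATENT_DIM)) * 0.01

# Decoder: maps the attended latent code directly to a dense depth map.
W_dec = rng.standard_normal((LATENT_DIM, H * W)) * 0.01

def depth_from_coded(measurement):
    """Estimate a depth map from a color-coded measurement (toy version)."""
    z = measurement.reshape(-1) @ W_enc   # encode -> latent representation
    att = softmax(z @ W_att)              # attention weights over features
    z_att = z * att                       # re-weight latent features
    return (z_att @ W_dec).reshape(H, W)  # decode -> depth map

coded = rng.random((H, W, C))
depth = depth_from_coded(coded)
print(depth.shape)  # (8, 8)
```

The point of the sketch is the data flow: depth is decoded from the compressed latent code without first reconstructing the full light field, which is the efficiency argument the abstract makes.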