Description: |
There is an apparent discrepancy between visual perception, which is colorful, complete, and high-resolution, and the saccadic, spatially heterogeneous retinal input. In this work, we computationally emulated foveated color maps and intensity channels, as well as intra-saccadic motion data, using a neuromorphic event camera. We used a convolutional neural network (U-Net) trained with adversarial optimization to demonstrate how such retinal inputs can be used to reconstruct colorful, high-resolution images. Our model may lay the groundwork for the development of biologically plausible neural networks for computational visual perception.
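As an illustration of the kind of approach described above, the sketch below pairs a small U-Net-style generator with an adversarial discriminator to reconstruct RGB frames from multi-channel retina-like inputs. This is a minimal sketch under stated assumptions, not the described model: the framework (PyTorch), the input channel layout, the PatchGAN-style discriminator, and all layer sizes and loss weights are illustrative choices.

```python
# Minimal sketch (assumption: PyTorch; the description does not name a framework).
# Illustrates a U-Net-style generator trained with an adversarial loss to map
# retina-like inputs (foveated color/intensity maps plus event/motion channels)
# to full RGB images. Channel counts and layer sizes are hypothetical.
import torch
import torch.nn as nn

IN_CH = 5   # hypothetical: foveated color (3) + intensity (1) + event/motion (1)
OUT_CH = 3  # reconstructed RGB image


def block(c_in, c_out):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )


class TinyUNet(nn.Module):
    """One-level U-Net: encode, downsample, decode, with a skip connection."""
    def __init__(self):
        super().__init__()
        self.enc = block(IN_CH, 32)
        self.down = nn.MaxPool2d(2)
        self.mid = block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = block(64, 32)            # 32 (skip) + 32 (upsampled)
        self.out = nn.Conv2d(32, OUT_CH, 1)

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        u = self.up(m)
        d = self.dec(torch.cat([e, u], dim=1))
        return torch.sigmoid(self.out(d))


# PatchGAN-style discriminator: scores local patches as real or reconstructed.
disc = nn.Sequential(
    nn.Conv2d(OUT_CH, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, padding=1),
)

gen = TinyUNet()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

# One illustrative training step on random tensors standing in for
# (retina-like input, ground-truth RGB frame) pairs.
retina_in = torch.rand(4, IN_CH, 64, 64)
target_rgb = torch.rand(4, OUT_CH, 64, 64)

# Discriminator step: push real frames toward 1, reconstructions toward 0.
fake = gen(retina_in).detach()
d_real = disc(target_rgb)
d_fake = disc(fake)
d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the target image.
recon = gen(retina_in)
d_recon = disc(recon)
g_loss = bce(d_recon, torch.ones_like(d_recon)) + 100.0 * l1(recon, target_rgb)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(f"d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")
```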