Super-Resolution of Sentinel-2 Images Using Convolutional Neural Networks and Real Ground Truth Data
Author: | Rubén Sesma, Mikel Galar, C. Aranda, C. Ayala, Lourdes Albizua |
Contributors: | Universidad Pública de Navarra / Nafarroako Unibertsitate Publikoa. ISC - Institute of Smart Cities, Gobierno de Navarra / Nafarroako Gobernua, 0011-1408-2020-000008. |
Language: | English |
Year of publication: | 2020 |
Subject: | Earth observation; super-resolution; convolutional neural networks; deep learning; multi-spectral image; Sentinel-2; Similarity (geometry); Computer science; Computer vision; Image resolution; Ground truth; Spectral bands; RGB color model; Artificial intelligence; General Earth and Planetary Sciences |
Source: | Remote Sensing, Vol. 12, Iss. 18, p. 2941 (2020); Academica-e: Repositorio Institucional de la Universidad Pública de Navarra |
ISSN: | 2072-4292 |
Description: | Earth observation data is becoming more accessible and affordable thanks to the Copernicus programme and its Sentinel missions. Every location worldwide can be freely monitored approximately every 5 days using the multi-spectral images provided by Sentinel-2. The spatial resolution of these images for the RGBN (RGB + Near-infrared) bands is 10 m, which is more than enough for many tasks but falls short for many others. For this reason, if their spatial resolution could be enhanced without additional costs, any subsequent analyses based on these images would benefit. Previous works have mainly focused on increasing the resolution of the lower-resolution bands of Sentinel-2 (20 m and 60 m) to 10 m resolution. In these cases, super-resolution is supported by bands captured at finer resolutions (RGBN at 10 m). In contrast, this paper focuses on the problem of increasing the spatial resolution of the 10 m bands to either 5 m or 2.5 m resolution, without having additional information available. This problem is known as single-image super-resolution. For standard images, deep learning techniques have become the de facto standard for learning the mapping from lower- to higher-resolution images due to their learning capacity. However, super-resolution models learned for standard images do not work well with satellite images and hence, a specific model for this problem needs to be learned. The main challenge that this paper aims to solve is how to train a super-resolution model for Sentinel-2 images when no ground truth exists (Sentinel-2 images at 5 m or 2.5 m). Our proposal consists of using a reference satellite with high spectral similarity to Sentinel-2 but higher spatial resolution to create image pairs at both the source and target resolutions. This way, we can train a state-of-the-art Convolutional Neural Network to recover details not present in the original RGBN bands. An exhaustive experimental study is carried out to validate our proposal, including a comparison with the most widespread strategy for super-resolving Sentinel-2, which consists of learning a model to super-resolve from a down-sampled version at either 40 m or 20 m to the original 10 m resolution and then applying this model to super-resolve from 10 m to 5 m or 2.5 m. Finally, we also show that the spectral radiometry of the native bands is preserved when super-resolving images, so that the results can be used in any subsequent processing as if they were images acquired by Sentinel-2. M.G. was partially supported by Tracasa Instrumental S.L. under projects OTRI 2018-901-073, OTRI 2019-901-091 and OTRI 2020-901-050. C.A. (Christian Ayala) was partially supported by the Government of Navarra under the industrial PhD program 2020, reference 0011-1408-2020-000008. |
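The description outlines the training strategy: degrade co-registered imagery from a higher-resolution reference satellite down to the Sentinel-2 source resolution, pair it with the original reference patches, and train a CNN on those pairs. The following Python/PyTorch sketch illustrates that idea under stated assumptions only; the scale factor (×2, i.e., 10 m to 5 m), the SRCNN-style network layout, and names such as `SmallSRNet`, `make_pair` and `reference_patches` are illustrative choices, not the authors' implementation.

```python
# Minimal sketch (not the paper's pipeline): build (low-res, high-res) training
# pairs by degrading high-resolution reference patches, then fit a small CNN
# to super-resolve Sentinel-2-like RGBN patches by a factor of 2 (10 m -> 5 m).
import torch
import torch.nn as nn
import torch.nn.functional as F

SCALE = 2  # 10 m -> 5 m; a factor of 4 would target 2.5 m


class SmallSRNet(nn.Module):
    """SRCNN-style network: bicubic upsampling followed by a residual refinement."""

    def __init__(self, bands: int = 4):  # RGBN = 4 bands
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(bands, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, bands, kernel_size=5, padding=2),
        )

    def forward(self, x):
        x = F.interpolate(x, scale_factor=SCALE, mode="bicubic", align_corners=False)
        return x + self.body(x)  # learn the details missing from the bicubic upscale


def make_pair(reference_patch: torch.Tensor):
    """Degrade a high-resolution reference patch to simulate the 10 m source band."""
    low = F.interpolate(reference_patch, scale_factor=1 / SCALE, mode="area")
    return low, reference_patch


if __name__ == "__main__":
    model = SmallSRNet()
    optim = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Stand-in for real co-registered reference imagery (batch, RGBN, H, W);
    # in practice these would be radiometrically matched satellite patches.
    reference_patches = torch.rand(8, 4, 128, 128)
    for step in range(10):  # tiny demo loop
        low, high = make_pair(reference_patches)
        loss = F.l1_loss(model(low), high)
        optim.zero_grad()
        loss.backward()
        optim.step()
    print(f"final L1 loss: {loss.item():.4f}")
```

The same degrade-and-pair construction could be applied to an under-sampled Sentinel-2 image itself (40 m or 20 m to 10 m), which corresponds to the baseline strategy the description compares against.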
Database: | OpenAIRE |
External link: |