Description: |
Legacy grayscale aerial photographs represent one of the main available sources for studying the past state of the environment and its relationship to the present. However, these photographs lack spectral information, thereby hindering their use in current remote sensing approaches that rely on spectral data to characterize surfaces. This article proposes a conditional generative adversarial network, a deep learning model, to enrich legacy photographs by predicting the color channels of an input grayscale image. The technique was used to colorize two orthophotographs (taken in 1956 and 1978) covering the entire Eurométropole de Strasbourg. To assess the model's performance, two strategies were proposed: first, colorized photographs were evaluated with metrics such as the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM); second, random forest classifications were performed to extract land cover classes from the grayscale and colorized photographs, respectively. The results revealed strong performance, with PSNR = 25.56 ± 2.20 and SSIM = 0.93 ± 0.06, indicating that the model successfully learned the mapping between grayscale and color photographs over a large territory. Moreover, land cover classifications performed on colorized data showed significant improvements over grayscale photographs: +6% and +17% for 1956 and 1978, respectively. Finally, the plausibility of the output images was evaluated visually. We conclude that deep learning models are powerful tools for improving the radiometric properties of old grayscale aerial photographs and for land cover mapping. We also argue that the proposed approach could serve as a basis for further developments aimed at promoting the use of aerial photograph archives for landscape reconstruction.
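The abstract evaluates colorization quality with PSNR and SSIM computed against reference color imagery. The snippet below is a minimal sketch of how such metrics can be obtained for one image tile with scikit-image; the file names, the 8-bit data range, and the use of scikit-image are illustrative assumptions, not the authors' actual evaluation code.

```python
# Minimal sketch: compare a colorized tile against an aligned reference RGB tile.
# File names are hypothetical; both images are assumed to be 8-bit RGB arrays
# of identical shape and georeferencing.
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = io.imread("reference_rgb_tile.png")   # hypothetical ground-truth color tile
colorized = io.imread("colorized_tile.png")       # hypothetical cGAN output for the same tile

psnr = peak_signal_noise_ratio(reference, colorized, data_range=255)
ssim = structural_similarity(reference, colorized, channel_axis=-1, data_range=255)

print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```

Averaging these per-tile scores over the study area would yield summary statistics of the kind reported in the abstract (mean ± standard deviation).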
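The second evaluation strategy trains random forest classifiers on grayscale versus colorized imagery and compares land cover classification accuracy. The sketch below illustrates that workflow with scikit-learn under stated assumptions: the per-pixel feature arrays and class labels are synthetic placeholders, and the sampling scheme, class legend, and hyperparameters are not taken from the article.

```python
# Minimal sketch: compare land cover classification accuracy using one grayscale
# band versus three predicted color bands as features. All data here are random
# placeholders standing in for pixel samples extracted from the orthophotographs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 5000
labels = rng.integers(0, 4, n)            # hypothetical land cover classes

gray_features = rng.random((n, 1))        # single grayscale band per pixel
color_features = rng.random((n, 3))       # predicted R, G, B bands per pixel

for name, X in [("grayscale", gray_features), ("colorized", color_features)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(name, "overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

With real reference samples, the difference in overall accuracy between the two runs corresponds to the +6% and +17% improvements reported for the 1956 and 1978 photographs.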