Physical-model guided self-distillation network for single image dehazing

Authors: Yunwei Lan, Zhigao Cui, Yanzhao Su, Nian Wang, Aihua Li, Deshuai Han
Language: English
Publication year: 2022
Source: Frontiers in Neurorobotics, Vol 16 (2022)
Document type: article
ISSN: 1662-5218
DOI: 10.3389/fnbot.2022.1036465
Description:
Motivation: Image dehazing, a key prerequisite for high-level computer vision tasks, has gained extensive attention in recent years. Traditional model-based methods recover dehazed images via the atmospheric scattering model; they dehaze effectively but often introduce artifacts due to errors in parameter estimation. By contrast, recent model-free methods restore dehazed images directly with an end-to-end network and achieve better color fidelity. To improve the dehazing effect, we combine the complementary merits of these two categories and propose a physical-model guided self-distillation network for single image dehazing, named PMGSDN.
Proposed method: First, we propose a novel attention guided feature extraction block (AGFEB) and stack it to build a deep feature extraction network. Second, we add three early-exit branches and embed dark channel prior information into the network to merge the merits of model-based and model-free methods. We then adopt self-distillation to transfer features from the deeper layers (acting as the teacher) to the shallow early-exit branches (acting as students), further improving the dehazing effect.
Results: On the I-HAZE and O-HAZE datasets, the proposed method outperforms the compared methods, achieving the best PSNR/SSIM values of 17.41 dB/0.813 and 18.48 dB/0.802, respectively. Moreover, for real-world images the proposed method also produces high-quality dehazed results.
Conclusion: Experimental results on both synthetic and real-world images demonstrate that the proposed PMGSDN dehazes effectively, yielding results with clear textures and good color fidelity.
Database: Directory of Open Access Journals
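
Note: the description above invokes two standard ingredients that can be made concrete. The first is the dark channel prior of He et al., rooted in the atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)), where J is the haze-free scene, t the transmission map, and A the global atmospheric light: in haze-free outdoor images, the per-patch minimum over all color channels is close to zero. Below is a minimal NumPy sketch of the standard dark channel computation; the function name, patch size, and value range are illustrative assumptions, not details taken from the paper.

    import numpy as np
    from scipy.ndimage import minimum_filter

    def dark_channel(image: np.ndarray, patch_size: int = 15) -> np.ndarray:
        # Dark channel prior: per-pixel minimum over the RGB channels,
        # followed by a local minimum over a patch_size x patch_size window.
        # `image` is an H x W x 3 array with values in [0, 1].
        min_channel = image.min(axis=2)  # minimum over color channels
        return minimum_filter(min_channel, size=patch_size)  # patch minimum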
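The second ingredient is the self-distillation scheme, in which the deepest layers teach the shallow early-exit branches. The PyTorch snippet below is a hypothetical sketch of that idea, not the paper's exact objective; the L1 criterion, the weight alpha, and the function name are all assumptions made for illustration.

    import torch
    import torch.nn.functional as F

    def self_distillation_loss(branch_outputs, deep_output, gt, alpha=0.1):
        # Supervise the deepest exit (the teacher) with the ground truth.
        loss = F.l1_loss(deep_output, gt)
        # Detach the teacher so distillation gradients flow only into the
        # shallow early-exit branches (the students).
        teacher = deep_output.detach()
        for out in branch_outputs:
            loss = loss + F.l1_loss(out, gt)               # task supervision
            loss = loss + alpha * F.l1_loss(out, teacher)  # distillation term
        return loss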