Low Light Video Enhancement Using Synthetic Data Produced with an Intermediate Domain Mapping
Author: | Sean Moran, Sarah Parisot, Gregory G. Slabaugh, Steven McDonagh, Danai Triantafyllidou |
Year: | 2020 |
Subject: | Computer science; computer vision; image processing; image quality; artificial intelligence; synthetic data; domain mapping; the Internet |
Source: | Computer Vision – ECCV 2020, ISBN 9783030586003, ECCV (13) |
DOI: | 10.1007/978-3-030-58601-0_7 |
Description: | Advances in low-light video RAW-to-RGB translation are opening up the possibility of fast low-light imaging on commodity devices (e.g. smartphone cameras) without the need for a tripod. However, it is challenging to collect the paired short-long exposure frames required to learn a supervised mapping. Current approaches require a specialised rig or the use of static videos with no subject or object motion, resulting in datasets that are limited in size, diversity, and motion. We address the data collection bottleneck for low-light video RAW-to-RGB by proposing a data synthesis mechanism, dubbed SIDGAN, that can generate abundant dynamic video training pairs. SIDGAN maps videos found ‘in the wild’ (e.g. internet videos) into a low-light (short, long exposure) domain. By generating dynamic video data synthetically, we enable a recently proposed state-of-the-art RAW-to-RGB model to attain higher image quality (improved colour, reduced artifacts) and improved temporal consistency, compared to the same model trained only on static real video data. |
Database: | OpenAIRE |
External link: |
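The core idea in the description, turning a well-exposed frame into a (short, long) exposure training pair, can be illustrated with a minimal sketch. Note that this is not the paper's method: SIDGAN uses a learned GAN-based intermediate-domain mapping, whereas the function below is a hand-crafted physical approximation (inverse gamma to a linear signal, exposure scaling, and an assumed shot-plus-read noise model) whose parameter names and values are illustrative assumptions.

```python
import numpy as np

def synthesize_exposure_pair(rgb, exposure_ratio=100.0, read_noise=0.01, seed=0):
    """Illustrative (non-SIDGAN) synthesis of a (short, long) exposure pair
    from a normal-light sRGB frame in [0, 1] with shape (H, W, 3).

    The exposure ratio, gamma, and noise model here are assumptions,
    not values from the paper.
    """
    rng = np.random.default_rng(seed)
    # Approximate inverse gamma to recover a linear-intensity scene signal.
    linear = np.clip(rgb, 0.0, 1.0) ** 2.2
    # Long exposure: treat the well-exposed linear frame as ground truth.
    long_exp = linear
    # Short exposure: scale down by the exposure ratio, then add
    # signal-dependent (shot-like) and signal-independent (read) noise,
    # mimicking low-light capture.
    scaled = linear / exposure_ratio
    shot = rng.normal(0.0, 1.0, size=scaled.shape) * np.sqrt(scaled) * 0.01
    read = rng.normal(0.0, read_noise, size=scaled.shape)
    short_exp = np.clip(scaled + shot + read, 0.0, 1.0)
    return short_exp, long_exp

frame = np.full((4, 4, 3), 0.5)          # stand-in for an internet-video frame
short, long_ = synthesize_exposure_pair(frame)
```

Such pairs would then serve as supervision for a RAW-to-RGB enhancement model; the paper's contribution is learning this mapping (including realistic sensor characteristics) rather than hand-designing it as above.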