Attribute Manipulation Generative Adversarial Networks for Fashion Images
Authors: Ashraf A. Kassim, Kenan E. Ak, Joo-Hwee Lim, Jo Yew Tham
Year: 2019
Subjects: Computer science; Artificial intelligence; Image processing; Image retrieval; Generative adversarial networks; Perception; Task analysis
Source: ICCV
DOI: 10.1109/iccv.2019.01064
Description: Recent advances in Generative Adversarial Networks (GANs) have made it possible to conduct multi-domain image-to-image translation with a single generative network. While recent methods such as GANimation and SaGAN can restrict translations to attribute-relevant regions using attention, they do not perform well when the number of attributes increases, because the training of their attention masks relies mostly on classification losses. To address this and other limitations, we introduce Attribute Manipulation Generative Adversarial Networks (AMGAN) for fashion images. AMGAN's generator network uses class activation maps (CAMs) to empower its attention mechanism, and it also exploits perceptual losses by assigning reference (target) images based on attribute similarities. AMGAN incorporates an additional discriminator network that focuses on attribute-relevant regions to detect unrealistic translations. Additionally, AMGAN can be directed to perform attribute manipulations on specific regions such as the sleeves or torso. Experiments show that AMGAN outperforms state-of-the-art methods on traditional evaluation metrics as well as on an alternative metric based on image retrieval.
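The abstract's central mechanism, using class activation maps to localize attribute-relevant regions, can be illustrated with a short sketch. The following PyTorch-style code is a minimal, hypothetical rendering, not the paper's released implementation: the function names, the mask-blending step, and the assumption that the attribute classifier ends in global pooling followed by a linear layer are all illustrative.

```python
import torch
import torch.nn.functional as F

def class_activation_map(features: torch.Tensor,
                         fc_weights: torch.Tensor,
                         attr_idx: int) -> torch.Tensor:
    """Derive a spatial attention mask for one attribute via a CAM.

    features:   (B, C, H, W) activations from the attribute classifier's
                last convolutional layer
    fc_weights: (num_attrs, C) weights of the classifier's final linear layer
    attr_idx:   index of the attribute being manipulated
    """
    w = fc_weights[attr_idx].view(1, -1, 1, 1)                # (1, C, 1, 1)
    cam = (features * w).sum(dim=1, keepdim=True)             # (B, 1, H, W)
    cam = F.relu(cam)                                         # keep positive evidence only
    cam = cam / (cam.amax(dim=(2, 3), keepdim=True) + 1e-8)   # normalize to [0, 1]
    return cam

# Hypothetical blending step: confine the generator's edit to the
# attribute-relevant region by mixing the translated image with the
# input image under the upsampled CAM mask.
def masked_translation(x, generated, cam):
    mask = F.interpolate(cam, size=x.shape[2:],
                         mode="bilinear", align_corners=False)
    return mask * generated + (1.0 - mask) * x
```

Restricting the edit in this way is what lets a method leave attribute-irrelevant pixels of the input untouched; the sketch above shows the general attention-masking pattern rather than AMGAN's exact loss formulation.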
Database: OpenAIRE
External link: