Conditional image generation and manipulation for user-specified content
Author: Stap, D., Bleeker, M., Ibrahimi, S., ter Hoeve, M.
Contributors: Language Technology Lab (IvI, FNWI), Information Retrieval Lab (IvI, FNWI), Multimedia Analytics Lab (IvI, FNWI)
Language: English
Publication year: 2020
Subject:
Source: AI for Content Creation Workshop
Description: In recent years, Generative Adversarial Networks (GANs) have steadily improved at generating increasingly realistic real-world images. For purposes such as content creation, it is useful to steer the image generation process, which can be done by conditioning the model on additional information. However, even with such conditioning, a large set of images agrees with any particular conditioning input. This makes it unlikely that the generated image is exactly as envisioned by the user, which is problematic for practical content creation scenarios such as generating facial composites or stock photos. To solve this problem, we propose a single pipeline for text-to-image generation and manipulation. In the first part of our pipeline we introduce textStyleGAN, a model that is conditioned on text. In the second part of our pipeline we use the pre-trained weights of textStyleGAN to perform semantic facial image manipulation. The approach works by finding semantic directions in latent space (see the sketch after this record). We show that this method can be used to manipulate facial images across a wide range of attributes. Finally, we introduce the CelebTD-HQ dataset, an extension of CelebA-HQ consisting of faces and corresponding textual descriptions. Comment: Accepted to the AI for Content Creation workshop at CVPR 2020
Database: OpenAIRE
External link:
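
The description above states that the manipulation step works by finding semantic directions in latent space. Below is a minimal sketch of that general idea, not the paper's actual implementation: it assumes access to latent codes paired with binary attribute labels (generated synthetically here for self-containment) and fits a linear classifier whose hyperplane normal serves as the edit direction. The `generator` referenced in the final comment is a hypothetical pretrained model.

```python
# Minimal sketch (not the paper's code): find a semantic direction in a
# GAN's latent space by fitting a linear classifier on attribute labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for real data: latent codes (N, D) and binary attribute
# labels (e.g. smiling vs. not smiling). In practice the labels would
# come from a pretrained attribute classifier run on generated images.
latents = rng.normal(size=(1000, 512))
labels = (latents[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(int)

# The normal of the separating hyperplane is taken as the semantic
# direction for the attribute.
clf = LogisticRegression(max_iter=1000).fit(latents, labels)
direction = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

def edit(z: np.ndarray, alpha: float) -> np.ndarray:
    """Move a latent code along the attribute direction; alpha sets
    the strength and sign of the edit."""
    return z + alpha * direction

z_edited = edit(latents[0], alpha=3.0)
# image = generator.synthesize(z_edited)  # hypothetical pretrained generator
```

Directions found this way can be entangled with correlated attributes; projecting a direction to be orthogonal to the directions of other attributes is a common way to reduce such entanglement.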