Learning Disentangled Representations via Independent Subspaces
Author: | Hanno Ackermann, Bodo Rosenhahn, Maren Awiszus |
Year: | 2019 |
Subject: |
FOS: Computer and information sciences
Computer Science - Machine Learning (cs.LG); Computer Science - Computer Vision and Pattern Recognition (cs.CV); Statistics - Machine Learning (stat.ML); artificial neural network; image quality; computer science; pattern recognition; linear subspace; autoencoder; residual neural network; entropy (information theory); segmentation; artificial intelligence |
Source: | ICCV Workshops |
DOI: | 10.1109/iccvw.2019.00069 |
Description: | Image-generating neural networks are mostly viewed as black boxes, where any change in the input can have a number of globally effective changes on the output. In this work, we propose a method for learning disentangled representations to allow for localized image manipulations. We use face images as our example of choice. Depending on the image region, identity and other facial attributes can be modified. The proposed network can transfer parts of a face, such as the shape and color of eyes, hair, mouth, etc., directly between persons while all other parts of the face remain unchanged. The network can generate modified images that remain realistic. Our model learns disentangled representations with weak supervision. We propose a localized ResNet autoencoder optimized using several loss functions, including a loss based on semantic segmentation, which we interpret as masks, and a loss which enforces disentanglement by decomposing the latent space into statistically independent subspaces. We evaluate the proposed solution with respect to disentanglement and generated image quality. Convincing results are demonstrated on the CelebA dataset. Accepted at the ICCV 2019 Workshop on Robust Subspace Learning and Applications in Computer Vision. |
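The abstract mentions a loss that enforces disentanglement by decomposing the latent space into statistically independent subspaces. The paper's exact formulation is not reproduced in this record; the following is a minimal illustrative sketch, assuming a common proxy for independence: penalizing the cross-covariance between two latent subspace blocks (the function name and split point are hypothetical).

```python
import numpy as np

def cross_covariance_penalty(z, split):
    """Illustrative independence-style loss (hypothetical sketch, not the
    paper's exact loss): penalize the cross-covariance between the latent
    subspaces z[:, :split] and z[:, split:]."""
    z = z - z.mean(axis=0, keepdims=True)      # center each latent dimension
    z_a, z_b = z[:, :split], z[:, split:]
    # Cross-covariance matrix between the two subspaces
    cov = z_a.T @ z_b / (z.shape[0] - 1)
    # Frobenius-norm penalty: zero iff the subspaces are uncorrelated
    return float(np.sum(cov ** 2))

rng = np.random.default_rng(0)
# Independently drawn subspaces: penalty is near zero
z_indep = rng.standard_normal((10000, 8))
# Duplicating one subspace makes the blocks fully dependent: large penalty
z_dep = np.concatenate([z_indep[:, :4], z_indep[:, :4]], axis=1)

print(cross_covariance_penalty(z_indep, split=4))  # near zero
print(cross_covariance_penalty(z_dep, split=4))    # large
```

Minimizing such a penalty only removes linear correlation between subspaces; stronger notions of statistical independence require additional terms, which is consistent with the paper combining several loss functions.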
Database: | OpenAIRE |
External link: |