Learning Disentangled Representations via Independent Subspaces

Authors: Awiszus, Maren; Ackermann, Hanno; Rosenhahn, Bodo
Publication year: 2019
Subject:
Document type: Working Paper
Description: Image-generating neural networks are mostly viewed as black boxes, where any change in the input can cause a number of globally effective changes in the output. In this work, we propose a method for learning disentangled representations that allows for localized image manipulations. We use face images as our example of choice. Depending on the image region, identity and other facial attributes can be modified. The proposed network can transfer parts of a face, such as the shape and color of the eyes, hair, mouth, etc., directly between persons while all other parts of the face remain unchanged. The network can generate modified images that appear realistic. Our model learns disentangled representations through weak supervision. We propose a localized ResNet autoencoder optimized with several loss functions, including a loss based on semantic segmentations, which we interpret as masks, and a loss that enforces disentanglement by decomposing the latent space into statistically independent subspaces. We evaluate the proposed solution with respect to disentanglement and generated image quality. Convincing results are demonstrated on the CelebA dataset.
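The description mentions two loss ingredients: a reconstruction term weighted by semantic segmentation masks and a term encouraging statistically independent latent subspaces. The following is a minimal, hypothetical PyTorch sketch of such terms, not the authors' implementation; the function names, the per-region mask weighting, and the use of a cross-covariance penalty as a proxy for independence are all assumptions made for illustration only.

```python
# Hypothetical sketch, not the paper's code. Assumes the encoder outputs a latent
# vector split into per-region subspaces and that segmentation masks are given
# as per-pixel, per-region weights.
import torch

def masked_reconstruction_loss(x, x_hat, masks):
    """L1 reconstruction error weighted by region masks.

    x, x_hat: (B, C, H, W) images; masks: (B, R, H, W), one channel per region.
    """
    per_pixel = (x - x_hat).abs().mean(dim=1, keepdim=True)   # (B, 1, H, W)
    return (per_pixel * masks).sum() / masks.sum().clamp(min=1.0)

def independence_loss(z, subspace_dims):
    """Penalize cross-covariance between latent subspaces.

    This is only a simple proxy for statistical independence; the paper's exact
    criterion may differ. z: (B, D) latent codes; subspace_dims: list of block sizes.
    """
    z = z - z.mean(dim=0, keepdim=True)
    cov = z.t() @ z / max(z.shape[0] - 1, 1)                   # (D, D) covariance
    # Keep only cross-subspace entries by zeroing the within-subspace blocks.
    mask = torch.ones_like(cov)
    start = 0
    for d in subspace_dims:
        mask[start:start + d, start:start + d] = 0.0
        start += d
    return (cov * mask).pow(2).sum()
```

In such a setup, the two terms would typically be combined with weighting coefficients, e.g. `loss = masked_reconstruction_loss(x, x_hat, masks) + lam * independence_loss(z, subspace_dims)`, where `lam` balances image fidelity against disentanglement.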
Comment: Accepted at ICCV 2019 Workshop on Robust Subspace Learning and Applications in Computer Vision
Database: arXiv