Showing 1 - 6 of 6
for search: '"Ilya Kalinovskiy"'
Published in:
Informatica. :425-440
Published in:
Pattern Recognition Letters. 138:527-533
In this work, we introduce a novel framework based on Generative Adversarial Networks to control the pose, expression, and facial features of a given face image using another face image. It can then be used for data augmentation, pose-invariant face i…
Published in:
Proceedings of the 5th International Conference on Engineering and MIS.
This paper provides a comparative analysis between two recent image-to-image translation models based on Generative Adversarial Networks. The first one is UNIT, which consists of coupled GANs and variational autoencoders (VAEs) with shared-latent spac…
Published in:
Artificial Neural Networks and Machine Learning – ICANN 2019: Workshop and Special Sessions ISBN: 9783030304928
ICANN (Workshop)
Synthesizing realistic multi-view face images from a single-view input is an effective and cheap way for data augmentation. In addition, it is promising for more efficiently training deep pose-invariant models for large-scale unconstrained face recogn…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::bd3d0bb044900011e8abd4e8e7e53480
https://doi.org/10.1007/978-3-030-30493-5_51
Author:
Andrei Oleinik, Ilya Kalinovskiy, Aleksandr Melnikov, Evgeny Smirnov, Eugene Luckyanets, Elizaveta Ivanova
Published in:
CVPR Workshops
Hard example mining is an important part of deep embedding learning. Most methods perform it at the mini-batch level. However, in large-scale settings there is only a small chance that proper examples will appear in the same mini-batch and wi…
Published in:
IOP Conference Series: Materials Science and Engineering. 618:012012
This paper provides a comparative analysis between two recent image-to-image translation models based on Generative Adversarial Networks. The first one is UNIT, which consists of coupled GANs and variational autoencoders (VAEs) with shared-late…