Showing 1 - 10 of 16 for search: '"Kupyn, Orest"'
Human head detection, keypoint estimation, and 3D head model fitting are essential tasks with many applications. However, traditional real-world datasets often suffer from bias, privacy, and ethical concerns, and they have been recorded in laboratory…
External link: http://arxiv.org/abs/2407.18245
Author: Kupyn, Orest, Rupprecht, Christian
We present a method for expanding a dataset by incorporating knowledge from the wide distribution of pre-trained latent diffusion models. Data augmentations typically incorporate inductive biases about the image formation process into the training…
External link: http://arxiv.org/abs/2406.08249
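The snippet above only gestures at the approach; as a rough illustration of expanding a dataset with a pre-trained latent diffusion model (not the pipeline of arXiv:2406.08249 itself), a minimal img2img sketch with the Hugging Face diffusers library could look like this. The checkpoint id, prompt, and strength are illustrative assumptions:

```python
# Minimal sketch: produce synthetic variants of a training image with a
# pre-trained latent diffusion model (img2img). Illustrative only; this is
# NOT the pipeline of arXiv:2406.08249. Model id, prompt, and strength are
# assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed public checkpoint
    torch_dtype=torch.float16,
).to("cuda")

def expand(image_path: str, n_variants: int = 4):
    """Return n_variants diffusion-generated views of one training image."""
    src = Image.open(image_path).convert("RGB").resize((512, 512))
    variants = []
    for _ in range(n_variants):
        out = pipe(
            prompt="a photo",     # generic prompt; real systems condition more carefully
            image=src,
            strength=0.4,         # low strength keeps the original content plausible
            guidance_scale=7.5,
        )
        variants.append(out.images[0])
    return variants
```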
Author: Martyniuk, Tetiana, Kupyn, Orest, Kurlyak, Yana, Krashenyi, Igor, Matas, Jiři, Sharmanska, Viktoriia
We present DAD-3DHeads, a dense and diverse large-scale dataset, and a robust model for 3D Dense Head Alignment in the wild. It contains annotations of over 3.5K landmarks that accurately represent 3D head shape compared to the ground-truth scans…
External link: http://arxiv.org/abs/2204.03688
We present FEAR, a family of fast, efficient, accurate, and robust Siamese visual trackers. We present a novel and efficient way to benefit from dual-template representation for object model adaption, which incorporates temporal information with only…
External link: http://arxiv.org/abs/2112.07957
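The abstract mentions a dual-template representation for object model adaption; the sketch below is a generic illustration of mixing a static first-frame template with a dynamically updated one before depthwise cross-correlation with the search region. It is not the FEAR architecture; the mixing scheme and channel count are assumptions:

```python
# Generic dual-template Siamese head: a static template from the first frame
# is mixed with a dynamic template from a recent frame, then cross-correlated
# with the search-region features. Illustrative only; not arXiv:2112.07957.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualTemplateHead(nn.Module):
    def __init__(self):
        super().__init__()
        # Single learnable mixing weight between static and dynamic templates.
        self.mix = nn.Parameter(torch.tensor(0.5))

    def forward(self, static_t, dynamic_t, search):
        # static_t, dynamic_t: (B, C, Ht, Wt) template features
        # search:              (B, C, Hs, Ws) search-region features
        w = torch.sigmoid(self.mix)
        template = w * static_t + (1.0 - w) * dynamic_t
        # Depthwise cross-correlation: use the template as a per-sample kernel.
        b, c, hs, ws = search.shape
        kernel = template.reshape(b * c, 1, *template.shape[-2:])
        x = search.reshape(1, b * c, hs, ws)
        response = F.conv2d(x, kernel, groups=b * c)
        return response.reshape(b, c, *response.shape[-2:])
```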
Author: Kosarevych, Ivan, Petruk, Marian, Kostiv, Markian, Kupyn, Orest, Maksymenko, Mykola, Budzan, Volodymyr
This paper introduces ActGAN - a novel end-to-end generative adversarial network (GAN) for one-shot face reenactment. Given two images, the goal is to transfer the facial expression of the source actor onto a target person in a photo-realistic fashion…
External link: http://arxiv.org/abs/2003.13840
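As a rough data-flow illustration of one-shot face reenactment (identity image plus driving expression image in, reenacted face out), here is a toy encoder-decoder sketch; it is not the ActGAN architecture, and every layer choice is an assumption:

```python
# Data-flow sketch for one-shot face reenactment: the generator consumes an
# identity image (target person) and a driving image (source expression) and
# outputs the target person wearing the source expression. Toy layers only;
# not the ActGAN model from arXiv:2003.13840.
import torch
import torch.nn as nn

class ReenactmentGenerator(nn.Module):
    def __init__(self, feat: int = 64):
        super().__init__()
        self.identity_enc = nn.Sequential(nn.Conv2d(3, feat, 3, padding=1), nn.ReLU())
        self.expression_enc = nn.Sequential(nn.Conv2d(3, feat, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(nn.Conv2d(2 * feat, 3, 3, padding=1), nn.Tanh())

    def forward(self, target_identity, source_expression):
        z = torch.cat([self.identity_enc(target_identity),
                       self.expression_enc(source_expression)], dim=1)
        return self.decoder(z)  # reenacted face: same identity, new expression
```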
Author: Kupyn, Orest, Pranchuk, Dmitry
The highest accuracy object detectors to date are based either on a two-stage approach such as Fast R-CNN or on one-stage detectors such as Retina-Net or SSD with deep and complex backbones. In this paper we present TigerNet - simple yet efficient FPN-based…
External link: http://arxiv.org/abs/1909.01122
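To illustrate what an "FPN-based" single-class detector can look like in code (not the TigerNet model itself), here is a minimal sketch using torchvision's FeaturePyramidNetwork; the channel sizes and the tiny prediction head are assumptions:

```python
# Minimal FPN-style single-class detection sketch; generic illustration only,
# not the TigerNet model from arXiv:1909.01122.
from collections import OrderedDict
import torch
import torch.nn as nn
from torchvision.ops import FeaturePyramidNetwork

fpn = FeaturePyramidNetwork(in_channels_list=[256, 512, 1024, 2048], out_channels=256)
head = nn.Conv2d(256, 1 + 4, kernel_size=3, padding=1)  # 1 objectness + 4 box offsets per cell

def detect(backbone_feats):
    # backbone_feats: OrderedDict of multi-scale feature maps, e.g. ResNet stages C2..C5.
    pyramid = fpn(backbone_feats)                 # same keys, all with 256 channels
    return {level: head(f) for level, f in pyramid.items()}

# Example with dummy feature maps:
feats = OrderedDict(
    c2=torch.randn(1, 256, 64, 64), c3=torch.randn(1, 512, 32, 32),
    c4=torch.randn(1, 1024, 16, 16), c5=torch.randn(1, 2048, 8, 8),
)
outputs = detect(feats)  # per-level (1, 5, H, W) prediction maps
```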
We present a new end-to-end generative adversarial network (GAN) for single image motion deblurring, named DeblurGAN-v2, which considerably boosts state-of-the-art deblurring efficiency, quality, and flexibility. DeblurGAN-v2 is based on a relativistic…
External link: http://arxiv.org/abs/1908.03826
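The abstract is cut off at "relativistic"; as a generic illustration of a relativistic average least-squares GAN objective of the kind such discriminators use (not necessarily the exact DeblurGAN-v2 loss and weighting), a sketch might be:

```python
# Sketch of a relativistic average least-squares GAN loss (RaLSGAN), as a
# generic example of a "relativistic" adversarial objective. The exact
# DeblurGAN-v2 loss details are not reproduced here.
import torch

def ra_lsgan_d_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Discriminator: real scores should exceed the average fake score, and vice versa."""
    return ((d_real - d_fake.mean() - 1.0) ** 2).mean() + \
           ((d_fake - d_real.mean() + 1.0) ** 2).mean()

def ra_lsgan_g_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Generator: push fake scores above the average real score."""
    return ((d_real - d_fake.mean() + 1.0) ** 2).mean() + \
           ((d_fake - d_real.mean() - 1.0) ** 2).mean()
```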
Data augmentation is widely used as a part of the training process applied to deep learning models, especially in the computer vision domain. Currently, common data augmentation techniques are designed manually. Therefore, they require expert knowledge…
External link: http://arxiv.org/abs/1907.12896
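To make the contrast concrete, here is a small sketch of a hand-designed torchvision augmentation pipeline next to RandAugment, used only as a stand-in for an automatically found policy; this is not the method of arXiv:1907.12896:

```python
# Manual vs. automated augmentation: a hand-designed pipeline with
# expert-chosen ops and magnitudes, and RandAugment as a stand-in for a
# policy that is sampled/searched rather than hand-tuned.
from torchvision import transforms

manual_pipeline = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),          # expert-chosen ops and magnitudes
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ToTensor(),
])

automated_pipeline = transforms.Compose([
    transforms.RandAugment(num_ops=2, magnitude=9),  # policy sampled instead of hand-tuned
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
```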
We present DeblurGAN, an end-to-end learned method for motion deblurring. The learning is based on a conditional GAN and the content loss. DeblurGAN achieves state-of-the-art performance both in the structural similarity measure and visual appearance…
External link: http://arxiv.org/abs/1711.07064
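As a sketch of the kind of objective the abstract describes (an adversarial term plus a VGG-feature content loss), the following is illustrative; the layer cut-off, weighting, and adversarial form are assumptions rather than the exact DeblurGAN losses:

```python
# Sketch of combining an adversarial term with a VGG-feature "content"
# (perceptual) loss for deblurring. Layer choice, weighting, and the
# critic-style adversarial term are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

# Frozen VGG19 feature extractor for the content term
# (inputs assumed already normalized to the VGG input range).
vgg_features = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:15].eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)

def content_loss(restored: torch.Tensor, sharp: torch.Tensor) -> torch.Tensor:
    """L2 distance between VGG feature maps of the restored and ground-truth images."""
    return F.mse_loss(vgg_features(restored), vgg_features(sharp))

def generator_loss(d_fake: torch.Tensor, restored, sharp, lam: float = 100.0):
    adv = -d_fake.mean()                  # critic-style adversarial term (illustrative)
    return adv + lam * content_loss(restored, sharp)
```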
Published in: Journal of Physics: Conference Series; 10/25/2023, Vol. 2640 Issue 1, p1-6, 6p