Showing 1 - 10 of 22
for search: '"Nitzan, Yotam"'
Virtual Try-On (VTON) is a highly active line of research, with increasing demand. It aims to replace a piece of garment in an image with one from another, while preserving person and garment characteristics as well as image fidelity. Current literat…
External link:
http://arxiv.org/abs/2406.15331
Author:
Nitzan, Yotam, Wu, Zongze, Zhang, Richard, Shechtman, Eli, Cohen-Or, Daniel, Park, Taesung, Gharbi, Michaël
We introduce a novel diffusion transformer, LazyDiffusion, that generates partial image updates efficiently. Our approach targets interactive image editing applications in which, starting from a blank canvas or an image, a user specifies a sequence o…
External link:
http://arxiv.org/abs/2404.12382
In recent years, the role of image generative models in facial reenactment has been steadily increasing. Such models are usually subject-agnostic and trained on domain-wide datasets. The appearance of the reenacted individual is learned from a single…
External link:
http://arxiv.org/abs/2307.06307
Author:
Nitzan, Yotam, Gharbi, Michaël, Zhang, Richard, Park, Taesung, Zhu, Jun-Yan, Cohen-Or, Daniel, Shechtman, Eli
Can one inject new concepts into an already trained generative model, while respecting its existing structure and knowledge? We propose a new task - domain expansion - to address this. Given a pretrained generator and novel (but related) domains, we…
External link:
http://arxiv.org/abs/2301.05225
Author:
Nitzan, Yotam, Aberman, Kfir, He, Qiurui, Liba, Orly, Yarom, Michal, Gandelsman, Yossi, Mosseri, Inbar, Pritch, Yael, Cohen-or, Daniel
We introduce MyStyle, a personalized deep generative prior trained with a few shots of an individual. MyStyle allows one to reconstruct, enhance and edit images of a specific person, such that the output is faithful to the person's key facial characteris…
External link:
http://arxiv.org/abs/2203.17272
Author:
Bermano, Amit H., Gal, Rinon, Alaluf, Yuval, Mokady, Ron, Nitzan, Yotam, Tov, Omer, Patashnik, Or, Cohen-Or, Daniel
Generative Adversarial Networks (GANs) have established themselves as a prevalent approach to image synthesis. Of these, StyleGAN offers a fascinating case study, owing to its remarkable visual quality and an ability to support a large array of downs…
External link:
http://arxiv.org/abs/2202.14020
Published in:
Proc. 10th International Conference on Learning Representations, ICLR 2022
In this paper, we perform an in-depth study of the properties and applications of aligned generative models. We refer to two models as aligned if they share the same architecture, and one of them (the child) is obtained from the other (the parent) vi…
External link:
http://arxiv.org/abs/2110.11323
We propose a novel method for solving regression tasks using few-shot or weak supervision. At the core of our method is the fundamental observation that GANs are incredibly successful at encoding semantic information within their latent space, even i…
External link:
http://arxiv.org/abs/2107.11186
Recently, there has been a surge of diverse methods for performing image editing by employing pre-trained unconditional generators. Applying these methods on real images, however, remains a challenge, as it necessarily requires the inversion of the i…
External link:
http://arxiv.org/abs/2102.02766
Author:
Richardson, Elad, Alaluf, Yuval, Patashnik, Or, Nitzan, Yotam, Azar, Yaniv, Shapiro, Stav, Cohen-Or, Daniel
We present a generic image-to-image translation framework, pixel2style2pixel (pSp). Our pSp framework is based on a novel encoder network that directly generates a series of style vectors which are fed into a pretrained StyleGAN generator, forming th…
External link:
http://arxiv.org/abs/2008.00951