Showing 1 - 10 of 327 for search: '"Elgammal, Ahmed A."'
In this paper, we present MoMA: an open-vocabulary, training-free personalized image model that boasts flexible zero-shot capabilities. As foundational text-to-image models rapidly evolve, the demand for robust image-to-image translation grows. …
External link:
http://arxiv.org/abs/2404.05674
Author:
Khan, Faizan Farooq, Kim, Diana, Jha, Divyansh, Mohamed, Youssef, Chang, Hanna H, Elgammal, Ahmed, Elliott, Luba, Elhoseiny, Mohamed
Discovering the creative potential of a random signal across artistic expressions of aesthetic and conceptual richness is a foundation of the recent success of generative machine learning as a way of creating art. To understand this new artistic medium, …
External link:
http://arxiv.org/abs/2402.02453
Can a text-to-image diffusion model be used as a training objective for adapting a GAN generator to another domain? In this paper, we show that classifier-free guidance can be leveraged as a critic, enabling generators to distill knowledge from …
External link:
http://arxiv.org/abs/2212.04473
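The idea sketched in this abstract, a frozen diffusion model's classifier-free guidance acting as a critic for a GAN generator, resembles a score-distillation-style objective. The toy PyTorch sketch below illustrates that pattern only; the stand-in networks, the linear noise schedule, and all names are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch: a frozen diffusion "critic" with classifier-free
# guidance supplies a gradient signal that adapts a GAN generator.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in GAN generator: latent vector -> image."""
    def __init__(self, z_dim=64, img_ch=3, size=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, img_ch * size * size), nn.Tanh())
        self.shape = (img_ch, size, size)

    def forward(self, z):
        return self.net(z).view(z.size(0), *self.shape)

class TinyDiffusionCritic(nn.Module):
    """Stand-in frozen diffusion model: predicts noise from a noisy image,
    a timestep, and a text embedding (zeros = unconditional)."""
    def __init__(self, img_ch=3, size=32, txt_dim=16):
        super().__init__()
        d = img_ch * size * size
        self.eps_net = nn.Linear(d + 1 + txt_dim, d)

    def forward(self, x_noisy, t, txt_emb):
        inp = torch.cat([x_noisy.flatten(1), t.float().view(-1, 1) / 1000.0, txt_emb], dim=1)
        return self.eps_net(inp).view_as(x_noisy)

def cfg_distill_loss(gen_img, critic, txt_emb, guidance_scale=7.5):
    """Score-distillation-style loss: the generator is pushed along the
    direction suggested by the classifier-free-guided noise prediction."""
    b = gen_img.size(0)
    t = torch.randint(1, 1000, (b,))
    noise = torch.randn_like(gen_img)
    # crude linear noising schedule, for illustration only
    alpha = 1.0 - t.float().view(-1, 1, 1, 1) / 1000.0
    x_noisy = alpha.sqrt() * gen_img + (1 - alpha).sqrt() * noise
    with torch.no_grad():
        eps_cond = critic(x_noisy, t, txt_emb)
        eps_uncond = critic(x_noisy, t, torch.zeros_like(txt_emb))
        eps_guided = eps_uncond + guidance_scale * (eps_cond - eps_uncond)
    # gradient flows only through gen_img; (eps_guided - noise) acts as the critic signal
    return ((eps_guided - noise) * gen_img).mean()

# Usage: one adaptation step of the generator toward the text condition.
G, critic = TinyGenerator(), TinyDiffusionCritic()
opt = torch.optim.Adam(G.parameters(), lr=1e-4)
loss = cfg_distill_loss(G(torch.randn(8, 64)), critic, torch.randn(8, 16))
loss.backward()
opt.step()
```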
We present a machine learning system that can quantify fine art paintings with a set of visual elements and principles of art. This formal analysis is fundamental for understanding art, but developing such a system is challenging. Paintings have high …
External link:
http://arxiv.org/abs/2201.01819
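The system described here maps a painting to scores for visual elements and principles of art. One minimal way to structure such a model is a shared image backbone with a joint regression head over the attributes, as in the hedged PyTorch sketch below; the attribute list and the architecture are illustrative assumptions, not the paper's design.

```python
# Hypothetical sketch: shared backbone + joint regression head that scores
# a painting on several visual elements / principles of art.
import torch
import torch.nn as nn

ATTRIBUTES = ["line", "color", "space", "balance", "contrast", "movement"]  # assumed list

class FormalAnalysisNet(nn.Module):
    def __init__(self, feat_dim=128, n_attrs=len(ATTRIBUTES)):
        super().__init__()
        # toy backbone standing in for a pretrained CNN feature extractor
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim))
        # one score per attribute, predicted jointly from shared features
        self.heads = nn.Linear(feat_dim, n_attrs)

    def forward(self, img):
        return self.heads(self.backbone(img))  # (batch, n_attrs) scores

scores = FormalAnalysisNet()(torch.randn(2, 3, 64, 64))
print(dict(zip(ATTRIBUTES, scores[0].tolist())))
```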
Training Generative Adversarial Networks (GANs) on high-fidelity images usually requires large-scale GPU clusters and a vast number of training images. In this paper, we study the few-shot image synthesis task for GANs with minimum computing cost. …
External link:
http://arxiv.org/abs/2101.04775
Imagining a colored, realistic image from an arbitrarily drawn sketch is one of the human capabilities that we are eager for machines to mimic. Unlike previous methods that either require sketch-image pairs or utilize low-quality detected edges as sketches, …
External link:
http://arxiv.org/abs/2012.09290
Author:
Khayatkhoei, Mahyar, Elgammal, Ahmed
As the success of Generative Adversarial Networks (GANs) on natural images quickly propels them into various real-life applications across different domains, it becomes more and more important to clearly understand their limitations. Specifically, …
External link:
http://arxiv.org/abs/2010.01473
Focusing on text-to-image (T2I) generation, we propose Text and Image Mutual-Translation Adversarial Networks (TIME), a lightweight but effective model that jointly learns a T2I generator G and an image-captioning discriminator D under the Generative Adversarial Network framework. …
External link:
http://arxiv.org/abs/2005.13192
Published in:
ACCV 2020
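The abstract describes a generator G and a discriminator D that also captions images, trained jointly in a GAN setup. The sketch below shows one plausible form of that joint objective with toy stand-in modules; the architectures, losses, and hyperparameters are assumptions for illustration, not the TIME model itself.

```python
# Hypothetical sketch: T2I generator + image-captioning discriminator trained adversarially.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, TXT_DIM, IMG_DIM, MAX_LEN = 100, 32, 3 * 32 * 32, 8

class G(nn.Module):  # text embedding -> image
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(TXT_DIM, IMG_DIM), nn.Tanh())
    def forward(self, txt):
        return self.net(txt).view(-1, 3, 32, 32)

class D(nn.Module):  # image -> (real/fake logit, caption token logits)
    def __init__(self):
        super().__init__()
        self.feat = nn.Linear(IMG_DIM, 64)
        self.adv = nn.Linear(64, 1)
        self.cap = nn.Linear(64, MAX_LEN * VOCAB)
    def forward(self, img):
        h = torch.relu(self.feat(img.flatten(1)))
        return self.adv(h), self.cap(h).view(-1, MAX_LEN, VOCAB)

g, d = G(), D()
opt_g = torch.optim.Adam(g.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(d.parameters(), lr=2e-4)

real_img = torch.randn(4, 3, 32, 32)
txt_emb = torch.randn(4, TXT_DIM)               # in practice, encodes the caption below
caption = torch.randint(0, VOCAB, (4, MAX_LEN))  # ground-truth caption token ids

# Discriminator step: distinguish real/fake AND caption the real image.
adv_real, cap_logits = d(real_img)
adv_fake, _ = d(g(txt_emb).detach())
d_loss = (F.binary_cross_entropy_with_logits(adv_real, torch.ones_like(adv_real))
          + F.binary_cross_entropy_with_logits(adv_fake, torch.zeros_like(adv_fake))
          + F.cross_entropy(cap_logits.flatten(0, 1), caption.flatten()))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool D and make the generated image yield the right caption.
adv_fake, cap_fake = d(g(txt_emb))
g_loss = (F.binary_cross_entropy_with_logits(adv_fake, torch.ones_like(adv_fake))
          + F.cross_entropy(cap_fake.flatten(0, 1), caption.flatten()))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()  # D's stray grads are cleared at its next step
```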
We propose a new approach for synthesizing fully detailed, art-stylized images from sketches. Given a sketch with no semantic tagging and a reference image of a specific style, the model can synthesize meaningful details with colors and textures. …
External link:
http://arxiv.org/abs/2002.12888
Published in:
In Journal of Constructional Steel Research, December 2023, 211