Showing 1 - 10 of 2,284 results for the search: '"cross-domain image"'
Different camera sensors have different noise patterns, and thus an image denoising model trained on one sensor often does not generalize well to a different sensor. One plausible solution is to collect a large dataset for each sensor for training or …
External link:
http://arxiv.org/abs/2411.01472
We present TALE, a novel training-free framework harnessing the generative capabilities of text-to-image diffusion models to address the cross-domain image composition task that focuses on flawlessly incorporating user-specified objects into a design …
External link:
http://arxiv.org/abs/2408.03637
Published in:
Scientific Reports, Vol 14, Iss 1, Pp 1-12 (2024)
Abstract: Recently, impressive progress has been made in cross-domain image translation using image generation models pre-trained on massive amounts of data, since these pre-trained generative models have strong generative capabilities. However, due to …
External link:
https://doaj.org/article/59d01d0c66ac46ae986c5de6969eaf2d
Academic article
This result cannot be displayed to unauthenticated users. Sign-in is required to view it.
Cross-Domain Image Retrieval (CDIR) is a challenging task in computer vision, aiming to match images across different visual domains such as sketches, paintings, and photographs. Traditional approaches focus on visual image features and rely heavily …
External link:
http://arxiv.org/abs/2403.15152
Unsupervised cross-domain image retrieval (UCIR) aims to retrieve images sharing the same category across diverse domains without relying on labeled data. Prior approaches have typically decomposed the UCIR problem into two distinct tasks: intra-domain …
External link:
http://arxiv.org/abs/2402.18411
The purpose of this paper is to enable the conversion between machine-printed character images (i.e., font images) and handwritten character images through machine learning. For this purpose, we propose a novel unpaired image-to-image domain conversion …
External link:
http://arxiv.org/abs/2403.02919
Academic article
This result cannot be displayed to unauthenticated users. Sign-in is required to view it.
Author:
Xie, Yunjie1 (AUTHOR) 222208855029@zust.edu.cn, Xiang, Jian1 (AUTHOR) xiangjian@zust.edu.cn, Li, Xiaoyong1 (AUTHOR), Yang, Chen1 (AUTHOR)
Published in:
Fishes (MDPI AG), Sep 2024, Vol. 9, Issue 9, p. 338, 24 pp.
Text-driven diffusion models have exhibited impressive generative capabilities, enabling various image editing tasks. In this paper, we propose TF-ICON, a novel Training-Free Image COmpositioN framework that harnesses the power of text-driven diffusion …
External link:
http://arxiv.org/abs/2307.12493