Showing 1 - 8 of 8 for search: '"Jiang, Zeyinzi"'
Author: Han, Zhen; Jiang, Zeyinzi; Pan, Yulin; Zhang, Jingfeng; Mao, Chaojie; Xie, Chenwei; Liu, Yu; Zhou, Jingren
Diffusion models have emerged as a powerful generative technology and have been found applicable in various scenarios. Most existing foundational diffusion models are primarily designed for text-guided visual generation and do not support multi-modal…
External link: http://arxiv.org/abs/2410.00086
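For context on the text-guided generation this snippet mentions: the standard steering mechanism in text-guided diffusion samplers is classifier-free guidance. A minimal sketch follows; the function name and default scale are illustrative, not taken from the paper.

```python
import torch

def cfg_noise(eps_uncond: torch.Tensor, eps_cond: torch.Tensor,
              guidance_scale: float = 7.5) -> torch.Tensor:
    """Classifier-free guidance: push the noise prediction toward the
    text-conditional direction; guidance_scale > 1 strengthens the prompt."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```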
Author: Zhao, Yuze; Huang, Jintao; Hu, Jinghan; Wang, Xingjun; Mao, Yunlin; Zhang, Daoze; Jiang, Zeyinzi; Wu, Zhikai; Ai, Baole; Wang, Ang; Zhou, Wenmeng; Chen, Yingda
Recent developments in Large Language Models (LLMs) and Multi-modal Large Language Models (MLLMs) have leveraged attention-based Transformer architectures and achieved superior performance and generalization capabilities. They have since covered extensive…
External link: http://arxiv.org/abs/2408.05517
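As background for the attention-based Transformer architectures the snippet refers to, here is a minimal sketch of scaled dot-product attention, the core operation; shapes and names are illustrative.

```python
import math
import torch

def attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    # q, k, v: (batch, heads, seq_len, head_dim)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    return torch.softmax(scores, dim=-1) @ v
```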
Given an original image, image editing aims to generate an image that aligns with a provided instruction. The challenges are accepting multimodal inputs as instructions and the scarcity of high-quality training data, including crucial triplets of source…
External link: http://arxiv.org/abs/2404.12154
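A minimal sketch of the kind of training triplet the snippet describes; the field names and types are assumptions for illustration, not the paper's data format.

```python
from dataclasses import dataclass

@dataclass
class EditingTriplet:
    """One training example for instruction-guided image editing."""
    source_image: bytes   # encoded source image
    instruction: str      # possibly multimodal in practice; text here
    target_image: bytes   # encoded ground-truth edited image
```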
Prior studies have made significant progress in image inpainting guided by either text or a subject image. However, research on editing with their combined guidance is still in its early stages. To tackle this challenge, we present LAR-Gen, a novel…
External link: http://arxiv.org/abs/2403.19534
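For context, a common building block in diffusion-based inpainting is masked blending of the generated and known regions; this is a generic sketch, not LAR-Gen's specific method.

```python
import torch

def blend_known_region(x_generated: torch.Tensor, x_known: torch.Tensor,
                       mask: torch.Tensor) -> torch.Tensor:
    """mask == 1 marks the region to (re)generate; everything else is
    copied from the known image so only the hole is synthesized."""
    return mask * x_generated + (1.0 - mask) * x_known
```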
Author: Mao, Chaojie; Jiang, Zeyinzi
Res-Tuning introduces a flexible and efficient paradigm for model tuning, showing that tuners decoupled from the backbone network can achieve performance comparable to traditional methods. Existing methods commonly construct the tuner as a set of trainable…
External link: http://arxiv.org/abs/2312.16916
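One common way to "construct the tuner as a set of trainable parameters" is a zero-initialized low-rank branch, sketched below; the design and hyperparameters are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class LowRankTuner(nn.Module):
    """Tuner as a small set of trainable parameters: a low-rank residual."""
    def __init__(self, dim: int, rank: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)  # zero-init: no effect at step 0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x))
```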
Image diffusion models have been utilized in various tasks, such as text-to-image generation and controllable image synthesis. Recent research has introduced tuning methods that make subtle adjustments to the original models, yielding promising results…
External link: http://arxiv.org/abs/2312.11392
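For context, one way such "subtle adjustments" are often realized is a zero-initialized residual module on a frozen model's features, so tuning starts from the unmodified model. This is a generic pattern, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ResidualTuner(nn.Module):
    """Zero-initialized 1x1 conv added residually to a frozen feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)
        nn.init.zeros_(self.proj.weight)
        nn.init.zeros_(self.proj.bias)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return feat + self.proj(feat)  # identical to the frozen model at init
```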
Author: Jiang, Zeyinzi; Mao, Chaojie; Huang, Ziyuan; Ma, Ao; Lv, Yiliang; Shen, Yujun; Zhao, Deli; Zhou, Jingren
Parameter-efficient tuning has become a trend in transferring large-scale foundation models to downstream applications. Existing methods typically embed some lightweight tuners into the backbone, where both the design and the learning of the tuners…
External link: http://arxiv.org/abs/2310.19859
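A minimal reading of decoupling the tuner from the backbone, sketched under the assumption that the tuner consumes cached intermediate features from outside the frozen network; this is not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DecoupledTuner(nn.Module):
    """Lives outside the frozen backbone and consumes its intermediate
    features, so the backbone itself is never modified."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.mix = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(),
                                 nn.Linear(hidden, dim))

    def forward(self, backbone_feats: list[torch.Tensor]) -> torch.Tensor:
        out = backbone_feats[-1]
        for feat in backbone_feats:  # refine with every cached stage's features
            out = out + self.mix(feat)
        return out
```

Because the backbone is untouched, its forward passes can in principle be cached and reused across tuning runs.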
Parameter-efficient transfer learning (PETL) based on large-scale pre-trained foundation models has achieved great success in various downstream applications. Existing tuning methods, such as prompt, prefix, and adapter, perform task-specific lightweight…
External link: http://arxiv.org/abs/2303.00690
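Of the three PETL families named in the snippet, the adapter is the easiest to sketch: a small residual bottleneck trained while the host model stays frozen (dimensions are illustrative). Prompt and prefix tuning instead prepend trainable tokens or key-value pairs.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter inserted after a frozen sublayer; only these
    parameters are trained."""
    def __init__(self, dim: int, bottleneck: int = 32):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(torch.relu(self.down(x)))
```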