Showing 1 - 10 of 4,028 for search: '"KANG, YAN"'
Author:
You, Haoran; Barnes, Connelly; Zhou, Yuqian; Kang, Yan; Du, Zhenbang; Zhou, Wei; Zhang, Lingzhi; Nitzan, Yotam; Liu, Xiaoyang; Lin, Zhe; Shechtman, Eli; Amirghodsi, Sohrab; Lin, Yingyan Celine
Diffusion Transformers (DiTs) have achieved state-of-the-art (SOTA) image generation quality but suffer from high latency and memory inefficiency, making them difficult to deploy on resource-constrained devices. One key efficiency bottleneck is that …
External link:
http://arxiv.org/abs/2412.16822
Author:
Ding, Zihan; Jin, Chi; Liu, Difan; Zheng, Haitian; Singh, Krishna Kumar; Zhang, Qiang; Kang, Yan; Lin, Zhe; Liu, Yuchen
Diffusion probabilistic models have shown significant progress in video generation; however, their computational efficiency is limited by the large number of sampling steps required. Reducing sampling steps often compromises video quality or generation …
External link:
http://arxiv.org/abs/2412.15689
By adapting Large Language Models (LLMs) to domain-specific tasks or enriching them with domain-specific knowledge, we can fully harness the capabilities of LLMs. Nonetheless, a gap persists in achieving simultaneous mutual enhancement between the …
External link:
http://arxiv.org/abs/2411.11707
Diffusion probabilistic models can generate high-quality samples. Yet, their sampling process requires numerous denoising steps, making it slow and computationally intensive. We propose to reduce the sampling cost by pruning a pretrained diffusion model … (a generic pruning sketch follows this entry's link)
External link:
http://arxiv.org/abs/2409.15557
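The entry above proposes cutting sampling cost by pruning a pretrained diffusion model. The abstract is truncated before the method is described, so the following is only a generic, hedged sketch of magnitude-based weight pruning in PyTorch; the toy denoiser and all names are placeholders, not the paper's actual approach.

```python
# Hedged sketch: magnitude-based pruning of a pretrained denoiser.
# Generic illustration only; NOT the method of arXiv:2409.15557.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_denoiser(model: nn.Module, amount: float = 0.5) -> nn.Module:
    """Zero the `amount` fraction of smallest-magnitude weights in every
    Conv2d/Linear layer, then bake the masks into the weights."""
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")  # make pruning permanent
    return model

# Toy stand-in for a pretrained denoiser (placeholder, not a real DiT/UNet).
denoiser = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.SiLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
denoiser = prune_denoiser(denoiser)
total = sum(m.weight.numel() for m in denoiser.modules()
            if isinstance(m, nn.Conv2d))
zeros = sum((m.weight == 0).sum().item() for m in denoiser.modules()
            if isinstance(m, nn.Conv2d))
print(f"conv weight sparsity after pruning: {zeros / total:.2f}")
```

In practice a pruned diffusion model is then fine-tuned briefly to recover sample quality, and unstructured sparsity like this only yields real speedups on hardware or kernels that exploit it.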
Personalized Federated Continual Learning (PFCL) is a new practical scenario that poses greater challenges in sharing and personalizing knowledge. PFCL not only relies on knowledge fusion for server aggregation at the global spatial-temporal perspective …
External link:
http://arxiv.org/abs/2407.00113
Author:
Fan, Tao; Kang, Yan; Chen, Weijing; Gu, Hanlin; Song, Yuanfeng; Fan, Lixin; Chen, Kai; Yang, Qiang
In the context of real-world applications, leveraging large language models (LLMs) for domain-specific tasks often faces two major challenges: domain-specific knowledge privacy and constrained resources. To address these issues, we propose PDSS, a privacy-preserving …
External link:
http://arxiv.org/abs/2406.12403
Author:
Fan, Tao; Ma, Guoqiang; Kang, Yan; Gu, Hanlin; Song, Yuanfeng; Fan, Lixin; Chen, Kai; Yang, Qiang
Recent research in federated large language models (LLMs) has primarily focused on enabling clients to fine-tune their locally deployed homogeneous LLMs collaboratively or on transferring knowledge from server-based LLMs to small language models (SLMs) …
External link:
http://arxiv.org/abs/2406.02224
Author:
Gu, Hanlin; Luo, Jiahuan; Kang, Yan; Yao, Yuan; Zhu, Gongxi; Li, Bowen; Fan, Lixin; Yang, Qiang
Federated learning (FL) has emerged as a collaborative approach that allows multiple clients to jointly learn a machine learning model without sharing their private data. The concern about privacy leakage, albeit demonstrated under specific conditions … (a minimal federated-averaging sketch follows this entry's link)
External link:
http://arxiv.org/abs/2406.01085
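The entry above describes the standard federated learning setup: clients jointly train a model while only exchanging model updates, never raw data. As a point of reference (and not this paper's contribution), here is a minimal FedAvg-style sketch in PyTorch; the toy model, client data, and hyperparameters are all illustrative assumptions.

```python
# Minimal FedAvg-style sketch of federated learning: clients train locally
# on private data; the server only averages their weights. Generic
# illustration only; NOT the protocol analyzed in arXiv:2406.01085.
import copy
import torch
import torch.nn as nn

def local_update(global_model, x, y, lr=0.1, epochs=1):
    """One client's local training pass on its private data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model.state_dict()  # only weights leave the client

def fedavg(client_states):
    """Server step: parameter-wise average of client weights."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in client_states]).mean(dim=0)
    return avg

# Toy run: two clients with disjoint private datasets.
global_model = nn.Linear(4, 1)
clients = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(2)]
for _ in range(3):  # communication rounds
    states = [local_update(global_model, x, y) for x, y in clients]
    global_model.load_state_dict(fedavg(states))
```

The privacy concern the abstract raises is precisely that these shared weight updates, despite never containing raw data, can still leak information about it under certain attacks.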
While AI-generated content has garnered significant attention, achieving photo-realistic video synthesis remains a formidable challenge. Despite the promising advances in diffusion models for video generation quality, the complex model architecture and …
External link:
http://arxiv.org/abs/2406.00195
Individuals and businesses have benefited significantly from Large Language Models (LLMs), including PaLM, Gemini, and ChatGPT, in various ways. For example, LLMs enhance productivity, reduce costs, and enable us to focus on more valuable tasks. Furthermore, …
External link:
http://arxiv.org/abs/2405.20681