Showing 1 - 10 of 156 for search: '"Wang, Chaofei"'
Author:
Guo, Jiayi, Xu, Xingqian, Pu, Yifan, Ni, Zanlin, Wang, Chaofei, Vasu, Manushree, Song, Shiji, Huang, Gao, Shi, Humphrey
Recently, diffusion models have made remarkable progress in text-to-image (T2I) generation, synthesizing images with high fidelity and diverse content. Despite this advancement, latent space smoothness within diffusion models remains largely unexplored…
External link:
http://arxiv.org/abs/2312.04410
Author:
Wang, Shenzhi, Liu, Chang, Zheng, Zilong, Qi, Siyuan, Chen, Shuo, Yang, Qisen, Zhao, Andrew, Wang, Chaofei, Song, Shiji, Huang, Gao
Recent breakthroughs in large language models (LLMs) have brought remarkable success in the field of LLM-as-Agent. Nevertheless, a prevalent assumption is that the information processed by LLMs is consistently honest, neglecting the pervasive deception…
External link:
http://arxiv.org/abs/2310.01320
Dynamic computation has emerged as a promising avenue to enhance the inference efficiency of deep networks. It allows selective activation of computational units, leading to a reduction in unnecessary computations for each input sample. However, the…
External link:
http://arxiv.org/abs/2308.15949
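The snippet only names the mechanism, so a minimal sketch may help: in the generic dynamic-computation setup, a lightweight learned gate decides per sample whether a block executes at all. Everything below is an illustrative assumption, not the architecture from arXiv:2308.15949.

```python
import torch
import torch.nn as nn

class GatedBlock(nn.Module):
    """Residual block that a per-sample gate can switch off at inference.

    Illustrative sketch of dynamic computation, not the design from the
    paper above; module names and the threshold are assumptions.
    """
    def __init__(self, dim: int, threshold: float = 0.5):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        # Lightweight gate predicting whether this block is worth running.
        self.gate = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())
        self.threshold = threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p = self.gate(x)  # (batch, 1): probability of executing the block
        if self.training:
            # Soft gating keeps the decision differentiable during training.
            return x + p * self.body(x)
        # Hard gating at inference: easy samples skip the block entirely.
        keep = p.squeeze(1) > self.threshold
        out = x.clone()
        if keep.any():
            out[keep] = x[keep] + self.body(x[keep])
        return out

block = GatedBlock(64).eval()
with torch.no_grad():
    y = block(torch.randn(8, 64))  # gated samples bypass the block's compute
```

At inference, samples whose gate score falls below the threshold bypass the block entirely, which is where the per-sample savings come from.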
Over the past decade, deep learning models have exhibited considerable advancements, reaching or even exceeding human-level performance in a range of visual perception tasks. This remarkable progress has sparked interest in applying deep networks to…
External link:
http://arxiv.org/abs/2308.13998
Author:
Guo, Jiayi, Wang, Chaofei, Wu, You, Zhang, Eric, Wang, Kai, Xu, Xingqian, Song, Shiji, Shi, Humphrey, Huang, Gao
Recently, CLIP-guided image synthesis has shown appealing performance in adapting a pre-trained source-domain generator to an unseen target domain. It does not require any target-domain samples, only the textual domain labels. The training is highly…
External link:
http://arxiv.org/abs/2304.03119
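As a rough illustration of how textual domain labels alone can steer a generator, here is a hedged sketch of a directional CLIP loss, a known technique from StyleGAN-NADA-style adaptation and not necessarily the method of arXiv:2304.03119; the OpenAI `clip` package is assumed installed.

```python
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP, assumed: pip install git+https://github.com/openai/CLIP.git

def clip_direction_loss(model, src_imgs, adapted_imgs, src_label, tgt_label, device):
    """Directional CLIP loss: the change from source to adapted images
    should align with the text direction between the two domain labels.

    Generic sketch of CLIP-guided generator adaptation; all names here
    are illustrative.
    """
    tokens = clip.tokenize([src_label, tgt_label]).to(device)
    with torch.no_grad():
        t = model.encode_text(tokens).float()
    text_dir = F.normalize(t[1] - t[0], dim=-1)  # (d,)
    img_dir = F.normalize(
        model.encode_image(adapted_imgs).float()
        - model.encode_image(src_imgs).float(),
        dim=-1,
    )  # (batch, d)
    # Cosine distance between each image's edit direction and the text direction.
    return (1 - img_dir @ text_dir).mean()

# Usage sketch: images must already be CLIP-preprocessed (224x224, normalized).
# model, preprocess = clip.load("ViT-B/32", device="cpu")
# loss = clip_direction_loss(model, src_batch, gen_batch, "photo", "sketch", "cpu")
```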
Knowledge distillation is an effective approach to learn compact models (students) under the supervision of large and strong models (teachers). As there empirically exists a strong correlation between the performance of teacher and student models, it…
External link:
http://arxiv.org/abs/2210.06458
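For context, the classic distillation objective (Hinton et al., 2015) that this line of work builds on combines cross-entropy on hard labels with a temperature-softened KL term; a minimal generic sketch, not specific to arXiv:2210.06458:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T: float = 4.0, alpha: float = 0.9):
    """Classic distillation loss: CE on hard labels + KL to softened teacher.

    Generic Hinton-style KD sketch; T and alpha are illustrative defaults.
    """
    # Soften both distributions with temperature T.
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    # The T^2 factor rescales gradients to match the hard-label term.
    soft = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student_logits = torch.randn(16, 10, requires_grad=True)
teacher_logits = torch.randn(16, 10)
labels = torch.randint(0, 10, (16,))
kd_loss(student_logits, teacher_logits, labels).backward()
```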
Author:
Han, Yizeng, Pu, Yifan, Lai, Zihang, Wang, Chaofei, Song, Shiji, Cao, Junfen, Huang, Wenhui, Deng, Chao, Huang, Gao
Early exiting is an effective paradigm for improving the inference efficiency of deep networks. By constructing classifiers with varying resource demands (the exits), such networks allow easy samples to be output at early exits, removing the need for…
External link:
http://arxiv.org/abs/2209.08310
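The exit mechanism described in the snippet can be sketched in a few lines: intermediate classifiers emit a prediction as soon as one is confident enough. The threshold and architecture below are illustrative assumptions, not values from arXiv:2209.08310.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Backbone with intermediate classifiers ("exits").

    Hedged sketch of the generic early-exiting paradigm; stages are toy
    linear blocks so the example stays self-contained.
    """
    def __init__(self, dim: int = 64, n_classes: int = 10, n_stages: int = 3):
        super().__init__()
        self.stages = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(n_stages)]
        )
        self.exits = nn.ModuleList([nn.Linear(dim, n_classes) for _ in range(n_stages)])

    @torch.no_grad()
    def infer(self, x: torch.Tensor, threshold: float = 0.9) -> torch.Tensor:
        """Single-sample inference: return the first sufficiently confident exit."""
        for stage, exit_head in zip(self.stages, self.exits):
            x = stage(x)
            probs = exit_head(x).softmax(dim=-1)
            conf, pred = probs.max(dim=-1)
            if conf.item() >= threshold:  # easy sample: stop computing here
                return pred
        return pred  # hardest samples fall through to the final exit

net = EarlyExitNet().eval()
print(net.infer(torch.randn(1, 64)))
```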
Training a generative adversarial network (GAN) with limited data has been a challenging task. A feasible solution is to start with a GAN well-trained on a large-scale source domain and adapt it to the target domain with a few samples, termed few-shot…
External link:
http://arxiv.org/abs/2203.04121
Published in:
In Pattern Recognition, November 2024, Vol. 155
Traditional knowledge distillation transfers the "dark knowledge" of a pre-trained teacher network to a student network, but ignores the knowledge in the teacher's training process, which we call the teacher's experience. However, in realistic education…
External link:
http://arxiv.org/abs/2202.12488
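One plausible reading of "teacher's experience" is supervision drawn from the teacher's training trajectory rather than only its final weights; the sketch below distills against softened predictions averaged over several saved teacher checkpoints. This is a hypothetical illustration, not necessarily the method of arXiv:2202.12488.

```python
import torch
import torch.nn.functional as F

def experience_distill_loss(student_logits, checkpoint_logits, T: float = 4.0):
    """Distill from the predictions of several teacher *checkpoints*.

    Hypothetical sketch of using the teacher's training trajectory as
    supervision; `checkpoint_logits` holds one logits tensor per saved
    checkpoint of the teacher.
    """
    # Average the softened distributions over the teacher's trajectory.
    target = torch.stack(
        [F.softmax(logits / T, dim=1) for logits in checkpoint_logits]
    ).mean(dim=0)
    log_p = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p, target, reduction="batchmean") * T * T

student_logits = torch.randn(8, 10, requires_grad=True)
trajectory = [torch.randn(8, 10) for _ in range(3)]  # e.g. checkpoints from three epochs
experience_distill_loss(student_logits, trajectory).backward()
```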