Showing 1 - 10 of 2,919 for search: '"Geng, Xin"'
Federated learning is an efficient framework designed to facilitate collaborative model training across multiple distributed devices while preserving user data privacy. A significant challenge of federated learning is data-level heterogeneity, i.e., …
External link:
http://arxiv.org/abs/2408.07966
Pre-trained models have become the preferred backbone due to the expansion of model parameters, with techniques like Parameter-Efficient Fine-Tuning (PEFTs) typically fixing the parameters of these models. However, pre-trained models may not always b…
External link:
http://arxiv.org/abs/2408.07337
Large Language Models have demonstrated impressive capabilities in various language tasks but may produce content that misaligns with human expectations, raising ethical and legal concerns. Therefore, it is important to explore the limitations and im…
External link:
http://arxiv.org/abs/2408.02599
Author:
Xu, Ning; Zhang, Zhaoyang; Qi, Lei; Wang, Wensuo; Zhang, Chao; Ren, Zihao; Zhang, Huaiyuan; Cheng, Xin; Zhang, Yanqi; Liu, Zhichao; Wei, Qingwen; Wu, Shiyang; Yang, Lanlan; Lu, Qianfeng; Ma, Yiqun; Zhao, Mengyao; Liu, Junbo; Song, Yufan; Geng, Xin; Yang, Jun
The field of integrated circuit (IC) design is highly specialized, presenting significant barriers to entry and research and development challenges. Although large language models (LLMs) have achieved remarkable success in various domains, existing L…
External link:
http://arxiv.org/abs/2408.00804
The expansion of model parameters underscores the significance of pre-trained models; however, the constraints encountered during model deployment necessitate models of variable sizes. Consequently, the traditional pre-training and fine-tuning paradi…
External link:
http://arxiv.org/abs/2406.17503
One of the approaches to quantum gravity is to formulate it in terms of De Rham algebra, choose a triangulation of space-time, and replace differential forms by cochains (that form a finite dimensional vector space). The key issue of general relativi…
External link:
http://arxiv.org/abs/2406.17922
As language models continue to scale, Large Language Models (LLMs) have exhibited emerging capabilities in In-Context Learning (ICL), enabling them to solve language tasks by prefixing a few in-context demonstrations (ICDs) as context. Inspired by th…
External link:
http://arxiv.org/abs/2406.13185
Author:
Zhang, Miaosen; Wei, Yixuan; Xing, Zhen; Ma, Yifei; Wu, Zuxuan; Li, Ji; Zhang, Zheng; Dai, Qi; Luo, Chong; Geng, Xin; Guo, Baining
Modern vision models are trained on very large noisy datasets. While these models acquire strong capabilities, they may not follow the user's intent to output the desired results in certain aspects, e.g., visual aesthetic, preferred style, and respon…
External link:
http://arxiv.org/abs/2406.09397
Dance plays an important role as an artistic form and expression in human culture, yet the creation of dance remains a challenging task. Most dance generation methods primarily rely solely on music, seldom taking into consideration intrinsic attribut…
External link:
http://arxiv.org/abs/2406.07871
In this paper, we introduce the Dependent Noise-based Inaccurate Label Distribution Learning (DN-ILDL) framework to tackle the challenges posed by noise in label distribution learning, which arise from dependencies on instances and labels. We start b…
External link:
http://arxiv.org/abs/2405.16474