Showing 1 - 10 of 833 for search: '"ZHOU Jiayu"'
Published in:
Hangkong gongcheng jinzhan, Vol 15, Iss 5, Pp 86-96 (2024)
The damaging effect on aircraft skin structure of the blast shock wave generated by the explosion of an air-to-air missile warhead is influenced by many factors, and the underlying mechanism is relatively complex, so that a large number of … A generic blast-waveform sketch follows this entry.
External link:
https://doaj.org/article/ae63642b9d174857af6e15bff886b0b4
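The entry above concerns blast shock-wave loading of aircraft skin. As generic background rather than the paper's model, here is a minimal sketch of the modified Friedlander waveform commonly used to describe the positive phase of a blast overpressure history; the peak overpressure, duration, and decay coefficient are illustrative assumptions, not values from the paper.

```python
import numpy as np

def friedlander_overpressure(t, p_s, t_d, b=1.0):
    """Modified Friedlander waveform for the positive phase of a blast wave.

    p(t) = p_s * (1 - t/t_d) * exp(-b * t / t_d)  for 0 <= t <= t_d

    p_s : peak overpressure (Pa), t_d : positive-phase duration (s),
    b   : dimensionless decay coefficient (b = 1 gives the classical form).
    """
    t = np.asarray(t, dtype=float)
    p = p_s * (1.0 - t / t_d) * np.exp(-b * t / t_d)
    return np.where((t >= 0) & (t <= t_d), p, 0.0)  # zero outside positive phase

# Illustrative values: 200 kPa peak overpressure, 2 ms positive phase
times = np.linspace(0.0, 0.002, 5)
print(friedlander_overpressure(times, p_s=200e3, t_d=0.002, b=1.2))
```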
Large language models (LLMs) can learn vast amounts of knowledge from diverse domains during pre-training. However, long-tail knowledge from specialized domains is often scarce and underrepresented, rarely appearing in the models' memorization. Prior …
External link:
http://arxiv.org/abs/2410.23605
Advancements in large language models (LLMs) have shown their effectiveness in multiple complicated natural language reasoning tasks. A key challenge remains in adapting these models efficiently to new or unfamiliar tasks. In-context learning (ICL) … A minimal prompt-construction sketch follows this entry.
External link:
http://arxiv.org/abs/2408.00144
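The entry above breaks off as it introduces in-context learning (ICL). As a generic illustration of the technique, not this paper's method, here is a minimal sketch of few-shot prompt assembly; the instruction string and arithmetic demonstrations are hypothetical.

```python
def build_icl_prompt(demonstrations, query, instruction="Answer the question."):
    """Assemble a few-shot in-context learning prompt.

    ICL adapts a frozen LLM to a new task purely through the prompt:
    a short instruction, a handful of input/output demonstrations, and
    the new query whose answer the model is asked to complete.
    """
    parts = [instruction, ""]
    for x, y in demonstrations:
        parts.append(f"Q: {x}\nA: {y}\n")
    parts.append(f"Q: {query}\nA:")  # the model continues from here
    return "\n".join(parts)

demos = [("2 + 2 = ?", "4"), ("7 - 3 = ?", "4")]  # toy demonstrations
print(build_icl_prompt(demos, "5 + 6 = ?"))
```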
Authors:
Wang, Jiankun, Ahn, Sumyeong, Dalal, Taykhoom, Zhang, Xiaodan, Pan, Weishen, Zhang, Qiannan, Chen, Bin, Dodge, Hiroko H., Wang, Fei, Zhou, Jiayu
Alzheimer's disease (AD) is the fifth-leading cause of death among Americans aged 65 and older. Screening and early detection of AD and related dementias (ADRD) are critical for timely intervention and for identifying clinical trial participants. The …
External link:
http://arxiv.org/abs/2405.16413
Independent and identically distributed (i.i.d.) data is essential to many data analysis and modeling techniques. In the medical domain, collecting data from multiple sites or institutions is a common strategy that guarantees sufficient clinical diversity …
External link:
http://arxiv.org/abs/2405.15081
Hyperparameter tuning, particularly the selection of an appropriate learning rate in adaptive gradient training methods, remains a challenge. To tackle this challenge, in this paper we propose a novel parameter-free optimizer, AdamG (Adam with …). The standard Adam update that such methods build on is sketched after this entry.
External link:
http://arxiv.org/abs/2405.04376
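The entry above truncates before describing AdamG's parameter-free rule, which is therefore not reproduced here. For context only, a minimal sketch of the standard Adam update, whose learning rate `lr` is exactly the hyperparameter a parameter-free optimizer aims to set automatically.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One standard Adam update (Kingma & Ba, 2015).

    m, v are exponential moving averages of the gradient and its square;
    bias correction rescales them early in training. AdamG's rule for
    choosing `lr` automatically is not shown here.
    """
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    m_hat = m / (1 - beta1**t)          # bias-corrected first moment
    v_hat = v / (1 - beta2**t)          # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(x) = x^2 starting from x = 5
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.05)
print(theta)  # close to 0
```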
Federated learning (FL) provides a promising collaborative framework for building a model from distributed clients, and this work investigates the carbon emissions of the FL process. Cloud and edge servers hosting FL clients may exhibit diverse carbon footprints … The generic FedAvg aggregation step is sketched after this entry.
External link:
http://arxiv.org/abs/2404.15503
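The entry above studies federated learning. As generic background rather than the paper's carbon-accounting method, a minimal sketch of the standard FedAvg aggregation step performed by an FL server; the client weights and data sizes below are toy values.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging (McMahan et al., 2017): the server combines
    client model parameters weighted by each client's local data size.
    """
    total = sum(client_sizes)
    stacked = np.stack(client_weights)                  # (num_clients, dim)
    coeffs = np.array(client_sizes, dtype=float) / total
    return coeffs @ stacked                             # weighted average

# Three hypothetical clients with different data volumes
weights = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([0.0, 1.0])]
sizes = [100, 50, 50]
print(fedavg(weights, sizes))  # -> [1.25, 1.25]
```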
Recent advances in unsupervised learning have shown that unsupervised pre-training, followed by fine-tuning, can improve model generalization. However, a rigorous understanding of how the representation function learned on an unlabeled dataset affects … A minimal pretrain-then-finetune sketch follows this entry.
External link:
http://arxiv.org/abs/2403.06871
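The truncated entry above concerns how unsupervised pre-training shapes downstream fine-tuning. A minimal end-to-end sketch of that generic pipeline, assuming synthetic data and a toy autoencoder; it illustrates the setting only and does not reproduce the paper's analysis.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 20)          # unlabeled data (synthetic)
y = (X[:, 0] > 0).long()          # labels used only during fine-tuning

# 1) Unsupervised pre-training: learn a representation with an autoencoder.
encoder = nn.Sequential(nn.Linear(20, 8), nn.ReLU())
decoder = nn.Linear(8, 20)
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(X)), X)
    loss.backward()
    opt.step()

# 2) Fine-tuning: train a small head on the frozen representation.
head = nn.Linear(8, 2)
opt = torch.optim.Adam(head.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    with torch.no_grad():
        z = encoder(X)            # encoder is kept frozen
    loss = nn.functional.cross_entropy(head(z), y)
    loss.backward()
    opt.step()
print("train acc:", (head(encoder(X)).argmax(1) == y).float().mean().item())
```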
Sub-population shift is a specific type of domain shift that highlights changes in data distribution within specific sub-groups or populations between training and testing. Sub-population shift accounts for a significant source of algorithmic bias and … A worst-group-accuracy sketch follows this entry.
External link:
http://arxiv.org/abs/2403.07888
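The entry above is cut off while discussing sub-population shift. A common way to surface the resulting algorithmic bias is worst-group accuracy; the sketch below assumes hypothetical predictions and group labels and is not taken from the paper.

```python
import numpy as np

def worst_group_accuracy(preds, labels, groups):
    """Worst-group accuracy, a standard metric under sub-population shift:
    accuracy is computed separately for each sub-group and the minimum is
    reported, so a model cannot hide poor performance on a rare
    sub-population behind a high average.
    """
    accs = {}
    for g in np.unique(groups):
        mask = groups == g
        accs[g] = float((preds[mask] == labels[mask]).mean())
    return min(accs.values()), accs

preds  = np.array([1, 1, 0, 1, 0, 0])
labels = np.array([1, 1, 0, 0, 0, 1])
groups = np.array([0, 0, 0, 1, 1, 1])   # two hypothetical sub-populations
print(worst_group_accuracy(preds, labels, groups))  # min over per-group accs
```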
Authors:
Pang, Yijiang, Zhou, Jiayu
Large foundation models, such as large language models, have performed exceptionally well in various application scenarios. Building or fully fine-tuning such large models is usually prohibitive, due either to the hardware budget or to lack of access to backpropagation …
External link:
http://arxiv.org/abs/2402.01621