Showing 1 - 10 of 401 results for search: '"Yu Shuyang"'
Published in:
Haiyang Kaifa yu guanli, Vol 41, Iss 6, Pp 108-119 (2024)
The scientific and sustainable utilization of marine space resources is of great significance for the high-quality development of China’s marine economy and the Ocean Power Strategy. This paper constructs the index system of marine space resources
External link:
https://doaj.org/article/e11ac881b1774632ac69478a595bf18c
Published in:
Zhongliu Fangzhi Yanjiu, Vol 50, Iss 12, Pp 1232-1236 (2023)
The Chinese Society of Clinical Oncology (CSCO) issued the new version of the guidelines on diagnosis and treatment of NSCLC in April 2023. The new version updated the diagnostic and therapeutic strategy for rare oncogenic mutations, including ROS1 fus
External link:
https://doaj.org/article/5791ceb4dab8428f99c2fd8d98dd30b6
Large language models (LLMs) can learn vast amounts of knowledge from diverse domains during pre-training. However, long-tail knowledge from specialized domains is often scarce and underrepresented, rarely appearing in the models' memorization. Prior
External link:
http://arxiv.org/abs/2410.23605
Author:
Du, Guodong, Lee, Junlin, Li, Jing, Jiang, Runhua, Guo, Yifei, Yu, Shuyang, Liu, Hanting, Goh, Sim Kuan, Tang, Ho-Kin, He, Daojing, Zhang, Min
While fine-tuning pretrained models has become common practice, these models often underperform outside their specific domains. Recently developed model merging techniques enable the direct integration of multiple models, each fine-tuned for distinct
External link:
http://arxiv.org/abs/2410.02396
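The snippet above describes model merging, i.e. combining several independently fine-tuned checkpoints without retraining. As a point of reference, here is a minimal sketch of one widely used merging recipe, task arithmetic over fine-tuned weights; it is not necessarily the method proposed in this paper, and the checkpoint names in the usage comment are hypothetical.

```python
import torch

def merge_by_task_arithmetic(base_state, finetuned_states, scale=0.5):
    """Merge several fine-tuned checkpoints into a shared base model.

    Each fine-tuned model contributes a "task vector" (its weights minus the
    base weights); the scaled sum of task vectors is added back to the base.
    Shown only as a generic illustration of model merging.
    """
    merged = {}
    for name, base_param in base_state.items():
        task_vectors = [ft[name] - base_param for ft in finetuned_states]
        merged[name] = base_param + scale * sum(task_vectors)
    return merged

# Usage sketch (paths and scale are illustrative):
# base = torch.load("base_model.pt")
# experts = [torch.load(p) for p in ("math_ft.pt", "code_ft.pt")]
# model.load_state_dict(merge_by_task_arithmetic(base, experts, scale=0.3))
```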
Author:
Du, Guodong, Li, Jing, Liu, Hanting, Jiang, Runhua, Yu, Shuyang, Guo, Yifei, Goh, Sim Kuan, Tang, Ho-Kin
Fine-tuning pre-trained language models, particularly large language models, demands extensive computing resources and can result in varying performance outcomes across different domains and datasets. This paper examines the approach of integrating m
External link:
http://arxiv.org/abs/2406.12208
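For the model-integration setting mentioned in this snippet, an even simpler baseline than task arithmetic is uniform parameter averaging of fine-tuned checkpoints that share an architecture ("model soup" style). The sketch below illustrates that baseline only; it is not claimed to be the approach of the indexed paper.

```python
import torch

def average_checkpoints(state_dicts):
    """Uniformly average the parameters of several fine-tuned models with
    identical architectures (a common model-integration baseline)."""
    avg = {}
    for name in state_dicts[0]:
        stacked = torch.stack([sd[name].float() for sd in state_dicts])
        avg[name] = stacked.mean(dim=0)
    return avg
```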
Spiking neural networks (SNNs) have gained prominence for their potential in neuromorphic computing and energy-efficient artificial intelligence, yet optimizing them remains a formidable challenge for gradient-based methods due to their discrete, spi
External link:
http://arxiv.org/abs/2406.02349
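The obstacle named in this snippet is that spikes are discrete, so their derivative is zero almost everywhere. A standard workaround in the SNN literature is the surrogate gradient: keep the hard threshold in the forward pass and substitute a smooth derivative in the backward pass. The sketch below shows that generic trick (with a fast-sigmoid surrogate), not the optimization method of this particular paper.

```python
import torch

class SpikeSurrogate(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate gradient in the
    backward pass -- a common way to make SNNs trainable by backprop."""

    @staticmethod
    def forward(ctx, membrane_potential, threshold):
        ctx.save_for_backward(membrane_potential)
        ctx.threshold = threshold
        return (membrane_potential >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate: d(spike)/dv ~ 1 / (1 + |v - theta|)^2
        surrogate = 1.0 / (1.0 + torch.abs(v - ctx.threshold)) ** 2
        return grad_output * surrogate, None

# Usage sketch: spikes = SpikeSurrogate.apply(membrane_potential, 1.0)
```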
Hyperparameter tuning, particularly the selection of an appropriate learning rate in adaptive gradient training methods, remains a challenge. To tackle this challenge, in this paper, we propose a novel parameter-free optimizer, AdamG (Adam w
External link:
http://arxiv.org/abs/2405.04376
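To make concrete which hyperparameter a parameter-free optimizer removes, here is a plain Adam update written out by hand; the learning rate `lr` below is exactly the quantity that must otherwise be tuned. AdamG's golden-step-size rule itself is not reproduced here, and the function name is illustrative.

```python
import torch

def adam_step(param, grad, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One textbook Adam update. The step size `lr` is the hand-tuned
    hyperparameter that parameter-free methods such as AdamG aim to remove."""
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    return param - lr * m_hat / (v_hat.sqrt() + eps)

# Usage sketch:
# state = {"t": 0, "m": torch.zeros_like(p), "v": torch.zeros_like(p)}
# p = adam_step(p, g, state, lr=1e-3)   # lr must still be chosen by hand
```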
Federated learning (FL) emerges as an effective collaborative learning framework to coordinate data and computation resources from massive numbers of distributed clients during training. Such collaboration results in non-trivial intellectual property (IP) repre
External link:
http://arxiv.org/abs/2312.03205
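For context on the federated setting this snippet refers to, below is a minimal sketch of FedAvg-style server aggregation, where client updates are averaged in proportion to local data size. The IP-protection mechanism studied in the paper is not shown; all names are illustrative.

```python
import torch

def fedavg_aggregate(client_states, client_sizes):
    """Server-side FedAvg: average client model parameters weighted by the
    number of local examples each client trained on."""
    total = float(sum(client_sizes))
    global_state = {}
    for name in client_states[0]:
        weighted = [sd[name].float() * (n / total)
                    for sd, n in zip(client_states, client_sizes)]
        global_state[name] = torch.stack(weighted).sum(dim=0)
    return global_state
```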
Training a high-performance deep neural network requires large amounts of data and computational resources. Protecting the intellectual property (IP) and commercial ownership of a deep model is challenging yet increasingly crucial. A major stream of
External link:
http://arxiv.org/abs/2309.01786
Data-free knowledge distillation (KD) helps transfer knowledge from a pre-trained model (known as the teacher model) to a smaller model (known as the student model) without access to the original training data used for training the teacher model. How
External link:
http://arxiv.org/abs/2306.02368
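The snippet above defines data-free knowledge distillation: the student mimics the teacher on synthesized inputs because the original training data is unavailable. Below is a minimal sketch of that setup using the standard temperature-softened KL distillation loss; it illustrates the general technique rather than this paper's specific method, and the generator, teacher, and student are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Standard KD objective: KL divergence between temperature-softened
    teacher and student output distributions."""
    t = temperature
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)

def data_free_kd_step(generator, teacher, student, optimizer,
                      batch_size=64, z_dim=100):
    """One data-free KD step: synthesize inputs with a generator instead of
    the unavailable original training data, then distill on them."""
    z = torch.randn(batch_size, z_dim)
    synthetic = generator(z)              # stand-in for real training inputs
    with torch.no_grad():
        teacher_logits = teacher(synthetic)
    loss = distillation_loss(student(synthetic), teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```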