Showing 1 - 8 of 8
for search: '"Fan, Run-Ze"'
Author:
Huang, Zhen, Wang, Zengzhi, Xia, Shijie, Li, Xuefeng, Zou, Haoyang, Xu, Ruijie, Fan, Run-Ze, Ye, Lyumanshan, Chern, Ethan, Ye, Yixin, Zhang, Yikai, Yang, Yuqing, Wu, Ting, Wang, Binjie, Sun, Shichao, Xiao, Yang, Li, Yiyuan, Zhou, Fan, Chern, Steffi, Qin, Yiwei, Ma, Yan, Su, Jiadi, Liu, Yixiu, Zheng, Yuxiang, Zhang, Shaoting, Lin, Dahua, Qiao, Yu, Liu, Pengfei
The evolution of Artificial Intelligence (AI) has been significantly accelerated by advancements in Large Language Models (LLMs) and Large Multimodal Models (LMMs), gradually showcasing potential cognitive reasoning abilities in problem-solving and …
External link:
http://arxiv.org/abs/2406.12753
Amid the expanding use of pre-training data, the phenomenon of benchmark dataset leakage has become increasingly prominent, exacerbated by opaque training processes and the often undisclosed inclusion of supervised data in contemporary Large Language …
External link:
http://arxiv.org/abs/2404.18824
Author:
Fan, Run-Ze, Li, Xuefeng, Zou, Haoyang, Li, Junlong, He, Shwai, Chern, Ethan, Hu, Jiewen, Liu, Pengfei
The quality of finetuning data is crucial for aligning large language models (LLMs) with human values. Current methods to improve data quality are either labor-intensive or prone to factual errors caused by LLM hallucinations. This paper explores …
External link:
http://arxiv.org/abs/2402.12219
Automatic mainstream hashtag recommendation aims to accurately provide users with concise and popular topical hashtags before publication. Generally, mainstream hashtag recommendation faces challenges in the comprehensive difficulty of newly posted …
External link:
http://arxiv.org/abs/2312.10466
Scaling the size of language models usually leads to remarkable advancements in NLP tasks, but it often comes at the price of growing computational cost. Although a sparse Mixture of Experts (MoE) can reduce the cost by activating a small subset of …
External link:
http://arxiv.org/abs/2310.09832
The rapid development of Large Language Models (LLMs) has substantially expanded the range of tasks they can address. In the field of Natural Language Processing (NLP), researchers have shifted their focus from conventional NLP tasks (e.g., sequence …
External link:
http://arxiv.org/abs/2310.05470
Adapter tuning, which updates only a few parameters, has become a mainstream method for fine-tuning pretrained language models to downstream tasks. However, it often yields subpar results in few-shot learning. AdapterFusion, which assembles …
External link:
http://arxiv.org/abs/2308.15982
Author:
Zhou ZH; Department of Urology, The Second Hospital Affiliated to Chongqing Medical University, Chongqing 400010, China., Liang PH; Department of Urology, The Second Hospital Affiliated to Chongqing Medical University, Chongqing 400010, China., Chen YL; Department of Urology, The Second Hospital Affiliated to Chongqing Medical University, Chongqing 400010, China., Fan RZ; Department of Urology, The Second Hospital Affiliated to Chongqing Medical University, Chongqing 400010, China., Hu J; Department of Urology, The Second Hospital Affiliated to Chongqing Medical University, Chongqing 400010, China.
Published in:
Zhonghua nan ke xue = National journal of andrology [Zhonghua Nan Ke Xue] 2022 Jan; Vol. 28 (1), pp. 26-31.