Showing 1 - 10 of 456 for search: '"Fan, Run"'
Author:
Sainz, Oscar, García-Ferrero, Iker, Jacovi, Alon, Campos, Jon Ander, Elazar, Yanai, Agirre, Eneko, Goldberg, Yoav, Chen, Wei-Lin, Chim, Jenny, Choshen, Leshem, D'Amico-Wong, Luca, Dell, Melissa, Fan, Run-Ze, Golchin, Shahriar, Li, Yucheng, Liu, Pengfei, Pahwa, Bhavish, Prabhu, Ameya, Sharma, Suryansh, Silcock, Emily, Solonko, Kateryna, Stap, David, Surdeanu, Mihai, Tseng, Yu-Min, Udandarao, Vishaal, Wang, Zengzhi, Xu, Ruijie, Yang, Jinglin
The 1st Workshop on Data Contamination (CONDA 2024) focuses on all relevant aspects of data contamination in natural language processing, where data contamination is understood as situations where evaluation data is included in pre-training corpora…
External link:
http://arxiv.org/abs/2407.21530
Author:
Huang, Zhen, Wang, Zengzhi, Xia, Shijie, Li, Xuefeng, Zou, Haoyang, Xu, Ruijie, Fan, Run-Ze, Ye, Lyumanshan, Chern, Ethan, Ye, Yixin, Zhang, Yikai, Yang, Yuqing, Wu, Ting, Wang, Binjie, Sun, Shichao, Xiao, Yang, Li, Yiyuan, Zhou, Fan, Chern, Steffi, Qin, Yiwei, Ma, Yan, Su, Jiadi, Liu, Yixiu, Zheng, Yuxiang, Zhang, Shaoting, Lin, Dahua, Qiao, Yu, Liu, Pengfei
The evolution of Artificial Intelligence (AI) has been significantly accelerated by advancements in Large Language Models (LLMs) and Large Multimodal Models (LMMs), gradually showcasing potential cognitive reasoning abilities in problem-solving and…
External link:
http://arxiv.org/abs/2406.12753
Amid the expanding use of pre-training data, the phenomenon of benchmark dataset leakage has become increasingly prominent, exacerbated by opaque training processes and the often undisclosed inclusion of supervised data in contemporary Large Language Models…
External link:
http://arxiv.org/abs/2404.18824
Author:
Fan, Run-Ze, Li, Xuefeng, Zou, Haoyang, Li, Junlong, He, Shwai, Chern, Ethan, Hu, Jiewen, Liu, Pengfei
The quality of finetuning data is crucial for aligning large language models (LLMs) with human values. Current methods to improve data quality are either labor-intensive or prone to factual errors caused by LLM hallucinations. This paper explores…
External link:
http://arxiv.org/abs/2402.12219
Automatic mainstream hashtag recommendation aims to accurately provide users with concise and popular topical hashtags before publication. Generally, mainstream hashtag recommendation faces challenges in the comprehensive difficulty of newly posted…
External link:
http://arxiv.org/abs/2312.10466
Scaling the size of language models usually leads to remarkable advancements in NLP tasks. But it often comes at the price of growing computational cost. Although a sparse Mixture of Experts (MoE) can reduce the cost by activating a small subset of…
External link:
http://arxiv.org/abs/2310.09832
The rapid development of Large Language Models (LLMs) has substantially expanded the range of tasks they can address. In the field of Natural Language Processing (NLP), researchers have shifted their focus from conventional NLP tasks (e.g., sequence…
External link:
http://arxiv.org/abs/2310.05470
Adapter tuning, which updates only a few parameters, has become a mainstream method for fine-tuning pretrained language models on downstream tasks. However, it often yields subpar results in few-shot learning. AdapterFusion, which assembles pretrained…
External link:
http://arxiv.org/abs/2308.15982
Author:
Chen, Jia-Qian, Li, Shen, Fan, Run-Zhu, Sun, Zhang-Hua, Zhu, Xin-Ying, Yin, Ai-Ping, Tang, Gui-Hua, Yin, Sheng
Published in:
In Fitoterapia, September 2024, 177
Author:
Tan, Qi, Fan, Run-Zhu, Yang, Wencong, Zou, Ge, Chen, Tao, Wu, Jianying, Wang, Bo, Yin, Sheng, She, Zhigang
Published in:
In Chinese Chemical Letters, September 2024, 35(9)