Showing 1 - 10 of 491 for search: '"Fan Run"'
Author:
DU Hong-yan, ZHANG Zi-dong, TIAN Rui, ZHANG Wen-jin, ZHANG Jian, LIU Xin-yu, SUN Kai, FAN Run-hua
Published in:
Cailiao gongcheng, Vol 48, Iss 6, Pp 23-33 (2020)
Artificial electromagnetic media can effectively expand the absorption bandwidth of absorbers, which has attracted wide attention from researchers at home and abroad. The main typical structures and applications of broadband absorbers based on artificial…
External link:
https://doaj.org/article/4b3e2c4becda4168a4a2b642e10ba1e9
Author:
Sainz, Oscar, García-Ferrero, Iker, Jacovi, Alon, Campos, Jon Ander, Elazar, Yanai, Agirre, Eneko, Goldberg, Yoav, Chen, Wei-Lin, Chim, Jenny, Choshen, Leshem, D'Amico-Wong, Luca, Dell, Melissa, Fan, Run-Ze, Golchin, Shahriar, Li, Yucheng, Liu, Pengfei, Pahwa, Bhavish, Prabhu, Ameya, Sharma, Suryansh, Silcock, Emily, Solonko, Kateryna, Stap, David, Surdeanu, Mihai, Tseng, Yu-Min, Udandarao, Vishaal, Wang, Zengzhi, Xu, Ruijie, Yang, Jinglin
The 1st Workshop on Data Contamination (CONDA 2024) focuses on all relevant aspects of data contamination in natural language processing, where data contamination is understood as situations where evaluation data is included in pre-training corpora…
External link:
http://arxiv.org/abs/2407.21530
Author:
Huang, Zhen, Wang, Zengzhi, Xia, Shijie, Li, Xuefeng, Zou, Haoyang, Xu, Ruijie, Fan, Run-Ze, Ye, Lyumanshan, Chern, Ethan, Ye, Yixin, Zhang, Yikai, Yang, Yuqing, Wu, Ting, Wang, Binjie, Sun, Shichao, Xiao, Yang, Li, Yiyuan, Zhou, Fan, Chern, Steffi, Qin, Yiwei, Ma, Yan, Su, Jiadi, Liu, Yixiu, Zheng, Yuxiang, Zhang, Shaoting, Lin, Dahua, Qiao, Yu, Liu, Pengfei
The evolution of Artificial Intelligence (AI) has been significantly accelerated by advancements in Large Language Models (LLMs) and Large Multimodal Models (LMMs), gradually showcasing potential cognitive reasoning abilities in problem-solving and…
External link:
http://arxiv.org/abs/2406.12753
Amid the expanding use of pre-training data, the phenomenon of benchmark dataset leakage has become increasingly prominent, exacerbated by opaque training processes and the often undisclosed inclusion of supervised data in contemporary Large Language Models…
External link:
http://arxiv.org/abs/2404.18824
Author:
Fan, Run-Ze, Li, Xuefeng, Zou, Haoyang, Li, Junlong, He, Shwai, Chern, Ethan, Hu, Jiewen, Liu, Pengfei
The quality of finetuning data is crucial for aligning large language models (LLMs) with human values. Current methods to improve data quality are either labor-intensive or prone to factual errors caused by LLM hallucinations. This paper explores…
External link:
http://arxiv.org/abs/2402.12219
Automatic mainstream hashtag recommendation aims to accurately provide users with concise and popular topical hashtags before publication. Generally, mainstream hashtag recommendation faces challenges in the comprehensive difficulty of newly posted…
External link:
http://arxiv.org/abs/2312.10466
Published in:
Zeitschrift für Kristallographie - New Crystal Structures, Vol 232, Iss 2, Pp 181-183 (2017)
C72H52N4O8Ni2, monoclinic, C2/c (no. 15), a = 43.017(19) Å, b = 17.091(8) Å, c = 11.301(5) Å, β = 103.053(8)°, V = 8094(6) Å3, Z = 6, Rgt(F) = 0.0575, wRref(F2) = 0.1263, T = 296(2) K.
External link:
https://doaj.org/article/24ffe0bc2a58481caea3d2dd46bf7876
Scaling the size of language models usually leads to remarkable advancements in NLP tasks, but it often comes at the price of growing computational cost. Although a sparse Mixture of Experts (MoE) can reduce the cost by activating a small subset of…
External link:
http://arxiv.org/abs/2310.09832
The rapid development of Large Language Models (LLMs) has substantially expanded the range of tasks they can address. In the field of Natural Language Processing (NLP), researchers have shifted their focus from conventional NLP tasks (e.g., sequence…
External link:
http://arxiv.org/abs/2310.05470
Adapter tuning, which updates only a few parameters, has become a mainstream method for fine-tuning pretrained language models to downstream tasks. However, it often yields subpar results in few-shot learning. AdapterFusion, which assembles pretrained…
External link:
http://arxiv.org/abs/2308.15982