Showing 1 - 10 of 9,495 for search: '"HUANG, Fei"'
Authors:
Qwen, Yang, An, Yang, Baosong, Zhang, Beichen, Hui, Binyuan, Zheng, Bo, Yu, Bowen, Li, Chengyuan, Liu, Dayiheng, Huang, Fei, Wei, Haoran, Lin, Huan, Yang, Jian, Tu, Jianhong, Zhang, Jianwei, Yang, Jianxin, Yang, Jiaxi, Zhou, Jingren, Lin, Junyang, Dang, Kai, Lu, Keming, Bao, Keqin, Yang, Kexin, Yu, Le, Li, Mei, Xue, Mingfeng, Zhang, Pei, Zhu, Qin, Men, Rui, Lin, Runji, Li, Tianhao, Xia, Tingyu, Ren, Xingzhang, Ren, Xuancheng, Fan, Yang, Su, Yang, Zhang, Yichang, Wan, Yu, Liu, Yuqiong, Cui, Zeyu, Zhang, Zhenru, Qiu, Zihan
In this report, we introduce Qwen2.5, a comprehensive series of large language models (LLMs) designed to meet diverse needs. Compared to previous iterations, Qwen2.5 has been significantly improved during both the pre-training and post-training stages…
External link:
http://arxiv.org/abs/2412.15115
Authors:
Qiao, Ziheng, Zhou, Houquan, Liu, Yumeng, Li, Zhenghua, Zhang, Min, Zhang, Bo, Li, Chen, Zhang, Ji, Huang, Fei
One key characteristic of the Chinese spelling check (CSC) task is that incorrect characters are usually similar to the correct ones in either phonetics or glyph. To accommodate this, previous works usually leverage confusion sets, which suffer from…
External link:
http://arxiv.org/abs/2412.12863
In financial trading, factor models are widely used to price assets and capture excess returns from mispricing. Recently, we have witnessed the rise of variational autoencoder-based latent factor models, which learn latent factors self-adaptively…
External link:
http://arxiv.org/abs/2412.09468
The study of an extra charged gauge boson beyond the Standard Model has always been of great interest. Future muon colliders will have a significant advantage in discovering exotic particles. In this paper, by studying the $\mu^+ \mu^- \to W^{\prime +}$…
External link:
http://arxiv.org/abs/2412.05787
Authors:
Wang, Yunkun, Zhang, Yue, Qin, Zhen, Zhi, Chen, Li, Binhua, Huang, Fei, Li, Yongbin, Deng, Shuiguang
Through training on publicly available source code libraries, large language models (LLMs) can invoke multiple encapsulated APIs to solve complex programming problems. However, existing models inherently cannot generalize to use APIs that are unseen…
External link:
http://arxiv.org/abs/2412.05366
Authors:
Wang, Minzheng, Zhang, Xinghua, Chen, Kun, Xu, Nan, Yu, Haiyang, Huang, Fei, Mao, Wenji, Li, Yongbin
Large language models (LLMs) have made dialogue one of the central modes of human-machine interaction, leading to vast amounts of conversation logs and an increasing demand for dialogue generation. The dialogue's life-cycle spans from…
External link:
http://arxiv.org/abs/2412.04905
Fairness-aware statistical learning is critical for data-driven decision-making to mitigate discrimination against protected attributes such as gender, race, and ethnicity. This is especially important for high-stakes decision-making, such as insurance…
External link:
http://arxiv.org/abs/2412.04663
The spin correlation of final-state hadrons provides a novel platform to explore the hadronization mechanism of polarized partons in unpolarized high-energy collisions. In this work, we investigate the helicity correlation of two hadrons originating…
External link:
http://arxiv.org/abs/2412.00394
Reproducing buggy code is the first and crucially important step in issue resolving, as it aids in identifying the underlying problems and validating that generated patches resolve the problem. While numerous approaches have been proposed for this task…
External link:
http://arxiv.org/abs/2411.13941
Authors:
Jia, Hongrui, Jiang, Chaoya, Xu, Haiyang, Ye, Wei, Dong, Mengfan, Yan, Ming, Zhang, Ji, Huang, Fei, Zhang, Shikun
As language models continue to scale, Large Language Models (LLMs) have exhibited emerging capabilities in In-Context Learning (ICL), enabling them to solve language tasks by prefixing a few in-context demonstrations (ICDs) as context. Inspired by…
External link:
http://arxiv.org/abs/2411.11909