Showing 1 - 9 of 9 for search: '"Bai, Yuelin"'
Author:
Li, Jiaming, Zhang, Lei, Li, Yunshui, Liu, Ziqiang, Bai, Yuelin, Luo, Run, Chen, Longze, Yang, Min
The instruction-following ability of large language models enables humans to interact with AI agents in a natural way. However, when required to generate responses of a specific length, large language models often struggle to meet users' needs due to…
External link:
http://arxiv.org/abs/2409.18943
Author:
Xie, Nan, Bai, Yuelin, Gao, Hengyuan, Fang, Feiteng, Zhao, Qixuan, Li, Zhijian, Xue, Ziqiang, Zhu, Liang, Ni, Shiwen, Yang, Min
Traditional legal retrieval systems designed to retrieve legal documents, statutes, precedents, and other legal information are unable to give satisfactory answers due to lack of semantic understanding of specific questions. Large Language Models (LLMs)…
External link:
http://arxiv.org/abs/2408.00357
Author:
Liu, Ziqiang, Fang, Feiteng, Feng, Xi, Du, Xinrun, Zhang, Chenhao, Wang, Zekun, Bai, Yuelin, Zhao, Qixuan, Fan, Liyang, Gan, Chengguang, Lin, Hongquan, Li, Jiaming, Ni, Yuansheng, Wu, Haihong, Narsupalli, Yaswanth, Zheng, Zhigang, Li, Chengming, Hu, Xiping, Xu, Ruifeng, Chen, Xiaojun, Yang, Min, Liu, Jiaheng, Liu, Ruibo, Huang, Wenhao, Zhang, Ge, Ni, Shiwen
The rapid advancements in the development of multimodal large language models (MLLMs) have consistently led to new breakthroughs on various benchmarks. In response, numerous challenging and comprehensive benchmarks have been proposed to more accurately…
External link:
http://arxiv.org/abs/2406.05862
Enhancing Noise Robustness of Retrieval-Augmented Language Models with Adaptive Adversarial Training
Published in:
ACL 2024, Main Conference
Large Language Models (LLMs) exhibit substantial capabilities yet encounter challenges, including hallucination, outdated knowledge, and untraceable reasoning processes. Retrieval-augmented generation (RAG) has emerged as a promising solution, integrating…
External link:
http://arxiv.org/abs/2405.20978
Author:
Zhang, Ge, Qu, Scott, Liu, Jiaheng, Zhang, Chenchen, Lin, Chenghua, Yu, Chou Leuang, Pan, Danny, Cheng, Esther, Liu, Jie, Lin, Qunshu, Yuan, Raven, Zheng, Tuney, Pang, Wei, Du, Xinrun, Liang, Yiming, Ma, Yinghao, Li, Yizhi, Ma, Ziyang, Lin, Bill, Benetos, Emmanouil, Yang, Huan, Zhou, Junting, Ma, Kaijing, Liu, Minghao, Niu, Morry, Wang, Noah, Que, Quehry, Liu, Ruibo, Liu, Sine, Guo, Shawn, Gao, Soren, Zhou, Wangchunshu, Zhang, Xinyue, Zhou, Yizhi, Wang, Yubo, Bai, Yuelin, Zhang, Yuhan, Zhang, Yuxiang, Wang, Zenith, Yang, Zhenzhu, Zhao, Zijian, Zhang, Jiajun, Ouyang, Wanli, Huang, Wenhao, Chen, Wenhu
Large Language Models (LLMs) have made great strides in recent years to achieve unprecedented performance across different tasks. However, due to commercial interest, the most competitive models like GPT, Gemini, and Claude have been gated behind proprietary…
External link:
http://arxiv.org/abs/2405.19327
Author:
Qu, Xingwei, Bai, Yuelin, Ma, Yinghao, Zhou, Ziya, Lo, Ka Man, Liu, Jiaheng, Yuan, Ruibin, Min, Lejun, Liu, Xueling, Zhang, Tianyu, Du, Xinrun, Guo, Shuyue, Liang, Yiming, Li, Yizhi, Wu, Shangda, Zhou, Junting, Zheng, Tianyu, Ma, Ziyang, Han, Fengze, Xue, Wei, Xia, Gus, Benetos, Emmanouil, Yue, Xiang, Lin, Chenghua, Tan, Xu, Huang, Stephen W., Fu, Jie, Zhang, Ge
In this paper, we explore the application of Large Language Models (LLMs) to the pre-training of music. While the prevalent use of MIDI in music modeling is well-established, our findings suggest that LLMs are inherently more compatible with ABC Notation…
External link:
http://arxiv.org/abs/2404.06393
Author:
Bai, Yuelin, Du, Xinrun, Liang, Yiming, Jin, Yonggang, Liu, Ziqiang, Zhou, Junting, Zheng, Tianyu, Zhang, Xincheng, Ma, Nuo, Wang, Zekun, Yuan, Ruibin, Wu, Haihong, Lin, Hongquan, Huang, Wenhao, Zhang, Jiajun, Chen, Wenhu, Lin, Chenghua, Fu, Jie, Yang, Min, Ni, Shiwen, Zhang, Ge
Recently, there have been significant advancements in large language models (LLMs), particularly focused on the English language. These advancements have enabled these LLMs to understand and execute complex instructions with unprecedented accuracy and…
External link:
http://arxiv.org/abs/2403.18058
Author:
Ni, Shiwen, Tan, Minghuan, Bai, Yuelin, Niu, Fuqiang, Yang, Min, Zhang, Bowen, Xu, Ruifeng, Chen, Xiaojun, Li, Chengming, Hu, Xiping, Li, Ye, Fan, Jianping
Published in:
LREC-COLING 2024
Large language models (LLMs) have demonstrated impressive performance in various natural language processing (NLP) tasks. However, there is limited understanding of how well LLMs perform in specific domains (e.g., the intellectual property (IP) domain)…
External link:
http://arxiv.org/abs/2402.16389
Academic article
This result cannot be displayed to users who are not logged in; logging in is required to view it.