Showing 1 - 10 of 11 for search: '"bai, yuelin"'
The electrocardiogram (ECG) is an essential non-invasive diagnostic tool for assessing cardiac conditions. Existing automatic interpretation methods suffer from limited generalizability, focusing on a narrow range of cardiac conditions, and typically…
External link:
http://arxiv.org/abs/2410.19008
Author:
Zhang, Chenhao; Feng, Xi; Bai, Yuelin; Du, Xinrun; Hou, Jinchang; Deng, Kaixin; Han, Guangzeng; Li, Qinrui; Wang, Bingli; Liu, Jiaheng; Qu, Xingwei; Zhang, Yifei; Zhao, Qixuan; Liang, Yiming; Liu, Ziqiang; Fang, Feiteng; Yang, Min; Huang, Wenhao; Lin, Chenghua; Zhang, Ge; Ni, Shiwen
As the capabilities of Multimodal Large Language Models (MLLMs) continue to improve, the need for higher-order capability evaluation of MLLMs is increasing. However, there is a lack of work evaluating MLLMs for higher-order perception and understanding…
External link:
http://arxiv.org/abs/2410.13854
Author:
Li, Jiaming; Zhang, Lei; Li, Yunshui; Liu, Ziqiang; Bai, Yuelin; Luo, Run; Chen, Longze; Yang, Min
The instruction-following ability of large language models enables humans to interact with AI agents in a natural way. However, when required to generate responses of a specific length, large language models often struggle to meet users' needs due to… (a naive illustration of this length-control setting is sketched after the link below)
External link:
http://arxiv.org/abs/2409.18943
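The abstract breaks off mid-sentence, but the problem it names (hitting a requested response length) is concrete. As a rough illustration only, the Python sketch below shows the naive post-hoc baseline: prompt with a word budget, count the words, retry. The `generate` stub and the helper name are hypothetical and not from the paper.

    # Naive length-control baseline: prompt with a word budget, check, retry.
    # `generate` is a hypothetical stand-in for any LLM call; this sketch
    # illustrates the problem setting, not the paper's method.
    def generate(prompt: str) -> str:
        return "A placeholder response of about eight words here."  # stub

    def generate_with_target_length(prompt: str, target_words: int,
                                    tolerance: int = 2, max_tries: int = 3) -> str:
        text = ""
        for _ in range(max_tries):
            text = generate(f"{prompt}\nAnswer in about {target_words} words.")
            if abs(len(text.split()) - target_words) <= tolerance:
                break  # close enough to the requested length
        return text

    print(generate_with_target_length("What is instruction following?", target_words=8))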
Author:
Xie, Nan; Bai, Yuelin; Gao, Hengyuan; Fang, Feiteng; Zhao, Qixuan; Li, Zhijian; Xue, Ziqiang; Zhu, Liang; Ni, Shiwen; Yang, Min
Traditional legal retrieval systems designed to retrieve legal documents, statutes, precedents, and other legal information are unable to give satisfactory answers due to a lack of semantic understanding of specific questions. Large Language Models (LLMs)…
External link:
http://arxiv.org/abs/2408.00357
Author:
Liu, Ziqiang; Fang, Feiteng; Feng, Xi; Du, Xinrun; Zhang, Chenhao; Wang, Zekun; Bai, Yuelin; Zhao, Qixuan; Fan, Liyang; Gan, Chengguang; Lin, Hongquan; Li, Jiaming; Ni, Yuansheng; Wu, Haihong; Narsupalli, Yaswanth; Zheng, Zhigang; Li, Chengming; Hu, Xiping; Xu, Ruifeng; Chen, Xiaojun; Yang, Min; Liu, Jiaheng; Liu, Ruibo; Huang, Wenhao; Zhang, Ge; Ni, Shiwen
The rapid advancements in the development of multimodal large language models (MLLMs) have consistently led to new breakthroughs on various benchmarks. In response, numerous challenging and comprehensive benchmarks have been proposed to more accurately…
External link:
http://arxiv.org/abs/2406.05862
Enhancing Noise Robustness of Retrieval-Augmented Language Models with Adaptive Adversarial Training
Published in:
ACL 2024, Main Conference
Large Language Models (LLMs) exhibit substantial capabilities yet encounter challenges, including hallucination, outdated knowledge, and untraceable reasoning processes. Retrieval-augmented generation (RAG) has emerged as a promising solution, integrating… (a generic RAG skeleton is sketched after the link below)
External link:
http://arxiv.org/abs/2405.20978
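Since the abstract introduces RAG only in passing, here is a minimal sketch of the generic retrieve-then-prompt pattern it refers to, with a toy word-overlap retriever. This is not the paper's adaptive adversarial training method; every name below is illustrative.

    # Generic RAG skeleton: score documents against the query, then prepend
    # the top hits to the prompt so the generator can ground its answer.
    def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
        q = set(query.lower().split())
        ranked = sorted(corpus, key=lambda d: len(q & set(d.lower().split())),
                        reverse=True)
        return ranked[:k]

    def build_prompt(query: str, docs: list[str]) -> str:
        context = "\n".join(f"- {d}" for d in docs)
        return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

    corpus = [
        "RAG grounds LLM answers in retrieved documents.",
        "Adversarial training perturbs inputs to improve robustness.",
        "ABC notation is a plain-text music format.",
    ]
    query = "How does RAG reduce hallucination?"
    print(build_prompt(query, retrieve(query, corpus)))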
Author:
Zhang, Ge; Qu, Scott; Liu, Jiaheng; Zhang, Chenchen; Lin, Chenghua; Yu, Chou Leuang; Pan, Danny; Cheng, Esther; Liu, Jie; Lin, Qunshu; Yuan, Raven; Zheng, Tuney; Pang, Wei; Du, Xinrun; Liang, Yiming; Ma, Yinghao; Li, Yizhi; Ma, Ziyang; Lin, Bill; Benetos, Emmanouil; Yang, Huan; Zhou, Junting; Ma, Kaijing; Liu, Minghao; Niu, Morry; Wang, Noah; Que, Quehry; Liu, Ruibo; Liu, Sine; Guo, Shawn; Gao, Soren; Zhou, Wangchunshu; Zhang, Xinyue; Zhou, Yizhi; Wang, Yubo; Bai, Yuelin; Zhang, Yuhan; Zhang, Yuxiang; Wang, Zenith; Yang, Zhenzhu; Zhao, Zijian; Zhang, Jiajun; Ouyang, Wanli; Huang, Wenhao; Chen, Wenhu
Large Language Models (LLMs) have made great strides in recent years to achieve unprecedented performance across different tasks. However, due to commercial interest, the most competitive models like GPT, Gemini, and Claude have been gated behind proprietary…
External link:
http://arxiv.org/abs/2405.19327
Author:
Qu, Xingwei; Bai, Yuelin; Ma, Yinghao; Zhou, Ziya; Lo, Ka Man; Liu, Jiaheng; Yuan, Ruibin; Min, Lejun; Liu, Xueling; Zhang, Tianyu; Du, Xinrun; Guo, Shuyue; Liang, Yiming; Li, Yizhi; Wu, Shangda; Zhou, Junting; Zheng, Tianyu; Ma, Ziyang; Han, Fengze; Xue, Wei; Xia, Gus; Benetos, Emmanouil; Yue, Xiang; Lin, Chenghua; Tan, Xu; Huang, Stephen W.; Fu, Jie; Zhang, Ge
In this paper, we explore the application of Large Language Models (LLMs) to the pre-training of music. While the prevalent use of MIDI in music modeling is well-established, our findings suggest that LLMs are inherently more compatible with ABC Notation… (a tiny ABC example follows the link below)
External link:
http://arxiv.org/abs/2404.06393
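ABC Notation, which the abstract credits with that compatibility, is a plain-text music format: a few header fields followed by the note sequence. The tiny tune below is illustrative, not from the paper; it only shows why a text-native tokenizer can consume music directly, unlike binary MIDI.

    # A minimal tune in ABC Notation: X = index, T = title, M = meter, K = key,
    # then the notes. Being ordinary text, it tokenizes like any other string.
    abc_tune = "X:1\nT:Example scale\nM:4/4\nK:C\nC D E F | G A B c |"
    print(abc_tune.split())  # the naive token sequence a language model would see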
Author:
Bai, Yuelin; Du, Xinrun; Liang, Yiming; Jin, Yonggang; Zhou, Junting; Liu, Ziqiang; Fang, Feiteng; Chang, Mingshan; Zheng, Tianyu; Zhang, Xincheng; Ma, Nuo; Wang, Zekun; Yuan, Ruibin; Wu, Haihong; Lin, Hongquan; Huang, Wenhao; Zhang, Jiajun; Lin, Chenghua; Fu, Jie; Yang, Min; Ni, Shiwen; Zhang, Ge
Remarkable progress on English instruction tuning has facilitated the efficacy and reliability of large language models (LLMs). However, there remains a noticeable gap in instruction tuning for Chinese, where the complex linguistic features pose significant… (an illustrative instruction-tuning record follows the link below)
External link:
http://arxiv.org/abs/2403.18058
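Instruction tuning of the kind this paper addresses is driven by (instruction, response) pairs. The record below only illustrates the common Alpaca-style field layout with a Chinese example; the field names are an assumption, not the dataset's actual schema.

    import json

    # Illustrative Chinese instruction-tuning record; the real dataset's
    # schema may differ from this assumed instruction/input/output layout.
    record = {
        "instruction": "请用一句话解释什么是大语言模型。",
        "input": "",
        "output": "大语言模型是在海量文本上训练、能够理解并生成自然语言的模型。",
    }
    print(json.dumps(record, ensure_ascii=False, indent=2))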
Author:
Ni, Shiwen; Tan, Minghuan; Bai, Yuelin; Niu, Fuqiang; Yang, Min; Zhang, Bowen; Xu, Ruifeng; Chen, Xiaojun; Li, Chengming; Hu, Xiping; Li, Ye; Fan, Jianping
Published in:
LREC-COLING 2024
Large language models (LLMs) have demonstrated impressive performance in various natural language processing (NLP) tasks. However, there is limited understanding of how well LLMs perform in specific domains (e.g., the intellectual property (IP) domain)…
External link:
http://arxiv.org/abs/2402.16389