Showing 1 - 10 of 111 results for search: '"Qin, Libo"'
In recent years, Large Language Models (LLMs) have made significant strides towards Artificial General Intelligence. However, training these models from scratch requires substantial computational resources and vast amounts of text data. In this paper…
External link:
http://arxiv.org/abs/2407.02118
Author:
Ye, Jingheng, Qin, Shang, Li, Yinghui, Cheng, Xuxin, Qin, Libo, Zheng, Hai-Tao, Xing, Peng, Xu, Zishan, Cheng, Guo, Wei, Zhao
Existing studies explore the explainability of Grammatical Error Correction (GEC) in a limited scenario, where they ignore the interaction between corrections and explanations. To bridge the gap, this paper introduces the task of EXplainable GEC (EXG…
External link:
http://arxiv.org/abs/2407.00924
Author:
Feng, Yunlong, Xu, Yang, Teng, Dechuan, Mu, Honglin, Xu, Xiao, Qin, Libo, Che, Wanxiang, Zhu, Qingfu
Decompilation transforms compiled code back into a high-level programming language for analysis when source code is unavailable. Previous work has primarily focused on enhancing decompilation performance by increasing the scale of model parameters or…
External link:
http://arxiv.org/abs/2406.17233
Cross-lingual chain-of-thought can effectively complete reasoning tasks across languages, which has gained increasing attention. Recently, dominant approaches in the literature improve cross-lingual alignment capabilities by integrating reasoning knowledge…
External link:
http://arxiv.org/abs/2406.13940
Author:
Qin, Libo, Wei, Fuxuan, Chen, Qiguang, Zhou, Jingxuan, Huang, Shijue, Si, Jiasheng, Lu, Wenpeng, Che, Wanxiang
Slot filling and intent detection are two highly correlated tasks in spoken language understanding (SLU). Recent SLU research attempts to explore zero-shot prompting techniques in large language models to alleviate the data scarcity problem. Nevertheless…
External link:
http://arxiv.org/abs/2406.10505
Multi-modal Chain-of-Thought (MCoT) requires models to leverage knowledge from both textual and visual modalities for step-by-step reasoning, which has gained increasing attention. Nevertheless, the current MCoT benchmark still faces some challenges: (1)…
External link:
http://arxiv.org/abs/2405.16473
Author:
Qin, Libo, Chen, Qiguang, Feng, Xiachong, Wu, Yang, Zhang, Yongheng, Li, Yinghui, Li, Min, Che, Wanxiang, Yu, Philip S.
While large language models (LLMs) like ChatGPT have shown impressive capabilities in Natural Language Processing (NLP) tasks, a systematic investigation of their potential in this field remains largely unexplored. This study aims to address this gap…
External link:
http://arxiv.org/abs/2405.12819
Large-scale, high-quality training data is important for improving the performance of models. After being trained with data that has rationales (reasoning steps), models gain reasoning capability. However, datasets with high-quality rationales are relatively…
External link:
http://arxiv.org/abs/2404.07017
Author:
Qin, Libo, Chen, Qiguang, Zhou, Yuhang, Chen, Zhi, Li, Yinghui, Liao, Lizi, Li, Min, Che, Wanxiang, Yu, Philip S.
Multilingual Large Language Models are capable of using powerful Large Language Models to handle and respond to queries in multiple languages, achieving remarkable success in multilingual natural language processing tasks. Despite these breakthroughs…
External link:
http://arxiv.org/abs/2404.04925
Author:
Li, Yinghui, Qin, Shang, Ye, Jingheng, Ma, Shirong, Li, Yangning, Qin, Libo, Hu, Xuming, Jiang, Wenhao, Zheng, Hai-Tao, Yu, Philip S.
Recently, Large Language Models (LLMs) have been widely studied by researchers for their roles in various downstream NLP tasks. As a fundamental task in the NLP field, Chinese Grammatical Error Correction (CGEC) aims to correct all potential grammatical…
External link:
http://arxiv.org/abs/2402.11420