Showing 1 - 10 of 2,322 for search: '"Li Yinghui"'
Published in:
Jichu yixue yu linchuang, Vol 44, Iss 6, Pp 763-771 (2024)
Objective To construct chimeric antigen receptor-modified γδT cells targeting BCMA (BCMA CAR-γδT) and to evaluate their anti-multiple myeloma efficacy in vitro. Methods Lentiviral vectors containing a BCMA single-chain variable fragment were c…
External link:
https://doaj.org/article/724286e5e0c244a399c78fa0f26bea6b
Author:
Ye, Jingheng, Jiang, Yong, Wang, Xiaobin, Li, Yinghui, Li, Yangning, Zheng, Hai-Tao, Xie, Pengjun, Huang, Fei
This paper introduces the task of product demand clarification within an e-commerce scenario, where the user commences the conversation with ambiguous queries and the task-oriented agent is designed to achieve more accurate and tailored product sea…
External link:
http://arxiv.org/abs/2407.00942
Author:
Ye, Jingheng, Xu, Zishan, Li, Yinghui, Cheng, Xuxin, Song, Linlin, Zhou, Qingyu, Zheng, Hai-Tao, Shen, Ying, Su, Xin
The paper focuses on improving the interpretability of Grammatical Error Correction (GEC) metrics, which has received little attention in previous studies. To bridge the gap, we propose CLEME2.0, a reference-based evaluation strategy that can describe fo…
External link:
http://arxiv.org/abs/2407.00934
Author:
Ye, Jingheng, Qin, Shang, Li, Yinghui, Cheng, Xuxin, Qin, Libo, Zheng, Hai-Tao, Xing, Peng, Xu, Zishan, Cheng, Guo, Wei, Zhao
Existing studies explore the explainability of Grammatical Error Correction (GEC) in a limited scenario, where they ignore the interaction between corrections and explanations. To bridge the gap, this paper introduces the task of EXplainable GEC (EXG…
External link:
http://arxiv.org/abs/2407.00924
Author:
Du, Jiangshu, Wang, Yibo, Zhao, Wenting, Deng, Zhongfen, Liu, Shuaiqi, Lou, Renze, Zou, Henry Peng, Venkit, Pranav Narayanan, Zhang, Nan, Srinath, Mukund, Zhang, Haoran Ranran, Gupta, Vipul, Li, Yinghui, Li, Tao, Wang, Fei, Liu, Qin, Liu, Tianlin, Gao, Pengzhi, Xia, Congying, Xing, Chen, Cheng, Jiayang, Wang, Zhaowei, Su, Ying, Shah, Raj Sanjay, Guo, Ruohao, Gu, Jing, Li, Haoran, Wei, Kangda, Wang, Zihao, Cheng, Lu, Ranathunga, Surangika, Fang, Meng, Fu, Jie, Liu, Fei, Huang, Ruihong, Blanco, Eduardo, Cao, Yixin, Zhang, Rui, Yu, Philip S., Yin, Wenpeng
This work is motivated by two key trends. On one hand, large language models (LLMs) have shown remarkable versatility in various generative tasks such as writing, drawing, and question answering, significantly reducing the time required for many rout…
External link:
http://arxiv.org/abs/2406.16253
Author:
Qin, Libo, Chen, Qiguang, Feng, Xiachong, Wu, Yang, Zhang, Yongheng, Li, Yinghui, Li, Min, Che, Wanxiang, Yu, Philip S.
While large language models (LLMs) like ChatGPT have shown impressive capabilities in Natural Language Processing (NLP) tasks, a systematic investigation of their potential in this field remains largely unexplored. This study aims to address this gap…
External link:
http://arxiv.org/abs/2405.12819
Author:
Yan Hengbin, Li Yinghui
Published in:
Open Linguistics, Vol 5, Iss 1, Pp 601-614 (2019)
As an important index of working memory burden and syntactic difficulty, Dependency Distance (DD) has been fruitfully applied in the context of Second Language Acquisition (SLA) to both native and non-native language production. Recent research has f…
External link:
https://doaj.org/article/9c75ecb72318458797a274b6ff356955
Author:
Qin, Libo, Chen, Qiguang, Zhou, Yuhang, Chen, Zhi, Li, Yinghui, Liao, Lizi, Li, Min, Che, Wanxiang, Yu, Philip S.
Multilingual Large Language Models leverage powerful Large Language Models to handle and respond to queries in multiple languages, achieving remarkable success in multilingual natural language processing tasks. Despite these breakthr…
External link:
http://arxiv.org/abs/2404.04925
Author:
Li, Yangning, Lv, Qingsong, Yu, Tianyu, Li, Yinghui, Huang, Shulin, Lu, Tingwei, Hu, Xuming, Jiang, Wenhao, Zheng, Hai-Tao, Wang, Hui
Entity Set Expansion (ESE) aims to identify new entities belonging to the same semantic class as a given set of seed entities. Traditional methods primarily relied on positive seed entities to represent a target semantic class, which poses challenge…
External link:
http://arxiv.org/abs/2403.04247
Author:
Xu, Zhikun, Li, Yinghui, Ding, Ruixue, Wang, Xinyu, Chen, Boli, Jiang, Yong, Zheng, Hai-Tao, Lu, Wenlian, Xie, Pengjun, Huang, Fei
How to better evaluate the capabilities of Large Language Models (LLMs) is a focal point and hot topic in current LLM research. Previous work has noted that, due to the extremely high cost of iterative updates of LLMs, they are often unable to answ…
External link:
http://arxiv.org/abs/2402.19248