Showing 1 - 10 of 219 for search: '"Feng, Yunlong"'
Author:
Xu, Yang, Feng, Yunlong, Mu, Honglin, Hou, Yutai, Li, Yitong, Wang, Xinghao, Zhong, Wanjun, Li, Zhongyang, Tu, Dandan, Zhu, Qingfu, Zhang, Min, Che, Wanxiang
Through reading the documentation in the context, tool-using language models can dynamically extend their capability using external tools. The cost is that we have to input lengthy documentation every time the model needs to use the tool, occupying…
External link:
http://arxiv.org/abs/2407.02043
Author:
Feng, Yunlong, Teng, Dechuan, Xu, Yang, Mu, Honglin, Xu, Xiao, Qin, Libo, Zhu, Qingfu, Che, Wanxiang
Decompilation transforms compiled code back into a high-level programming language for analysis when source code is unavailable. Previous work has primarily focused on enhancing decompilation performance by increasing the scale of model parameters or…
External link:
http://arxiv.org/abs/2406.17233
Author:
Dai, Jianbo, Lu, Jianqiao, Feng, Yunlong, Ruan, Rongju, Cheng, Ming, Tan, Haochen, Guo, Zhijiang
Recent advancements in large language models (LLMs) have greatly improved code generation, specifically at the function level. For instance, GPT-4 has achieved an 88.4% pass rate on HumanEval. However, this draws into question the adequacy of existing…
External link:
http://arxiv.org/abs/2405.11430
Author:
Feng, Yixiao, Jiang, Zhou, Shi, Yongliang, Feng, Yunlong, Chen, Xiangyu, Zhao, Hao, Zhou, Guyue
Accurate localization is an essential technology for the flexible navigation of robots in large-scale environments. Both SLAM-based and map-based localization will increase the computing load due to the increase in map size, which will affect downstream…
External link:
http://arxiv.org/abs/2404.18192
Large-scale, high-quality training data is important for improving the performance of models. After being trained with data that has rationales (reasoning steps), models gain reasoning capability. However, the dataset with high-quality rationales is relatively…
External link:
http://arxiv.org/abs/2404.07017
Beyond Static Evaluation: A Dynamic Approach to Assessing AI Assistants' API Invocation Capabilities
With the rise of Large Language Models (LLMs), AI assistants' ability to utilize tools, especially through API calls, has advanced notably. This progress has necessitated more accurate evaluation methods. Many existing studies adopt static evaluation…
External link:
http://arxiv.org/abs/2403.11128
Author:
Tan, Haochen, Guo, Zhijiang, Shi, Zhan, Xu, Lu, Liu, Zhili, Feng, Yunlong, Li, Xiaoguang, Wang, Yasheng, Shang, Lifeng, Liu, Qun, Song, Linqi
Large Language Models (LLMs) have succeeded remarkably in understanding long-form contents. However, exploring their capability for generating long-form contents, such as reports and articles, has been relatively unexplored and inadequately assessed…
External link:
http://arxiv.org/abs/2401.15042
Spoken Language Understanding (SLU) is one of the core components of a task-oriented dialogue system, which aims to extract the semantic meaning of user queries (e.g., intents and slots). In this work, we introduce OpenSLU, an open-source toolkit to…
External link:
http://arxiv.org/abs/2305.10231
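To make the intent/slot terminology in the SLU snippet above concrete, here is a minimal toy sketch of what an SLU model's output looks like for a user query. The rule-based parser, label names, and slot keys are illustrative assumptions, not part of the OpenSLU toolkit itself.

```python
# Toy stand-in for a trained SLU model: maps a user query to an intent
# label plus a dictionary of extracted slots. Labels are hypothetical.
def parse_query(query: str) -> dict:
    slots = {}
    if "tomorrow" in query:
        slots["date"] = "tomorrow"      # illustrative slot key/value
    if "Boston" in query:
        slots["city"] = "Boston"        # illustrative slot key/value
    intent = "get_weather" if "weather" in query else "unknown"
    return {"intent": intent, "slots": slots}

result = parse_query("What is the weather in Boston tomorrow?")
# result["intent"] is "get_weather"; result["slots"] holds city and date.
```

A real SLU toolkit replaces the keyword rules with a learned classifier for intents and a sequence labeler for slots, but the output structure is the same.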
Cross-domain text classification aims to adapt models to a target domain that lacks labeled data. It leverages or reuses rich labeled data from the different but related source domain(s) and unlabeled data from the target domain. To this end, previous…
External link:
http://arxiv.org/abs/2304.09820
Author:
Li, Bohan, Dou, Longxu, Hou, Yutai, Feng, Yunlong, Mu, Honglin, Zhu, Qingfu, Sun, Qinghua, Che, Wanxiang
Prompt-based learning has shown considerable promise in reformulating various downstream tasks as cloze problems by combining original input with a predetermined template. This approach demonstrates its effectiveness, especially in few-shot learning…
External link:
http://arxiv.org/abs/2304.09402
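The cloze reformulation described in the last snippet above can be sketched in a few lines: the original input is combined with a predetermined template containing a mask token, and a verbalizer maps task labels to candidate words for that mask. The template and verbalizer here are illustrative assumptions for a sentiment task, not the paper's actual setup.

```python
# Hypothetical cloze-style prompt construction for sentiment classification.
def build_cloze_prompt(text: str, template: str = "{text} It was [MASK].") -> str:
    """Combine the original input with a predetermined cloze template."""
    return template.format(text=text)

# Verbalizer: maps each class label to a word scored at the [MASK] position.
VERBALIZER = {"positive": "great", "negative": "terrible"}

prompt = build_cloze_prompt("The movie kept me on the edge of my seat.")
# A masked language model would then compare the probabilities of the
# VERBALIZER words at [MASK]; the highest-scoring label is the prediction.
```

Because the model fills a mask rather than learning a new classification head, this formulation can reuse the pretraining objective directly, which is why it works well in few-shot settings.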