Showing 1 - 10 of 18 for search: '"Lei, Fangyu"'
Author:
Cao, Ruisheng, Lei, Fangyu, Wu, Haoyuan, Chen, Jixuan, Fu, Yeqiao, Gao, Hongcheng, Xiong, Xinzhuang, Zhang, Hanchong, Mao, Yuchen, Hu, Wenjing, Xie, Tianbao, Xu, Hongshen, Zhang, Danyang, Wang, Sida, Sun, Ruoxi, Yin, Pengcheng, Xiong, Caiming, Ni, Ansong, Liu, Qian, Zhong, Victor, Chen, Lu, Yu, Kai, Yu, Tao
Data science and engineering workflows often span multiple stages, from warehousing to orchestration, using tools like BigQuery, dbt, and Airbyte. As vision language models (VLMs) advance in multimodal understanding and code generation, VLM-based age…
External link:
http://arxiv.org/abs/2407.10956
Author:
Xie, Tianbao, Zhang, Danyang, Chen, Jixuan, Li, Xiaochuan, Zhao, Siheng, Cao, Ruisheng, Hua, Toh Jing, Cheng, Zhoujun, Shin, Dongchan, Lei, Fangyu, Liu, Yitao, Xu, Yiheng, Zhou, Shuyan, Savarese, Silvio, Xiong, Caiming, Zhong, Victor, Yu, Tao
Autonomous agents that accomplish complex computer tasks with minimal human interventions have the potential to transform human-computer interaction, significantly enhancing accessibility and productivity. However, existing benchmarks either lack an…
External link:
http://arxiv.org/abs/2404.07972
Large Language Models (LLMs) have revolutionized open-domain dialogue agents but encounter challenges in multi-character role-playing (MCRP) scenarios. To address the issue, we present Neeko, an innovative framework designed for efficient multiple ch…
External link:
http://arxiv.org/abs/2402.13717
Fine-tuning is often necessary to enhance the adaptability of Large Language Models (LLMs) to downstream tasks. Nonetheless, the process of updating billions of parameters demands significant computational resources and training time, which poses a su…
External link:
http://arxiv.org/abs/2402.12851
Author:
Huang, Yiming, Lin, Zhenghao, Liu, Xiao, Gong, Yeyun, Lu, Shuai, Lei, Fangyu, Liang, Yaobo, Shen, Yelong, Lin, Chen, Duan, Nan, Chen, Weizhu
Large language models (LLMs) have demonstrated impressive reasoning capabilities, yet there has recently been ongoing debate about these abilities and the potential problem of data contamination. This paper aims to evaluate the reasoning capacities of LLMs, s…
External link:
http://arxiv.org/abs/2312.02143
Knowledge Editing (KE) for modifying factual knowledge in Large Language Models (LLMs) has been receiving increasing attention. However, existing knowledge editing methods are entity-centric, and it is unclear whether this approach is suitable for a…
External link:
http://arxiv.org/abs/2311.09053
The rapid development of Large Language Models (LLMs) has led to great strides in model capabilities like long-context understanding and reasoning. However, as LLMs are able to process longer contexts, it becomes more challenging to evaluate whether…
External link:
http://arxiv.org/abs/2310.15147
Author:
Lei, Fangyu, Luo, Tongxu, Yang, Pengqi, Liu, Weihao, Liu, Hanwen, Lei, Jiahe, Huang, Yiming, Wei, Yifan, He, Shizhu, Zhao, Jun, Liu, Kang
Table-based question answering (TableQA) is an important task in natural language processing, which requires comprehending tables and employing various methods of reasoning to answer the questions. This paper introduces TableQAKit, the first comprehensive…
External link:
http://arxiv.org/abs/2310.15075
Author:
Wei, Yifan, Su, Yisong, Ma, Huanhuan, Yu, Xiaoyan, Lei, Fangyu, Zhang, Yuanzhe, Zhao, Jun, Liu, Kang
Large language models (LLMs) have shown nearly saturated performance on many natural language processing (NLP) tasks. As a result, it is natural for people to believe that LLMs have also mastered abilities such as time understanding and reasoning. Ho…
External link:
http://arxiv.org/abs/2310.05157
Answering numerical questions over hybrid contents from the given tables and text (TextTableQA) is a challenging task. Recently, Large Language Models (LLMs) have gained significant attention in the NLP community. With the emergence of large language…
External link:
http://arxiv.org/abs/2309.12669