Showing 1 - 10 of 9,215 results for the search: '"Wang XUN"'
Author:
Li, Siyuan, Ma, Zhe, Liu, Feifan, Lu, Jiani, Xiao, Qinqin, Sun, Kewu, Cui, Lingfei, Yang, Xirui, Liu, Peng, Wang, Xun
Robot task planning is an important problem for autonomous robots in long-horizon, challenging tasks. As large pre-trained models have demonstrated superior planning ability, recent research investigates utilizing large models to achieve autonomous planning…
External link:
http://arxiv.org/abs/2411.06920
Offline-to-Online Reinforcement Learning has emerged as a powerful paradigm, leveraging offline data for initialization and online fine-tuning to enhance both sample efficiency and performance. However, most existing research has focused on single-agent…
External link:
http://arxiv.org/abs/2410.19450
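The entry above describes the offline-to-online paradigm: initialize a policy from logged data, then fine-tune it with live interaction. A minimal single-agent sketch using tabular Q-learning on a toy chain MDP (the environment, hyperparameters, and two-phase loop are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

# Toy 5-state chain MDP: action 0 moves left, action 1 moves right;
# reaching (or staying at) the rightmost state yields reward 1.
N_STATES = 5
MOVES = (-1, +1)

def step(s, a):
    s2 = min(max(s + MOVES[a], 0), N_STATES - 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

def q_update(Q, s, a, r, s2, alpha=0.5, gamma=0.9):
    # Standard tabular Q-learning backup.
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])

# Phase 1 (offline): repeatedly sweep a fixed batch of logged transitions.
Q = np.zeros((N_STATES, 2))
offline_batch = [(s, a, *step(s, a)) for s in range(N_STATES) for a in (0, 1)]
for _ in range(200):
    for s, a, s2, r in offline_batch:
        q_update(Q, s, a, r, s2)

# Phase 2 (online): epsilon-greedy fine-tuning with fresh interaction.
rng = np.random.default_rng(0)
for _ in range(50):
    s = 0
    for _ in range(20):
        a = int(rng.integers(2)) if rng.random() < 0.1 else int(Q[s].argmax())
        s2, r = step(s, a)
        q_update(Q, s, a, r, s2)
        s = s2
```

The offline phase gives the online phase a sensible starting Q-table, so exploration during fine-tuning is far cheaper than learning from scratch.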
Author:
Liu, Yanming, Peng, Xinyue, Cao, Jiannan, Bo, Shi, Shen, Yanxin, Zhang, Xuhong, Cheng, Sheng, Wang, Xun, Yin, Jianwei, Du, Tianyu
Large language models (LLMs) have shown remarkable capabilities in natural language processing; however, they still face difficulties when tasked with understanding lengthy contexts and executing effective question answering. These challenges often arise…
External link:
http://arxiv.org/abs/2410.01671
Author:
Zhang, Boyu, Du, Tianyu, Tong, Junkai, Zhang, Xuhong, Chow, Kingsum, Cheng, Sheng, Wang, Xun, Yin, Jianwei
After large models (LMs) gained widespread acceptance in code-related tasks, their superior generative capacity has greatly promoted the application of code LMs. Nevertheless, the security of the generated code has raised concerns about its potential…
External link:
http://arxiv.org/abs/2410.01488
The rapid development of Large Language Models (LLMs) has brought remarkable generative capabilities across diverse tasks. However, despite these impressive achievements, LLMs still have numerous inherent vulnerabilities, particularly when faced…
External link:
http://arxiv.org/abs/2407.16205
Author:
Wang, Song, Wang, Xun, Mei, Jie, Xie, Yujia, Muarray, Sean, Li, Zhang, Wu, Lingfeng, Chen, Si-Qing, Xiong, Wayne
Hallucination, a phenomenon where large language models (LLMs) produce output that is factually incorrect or unrelated to the input, is a major challenge for LLM applications that require accuracy and dependability. In this paper, we introduce a reliable…
External link:
http://arxiv.org/abs/2407.15441
Effective expression feature representations generated by triplet-based deep metric learning are highly advantageous for facial expression recognition (FER). The performance of triplet-based deep metric learning is contingent upon identifying the best…
External link:
http://arxiv.org/abs/2406.16434
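The snippet above notes that triplet-based metric learning hinges on which triplets are selected. A minimal sketch of the standard triplet loss together with batch-hard triplet mining (the function names, margin value, and toy embeddings are illustrative assumptions, not the paper's method):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Hinge on the gap between anchor-positive and anchor-negative distances.
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)

def hardest_triplet(embeddings, labels, a):
    # Batch-hard mining: for anchor index `a`, pick the farthest same-label
    # positive and the nearest different-label negative in the batch.
    d = np.linalg.norm(embeddings - embeddings[a], axis=1)
    pos = np.array([i for i in range(len(labels)) if labels[i] == labels[a] and i != a])
    neg = np.array([i for i in range(len(labels)) if labels[i] != labels[a]])
    return int(pos[d[pos].argmax()]), int(neg[d[neg].argmin()])

# Example: four 2-D embeddings from two expression classes.
emb = np.array([[0.0, 0.0], [0.4, 0.0], [0.5, 0.0], [0.1, 0.1]])
labels = np.array([0, 0, 1, 0])
p, n = hardest_triplet(emb, labels, a=0)      # hardest positive, hardest negative
loss = triplet_loss(emb[0], emb[p], emb[n])   # non-zero: triplet violates the margin
```

Mining the hardest triplets concentrates gradient signal on the examples the embedding currently gets most wrong, which is why triplet selection drives performance.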
Author:
Xiong, Weimin, Song, Yifan, Zhao, Xiutian, Wu, Wenhao, Wang, Xun, Wang, Ke, Li, Cheng, Peng, Wei, Li, Sujian
Large language model agents have exhibited exceptional performance across a range of complex interactive tasks. Recent approaches have utilized tuning with expert trajectories to enhance agent performance, yet they primarily concentrate on outcome rewards…
External link:
http://arxiv.org/abs/2406.11176
Author:
Liu, Yanming, Peng, Xinyue, Zhang, Yuwei, Ke, Xiaolan, Deng, Songhang, Cao, Jiannan, Ma, Chen, Fu, Mengchen, Zhang, Xuhong, Cheng, Sheng, Wang, Xun, Yin, Jianwei, Du, Tianyu
Large language models have repeatedly shown outstanding performance across diverse applications. However, deploying these models can inadvertently risk user privacy. The significant memory demands during training pose a major challenge in terms of resources…
External link:
http://arxiv.org/abs/2406.11087
Author:
Liu, Yanming, Peng, Xinyue, Cao, Jiannan, Bo, Shi, Zhang, Yuwei, Zhang, Xuhong, Cheng, Sheng, Wang, Xun, Yin, Jianwei, Du, Tianyu
Large language models (LLMs) have demonstrated exceptional reasoning capabilities, enabling them to solve various complex problems. Recently, this ability has been applied to the paradigm of tool learning. Tool learning involves providing examples of…
External link:
http://arxiv.org/abs/2406.03807