Showing 1 - 10 of 505 for search: '"Zhang, Yikai"'
Author:
Huang, Zhen, Wang, Zengzhi, Xia, Shijie, Li, Xuefeng, Zou, Haoyang, Xu, Ruijie, Fan, Run-Ze, Ye, Lyumanshan, Chern, Ethan, Ye, Yixin, Zhang, Yikai, Yang, Yuqing, Wu, Ting, Wang, Binjie, Sun, Shichao, Xiao, Yang, Li, Yiyuan, Zhou, Fan, Chern, Steffi, Qin, Yiwei, Ma, Yan, Su, Jiadi, Liu, Yixiu, Zheng, Yuxiang, Zhang, Shaoting, Lin, Dahua, Qiao, Yu, Liu, Pengfei
The evolution of Artificial Intelligence (AI) has been significantly accelerated by advancements in Large Language Models (LLMs) and Large Multimodal Models (LMMs), gradually showcasing potential cognitive reasoning abilities in problem-solving and s…
External link:
http://arxiv.org/abs/2406.12753
Author:
Gu, Zhouhong, Zhang, Lin, Zhu, Xiaoxuan, Chen, Jiangjie, Huang, Wenhao, Zhang, Yikai, Wang, Shusen, Ye, Zheyu, Gao, Yan, Feng, Hongwei, Xiao, Yanghua
Detecting evidence within the context is a key step in reasoning tasks. Evaluating and enhancing the capabilities of LLMs in evidence detection will strengthen context-based reasoning performance. This paper proposes a benchmark called…
External link:
http://arxiv.org/abs/2406.12641
Multi-Modal Knowledge Graphs (MMKGs) have proven valuable for various downstream tasks. However, scaling them up is challenging because building large-scale MMKGs often introduces mismatched images (i.e., noise). Most entities in KGs belong to the lo…
External link:
http://arxiv.org/abs/2406.10902
Author:
Yang, Ruihan, Chen, Jiangjie, Zhang, Yikai, Yuan, Siyu, Chen, Aili, Richardson, Kyle, Xiao, Yanghua, Yang, Deqing
Language agents powered by large language models (LLMs) are increasingly valuable as decision-making tools in domains such as gaming and programming. However, these agents often face challenges in achieving high-level goals without detailed instructi…
External link:
http://arxiv.org/abs/2406.04784
Author:
Chen, Jiangjie, Wang, Xintao, Xu, Rui, Yuan, Siyu, Zhang, Yikai, Shi, Wei, Xie, Jian, Li, Shuang, Yang, Ruihan, Zhu, Tinghui, Chen, Aili, Li, Nianqi, Chen, Lida, Hu, Caiyu, Wu, Siye, Ren, Scott, Fu, Ziquan, Xiao, Yanghua
Recent advancements in large language models (LLMs) have significantly boosted the rise of Role-Playing Language Agents (RPLAs), i.e., specialized AI systems designed to simulate assigned personas. By harnessing multiple advanced abilities of LLMs, i…
External link:
http://arxiv.org/abs/2404.18231
As a relative quality comparison of model responses, human and Large Language Model (LLM) preferences serve as common alignment goals in model fine-tuning and criteria in evaluation. Yet, these preferences merely reflect broad tendencies, resulting i…
External link:
http://arxiv.org/abs/2402.11296
Despite remarkable advancements in emulating human-like behavior through Large Language Models (LLMs), current textual simulations do not adequately address the notion of time. To this end, we introduce TimeArena, a novel textual simulated environmen…
External link:
http://arxiv.org/abs/2402.05733
Author:
Zhang, Yikai, Zheng, Songzhu, Dalirrooyfard, Mina, Wu, Pengxiang, Schneider, Anderson, Raj, Anant, Nevmyvaka, Yuriy, Chen, Chao
Learning and decision-making in domains with a naturally high noise-to-signal ratio, such as finance or healthcare, is often challenging, while the stakes are very high. In this paper, we study the problem of learning and acting under a general noisy g…
External link:
http://arxiv.org/abs/2309.14240
Author:
He, Qianyu, Zeng, Jie, Huang, Wenhao, Chen, Lina, Xiao, Jin, He, Qianxi, Zhou, Xunzhe, Chen, Lida, Wang, Xintao, Huang, Yuncheng, Ye, Haoning, Li, Zihan, Chen, Shisong, Zhang, Yikai, Gu, Zhouhong, Liang, Jiaqing, Xiao, Yanghua
Large language models (LLMs) can understand human instructions, showing their potential for pragmatic applications beyond traditional NLP tasks. However, they still struggle with complex instructions, which can be either complex task descriptions tha…
External link:
http://arxiv.org/abs/2309.09150
Noisy labels can significantly affect the performance of deep neural networks (DNNs). In medical image segmentation tasks, annotations are error-prone due to the high demands on annotation time and on the annotators' expertise. Existing methods mostly…
External link:
http://arxiv.org/abs/2308.02498