Showing 1 - 9 of 9 for search: '"Amayuelas, Alfonso"'
Author:
Hua, Wenyue, Liu, Ollie, Li, Lingyao, Amayuelas, Alfonso, Chen, Julie, Jiang, Lucas, Jin, Mingyu, Fan, Lizhou, Sun, Fei, Wang, William, Wang, Xintong, Zhang, Yongfeng
This paper investigates the rationality of large language models (LLMs) in strategic decision-making contexts, specifically within the framework of game theory. We evaluate several state-of-the-art LLMs across a spectrum of complete-information and incomplete-information games …
External link:
http://arxiv.org/abs/2411.05990
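A toy illustration of the kind of rationality check described above, in Python: a model's chosen action in a complete-information game such as the Prisoner's Dilemma can be compared against the game-theoretic best response. The payoff matrix and the is_rational helper below are illustrative assumptions, not the paper's actual evaluation harness.

    # Hedged sketch: scoring a model's move in the Prisoner's Dilemma against
    # the best response. Payoffs and helper names are illustrative assumptions.
    PAYOFFS = {  # (row_action, col_action) -> (row_payoff, col_payoff)
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (0, 5),
        ("defect",    "cooperate"): (5, 0),
        ("defect",    "defect"):    (1, 1),
    }
    ACTIONS = ["cooperate", "defect"]

    def best_response(opponent_action):
        # Action that maximizes the row player's payoff given the opponent's move.
        return max(ACTIONS, key=lambda a: PAYOFFS[(a, opponent_action)][0])

    def is_rational(chosen_action, opponent_action):
        return chosen_action == best_response(opponent_action)

    # "defect" is the dominant strategy, so any other reply is flagged.
    print(is_rational("cooperate", "defect"))  # False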
Author:
Wang, Xinyi, Antoniades, Antonis, Elazar, Yanai, Amayuelas, Alfonso, Albalak, Alon, Zhang, Kexun, Wang, William Yang
The impressive capabilities of large language models (LLMs) have sparked debate over whether these models genuinely generalize to unseen tasks or predominantly rely on memorizing vast amounts of pretraining data. To explore this issue, we introduce a …
External link:
http://arxiv.org/abs/2407.14985
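One crude way to probe the memorization-versus-generalization question in the snippet above is to measure how many of a generation's n-grams appear verbatim in the pretraining corpus. The sketch below is an illustrative proxy only, not the metric introduced in the paper.

    # Hedged sketch: fraction of generated n-grams that also occur in a (tiny,
    # in-memory) pretraining corpus, as a rough memorization signal.
    def ngrams(tokens, n):
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    def overlap_score(generation, corpus_docs, n=4):
        gen = ngrams(generation.split(), n)
        if not gen:
            return 0.0
        corpus = set()
        for doc in corpus_docs:
            corpus |= ngrams(doc.split(), n)
        return len(gen & corpus) / len(gen)  # share of generated n-grams seen in training

    corpus = ["the quick brown fox jumps over the lazy dog"]
    print(overlap_score("the quick brown fox jumps again", corpus))  # ~0.67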
To enhance Large Language Model (LLM) capabilities, multi-agent debates have been introduced, where multiple LLMs discuss solutions to a problem over several rounds of debate. However, LLMs often produce incorrect responses that appear deceptively confident …
External link:
http://arxiv.org/abs/2407.06426
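The debate setup sketched above can be illustrated with a minimal round-based loop, assuming a hypothetical query_llm(agent_id, prompt) call; the prompt wording, round count, and majority vote are assumptions rather than the paper's protocol.

    # Hedged sketch of a multi-round multi-agent debate loop.
    from collections import Counter

    def debate(question, query_llm, num_agents=3, num_rounds=2):
        answers = [query_llm(i, question) for i in range(num_agents)]
        for _ in range(num_rounds):
            for i in range(num_agents):
                peers = "\n".join(a for j, a in enumerate(answers) if j != i)
                prompt = (f"{question}\nOther agents answered:\n{peers}\n"
                          "Reconsider and give your final answer.")
                answers[i] = query_llm(i, prompt)
        # Simple majority vote over the final round's answers.
        return Counter(answers).most_common(1)[0][0]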
Large language models (LLMs) have shown remarkable performance on code generation tasks. A recent use case is iterative code repair, where an LLM fixes an incorrect program by rationalizing about errors and generating new code. Recent works augment the …
External link:
http://arxiv.org/abs/2406.14867
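An iterative code-repair loop of the kind described above can be sketched as: run the candidate program, and if it fails, feed the error back to the model and ask for a fix. query_llm is a hypothetical model call, and exec-based checking stands in for a real test harness.

    # Hedged sketch of an iterative repair loop under those assumptions.
    import traceback

    def repair(task_prompt, query_llm, max_iters=3):
        code = query_llm(task_prompt)
        for _ in range(max_iters):
            try:
                exec(code, {})          # run the candidate program
                return code             # no exception: accept this version
            except Exception:
                error = traceback.format_exc()
                code = query_llm(f"{task_prompt}\n\nThis code failed:\n{code}\n"
                                 f"Error:\n{error}\nFix the code.")
        return code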
Author:
Amayuelas, Alfonso, Yang, Xianjun, Antoniades, Antonis, Hua, Wenyue, Pan, Liangming, Wang, William
Large Language Models (LLMs) have shown exceptional results on current benchmarks when working individually. The advancement in their capabilities, along with a reduction in parameter size and inference times, has facilitated the use of these models …
External link:
http://arxiv.org/abs/2406.14711
Author:
Wang, Xinyi, Amayuelas, Alfonso, Zhang, Kexun, Pan, Liangming, Chen, Wenhu, Wang, William Yang
Pre-trained language models (LMs) are able to perform complex reasoning without explicit fine-tuning. To understand how pre-training with a next-token prediction objective contributes to the emergence of such reasoning capability, we propose that we …
External link:
http://arxiv.org/abs/2402.03268
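Reading reasoning as the aggregation of paths seen at pre-training time can be made concrete with a toy graph walk, in the spirit of the snippet above: candidate answers are scored by the relation paths that connect them to the query entity. The graph and scoring rule below are illustrative assumptions, not the paper's formulation.

    # Hedged sketch: score reachable entities by counting relation paths,
    # with shorter paths weighted more heavily.
    from collections import defaultdict

    edges = {  # head -> list of (relation, tail)
        "Socrates": [("is_a", "human")],
        "human":    [("subclass_of", "mammal")],
        "mammal":   [("subclass_of", "animal")],
    }

    def path_scores(start, max_hops=3):
        scores, frontier = defaultdict(float), {start: 1.0}
        for hop in range(1, max_hops + 1):
            nxt = defaultdict(float)
            for node, weight in frontier.items():
                for _, tail in edges.get(node, []):
                    nxt[tail] += weight
            for node, weight in nxt.items():
                scores[node] += weight / hop   # shorter paths count more
            frontier = nxt
        return dict(scores)

    print(path_scores("Socrates"))  # {'human': 1.0, 'mammal': 0.5, 'animal': ~0.33}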
This paper investigates the capabilities of Large Language Models (LLMs) in the context of understanding their knowledge and uncertainty over questions. Specifically, we focus on addressing known-unknown questions, characterized by high uncertainty due to …
External link:
http://arxiv.org/abs/2305.13712
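A simple heuristic for the known-unknown distinction discussed above is to sample several answers and measure their agreement; low agreement signals high uncertainty. The sketch below uses this self-consistency idea as an assumption, not as the paper's method, and sample_llm is a hypothetical sampling call.

    # Hedged sketch: flag a question as "known" only if sampled answers agree.
    from collections import Counter

    def is_known(question, sample_llm, n_samples=8, threshold=0.75):
        answers = [sample_llm(question) for _ in range(n_samples)]
        top_count = Counter(answers).most_common(1)[0][1]
        agreement = top_count / n_samples
        return agreement >= threshold   # high agreement -> treat as "known"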
Published in:
International Conference on Learning Representations, 2022
Reasoning is a fundamental problem for computers and deeply studied in Artificial Intelligence. In this paper, we specifically focus on answering multi-hop logical queries on Knowledge Graphs (KGs). This is a complicated task because, in real-world …
External link:
http://arxiv.org/abs/2209.14464
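A multi-hop logical query can be answered symbolically by chaining set lookups over the graph, which is the behavior that neural query-answering methods approximate on incomplete KGs. The toy graph and two-hop query below are illustrative.

    # Hedged sketch: a two-hop query answered by explicit set operations.
    kg = {  # (head, relation) -> set of tails
        ("Marie Curie", "won"):  {"Nobel Prize in Physics", "Nobel Prize in Chemistry"},
        ("Pierre Curie", "won"): {"Nobel Prize in Physics"},
        ("Nobel Prize in Physics", "awarded_in"):   {"Stockholm"},
        ("Nobel Prize in Chemistry", "awarded_in"): {"Stockholm"},
    }

    def hop(entities, relation):
        out = set()
        for e in entities:
            out |= kg.get((e, relation), set())
        return out

    # "Where were the prizes won by Marie Curie awarded?"  (won -> awarded_in)
    print(hop(hop({"Marie Curie"}, "won"), "awarded_in"))  # {'Stockholm'}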
Training datasets for machine learning often have some form of missingness. For example, to learn a model for deciding whom to give a loan, the available training data includes individuals who were given a loan in the past, but not those who were not …
External link:
http://arxiv.org/abs/2012.11448
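The loan example above describes a selection-bias form of missingness: outcomes are observed only for applicants who were approved in the past. The short simulation below, with synthetic numbers and an assumed approval rule, shows how statistics computed on the observed subset can diverge from the full population.

    # Hedged sketch: synthetic population vs. the subset observed under a past
    # approval rule; both the data and the rule are illustrative assumptions.
    import random

    random.seed(0)
    population = [{"income": random.gauss(50, 15)} for _ in range(10_000)]
    for p in population:
        p["repays"] = p["income"] + random.gauss(0, 10) > 45   # true outcome

    observed = [p for p in population if p["income"] > 60]     # past approval rule

    rate = lambda people: sum(p["repays"] for p in people) / len(people)
    print(f"repayment rate, full population: {rate(population):.2f}")
    print(f"repayment rate, observed (approved) subset: {rate(observed):.2f}")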