Showing 1 - 10 of 41 for search: '"Xu, Frank F."'
Author:
Ou, Tianyue, Xu, Frank F., Madaan, Aman, Liu, Jiarui, Lo, Robert, Sridhar, Abishek, Sengupta, Sudipta, Roth, Dan, Neubig, Graham, Zhou, Shuyan
LLMs can now act as autonomous agents that interact with digital environments and complete specific objectives (e.g., arranging an online meeting). However, accuracy is still far from satisfactory, partly due to a lack of large-scale, direct demonstrations…
External link:
http://arxiv.org/abs/2409.15637
Author:
Wang, Xingyao, Li, Boxuan, Song, Yufan, Xu, Frank F., Tang, Xiangru, Zhuge, Mingchen, Pan, Jiayi, Song, Yueqi, Li, Bowen, Singh, Jaskirat, Tran, Hoang H., Li, Fuqiang, Ma, Ren, Zheng, Mingzhang, Qian, Bill, Shao, Yanjun, Muennighoff, Niklas, Zhang, Yizhe, Hui, Binyuan, Lin, Junyang, Brennan, Robert, Peng, Hao, Ji, Heng, Neubig, Graham
Software is one of the most powerful tools that we humans have at our disposal; it allows a skilled programmer to interact with the world in complex and profound ways. At the same time, thanks to improvements in large language models (LLMs), there has…
External link:
http://arxiv.org/abs/2407.16741
Author:
Wang, Zora Zhiruo, Asai, Akari, Yu, Xinyan Velocity, Xu, Frank F., Xie, Yiqing, Neubig, Graham, Fried, Daniel
While language models (LMs) have proven remarkably adept at generating code, many programs are challenging for LMs to generate using their parametric knowledge alone. Providing external contexts such as library documentation can facilitate generating…
External link:
http://arxiv.org/abs/2406.14497
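The entry above describes supplying external context such as library documentation to a code-generating LM. Below is a minimal Python sketch of that general retrieve-then-generate setup; the documentation snippet, task, and prompt format are illustrative assumptions, not the paper's actual benchmark or pipeline.
```python
# Illustrative sketch: prepend retrieved library documentation to a code-generation
# prompt so the model is not limited to its parametric knowledge.
retrieved_doc = (
    "pandas.DataFrame.merge(right, how='inner', on=None, ...)\n"
    "    Merge DataFrame or named Series objects with a database-style join."
)

task = "Write a function that inner-joins two DataFrames on the column 'id'."

prompt = (
    "Relevant library documentation:\n"
    f"{retrieved_doc}\n\n"
    f"Task: {task}\n"
    "Solution:\n"
)

# `prompt` would then be passed to a code LM; without the documentation block,
# the model has to recall the API from memory alone.
print(prompt)
```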
Published in:
Transactions of the Association for Computational Linguistics, Vol 8, Pp 423-438 (2020)
Recent work has presented intriguing results examining the knowledge contained in language models (LMs) by having the LM fill in the blanks of prompts such as “Obama is a __ by profession”. These prompts are usually manually created, and quite possibly…
External link:
https://doaj.org/article/861ecb5d6ec2467287cf263aa94e6a75
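A minimal sketch of the fill-in-the-blank probing described in the entry above, using a Hugging Face masked LM as a stand-in; the model choice is an assumption, and the paper's actual focus (finding better-worded prompts) is not shown here.
```python
# Probe a masked LM with a manually written prompt; the blank becomes [MASK].
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-cased")

for pred in unmasker("Obama is a [MASK] by profession."):
    # Each prediction carries the candidate filler token and its probability.
    print(f"{pred['token_str']:>12}  p={pred['score']:.3f}")
```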
Author:
Zhou, Shuyan, Xu, Frank F., Zhu, Hao, Zhou, Xuhui, Lo, Robert, Sridhar, Abishek, Cheng, Xianyi, Ou, Tianyue, Bisk, Yonatan, Fried, Daniel, Alon, Uri, Neubig, Graham
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with…
External link:
http://arxiv.org/abs/2307.13854
Large language models (LLMs) struggle to process complicated observations in interactive decision-making tasks. To alleviate this issue, we propose a simple hierarchical prompting approach. Diverging from previous prompting approaches that always…
External link:
http://arxiv.org/abs/2305.14257
Author:
Jiang, Zhengbao, Xu, Frank F., Gao, Luyu, Sun, Zhiqing, Liu, Qian, Dwivedi-Yu, Jane, Yang, Yiming, Callan, Jamie, Neubig, Graham
Despite the remarkable ability of large language models (LMs) to comprehend and generate language, they have a tendency to hallucinate and create factually inaccurate output. Augmenting LMs by retrieving information from external knowledge resources…
External link:
http://arxiv.org/abs/2305.06983
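The entry above concerns augmenting LMs with information retrieved from external knowledge resources. The sketch below shows only the generic retrieve-then-prompt idea, with a toy corpus and a crude lexical-overlap scorer, both hypothetical; the paper itself proposes a more involved scheme that retrieves actively during generation.
```python
import re
from collections import Counter

# Toy knowledge source; a real system would query a search index or vector store.
corpus = [
    "Joe Biden was inaugurated as U.S. president in January 2021.",
    "The Eiffel Tower is located in Paris, France.",
    "Python was created by Guido van Rossum.",
]

def tokens(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def score(query: str, doc: str) -> int:
    # Crude word-overlap relevance score, standing in for a real retriever.
    return sum((tokens(query) & tokens(doc)).values())

def build_prompt(question: str, k: int = 1) -> str:
    top_docs = sorted(corpus, key=lambda doc: score(question, doc), reverse=True)[:k]
    return "Context:\n" + "\n".join(top_docs) + f"\n\nQuestion: {question}\nAnswer:"

# The assembled prompt grounds the LM's answer in retrieved text rather than
# parametric memory alone, which is the motivation stated in the abstract.
print(build_prompt("Who created Python?"))
```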
Language models (LMs) compute the probability of a text by sequentially computing a representation of an already-seen context and using this representation to predict the next word. Currently, most LMs calculate these representations through a neural network…
External link:
http://arxiv.org/abs/2301.02828
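The entry above restates how an autoregressive LM scores text: a representation of the already-seen context predicts each next word, so the probability of the whole text factorizes into a product of next-token probabilities. A minimal sketch of that standard chain-rule scoring, using GPT-2 from Hugging Face as a stand-in model (an assumption; the paper's own analysis is not reproduced here):
```python
# Chain-rule scoring of a text with an autoregressive LM:
# log P(w_1..w_n) = sum_t log P(w_t | w_<t).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tokenizer("Language models predict the next word.", return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits            # (1, seq_len, vocab_size)

# Logits at position t give the distribution over the token at position t + 1.
log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
token_log_probs = log_probs.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)

print("log P(text) =", token_log_probs.sum().item())
```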
Publicly available source-code libraries are continuously growing and changing. This makes it impossible for models of code to keep current with all available APIs by simply training these models on existing code repositories. Thus, existing models inherently…
External link:
http://arxiv.org/abs/2207.05987
While there has been a recent burgeoning of applications at the intersection of natural and programming languages, such as code generation and code summarization, these applications are usually English-centric. This creates a barrier for program developers…
External link:
http://arxiv.org/abs/2203.08388