Showing 1 - 10 of 65
for search: '"Naseem, Tahira"'
Author:
Ramji, Keshav, Lee, Young-Suk, Astudillo, Ramón Fernandez, Sultan, Md Arafat, Naseem, Tahira, Munawar, Asim, Florian, Radu, Roukos, Salim
It is often desirable for Large Language Models (LLMs) to capture multiple objectives when providing a response. In document-grounded response generation, for example, agent responses are expected to be relevant to a user's query while also being grounded …
External link:
http://arxiv.org/abs/2403.00827
BRAIn: Bayesian Reward-conditioned Amortized Inference for natural language generation from feedback
Author:
Pandey, Gaurav, Nandwani, Yatin, Naseem, Tahira, Mishra, Mayank, Xu, Guangxuan, Raghu, Dinesh, Joshi, Sachindra, Munawar, Asim, Astudillo, Ramón Fernandez
Distribution matching methods for language model alignment, such as Generation with Distributional Control (GDC) and Distributional Policy Gradient (DPG), have not received the same level of attention in reinforcement learning from human feedback (RLHF) …
External link:
http://arxiv.org/abs/2402.02479
Author:
Crouse, Maxwell, Astudillo, Ramon, Naseem, Tahira, Chaudhury, Subhajit, Kapanipathi, Pavan, Roukos, Salim, Gray, Alexander
We introduce Logical Offline Cycle Consistency Optimization (LOCCO), a scalable, semi-supervised method for training a neural semantic parser. Conceptually, LOCCO can be viewed as a form of self-learning where the semantic parser being trained is used …
External link:
http://arxiv.org/abs/2305.20018
The sliding window approach provides an elegant way to handle contexts of sizes larger than the Transformer's input window, for tasks like language modeling. Here we extend this approach to the sequence-to-sequence task of document parsing. For this, …
External link:
http://arxiv.org/abs/2305.17273
Author:
Crouse, Maxwell, Kapanipathi, Pavan, Chaudhury, Subhajit, Naseem, Tahira, Astudillo, Ramon, Fokoue, Achille, Klinger, Tim
Nearly all general-purpose neural semantic parsers generate logical forms in a strictly top-down autoregressive fashion. Though such systems have achieved impressive results across a variety of datasets and domains, recent works have called into question …
External link:
http://arxiv.org/abs/2305.04346
Instruction fine-tuned language models, trained on a collection of instruction-annotated datasets (FLAN), have proven highly effective at improving model performance and generalization to unseen tasks. However, a majority of standard parsing tasks, including abstr…
External link:
http://arxiv.org/abs/2304.12272
Author:
Drozdov, Andrew, Zhou, Jiawei, Florian, Radu, McCallum, Andrew, Naseem, Tahira, Kim, Yoon, Astudillo, Ramon Fernandez
Transition-based parsers for Abstract Meaning Representation (AMR) rely on node-to-word alignments. These alignments are learned separately from parser training and require a complex pipeline of rule-based components, pre-processing, and post-processing …
External link:
http://arxiv.org/abs/2205.01464
Author:
Thai, Dung, Ravishankar, Srinivas, Abdelaziz, Ibrahim, Chaudhary, Mudit, Mihindukulasooriya, Nandana, Naseem, Tahira, Das, Rajarshi, Kapanipathi, Pavan, Fokoue, Achille, McCallum, Andrew
Knowledge bases (KBs) are often incomplete and constantly changing in practice. Yet, in many question answering applications coupled with knowledge bases, the sparse nature of KBs is often overlooked. To this end, we propose a case-based reasoning approach …
External link:
http://arxiv.org/abs/2204.08554
Author:
Naseem, Tahira, Blodgett, Austin, Kumaravel, Sadhana, O'Gorman, Tim, Lee, Young-Suk, Flanigan, Jeffrey, Astudillo, Ramón Fernandez, Florian, Radu, Roukos, Salim, Schneider, Nathan
Despite extensive research on parsing of English sentences into Abstract Meaning Representation (AMR) graphs, which are compared to gold graphs via the Smatch metric, full-document parsing into a unified graph representation lacks a well-defined representation …
External link:
http://arxiv.org/abs/2112.08513
Author:
Bornea, Mihaela, Astudillo, Ramon Fernandez, Naseem, Tahira, Mihindukulasooriya, Nandana, Abdelaziz, Ibrahim, Kapanipathi, Pavan, Florian, Radu, Roukos, Salim
We propose a transition-based system to transpile Abstract Meaning Representation (AMR) into SPARQL for Knowledge Base Question Answering (KBQA). This allows us to delegate part of the semantic representation to a strongly pre-trained semantic parser …
External link:
http://arxiv.org/abs/2112.07877