Showing 1 - 6 of 6 for search: '"Nair, Pranav Ajit"'
Author:
Nair, Pranav Ajit, Suggala, Arun Sai
Large language models (LLMs) have recently demonstrated remarkable performance across diverse language tasks, but their deployment is often constrained by their substantial computational and storage requirements. Quantization has emerged as a key technique…
External link:
http://arxiv.org/abs/2406.17542
Author:
S, Aishwarya P, Nair, Pranav Ajit, Samaga, Yashas, Boyd, Toby, Kumar, Sanjiv, Jain, Prateek, Netrapalli, Praneeth
The autoregressive nature of conventional large language models (LLMs) inherently limits inference speed, as tokens are generated sequentially. While speculative and parallel decoding techniques attempt to mitigate this, they face limitations: either…
External link:
http://arxiv.org/abs/2402.08644
Domain generalization remains an underexplored area in abstractive summarization. Moreover, most existing works on domain generalization rely on sophisticated training algorithms. In this paper, we propose a lightweight, weight-averaging based…
External link:
http://arxiv.org/abs/2305.16820
In this work, we analyse the role of output vocabulary for text-to-text (T2T) models on the task of SPARQL semantic parsing. We perform experiments within the context of knowledge graph question answering (KGQA), where the task is to convert questions…
External link:
http://arxiv.org/abs/2305.15108
In this work, we present an end-to-end Knowledge Graph Question Answering (KGQA) system named GETT-QA. GETT-QA uses T5, a popular text-to-text pre-trained language model. The model takes a question in natural language as input and produces a simpler…
External link:
http://arxiv.org/abs/2303.13284
In this work, we focus on the task of generating SPARQL queries from natural language questions, which can then be executed on Knowledge Graphs (KGs). We assume that gold entities and relations have been provided, and the remaining task is to arrange them…
External link:
http://arxiv.org/abs/2204.12793