Showing 1 - 10 of 124 results for search: '"Desai, Shrey A"'
Author:
Shrivastava, Akshat, Desai, Shrey, Gupta, Anchit, Elkahky, Ali, Livshits, Aleksandr, Zotov, Alexander, Aly, Ahmed
Task-oriented semantic parsing models have achieved strong results in recent years, but unfortunately do not strike an appealing balance between model size, runtime latency, and cross-domain generalizability. We tackle this problem by introducing scenario-based semantic parsing…
External link:
http://arxiv.org/abs/2202.00901
Author:
Desai, Shrey, Shrivastava, Akshat, Rill, Justin, Moran, Brian, Saleem, Safiyyah, Zotov, Alexander, Aly, Ahmed
Data efficiency, despite being an attractive characteristic, is often challenging to measure and optimize for in task-oriented semantic parsing; unlike exact match, it can require both model- and domain-specific setups, which have, historically, varied…
External link:
http://arxiv.org/abs/2107.04736
Author:
Desai, Shrey, Aly, Ahmed
Modern task-oriented semantic parsing approaches typically use seq2seq transformers to map textual utterances to semantic frames comprised of intents and slots. While these models are empirically strong, their specific strengths and weaknesses have largely remained unexplored…
External link:
http://arxiv.org/abs/2105.13496
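For readers unfamiliar with the frame formalism mentioned above: a semantic frame nests intents and slots over spans of the utterance, and seq2seq parsers decode a linearized form of it. A minimal sketch in Python, assuming a TOP-style serialization; the class names and the example utterance are illustrative, not taken from the paper:

```python
# Hypothetical sketch of an intent/slot frame and its seq2seq-style
# linearization; labels follow the common IN:/SL: convention.
from dataclasses import dataclass, field

@dataclass
class Slot:
    label: str  # e.g. "SL:LOCATION"
    text: str   # the utterance span that fills the slot

@dataclass
class Frame:
    intent: str                       # e.g. "IN:GET_WEATHER"
    slots: list = field(default_factory=list)

    def serialize(self) -> str:
        # Linearize the frame the way seq2seq parsers typically decode it.
        inner = " ".join(f"[{s.label} {s.text} ]" for s in self.slots)
        return f"[{self.intent} {inner} ]".replace("  ", " ")

frame = Frame("IN:GET_WEATHER", [Slot("SL:LOCATION", "Boston")])
print(frame.serialize())  # -> [IN:GET_WEATHER [SL:LOCATION Boston ] ]
```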
Author:
Shrivastava, Akshat, Chuang, Pierce, Babu, Arun, Desai, Shrey, Arora, Abhinav, Zotov, Alexander, Aly, Ahmed
An effective recipe for building seq2seq, non-autoregressive, task-oriented parsers to map utterances to semantic frames proceeds in three steps: encoding an utterance $x$, predicting a frame's length $|y|$, and decoding a $|y|$-sized frame with utterance and ontology tokens…
External link:
http://arxiv.org/abs/2104.07275
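The three-step recipe in the snippet above (encode $x$, predict $|y|$, fill all $|y|$ positions in parallel) can be made concrete. The sketch below is a hypothetical minimal model, not the paper's architecture; module names, shapes, and the mean-pooled conditioning are assumptions made for brevity:

```python
# Minimal sketch of a non-autoregressive parser: encode, predict length,
# then emit every frame position in one parallel pass.
import torch
import torch.nn as nn

class NonAutoregressiveParser(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 256, max_len: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.length_head = nn.Linear(dim, max_len)    # step 2: predict |y|
        self.token_head = nn.Linear(dim, vocab_size)  # step 3: fill positions
        self.frame_pos = nn.Embedding(max_len, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.encoder(self.embed(x))                       # step 1: encode x
        length = self.length_head(h.mean(dim=1)).argmax(-1)   # step 2: |y|
        n = max(int(length.max()), 1)
        # Step 3: decode all |y| frame positions at once, conditioning each
        # on the (mean-pooled, for brevity) encoded utterance.
        pos = self.frame_pos(torch.arange(n))
        logits = self.token_head(pos.unsqueeze(0) + h.mean(1, keepdim=True))
        return logits  # (batch, |y|, vocab): one prediction per frame slot

x = torch.randint(0, 1000, (1, 8))  # a toy 8-token utterance
print(NonAutoregressiveParser(1000)(x).shape)
```

Because every position is predicted in one pass rather than token by token, decoding latency is independent of frame length, which is the appeal of this family of parsers.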
Task-oriented semantic parsing models typically have high resource requirements: to support new ontologies (i.e., intents and slots), practitioners crowdsource thousands of samples for supervised fine-tuning. Partly, this is due to the structure of…
External link:
http://arxiv.org/abs/2104.07224
An advantage of seq2seq abstractive summarization models is that they generate text in a free-form manner, but this flexibility makes it difficult to interpret model behavior. In this work, we analyze summarization decoders in both blackbox and whitebox…
External link:
http://arxiv.org/abs/2010.07882
Compressive summarization systems typically rely on a crafted set of syntactic rules to determine what spans of possible summary sentences can be deleted, then learn a model of what to actually delete by optimizing for content selection (ROUGE). In this work…
External link:
http://arxiv.org/abs/2010.07886
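The first sentence of the snippet above describes a pipeline worth making concrete: span deletions licensed by syntactic rules are scored purely by content selection (ROUGE). A toy sketch, assuming ROUGE-1 F1 and a hand-supplied deletable span; the rule set and example are illustrative, not the paper's:

```python
# Toy illustration of ROUGE-driven compression: try each licensed deletion
# and keep the compression that scores best against a reference summary.
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    c, r = Counter(candidate.lower().split()), Counter(reference.lower().split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    p, rec = overlap / sum(c.values()), overlap / sum(r.values())
    return 2 * p * rec / (p + rec)

def best_compression(sentence: str, deletable_spans: list, reference: str) -> str:
    # The spans stand in for syntactic rules such as "appositives and
    # relative clauses may be dropped."
    candidates = [sentence] + [sentence.replace(span, " ").strip()
                               for span in deletable_spans]
    return max(candidates, key=lambda s: rouge1_f1(s, reference))

sent = "The storm , which formed on Tuesday , hit the coast on Friday"
print(best_compression(sent, [", which formed on Tuesday ,"],
                       "the storm hit the coast on friday"))
```

Note the failure mode this setup invites, which the snippet hints at: a deletion can raise ROUGE while removing content needed for the sentence to remain well formed, since nothing in the objective checks grammaticality.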
Author:
Ahuja, Ojas, Desai, Shrey
Task-oriented dialog models typically leverage complex neural architectures and large-scale, pre-trained Transformers to achieve state-of-the-art performance on popular natural language understanding benchmarks. However, these models frequently have…
External link:
http://arxiv.org/abs/2006.03701
Natural disasters (e.g., hurricanes) affect millions of people each year, causing widespread destruction in their wake. People have recently taken to social media websites (e.g., Twitter) to share their sentiments and feelings with the larger community…
External link:
http://arxiv.org/abs/2004.14299
Author:
Desai, Shrey, Durrett, Greg
Pre-trained Transformers are now ubiquitous in natural language processing, but despite their high end-task performance, little is known empirically about whether they are calibrated. Specifically, do these models' posterior probabilities provide an accurate empirical measure of how likely the model is to be correct on a given example?
External link:
http://arxiv.org/abs/2003.07892
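The calibration question posed above is commonly quantified with expected calibration error (ECE), which bins predictions by confidence and averages the gap between per-bin accuracy and mean confidence. A minimal sketch with toy inputs; nothing below is taken from the paper's code:

```python
# Expected calibration error: partition predictions into confidence bins
# and average |accuracy - confidence|, weighted by bin size.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins: int = 10) -> float:
    confidences, correct = np.asarray(confidences), np.asarray(correct)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of examples in bin
    return ece

# An overconfident toy model: ~90% average confidence, 60% accuracy.
conf = np.array([0.90, 0.95, 0.85, 0.90, 0.92])
hit = np.array([1, 1, 0, 1, 0])
print(f"ECE = {expected_calibration_error(conf, hit):.3f}")
```

A perfectly calibrated model has ECE of zero: among predictions made with, say, 80% confidence, exactly 80% are correct.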