Showing 1 - 10
of 21
for search: '"Nazneen Fatema Rajani"'
Author:
Zachary Taschdjian, Christopher Ré, Karan Goel, Nazneen Fatema Rajani, Mohit Bansal, Jesse Vig
Published in:
NAACL-HLT (Demonstrations)
Despite impressive performance on standard benchmarks, natural language processing (NLP) models are often brittle when deployed in real-world systems. In this work, we identify challenges with evaluating NLP systems and propose a solution in the form …
Published in:
NAACL-HLT (Industry Papers)
Named entity linking (NEL) or mapping “strings” to “things” in a knowledge base is a fundamental preprocessing step in systems that require knowledge of entities such as information extraction and question answering. In this work, we lay out …
Published in:
ACL (student)
Graph-to-text generation has benefited from pre-trained language models (PLMs) in achieving better performance than structured graph encoders. However, they fail to fully utilize the structure information of the input graph. In this paper, we aim to …
Published in:
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations.
Novel neural architectures, training strategies, and the availability of large-scale corpora have been the driving force behind recent progress in abstractive text summarization. However, due to the black-box nature of neural models, uninformative e…
Published in:
EMNLP (1)
A standard way to address different NLP problems is by first constructing a problem-specific dataset, then building a model to fit this dataset. To build the ultimate artificial intelligence, we desire a single machine that can handle diverse new pro…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::9ca875a234f66e6a4e0c4ba075c005cb
http://arxiv.org/abs/2010.02584
Author:
Xiangru Tang, Ankit Gupta, Rui Zhang, Nadia Irwanto, Nazneen Fatema Rajani, Amrit Rau, Abhinand Sivaprasad, Richard Socher, Chiachun Hsieh, Linyong Nan, Neha Verma, Aadit Vyas, Xi Victoria Lin, Yangxiaokang Liu, Yasin Tarabar, Jessica Pan, Dragomir R. Radev, Tao Yu, Faiaz Rahman, Caiming Xiong, Yi Chern Tan, Mutethia Mutuma, Pranav Krishna, Ahmad Zaidi
Published in:
NAACL-HLT
We present DART, an open domain structured DAta Record to Text generation dataset with over 82k instances (DARTs). Data-to-Text annotations can be a costly process, especially when dealing with tables which are the major source of structured data and …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::41efc82e79bc1a0b3dad70d191185684
http://arxiv.org/abs/2007.02871
Transformer architectures have proven to learn useful representations for protein classification and generation tasks. However, these representations present challenges in interpretability. In this work, we demonstrate a set of methods for analyzing …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::475f8da1ae12dbb0786e3af094d34678
Author:
Caiming Xiong, Bryan McCann, Tianlu Wang, Nazneen Fatema Rajani, Xi Victoria Lin, Vicente Ordonez
Published in:
ACL
Word embeddings derived from human-generated corpora inherit strong gender bias which can be further amplified by downstream models. Some commonly adopted debiasing approaches, including the seminal Hard Debias algorithm, apply post-processing proced…
Author:
Abhijit Gupta, Aadit Vyas, Nazneen Fatema Rajani, Richard Socher, Stephan Zheng, Rui Zhang, Caiming Xiong, Yi Chern Tan, Dragomir R. Radev, Jeremy Weiss
Published in:
ACL
Neural networks lack the ability to reason about qualitative physics and so cannot generalize to scenarios and tasks unseen during training. We propose ESPRIT, a framework for commonsense reasoning about qualitative physics in natural language that g…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::1a45c7ced28640c5b860df7816ddc3c1
Published in:
ACL (1)
Deep learning models perform poorly on tasks that require commonsense reasoning, which often necessitates some form of world-knowledge or reasoning over information not immediately present in the input. We collect human explanations for commonsense r…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::651348cceb0d3ac28c9bca97b9483e75
http://arxiv.org/abs/1906.02361