Showing 1 - 8 of 8 for the search: '"Atticus Geiger"'
Published in:
Proceedings of the Annual Meeting of the Cognitive Science Society, vol 45, iss 45
When choosing how to describe what happened, we have a number of causal verbs at our disposal. In this paper, we develop a model-theoretic formal semantics for nine causal verbs that span the categories of CAUSE, ENABLE, and PREVENT. We use structural …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::5f8e1bcb53ffd4a1d4ae3136d56c409f
https://escholarship.org/uc/item/5802g5m3
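The abstract above treats CAUSE, ENABLE, and PREVENT as categories defined over a structural model. As a rough illustration of how such definitions can be stated counterfactually, here is a toy structural causal model; the spark/oxygen/sprinkler variables and the particular definitions are invented for this sketch and are not taken from the paper:

```python
# Toy structural causal model (SCM): a spark and oxygen jointly produce
# fire; a sprinkler stops it. The variables and the counterfactual
# definitions below are illustrative assumptions, not the paper's
# formal semantics.

def fire(spark: bool, oxygen: bool, sprinkler: bool) -> bool:
    """Structural equation for the effect variable."""
    return spark and oxygen and not sprinkler

# CAUSE: the effect occurs, and intervening to remove the spark removes it.
actual = fire(True, True, False)
without_spark = fire(False, True, False)
spark_causes_fire = actual and not without_spark

# ENABLE: oxygen is necessary for the effect but does not produce it alone.
without_oxygen = fire(True, False, False)
oxygen_alone = fire(False, True, False)
oxygen_enables_fire = actual and not without_oxygen and not oxygen_alone

# PREVENT: turning the sprinkler on blocks an effect that otherwise occurs.
with_sprinkler = fire(True, True, True)
sprinkler_prevents_fire = actual and not with_sprinkler

print(spark_causes_fire, oxygen_enables_fire, sprinkler_prevents_fire)
# → True True True
```

The point of the intervention-based phrasing is that each verb category is distinguished by which counterfactual queries it makes true, not by the actual outcome alone.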
Published in:
Psychological Review.
The notion of equality (identity) is simple and ubiquitous, making it a key case study for broader questions about the representations supporting abstract relational reasoning. Previous work suggested that neural networks were not suitable models of …
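The equality task the abstract refers to can be framed as same/different classification over vector pairs. A minimal sketch of such a dataset, with dimensions and sampling scheme chosen arbitrarily for illustration (not the paper's actual setup):

```python
# Same/different (equality) task over vector pairs: label 1 iff the two
# halves of the input are identical. Dimensions and sampling are assumed
# values for this sketch, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

def make_equality_dataset(n_pairs=100, dim=10):
    """Return (X, y): each row of X is a pair side by side; y marks equality."""
    left = rng.integers(0, 2, size=(n_pairs, dim))
    right = left.copy()
    # Make the second half of the pairs unequal by flipping one bit.
    for i in range(n_pairs // 2, n_pairs):
        j = rng.integers(0, dim)
        right[i, j] = 1 - right[i, j]
    X = np.concatenate([left, right], axis=1)
    y = (left == right).all(axis=1).astype(int)
    return X, y

X, y = make_equality_dataset()
print(X.shape)  # → (100, 20)
```

What makes the task a good probe is that a model must compare the two halves position by position rather than memorize surface features of individual vectors.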
Author:
Pratik Ringshia, Douwe Kiela, Sebastian Riedel, Grusha Prasad, Christopher Potts, Bertie Vidgen, Atticus Geiger, Tristan Thrush, Robin Jia, Zeerak Waseem, Mohit Bansal, Amanpreet Singh, Divyansh Kaushik, Yixin Nie, Zhiyi Ma, Max Bartolo, Pontus Stenetorp, Zhengxuan Wu, Adina Williams
Published in:
NAACL-HLT
We introduce Dynabench, an open-source platform for dynamic dataset creation and model benchmarking. Dynabench runs in a web browser and supports human-and-model-in-the-loop dataset creation: annotators seek to create examples that a target model will …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::c2d8aac3af28c705abd7c3fc64efaa44
http://arxiv.org/abs/2104.14337
Published in:
ACL/IJCNLP (1)
We introduce DynaSent ('Dynamic Sentiment'), a new English-language benchmark task for ternary (positive/negative/neutral) sentiment analysis. DynaSent combines naturally occurring sentences with sentences created using the open-source Dynabench Platform …
Author:
Zhengxuan Wu, Atticus Geiger, Joshua Rozner, Elisa Kreiss, Hanson Lu, Thomas Icard, Christopher Potts, Noah Goodman
Distillation efforts have led to language models that are more compact and efficient without serious drops in performance. The standard approach to distillation trains a student model against two objectives: a task-specific objective (e.g., language …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::915f43c85c77df9f0e6038086db50eaa
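The abstract above describes the standard two-objective distillation setup: a task loss on gold labels combined with a soft loss against the teacher's output distribution. A minimal sketch under assumed weighting and temperature values (the exact objectives in the paper may differ):

```python
# Two-objective distillation loss: cross-entropy on the gold label plus a
# soft cross-entropy against the teacher's temperature-scaled distribution.
# alpha and temperature are assumed values for this sketch.
import numpy as np

def softmax(logits, temperature=1.0):
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, gold_label,
                      alpha=0.5, temperature=2.0):
    """alpha trades off the hard task loss against the soft teacher loss."""
    task_loss = -np.log(softmax(student_logits)[gold_label])
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    soft_loss = -(p_teacher * np.log(p_student)).sum()
    return alpha * task_loss + (1 - alpha) * soft_loss

loss = distillation_loss([2.0, 0.5, -1.0], [1.5, 1.0, -0.5], gold_label=0)
print(loss > 0.0)  # both terms are cross-entropies, so the loss is positive
```

Raising the temperature softens both distributions, which lets the student learn from the relative probabilities the teacher assigns to incorrect classes.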
Neural Natural Language Inference Models Partially Embed Theories of Lexical Entailment and Negation
Published in:
BlackboxNLP@EMNLP
We address whether neural models for Natural Language Inference (NLI) can learn the compositional interactions between lexical entailment and negation, using four methods: the behavioral evaluation methods of (1) challenge test sets and (2) systematic …
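A challenge-test-set evaluation of the kind the abstract mentions can be illustrated with a hand-built probe for negation and lexical entailment. The tiny example set and the deliberately shortcut-prone stand-in "model" below are invented for illustration, not taken from the paper:

```python
# Hand-built challenge set probing negation + lexical entailment, plus a
# naive lexical-overlap "model" -- both invented here to show the
# evaluation pattern, not the systems studied in the paper.
challenge_set = [
    # Upward entailment: dog => animal.
    ("x is a dog", "x is an animal", "entailment"),
    # Negation reverses the direction: not-animal => not-dog.
    ("x is not an animal", "x is not a dog", "entailment"),
    ("x is an animal", "x is a dog", "neutral"),
    ("x is not a dog", "x is not an animal", "neutral"),
]

def shortcut_model(premise, hypothesis):
    """A heuristic that ignores negation entirely -- exactly the kind of
    shortcut a challenge set is designed to expose."""
    return "entailment" if "dog" in premise else "neutral"

accuracy = sum(shortcut_model(p, h) == g
               for p, h, g in challenge_set) / len(challenge_set)
print(accuracy)  # → 0.5  (the heuristic fails on both negated cases)
```

Because the examples are constructed with known semantic properties, below-ceiling accuracy localizes the failure to the negation/entailment interaction rather than to noise in a naturalistic corpus.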
Published in:
EMNLP/IJCNLP (1)
Deep learning models for semantics are generally evaluated using naturalistic corpora. Adversarial testing methods, in which models are evaluated on new examples with known semantic properties, have begun to reveal that good performance at these naturalistic …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::d5219e987b71382db06963a085065a9c
Author:
Olivia Li, Atticus Geiger, Alex Tamkin, Clemens Rosenbaum, Sandhini Agarwal, Matthew Riemer, Tim Klinger, Lauri Karttunen, Ignacio Cases, Christopher Potts, Dan Jurafsky, Joshua D. Greene
Published in:
NAACL-HLT (1)
We introduce Recursive Routing Networks (RRNs), which are modular, adaptable models that learn effectively in diverse environments. RRNs consist of a set of functions, typically organized into a grid, and a meta-learner decision-making component called …
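The routing idea sketched in the abstract, a pool of functions plus a decision-making component that selects among them step by step, can be illustrated as follows; the module pool and the hand-coded router here are stand-ins for the learned components of an actual RRN:

```python
# Minimal routing sketch: a pool of modules plus a router that picks which
# module to apply at each step. The pool, the scoring rule, and the number
# of steps are invented here; in an RRN the router is a learned meta-learner.
modules = {
    "double": lambda x: x * 2,
    "negate": lambda x: -x,
    "shift":  lambda x: x + 1,
}

def router(state):
    """Stand-in decision-maker: choose a module from the current state."""
    names = sorted(modules)            # ["double", "negate", "shift"]
    return names[abs(int(state)) % len(names)]

def run_recursive(x, steps=3):
    """Apply `steps` routed modules in sequence, reusing one shared pool."""
    trace = []
    for _ in range(steps):
        name = router(x)
        x = modules[name](x)
        trace.append(name)
    return x, trace

value, trace = run_recursive(2)
print(value, trace)  # → 12 ['shift', 'double', 'double']
```

The design point is modularity: because the same small pool of functions is reused across steps and inputs, the router, not the modules, carries the adaptation to a new environment.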