Showing 1 - 10 of 226 for search: '"William W. Cohen"'
Author:
Danish Pruthi, Rachit Bansal, Bhuwan Dhingra, Livio Baldini Soares, Michael Collins, Zachary C. Lipton, Graham Neubig, William W. Cohen
Published in:
Transactions of the Association for Computational Linguistics, Vol 10, Pp 359-375 (2022)
Abstract: While many methods purport to explain predictions by highlighting salient features, what aims these explanations serve and how they ought to be evaluated often go unstated. In this work, we introduce a framework to quantify the value of explanations…
External link:
https://doaj.org/article/f777d8bfd156496f83f1cdcc09d24f38
Author:
Bhuwan Dhingra, Jeremy R. Cole, Julian Martin Eisenschlos, Daniel Gillick, Jacob Eisenstein, William W. Cohen
Published in:
Transactions of the Association for Computational Linguistics, Vol 10, Pp 257-273 (2022)
Abstract: Many facts come with an expiration date, from the name of the President to the basketball team LeBron James plays for. However, most language models (LMs) are trained on snapshots of data collected at a specific moment in time. This can limit…
External link:
https://doaj.org/article/e3775bd76d874aabb6bb93e254c8b254
Published in:
Journal of Artificial Intelligence Research. 67:285-325
We present an implementation of a probabilistic first-order logic called TensorLog, in which classes of logical queries are compiled into differentiable functions in a neural-network infrastructure such as Tensorflow or Theano. This leads to a close…
Symbolic reasoning systems based on first-order logics are computationally powerful, and feedforward neural networks are computationally efficient, so unless P=NP, neural networks cannot, in general, emulate symbolic logics. Hence bridging the gap between…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::e657a502dd8c9a3f342a32c94dfd2a14
https://doi.org/10.3233/faia210352
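The TensorLog entry above describes compiling classes of logical queries into differentiable functions over a knowledge base. A minimal sketch of that idea, assuming a toy knowledge base encoded as dense relation matrices; the entities, facts, and the rule below are illustrative, not taken from the paper:

```python
# Toy sketch: each binary relation in a small knowledge base is encoded as an
# entity-by-entity matrix, and a conjunctive rule is "compiled" into a chain of
# matrix-vector products, so the query answer is a differentiable function of
# the relation weights.
import numpy as np

entities = ["alice", "bob", "carol", "dave"]
idx = {e: i for i, e in enumerate(entities)}
n = len(entities)

def relation_matrix(pairs):
    """Build an n x n matrix M with M[head, tail] = 1 for each fact."""
    m = np.zeros((n, n))
    for head, tail in pairs:
        m[idx[head], idx[tail]] = 1.0
    return m

# Toy facts: parent(alice, bob), parent(bob, carol), brother(dave, alice)
parent = relation_matrix([("alice", "bob"), ("bob", "carol")])
brother = relation_matrix([("dave", "alice")])

def uncle_query(x):
    """Rule uncle(X, Y) :- brother(X, Z), parent(Z, Y), compiled as a
    one-hot vector pushed through two relation matrices."""
    v = np.zeros(n)
    v[idx[x]] = 1.0
    scores = v @ brother @ parent  # differentiable w.r.t. the relation matrices
    return {entities[i]: s for i, s in enumerate(scores) if s > 0}

print(uncle_query("dave"))  # {'bob': 1.0}: dave's brother alice is bob's parent
```

Longer rule bodies correspond to longer chains of matrix products, and because the answer scores are products of relation matrices, gradients can flow back into learned fact weights; this is only a sketch of the compilation idea, not the paper's implementation.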
Published in:
DeeLIO@NAACL-HLT
Existing work shows the benefits of integrating KBs with textual evidence for QA only on questions that are answerable by KBs alone (Sun et al., 2019). In contrast, real world QA systems often have to deal with questions that might not be directly answerable…
Published in:
NAACL-HLT
Past research has demonstrated that large neural language models (LMs) encode surprising amounts of factual information; however, augmenting or modifying this information requires modifying a corpus and retraining, which is computationally expensive.
Author:
Danish Pruthi, Rachit Bansal, Bhuwan Dhingra, Livio Baldini Soares, Michael Collins, Zachary C. Lipton, Graham Neubig, William W. Cohen
While many methods purport to explain predictions by highlighting salient features, what aims these explanations serve and how they ought to be evaluated often go unstated. In this work, we introduce a framework to quantify the value of explanations…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::3b58cc0254285386fb2bc903608feae3
http://arxiv.org/abs/2012.00893
Published in:
NAACL-HLT
Current commonsense reasoning research focuses on developing models that use commonsense knowledge to answer multiple-choice questions. However, systems designed to answer multiple-choice questions may not be useful in applications that do not provide…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::74d860afa4304465f4e5d4439cecc98a
Author:
William W. Cohen
Published in:
Electronic Proceedings in Theoretical Computer Science. 345:1-1
Published in:
Knowledge-Based Systems. 115:80-86
Knowledge bases (KBs) such as Freebase and Yago are rather incomplete, and the situation is more serious in non-English KBs, such as Chinese KBs. In this paper, we present a language-independent framework to tackle the slot-filling task by searching…