Showing 1 - 10 of 12 for search query: '"Harbecke, David"'
An adverse drug effect (ADE) is any harmful event resulting from medical drug treatment. Despite their importance, ADEs are often under-reported in official channels. Some research has therefore turned to detecting discussions of ADEs in social media…
External link:
http://arxiv.org/abs/2407.02432
Prompting pre-trained language models has achieved impressive performance on various NLP tasks, especially in low data regimes. Despite the success of prompting in monolingual settings, applying prompt-based methods in multilingual scenarios has been…
External link:
http://arxiv.org/abs/2210.13838
Relation classification models are conventionally evaluated using only a single measure, e.g., micro-F1, macro-F1 or AUC. In this work, we analyze weighting schemes, such as micro and macro, for imbalanced datasets. We introduce a framework for weigh…
External link:
http://arxiv.org/abs/2205.09460
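The snippet above contrasts micro- and macro-averaged F1 on imbalanced data. A minimal pure-Python sketch of the two standard aggregation schemes (illustrative only; this is not the weighting framework the paper introduces):

```python
from collections import Counter

def f1_scores(y_true, y_pred, labels):
    """Compute micro- and macro-averaged F1.

    micro-F1 pools all decisions, so frequent classes dominate;
    macro-F1 averages per-class F1, weighting each class equally.
    """
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1

    # micro: pool the counts over all classes before computing F1
    TP, FP, FN = sum(tp.values()), sum(fp.values()), sum(fn.values())
    micro = 2 * TP / (2 * TP + FP + FN) if TP else 0.0

    # macro: unweighted mean of per-class F1
    per_class = []
    for c in labels:
        denom = 2 * tp[c] + fp[c] + fn[c]
        per_class.append(2 * tp[c] / denom if denom else 0.0)
    macro = sum(per_class) / len(labels)
    return micro, macro

# skewed toy data: 8 "a" vs 2 "b"; one "b" misclassified as "a"
micro, macro = f1_scores(["a"] * 8 + ["b"] * 2, ["a"] * 9 + ["b"], ["a", "b"])
```

On this skewed example micro-F1 comes out higher than macro-F1, because the single minority-class error barely moves the pooled counts but halves the per-class average's second term.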
Author:
Harbecke, David
Deep neural networks are powerful statistical learners. However, their predictions do not come with an explanation of their process. To analyze these models, explanation methods are being developed. We present a novel explanation method, called OLM, …
External link:
http://arxiv.org/abs/2101.11889
Author:
Harbecke, David, Alt, Christoph
Recently, state-of-the-art NLP models gained an increasing syntactic and semantic understanding of language, and explanation methods are crucial to understand their decisions. Occlusion is a well established method that provides explanations on discr…
External link:
http://arxiv.org/abs/2004.09890
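The well-established occlusion method mentioned above can be sketched in a few lines: delete each token in turn and record the drop in the model's probability for the target class. The `predict_proba` interface and the toy model below are hypothetical stand-ins for illustration, not the paper's setup:

```python
def occlusion_scores(tokens, predict_proba, target):
    """Occlusion relevance: for each token, delete it and record the
    drop in the model's probability for the target class.

    `predict_proba` is any callable mapping a token list to a
    class -> probability dict (a hypothetical interface).
    """
    base = predict_proba(tokens)[target]
    scores = []
    for i in range(len(tokens)):
        occluded = tokens[:i] + tokens[i + 1:]
        scores.append(base - predict_proba(occluded)[target])
    return scores

# toy "sentiment" model: P(POS) grows with the count of "good"
def toy_model(tokens):
    p = min(0.5 + 0.4 * tokens.count("good"), 0.99)
    return {"POS": p, "NEG": 1 - p}

scores = occlusion_scores(["a", "good", "movie"], toy_model, "POS")
```

Here only removing "good" changes the toy model's output, so only that token receives a nonzero relevance score.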
Representations in the hidden layers of Deep Neural Networks (DNN) are often hard to interpret since it is difficult to project them into an interpretable domain. Graph Convolutional Networks (GCN) allow this projection, but existing explainability m…
External link:
http://arxiv.org/abs/1909.10911
Distributed word vector spaces are considered hard to interpret which hinders the understanding of natural language processing (NLP) models. In this work, we introduce a new method to interpret arbitrary samples from a word vector space. To this end, …
External link:
http://arxiv.org/abs/1904.01500
Author:
Schwarzenberg, Robert, Harbecke, David, Macketanz, Vivien, Avramidis, Eleftherios, Möller, Sebastian
Evaluating translation models is a trade-off between effort and detail. On the one end of the spectrum there are automatic count-based methods such as BLEU, on the other end linguistic evaluations by humans, which arguably are more informative but al…
External link:
http://arxiv.org/abs/1903.12017
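The count-based end of the spectrum mentioned above can be illustrated with BLEU's core quantity, clipped (modified) n-gram precision. This is a simplified sketch of the standard BLEU building block, not the paper's evaluation method:

```python
from collections import Counter

def modified_ngram_precision(candidate, reference, n):
    """Clipped n-gram precision, the count-based core of BLEU.

    Each candidate n-gram's count is clipped to its count in the
    reference, so repeating a matching word cannot inflate the score.
    """
    cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    clipped = sum(min(c, ref[g]) for g, c in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0

# degenerate candidate of repeated words: clipping caps the credit
p1 = modified_ngram_precision(
    "the the the the the the the".split(),
    "the cat is on the mat".split(),
    n=1,
)
```

In the classic degenerate example above, the seven-word candidate only gets credit for the two occurrences of "the" in the reference, giving a unigram precision of 2/7 rather than 1.0; full BLEU additionally combines several n-gram orders and a brevity penalty.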
PatternAttribution is a recent method, introduced in the vision domain, that explains classifications of deep neural networks. We demonstrate that it also generates meaningful interpretations in the language domain.
Comment: Appears in 2018 EMNL…
External link:
http://arxiv.org/abs/1808.04127