Showing 1 - 4 of 4 for search: '"Rateike, Miriam"'
We propose an auditing method to identify whether a large language model (LLM) encodes patterns such as hallucinations in its internal states, which may propagate to downstream tasks. We introduce a weakly supervised auditing technique using a subset …
External link:
http://arxiv.org/abs/2312.02798
Neglecting the effect that decisions have on individuals (and thus, on the underlying data distribution) when designing algorithmic decision-making policies may increase inequalities and unfairness in the long term - even if fairness considerations w…
External link:
http://arxiv.org/abs/2311.12447
Decision-making algorithms, in practice, are often trained on data that exhibits a variety of biases. Decision-makers often aim to take decisions based on some ground-truth target that is assumed or expected to be unbiased, i.e., equally distributed …
External link:
http://arxiv.org/abs/2205.04790
In this paper, we introduce VACA, a novel class of variational graph autoencoders for causal inference in the absence of hidden confounders, when only observational data and the causal graph are available. Without making any parametric assumptions, VACA …
External link:
http://arxiv.org/abs/2110.14690