Showing 1 - 10 of 20 for search: '"Fischer, Marc"'
Large language models are widespread, with their performance on benchmarks frequently guiding user preferences for one model over another. However, the vast amount of data these models are trained on can inadvertently lead to contamination with publi…
External link: http://arxiv.org/abs/2402.02823
While the ImageNet dataset has been driving computer vision research over the past decade, significant label noise and ambiguity have made top-1 accuracy an insufficient measure of further progress. To address this, new label-sets and evaluation prot…
External link: http://arxiv.org/abs/2401.02430
Many recent prompting strategies for large language models (LLMs) query the model multiple times sequentially -- first to produce intermediate results and then the final answer. However, using these methods, both decoder and model are unaware of pote…
External link: http://arxiv.org/abs/2311.04954
As robustness verification methods are becoming more precise, training certifiably robust neural networks is becoming ever more relevant. To this end, certified training methods compute and then optimize an upper bound on the worst-case loss over a r…
External link: http://arxiv.org/abs/2306.10426
Neural Ordinary Differential Equations (NODEs) are a novel neural architecture, built around initial value problems with learned dynamics which are solved during inference. Thought to be inherently more robust against adversarial perturbations, they…
External link: http://arxiv.org/abs/2303.05246
Large language models have demonstrated outstanding performance on a wide range of tasks such as question answering and code generation. On a high level, given an input, a language model can be used to automatically complete the sequence in a statist…
External link: http://arxiv.org/abs/2212.06094
Reliable neural networks (NNs) provide important inference-time reliability guarantees such as fairness and robustness. Complementarily, privacy-preserving NN inference protects the privacy of client data. So far these two emerging areas have been la…
External link: http://arxiv.org/abs/2210.15614
Tree-based models are used in many high-stakes application domains such as finance and medicine, where robustness and interpretability are of utmost importance. Yet, methods for improving and certifying their robustness are severely under-explored, i…
External link: http://arxiv.org/abs/2205.13909
Randomized Smoothing (RS) is considered the state-of-the-art approach to obtain certifiably robust models for challenging tasks. However, current RS approaches drastically decrease standard accuracy on unperturbed data, severely limiting their real-w…
External link: http://arxiv.org/abs/2204.00487
We present a new abstract interpretation framework for the precise over-approximation of numerical fixpoint iterators. Our key observation is that unlike in standard abstract interpretation (AI), typically used to over-approximate all reachable progr…
External link: http://arxiv.org/abs/2110.08260