Showing 1 - 10 of 2,167
for search: '"Fischer, Marc"'
Author:
Debenedetti, Edoardo, Zhang, Jie, Balunović, Mislav, Beurer-Kellner, Luca, Fischer, Marc, Tramèr, Florian
AI agents aim to solve complex tasks by combining text-based reasoning with external tool calls. Unfortunately, AI agents are vulnerable to prompt injection attacks where data returned by external tools hijacks the agent to execute malicious tasks. …
External link:
http://arxiv.org/abs/2406.13352
Author:
Balauca, Stefan, Müller, Mark Niklas, Mao, Yuhao, Baader, Maximilian, Fischer, Marc, Vechev, Martin
Training neural networks with high certified accuracy against adversarial examples remains an open problem despite significant efforts. While certification methods can effectively leverage tight convex relaxations for bound computation, in training, …
External link:
http://arxiv.org/abs/2403.07095
To ensure that text generated by large language models (LLMs) is in an expected format, constrained decoding proposes to enforce strict formal language constraints during generation. However, as we show in this work, not only do such methods incur …
External link:
http://arxiv.org/abs/2403.06988
Large language models are widespread, with their performance on benchmarks frequently guiding user preferences for one model over another. However, the vast amount of data these models are trained on can inadvertently lead to contamination with …
External link:
http://arxiv.org/abs/2402.02823
As Large Language Models (LLMs) are deployed more widely, customization with respect to vocabulary, style, and character becomes more important. In this work, we introduce model arithmetic, a novel inference framework for composing and biasing LLMs …
External link:
http://arxiv.org/abs/2311.14479
While the ImageNet dataset has been driving computer vision research over the past decade, significant label noise and ambiguity have made top-1 accuracy an insufficient measure of further progress. To address this, new label-sets and evaluation …
External link:
http://arxiv.org/abs/2401.02430
Many recent prompting strategies for large language models (LLMs) query the model multiple times sequentially -- first to produce intermediate results and then the final answer. However, using these methods, both decoder and model are unaware of …
External link:
http://arxiv.org/abs/2311.04954
As robustness verification methods are becoming more precise, training certifiably robust neural networks is becoming ever more relevant. To this end, certified training methods compute and then optimize an upper bound on the worst-case loss over a …
External link:
http://arxiv.org/abs/2306.10426
Author:
Besta, Maciej, Gerstenberger, Robert, Fischer, Marc, Podstawski, Michał, Blach, Nils, Egeli, Berke, Mitenkov, Georgy, Chlapek, Wojciech, Michalewicz, Marek, Niewiadomski, Hubert, Müller, Jürgen, Hoefler, Torsten
Published in:
Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis, 2023 (SC '23)
Graph databases (GDBs) are crucial in academic and industry applications. The key challenges in developing GDBs are achieving high performance, scalability, programmability, and portability. To tackle these challenges, we harness established practice …
External link:
http://arxiv.org/abs/2305.11162
Training certifiably robust neural networks remains a notoriously hard problem. On one side, adversarial training optimizes under-approximations of the worst-case loss, which leads to insufficient regularization for certification, while on the other, …
External link:
http://arxiv.org/abs/2305.04574