Showing 1 - 10 of 26 for search: '"Eisenhofer, Thorsten"'
System prompts that include detailed instructions to describe the task performed by the underlying large language model (LLM) can easily transform foundation models into tools and services with minimal overhead. Because of their crucial impact on the …
External link:
http://arxiv.org/abs/2409.11026
Large Language Models (LLMs) are increasingly augmented with external tools and commercial services into LLM-integrated systems. While these interfaces can significantly enhance the capabilities of the models, they also introduce a new attack surface …
External link:
http://arxiv.org/abs/2402.06922
Author:
Frank, Joel, Herbert, Franziska, Ricker, Jonas, Schönherr, Lea, Eisenhofer, Thorsten, Fischer, Asja, Dürmuth, Markus, Holz, Thorsten
AI-generated media has become a threat to our digital society as we know it. These forgeries can be created automatically and on a large scale based on publicly available technology. Recognizing this challenge, academics and practitioners have proposed …
External link:
http://arxiv.org/abs/2312.05976
Model stealing aims at inferring a victim model's functionality at a fraction of the original training cost. While the goal is clear, in practice the model's architecture, weight dimension, and original training data cannot be determined exactly, …
External link:
http://arxiv.org/abs/2305.05293
Author:
Eisenhofer, Thorsten, Quiring, Erwin, Möller, Jonas, Riepel, Doreen, Holz, Thorsten, Rieck, Konrad
The number of papers submitted to academic conferences is steadily rising in many scientific disciplines. To handle this growth, systems for automatic paper-reviewer assignments are increasingly used during the reviewing process. These systems use …
External link:
http://arxiv.org/abs/2303.14443
A learned system uses machine learning (ML) internally to improve performance. We can expect such systems to be vulnerable to some adversarial-ML attacks. Often, the learned component is shared between mutually-distrusting users or processes, much like …
External link:
http://arxiv.org/abs/2212.10318
Author:
Eisenhofer, Thorsten, Riepel, Doreen, Chandrasekaran, Varun, Ghosh, Esha, Ohrimenko, Olga, Papernot, Nicolas
Machine unlearning aims to remove points from the training dataset of a machine learning model after training; for example when a user requests their data to be deleted. While many machine unlearning methods have been proposed, none of them enable users …
External link:
http://arxiv.org/abs/2210.09126
Author:
Eisenhofer, Thorsten, Schönherr, Lea, Frank, Joel, Speckemeier, Lars, Kolossa, Dorothea, Holz, Thorsten
Adversarial examples seem to be inevitable. These specifically crafted inputs allow attackers to arbitrarily manipulate machine learning systems. Even worse, they often seem harmless to human observers. In our digital society, this poses a significant …
External link:
http://arxiv.org/abs/2102.05431
Author:
Aghakhani, Hojjat, Schönherr, Lea, Eisenhofer, Thorsten, Kolossa, Dorothea, Holz, Thorsten, Kruegel, Christopher, Vigna, Giovanni
Despite remarkable improvements, automatic speech recognition is susceptible to adversarial perturbations. Compared to standard machine learning architectures, these attacks are significantly more challenging, especially since the inputs to a speech recognition system …
External link:
http://arxiv.org/abs/2010.10682
Author:
Schönherr, Lea, Golla, Maximilian, Eisenhofer, Thorsten, Wiele, Jan, Kolossa, Dorothea, Holz, Thorsten
Voice assistants like Amazon's Alexa, Google's Assistant, or Apple's Siri have become the primary (voice) interface in smart speakers that can be found in millions of households. For privacy reasons, these speakers analyze every sound in their environment …
External link:
http://arxiv.org/abs/2008.00508