Showing 1 - 10 of 3,275 for search: '"Salem Ahmed"'
Author:
Siddiqui, Shoaib Ahmed, Gaonkar, Radhika, Köpf, Boris, Krueger, David, Paverd, Andrew, Salem, Ahmed, Tople, Shruti, Wutschitz, Lukas, Xia, Menglin, Zanella-Béguelin, Santiago
Large Language Models (LLMs) are rapidly becoming commodity components of larger software systems. This poses natural security and privacy problems: poisoned data retrieved from one component can change the model's behavior and compromise the entire system…
External link:
http://arxiv.org/abs/2410.03055
Author:
Du, Xuefeng, Ghosh, Reshmi, Sim, Robert, Salem, Ahmed, Carvalho, Vitor, Lawton, Emily, Li, Yixuan, Stokes, Jack W.
Vision-language models (VLMs) are essential for contextual understanding of both visual and textual information. However, their vulnerability to adversarially manipulated inputs presents significant risks, leading to compromised outputs and raising concerns…
External link:
http://arxiv.org/abs/2410.00296
The increasing cost of training machine learning (ML) models has led to the inclusion of new parties in the training pipeline, such as users who contribute training data and companies that provide computing resources. This involvement of such new parties…
External link:
http://arxiv.org/abs/2408.00129
Author:
Zhang, Boyang, Tan, Yicong, Shen, Yun, Salem, Ahmed, Backes, Michael, Zannettou, Savvas, Zhang, Yang
Recently, autonomous agents built on large language models (LLMs) have experienced significant development and are being deployed in real-world applications. These agents can extend the base LLM's capabilities in multiple ways. For example, a well-built…
External link:
http://arxiv.org/abs/2407.20859
Author:
Russinovich, Mark, Salem, Ahmed
Amid growing concerns over the ease of theft and misuse of Large Language Models (LLMs), the need for fingerprinting models has increased. Fingerprinting, in this context, means that the model owner can link a given model to their original version…
External link:
http://arxiv.org/abs/2407.10887
Open-source large language models (LLMs) have become increasingly popular among both the general public and industry, as they can be customized, fine-tuned, and freely used. However, some open-source LLMs require approval before usage, which has led…
External link:
http://arxiv.org/abs/2407.03160
Author:
Debenedetti, Edoardo, Rando, Javier, Paleka, Daniel, Florin, Silaghi Fineas, Albastroiu, Dragos, Cohen, Niv, Lemberg, Yuval, Ghosh, Reshmi, Wen, Rui, Salem, Ahmed, Cherubin, Giovanni, Zanella-Beguelin, Santiago, Schmid, Robin, Klemm, Victor, Miki, Takahiro, Li, Chenhao, Kraft, Stefan, Fritz, Mario, Tramèr, Florian, Abdelnabi, Sahar, Schönherr, Lea
Large language model systems face important security risks from maliciously crafted messages that aim to overwrite the system's original instructions or leak private data. To study this problem, we organized a capture-the-flag competition at IEEE SaTML…
External link:
http://arxiv.org/abs/2406.07954
Author:
Abdelnabi, Sahar, Fay, Aideen, Cherubin, Giovanni, Salem, Ahmed, Fritz, Mario, Paverd, Andrew
Large Language Models (LLMs) are routinely used in retrieval-augmented applications to orchestrate tasks and process inputs from users and other sources. These inputs, even in a single LLM interaction, can come from a variety of sources, of varying trustworthiness…
External link:
http://arxiv.org/abs/2406.00799
Large Language Models (LLMs) have risen significantly in popularity and are increasingly being adopted across multiple applications. These LLMs are heavily aligned to resist engaging in illegal or unethical topics as a means to avoid contributing to…
External link:
http://arxiv.org/abs/2404.01833
Prompt injection has emerged as a serious security threat to large language models (LLMs). At present, the best practice for defending against newly discovered prompt injection techniques is to add additional guardrails to the system (e.g., …)
External link:
http://arxiv.org/abs/2312.11513