Zobrazeno 1 - 10
of 1 669
pro vyhledávání: '"AbdelNabi A"'
Scientific discovery is a catalyst for human intellectual advances, driven by the cycle of hypothesis generation, experimental design, data evaluation, and iterative assumption refinement. This process, while crucial, is expensive and heavily depende
Externí odkaz:
http://arxiv.org/abs/2409.02604
Autor:
Debenedetti, Edoardo, Rando, Javier, Paleka, Daniel, Florin, Silaghi Fineas, Albastroiu, Dragos, Cohen, Niv, Lemberg, Yuval, Ghosh, Reshmi, Wen, Rui, Salem, Ahmed, Cherubin, Giovanni, Zanella-Beguelin, Santiago, Schmid, Robin, Klemm, Victor, Miki, Takahiro, Li, Chenhao, Kraft, Stefan, Fritz, Mario, Tramèr, Florian, Abdelnabi, Sahar, Schönherr, Lea
Large language model systems face important security risks from maliciously crafted messages that aim to overwrite the system's original instructions or leak private data. To study this problem, we organized a capture-the-flag competition at IEEE SaT
Externí odkaz:
http://arxiv.org/abs/2406.07954
Autor:
Abdelnabi, Sahar, Fay, Aideen, Cherubin, Giovanni, Salem, Ahmed, Fritz, Mario, Paverd, Andrew
Large Language Models are commonly used in retrieval-augmented applications to execute user instructions based on data from external sources. For example, modern search engines use LLMs to answer queries based on relevant search results; email plugin
Externí odkaz:
http://arxiv.org/abs/2406.00799
Instruction-tuned Large Language Models (LLMs) show impressive results in numerous practical applications, but they lack essential safety features that are common in other areas of computer science, particularly an explicit separation of instructions
Externí odkaz:
http://arxiv.org/abs/2403.06833
Large-Language-Models (LLMs) are deployed in a wide range of applications, and their response has an increasing social impact. Understanding the non-deliberate(ive) mechanism of LLMs in giving responses is essential in explaining their performance an
Externí odkaz:
http://arxiv.org/abs/2402.11005
There is an growing interest in using Large Language Models (LLMs) in multi-agent systems to tackle interactive real-world tasks that require effective collaboration and assessing complex situations. Yet, we still have a limited understanding of LLMs
Externí odkaz:
http://arxiv.org/abs/2309.17234
Autor:
Stivala, Giada, Abdelnabi, Sahar, Mengascini, Andrea, Graziano, Mariano, Fritz, Mario, Pellegrino, Giancarlo
Clickbait PDFs are PDF documents that do not embed malware but trick victims into visiting malicious web pages leading to attacks like password theft or drive-by download. While recent reports indicate a surge of clickbait PDFs, prior works have larg
Externí odkaz:
http://arxiv.org/abs/2308.01273
Publikováno v:
Journal of Islamic Accounting and Business Research, 2023, Vol. 15, Issue 6, pp. 959-987.
Externí odkaz:
http://www.emeraldinsight.com/doi/10.1108/JIABR-01-2022-0021
Autor:
Pranto, Protik Bose, Khan, Waqar Hassan, Abdelnabi, Sahar, Weil, Rebecca, Fritz, Mario, Hasan, Rakibul
We outline a planned experiment to investigate if personal data (e.g., demographics and behavioral patterns) can be used to selectively expose individuals to disinformation such that an adversary can spread disinformation more efficiently compared to
Externí odkaz:
http://arxiv.org/abs/2306.04883
Autor:
Greshake, Kai, Abdelnabi, Sahar, Mishra, Shailesh, Endres, Christoph, Holz, Thorsten, Fritz, Mario
Large Language Models (LLMs) are increasingly being integrated into various applications. The functionalities of recent LLMs can be flexibly modulated via natural language prompts. This renders them susceptible to targeted adversarial prompting, e.g.
Externí odkaz:
http://arxiv.org/abs/2302.12173