Showing 1 - 10 of 2,389 for the search: '"A. NASSI"'
In this paper, we show that with the ability to jailbreak a GenAI model, attackers can escalate the outcome of attacks against RAG-based GenAI-powered applications in severity and scale. In the first part of the paper, we show that attackers can escalate…
External link:
http://arxiv.org/abs/2409.08045
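For orientation, below is a minimal sketch of how a RAG-based application typically assembles its prompt. This is a generic illustration of the attack surface (retrieved text is concatenated verbatim into the model's context), not the paper's method; the retriever and the prompt template are hypothetical.

# Minimal sketch of RAG prompt assembly (generic illustration, not the
# paper's attack; retriever and template are hypothetical).

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    return sorted(corpus, key=lambda d: -len(q & set(d.lower().split())))[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    # Retrieved documents go straight into the model's context -- the
    # surface indirect prompt injection targets, since attacker-authored
    # documents end up inside the prompt.
    context = "\n---\n".join(retrieve(query, corpus))
    return f"Answer using only the context below.\n{context}\n\nQuestion: {query}"

corpus = [
    "Acme's refund policy allows returns within 30 days.",
    "Shipping is free for orders over $50.",
]
print(build_prompt("What is the refund policy?", corpus))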
This work explores injection attacks against password managers. In this setting, the adversary (only) controls their own application client, which they use to "inject" chosen payloads to a victim's client via, for example, sharing credentials with them…
External link:
http://arxiv.org/abs/2408.07054
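To make the setting concrete, here is a hypothetical illustration (not the paper's payloads; the field names are invented) of why shared-credential fields can act as an injection channel when a victim's client renders them without sanitization.

# Hypothetical illustration of the injection setting (not the paper's
# payloads): the adversary controls only the contents of a credential
# they share, so any payload must ride inside ordinary fields that the
# victim's client later renders.

import json

shared_credential = {
    "title": "Team Wiki <img src=x onerror=alert(1)>",  # payload in a display field
    "username": "alice",
    "password": "hunter2",
}

# A client that inserts untrusted fields into HTML without escaping
# would execute the payload when rendering the shared entry.
unsafe_html = f"<li>{shared_credential['title']}</li>"
print(json.dumps(shared_credential, indent=2))
print(unsafe_html)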
In this paper, we argue that a jailbroken GenAI model can cause substantial harm to GenAI-powered applications and facilitate PromptWare, a new type of attack that flips the GenAI model's behavior from serving an application to attacking it. PromptWare…
External link:
http://arxiv.org/abs/2408.05061
Author:
Namavari, Armin, Wang, Barry, Menda, Sanketh, Nassi, Ben, Tyagi, Nirvan, Grimmelmann, James, Zhang, Amy, Ristenpart, Thomas
The increasing harms caused by hate, harassment, and other forms of abuse online have motivated major platforms to explore hierarchical governance. The idea is to allow communities to have designated members take on moderation and leadership duties…
External link:
http://arxiv.org/abs/2406.19433
In the past year, numerous companies have incorporated Generative AI (GenAI) capabilities into new and existing applications, forming interconnected Generative AI (GenAI) ecosystems consisting of semi/fully autonomous agents powered by GenAI services…
External link:
http://arxiv.org/abs/2403.02817
Author:
Ebadifard, Nassi, Parihar, Ajitesh, Khmelevsky, Youry, Hains, Gaetan, Wong, Albert, Zhang, Frank
A data warehouse efficiently prepares data for effective and fast data analysis and modelling using machine learning algorithms. This paper discusses existing solutions for the Data Extraction, Transformation, and Loading (ETL) process and automation…
External link:
http://arxiv.org/abs/2312.12774
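For reference, a minimal ETL pass in Python follows. This is a sketch of the general extract-transform-load pattern, not the paper's pipeline; the file name, schema, and sample data are made up.

# Minimal extract-transform-load pass (sketch of the general pattern,
# not the paper's pipeline; file name, schema, and data are made up).

import csv
import sqlite3

# Create a tiny sample source file so the sketch is self-contained.
with open("sales.csv", "w") as f:
    f.write("id,name,amount\n1, Alice ,10.5\n2,Bob,oops\n")

def extract(path: str) -> list[dict]:
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[tuple]:
    # Normalize types and drop malformed rows before loading.
    out = []
    for r in rows:
        try:
            out.append((r["id"], r["name"].strip().lower(), float(r["amount"])))
        except (KeyError, ValueError):
            continue  # skip rows that fail validation
    return out

def load(rows: list[tuple], db: str = "warehouse.db") -> None:
    con = sqlite3.connect(db)
    con.execute("CREATE TABLE IF NOT EXISTS sales (id TEXT, name TEXT, amount REAL)")
    con.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    con.commit()
    con.close()

load(transform(extract("sales.csv")))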
Autor:
Biton, Dudi, Misra, Aditi, Levy, Efrat, Kotak, Jaidip, Bitton, Ron, Schuster, Roei, Papernot, Nicolas, Elovici, Yuval, Nassi, Ben
Machine learning (ML) models are known to be vulnerable to a number of attacks that target the integrity of their predictions or the privacy of their training data. To carry out these attacks, a black-box adversary must typically possess the ability…
External link:
http://arxiv.org/abs/2309.02159
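A small sketch of the black-box access model such attacks assume: the adversary sees only model outputs for chosen queries. The toy model and the confidence-threshold heuristic below are hypothetical stand-ins, not the paper's method.

# Sketch of the black-box access model (generic illustration; the toy
# model and threshold heuristic are hypothetical, not the paper's method).

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))  # hidden parameters the adversary never sees

def query(x: np.ndarray) -> np.ndarray:
    """Opaque model: the adversary observes only output probabilities."""
    logits = x @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Toy membership-inference heuristic: flag inputs on which the model is
# unusually confident. Real attacks calibrate this far more carefully.
x = rng.normal(size=4)
probs = query(x)
print("predicted class:", int(probs.argmax()), "confidence:", float(probs.max()))
print("flag as training member?", bool(probs.max() > 0.9))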
We demonstrate how images and sounds can be used for indirect prompt and instruction injection in multi-modal LLMs. An attacker generates an adversarial perturbation corresponding to the prompt and blends it into an image or audio recording. When the…
External link:
http://arxiv.org/abs/2307.10490
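Below is a toy gradient-descent sketch of the general idea: optimize a bounded perturbation so an encoder's output matches an attacker-chosen target. The linear "encoder", budget, and step size are toys, not the paper's procedure.

# Toy sketch of blending a target-matching perturbation into an image
# (illustration of the general idea only, not the paper's method; the
# linear encoder W, budget eps, and step size lr are toys).

import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 8))          # toy image encoder: embedding = W.T @ x
image = rng.uniform(0, 1, size=16)    # clean "image"
target = rng.normal(size=8)           # embedding the attacker wants to induce

delta = np.zeros_like(image)
eps, lr = 0.05, 0.1                   # perturbation budget and step size
for _ in range(200):
    emb = W.T @ (image + delta)
    grad = W @ (emb - target)         # gradient of 0.5 * ||emb - target||^2
    delta = np.clip(delta - lr * grad, -eps, eps)  # projected gradient step

print("residual:", float(np.linalg.norm(W.T @ (image + delta) - target)))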
In recent years, various watermarking methods have been suggested to detect computer vision models obtained illegitimately from their owners; however, they fail to demonstrate satisfactory robustness against model extraction attacks. In this paper, we present…
External link:
http://arxiv.org/abs/2211.13644
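To illustrate why extraction stresses watermarks: the stolen model is fit to the victim's outputs rather than copied from its weights, so marks embedded in the weights need not transfer. A toy distillation-by-least-squares sketch follows; all models here are hypothetical.

# Toy sketch of model extraction (generic illustration of why
# weight-based watermarks may not survive; all models are toys).

import numpy as np

rng = np.random.default_rng(2)
W_victim = rng.normal(size=(5, 2))    # victim's (secret) linear model

def victim_api(x: np.ndarray) -> np.ndarray:
    return x @ W_victim               # black-box output access only

# Attacker queries the API and fits a surrogate by least squares --
# it reproduces the input/output behavior, not the original weights.
X = rng.normal(size=(200, 5))
Y = victim_api(X)
W_stolen, *_ = np.linalg.lstsq(X, Y, rcond=None)

print("output agreement (MSE):", float(((X @ W_stolen - Y) ** 2).mean()))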
Author:
Nassi, Ben (nassib@post.bgu.ac.il), Mirsky, Yisroel, Shams, Jacob, Ben-Netanel, Raz, Nassi, Dudi, Elovici, Yuval
Published in:
Communications of the ACM, Apr. 2023, Vol. 66, Issue 4, pp. 56-67.