Showing 1 - 10 of 68 results for search query: '"BITTON, RON"'
In this paper, we show that with the ability to jailbreak a GenAI model, attackers can escalate the outcome of attacks against RAG-based GenAI-powered applications in severity and scale. In the first part of the paper, we show that attackers can escalate…
External link:
http://arxiv.org/abs/2409.08045
In this paper we argue that a jailbroken GenAI model can cause substantial harm to GenAI-powered applications and facilitate PromptWare, a new type of attack that flips the GenAI model's behavior from serving an application to attacking it. PromptWare…
External link:
http://arxiv.org/abs/2408.05061
In the past year, numerous companies have incorporated Generative AI (GenAI) capabilities into new and existing applications, forming interconnected GenAI ecosystems consisting of semi/fully autonomous agents powered by GenAI services…
External link:
http://arxiv.org/abs/2403.02817
Author:
Biton, Dudi, Misra, Aditi, Levy, Efrat, Kotak, Jaidip, Bitton, Ron, Schuster, Roei, Papernot, Nicolas, Elovici, Yuval, Nassi, Ben
Machine learning (ML) models are known to be vulnerable to a number of attacks that target the integrity of their predictions or the privacy of their training data. To carry out these attacks, a black-box adversary must typically possess the ability…
External link:
http://arxiv.org/abs/2309.02159
Published in:
Computers & Security, 126, 103073 (2023)
IoT devices are known to be vulnerable to various cyber-attacks, such as data exfiltration and the execution of flooding attacks as part of a DDoS attack. When it comes to detecting such attacks using network traffic analysis, it has been shown that…
External link:
http://arxiv.org/abs/2303.01041
Author:
Bitton, Ron, Malach, Alon, Meiseles, Amiel, Momiyama, Satoru, Araki, Toshinori, Furukawa, Jun, Elovici, Yuval, Shabtai, Asaf
Model-agnostic feature attribution algorithms (such as SHAP and LIME) are ubiquitous techniques for explaining the decisions of complex classification models, such as deep neural networks. However, since complex classification models produce superior…
External link:
http://arxiv.org/abs/2211.14797
Adversarial attacks against deep learning-based object detectors (ODs) have been studied extensively in the past few years. These attacks cause the model to make incorrect predictions by placing a patch containing an adversarial pattern on the target…
External link:
http://arxiv.org/abs/2211.08859
State-of-the-art deep neural networks (DNNs) are highly effective at tackling many real-world tasks. However, their wide adoption in mission-critical contexts is hampered by two major weaknesses: their susceptibility to adversarial attacks and their…
External link:
http://arxiv.org/abs/2211.08686
The sophistication and complexity of cyber attacks and the variety of targeted platforms have been growing in recent years. Various adversaries are abusing an increasing range of platforms, e.g., enterprise platforms, mobile phones, PCs, transportation…
External link:
http://arxiv.org/abs/2209.04028
Author:
Habler, Edan, Bitton, Ron, Avraham, Dan, Mimran, Dudu, Klevansky, Eitan, Brodt, Oleg, Lehmann, Heiko, Elovici, Yuval, Shabtai, Asaf
O-RAN is a new, open, adaptive, and intelligent RAN architecture. Motivated by the success of artificial intelligence in other domains, O-RAN strives to leverage machine learning (ML) to automatically and efficiently manage network resources in diverse…
External link:
http://arxiv.org/abs/2201.06093