Showing 1 - 10 of 9,392 for search: '"Mirsky, A."'
Neural networks, such as image classifiers, are frequently trained on proprietary and confidential datasets. It is generally assumed that once deployed, the training data remains secure, as adversaries are limited to query-response interactions with … (sketch below)
External link:
http://arxiv.org/abs/2411.14516
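To make this snippet's threat model concrete: the adversary never sees weights or training data, only a query-in, prediction-out interface. A minimal sketch, assuming a hypothetical `DeployedClassifier` with random stand-in weights (nothing here is from the paper):

```python
import numpy as np

# Hypothetical deployed classifier: the adversary sees only query(),
# never the weights or the training data behind them.
class DeployedClassifier:
    def __init__(self, num_classes=10, input_dim=32 * 32 * 3, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(input_dim, num_classes))  # stand-in weights

    def query(self, x):
        """Black-box API: images in, softmax probabilities out."""
        logits = x.reshape(x.shape[0], -1) @ self.w
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

# The adversary's entire view of the model is a loop of such queries.
model = DeployedClassifier()
probe = np.random.default_rng(1).random((4, 32, 32, 3))
print(model.query(probe).argmax(axis=1))  # class predictions only
```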
Despite obesity being widely discussed in the social sciences, the effect of a robot's perceived obesity level on trust has not been studied in the field of HRI. While in research on humans, Body Mass Index (BMI) is commonly used as an indicator of …
External link:
http://arxiv.org/abs/2411.06039
Author:
Avraham, Inbal; Mirsky, Reuth
Shared control problems involve a robot learning to collaborate with a human. When learning a shared control policy, short communication between the agents can often significantly reduce running times and improve the system's accuracy. We extend the …
External link:
http://arxiv.org/abs/2410.19612
Author:
Dor, Maor Biton; Mirsky, Yisroel
This paper introduces a novel data-free model extraction attack that significantly advances the current state-of-the-art in terms of efficiency, accuracy, and effectiveness. Traditional black-box methods rely on using the victim's model as an oracle … (sketch below)
External link:
http://arxiv.org/abs/2410.15429
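For background, a minimal sketch of the generic data-free extraction loop such attacks build on: a generator synthesizes queries, the victim labels them, and a student is distilled from those labels, with the generator pushed toward inputs where student and victim still disagree. All models, sizes, and losses below are illustrative assumptions, not the paper's algorithm:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

victim = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10)).eval()
student = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
generator = nn.Sequential(nn.Linear(16, 64), nn.Tanh())
opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)

for step in range(200):
    z = torch.randn(128, 16)
    x = generator(z)
    with torch.no_grad():
        t = F.softmax(victim(x), dim=1)        # oracle call: labels only

    # Student imitates the victim on the synthetic queries.
    loss_s = F.kl_div(F.log_softmax(student(x.detach()), dim=1), t,
                      reduction="batchmean")
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()

    # Generator seeks inputs where the student still disagrees.
    loss_g = -F.kl_div(F.log_softmax(student(generator(z)), dim=1), t,
                       reduction="batchmean")
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```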
Author:
Avraham, Bar; Mirsky, Yisroel
Black-box attacks, where adversaries have limited knowledge of the target model, pose a significant threat to machine learning systems. Adversarial examples generated with a substitute model often suffer from limited transferability to the target model … (sketch below)
External link:
http://arxiv.org/abs/2410.15409
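To illustrate the transferability gap this snippet names, a minimal sketch of a substitute-model transfer attack using plain FGSM; both toy models and every hyperparameter here are assumptions for illustration, not the paper's setup:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """One-step attack: perturb x along the sign of the loss gradient."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

# Substitute (attacker's local copy) and target (the real victim) differ.
substitute = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
target = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

x, y = torch.rand(1, 784), torch.tensor([3])
x_adv = fgsm(substitute, x, y)     # gradients come from the substitute only
print(target(x_adv).argmax(dim=1)) # may still predict correctly: the two
                                   # decision boundaries differ, which is
                                   # exactly the limited-transferability gap
```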
As large language models (LLMs) continue to evolve, their potential use in automating cyberattacks becomes increasingly likely. With capabilities such as reconnaissance, exploitation, and command execution, LLMs could soon become integral to autonomous …
External link:
http://arxiv.org/abs/2410.15396
Large Language Models (LLMs) have demonstrated an alarming ability to impersonate humans in conversation, raising concerns about their potential misuse in scams and deception. Humans have a right to know if they are conversing with an LLM. We evaluate …
External link:
http://arxiv.org/abs/2410.09569
Traditionally, Reinforcement Learning (RL) problems aim to optimize an agent's behavior. This paper proposes a novel take on RL, in which it is used to learn the policy of another agent, allowing real-time recognition of that agent's goal … (sketch below)
External link:
http://arxiv.org/abs/2407.16220
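One common way to read "RL for goal recognition", sketched below under the assumption of pre-trained per-goal tabular Q-functions (random stand-ins here): score how well each candidate goal's Q-function explains the actions observed so far. This is an illustrative framing, not necessarily the paper's exact formulation:

```python
import numpy as np

n_states, n_actions, goals = 5, 2, [0, 4]

# Assume one Q-table per candidate goal was learned offline via standard RL;
# random stand-ins take their place here.
rng = np.random.default_rng(0)
Q = {g: rng.random((n_states, n_actions)) for g in goals}

def goal_posterior(trajectory, beta=5.0):
    """Score each goal by the Boltzmann likelihood of the observed actions."""
    log_p = dict.fromkeys(goals, 0.0)
    for s, a in trajectory:
        for g in goals:
            z = np.exp(beta * Q[g][s])
            log_p[g] += np.log(z[a] / z.sum())
    m = max(log_p.values())
    p = {g: np.exp(v - m) for g, v in log_p.items()}
    total = sum(p.values())
    return {g: v / total for g, v in p.items()}

observed = [(0, 1), (1, 1), (2, 0)]  # (state, action) pairs seen so far
print(goal_posterior(observed))      # updated belief over the candidate goals
```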
Recent progress in generative models has made it easier for a wide audience to edit and create image content, raising concerns about the proliferation of deepfakes, especially in healthcare. Despite the availability of numerous techniques for detecting …
External link:
http://arxiv.org/abs/2407.15169
Author:
Bokobza, Roey; Mirsky, Yisroel
Our paper presents a novel defence against black-box attacks, where attackers use the victim model as an oracle to craft their adversarial examples. Unlike traditional preprocessing defences that rely on sanitizing input samples, our stateless strategy … (the contrasted preprocessing baseline is sketched below)
External link:
http://arxiv.org/abs/2403.10562
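For contrast only, a minimal sketch of the kind of traditional preprocessing defence this snippet sets itself against: sanitizing each incoming query (quantization plus noise) before it reaches the classifier. The paper's own stateless strategy is truncated in the snippet and is not reproduced here:

```python
import numpy as np

def sanitize(x, levels=16, noise_std=0.01, seed=None):
    """Classic input-sanitization defence: quantize, add noise, clip."""
    rng = np.random.default_rng(seed)
    x = np.round(x * (levels - 1)) / (levels - 1)  # quantization destroys
    x = x + rng.normal(0.0, noise_std, x.shape)    # small, carefully crafted
    return np.clip(x, 0.0, 1.0)                    # adversarial perturbations

query = np.random.default_rng(0).random((1, 32, 32, 3))
print(np.abs(sanitize(query) - query).max())  # distortion paid per query
```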