Showing 1 - 10 of 241 for search: '"PICEK, STJEPAN"'
Evolving Boolean functions with specific properties is an interesting optimization problem since, depending on the combination of properties and Boolean function size, the problem can range from very simple to (almost) impossible to solve. Moreover, …
External link:
http://arxiv.org/abs/2411.12735
While security vulnerabilities in traditional Deep Neural Networks (DNNs) have been extensively studied, the susceptibility of Spiking Neural Networks (SNNs) to adversarial attacks remains mostly underexplored. Until now, the mechanisms to inject …
External link:
http://arxiv.org/abs/2411.03022
Artificial Neural Networks (ANNs), commonly mimicking neurons with non-linear functions to output floating-point numbers, consistently receive the same signals of a data point during its forward time. Unlike ANNs, Spiking Neural Networks (SNNs) get …
External link:
http://arxiv.org/abs/2409.19413
Due to the high cost of training, large model (LM) practitioners commonly use pretrained models downloaded from untrusted sources, which could lead to owning compromised models. In-context learning is the ability of LMs to perform multiple tasks …
External link:
http://arxiv.org/abs/2409.04142
Speaker identification (SI) determines a speaker's identity based on their spoken utterances. Previous work indicates that SI deep neural networks (DNNs) are vulnerable to backdoor attacks. Backdoor attacks involve embedding hidden triggers in DNNs' …
External link:
http://arxiv.org/abs/2408.01178
Security concerns for large language models (LLMs) have recently escalated, focusing on thwarting jailbreaking attempts in discrete prompts. However, the exploration of jailbreak vulnerabilities arising from continuous embeddings has been limited, as …
External link:
http://arxiv.org/abs/2407.13796
Backdoor attacks on deep learning represent a recent threat that has gained significant attention in the research community. Backdoor defenses are mainly based on backdoor inversion, which has been shown to be generic, model-agnostic, and applicable …
External link:
http://arxiv.org/abs/2405.19928
Large Language Models (LLMs) have gained significant popularity recently. LLMs are susceptible to various attacks but can also improve the security of diverse systems. However, besides enabling more secure systems, how well do open source LLMs behave …
External link:
http://arxiv.org/abs/2405.15652
Federated Transfer Learning (FTL) is the most general variation of Federated Learning. According to this distributed paradigm, a feature learning pre-step is commonly carried out by only one party, typically the server, on publicly shared data. After …
External link:
http://arxiv.org/abs/2404.19420
Published in:
NDSS 2023
Dynamic searchable symmetric encryption (DSSE) enables users to delegate the keyword search over dynamically updated encrypted databases to an honest-but-curious server without losing keyword privacy. This paper studies a new and practical security …
External link:
http://arxiv.org/abs/2403.15052