Showing 1 - 10 of 1,924 results for the search: '"Miller, David J."'
Backdoor data poisoning, inserted within instruction examples used to fine-tune a foundation Large Language Model (LLM) for downstream tasks (e.g., sentiment prediction), is a serious security concern due to the evasive nature of such attacks …
External link: http://arxiv.org/abs/2406.07778
A variety of defenses have been proposed against backdoor attacks on deep neural network (DNN) classifiers. Universal methods seek to reliably detect and/or mitigate backdoors irrespective of the incorporation mechanism used by the attacker, while …
External link: http://arxiv.org/abs/2402.02034
We study an effective theory of flavour in which the $SU(2)_L$ interaction is 'flavour-deconstructed' near the TeV scale. This arises, for example, in UV models that unify all three generations of left-handed fermions via an $Sp(6)_L$ symmetry. …
External link: http://arxiv.org/abs/2312.13346
Well-known (non-malicious) sources of overfitting in deep neural net (DNN) classifiers include: i) large class imbalances; ii) insufficient training-set diversity; and iii) over-training. In recent work, it was shown that backdoor data-poisoning also …
External link: http://arxiv.org/abs/2309.16827
Backdoor (Trojan) attacks are an important type of adversarial exploit against deep neural networks (DNNs), wherein a test instance is (mis)classified to the attacker's target class whenever the attacker's backdoor trigger is present. In this paper, …
External link: http://arxiv.org/abs/2308.09850
Deep neural networks are vulnerable to backdoor attacks (Trojans), where an attacker poisons the training set with backdoor triggers so that the neural network learns to classify test-time triggers to the attacker's designated target class. Recent …
External link: http://arxiv.org/abs/2308.04617
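Since only truncated abstracts appear in this listing, a minimal, self-contained sketch of the training-set poisoning mechanism these entries describe may help: a small trigger patch is stamped onto a fraction of the training images and those samples are relabeled to the attacker's target class, so a model trained on the poisoned set learns to associate the trigger with that class. This is an illustrative toy in NumPy, not code from any of the listed papers; the function name poison_dataset, the corner-patch trigger, and the 5% poisoning rate are assumptions made for the sketch.

import numpy as np

def poison_dataset(images, labels, target_class=0, poison_frac=0.05,
                   patch_size=3, patch_value=1.0, seed=0):
    """Stamp a square trigger patch onto a random subset of training
    images and flip their labels to the attacker's target class.
    Returns poisoned copies; the inputs are left unmodified."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_frac * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger in the bottom-right corner of each chosen image.
    images[idx, -patch_size:, -patch_size:] = patch_value
    labels[idx] = target_class
    return images, labels, idx

# Toy usage: 1,000 random 28x28 "images" over 10 classes.
X = np.random.rand(1000, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
Xp, yp, poisoned_idx = poison_dataset(X, y, target_class=7)
print(f"poisoned {len(poisoned_idx)} of {len(X)} samples, relabeled to class 7")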
Author: Carter, Brittany, Horowitz, Viva R., Hernandez, Uriel, Miller, David J., Blaikie, Andrew, Alemán, Benjamín J.
Nanoelectromechanical (NEMS) resonator networks have drawn increasing interest due to their potential applications in emergent behavior, sensing, phononics, and mechanical information processing. A challenge toward realizing these large-scale networks …
External link: http://arxiv.org/abs/2302.03680
Backdoor attacks are an important type of adversarial threat against deep neural network classifiers, wherein test samples from one or more source classes will be (mis)classified to the attacker's target class when a backdoor pattern is embedded. In …
External link: http://arxiv.org/abs/2205.06900
Backdoor attacks (BAs) are an emerging threat to deep neural network classifiers. A victim classifier will predict to an attacker-desired target class whenever a test sample is embedded with the same backdoor pattern (BP) that was used to poison the …
External link: http://arxiv.org/abs/2201.08474
Backdoor (Trojan) attacks are emerging threats against deep neural networks (DNNs). A DNN being attacked will predict to an attacker-desired target class whenever a test sample from any source class is embedded with a backdoor pattern; while correctly …
External link: http://arxiv.org/abs/2112.03350