Showing 1 - 10
of 2,317
for search: '"adversarial examples"'
Published in:
Complex & Intelligent Systems, Vol 10, Iss 5, Pp 6667-6692 (2024)
Abstract Adversarial examples, which mislead deep neural networks by adding well-crafted perturbations, have become a major threat to classification models. Gradient-based white-box attack algorithms have been widely used to generate adversarial examples…
External link:
https://doaj.org/article/6c5d9f4f3ab247e0af61e327038eb95b
Published in:
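The gradient-based white-box attack family this abstract refers to can be illustrated with the fast gradient sign method (FGSM). The sketch below is a minimal stand-in, not the paper's method: the logistic model, its weights, and the `eps` budget are all invented for illustration, and the input gradient is computed analytically instead of via a deep learning framework.

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """FGSM step: move the input in the sign direction of the loss
    gradient, so each coordinate changes by at most eps."""
    return x + eps * np.sign(grad)

# Toy white-box setting: logistic regression p = sigmoid(w @ x).
# For cross-entropy loss with true label y, the input gradient is
# (sigmoid(w @ x) - y) * w  (standard analytic result).
w = np.array([1.0, -2.0, 0.5])   # hypothetical model weights
x = np.array([0.2, 0.1, -0.3])   # hypothetical clean input
y = 1.0                          # true label

p = 1.0 / (1.0 + np.exp(-w @ x))
grad_x = (p - y) * w
x_adv = fgsm_perturb(x, grad_x, eps=0.1)

# The perturbation is bounded per-coordinate (|delta_i| = eps), yet
# it systematically lowers the model's confidence in the true label.
p_adv = 1.0 / (1.0 + np.exp(-w @ x_adv))
```

White-box access is what makes this cheap: because the attacker can read the gradient directly, a single step suffices, whereas the black-box settings surveyed elsewhere in these results must estimate that direction through queries.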
Cybersecurity, Vol 7, Iss 1, Pp 1-20 (2024)
Abstract Most adversarial attacks against speech recognition systems focus on specific adversarial perturbations, which are generated by adversaries for each normal example to achieve the attack. Universal adversarial perturbations (UAPs), which…
External link:
https://doaj.org/article/aef391dae15d49c3a7d37ac3b7df6cd2
Published in:
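The defining property of the UAPs mentioned above is input-agnosticism: one fixed perturbation is added unchanged to every input, rather than being recomputed per example. A minimal sketch of that property, assuming a hypothetical linear classifier (the weights, `eps`, and data here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear classifier: predict class 1 when w @ x > 0.
w = np.array([1.0, 1.0])

# A universal adversarial perturbation is a single vector delta that
# is reused for *every* input. Here it simply pushes all points
# against the decision direction w, with norm bounded by eps.
eps = 1.5
delta = -eps * w / np.linalg.norm(w)

# A batch of inputs the clean model classifies as class 1.
X = rng.uniform(0.5, 1.0, size=(10, 2))

clean_preds = (X @ w > 0)
adv_preds = ((X + delta) @ w > 0)  # same delta added to every row
fooled = clean_preds & ~adv_preds
```

The practical appeal for an attacker, and the threat the abstract points to, is that the perturbation can be computed once offline and then applied to unseen inputs at no extra cost.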
Cybersecurity, Vol 7, Iss 1, Pp 1-18 (2024)
Abstract In generating adversarial examples, conventional black-box attack methods rely on sufficient feedback from the to-be-attacked model, repeatedly querying it until the attack succeeds, which usually results in thousands of trials during…
External link:
https://doaj.org/article/f1c429155cc746f4a41f95460df7044a
Published in:
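The query-until-success loop this abstract describes can be sketched with a random-search black-box attack: the attacker sees only labels, proposes bounded random perturbations, and counts queries. The model, `eps`, and query budget below are toy assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box_model(x):
    """Opaque model: only predictions are observable, no gradients.
    (Toy stand-in: class 1 iff the feature sum exceeds 1.)"""
    return int(x.sum() > 1.0)

def random_search_attack(x, eps=0.5, max_queries=1000):
    """Query-based black-box attack: repeatedly propose a random
    perturbation within an eps-ball and query the model until the
    label flips or the query budget runs out."""
    original = black_box_model(x)
    for queries in range(1, max_queries + 1):
        delta = rng.uniform(-eps, eps, size=x.shape)
        if black_box_model(x + delta) != original:
            return x + delta, queries
    return None, max_queries

x = np.array([0.6, 0.6])  # classified as 1 (feature sum 1.2 > 1)
x_adv, n_queries = random_search_attack(x)
```

The returned `n_queries` is exactly the cost the abstract criticizes: naive search often burns many queries per example, which motivates the low-query methods such papers propose.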
IEEE Access, Vol 12, Pp 17522-17540 (2024)
Deep learning (DL) has demonstrated remarkable achievements in various fields. Nevertheless, DL models encounter significant challenges in detecting and defending against adversarial samples (AEs). These AEs are meticulously crafted by adversaries…
External link:
https://doaj.org/article/b5e4057a3373471aa42f4cdc86757586
Published in:
International Journal of Applied Earth Observations and Geoinformation, Vol 133, Pp 104131 (2024)
Deep neural networks (DNNs) have risen to prominence as key solutions in numerous AI applications for earth observation (AI4EO). However, their susceptibility to adversarial examples poses a critical challenge, compromising the reliability of AI4EO…
External link:
https://doaj.org/article/bad043238de64bda9fa672e0fd84c2c9
Published in:
Tạp chí Khoa học Đại học Đà Lạt, Vol 14, Iss 3 (2024)
Artificial intelligence (AI) has found applications across various sectors and industries, offering numerous advantages to human beings. One prominent area where AI has made significant contributions is machine learning models. These models have…
External link:
https://doaj.org/article/7698f8dc161b4756906789cab4887f6f
Published in:
Cybersecurity, Vol 7, Iss 1, Pp 1-9 (2024)
Abstract Models based on the MLP-Mixer architecture are becoming popular, but they still suffer from adversarial examples. Although MLP-Mixer has been shown to be more robust to adversarial attacks than convolutional neural networks (CNNs)…
External link:
https://doaj.org/article/051c5250046545ea93e6024c4d0a4c2c
Published in:
Vietnam Journal of Computer Science, Vol 11, Iss 01, Pp 23-52 (2024)
Transfer-based attacks, a type of adversarial attack, have become a growing threat in recent years with the proliferation of cloud services. Deep neural networks that exploit human cognitive bias (Loosely Symmetric-Deep Neural Network, LS-DNN) are known…
External link:
https://doaj.org/article/d248673e229a4c4e933b68d6e752d29a
Author:
Xiaoyin Yi, Jiacheng Huang
Published in:
IEEE Access, Vol 12, Pp 105605-105612 (2024)
Adversarial examples, which are inputs deliberately perturbed with imperceptible changes to induce model errors, have raised serious concerns about the reliability and security of deep neural networks (DNNs). While adversarial attacks have been extensively…
External link:
https://doaj.org/article/5e02a420c16e45b0a5fd8e12cad2ffb1
Published in:
IEEE Access, Vol 12, Pp 86541-86552 (2024)
In this study, we propose novel approaches for generating adversarial examples targeting machine learning-based image cropping systems. Image cropping is crucial for meeting display space restrictions and highlighting a content's areas of interest. However…
External link:
https://doaj.org/article/538d210247294a2e8dafefaa0c81db37