Showing 1 - 10 of 2,330 for the search: '"Adversarial Attacks"'
Author:
R. Uma Maheshwari, B. Paulchamy
Published in:
Automatika, Vol 65, Iss 4, Pp 1517-1532 (2024)
As deepfake technology becomes increasingly sophisticated, the proliferation of manipulated images presents a significant threat to online integrity, requiring advanced detection and mitigation strategies. Addressing this critical challenge, our study …
External link:
https://doaj.org/article/84cc9372fbb2473cad9009e2cbd3939c
Published in:
Digital Communications and Networks, Vol 10, Iss 3, Pp 756-764 (2024)
As modern communication technology advances apace, the identification of digital communication signals plays an important role in cognitive radio networks and in communication monitoring and management systems. AI has become a promising solution to this problem …
External link:
https://doaj.org/article/78a80631c02b43f48729bb4e85ac3af3
Published in:
EURASIP Journal on Information Security, Vol 2024, Iss 1, Pp 1-23 (2024)
Abstract Machine learning has become prevalent in transforming diverse aspects of our daily lives through intelligent digital solutions. Advanced disease diagnosis, autonomous vehicular systems, and automated threat detection and triage are some …
External link:
https://doaj.org/article/10af52d81ec541b2b00274ddef0c6215
Author:
Alisa A. Vorobeva, Maxim A. Matuzko, Dmitry I. Sivkov, Roman I. Safiullin, Alexander A. Menshchikov
Published in:
Naučno-tehničeskij Vestnik Informacionnyh Tehnologij, Mehaniki i Optiki, Vol 24, Iss 2, Pp 256-266 (2024)
Modern artificial intelligence (AI) technologies are being used in a variety of fields, from science to everyday life. However, the widespread use of AI-based systems has highlighted their vulnerability to adversarial attacks. These …
External link:
https://doaj.org/article/a2f892dd73ad47768e0a0a1184d95093
Published in:
Energy and AI, Vol 17, Iss , Pp 100381- (2024)
The digital transformation process of power systems towards smart grids is resulting in improved reliability, efficiency and situational awareness at the expense of increased cybersecurity vulnerabilities. Given the availability of large volumes of …
External link:
https://doaj.org/article/e503b088724a485c87e08026ea4688c2
Author:
Nour El Houda Sayah Ben Aissa, Ahmed Korichi, Abderrahmane Lakas, Chaker Abdelaziz Kerrache, Carlos T. Calafate
Published in:
SLAS Technology, Vol 29, Iss 4, Pp 100142- (2024)
The classification of motor imagery (MI) using electroencephalography (EEG) plays a pivotal role in facilitating communication for individuals with physical limitations through Brain-Computer Interface (BCI) systems. Recent strides in attention-based …
External link:
https://doaj.org/article/8729b19812354d208f42ced831fb3778
Published in:
Scientific Reports, Vol 14, Iss 1, Pp 1-25 (2024)
Abstract In the ongoing battle against adversarial attacks, adopting a suitable strategy to enhance model efficiency, bolster resistance to adversarial threats, and ensure practical deployment is crucial. To achieve this goal, a novel four-component …
External link:
https://doaj.org/article/5ead175d79b64c999b4885a5d5899c38
Published in:
Cybersecurity, Vol 7, Iss 1, Pp 1-9 (2024)
Abstract Models based on the MLP-Mixer architecture are becoming popular, but they still suffer from adversarial examples. Although it has been shown that MLP-Mixer is more robust to adversarial attacks than convolutional neural networks (CNNs), …
External link:
https://doaj.org/article/051c5250046545ea93e6024c4d0a4c2c
Published in:
EURASIP Journal on Audio, Speech, and Music Processing, Vol 2024, Iss 1, Pp 1-14 (2024)
Abstract Claimed identities of speakers can be verified by means of automatic speaker verification (ASV) systems, also known as voice biometric systems. Focusing on security and robustness against spoofing attacks on ASV systems, and observing that …
External link:
https://doaj.org/article/76c6fa0cd1de49d1b8b11fb6d2a8e914
Published in:
IEEE Access, Vol 12, Pp 126729-126737 (2024)
Recent studies have shown that machine learning models are vulnerable to adversarial attacks. Adversarial attacks are deliberate attempts to modify the input data of a machine learning model in a way that causes it to produce incorrect predictions. …
External link:
https://doaj.org/article/f71f28bc91c74525a11cd6a6be74cd3f
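The last abstract above describes adversarial attacks as deliberate modifications of a model's input that cause incorrect predictions. As a minimal illustration of that idea (a generic one-step fast gradient sign method perturbation against a toy logistic-regression model; not code from any of the listed papers), the attack can be sketched as:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """One-step FGSM: move x in the direction of the sign of the loss gradient.

    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to the input x is (p - y) * w.
    """
    p = sigmoid(w @ x)           # model's predicted probability of class 1
    grad_x = (p - y) * w         # d(loss)/dx
    return x + eps * np.sign(grad_x)

# Toy model: fixed weights and a confidently, correctly classified input of class 1
w = np.array([2.0, -1.0, 0.5])
x = np.array([1.0, -1.0, 1.0])   # w @ x = 3.5, so p ≈ 0.97
x_adv = fgsm_perturb(x, y=1.0, w=w, eps=2.0)

# Each feature moved by at most eps, yet the prediction flips
print(sigmoid(w @ x) > 0.5, sigmoid(w @ x_adv) > 0.5)  # prints: True False
```

The point of the sketch is that the perturbation is bounded per feature (at most `eps` in absolute value) but chosen adversarially, which is exactly the threat model the listed papers defend against; real attacks apply the same idea to deep networks via automatic differentiation.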