Showing 1 - 10 of 3,427 results for search: '"Adversarial attack"'
Published in:
Visual Intelligence, Vol 2, Iss 1, Pp 1-17 (2024)
Abstract In recent years, defending against adversarial examples has gained significant importance, leading to a growing body of research in this area. Among these studies, pre-processing defense approaches have emerged as a prominent research direction…
External link:
https://doaj.org/article/e76dea8d69e54007b5aed81e05df1263
Published in:
工程科学学报, Vol 46, Iss 9, Pp 1630-1637 (2024)
Deep neural network-based video classification models enjoy widespread use because of their superior performance on visual tasks. However, with their broad-based application comes a deep-rooted concern about their security. Recent research signals…
External link:
https://doaj.org/article/870860a303d249c29d70568f78b5c89c
Published in:
AI, Vol 5, Iss 3, Pp 1216-1234 (2024)
Recent studies have exposed the vulnerabilities of deep neural networks to some carefully perturbed input data. We propose a novel untargeted white-box adversarial attack, the dynamic programming-based sub-pixel score method (SPSM) attack (DPSPSM), which…
External link:
https://doaj.org/article/ccb0129471324c8eabdde1cee7ed7aa8
Author:
Yuwei Chen, Shiyong Chu
Published in:
Frontiers in Computer Science, Vol 6 (2024)
Deep learning-based aerial detection is an essential component in modern aircraft, providing fundamental functions such as navigation and situational awareness. Though promising, aerial detection has been shown to be vulnerable to adversarial attacks…
External link:
https://doaj.org/article/4b2a3bd8af5c490badb6faaec72675d0
Published in:
Радіоелектронні і комп'ютерні системи, Vol 2024, Iss 3, Pp 55-66 (2024)
Neural network object detectors are increasingly being used for aerial video analysis, with a growing demand for onboard processing on UAVs and other resource-constrained platforms. However, the vulnerability of neural networks to adversarial noise, out-of-distribution…
External link:
https://doaj.org/article/fb0518b83e8c4af6909e2c785f31707d
Published in:
Complex & Intelligent Systems, Vol 10, Iss 6, Pp 8355-8382 (2024)
Abstract In this paper, based on facial landmark approaches, the possible vulnerability of ensemble algorithms to the FGSM attack has been assessed using three commonly used models: convolutional neural network-based antialiasing (A_CNN), Xc_Deep2-ba…
External link:
https://doaj.org/article/3116a8c15dca4ab6b74ce53f6b83ecb3
Published in:
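For orientation, the FGSM attack assessed in the record above is the classic fast gradient sign method: perturb the input by a small step along the sign of the loss gradient. A minimal sketch with a toy logistic-regression "model" (the weights, input, and labels here are made up for illustration, not taken from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, grad, eps):
    """FGSM: step of size eps along the sign of the loss gradient w.r.t. the input,
    keeping features inside the valid [0, 1] range."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy linear classifier standing in for a DNN (hypothetical weights).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.6, 0.4])   # clean input, features in [0, 1]
y = 1                      # true label

p_clean = sigmoid(w @ x + b)
# For sigmoid + cross-entropy, d(loss)/dx = (p - y) * w (analytic gradient).
grad = (p_clean - y) * w

x_adv = fgsm_perturb(x, grad, eps=0.1)
p_adv = sigmoid(w @ x_adv + b)
# The perturbation lowers the model's confidence in the true class.
```

The same one-step rule applies to deep models; only the gradient computation changes (backpropagation instead of the closed form above).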
International Journal of Computational Intelligence Systems, Vol 17, Iss 1, Pp 1-16 (2024)
Abstract Adversarial patches, a type of adversarial example, pose serious security threats to deep neural networks (DNNs) by inducing erroneous outputs. Existing gradient stabilization methods aim to stabilize the optimization direction of adversarial…
External link:
https://doaj.org/article/19206d3a309a44ea8e8093627c459d08
Author:
Khaleel Yahya Layth, Habeeb Mustafa Abdulfattah, Albahri A. S., Al-Quraishi Tahsien, Albahri O. S., Alamoodi A. H.
Published in:
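Unlike the imperceptible noise of FGSM-style attacks, an adversarial patch overwrites a localized image region with optimized pixels. A minimal numpy sketch of the application step only; the patch optimization itself (which the gradient-stabilization methods in the abstract target) is out of scope, and all arrays here are made up:

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Paste a patch into an image at (top, left), leaving the rest untouched.
    In a real attack the patch contents come from a separate optimization loop."""
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

image = np.zeros((8, 8))   # hypothetical 8x8 grayscale input
patch = np.ones((3, 3))    # stand-in for an optimized adversarial patch
patched = apply_patch(image, patch, top=2, left=2)
```

Because the patch occupies a fixed region rather than the whole input, it can be printed and placed in a physical scene, which is what makes patch attacks a practical threat.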
Journal of Intelligent Systems, Vol 33, Iss 1, Pp 122223-78 (2024)
This study aims to perform a thorough systematic review investigating and synthesizing existing research on defense strategies and methodologies against adversarial attacks using machine learning (ML) and deep learning methods. The review was conducted…
External link:
https://doaj.org/article/3474733128cd43279488cca2a0e6d7c7
Published in:
Machine Learning and Knowledge Extraction, Vol 6, Iss 3, Pp 1545-1563 (2024)
Images and text have become essential parts of the multimodal machine learning (MMML) framework in today’s world because data are always available and technological breakthroughs bring disparate forms together; while text adds semantic richness…
External link:
https://doaj.org/article/783c570368f349d5a77501df3c7bcd35
Published in:
Complex & Intelligent Systems, Vol 10, Iss 5, Pp 6825-6837 (2024)
Abstract Significant structural differences in DNN-based object detectors hinder the transferability of adversarial attacks. Studies show that intermediate features extracted by the detector contain more model-independent information, and disrupting…
External link:
https://doaj.org/article/372584eec4da4ccdafc23686cbfd546c