Showing 1 - 10 of 515 for search: '"Data poisoning"'
Published in:
Digital Communications and Networks, Vol 10, Iss 2, Pp 416-428 (2024)
The security of Federated Learning (FL)/Distributed Machine Learning (DML) is gravely threatened by data poisoning attacks, which destroy the usability of the model by contaminating training samples, so such attacks are called causative availability …
External link:
https://doaj.org/article/482c07dc7e154d4ab3e527506cbfe2aa
Published in:
IEEE Open Journal of the Communications Society, Vol 5, Pp 7278-7300 (2024)
Federated Learning (FL) has transformed machine learning by facilitating decentralized, privacy-focused data processing. Despite its advantages, FL remains vulnerable to data poisoning attacks, particularly Label-Flipping Attacks (LFA). In LFA, malicious …
External link:
https://doaj.org/article/5c52861f5a644afb935179bbccac4d0b
Author:
Jueal Mia, M. Hadi Amini
Published in:
IEEE Open Journal of Intelligent Transportation Systems, Vol 5, Pp 495-508 (2024)
Federated Learning is a decentralized machine learning technique that creates a global model by aggregating local models from multiple edge devices without a need to access the local data. However, due to the distributed nature of federated learning, …
External link:
https://doaj.org/article/2231ae17b10f4a4cb3e1752cbe85b312
Published in:
IEEE Access, Vol 12, Pp 114057-114072 (2024)
Amidst the recent technological breakthroughs and increased integration of Artificial Intelligence (AI) technologies across various domains, it is imperative to consider the myriad security threats posed by AI. One of the significant attack vectors of …
External link:
https://doaj.org/article/693dc37952594dc9bd592cffd9c72f3c
Author:
Sahaya Beni Prathiba, Yeshwanth Govindarajan, Vishal Pranav Amirtha Ganesan, Anirudh Ramachandran, Arikumar K. Selvaraj, Ali Kashif Bashir, Thippa Reddy Gadekallu
Published in:
IEEE Access, Vol 12, Pp 68968-68980 (2024)
Ensuring robustness against adversarial attacks is imperative for Machine Learning (ML) systems within the critical infrastructures of the Industrial Internet of Things (IIoT). This paper addresses vulnerabilities in IIoT systems, particularly in distributed …
External link:
https://doaj.org/article/402846f923ec4a1799b5ab59ab5eafe2
Published in:
IEEE Access, Vol 12, Pp 33843-33851 (2024)
The motivation for the development of multi-exit networks (MENs) lies in the desire to minimize the delay and energy consumption associated with the inference phase. Moreover, MENs are designed to expedite predictions for easily identifiable inputs by …
External link:
https://doaj.org/article/375fdbd74a284b5195f9843324df5de6
Published in:
IEEE Access, Vol 12, Pp 11674-11687 (2024)
As Generative Adversarial Networks advance, deepfakes have become increasingly realistic, thereby escalating societal, economic, and political threats. In confronting these heightened risks, the research community has identified two promising defensive …
External link:
https://doaj.org/article/891010f0a39542f9adad4a32b95b27e5
Author:
Visger, Mark A.
Published in:
Big Data and Armed Conflict : Legal Issues Above and Below the Armed Conflict Threshold, 2024.
External link:
https://doi.org/10.1093/oso/9780197668610.003.0008
Published in:
Sensors, Vol 24, Iss 19, p 6416 (2024)
In this paper, we introduce a security approach for on-device learning Edge AIs designed to detect abnormal conditions in factory machines. Since Edge AIs are easily accessible by an attacker physically, there are security risks due to physical attacks …
External link:
https://doaj.org/article/b3a87a7e1d1b42eebb99e62186603b25
Published in:
Applied Sciences, Vol 14, Iss 19, p 8742 (2024)
Deep Generative Models (DGMs), as a state-of-the-art technology in the field of artificial intelligence, find extensive applications across various domains. However, their security concerns have increasingly gained prominence, particularly with regard …
External link:
https://doaj.org/article/6540fff00f54480cbbfa3da04ada6bc6