Showing 1 - 10 of 524 for search: '"Poisoning attacks"'
Author:
Suzan Almutairi, Ahmed Barnawi
Published in:
Results in Engineering, Vol 24, Pp 103295- (2024)
Due to the increase in data regulations amid rising privacy concerns, the machine learning (ML) community has proposed a novel distributed training paradigm called federated learning (FL). FL enables untrusted groups of clients to train collaborative …
External link:
https://doaj.org/article/52c072de0c8d47f6b08c992391623659
Published in:
Digital Communications and Networks, Vol 10, Iss 2, Pp 416-428 (2024)
The security of Federated Learning (FL)/Distributed Machine Learning (DML) is gravely threatened by data poisoning attacks, which destroy the usability of the model by contaminating training samples, so such attacks are called causative availability …
External link:
https://doaj.org/article/482c07dc7e154d4ab3e527506cbfe2aa
Author:
Almutairi, Suzan; Barnawi, Ahmed
Published in:
Results in Engineering, Vol 24, December 2024
Published in:
Applied Network Science, Vol 9, Iss 1, Pp 1-31 (2024)
Abstract: Vertex classification using graph convolutional networks is susceptible to targeted poisoning attacks, in which both graph structure and node attributes can be changed in an attempt to misclassify a target node. This vulnerability decreases …
External link:
https://doaj.org/article/13df139afd7f4534bfd44ab9ee3f81f0
Author:
Abdul Majeed, Seong Oun Hwang
Published in:
IEEE Access, Vol 12, Pp 84643-84679 (2024)
Federated learning (FL) is considered a de facto standard for privacy preservation in AI environments because it does not require data to be aggregated in some central place to train an AI model. Preserving data on the client side and sharing only th …
External link:
https://doaj.org/article/90dda5207b0a4167b1d079c28539b60a
Published in:
Applied Sciences, Vol 14, Iss 22, p 10706 (2024)
Federated learning is a new paradigm where multiple data owners, referred to as clients, work together with a global server to train a shared machine learning model without disclosing their personal training data. Despite its many advantages, the sys …
External link:
https://doaj.org/article/43f7faf512f24ff2995bae2901e081e1
Academic article (sign-in required to view this record)
Published in:
IEEE Access, Vol 11, Pp 10708-10722 (2023)
Federated learning faces many security and privacy issues. Among them, poisoning attacks can significantly impact global models, and malicious attackers can prevent global models from converging or even manipulate the prediction results of global m …
External link:
https://doaj.org/article/97f862c2e15d484a9279149a10b9bb90
Published in:
Algorithms, Vol 17, Iss 4, p 155 (2024)
The rapid expansion and pervasive reach of the internet in recent years have raised concerns about evolving and adaptable online threats, particularly with the extensive integration of Machine Learning (ML) systems into our daily routines. These syst …
External link:
https://doaj.org/article/a77769e949d44d979ce522e00a46b2a0
Published in:
Applied Sciences, Vol 14, Iss 8, p 3255 (2024)
In this study, we introduce a novel collaborative federated learning (FL) framework, aiming at enhancing robustness in distributed learning environments, particularly pertinent to IoT and industrial automation scenarios. At the core of our contributi …
External link:
https://doaj.org/article/986da8e5caa541b4a2d2ca1095e284cb