Showing 1 - 10 of 13 results for search: '"Yalame, Hossein"'
In August 2021, Liu et al. (IEEE TIFS'21) proposed a privacy-enhanced framework named PEFL to efficiently detect poisoning behaviours in Federated Learning (FL) using homomorphic encryption. In this article, we show that PEFL does not preserve privacy…
External link:
http://arxiv.org/abs/2409.19964
The success of machine learning (ML) has been accompanied by increased concerns about its trustworthiness. Several jurisdictions are preparing ML regulatory frameworks. One such concern is ensuring that model training data has desirable distributional…
External link:
http://arxiv.org/abs/2308.09552
Author:
Marx, Felix, Schneider, Thomas, Suresh, Ajith, Wehrle, Tobias, Weinert, Christian, Yalame, Hossein
Federated learning (FL) is an efficient approach for large-scale distributed machine learning that promises data privacy by keeping training data on client devices. However, recent research has uncovered vulnerabilities in FL, impacting both security…
External link:
http://arxiv.org/abs/2302.09904
Author:
Ben-Itzhak, Yaniv, Möllering, Helen, Pinkas, Benny, Schneider, Thomas, Suresh, Ajith, Tkachenko, Oleksandr, Vargaftik, Shay, Weinert, Christian, Yalame, Hossein, Yanai, Avishay
Secure aggregation is commonly used in federated learning (FL) to alleviate privacy concerns related to the central aggregator seeing all parameter updates in the clear. Unfortunately, most existing secure aggregation schemes ignore two critical orthogonal…
External link:
http://arxiv.org/abs/2210.07376
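As background for the entry above, the core idea of secure aggregation — the server learns only the sum of client updates, never any individual update — can be illustrated with a minimal pairwise additive-masking sketch. This is an assumed, simplified example for intuition only, not the protocol from the cited paper; all names and the modulus choice are illustrative.

```python
import random

P = 2**31 - 1  # illustrative modulus for masking arithmetic

def mask_updates(updates):
    """Each pair of clients (i, j) agrees on a random mask m;
    client i adds m and client j subtracts m, so the masks cancel
    in the sum while hiding every individual update."""
    n = len(updates)
    masked = list(updates)
    for i in range(n):
        for j in range(i + 1, n):
            m = random.randrange(P)
            masked[i] = (masked[i] + m) % P
            masked[j] = (masked[j] - m) % P
    return masked

updates = [3, 5, 7]             # clients' (scalar) model updates
masked = mask_updates(updates)
# The aggregator sums the masked values; the pairwise masks cancel,
# so only the total sum of updates is recoverable.
assert sum(masked) % P == sum(updates) % P
```

In a real deployment the pairwise masks would be derived from key agreement between clients (with dropout handling), which is exactly the kind of machinery full secure-aggregation protocols add on top of this basic cancellation trick.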
Author:
Harth-Kitzerow, Christopher, Suresh, Ajith, Wang, Yongqin, Yalame, Hossein, Carle, Georg, Annavaram, Murali
In this work, we present novel protocols over rings for semi-honest secure three-party computation (3PC) and malicious four-party computation (4PC) with one corruption. While most existing works focus on improving total communication complexity, challenges…
External link:
http://arxiv.org/abs/2206.03776
Author:
Nguyen, Thien Duc, Rieger, Phillip, Chen, Huili, Yalame, Hossein, Möllering, Helen, Fereidooni, Hossein, Marchal, Samuel, Miettinen, Markus, Mirhoseini, Azalia, Zeitouni, Shaza, Koushanfar, Farinaz, Sadeghi, Ahmad-Reza, Schneider, Thomas
Federated Learning (FL) is a collaborative machine learning approach allowing participants to jointly train a model without having to share their private, potentially sensitive local datasets with others. Despite its benefits, FL is vulnerable to backdoor…
External link:
http://arxiv.org/abs/2101.02281
Author:
Marx, Felix, Schneider, Thomas, Suresh, Ajith, Wehrle, Tobias, Weinert, Christian, Yalame, Hossein
Federated learning (FL) has emerged as an efficient approach for large-scale distributed machine learning, ensuring data privacy by keeping training data on client devices. However, recent research has highlighted vulnerabilities in FL, including the…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::8637255c276660c85a071a6fdaf5af52
Academic article
This result is only visible to logged-in users.
Published in:
ACM International Conference Proceeding Series; 8/25/2020, p1-10, 10p