Showing 1 - 9 of 9 for search: '"Ozfatura, Kerem"'
Federated learning (FL) has been introduced to enable a large number of clients, possibly mobile devices, to collaborate on generating a generalized machine learning model by utilizing a larger number of local samples without sharing, to offer …
External link:
http://arxiv.org/abs/2404.06230
The increasing popularity of the federated learning (FL) framework, due to its success in a wide range of collaborative learning tasks, also induces certain security concerns. Among many vulnerabilities, the risk of Byzantine attacks is of particular concern …
External link:
http://arxiv.org/abs/2208.09894
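The Byzantine risk mentioned in the abstract above is commonly countered by replacing the server's plain averaging with a robust aggregation rule. The paper's specific defense is not reproduced here; as a hedged illustration under standard assumptions, a coordinate-wise median baseline can be sketched as follows (function and variable names are illustrative):

```python
from statistics import median

def coordinate_wise_median(updates):
    """Aggregate client model updates coordinate by coordinate.

    Unlike plain averaging, the per-coordinate median tolerates a
    minority of arbitrarily corrupted (Byzantine) updates.
    """
    return [median(coords) for coords in zip(*updates)]

# Three honest clients plus one Byzantine client sending huge values.
honest = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]
byzantine = [[1e6, -1e6]]
agg = coordinate_wise_median(honest + byzantine)
# The outlier is suppressed; each coordinate stays near 1.0.
```

With plain averaging, the single Byzantine client would drag both coordinates to the order of 1e5; the median keeps the aggregate close to the honest consensus.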
Author:
Wilhelmi, Francesc, Hribar, Jernej, Yilmaz, Selim F., Ozfatura, Emre, Ozfatura, Kerem, Yildiz, Ozlem, Gündüz, Deniz, Chen, Hao, Ye, Xiaoying, You, Lizhao, Shao, Yulin, Dini, Paolo, Bellalta, Boris
As wireless standards evolve, more complex functionalities are introduced to address the increasing requirements in terms of throughput, latency, security, and efficiency. To unleash the potential of such new features, artificial intelligence (AI) and …
External link:
http://arxiv.org/abs/2203.10472
A common observation regarding adversarial attacks is that they mostly give rise to false activations at the penultimate layer to fool the classifier. Assuming that these activation values correspond to certain features of the input, the objective becomes …
External link:
http://arxiv.org/abs/2106.10252
Federated learning (FL) enables multiple clients to collaboratively train a shared model without disclosing their local datasets. This is achieved by exchanging local model updates with the help of a parameter server (PS). However, due to the increasing …
External link:
http://arxiv.org/abs/2101.08837
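The exchange of local model updates through a parameter server described in the abstract above is the FedAvg pattern. As a minimal sketch under simplified assumptions (models as flat weight vectors, gradients supplied directly; all names are illustrative):

```python
def local_sgd_step(weights, gradient, lr=0.1):
    """One local SGD step on a client, using its private gradient."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def server_aggregate(client_weights):
    """Parameter server (PS): average the clients' updated models."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# One communication round: broadcast, local update, aggregate.
global_model = [0.0, 0.0]
client_grads = [[1.0, -1.0], [3.0, 1.0]]  # stand-ins for real local gradients
updated = [local_sgd_step(global_model, g) for g in client_grads]
global_model = server_aggregate(updated)
# Only model parameters travel; the raw local data never leaves a client.
```

Each round, clients download the global model, take local steps on private data, and upload only the resulting weights, which the PS averages into the next global model.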
Federated learning (FL) has become the de facto framework for collaborative learning among edge devices with privacy concerns. The core of the FL strategy is the use of stochastic gradient descent (SGD) in a distributed manner. Large-scale implementation …
External link:
http://arxiv.org/abs/2012.09102
Distributed learning, particularly variants of distributed stochastic gradient descent (DSGD), is widely employed to speed up training by leveraging the computational resources of several workers. However, in practice, communication delay becomes a bottleneck …
External link:
http://arxiv.org/abs/2011.06495
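The DSGD setup referenced above, where the server must wait for every worker before stepping, can be sketched in its synchronous form (worker count, data, and names are illustrative; the paper's own delay-mitigation scheme is not shown):

```python
def dsgd_round(weights, worker_grads, lr=0.01):
    """Synchronous DSGD: collect gradients from all workers, average
    them, and take one descent step. Because the step waits for every
    worker, a single slow link creates the communication bottleneck
    the abstract refers to.
    """
    n = len(worker_grads)
    avg_grad = [sum(gs) / n for gs in zip(*worker_grads)]
    return [w - lr * g for w, g in zip(weights, avg_grad)]

weights = [1.0, 2.0]
grads_from_workers = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
weights = dsgd_round(weights, grads_from_workers)
# Average gradient is [5.0, 5.0]; each weight moves down by lr * 5 = 0.05.
```

Asynchronous and delay-tolerant variants relax the barrier so fast workers need not idle, at the cost of applying stale gradients.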
Academic article
This result cannot be displayed to users who are not logged in.
You must log in to view this result.
Author:
Wilhelmi, Francesc, Hribar, Jernej, Yilmaz, Selim F., Ozfatura, Emre, Ozfatura, Kerem, Yildiz, Ozlem, Gündüz, Deniz, Chen, Hao, Ye, Xiaoying, You, Lizhao, Shao, Yulin, Dini, Paolo, Bellalta, Boris
Published in:
ITU Journal on Future and Evolving Technologies. 3(2):117-133
As wireless standards evolve, more complex functionalities are introduced to address the increasing requirements in terms of throughput, latency, security, and efficiency. To unleash the potential of such new features, artificial intelligence (AI) and …