Showing 1 - 10 of 10 for search: '"Hasircioglu, Burak"'
We consider collaborative inference at the wireless edge, where each client's model is trained independently on their local datasets. Clients are queried in parallel to make an accurate decision collaboratively. In addition to maximizing the inference…
External link:
http://arxiv.org/abs/2407.21151
Author:
Hasircioglu, Burak, Gunduz, Deniz
The task of preserving privacy while ensuring efficient communication is a fundamental challenge in federated learning. In this work, we tackle this challenge in the trusted aggregator model, and propose a solution that achieves both objectives simultaneously…
External link:
http://arxiv.org/abs/2309.07809
Author:
Hasircioglu, Burak, Gunduz, Deniz
Running a randomized algorithm on a subsampled dataset instead of the entire dataset amplifies differential privacy guarantees. In this work, in a federated setting, we consider random participation of the clients in addition to subsampling their local datasets…
External link:
http://arxiv.org/abs/2205.01556
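The amplification effect mentioned in this abstract can be illustrated with the classical pure-DP subsampling bound (the paper's joint analysis of random participation plus local subsampling is more refined; this is only a textbook sketch, and all names below are illustrative):

```python
import math

def amplified_epsilon(eps, q):
    # Classical amplification-by-subsampling bound for pure eps-DP:
    # running an eps-DP mechanism on a Poisson-subsampled fraction q
    # of the data satisfies ln(1 + q*(e^eps - 1))-DP, which is <= eps.
    return math.log1p(q * math.expm1(eps))

base_eps = 1.0
print(amplified_epsilon(base_eps, 0.1))  # ≈ 0.159, well below base_eps

# Under this particular bound, two independent sampling stages (client
# participation with rate p, then local subsampling with rate q) chain
# cleanly: amplifying twice equals amplifying once with rate p*q.
p, q = 0.5, 0.2
chained = amplified_epsilon(amplified_epsilon(base_eps, q), p)
single = amplified_epsilon(base_eps, p * q)
```

Using `log1p`/`expm1` keeps the computation accurate for small `q` and small `eps`, where the naive formula loses precision.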
We consider distributed inference at the wireless edge, where multiple clients with an ensemble of models, each trained independently on a local dataset, are queried in parallel to make an accurate decision on a new sample. In addition to maximizing…
External link:
http://arxiv.org/abs/2202.03129
We consider the problem of secure distributed matrix multiplication (SDMM). Coded computation has been shown to be an effective solution in distributed matrix multiplication, both providing privacy against workers and boosting the computation speed by…
External link:
http://arxiv.org/abs/2106.07731
We consider the problem of private distributed matrix multiplication under limited resources. Coded computation has been shown to be an effective solution in distributed matrix multiplication, both providing privacy against the workers and boosting the…
External link:
http://arxiv.org/abs/2102.08304
Author:
Malekzadeh, Mohammad, Hasircioglu, Burak, Mital, Nitish, Katarya, Kunal, Ozfatura, Mehmet Emre, Gündüz, Deniz
While rich medical datasets are hosted in hospitals distributed across the world, concerns about patients' privacy are a barrier against using such data to train deep neural networks (DNNs) for medical diagnostics. We propose Dopamine, a system to train…
External link:
http://arxiv.org/abs/2101.11693
Author:
Hasircioglu, Burak, Gunduz, Deniz
In conventional federated learning (FL), differential privacy (DP) guarantees can be obtained by injecting additional noise to local model updates before transmitting to the parameter server (PS). In the wireless FL scenario, we show that the privacy…
External link:
http://arxiv.org/abs/2011.08579
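The conventional noise-injection baseline this abstract contrasts with (the paper's own point concerns the wireless channel's noise; this sketch only shows the standard clip-then-add-Gaussian-noise step, with illustrative function and parameter names):

```python
import math
import random

def privatize_update(update, clip_norm, noise_multiplier, rng):
    # Clip the local model update to a bounded L2 norm (the sensitivity
    # bound), then add Gaussian noise scaled to that bound before the
    # update leaves the client.
    norm = math.sqrt(sum(u * u for u in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [u * scale for u in update]
    sigma = noise_multiplier * clip_norm
    return [c + rng.gauss(0.0, sigma) for c in clipped]

rng = random.Random(0)
noisy = privatize_update([3.0, 4.0], clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

With `noise_multiplier=0` the function reduces to plain L2 clipping, which is a convenient sanity check.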
Coded computing is an effective technique to mitigate "stragglers" in large-scale and distributed matrix multiplication. In particular, univariate polynomial codes have been shown to be effective in straggler mitigation by making the computation time…
External link:
http://arxiv.org/abs/2001.07227
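The univariate polynomial codes referenced here can be sketched on a toy instance (this shows only the basic coding/straggler idea, not the paper's construction, and omits the random masking terms that SDMM adds for privacy; all names are illustrative):

```python
from fractions import Fraction

def mat_scale(M, s):
    return [[x * s for x in row] for row in M]

def mat_add(M, N):
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(M, N)]

def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def poly_mul_linear(p, root):
    # Multiply polynomial p (coefficients low-to-high) by (x - root).
    q = [Fraction(0)] * (len(p) + 1)
    for k, c in enumerate(p):
        q[k] -= c * root
        q[k + 1] += c
    return q

def interpolate_coeffs(xs, ys):
    # Lagrange interpolation: recover the coefficients of the unique
    # degree-(n-1) polynomial through the points (xs[i], ys[i]).
    n = len(xs)
    coeffs = [Fraction(0)] * n
    for i in range(n):
        basis, denom = [Fraction(1)], Fraction(1)
        for j in range(n):
            if j != i:
                basis = poly_mul_linear(basis, xs[j])
                denom *= xs[i] - xs[j]
        for k in range(n):
            coeffs[k] += ys[i] * basis[k] / denom
    return coeffs

# Toy instance: split A into two row blocks and B into two column blocks.
A = [[Fraction(1), Fraction(2)], [Fraction(3), Fraction(4)]]
B = [[Fraction(5), Fraction(6)], [Fraction(7), Fraction(8)]]
A0, A1 = [A[0]], [A[1]]                    # 1x2 row blocks of A
B0 = [[B[0][0]], [B[1][0]]]                # 2x1 column blocks of B
B1 = [[B[0][1]], [B[1][1]]]

# Encoding: the worker at evaluation point x gets pA(x) = A0 + A1*x and
# pB(x) = B0 + B1*x^2, so pA(x)*pB(x) = A0B0 + A1B0*x + A0B1*x^2 + A1B1*x^3.
# Any 4 workers' products determine all four sub-products; extra workers
# are pure redundancy, which is what tolerates stragglers.
points = [Fraction(p) for p in (1, 2, 3, 4)]
worker_results = []
for x in points:
    pA = mat_add(A0, mat_scale(A1, x))
    pB = mat_add(B0, mat_scale(B1, x * x))
    worker_results.append(mat_mul(pA, pB)[0][0])   # scalar here; a matrix in general

# Decoding: interpolate the degree-3 product polynomial and reassemble C.
c0, c1, c2, c3 = interpolate_coeffs(points, worker_results)
C = [[c0, c2], [c1, c3]]                   # C[i][j] = A_i * B_j
assert C == mat_mul(A, B)
```

Exact `Fraction` arithmetic sidesteps the conditioning issues of floating-point Vandermonde interpolation; real systems work over finite fields instead.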
Coded computing is an effective technique to mitigate "stragglers" in large-scale and distributed matrix multiplication. In particular, univariate polynomial codes have been shown to be effective in straggler mitigation by making the computation time…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::04436597cfd0ef63d32384f7de4efafb