Showing 1 - 10 of 88 for the search: '"Ali, Ramy"'
Data privacy is a major concern in cloud machine-learning service platforms, where sensitive data are exposed to service providers. While private computing environments (e.g., secure enclaves) and cryptographic approaches (e.g., homomorphic encryption) …
External link:
http://arxiv.org/abs/2312.05264
Author:
Ali, Ramy E.
Cadambe and Lyu (2021) present an erasure-coding-based algorithm called CausalEC that ensures causal consistency via cross-object erasure coding. This note shows that the algorithm presented in Cadambe and Lyu (2021) and the main ideas behind it are …
External link:
http://arxiv.org/abs/2305.12699
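The snippet above concerns causal consistency built on cross-object erasure coding. As a minimal, hypothetical illustration of the erasure-coding primitive itself (not of the CausalEC algorithm), a single XOR parity block lets any one erased block be rebuilt from the survivors:

```python
from functools import reduce

def xor(a, b):
    """Byte-wise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data_blocks):
    """k equal-length data blocks -> k + 1 coded blocks (last one is the XOR parity)."""
    return list(data_blocks) + [reduce(xor, data_blocks)]

def recover(coded_blocks, missing_index):
    """Rebuild a single erased block by XORing all the surviving blocks."""
    survivors = [b for i, b in enumerate(coded_blocks) if i != missing_index]
    return reduce(xor, survivors)
```

Any one missing block, data or parity, equals the XOR of the rest; CausalEC's cross-object coding is far more general, but the recover-from-survivors principle is the same.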
Federated learning (FL) typically relies on synchronous training, which is slow due to stragglers. While asynchronous training handles stragglers efficiently, it does not ensure privacy because of its incompatibility with secure aggregation protocols …
External link:
http://arxiv.org/abs/2110.02177
Leveraging parallel hardware (e.g., GPUs) for deep neural network (DNN) training brings high computing performance. However, it raises data privacy concerns, as GPUs lack a trusted environment to protect the data. Trusted execution environments (TEEs) …
External link:
http://arxiv.org/abs/2110.01229
Author:
So, Jinhyun, He, Chaoyang, Yang, Chien-Sheng, Li, Songze, Yu, Qian, Ali, Ramy E., Guler, Basak, Avestimehr, Salman
Secure model aggregation is a key component of federated learning (FL) that aims at protecting the privacy of each user's individual model while allowing for their global aggregation. It can be applied to any aggregation-based FL approach for training …
External link:
http://arxiv.org/abs/2109.14236
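To make the idea concrete, here is a toy sketch of pairwise additive masking, the standard mechanism behind secure aggregation: each pair of users shares a random mask that one adds and the other subtracts, so the server sees only masked models yet recovers the exact sum. The shared-integer-seed derivation below is an assumption for illustration; real protocols derive pair seeds via key agreement and must also handle dropouts.

```python
import random

def pairwise_masks(user_ids, dim, round_seed=0):
    """One shared random mask per user pair; the pair's contributions cancel in the sum.
    A real protocol derives the shared seed via key agreement (e.g., Diffie-Hellman);
    the integer formula below is a toy stand-in (assumption)."""
    masks = {u: [0] * dim for u in user_ids}
    for i, u in enumerate(user_ids):
        for v in user_ids[i + 1:]:
            seed = round_seed * 1_000_003 + u * 1_009 + v  # toy shared pair seed
            rng = random.Random(seed)
            m = [rng.randrange(1 << 16) for _ in range(dim)]
            masks[u] = [a + b for a, b in zip(masks[u], m)]  # u adds the pair mask
            masks[v] = [a - b for a, b in zip(masks[v], m)]  # v subtracts it
    return masks

def aggregate(models):
    """Server sums the masked updates; individual models stay hidden, masks cancel."""
    users = sorted(models)
    dim = len(models[users[0]])
    masks = pairwise_masks(users, dim)
    masked = {u: [x + m for x, m in zip(models[u], masks[u])] for u in users}
    return [sum(vals) for vals in zip(*(masked[u] for u in users))]
```

For example, `aggregate({1: [1, 2], 2: [3, 4], 3: [5, 6]})` returns the exact sum `[9, 12]` even though each individual `masked[u]` reveals nothing about `models[u]` on its own.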
Due to the surge of cloud-assisted AI services, the problem of designing resilient prediction serving systems that can effectively cope with stragglers/failures and minimize response delays has attracted much interest. The common approach for tackling …
External link:
http://arxiv.org/abs/2109.09868
Author:
Tang, Tingting, Ali, Ramy E., Hashemi, Hanieh, Gangwani, Tynan, Avestimehr, Salman, Annavaram, Murali
Stragglers, Byzantine workers, and data privacy are the main bottlenecks in distributed cloud computing. Some prior works proposed coded computing strategies to jointly address all three challenges. They require either a large number of workers, a si…
External link:
http://arxiv.org/abs/2107.12958
Published in:
AAAI 2023
Secure aggregation is a critical component in federated learning (FL), which enables the server to learn the aggregate model of the users without observing their local models. Conventionally, secure aggregation algorithms focus only on ensuring the privacy …
External link:
http://arxiv.org/abs/2106.03328
We consider the problem of coded computing, where a computational task is performed in a distributed fashion in the presence of adversarial workers. We propose techniques to break the adversarial toleration threshold barrier previously known in coded computing …
External link:
http://arxiv.org/abs/2101.11653
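This line of work builds on polynomial-coded schemes such as Lagrange coded computing. As a hedged sketch (straggler tolerance only; the adversary-decoding machinery the abstract alludes to is beyond this toy), the master encodes k = 2 data chunks into n = 4 shares by evaluating an interpolating polynomial, each worker applies f to its share, and any deg(f)*(k-1) + 1 = 3 results suffice to recover f on the original data:

```python
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate the unique polynomial through `points` at x (exact arithmetic)."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# Master: encode k = 2 data chunks (values at the alphas) into n = 4 shares (at the betas)
data = [5, 7]
alphas = [1, 2]
betas = [3, 4, 5, 6]
shares = [lagrange_eval(list(zip(alphas, data)), b) for b in betas]  # degree-1 encoding

# Workers: each applies the target polynomial f(x) = x^2 to its own share
results = [s * s for s in shares]

# Master: f(u(x)) has degree deg(f) * (k - 1) = 2, so any 3 of the 4 worker
# results interpolate it; evaluating at the alphas reads off f on the data.
recovered = [lagrange_eval(list(zip(betas[:3], results[:3])), a) for a in alphas]
```

With one straggler tolerated here; handling adversarial (rather than merely missing) results is exactly the threshold question the abstract refers to.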
Outsourcing deep neural network (DNN) inference tasks to an untrusted cloud raises data privacy and integrity concerns. While there are many techniques to ensure privacy and integrity for polynomial-based computations, DNNs involve non-polynomial computations …
External link:
http://arxiv.org/abs/2011.05530
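The sentence is cut off at the non-polynomial operations (e.g., ReLU activations). A common workaround in this area (not necessarily this paper's exact method) is to replace them with low-degree polynomial approximations so that techniques for polynomial computations apply. A generic least-squares quadratic fit of ReLU on [-2, 2]:

```python
def polyfit2(xs, ys):
    """Least-squares quadratic fit c0 + c1*x + c2*x^2 via the 3x3 normal equations."""
    n = 3
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in reversed(range(n)):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, n))) / A[r][r]
    return coef

xs = [i / 10 for i in range(-20, 21)]          # grid on [-2, 2]
relu = [max(0.0, x) for x in xs]
c0, c1, c2 = polyfit2(xs, relu)
max_err = max(abs(c0 + c1 * x + c2 * x * x - r) for x, r in zip(xs, relu))
```

On a symmetric grid the linear coefficient comes out as exactly 0.5 (the odd part of ReLU is x/2), and the quadratic term absorbs part of the even part |x|/2; the worst-case error of such a low-degree fit is what drives the accuracy trade-offs in this setting.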