Showing 1 - 10 of 27 results for the search: '"Kamani, Mohammad Mahdi"'
This paper advocates a new paradigm, Personalized Empirical Risk Minimization (PERM), to facilitate learning from heterogeneous data sources without imposing stringent constraints on the computational resources shared by participating devices. In PERM, we …
External link:
http://arxiv.org/abs/2310.17761
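As a rough illustration of the personalized-risk idea in the entry above, here is a minimal sketch in which each client minimizes a weighted combination of all clients' empirical risks. The weighting scheme, model class, and optimizer are illustrative assumptions, not the paper's actual algorithm.

```python
# Minimal sketch of a PERM-style objective (hypothetical, not the paper's exact method):
# client i fits its own model by minimizing a weighted sum of all clients' empirical
# risks, with weights reflecting how relevant client j's data is to client i.
import numpy as np

def client_loss(theta, X, y):
    """Squared-error empirical risk of a linear model on one client's data."""
    return np.mean((X @ theta - y) ** 2)

def perm_objective(theta_i, datasets, weights_i):
    """Personalized risk for client i: sum_j w_ij * L_j(theta_i)."""
    return sum(w * client_loss(theta_i, X, y)
               for w, (X, y) in zip(weights_i, datasets))

rng = np.random.default_rng(0)
datasets = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
weights_0 = np.array([0.7, 0.1, 0.1, 0.1])   # client 0 trusts its own data most
theta = np.zeros(3)
for _ in range(200):                          # plain gradient descent on the weighted risk
    grad = sum(w * 2 * X.T @ (X @ theta - y) / len(y)
               for w, (X, y) in zip(weights_0, datasets))
    theta -= 0.01 * grad
print(perm_objective(theta, datasets, weights_0))
```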
Author(s):
Yao, Yuhang; Kamani, Mohammad Mahdi; Cheng, Zhongwei; Chen, Lin; Joe-Wong, Carlee; Liu, Tianqiang
Much of the value that IoT (Internet-of-Things) devices bring to "smart" homes lies in their ability to automatically trigger other devices' actions: for example, a smart camera triggering a smart lock to unlock a door. Manually setting up these rules …
External link:
http://arxiv.org/abs/2211.06812
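The rule-recommendation setting above can be framed as link prediction over a graph of devices. The sketch below scores candidate trigger-action rules by embedding similarity; the device set, embeddings, and scoring function are all hypothetical and not taken from the paper.

```python
# A generic link-prediction sketch for recommending trigger-action rules
# (illustrative only; the paper's actual model and features are assumptions here).
import numpy as np

devices = ["camera", "lock", "light", "thermostat"]
# Observed rules as directed edges: trigger device -> action device.
observed = {("camera", "lock"), ("camera", "light")}

rng = np.random.default_rng(1)
emb = {d: rng.normal(size=8) for d in devices}  # hypothetical learned embeddings

def rule_score(trigger, action):
    """Score a candidate rule by embedding similarity (higher = more plausible)."""
    return float(emb[trigger] @ emb[action])

candidates = [(t, a) for t in devices for a in devices
              if t != a and (t, a) not in observed]
for t, a in sorted(candidates, key=lambda p: -rule_score(*p))[:3]:
    print(f"suggest: when {t} triggers, act on {a} (score={rule_score(t, a):.2f})")
```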
To train machine learning models that are robust to distribution shifts in the data, distributionally robust optimization (DRO) has been proven very effective. However, the existing approaches to learning a distributionally robust model either require …
External link:
http://arxiv.org/abs/2203.09607
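One common instantiation of DRO is group DRO, where an adversary re-weights per-group losses and the learner minimizes the worst case. The sketch below alternates a multiplicative-weights ascent step on the group weights with a gradient step on the model; the specific updates and step sizes are assumptions, not the paper's method.

```python
# A minimal group-DRO sketch: minimize the worst-case group loss by up-weighting
# groups with high loss (multiplicative weights on q, gradient descent on theta).
import numpy as np

rng = np.random.default_rng(0)
groups = [(rng.normal(loc=m, size=(40, 2)), rng.normal(size=40)) for m in (0.0, 2.0)]
theta = np.zeros(2)
q = np.ones(len(groups)) / len(groups)   # adversarial group weights
eta_q, eta_t = 0.5, 0.05

for _ in range(300):
    losses = np.array([np.mean((X @ theta - y) ** 2) for X, y in groups])
    q *= np.exp(eta_q * losses)          # shift mass toward the worst group
    q /= q.sum()
    grad = sum(qi * 2 * X.T @ (X @ theta - y) / len(y)
               for qi, (X, y) in zip(q, groups))
    theta -= eta_t * grad
print("per-group losses:", [f"{l:.3f}" for l in losses])
```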
Knowledge Distillation is becoming one of the primary trends among neural network compression algorithms to improve the generalization performance of a smaller student model with guidance from a larger teacher model. This momentous rise in applications …
External link:
http://arxiv.org/abs/2110.09674
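For reference, the standard knowledge-distillation objective (Hinton et al.) combines cross-entropy on hard labels with a temperature-scaled KL term matching the student to the teacher's softened outputs. This is the generic formulation of the setting the entry above refers to, not necessarily the exact variant studied in the paper.

```python
# Standard knowledge-distillation loss: hard-label cross-entropy plus a
# temperature-softened KL divergence between student and teacher distributions.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T   # T^2 keeps gradient scale comparable
    return alpha * hard + (1 - alpha) * soft

student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(kd_loss(student_logits, teacher_logits, labels))
```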
In this paper, we prove that Local (S)GD (or FedAvg) can optimize deep neural networks with the Rectified Linear Unit (ReLU) activation function in polynomial time. Despite the established convergence theory of Local SGD on optimizing general smooth functions …
External link:
http://arxiv.org/abs/2107.10868
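Local SGD / FedAvg in its basic form: each worker takes K local SGD steps from the shared iterate, then the server averages the local models. The toy least-squares instance below illustrates the update pattern; the paper's analysis concerns ReLU networks, which this sketch does not attempt.

```python
# Local SGD / FedAvg skeleton: K local gradient steps per worker, then averaging.
import numpy as np

rng = np.random.default_rng(0)
workers = [(rng.normal(size=(30, 3)), rng.normal(size=30)) for _ in range(4)]
theta = np.zeros(3)
K, rounds, lr = 5, 50, 0.02

for _ in range(rounds):
    local = []
    for X, y in workers:
        w = theta.copy()
        for _ in range(K):                        # K local gradient steps
            w -= lr * 2 * X.T @ (X @ w - y) / len(y)
        local.append(w)
    theta = np.mean(local, axis=0)                # periodic averaging
print("avg loss:", np.mean([np.mean((X @ theta - y) ** 2) for X, y in workers]))
```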
As algorithmic decision-making systems are becoming more pervasive, it is crucial to ensure such systems do not become mechanisms of unfair discrimination on the basis of gender, race, ethnicity, religion, etc. Moreover, due to the inherent trade-off …
External link:
http://arxiv.org/abs/2104.01634
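One simple way to expose the accuracy/fairness trade-off mentioned above is to scalarize the two objectives. The sketch below penalizes the demographic-parity gap of a logistic model; the penalty form, step sizes, and data are illustrative assumptions, not the paper's approach.

```python
# Scalarized accuracy-vs-fairness sketch: logistic loss plus a penalty on the
# gap between the two groups' average predicted scores (demographic parity).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)); y = (X[:, 0] > 0).astype(float)
g = rng.integers(0, 2, 200)                     # binary sensitive attribute
theta = np.zeros(3); lam = 2.0                  # sweep lam to trace the trade-off

def sigmoid(z): return 1 / (1 + np.exp(-z))

for _ in range(500):
    p = sigmoid(X @ theta)
    grad_acc = X.T @ (p - y) / len(y)           # logistic-loss gradient
    gap = p[g == 0].mean() - p[g == 1].mean()   # demographic-parity gap
    s = p * (1 - p)                             # d(sigmoid)/dz
    grad_gap = (X[g == 0].T @ s[g == 0] / (g == 0).sum()
                - X[g == 1].T @ s[g == 1] / (g == 1).sum())
    theta -= 0.1 * (grad_acc + lam * np.sign(gap) * grad_gap)
print(f"remaining parity gap: {gap:.3f}")
```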
Published in:
Advances in Neural Information Processing Systems (NeurIPS), Vol. 33, 2020
In this paper, we study communication-efficient distributed algorithms for distributionally robust federated learning via periodic averaging with adaptive sampling. In contrast to standard empirical risk minimization, due to the minimax structure of …
External link:
http://arxiv.org/abs/2102.12660
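The sketch below captures the flavor of such an algorithm: the server keeps adversarial mixing weights over clients, samples clients adaptively according to them, runs local steps with periodic averaging, and shifts the weights toward high-loss clients. The step sizes and the exact weight update are assumptions, not the paper's algorithm.

```python
# Sketch in the spirit of distributionally robust federated averaging:
# minimax over client mixture weights lambda, with adaptive client sampling.
import numpy as np

rng = np.random.default_rng(0)
clients = [(rng.normal(loc=m, size=(40, 2)), rng.normal(size=40)) for m in (0, 1, 3)]
theta = np.zeros(2)
lam = np.ones(len(clients)) / len(clients)

for _ in range(60):
    picked = rng.choice(len(clients), size=2, replace=False, p=lam)  # adaptive sampling
    local = []
    for i in picked:
        X, y = clients[i]
        w = theta.copy()
        for _ in range(5):                                           # local SGD steps
            w -= 0.02 * 2 * X.T @ (X @ w - y) / len(y)
        local.append(w)
    theta = np.mean(local, axis=0)                                   # periodic averaging
    losses = np.array([np.mean((X @ theta - y) ** 2) for X, y in clients])
    lam *= np.exp(0.3 * losses); lam /= lam.sum()                    # favor worst clients
print("lambda:", np.round(lam, 3))
```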
In federated learning, communication cost is often a critical bottleneck to scaling up distributed optimization algorithms to collaboratively learn a model from millions of devices with potentially unreliable or limited communication and heterogeneous …
External link:
http://arxiv.org/abs/2007.01154
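A standard way to cut this communication cost is to compress model updates before transmission. The top-k sparsification sketch below is one generic compressor of this kind; the paper's specific compression operators are not assumed here.

```python
# Generic update-compression sketch: top-k sparsification of a model update,
# so only k (index, value) pairs need to be transmitted instead of the full vector.
import numpy as np

def top_k(update, k):
    """Keep only the k largest-magnitude coordinates; zero the rest."""
    out = np.zeros_like(update)
    idx = np.argpartition(np.abs(update), -k)[-k:]
    out[idx] = update[idx]
    return out

rng = np.random.default_rng(0)
full_update = rng.normal(size=1000)
compressed = top_k(full_update, k=50)            # 20x fewer nonzeros to transmit
err = np.linalg.norm(full_update - compressed) / np.linalg.norm(full_update)
print(f"relative compression error: {err:.3f}")
```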
Investigation of the degree of personalization in federated learning algorithms has shown that only maximizing the performance of the global model will confine the capacity of the local models to personalize. In this paper, we advocate an adaptive personalized …
External link:
http://arxiv.org/abs/2003.13461
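The core idea of adaptive personalization is to serve each client a convex mixture alpha * local + (1 - alpha) * global, with the mixing weight adapted per client. The inference-side sketch below shows only the mixture; how alpha and the two models are trained is omitted here.

```python
# Personalized prediction as a convex mixture of a client-local and a global model.
import numpy as np

def personalized_predict(X, w_local, w_global, alpha):
    """Predict with the mixture model v = alpha * w_local + (1 - alpha) * w_global."""
    return X @ (alpha * w_local + (1 - alpha) * w_global)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
w_local, w_global = rng.normal(size=3), rng.normal(size=3)
print(personalized_predict(X, w_local, w_global, alpha=0.6))
```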
It has been shown that dimension reduction methods such as PCA may be inherently prone to unfairness and treat data from different sensitive groups, such as race, color, sex, etc., unfairly. In pursuit of fairness-enhancing dimensionality reduction, …
External link:
http://arxiv.org/abs/1911.04931
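To see why ordinary PCA can be unfair, the sketch below measures per-group reconstruction error under a standard top-2 projection: a group with weaker low-rank structure is reconstructed worse. The synthetic data and the rank choice are illustrative; fair-PCA methods aim to balance exactly this kind of gap.

```python
# Per-group reconstruction error of plain PCA: the quantity fair PCA tries to balance.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.5])  # strong low-rank group
B = rng.normal(size=(100, 5))                                       # near-isotropic group
X = np.vstack([A, B])
X = X - X.mean(axis=0)

_, _, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:2].T @ Vt[:2]                       # projector onto the top-2 principal directions

def recon_err(Z):
    return np.mean(np.sum((Z - Z @ P) ** 2, axis=1))

print(f"group A error: {recon_err(X[:100]):.2f}, group B error: {recon_err(X[100:]):.2f}")
```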