Showing 1 - 10 of 130
for search: '"Wu, Xidong"'
Author:
Wu, Xidong; Gao, Shangqian; Zhang, Zeyu; Li, Zhenzhen; Bao, Runxue; Zhang, Yanfu; Wang, Xiaoqian; Huang, Heng
Current techniques for deep neural network (DNN) pruning often involve intricate multi-step processes that require domain-specific expertise, making their widespread adoption challenging. To address this limitation, the Only-Train-Once (OTO) and OTOv2 …
External link:
http://arxiv.org/abs/2403.14729
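The entry above names the Only-Train-Once (OTO) framework only in passing, so as a point of reference here is a minimal sketch of the classic multi-step baseline it improves on: one-shot global magnitude pruning followed by fine-tuning. The function name, the 50% sparsity target, and the PyTorch-based setup are illustrative assumptions; this is not the OTO/OTOv2 procedure.

```python
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.5) -> nn.Module:
    """One-shot global magnitude pruning: zero the smallest-magnitude weights.

    This is the conventional prune-then-fine-tune baseline, not OTO/OTOv2.
    """
    # Pool all weight magnitudes to pick a single global threshold.
    all_weights = torch.cat([p.detach().abs().flatten()
                             for name, p in model.named_parameters()
                             if "weight" in name])
    threshold = torch.quantile(all_weights, sparsity)
    # Zero weights below the threshold; a real pipeline would fine-tune next.
    with torch.no_grad():
        for name, p in model.named_parameters():
            if "weight" in name:
                p.mul_((p.abs() > threshold).float())
    return model
```

Calling `magnitude_prune(nn.Linear(128, 10))` zeroes roughly half of the layer's weights; the structured, train-from-scratch pruning that OTO performs is considerably more involved.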
Federated Averaging (FedAvg) is known to experience convergence issues when encountering significant client system heterogeneity and data heterogeneity. Server momentum has been proposed as an effective mitigation. However, existing server momentum …
External link:
http://arxiv.org/abs/2312.12670
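The entry above contrasts plain FedAvg with server-momentum variants. Below is a minimal sketch of the generic pattern (a heavy-ball momentum buffer kept on the server and applied to the averaged client updates, as in FedAvgM-style methods); the function name, equal client weighting, and hyperparameters are illustrative assumptions, and the update rule analyzed in the linked paper may differ.

```python
import numpy as np

def server_round(global_w, client_deltas, momentum, lr=1.0, beta=0.9):
    """One FedAvg round with heavy-ball momentum maintained on the server.

    client_deltas: list of per-client updates (client_w - global_w).
    momentum:      running server momentum buffer, same shape as global_w.
    """
    avg_delta = np.mean(client_deltas, axis=0)   # plain FedAvg aggregation
    momentum = beta * momentum + avg_delta       # accumulate across rounds
    global_w = global_w + lr * momentum          # momentum-smoothed step
    return global_w, momentum
```

The momentum buffer smooths the aggregated direction across rounds, which is what helps when client drift caused by system and data heterogeneity makes any single round's average noisy.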
Author:
Wu, Xidong; Lin, Wan-Yi; Willmott, Devin; Condessa, Filipe; Huang, Yufei; Li, Zhenzhen; Ganesh, Madan Ravi
Federated Learning (FL) is a distributed training paradigm that enables clients scattered across the world to cooperatively learn a global model without divulging confidential data. However, FL faces a significant challenge in the form of heterogeneous …
External link:
http://arxiv.org/abs/2311.08479
Minimax problems arise throughout machine learning applications, ranging from adversarial training and policy evaluation in reinforcement learning to AUROC maximization. To address the large-scale data challenges across multiple clients with communication …
External link:
http://arxiv.org/abs/2310.03613
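The entry above is about federated minimax problems of the form min_x max_y f(x, y). As a reference point, here is the single-machine building block such methods distribute: gradient descent-ascent (GDA). The toy objective, step sizes, and function names are illustrative assumptions, not the algorithm of the linked paper.

```python
import numpy as np

def gradient_descent_ascent(grad_x, grad_y, x0, y0,
                            eta_x=0.01, eta_y=0.01, steps=2000):
    """Alternating gradient descent-ascent for min_x max_y f(x, y)."""
    x, y = float(x0), float(y0)
    for _ in range(steps):
        x -= eta_x * grad_x(x, y)   # descent step on the primal variable
        y += eta_y * grad_y(x, y)   # ascent step on the dual variable
    return x, y

# Toy saddle problem: f(x, y) = 0.5*x**2 + x*y - 0.5*y**2, saddle at (0, 0).
x_star, y_star = gradient_descent_ascent(
    grad_x=lambda x, y: x + y,
    grad_y=lambda x, y: x - y,
    x0=1.0, y0=1.0)
print(x_star, y_star)   # both approach 0
```

Federated variants of this idea run several such local steps on each client and periodically average both x and y on the server, which is where the communication and heterogeneity issues studied in these papers enter.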
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and meta-learning. As the demand for training models with large-scale distributed data grows in these …
External link:
http://arxiv.org/abs/2310.02524
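The entry above refers to conditional stochastic optimization (CSO). For orientation, this is the standard nested formulation of the problem; the symbols below are one common notation, not necessarily the one used in the linked paper.

```latex
% Conditional stochastic optimization: the inner expectation is conditioned on
% the outer randomness \xi, which makes naive stochastic gradients biased and
% motivates the specialized (distributed) estimators these papers study.
\min_{x \in \mathbb{R}^d} \; F(x)
  \;=\; \mathbb{E}_{\xi}\Big[\, f_{\xi}\big(\mathbb{E}_{\eta \mid \xi}\,[\, g_{\eta}(x,\xi) \,]\big) \Big]
```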
The recent advancements in large language models (LLMs) have sparked growing apprehension regarding their potential misuse. One approach to mitigating this risk is to incorporate watermarking techniques into LLMs, allowing for the tracking and attribution …
External link:
http://arxiv.org/abs/2310.10669
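The entry above mentions watermarking LLM output for tracking and attribution. One widely discussed family of techniques biases the next-token logits toward a pseudorandomly chosen "green list" seeded by the preceding token, so a detector can later test for a statistically improbable excess of green tokens. The sketch below illustrates that generic idea only; the function name, hashing scheme, and the gamma/delta parameters are assumptions, and the linked paper's method may differ.

```python
import hashlib
import numpy as np

def greenlist_logit_bias(prev_token_id: int, vocab_size: int,
                         gamma: float = 0.5, delta: float = 2.0) -> np.ndarray:
    """Additive logit bias implementing a simple 'green list' watermark."""
    # Seed a PRNG from the previous token so a detector can reproduce the list.
    seed = int(hashlib.sha256(str(prev_token_id).encode()).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    green = rng.permutation(vocab_size)[: int(gamma * vocab_size)]
    bias = np.zeros(vocab_size)
    bias[green] = delta          # boost green-list tokens before sampling
    return bias
```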
Multi-party collaborative training, such as distributed learning and federated learning, is used to address big data challenges. However, traditional multi-party collaborative training algorithms were mainly designed for balanced data mining tasks …
External link:
http://arxiv.org/abs/2308.03035
Machine learning models have achieved remarkable success in various real-world applications such as data science, computer vision, and natural language processing. However, model training in machine learning requires large-scale data sets and multiple …
External link:
http://arxiv.org/abs/2305.00798
The minimax optimization over Riemannian manifolds (possibly with nonconvex constraints) has been actively applied to solve many problems, such as robust dimensionality reduction and deep neural networks with orthogonal weights (Stiefel manifold). Although …
External link:
http://arxiv.org/abs/2302.03825
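The entry above concerns minimax optimization constrained to Riemannian manifolds such as the Stiefel manifold of matrices with orthonormal columns. The two geometric ingredients any such method needs are sketched below: projecting a Euclidean gradient onto the tangent space, and retracting an update back onto the manifold via QR. Function names and the step size are illustrative; this is not the paper's algorithm, only the manifold machinery it builds on.

```python
import numpy as np

def stiefel_tangent_projection(X, G):
    """Project a Euclidean gradient G onto the tangent space of the
    Stiefel manifold {X : X^T X = I} at the point X."""
    sym = 0.5 * (X.T @ G + G.T @ X)
    return G - X @ sym

def qr_retraction(X, xi):
    """Map the tangent step X + xi back onto the manifold via thin QR."""
    Q, R = np.linalg.qr(X + xi)
    # Flip column signs so R has a nonnegative diagonal (makes QR unique).
    return Q * np.sign(np.sign(np.diag(R)) + 0.5)

def riemannian_descent_step(X, euclid_grad, lr=0.1):
    """One Riemannian gradient step for the min-player; a minimax scheme
    would alternate this with an ascent step for the max-player."""
    return qr_retraction(X, -lr * stiefel_tangent_projection(X, euclid_grad))
```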
Federated learning has attracted increasing attention with the emergence of distributed data. While many federated learning algorithms have been proposed for the non-convex distributed problem, federated learning in practice still faces numerous …
External link:
http://arxiv.org/abs/2212.00974