Showing 1 - 10 of 226 for search: '"Cui, Laizhong"'
Pre-training exploits public datasets to train an advanced machine learning model in advance, so that the model can be easily fine-tuned to adapt to various downstream tasks. Pre-training has been extensively explored to mitigate computation and communication re
External link:
http://arxiv.org/abs/2408.09478
To preserve data privacy, the federated learning (FL) paradigm has emerged, in which clients expose only model gradients rather than original data when conducting model training. To enhance the protection of model gradients in FL, differentially privat
External link:
http://arxiv.org/abs/2408.08642
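The abstract above describes differentially private protection of model gradients in FL. A minimal sketch of the standard Gaussian mechanism applied per client (clip the gradient, then add calibrated noise); the function name and parameters here are illustrative, not from the paper:

```python
import numpy as np

def privatize_gradient(grad, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a client's gradient to L2 norm clip_norm, then add Gaussian
    noise with std = noise_multiplier * clip_norm (Gaussian mechanism)."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise
```

Clipping bounds each client's sensitivity, which is what lets the added noise translate into a formal privacy guarantee.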
Viewport prediction is a crucial task for adaptive 360-degree video streaming, as bitrate control algorithms usually require knowledge of which portions of the frames the user is viewing. Various methods have been studied and adopted for viewport predic
External link:
http://arxiv.org/abs/2403.02693
In the Federated Learning (FL) paradigm, a parameter server (PS) concurrently communicates with distributed participating clients for model collection, update aggregation, and model distribution over multiple rounds, without touching the private data owned b
External link:
http://arxiv.org/abs/2402.03770
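The PS workflow described above (collect updates, aggregate, redistribute) can be sketched as one FedAvg-style aggregation round; this is a generic illustration under assumed names, not the paper's specific algorithm:

```python
import numpy as np

def fedavg_round(global_model, client_updates, client_sizes):
    """One parameter-server round: average the clients' model updates,
    weighted by local dataset size, and apply them to the global model."""
    total = sum(client_sizes)
    weighted = sum(u * (n / total) for u, n in zip(client_updates, client_sizes))
    return global_model + weighted
```

The size-weighted average keeps the aggregate unbiased when clients hold different amounts of data.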
Recently, federated learning (FL) has gained momentum because of its capability to preserve data privacy. To conduct model training with FL, multiple clients exchange model updates with a parameter server via the Internet. To accelerate the communication
External link:
http://arxiv.org/abs/2402.03815
Distributed machine learning (DML) in mobile environments faces significant communication bottlenecks. Gradient compression has proven to be an effective solution to this issue, offering substantial benefits in environments with limited bandwidth and me
External link:
http://arxiv.org/abs/2311.07324
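Gradient compression, as mentioned in the abstract above, is often realized by top-k sparsification: transmitting only the largest-magnitude entries. A minimal sketch with illustrative function names (not the paper's method):

```python
import numpy as np

def topk_compress(grad, k):
    """Keep the k largest-magnitude gradient entries; a client sends
    only (indices, values) instead of the dense vector."""
    idx = np.argsort(np.abs(grad))[-k:]
    return idx, grad[idx]

def topk_decompress(idx, vals, dim):
    """Server side: scatter the received values back into a dense vector."""
    out = np.zeros(dim)
    out[idx] = vals
    return out
```

Sending k index/value pairs instead of a dense d-dimensional vector cuts uplink traffic roughly by a factor of d/k, at the cost of dropping small gradient components (often compensated with error feedback).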
Recently, federated learning (FL) has received intensive research attention because of its ability to preserve data privacy while scattered clients collaboratively train machine learning models. Commonly, a parameter server (PS) is deployed for aggregating
External link:
http://arxiv.org/abs/2209.01750
Recently, blockchain-based federated learning (BFL) has attracted intensive research attention because the training process is auditable and the architecture is serverless, avoiding the single point of failure of the parameter server in vanilla feder
External link:
http://arxiv.org/abs/2208.06095
Convolutional neural networks (CNNs) and Transformers have achieved great success in multimedia applications. However, little effort has been made to effectively and efficiently harmonize these two architectures for image deraining. This paper ai
External link:
http://arxiv.org/abs/2207.10455
Federated Learning (FL) incurs high communication overhead, which can be greatly alleviated by compressing model updates. Yet the tradeoff between compression and model accuracy in the networked environment remains unclear and, for simplicity, mo
External link:
http://arxiv.org/abs/2112.06694