Showing 1 - 10 of 16 for the search: '"Mortaheb, Matin"'
Federated learning (FL) is a collaborative approach where multiple clients, coordinated by a parameter server (PS), train a unified machine-learning model. The approach, however, suffers from two key challenges: data heterogeneity and communication o…
External link:
http://arxiv.org/abs/2410.22192
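The client/parameter-server training loop described in this abstract can be sketched as follows. This is a minimal federated-averaging illustration on a linear least-squares model, not the paper's algorithm; all function names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=5):
    """One client's local gradient steps on a linear least-squares model."""
    w = w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fedavg(client_data, dim, rounds=20):
    """Parameter server broadcasts the model, then averages client updates."""
    w = np.zeros(dim)
    for _ in range(rounds):
        updates = [local_update(w, X, y) for X, y in client_data]
        w = np.mean(updates, axis=0)  # PS aggregation step
    return w

# Synthetic homogeneous clients sharing one ground-truth model.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = [(X, X @ w_true) for X in (rng.normal(size=(50, 2)) for _ in range(4))]
w = fedavg(clients, dim=2)
```

With identically distributed client data as above, plain averaging recovers the shared model; the data-heterogeneity challenge the abstract mentions arises precisely when clients' distributions differ.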
We consider the version age of information (AoI) in a network where a subset of nodes act as sensing nodes, sampling a source that in general can follow a continuous distribution. Any sample of the source constitutes a new version of the information…
External link:
http://arxiv.org/abs/2409.16285
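The version-age notion in this abstract can be illustrated with a small discrete-time simulation: the source occasionally generates a new version, a sensing node occasionally takes a fresh sample, and the node's version age is the number of versions it lags behind. The update and sampling probabilities below are illustrative assumptions, not parameters from the paper.

```python
import random

def simulate_version_age(p_update=0.3, p_sample=0.5, T=100_000, seed=1):
    """Average version age of one sensing node over T time slots."""
    random.seed(seed)
    src_version = 0   # latest version at the source
    node_version = 0  # version currently held by the node
    total_age = 0
    for _ in range(T):
        if random.random() < p_update:
            src_version += 1            # source generates a new version
        if random.random() < p_sample:
            node_version = src_version  # node samples, becoming current
        total_age += src_version - node_version
    return total_age / T

avg_age = simulate_version_age()
```

In this simple model the stationary mean age is p_update * (1 - p_sample) / p_sample (0.3 for the defaults above): the node's age resets to zero on each sample and otherwise grows by one per source update.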
The transformer structure employed in large language models (LLMs), as a specialized category of deep neural networks (DNNs) featuring attention mechanisms, stands out for its ability to identify and highlight the most relevant aspects of input dat…
External link:
http://arxiv.org/abs/2405.01521
Ensuring high-quality video content for wireless users has become increasingly vital. Nevertheless, maintaining a consistent level of video quality faces challenges due to the fluctuating encoded bitrate, primarily caused by dynamic video content, es…
External link:
http://arxiv.org/abs/2311.12918
Providing wireless users with high-quality video content has become increasingly important. However, ensuring consistent video quality poses challenges due to variable encoded bitrate caused by dynamic video content and fluctuating channel bitrate ca…
External link:
http://arxiv.org/abs/2310.06857
Deep learning based joint source-channel coding (JSCC) has demonstrated significant advancements in data reconstruction compared to separate source-channel coding (SSCC). This superiority arises from the suboptimality of SSCC when dealing with finite…
External link:
http://arxiv.org/abs/2308.11604
Author:
Mortaheb, Matin, Ulukus, Sennur
Decentralized and federated learning algorithms face data heterogeneity as one of their biggest challenges, especially when users want to learn a specific task. Even when personalized headers are used, concatenated to a shared network (PF-MTL), aggregat…
External link:
http://arxiv.org/abs/2212.11268
Multi-task learning (MTL) is a learning paradigm to learn multiple related tasks simultaneously with a single shared network, where each task has a distinct personalized header network for fine-tuning. MTL can be integrated into a federated learning (…
External link:
http://arxiv.org/abs/2212.07414
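The shared-network-plus-personalized-header structure described in these MTL abstracts can be sketched as a single-machine toy example: one shared linear backbone and a distinct linear head per task, trained jointly by gradient descent. This is an illustrative assumption-laden sketch, not the papers' federated algorithm; all names and hyperparameters are invented for the example.

```python
import numpy as np

def train_mtl(tasks, dim, hidden=4, lr=0.05, epochs=500, seed=0):
    """Jointly train a shared linear backbone with one linear head per task."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.5, size=(dim, hidden))                # shared network
    heads = [rng.normal(scale=0.5, size=hidden) for _ in tasks]  # personalized headers
    for _ in range(epochs):
        for t, (X, y) in enumerate(tasks):
            h = X @ W                       # shared representation
            err = h @ W.T @ np.zeros(dim) if False else h @ heads[t] - y  # per-task error
            grad_head = h.T @ err / len(y)
            grad_W = X.T @ np.outer(err, heads[t]) / len(y)
            heads[t] -= lr * grad_head      # only this task's header moves
            W -= lr * grad_W                # backbone sees every task's gradient
    return W, heads

def mse(W, head, X, y):
    """Mean squared error of one task's prediction through the shared backbone."""
    return float(np.mean((X @ W @ head - y) ** 2))

# Two related linear tasks drawn from the same input distribution.
rng = np.random.default_rng(1)
tasks = []
for _ in range(2):
    X = rng.normal(size=(40, 3))
    w_t = rng.normal(size=3)
    tasks.append((X, X @ w_t))
W, heads = train_mtl(tasks, dim=3)
```

The key design point mirrored from the abstracts: the backbone `W` accumulates gradients from every task, while each header is updated only by its own task, which is what makes the headers "personalized".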
Multi-task learning (MTL) is a novel framework to learn several tasks simultaneously with a single shared network, where each task has its distinct personalized header network for fine-tuning. MTL can be implemented in federated learning settings as w…
External link:
http://arxiv.org/abs/2203.13663
Author:
Mortaheb, Matin (AUTHOR), Vahapoglu, Cemil (AUTHOR), Ulukus, Sennur (AUTHOR) ulukus@umd.edu
Published in:
Algorithms. Nov 2022, Vol. 15, Issue 11, p. 421. 25 p.