Showing 1 - 10
of 10
for search: '"Mortaheb, Matin"'
The transformer structure employed in large language models (LLMs), as a specialized category of deep neural networks (DNNs) featuring attention mechanisms, stands out for its ability to identify and highlight the most relevant aspects of input data…
External link:
http://arxiv.org/abs/2405.01521
Ensuring high-quality video content for wireless users has become increasingly vital. Nevertheless, maintaining a consistent level of video quality faces challenges due to the fluctuating encoded bitrate, primarily caused by dynamic video content…
External link:
http://arxiv.org/abs/2311.12918
Providing wireless users with high-quality video content has become increasingly important. However, ensuring consistent video quality poses challenges due to variable encoded bitrate caused by dynamic video content and fluctuating channel bitrate…
External link:
http://arxiv.org/abs/2310.06857
Deep learning based joint source-channel coding (JSCC) has demonstrated significant advancements in data reconstruction compared to separate source-channel coding (SSCC). This superiority arises from the suboptimality of SSCC when dealing with finite…
External link:
http://arxiv.org/abs/2308.11604
Author:
Mortaheb, Matin, Ulukus, Sennur
Decentralized and federated learning algorithms face data heterogeneity as one of the biggest challenges, especially when users want to learn a specific task. Even when personalized headers are used concatenated to a shared network (PF-MTL)…
External link:
http://arxiv.org/abs/2212.11268
Multi-task learning (MTL) is a learning paradigm to learn multiple related tasks simultaneously with a single shared network, where each task has a distinct personalized header network for fine-tuning. MTL can be integrated into a federated learning…
External link:
http://arxiv.org/abs/2212.07414
Multi-task learning (MTL) is a novel framework to learn several tasks simultaneously with a single shared network, where each task has its distinct personalized header network for fine-tuning. MTL can be implemented in federated learning settings…
External link:
http://arxiv.org/abs/2203.13663
Author:
Mortaheb, Matin1 (AUTHOR), Vahapoglu, Cemil1 (AUTHOR), Ulukus, Sennur1 (AUTHOR) ulukus@umd.edu
Published in:
Algorithms. Nov2022, Vol. 15 Issue 11, p421. 25p.
Academic article
This result cannot be displayed to unauthenticated users.
You must log in to view this result.
Author:
Mortaheb, Matin, Abbasfar, Aliazam
Published in:
Transactions on Emerging Telecommunications Technologies; Feb2021, Vol. 32 Issue 2, p1-15, 15p