Showing 1 - 10 of 29 for search: '"Anit Kumar Sahu"'
Published in:
EURASIP Journal on Advances in Signal Processing, Vol 2018, Iss 1, Pp 1-15 (2018)
Abstract: The paper addresses design and analysis of communication-efficient distributed algorithms for solving weighted non-linear least squares problems in multi-agent networks. Communication efficiency is highly relevant in modern applications like …
External link:
https://doaj.org/article/4c48e9c204ca434891287e4c2968eb14
Published in:
IEEE Transactions on Signal Processing. 71:1319-1333
Author:
Anit Kumar Sahu, Soummya Kar
Published in:
Proceedings of the IEEE. 108:1890-1905
Zeroth-order optimization algorithms are an attractive alternative for stochastic optimization problems, when gradient computations are expensive or when closed-form loss functions are not available. Recently, there has been a surge of activity in …
Published in:
Proceedings of the First Workshop on Federated Learning for Natural Language Processing (FL4NLP 2022).
Published in:
ACSSC
Decentralized stochastic gradient descent (SGD) has recently become one of the most promising methods to use data parallelism in order to train a machine learning model on a network of arbitrarily connected nodes/edge devices. Although the error …
Published in:
ACSSC
Federated learning aims to jointly learn statistical models over massively distributed remote devices. In this work, we propose FedDANE, an optimization method that we adapt from DANE, a method for classical distributed optimization, to handle the …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::3b661edf9b2c9e4d5d0acf9c8a9b6b19
Published in:
2019 Sixth Indian Control Conference (ICC).
Decentralized stochastic gradient descent (SGD) is a promising approach to learn a machine learning model over a network of workers connected in an arbitrary topology. Although a densely-connected network topology can ensure faster convergence in …
Published in:
ACSSC
In this paper, we present stochastic optimization for empirical risk minimization over directed graphs. Using a novel information fusion approach that utilizes both row- and column-stochastic weights simultaneously, we propose $\mathcal{S}\mathcal{A}$…
Published in:
EUROCON
Recently, a communication efficient recursive distributed estimator, $\mathcal{CREDO}$, has been proposed that utilizes increasingly sparse randomized bidirectional communications. $\mathcal{CREDO}$…
Author:
Anit Kumar Sahu, Soummya Kar
Published in:
Data Fusion in Wireless Sensor Networks: A statistical signal processing perspective ISBN: 9781785615849
This chapter considers the problem of recursive composite hypothesis testing in a network of sparsely connected agents. In classical centralized composite hypothesis testing, procedures such as the generalized likelihood ratio test (GLRT), i.e., the …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::4cb2cbfd46d18d8d6ecfd11b45a9ae00
https://doi.org/10.1049/pbce117e_ch8