Showing 1 - 10 of 28 for search: '"Sindri Magnusson"'
Author:
Nancy Victor, Rajeswari Chengoden, Mamoun Alazab, Sweta Bhattacharya, Sindri Magnusson, Praveen Kumar Reddy Maddikunta, Kadiyala Ramana, Thippa Reddy Gadekallu
Published in:
IEEE Internet of Things Magazine. 5:36-41
Published in:
2022 IEEE Energy Conversion Congress and Exposition (ECCE).
Published in:
2022 15th International Conference on Human System Interaction (HSI).
Published in:
IEEE Signal Processing Letters. 28:1180-1184
One of the main advantages of second-order methods in a centralized setting is that they are insensitive to the condition number of the objective function's Hessian. For applications such as regression analysis, this means that less pre-processing of …
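The abstract above points to the classical contrast between first- and second-order methods on ill-conditioned problems. A minimal sketch (purely illustrative, not from the paper) on a badly scaled least-squares problem, where one Newton step finds the solution while gradient descent with a safe step-size crawls:

```python
# Purely illustrative (not from the paper): Newton's method is insensitive to
# the conditioning of the Hessian, while gradient descent slows down with it.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 2)) * np.array([1.0, 100.0])  # badly scaled features
b = rng.standard_normal(100)

def grad(x):
    return A.T @ (A @ x - b)

H = A.T @ A                                   # Hessian of 0.5 * ||Ax - b||^2

x_gd = np.zeros(2)
step = 1.0 / np.linalg.eigvalsh(H).max()      # safe step-size ~ 1/L
for _ in range(1000):
    x_gd -= step * grad(x_gd)

x_newton = np.zeros(2) - np.linalg.solve(H, grad(np.zeros(2)))  # one Newton step

x_star = np.linalg.lstsq(A, b, rcond=None)[0]
print("gradient descent error:", np.linalg.norm(x_gd - x_star))
print("one Newton step error: ", np.linalg.norm(x_newton - x_star))
```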
Published in:
ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
Published in:
IEEE Transactions on Smart Grid. 11:3469-3482
The increased penetration of volatile renewable energy into distribution networks necessitates more efficient distributed voltage control. In this paper, we design distributed feedback control algorithms where each bus can inject both active and …
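As a rough illustration of the kind of per-bus feedback the abstract above describes, here is a hypothetical sketch in which every bus adjusts both its active and reactive injections in proportion to its own voltage deviation, under an assumed linearized grid response; the gains, sensitivities, and disturbance values are invented and this is not the paper's algorithm:

```python
# Hypothetical sketch of per-bus proportional feedback voltage control (not the
# paper's algorithm): each bus measures only its own voltage deviation and
# adjusts both its active power p and reactive power q.
import numpy as np

n = 4
R = np.diag([0.05, 0.04, 0.06, 0.05])    # assumed voltage sensitivities to p
X = np.diag([0.10, 0.08, 0.12, 0.09])    # assumed voltage sensitivities to q
d = np.array([0.03, -0.03, 0.05, -0.04]) # uncontrolled voltage deviations (p.u.)
v_ref = np.ones(n)
p = np.zeros(n)
q = np.zeros(n)
alpha, beta = 0.5, 0.5                   # local feedback gains

for _ in range(200):
    v = v_ref + R @ p + X @ q + d        # assumed linearized grid response
    err = v - v_ref
    p -= alpha * err                     # each bus reacts to its local error only
    q -= beta * err

print("final voltage deviations:", np.round(v - v_ref, 4))
```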
The paper studies the problem of leakage localization in water distribution networks. For the case of a single pipe that suffers from a single leak, by taking recourse to pressure and flow measurements, and assuming those are noiseless, we provide a …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::7ebd08421754849748c52d135502533b
http://arxiv.org/abs/2204.00050
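For intuition about the single-pipe, single-leak setting in the leakage-localization abstract above, a back-of-the-envelope sketch under a deliberately simplified linear friction model (pressure drop proportional to flow times segment length); the constants are invented and the paper's actual model and derivation are not reproduced here:

```python
# Hypothetical back-of-the-envelope sketch (not the paper's method): locate a
# single leak in a single pipe from noiseless inlet/outlet pressure and flow
# measurements, assuming the pressure drop over a segment is k * flow * length.
L_pipe = 1000.0                   # pipe length [m]        (assumed)
k = 0.002                         # friction coefficient   (assumed)
q_in, q_out = 0.30, 0.22          # measured flows [m^3/s]
p_in, p_out = 5.0, 4.48           # measured pressure heads at the two ends

leak_flow = q_in - q_out          # mass balance gives the leak magnitude

# p_in - p_out = k * q_in * d + k * q_out * (L_pipe - d)  =>  solve for d
d = ((p_in - p_out) / k - q_out * L_pipe) / (q_in - q_out)
print(f"estimated leak position: {d:.1f} m from the inlet, leak flow {leak_flow:.3f} m^3/s")
```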
Published in:
Artificial Intelligence in Medicine ISBN: 9783031093418
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::f9d6fa246e2047a981f742321b6d1be1
https://doi.org/10.1007/978-3-031-09342-5_18
Published in:
ICASSP
Noise is inherent in many optimization methods such as stochastic gradient methods, zeroth-order methods and compressed gradient methods. For such methods to converge toward a global optimum, it is intuitive to use large step-sizes in the initial iterations …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::3507b255b570c43fd9f3c62dd304b325
http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-295604
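The step-decay idea hinted at in the abstract above (large steps early, smaller steps once noise dominates) can be illustrated on a toy one-dimensional noisy quadratic; the schedule and constants below are arbitrary and only meant to show the mechanism:

```python
# Minimal sketch of a step-decay step-size (illustrative only): run SGD with a
# large constant step-size, then cut it geometrically so later iterations
# average out the gradient noise.
import numpy as np

rng = np.random.default_rng(1)

def noisy_grad(x):
    return 2.0 * x + rng.normal(scale=1.0)   # gradient of x**2 plus noise

x = 10.0
step = 0.4
for stage in range(5):                       # 5 stages, halving the step each time
    for _ in range(200):
        x -= step * noisy_grad(x)
    step *= 0.5

print(f"final iterate: {x:.4f}  (optimum is 0)")
```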
We consider the problem of communication efficient distributed optimization where multiple nodes exchange important algorithm information in every iteration to solve large problems. In particular, we focus on the stochastic variance-reduced gradient …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::7bdeb410efc0470807e34576b60678ed
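The stochastic variance-reduced gradient (SVRG) update that the last entry builds on can be sketched on a small least-squares problem; the communication-compression aspect the entry actually studies is not modeled here, and the problem data and step-size are arbitrary:

```python
# Minimal single-machine SVRG sketch for least squares (illustrative only; how
# to communicate such updates efficiently between nodes is not modeled).
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((200, 5))
b = rng.standard_normal(200)
n = A.shape[0]

def grad_i(x, i):                             # gradient of the i-th component
    return A[i] * (A[i] @ x - b[i])

x = np.zeros(5)
step = 0.01
for epoch in range(20):
    x_snap = x.copy()
    full_grad = A.T @ (A @ x_snap - b) / n    # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        # variance-reduced stochastic gradient
        g = grad_i(x, i) - grad_i(x_snap, i) + full_grad
        x -= step * g

print("residual norm:", np.linalg.norm(A @ x - b))
```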