Showing 1 - 10 of 71 for search: '"Feyzmahdavian, Hamid"'
Asynchronous optimization algorithms are at the core of modern machine learning and resource allocation systems. However, most convergence results assume bounded information delays, and several important algorithms lack guarantees when they operate …
External link:
http://arxiv.org/abs/2203.04611
In scalable machine learning systems, model training is often parallelized over multiple nodes that run without tight synchronization. Most analysis results for the related asynchronous algorithms use an upper bound on the information delays in the system …
External link:
http://arxiv.org/abs/2202.08550
We introduce novel convergence results for asynchronous iterations that appear in the analysis of parallel and distributed optimization algorithms. The results are simple to apply and give explicit estimates for how the degree of asynchrony impacts the …
External link:
http://arxiv.org/abs/2109.04522
Published in:
In IFAC PapersOnLine 2024 58(14):360-366
Motivated by large-scale optimization problems arising in machine learning, the study of asynchronous parallel and distributed optimization methods has seen several advances during the past decade. Asynchronous methods do not …
External link:
http://arxiv.org/abs/2006.13838
Author:
Feyzmahdavian, Hamid Reza
Time-delay dynamical systems are used to model many real-world engineering systems in which the future evolution depends not only on the current states but also on the history of states. For this reason, the study of stability and control of time-delay systems …
External link:
http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177651
Asynchronous computation and gradient compression have emerged as two key techniques for achieving scalability in distributed optimization for large-scale machine learning. This paper presents a unified analysis framework for distributed gradient methods …
External link:
http://arxiv.org/abs/1806.06573
This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems. The algorithm can handle both general convex (possibly non-smooth) regularization …
External link:
http://arxiv.org/abs/1610.05507
We analyze stability properties of monotone nonlinear systems via max-separable Lyapunov functions, motivated by the following observations: first, recent results have shown that asymptotic stability of a monotone nonlinear system implies the existence …
External link:
http://arxiv.org/abs/1607.07966
Mini-batch optimization has proven to be a powerful paradigm for large-scale learning. However, state-of-the-art parallel mini-batch algorithms assume synchronous operation or cyclic update orders. When worker nodes are heterogeneous (due to different …
External link:
http://arxiv.org/abs/1505.04824