Showing 1 - 10 of 29 for search: '"Aytekin, Arda"'
Motivated by large-scale optimization problems arising in the context of machine learning, the study of asynchronous parallel and distributed optimization methods has seen several advances during the past decade. Asynchronous methods do not …; a minimal lock-free update loop in this spirit is sketched after the link below.
External link: http://arxiv.org/abs/2006.13838
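The asynchrony this survey describes ("asynchronous methods do not …") is easiest to picture with a minimal lock-free update loop. The sketch below is my illustration, not code from any of the listed papers: worker threads read a shared iterate without locks (so reads may be stale), apply per-coordinate gradient steps via compare-and-swap, and never wait for one another; the quadratic objective, step size, and sizes are arbitrary demo choices.

// Minimal Hogwild-style asynchronous gradient descent (illustrative only).
// Workers read the shared iterate without locks and update coordinates
// atomically via compare-and-swap; no worker ever waits for stragglers.
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

// Atomically add `delta` to `a` with a CAS loop (portable to C++11,
// which lacks fetch_add for atomic doubles).
static void atomic_add(std::atomic<double>& a, double delta) {
  double old = a.load(std::memory_order_relaxed);
  while (!a.compare_exchange_weak(old, old + delta,
                                  std::memory_order_relaxed)) {
  }
}

int main() {
  const int dim = 4, workers = 4, steps = 20000;
  const double lr = 1e-4;
  const double xstar[] = {1.0, -2.0, 3.0, 0.5};  // minimizer of f below

  std::vector<std::atomic<double>> x(dim);  // shared iterate
  for (auto& xi : x) xi.store(0.0);

  // Each worker minimizes f(x) = 0.5 * ||x - xstar||^2 using whatever
  // (possibly stale) coordinate values it happens to read.
  std::vector<std::thread> pool;
  for (int w = 0; w < workers; ++w)
    pool.emplace_back([&] {
      for (int k = 0; k < steps; ++k)
        for (int i = 0; i < dim; ++i) {
          const double xi = x[i].load(std::memory_order_relaxed);
          atomic_add(x[i], -lr * (xi - xstar[i]));  // gradient step
        }
    });
  for (auto& t : pool) t.join();

  for (int i = 0; i < dim; ++i)
    std::printf("x[%d] = %.3f\n", i, x[i].load());
  return 0;
}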
Author: Demirel, Burak; Aytekin, Arda
We analyze the closed-loop control performance of a networked control system that consists of $N$ independent linear feedback control loops, sharing a communication network with $M$ channels ($M < N$) …
External link: http://arxiv.org/abs/2006.08015
With the increasing scale of machine learning tasks, it has become essential to reduce the communication between computing nodes. Early work on gradient compression focused on the bottleneck between CPUs and GPUs, but communication efficiency is now …; an illustrative top-k sparsification baseline is sketched after the link below.
External link: http://arxiv.org/abs/2003.06377
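Gradient compression comes in many flavors; a common baseline in this literature is top-k sparsification, where only the k largest-magnitude gradient coordinates are transmitted. The sketch below illustrates that generic baseline only; it is not the scheme proposed in the linked paper.

// Top-k gradient sparsification (generic illustration).
// Returns the k largest-magnitude entries as (index, value) pairs,
// which is all a worker would need to send over the network.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <utility>
#include <vector>

std::vector<std::pair<int, double>> top_k(const std::vector<double>& grad,
                                          std::size_t k) {
  std::vector<int> idx(grad.size());
  for (std::size_t i = 0; i < idx.size(); ++i) idx[i] = static_cast<int>(i);
  k = std::min(k, idx.size());
  // Order the first k indices by decreasing gradient magnitude.
  std::partial_sort(idx.begin(), idx.begin() + k, idx.end(),
                    [&](int a, int b) {
                      return std::fabs(grad[a]) > std::fabs(grad[b]);
                    });
  std::vector<std::pair<int, double>> sparse;
  sparse.reserve(k);
  for (std::size_t i = 0; i < k; ++i)
    sparse.emplace_back(idx[i], grad[idx[i]]);
  return sparse;
}

int main() {
  const std::vector<double> g = {0.02, -1.5, 0.3, 0.0, 2.1, -0.7};
  for (const auto& e : top_k(g, 2))  // send only 2 of 6 coordinates
    std::printf("index %d -> %+.2f\n", e.first, e.second);
  return 0;
}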
Author: Aytekin, Arda; Johansson, Mikael
The event-driven and elastic nature of serverless runtimes makes them a very efficient and cost-effective alternative for scaling up computations. So far, they have mostly been used for stateless, data-parallel, and ephemeral computations. In this work …
External link: http://arxiv.org/abs/1901.03161
Author: Aytekin, Arda
This thesis proposes and analyzes several first-order methods for convex optimization, designed for parallel implementation in shared- and distributed-memory architectures. The theoretical focus is on designing algorithms that can run asynchronously, …
External link: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-203812
We present POLO, a C++ library for large-scale parallel optimization research that emphasizes ease of use, flexibility, and efficiency in algorithm design. It uses multiple inheritance and template programming to decompose algorithms into essential …; a policy-composition sketch in this spirit follows the link below.
External link: http://arxiv.org/abs/1810.03417
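The decomposition the POLO abstract alludes to can be pictured with policy-based design: a solver class assembled from orthogonal policies through templates and multiple inheritance. The names below (Solver, PlainGradient, ConstantStep) are hypothetical illustrations of the idiom, not POLO's actual API.

// Policy-based composition of a first-order method (illustrative idiom).
#include <cstddef>
#include <cstdio>
#include <vector>

// Policy: turn a raw gradient into a search direction.
struct PlainGradient {
  std::vector<double> direction(const std::vector<double>& g) const {
    return g;  // identity: plain gradient descent
  }
};

// Policy: choose the step size at iteration k.
struct ConstantStep {
  double step(int /*k*/) const { return 0.1; }
};

// The solver inherits one policy of each kind; swapping a template
// argument swaps that aspect of the algorithm without touching the rest.
template <class Direction, class Step>
class Solver : private Direction, private Step {
 public:
  void iterate(std::vector<double>& x, const std::vector<double>& grad,
               int k) {
    const std::vector<double> d = this->direction(grad);
    const double gamma = this->step(k);
    for (std::size_t i = 0; i < x.size(); ++i) x[i] -= gamma * d[i];
  }
};

int main() {
  Solver<PlainGradient, ConstantStep> gd;  // plain gradient descent
  std::vector<double> x = {5.0, -3.0};
  for (int k = 0; k < 100; ++k)
    gd.iterate(x, x, k);  // gradient of 0.5 * ||x||^2 is x itself
  std::printf("x = (%.4f, %.4f)\n", x[0], x[1]);
  return 0;
}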
This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems. The algorithm can handle both general convex (possibly non-smooth) regularizers …; the generic form of such an update is sketched after the link below.
External link: http://arxiv.org/abs/1610.05507
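In the generic notation used for this family of methods, a proximal incremental aggregated gradient step combines delayed component gradients with a proximal step on the regularizer $R$. The display below is a sketch of that general form, with $\gamma$ the step size and $\tau_i^k$ the age of worker $i$'s gradient at iteration $k$; it is not quoted from the paper:

$$x^{k+1} = \mathrm{prox}_{\gamma R}\Bigl(x^{k} - \gamma \sum_{i=1}^{n} \nabla f_i\bigl(x^{k-\tau_i^k}\bigr)\Bigr), \qquad \mathrm{prox}_{\gamma R}(z) = \arg\min_{x} \Bigl\{ R(x) + \tfrac{1}{2\gamma}\lVert x - z\rVert^{2} \Bigr\}.$$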
Mini-batch optimization has proven to be a powerful paradigm for large-scale learning. However, state-of-the-art parallel mini-batch algorithms assume synchronous operation or cyclic update orders. When worker nodes are heterogeneous (due to different …); a generic delayed mini-batch update is sketched after the link below.
External link: http://arxiv.org/abs/1505.04824
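In the same spirit, an asynchronous mini-batch method applies a (possibly proximal) step using a mini-batch gradient evaluated at a delayed iterate. A hedged sketch in generic notation, with $B_k$ the mini-batch, $\tau_k$ a bounded delay, and $R$ the regularizer (not the paper's exact statement):

$$x^{k+1} = \mathrm{prox}_{\gamma_k R}\Bigl(x^{k} - \frac{\gamma_k}{\lvert B_k\rvert} \sum_{\xi \in B_k} \nabla f\bigl(x^{k-\tau_k}; \xi\bigr)\Bigr).$$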
Published in: Advances in Engineering Software, vol. 136, October 2019
Author: Demirel, Burak; Aytekin, Arda
Published in: 2021 European Control Conference (ECC)
We analyze the closed-loop control performance of a networked control system that consists of $N$ independent linear feedback control loops, sharing a communication network with $M$ channels ($M < N$) …