Showing 1 - 10 of 108 results for the search '"Stipanovic, Dusan"'
This paper presents a family of algorithms for decentralized convex composite problems. We consider the setting of a network of agents that cooperatively minimize a global objective function composed of a sum of local functions plus a regularizer. …
External link:
http://arxiv.org/abs/2204.06380
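A minimal sketch of one way to approach this problem class, a proximal decentralized-gradient step with an assumed l1 regularizer; the mixing matrix, step size, and regularizer are illustrative assumptions, and this is not necessarily the algorithm family proposed in the paper.

```python
# Minimal sketch of one proximal decentralized-gradient step for
#   min_x  sum_i f_i(x) + g(x),
# with g = lam * ||x||_1 taken as an illustrative regularizer.
# Generic prox-DGD-style update, not the algorithm family from the paper.
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (the assumed l1 regularizer).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_dgd_step(X, W, grads, alpha, lam):
    """One synchronous update for all agents.
    X     : (n_agents, dim) current local iterates
    W     : (n_agents, n_agents) doubly stochastic mixing matrix
    grads : (n_agents, dim) local gradients evaluated at X
    alpha : step size, lam : regularization weight
    """
    mixed = W @ X                                      # consensus averaging with neighbors
    return soft_threshold(mixed - alpha * grads, alpha * lam)
```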
Published in:
In Computational and Structural Biotechnology Journal, December 2024, 24:126-135
Large-scale optimization problems require algorithms that are both effective and efficient. One such popular and proven algorithm is Stochastic Gradient Descent, which uses first-order gradient information to solve these problems. This paper studies almost-sure …
External link:
http://arxiv.org/abs/2110.12634
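For context, a minimal sketch of plain Stochastic Gradient Descent with a diminishing step size, the classical Robbins-Monro regime in which almost-sure convergence guarantees are usually stated; the least-squares problem, data, and constants are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch of vanilla SGD with a diminishing step size alpha_t = a / (t + b).
# The least-squares problem and the constants below are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 20))
y = A @ rng.normal(size=20) + 0.1 * rng.normal(size=1000)

x = np.zeros(20)
a, b = 1e-2, 10.0
for t in range(10_000):
    i = rng.integers(len(y))              # sample one data point uniformly at random
    g = (A[i] @ x - y[i]) * A[i]          # stochastic gradient of 0.5 * (a_i^T x - y_i)^2
    x -= (a / (t + b)) * g                # diminishing steps: sum alpha_t = inf, sum alpha_t^2 < inf
```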
In this work, multiplicative stochasticity is applied to the learning rate of stochastic optimization algorithms, giving rise to stochastic learning-rate schemes. In-expectation theoretical convergence results of Stochastic Gradient Descent equipped …
External link:
http://arxiv.org/abs/2110.10710
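A minimal sketch of what a multiplicative stochastic learning-rate scheme can look like on SGD: the base step size is multiplied each iteration by an i.i.d. positive random factor. The lognormal factor and all constants are assumptions made for illustration, not the distributions analyzed in the paper.

```python
# SGD with a multiplicative stochastic learning rate (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)

def sgd_stochastic_lr(grad_fn, x0, base_lr=1e-2, n_steps=5000):
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        xi = rng.lognormal(mean=0.0, sigma=0.5)   # random positive multiplicative factor
        x = x - base_lr * xi * grad_fn(x, rng)    # stochastic-gradient step with random rate
    return x

# Example usage on a toy least-squares problem.
A = rng.normal(size=(500, 10))
y = A @ rng.normal(size=10)

def grad(x, r):
    i = r.integers(len(y))                        # one randomly sampled data point
    return (A[i] @ x - y[i]) * A[i]

x_hat = sgd_stochastic_lr(grad, np.zeros(10))
```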
In a multi-agent network, we consider the problem of minimizing an objective function that is expressed as the sum of private convex and smooth functions, and a (possibly) non-differentiable convex regularizer. We propose a novel distributed second-order …
External link:
http://arxiv.org/abs/2109.14243
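A schematic of a generic Newton-type proximal update an agent might run in this setting: mix with neighbors, precondition the local gradient by a damped local Hessian, then apply the prox of an assumed l1 regularizer. This is an illustrative sketch with simplified prox scaling, not the method proposed in the paper.

```python
# Schematic Newton-type proximal step for one agent (illustrative only).
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def local_newton_prox_step(neighbor_avg, grad_i, hess_i, alpha, lam, eps=1e-6):
    """neighbor_avg : weighted average of neighbors' iterates
    grad_i, hess_i  : gradient and Hessian of the agent's private smooth function
    alpha, lam      : step size and regularization weight"""
    H = hess_i + eps * np.eye(len(grad_i))        # damping keeps the linear system well posed
    direction = np.linalg.solve(H, grad_i)        # second-order (Newton-type) direction
    return soft_threshold(neighbor_avg - alpha * direction, alpha * lam)
```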
We consider a class of distributed optimization problems where the objective function consists of a sum of strongly convex and smooth functions and a (possibly nonsmooth) convex regularizer. A multi-agent network is assumed, where each agent holds a private …
External link:
http://arxiv.org/abs/2109.14804
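One common template for this problem class is proximal gradient tracking, sketched below; the l1 regularizer, mixing matrix, step size, and exact placement of the prox are illustrative choices, not the algorithm analyzed in the paper.

```python
# One proximal gradient-tracking iteration (generic sketch).
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_gradient_tracking_step(X, Y, W, grad_fn, alpha, lam):
    """X : (n, d) local iterates, Y : (n, d) gradient trackers,
    W : doubly stochastic mixing matrix, grad_fn(X) -> (n, d) stacked local gradients."""
    X_new = soft_threshold(W @ X - alpha * Y, alpha * lam)   # consensus + tracked-gradient step + prox
    Y_new = W @ Y + grad_fn(X_new) - grad_fn(X)              # track the network-average gradient
    return X_new, Y_new
```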
Biological evidence shows that animals are capable of evading imminent collision without using depth information, relying solely on looming stimuli. In robotics, collision avoidance among uncooperative vehicles requires measurement of relative distance …
External link:
http://arxiv.org/abs/2103.12239
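A minimal sketch of the looming cue behind such depth-free collision avoidance: the ratio of an object's angular size to its rate of expansion approximates time-to-collision, with no distance measurement. The threshold value below is an illustrative assumption.

```python
# Time-to-collision from looming (angular size over its rate of change).
def time_to_collision(theta, theta_dot):
    """theta: angular size of the object [rad]; theta_dot: its time derivative [rad/s]."""
    if theta_dot <= 0.0:
        return float("inf")          # not expanding, so no imminent collision from this cue
    return theta / theta_dot

def should_evade(theta, theta_dot, tau_threshold=2.0):
    # Trigger an avoidance maneuver when the estimated time-to-collision is short.
    return time_to_collision(theta, theta_dot) < tau_threshold
```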
Convolutional and recurrent neural networks have been widely employed to achieve state-of-the-art performance on classification tasks. However, it has also been noted that these networks can be manipulated adversarially with relative ease, by careful …
External link:
http://arxiv.org/abs/2009.02874
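A minimal sketch of the kind of adversarial manipulation the snippet refers to, using the standard fast gradient sign method (FGSM) in PyTorch; the model, labels, and epsilon are placeholders, and this is a generic illustration rather than the specific attack or defense studied in the paper.

```python
# FGSM: perturb an input in the direction of the sign of the loss gradient.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, labels, eps=0.03):
    """x: batch of inputs in [0, 1]; labels: ground-truth class indices."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), labels)
    loss.backward()
    x_adv = x + eps * x.grad.sign()          # small perturbation that increases the loss
    return x_adv.clamp(0.0, 1.0).detach()    # keep pixels in a valid range
```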
To use neural networks in safety-critical settings, it is paramount to provide assurances on their runtime operation. Recent work on ReLU networks has sought to verify whether inputs belonging to a bounded box can ever yield some undesirable output. …
External link:
http://arxiv.org/abs/1902.07247
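A minimal sketch of one standard approach to this verification question, interval bound propagation over an input box for a ReLU network (assuming a ReLU after every layer); it over-approximates the reachable outputs and is not necessarily the paper's method.

```python
# Interval bound propagation through a ReLU network over an input box.
import numpy as np

def interval_bounds(weights, biases, lo, up):
    """weights, biases: per-layer parameters; lo, up: elementwise input box bounds."""
    for W, b in zip(weights, biases):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lo = Wp @ lo + Wn @ up + b                    # smallest pre-activation over the box
        new_up = Wp @ up + Wn @ lo + b                    # largest pre-activation over the box
        lo, up = np.maximum(new_lo, 0.0), np.maximum(new_up, 0.0)  # ReLU is monotone
    return lo, up
```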
Published in:
In Journal of the Franklin Institute, December 2021, 358(18):9621-9652