Showing 1 - 10 of 154 for search: "Davis, Damek"
We demonstrate that in situ coherent diffractive imaging (CDI), which harnesses the coherent interference between a strong and a weak beam illuminating a static and dynamic structure, can be a very dose-efficient imaging method. At low doses, in situ…
External link:
http://arxiv.org/abs/2306.11283
Modern machine learning paradigms, such as deep learning, occur in or close to the interpolation regime, wherein the number of model parameters is much larger than the number of data samples. In this work, we propose a regularity condition within the…
External link:
http://arxiv.org/abs/2306.02601
In their seminal work, Polyak and Juditsky showed that stochastic approximation algorithms for solving smooth equations enjoy a central limit theorem. Moreover, it has since been argued that the asymptotic covariance of the method is best possible am…
External link:
http://arxiv.org/abs/2301.06632
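The iterate averaging referred to in this abstract can be illustrated with a minimal Polyak–Ruppert sketch; the scalar equation, step-size schedule, and all constants below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sketch of Polyak-Ruppert iterate averaging for stochastic approximation:
# solve a*x = b from noisy residuals a*x - b + noise (here x* = b/a = 1.5).
rng = np.random.default_rng(0)
a, b = 2.0, 3.0
x = 0.0        # raw stochastic-approximation iterate
avg = 0.0      # running Polyak-Ruppert average of the iterates
n = 20000
for k in range(1, n + 1):
    residual = a * x - b + rng.normal()   # noisy evaluation of the equation
    x -= residual / (a * k ** 0.75)       # slowly decaying step size
    avg += (x - avg) / k                  # online mean of x_1, ..., x_k

print(round(avg, 2))
```

The averaged iterate `avg` concentrates around the solution `b / a = 1.5` even though the raw iterate `x` keeps fluctuating with the noise; this variance reduction is the effect the central limit theorem in the abstract quantifies.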
Author:
Davis, Damek, Jiang, Tao
We analyze a preconditioned subgradient method for optimizing composite functions $h \circ c$, where $h$ is a locally Lipschitz function and $c$ is a smooth nonlinear mapping. We prove that when $c$ satisfies a constant rank property and $h$ is semis…
External link:
http://arxiv.org/abs/2212.13278
Author:
Davis, Damek; Drusvyatskiy, Dmitriy (ddrusv@uw.edu); Charisopoulos, Vasileios
Published in:
Mathematical Programming, Sep 2024, Vol. 207, Issue 1/2, pp. 145-190.
Author:
Davis, Damek, Jiang, Liwei
Classical results show that gradient descent converges linearly to minimizers of smooth strongly convex functions. A natural question is whether there exists a locally nearly linearly convergent method for nonsmooth functions with quadratic growth. T…
External link:
http://arxiv.org/abs/2205.00064
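The classical fact this abstract starts from, linear convergence of gradient descent on smooth strongly convex functions, can be checked numerically in a few lines; the quadratic objective and constants are illustrative assumptions.

```python
import numpy as np

# Gradient descent on the smooth, strongly convex quadratic
# f(x) = 0.5 * x^T A x, with smoothness L = 10 and strong convexity mu = 1.
A = np.diag([1.0, 10.0])
L, mu = 10.0, 1.0
x = np.array([1.0, 1.0])
dists = [np.linalg.norm(x)]       # distance to the minimizer x* = 0
for _ in range(50):
    x = x - (1.0 / L) * (A @ x)   # gradient of f at x is A x
    dists.append(np.linalg.norm(x))

# Linear convergence: each step contracts the distance to the minimizer
# by at least the factor 1 - mu/L = 0.9.
ratios = [dists[i + 1] / dists[i] for i in range(len(dists) - 1)]
print(max(ratios))
```

Every observed per-step contraction ratio is at most `1 - mu/L`, the geometric rate the classical theory predicts for the step size `1/L`.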
Author:
Charisopoulos, Vasileios, Davis, Damek
Subgradient methods comprise a fundamental class of nonsmooth optimization algorithms. Classical results show that certain subgradient methods converge sublinearly for general Lipschitz convex functions and converge linearly for convex functions that…
External link:
http://arxiv.org/abs/2201.04611
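The sublinear behavior described in this abstract can be sketched numerically with the classical $c/\sqrt{k}$ step size on an assumed toy Lipschitz convex function (the $\ell_1$ norm); the function, starting point, and iteration count are illustrative choices, not from the paper.

```python
import numpy as np

# Subgradient method on the Lipschitz convex function f(x) = ||x||_1
# with the classical 1/sqrt(k) step size: the best function value seen
# decays only sublinearly, on the order of 1/sqrt(k).
def f(x):
    return np.abs(x).sum()

x = np.array([2.0, -1.0])
best = f(x)
for k in range(1, 5001):
    g = np.sign(x)                      # a subgradient of the l1 norm at x
    x = x - (1.0 / np.sqrt(k)) * g
    best = min(best, f(x))

print(best)
```

The iterates oscillate around the minimizer with an amplitude set by the current step size, so driving `best` below a tolerance `eps` takes on the order of `1/eps**2` iterations, in contrast to the linear (geometric) rate available for sharply growing convex functions.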
Zhang et al. introduced a novel modification of Goldstein's classical subgradient method, with an efficiency guarantee of $O(\varepsilon^{-4})$ for minimizing Lipschitz functions. Their work, however, makes use of a nonstandard subgradient oracle mod…
External link:
http://arxiv.org/abs/2112.06969
We investigate a clustering problem with data from a mixture of Gaussians that share a common but unknown, and potentially ill-conditioned, covariance matrix. We start by considering Gaussian mixtures with two equally-sized components and derive a Ma…
External link:
http://arxiv.org/abs/2110.01602
We show that the subgradient method converges only to local minimizers when applied to generic Lipschitz continuous and subdifferentially regular functions that are definable in an o-minimal structure. At a high level, the argument we present is appe…
External link:
http://arxiv.org/abs/2108.11832