Showing 1 - 10 of 1,027 for search: '"Nguyen, Ta"'
Author:
Nguyen, Ta Duy, Ene, Alina
We study the densest subgraph problem and give algorithms via multiplicative weights update and area convexity that converge in $O\left(\frac{\log m}{\epsilon^{2}}\right)$ and $O\left(\frac{\log m}{\epsilon}\right)$ iterations, respectively, both with …
External link:
http://arxiv.org/abs/2405.18809
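The $O\left(\frac{\log m}{\epsilon^{2}}\right)$ iteration count quoted above is the standard multiplicative weights update (MWU) guarantee. As a rough illustration only, here is a generic MWU loop, not the paper's densest-subgraph instantiation; `loss_fn`, `eta`, and the toy losses are hypothetical:

```python
import numpy as np

def multiplicative_weights(loss_fn, m, num_rounds, eta):
    """Generic multiplicative weights update over m experts.

    loss_fn(p) returns a loss vector in [0, 1]^m given the current
    probability distribution p over experts.
    """
    weights = np.ones(m)
    avg = np.zeros(m)
    for _ in range(num_rounds):
        p = weights / weights.sum()       # normalize to a distribution
        losses = loss_fn(p)               # oracle/adversary feedback
        weights *= np.exp(-eta * losses)  # exponential reweighting
        avg += p
    return avg / num_rounds               # averaged iterate

# Toy usage with random losses over 5 experts.
rng = np.random.default_rng(0)
print(multiplicative_weights(lambda p: rng.random(5), m=5, num_rounds=200, eta=0.1))
```

The classic regret bound for this loop is $O(\sqrt{T \log m})$, which is what translates into an $O\left(\frac{\log m}{\epsilon^{2}}\right)$ iteration count for an $\epsilon$-approximate solution.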
In this work, we study the convergence \emph{in high probability} of clipped gradient methods when the noise distribution has heavy tails, i.e., with bounded $p$th moments, for some $1 < p \le 2$.
External link:
http://arxiv.org/abs/2304.01119
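For context, the clipping mechanism at the heart of this line of work rescales any stochastic gradient whose norm exceeds a threshold, so a single heavy-tailed sample cannot move the iterate arbitrarily far. A minimal sketch, assuming plain SGD and a hypothetical fixed `clip_threshold` (the paper's algorithms and schedules may differ):

```python
import numpy as np

def clipped_sgd_step(x, grad, lr, clip_threshold):
    """One clipped-SGD step: cap the stochastic gradient's norm at
    clip_threshold, then take an ordinary gradient step."""
    norm = np.linalg.norm(grad)
    if norm > clip_threshold:
        grad = grad * (clip_threshold / norm)  # rescale to the threshold
    return x - lr * grad
```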
In this work, we describe a generic approach to show convergence with high probability for both stochastic convex and non-convex optimization with sub-Gaussian noise. In previous works for convex optimization, either the convergence is only in expectation …
External link:
http://arxiv.org/abs/2302.14843
While the convergence behaviors of stochastic gradient methods are well understood \emph{in expectation}, there still exist many gaps in the understanding of their convergence with \emph{high probability}, where the convergence rate has a logarithmic …
External link:
http://arxiv.org/abs/2302.05437
We study the application of variance reduction (VR) techniques to general non-convex stochastic optimization problems. In this setting, the recent work STORM [Cutkosky-Orabona '19] overcomes the drawback of having to compute gradients of "mega-batches" …
External link:
http://arxiv.org/abs/2209.14853
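The STORM estimator mentioned above replaces periodic mega-batches with a recursive momentum-style correction that evaluates the same fresh sample at two consecutive iterates. A minimal sketch of one update, with hypothetical names (`grad_fn`, momentum parameter `a`, step size `lr`); the paper's actual parameter schedule is not reproduced here:

```python
def storm_update(x, x_prev, d_prev, sample, grad_fn, a, lr):
    """One STORM-style recursive variance-reduced update
    (in the spirit of Cutkosky-Orabona '19)."""
    g_curr = grad_fn(x, sample)       # stochastic gradient at current point
    g_prev = grad_fn(x_prev, sample)  # same sample, previous point
    d = g_curr + (1.0 - a) * (d_prev - g_prev)  # recursive correction
    return x - lr * d, d              # next iterate and next estimator
```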
Existing analysis of AdaGrad and other adaptive methods for smooth convex optimization is typically for functions with bounded domain diameter. In unconstrained problems, previous works guarantee an asymptotic convergence rate without an explicit constant …
External link:
http://arxiv.org/abs/2209.14827
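For reference, the diagonal AdaGrad update such analyses target divides each coordinate's step by the root of its accumulated squared gradients, with no projection onto a bounded domain (the unconstrained setting this entry refers to). A minimal sketch with illustrative hyperparameters:

```python
import numpy as np

def adagrad(grad_fn, x0, lr, num_steps, eps=1e-8):
    """Diagonal AdaGrad without any bounded-domain projection."""
    x = np.asarray(x0, dtype=float).copy()
    accum = np.zeros_like(x)
    for _ in range(num_steps):
        g = grad_fn(x)
        accum += g * g                        # running sum of squared gradients
        x -= lr * g / (np.sqrt(accum) + eps)  # per-coordinate adaptive step
    return x

# Toy usage: minimize ||x||^2 from a fixed start.
print(adagrad(lambda x: 2 * x, np.array([3.0, -2.0]), lr=1.0, num_steps=500))
```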
In this paper, we study the finite-sum convex optimization problem focusing on the general convex case. Recently, the study of variance reduced (VR) methods and their accelerated variants has made exciting progress. However, the step size used in the …
External link:
http://arxiv.org/abs/2201.12302
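As background, a classic variance-reduced method for finite sums is SVRG, which periodically computes a full gradient at a snapshot point and uses it to de-noise cheap single-component steps. A minimal sketch (SVRG is used here only as an illustration of VR for finite sums, not the accelerated variants this entry studies; all names are illustrative):

```python
import numpy as np

def svrg(grads, x0, lr, num_epochs, inner_steps, seed=0):
    """Minimal SVRG for min (1/n) sum_i f_i(x), with grads a list of
    the per-component gradient functions f_i'."""
    rng = np.random.default_rng(seed)
    n = len(grads)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(num_epochs):
        snapshot = x.copy()
        full_grad = sum(g(snapshot) for g in grads) / n  # one full pass
        for _ in range(inner_steps):
            i = rng.integers(n)
            # Unbiased estimator whose variance vanishes near the snapshot.
            g = grads[i](x) - grads[i](snapshot) + full_grad
            x -= lr * g
    return x
```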
Optimizing prediction accuracy can come at the expense of fairness. Towards minimizing discrimination against a group, fair machine learning algorithms strive to equalize the behavior of a model across different groups, by imposing a fairness constraint …
External link:
http://arxiv.org/abs/2006.08669
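As one concrete instance of such a fairness constraint, the demographic parity gap below measures how differently a classifier's positive predictions are distributed across two groups; a fair-learning method would constrain or penalize a quantity of this kind while optimizing accuracy. This is an illustration, not necessarily the specific constraint studied in the paper:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute gap between the positive-prediction rates of two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy usage with hypothetical binary predictions and group labels.
y_pred = np.array([1, 0, 1, 1, 0, 1])
group = np.array([0, 0, 0, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))  # |2/3 - 2/3| = 0.0
```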
Published in:
International Medical Case Reports Journal, Volume 15, pp. 361-366 (2022)
Hong Duc Pham,1,2 The Anh Nguyen,3 Thi Giang Doan,1,2 Van Giang Bui,2,4 Thanh Van Phan-Nguyen5 1Radiology Department, Saint Paul Hospital of Ha Noi, Ha Noi, Vietnam; 2Radiology Department, Hanoi Medical University, Ha Noi, Vietnam; 3Department of Res…
External link:
https://doaj.org/article/9de58188f32040d1b4a9109bdbfafe76
Published in:
Proceedings of GECCO 2017
For genetic algorithms using a bit-string representation of length $n$, the general recommendation is to take $1/n$ as mutation rate. In this work, we discuss whether this is really justified for multimodal functions. Taking jump functions and the $(1+1)$ evolutionary algorithm …
External link:
http://arxiv.org/abs/1703.03334
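To make the mutation-rate question concrete: the $(1+1)$ EA flips each of the $n$ bits independently with the mutation rate and keeps the offspring if it is at least as fit, and on a jump function the optimum can only be reached by flipping several bits at once, which a rate above $1/n$ achieves more often. A toy sketch (the jump definition is the standard one; the specific parameters and the $1/n$ vs. $m/n$ comparison are illustrative, not the paper's experiments):

```python
import numpy as np

def one_plus_one_ea(n, fitness, mutation_rate, max_iters, seed=0):
    """(1+1) EA on bit strings: standard-bit mutation plus elitist selection."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=n)
    fx = fitness(x)
    for _ in range(max_iters):
        flips = rng.random(n) < mutation_rate  # flip each bit independently
        y = np.where(flips, 1 - x, x)
        fy = fitness(y)
        if fy >= fx:                           # keep offspring if no worse
            x, fx = y, fy
    return x, fx

def jump(x, m):
    """Jump_m: OneMax with a fitness valley of width m before the optimum."""
    k, n = int(x.sum()), len(x)
    return m + k if (k <= n - m or k == n) else n - k

# Compare the standard 1/n rate with a larger m/n rate on Jump_m.
n, m = 30, 3
for rate in (1 / n, m / n):
    print(rate, one_plus_one_ea(n, lambda x: jump(x, m), rate, 20000)[1])
```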