Showing 1 - 10 of 65 for search: '"Hinder, Oliver"'
Author:
Hamad, Fadi, Hinder, Oliver
Adaptive trust-region methods attempt to maintain strong convergence guarantees without depending on conservative estimates of problem properties such as Lipschitz constants. However, on close inspection, one can show existing adaptive trust-region methods … (an illustrative sketch of a generic adaptive radius update follows the link below).
External link:
http://arxiv.org/abs/2408.01874
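As a point of reference for the kind of adaptivity this abstract discusses, here is a minimal, textbook-style trust-region radius update driven only by the observed model agreement ratio. The thresholds and scaling factors are conventional illustrative choices; this is not the method analyzed in the paper.

```python
# Generic adaptive trust-region radius update (textbook-style sketch, not the
# paper's method). The radius adapts to the ratio rho between the actual
# decrease in the objective and the decrease predicted by the local model,
# so no Lipschitz-constant estimate is needed.
def update_radius(delta, rho, delta_max=10.0):
    if rho < 0.25:                      # model fit poor: shrink the region
        return 0.25 * delta
    if rho > 0.75:                      # model fit good: allow a larger step
        return min(2.0 * delta, delta_max)
    return delta                        # otherwise keep the radius unchanged
```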
We propose a method that achieves near-optimal rates for smooth stochastic convex optimization and requires essentially no prior knowledge of problem parameters. This improves on prior work which requires knowing at least the initial distance to optimality …
External link:
http://arxiv.org/abs/2404.00666
Author:
Carmon, Yair, Hinder, Oliver
We prove impossibility results for adaptivity in non-smooth stochastic convex optimization. Given a set of problem parameters we wish to adapt to, we define a "price of adaptivity" (PoA) that, roughly speaking, measures the multiplicative increase in … (a hedged sketch of such a ratio follows the link below).
External link:
http://arxiv.org/abs/2402.10898
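The abstract is cut off before the definition; the display below is only a hedged guess at the general shape of such a ratio (worst-case performance without parameter knowledge relative to the minimax rate with it), not the paper's actual definition.

```latex
% Hedged sketch only: a "price of adaptivity"-style ratio, inferred from the
% truncated abstract rather than taken from the paper.
\[
  \mathrm{PoA} \;\approx\;
  \frac{\text{best worst-case rate achievable without knowing the parameters}}
       {\text{minimax optimal rate when the parameters are known}}
\]
```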
Nanophotonic structures have versatile applications including solar cells, anti-reflective coatings, electromagnetic interference shielding, optical filters, and light emitting diodes. To design and understand these nanophotonic structures, electrodynamic …
External link:
http://arxiv.org/abs/2310.19053
Author:
Hinder, Oliver
We analyze restarted PDHG on totally unimodular linear programs. In particular, we show that restarted PDHG finds an $\epsilon$-optimal solution in $O( H m_1^{2.5} \sqrt{\textbf{nnz}(A)} \log(H m_2 /\epsilon) )$ matrix-vector multiplies where $m_1$ is … (an illustrative sketch of restarted PDHG follows the link below).
External link:
http://arxiv.org/abs/2309.03988
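For orientation, here is a minimal sketch of PDHG with periodic restarts from the running average, written for an LP in the illustrative form min c'x subject to Ax = b, x >= 0. The step sizes, restart rule, and problem form are assumptions for illustration and do not reproduce the complexity analysis above.

```python
# Minimal sketch of restarted PDHG for  min c'x  s.t.  Ax = b, x >= 0.
# Step sizes, restart frequency, and the LP form are illustrative assumptions.
import numpy as np

def restarted_pdhg(A, b, c, restart_every=100, outer=20):
    m, n = A.shape
    step = 0.9 / np.linalg.norm(A, 2)     # tau = sigma, so tau*sigma*||A||^2 < 1
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(outer):
        x_avg, y_avg = np.zeros(n), np.zeros(m)
        for _ in range(restart_every):
            x_next = np.maximum(x - step * (c - A.T @ y), 0.0)  # projected primal step
            y = y + step * (b - A @ (2.0 * x_next - x))          # dual step with extrapolation
            x = x_next
            x_avg += x / restart_every
            y_avg += y / restart_every
        x, y = x_avg, y_avg                # restart from the averaged iterates
    return x, y
```

The core work per iteration is the pair of matrix-vector products with A and A.T, which is what makes the matrix-vector multiply the natural unit in the complexity bound quoted above.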
We propose a tuning-free dynamic SGD step size formula, which we call Distance over Gradients (DoG). The DoG step sizes depend on simple empirical quantities (distance from the initial point and norms of gradients) and have no "learning rate" parameter … (a hedged sketch of such a step size follows the link below).
External link:
http://arxiv.org/abs/2302.12022
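Based only on the description above (step sizes built from the distance travelled from the initial point and the accumulated gradient norms), here is a hedged sketch of an SGD loop with a distance-over-gradients style step size; the exact formula and safeguards used in the paper may differ.

```python
# Sketch of SGD with a "distance over gradients"-style step size, built only
# from empirical quantities; the exact DoG formula and its safeguards are
# assumptions here, not copied from the paper.
import numpy as np

def dog_like_sgd(grad, x0, steps=1000, eps=1e-8):
    x = np.asarray(x0, dtype=float).copy()
    max_dist = eps          # running max distance from the initial point
    grad_sq_sum = 0.0       # running sum of squared gradient norms
    for _ in range(steps):
        g = grad(x)
        grad_sq_sum += float(g @ g)
        max_dist = max(max_dist, float(np.linalg.norm(x - x0)))
        eta = max_dist / np.sqrt(grad_sq_sum + eps)   # no tuned "learning rate"
        x = x - eta * g
    return x

# Example: minimize ||x - 3||^2 without choosing a learning rate.
x_hat = dog_like_sgd(lambda x: 2.0 * (x - 3.0), x0=np.zeros(2))
```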
Preconditioning has long been a staple technique in optimization, often applied to reduce the condition number of a matrix and speed up the convergence of algorithms. Although there are many popular preconditioning techniques in practice, most lack guarantees … (an illustrative diagonal-preconditioning sketch follows the link below).
External link:
http://arxiv.org/abs/2209.00809
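As an illustration of the kind of practical technique the abstract refers to, the snippet below applies diagonal (Jacobi) rescaling to a symmetric positive definite matrix and compares condition numbers; the matrix and the choice of preconditioner are illustrative and not drawn from the paper.

```python
# Diagonal (Jacobi) preconditioning sketch: rescale a symmetric positive
# definite matrix by its diagonal and compare condition numbers.
# The example matrix is illustrative, not taken from the paper.
import numpy as np

def jacobi_precondition(A):
    """Return D^{-1/2} A D^{-1/2} with D = diag(A)."""
    d_inv_sqrt = 1.0 / np.sqrt(np.diag(A))
    P = np.diag(d_inv_sqrt)
    return P @ A @ P

A = np.array([[100.0, 1.0],
              [  1.0, 1.0]])
print(np.linalg.cond(A), np.linalg.cond(jacobi_precondition(A)))
# condition number drops from roughly 1e2 to about 1.2 after rescaling
```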
Author:
Carmon, Yair, Hinder, Oliver
We develop an algorithm for parameter-free stochastic convex optimization (SCO) whose rate of convergence is only a double-logarithmic factor larger than the optimal rate for the corresponding known-parameter setting. In contrast, the best previously …
External link:
http://arxiv.org/abs/2205.02160
Author:
Applegate, David, Díaz, Mateo, Hinder, Oliver, Lu, Haihao, Lubin, Miles, O'Donoghue, Brendan, Schudy, Warren
We present PDLP, a practical first-order method for linear programming (LP) that can solve to the high levels of accuracy that are expected in traditional LP applications. In addition, it can scale to very large problems because its core operation is …
External link:
http://arxiv.org/abs/2106.04756
First-order primal-dual methods are appealing for their low memory overhead, fast iterations, and effective parallelization. However, they are often slow at finding high accuracy solutions, which creates a barrier to their use in traditional linear programming …
External link:
http://arxiv.org/abs/2105.12715