OFFO minimization algorithms for second-order optimality and their complexity
Author: | S. Gratton, Ph. L. Toint |
---|---|
Contributors: | Gratton, Serge, Artificial and Natural Intelligence Toulouse Institute - ANITI2019 - ANR-19-P3IA-0004 - P3IA - VALID |
Language: | English |
Year of publication: | 2022 |
Subject: |
Evaluation complexity; objective-function-free optimization (OFFO); global rate of convergence; second-order optimality; Adagrad; Mathematics - Optimization and Control (math.OC); MSC: 90C60, 90C30, 90C26, 90C15, 49N30; ACM: G.1.6, F.2.1; Computational Mathematics; Applied Mathematics; Control and Optimization |
Source: | Gratton, S & Toint, P 2023, 'OFFO minimization algorithms for second-order optimality and their complexity', Computational Optimization and Applications, vol. 84, no. 2, pp. 573-607. https://doi.org/10.1007/s10589-022-00435-2 |
DOI: | 10.1007/s10589-022-00435-2 |
Description: | An Adagrad-inspired class of algorithms for smooth unconstrained optimization is presented in which the objective function is never evaluated and yet the gradient norms decrease at least as fast as O(1/\sqrt{k+1}), while second-order optimality measures converge to zero at least as fast as O(1/(k+1)^{1/3}). This latter rate of convergence is shown to be essentially sharp and is identical to that known for more standard algorithms (like trust-region or adaptive-regularization methods) using both function and derivative evaluations. A related "divergent stepsize" method is also described, whose essentially sharp rate of convergence is slightly inferior. It is finally discussed how to obtain weaker second-order optimality guarantees at a (much) reduced computational cost. |
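To illustrate the objective-function-free setting the abstract describes, below is a minimal sketch of a plain Adagrad-style iteration in which only gradients are used and the objective value is never computed. This is an assumption-laden illustration, not the authors' exact algorithm class; the names `adagrad_offo`, `eta`, and `eps` are illustrative choices.

```python
import numpy as np

def adagrad_offo(grad, x0, eta=1.0, eps=1e-8, max_iter=100):
    """Adagrad-style OFFO-flavoured iteration (illustrative sketch):
    the objective f is never evaluated, only its gradient."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)  # running sum of squared gradient components
    for _ in range(max_iter):
        g = grad(x)
        v += g * g  # accumulate per-coordinate gradient history
        x = x - eta * g / (np.sqrt(v) + eps)  # per-coordinate scaled step
    return x

# Example: minimize f(x) = ||x||^2 / 2, whose gradient is simply x.
x_star = adagrad_offo(lambda x: x, np.array([1.0, -2.0]), max_iter=2000)
```

Note that no function values guide the stepsize: the per-coordinate scaling `1/(sqrt(v)+eps)` is built purely from past gradients, which is what makes the scheme objective-function-free.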
Database: | OpenAIRE |
External link: |