Showing 1 - 10 of 129 for the search: '"TOINT, PH. L."'
An algorithm for unconstrained non-convex optimization is described, which does not evaluate the objective function and in which minimization is carried out, at each iteration, within a randomly selected subspace. It is shown that this random approximation …
External link:
http://arxiv.org/abs/2310.16580
Author:
Toint, Ph. L.
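
The random-subspace idea in this entry can be illustrated with a minimal sketch: at each iteration a low-dimensional subspace is drawn at random and the step is restricted to it, using only gradient information and no objective values. This is a generic illustration, not necessarily the algorithm of the paper; the subspace dimension p, the stepsize alpha, and the Gaussian basis are assumptions.

```python
import numpy as np

def random_subspace_step(grad, x, p=5, alpha=0.1, rng=None):
    # One generic random-subspace gradient step: draw a p-dimensional random
    # subspace and move only within it.  No objective value is computed.
    rng = np.random.default_rng() if rng is None else rng
    n = x.size
    P = rng.standard_normal((n, p)) / np.sqrt(p)   # random subspace basis
    g = grad(x)
    return x - alpha * P @ (P.T @ g)               # step restricted to span(P)

# Example on a simple quadratic f(x) = ||x||^2 / 2, whose gradient is x.
x = np.ones(50)
for _ in range(200):
    x = random_subspace_step(lambda z: z, x)
```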
A very simple unidimensional function with Lipschitz continuous gradient is constructed such that the ADAM algorithm with constant stepsize, started from the origin, diverges when applied to minimize this function in the absence of noise on the gradient.
External link:
http://arxiv.org/abs/2308.00720
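
For reference, the constant-stepsize ADAM recursion discussed in this entry has the standard form sketched below. The quadratic test problem is only a placeholder, not the divergent univariate construction of the paper, and the parameter values are illustrative.

```python
import numpy as np

def adam(grad, x0, alpha=0.1, beta1=0.9, beta2=0.999, eps=1e-8, iters=200):
    # Standard ADAM recursion with a constant stepsize alpha.
    x, m, v = float(x0), 0.0, 0.0
    for k in range(1, iters + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g        # first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g    # second-moment estimate
        m_hat = m / (1 - beta1 ** k)           # bias corrections
        v_hat = v / (1 - beta2 ** k)
        x -= alpha * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Placeholder problem f(x) = x**2 / 2 (not the paper's construction):
# the iterate is driven towards the minimizer at 0.
print(adam(grad=lambda x: x, x0=1.0))
```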
A class of multi-level algorithms for unconstrained nonlinear optimization is presented which does not require the evaluation of the objective function. The class contains the momentum-less AdaGrad method as a particular (single-level) instance. …
External link:
http://arxiv.org/abs/2302.07049
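
The momentum-less AdaGrad method mentioned here (the single-level instance of the class) scales each coordinate by the square root of the accumulated squared gradients and never uses objective values. A minimal sketch, with the stepsize alpha and offset eps as assumed parameters:

```python
import numpy as np

def adagrad(grad, x0, alpha=1.0, eps=1e-3, iters=100):
    # Momentum-less AdaGrad: coordinate-wise scaling by accumulated squared
    # gradients; the objective function is never evaluated.
    x = np.asarray(x0, dtype=float).copy()
    v = np.zeros_like(x)
    for _ in range(iters):
        g = grad(x)
        v += g * g
        x -= alpha * g / np.sqrt(eps + v)
    return x
```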
An adaptive regularization algorithm for unconstrained nonconvex optimization is presented in which the objective function is never evaluated, but only derivatives are used. This algorithm belongs to the class of adaptive regularization methods, for …
External link:
http://arxiv.org/abs/2203.09947
Author:
Gratton, S., Toint, Ph. L.
Published in:
Computational Optimization and Applications, 84, pages 573 - 607, 2023
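
A minimal first-order sketch of the idea in this entry, an adaptive regularization step that never evaluates f, is given below: the step minimizes the regularized model g_k^T s + (sigma_k/2)||s||^2, i.e. s = -g_k/sigma_k, with a regularization weight built from past gradients. The specific accumulation rule for sigma_k is an illustrative assumption, not the paper's update.

```python
import numpy as np

def offo_regularized(grad, x0, eps=1e-3, iters=100):
    # First-order regularized step without any objective evaluation:
    # s_k = -g_k / sigma_k minimizes g_k.T s + (sigma_k/2)*||s||^2.
    # The rule for sigma_k below is an illustrative assumption.
    x = np.asarray(x0, dtype=float).copy()
    acc = eps
    for _ in range(iters):
        g = grad(x)
        acc += float(g @ g)            # accumulated gradient "energy"
        sigma = np.sqrt(acc)           # regularization weight grows with it
        x -= g / sigma                 # exact minimizer of the regularized model
    return x
```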
An Adagrad-inspired class of algorithms for smooth unconstrained optimization is presented in which the objective function is never evaluated and yet the gradient norms decrease at least as fast as $\mathcal{O}(1/\sqrt{k+1})$ while second-order optimality …
External link:
http://arxiv.org/abs/2203.03351
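
One standard way to read the $\mathcal{O}(1/\sqrt{k+1})$ claim above is as a bound on the smallest gradient norm produced in the first $k+1$ iterations; the constant $\kappa$ depends on problem quantities not given in the snippet:

$$\min_{0 \le j \le k} \|\nabla f(x_j)\| \;\le\; \frac{\kappa}{\sqrt{k+1}}.$$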
A class of algorithms for unconstrained nonconvex optimization is considered where the value of the objective function is never computed. The class contains a deterministic version of the first-order Adagrad method typically used for minimization of …
External link:
http://arxiv.org/abs/2203.01757
A parametric class of trust-region algorithms for unconstrained nonconvex optimization is considered where the value of the objective function is never computed. The class contains a deterministic version of the first-order Adagrad method typically used …
External link:
http://arxiv.org/abs/2203.01647
A trust-region algorithm is presented for finding approximate minimizers of smooth unconstrained functions whose values and derivatives are subject to random noise. It is shown that, under suitable probabilistic assumptions, the new method finds (in …
External link:
http://arxiv.org/abs/2112.06176
Author:
Gould, N. I. M., Toint, Ph. L.
An adaptive regularization algorithm for unconstrained nonconvex optimization is proposed that is capable of handling inexact objective-function and derivative values, and also of providing approximate minimizers of arbitrary order. In comparison with …
External link:
http://arxiv.org/abs/2111.14098
A trust-region algorithm using inexact function and derivative values is introduced for solving unconstrained smooth optimization problems. This algorithm uses high-order Taylor models and allows the search for strong approximate minimizers of arbitrary …
External link:
http://arxiv.org/abs/2011.00854
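
Both trust-region entries above rest on the classical accept/reject mechanism driven by the ratio of achieved to predicted decrease. The generic sketch below uses exact values and a first-order model only; the papers instead employ high-order Taylor models and inexact or noisy function and derivative values, which is not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

def trust_region(f, grad, x0, delta=1.0, eta=0.1, iters=100):
    # Generic trust-region loop with a first-order model and a Cauchy-type
    # step to the boundary; illustrative only.
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        g = grad(x)
        gn = np.linalg.norm(g)
        if gn == 0:
            break
        s = -(delta / gn) * g             # step to the trust-region boundary
        pred = delta * gn                 # predicted decrease of the model
        rho = (f(x) - f(x + s)) / pred    # achieved vs. predicted decrease
        if rho >= eta:
            x = x + s                     # successful step: accept it
            delta *= 2.0                  # and enlarge the region
        else:
            delta *= 0.5                  # unsuccessful: shrink the region
    return x
```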