Description: |
In this work we propose a general nonmonotone line-search method for nonconvex multiobjective optimization problems with convex constraints. At the $k$th iteration, the degree of nonmonotonicity is controlled by a vector $\nu_{k}$ with nonnegative components. Different choices for $\nu_{k}$ lead to different nonmonotone step-size rules. Assuming that the sequence $\left\{\nu_{k}\right\}_{k\geq 0}$ is summable, and that the $i$th objective function has a H\"older continuous gradient with smoothness parameter $\theta_i \in(0,1]$, we show that the proposed method takes no more than $\mathcal{O}\left(\epsilon^{-\left(1+\frac{1}{\theta_{\min}}\right)}\right)$ iterations to find an $\epsilon$-approximate Pareto critical point of a problem with $m$ objectives, where $\theta_{\min}= \min_{i=1,\dots, m} \{\theta_i\}$. In particular, this complexity bound applies to the methods proposed by Drummond and Iusem (Comput. Optim. Appl. 28: 5--29, 2004), by Fazzio and Schuverdt (Optim. Lett. 13: 1365--1379, 2019), and by Mita, Fukuda and Yamashita (J. Glob. Optim. 75: 63--90, 2019). The generality of our approach also allows the development of new methods for multiobjective optimization. As an example, we propose a new nonmonotone step-size rule inspired by the Metropolis criterion. Preliminary numerical results illustrate the benefit of nonmonotone line searches and suggest that our new rule is particularly suitable for multiobjective problems in which at least one of the objectives has many non-global local minimizers.
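For illustration only (the precise acceptance condition is not stated in this description, so the form below is an assumption), a nonmonotone Armijo-type test of the kind governed by $\nu_{k}$ would typically accept a step size $t_{k}>0$ along a descent direction $d_{k}$ whenever
$$ f_{i}(x_{k}+t_{k}d_{k}) \;\leq\; f_{i}(x_{k}) + \nu_{k}^{(i)} + \sigma\, t_{k}\, \nabla f_{i}(x_{k})^{\top} d_{k}, \qquad i=1,\dots,m, $$
with $\sigma\in(0,1)$; taking $\nu_{k}\equiv 0$ recovers the classical monotone Armijo rule, while positive components of $\nu_{k}$ allow temporary increases in the corresponding objectives.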