Convergence rates analysis of a multiobjective proximal gradient method
Author: | Hiroki Tanabe, Ellen H. Fukuda, Nobuo Yamashita |
Language: | English |
Year of publication: | 2020 |
Subject: | |
Description: | Many descent algorithms for multiobjective optimization have been developed over the last two decades. Tanabe et al. (Comput Optim Appl 72(2):339--361, 2019) proposed a proximal gradient method for multiobjective optimization, which solves problems in which each objective function is the sum of a continuously differentiable function and a closed, proper, and convex one (see the sketch after this record). Under reasonable assumptions, the accumulation points of the sequences generated by this method are known to be Pareto stationary; however, that paper did not establish convergence rates. Here, we show global convergence rates for the multiobjective proximal gradient method that match those known in scalar optimization. More specifically, using merit functions to measure complexity, we present convergence rates for non-convex ($O(\sqrt{1 / k})$), convex ($O(1 / k)$), and strongly convex ($O(r^k)$ for some $r \in (0, 1)$) problems. We also extend the Polyak-Łojasiewicz (PL) inequality to multiobjective optimization and establish a linear convergence rate ($O(r^k)$ for some $r \in (0, 1)$) for multiobjective problems satisfying such an inequality. Comment: to appear in Optim. Lett. |
Database: | OpenAIRE |
External link: |
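
The abstract describes the method only in words; the following LaTeX sketch reconstructs the problem class and the per-iteration subproblem, based on the abstract and on the cited Tanabe et al. (Comput Optim Appl, 2019) paper. The symbols $f_i$, $g_i$, the number of objectives $m$, and the step-size parameter $\ell$ are assumed notation for illustration, not quoted from the paper.

```latex
% A minimal sketch, assuming the setting described in the abstract:
% minimize F(x) = (F_1(x), ..., F_m(x)) with F_i = f_i + g_i, where each
% f_i is continuously differentiable and each g_i is closed, proper, convex.
\[
  \min_{x \in \mathbb{R}^n} \; F(x) = \bigl(F_1(x), \dots, F_m(x)\bigr),
  \qquad F_i = f_i + g_i .
\]
% One proximal gradient step: the m linearized objectives are merged
% through a max, and a quadratic term with step-size parameter \ell > 0
% (an assumed symbol) keeps the new point close to the current iterate.
\[
  x^{k+1} \in \operatorname*{arg\,min}_{z \in \mathbb{R}^n} \,
  \max_{1 \le i \le m}
  \Bigl\{ \nabla f_i(x^k)^\top (z - x^k) + g_i(z) - g_i(x^k) \Bigr\}
  + \frac{\ell}{2}\,\lVert z - x^k \rVert^2 .
\]
```

For $m = 1$ the subproblem reduces to the classical scalar proximal gradient step $x^{k+1} = \mathrm{prox}_{g/\ell}\bigl(x^k - \nabla f(x^k)/\ell\bigr)$, which is why the scalar rates $O(1/k)$ (convex) and $O(r^k)$ (strongly convex) are the natural benchmarks that the paper matches.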