Author:
Berger, M., Eckmann, B., de la Harpe, P., Hirzebruch, F., Hitchin, N., Hörmander, L., Knus, M.-A., Kupiainen, A., Lebeau, G., Ratner, M., Serre, D., Sinai, Ya. G., Sloane, N.J.A., Totaro, B., Vershik, A., Waldschmidt, M., Chenciner, A., Coates, J., Varadhan, S.R.S., Mordukhovich, Boris S.
Source:
Variational Analysis & Generalized Differentiation I; 2006, pp. 171-259 (89 pp.)
Abstract:
It is well known that the convex separation principle plays a fundamental role in many aspects of nonlinear analysis, optimization, and their applications. In fact, the whole of convex analysis revolves around the use of separation theorems for convex sets. In problems with nonconvex data, separation theorems are applied to convex approximations. This is a conventional way to derive necessary optimality conditions in constrained optimization: first build tangential convex approximations of the problem data around an optimal solution in primal spaces, and then apply convex separation theorems to obtain supporting elements in dual spaces (Lagrange multipliers, adjoint arcs, prices, etc.). For problems of nonsmooth optimization this approach inevitably leads to the use of convex sets of normals and subgradients, whose calculus is likewise based on convex separation theorems. [ABSTRACT FROM AUTHOR]
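For context, the convex separation principle referred to in the abstract can be stated, in one standard finite-dimensional form, roughly as follows; this is a sketch of the classical result, not the book's exact (infinite-dimensional) formulation:

```latex
% Classical separation theorem (finite-dimensional sketch).
% Let A, B \subset \mathbb{R}^n be nonempty convex sets with
% \operatorname{int} B \neq \emptyset and A \cap \operatorname{int} B = \emptyset.
% Then there exist a nonzero x^* \in \mathbb{R}^n and \alpha \in \mathbb{R} such that
\langle x^*, a \rangle \;\le\; \alpha \;\le\; \langle x^*, b \rangle
\qquad \text{for all } a \in A,\; b \in B.
```

The dual element x^* here plays the role of the "supporting elements in dual spaces" mentioned in the abstract, from which Lagrange multipliers and related objects are derived.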
Database:
Supplemental Index
External link: