Popis: |
Convex optimisation is used to solve many problems of interest in optimal control, signal processing, finance, operations research, and machine learning. While medium-size problems can be solved efficiently by interior-point methods, modern applications often yield problems of very large dimension. Compared with interior-point methods, operator splitting methods are computationally inexpensive, which allows them to solve much larger problems. However, they tend to converge slowly to high-accuracy solutions, and their performance is sensitive to algorithm parameters and problem scaling. This thesis investigates acceleration and scaling techniques both to improve the convergence of operator splitting methods towards higher-accuracy solutions and to reduce their per-iteration computation time. We consider general convex conic problems and focus especially on the subset of semidefinite programs (SDPs), for which finding a solution is particularly challenging at large dimensions.

The first part of the thesis introduces COSMO, a novel general-purpose solver for convex conic programs based on the alternating direction method of multipliers (ADMM). We demonstrate that our splitting, which combines a quadratic objective function with convex conic constraints, leads to competitive performance both for problems in the conventional conic form and for quadratic programs and SDPs with norm constraints. We further show how users can define custom convex constraints to allow more natural problem formulations. The complexity of solving SDPs can be greatly reduced if the coefficient matrices that constrain the matrix variable are sparse. Chordal decomposition techniques then allow an equivalent problem formulation with multiple constraints on only the nonzero blocks of the matrix variable, which often leads to an orders-of-magnitude speed-up.
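The flavour of splitting described above can be illustrated on the simplest special case: a quadratic objective with the cone taken as the nonnegative orthant, so that the cone projection is just elementwise clipping. The sketch below is illustrative only, with assumed names, a fixed penalty parameter, and a fixed iteration count; it is not COSMO's implementation, which handles general cones, scaling, and termination checks.

```python
import numpy as np

def admm_qp_nonneg(P, q, rho=1.0, iters=500):
    """Sketch of an ADMM splitting for  min 0.5 x'Px + q'x  s.t. x >= 0.
    A general cone would swap in a different projection in the z-update.
    All names and defaults here are illustrative assumptions."""
    n = len(q)
    x = z = u = np.zeros(n)
    # The x-update solves a linear system whose matrix never changes,
    # so in practice it would be factorised once outside the loop.
    K = P + rho * np.eye(n)
    for _ in range(iters):
        x = np.linalg.solve(K, rho * (z - u) - q)   # quadratic step
        z = np.maximum(x + u, 0.0)                  # cone projection
        u = u + x - z                               # dual update
    return z
```

For example, with P = 2I and q = (-2, 2) the minimiser subject to x >= 0 is (1, 0), which the iteration recovers; the slow tail convergence of such first-order iterations is exactly what motivates the acceleration techniques studied later in the thesis.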
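The benefit of decomposition comes from the projection step: projecting an n x n matrix onto the PSD cone requires a full eigendecomposition, whereas after decomposition only the clique sub-blocks need projecting. A minimal sketch of this idea, assuming the simplest case of non-overlapping cliques (overlapping cliques additionally require the consensus constraints of the full chordal decomposition, which this sketch omits):

```python
import numpy as np

def psd_project(M):
    """Projection onto the PSD cone by clipping negative eigenvalues."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T

def blockwise_psd_project(M, cliques):
    """Project each clique sub-block separately.  Exact only when the
    cliques do not overlap; the function name and interface are
    illustrative assumptions, not COSMO's API."""
    out = np.zeros_like(M)
    for idx in cliques:
        I = np.ix_(idx, idx)
        out[I] = psd_project(M[I])
    return out
```

Since eigendecomposition cost grows roughly cubically in the block size, replacing one large projection by several small ones is where the speed-up originates; it also makes the choice of block structure a genuine trade-off, which motivates the clique merging strategy of the second part.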
In the second part of the thesis we develop a novel clique merging algorithm that recombines some of the blocks after the initial decomposition to speed up the projection step of ADMM. We demonstrate that the method leads to significant performance improvements and avoids disadvantageous decompositions. In the third part we show how repeated restarts and safeguarding checks can be used to globalise Anderson acceleration for an ADMM solver. Comprehensive benchmark results show a significant reduction in both iterations and solve time when computing higher-accuracy solutions, as well as a more robust algorithm.
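The safeguarded-acceleration idea can be sketched on a generic fixed-point map: take an Anderson-accelerated step, accept it only if the fixed-point residual does not grow, and otherwise fall back to a plain step and restart the acceleration memory. Everything below is a minimal illustrative sketch with assumed names, memory size, and acceptance test; the safeguarding conditions used in the thesis may differ.

```python
import numpy as np

def anderson_fixed_point(g, x0, m=3, tol=1e-10, max_iter=100, tau=1.0):
    """Type-II Anderson acceleration of x <- g(x) with a residual
    safeguard and restart on rejection (illustrative sketch)."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    X_hist, F_hist = [], []              # iterate and residual history
    for k in range(max_iter):
        gx = g(x)
        f = gx - x                       # fixed-point residual
        if np.linalg.norm(f) < tol:
            return x, k
        X_hist.append(x)
        F_hist.append(f)
        if len(F_hist) >= 2:
            dX = np.column_stack([X_hist[i + 1] - X_hist[i] for i in range(len(X_hist) - 1)])
            dF = np.column_stack([F_hist[i + 1] - F_hist[i] for i in range(len(F_hist) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x_acc = x + f - (dX + dF) @ gamma
            # safeguard: accept only if the residual does not grow
            if np.linalg.norm(g(x_acc) - x_acc) <= tau * np.linalg.norm(f):
                x = x_acc
            else:
                x = gx                   # safe plain step ...
                X_hist, F_hist = [], []  # ... and restart the memory
        else:
            x = gx
        X_hist = X_hist[-(m + 1):]       # keep at most m differences
        F_hist = F_hist[-(m + 1):]
    return x, max_iter
```

On a simple contraction such as x <- cos(x), the accelerated iteration reaches the fixed point in far fewer steps than the plain map, while the safeguard guarantees the fallback behaviour is never worse; the same mechanism is what globalises the acceleration when the underlying map is an ADMM iteration.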