Author: |
Noack MM; The Center for Advanced Mathematics for Energy Research Applications (CAMERA), Lawrence Berkeley National Laboratory, Berkeley, CA, USA., Perryman D; Physics Department, The University of Tennessee at Knoxville, Knoxville, Tennessee, USA., Krishnan H; The Center for Advanced Mathematics for Energy Research Applications (CAMERA), Lawrence Berkeley National Laboratory, Berkeley, CA, USA., Zwart PH; The Center for Advanced Mathematics for Energy Research Applications (CAMERA), Lawrence Berkeley National Laboratory, Berkeley, CA, USA. |
Language: |
English
Source: |
Annual Workshop on Extreme-scale Experiment-in-the-Loop Computing (XLOOP) [Annu Workshop Extrem Scale Exp Loop Comput] 2021 Nov; Vol. 2021, pp. 24-29. Date of Electronic Publication: 2021 Dec 27.
DOI: |
10.1109/xloop54565.2021.00009 |
Abstract: |
Mathematical optimization lies at the core of many science and industry applications. One important issue with many current optimization strategies is the well-known trade-off between the number of function evaluations and the probability of finding the global optimum, or at least sufficiently high-quality local optima. In machine learning (ML), and by extension in active learning - for instance for autonomous experimentation - mathematical optimization is often used to find the underlying uncertain surrogate model from which subsequent decisions are made; ML therefore relies on high-quality optima to obtain the most accurate models. Active learning often has the added complexity of missing offline training data; the training therefore has to be conducted during data collection, which can stall the acquisition if standard methods are used. In this work, we highlight recent efforts to create a high-performance hybrid optimization algorithm (HGDL) that combines derivative-free global optimization strategies with local, derivative-based optimization, ultimately yielding an ordered list of unique local optima. Redundancies are avoided by deflating the objective function around earlier encountered optima. HGDL is designed to take full advantage of parallelism by having the most computationally expensive processes, the local first- and second-order derivative-based optimizations, run in parallel on separate compute nodes in separate processes. In addition, the algorithm runs asynchronously; as soon as the first solution is found, it can be used while the algorithm continues to find more solutions. We apply the proposed optimization and training strategy to Gaussian-Process-driven stochastic function approximation and active learning.
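
Below is a minimal one-dimensional sketch of the deflation idea summarized in the abstract. It is not the authors' HGDL implementation (which runs first- and second-order local optimizers asynchronously and in parallel on separate compute nodes); the deflation operator, function names, and parameter values are illustrative assumptions. Previously found stationary points multiply the gradient by a bump-shaped factor, so repeated local runs cannot reconverge to them and the search accumulates an ordered list of unique local minima.

import numpy as np

def deflation(x, found, radius=0.5):
    # Compact-support bump deflation: equals 1 outside `radius` of every
    # previously found stationary point and diverges as x approaches one.
    m = 1.0
    for xs in found:
        r2 = (x - xs) ** 2 / radius ** 2
        if r2 < 1.0:
            bump = np.exp(1.0 - 1.0 / (1.0 - r2))
            m /= (1.0 - bump) + 1e-12
    return m

def deflated_newton(grad, x0, found, tol=1e-8, max_iter=100, h=1e-6):
    # Newton iteration on the deflated gradient G(x) = m(x) * grad(x).
    # Known stationary points are poles of G rather than roots, so the
    # iteration is repelled from them and either finds a new point or fails.
    x = x0
    for _ in range(max_iter):
        G = deflation(x, found) * grad(x)
        dG = (deflation(x + h, found) * grad(x + h) - G) / h  # finite-difference slope
        if abs(dG) < 1e-14:
            return None
        x_new = x - G / dG
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return None

def hybrid_search(f, grad, lo, hi, n_starts=30, seed=0):
    # Global phase: random starting points (a stand-in for a derivative-free
    # global strategy).  Local phase: deflated Newton runs, each of which can
    # only deliver a stationary point that has not been found before.
    rng = np.random.default_rng(seed)
    found = []
    for x0 in rng.uniform(lo, hi, n_starts):
        x = deflated_newton(grad, x0, found)
        if (x is not None and lo <= x <= hi and abs(grad(x)) < 1e-6
                and all(abs(x - xs) > 1e-3 for xs in found)):
            found.append(float(x))
    # Keep only minima among the collected stationary points.
    minima = [x for x in found if f(x) < f(x - 1e-4) and f(x) < f(x + 1e-4)]
    return sorted(minima, key=f)  # ordered list of unique local minima

# Toy multimodal objective with several local minima on [-5, 5].
f = lambda x: np.sin(3.0 * x) + 0.1 * x ** 2
grad = lambda x: 3.0 * np.cos(3.0 * x) + 0.2 * x
print(hybrid_search(f, grad, -5.0, 5.0))

The HGDL algorithm described in the abstract goes beyond this sketch: it works in arbitrary dimensions, runs the expensive derivative-based local optimizations in separate processes on separate compute nodes, and returns solutions asynchronously as they are found.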
Database: |
MEDLINE |
External link: |
|