Learning Optimal Power Flow: Worst-Case Guarantees for Neural Networks
Author: | Guannan Qu, Spyros Chatzivasileiadis, Steven H. Low, Andreas Venzke |
Year of publication: | 2020 |
Subject: |
FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Mathematics, Computer Science - Machine Learning (cs.LG), Computer Science - Artificial Intelligence (cs.AI), Electrical Engineering and Systems Science - Systems and Control (eess.SY), Mathematics - Optimization and Control (math.OC), Mathematical optimization, Computer science, Electric power system, Artificial neural network, Smart grid, Optimal decision, Constraint (information theory), Domain (software engineering), Range (mathematics), Work (physics) |
Source: | SmartGridComm |
DOI: | 10.48550/arxiv.2006.11029 |
Description: | This paper introduces for the first time a framework to obtain provable worst-case guarantees for neural network performance, using learning for optimal power flow (OPF) problems as a guiding example. Neural networks have the potential to substantially reduce the computing time of OPF solutions. However, the lack of guarantees for their worst-case performance remains a major barrier to their adoption in practice. This work aims to remove this barrier. We formulate mixed-integer linear programs to obtain worst-case guarantees for neural network predictions related to (i) maximum constraint violations, (ii) maximum distances between predicted and optimal decision variables, and (iii) maximum sub-optimality. We demonstrate our methods on a range of PGLib-OPF networks up to 300 buses. We show that the worst-case guarantees can be up to one order of magnitude larger than the empirical lower bounds calculated with conventional methods. More importantly, we show that the worst-case predictions appear at the boundaries of the training input domain, and we demonstrate how we can systematically reduce the worst-case guarantees by training on a larger input domain than the one they are evaluated on. Comment: The code to reproduce the simulation results is available at https://doi.org/10.5281/zenodo.3871755 |
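The verification idea the abstract describes (encoding a trained network into a mixed-integer linear program and maximizing a violation over the input domain) can be illustrated with a toy example. The sketch below is not the paper's formulation: it uses an assumed single ReLU unit with made-up weights, a box input domain, and a standard big-M encoding, solved with SciPy's MILP interface, to find the worst-case (maximum) output of the unit.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy "network": one ReLU unit y = max(0, w.x + b) with assumed weights.
# w = [2, -1], b = 0.5, input box x in [-1, 1]^2.
M = 10.0  # big-M constant; must upper-bound |w.x + b| on the input box

# Decision variables v = [x1, x2, z, y, a], where z = w.x + b is the
# pre-activation, y the post-activation, and a a binary activation indicator.
c = np.array([0.0, 0.0, 0.0, -1.0, 0.0])  # milp minimizes, so maximize y via -y

A = np.array([
    [-2.0,  1.0,  1.0, 0.0, 0.0],  # z - 2*x1 + x2 = 0.5   (z = w.x + b)
    [ 0.0,  0.0, -1.0, 1.0, 0.0],  # y - z >= 0             (y >= z)
    [ 0.0,  0.0, -1.0, 1.0,   M],  # y - z + M*a <= M       (y <= z + M(1-a))
    [ 0.0,  0.0,  0.0, 1.0,  -M],  # y - M*a <= 0           (y <= M*a)
])
constraints = LinearConstraint(
    A,
    lb=[0.5, 0.0, -np.inf, -np.inf],
    ub=[0.5, np.inf, M, 0.0],
)
bounds = Bounds(lb=[-1, -1, -M, 0, 0], ub=[1, 1, M, M, 1])
integrality = np.array([0, 0, 0, 0, 1])  # only a is integer (binary via bounds)

res = milp(c=c, constraints=constraints, integrality=integrality, bounds=bounds)
worst_case = -res.fun  # maximum ReLU output over the input box
print(worst_case)
```

With these weights the pre-activation is maximized at x = (1, -1), giving z = 2 + 1 + 0.5 = 3.5, so the MILP certifies a worst-case output of 3.5. The paper's actual MILPs encode all layers of the trained network plus the OPF constraint-violation and sub-optimality objectives, but each ReLU is handled with exactly this kind of binary big-M encoding.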
Database: | OpenAIRE |
External link: |