Author:
Doseok Jang, Lucas Spangher, Selvaprabu Nadarajah, Costas Spanos
Language:
English
Year of publication:
2023
Subject:
Source:
Energy and AI, Vol 11, 100204 (2023)
Document type:
article
ISSN:
2666-5468
DOI:
10.1016/j.egyai.2022.100204
Description:
Building energy demand response is projected to be important in decarbonizing energy use. A demand response program that communicates “artificial” hourly price signals to workers as part of a social game has the potential to elicit energy consumption changes that simultaneously reduce energy costs and emissions. The efficacy of such a program depends on the pricing agent’s ability to learn how workers respond to prices and to mitigate the risk of high energy costs during this learning process. We assess the value of deep reinforcement learning (RL) for mitigating this risk. Specifically, we explore the value of combining: (i) a model-free RL method that can learn by posting price signals to workers, (ii) a supervisory “planning model” that provides a synthetic learning environment, and (iii) a guardrail method that determines whether a price should be posted to real workers or to the planning environment for feedback. In a simulated medium-sized office building, we compare our pricing agent against existing model-free and model-based deep RL agents, and against the simpler strategy of passing the time-of-use price signal on to workers. We find that, compared to energy consumption under the time-of-use rate, our controller eliminates 175,000 US dollars in initial investment, reduces energy costs by 30%, and curbs emissions by 32%. In contrast, the model-free and model-based deep RL benchmarks are unable to overcome initial learning costs. Our results bode well for risk-aware deep RL facilitating the deployment of building demand response.
Database:
Directory of Open Access Journals
External link:
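
To make the routing idea in the abstract concrete, the following minimal Python sketch shows how a guardrail of this kind could decide, for each proposed hourly price vector, whether feedback should come from real workers or from the synthetic planning environment. The class names, the linear worker-response model, and the threshold value are illustrative assumptions for this sketch, not the authors' implementation.

```python
import numpy as np


class PlanningModel:
    """Toy supervisory model predicting the energy cost of an hourly price vector.

    The linear demand response below is a hypothetical stand-in for the
    paper's planning model.
    """

    def predict_cost(self, prices: np.ndarray) -> float:
        # Assumed worker response: demand falls linearly as the posted price rises.
        demand = np.clip(1.0 - 0.5 * prices, 0.1, None)
        return float(np.sum(prices * demand))


class Guardrail:
    """Route a proposed price signal to real workers or to the planning environment."""

    def __init__(self, planning_model: PlanningModel, risk_threshold: float):
        self.planning_model = planning_model   # supervisory "planning model"
        self.risk_threshold = risk_threshold   # maximum tolerated predicted cost

    def route(self, prices: np.ndarray) -> str:
        # Post to real workers only when the predicted cost is acceptable;
        # otherwise collect feedback from the synthetic planning environment.
        predicted_cost = self.planning_model.predict_cost(prices)
        return "real" if predicted_cost <= self.risk_threshold else "planning"


if __name__ == "__main__":
    guardrail = Guardrail(PlanningModel(), risk_threshold=6.0)
    hourly_prices = np.random.uniform(0.05, 0.5, size=24)  # 24 hourly price signals
    print("Feedback source:", guardrail.route(hourly_prices))
```

In this sketch the risk check is a single cost threshold; the paper's guardrail may use a different criterion, but the routing structure, real feedback only when the predicted cost is acceptable, mirrors the mechanism described in the abstract.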