Author:
Zeng, Yu, Pou, Josep, Sun, Changjiang, Li, Xinze, Liang, Gaowen, Xia, Yang, Mukherjee, Suvajit, Gupta, Amit Kumar |
Source:
IEEE Transactions on Industrial Electronics, 2024, Vol. 71, Issue 6, pp. 5818-5829 (12 pages)
Abstract:
This article proposes a distributed uniform control approach for a dc solid-state transformer (DCSST) that feeds constant power loads. The approach uses a multiagent deep reinforcement learning (MADRL) technique to coordinate multiple control objectives. During the offline training stage, each DRL agent supervises one submodule (SM) of the DCSST and outputs real-time actions based on the states it receives. Optimal phase-shift-ratio combinations are learned under triple-phase-shift modulation, and soft actor-critic (SAC) agents optimize the neural network parameters to enhance controller performance. The well-trained agents then act as fast surrogate models that provide online control decisions for the DCSST, adapting to varying environmental conditions using only local SM information. The distributed configuration improves redundancy and modularity, facilitating hot-swap experiments. Experimental results demonstrate the excellent performance of the proposed multiagent SAC algorithm.
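The abstract describes one SAC actor per submodule, each mapping only its local state to phase-shift ratios. A minimal toy sketch of that per-SM actor structure is below; the network sizes, state contents, and weight initialization are illustrative assumptions, not details from the paper:

```python
import math
import random

random.seed(0)

class SACActor:
    """Toy stand-in for one per-submodule SAC actor: maps a local state
    vector to three phase-shift ratios in (-1, 1) via a tanh-squashed
    Gaussian policy. All sizes and weights are hypothetical."""

    def __init__(self, state_dim=4, action_dim=3, hidden=8):
        r = random.Random(42)
        self.w1 = [[r.gauss(0, 0.3) for _ in range(state_dim)]
                   for _ in range(hidden)]
        self.w_mu = [[r.gauss(0, 0.3) for _ in range(hidden)]
                     for _ in range(action_dim)]
        self.log_std = [-1.0] * action_dim  # fixed exploration noise

    def act(self, state, deterministic=False):
        # One hidden layer, then a Gaussian over pre-squash actions.
        h = [math.tanh(sum(w * s for w, s in zip(row, state)))
             for row in self.w1]
        mu = [sum(w * v for w, v in zip(row, h)) for row in self.w_mu]
        if deterministic:
            return [math.tanh(m) for m in mu]
        return [math.tanh(m + math.exp(ls) * random.gauss(0, 1))
                for m, ls in zip(mu, self.log_std)]

# One agent per submodule; each decides from its own local state only,
# mirroring the distributed configuration the abstract describes.
agents = [SACActor() for _ in range(3)]
local_states = [[random.uniform(-1, 1) for _ in range(4)] for _ in agents]
actions = [a.act(s) for a, s in zip(agents, local_states)]
```

Because each actor reads only local SM information, an SM (and its agent) can be swapped out without retraining or reconfiguring the others, which is what makes the hot-swap behavior natural in this layout.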
Database:
Supplemental Index |
External link: