Efficient Policy Adaptation For Voltage Control Under Unknown Topology Changes
Reinforcement learning (RL) has shown great potential for designing voltage control policies, but the performance of learned policies often degrades under changing system conditions such as topology reconfigurations and load variations. We introduce a topology-aware online policy optimization framework that leverages data-driven estimation of voltage–reactive power sensitivities to achieve efficient policy adaptation for medium-voltage radial distribution networks. Exploiting the sparsity of topology-switching events, where only a few lines change at a time, our method efficiently detects topology changes and identifies the affected lines and parameters, enabling fast and accurate sensitivity updates without recomputing the full sensitivity matrix. The estimated sensitivity is subsequently used for online policy optimization of a pre-trained neural-network-based RL controller. Simulations on both the IEEE 13-bus and SCE 56-bus systems demonstrate over 90% line identification accuracy using only 15 data points. The proposed method also significantly improves voltage regulation performance compared with non-adaptive policies and adaptive policies that rely on regression-based online optimization methods for sensitivity estimation.
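The sparse sensitivity-update idea described above can be sketched in a few lines. The toy network size, the noiseless measurement model, and the residual threshold below are illustrative assumptions, not the paper's actual implementation: given a baseline sensitivity matrix and a handful of (reactive injection, voltage deviation) pairs, rows whose predictions disagree with the data are flagged as affected and re-estimated by least squares, while the rest of the matrix is left untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # number of buses in a hypothetical small radial feeder

# Baseline voltage-reactive power sensitivity matrix (dV/dQ), known pre-change.
S0 = np.eye(n) + 0.1 * rng.random((n, n))

# A sparse topology change perturbs only a few rows of the true sensitivity.
S_true = S0.copy()
changed = [2, 5]
S_true[changed] += 0.5 * rng.random((len(changed), n))

# Collect a handful of (dQ, dV) measurement pairs (15, matching the abstract's
# data budget). Real measurements would be noisy; this sketch is noiseless.
m = 15
dQ = rng.standard_normal((m, n))
dV = dQ @ S_true.T

# Detect affected rows from the residual against the baseline model.
residual = dV - dQ @ S0.T                 # (m, n): per-bus prediction error
scores = np.linalg.norm(residual, axis=0)
detected = np.where(scores > 1e-6)[0]     # threshold is an illustrative choice

# Re-estimate only the affected rows by least squares; the others stay intact,
# so the full sensitivity matrix is never recomputed.
S_hat = S0.copy()
for i in detected:
    S_hat[i], *_ = np.linalg.lstsq(dQ, dV[:, i], rcond=None)
```

With m ≥ n and noiseless data the flagged rows are recovered exactly; with noise, the residual threshold and the number of samples trade off detection accuracy against adaptation speed.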
