Power Systems Computation Conference 2024


Multi-Agent Reinforcement Learning For Multi-Area Power Exchange

Increasing renewable integration leads to faster and more frequent fluctuations in the power system net load (load minus non-dispatchable renewable generation), along with greater uncertainty in its forecast. These trends exacerbate the computational burden of centralized power system optimization (or market clearing) that accounts for variability and uncertainty in the net load. A further layer of complexity pertains to estimating accurate models of spatio-temporal net-load uncertainty. Taken together, these challenges make decentralized approaches that learn to optimize (or to clear a market) using only local information compelling to explore. This paper develops a decentralized multi-agent reinforcement learning (MARL) approach that learns optimal policies for operating interconnected power systems under uncertainty. The proposed method incurs a lower computational and communication burden than a centralized stochastic programming approach and offers improved privacy preservation. Numerical simulations on a three-area test system yield promising results: the resulting average net operation costs are within 5% of those obtained with a benchmark centralized model predictive control solution.
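To make the decentralized setting concrete, the sketch below shows one way independent per-area agents could learn export policies from local information only, with coordination arising implicitly through a shared tie-line balance penalty. This is a minimal illustration, not the paper's algorithm: the environment, cost function, discretization, and REINFORCE-style tabular update are all hypothetical placeholders standing in for the authors' MARL formulation.

```python
# Minimal sketch: decentralized multi-agent policy learning on a toy
# three-area power-exchange problem. All dynamics, costs, and constants
# are illustrative assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)

N_AREAS = 3          # three-area test system, as in the paper's simulations
N_ACTIONS = 5        # discretized tie-line export levels per area (hypothetical)
EXPORT_LEVELS = np.linspace(-1.0, 1.0, N_ACTIONS)  # per-unit exchange (hypothetical)
N_STATES = 8         # discretized local net-load levels (hypothetical)
LR = 0.05            # policy-gradient step size

def local_cost(state, action_idx):
    """Toy per-area operating cost: local generation must cover the area's
    net load plus its scheduled export. Purely illustrative."""
    net_load = state / (N_STATES - 1)               # normalize to [0, 1]
    generation = net_load + EXPORT_LEVELS[action_idx]
    return max(generation, 0.0) ** 2                # quadratic generation cost

# One independent tabular softmax policy per area: each agent observes
# and updates using only its own local state, action, and reward.
logits = [np.zeros((N_STATES, N_ACTIONS)) for _ in range(N_AREAS)]

for _ in range(5000):
    states = [rng.integers(N_STATES) for _ in range(N_AREAS)]  # local net-load draws
    actions = []
    for a in range(N_AREAS):
        p = np.exp(logits[a][states[a]])
        p /= p.sum()
        actions.append(rng.choice(N_ACTIONS, p=p))
    # Exports must (approximately) balance across the interconnection;
    # penalizing imbalance lets agents coordinate through outcomes alone.
    imbalance = sum(EXPORT_LEVELS[ac] for ac in actions)
    for a in range(N_AREAS):
        reward = -local_cost(states[a], actions[a]) - abs(imbalance)
        # REINFORCE update for a softmax policy: grad log pi = onehot - p.
        p = np.exp(logits[a][states[a]])
        p /= p.sum()
        grad = -p
        grad[actions[a]] += 1.0
        logits[a][states[a]] += LR * reward * grad
```

In this toy setup, no agent ever sees another area's state or policy, mirroring the privacy and communication advantages the abstract claims for the decentralized approach over centralized stochastic programming.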

Jiachen Xi
Texas A&M University
United States

Alfredo Garcia
Texas A&M University
United States

Yu Christine Chen
The University of British Columbia
Canada

Roohallah Khatami
Southern Illinois University
United States

 

