Document Type
Article
Version Deposited
Published Version
Publication Date
10-20-2024
Publication Title
Energies
DOI
10.3390/en17205211
Abstract
This paper investigates a Local Strategy-Driven Multi-Agent Deep Deterministic Policy Gradient (LSD-MADDPG) method for demand-side energy management systems (EMS) in smart communities. LSD-MADDPG modifies the conventional MADDPG framework by limiting data sharing during centralized training to discretized strategic information only. During execution, it relies solely on local information, eliminating post-training data exchange. This approach addresses critical challenges commonly faced by EMS solutions serving dynamic, growing communities, such as communication delays, single points of failure, limited scalability, and nonstationary environments. By sharing only strategic information among agents, LSD-MADDPG optimizes decision-making while improving training efficiency and safeguarding data privacy, a critical concern in community EMS. The proposed LSD-MADDPG is shown to reduce energy costs and flatten the community demand curve by coordinating indoor temperature control and electric vehicle charging schedules across multiple buildings. Comparative case studies reveal that LSD-MADDPG excels in both cooperative and competitive settings by ensuring fair alignment between individual buildings’ energy management actions and community-wide goals, highlighting its potential for advancing future smart community energy management.
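To make the idea in the abstract concrete, the following minimal Python/PyTorch sketch illustrates the general pattern it describes: a decentralized actor that uses only local observations at execution time, and a centralized-training critic that receives, in place of peers' raw data, only coarse discretized strategy signals. This is not the authors' implementation; the network sizes, the discretize_strategy helper, and all dimensions are illustrative assumptions.

    # Sketch only: local-observation actor + critic conditioned on discretized
    # strategy signals from other agents (assumed discretization and sizes).
    import torch
    import torch.nn as nn

    def discretize_strategy(action: torch.Tensor, n_levels: int = 5) -> torch.Tensor:
        """Map continuous actions in [-1, 1] to coarse strategy levels 0..n_levels-1."""
        boundaries = torch.linspace(-1, 1, n_levels + 1)[1:-1]  # interior bin edges
        return torch.bucketize(action.clamp(-1, 1), boundaries).float()

    class Actor(nn.Module):
        """Decentralized actor: consumes only the agent's local observation."""
        def __init__(self, obs_dim: int, act_dim: int):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                     nn.Linear(64, act_dim), nn.Tanh())
        def forward(self, obs):
            return self.net(obs)

    class StrategyCritic(nn.Module):
        """Centralized-training critic: local obs/action plus one discretized
        strategy signal per other agent, instead of their full observations."""
        def __init__(self, obs_dim: int, act_dim: int, n_other_agents: int):
            super().__init__()
            in_dim = obs_dim + act_dim + n_other_agents
            self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, 1))
        def forward(self, obs, act, other_strategies):
            return self.net(torch.cat([obs, act, other_strategies], dim=-1))

    if __name__ == "__main__":
        obs_dim, act_dim, n_agents = 8, 2, 3
        actor = Actor(obs_dim, act_dim)
        critic = StrategyCritic(obs_dim, act_dim, n_agents - 1)

        obs = torch.randn(4, obs_dim)                       # batch of local observations
        act = actor(obs)                                    # execution uses local info only
        peers = torch.rand(4, n_agents - 1) * 2 - 1         # placeholder peer actions
        strategies = discretize_strategy(peers)             # only coarse signals are shared
        q = critic(obs, act, strategies)                    # centralized-training Q estimate
        print(q.shape)                                      # torch.Size([4, 1])

In this sketch, swapping the discretized strategy vector for the peers' full observations and actions would recover a conventional centralized MADDPG critic; the restriction to coarse signals is what the abstract credits with improving scalability and data privacy.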
Recommended Citation
Wilk, P.; Wang, N.; Li, J. Multi-Agent Reinforcement Learning for Smart Community Energy Management. Energies 2024, 17, 5211. https://doi.org/10.3390/en17205211
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
Comments
© 2024 by the authors. Licensee MDPI, Basel, Switzerland.