ResearchTrend.AI

Comparing Traditional and Reinforcement-Learning Methods for Energy Storage Control

31 May 2025
Elinor Ginzburg
Itay Segev
Yoash Levron
Sarah Keren
    OffRL
ArXiv (abs) · PDF · HTML
Main: 7 pages, 3 figures, 1 table
Bibliography: 1 page
Abstract

We aim to better understand the tradeoffs between traditional and reinforcement learning (RL) approaches for energy storage management. More specifically, we wish to quantify the performance loss incurred when using a generative RL policy instead of a traditional approach that finds optimal control policies for specific instances. Our comparison is based on a simplified micro-grid model that includes a load component, a photovoltaic source, and a storage device. Based on this model, we examine three use cases of increasing complexity: ideal storage with convex cost functions, lossy storage devices, and lossy storage devices with convex transmission losses. With the aim of promoting the principled use of RL-based methods in this challenging and important domain, we provide a detailed formulation of each use case and a detailed description of the optimization challenges. We then compare the performance of traditional and RL methods, discuss settings in which each method is preferable, and suggest avenues for future investigation.
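To make the first use case concrete, here is a minimal sketch (not the paper's code) of an ideal, lossless storage device in a micro-grid with a load, a photovoltaic source, and a convex grid cost. The quadratic cost, the signal values, the greedy flatten-toward-the-mean policy, and all parameter names are illustrative assumptions, not taken from the paper.

```python
def grid_cost(p):
    """Convex cost of drawing power p from the grid (quadratic, illustrative)."""
    return p * p


def simulate(load, pv, capacity, rate):
    """Greedy policy: use storage to flatten net demand toward its mean.

    Ideal (lossless) storage; returns the total grid cost over the horizon.
    """
    net = [l - s for l, s in zip(load, pv)]        # residual demand per step
    target = sum(net) / len(net)                   # flatten toward the mean
    soc = capacity / 2                             # start half full
    total = 0.0
    for d in net:
        desired = d - target                       # discharge when above the mean
        p = max(-rate, min(rate, desired))         # clip by power rating
        p = max(-(capacity - soc), min(soc, p))    # clip by state of charge
        soc -= p                                   # lossless charge/discharge
        total += grid_cost(d - p)                  # grid supplies the remainder
    return total


load = [2.0, 3.0, 5.0, 4.0, 2.0, 1.0]
pv = [0.0, 1.0, 3.0, 3.0, 1.0, 0.0]
no_storage = simulate(load, pv, capacity=0.0, rate=0.0)
with_storage = simulate(load, pv, capacity=4.0, rate=2.0)
```

With a convex cost, flattening the net demand is optimal for the ideal-storage case, so `with_storage` comes out strictly below `no_storage` here; the lossy use cases break this simple structure, which is what motivates the paper's comparison.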

View on arXiv
@article{ginzburg2025_2506.00459,
  title={Comparing Traditional and Reinforcement-Learning Methods for Energy Storage Control},
  author={Elinor Ginzburg and Itay Segev and Yoash Levron and Sarah Keren},
  journal={arXiv preprint arXiv:2506.00459},
  year={2025}
}