Multi-Objective Reinforcement Learning for Energy-Efficient Industrial Control

12 May 2025
Georg Schäfer
Raphael Seliger
Jakob Rehrl
Stefan Huber
Simon Hirlaender
    AI4CE
Abstract

Industrial automation increasingly demands energy-efficient control strategies to balance performance with environmental and cost constraints. In this work, we present a multi-objective reinforcement learning (MORL) framework for energy-efficient control of the Quanser Aero 2 testbed in its one-degree-of-freedom configuration. We design a composite reward function that simultaneously penalizes tracking error and electrical power consumption. Preliminary experiments explore the influence of varying the energy penalty weight, alpha, on the trade-off between pitch tracking and energy savings. Our results reveal a marked performance shift for alpha values between 0.0 and 0.25, with non-Pareto-optimal solutions emerging at lower alpha values, on both the simulation and the real system. We hypothesize that these effects may be attributed to artifacts introduced by the adaptive behavior of the Adam optimizer, which could bias the learning process and favor bang-bang control strategies. Future work will focus on automating alpha selection through Gaussian Process-based Pareto front modeling and transitioning the approach from simulation to real-world deployment.
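
For illustration, the minimal sketch below shows one common way such a composite reward can be formed as a weighted sum of pitch tracking error and electrical power, with alpha as the energy penalty weight. The function name compute_reward, the squared-error and |V*I| terms, and the example values are assumptions; the abstract does not specify the exact formulation used on the Quanser Aero 2.

import numpy as np

def compute_reward(pitch, pitch_ref, voltage, current, alpha):
    """Hypothetical composite reward: penalize squared pitch tracking error
    plus alpha times the electrical power drawn by the motors. This
    weighted-sum form is an assumption, not the paper's exact reward."""
    tracking_error = (pitch - pitch_ref) ** 2
    electrical_power = float(np.sum(np.abs(np.asarray(voltage) * np.asarray(current))))
    return -tracking_error - alpha * electrical_power

# Sweeping the energy penalty weight over the range explored in the paper
# (alpha between 0.0 and 0.25) shifts the trade-off from pure pitch
# tracking toward energy savings.
for alpha in (0.0, 0.05, 0.1, 0.25):
    r = compute_reward(pitch=0.1, pitch_ref=0.0,
                       voltage=[5.0, -5.0], current=[0.3, 0.3], alpha=alpha)
    print(f"alpha={alpha:.2f} -> reward={r:.4f}")

With alpha = 0.0 the agent is rewarded for tracking alone, which is consistent with the non-Pareto-optimal, energy-hungry behavior the authors report at low alpha values.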

View on arXiv
@article{schäfer2025_2505.07607,
  title={Multi-Objective Reinforcement Learning for Energy-Efficient Industrial Control},
  author={Georg Schäfer and Raphael Seliger and Jakob Rehrl and Stefan Huber and Simon Hirlaender},
  journal={arXiv preprint arXiv:2505.07607},
  year={2025}
}