ResearchTrend.AI

arXiv:1211.6616v3 (latest)

TACT: A Transfer Actor-Critic Learning Framework for Energy Saving in Cellular Radio Access Networks

28 November 2012
Rongpeng Li
Zhifeng Zhao
Xianfu Chen
J. Palicot
Honggang Zhang
Abstract

Recent works have validated the possibility of improving energy efficiency in radio access networks (RAN) by dynamically turning some base stations (BSs) on or off. In this paper, we extend the research on BS switching operations, which should match traffic load variations. However, instead of relying on predicted traffic loads, which remain quite challenging to forecast precisely, we first formulate the traffic variations as a Markov decision process (MDP). Afterwards, in order to foresightedly minimize the energy consumption of the RAN, we adopt the actor-critic method and design a reinforcement-learning-based BS switching operation scheme. Furthermore, to avoid the underlying curse of dimensionality in reinforcement learning, we propose a transfer actor-critic algorithm (TACT), which utilizes learning expertise transferred from neighboring regions or historical periods. The proposed TACT algorithm provably converges and yields a performance jumpstart. Finally, we evaluate the proposed scheme through extensive simulations under various practical configurations and demonstrate the feasibility of significant energy efficiency improvement.
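To illustrate the abstract's core idea, here is a minimal sketch of a tabular actor-critic learner for BS switching, with transferred policy knowledge used as a jumpstart. Everything here is an assumption for illustration: the toy state space (discrete traffic-load levels), action space (number of active BSs), reward shape, and uniform traffic dynamics are all hypothetical and are not the paper's actual model or the TACT algorithm itself.

```python
import math
import random

random.seed(0)

N_STATES, N_ACTIONS = 4, 3           # hypothetical traffic-load levels / BS-on counts
ALPHA, BETA, GAMMA = 0.1, 0.1, 0.9   # critic step, actor step, discount factor

def reward(state, action):
    # Hypothetical cost: energy for (action + 1) active BSs, plus a penalty
    # when capacity falls short of the traffic-load level (state).
    return -(action + 1) - 5.0 * max(0, state - action)

def softmax_sample(prefs):
    # Sample an action index from softmax over actor preferences.
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    r, acc = random.random() * sum(exps), 0.0
    for a, e in enumerate(exps):
        acc += e
        if r <= acc:
            return a
    return len(prefs) - 1

def actor_critic(steps, theta=None):
    # theta: actor preference table; passing a transferred table gives
    # the learner a jumpstart, mimicking TACT's knowledge transfer.
    theta = [row[:] for row in theta] if theta else [[0.0] * N_ACTIONS
                                                     for _ in range(N_STATES)]
    V = [0.0] * N_STATES             # critic's state-value estimates
    state = 0
    for _ in range(steps):
        action = softmax_sample(theta[state])
        r = reward(state, action)
        next_state = random.randrange(N_STATES)   # toy traffic dynamics
        td_error = r + GAMMA * V[next_state] - V[state]
        V[state] += ALPHA * td_error              # critic update
        theta[state][action] += BETA * td_error   # actor update
        state = next_state
    return theta

expert = actor_critic(5000)                    # "neighboring region" policy
transferred = actor_critic(200, theta=expert)  # jumpstart from transferred expertise
```

In this toy setting, the learned policy tends to keep just enough BSs on to cover the load, which is the trade-off the abstract describes; real RAN models would replace the reward and dynamics with measured traffic and energy figures.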
