Online Learning Schemes for Power Allocation in Energy Harvesting Communications

8 July 2016 · arXiv:1607.02552
Pranav Sakulkar, Bhaskar Krishnamachari
Abstract

We consider the problem of power allocation over a time-varying channel with unknown distribution in energy harvesting communication systems. In this problem, the transmitter has to choose the transmit power based on the amount of stored energy in its battery, with the goal of maximizing the average rate obtained over time. We model this problem as a Markov decision process (MDP) with the transmitter as the agent, the battery status as the state, the transmit power as the action, and the rate obtained as the reward. The average reward maximization problem over the MDP can be solved by a linear program (LP) that uses the transition probabilities of the state-action pairs and their reward values to choose a power allocation policy. Since the rewards associated with the state-action pairs are unknown, we propose two online learning algorithms, UCLP and Epoch-UCLP, that learn these rewards and adapt their policies along the way. The UCLP algorithm solves the LP at each step to decide its current policy using upper confidence bounds on the rewards, while the Epoch-UCLP algorithm divides time into epochs, solves the LP only at the beginning of each epoch, and follows the obtained policy within that epoch. We prove that the reward losses, or regrets, incurred by both algorithms are upper bounded by constants. Epoch-UCLP incurs a higher regret than UCLP but substantially reduces the computational requirements. We also show that, with minor changes, the presented algorithms work for online learning in cost minimization problems such as packet scheduling with a power-delay tradeoff.
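To make the UCLP loop concrete, below is a minimal Python sketch, assuming (beyond what the abstract states) a known transition tensor P[s, a, s'], a hypothetical env_step(s, a) callback that returns (reward, next_state), and a generic UCB1-style confidence bonus; the paper's exact confidence terms and constants may differ. At every step it solves the average-reward LP over state-action occupation measures with scipy.optimize.linprog and acts according to the resulting randomized policy.

import numpy as np
from scipy.optimize import linprog


def solve_avg_reward_lp(P, r_ucb):
    # Average-reward LP over occupation measures x[s, a]:
    #   maximize  sum_{s,a} x[s,a] * r_ucb[s,a]
    #   s.t.      sum_a x[s',a] = sum_{s,a} x[s,a] * P[s,a,s']  (flow balance)
    #             sum_{s,a} x[s,a] = 1,  x >= 0
    S, A = r_ucb.shape
    n = S * A
    c = -r_ucb.reshape(n)                 # linprog minimizes, so negate
    A_eq = np.zeros((S + 1, n))
    for sp in range(S):                   # one flow-balance row per state s'
        for s in range(S):
            for a in range(A):
                A_eq[sp, s * A + a] -= P[s, a, sp]
        for a in range(A):
            A_eq[sp, sp * A + a] += 1.0
    A_eq[S, :] = 1.0                      # normalization row
    b_eq = np.zeros(S + 1)
    b_eq[S] = 1.0
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    return res.x.reshape(S, A)


def uclp(env_step, P, S, A, horizon, seed=0):
    # env_step(s, a) -> (reward, next_state) is a hypothetical stand-in
    # for the battery/channel dynamics; rewards are the observed rates.
    rng = np.random.default_rng(seed)
    counts = np.zeros((S, A))
    means = np.zeros((S, A))
    s = 0                                 # e.g. battery initially empty
    for t in range(1, horizon + 1):
        # Generic UCB1-style bonus on the empirical mean rewards; a full
        # implementation would also force exploration of unvisited pairs.
        bonus = np.sqrt(2.0 * np.log(t) / np.maximum(counts, 1.0))
        x = solve_avg_reward_lp(P, means + bonus)
        # Randomize over actions according to the occupation measure at s.
        probs = np.clip(x[s], 0.0, None)
        total = probs.sum()
        probs = probs / total if total > 1e-12 else np.full(A, 1.0 / A)
        a = rng.choice(A, p=probs)
        r, s_next = env_step(s, a)
        counts[s, a] += 1.0
        means[s, a] += (r - means[s, a]) / counts[s, a]
        s = s_next
    return means

Epoch-UCLP follows the same template but calls solve_avg_reward_lp only at epoch boundaries and reuses the resulting policy within each epoch, trading somewhat higher regret for far fewer LP solves, as the abstract notes.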
