A Reinforcement Learning Approach for the Multichannel Rendezvous Problem

2 July 2019
Jen-Hung Wang
Ping-En Lu
Cheng-Shang Chang
D. Lee
Abstract

In this paper, we consider the multichannel rendezvous problem in cognitive radio networks (CRNs), where the probability that two users hopping on the same channel have a successful rendezvous is a function of the channel states. The channel states are modelled by two-state Markov chains with a good state and a bad state, and they are not observable by the users. For such a multichannel rendezvous problem, we are interested in finding the optimal policy that minimizes the expected time-to-rendezvous (ETTR) within the class of dynamic blind rendezvous policies, i.e., at the $t^{th}$ time slot each user selects channel $i$ independently with probability $p_i(t)$, $i = 1, 2, \ldots, N$. By formulating the multichannel rendezvous problem as an adversarial bandit problem, we propose a reinforcement learning approach to learn the channel selection probabilities $p_i(t)$, $i = 1, 2, \ldots, N$. Our experimental results show that the reinforcement learning approach is very effective and yields ETTRs comparable to those of various approximation policies in the literature.
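The abstract frames channel selection as an adversarial bandit problem in which each user updates its channel selection probabilities $p_i(t)$ from rendezvous feedback. A standard algorithm for this setting is EXP3; the following Python sketch illustrates an EXP3-style policy under assumptions of ours, not the authors' exact algorithm: the binary reward (1 on a successful rendezvous, 0 otherwise), the exploration parameter gamma, and the per-channel success probabilities in the toy loop are all illustrative.

```python
import numpy as np

class Exp3ChannelPolicy:
    """EXP3-style dynamic blind rendezvous policy (illustrative sketch)."""

    def __init__(self, num_channels, gamma=0.1, rng=None):
        self.N = num_channels
        self.gamma = gamma                     # exploration parameter (assumed)
        self.weights = np.ones(num_channels)   # one weight per channel
        self.rng = rng or np.random.default_rng()

    def probabilities(self):
        # Mix the exponentially weighted distribution with uniform exploration.
        w = self.weights / self.weights.sum()
        return (1.0 - self.gamma) * w + self.gamma / self.N

    def select_channel(self):
        p = self.probabilities()
        return self.rng.choice(self.N, p=p), p

    def update(self, channel, reward, p):
        # Importance-weighted reward estimate for the chosen channel only.
        estimated = reward / p[channel]
        self.weights[channel] *= np.exp(self.gamma * estimated / self.N)


# Toy usage: two users hop independently; a rendezvous succeeds when they
# pick the same channel and that channel happens to be in a usable state
# (channel success probabilities below are hypothetical).
if __name__ == "__main__":
    N = 5
    user_a, user_b = Exp3ChannelPolicy(N), Exp3ChannelPolicy(N)
    success_prob = np.full(N, 0.5)      # assumed per-channel success probability
    rng = np.random.default_rng(0)
    for t in range(1000):
        ca, pa = user_a.select_channel()
        cb, pb = user_b.select_channel()
        success = (ca == cb) and (rng.random() < success_prob[ca])
        reward = 1.0 if success else 0.0
        user_a.update(ca, reward, pa)
        user_b.update(cb, reward, pb)
```

In this sketch, each user runs the bandit update independently and observes only its own reward, which matches the blind (no pre-agreement, no channel-state observation) setting described in the abstract.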
