GCN-Driven Reinforcement Learning for Probabilistic Real-Time Guarantees in Industrial URLLC

17 June 2025
Eman Alqudah
Ashfaq Khokhar
    AI4CE
Main: 8 pages, 4 figures; Bibliography: 1 page
Abstract

Ensuring packet-level communication quality is vital for ultra-reliable, low-latency communications (URLLC) in large-scale industrial wireless networks. We enhance the Local Deadline Partition (LDP) algorithm by introducing a Graph Convolutional Network (GCN) integrated with a Deep Q-Network (DQN) reinforcement learning framework for improved interference coordination in multi-cell, multi-channel networks. Unlike LDP's static priorities, our approach dynamically learns link priorities based on real-time traffic demand, network topology, remaining transmission opportunities, and interference patterns. The GCN captures spatial dependencies, while the DQN enables adaptive scheduling decisions through reward-guided exploration. Simulation results show that our GCN-DQN model achieves mean SINR improvements of 179.6%, 197.4%, and 175.2% over LDP across three network configurations. Additionally, the GCN-DQN model demonstrates mean SINR improvements of 31.5%, 53.0%, and 84.7% over our previous CNN-based approach across the same configurations. These results underscore the effectiveness of our GCN-DQN model in addressing complex URLLC requirements with minimal overhead and superior network performance.
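To make the described architecture concrete, the following is a minimal PyTorch sketch of a GCN encoder feeding a DQN head that scores per-link channel assignments. It is not the authors' implementation; the class names (GCNLayer, GCNDQN), the per-link feature layout, and the placeholder conflict graph are illustrative assumptions only.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph-convolution step: aggregate neighbor features through a
    normalized adjacency matrix, then apply a learned linear transform."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        # x: (num_links, in_dim) per-link features (e.g., traffic demand,
        # remaining transmission opportunities); adj_norm: (num_links, num_links)
        # normalized interference/conflict adjacency (assumed representation).
        return F.relu(self.linear(adj_norm @ x))

class GCNDQN(nn.Module):
    """GCN encoder for spatial interference structure plus a DQN head that
    outputs one Q-value per (link, channel) scheduling action."""
    def __init__(self, feat_dim, hidden_dim, num_channels):
        super().__init__()
        self.gcn1 = GCNLayer(feat_dim, hidden_dim)
        self.gcn2 = GCNLayer(hidden_dim, hidden_dim)
        self.q_head = nn.Linear(hidden_dim, num_channels)

    def forward(self, x, adj_norm):
        h = self.gcn2(self.gcn1(x, adj_norm), adj_norm)
        return self.q_head(h)  # (num_links, num_channels) Q-values

# Usage sketch: greedy channel selection per link from the predicted Q-values.
num_links, feat_dim, num_channels = 16, 8, 4      # illustrative sizes
x = torch.randn(num_links, feat_dim)              # per-link state features
adj = torch.eye(num_links)                        # placeholder conflict graph
adj_norm = adj / adj.sum(dim=1, keepdim=True)     # row-normalize adjacency
q_values = GCNDQN(feat_dim, 32, num_channels)(x, adj_norm)
actions = q_values.argmax(dim=1)                  # greedy scheduling decision

In training, the reward would reflect scheduling outcomes such as achieved SINR or deadline compliance, with the usual DQN ingredients (replay buffer, target network, epsilon-greedy exploration) driving the reward-guided exploration mentioned in the abstract.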

@article{alqudah2025_2506.15011,
  title={GCN-Driven Reinforcement Learning for Probabilistic Real-Time Guarantees in Industrial URLLC},
  author={Eman Alqudah and Ashfaq Khokhar},
  journal={arXiv preprint arXiv:2506.15011},
  year={2025}
}