
Prompt-Tuned LLM-Augmented DRL for Dynamic O-RAN Network Slicing

Main: 5 pages
5 figures
Bibliography: 1 page
1 table
Abstract

Modern wireless networks must adapt to dynamic conditions while efficiently managing diverse service demands. Traditional deep reinforcement learning (DRL) struggles in these environments, as scattered and evolving feedback makes optimal decision-making challenging. Large Language Models (LLMs) offer a solution by structuring unorganized network feedback into meaningful latent representations, helping RL agents recognize patterns more effectively. For example, in O-RAN slicing, concepts such as SNR, power levels, and throughput are semantically related, and LLMs can naturally cluster them, providing a more interpretable state representation. To leverage this capability, we introduce a contextualization-based adaptation method that integrates learnable prompts into an LLM-augmented DRL framework. Instead of relying on full model fine-tuning, we refine state representations through task-specific prompts that dynamically adjust to network conditions. Utilizing ORANSight, an LLM trained on O-RAN knowledge, we develop the Prompt-Augmented Multi-Agent RL (PA-MRL) framework. Learnable prompts optimize both semantic clustering and RL objectives, allowing RL agents to achieve higher rewards in fewer iterations and adapt more efficiently. By incorporating prompt-augmented learning, our approach enables faster, more scalable, and adaptive resource allocation in O-RAN slicing. Experimental results show that our approach accelerates convergence and outperforms baseline methods.
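To make the prompt-augmented state encoding concrete, the sketch below (in PyTorch) prepends learnable prompt vectors to embedded KPI tokens, passes the sequence through a frozen backbone, and feeds the pooled output to a small Q-network, so that only the prompts and lightweight projections are trained. This is a minimal sketch under stated assumptions: the toy transformer stands in for ORANSight, and all module names, dimensions, and the KPI set are illustrative rather than taken from the paper.

import torch
import torch.nn as nn

class PromptAugmentedStateEncoder(nn.Module):
    """Prepend learnable prompts to embedded network KPIs, encode with a
    frozen backbone (stand-in for ORANSight), and pool into an RL state."""
    def __init__(self, num_kpis=8, d_model=64, n_prompts=4, state_dim=32):
        super().__init__()
        # Learnable prompts: the task-specific parameters tuned instead of
        # fine-tuning the full model.
        self.prompts = nn.Parameter(torch.randn(n_prompts, d_model) * 0.02)
        # Project raw KPI scalars (e.g., SNR, power, throughput) to tokens.
        self.kpi_embed = nn.Linear(1, d_model)
        # Frozen backbone; a toy transformer standing in for the LLM.
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        for p in self.backbone.parameters():
            p.requires_grad = False
        self.to_state = nn.Linear(d_model, state_dim)

    def forward(self, kpis):                      # kpis: (batch, num_kpis)
        tok = self.kpi_embed(kpis.unsqueeze(-1))  # (batch, num_kpis, d_model)
        prm = self.prompts.expand(kpis.size(0), -1, -1)
        h = self.backbone(torch.cat([prm, tok], dim=1))
        return self.to_state(h.mean(dim=1))       # pooled latent RL state

class QNetwork(nn.Module):
    """Small Q-head over the prompt-augmented state."""
    def __init__(self, state_dim=32, n_actions=5):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))
    def forward(self, s):
        return self.net(s)

encoder, qnet = PromptAugmentedStateEncoder(), QNetwork()
# Only prompts, projections, and the Q-head receive gradients; the
# backbone stays frozen, keeping adaptation cheap.
params = [p for p in list(encoder.parameters()) + list(qnet.parameters())
          if p.requires_grad]
opt = torch.optim.Adam(params, lr=1e-3)

kpis = torch.rand(16, 8)           # batch of per-slice KPI observations
q_values = qnet(encoder(kpis))     # (16, n_actions) action values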

@article{lotfi2025_2506.00574,
  title={Prompt-Tuned LLM-Augmented DRL for Dynamic O-RAN Network Slicing},
  author={Fatemeh Lotfi and Hossein Rajoli and Fatemeh Afghah},
  journal={arXiv preprint arXiv:2506.00574},
  year={2025}
}