
DeepForm: Reasoning Large Language Model for Communication System Formulation

10 June 2025
Panlong Wu
Ting Wang
Yifei Zhong
Haoqi Zhang
Zitong Wang
Fangxin Wang
Abstract

Communication system formulation is critical for advancing 6G and future wireless technologies, yet it remains a complex, expertise-intensive task. While Large Language Models (LLMs) offer potential, existing general-purpose models often lack the specialized domain knowledge, nuanced reasoning capabilities, and high-quality, domain-specific training data required to adapt them to communication system formulation. To bridge this gap, we introduce DeepForm, the first reasoning LLM tailored to automated communication system formulation. We also present the first large-scale, open-source dataset meticulously curated for this domain, the Communication System Formulation Reasoning Corpus (CSFRC). Our framework employs a two-stage training strategy: first, Supervised Fine-Tuning (SFT) with Chain-of-Thought (CoT) data to distill domain knowledge; second, a novel rule-based Reinforcement Learning (RL) algorithm, C-ReMax (built on ReMax), to cultivate advanced modeling capabilities and elicit sophisticated reasoning patterns such as self-correction and verification. Extensive experiments demonstrate that our model achieves state-of-the-art performance, significantly outperforming larger proprietary LLMs across diverse scenarios. We will release the related resources to foster further research in this area after the paper is accepted.
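To make the second training stage concrete, the sketch below illustrates the generic ReMax idea the abstract builds on: the reward of a greedily decoded response is subtracted from the reward of a sampled response to form a low-variance advantage, here paired with a rule-based reward. This is not the authors' C-ReMax or their reward design; `rule_based_reward`, `sample_fn`, and `greedy_fn` are hypothetical stand-ins for illustration only.

```python
# Minimal sketch (not the paper's code) of a ReMax-style update signal with a
# rule-based reward, as used in stage two of the training recipe described above.
import random


def rule_based_reward(response: str, reference: str) -> float:
    """Toy rule-based reward (assumption): full credit only for an exact match
    with the reference formulation; a real reward would check objective,
    constraints, and variables separately."""
    return 1.0 if response.strip() == reference.strip() else 0.0


def remax_advantage(prompt: str, reference: str, sample_fn, greedy_fn) -> tuple[float, str]:
    """ReMax-style baseline: advantage = reward(sampled) - reward(greedy),
    which reduces the variance of the REINFORCE gradient estimate."""
    sampled = sample_fn(prompt)
    greedy = greedy_fn(prompt)
    adv = rule_based_reward(sampled, reference) - rule_based_reward(greedy, reference)
    return adv, sampled


if __name__ == "__main__":
    # Purely illustrative stand-ins for stochastic and greedy decoding of a policy LLM.
    prompt = "Formulate the sum-rate maximization problem for a 2-user MISO downlink."
    reference = "max sum_k log2(1 + SINR_k) s.t. ||w_k||^2 <= P"
    sample_fn = lambda p: random.choice([reference, "max ..."])  # stochastic decode
    greedy_fn = lambda p: "max ..."                              # greedy decode
    adv, resp = remax_advantage(prompt, reference, sample_fn, greedy_fn)
    print(f"advantage={adv:+.1f} for sampled response: {resp!r}")
```

A positive advantage pushes the policy toward the sampled reasoning trace; responses no better than the greedy baseline contribute little, which is what makes the rule-based signal usable without a learned value model.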

@article{wu2025_2506.08551,
  title={DeepForm: Reasoning Large Language Model for Communication System Formulation},
  author={Panlong Wu and Ting Wang and Yifei Zhong and Haoqi Zhang and Zitong Wang and Fangxin Wang},
  journal={arXiv preprint arXiv:2506.08551},
  year={2025}
}
Main: 7 pages · 7 figures · Bibliography: 2 pages