AI2MMUM: AI-AI Oriented Multi-Modal Universal Model Leveraging Telecom Domain Large Model

15 May 2025
Tianyu Jiao
Zhuoran Xiao
Yihang Huang
Chenhui Ye
Yijia Feng
Liyu Cai
Jiang Chang
Fangkun Liu
Yin Xu
Dazhi He
Yunfeng Guan
Wenjun Zhang
Abstract

Designing a 6G-oriented universal model capable of processing multi-modal data and executing diverse air interface tasks has emerged as a common goal in future wireless systems. Building on our prior work on communication multi-modal alignment and a telecom large language model (LLM), we propose a scalable, task-aware artificial intelligence-air interface multi-modal universal model (AI2MMUM), which flexibly and effectively performs various physical layer tasks according to subtle task instructions. The LLM backbone provides robust contextual comprehension and generalization capabilities, while a fine-tuning approach is adopted to incorporate domain-specific knowledge. To enhance task adaptability, task instructions consist of fixed task keywords and learnable, implicit prefix prompts. Frozen radio modality encoders extract universal representations, and adapter layers subsequently bridge the radio and language modalities. Moreover, lightweight task-specific heads are designed to directly output task objectives. Comprehensive evaluations demonstrate that AI2MMUM achieves state-of-the-art (SOTA) performance across five representative physical environment/wireless channel-based downstream tasks using the WAIR-D and DeepMIMO datasets.
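As a rough illustration of the pipeline the abstract describes (frozen radio encoders, adapter layers, fixed task keywords plus learnable prefix prompts, an LLM backbone, and lightweight task heads), the following minimal PyTorch sketch wires those pieces together. All module names, dimensions, and the mean-pooling readout are assumptions made for illustration, not the authors' implementation.

import torch
import torch.nn as nn

class AI2MMUMSketch(nn.Module):
    """Toy sketch: frozen radio encoder -> adapter -> LLM backbone -> task head."""
    def __init__(self, radio_encoder, llm_backbone,
                 radio_dim=256, llm_dim=1024,
                 num_prefix_tokens=8, task_out_dim=64):
        super().__init__()
        # Frozen radio-modality encoder extracting universal representations.
        self.radio_encoder = radio_encoder
        for p in self.radio_encoder.parameters():
            p.requires_grad = False
        # Adapter layers bridging the radio and language modalities.
        self.adapter = nn.Sequential(
            nn.Linear(radio_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim))
        # Learnable, implicit prefix prompts appended to the fixed task keywords.
        self.prefix_prompts = nn.Parameter(0.02 * torch.randn(num_prefix_tokens, llm_dim))
        # LLM backbone (fine-tuned in the paper; assumed here to map embeddings to hidden states).
        self.llm_backbone = llm_backbone
        # Lightweight task-specific head outputting the task objective directly.
        self.task_head = nn.Linear(llm_dim, task_out_dim)

    def forward(self, radio_input, task_keyword_embeds):
        # radio_input: (batch, n_radio, radio_dim) radio-modality data
        # task_keyword_embeds: (batch, n_kw, llm_dim) embeddings of fixed task keywords
        with torch.no_grad():
            radio_feats = self.radio_encoder(radio_input)        # (batch, n_radio, radio_dim)
        radio_tokens = self.adapter(radio_feats)                 # (batch, n_radio, llm_dim)
        prefix = self.prefix_prompts.unsqueeze(0).expand(radio_tokens.size(0), -1, -1)
        # Task instruction = fixed keywords + learnable prefix, followed by radio tokens.
        llm_input = torch.cat([task_keyword_embeds, prefix, radio_tokens], dim=1)
        hidden = self.llm_backbone(llm_input)                    # (batch, seq, llm_dim)
        return self.task_head(hidden.mean(dim=1))                # pooled -> task objective

# Quick shape check with stub components (a real radio encoder and LLM would replace these).
encoder = nn.Identity()                      # pretend inputs are already (batch, n, 256) features
backbone = nn.Sequential(nn.Linear(1024, 1024), nn.GELU(), nn.Linear(1024, 1024))
model = AI2MMUMSketch(encoder, backbone)
out = model(torch.randn(2, 16, 256), torch.randn(2, 4, 1024))
print(out.shape)                             # torch.Size([2, 64])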

View on arXiv
@article{jiao2025_2505.10003,
  title={AI2MMUM: AI-AI Oriented Multi-Modal Universal Model Leveraging Telecom Domain Large Model},
  author={Tianyu Jiao and Zhuoran Xiao and Yihang Huang and Chenhui Ye and Yijia Feng and Liyu Cai and Jiang Chang and Fangkun Liu and Yin Xu and Dazhi He and Yunfeng Guan and Wenjun Zhang},
  journal={arXiv preprint arXiv:2505.10003},
  year={2025}
}