Multi-Modal Self-Supervised Semantic Communication

18 March 2025
Hang Zhao
Hongru Li
Dongfang Xu
Shenghui Song
Khaled B. Letaief
Abstract

Semantic communication is emerging as a promising paradigm that focuses on the extraction and transmission of semantic meanings using deep learning techniques. While current research primarily addresses the reduction of semantic communication overhead, it often overlooks the training phase, which can incur significant communication costs in dynamic wireless environments. To address this challenge, we propose a multi-modal semantic communication system that leverages multi-modal self-supervised learning to enhance task-agnostic feature extraction. The proposed approach employs self-supervised learning during the pre-training phase to extract task-agnostic semantic features, followed by supervised fine-tuning for downstream tasks. This dual-phase strategy effectively captures both modality-invariant and modality-specific features while minimizing training-related communication overhead. Experimental results on the NYU Depth V2 dataset demonstrate that the proposed method significantly reduces training-related communication overhead while maintaining or exceeding the performance of existing supervised learning approaches. The findings underscore the advantages of multi-modal self-supervised learning in semantic communication, paving the way for more efficient and scalable edge inference systems.
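To make the dual-phase recipe concrete, here is a minimal PyTorch sketch of the training pipeline the abstract describes: self-supervised pre-training that aligns paired RGB and depth features with a contrastive (InfoNCE) objective, followed by supervised fine-tuning of a small task head. This is an illustrative assumption of the setup, not the authors' actual architecture or loss; the encoder shapes, the InfoNCE objective (which captures only the modality-invariant part of the features), and the class count are all placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy per-modality encoder (stand-in for the paper's feature extractors)."""
    def __init__(self, in_ch, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )
    def forward(self, x):
        return self.net(x)

def info_nce(z_a, z_b, tau=0.07):
    """Symmetric InfoNCE: pull paired RGB/depth features together,
    push unpaired ones apart (modality-invariant alignment)."""
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / tau
    labels = torch.arange(z_a.size(0))
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))

# Phase 1: self-supervised pre-training on unlabeled paired RGB-depth frames.
rgb_enc, depth_enc = Encoder(3), Encoder(1)
opt = torch.optim.Adam(list(rgb_enc.parameters())
                       + list(depth_enc.parameters()), lr=1e-3)
for _ in range(5):  # a few toy steps on random stand-in data
    rgb = torch.randn(16, 3, 64, 64)
    depth = torch.randn(16, 1, 64, 64)
    loss = info_nce(rgb_enc(rgb), depth_enc(depth))
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: supervised fine-tuning of a lightweight head on labeled data.
head = nn.Linear(256, 13)  # 13 classes is an arbitrary illustrative choice
ft_opt = torch.optim.Adam(head.parameters(), lr=1e-3)
rgb = torch.randn(16, 3, 64, 64)
depth = torch.randn(16, 1, 64, 64)
labels = torch.randint(0, 13, (16,))
feats = torch.cat([rgb_enc(rgb), depth_enc(depth)], dim=1)  # fused features
loss = F.cross_entropy(head(feats), labels)
ft_opt.zero_grad(); loss.backward(); ft_opt.step()
```

The communication-overhead argument in the abstract maps onto this split: phase 1 needs no labels and can reuse generic paired data, so only the lightweight phase-2 fine-tuning must be repeated per downstream task.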

@article{zhao2025_2503.13940,
  title={Multi-Modal Self-Supervised Semantic Communication},
  author={Hang Zhao and Hongru Li and Dongfang Xu and Shenghui Song and Khaled B. Letaief},
  journal={arXiv preprint arXiv:2503.13940},
  year={2025}
}